Monday, August 12, 2013

Fabric (py orchestration util) issue with my FreeBSD node, and Easy Fix

Recently I started implementing a Fabfile to deal with 7-8 different types of Linux/BSD distributions at a time.

Now for the folks new to the term "Fabfile": it's the key what-to-do guide for Fabric. Fabric is a fine orchestration utility capable of commanding numerous remote machines in parallel. For the Ruby community folks, you can call it Capistrano's Python sibling.

The way Fabric works is: whatever command you ask it to "run(<cmd>)" over remote nodes, it executes that command by passing it to " bash -l -c '<cmd>' ".

Now the issue with my FreeBSD machine was that it didn't have "bash" in the first place. So, every run() was failing.

It can be fixed in 2 ways:


First way,
add a line like the following to make Fabric use a shell actually present on the remote node; my FreeBSD node had 'C Shell'... so

env.shell = '/bin/csh -c'
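
For context, here's a minimal fabfile sketch showing where that line goes (the host address is made up for illustration):

# fabfile.py
from fabric.api import env, run

env.hosts = ['freebsd-node.example.com']
env.shell = '/bin/csh -c'  # use a shell actually present on the remote node

def uptime():
    run('uptime')  # now executed via "/bin/csh -c 'uptime'" on the node

Then run it as usual: $ fab uptime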


Second way,
install 'bash' on the remote nodes and make sure it's reachable via the path "/bin/bash"

pkg_add -r bash                       # fetch and install the bash package
ln -sf /usr/local/bin/bash /bin/bash  # the package installs bash under /usr/local/bin


Saturday, May 18, 2013

performance testing using multi-mechanize

gave multi-mechanize a try recently, when in need of analyzing an application's behavior under massive concurrent requests being sent to it

What? multi-mechanize is an open-source load testing framework written in, and configurable through, Python. So either make calls to any web service, or simply import and utilize any Python-accessible API/service.
It's the successor of an older Python load testing framework called Pylot.

Developed and maintained @GitHub

Install is available through 'pip' too, as
   $ pip install multi-mechanize
It also requires the matplotlib library if you wanna view graphs in the generated HTML report (or otherwise), BUT it's not mandatory; the framework performs all tests fine even if matplotlib fails to import.

It also follows a somewhat convention-oriented run structure. We'll see how.

Before starting any performance check, a user is supposed to create a project using
$ multimech-newproject project_name

This creates a dir 'project_name' at the place of execution, with the following dir-structure:
./project_name
├── config.cfg
└── test_scripts
    └── v_user.py

Here, './project_name/config.cfg' mainly contains global configuration around the duration to run:


[global]
run_time = 30                # seconds the test runs for, required field
rampup = 0                   # seconds over which users ramp up, required field
results_ts_interval = 10     # seconds per time-series interval in result analysis, required field
progress_bar = on            # console progress bar on/off, default=on
console_logging = off        # console logging to standard output, default=off
xml_report = off             # xml/jtl report generation, default=off
# results_database = sqlite:///results.db ## optional component to push results to a DB
# post_run_script = do_whatever.py ## to run any script to do anything on test completion

[user_group-1]
threads = 3                  # number of threads to run the following script in
script = v_user.py           # the actual script that will be run as the test
## more user groups with same/different 'threads' and 'script' values can be added, with names like user_group-ANYTHING

Now, to run this prepared project:
$ multimech-run ./project_name

As I already mentioned, it picks certain implementations by convention; the most important one to notice is the way the user-group script is to be prepared.
Basically it's to be a Python script that can utilize all the Python magic you know or install. But it needs to have a 'Transaction' class with a 'run' method.
When you run your project using multi-mechanize, it instantiates the 'Transaction' class (once per thread) and keeps calling its 'run' method in a loop...

Example user-script to load-test your 'python -m SimpleHTTPServer':
# v_user_server.py
import httplib


class Transaction(object):
    def run(self):
        conn = httplib.HTTPConnection('127.0.0.1:8000')
        conn.request('GET', '/')
        resp = conn.getresponse()
        conn.close()
        # integer division: status/400 == 0 means status < 400, i.e. no 4xx/5xx error
        assert ((resp.status / 400) == 0), 'BadResponse: HTTP %s' % resp.status
# finito file
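
If you want per-action response-time stats in the generated report, the framework also reads an optional 'custom_timers' dict off the Transaction instance after each run. A slightly extended sketch of the same script (the timer name is arbitrary):

# v_user_timed.py
import httplib
import time


class Transaction(object):
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        start = time.time()
        conn = httplib.HTTPConnection('127.0.0.1:8000')
        conn.request('GET', '/')
        resp = conn.getresponse()
        conn.close()
        self.custom_timers['Homepage'] = time.time() - start  # seconds, reported per timer
        assert ((resp.status / 400) == 0), 'BadResponse: HTTP %s' % resp.status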


Find the detailed scripting guide with plenty of good examples here: http://testutils.org/multi-mechanize/scripts.html

Thursday, May 16, 2013

pycallgraph : usage example

'pycallgraph' is a fine Python utility enabling developers to prepare a call graph for any piece of Python code... it also reports the number of times each call is made and the total time spent in it.

Homepage: http://pycallgraph.slowchop.com/pycallgraph/wiki
Codebase: https://github.com/gak/pycallgraph

it requires 'graphviz' (http://www.graphviz.org/) to be present on the same machine, for image generation from the analyzed call data...

install : $ pip install pycallgraph

Usage#1 Selective graphing

wrap the code to be graphed as follows...
import pycallgraph
pycallgraph.start_trace()
fetch() # logic to be graphed
pycallgraph.stop_trace()
just_log() # logic NOT to be graphed
pycallgraph.start_trace()
process() # logic to be graphed
pycallgraph.stop_trace()
pycallgraph.make_dot_graph('path_to_graph.png')

Usage#2 Decorator

Import the decorator method and place the decorator over any method for which a call graph is required...
i.e. import the file defining the @callgraph decorator wherever you require it and use it.

The decorator Python code, plus sample code with decorator usage and selective trace usage, are at the end of this blog... from this gist: https://gist.github.com/abhishekkr/5592520
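
In case the gist moves, here's a minimal sketch of what such a decorator can look like against the old module-level pycallgraph API (the names here are mine, not necessarily the gist's):

# a rough @callgraph decorator sketch
import functools
import pycallgraph


def callgraph(png_path='callgraph.png'):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            pycallgraph.start_trace()
            try:
                return func(*args, **kwargs)
            finally:
                pycallgraph.stop_trace()
                pycallgraph.make_dot_graph(png_path)
        return wrapper
    return decorator


@callgraph(png_path='fetch_calls.png')
def fetch():
    pass  # logic to be graphed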



The pycallgraph.start_trace() method also lets you pass a filter function, to purposely not graph some modules; say in nova, I don't wanna graph all the dependency libraries' magical calls...
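
For example, a sketch using the old API's GlobbingFilter (the excluded module patterns here are just illustrative):

import pycallgraph

trace_filter = pycallgraph.GlobbingFilter(exclude=['eventlet.*', 'sqlalchemy.*'])
pycallgraph.start_trace(filter_func=trace_filter)
fetch()  # only calls passing the filter get graphed
pycallgraph.stop_trace()
pycallgraph.make_dot_graph('filtered_graph.png')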


STATUTORY WARNING: The graph will be huge for a very high-level call, and probably unreadable due to the overworked image generator. Use it wisely, on burnt areas.


Call graph images for the code samples in the gist:

[call graph image: for the call with the decorator]

[call graph image: for the call with selective trace]

Friday, April 12, 2013

console and json

recently Alan posted a very nice article around prettifying JSON, which reminded me of <this draft>... he posted 2 out of the 3 utilities I was gonna mention... so here is the 3rd, and the shell-profile way to use the first 2

$ sudo wget -c -O /etc/profile.d/a.json.sh https://raw.github.com/abhishekkr/tux-svc-mux/master/shell_profile/a.json.sh
it contains 2 functions made available at the shell:

# usage example:
# $ json_me 'echo {"a": 1, "b": 2}'
# $ json_me 'curl http://127.0.0.1/my.json'
json_me(){
  bash -c "$@" | python -mjson.tool
}

# requirement: $ pip install pjson
# usage example:
# $ pjson_me 'echo {"a": 1, "b": 2}'
# $ pjson_me 'curl http://127.0.0.1/my.json'
pjson_me(){
  bash -c "$@" | pjson
}



The 3rd utility is 'jq', an awesome utility which prettifies and performs sed-like operations on JSON. This tutorial describes all its magical powers.
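
A taste of it ('.' just pretty-prints; filters drill into fields):

$ echo '{"a": 1, "b": {"c": 2}}' | jq '.'
{
  "a": 1,
  "b": {
    "c": 2
  }
}
$ echo '{"a": 1, "b": {"c": 2}}' | jq '.b.c'
2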

Friday, March 8, 2013

make rvmsudo use the running user's env values

I was automating my personal Linux box setup in Opscode Chef, to ease my life henceforth and reach the "practice what I preach" state...
While setting up the 'development' tools recipe and automating the 'nodejs' set-up... due to a missing reliable yum repository, I decided on using 'nvm' (it is to nodejs what rvm is to ruby).

to utilize 'nvm' to install/manage 'nodejs', it needs to be installed first
$ curl https://raw.github.com/creationix/nvm/master/install.sh | sh

How install.sh works here is: it calculates NVM_TARGET from "$HOME" to handle the location of the '.nvm' directory, cloned from the nvm git repo, where the entire nodejs environment lives.
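
Roughly, the relevant part of install.sh does something like this (a simplified sketch, not the verbatim script):

NVM_TARGET="$HOME/.nvm"                                        # lands under root's HOME when run via sudo
git clone https://github.com/creationix/nvm.git "$NVM_TARGET"
echo "source $NVM_TARGET/nvm.sh" >> "$HOME/.profile"           # hook nvm into the shell profile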

I had the install command in a Chef 'execute' resource, with its user value referring to the dev-user of my choice.

While running 'rvmsudo chef-solo ....', it picks the desired user, but because of the $HOME inference in 'nvm/install.sh', the HOME value still got picked as '/root'. It messed up the situation.

To fix it, and any similar issues around permissions which might occur... using
$ rvmsudo USER=$USER HOME=$HOME chef-solo -j....
it's working with the desired output.

Though, I still need to correct my nodejs set-up by pushing the required RPM to my yum repository.

But this 'rvmsudo' trick can be used for any similar scenario where you need sudo privileges for rvm but desire to provide the current user's environment values.

Sunday, January 13, 2013

Apache httpd VirtualHosts : one gets default, unknown faults

Recently I faced a situation where, even after removing a VirtualHost, its ServerName was still getting an HTTP 200 response. It was all because of a missed RTFM agenda.

When VirtualHosts get applied in Apache HTTPD server configuration, the first definition encountered by Apache gets selected as the default, used whenever a request doesn't match any configured ServerName.

To get an explicit _default_ provider, one of the vhost definitions needs to be told so... as in the last vhost of the configuration piece below.
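
Something like this (Apache 2.2 syntax; the IPs, names and paths are made up for illustration):

NameVirtualHost 10.0.0.1:80

<VirtualHost 10.0.0.1:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>

# explicit catch-all for any address:port not matched above
<VirtualHost _default_:*>
    ServerName default.localdomain
    DocumentRoot /var/www/default
</VirtualHost>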
