Friday, December 26, 2014

sphinx-doc failing for manpage builder

Sphinx-doc is a utility to create intelligent and friendly documentation, with existing support for Python and C/C++ projects.

It can produce documentation in varied formats like HTML, LaTeX, epub, Texinfo, manpages and plaintext. It has lots of other features and multiple community-created extensions to extend its usability.

To give it a quick try, the "Sphinx - First Steps" tutorial can be used; for a bit more, there is the "sampledoc tutorial".

Here we will be discussing an issue faced while creating manpages from a Sphinx-documented project: an error saying the "man" builder is not supported.

The solution is very simple but being a beginner (or external user) makes one doubt it.

The system package of Sphinx-doc on some of the distros is still below v1.0, and manpage support was only added in v1.0, so those packages lack it.
So, to counter this error, just don't use the system package for it; instead use "pip install sphinx".
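Beyond installing a new enough Sphinx, the man builder reads its input from the man_pages list in conf.py; a minimal sketch (the project and author names here are placeholders, not from any real project):

```python
# conf.py fragment -- 'myproject' and 'Author Name' are placeholders
# each tuple: (source start file, manpage name, description, authors, manual section)
man_pages = [
    ('index', 'myproject', u'MyProject Documentation', [u'Author Name'], 1),
]
```

With that in place, `sphinx-build -b man . _build/man` (or `make man`) should drop a `myproject.1` into the output directory.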

If you don't have pip already, follow this article: .

Thursday, February 13, 2014

xopen : shell function to have verbose xdg-open : Mac's "open" alternative for Linux

I have been using xdg-open for some time now; it's a utility similar to the "open" utility popular among Mac OS X users.

It enables you to open any file in the default "open with" program assigned to its type. So, just passing any type of file to this utility lets you open it in the program it's supposed to open in.

What is xdg-open?

There is just this little shell function that makes your xdg-open usage a bit more verbose about the reason for errors faced (bad syntax / missing file / missing handler program / open failure). It also shortens the utility name, obviously.

xopen ()
{
    xdg-open "$@";
    _TMP_EXITCODE=$?;    # capture xdg-open's exit code
    if [ "${_TMP_EXITCODE}" == "1" ]; then
        echo "[ERROR:] Error in command line syntax";
    elif [ "${_TMP_EXITCODE}" == "2" ]; then
        echo "[ERROR:] One of the files passed on the command line did not exist";
    elif [ "${_TMP_EXITCODE}" == "3" ]; then
        echo "[ERROR:] A required tool could not be found";
    elif [ "${_TMP_EXITCODE}" == "4" ]; then
        echo "[ERROR:] The action failed";
    fi;
    return $_TMP_EXITCODE
}

Monday, August 12, 2013

Fabric (py orchestration util) issue with my FreeBSD node, and Easy Fix

Recently I started implementing a Fabfile to deal with 7-8 different types of Linux/BSD distributions at a time.

Now, for the folks new to the term "Fabfile": it's the key what-to-do guide for Fabric. Fabric is a fine orchestration utility capable of commanding numerous remote machines in parallel. For the Ruby community folks, you can call it Capistrano's Python sibling.

The way Fabric works is: whatever you ask it to "run(<cmd>)" over remote nodes, it runs that command by passing it to "bash -l -c '<cmd>'".

Now, the issue with my FreeBSD machine was that it didn't have "bash" in the first place. So, it was failing.

It can be fixed in 2 ways

First way,
add a line like the following to your fabfile, to make Fabric use a shell that is present on the remote node; as the FreeBSD node had 'C Shell': env.shell = '/bin/csh -c'
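As a sketch, the fabfile fragment for that first way would look like this (assuming Fabric 1.x, where env.shell defaults to '/bin/bash -l -c'):

```python
# fabfile.py fragment -- assumes Fabric 1.x
from fabric.api import env

# Fabric's default wrapper is '/bin/bash -l -c';
# the FreeBSD node only ships csh, so point env.shell at it
env.shell = '/bin/csh -c'
```

Every subsequent run(<cmd>) in that fabfile then gets wrapped as "/bin/csh -c '<cmd>'" instead.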

Second way,
install 'bash' on the remote nodes and make sure it's reachable via path "/bin/bash"

pkg_add -r bash 
ln -sf /usr/local/bin/bash /bin/bash

Saturday, May 18, 2013

perform performance test using multi-mechanize

Gave multi-mechanize a try recently, when in need of analyzing an application's behavior under massive concurrent requests being sent to it.

What? multi-mechanize is an open-source load testing framework written in, and configurable with, Python. So you can either make calls to any web service, or simply import and utilize any Python-accessible API/service.
It's the successor of an old Python load testing framework called Pylot.

Developed and maintained @GitHub

Install is available through 'pip' too, as
   $ pip install multi-mechanize
It also requires the matplotlib library if you wanna view graphs in the generated HTML report (or otherwise), BUT it's not mandatory, as the framework performs all tests fine with just an import error for matplotlib.

It also follows a somewhat convention-oriented run structure. We'll see how.

Before starting any performance check, the user is supposed to create a project using
$ multimech-newproject project_name

This creates a dir 'project_name' at the place of execution, with the following dir structure:
├── config.cfg
└── test_scripts

here './project_name/config.cfg' mainly contains global configuration (the duration to run, etc.) plus one section per user group:

[global]
run_time = 30                # seconds for the test to run, required field
rampup = 0                   # seconds for users to ramp up, required field
results_ts_interval = 10     # time-series interval (seconds) for result analysis, required field
progress_bar = on            # console progress bar on/off, default=on
console_logging = off        # console logging to standard output, default=off
xml_report = off             # xml/jtl report generation, default=off
# results_database = sqlite:///results.db  ## optional component to push results to a DB
# post_run_script =  ## to run any script to do anything on test completion

[user_group-1]
threads = 3                  # number of threads to run the following script in
script =                     # the actual script that will be run to test
## similar more user groups with same/different threads and script values can be added with section names like [user_group-ANYTHING].

Now, to run this prepared project:
$ multimech-run ./project_name

As I already mentioned, it picks certain implementations by convention; the most important to notice is the way the user-group script is to be prepared.
Basically, it's to be a Python script that can utilize all the Python magic you know or install, but it needs to have a 'Transaction' class with a 'run' method.
When you run your project using multi-mechanize, it instantiates the 'Transaction' class and keeps calling its 'run' method in a loop...

Example user script to test load on your 'python -m SimpleHTTPServer':
import httplib

class Transaction(object):
    def run(self):
        # SimpleHTTPServer listens on port 8000 by default
        conn = httplib.HTTPConnection('localhost:8000')
        conn.request('GET', '/')
        resp = conn.getresponse()
        assert ((resp.status / 400) == 0), 'BadResponse: HTTP %s' % resp.status
# finito file

Find the detailed scripting guide with plenty of good examples here.

Thursday, May 16, 2013

pycallgraph : usage example

'pycallgraph' is a fine Python utility enabling developers to prepare a call graph for any piece of Python code... it also records the number of times each call is made and the total time spent in it.


it requires 'graphviz' to be present on the same machine, for image generation from the analyzed call data...

install : $ pip install pycallgraph

Usage#1 Selective graphing

wrap the code to be graphed as follows...
import pycallgraph

pycallgraph.start_trace()
fetch()      # logic to be graphed
pycallgraph.stop_trace()
just_log()   # logic NOT to be graphed
pycallgraph.start_trace(reset=False)  # resume tracing, keep earlier data
process()    # logic to be graphed
pycallgraph.make_dot_graph('callgraph.png')

Usage#2 Decorator

Import the decorator method and place the decorator over any method where a call graph is required... i.e. import the decorator's file wherever you require the @callgraph decorator, and use it.

Decorator Python code, plus sample code with decorator usage and selective trace usage, are at the end of this blog... from this gist:
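Since the gist link didn't survive here, a minimal sketch of such a decorator (assuming the old module-level pycallgraph API, start_trace/make_dot_graph, from pycallgraph < 1.0; the import is kept lazy so merely defining decorated functions doesn't need the library installed):

```python
import functools

def callgraph(func):
    # trace each call of func and render its call graph to <funcname>.png
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        import pycallgraph  # assumption: pycallgraph < 1.0 module-level API
        pycallgraph.start_trace()
        try:
            return func(*args, **kwargs)
        finally:
            pycallgraph.make_dot_graph('%s.png' % func.__name__)
    return wrapper
```

Then just decorate any function with @callgraph, and every invocation writes out a fresh graph image.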

The pycallgraph.start_trace() method lets you pass a filter function to avoid graphing some modules on purpose; say, in nova I don't wanna graph all the dependency libraries' magical calls...

STATUTORY WARNING: Graph will be huge for a very high level call and probably unreadable due to overworked image generator. Use it wisely on burnt areas.

Call Graph images for code sample in Gists

for call with decorator

for call with selective trace

Friday, April 12, 2013

console and json

recently Alan posted a very nice article around prettifying JSON, which reminded me of <this draft>... he posted 2 of the 3 utilities I was gonna mention... so here is the 3rd, and the shell-profile way to use the first 2

$ sudo wget -c -O /etc/profile.d/
it contains 2 functions available at the shell:

# usage example:
# $ json_me 'echo {"a": 1, "b": 2}'
# $ json_me 'curl'
json_me () {
    bash -c "$*" | python -m json.tool
}

# requirement: $ pip install pjson
# usage example:
# $ pjson_me 'echo {"a": 1, "b": 2}'
# $ pjson_me 'curl'
pjson_me () {
    bash -c "$*" | pjson
}

The 3rd utility is 'jq', an awesome utility which prettifies and performs sed-like operations on JSON. This tutorial describes all its magical powers.
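A small taste of it, to show what jq adds beyond prettifying (this is standard jq filter syntax):

```shell
# prettify (identity filter)
echo '{"a": 1, "b": {"c": 2}}' | jq .
# extract a nested field
echo '{"a": 1, "b": {"c": 2}}' | jq '.b.c'     # prints 2
# sed-like surgery: rewrite a value in place
echo '{"a": 1}' | jq '.a = 99'
```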

Friday, March 8, 2013

make rvmsudo using running user's env values

I was automating my personal Linux box setup in Opscode Chef, to ease my life henceforth and reach the 'practice what I preach' state...
While setting up the 'development' tools recipe and automating the 'nodejs' set-up... due to a missing reliable yum repository, I decided on using 'nvm' (it is to nodejs what rvm is to ruby).

to utilize 'nvm' to install/manage 'nodejs', it needs to be installed first
$ curl | sh

How the installer works: it calculates NVM_TARGET="$HOME" to determine the location for the '.nvm' directory cloned from the nvm git repo, where the entire nodejs environment lives.

I had the install command as an execute resource, with the user value referring to the dev-user of my choice.

While running 'rvmsudo chef-solo ....', it picks the desired user, but because of the $HOME inference in the 'nvm/' script, the HOME value still got picked as '/root'. It messed up the situation.

To fix it, and any such permission issues which might occur, using
$ rvmsudo USER=$USER HOME=$HOME chef-solo -j....
it's working with the desired output.

Though, I need to correct my nodejs set-up by pushing the required RPM to my yum repository.

But this 'rvmsudo' trick can be used for any similar scenario where you need sudo privileges for rvm but desire to provide the current user's environment values.