tag:blogger.com,1999:blog-62964511410194889682024-02-21T06:41:33.862-08:00Technology @Walktech-learnings gained on tasks-trekkingabionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.comBlogger35125tag:blogger.com,1999:blog-6296451141019488968.post-58958057401466558032014-12-26T22:59:00.001-08:002014-12-26T22:59:34.949-08:00sphinx-doc failing for manpage builder<div dir="ltr" style="text-align: left;" trbidi="on">
<b>Sphinx-doc</b> (<a href="http://sphinx-doc.org/latest/index.html">http://sphinx-doc.org/latest/index.html</a>) is a utility to create intelligent and friendly documentation with existing support for Python and C/C++ projects.<br />
<br />
It can produce documentation in varied formats like HTML, LaTeX, epub, Texinfo, manpages and plain text. It has lots of other features and multiple community-created extensions that boost its usability.<br />
<br />
To give it a quick try, the "<a href="http://sphinx-doc.org/latest/tutorial.html" target="_blank">Sphinx - First Steps</a>" tutorial can be used, or go a bit further with the "<a href="http://matplotlib.org/sampledoc/" target="_blank">sampledoc tutorial</a>".<br />
<br />
Here we will discuss an issue faced whilst creating manpages from a Sphinx-documented project, where the build fails because the "man" builder is reported as not supported.<br />
<br />
The solution is very simple, but being a beginner (or external user) makes one doubt it.<br />
<br />
The system package version of Sphinx-doc on some distros is still below v1.0, and manpage support was only added in v1.0.<br />
So, <b>to counter this error just don't use the system package </b>for it <b>but</b> instead use <b>"pip install sphinx".</b><br />
<br />
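Once a recent Sphinx is in place, the man builder also needs a `man_pages` entry in the project's `conf.py`; a minimal sketch (project and author names here are hypothetical):

```python
# conf.py -- configuration consumed by Sphinx's "man" builder
# Each tuple: (source start file, page name, description, authors, manual section)
man_pages = [
    ('index', 'myproject', 'MyProject Documentation', ['Jane Doe'], 1),
]
```

The pages are then built with `sphinx-build -b man <sourcedir> <builddir>`, or `make man` if the quickstart-generated Makefile is in use.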
If you don't have pip already, follow this article: <a href="https://pip.pypa.io/en/latest/installing.html">https://pip.pypa.io/en/latest/installing.html</a> .</div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-17876405161132352512014-02-13T05:48:00.001-08:002014-02-13T05:48:38.579-08:00xopen : shell function to have verbose xdg-open : Mac's "open" alternative for Linux<div dir="ltr" style="text-align: left;" trbidi="on">
I have been using xdg-open for some time now; it's a utility similar to the "open" utility popular among Mac OS X users.<br />
<br />
It enables you to open any file in the default "open with" program assigned to its type. So, just passing any type of file to this utility lets you open it in the program it's supposed to open in.<br />
<br />
What is <b><span style="font-family: Verdana, sans-serif;">xdg-open</span></b>?<br />
source[1]: <a href="https://wiki.archlinux.org/index.php/xdg-open">https://wiki.archlinux.org/index.php/xdg-open</a><br />
source[2]: <a href="http://linux.die.net/man/1/xdg-open">http://linux.die.net/man/1/xdg-open</a><br />
<br />
Below is a little shell function that makes your xdg-open usage a bit more verbose about the reason for errors (bad syntax / missing file / missing program / open failure). It also shortens the utility name, obviously.<br />
<br />
xopen ()<br />
{<br />
xdg-open "$@";<br />
_TMP_EXITCODE=$?;<br />
case "${_TMP_EXITCODE}" in<br />
1) echo "[ERROR:] Error in command line syntax";;<br />
2) echo "[ERROR:] One of the files passed on the command line did not exist";;<br />
3) echo "[ERROR:] A required tool could not be found";;<br />
4) echo "[ERROR:] The action failed";;<br />
esac;<br />
return $_TMP_EXITCODE<br />
}</div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-17666702669770802432013-08-12T08:58:00.000-07:002013-08-12T08:58:10.553-07:00Fabric (py orchestration util) issue with my FreeBSD node, and Easy Fix<div dir="ltr" style="text-align: left;" trbidi="on">
Recently I started implementing a Fabfile to deal with 7-8 different types of Linux/BSD distributions at a time.<br />
<br />
Now for the folks new to the term "<b><i>Fabfile</i></b>": it's the key what-to-do guide for Fabric. <b><a href="http://docs.fabfile.org/en/1.7/" target="_blank">Fabric </a></b>is a fine orchestration utility capable of commanding numerous remote machines in parallel. For the Ruby community folks, you can call it Capistrano's Python sibling.<br />
<br />
The way Fabric works is that whatever you ask it to "<b><i>run(&lt;cmd&gt;)</i></b>" over remote nodes, it runs that command by passing it to " <b><i>bash -l -c '&lt;cmd&gt;'</i></b> ".<br />
<br />
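To illustrate the shape of that wrapping (this is just an illustration of the idea, not Fabric's actual internals), here is a sketch of how the shell prefix turns a command into an argv, and what changing the shell alters:

```python
import shlex

def wrap_command(cmd, shell='/bin/bash -l -c'):
    """Sketch: build the argv a shell-wrapped remote command would use."""
    return shlex.split(shell) + [cmd]

# default wrapping assumes a bash login shell on the remote node
print(wrap_command('uname -a'))                  # ['/bin/bash', '-l', '-c', 'uname -a']
# overriding the shell for a bash-less FreeBSD node
print(wrap_command('uname -a', '/bin/csh -c'))   # ['/bin/csh', '-c', 'uname -a']
```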
Now, the issue with my FreeBSD machine was that it didn't have "bash" in the first place. So, it was failing.<br />
<br />
It can be fixed in two ways.<br />
<br />
<br />
<div>
<b>First way</b>,</div>
<div>
add a line like the following to make Fabric use the shell present on the remote node; my FreeBSD node had 'C Shell'... so</div>
<br /><blockquote class="tr_bq">
env.shell = '/bin/csh -c'</blockquote>
<br />
<br />
<b>Second way</b>,<br />
install 'bash' on the remote nodes and make sure it's reachable via path "/bin/bash"<br />
<br />
<blockquote class="tr_bq">
pkg_add -r bash </blockquote>
<blockquote class="tr_bq">
ln -sf /usr/local/bin/bash /bin/bash</blockquote>
<br /><br /></div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-37080823843944715732013-05-18T13:09:00.000-07:002013-05-18T13:09:22.726-07:00perform performance test using multi-mechanize<div dir="ltr" style="text-align: left;" trbidi="on">
Gave <span style="font-size: large;"><a href="http://testutils.org/multi-mechanize/" target="_blank">multi-mechanize</a></span> a try recently, when in need of analyzing an application's behavior under massive concurrent requests.<br />
<br />
What? multi-mechanize is an open-source load testing framework written in, and configurable with, Python. So either make calls to any web service, or simply import and utilize any Python-accessible API/service.<br />
It's the successor of an older Python load testing framework called Pylot.<br />
<br />
Developed and maintained <a href="https://github.com/cgoldberg/multi-mechanize" target="_blank">@GitHub</a><br />
<br />
Install available through 'pip' too as<br />
<i><span style="font-family: Georgia, Times New Roman, serif;">$ <b>pip install multi-mechanize</b></span></i><br />
It also requires the matplotlib library if you want to view graphs in the generated HTML report (or otherwise), BUT it's not mandatory, as the framework performs all tests fine with an import error for matplotlib.<br />
<br />
It also follows a somewhat convention-oriented run structure. We'll see how.<br />
<br />
Before starting any performance check, the user is supposed to create a project using<br />
<blockquote class="tr_bq">
$ <b>multimech-newproject project_name</b></blockquote>
<br />
This creates a dir 'project_name' at the place of execution with the following dir structure<br />
<b>./project_name</b><br />
<b>├── config.cfg</b><br />
<b>└── test_scripts</b><br />
<b> └── v_user.py</b><br />
<br />
here <b><i>'./project_name/config.cfg'</i></b> mainly contains global configuration around run duration, ramp-up and reporting,<br />
<br />
<br />
<blockquote class="tr_bq">
<b>[global]</b><br />run_time = 30 <i># second duration for test to run, required field</i><br />rampup = 0 <i># second duration for users to ramp-up, required field</i><br />results_ts_interval = 10 <i># second time series interval for result analysis, required field</i><br />progress_bar = on <i># console progress bar on/off, default=on</i><br />console_logging = off <i># console logging to standard output, default=off</i><br />xml_report = off <i> # xml/jtl report generation, default=off</i><br /># results_database = sqlite:///results.db <i>## optional component to push results to DB</i><br /># post_run_script = do_whatever.py <i>## to run any script to do anything on test completion </i></blockquote>
<blockquote class="tr_bq">
<b>[user_group-1]</b><br />threads = 3 <i># number of threads to run the following script in</i><br />script = v_user.py <i># the actual script that will be run to test; any script name can be given</i></blockquote>
<blockquote class="tr_bq">
<i>## similar more users with same/different threads and script values can be added with names like user_group-ANYTHING.</i></blockquote>
<br />
Now, to run this prepared project<br />
<blockquote class="tr_bq">
$ <b>multimech-run ./project_name</b></blockquote>
<br />
As I already mentioned, it picks certain implementations by convention; most important to notice is the way the user-group script is to be prepared.<br />
Basically, it's to be a Python script that can utilize all the Python magic you know or install. But it needs to have a 'Transaction' class with a 'run' method.<br />
When you run your project using multi-mechanize, it instantiates the 'Transaction' class and keeps calling the 'run' method in a loop...<br />
<br />
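Since a user-script can exercise any Python-accessible API and not just HTTP, a minimal non-HTTP Transaction might look like this (hypothetical example; the framework just instantiates the class and calls run() in a loop):

```python
# v_user_local.py -- hypothetical user-script exercising a plain Python callable
class Transaction(object):
    def run(self):
        # stand-in for whatever API call's latency is being measured
        result = sum(range(100))
        assert result == 4950, 'BadResult: %s' % result
```

Point a `script = v_user_local.py` line in a `[user_group-*]` section at it, as with any other script.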
<b>Example user-script</b> to test load at your <b>'python -m SimpleHTTPServer'</b><br />
<blockquote class="tr_bq">
# v_user_server.py<br /><span style="font-family: inherit;">import httplib<br /><br /><br />class Transaction(object):<br /> def run(self):<br /> conn = httplib.HTTPConnection('127.0.0.1:8000')<br /> conn.request('GET', '/')<br /> resp = conn.getresponse()<br /> conn.close() </span></blockquote>
<blockquote class="tr_bq">
<span style="font-family: inherit;"> assert ((resp.status / 400) == 0), 'BadResponse: HTTP %s' % resp.status</span><br /># finito file</blockquote>
<br />
<br />
Find the detailed scripting guide with plenty of good examples <a href="http://testutils.org/multi-mechanize/scripts.html" target="_blank">here</a>: <a href="http://testutils.org/multi-mechanize/scripts.html">http://testutils.org/multi-mechanize/scripts.html</a><br />
<br />
</div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-37419268350555515952013-05-16T08:32:00.000-07:002013-05-16T08:32:03.469-07:00pycallgraph : usage example<div dir="ltr" style="text-align: left;" trbidi="on">
'<a href="http://pycallgraph.slowchop.com/pycallgraph/wiki">pycallgraph</a>' is a fine Python utility enabling developers to prepare a <a href="http://en.wikipedia.org/wiki/Call_graph">call graph</a> for any Python code piece... it also reports the number of times a call is made and the total time spent in it<br />
<br />
Homepage: <a href="http://pycallgraph.slowchop.com/pycallgraph/wiki">http://pycallgraph.slowchop.com/pycallgraph/wiki</a><br />
Codebase: <a href="https://github.com/gak/pycallgraph">https://github.com/gak/pycallgraph</a><br />
<br />
it requires 'graphviz' (<a href="http://www.graphviz.org/">http://www.graphviz.org/</a>) to be present on the same machine for image generation from the analyzed call data...<br />
<br />
install : <b>$ <i>pip install pycallgraph</i></b><br />
<br />
<div>
<h4 style="text-align: left;">
Usage#1 Selective graphing</h4>
wrap the code to be graphed as follows...<br />
<blockquote class="tr_bq">
import pycallgraph<br />
pycallgraph.start_trace()<br />
fetch() # logic to be graphed<br />
pycallgraph.stop_trace()<br />
just_log() # logic NOT to be graphed<br />
pycallgraph.start_trace()<br />
process() # logic to be graphed<br />
pycallgraph.stop_trace()<br />
pycallgraph.make_dot_graph('path_to_graph.png')</blockquote>
<br />
<h4 style="text-align: left;">
Usage#2 Decorator</h4>
Import the decorator method and place the decorator over any method for which a call graph is required...<br />
import the file defining it wherever you require the @callgraph decorator and use it<br />
<br />
The decorator's Python code, plus sample code with decorator usage and selective-trace usage, is at the end of this post... from this gist: <a href="https://gist.github.com/abhishekkr/5592520">https://gist.github.com/abhishekkr/5592520</a><br />
<br />
<br />
<br />
The pycallgraph.start_trace() method lets you pass a filter function to deliberately skip graphing some modules; say in nova, I don't want to graph all the dependency libraries' magical calls...<br />
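A filter is essentially a predicate over call metadata; the sketch below shows the idea with a module-name prefix check (the exact callback signature pycallgraph expects may differ between versions, so treat this purely as an illustration):

```python
def make_module_filter(excluded_prefixes=('nova.db', 'eventlet')):
    """Return a predicate rejecting calls from modules under the given prefixes."""
    def should_trace(module_name):
        return not any(module_name.startswith(p) for p in excluded_prefixes)
    return should_trace

keep = make_module_filter()
print(keep('myapp.handlers'))  # True  -> call gets graphed
print(keep('eventlet.hubs'))   # False -> call gets skipped
```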
<br />
<br />
<b>STATUTORY WARNING:</b> The graph will be huge for a very high-level call, and probably unreadable due to the overworked image generator. Use it wisely on burnt areas.<br />
<br />
<br />
Call Graph images for code sample in Gists<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgH4oepTUHV45wruIbB-RlGbUscZYqI-rq1sX3kuZs7g1hXiEKFFCIry-Lh0DUVA2p5g33C_y-rYILaIZd1nktG07PpGqN1y6B6NocaJXLeFZLe2kJF26sSjbuoKBHO9ab_Ny3Ucr3TuNc/s1600/add_n_mul.png"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgH4oepTUHV45wruIbB-RlGbUscZYqI-rq1sX3kuZs7g1hXiEKFFCIry-Lh0DUVA2p5g33C_y-rYILaIZd1nktG07PpGqN1y6B6NocaJXLeFZLe2kJF26sSjbuoKBHO9ab_Ny3Ucr3TuNc/s320/add_n_mul.png" /></a> for call with decorator<br />
<br />
<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMYX0IDQK2Y8flhdqN7RjuwEyv0S7Rl8XxXVTwe-OeJKp30qGlByzhUT-eEBsWh7LLGc3oc2ycWKBdkqxEqxLDNLiMgY0t50GRRXGx1EPp6ylvJHcAhK4YUIdoYpqHS_xZO4AbT6Gevf0/s1600/selective_add_n_mul.png"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMYX0IDQK2Y8flhdqN7RjuwEyv0S7Rl8XxXVTwe-OeJKp30qGlByzhUT-eEBsWh7LLGc3oc2ycWKBdkqxEqxLDNLiMgY0t50GRRXGx1EPp6ylvJHcAhK4YUIdoYpqHS_xZO4AbT6Gevf0/s1600/selective_add_n_mul.png" /></a> for call with selective trace</div>
<div>
<script src="https://gist.github.com/abhishekkr/5592520.js"></script>
</div>
</div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-14485971557568708092013-04-12T09:03:00.001-07:002013-04-12T09:06:28.089-07:00console and json<div dir="ltr" style="text-align: left;" trbidi="on">
recently<b><i> <a href="http://www.skorks.com/2013/04/the-best-way-to-pretty-print-json-on-the-command-line/" target="_blank">Alan posted a very nice article around prettifying JSON</a></i></b>, which reminded me of this draft... he posted 2 of the 3 utilities I was gonna mention... so here is the 3rd, and the shell-profile way to use the first 2<br />
<br />
<blockquote class="tr_bq">
$ sudo wget -c -O /etc/profile.d/a.json.sh <a href="https://raw.github.com/abhishekkr/tux-svc-mux/master/shell_profile/a.json.sh">https://raw.github.com/abhishekkr/tux-svc-mux/master/shell_profile/a.json.sh</a></blockquote>
it contains 2 functions available at shell<br />
<br />
<blockquote class="tr_bq">
# usage example:<br /># $ json_me 'echo {"a": 1, "b": 2}'<br /># $ json_me 'curl http://127.0.0.1/my.json'<br />json_me(){<br />bash -c "$@" | python -mjson.tool<br />}</blockquote>
<blockquote class="tr_bq">
# requirement: $ pip install pjson<br />
# usage example:<br />
# $ pjson_me 'echo {"a": 1, "b": 2}'<br />
# $ pjson_me 'curl http://127.0.0.1/my.json'<br />
pjson_me(){<br />
bash -c "$@" | pjson<br />
}</blockquote>
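`python -mjson.tool` is just a thin wrapper over the stdlib `json` module, so the same pretty-printing can be done inline (minimal sketch):

```python
import json

raw = '{"b": 2, "a": 1}'
pretty = json.dumps(json.loads(raw), indent=4, sort_keys=True)
print(pretty)
# prints:
# {
#     "a": 1,
#     "b": 2
# }
```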
<br />
<br />
<br />
The 3rd utility is <b><i><a href="http://stedolan.github.io/jq/" target="_blank">'jq'</a></i></b>, an awesome utility which prettifies and performs sed-like operations on JSON. <a href="http://stedolan.github.io/jq/tutorial/" target="_blank">This tutorial</a> describes all its magical powers.</div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-1476416499210909782013-03-08T05:50:00.000-08:002013-03-08T05:50:06.342-08:00make rvmsudo using running user's env values<div dir="ltr" style="text-align: left;" trbidi="on">
I was automating my personal Linux box setup in <a href="http://docs.opscode.com/chef_quick_overview.html" target="_blank">opscode chef</a>, to ease my life henceforth and reach a "practice what I preach" state...<br />
<div>
While setting up the 'development' tools recipe, automating the '<a href="http://nodejs.org/" target="_blank">nodejs</a>' set-up... due to a missing reliable yum repository, I decided on using '<a href="https://github.com/creationix" target="_blank">nvm</a>' (it's for nodejs as <a href="https://rvm.io/" target="_blank">rvm</a> is for <a href="http://www.ruby-lang.org/en/" target="_blank">ruby</a>).</div>
<div>
<br /></div>
<div>
to utilize 'nvm' to install/manage 'nodejs', one needs to install it first</div>
<div>
<blockquote class="tr_bq">
<b>$</b> <code><span style="color: #660000;">curl https://raw.github.com/creationix/nvm/master/install.sh | sh</span></code></blockquote>
<br />
How install.sh here works is, it calculates <b><span class="nv" style="background-color: white; border: 0px; color: teal; font-family: Consolas, 'Liberation Mono', Courier, monospace; font-size: 12px; line-height: 16px; margin: 0px; padding: 0px; white-space: pre;">NVM_TARGET</span><span class="o" style="background-color: white; border: 0px; color: #333333; font-family: Consolas, 'Liberation Mono', Courier, monospace; font-size: 12px; line-height: 16px; margin: 0px; padding: 0px; white-space: pre;">=</span><span class="s2" style="background-color: white; border: 0px; color: #dd1144; font-family: Consolas, 'Liberation Mono', Courier, monospace; font-size: 12px; line-height: 16px; margin: 0px; padding: 0px; white-space: pre;">"$HOME"</span></b> to handle the location for '.nvm' directory cloned from the nvm git repo, where the entire nodejs environment lives.<br />
<br />I had the install command as an execute resource with its user value referring to the dev-user of my choice.<br />
<br />
While running 'rvmsudo chef-solo ....', it picks the desired user, but because of the $HOME inference in 'nvm/install.sh', the HOME value still got picked as '/root'. It messed up the situation.<br />
<br />
To fix it, and any similar issues around permissions which might occur... using<br />
<blockquote class="tr_bq">
$ <b>rvmsudo USER=$</b>USER<b> HOME=$</b>HOME chef-solo -j....</blockquote>
it's working with desired output.<br />
<br />
Though, I need to correct my nodejs set-up by pushing the required RPM to my yum repository.<br />
<br />
<b>But, this 'rvmsudo' trick can be used for any similar scenario where you need sudo privileges for rvm but desire to provide the current user's environment values.</b></div>
</div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-43540153150818175622013-01-13T14:21:00.001-08:002013-01-13T14:21:41.871-08:00Apache httpd VirtualHosts : one gets default, unknown faults<div dir="ltr" style="text-align: left;" trbidi="on">
<div>
Recently faced a situation where even after removing a VirtualHost, its ServerName was giving an HTTP 200 response. It was all because of a missed RTFM agenda.<br /><br />When VirtualHosts get applied in an Apache HTTPD server configuration, the first definition encountered gets selected as the default route if the requested ServerName doesn't match any provided.<br /><br />To get an explicit _default_ provider, one of the vhost definitions needs to be told so... as in the last configuration file piece.</div>
<div>
<br /></div>
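Since the first matching definition becomes the default, one explicit way to control this is to make the first vhost a dedicated catch-all (a sketch with hypothetical hostnames and paths, Apache 2.2-era syntax; the `_default_` address covers the related IP-based case):

```apache
# httpd.conf sketch -- hypothetical names
NameVirtualHost *:80

# Listed first: any Host header matching no ServerName below lands here
<VirtualHost *:80>
    ServerName default.example.com
    DocumentRoot /var/www/default
</VirtualHost>

<VirtualHost *:80>
    ServerName app.example.com
    DocumentRoot /var/www/app
</VirtualHost>
```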
~<br />
<div>
<script src="https://gist.github.com/4526414.js"></script></div>
~</div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-87573958677296668032012-12-03T07:11:00.000-08:002012-12-03T07:11:22.580-08:00foodcritic rake task ~ one that works for meFoodCritic (a lint tool for your OpsCode Chef cookbooks)<br />
$ gem install foodcritic --no-ri --no-rdoc<br />
<br />
Rake task to get FoodCritic rolling at your cookbooks<br />
(here cookbooks reside in dir 'cookbooks' at the root of the directory with the Rakefile)<br />
the one at the wiki doesn't work for me, but this does<br />
<br />
<div>
<script src="https://gist.github.com/4078239.js"> </script>
</div>
_abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-4907207965615569102012-10-22T10:19:00.001-07:002012-10-22T10:19:22.880-07:00varnish ~ sometimes fails but don't tell<div dir="ltr" style="text-align: left;" trbidi="on">
<b><a href="https://www.varnish-cache.org/" target="_blank">Varnish-Cache</a></b><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://www.varnish-cache.org/" target="_blank"><img border="0" height="85" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwYxSTGWTbXpOLFM3RwaRo0sGSRoPdn0gBAuzDmbjsJg2WzDz9T4NVkoBgKJ4T7x89YpFjIs1JbTlZYjL47ukRVdvria37hvBzD4URVV54xGfFng1aHJISHtHIbmReAXfocjoTuyvE1J8/s320/varnish_backend.jpg" width="320" /></a><span id="goog_708874568"></span><span id="goog_708874569"></span><a href="http://www.blogger.com/"></a></div>
<br />
We were playing around with the automation around the famous Varnish-Cache service and stumbled upon something... the silent-killer style of Varnish-Cache.<br />
<br />
The automated naming appended specific service-node host-names and service-names to form backend names, and then load-balanced onto them using a "director ... round-robin" configuration. It created names that way to avoid name collisions for the same service running on different nodes load-balanced by Varnish-Cache.<br />
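For context, the generated configuration looked roughly like this, in Varnish 2/3-era VCL (names and addresses here are hypothetical and short; the auto-generated names that silently broke were far longer):

```vcl
# sketch: backends plus a round-robin director (hypothetical names/addresses)
backend websvc_node01 {
    .host = "10.0.0.11";
    .port = "8080";
}
backend websvc_node02 {
    .host = "10.0.0.12";
    .port = "8080";
}
director websvc_pool round-robin {
    { .backend = websvc_node01; }
    { .backend = websvc_node02; }
}
```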
<br />
We checked for the configuration correctness<br />
<blockquote class="tr_bq">
$ <b>varnishd -C -f /my/varnish/config/file</b></blockquote>
<br />
It <b>passed</b>.<br />
<br />
We started the Varnish service<br />
<blockquote class="tr_bq">
$ <b>/etc/init.d/varnish start</b></blockquote>
<br />
It <b>started</b>.<br />
<br />
We tried accessing the services via Varnish.<br />
<br />
It failed, saying there was <b><i>no HTTP service running at the Varnish machine:port</i></b>.<br />
<br />
Now what? Couldn't figure out anything wrong in the configuration VCL. So, started looking at logs.<br />
Tried starting the varnishlog service<br />
<br />
<blockquote class="tr_bq">
$ <b>service varnishlog start</b></blockquote>
<br />
<b><i>This failed, giving an error about _.vsm being not present, which was actually present and with the right user permissions.</i></b><br />
<br />
Then a colleague of mine suggested he had faced such an issue before due to extremely long backend names.<br />
<br />
I shortened the backend name drastically and it started working.<br />
<br />
So, the length there does have an effect, but the VCL check gives no error when starting Varnish-Cache.<br />
<br />
BTW, from the checks performed... the maximum <b>backend name length that worked for the configuration was 44 characters</b>.</div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com2tag:blogger.com,1999:blog-6296451141019488968.post-87535947561854617082012-08-11T02:09:00.001-07:002012-08-11T02:09:07.878-07:00Project::Inception.include? Infrastructure['base_architecture']<div dir="ltr" style="text-align: left;" trbidi="on">
requirements change.....<br />
<br />
Requirements are supposed to change, else there will be no market.<br />
<br />
But when you are involved with the client to formulate the Project which has to be 'developed and released', you try to cover all the important specs related to 'development and release'.<br />
<br />
In this era, most people can be seen opting for IaaS or PaaS solutions for their production release (and for development, depending on budget).<br />
There are lots of organizations who have had their own SysAdmin team for years and have more faith in them for a cheaper, better, secure, and/or controlled environment.<br />
Then there are also projects that need to be on internal organizational networks due to some internal boxed service or company policies.<br />
<br />
Either way, initially there can be two situations. They will have an idea about where they want it implemented, or not.<br />
<br />
If they leave the decision to you, it's better. You come up with the best-suited solution respective to the projected requirements. And get a yes from them.<br />
<br />
If they decide to drive the decision, even better. Now analyze their solution, and project the respective pros, cons and the patchy aid that will be required. Get them to agree to an approach that wouldn't make the project-life a hell.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwpF6p9bUOYQFH5RTqGvCe2gt8JXIFa82T5aPe6DxKpEdF8GSV3RiwMU0eVHAicmJO559kiyZaehl5DS9yAWqddEBxyhyW5aiik5JXsVUNFCKRBhadDQOz4x5JuDcBTYYlUdN8o-GaIhE/s1600/inception.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="101" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwpF6p9bUOYQFH5RTqGvCe2gt8JXIFa82T5aPe6DxKpEdF8GSV3RiwMU0eVHAicmJO559kiyZaehl5DS9yAWqddEBxyhyW5aiik5JXsVUNFCKRBhadDQOz4x5JuDcBTYYlUdN8o-GaIhE/s400/inception.jpg" width="400" /></a></div>
<br />
These decisions need to be made at the start, once you begin the project..... they will only slow you down and 'slum your time' if you are making them post-project-inception.</div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-51790033259126095812012-06-11T09:55:00.003-07:002012-06-11T10:03:42.211-07:00Creating RPM of MCollective for Ruby1.9<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: right;">
<span style="font-size: x-small;">Note:</span> <span style="font-family: Georgia, 'Times New Roman', serif; font-size: x-small;"><b>With little tweaks comes major pre-requisite checks.</b></span></div>
<br />
<i>Clone the latest branch from GitHub repo for MCollective</i><br />
<b>$</b> <span style="font-family: 'Courier New', Courier, monospace;">git clone <span style="color: #cc0000;">git://github.com/puppetlabs/marionette-collective.git</span></span><br />
<br />
<i>Removing Ruby 1.8 version specification from RedHat Spec</i><br />
<b>$</b> <span style="font-family: 'Courier New', Courier, monospace;">sed -i '<span style="color: #660000;">s/.*ruby.abi.*//g</span>' <span style="color: #cc0000;">ext/redhat/mcollective.spec</span></span><br />
<div style="text-align: right;">
from the changes <i>11/Jun/2011</i> : <a href="https://github.com/puppetlabs/marionette-collective/commit/ba86f7762d53c218b5959b70f845847e89784ad3">marionette-collective/commit/<b>ba86f7762d</b></a></div>
<div style="text-align: right;">
the lines removed by above command are</div>
<div style="text-align: right;">
<span style="font-family: 'Courier New', Courier, monospace;">BuildRequires: ruby(abi) = 1.8</span></div>
<div style="text-align: right;">
<span style="font-family: 'Courier New', Courier, monospace;">Requires: ruby(abi) = 1.8</span></div>
<br />
<i>Removing Rubygem related specification from RedHat Spec</i><br />
<b>$</b> <span style="font-family: 'Courier New', Courier, monospace;">sed -i '<span style="color: #660000;">s/.*rubygem.*//g</span>' <span style="color: #cc0000;">ext/redhat/mcollective.spec</span></span><br />
<div style="text-align: right;">
from the changes <i>11/Jun/2011</i> : <a href="https://github.com/puppetlabs/marionette-collective/commit/ba86f7762d53c218b5959b70f845847e89784ad3">marionette-collective/commit/<b>ba86f7762d</b></a></div>
<div style="text-align: right;">
the lines removed by above command are</div>
<div style="text-align: right;">
<span style="font-family: 'Courier New', Courier, monospace;">Requires: rubygems</span></div>
<div style="text-align: right;">
<span style="font-family: 'Courier New', Courier, monospace;">Requires: rubygem(stomp)</span></div>
<div>
<br /></div>
<div>
<i>create the RPMs; generating this would require you to be on a RedHat-based system with packages like ruby, rubygem rake, redhat-lsb (& rpm-build, I think) installed</i></div>
<div>
$ <span style="font-family: 'Courier New', Courier, monospace;">rake rpm</span></div>
<div>
<br />
<br />
<br /></div>
<div>
The checks you need:<br />
On the machine where you install the created RPMs..... remember to get<b> <a href="http://yum-my.appspot.com/flat_web/index.htm">ruby</a> 1.9.3 and rubygem <a href="http://rubygems.org/gems/stomp">stomp</a></b> already installed.<br />
<br />
You can get the older, already-created MCollective 1.3.2 RPMs, friendly with Ruby 1.9, at <a href="http://yum-my.appspot.com/flat_web/index.htm">http://yum-my.appspot.com/flat_web/index.htm</a> .<br />
<br />
The latest branch will give you <b>version 2.0.1 </b>.</div>
</div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-62638791265637299842012-04-25T06:13:00.001-07:002012-04-25T06:13:37.189-07:00quick PuppetMaster Service Script for gem installed puppet set-up<div dir="ltr" style="text-align: left;" trbidi="on">
The easy rubygem-way installation method for Puppet, `gem install puppet`, doesn't get you the *nix platforms' service script... so here is one allowing you to perform Start/Stop/Restart/Status service tasks for puppetmaster.<br/>
<ul>
<li>Save the file below as '/etc/init.d/puppetmaster'<br/>
$ <i>sudo curl -L -o /etc/init.d/puppetmaster https://gist.github.com/raw/2479100/15f79c68be3f6f6bf516adf385aac1f29f802a45/gistfile1.rb</i></li>
<li>and turn on its eXecutable bit<br/>
$ <i>sudo chmod +x /etc/init.d/puppetmaster</i></li>
<li>Now you can use it as<br/>
$ <i>service puppetmaster status</i></li>
</ul>
PuppetMaster Service Script<br />
<div>
<script src="https://gist.github.com/2479100.js?file=gistfile1.rb">
</script>
</div>
</div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-37526506300752379482012-03-14T00:31:00.001-07:002012-03-14T00:39:45.062-07:00[Puppet] Exported Resources is a beautiful thing..... thing to use and improvise<div dir="ltr" style="text-align: left;" trbidi="on">
Lately, I've been really blunt towards Puppet because of the soup that leaked some specific scenario flaws in very busy times.<br />
So, it's my duty to applaud if I like something which is a novel and beautiful concept.<br />
<br />
<b><i><span style="font-family: Georgia, 'Times New Roman', serif;">For a well organized auto-magically managed set-up apart from a fine infrastructure and its configuration management mechanism, a very important part is for the monitoring and logging solution to spread across the infrastructure in a similar seamless and scalable fashion.</span></i></b><br />
<b><i><span style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></i></b><br />
<b><i><span style="font-family: Georgia, 'Times New Roman', serif;">Puppet enables it very finely with the use of exporting and collecting resources.</span></i></b><br />
<br />
Exported resources are super virtual resources.<br />
Once exported with the storeconfigs setting being true, the puppetmaster consumes these virtual resources and keeps them available for all hosts.<br />
<br />
Read in detail: <a href="http://docs.puppetlabs.com/guides/exported_resources.html">http://docs.puppetlabs.com/guides/exported_resources.html</a><br />
<br />
An example from the link above, using a simple File resource<br />
<blockquote class="tr_bq">
<span style="font-family: 'Courier New', Courier, monospace;">node a {<br /> @@file { "/tmp/foo":<br /> content => "fjskfjs\n",<br /> tag => "foofile",<br /> }<br />}<br />node b {<br /> File <<| tag == 'foofile' |>><br />} </span></blockquote>
<div>
<br />
It'll have its flaws (as all software does, mine and others)..... just hope they don't interfere with my use-cases.</div>
</div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-34735546847595996662012-03-05T08:09:00.002-08:002012-03-05T08:11:41.828-08:00MCollective can't handle Puppet ~ just like psychotic love stories<div dir="ltr" style="text-align: left;" trbidi="on">
Some puppet-izers didn't like what I said in my last post... still a mess is a mess no matter how worth it might be.<br />
For the past 2 months, <i>I've been in pain due to the </i><b><i>psychotic love story of MCollective and Puppet</i></b>.<br />
<br />
YES, they are very helpful products to automate configuration management and orchestrate metadata-based multicast-ed actions.<br />
<br />
YES, they are now under the same organization, PuppetLabs, which is whole-heartedly working to improve them so they can retain their status in the increasingly glamorized DevOps domain. So, both of them will improve a lot.<br />
<br />
But, first of all.<br />
<i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">If you don't properly test your corporate-aiming projects over Ruby 1.9.x, please post a big notice on your project's page, or at least on the first page of your amazing docs.</span></i><br />
My story for the past few weeks:<br />
I start using a project..... oh it's failing, debug..... oh it's still failing..... debug.<br />
Ahhhhh, ok ask at IRC.<br />
WHAT? I'm using Ruby 1.9.x, so it's possible I might be facing *some* issues.<br />
Is this a joke? I'm trying to stabilize an important project over here.<br />
At least be truthful, and don't behave like TV Commercials with small+quick Disclaimers.<br />
<br />
And Again.<br />
<i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">I'm managing MCollective over all my instances via Puppet. Obviously, because that's what Puppet is for..... managing the state of instances.</span></i><br />
<i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">I have a CI which triggers MCollective to orchestrate different actions across my infrastructure. That's why I took the pain of getting the changing-but-not-releasing MCollective onto my infrastructure.</span></i><br />
<i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><br /></span></i><br />
<i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">Now, this also involves MCollective firing up Puppet in non-daemonized mode.</span></i><br />
<i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">This used to raise an exception in MCollective due to an unhandled 'nil' return value, failing my entire build.</span></i><br />
<i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">I go on IRC, again trying to get a solution..... I learn that this too has been fixed in the git repo since I last rpm-ed it. But I also get told that this wouldn't work anyway, as Puppet's no-daemon mode has problems..... and that if I want to follow this approach, it's my decision.</span></i><br />
<i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">First of all, what the hell am I supposed to do when there is just one straightforward approach left..... short of using something else entirely to manage MCollective?</span></i><br />
<br />
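To illustrate the class of bug described above (this is my own toy sketch in plain Ruby, not MCollective's actual code), a status call can return nil, and chaining a method straight onto the result raises the exception; a small guard avoids it:<br />

```ruby
# Hypothetical stand-in for a status call that reports nothing back
# when the agent runs in non-daemonized mode.
def agent_status
  nil
end

# raw = agent_status.strip      # NoMethodError: undefined method on nil
safe = agent_status.to_s.strip  # guarded: nil degrades to an empty string
```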
Though, I took the latest pull..... rpm-ed it and updated all my nodes. And the Puppet non-daemon problem has not resurfaced yet.<br />
But what kind of response is that?<br />
<b><i>If it wasn't for the team decision..... I'd have changed it overnight x(</i></b></div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-85923753416802985992012-02-22T13:21:00.000-08:002012-02-22T13:27:15.665-08:00Puppet ain't sweet anymore & Marionette-Collective hopeful<div dir="ltr" style="text-align: left;" trbidi="on">
It's a fine piece of utility, working mostly fine; but with my luck, I mostly end up getting blocked in my tasks by flaws in some library/utility/framework :(<br />
<div style="text-align: center;">
<span style="font-family: 'Courier New', Courier, monospace;"><i>Puppet ain't sweet.</i></span></div>
<br />
<b>Puppet</b> is one of the most famous automated configuration management tools, built upon Ruby and enforcing its own DSL as the only way to use the power underneath.<br />
<div style="text-align: right;">
<i><b>(</b>new to it, wanna know more <a href="http://puppetlabs.com/puppet/how-puppet-works/">http://puppetlabs.com/puppet/how-puppet-works/</a><b>)</b></i></div>
<b>MCollective</b> is a novel concept with a good technical design <i>(which could still mature further, though)</i> for parallel server orchestration, with filter-enabled broadcast distribution of requests.<br />
<div style="text-align: right;">
<i><b>(</b>new to it, get a glimpse at <a href="http://puppetlabs.com/mcollective/introduction/">http://puppetlabs.com/mcollective/introduction/</a><b>)</b></i></div>
<br />
<b>Why? What happened? Puppet is Good.</b><br />
Puppet has been out in the wild for a long time now (at least 5 years), far longer than its closest competitor, 'Chef'.<br />
<i>Saying out loud the feeling I get using them lately ~ </i><i style="font-weight: bold;">it is acting like an old pop-artist putting out too much at once to maintain their superiority over upcoming rockstars.</i><br />
I've used their older releases before; they weren't as glossy (at the design level) but were a lot more stable and composed in implementation..... which matters more in a utility meant to maintain systems.<br />
<br />
<span style="font-family: Georgia, 'Times New Roman', serif;"><b>When you are always in pace of bringing changes and improving upon your current set-up, you don't want to keep on getting blocked by quickly adapted & recently obsoleted features.</b></span><br />
<br />
Why do you want me to place 'pluginsync = true' on all the boxes to get this new feature working? Why can't enabling it on the PuppetMaster take care of it? In what kind of consistent system would you want a custom resource type to not be recognizable by all clients?<b><i> If you don't want it used on some node, you just wouldn't use it there. </i>Plain simple *hit.</b><br />
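For reference, the setting being complained about lives in the agent-side puppet.conf, and has to be present on every box (path and section placement may vary by Puppet version; this is a sketch):<br />

```ini
# /etc/puppet/puppet.conf -- needed on EVERY agent node,
# not just on the puppetmaster, which is exactly the gripe here
[agent]
    pluginsync = true
```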
<br />
You set up a new puppet-master at a client site and find your old, working, correct manifests from a tested set-up suddenly start failing. Why? Because the freaking parser is broken and not able to interpret the symbol notation anymore. Move to the value convention and it works.<br />
OK, you smashed your head and got that working. Now say you have learnt your lesson about not trusting newer versions, and Gemfile-d an exact version for Bundler to handle. Suddenly it starts failing because you find out the <b>newer version exists no more. It was yanked off.</b><br />
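The pinning in question looks roughly like this (the version number below is made up for illustration); with an exact pin, a yanked version makes `bundle install` fail outright instead of falling back to another release:<br />

```ruby
# Gemfile -- hypothetical exact pin; if this precise version gets
# yanked from the gem server, `bundle install` simply errors out
source "https://rubygems.org"

gem "puppet", "2.7.6"
```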
<br />
Some of their newly suggested approaches for custom Facter facts<b> require Facter v1.7.0; the gem available out there is still v1.6.5</b>. What? And legacy fact distribution is not supported in the new releases, being obsolete.<br />
<br />
There are also several other design-level implementation choices which I don't agree with, and sometimes don't agree with at all. But even setting those aside,<b><i> there is too much chaos in Puppet right now.</i></b><br />
<br />
What do they expect, that we keep pulling the latest build from the repository..... building its rpm/gem and then using it? In what freaking world do you expect this of a platform which has been built to sustain a deterministic machine state?<br />
<br />
MCollective is mostly good, a light of hope that Puppet adopted to brighten its gloomy days. But finding out it's not properly tested over Ruby 1.9.x gets me worried.</div>
<div>
<div dir="ltr" style="text-align: left;" trbidi="on">
While testing we were starting resque workers as<br />
<pre style="border: 1px groove; padding: 1px; text-align: right;"><span style="font-family: 'Courier New',Courier,monospace;"> rake resque:work QUEUE=* </span></pre>
</div>
<div dir="ltr" style="text-align: left;" trbidi="on">
This used to start workers listening on all of Resque's queues, never exiting.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjp1iOVmVfrGgHCTAEZAupqIDpJzRB_-VLV17Mfm20ylxWecez0i0VelRDfR5bI7OFtpkxUhcTtjJuCwjYptcgDLyIVuI4FFPf3_udTgwSyE8nRy76puHADZBbTHj3lfC9MoU_Jsn_GAuo/s1600/redisresque.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="106" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjp1iOVmVfrGgHCTAEZAupqIDpJzRB_-VLV17Mfm20ylxWecez0i0VelRDfR5bI7OFtpkxUhcTtjJuCwjYptcgDLyIVuI4FFPf3_udTgwSyE8nRy76puHADZBbTHj3lfC9MoU_Jsn_GAuo/s320/redisresque.gif" width="320" /></a></div>
So, hoping Resque understood wildcards, we made<br />
<pre style="border: 1px groove; padding: 1px; text-align: right;"> <span style="font-family: 'Courier New',Courier,monospace;">rake resque:work QUEUE=q01.* </span></pre>
</div>
a part of our automated configuration tasks, intending to start workers for all Resque queues whose names start with q01, like q01.rss and q01.atom.<br />
<br />
These workers were getting started fine and displayed well on Resque's WebUI.<br />
But on scheduling a task to these queues, the worker started with q01.* wasn't able to pick up the queued task for either q01.rss or q01.atom.<br />
<br />
Investigating the <b>Resque</b> codebase, the understanding we developed was that<b> a plain '*' alone is special-cased to return the entire array of all available queues, which is then acted upon. Anything else is taken literally; wildcards are not handled.</b><br />
<br />
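Our reading of that behaviour can be sketched like this (a deliberate simplification of my own, not Resque's actual source):<br />

```ruby
# Simplified sketch of the queue resolution we inferred: a bare "*"
# is special-cased to mean "every known queue, sorted", while any
# other value is split on commas and taken literally -- so the glob
# "q01.*" never matches an actual queue name.
def resolve_queues(queue_env, all_known_queues)
  if queue_env == "*"
    all_known_queues.sort
  else
    queue_env.split(",")
  end
end

known = %w[q01.rss q01.atom mailer]
resolve_queues("*", known)      # every known queue, as expected
resolve_queues("q01.*", known)  # just the literal string "q01.*"
```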
So, we started it as<br />
<pre style="border: 1px groove; text-align: right;"> <span style="font-family: 'Courier New',Courier,monospace;">rake resque:work QUEUE=q01.rss
rake resque:work QUEUE=q01.atom</span>
</pre>
and it worked fine.<br />
<br />
Don't make the same mistake.<br />
<br />
<b><i>if you don't already know what is redis and resque</i></b>:<br />
<br />
<b>Resque</b><i><span style="font-size: x-small;"> :</span><span style="font-family: Georgia,'Times New Roman',serif;"> a famous open-source Ruby library for creating background workers that listen on multiple queues and process tasks as they become available</span></i><br />
<b>Redis</b> :<i><span style="font-family: Georgia,'Times New Roman',serif;"> a popular open-source in-memory key-value data store </span></i><br />
<i>(why were you reading then actually)</i>
</div>
</div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-44368015619284091202011-12-22T02:24:00.001-08:002011-12-27T13:45:34.903-08:00is 'Splunk' eating up your disk space<div dir="ltr" style="text-align: left;" trbidi="on">
if you have been using the default set of Splunk configurations, you too could soon find your entire disk space filled up by Splunk's database.....<br />
<br />
<div class="action-body flooded">
you can keep a check on that by lowering the upper limit on database index size from several <b>hundreds of thousands of MBs</b> (<i>the default maxTotalDataSizeMB per index is 500,000 MB, i.e. roughly 500 GB</i>) <b>to a desired/affordable size in MBs</b>.<br />
<br />
<br />
<a href="http://docs.splunk.com/skins/splunk/images/icon-Splunk.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://docs.splunk.com/skins/splunk/images/icon-Splunk.png" /></a><b> </b><br />
<b>File:<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;"> </span><span style="font-family: "Courier New",Courier,monospace;">/var/ebs/splunk/etc/system/local/indexes.conf</span></span></b><br />
<blockquote class="tr_bq" style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;">
<b>maxTotalDataSizeMB = 3000</b></blockquote>
<br />
<br />
managing index sizes in Splunk is covered in more detail at <br />
<i><a href="http://blogs.splunk.com/2011/01/03/managing-index-sizes-in-splunk/">http://blogs.splunk.com/2011/01/03/managing-index-sizes-in-splunk/</a></i></div>
</div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-39106095535852527862011-11-21T21:50:00.001-08:002011-11-22T08:21:03.649-08:00[OpsCode Chef] when chef's changes can be re-edited but un-available to Search<div dir="ltr" style="text-align: left;" trbidi="on">
[DevOps:///OpsCode.Chef] <br />
<br />
<span style="font-size: x-small;"><i>ate a weird meal last evening, chef was angry I think.....</i></span><br />
<br />
When I created an AWS instance in the same way as always (via the swiss-army <b style="font-family: 'Trebuchet MS',sans-serif;">'knife ec2 server create...'</b> toolset), using the same old boot-up script to get that instance auto-configured as a chef-client, the instance<i><b> got created and was visible in the instance list but was not </b></i>available to my recipes trying <i><b>to search</b></i> for it by its applied role and other tags.<br />
<br />
The same <i><b>procedure had worked successfully every previous time</b></i>, and with no change it suddenly started failing.<br />
<br />
I <i><b>logged in</b></i> to the freshly created instance and ran '<b style="font-family: 'Trebuchet MS',sans-serif;">chef-client --once</b>' again; it had a <i><b>successful run</b></i>, but <i><b>still</b></i> the <i><b>recipe failed to grab the node</b></i> by its role and tags.<br />
So it was a <i><b>perfectly performing instance according to its run_list</b></i>, but <i><b>not available to</b></i> other features trying to grab its information via the <i><b>Chef Search</b></i> mechanism.<br />
<br />
<div style="font-family: Georgia,"Times New Roman",serif;">
<span style="font-size: large;"><b>What could be the problem?</b></span></div>
The answer is hidden in knowing how Chef Search functions.<br />
<br />
Chef Search <i><b>uses the Solr indexing service</b></i>, which is an enterprise-grade open-source search server based upon Apache Lucene.<br />
<br />
<br />
There is a system service, '<b style="font-family: 'Trebuchet MS',sans-serif;">chef-solr</b>', which acts as a wrapper running Solr for Chef.<br />
Though this <i><b>service was fine</b></i>, as I was able to search all earlier-created nodes..... just the new additions were escaping it.<br />
<br />
Now, if chef-solr was working fine, we needed to look at where it gets its data from. Obviously, it was <i><b>working fine for already indexed data</b></i>, but nothing newly added was getting indexed.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhO4fhZXWfw8AEIYP8oX4hwpnjplPHXrpUwE2ai3XDCriNymqn2iTlqauLpSsjqeIMDj7ZhqodAwpHmXsWB9QX53KPN1-3NEmre3aICAzoWTNQtmJBxEJ9phXbtWTtrht_ZrCKXQW1onyE/s1600/chef.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="131" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhO4fhZXWfw8AEIYP8oX4hwpnjplPHXrpUwE2ai3XDCriNymqn2iTlqauLpSsjqeIMDj7ZhqodAwpHmXsWB9QX53KPN1-3NEmre3aICAzoWTNQtmJBxEJ9phXbtWTtrht_ZrCKXQW1onyE/s320/chef.jpg" width="320" /></a></div>
<i><b>chef-solr gets its data passed on from a RabbitMQ message queue</b></i>, placed in front of it to ease the load of many concurrent requests on Solr.<br />
<i><b>Chef-Expander</b></i> is a system-level service that fetches data from RabbitMQ's queue and formats it before <i><b>passing it on to chef-solr</b></i>.<br />
<br />
This combination of chef-expander and chef-solr makes up the Chef Indexer.<br />
<br />
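The pipeline above can be caricatured in a few lines of Ruby (a toy sketch of my own, not Chef's code; the names are made up): additions land on a queue, the "expander" formats them, and the index only ever sees what the expander forwards. So if the expander stops, already-indexed data stays searchable while new additions never appear.<br />

```ruby
require "json"

queue = []                      # stands in for the RabbitMQ queue
index = { "old-node" => true }  # data chef-solr indexed earlier

expand = ->(raw) { JSON.parse(raw)["name"] }  # chef-expander's job

queue << '{"name":"new-node"}'
# While the expander service is down, nothing drains the queue,
# so the new node is not searchable yet:
index.key?("new-node")  # still false at this point
# Once the expander is back, it drains the queue into the index:
index[expand.call(queue.shift)] = true
index.key?("new-node")  # now true
```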
So, I checked the <i><b>chef-expander service.</b></i><br />
<i><b>It was in a failed state; I restarted the service</b></i> and bam!<br />
<i><b>We are back up and the new additions are visible in Chef Search.</b></i></div>
have been walking along with technology and blogging about it.....<br />
<br />
a few days back, I got a mail inviting me to become a part of the <a href="http://www.dzone.com/aboutmvb"><b>MVB</b> (Most Valued Blogger)</a> program at <a href="http://www.dzone.com/">DZone</a>..... we finished the formalities and now, officially, I'm an MVB.<br />
<br />
feels good after having a look at some of the other MVB members: <a href="http://www.dzone.com/page/mvbs">http://www.dzone.com/page/mvbs</a></div>
<span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">Apache <a href="http://lucene.apache.org/solr/" style="font-size: x-large; font-weight: bold;">Solr</a></span> is a high-performance enterprise grade search server being interacted with a REST-like API, documents are provided to as xml, json or binary. It extends <b><a href="http://lucene.apache.org/"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">Lucene</span></a></b> search library.<br />
<br />
Ruby/Rails communicates with this awesome search server using the <b><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><a href="https://github.com/outoftime/sunspot">Sunspot</a></span></b> library (sunspot and sunspot_rails gems) to do full-text search. <i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><a href="http://railscasts.com/episodes/278-search-with-sunspot">Nice tutorial on using Solr in your Rails project via Sunspot.</a></span></i><br />
<br />
Solr can run as a standalone search server, or as master/slave instances collaborating with each other, with slaves polling the master to sync their data.<br />
<br />
Solr instances can be configured as a Slave or as a Master.<br />
The configuration file 'solrconfig.xml' needs to be edited, with a new RequestHandler configured for replication:<br />
<blockquote class="tr_bq">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhb07FQOTKOp8KM3k8KMlCLA-YInX3SeeJtFVT0O-5OV2N9AgqQpSgCg3hd7t5AGwyIymFn_cczXjWCWUJIzvDCUzb03HEE2UHmH5z3UMnGXbYt1PJYdMqHc1D5LVCmeaqOiPSI7GDCHOA/s1600/SOLR.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="101" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhb07FQOTKOp8KM3k8KMlCLA-YInX3SeeJtFVT0O-5OV2N9AgqQpSgCg3hd7t5AGwyIymFn_cczXjWCWUJIzvDCUzb03HEE2UHmH5z3UMnGXbYt1PJYdMqHc1D5LVCmeaqOiPSI7GDCHOA/s200/SOLR.png" width="200" /><span class="Apple-style-span" style="color: black;"> </span></a></blockquote>
<blockquote class="tr_bq">
<b><i>{for Slave} :: </i></b>to poll the Master machine's address at regular intervals</blockquote>
<blockquote class="tr_bq">
<b><i>{for Master} :: </i></b>to commit changes and serve the mentioned configuration files when a Slave asks for them </blockquote>
<blockquote class="tr_bq">
<i>for <b>detailed</b> reference:</i><b><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><a href="http://wiki.apache.org/solr/SolrReplication">http://wiki.apache.org/solr/SolrReplication</a></span></b> </blockquote>
<blockquote class="tr_bq">
<i>for optimized ways using<b> ssh/rsync based replication</b>:</i><br />
<a href="http://wiki.apache.org/solr/CollectionDistribution"><b><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">http://wiki.apache.org/solr/CollectionDistribution</span></b></a></blockquote>
<br />
Now, here you can even use a <b><i>single configuration file</i></b>, with the '<b><i>replication</i></b>' node carrying fields for both Master and Slave, each having an '<b><i>enable</i></b>' child node whose '<b><i>true|false</i></b>' value is set as required on the Master & Slave nodes.<br />
<br />
For Sunspot dealing with a single Solr instance, $ProjectRoot/config/sunspot.yml looks like:<br />
<br />
<blockquote class="tr_bq">
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><b>production</b>:<br /> <b>solr</b>:<br /> hostname: standaloneSolr.mydomain.com<br /> port: 8983<br /> log_level: WARNING</span></blockquote>
<br />
For Sunspot dealing with a master/slave Solr set-up, $ProjectRoot/config/sunspot.yml looks like:<br />
<br />
<blockquote class="tr_bq">
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><b>production</b>:<br /> <b>solr</b>:<br /> hostname: slaveSolr.mydomain.com<br /> port: 8983</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"> master_hostname: masterSolr.mydomain.com</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"> master_port: 8983</span><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><br /> log_level: WARNING<br /> <b>master_solr</b>:<br /> hostname: masterSolr.mydomain.com<br /> port: 8983<br /> log_level: WARNING</span></blockquote>
If you have more than one slave, they need to sit behind a load balancer, and the DNS entry for that load balancer goes into the slave's hostname field here.<br />
<br />
Also, the fields <b><i>'master_hostname'</i></b> and<i> <b>'master_port'</b></i> below <b><i>'solr:'</i></b> are not mandatory and are supposed to be read from the<b><i> 'master_solr:'</i></b> block. But it has been observed in some cases that mentioning them explicitly avoids the configuration not being picked up.<br />
<br />
By default, <b>Sunspot configures Ruby/Rails application to Write-Only to Master and Read-Only from Slave.</b><br />
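What that read/write split means in practice can be sketched in a few lines (my own illustration, not Sunspot's implementation), with plain arrays standing in for the two Solr connections:<br />

```ruby
# Minimal sketch of the master/slave split the configuration above sets
# up: index (write) calls always hit the master session, search (read)
# calls always hit the slave session; replication keeps them in sync.
class MasterSlaveSession
  def initialize(master_session, slave_session)
    @master = master_session
    @slave  = slave_session
  end

  def index(doc)    # writes go only to the master
    @master << doc
  end

  def search(term)  # reads are served only from the slave
    @slave.select { |doc| doc.include?(term) }
  end
end

master_docs = []
slave_docs  = ["solr rocks"]  # pretend replication already synced this
session = MasterSlaveSession.new(master_docs, slave_docs)
session.index("new post")     # lands in master_docs only
session.search("solr")        # served from the slave's copy
```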
<br /></div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-65708047312124975452011-09-26T14:19:00.000-07:002011-09-26T14:19:31.848-07:00explicit update SASS in Rails Application<div dir="ltr" style="text-align: left;" trbidi="on">
Sass (Syntactically Awesome StyleSheets)?<br />
Available @ <a href="http://sass-lang.com/">http://sass-lang.com/ </a><br />
It is a fine way to produce clean and easy CSS using a meta-scripting language. It has features for developer ease like variables, nesting, mixins, and selector inheritance.<br />
<br />
If you are using Sass in your Ruby-on-Rails application with the default WEBrick serving the web content, it automatically updates the "<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">public/stylesheets/*.css</span>" files from your styling meta-scripts in "<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">public/stylesheets/sass/*.sass</span>".<br />
<br />
But if you are using any other web server (like Thin)..... you'll have to get your CSS files updated explicitly using the 'sass --update' action.<br />
<br />
<span class="Apple-style-span" style="background-color: black;"><span class="Apple-style-span" style="color: lime;">$ <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><b>bundle exec sass --update </b></span></span></span><br />
<span class="Apple-style-span" style="color: lime; font-family: 'Courier New', Courier, monospace;"><b style="background-color: black;">--watch public/stylesheets/sass:public/stylesheets</b></span><br />
<b><span class="Apple-style-span" style="font-family: Times, 'Times New Roman', serif;"><br /></span></b><br />
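For example, a hypothetical Rake task (the task name and file layout below are mine, not a Rails convention) wrapping that same command:<br />

```ruby
require "rake"
include Rake::DSL

# lib/tasks/sass.rake -- recompile Sass sources on demand, e.g. as a
# deploy step, instead of relying on the web server to do it.
namespace :sass do
  desc "Recompile Sass sources into public/stylesheets"
  task :update do
    sh "bundle exec sass --update public/stylesheets/sass:public/stylesheets"
  end
end
```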
<span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><b>So, include it in your deploy Rake Tasks and get updated stylesheets everytime without fail.</b></span></div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-24762033915540522852011-09-05T13:48:00.000-07:002011-09-26T14:20:13.537-07:00mysql-server retained old credentials, got work-around but no solution<div dir="ltr" style="text-align: left;" trbidi="on">
<span class="Apple-style-span" style="background-color: white; font-family: Arial, 'Liberation Sans', 'DejaVu Sans', sans-serif; font-size: 14px; line-height: 18px;"></span><br />
While trying to re-configure an existing <b>mysql-server</b> on <b>CentOS</b>, I tried '<i><b>yum remove</b></i>'-ing the mysql-server and then '<b><i>yum install</i></b>'-ing it again.<br />
<div>
<br /></div>
<div>
<b><i>When I tried setting up a new password for 'root' using 'mysqladmin', it raised an error. Some quick troubleshooting showed the earlier installation's root credentials were still in effect.</i></b><br />
<br />
Trying some more things, I<i> manually</i> set '<i>old_password=0</i>' in '<i>/etc/my.cnf</i>' and then <i>tried re-installing</i>. <b><i>The earlier password still worked.</i></b></div>
<div>
<br /></div>
<div>
For the time being, I got a work-around that fixes the problem... </div>
<blockquote>
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><b>$ yum erase mysql mysql-server<br />$ rm -rf /var/lib/mysql<br />$ yum install mysql mysql-server<br />$ service mysqld restart </b></span></blockquote>
<br />
<div>
The location of<b><i> user information</i></b>, i.e. the user table data, is '<b>/var/lib/mysql/mysql/user.MYD</b>'; but just <b><i>removing that one file wouldn't work</i></b> because it doesn't get recreated on re-installation if the rest of the directory structure is present.</div>
<div>
<br /></div>
<div>
<b><i>A service restart is required to create a vanilla copy of '/var/lib/mysql'. </i></b></div>
<div>
Still... the question remains of why this has to be done manually.</div>
<div>
The incident is under discussion at the link below. If you have a non-hackish solution to the problem, or can spot the actual mistake, please reply there.</div>
<div>
<a href="http://stackoverflow.com/questions/6606869/mysql-retains-password-of-earlier-installation-even-after-proper-yum-remove">http://stackoverflow.com/questions/6606869/mysql-retains-password-of-earlier-installation-even-after-proper-yum-remove</a></div>
</div>
abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-73732575680354900862011-08-21T06:21:00.000-07:002011-08-21T06:21:53.495-07:00cucumber's selenium test failed and firefox ran blank..... OOOPS, version in-compatibility<div dir="ltr" style="text-align: left;" trbidi="on">I faced my cuke test failing a while back, and recently saw some of my colleagues re-checking their environment settings, cucumber installations and tweaking their cuke-features in different ways..... and I thought I missed spreading a major sports-page news for all headline cuke readers which might have been wasting a lot of time of a lot of people just because the word is not well spread.<br />
<br />
<div style="text-align: center;"><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: large;"><b>Newer Versions of Firefox</b></span></div><div style="text-align: center;"><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: large;"><b>are not friendly with </b></span></div><div style="text-align: center;"><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: large;"><b>in-trend Selenium Drivers</b></span></div><br />
So, <b><i>if you have direct Selenium tests failing, or write cucumber/capybara behavioral tests</i></b> and are seeking the cause of correct behavior failing... <b><i>look at downgrading your Firefox from whatever 4.x, 5.x or later beta version you have to any of the 3.x versions</i></b>.<br />
And if you have been a victim of this same incompatibility, you will <b>see your tests <span class="Apple-style-span" style="background-color: black; color: lime;">GREEN</span> again</b>.</div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0tag:blogger.com,1999:blog-6296451141019488968.post-76335832753590419012011-08-09T08:52:00.000-07:002011-08-09T08:52:51.028-07:00Tech-Xpress-Guide so far... from learnings of year 2010<div dir="ltr" style="text-align: left;" trbidi="on">GitHub Repo ~ <a href="https://github.com/abhishekkr/a-techXpress-guide">https://github.com/abhishekkr/a-techXpress-guide</a><br />
<br />
Covering what I picked up last year, I have uploaded a few How-To Express Guides over the past few months to my SlideShare page ~ <br />
<a href="http://www.slideshare.net/AbhishekKr/documents">http://www.slideshare.net/AbhishekKr/documents</a><br />
<br />
Tech-Xpress-Guide so far ~ <br />
<br />
[] using <b>SNMP</b> for secure remote resource monitoring ~<br />
<a href="http://www.slideshare.net/AbhishekKr/an-express-guide-ltlt-snmp-for-secure-rremote-resource-monitoring">http://www.slideshare.net/AbhishekKr/an-express-guide-ltlt-snmp-for-secure-rremote-resource-monitoring</a><br />
<br />
[] using <b>Nagios</b> for quick & efficient IT Infrastructure monitoring<br />
<a href="http://www.slideshare.net/AbhishekKr/an-express-guide-ltlt-nagios-for-it-infrastructure-monitoring">http://www.slideshare.net/AbhishekKr/an-express-guide-ltlt-nagios-for-it-infrastructure-monitoring</a><br />
<br />
[] using <b>Cacti </b>for IT Infrastructure monitoring with nice analytic graphing<br />
<a href="http://www.slideshare.net/AbhishekKr/an-express-guide-cacti-for-it-infrastructure-monitoring-graphing">http://www.slideshare.net/AbhishekKr/an-express-guide-cacti-for-it-infrastructure-monitoring-graphing</a><br />
<br />
[] using <b>Zabbix</b> for IT Infrastructure monitoring with easy-to-use Web UI<br />
<a href="http://www.slideshare.net/AbhishekKr/an-express-guide-zabbix-for-it-monitoring">http://www.slideshare.net/AbhishekKr/an-express-guide-zabbix-for-it-monitoring</a><br />
<br />
[] using <b>DummyNet to mock different Network latencies</b> & bandwidth for testing or other purposes<br />
<a href="http://www.slideshare.net/AbhishekKr/an-express-guide-dummynet-for-tweaking-network-latencies-bandwidth">http://www.slideshare.net/AbhishekKr/an-express-guide-dummynet-for-tweaking-network-latencies-bandwidth</a><br />
<br />
[] creating and using <b>Solaris Native Zones</b> {similar to Linux LXC but much more efficient}<br />
<a href="http://www.slideshare.net/AbhishekKr/a-tech-xpressguidesolariszonesnativenlxbranded">http://www.slideshare.net/AbhishekKr/a-tech-xpressguidesolariszonesnativenlxbranded</a><br />
<br />
[] <b>Ethernet Bonding</b> in *Nix for Load Balancing <br />
<a href="http://www.slideshare.net/AbhishekKr/a-tech-xpressguideethernetbondingfornics">http://www.slideshare.net/AbhishekKr/a-tech-xpressguideethernetbondingfornics</a><br />
<br />
[] setting up <b>Squid Cache Proxy</b> service<br />
<a href="http://www.slideshare.net/AbhishekKr/a-tech-xpressguidesquidforloadbalancingncacheproxy">http://www.slideshare.net/AbhishekKr/a-tech-xpressguidesquidforloadbalancingncacheproxy</a><br />
<br />
[] setting up <b>Syslog Centralization</b> for *nix machines<br />
<a href="http://www.slideshare.net/AbhishekKr/a-tech-xpressguidesyslogcentralizationloggingwithwindows">http://www.slideshare.net/AbhishekKr/a-tech-xpressguidesyslogcentralizationloggingwithwindows</a></div>abionichttp://www.blogger.com/profile/06276198262605731980noreply@blogger.com0