Thursday, December 22, 2011

is 'Splunk' eating up your disk space

if you have been running Splunk with its default set of configurations, you could soon find your entire disk being filled up by Splunk's database indexes.....

you can keep a check on that by lowering its upper limit on database index size from several hundreds of thousands of MBs (the default maxTotalDataSizeMB per index is 500000 MB, i.e. ~500 GB) to a size in MBs you can actually afford.

File: /var/ebs/splunk/etc/system/local/indexes.conf
maxTotalDataSizeMB = 3000
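Set at the top of the file, the attribute above becomes the global default. To cap just one index instead, the same attribute can be scoped inside that index's stanza — a minimal sketch, assuming an index named 'main' (the stanza name here is only an example):

```ini
# /var/ebs/splunk/etc/system/local/indexes.conf

# global default applied to every index
maxTotalDataSizeMB = 3000

# or cap a single index (stanza name 'main' is just an example)
[main]
maxTotalDataSizeMB = 3000
```

Once an index crosses this cap, Splunk freezes (by default, deletes) its oldest buckets to get back under the limit.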

managing index size in Splunk is better covered at

Monday, November 21, 2011

[OpsCode Chef] when chef's nodes can be edited but are un-available to Search


ate a weird meal last evening, chef was angry I think.....

When I created an AWS instance in the same way as always (by the swiss-'knife ec2 server create...' toolset) using the same old boot-up script to get that instance auto-configured as a chef-client; the instance got created and was visible in the instance list, but was not available to my recipes trying to search for it by its applied role and other tags.

The same procedure had worked successfully every time before, and with no change it suddenly started failing.

I logged in to the freshly created instance and ran 'chef-client --once' again; it had a successful run, but the recipe still failed to grab the node by its role and tags.
So it was a perfectly performing instance according to its run_list, but not available to other features trying to grab its information via the Chef Search mechanism.

What could be the problem?
The answer is hidden in knowing how Chef Search functions.

Chef Search uses the Solr indexing service, which is an enterprise-grade open-source search server based upon Apache Lucene.

It has a system service 'chef-solr' which acts as a wrapper around Solr, running it for Chef.
Though this service was fine, as I was able to search all the earlier created nodes..... just the new additions were escaping the act.

Now, if chef-solr was working fine, we needed to look at its source of data. Obviously, it was working fine for already-indexed data but didn't happen to index anything newly added.

Chef-Solr gets its data passed on from a RabbitMQ message queue, placed in front of it to ease the load of several concurrent requests on Solr.
Chef-Expander is a system-level service that fetches data from RabbitMQ's queue and formats it before passing it on to chef-solr.

This combination of chef-expander and chef-solr acts as the final feature of the Chef Indexer.

So, I looked at the chef-expander service.
It was in a failed state; restarted the service and bam!
We were back up and the new additions were visible in Chef Search.
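For anyone hitting the same symptom, a sketch of how I'd check this pipeline on the Chef server. The '/chef' vhost and init-script path match 0.10-era Chef packaging and may differ on your install, and the awk helper is my own addition:

```shell
# flag queues whose message count exceeds a threshold; input is the
# "name<TAB>count" listing that `rabbitmqctl list_queues` prints
queue_backlog() {
  awk -v max="$1" '$2 + 0 > max + 0 { print $1, "backlog:", $2 }'
}

# typical run on the Chef server (as root): if chef-expander is down,
# unconsumed messages pile up in RabbitMQ and new nodes never get indexed
#   rabbitmqctl list_queues -p /chef | queue_backlog 100
#   /etc/init.d/chef-expander restart
```

A growing backlog with healthy chef-solr is exactly the "old nodes searchable, new nodes invisible" signature described above.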

Sunday, November 20, 2011

am an MVB..... now, officially

have been walking along-with the technology and blogging about it.....

few days back, I got a mail inviting me to become a part of the MVB (Most Valued Blogger) program at DZone..... we finished the formalities and now, officially, I'm an MVB.

feels good after having a look at some of the other members of MVB

Wednesday, November 16, 2011

Ruby/Rails utilizing Solr in Master/Slave setup

Apache Solr is a high-performance enterprise-grade search server, interacted with via a REST-like API; documents are provided to it as XML, JSON or binary. It extends the Lucene search library.

Ruby/Rails communicates with this awesome search server using the Sunspot library ('sunspot' and 'sunspot_rails' gems) to do full-text search. Nice tutorial to use Solr in your Rails project using Sunspot.

Solr can perform as a standalone search server and even as master/slave instances collaborating with each other, with slaves polling master to sync-in their data.

Solr instances can be configured as a Slave and as a Master.
The configuration file 'solrconfig.xml' needs to be edited with a new RequestHandler for Replication configured:
{for Slave} :: to poll the Master's machine address at regular intervals
{for Master} :: to commit the changes and clone the mentioned configuration files when a Slave asks for them
for detailed reference: 
for optimizaed ways using ssh/rsync based replication:

Now, here you can even use a single configuration file, with the 'Replication' node having blocks for both Master and Slave, each with an 'enable' child-node (possible values 'true|false') set as per requirement on the Master & Slave boxes.
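A sketch of that single-file approach; the masterUrl host, pollInterval and confFiles values here are examples to adjust for your setup:

```xml
<!-- solrconfig.xml: one replication handler shared by both roles;
     flip the two "enable" flags depending on which box this is -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">true</str>
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
  <lst name="slave">
    <str name="enable">false</str>
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```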

Sunspot dealing with a single Solr instance gets $ProjectRoot/config/sunspot.yml like:

    production:
      solr:
        hostname: <solr-host>
        port: 8983
        log_level: WARNING

Sunspot dealing with a master/slave Solr set-up gets $ProjectRoot/config/sunspot.yml like:

    production:
      solr:
        hostname: <slave-host>
        port: 8983
        log_level: WARNING
        master_hostname: <master-host>
        master_port: 8983
      master_solr:
        hostname: <master-host>
        port: 8983
        log_level: WARNING
If you have more than one slave, they need to be handled by a load-balancer, and the DNS entry for that load balancer goes in the slave's hostname field.

Also, the fields 'master_hostname' and 'master_port' below 'solr:' are not mandatory and are supposed to be picked up from the 'master_solr:' block. But it has been observed in some cases that mentioning them explicitly avoids the configuration not being picked up.

By default, Sunspot configures Ruby/Rails application to Write-Only to Master and Read-Only from Slave.

Monday, September 26, 2011

explicitly update Sass in a Rails Application

Sass (Syntactically Awesome StyleSheets)?
Available @
It is a fine way to produce clean and easy CSS code using a meta-scripting language. It has features for developer ease like variables, nesting, mixins, and selector inheritance.

If you are somehow using Sass in your Ruby-On-Rails application and using the default WEBrick to serve the web content, it automatically updates the "public/stylesheets/*.css" files from your styling meta-scripts in "public/stylesheets/sass/*.sass".

But if you are using any other web-server (like thin)..... in that case you'll have to get your CSS files explicitly updated using the 'sass --update' action.

$ bundle exec sass --update public/stylesheets/sass:public/stylesheets

(during development you can instead leave 'sass --watch public/stylesheets/sass:public/stylesheets' running, which recompiles on every change)

So, include the '--update' call in your deploy Rake tasks and get updated stylesheets every time without fail.

Monday, September 5, 2011

mysql-server retained old credentials, got work-around but no solution

While trying to re-configure an existing mysql-server on CentOS, I tried 'yum remove' of the mysql-server and then 'yum install' again.

When I tried setting up a new password for 'root' using 'mysqladmin', it raised an error. Some random troubleshooting showed it still had the earlier installation's root credentials working for it.

Trying some more stuff, I manually set 'old_passwords=0' in '/etc/my.cnf' and then tried re-installing. It still had the earlier password working for it.

For time being, got a work-around fixing the problem... 
$ yum erase mysql mysql-server
$ rm -rf /var/lib/mysql
$ yum install mysql mysql-server
$ service mysqld restart 

The user table data resides in '/var/lib/mysql/mysql/user.MYD'; but removing just that one file wouldn't work, because it doesn't get recreated on re-installation if the entire directory structure is present.

A service restart is required to create a vanilla copy of '/var/lib/mysql'.
Still... the issue of why this has to be done manually remains.
The incident is under discussion at the link below. If you have a non-hackish solution for the problem or can spot the actual mistake, please reply there.

Sunday, August 21, 2011

cucumber's selenium test failed and firefox ran blank..... OOOPS, version incompatibility

I faced my cuke tests failing a while back, and recently saw some of my colleagues re-checking their environment settings and cucumber installations, and tweaking their cuke features in different ways..... and I realized I had missed spreading some major sports-page news for all headline cuke readers, which might have been wasting a lot of time for a lot of people just because the word is not well spread.

Newer Versions of Firefox
are not friendly with 
in-trend Selenium Drivers

So, if you have direct selenium tests failing, or you write cucumber/capybara behavioral tests and are seeking the cause of failure despite correct behavior... look out for a version downgrade of your Firefox from whatever 4.x, 5.x or higher beta version you have to any of the 3.x versions.
And if you were a victim of this same incompatibility, you will see your tests GREEN again.

Tuesday, August 9, 2011

Tech-Xpress-Guide so far... from learnings of year 2010

GitHub Repo ~

Following what I picked up nicely last year, I uploaded a few How-To Express Guides over the past few months on my SlideShare page ~

Tech-Xpress-Guide so far ~

[] using SNMP for secure remote resource monitoring ~

[] using Nagios for quick & efficient IT Infrastructure monitoring

[] using Cacti for IT Infrastructure monitoring with nice analytic graphing

[] using Zabbix for IT Infrastructure monitoring with easy-to-use Web UI

[] using DummyNet to mock different Network latencies & bandwidth for testing or other purpose

[] creating and using Solaris Native Zones {similar to Linux LXC but more efficient}

[] Ethernet Bonding in *Nix for Load Balancing

[] setting up Squid Cache Proxy service

[] setting Syslog Centralization for *nix machines

Sunday, June 19, 2011

[dev-ruby] "JRubyOnRails Blames ActiveRecord" *ing: culprit Cucumber, inspector JConsole

I was working on JRubyOnRails project, where we used Cucumber in providing a web-app feature that required functional testing of some web portals.

     In this Rails application, 'Cucumber' tests for different web-portals are invoked every scheduled interval using 'derailed_cuke' and then the test results are processed and saved in a MySQL database.

Gems Used:
     #for Rails3
          'rails', '3.0.3'
     #for MySQL ActiveRecord
          'activerecord-jdbcmysql-adapter', '1.0.2'
          'activerecord-jdbc-adapter', '1.0.2'
          'jdbc-mysql', '5.0.4'
     #for Cucumber
          'cucumber', '0.10.0'
          'cucumber-rails', '0.3.2'
          'gherkin', '2.3.3'
     #with JRuby version 1.5.5


  • After starting the Rails server and waiting for some non-specific amount of time, an exception started arising mentioning ActiveRecord, with a Heap OutOfMemory exception being raised.
  • I checked my database services, schema & content; all was as expected and even working fine from the 'rails console'.


  • My bad old debug-style... the first thing I did was insert 'print STATUS_MSG' at all suspected ActiveRecord locations.
  • It worked :), and the location sorted out was the place where the cucumber tests' processed results were saved to the database.
  • So now 'Cucumber' was on the suspicion radar... I googled recent incidents involving both 'OutOfMemory Exception' and 'Cucumber', and there were a few blogposts about a similar 'memory leakage' problem in Cucumber.
     I had a JConsole window already fired up and profiling the memory level changes
     in my JRubyOnRails application (one of the major benefits of using JRuby).
  • So, I started tweaking the Cucumber tests' schedule interval to higher and lower values.
    This clearly showed its impact on the memory profile in JConsole and its correlation with how soon the exception was raised.

     Until the Cucumber flaw was corrected, the work-arounds used were
        [] pushing in some more RAM
        [] using JVM parameters to increase the Heap Size used by the Rails application
         #jruby -J-Xmx2500m -J-Xms2500m -J-Xmn512m -S rails s
                  -J-Xmx2500m ~ means maximum Java Heap Size increased to 2500m
                  -J-Xms2500m ~ means initial Java Heap Size increased to 2500m
                  -J-Xmn512m ~ means eden Java Heap Size set to 512m
                  for server-side applications, Xmx & Xms are preferred to be kept the same

[dev-ruby] 'gherkin' needs a native install AND I ported the entire jruby dir

When I started working on 'derailed_cuke', I was a newbie at 'Cucumber', and its dependency list included 'Gherkin'... I was using JRuby.

[[ derailed_cuke ]] 
this project is a simple way to use cucumber tests independent of any Rails using Ruby's Rake.

Due to some reasons... I copied this project onto another machine along with the entire JRuby-Pack from the older machine.
Now, this JRuby-Pack of mine from the older machine already had all the gems installed in it and was kind of a portable setup, with a script declaring the environment variable '$JRUBY_HOME' and adding '$JRUBY_HOME/bin' to $PATH.

I set up the JRuby-Pack & started the application 'derailed_cuke'. There was an error for the gem 'gherkin' not being present.
But I had it installed on the earlier machine, from when I got Cucumber installed.

I called 'gem install gherkin' again to make up for whatever had been missing.
Installation successful. But the 'gherkin' error was still there. Running 'gem list' showed that I now had two versions of gherkin installed.
Looking a bit more into the issue, it resolved as:
[*] the newest version just installed wasn't the one desired by Cucumber,
[*] the version desired by Cucumber was the one copied from the other box and,
[*] 'gherkin' requires a 'Native Install'

*** thus do a clean 'bundle install' to avoid such newbie mistakes

So, doing a native install of specific 'gherkin' gem was required.
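The clean-up amounted to something like the following; the version pin matches the Gemfile list above, so use whatever your own Gemfile.lock resolves to:

```shell
$ gem uninstall gherkin           # drop the copied-over, wrong-platform build
$ gem install gherkin -v 2.3.3    # native (compiled-on-this-box) install
$ bundle install                  # better still: let bundler resolve it all cleanly
```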

Wednesday, January 26, 2011

[app-security] Scanning VoIP Service for SIP-based Vulnerabilities

(the version details and product specifics belong to 2010)

Task Detail: 
Scanning the state of SIP implementation in VoIP Service.

SIP (Session Initiation Protocol) is a popular protocol used for controlling multimedia connection sessions for VoIP-based services. It is also used to provide some services in our organization, even over a public network. Thus, we required a security analysis of this service.

Execution Method:
Attackers are actively seeking exposed PBX systems to launch Phishing Scams & route fake calls.
In recent years scams have evolved to include SMS Solicitations (Smishing) and Fake Phone Calls through VoIP (Vishing).
Two common VoIP frauds are:
1. Compromise of a server to allow outbound calls at the owner's expense
2. Use of VoIP in Vishing Scams {combined with phishing e-mails
   or automated attendant}

There are many tools available to hackers supporting SIP exploitation like Cain-n-Abel, Intelligent War Dialer, Sipsak, siVus, Scapy, Vomit, SIPVicious, SIPCrack, Nessus, Protos, Asteroid SIP DoS and many more.
VoIPPack now ships with a tool which enables a user to send an INVITE to an IP Phone and collect the Digest Authentication response. After collecting enough responses, it can launch a brute-force attack on the challenge-response and guess the password.

Tools/Technology Used:
Cain & Abel :
Sipsak :
Scapy :
VoIPPack :
SIPVicious :

VoIP systems should be configured keeping in mind the following points:
1. Check for weak Account Credentials
2. Build a Dial-Plan only allowing what is required
3. Limit exposure of your entire Network
4. Be informed of new exploits in the arena
All the VoIP enabled devices were running SIP service at its default port. So identifying the IP for devices to be exploited for SIP vulnerabilities was already made easy. As now, we had to provide the IP of those devices to self-sustained SIP fuzzing utilities to check for different kind of information. This information could range from extensions of various VoIP phones, passwords of devices to audio recording of actual conversation taking place using those devices.

New updates to VoIPPack consist of:

* two new tools called “bypassalwaysreject” and “sipopenrelay”
* DoS exploits for Asterisk PBX called “asteriskdiscomfort”, “asterisksscanfdos” and “iax2resourceexhaust”
* Generic DoS exploit “sipinviteflood”
* Optimizations for the SIP Digest leak tool “sipdigestleak” and the SIP digest cracker

[net-security] Security Analysis of WiFi implementation WPA2-AES

WiFi has several vulnerable protocols still in use for backward compatibility. There have been new updates to the WiFi implementations, but they can all still be exploited in some way.

Execution Method:
[] The best WiFi setup you can have is WPA2-AES; it's the most secure, but not hacker-proof... you still need to be cautious to make it secure enough.
So, go ahead and implement a WPA2-AES standardized WiFi setup, and then be cautious about what I mention ahead...

[] Recent WiFi implementations use WPA2 for network authentication, AES for data encryption, and PEAP as the EAP type to provide stronger security as opposed to older WiFi implementations. They are vulnerable to attacks if improper configuration has been done on the client side.
The correct configuration procedure is given below:
1. Open the Properties of your Wireless NIC.
2. From there open the properties of your Network.
3. Click on 'Wireless Networks' tab, and select properties of
   EAP Type.
4. Here, if 'Validate Server Certificate' checkbox is unchecked,
   or 'Do not prompt users to..." is not checked in, then it's a
   Privacy flaw making clients vulnerable to PEAP Attack.

[] Using 'WiFish Finder', one can easily figure out the networks being used by a client and the encryption type configured for them.
Then it can probe as a fake network provider and attack the client by tricking it into sending authentication packets to it, disguising itself as an original network from the client's trusted network list.

Tools/Technology Used:
WPA2-AES PEAP, Pentoo, Airmon-ng, WiFish Finder

Not many practical attacks are available, but a weak implementation could still make a WPA2-AES WiFi network vulnerable.
The attack vector raised by WiFish-Finder is an under-rated possibility.

[net-security] Subset Scan for Old Clients At New Networks

WHY? Subset Scan for Old Clients At New Networks

Task Detail:
Suppose you are to perform a vulnerability assessment of a new network for a client you have already worked for.
They might have all newly configured devices supposed to be checked for security.
But one thing they mostly can't resist is using similar machine images for the new setup.
HOOOOLAAA... what you need to do is sniff out all the flaws noted in the earlier scan.

Several vulnerabilities still persist if the images have been reused, because
[] these machine images are not updated very regularly (that's the whole benefit of having images)
so they act like a vulnerability-copying bot.

Wednesday, January 19, 2011

[net-security] all need Authentication most need Domain Controllers 'n hackers love it

Domain Controllers are devices responsible for maintenance of data about all corporate user accounts, software resources and user ACLs. So, specific vulnerability assessment was required for them. We were supposed to assess the Domain Controllers with a more intense vulnerability scan cycle.

Execution Method:
Similar to the previous task, we first scanned using NMap, and then launched NeXpose scans on the domain servers one by one.
The scan results showed that all the machines ran exploitable services, and we tried testing those using Metasploit and tools specific to each identified vulnerability.
Here we especially checked for User Account Enumeration, and found most of the Domain Controllers to be vulnerable to CIFS vulnerabilities, resulting in the enumeration of all user accounts with their details.

Tools/Technology Used:
NMap, Rapid7's NeXpose, Metasploit, SNMP Fuzzer, SNScan, Hunt, SuperScan, User2SID, SID2User

Rapid7's NeXpose:
SNMP Fuzzer:
User2SID & SID2User:

[net-security] Internal Network Scan : major NeXpose work

Even if a network has a strong intrusion detection and prevention mechanism implemented, it is only as safe as the machines present within it. If any device within the network is infected with a trojan or virus, or is even just running a vulnerable service, it could lead to the compromise of the entire network.

Execution Method:
Rapid7, the team behind Metasploit, has a network vulnerability assessment tool named 'NeXpose'.
It has a huge, regularly updated database of exploits and vulnerabilities, which its Community version can test against a limited set of machines.

First, start scanning the subnets with the best network scanner, NMap, revealing some interesting information about machines, ports and the services running on those ports.

Next, launch NeXpose scans for all machines identified in the first step, in small batches. Here NeXpose will again do some NMap-like testing, plus a lot of extra self-checking of whether a certain exploit is useful against the machine.
Keep a record of all machines with exploitable services, and try hacking those using Metasploit and tools specific to the vulnerabilities.

Also use tools like SNMP Fuzzer and Hunt, mainly on server-like machines... say an AD server, etc.; you could get lucky anytime.

Tools/Technology Used:
NMap, Rapid7's NeXpose, Metasploit, SNMP Fuzzer, SNScan, Hunt

Monday, January 17, 2011

[net-security] sometimes dumbest try hits hardest, our lovely 'Port Scan'

even the dumbest tries can hit an opponent real hard... and that is the case with Port Scanning today.

almost every Network Techie knows its importance and the ways to secure against it,
still everyone leaves a gap; and even if no gap is left... it's too hard to make network services hide their basic instincts and leave no trace...

Normally, you shouldn't hope for any big revelations... but you can always crawl after the probable active services and their versions.

so a general procedure you can follow for some thrill is:

perform it externally on the available network resources, plus an internal multi-style scan
1. Simple Port Scanning
Do a plain 'intensive mode' port scan of all the ports from outside the network.
Do it for more specific ports on the internal network, otherwise you could overflow your network routing tables, or say DDoS your own LAN. Know-More-About-it-Link
2. No-Ping Port Scanning ('n few more scan-modes)
Next, a scan in No-Ping mode makes the port scanner assume the host is already up and scan the host's open ports. In certain better-protected network scenarios, this is more successful than the earlier style. Know-More-About-it-Link
3. Firewalking (just for fun... but you'll learn a lot)
In the external scanning task, you could also use this old-school trick just to have some fun and try your luck to its limit. Know-More-About-it-Link
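A hedged sketch of the three styles with nmap and firewalk; the hostnames, subnets, port ranges and interface here are placeholders, and you should only scan networks you are authorized to test:

```shell
$ nmap -A -p- target.example.com      # 1. intensive scan, all 65535 ports, external
$ nmap -A -p 22,80,443 10.0.0.0/24    # internal run, restricted to chosen ports
$ nmap -Pn target.example.com         # 2. no-ping mode: assume host is up, scan anyway
$ firewalk -S1-1024 -pTCP -i eth0 gw.example.com target.example.com   # 3. firewalking past the gateway
```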

Tools/Technology Used: 
NMap, PortBunny, Telnet, Firewalk and NetCat present on BackTrack pen-test distribution.

[security] prying 'ears' at 'get-some-fresh-air-spots' near office

the first task I did in an organization (it was an assigned task) was to PRY around and see if I got caught... though it was easy to roam around the office and connect to the network once I was past the entry checks...

But this made me think: what about information coming to me after walking out of those entry checkpoints... at hangout spots near offices, where techies come in groups, or alone on a call, and can't resist venting all the headache this-or-that technology is causing them, mostly with intricate details (which they feel is normal data since everyone in their group already knows it; but what about prying ears).

I don't pry around; I just go to the tea stall outside my office with my friends... and still get to know internal details of other organizations 'cuz they are chattered about by the person standing next to me...

if someone did it with motive and attention, 'a lot can happen over a cup of tea' ;)