Sunday, January 13, 2013

Apache httpd VirtualHosts : one gets default, unknown faults

Recently I faced a situation where, even after removing a VirtualHost, requests to its ServerName were still getting an HTTP 200 response. It all came down to a missed RTFM.

When Apache httpd applies its VirtualHost configuration, the first VirtualHost definition it encounters becomes the default: any request whose Host header matches none of the configured ServerNames gets served by that first vhost.

To make the default explicit, one of the vhost definitions needs to say so, using the special _default_ address.
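A minimal sketch of what such an explicit catch-all vhost can look like (the file name, paths and port here are my assumptions, not from the original set-up):

```apache
# Hypothetical catch-all vhost, e.g. /etc/httpd/conf.d/000-default.conf.
# With _default_, this vhost answers requests that match no other
# ServerName, instead of whichever vhost happens to be parsed first.
<VirtualHost _default_:80>
    ServerName default.localhost
    DocumentRoot "/var/www/default"
</VirtualHost>
```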


Monday, December 3, 2012

foodcritic rake task ~ one that works for me

FoodCritic (a lint tool for your OpsCode Chef cookbooks)
$ gem install foodcritic --no-ri --no-rdoc

Rake task to get FoodCritic rolling on your cookbooks
(here the cookbooks reside in a 'cookbooks' directory at the root, alongside the Rakefile)
The one on the wiki doesn't work for me, but this does
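A sketch of such a task, shelling out to the foodcritic CLI rather than its Ruby API (the directory layout is the one described above; `-f any` makes any matched rule fail the run):

```ruby
# Rakefile — a FoodCritic lint task; assumes cookbooks live under
# ./cookbooks next to this Rakefile, and that the foodcritic gem's
# CLI is on the PATH.
require 'rake'

desc "Lint every cookbook under ./cookbooks with FoodCritic"
task :foodcritic do
  # -f any : exit non-zero when any rule matches, so CI fails the build
  sh "foodcritic -f any cookbooks"
end

task :default => :foodcritic
```

With this, a bare `rake` runs the lint pass.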


Monday, October 22, 2012

varnish ~ sometimes fails but doesn't tell

Varnish-Cache

We were playing around with automation for the famous Varnish-Cache service and stumbled upon something... the silent-killer style of Varnish-Cache.

The automated naming scheme built backend names by appending each service-node's host-name and service-name, then load-balanced across them using a "director ... round-robin" configuration. Names were built that way to avoid collisions when the same service runs on different nodes behind Varnish-Cache.
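Hypothetically, the generated VCL looked something like this (backend names, hosts and ports here are invented for illustration; Varnish 2/3-era syntax):

```vcl
# Auto-generated backend names: node host-name + service-name
# concatenated, producing very long identifiers.
backend web_app_node01_prod_east_example_com_web_app {
    .host = "node01.prod.east.example.com";
    .port = "8080";
}

backend web_app_node02_prod_east_example_com_web_app {
    .host = "node02.prod.east.example.com";
    .port = "8080";
}

# Round-robin director load-balancing across the generated backends
director web_app_pool round-robin {
    { .backend = web_app_node01_prod_east_example_com_web_app; }
    { .backend = web_app_node02_prod_east_example_com_web_app; }
}
```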

We checked the configuration for correctness
$ varnishd -C -f /my/varnish/config/file

It passed.

We started the Varnish service
$ /etc/init.d/varnish start

It started.

We tried accessing the services via Varnish.

It failed, saying there was no HTTP service running at the Varnish machine:port.

Now what? We couldn't find anything wrong in the VCL configuration. So we started looking at logs.
Tried starting the varnishlog service

$ service varnishlog start

This failed with an error about _.vsm not being present, even though the file existed with the right user permissions.

Then a colleague of mine mentioned he had faced such an issue before, caused by extremely long backend names.

I shortened the backend names drastically and it started working.

So backend-name length does have an effect, but the VCL check gives no error when starting Varnish-Cache.

BTW, from the checks performed, the longest backend name that worked for this configuration was 44 characters.

Saturday, August 11, 2012

Project::Inception.include? Infrastructure['base_architecture']

requirements change.....

Requirements are supposed to change, else there will be no market.

But when you are involved with the client to formulate the Project which has to be 'developed and released', you try to cover all the important specs related to 'development and release'.

In this era, most people opt for IaaS or PaaS solutions for their production release (and for development, depending on budget).
There are lots of organizations who have had their own SysAdmin team for years, and have more faith in it for a cheaper, better, secure, and/or controlled environment.
Then there are also projects that need to stay on internal organizational networks, due to some internal boxed service or company policies.

Either way, initially there can be two situations: they have an idea about where they want it implemented, or they don't.

If they leave the decision to you, better. Come up with the best-suited solution for the projected requirements, and get a yes from them.

If they decide to drive the decision, even better. Analyze their solution, lay out the respective pros, cons and the patchy aid that will be required, and get them to agree on an approach that won't make the project's life hell.


This needs to be done at the start. Deciding it post-project-inception will only slow you down and eat your time.

Monday, June 11, 2012

Creating RPM of MCollective for Ruby1.9

Note: with small tweaks come major pre-requisite checks.

Clone the latest branch from GitHub repo for MCollective
$ git clone git://github.com/puppetlabs/marionette-collective.git

Removing Ruby 1.8 version specification from RedHat Spec
$ sed -i 's/.*ruby.abi.*//g' ext/redhat/mcollective.spec
from the changes 11/Jun/2011 : marionette-collective/commit/ba86f7762d
the lines removed by above command are
BuildRequires: ruby(abi) = 1.8
Requires: ruby(abi) = 1.8

Removing Rubygem related specification from RedHat Spec
$ sed -i 's/.*rubygem.*//g' ext/redhat/mcollective.spec
from the changes 11/Jun/2011 : marionette-collective/commit/ba86f7762d
the lines removed by above command are
Requires: rubygems
Requires: rubygem(stomp)

Create the RPMs. Generating these requires a RedHat-based system with packages like ruby, rubygem rake, redhat-lsb (and rpm-build, I think) installed
$ rake rpm



The checks you need:
On the machine where you install the created RPMs, remember to have ruby 1.9.3 and the stomp rubygem already installed.

You can get older MCollective 1.3.2 RPMs, already friendly with Ruby 1.9, at http://yum-my.appspot.com/flat_web/index.htm .

The latest branch will give you version 2.0.1 .

Wednesday, April 25, 2012

quick PuppetMaster Service Script for gem installed puppet set-up

The easy rubygem way of installing Puppet (`gem install puppet`) doesn't get you the *nix platform service script... so here is one allowing you to perform Start||Stop||Restart||Status service tasks for puppetmaster.
  • Save the file below as '/etc/init.d/puppetmaster'
    $ sudo curl -L -o /etc/init.d/puppetmaster https://gist.github.com/raw/2479100/15f79c68be3f6f6bf516adf385aac1f29f802a45/gistfile1.rb
  • and turn on its eXecutable bit
    $ sudo chmod +x /etc/init.d/puppetmaster
  • Now you can use it as
    $ service puppetmaster status
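For reference, a bare-bones sketch of what such an init script can look like (this is my own sketch, not the gist's contents; the pidfile fallback path is an assumption, and `--configprint` is used to ask puppet itself where it writes its pid):

```shell
#!/bin/sh
# Minimal hypothetical init script for a gem-installed puppetmaster.

PIDFILE="$(puppet master --configprint pidfile 2>/dev/null \
           || echo /var/run/puppet/master.pid)"

start()  { puppet master; }   # puppet master daemonizes itself
stop()   { [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"; }
status() {
  if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "puppetmaster is running (pid $(cat "$PIDFILE"))"
  else
    echo "puppetmaster is stopped"
  fi
}

case "$1" in
  start)   start ;;
  stop)    stop ;;
  restart) stop; start ;;
  status)  status ;;
  *)       echo "Usage: $0 {start|stop|restart|status}" ;;
esac
```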

Wednesday, March 14, 2012

[Puppet] Exported Resources is a beautiful thing..... thing to use and improvise

Lately, I've been pretty blunt towards Puppet, because of a mess that exposed some specific scenario flaws at very busy times.
So it's only fair to applaud when I like something that is a novel and beautiful concept.

For a well-organized, auto-magically managed set-up, apart from a fine infrastructure and its configuration management mechanism, a very important part is having the monitoring and logging solution spread across the infrastructure in a similarly seamless and scalable fashion.


Puppet enables it very finely with the use of exporting and collecting resources.

Exported resources are super virtual resources.
Once a resource is exported and the storeconfigs setting is true, the puppetmaster stores these virtual resources and keeps them available for all hosts to collect.
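On the master side, enabling stored configs comes down to one setting (a minimal sketch; how the stored data is backed varies by Puppet version and set-up):

```ini
# puppet.conf on the puppet master
[master]
  storeconfigs = true
```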

Read in detail: http://docs.puppetlabs.com/guides/exported_resources.html

An example from the link above, using a simple File resource
node a {
  @@file { "/tmp/foo":
      content => "fjskfjs\n",
      tag => "foofile",
  }
}
node b {
  File <<| tag == 'foofile' |>>
}

It'll have its flaws (as all software does, mine and others')..... just hope they don't interfere with my use-cases.