Parsing HTML Output of the Nagios Service Availability Report

Nagios has a Service Availability Report feature, available from the Reports section of its web interface. However, this feature is not exposed as a web service (there is no RESTful API), so to get these reports from an application we must make an HTTP request and parse the resulting HTML. Here is an example of a Python script that does this kind of automation.

The Nagios version used in this example is 3.5.0. Since Nagios is served over HTTPS and requires basic authentication, we have to use an SSL context and an authentication header. The Python version is 2.7, and the beautifulsoup4 library is used for parsing the HTML output.
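Before the parsing step, the report has to be pulled over HTTPS with basic authentication. As a sketch of that request side using curl (the CGI path follows the usual Nagios 3.x layout; the credentials, host name, and epoch range t1/t2 are placeholders):

```shell
# Build the avail.cgi URL for a single host's availability report.
# t1 and t2 bound the reporting window as Unix epoch seconds.
build_avail_url() {
  local base="$1" host="$2" t1="$3" t2="$4"
  printf '%s/nagios/cgi-bin/avail.cgi?host=%s&t1=%s&t2=%s' \
    "$base" "$host" "$t1" "$t2"
}

# Fetch over https with basic auth; -k tolerates a self-signed certificate.
# curl -sk -u nagiosadmin:secret \
#     "$(build_avail_url https://nagios.example.com web01 1388534400 1391212800)"
```

The HTML this request returns is what the Python script hands to beautifulsoup4.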

PowerShell Script for Switching Between Multiple Windows

Windows PowerShell has strong capabilities. I have a separate computer with a big LCD screen on which I regularly watch some web-based monitoring applications, so I need to switch between those application windows on a timed basis. I wrote this simple PowerShell script to achieve that; you can change it according to your needs.


Feeding Active Print Jobs to Graphite

Chances are that lots of print jobs are running on your CUPS server. In my case, more than one CUPS server runs behind a load balancer, so tracking their active jobs helps you check whether the load is spread across the servers smoothly.

Schema definitions for the Whisper files are expressed in storage-schemas.conf:

[print_server_stats]
pattern = ^print_stats.*
retentions = 1m:7d,30m:2y

Two retention policies are defined: one for the short term (samples are stored once every minute for seven days) and one for the long term (samples are stored once every thirty minutes for two years).

The following bash script is used on every CUPS print server to send the active job count to Graphite:
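A minimal sketch of such a script, assuming a Graphite host reachable on the plaintext port 2003 (the host name is a placeholder; the metric prefix must match the ^print_stats.* pattern above):

```shell
#!/bin/bash
# feed_graphite.sh -- push the active CUPS job count to Graphite every minute.

# One line of Graphite's plaintext protocol: "<metric> <value> <epoch>".
metric_line() {
  printf 'print_stats.%s.active_jobs %s %s' "$1" "$2" "$3"
}

feed_loop() {
  while true; do
    # lpstat -o prints one line per job waiting in the queues
    count=$(lpstat -o 2>/dev/null | wc -l)
    metric_line "$(hostname -s)" "$count" "$(date +%s)" \
      | nc graphite.example.com 2003
    sleep 60   # matches the 1m resolution of the short-term retention
  done
}

# Uncomment when saving as feed_graphite.sh; left inactive here so the
# helper functions can be exercised on their own.
# feed_loop
```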



Once you create the script, it can be started as a background job from a shell terminal:

# ./feed_graphite.sh >/dev/null 2>&1 &

Linux Process Status Codes

In a Linux system, every process has a state, shown in the 'STAT' column of the output of the 'ps' command. 'ps' displays an uppercase letter for the process state.

Here are the different values for the output specifiers:

D    uninterruptible sleep (usually IO)
R    running or runnable (on run queue)
S    interruptible sleep (waiting for an event to complete)
T    stopped, either by a job control signal or because it is being traced
W    paging (not valid since the 2.6.xx kernel)
X    dead (should never be seen)
Z    defunct ("zombie") process, terminated but not reaped by its parent

For illustration, here is an example output of the 'ps' command:

$ ps -eo state,pid,user,cmd
S      1  root      /sbin/init
S   5274  root      smbd -F
D   4668  postgres  postgres: wal writer process
S   7282  root      nmbd -D
S   7349  root      /usr/sbin/winbindd -F
R  11676  postfix   cleanup -z -t unix -u
S  25354  _graphi+  (wsgi:_graphite)  -k start
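Some of these states are easy to reproduce for yourself. A small sketch that parks a throwaway process in the T (stopped) state with SIGSTOP and reads the state letter back:

```shell
sleep 60 &                 # a throwaway process to play with
pid=$!

kill -STOP "$pid"          # job-control stop -> state T
sleep 1                    # let the signal be delivered
state=$(ps -o state= -p "$pid" | tr -d ' ')
echo "$state"              # T

kill -CONT "$pid"          # resume it...
kill "$pid"                # ...and clean up
```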


Using ssh-agent for Unattended Batch Jobs with an SSH Key Passphrase

In some cases you need to make SSH connections to other servers in order to run shell commands on them remotely. But when these commands run from a cron job, the passphrase interaction becomes a problem. Using an SSH key pair with an empty passphrase may be an option, but it is not recommended. There is another option that automates the passphrase interaction.

ssh-agent provides storage for the unencrypted key, on the premise that the most secure place to store a key is in program memory.

I am going to explain how to run a batch/cron shell script integrated with ssh-agent.

There are two servers, server1 and server2.

On server1, an SSH key pair is created:

# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): <your passphrase here>
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
........

On server2, copy the content of the id_rsa.pub file from server1, append it to /root/.ssh/authorized_keys, and give appropriate permissions (700 for the .ssh directory, 600 for the authorized_keys file). From now on, SSH connections can be made from server1 to server2 using the key passphrase.

On server1, this can be tested:

# ssh server2
Enter passphrase for key '/root/.ssh/id_rsa': <your passphrase here>
# (that is server2's shell prompt!)

On server1, we invoke ssh-agent just once; thereafter, cron jobs can use this agent for authentication:

# ssh-agent bash
# ssh-add /root/.ssh/id_rsa
Enter passphrase for /root/.ssh/id_rsa: <your passphrase here>
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)

ssh-agent provides access to its services through a Unix socket. Anyone with access to this socket gains the right to use the stored keys.

On server1, write out the two agent environment variables to a file:

# echo "export SSH_AUTH_SOCK=$SSH_AUTH_SOCK" > aginfo
# echo "export SSH_AGENT_PID=$SSH_AGENT_PID" >> aginfo

Now open another terminal window on server1, save the following example shell script, and run it:

# cat cron_test.sh
#!/bin/bash
source ./aginfo
ssh -o 'BatchMode yes' server2 hostname

# ./cron_test.sh
server2

Now we have achieved our goal. The script can be put in the crontab and run periodically. But keep in mind that ssh-agent will not survive a reboot, so the setup process must be repeated after each reboot.
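For example, the crontab entry on server1 might look like this (the schedule and log path are assumptions; the cd is there so the script finds ./aginfo):

```
# m h dom mon dow  command
*/10 * * * * cd /root && ./cron_test.sh >> /var/log/cron_test.log 2>&1
```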


Linux find command (exec vs xargs)

As a matter of fact, I detest having to learn more than one method to achieve a job when it comes to shell scripting. But most of the time, sysadmins have to find the way that meets their needs best.

find has the -exec option to perform actions on the files that are found. It is a common way of deleting unnecessary files without xargs:

$ find . -name "*.tmp" -type f -exec rm -f {} \;

In the example above, "{}" is substituted safely even for files with spaces in their names. But the "rm" command is executed once for every single file that is found; with tons of files to remove, a lot of forked processes take place.

How about using xargs:

$ find . -name "*.tmp" -type f -print0 | xargs -0 -r rm -f

With xargs, "rm" is executed once for many files at a time, reducing the fork overhead. The "-print0" option (with the matching "-0" on xargs) makes this safe for file names containing spaces or newlines. The xargs "-r" option prevents running the command at all if stdin is empty. Of course, there is a limit on the argument list xargs can pass at a time; when the input exceeds it, xargs splits the input and executes the command repeatedly. This limit can be overridden with the "-s" flag.
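GNU find can also batch arguments itself: terminating the -exec clause with "+" instead of "\;" appends as many file names as fit into a single invocation, much like xargs. A quick sketch in a throwaway directory:

```shell
dir=$(mktemp -d)                       # sandbox so nothing real is touched
touch "$dir/a.tmp" "$dir/b.tmp" "$dir/keep.txt"

# "+" groups the found files into as few rm invocations as possible.
find "$dir" -name "*.tmp" -type f -exec rm -f {} +

ls "$dir"                              # only keep.txt is left
```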


Ansible Playbook for cleaning all print jobs

- hosts: print_servers
  tasks:
    - name: Clears all print jobs from the queues of the specified printers.
      shell: for i in $(/usr/bin/lpstat -o {{ item }} | awk '{ print $1 }'); do /usr/bin/cancel $i; done
      with_items:
        - printer1
        - printer2