An Experiment with Filebeat and ELK Stack

The ELK Stack is one of the best distributed systems for centralizing logs from many servers. Filebeat is a log shipper that keeps track of the given logs and pushes them to Logstash, and Logstash then outputs these logs to Elasticsearch. I am not going to explain how to install the ELK Stack; instead, this is an experiment in sending multiple log types (document_type) from the Filebeat log shipper to a Logstash server.
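As a sketch of the idea (the paths, type names, and Logstash host below are illustrative assumptions, not taken from the original setup), a Filebeat 1.x configuration shipping two log types might look like this:

```yaml
# Hypothetical filebeat.yml fragment (Filebeat 1.x syntax):
# two prospectors, each tagging its events with a different document_type.
filebeat:
  prospectors:
    - paths:
        - /var/log/app/app.log
      document_type: app_log
    - paths:
        - /var/log/nginx/access.log
      document_type: nginx_access

output:
  logstash:
    hosts: ["logstash.example.com:5044"]
```

On the Logstash side, the shipped type is then available for branching in the filter block, e.g. `if [type] == "nginx_access" { ... }`.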

Linux Foundation Zephyr Project

On February 17, 2016, the Linux Foundation launched a project named Zephyr, a new player in the Internet of Things (IoT) field, to encourage an open source, real-time operating system for IoT devices. The project supports multiple system architectures and is available under the Apache 2.0 license. Because it is an open source project, the community can improve it to support new hardware, tools, sensors and device drivers.

The project is modular. Many IoT devices need hardware with a very small memory footprint, and Linux has already proven to be very good at running with limited resources. The Zephyr kernel can run in as little as 8 kB of memory, and one can enable or disable as many modules as needed.
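As an illustration of that modularity (the symbol names here are hypothetical, not taken from the Zephyr sources), a Kconfig-style project configuration enables only the subsystems a given device actually needs:

```
# Hypothetical prj.conf fragment: only the selected subsystems
# are compiled into the image; everything else stays out.
CONFIG_BLUETOOTH=y
CONFIG_NETWORKING=y
```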

It will also be secure. Security is very important for all IoT devices, and the Linux Foundation is putting together a group dedicated specifically to maintaining and improving the project's security.

At first, Zephyr will support:

  • Bluetooth
  • Bluetooth Low Energy
  • IEEE 802.15.4
  • 6LoWPAN
  • CoAP
  • IPv4
  • IPv6
  • NFC
  • Arduino 101
  • Arduino Due
  • Intel Galileo Gen 2
  • NXP FRDM-K64F Freedom board


For now, the founding members of its ecosystem are:
  • Intel
  • NXP Semiconductors N.V.
  • Synopsys Inc.



Parsing HTML Output of the Nagios Service Availability Report

Nagios has a Service Availability Report feature, usable from the Reports section of its web interface. But this feature is not exposed as a web service (e.g. a RESTful API), so in order to get these reports from an application we must make an HTTP request and parse the results. I just want to give an example of a Python script that does this kind of automation.

The Nagios version used in this example is 3.5.0. Since Nagios is served over HTTPS and requires basic authentication, we have to use an SSL context and an authentication header. The Python version is 2.7, and the beautifulsoup4 library is used for parsing the HTML output.
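The original script is not reproduced here, so the following is a hedged sketch of the same flow using only the Python 3 standard library instead of beautifulsoup4. The report URL, credentials, and table layout in it are illustrative assumptions, not Nagios defaults.

```python
# Sketch: fetch a Nagios availability report over HTTPS with basic auth,
# then pull the text of every <td> cell out of the returned HTML.
import base64
import ssl
import urllib.request
from html.parser import HTMLParser


class AvailTableParser(HTMLParser):
    """Collect the text content of every <td> cell in the report HTML."""

    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False

    def handle_data(self, data):
        if self.in_td and data.strip():
            self.cells.append(data.strip())


def parse_cells(html):
    parser = AvailTableParser()
    parser.feed(html)
    return parser.cells


def fetch_report(url, user, password):
    # Typical internal Nagios installs use a self-signed certificate,
    # so certificate verification is disabled here.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request(url)
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.read().decode("utf-8", "replace")


# Hypothetical usage:
#   html = fetch_report("https://nagios.example.com/nagios/cgi-bin/avail.cgi"
#                       "?host=all", "nagiosadmin", "secret")
#   print(parse_cells(html))
print(parse_cells("<table><tr><td>host1</td><td>99.98%</td></tr></table>"))
```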

PowerShell Script for Switching Between Multiple Windows

Windows PowerShell has strong capabilities. I have a separate computer with a big LCD screen on which I regularly watch some web-based monitoring applications, so I need to switch between those application windows on a timed basis. I wrote this simple PowerShell script to achieve that. You can change it according to your needs.
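The original script is not reproduced here, so the following is a minimal sketch of the idea, assuming the windows can be matched by title. The window titles and the 30-second interval are placeholders.

```powershell
# Hypothetical sketch: cycle the foreground focus between named
# application windows on a fixed interval.
$shell = New-Object -ComObject WScript.Shell
$titles = @("Nagios", "Graphite", "Kibana")
while ($true) {
    foreach ($title in $titles) {
        # AppActivate brings the first window matching the title to front
        $shell.AppActivate($title) | Out-Null
        Start-Sleep -Seconds 30
    }
}
```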


Feeding Active Print Jobs to Graphite

Chances are that lots of print jobs are running on your CUPS server. In my case there is more than one CUPS server running behind a load balancer, so tracking their active jobs is a good way to check whether the load is spread across the servers evenly.

The schema definition for the Whisper files is expressed in storage-schemas.conf as follows:

[print_server_stats]
pattern = ^print_stats.*
retentions = 1m:7d,30m:2y

Two retention policies are defined: one for the short term (samples are stored once every minute for seven days) and one for the long term (samples are stored once every thirty minutes for two years).

The following bash script is used on every CUPS print server to send the active job count to Graphite:
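The original script did not survive here, so below is a hedged reconstruction of the idea: once a minute, count the active CUPS jobs with lpstat and push the count to Graphite's plaintext port with nc. The Graphite host, port, and metric path are assumptions chosen to match the ^print_stats. pattern in the schema.

```shell
# Write the feeder script to disk (a sketch, not the original script).
cat > feed_graphite.sh <<'EOF'
#!/bin/bash
GRAPHITE_HOST=graphite.example.com   # assumed Graphite server
GRAPHITE_PORT=2003                   # Graphite plaintext protocol port
METRIC="print_stats.$(hostname -s).active_jobs"
while true; do
    # lpstat -o prints one line per active (not completed) job
    JOBS=$(lpstat -o 2>/dev/null | wc -l)
    echo "${METRIC} ${JOBS} $(date +%s)" | nc "${GRAPHITE_HOST}" "${GRAPHITE_PORT}"
    sleep 60
done
EOF
chmod +x feed_graphite.sh
```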



Once you have created the script, it can be started as a background job from a shell terminal:

# ./feed_graphite.sh >/dev/null 2>&1 &

Linux Process Status Codes

In a Linux system, every process has a state, shown in the 'STAT' column of the output of the 'ps' command. 'ps' displays an uppercase letter for the process state.

Here are the different values for the output specifiers:

D    uninterruptible sleep (usually IO)
R    running or runnable (on run queue)
S    interruptible sleep (waiting for an event to complete)
T    stopped, either by a job control signal or because it is being traced
W    paging (not valid since the 2.6.xx kernel)
X    dead (should never be seen)
Z    defunct ("zombie") process, terminated but not reaped by its parent

For illustration, here is an example output of a 'ps' command:

$ ps -eo state,pid,user,cmd
S   1       root        /sbin/init
S   5274    root        smbd -F
D   4668    postgres    postgres: wal writer process
S   7282    root        nmbd -D
S   7349    root        /usr/sbin/winbindd -F
R   11676   postfix     cleanup -z -t unix -u
S   25354   _graphi+    (wsgi:_graphite) -k start
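As a quick way to put these state codes to use, the following one-liner (assuming a procps-style ps) summarizes how many processes are currently in each state:

```shell
# Count processes per state code, most common first.
# "state=" suppresses the header line.
ps -eo state= | sort | uniq -c | sort -rn
```

On a healthy system the bulk of processes will be in state S; a growing number of D or Z entries is usually worth investigating.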


Using ssh-agent for Unattended Batch jobs with Ssh Key Passphrase

In some cases you need to make SSH connections to other servers in order to run shell commands on them remotely. But when these commands run from a cron job, password interaction becomes a problem. Using an SSH key pair with an empty passphrase is an option, but it is not recommended. There is another option that automates the passphrase interaction.

ssh-agent provides storage for the unencrypted key, because the most secure place to keep a decrypted key is in program memory.

I am going to explain how to run a batch/cron shell script integrated with ssh-agent.

There are two servers, server1 and server2.

On server1, an SSH key pair is created:

# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): <your passphrase here>
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
........

On server2, copy the content of the id_rsa.pub file from server1, append it to /root/.ssh/authorized_keys, and set appropriate permissions (700 for the .ssh directory, 600 for the authorized_keys file). From now on, SSH connections can be made from server1 to server2 using the key passphrase.

This can be tested from server1:

# ssh server2
Enter passphrase for key '/root/.ssh/id_rsa': <your passphrase here>
# (that is server2's shell prompt!)

On server1, we invoke ssh-agent just once; thereafter, cron jobs can use this agent for authentication.

# ssh-agent bash
# ssh-add /root/.ssh/id_rsa
Enter passphrase for /root/.ssh/id_rsa: <your passphrase here>
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)

ssh-agent provides access to its services through a Unix socket. Anyone who has access to this socket obtains the right to use the stored keys.

On server1, write out two specific environment variables to a file.

# echo "export SSH_AUTH_SOCK=$SSH_AUTH_SOCK" > aginfo
# echo "export SSH_AGENT_PID=$SSH_AGENT_PID" >> aginfo

Now open another terminal window on server1, save the following shell script as an example, and run it:

# cat cron_test.sh
#!/bin/bash
source ./aginfo
ssh -o 'BatchMode yes' server2 hostname

# ./cron_test.sh
server2

Now we have achieved our goal. The script can be put in the crontab and run periodically. But keep in mind that ssh-agent will not survive a reboot, so the ssh-agent setup process must be repeated afterwards.
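For example, a crontab entry on server1 could source the saved agent variables before each run (the path and the five-minute schedule are assumptions):

```shell
*/5 * * * * . /root/aginfo && /root/cron_test.sh >/dev/null 2>&1
```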