
Posts

Common HTTP Load Balancing Methods

Estimating the utilization of application servers is not always possible. In many companies, while the average CPU utilization of a server hovers between 20% and 30%, the same servers can barely respond to incoming requests at peak times. When unexpected over-utilization occurs, the first method that comes to mind is provisioning new instances into the application server pool (assuming, of course, that your company uses a virtualization or cloud architecture).
Sometimes that alone does not solve the over-utilization problem. As the number of app servers increases, it becomes important that requests are distributed evenly across them. If some of the servers become unresponsive, the effect on the environment can be devastating.

Traditionally, load balancers are used to distribute incoming traffic across multiple app instances. Load balancing provides benefits such as scaling the application farm, supporting heavy network traffic, detecting unhealthy app instances and automatically removin…
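The list of methods is cut off in this excerpt. Purely as an illustration (not taken from the post, and with placeholder host names), a minimal nginx upstream block shows two of the most common methods, round robin and least connections:

upstream app_pool {
    # Round robin is the default method; uncomment least_conn to switch to least connections.
    # least_conn;
    server app1.example.com:8080;
    server app2.example.com:8080;
    server app3.example.com:8080 backup;   # only used when the other servers are unavailable
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pool;
    }
}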
Recent posts

Iptables Rules For 2 Node Elasticsearch Cluster

The shell script below is useful for securing a two-node Elasticsearch cluster. To apply the appropriate iptables rules, just run it on each of the ES nodes.

With this script, rules are applied for:
- Allowing traffic on the loopback adapter.
- Allowing the ES nodes to communicate with each other.
- Allowing incoming SSH connections.
- Allowing incoming ICMP (ping) requests.
- Allowing outgoing DNS requests.
- Allowing access to the Elasticsearch HTTP interface.
- Dropping all other traffic.
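The script itself is not included in this excerpt. A minimal sketch of such a rule set could look like the following; the peer IP and the port numbers (9200 for HTTP, 9300 for node-to-node transport, the Elasticsearch defaults) are assumptions:

#!/bin/bash
# Sketch only: adjust PEER_IP and the ports to the actual cluster before use.
PEER_IP="10.0.0.2"   # the other ES node

iptables -F

# Allow traffic on the loopback adapter
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Allow replies to connections this host initiated (e.g. DNS lookups)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow the ES nodes to communicate with each other (transport port)
iptables -A INPUT -p tcp -s "$PEER_IP" --dport 9300 -j ACCEPT

# Allow incoming SSH and ICMP (ping) requests
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# Allow access to the Elasticsearch HTTP interface
iptables -A INPUT -p tcp --dport 9200 -j ACCEPT

# Allow outgoing DNS requests (relevant when the OUTPUT policy is also tightened)
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT

# Drop all other incoming traffic
iptables -P INPUT DROP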

Logstash Grok Filter Example For Jboss Server Access Logs

Logstash is a great tool for centralizing application server logs. Here is an excerpt from a JBoss application server's access logs and the corresponding grok filter for them.
Jboss Access Logs:

Converting some fields' data types to numbers (integer and float in the example) is useful for later statistical calculations. Logstash Filter (Logstash Version 2.3.4)

When the logs are sent to Elasticsearch, string fields are stored as analyzed fields by default.
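The log sample and the author's filter are cut off in this excerpt. As a rough sketch only, assuming the JBoss access logs follow the common Apache combined format (the pattern and field names below are assumptions, not taken from the post), a grok filter with numeric conversions might look like:

filter {
  grok {
    # COMBINEDAPACHELOG is a stock Logstash pattern; replace it if the JBoss format differs.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  mutate {
    # Store the status code and byte count as integers for later statistics;
    # a response-time field, if present, could be converted to "float" the same way.
    convert => { "response" => "integer" "bytes" => "integer" }
  }
}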

Listing Volume Mount Information For Each Container in Docker

The docker inspect command returns container information in JSON format. When you want to get specific objects from the returned JSON array, the --format (or -f) option formats the output using Go's text/template package. Sometimes I just want to get the source and destination folders of the volume mounts for every container. I have written the following bash script to achieve this:
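The script itself is not shown in this excerpt. A minimal sketch of the approach, iterating over running containers only, could look like this:

#!/bin/bash
# Print the volume mounts (source -> destination) of every running container.
for container in $(docker ps --format '{{.Names}}'); do
  echo "== ${container} =="
  # .Mounts holds one entry per volume or bind mount, with Source and Destination fields.
  docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' "$container"
done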


Keeping TableSpace Statistics with Graphite

Most of the time, monitoring the usable free space of Oracle tablespaces is helpful, especially for production systems. Keeping that statistical data for some time is also meaningful, so you can see how much new data enters the database.
With the following bash script, each tablespace's free size can be sent to the Graphite database: just query the Oracle data dictionary views (dba_tablespaces, dba_data_files, dba_free_space), then send each value to Graphite using netcat.
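The script is not included in this excerpt. A minimal sketch of the idea could look like the following; the credentials, Graphite host and temporary file are assumptions, and the metric name is chosen to match the schema pattern shown below:

#!/bin/bash
# Push the free megabytes of each tablespace to Graphite over the plaintext protocol.
GRAPHITE_HOST="graphite.example.com"   # assumption
GRAPHITE_PORT=2003
NOW=$(date +%s)

sqlplus -s monitor/secret <<'EOF' > /tmp/tbs_free.txt
set pagesize 0 feedback off heading off
SELECT tablespace_name || ' ' || ROUND(SUM(bytes) / 1024 / 1024)
  FROM dba_free_space
 GROUP BY tablespace_name;
EXIT;
EOF

while read TBS FREE_MB; do
  [ -n "$TBS" ] || continue
  echo "ora_tbls.${TBS}.free_space_mb ${FREE_MB} ${NOW}" | nc -w 1 "$GRAPHITE_HOST" "$GRAPHITE_PORT"
done < /tmp/tbs_free.txt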


On the Graphite side, the whisper schema definitions in carbon's storage-schemas.conf must be updated as in the following example. This file is scanned for changes every 60 seconds, so there is no need to reload any service.
[oracle_tablespace_free_space]
pattern = ^ora_tbls.*.free_space_mb$
retentions = 10m:90d
I am using Grafana to visualize the metrics.