
MySQL server memory variables

join_buffer_size: (PER SESSION) Controls the amount of memory allocated for joins that cannot use an index on the join columns and must therefore scan the joined table. One buffer is allocated per table joined this way, so a single multi-table query may allocate several.
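
A minimal way to inspect and adjust it for one session only (the 4 MB value is purely illustrative, not a recommendation):

-- Current value in bytes, for this session and globally
SELECT @@session.join_buffer_size, @@global.join_buffer_size;
-- Raise it for the current session only
SET SESSION join_buffer_size = 4 * 1024 * 1024;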

key_buffer_size: (GLOBAL) (MyISAM-only) Controls the amount of memory allocated to the MyISAM index key cache.
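
One common way to judge whether the key cache is sized well is to compare index lookups served from memory with those that had to hit disk:

-- Key_read_requests = index block lookups, Key_reads = lookups that missed
-- the cache and went to disk; a rising miss ratio suggests a larger key cache
SHOW GLOBAL STATUS WHERE Variable_name IN ('Key_read_requests', 'Key_reads');
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';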

innodb_buffer_pool_size: (GLOBAL) (InnoDB-only) Controls the amount of memory allocated to the InnoDB buffer pool, which caches both clustered index (table data) pages and secondary index pages.
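
A quick check of the configured size and how full the pool is (whether the variable can be resized without a restart depends on the MySQL version):

-- Configured size in bytes
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
-- Free vs. total pages gives a rough idea of utilization
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';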

innodb_additional_mem_pool_size: (GLOBAL) (InnoDB-only) Controls the amount of memory allocated to the buffer storing the InnoDB internal data dictionary.

innodb_log_buffer_size: (GLOBAL) (InnoDB-only) Controls the amount of memory allocated to the buffer that holds InnoDB redo (write-ahead) log entries before they are flushed to disk.
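
One signal that this buffer is too small is the Innodb_log_waits counter, which increments when a transaction has to wait for a log buffer flush before it can write its entries:

SHOW GLOBAL VARIABLES LIKE 'innodb_log_buffer_size';
SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';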

query_cache_size: (GLOBAL) Controls the amount of memory allocated to the Query Cache.
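
Whether the query cache is actually earning its memory can be judged from its counters (on MySQL versions that still ship the query cache; it was removed in 8.0):

SHOW GLOBAL VARIABLES LIKE 'query_cache%';
-- Qcache_hits shows cache effectiveness; frequent Qcache_lowmem_prunes
-- means the cache is evicting entries for lack of space
SHOW GLOBAL STATUS LIKE 'Qcache%';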

read_buffer_size: (PER SESSION) Controls the amount of memory allocated to the connecting thread in order to process a table scan.

read_rnd_buffer_size: (PER SESSION) Controls the amount of memory allocated to the buffer used to read previously sorted results.

sort_buffer_size: (PER SESSION) Controls the amount of memory allocated to the buffer used to sort result sets (e.g. for ORDER BY and GROUP BY) before the sorted rows are returned to the calling client.
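
Because the three read/sort buffers above are per-session, a common pattern is to keep the global defaults modest and raise them only for a heavy reporting connection; the sizes here are illustrative, not recommendations:

SET SESSION read_buffer_size = 1 * 1024 * 1024;
SET SESSION read_rnd_buffer_size = 2 * 1024 * 1024;
SET SESSION sort_buffer_size = 4 * 1024 * 1024;
-- Confirm what this session will actually use
SHOW SESSION VARIABLES LIKE '%buffer_size';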

thread_stack: (PER SESSION) Controls the default stack memory allocated for each connecting thread.

tmp_table_size: (GLOBAL, PER SESSION) Controls the maximum size an in-memory temporary table may reach before MySQL converts it into an on-disk MyISAM table; the effective limit is the smaller of tmp_table_size and max_heap_table_size.
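
The ratio of on-disk to in-memory temporary tables shows whether this limit is being hit in practice; since the effective ceiling is the smaller of the two variables, they are usually raised together:

SHOW GLOBAL VARIABLES WHERE Variable_name IN ('tmp_table_size', 'max_heap_table_size');
-- Compare Created_tmp_disk_tables with Created_tmp_tables
SHOW GLOBAL STATUS LIKE 'Created_tmp%';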

thread_cache_size: (GLOBAL) Determines the number of thread objects MySQL keeps cached for reuse by new connections, mitigating the cost of creating a thread per connection.
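
Its effectiveness shows up in the Threads_created counter: if it keeps climbing while connections come and go, new connections are missing the cache and paying the thread-creation cost each time:

SHOW GLOBAL VARIABLES LIKE 'thread_cache_size';
SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_cached', 'Threads_created', 'Connections');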
