Set threadpool tuning and fstype

The Elasticsearch cluster was running with the default thread pool
settings, and while functional, it would start rejecting work under
heavy load. This change sizes the thread pools following the upstream
guidelines for a more performant cluster. The cluster now uses niofs as
the default index store so that the kernel can better manage index I/O
instead of relying solely on memory-mapped files.

Change-Id: I0239f1622c42cb25c21de69fd0ad5dc9e78ed6c5
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
Kevin Carter 2018-06-08 12:29:35 -05:00
parent 3a2328af00
commit 3f73621393
GPG Key ID: 9443251A787B9FB3


@@ -18,6 +18,11 @@ path.data: /var/lib/elasticsearch
#path.logs: /var/lib/elasticsearch/logs/
path.logs: /var/log/elasticsearch/
# Set the global default index store. More information on these settings can be
# found here:
# <https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-store.html>
index.store.type: niofs
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
@@ -81,3 +86,17 @@ gateway.recover_after_nodes: {{ master_node_count | int // 2 }}
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
# Thread pool settings. For more on this see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html>
{% set thread_pool_size = ansible_processor_cores * ((ansible_processor_threads_per_core > 0) | ternary(ansible_processor_threads_per_core, 1)) %}
thread_pool:
search:
size: {{ thread_pool_size }}
queue_size: {{ thread_pool_size * 64 }}
index:
size: {{ thread_pool_size }}
queue_size: {{ thread_pool_size * 128 }}
bulk:
size: {{ thread_pool_size }}
queue_size: {{ thread_pool_size * 256 }}
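As a sanity check, the sizing the Jinja expression computes can be mirrored in a small sketch. The core and thread counts below are hypothetical stand-ins for the `ansible_processor_cores` and `ansible_processor_threads_per_core` facts:

```python
def thread_pool_size(cores: int, threads_per_core: int) -> int:
    """Mirror the template: count SMT threads only when the fact reports > 0."""
    return cores * (threads_per_core if threads_per_core > 0 else 1)

# Hypothetical host: 8 cores, 2 hardware threads per core
size = thread_pool_size(8, 2)
queues = {"search": size * 64, "index": size * 128, "bulk": size * 256}
print(size, queues)  # 16 {'search': 1024, 'index': 2048, 'bulk': 4096}
```

On a live node, the values Elasticsearch actually applied can be confirmed with the cat API, e.g. `GET _cat/thread_pool?v&h=name,size,queue_size`.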