openstack-ansible-ops/elk_metrics_6x/createElasticIndexes.yml
Kevin Carter b6343c57a4
Convert logstash groks to a multi-pipeline setup
The logstash groks were running inline using the legacy method, which
lexically sorts all logstash filter files and loads them in order. While
this works, it forces all data to travel through every filter.
This change makes use of logstash's multi-pipeline capabilities,
using a distributor and fork pattern. This lets data flow through
logstash more quickly and keeps it from blocking whenever there's an
issue with an output plugin.
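
As a rough illustration, a pipelines.yml along these lines implements
the distributor and fork pattern (a minimal sketch, assuming Logstash
>= 6.3 for pipeline-to-pipeline communication; the pipeline ids, port,
and routing conditional are hypothetical, not the values shipped by
this change):

    # Illustrative pipelines.yml: one distributor pipeline routes events
    # to per-type fork pipelines over internal pipeline addresses.
    - pipeline.id: distributor
      queue.type: persisted
      config.string: |
        input { beats { port => 5044 } }
        output {
          if "syslog" in [tags] {
            pipeline { send_to => ["syslog-fork"] }
          } else {
            pipeline { send_to => ["fallback-fork"] }
          }
        }
    - pipeline.id: syslog-fork
      config.string: |
        input { pipeline { address => "syslog-fork" } }
        filter { grok { match => { "message" => "%{SYSLOGLINE}" } } }
        output { elasticsearch { hosts => ["127.0.0.1:9200"] } }
    - pipeline.id: fallback-fork
      config.string: |
        input { pipeline { address => "fallback-fork" } }
        output { elasticsearch { hosts => ["127.0.0.1:9200"] } }

Because each fork pipeline has its own queue, a stalled output in one
fork no longer backs up events destined for the others.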

Events are fingerprinted using SHA1 when there's a message and a UUID
when not. This ensures we're not duplicating log entries, which will
help speed up transactions and further reduce the storage required.
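
A filter along these lines captures that logic (a sketch only; the
pipeline id, the HMAC key, and the document_id wiring are assumptions
used for illustration):

    # Illustrative dedupe pipeline: SHA1-fingerprint events that carry a
    # message, fall back to a UUID otherwise, and use the result as the
    # Elasticsearch document id.
    - pipeline.id: dedupe-example
      config.string: |
        input { pipeline { address => "dedupe-example" } }
        filter {
          if [message] {
            fingerprint {
              source => "message"
              method => "SHA1"
              key => "static-key"  # hypothetical HMAC key; any constant works
              target => "[@metadata][fingerprint]"
            }
          } else {
            uuid { target => "[@metadata][fingerprint]" }
          }
        }
        output {
          elasticsearch {
            hosts => ["127.0.0.1:9200"]
            document_id => "%{[@metadata][fingerprint]}"
          }
        }

Writing the fingerprint to the document_id means a re-shipped line
overwrites its earlier copy instead of indexing a duplicate.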

Change-Id: I38268e33b370da0f1e186ecf65911d4a312c3e6a
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
2018-07-27 12:04:05 -05:00

---
- name: Create/Setup known indexes in Elasticsearch
  hosts: "elastic-logstash[0]"
  become: true
  vars_files:
    - vars/variables.yml
  environment: "{{ deployment_environment_variables | default({}) }}"
  pre_tasks:
    - include_tasks: common_task_data_node_hosts.yml
      tags:
        - always
  tasks:
    - name: Create basic indexes
      uri:
        url: http://127.0.0.1:9200/{{ item.name }}
        method: PUT
        body: "{{ item.index_options | to_json }}"
        status_code: 200,400
        body_format: json
      register: elk_indexes
      until: elk_indexes is success
      retries: 3
      delay: 5
      with_items:
        - name: "osprofiler-notifications"
          index_options:
            settings:
              index:
                codec: "best_compression"
                mapping:
                  total_fields:
                    limit: "10000"
                refresh_interval: "5s"
        - name: "_all/_settings?preserve_existing=true"
          index_options:
            index.refresh_interval: "10s"
        - name: "_all/_settings?preserve_existing=true"
          index_options:
            index.queries.cache.enabled: "true"
            indices.queries.cache.size: "5%"
        - name: "_all/_settings"
          index_options:
            index.number_of_replicas: "{{ elasticsearch_number_of_replicas }}"
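
    # Hypothetical follow-up tasks (illustrative, not part of this change):
    # read the settings back to confirm what was applied, since
    # preserve_existing=true leaves any value that already exists untouched.
    - name: Fetch current index settings
      uri:
        url: http://127.0.0.1:9200/_all/_settings
        method: GET
        return_content: true
      register: elk_settings

    - name: Show current index settings
      debug:
        var: elk_settings.json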