Collectd-ceilometer-plugin is essential for more detailed metrics
collection, smarter scheduling and service assurance.
Change-Id: I8da572980de370517ec120d745ad1d36e316b465
Implements: blueprint collectd-ceilometer-plugin
Added an ansible role for influxdb.
Introduced host groups for monitoring and influxdb and assigned the
role to them. Monitoring is deployed on a separate node called
monitoring01 by default.
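As a rough illustration (the :children layout below is an assumption;
the group names and the monitoring01 host come from this change), the
inventory could define the new groups like:

    [monitoring]
    monitoring01

    [influxdb:children]
    monitoring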
Co-Authored-By: zhubingbing <zhubingbing10@gmail.com>
Change-Id: If2465a14b18c6c3fd657af587a0b85f6b7a0191a
Partially-Implements: Blueprint performance-monitoring
This change introduces four new parameters to make it possible to use
an existing elasticsearch service for central logging (an illustrative
configuration is sketched below):
* elasticsearch_address - address of the elasticsearch server
* elasticsearch_protocol - protocol (HTTP/HTTPS) used by the
  elasticsearch server
* enable_elasticsearch - whether to deploy the elasticsearch container
* enable_kibana - whether to deploy the kibana container
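For example, pointing central logging at an external Elasticsearch
could look roughly like this in globals.yml (values are illustrative
only, not taken from this change):

    enable_elasticsearch: "no"
    enable_kibana: "no"
    elasticsearch_address: "es.example.com"
    elasticsearch_protocol: "https"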
Closes-bug: #1584861
Change-Id: Ia1ff9ae8b6d9929c3826da02693d1e2fc9ea2522
Remove unnecessary options from the group_vars/all.yml file.
* removed some cinder.conf options like volume_backend_name,
  iscsi_helper, iscsi_protocol etc. These values can be configured in
  a custom cinder.conf file, so there is no need to export them as
  global variables (see the sketch below).
* removed the meaningless iscsi_ip_address option, which is not used
  by the LVM driver
* force-start the iscsi-related services when
  enable_cinder_backend_lvm is yes
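As a hedged sketch (the backend section name and values are
illustrative, not the template shipped by Kolla), such settings could
live in a custom cinder.conf override instead:

    [lvm-1]
    volume_group = cinder-volumes
    volume_backend_name = lvm-1
    iscsi_helper = tgtadm
    iscsi_protocol = iscsi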
TrivialFix
Change-Id: Ifcbfdad15e4d68bc5f20fc77e0315a09983ef022
Previously, Kolla did not support the Neutron LBaaS functionality.
Only LBaaSv2 is supported in Mitaka. Additional information can
be found here:
http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html
Magnum uses Neutron LBaaS to provide high availability for the COE API
and etcd endpoints within a bay. Therefore, Neutron LBaaS is required
for Kolla to support Magnum.
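Assuming the usual Kolla pattern of an enable_* switch in globals.yml
(the exact variable name is an assumption here), operators would turn
the feature on with something like:

    enable_neutron_lbaas: "yes"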
Co-Authored-By: Serguei Bezverkhi <sbezverk@cisco.com>
Partial-Bug: #1551992
Change-Id: I05360b7c447c601fcb3c2b6b2a913ef5cc0f3a1b
This partially implements iSCSI and LVM2 support for Cinder in Kolla
and adds the integration with the Kolla infrastructure.
Change-Id: I5b7d59163518080f38aec0c00617440de0763f1d
Implements: blueprint iscsi-lvm2-docker
Currently the delegate_to does not take effect, so the neutron role
creation is attempted once on the first server and is skipped. The
re-ordering of hosts in site.yml seems to make the first host be one
inside the neutron-server group, yielding the expected results. This
patch needs to be revisited as soon as a version of ansible is chosen
that fixes the issues with delegate_to.
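For context, the failing pattern is roughly of this shape (a
simplified, hypothetical task, not the actual play):

    # delegate_to is ignored by the affected ansible version, so the
    # task effectively runs (and is skipped) on the play's first host
    # instead of a neutron-server host.
    - name: Creating the neutron keystone service and endpoint
      command: /usr/local/bin/register_neutron_endpoint.sh
      delegate_to: "{{ groups['neutron-server'][0] }}"
      run_once: True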
Co-Authored-By: Steven Dake <stdake@cisco.com>
Co-Authored-By: Vikram Hosakote <vhosakot@cisco.com>
Co-Authored-By: Nate Potter <nathaniel.potter@intel.com>
Co-Authored-By: Ganesh Mahalingam <ganesh.mahalingam@intel.com>
Change-Id: Ia712b323aa9d750d470a11ee899ab1b3054a903f
Partial-Bug: #1546789
Heka depends on haproxy and keepalived being present to communicate
with ElasticSearch. If we start ElasticSearch prior to haproxy and
keepalived, the number of errors in heka is reduced.
Change-Id: Id2c742ea572c6450a371421e21f34aa69355bb8b
Partial-Bug: #1560779
The in-process cache for keystone tokens has been deprecated due to
"inconsistent results and high memory usage", with the expectation
that we switch to memcached_servers if we want to stay performant.
Add the memcache_servers option to the [cache] section of the
appropriate services, as the [DEFAULT]/memcache_servers option was
deprecated.
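As a rough example (server addresses are placeholders), the affected
configs would gain something like:

    [cache]
    backend = oslo_cache.memcache_pool
    enabled = True
    memcache_servers = 192.0.2.11:11211,192.0.2.12:11211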
TrivialFix
Related-Id: Ied2b88c8cefe5655a88d0c2f334de04e588fa75a
Change-Id: Ic971bdddc0be3338b15924f7cc0f97d4a3ad2440
This patch includes changes related to integrating Heka with
Elasticsearch and Kibana.
The main change is the addition of a Heka ElasticSearchOutput plugin
to make Heka send the logs it collects to Elasticsearch.
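A minimal sketch of such an output section (the field values are
illustrative, not the exact configuration added by this change):

    [elasticsearch_output]
    type = "ElasticSearchOutput"
    message_matcher = "Type == 'log'"
    server = "http://<elasticsearch VIP>:9200"
    encoder = "ESJsonEncoder"
    use_buffering = true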
Since Logstash is not used, the enable_elk deploy variable is renamed
to enable_central_logging.
If enable_central_logging is false then Elasticsearch and Kibana are
not started, and Heka won't attempt to send logs to Elasticsearch.
By default enable_central_logging is set to false. If
enable_central_logging is set to true after deployment then the Heka
container needs to be recreated (for Heka to get the new
configuration).
The Kibana configuration used property names that are deprecated in
Kibana 4.2. This is changed to use non-deprecated property names.
Previously, logs read from files and from Syslog had a different Type
in Heka. This is changed to always use "log" for the Type. In this
way just one index instead of two is used in Elasticsearch, making
things easier for the user on the visualization side.
The HAProxy configuration is changed to add entries for Kibana. The
Kibana server is now accessible via the internal VIP, and also via
the external VIP if there is one configured.
The HAProxy configuration is also changed to add an entry for
Elasticsearch, so Elasticsearch is now accessible via the internal
VIP. Heka uses that channel to communicate with Elasticsearch.
Note that currently the Heka logs include "Plugin
elasticsearch_output" errors when Heka starts. This occurs when Heka
starts processing logs while Elasticsearch is not yet started. These
are transient errors that go away once Elasticsearch is ready, and
with buffering enabled on the ElasticSearchOutput plugin the logs are
buffered and then retransmitted when Elasticsearch becomes ready.
Change-Id: I6ff7a4f0ad04c4c666e174693a35ff49914280bb
Implements: blueprint central-logging-service
The generic driver for Manila needs the neutron agents and
OVS/Linuxbridge running on the same node as manila_share. This is
necessary when DHSS (Driver Handles Share Servers) is set to "True",
so that manila_share can talk with the NFS manager.
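For reference, the generic backend is typically configured with
something like the following in manila.conf (a sketch; the section
name is illustrative):

    [generic]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    driver_handles_share_servers = True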
Change-Id: I21904659b1789fa71118401bfb6ac2227ae564da
Partially-Implements: blueprint enable-manila-containers
Part of the ELK stack. Includes Dockerfiles for both CentOS and
Ubuntu.
Change-Id: I9f76adf084cd4f68e29326112b76ffd02b5adada
Partially-implements: blueprint central-logging-service
*** Requires Docker 1.10, which is released ***
Documentation will be in the next patch. You must set the following
in your docker.service daemon control file for mount propagation to
work:
[Service]
MountFlags=shared
======================================================================
Thanks to mount propagation in Docker 1.10 we can finally use thin
containers! This is extremely useful to operators since they can now
access the network namespaces from the hosts (outside the neutron
container). Additionally, it makes it easier to implement the VPN
agent and other services.
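For instance, an operator can now inspect a router namespace directly
from the host (a generic example; the namespace id is a placeholder):

    ip netns list
    ip netns exec qrouter-<router-uuid> ip addr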
The neutron containers and the neutron role are brought up to the
standards of the new Kolla, complete with drop-root and ansible
formatting updates.
The ip_wrapper.py script was (thankfully) not needed, so it has been
removed from the repo.
Partially-Implements: blueprint upgrade-neutron
Change-Id: Iaf5555283240457e1912459f397a6393d886fba1
Part of the ELK stack. Includes Dockerfiles for both CentOS and
Ubuntu.
Change-Id: I1d955a5c51e416cc572eb2c9b4c57982a1d6ab67
Partially-implements: blueprint central-logging-service
Previous work on Watcher added the Docker images; this change adds
the ansible configuration.
There is support for HA via haproxy to balance across the Watcher API
hosts.
There is also a hook into nova.conf to conditionally add Nova compute
host metrics via Ceilometer if Watcher is enabled. This defaults to
disabled.
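Operators would opt in via globals.yml with something like (the
variable name is assumed from the usual Kolla enable_* convention):

    enable_watcher: "yes"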
Change-Id: I8763528bb6ff12943b810212c71396d2d7cf6836
Partial-bug: #1598929
Partially-implements: bp watcher
Signed-off-by: Dave Walker (Daviey) <email@daviey.com>
Over time, some files throughout the project have had their
permissions changed to include an executable bit. They should not
have this bit set.
TrivialFix
Change-Id: I1748b5bde813a0fcac36aeecdfd83245b8ee5be3
Without this the haproxy role doesn't have the facts it needs to
render its template, resulting in the following error:
TASK: [haproxy | Copying over config(s)] ***************************
fatal: [control01] => {'msg': "AnsibleUndefinedVariable: One or more
undefined variables: 'dict object' has no attribute u'ansible_eno1'",
'failed': True}
This is similar to the fix applied previously for other services in
I99b7dbebd5a6193e192ee258ddf576d18db90ed7.
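One generic way to guarantee those facts exist is to gather facts for
every host in a play that runs before the haproxy role, e.g. (an
illustrative sketch, not necessarily the exact fix applied here):

    - hosts: all
      gather_facts: true
      tasks: []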
Change-Id: Idb7fa8763cff64ad761c5b8a1a3bf92a27f4f501
Closes-Bug: 1524738
Without this the haproxy role doesn't have the facts it needs to
render its template, resulting in the following error:
TASK: [haproxy | Copying over config(s)] ***************************
fatal: [control01] => {'msg': "AnsibleUndefinedVariable: One or more
undefined variables: 'dict object' has no attribute u'ansible_eno1'",
'failed': True}
This is similar to the fix applied previously for other services in
I99b7dbebd5a6193e192ee258ddf576d18db90ed7.
Change-Id: I279374e8861c02e3aa12988b885be7361e0cf2f5
Closes-Bug: 1524739
Adjust all the configs to list all the rabbitmq hosts rather than
running rabbitmq through the VIP. This is made possible by the
clusterer change, which has already merged.
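In practice this means the oslo.messaging sections list every broker,
roughly like (addresses are placeholders):

    [oslo_messaging_rabbit]
    rabbit_hosts = 192.0.2.11:5672,192.0.2.12:5672,192.0.2.13:5672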
Change-Id: I5db48f5f10ec68f4c8863a29bc13984f6845a4f9
Partially-Implements: blueprint rabbitmq-clusterer
As the bug mentions, this file is complex and has caused problems in
the past. It will likely cause problems in the future.
Change-Id: I28db6a38406ce0dd38340319eea7ef9134682007
Closes-Bug: #1512582
Unfortunately there was no way to avoid memcache for consoleauth, so
we might as well take advantage of it for Horizon as well.
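On the consoleauth side this amounts to pointing nova at the memcached
servers, along the lines of (a sketch assuming the Mitaka-era
[DEFAULT] option; addresses are placeholders):

    [DEFAULT]
    memcached_servers = 192.0.2.11:11211,192.0.2.12:11211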
Change-Id: Idd338a025b031f6b50fe0c9f03c2c8d862f9d4c0
Closes-Bug: #1504606
Closes-Bug: #1504800