In the Xena cycle it was decided to remove the Monasca
Grafana fork due to lack of maintenance. This commit removes
the service and provides a limited workaround using the
Monasca Grafana datasource with vanilla Grafana.
Depends-On: I9db7ec2df050fa20317d84f6cea40d1f5fd42e60
Change-Id: I4917ece1951084f6665722ba9a91d47764d3709a
Historically, the Monasca Log Transformer has been used for log
standardisation and processing. For example, logs from different
sources may use slightly different error levels such as WARN, 5,
or WARNING. Monasca Log Transformer is a place where these could
be 'squashed' into a single error level to simplify log searches
based on labels such as these.
However, in Kolla Ansible, we do this processing in Fluentd so
that the simpler Fluentd -> Elastic -> Kibana pipeline also
benefits. This helps to avoid spreading out log parsing
configuration over many services, with the Fluentd Monasca output
plugin being yet another potential place for processing (which
should be avoided). It therefore makes sense to remove this
service entirely, and squash any existing configuration which
can't be moved to Fluentd into the Log Persister service. In other
words, by removing this pipeline we don't lose any functionality;
we encourage log processing to take place in Fluentd, or at least
outside of Monasca, and we make significant gains in efficiency
by removing a topic from Kafka which contains a copy of all logs
in transit.
Finally, users forwarding logs from outside the control plane,
e.g. from tenant instances, should be encouraged to process the
logs at the point of sending using whichever framework they are
forwarding them with. This makes sense, because all Logstash
configuration in Monasca is only accessible by control plane
admins. A user can't typically do any processing inside Monasca,
with or without this change.
Change-Id: I65c76d0d1cd488725e4233b7e75a11d03866095c
Config plays do not need to check containers. This avoids skipping
tasks during the genconfig action.
Ironic and Glance rolling upgrades are handled specially.
Swift and Bifrost do not use the handlers at all.
Partially-Implements: blueprint performance-improvements
Change-Id: I140bf71d62e8f0932c96270d1f08940a5ba4542a
Including tasks has a performance penalty when compared with importing
tasks. If the include has a condition associated with it, then the
overhead of the include may be lower than the overhead of skipping all
imported tasks. In the case of the check-containers.yml include, the
included file only has a single task, so the overhead of skipping this
task will not be greater than the overhead of the include. It
therefore makes sense to switch to use import_tasks there.
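As a sketch of the switch (not the exact diff), the change in a role's
tasks file looks like this:

    # Before: dynamic include, which pays the include overhead on
    # every run.
    #   - include_tasks: check-containers.yml
    #
    # After: static import; skipping the single imported task is at
    # least as cheap as performing the include.
    - import_tasks: check-containers.yml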
Partially-Implements: blueprint performance-improvements
Change-Id: I65d911670649960708b9f6a4c110d1a7df1ad8f7
The Monasca Log API has been removed and in this change we switch
to using the unified API. If dedicated log APIs are required then
this can be supported through configuration. Out of the box the
Monasca API is used for both logs and metrics which is envisaged to
work for most use cases.
In order to use the unified API for logs, we need to disable the
legacy Kafka client. We also rename the Monasca API config file
to remove a warning about using the old style name.
Depends-On: https://review.opendev.org/#/c/728638
Change-Id: I9b6bf5b6690f4b4b3445e7d15a40e45dd42d2e84
Refactor service configuration to use the copy certificates task. This
reduces code duplication and simplifies implementing encryption of
backend HAProxy traffic for individual services.
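As a rough illustration only - the shared task file name
(copy-certs.yml) and the project_services variable are assumptions
for this sketch, not the exact code introduced here - a service's
config tasks can pull in the shared logic like this:

    # Illustrative sketch: reuse one shared task file for copying
    # TLS certificates instead of duplicating it in every role.
    - name: Copying over TLS certificates for the service
      include_tasks: copy-certs.yml
      vars:
        project_services: "{{ glance_services }}"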
Change-Id: I0474324b60a5f792ef5210ab336639edf7a8cd9e
When a certificate file under /etc/kolla/certificates/ is changed,
the certificate inside the container is not updated. This change
makes kolla-ansible deploy restart the affected containers when a
certificate has changed.
Partially-Implements: blueprint custom-cacerts
Change-Id: Iaac6f37e85ffdc0352e8062ae5049cc9a6b3db26
Signed-off-by: yj.bai <bai.yongjun@99cloud.net>
When kolla_copy_ca_into_containers is set to "yes", the Certificate
Authority in /etc/kolla/certificates will be copied into service
containers to enable trust for that CA. This is especially useful when
the CA is self-signed, and would not be trusted by default.
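For example, operators can enable this in globals.yml (a minimal
sketch; the CA file layout under /etc/kolla/certificates is
deployment specific):

    # globals.yml
    # Copy the CA certificate(s) from /etc/kolla/certificates on the
    # deployment host into all service containers so that the CA is
    # trusted inside them.
    kolla_copy_ca_into_containers: "yes"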
Partially-Implements: blueprint custom-cacerts
Change-Id: I4368f8994147580460ebe7533850cf63a419d0b4
As part of the effort to implement Ansible code linting in CI
(using ansible-lint) - we need to implement recommendations from
ansible-lint output [1].
One of them is to stop using local_action in favor of delegate_to -
to increase readability and match the style of typical Ansible
tasks.
[1]: https://review.opendev.org/694779/
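For illustration, a sketch of the transformation (the task name and
path are made up for the example):

    # Before: local_action hides the module name and arguments.
    #   - name: Ensure the local log directory exists
    #     local_action:
    #       module: file
    #       path: /tmp/kolla
    #       state: directory
    #
    # After: a normal task, delegated to the deployment host.
    - name: Ensure the local log directory exists
      file:
        path: /tmp/kolla
        state: directory
      delegate_to: localhost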
Partially implements: blueprint ansible-lint
Change-Id: I46c259ddad5a6aaf9c7301e6c44cd8a1d5c457d3
Sometimes as cloud admins, we want to only update code that is running
in a cloud, without doing anything else. Make an action in
kolla-ansible that allows us to do that.
Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
A user may want to define and use Logstash patterns. This
commit adds support to copy them into the Monasca Log
Transformer container. In the future support could be
added for other Logstash containers.
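A rough sketch of the copy step; the source directory under the node
custom config is an assumption for the example:

    # Copy user-provided Logstash patterns from the deployment host
    # into the generated config for the Log Transformer container.
    - name: Copying over custom Logstash patterns
      copy:
        src: "{{ item }}"
        dest: "{{ node_config_directory }}/monasca-log-transformer/patterns/"
      with_fileglob:
        - "{{ node_custom_config }}/monasca/logstash_patterns/*"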
Change-Id: Id8cde14af6dc7f49714f6b1cb878882d0048d293
Currently, we have a lot of logic for checking if a handler should run,
depending on whether config files have changed and whether the
container configuration has changed. As rm_work pointed out during
the recent haproxy refactor, these conditionals are typically
unnecessary - we can rely on Ansible's handler notification system
to only trigger handlers when they need to run. This removes a lot
of error-prone code.
This patch removes conditional handler logic for all services. It is
important to ensure that we only notify handlers when necessary,
because without these checks in place every notification will trigger
a restart of the containers.
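The pattern we rely on looks roughly like this (a sketch, not a
verbatim extract; handler parameters are trimmed):

    # tasks/config.yml - notify the handler; Ansible only runs it if
    # the template task reports "changed".
    - name: Copying over haproxy.cfg
      template:
        src: haproxy.cfg.j2
        dest: "{{ node_config_directory }}/haproxy/haproxy.cfg"
      notify:
        - Restart haproxy container

    # handlers/main.yml
    - name: Restart haproxy container
      become: true
      kolla_docker:
        action: restart_container
        name: haproxy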
Implements: blueprint simplify-handlers
Change-Id: I4f1aa03e9a9faaf8aecd556dfeafdb834042e4cd
The results from the find operation need to be registered per host,
because they depend on the host which runs the search. This bug
impacts users specifying custom plugins for specific hosts.
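A sketch of the requirement (names and paths illustrative):

    # Each host registers its own result; adding run_once here would
    # give every host the file list found for a single host.
    - name: Find custom Monasca plugins
      find:
        paths: "{{ node_custom_config }}/monasca/plugins"
      register: monasca_plugins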
Change-Id: I41b2986b2f4ccd8fdc6553e83737e4106b6a2c07
The find module searches paths on the managed server. Since the role
path and the custom Kolla config are located on the deployment node,
and the deployment node is not considered to be a managed server, the
Monasca plugin files cannot be found. After deployment, the container
running the Monasca Agent collector gets stuck in a restart loop due
to the missing plugin files.
The problem does not occur if the deployment was started from a
managed server (e.g. OSC). It does occur if the deployment was started
from a separate deployment server - a common case.
This change enforces running the find module locally on the
deployment node.
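A minimal sketch of the fix (names and paths illustrative):

    # Run the search on the deployment node, where the custom Kolla
    # config lives, regardless of which host the play is targeting.
    - name: Find Monasca Agent plugin files
      find:
        paths: "{{ node_custom_config }}/monasca/agent_plugins"
      delegate_to: localhost
      register: agent_plugins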
Change-Id: Ia25daafe2f82f5744646fd2eda2d255ccead814e
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
The Monasca Grafana fork allows users to log into Grafana with their
OpenStack user credentials and see metrics associated with their
OpenStack project. The long term goal is to enable Keystone support
in upstream Grafana, but this work seems to have stalled.
Partially-Implements: blueprint monasca-grafana
Change-Id: Icc04613b2571c094ae23b66d0bcc38b58c0ee4e1
The Monasca Agent collects metrics and, with this change, is deployed
across the control plane. These metrics are collected into an OpenStack
project. It supports configuring a small number of plugins, which can
be extended in later commits. It also makes the Monasca Agent credentials
available to other roles, such as the common role, to allow forwarding
logs to Monasca.
Partially-Implements: blueprint monasca-roles
Change-Id: I76b34fc5e1c76407a45fcf272268d5798b473ca2
A small number of services set the recurse flag when they create
their config directory. This can change the permissions of files within
the directory, which are later set back to the original state. The
side effect is that the service is then restarted, even though the
net change to the config files amounts to nothing. The expected
behaviour is that a service only restarts if the config *has*
changed. This patch fixes this issue.
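A sketch of the corrected pattern (service name illustrative):

    # Ensure the config directory exists without recursing into it,
    # so the permissions of existing config files - and therefore the
    # "changed" status that triggers a restart - are left untouched.
    - name: Ensuring config directories exist
      file:
        path: "{{ node_config_directory }}/monasca-api"
        state: directory
        mode: "0770"
      # note: no "recurse: yes" here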
Change-Id: Ib6f1ca7b416247f8d455fb25892f4a3b27de03ba
Closes-Bug: 1800480
Jira, Slack and possibly other plugins allow custom templates
for defining the format of notifications. This change lets you
provide these in a templates folder which is copied into the
monasca-notification container.
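A rough sketch of how the templates could be picked up; the source
directory name is an assumption for the example:

    # Copy user-provided notification templates from the deployment
    # host into the generated config for the container.
    - name: Copying over Monasca Notification templates
      copy:
        src: "{{ item }}"
        dest: "{{ node_config_directory }}/monasca-notification/templates/"
      with_fileglob:
        - "{{ node_custom_config }}/monasca/notification_templates/*"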
Partially-Implements: blueprint monasca-roles
Change-Id: Ibc5ba3944d51f6c8ffc8bdc9ed60f43dd91ca7e0
The Monasca Persister reads metrics from Kafka and stores them
in a configurable time series database.
Change-Id: I8166b32bfb1583098ab8318a5f38d25bddb81e89
Partially-Implements: blueprint monasca-roles
The Monasca Notification engine generates notifications, such as
Slack messages, from alarms.
Change-Id: I84861d5feefe6b6f38acc4dd71e94c386d40b562
Partially-Implements: blueprint monasca-roles
Monasca Thresh is a Storm topology which generates alerts from
metric streams according to alarms defined via the Monasca API.
This change runs the thresholder in local mode, which means that
the log output for the topology is directed to stdout and the
topology is restarted if the container is restarted. A future
change will improve the log collection and introduce a better
way of checking that the topology is running for multi-node
clusters.
Change-Id: I063dca5eead15f3cec009df62f0fc5d857dd4bb0
Partially-Implements: blueprint monasca-roles
The log metrics service generates metrics from log messages
which allows further analysis and alerting to be performed
on them. Basic configuration is provided so that metrics
are generated for high severity log levels such as error and
warning.
Change-Id: I45cc17817c716296451f620f304c0b1108162a56
Partially-Implements: blueprint monasca-roles
This commit applies resource constraints to a few more OpenStack
services. A follow-up commit will apply constraints to the remaining
set of services.
Depends-on: Icafa54baca24d2de64238222a5677b9d8b90e2aa
Change-Id: I39004f54281f97d53dfa4b1dbcf248650ad6f186
This is a Logstash component which reads processed logs from Kafka
and writes them to Elasticsearch (or some other backend supported by
Logstash).
Ingesting the logs from this service with Fluentd will be covered under
a different commit.
Change-Id: I2d722991ab2072c54c4715507b19a4c9279f921b
Partially-Implements: blueprint monasca-roles
The Monasca Log Transformer takes raw, unstandardised logs from one
Kafka topic, standardises them with whatever rules the operator wants
to use, and then writes them to a standardised logs topic in Kafka. It
is currently implemented as a Logstash config file.
Since Kolla does a fairly good job of standardising logs, this service
does very little processing. However, when other sources of logs
are used, it may be useful to add rules to the Transformer, particularly
if it's not possible to standardise the logs at source.
Ingesting the logs from this service with Fluentd will be covered under
a different commit.
Change-Id: I31cbb7e9a40a848391f517a56a67e3fd5bc12529
Partially-Implements: blueprint monasca-roles
Add become to all tasks that use the module "kolla_docker"
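For example (a sketch; most container parameters are elided):

    # Privilege escalation is now requested per task rather than
    # relying on play-level become.
    - name: Restart monasca-api container
      become: true
      kolla_docker:
        action: restart_container
        name: monasca_api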
Change-Id: I4309c4011687b88ec31d739fd8f834fe2326ff10
Partial-Implements: blueprint ansible-specific-task-become
Deploys the Monasca API with mod_wsgi + Apache.
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Partially-Implements: blueprint monasca-roles
Change-Id: I3e03762217fbef1fb0cbff6239abb109cbec226b