Currently the haproxy role maintains both haproxy and keepalived; in
follow-up changes proxysql is also added.
This patch is *only* renaming/moving things to the more prominent role
loadbalancer, and also moving service-specific templates into a
subdirectory.
This was done only to produce better diffs in the follow-up changes.
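As a rough sketch of the intent (an illustrative excerpt, not verbatim
from the tree), the site play would swap the old role for the new one
while keeping the existing service tags working:

    # site.yml (illustrative sketch only)
    - name: Apply role loadbalancer
      hosts: loadbalancer
      roles:
        - role: loadbalancer
          tags:
            - haproxy
            - keepalived
            - loadbalancer
          when: enable_haproxy | bool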
Change-Id: I1d39d5bcaefc4016983bf267a2736b742cc3a555
Adds the HAcluster Ansible role. This role contains a High Availability
clustering solution composed of Corosync, Pacemaker and Pacemaker Remote.
HAcluster is added as a helper role for Masakari, which requires it for
its host monitoring, allowing it to provide HA to instances on a failed
compute host.
Kolla hacluster images were merged in [1].
[1] https://review.opendev.org/#/c/668765/
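A minimal sketch of enabling the role alongside Masakari (assuming the
new role follows the usual enable_* flag pattern):

    # /etc/kolla/globals.yml (illustrative)
    enable_masakari: "yes"
    enable_hacluster: "yes"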
Change-Id: I91e5c1840ace8f567daf462c4eb3ec1f0c503823
Implements: blueprint ansible-pacemaker-support
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
To use 3rd-party Octavia providers (such as the OVN provider), an
octavia-driver-agent container must be running to expose those
providers for use.
The OVN CI job has been extended with deploying Octavia and testing the
OVN Load Balancer.
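For illustration, enabling the OVN provider would look roughly like
this (the octavia_provider_drivers/octavia_provider_agents names and
value formats are assumptions):

    # /etc/kolla/globals.yml (illustrative)
    enable_octavia: "yes"
    octavia_provider_drivers: "ovn:OVN provider"
    octavia_provider_agents: "ovn"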
Closes-Bug: #1903506
Depends-On: https://review.opendev.org/c/openstack/kolla/+/771191
Change-Id: Ibafa8b7307981f2a51e630cc113d18af6162171c
The common role was previously added as a dependency to all other roles.
It would set a fact after running on a host to avoid running twice. This
had the nice effect that deploying any service would automatically pull
in the common services for that host. When using tags, any services with
matching tags would also run the common role. This could be both
surprising and sometimes useful.
When using Ansible at large scale, there is a penalty associated with
executing a task against a large number of hosts, even if it is skipped.
The common role introduces some overhead, just in determining that it
has already run.
This change extracts the common role into a separate play, and removes
the dependency on it from all other roles. New groups have been added
for cron, fluentd, and kolla-toolbox, similar to other services. This
changes the behaviour in the following ways:
* The common role is now run for all hosts at the beginning, rather than
prior to their first enabled service
* Hosts must be in the necessary group for each of the common services
in order to have that service deployed. This is mostly to avoid
deploying on localhost or the deployment host
* If tags are specified for another service e.g. nova, the common role
will *not* automatically run for matching hosts. The common tag must
be specified explicitly
The last of these is probably the largest behaviour change. While it
would be possible to determine which hosts should automatically run the
common role, it would be quite complex, and would introduce some
overhead that would probably negate the benefit of splitting out the
common role.
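For illustration, the new groups can be populated by making them
children of a common group in the inventory (group names are from this
change; the children layout shown is one reasonable arrangement, not
necessarily the shipped default):

    # inventory (illustrative)
    [common:children]
    control
    network
    compute
    storage

    [cron:children]
    common

    [fluentd:children]
    common

    [kolla-toolbox:children]
    common

With tags, the common role must now be requested explicitly, e.g.
"kolla-ansible deploy --tags common,nova".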
Partially-Implements: blueprint performance-improvements
Change-Id: I6a4676bf6efeebc61383ec7a406db07c7a868b2a
The Monasca Log API has been removed and in this change we switch
to using the unified API. If dedicated log APIs are required then
this can be supported through configuration. Out of the box the
Monasca API is used for both logs and metrics which is envisaged to
work for most use cases.
In order to use the unified API for logs, we need to disable the
legacy Kafka client. We also rename the Monasca API config file
to remove a warning about using the old style name.
Depends-On: https://review.opendev.org/#/c/728638
Change-Id: I9b6bf5b6690f4b4b3445e7d15a40e45dd42d2e84
Zun has a new component "zun-cni-daemon" which should be
deployed on every compute node. It is basically an implementation
of CNI (Container Network Interface) that performs the neutron
port binding.
If users are using the capsule (pod) API, the recommended deployment
option is to use "cri" as the capsule driver. This basically means
using a CRI runtime (i.e. the CRI plugin for containerd) to support
capsules (pods). A CRI runtime needs a CNI plugin, which is what
the "zun-cni-daemon" provides.
The configuration is based on the Zun installation guide [1].
It consists of the following steps:
* Configure the containerd daemon on the host. The "zun-compute"
  container will use gRPC to communicate with this service.
* Install the "zun-cni" binary on the host. The containerd process
  will invoke this binary to call the CNI plugin.
* Run a "zun-cni-daemon" container. The "zun-cni" binary will
  communicate with this container via HTTP.
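A minimal sketch of the deployment switches involved (the
containerd_configure_for_zun name is assumed to follow the existing
variable conventions):

    # /etc/kolla/globals.yml (illustrative)
    enable_zun: "yes"
    containerd_configure_for_zun: "yes"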
Relevant patches:
Blueprint: https://blueprints.launchpad.net/zun/+spec/add-support-cri-runtime
Install guide: https://review.opendev.org/#/c/707948/
Devstack plugin: https://review.opendev.org/#/c/705338/
Kolla image: https://review.opendev.org/#/c/708273/
[1] https://docs.openstack.org/zun/latest/install/index.html
Depends-On: https://review.opendev.org/#/c/721044/
Change-Id: I9c361a99b355af27907cf80f5c88d97191193495
Kolla Ansible was missing the vitrage-persistor service,
required by Vitrage for data storage.
Depends on fixing the availability of the Kolla image.
Change-Id: I8158ba66b8b624f6bcb89da9c990a30a68b7187b
Depends-On: Id5e143636f9a81e7294b775f3d8b9134bee58054
Closes-Bug: #1869319
This patch introduces optional backend encryption for the Keystone
service. When used in conjunction with enabling TLS for service API
endpoints, network communication will be encrypted end to end, from
the client through HAProxy to the Keystone service.
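A minimal sketch of turning on both layers (flag names assumed from the
add-ssl-internal-network work):

    # /etc/kolla/globals.yml (illustrative)
    kolla_enable_tls_internal: "yes"
    kolla_enable_tls_backend: "yes"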
Change-Id: I6351147ddaff8b2ae629179a9bc3bae2ebac9519
Partially-Implements: blueprint add-ssl-internal-network
The Kolla Ansible Ceph deployment mechanism was deprecated in Train [1].
This change removes the Ansible code and associated CI jobs.
[1]: https://review.opendev.org/669214
Change-Id: Ie2167f02ad2f525d3b0f553e2c047516acf55bc2
This allows users to supply an Elasticsearch Curator actions file
to manage log retention [1]. Curator then runs on a cron job, which
defaults to every day. A default curator actions file is provided,
which can be customised by the end user if required.
[1] https://www.elastic.co/guide/en/elasticsearch/client/curator/current/actionfile.html
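For illustration, a small actions file deleting indices older than 30
days could look like the following (the flog- index prefix and the
custom config path are assumptions, not defaults from this change):

    # {{ node_custom_config }}/elasticsearch/elasticsearch-curator-actions.yml
    actions:
      1:
        action: delete_indices
        description: Delete log indices older than 30 days
        options:
          ignore_empty_list: true
        filters:
          - filtertype: pattern
            kind: prefix
            value: flog-
          - filtertype: age
            source: name
            direction: older
            timestring: '%Y.%m.%d'
            unit: days
            unit_count: 30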
Change-Id: Ide9baea9190ae849e61b9d8b6cff3305bdcdd534
This patch adds initial support for deploying multiple Nova cells.
Splitting a nova-cell role out from the Nova role allows a more granular
approach to deploying and configuring Nova services.
A new enable_cells flag has been added that enables support for
multiple cells via the introduction of a super conductor in addition to
cell-specific conductors. When this flag is not set (the default), nova
is configured in the same manner as before - with a single conductor.
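Opting in is a one-line change (flag name as introduced here):

    # /etc/kolla/globals.yml
    enable_cells: "yes"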
The nova role now deploys the global services:
* nova-api
* nova-scheduler
* nova-super-conductor (if enable_cells is true)
The nova-cell role handles services specific to a cell:
* nova-compute
* nova-compute-ironic
* nova-conductor
* nova-libvirt
* nova-novncproxy
* nova-serialproxy
* nova-spicehtml5proxy
* nova-ssh
This patch does not support using a single cell controller for managing
more than one cell. Support for sharing a cell controller will be added
in a future patch.
This patch should be backwards compatible and is tested by existing CI
jobs. A new CI job has been added that tests a multi-cell environment.
ceph-mon has been removed from the play hosts list as it is not
necessary - delegate_to does not require the host to be in the play.
Documentation will be added in a separate patch.
Partially-Implements: blueprint support-nova-cells
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I810aad7d49db3f5a7fd9a2f0f746fd912fe03917
This commit follows up the work in Kolla to deploy and configure the
Prometheus blackbox exporter.
An example blackbox-exporter module has been added (disabled by default)
called os_endpoint. This allows for the probing of endpoints over HTTP
and HTTPS. This can be used to monitor that OpenStack endpoints return
a status code of either 200 or 300 and include the word 'versions' in
the payload.
This change introduces a new variable `prometheus_blackbox_exporter_endpoints`.
Currently no defaults are specified because the configuration is heavily
dependent on the deployment.
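As an illustration only (the exact entry format is an assumption, and
the address is a placeholder), an endpoint probe might be declared as:

    # /etc/kolla/globals.yml (illustrative)
    prometheus_blackbox_exporter_endpoints:
      - "keystone:os_endpoint:https://203.0.113.10:5000"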
Co-authored-by: Jack Heskett <Jack.Heskett@gresearch.co.uk>
Change-Id: I36ad4961078d90e2fd70c9a3368f5157d6fd89cd
The project has been retired and there will be no Train release [1].
This patch removes Neutron LBaaS support in Kolla.
[1] https://review.opendev.org/#/c/658494/
Change-Id: Ic0d3da02b9556a34d8c27ca21a1ebb3af1f5d34c
Qinling is an OpenStack project to provide "Function as a Service".
This project aims to provide a platform to support serverless functions.
Change-Id: I239a0130f8c8b061b531dab530d65172b0914d7c
Implements: blueprint ansible-qinling-support
Story: 2005760
Task: 33468
This patch implements support for the elasticsearch-exporter in
Kolla Ansible.
The configuration and prechecks are reused from the other exporters.
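A minimal sketch of enabling it (the flag name is assumed to follow the
other Prometheus exporters):

    # /etc/kolla/globals.yml (illustrative)
    enable_prometheus: "yes"
    enable_prometheus_elasticsearch_exporter: "yes"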
Depends-On: Id138f12e10102a6dd2cd8d84f2cc47aa29af3972
Change-Id: Iae0eac0179089f159804490bf71f1cf2c38dde54
Kolla Ansible does not yet support Cyborg, so this patch adds it.
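A minimal sketch of enabling the new service (flag name assumed to
follow the enable_* convention):

    # /etc/kolla/globals.yml (illustrative)
    enable_cyborg: "yes"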
Implements: blueprint add-cyborg-to-kolla-ansible
Depends-On: I497e67e3a754fccfd2ef5a82f13ccfaf890a6fcd
Change-Id: I6f7ae86f855c5c64697607356d0ff3161f91b239
The Vitrage Collector service has been removed from Vitrage in change
Ie713456b2df96e24d0b15d2362a666162bfb4300.
Change-Id: I45023940c1d2573bfed49d4ce3fac16ed2d559e4
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
Co-Authored-By: Kien Nguyen <kiennt65@viettel.com.vn>
This patch implements initial support for the
openstack-exporter [0] in the Kolla Ansible
Prometheus monitoring system.
The configuration and prechecks are reused from the other
exporters, and a new template is provided for generating
an os-client-config file required by the exporter.
The default scrape interval is 60 seconds, but it can
be adjusted via a configuration option.
[0] https://github.com/Linaro/openstack-exporter
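A minimal sketch of enabling it and adjusting the scrape interval (the
interval variable name and value format are assumptions):

    # /etc/kolla/globals.yml (illustrative)
    enable_prometheus: "yes"
    enable_prometheus_openstack_exporter: "yes"
    prometheus_openstack_exporter_interval: "120s"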
Change-Id: I4a34c4bb56e74b5cd544972cbd6540d9acb6e4a1