A combination of durable queues and classic queue mirroring can be used
to provide high availability of RabbitMQ. However, these options should
only be used together; otherwise, the system will become unstable. Using
the flag ``om_enable_rabbitmq_high_availability`` will either enable
both options at once, or neither of them.
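As a hedged sketch, enabling this in ``globals.yml`` would look like::

    # globals.yml: enables both durable queues and classic queue
    # mirroring together (disabling it disables both)
    om_enable_rabbitmq_high_availability: true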
There are some queues that should not be mirrored:
* ``reply`` queues (these have a single consumer and TTL policy)
* ``fanout`` queues (these have a TTL policy)
* ``amq`` queues (these are auto-delete queues, with a single consumer)
An exclusionary pattern is used in the classic mirroring policy. This
pattern is ``^(?!(amq\\.)|(.*_fanout_)|(reply_)).*``
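For illustration (the queue names are made up), the negative
lookahead skips the excluded queues and mirrors everything else::

    amq.gen-Jzty20BRgKO     -> not mirrored (amq, auto-delete)
    reply_7f41ab6e          -> not mirrored (reply, single consumer)
    compute_fanout_5a2d     -> not mirrored (fanout, TTL policy)
    notifications.info      -> mirrored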
Change-Id: I51c8023b260eb40b2eaa91bd276b46890c215c25
The ``[oslo_messaging_rabbit] heartbeat_in_pthread`` config option
is set to ``true`` for wsgi applications to allow the RabbitMQ
heartbeats to function. For non-wsgi applications it is set to ``false``
as it may otherwise break the service [1].
[1] https://docs.openstack.org/releasenotes/oslo.messaging/zed.html#upgrade-notes
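As a minimal sketch, the rendered configuration for a wsgi
application would contain::

    [oslo_messaging_rabbit]
    # run the heartbeat thread as a pthread so it keeps working
    # under wsgi servers; set to false for non-wsgi services
    heartbeat_in_pthread = true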
Change-Id: Id89bd6158aff42d59040674308a8672c358ccb3c
Kolla Ansible is switching to OpenSearch and is dropping support for
deploying Elasticsearch, because the final OSS release of
Elasticsearch has reached its end of life.
Monasca is affected because it uses both Logstash and Elasticsearch.
Whilst it may continue to work with OpenSearch, Logstash remains an
issue.
In the absence of any renewed interest in the project, we remove
support for deploying it. This helps to reduce the complexity
of log processing configuration in Kolla Ansible, freeing up
development time.
Change-Id: I6fc7842bcda18e417a3fd21c11e28979a470f1cf
Rendering {{ openstack_service_workers }} for the workers of each
OpenStack service is not enough: several services have to have more
workers because more requests are sent to them.
This patch adds a default workers value for each service, using
{{ openstack_service_workers }} as the default, so the value can be
overridden in host vars per server, as shown below.
Nothing changes for the normal user.
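An illustrative host vars override (the per-service variable name
``nova_api_workers`` is assumed here to follow the usual
``<service>_workers`` pattern)::

    # host_vars/controller01.yml: give one busy API more workers
    # while other services keep the global default
    openstack_service_workers: 5
    nova_api_workers: 10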
Change-Id: Ifa5863f8ec865bbf8e39c9b2add42c92abe40616
Fixes an issue where access rules failed to validate:
Cannot validate request with restricted access rules. Set
service_type in [keystone_authtoken] to allow access rule validation
I've used the values from the endpoint. This was mostly a
straightforward copy and paste, except:
- versioned endpoints, e.g. cinderv3, where I stripped the version
- monasca has multiple endpoints associated with a single service. For
  this, I concatenated logging and monitoring to be logging-monitoring.
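For example (a sketch; the actual value is taken from the
corresponding endpoint), Cinder's config would gain::

    [keystone_authtoken]
    # service name from the catalog with the version stripped
    # (cinderv3 -> cinder)
    service_type = cinder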
Closes-Bug: #1965111
Change-Id: Ic4b3ab60abad8c3dd96cd4923a67f2a8f9d195d7
Following up on [1].
The three variables were only introducing noise after we removed
the reliance on Keystone's admin port.
[1] I5099b08953789b280c915a6b7a22bdd4e3404076
Change-Id: I3f9dab93042799eda9174257e604fd1844684c1c
In services which use the Apache HTTP server to service HTTP requests,
there exists a TimeOut directive [1] which defaults to 60 seconds. APIs
which come under heavy load, such as Cinder, can sometimes exceed
this, which results in an HTTP 504 Gateway timeout, or similar.
However, the
request can still be serviced without error. For example, if Nova calls
the Cinder API to detach a volume, and this operation takes longer
than the shorter of the two timeouts, Nova will emit a stack trace
with a 504 Gateway timeout. At some time later, the request to detach
the volume will succeed. The Nova and Cinder DBs then become
out-of-sync with each other, and frequently DB surgery is required.
Although strictly this category of bugs should be fixed in OpenStack
services, it is not realistic to expect this to happen in the short
term. Therefore, this change makes it easier to set the Apache HTTP
timeout via a new variable.
An example of a related bug is here:
https://bugs.launchpad.net/nova/+bug/1888665
Whilst this timeout can currently be set by overriding the WSGI
config for individual services, this change makes it much easier.
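A hedged example, assuming the new variable is named
``kolla_httpd_timeout``::

    # globals.yml: raise the Apache TimeOut for heavily loaded APIs
    kolla_httpd_timeout: 120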
Change-Id: Ie452516655cbd40d63bdad3635fd66693e40ce34
Closes-Bug: #1917648
Add Elasticsearch and Prometheus collector backend support for
CloudKitty:
* Fix various remaining typos.
* Fix trailing character on reno.
* Enable Elasticsearch when selected as the CloudKitty backend.
* Add a check for Elasticsearch index creation when Elasticsearch is
  required.
* Add a release note.
* Fix release note line length issue.
Change-Id: I18f3d8f2e10a2996b2ebf92733a1770bef548bda
Closes-bug: #1895945
When the internal VIP is moved in the event of a failure of the active
controller, OpenStack services can become unresponsive as they try to
talk with MariaDB using connections from the SQLAlchemy pool.
It has been argued that OpenStack doesn't really need to use connection
pooling with MariaDB [1]. This commit reduces the use of connection
pooling via two configuration options:
- max_pool_size is set to 1 to allow only a single connection in the
pool (it is not possible to disable connection pooling entirely via
oslo.db, and max_pool_size = 0 means unlimited pool size)
- lower connection_recycle_time from the default of one hour to 10
seconds, which means the single connection in the pool will be
recreated regularly
These settings have shown better reactivity of the system in the event
of a failover.
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html
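As a sketch, the resulting oslo.db options in each service's
configuration would be::

    [database]
    # a single pooled connection (0 would mean unlimited, and
    # pooling cannot be disabled entirely)
    max_pool_size = 1
    # recycle the pooled connection every 10 seconds instead of hourly
    connection_recycle_time = 10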
Change-Id: Ib6a62d4428db9b95569314084090472870417f3d
Closes-Bug: #1896635
This change adds support for encryption of communication between
OpenStack services and RabbitMQ. Server certificates are supported, but
currently client certificates are not.
The kolla-ansible certificates command has been updated to support
generating certificates for RabbitMQ for development and testing.
RabbitMQ TLS is enabled in the all-in-one source CI jobs, or when
the Zuul 'tls_enabled' variable is true.
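A minimal sketch, assuming the toggle is named
``rabbitmq_enable_tls``::

    # globals.yml: serve AMQP over TLS with the generated certificates
    rabbitmq_enable_tls: "yes"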
Change-Id: I4f1d04150fb2b5af085b762890092f87ae6076b5
Implements: blueprint message-queue-ssl-support
It was found to be useless in [1].
It was one of the remaining usages of distro_python_version.
Note that Freezer and Horizon still use python_path (and hence
distro_python_version) for different purposes.
[1] https://review.opendev.org/675822
Change-Id: I6d6d9fdf4c28cb2b686d548955108c994b685bb1
Partially-Implements: blueprint drop-distro-python-version
This patch introduces a global keep-alive timeout value for services
that use httpd + WSGI to handle HTTP/HTTPS requests. The default
value is one minute.
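A hedged example, assuming the variable is named
``kolla_httpd_keep_alive``::

    # globals.yml: keep-alive timeout (seconds) for httpd+wsgi services
    kolla_httpd_keep_alive: 60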
Change-Id: Icf7cb0baf86b428a60a7e9bbed642999711865cd
Partially-Implements: blueprint add-ssl-internal-network
Some CloudKitty API responses include a Location header using http
instead of https. Seen with ``openstack rating module enable hashmap``.
Change-Id: I11158bbfd2006e3574e165b6afc9c223b018d4bc
Closes-Bug: #1888544
The use of default(omit) is for module parameters, not templates. We
define a default value for openstack_cacert, so it should never be
undefined anyway.
Change-Id: Idfa73097ca168c76559dc4f3aa8bb30b7113ab28
Currently the WSGI configuration for binary images uses python2.7
site-packages in some places. This change uses distro_python_version to
select the correct python path.
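A sketch of the idea (the alias and script path are illustrative)::

    # before: hard-coded python2.7 site-packages
    WSGIScriptAlias / /usr/lib/python2.7/site-packages/acme/wsgi.py
    # after: the version is taken from distro_python_version
    WSGIScriptAlias / /usr/lib/python{{ distro_python_version }}/site-packages/acme/wsgi.py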
Change-Id: Id5f3f0ede106498b9264942fa0399d7c7862c122
Partially-Implements: blueprint python-3
Include a reference to the globally configured Certificate Authority
in all services. Services use the CA to verify HTTPS connections.
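As a hedged sketch, in a service's config this might render as::

    [keystone_authtoken]
    # cafile = {{ openstack_cacert }} in the template
    cafile = /etc/ssl/certs/ca-certificates.crt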
Change-Id: I38da931cdd7ff46cce1994763b5c713652b096cc
Partially-Implements: blueprint support-trusted-ca-certificate-file
Currently we don't put global Apache error logs into /var/log/kolla;
this change adds statements that redirect those logs there.
The log file names are adapted so that they are picked up by the
OpenStack WSGI logging fluentd input config and the existing
logrotate cron entries.
Change-Id: I21216e688a1993239e3e81411a4e8b6f13e138c2
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
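A sketch of the filter semantics with a sample address::

    {{ '2001:db8::1' | put_address_in_context('url') }}
    # -> [2001:db8::1]
    {{ '2001:db8::1' | put_address_in_context('memcache') }}
    # -> inet6:[2001:db8::1]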
Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname
  (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost
  exclusion (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel
  network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround
  (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- Remove neutron-server ml2_type_vxlan/vxlan_group setting as it is
  not used (let's avoid any confusion) and could break setups without
  proper multicast routing if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on ipv6 addresses
TODO:
- ovs-dpdk grabs an ipv4 network address (w/ prefix len / submask);
  not supported, invalid by default because neutron_external has no
  address. No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi: Xen is not supported too well; this would require
  working with XenAPI facts.
- rp_filter setting: this would require meddling with ip6tables (there
  is no sysctl param). By default nothing is dropped; unlikely we
  really need it.
- ironic dnsmasq is configured IPv4-only; dnsmasq needs DHCPv6 options
  and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker
  like we currently do, see: https://github.com/moby/moby/issues/39033
  (docker_registry; docker API 400 - invalid reference format).
  Workaround: use hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname also resolves to
  IPv4. This is due to old RabbitMQ versions available in images:
  IPv4 is preferred by default and may fail in the IPv6-only scenario.
  This should be no problem in real life as IPv6-only is indeed
  IPv6-only. Also, when new RabbitMQ (3.7.16/3.8+) makes it into
  images, this will no longer be relevant as we supply all the
  necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after the VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Backport: stein
In the Stein release, cloudkitty switched the default storage backend
from sqlalchemy to influxdb. In kolla-ansible stein configuration, we
did not explicitly set the storage backend, and so we automatically
picked up this change. However, prior to
https://review.opendev.org/#/c/615928/ we did not have full support for
InfluxDB as a storage backend, and so this has broken the Rocky-Stein
upgrade (https://bugs.launchpad.net/kolla-ansible/+bug/1838641), which
fails with this during the DB sync::

    ERROR cloudkitty InfluxDBClientError: get_list_retention_policies()
    requires a database as a parameter or the client to be using a database
This change synchronises our default with cloudkitty's (influxdb), and
also provides an upgrade transition to create the influxdb database.
We also move the cloudkitty_storage_backend variable to
group_vars/all.yml, since it is used to determine whether to enable
influxdb.
Finally, the section name in cloudkitty.conf was incorrect - it was
storage_influx, but should be storage_influxdb.
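Putting it together, a correct cloudkitty.conf fragment would look
like this sketch (the ``database`` option name is illustrative)::

    [storage]
    backend = influxdb

    [storage_influxdb]
    # note: storage_influxdb, not storage_influx
    database = cloudkitty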
Change-Id: I71f2ed11bd06f58e141d222e2709835b7ddb2c71
Closes-Bug: #1838641
This proposal will add support to Kolla-Ansible for Cloudkitty
InfluxDB storage system deployment. The feature of InfluxDB as the
storage backend for Cloudkitty was created with the following commit:
https://github.com/openstack/cloudkitty/commit/c4758e78b49386145309a44623502f8095a2c7ee
Problem Description
===================
With the addition of support for InfluxDB in Cloudkitty, which is
achieving general availability via the Stein release, we need a
method to easily configure/support this storage backend system via
Kolla-ansible.
Kolla-ansible is already able to deploy and configure an InfluxDB
system. Therefore, this proposal will use the InfluxDB deployment
configured via Kolla-ansible to connect to CloudKitty and use it as a
storage backend.
If we do not provide a method for users (operators) to manage the
Cloudkitty storage backend via Kolla-ansible, they have to apply
these changes/configurations manually (or via some other set of
automated scripts), which creates a distributed set of configuration
files and configuration scripts with different versioning schemes and
life cycles.
Proposed Change
===============
Architecture
------------
We propose a flag that users can use to make Kolla-ansible configure
CloudKitty to use InfluxDB as the storage backend system. When
enabling this flag, Kolla-ansible will also automatically enable the
deployment of InfluxDB.
CloudKitty will be configured according to [1] and [2]. We will also
externalize the "retention_policy", "use_ssl", and "insecure" options
to allow fine-grained configuration by operators. All of these
options take effect only when explicitly configured; when they are
not set, the default value/behavior defined in Cloudkitty is used.
Moreover, when "use_ssl" is set to "true", the user will be able to
set "cafile" to a custom trusted CA file. Again, if these variables
are not set, the defaults in Cloudkitty are used.
Implementation
--------------
We need to introduce a new variable called
`cloudkitty_storage_backend`. Valid options are `sqlalchemy` or
`influxdb`. The default value in Kolla-ansible is `sqlalchemy` for
backward compatibility. Then, the first step is to change the
definition of the following variable:
`/ansible/group_vars/all.yml: enable_influxdb: "{{ enable_monasca |
bool }}"`
We also need to enable InfluxDB when CloudKitty is configured to use
it as the storage backend, as sketched below. Afterwards, we need to
create tasks in the CloudKitty role to create the InfluxDB schema and
adjust the configuration files accordingly.
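A sketch of the adjusted definition, following the condition
described above::

    # ansible/group_vars/all.yml: deploy InfluxDB for Monasca, or when
    # CloudKitty selects it as the storage backend
    enable_influxdb: "{{ enable_monasca | bool or (cloudkitty_storage_backend == 'influxdb') }}"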
Alternatives
------------
The alternative would be to execute the configurations manually or
handle it via a different set of scripts and configurations files,
which can become cumbersome with time.
Security Impact
---------------
None identified by the author of this spec
Notifications Impact
--------------------
Operators that are already deploying CloudKitty with InfluxDB as
storage backend would need to convert their configurations to
Kolla-ansible (if they wish to adopt Kolla-ansible to execute these
tasks).
Also, deployments (OpenStack environments) that were created with
Cloudkitty using storage v1 will need to migrate all of their data to
v2 before enabling InfluxDB as the storage system.
Other End User Impact
---------------------
None.
Performance Impact
------------------
None.
Other Deployer Impact
---------------------
New configuration options will be available for CloudKitty.
* cloudkitty_storage_backend
* cloudkitty_influxdb_retention_policy
* cloudkitty_influxdb_use_ssl
* cloudkitty_influxdb_cafile
* cloudkitty_influxdb_insecure_connections
* cloudkitty_influxdb_name
Developer Impact
----------------
None
Implementation
==============
Assignee
--------
* `Rafael Weingärtner <rafaelweingartne>`
Work Items
----------
* Extend InfluxDB "enable/disable" variable
* Add new tasks to configure Cloudkitty according to the new
variables presented above
* Write documentation and release notes
Dependencies
============
None
Documentation Impact
====================
New documentation for the feature.
References
==========
[1] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/storage.html#influxdb-v2`
[2] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/collector.html#metric-collection`
Change-Id: I65670cb827f8ca5f8529e1786ece635fe44475b0
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
Add the ability for Kolla-Ansible to manage the 'max_workers'
parameter in Cloudkitty. We will use the 'openstack_service_workers'
variable to control the number of workers that Cloudkitty is able to
use.
Change-Id: I2f4e7e5c45d71a7e01d1b743d2eb4850cc339419
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
Cloudkitty has a default metrics.yml file (built into the container)
at /etc/cloudkitty/metrics.yml. We would like to be able to
overwrite/customize these metrics configurations via kolla-ansible.
Cloudkitty can use a custom metrics file via "metrics_conf";
therefore, we are enabling this configuration via Kolla-ansible.
Change-Id: Id9019298482c040be05f540e71dacfdf0bd77469
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
We're duplicating code to build the keystone URLs in nearly every
config, where we've already done it in group_vars. Replace the
redundancy with a variable that does the same thing.
Change-Id: I207d77870e2535c1cdcbc5eaf704f0448ac85a7a
Use <project>_install_type instead of kolla_install_type to set
python_path. For example, when the general kolla_install_type is
'binary' but the user wants to deploy Horizon from 'source', the
Horizon templates would still use
python_path=/usr/share/openstack-dashboard, which is wrong.
Change-Id: Ide6a24e17b1f8ab6506aa5e53f70693706830418
Option auth_uri from group keystone_authtoken is deprecated [1].
Use option www_authenticate_uri from group keystone_authtoken instead.
[1] https://review.openstack.org/#/c/508522/
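A minimal sketch of the change in a service's config (the URL is
illustrative)::

    [keystone_authtoken]
    # replaces the deprecated auth_uri option
    www_authenticate_uri = http://192.0.2.10:5000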
Co-Authored-By: confi-surya <singh.surya64mnnit@gmail.com>
Change-Id: Ifd8527d404f1df807ae8196eac2b3849911ddc26
Closes-Bug: #1761907
The cloudkitty-processor service errors when using the ceilometer
collector, because the ceilometer collector has been removed from the
cloudkitty repo [1].
[1] https://review.openstack.org/#/c/548630/
Change-Id: I13292500c394134c6c0ab0e50727389a47c97007
Closes-Bug: #1774091
Since pbr 1.4.0, the wsgi_scripts entry point is supported; it
generates a WSGI-compatible script.
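For illustration, a pbr setup.cfg entry of this kind (the module
path is hypothetical) produces such a script::

    [entry_points]
    wsgi_scripts =
        cloudkitty-api = cloudkitty.api.app:build_wsgi_app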
Change-Id: I4192258226ec94b667913fd6fe099c4923145ea7
- Barbican
- Ceilometer
- Cloudkitty
- Congress
- Designate
This will copy only YAML or JSON policy files, if they exist.
Change-Id: Iaa19f64073d8bdee948bc2de58e095ca72afc092
Implements: blueprint support-custom-policy-yaml
Co-authored-By: Duong Ha-Quang <duonghq@vn.fujitsu.com>
This commit separates the messaging rpc and notify transports in order
to support separate and different oslo.messaging backends.
This patch:
* adds rpc and notify variables (see the sketch below)
* updates service role conf templates
* adds an example to globals.yml
* adds a release note
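A hedged sketch of the globals.yml example (variable names assumed
from the blueprint)::

    # choose different oslo.messaging backends for RPC and notifications
    om_rpc_transport: "rabbit"
    om_notify_transport: "rabbit"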
Implements: blueprint hybrid-messaging
Change-Id: I34691c2895c8563f1f322f0850ecff98d11b5185
They dropped the cloudkitty-api command line[0], so we should add wsgi
support for cloudkitty-api.
[0] https://review.openstack.org/#/c/366043/
Change-Id: Ie34d4f2d5c303bbd7ac09a8ab9e8d9bdc763c57b
Closes-Bug: #1713879
Cloudkitty doesn't use notifications, as there are no references to
the notification system in the Cloudkitty code base.
Change-Id: I17d276452d3861360feb6030f8622542cc455128
The Cloudkitty processor uses tooz to coordinate multiple processor
processes; otherwise, duplicated billing records would be inserted
into MySQL.
Change-Id: Ifdc1be78afa89499ee4c3bbec5b9db8ddb2929cf
Closes-Bug: #1681160