In some resource-constrained environments, particularly during service
bootstrap, Galera cluster nodes can experience timeouts in inter-node
communication.
This change sets gmcast.peer_timeout based on the Galera Cluster
documentation:
https://galeracluster.com/library/documentation/galera-parameters.html
We are observing peer timeout issues on some CI runs, so we raise the
timeout to PT15S, as in similar Ubuntu charms jobs.
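A minimal sketch of the resulting setting, expressed as a standalone
Ansible task (the file path and section are assumptions, not part of
this change):

    - name: Raise Galera peer timeout (hypothetical standalone task)
      ini_file:
        path: /etc/mysql/conf.d/galera.cnf  # assumed location
        section: mysqld
        option: wsrep_provider_options
        value: "gmcast.peer_timeout=PT15S"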
Change-Id: Id036e41b62a88bab486c35a5f1fde5cfc2fa4803
global_physnet_mtu needs to be set in neutron.conf, because
linuxbridge-agent discovers the underlying vxlan0 interface MTU and
returns an error when creating the VXLAN port.
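As a hedged sketch, the option could be provided through the standard
Kolla config override path (the MTU value of 1450 is an assumption):

    - name: Set global_physnet_mtu for neutron (hypothetical task)
      copy:
        dest: /etc/kolla/config/neutron.conf
        content: |
          [DEFAULT]
          global_physnet_mtu = 1450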
A CentOS 8 job will not be added, because the CentOS 8
iptables-ebtables package is missing the broute (--among-src) table
support required by the linuxbridge agent, see [1].
[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1720637
Change-Id: I6b12f7ba95401d3342359c57ceeee8bec8aefe49
The Kolla-Ansible Ceph deployment mechanism was deprecated in Train [1].
This change removes the Ansible code and associated CI jobs.
[1]: https://review.opendev.org/669214
Change-Id: Ie2167f02ad2f525d3b0f553e2c047516acf55bc2
This switches to python 3 as the remote python interpreter on
Debian/Ubuntu jobs, with CentOS 7 as the only exception still using
python 2.
Also switch to auto-detection of the interpreter, except for CentOS 7;
detection should be based on the interpreter used by ansible-playbook
(python 3).
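A minimal group_vars sketch of the intent (the exact values in this
change may differ):

    # Debian/Ubuntu hosts: let Ansible discover the interpreter.
    ansible_python_interpreter: auto
    # CentOS 7 hosts instead pin python 2 (hypothetical path):
    # ansible_python_interpreter: /usr/bin/python2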
Change-Id: Ie4aff6123dfc7267fe78f4bd736565fb72fe135e
Partially-Implements: blueprint python-3
Adds new CI job definitions for CentOS 8:
- kolla-ansible-centos8-source
- kolla-ansible-centos8-binary
- kolla-ansible-centos8-source-ceph-ansible
- kolla-ansible-centos8-source-cinder-lvm
- kolla-ansible-centos8-source-mariadb
- kolla-ansible-centos8-source-bifrost
- kolla-ansible-centos8-source-zun
- kolla-ansible-centos8-source-swift
- kolla-ansible-centos8-source-scenario-nfv
- kolla-ansible-centos8-source-ironic
- kolla-ansible-centos8-binary-ironic
- kolla-ansible-centos8-source-masakari
- kolla-ansible-centos8-source-cells
The following jobs are added to the check pipeline:
- kolla-ansible-centos8-source
- kolla-ansible-centos8-binary
- kolla-ansible-centos8-source-cinder-lvm
- kolla-ansible-centos8-source-mariadb
- kolla-ansible-centos8-source-zun
- kolla-ansible-centos8-source-swift
- kolla-ansible-centos8-source-scenario-nfv
- kolla-ansible-centos8-source-ironic
- kolla-ansible-centos8-binary-ironic
- kolla-ansible-centos8-source-cells
The following jobs are not yet passing, so they are not added to the
check pipeline:
- kolla-ansible-centos8-source-ceph-ansible
- kolla-ansible-centos8-source-bifrost
- kolla-ansible-centos8-source-masakari
The kolla-ansible-centos8-source job is added to the gate.
Upgrade jobs will be added when CentOS 8 support exists in Train.
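For reference, a minimal sketch of what one of these Zuul job
definitions might look like (the parent, nodeset and vars are
assumptions):

    - job:
        name: kolla-ansible-centos8-source
        parent: kolla-ansible-base  # assumed parent job
        nodeset: kolla-ansible-centos8  # assumed nodeset
        vars:
          base_distro: centos
          install_type: source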
Depends-On: https://review.opendev.org/704337
Depends-On: https://review.opendev.org/704848
Depends-On: https://review.opendev.org/704965
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: Ibd806feee71721b122b77d7eff33228ca1cc2853
Partially-Implements: blueprint centos-rhel-8
To make configuration easier for the user, and to allow non-standard
Ceph authentication IDs, introduce ceph_*_user variables.
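A hedged globals.yml sketch; the exact variable names and defaults may
differ from those introduced here:

    # Hypothetical defaults allowing non-standard Ceph auth IDs.
    ceph_cinder_user: cinder
    ceph_glance_user: glance
    ceph_nova_user: nova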
Change-Id: I24e01c43c826b62b6748d93a498f4b7d8ce9e309
Add a TLS scenario in Zuul to generate self-signed certificates and
to configure TLS to be enabled in the OpenStack deployment.
Change-Id: If10a23dfa67212e843ef26486c9523074cc920e7
Partially-Implements: blueprint custom-cacerts
* Add Zuul centos-source/ubuntu-source ceph-ansible jobs
* Jobs will deploy all Ceph-integrated OpenStack components, i.e.
cinder, glance, nova
* Jobs will utilize the core OpenStack testing script
Depends-On: https://review.opendev.org/685032
Depends-On: https://review.opendev.org/698301
Implements: blueprint ceph-ansible
Change-Id: I233082b46785f74014177f579aeac887a25b2ae2
For the CentOS 7 to 8 transition, we will have a period where both
CentOS 7 and 8 images are available. We differentiate these images via a
tag - the CentOS 8 images will have a tag of train-centos8 (or
master-centos8 temporarily).
To achieve this, and maintain backwards compatibility for the
openstack_release variable, we introduce a new 'openstack_tag' variable.
This variable is based on openstack_release, suffixed with
'openstack_tag_suffix', which is empty except on CentOS 8, where it has
a value of '-centos8'.
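Expressed as a sketch of the resulting defaults (the derivation follows
from the description above; the exact expressions may differ):

    openstack_tag_suffix: ""  # "-centos8" on CentOS 8
    openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"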
Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
Partially-Implements: blueprint centos-rhel-8
This change switches the CI jobs to use python 3 for local execution of
the kolla-ansible commands.
For upgrades, we use python 2 for the previous (Train) deploy, then
reinstall using python 3 for the (Ussuri) upgrade.
NOTE: This is separate from the python interpreter used on remote hosts,
which is configured via ansible_python_interpreter.
Partially Implements: blueprint python-3
Related: blueprint drop-py2-support
Change-Id: I5bdc165f68b7bde1f9ef30fe8216f2a44e6d4706
Continue to reduce the scope of setup_gate.sh. This allows us to more
easily select python 2 or 3.
Change-Id: If2eeeacbbbdf58afb765b4a39772b5a1af7b952b
Partially Implements: blueprint python-3
- Separate upgrade logic into the is_upgrade job var and rename
scenarios to match.
- Rename "ACTION" to "SCENARIO" (as it is a scenario).
- Separate testing of the dashboard (aka Horizon) and increase
its timeout to 5 minutes (CentOS 7 is slow as always).
- Separate initialization of core OpenStack.
- Use the gate setup script from ./tests/
- Remove the useless tox setupenv.
- Do not deploy Heat when not really necessary.
Change-Id: I4fca319ccc3de7188f8b7b44c9c71321e3899467
Resolves a number of TODOs in the CI configuration that relate to
supporting upgrades from the Stein release.
Change-Id: I9bac5c230b82ac7c097fe6ca2556e428abda31a1
Depends-On: https://review.opendev.org/694254
Tests the following operations for MariaDB:
* Stop
* Recovery
Backup and restore will be added in a separate change.
Depends-On: https://review.opendev.org/693329
Change-Id: I836d91554715cce0e82c1bbebb7430c457418b2d
This also enables Placement when Zun is enabled, like Kolla Ansible
already does with Nova.
Change-Id: Id2a09f702e8503b49d2b9e73e06b2ce9f4d168a9
Closes-bug: #1840573
This patch adds initial support for deploying multiple Nova cells.
Splitting a nova-cell role out from the Nova role allows a more granular
approach to deploying and configuring Nova services.
A new enable_cells flag has been added that enables support for
multiple cells via the introduction of a super conductor in addition to
cell-specific conductors. When this flag is not set (the default), nova
is configured in the same manner as before - with a single conductor.
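In globals.yml terms, the flag might look like this sketch (the quoted
boolean style is an assumption):

    # Default: single conductor, as before.
    enable_cells: "no"
    # Set to "yes" for a super conductor plus cell-specific conductors.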
The nova role now deploys the global services:
* nova-api
* nova-scheduler
* nova-super-conductor (if enable_cells is true)
The nova-cell role handles services specific to a cell:
* nova-compute
* nova-compute-ironic
* nova-conductor
* nova-libvirt
* nova-novncproxy
* nova-serialproxy
* nova-spicehtml5proxy
* nova-ssh
This patch does not support using a single cell controller for managing
more than one cell. Support for sharing a cell controller will be added
in a future patch.
This patch should be backwards compatible and is tested by existing CI
jobs. A new CI job has been added that tests a multi-cell environment.
ceph-mon has been removed from the play hosts list as it is not
necessary - delegate_to does not require the host to be in the play.
Documentation will be added in a separate patch.
Partially Implements: blueprint support-nova-cells
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I810aad7d49db3f5a7fd9a2f0f746fd912fe03917
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
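A hedged usage sketch of the new filters (the filter arguments and
variable names are assumptions):

    # Resolve the API network address and wrap it for use in a URL;
    # IPv6 gets brackets, IPv4 passes through unchanged.
    api_url_host: "{{ 'api' | kolla_address | put_address_in_context('url') }}"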
Other changes:
- globals.yml: mention just the IP in the comment
- prechecks/port_checks (api_intf): kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname
  (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion
  (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on the tunnel
  network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd now use the proper NSS database
- MariaDB Galera Cluster WSREP SST mariabackup workaround
  (socat and IPv6)
- Ceph naming workaround in CI
  (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- removal of the neutron-server ml2_type_vxlan/vxlan_group setting,
  as it is not used (let's avoid any confusion) and could break setups
  without proper multicast routing if it started working (it is also
  IPv4-only)
- haproxy upgrade checks for slaves based on IPv6 addresses
TODO:
- ovs-dpdk grabs the IPv4 network address (with prefix length/netmask);
  not supported, invalid by default because neutron_external has no
  address. No idea whether ovs-dpdk works at all at the moment.
- ml2 for xenapi: Xen is not supported too well; this would require
  working with XenAPI facts.
- rp_filter setting: this would require meddling with ip6tables (there
  is no sysctl param). By default nothing is dropped, and it is
  unlikely we really need it.
- ironic dnsmasq is configured IPv4-only; dnsmasq needs DHCPv6 options
  and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker like
  we currently do, see: https://github.com/moby/moby/issues/39033
  (docker_registry; docker API 400 - invalid reference format).
  Workaround: use a hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname also resolves to
  IPv4. This is due to old RabbitMQ versions available in images: IPv4
  is preferred by default and may fail in the IPv6-only scenario. This
  should be no problem in real life, as IPv6-only is indeed IPv6-only.
  Also, when a new RabbitMQ (3.7.16/3.8+) makes it into images, this
  will no longer be relevant, as we supply all the necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after the VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
This is not required since enabling HAProxy over VXLAN [1].
[1] https://review.opendev.org/670690
Change-Id: I239a7c60d6ae0c80640ff10209a80c7a9ca74cd6
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
After modernising the docker configuration
(I1215e04ec15b01c0b43bac8c0e81293f6724f278), we lost our
registry-mirrors configuration in CI, which lets us use a mirror of
Docker Hub.
This change uses the new docker_custom_config variable to configure the
registry mirror.
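A minimal sketch of the CI setting (the mirror URL is a placeholder):

    docker_custom_config:
      registry-mirrors:
        - "http://mirror.example.org:8082/"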
Change-Id: I1430413c12e9d0b59e4f216ff66372de0f3a4f21
VXLAN is necessary to run HA in CI (due to the floating VIP address
handled by keepalived).
It also turned out to be required for private IPv6 address assignments.
This patch is based on linux bridge rather than OVS to avoid problems
with OVS deployed in containers.
This patch enables haproxy in multinode jobs.
It includes saving of linux networking details.
It makes DASHBOARD_URL agree with OS_AUTH_URL, properly using the
pre-upgrade value for testing.
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Depends-on: https://review.opendev.org/683068
Depends-on: https://review.opendev.org/682957
Change-Id: I66888712da80c3d6f84ee4949762961664d3adea
This lets us control the upgrade process entirely from the
current branch.
Change-Id: Ic8c39e415846596c23dae93c2839375a24e8b888
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
This commit follows up the work in Kolla by deploying and configuring
the Prometheus blackbox exporter.
An example blackbox exporter module, called os_endpoint, has been added
(disabled by default). It allows probing endpoints over HTTP and HTTPS,
and can be used to monitor that OpenStack endpoints return a status
code of either 200 or 300 and the word 'versions' in the payload.
This change introduces a new variable,
prometheus_blackbox_exporter_endpoints.
Currently no defaults are specified because the configuration is heavily
dependent on the deployment.
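A purely illustrative sketch of what a deployment might set (the list
format, names and URL are all assumptions):

    prometheus_blackbox_exporter_endpoints:
      - name: keystone
        module: os_endpoint
        url: "https://keystone.example.org:5000"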
Co-authored-by: Jack Heskett <Jack.Heskett@gresearch.co.uk>
Change-Id: I36ad4961078d90e2fd70c9a3368f5157d6fd89cd
After the integration with placement [1], we need to configure how
zun-compute is going to work with nova-compute.
* If zun-compute and nova-compute run on the same compute node,
we need to set 'host_shared_with_nova' to true so that Zun
will use the resource provider (compute node) created by nova.
In this mode, containers and VMs could claim allocations against
the same resource provider.
* If zun-compute runs on a node without nova-compute, no extra
configuration is needed. By default, each zun-compute will create
a resource provider in placement to represent the compute node
it manages.
[1] https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management
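For the shared-node case, a hedged sketch (only the variable name and
its meaning come from this description):

    # zun-compute and nova-compute co-located on the same node:
    host_shared_with_nova: true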
Change-Id: I2d85911c4504e541d2994ce3d48e2fbb1090b813
Instead of changing the Docker daemon command line, let's change the
Docker configuration in the /etc/docker/daemon.json file, as it should
be.
Custom Docker options can be set with the 'docker_custom_config'
variable. The old 'docker_custom_option' is still present but should be
avoided.
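A hedged globals.yml sketch; the keys shown are ordinary daemon.json
options chosen for illustration, not defaults from this change:

    docker_custom_config:
      debug: true
      log-opts:
        max-file: "5"
        max-size: "50m"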
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Change-Id: I1215e04ec15b01c0b43bac8c0e81293f6724f278
- Test Zun on CentOS too
- Make etcd changes also trigger Zun jobs (as kuryr and zun changes do)
- Test multinode Zun deployments instead of AIO
  (more likely to break)
- In the Zun scenario, stop configuring docker for legacy swarm mode
  (Zun is not Swarm)
- Separate the test-zun.sh testing script
- Show the appcontainer to see which node it has been started on
Change-Id: I289b1009fe00aedb9b78cbd83298b14da5fd9670
Depends-On: https://review.opendev.org/676736
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
This actually replaces two ad-hoc fixes with a more unified solution
(with a comment for posterity).
Change-Id: I62f57cb489c900f68a0c7aeb3e20e4715c0e2661
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Docker has no restart policy named 'never'; it has 'no'.
This has bitten us already (see [1]) and might bite us again whenever
we want to change the restart policy to 'no'.
This patch makes our docker integration honor all valid restart
policies, and only valid restart policies.
All relevant docker restart policy usages are patched as well.
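A related YAML gotcha worth noting here: bare no parses as the boolean
false, so the value must be quoted. A sketch (the variable name is an
assumption):

    # Valid Docker restart policies: "no", on-failure, always,
    # unless-stopped. There is no 'never'.
    restart_policy: "no"  # quoted: bare no is YAML for boolean false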
I added some FIXMEs that are relevant to the kolla-ansible docker
integration. They are not fixed here so as not to alter behavior.
[1] https://review.opendev.org/667363
Change-Id: I1c9764fb9bbda08a71186091aced67433ad4e3d6
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>