This change switches the CI jobs to use Python 3 for local execution of
the kolla-ansible commands.
For upgrades, we use Python 2 for the previous (Train) deploy, then
reinstall using Python 3 for the (Ussuri) upgrade.
NOTE: This is separate from the Python interpreter used on remote hosts,
which is configured via ansible_python_interpreter.
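For illustration, the remote interpreter can be pinned in inventory group
variables like this (the path and value below are assumptions, not part of
this change):

  # inventory group_vars (sketch)
  ansible_python_interpreter: /usr/bin/python3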
Partially Implements: blueprint python-3
Related: blueprint drop-py2-support
Change-Id: I5bdc165f68b7bde1f9ef30fe8216f2a44e6d4706
Continue to reduce the scope of setup_gate.sh. This allows us to more
easily select Python 2 or 3.
Change-Id: If2eeeacbbbdf58afb765b4a39772b5a1af7b952b
Partially Implements: blueprint python-3
There are a number of critical log messages that we see in CI from time
to time. While these should be fixed, let's not fail jobs unnecessarily.
This change introduces one expected critical message in
placement-api.log:
Failed to fetch token data from identity server
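A hypothetical shape for such an allowlist (the variable name and file
below are invented for illustration; the actual CI scripts may structure
this differently):

  # vars file consumed by the log check (hypothetical)
  expected_critical_log_messages:
    placement-api.log:
      - "Failed to fetch token data from identity server"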
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Related-bug: #1847727
Change-Id: I92ad0be70ed05925612f0c709907ab62280326b8
Adds support for configuration of the Docker client timeout via
'docker_client_timeout'.
This change also increases the default timeout to 120 seconds, as we
sometimes see timeouts in CI and heavily loaded or underpowered
environments. Increasing 'docker_client_timeout' further may be helpful
in cases where Docker reports 'Read timed out'.
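For example, the timeout can be raised in globals.yml (180 is just an
illustrative value):

  # /etc/kolla/globals.yml
  docker_client_timeout: 180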
Change-Id: I73745771078cb2c0ebae2b1d87ba2c4c12958d82
Closes-Bug: #1809844
Separate upgrade logic into the is_upgrade job var and rename scenarios
to match (see the sketch after this list).
Rename "ACTION" to "SCENARIO" (as it is a scenario).
Separate testing of the dashboard (aka Horizon) and increase its timeout
to 5 minutes (CentOS 7 is slow as always).
Separate initialization of core OpenStack.
Use the gate setup script from ./tests/.
Remove the useless tox setupenv.
Do not deploy Heat when not really necessary.
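A minimal sketch of how such job variables could look in a Zuul job
definition (the job name and exact layout are assumptions for
illustration):

  - job:
      name: kolla-ansible-centos-source-upgrade
      vars:
        scenario: core
        is_upgrade: true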
Change-Id: I4fca319ccc3de7188f8b7b44c9c71321e3899467
We fail randomly on check-failure.sh which checks for
containers being down.
Since we share Docker with Zun, the script sees the Zun test container
and may fail when it is stopped but not yet removed.
Change-Id: If8b001f7507663e49e8e535f1889592e5f428ab5
Closes-bug: #1853452
* Deploy services using kolla-ansible deploy
* Reconfigure the image for one or more services to use an invalid config
* Deploy/reconfigure services using kolla-ansible reconfigure
The invalid config could be a wrong docker registry, wrong image name,
wrong tag, etc.
Expected: the restart handler for the service fails, and the old
container is left running.
Actual: the restart handler for the service fails, and the old container
is stopped and removed. This leaves the service in a broken state.
This change fixes the issue by pulling the image if necessary prior to
stopping and removing the container.
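A minimal Ansible sketch of the pull-before-stop ordering (Kolla Ansible
uses its own kolla_docker module; stock modules and variable names are
used here purely for illustration):

  - name: Pull the new image before touching the running container
    docker_image:
      name: "{{ service_image }}"
      source: pull

  - name: Stop and remove the old container only after a successful pull
    docker_container:
      name: "{{ service_container_name }}"
      state: absent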
Change-Id: I85b2a1b224d4c4d85c32c4922a2cd2c41171a1dc
Closes-Bug: #1852572
Resolves a number of TODOs in the CI configuration, providing support
for upgrading from the Stein release.
Change-Id: I9bac5c230b82ac7c097fe6ca2556e428abda31a1
Depends-On: https://review.opendev.org/694254
Tests the following operations for MariaDB:
* Stop
* Recovery
Backup and restore will be added in a separate change.
Depends-On: https://review.opendev.org/693329
Change-Id: I836d91554715cce0e82c1bbebb7430c457418b2d
Commit 73b6a66fd4db4345e5c1ed8acf2f3d10170bfdd4 added installation of
the Python 3 package, but without root permissions it fails.
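The likely fix pattern is privilege escalation on the install task,
roughly (a sketch, not the exact task from the change):

  - name: Install Python 3
    package:
      name: python3
    become: true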
Change-Id: I65ca794955a1b1419853bf63be36cb0d1f2d2345
This also enables Placement when Zun is enabled, as Kolla Ansible
already does with Nova.
Change-Id: Id2a09f702e8503b49d2b9e73e06b2ce9f4d168a9
Closes-bug: #1840573
This patch adds initial support for deploying multiple Nova cells.
Splitting a nova-cell role out from the Nova role allows a more granular
approach to deploying and configuring Nova services.
A new enable_cells flag has been added that enables support for multiple
cells via the introduction of a super conductor in addition to
cell-specific conductors. When this flag is not set (the default), Nova
is configured in the same manner as before - with a single conductor.
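For example, in globals.yml (the value form is an assumption):

  # /etc/kolla/globals.yml
  enable_cells: "yes"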
The nova role now deploys the global services:
* nova-api
* nova-scheduler
* nova-super-conductor (if enable_cells is true)
The nova-cell role handles services specific to a cell:
* nova-compute
* nova-compute-ironic
* nova-conductor
* nova-libvirt
* nova-novncproxy
* nova-serialproxy
* nova-spicehtml5proxy
* nova-ssh
This patch does not support using a single cell controller for managing
more than one cell. Support for sharing a cell controller will be added
in a future patch.
This patch should be backwards compatible and is tested by existing CI
jobs. A new CI job has been added that tests a multi-cell environment.
ceph-mon has been removed from the play hosts list as it is not
necessary - delegate_to does not require the host to be in the play.
Documentation will be added in a separate patch.
Partially Implements: blueprint support-nova-cells
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I810aad7d49db3f5a7fd9a2f0f746fd912fe03917
Introduce the kolla_address filter.
Introduce the put_address_in_context filter.
(Usage of both is sketched below, after the address context list.)
Add address family (AF) config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
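An illustrative use of these filters in a template (variable names here
are hypothetical, and the exact call sites in the roles may differ):

  api_addr: "{{ 'api' | kolla_address }}"
  api_url_host: "{{ api_addr | put_address_in_context('url') }}"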
Other changes:
- globals.yml - mention just IP in the comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname
  (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion
  (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on the tunnel
  network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd now use the proper NSS database
- MariaDB Galera Cluster WSREP SST mariabackup workaround
  (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- remove the neutron-server ml2_type_vxlan/vxlan_group setting, as it is
  not used (let's avoid any confusion) and could break setups without
  proper multicast routing if it started working (it is also IPv4-only)
- haproxy upgrade checks for slaves based on IPv6 addresses
TODO:
- ovs-dpdk grabs an IPv4 network address (with prefix length/netmask);
  IPv6 is not supported and is invalid by default because
  neutron_external has no address. No idea whether ovs-dpdk works at all
  at the moment.
- ml2 for xenapi - Xen is not supported too well; this would require
  working with XenAPI facts.
- rp_filter setting - this would require meddling with ip6tables (there
  is no sysctl param); by default nothing is dropped, so it is unlikely
  we really need it.
- ironic dnsmasq is configured IPv4-only; dnsmasq needs DHCPv6 options
  and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for Docker like
  we currently do (docker_registry; docker API 400 - invalid reference
  format), see: https://github.com/moby/moby/issues/39033
  Workaround: use a hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname also resolves to
  IPv4. This is due to the old RabbitMQ versions available in images:
  IPv4 is preferred by default and may fail in the IPv6-only scenario.
  This should be no problem in real life, as IPv6-only is indeed
  IPv6-only. Also, when a newer RabbitMQ (3.7.16/3.8+) makes it into
  images, this will no longer be relevant, as we supply all the
  necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
  to work well). Older Ansible versions are known to miss IPv6 addresses
  in interface facts. This may affect redeploys, reconfigures and
  upgrades which run after the VIP address is assigned.
  See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments.
  See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
1. Fix yamllint errors in the .yamllint file(!)
YAML lint is currently failing on its own configuration file,
.yamllint. This change fixes the issues.
2. Run bindep role in Zuul jobs
This fixes an issue where libffi is not available.
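For instance, a pre-run playbook can apply the bindep role (a sketch;
the actual job wiring may differ):

  - hosts: all
    roles:
      - role: bindep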
Change-Id: Ic08a8e53a6905a68f0fe26d4b28184e62a64324f
This ensures that failure of a single host fails the whole play at that
task. This can avoid confusing errors such as when the task
"Assert that the nodepool private IPv4 address is assigned" fails on one
host, causing subsequent errors on other hosts.
Note that this only affects the Zuul playbooks, not Kolla Ansible's
playbooks.
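One way to achieve this in Ansible is the any_errors_fatal play keyword;
a sketch under the assumption that this is the mechanism used (the
assertion body is a guess):

  - hosts: all
    any_errors_fatal: true
    tasks:
      - name: Assert that the nodepool private IPv4 address is assigned
        assert:
          that: nodepool.private_ipv4 | default('') | length > 0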
Change-Id: I77a6534dd2ddd188f795e17d17a44be249d01f31
This is not required since enabling HAProxy over VXLAN [1].
[1] https://review.opendev.org/670690
Change-Id: I239a7c60d6ae0c80640ff10209a80c7a9ca74cd6
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
After modernising the docker configuration
(I1215e04ec15b01c0b43bac8c0e81293f6724f278), we lost our
registry-mirrors configuration in CI that lets us use a mirror of
Docker Hub.
This change uses the new docker_custom_config variable to configure the
registry mirror.
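For example, in globals.yml (the mirror URL is a placeholder):

  # /etc/kolla/globals.yml
  docker_custom_config:
    registry-mirrors:
      - https://registry-mirror.example.org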
Change-Id: I1430413c12e9d0b59e4f216ff66372de0f3a4f21