The neutron_legacy_iptables option sets the KOLLA_LEGACY_IPTABLES
environment variable in the neutron-l3-agent, neutron-linuxbridge-agent
and neutron-openvswitch-agent containers, where it is consumed by the
kolla_extended_start script, causing iptables-legacy to be used.
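A minimal globals.yml sketch of enabling it (the option name comes from
this change; the value shown is illustrative):

  # /etc/kolla/globals.yml
  # Make the Neutron agent containers use the iptables-legacy backend.
  neutron_legacy_iptables: "yes"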
Depends-On: https://review.opendev.org/#/c/683679/
Change-Id: Iaa8b46a2227b61a729b8d54bbe4b20f389f251d1
The OpenSSL certificate should default to the FQDN if possible.
Using IP addresses is not recommended: it complicates dual-stack
operation and limits addressing flexibility.
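For illustration, a globals.yml sketch that prefers names over raw IPs
for TLS (the FQDNs are placeholders; the variables are standard Kolla
Ansible ones):

  # /etc/kolla/globals.yml
  kolla_external_fqdn: "openstack.example.com"
  kolla_internal_fqdn: "openstack.int.example.com"
  # Certificates can then be issued for the FQDN instead of the VIP.
  kolla_enable_tls_external: "yes"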
IPv6 control plane implementation [1] follow-up.
[1] Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Change-Id: Ibfc02f933ddcc170e9d616d401e294ba0ff5e981
This patch adds initial support for deploying multiple Nova cells.
Splitting a nova-cell role out from the Nova role allows a more granular
approach to deploying and configuring Nova services.
A new enable_cells flag has been added that enables support for
multiple cells via the introduction of a super conductor in addition to
cell-specific conductors. When this flag is not set (the default), Nova
is configured in the same manner as before, with a single conductor.
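A minimal globals.yml sketch of a multi-cell deployment (the flag is
introduced by this patch; the boolean format follows the usual Kolla
style):

  # /etc/kolla/globals.yml
  # Deploy a super conductor plus per-cell conductors.
  enable_cells: "yes"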
The nova role now deploys the global services:
* nova-api
* nova-scheduler
* nova-super-conductor (if enable_cells is true)
The nova-cell role handles services specific to a cell:
* nova-compute
* nova-compute-ironic
* nova-conductor
* nova-libvirt
* nova-novncproxy
* nova-serialproxy
* nova-spicehtml5proxy
* nova-ssh
This patch does not support using a single cell controller for managing
more than one cell. Support for sharing a cell controller will be added
in a future patch.
This patch should be backwards compatible and is tested by existing CI
jobs. A new CI job has been added that tests a multi-cell environment.
ceph-mon has been removed from the play hosts list as it is not
necessary - delegate_to does not require the host to be in the play.
Documentation will be added in a separate patch.
Partially Implements: blueprint support-nova-cells
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I810aad7d49db3f5a7fd9a2f0f746fd912fe03917
Upgrade jobs tend to time out within the 2-hour window when they must
build their images.
This increase is already applied in the Ceph jobs.
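A sketch of the kind of Zuul job change this refers to (the job name
and exact value are illustrative):

  - job:
      name: kolla-ansible-upgrade-base
      # Image builds can push the run past the default 2 hours.
      timeout: 10800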
Change-Id: Ic1118760d9192cc15e1ebf37fb8adf3440f18a78
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add address family (AF) config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
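An illustrative Jinja2 sketch of the two filters (the variable names
are made up; 'api' names the network to resolve for the current host):

  api_addr: "{{ 'api' | kolla_address }}"
  # url context: IPv6 literals get [brackets], IPv4 stays raw.
  keystone_url: "http://{{ api_addr | put_address_in_context('url') }}:5000"
  # memcache context: inet6: prefix plus brackets for IPv6.
  memcache_node: "{{ api_addr | put_address_in_context('memcache') }}"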
Other changes:
- globals.yml - mention just the IP in the comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname
  (haproxy listens; api intf)
- 1x interface variable definition with hostname, with bifrost exclusion
  (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on the tunnel
  network (see the sketch after this list)
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd now use the proper NSS database
- MariaDB Galera Cluster WSREP SST mariabackup workaround
  (socat and IPv6)
- Ceph naming workaround in CI
  (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- removal of the neutron-server ml2_type_vxlan/vxlan_group setting,
  as it is not used (let's avoid any confusion) and could break setups
  without proper multicast routing if it ever started working (it is
  also IPv4-only)
- haproxy upgrade checks for slaves based on IPv6 addresses
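Kolla renders the ml2 setting through its own config templates; as a
standalone Ansible sketch, the equivalent change looks like:

  - name: Use IPv6 endpoints for VXLAN/Geneve tunnels
    ini_file:
      path: /etc/neutron/plugins/ml2/ml2_conf.ini
      section: ml2
      option: overlay_ip_version
      value: "6"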
TODO:
- ovs-dpdk grabs an IPv4 network address (with prefix length/netmask);
  not supported, and invalid by default because neutron_external has no
  address. No idea whether ovs-dpdk works at all at the moment.
- ml2 for xenapi:
  Xen is not supported too well; this would require working with
  XenAPI facts.
- rp_filter setting:
  this would require meddling with ip6tables (there is no sysctl
  param). By default nothing is dropped, so it is unlikely we really
  need it.
- ironic dnsmasq is configured IPv4-only:
  dnsmasq needs DHCPv6 options and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for Docker as
  we currently do; see: https://github.com/moby/moby/issues/39033
  (docker_registry; docker API 400 - invalid reference format).
  Workaround: use a hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname also resolves to
  IPv4. This is due to the old RabbitMQ versions available in the
  images: IPv4 is preferred by default and may fail in the IPv6-only
  scenario. This should be no problem in real life, as IPv6-only is
  indeed IPv6-only. Also, once a newer RabbitMQ (3.7.16/3.8+) makes it
  into the images, this will no longer be relevant, as we supply all
  the necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 is
  confirmed to work well). Older Ansible versions are known to miss
  IPv6 addresses in interface facts. This may affect redeploys,
  reconfigures and upgrades which run after the VIP address is
  assigned.
  See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments.
  See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
The attempts setting governs retries after pre-run failures: a job
whose pre-run phase fails is retried on a fresh node rather than
reported as failed. This means we can increase the stability of jobs by
rejecting nodes that fail pre-run without failing the runs at the same
time (unless we are really unlucky and hit broken nodes 5 times in a
row).
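A sketch of the Zuul knob involved (the job name is illustrative; 5
matches the retry count mentioned above):

  - job:
      name: kolla-ansible-base
      # Retry on another node when the pre-run phase fails.
      attempts: 5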
Change-Id: I17b7f878c742fa8db66f738526855a02ab9f1905
1. Fix yamllint errors in .yamllint file(!)
yamllint is currently failing on its own configuration file,
.yamllint. This change fixes the issues.
2. Run bindep role in Zuul jobs
This fixes an issue where libffi is not available.
Change-Id: Ic08a8e53a6905a68f0fe26d4b28184e62a64324f
This is to avoid split-brain.
This change also adds relevant docs that sort out the
HA/quorum questions.
Change-Id: I9a8c2ec4dbbd0318beb488548b2cde8f4e487dc1
Closes-Bug: #1837761
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
cloud-init to manage /etc/hosts
1) Ubuntu includes a line in /etc/hosts that makes the local hostname and
nodename (if different) point to 127.0.1.1. This can break RabbitMQ,
which expects the hostname to resolve to the API network address.
2) The distribution might come with cloud-init installed and the
manage_etc_hosts configuration enabled. If so, cloud-init will
overwrite /etc/hosts from its templates at every boot, which will break
RabbitMQ.
This change fixes these issues.
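A minimal sketch of the cloud-init side of the fix (the drop-in path is
illustrative; manage_etc_hosts is a standard cloud-init option):

  # /etc/cloud/cloud.cfg.d/99-kolla.cfg
  # Stop cloud-init from rewriting /etc/hosts on every boot.
  manage_etc_hosts: false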
Change-Id: I53261d0403b983ab419bd44e705b89f7b7a1c316
Closes-Bug: #1837699
Using profiles in cephx has been the recommended way since Mimic;
this also adds support for blacklist ops.
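For illustration, the kind of caps this results in, wrapped in an
Ansible task (client and pool names are placeholders):

  - name: Grant cinder access via rbd profiles (includes blacklist ops)
    command: >
      ceph auth caps client.cinder
      mon 'profile rbd'
      osd 'profile rbd pool=volumes, profile rbd pool=vms'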
Change-Id: Ib9f65644637a5761c6cd7ca8925afc6bb2b8d5f5
Closes-Bug: #1760065
Adds a top-level guide for Nova, with links off to the various virt
driver guides.
Generalises the libvirt TLS guide into a libvirt guide, and adds info
on hardware virtualisation and QEMU vs. KVM.
Adds information on configuring consoles.
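For example, the QEMU vs. KVM choice the guide covers maps to a single
variable (values shown are the common options):

  # /etc/kolla/globals.yml
  # 'kvm' needs hardware virtualisation; fall back to 'qemu' without it.
  nova_compute_virt_type: "kvm"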
Change-Id: I36beaaee313bdbc4bcf8cc15c41dda245a5a81ba
This ensures that failure of a single host fails the whole play at that
task. This can avoid confusing errors such as when the task
"Assert that the nodepool private IPv4 address is assigned" fails on one
host, causing subsequent errors on other hosts.
Note that this only affects the Zuul playbooks, not Kolla Ansible's
playbooks.
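One way to get this behaviour in an Ansible play (not necessarily the
exact mechanism used here) is any_errors_fatal; a minimal sketch:

  - hosts: all
    # A failure on any host aborts the play for all hosts at that task.
    any_errors_fatal: true
    tasks:
      - name: Assert that the nodepool private IPv4 address is assigned
        # Placeholder for the real assertion described above.
        assert:
          that: nodepool.private_ipv4 | default('', true) | length > 0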
Change-Id: I77a6534dd2ddd188f795e17d17a44be249d01f31
Currently, swift-proxy config uses hosts in the swift-proxy-server group
to generate the list of memcached servers. However, memcached is
deployed to hosts in the memcached group.
This change fixes the memcached_servers option for swift-proxy so that
it is generated in the same way as for other services.
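An illustrative Jinja2 sketch of deriving the option from the correct
group (assuming the kolla_address filter accepts a target hostname;
memcached_port is a Kolla variable):

  memcached_servers: >-
    {% for host in groups['memcached'] -%}
    {{ 'api' | kolla_address(host) }}:{{ memcached_port }}
    {%- if not loop.last %},{% endif %}
    {%- endfor %}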
Change-Id: Ib850a1bb2a504ac3e1396846ca3f1d9a30e8fca0
Closes-Bug: #1774313