etcd via tooz does not support the group membership required by
Designate coordination.
The best Kolla Ansible can do is to not configure etcd in Designate.
Change-Id: I2f64f928e730355142ac369d8868cf9f65ca357e
Closes-bug: #1872205
Related-bug: #1840070
Not everyone wants Kafka data stored on a Docker volume. This
change allows a user to flexibly control where the data is stored.
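As a hedged sketch, assuming the variable introduced here is named
kafka_datadir_volume (check the role defaults for the exact name), the
location can be overridden in globals.yml:

    # /etc/kolla/globals.yml
    # Assumed variable name; by default the data lives in a named Docker
    # volume. Setting a host path stores the Kafka data there instead.
    kafka_datadir_volume: "/var/lib/kafka/data/"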
Change-Id: I2ba8c7a85c7bf2564f954a43c6e6dbb3257fe902
There is no longer support for provisioning Ceph in Kolla Ansible, so we
should no longer say that it's only sometimes necessary to create the
cluster/pools/keyrings externally.
Change-Id: Ia3026cfeebfb8258b79490f9facc341c928845f9
This allows you to tune the performance of InfluxDB by locating the
volume on a drive that is separate from the default Docker storage.
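A hedged example, assuming the variable exposed for this is
influxdb_datadir_volume (confirm against the role defaults):

    # /etc/kolla/globals.yml
    # Assumed variable name; defaults to a named Docker volume.
    # Point it at a directory on a faster or dedicated drive instead.
    influxdb_datadir_volume: "/mnt/fast-storage/influxdb/"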
Change-Id: Iea555a2702b225b30f5d7035b8a703d4f3376ee7
The Kolla Ansible Ceph deployment mechanism was deprecated in Train [1].
This change removes the Ansible code and associated CI jobs.
[1]: https://review.opendev.org/669214
Change-Id: Ie2167f02ad2f525d3b0f553e2c047516acf55bc2
Ports need physical-network to be set for a flat network, otherwise they
will not bind.
Closes-Bug: #1862628
Change-Id: I9d579b4317a8acbc3e51bbd9c0236846b75d598b
Backport: train
* Ironic dropped CoreOS IPA images in Train - use CentOS DIB images
* Nova flavor requires dropping the standard resource classes (VCPU, memory, disk)
* Link to sections in ironic docs
Change-Id: Id65ada7cd6766d3a907a5a1da54978b56319979c
To make configuration easier for the user, and to allow non-standard
Ceph authentication IDs, introduce ceph_*_user variables.
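A sketch of the intended override, using illustrative names that follow
the ceph_*_user pattern from this change (confirm the exact variable
names in the role defaults):

    # /etc/kolla/globals.yml
    # Assumed names; override when the external Ceph cluster uses
    # non-standard authentication IDs.
    ceph_glance_user: "glance-kolla"
    ceph_cinder_user: "cinder-kolla"
    ceph_nova_user: "nova-kolla"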
Change-Id: I24e01c43c826b62b6748d93a498f4b7d8ce9e309
Introduce user-modifiable variables instead of fixed names for the
Ceph keyring files used by the external Ceph functionality.
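And a matching sketch for the keyring files, assuming variables of the
form ceph_*_keyring (again, confirm the exact names in the role
defaults):

    # /etc/kolla/globals.yml
    # Assumed names; point these at the keyring files placed in the
    # external Ceph config directories.
    ceph_glance_keyring: "ceph.client.glance-kolla.keyring"
    ceph_cinder_keyring: "ceph.client.cinder-kolla.keyring"
    ceph_nova_keyring: "ceph.client.nova-kolla.keyring"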
Change-Id: I1a33b3f9d6eca5babf53b91187461e43aef865ce
This fixes duplicated words in the docs, e.g.
"Other services that are are out of scope of this".
Change-Id: Ie4882dbb64d6e8774888b97895af20ba3855f0f8
This allows users to supply an Elasticsearch Curator actions file
to manage log retention [1]. Curator then runs on a cron job, which
defaults to every day. A default curator actions file is provided,
which can be customised by the end user if required.
[1] https://www.elastic.co/guide/en/elasticsearch/client/curator/current/actionfile.html
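For illustration, a minimal Curator actions file that deletes indices
older than 30 days; the index prefix and retention period are
placeholders and should match the deployment's log index naming:

    # curator actions file (format per the Curator docs linked above)
    actions:
      1:
        action: delete_indices
        description: Delete log indices older than 30 days
        options:
          ignore_empty_list: true
        filters:
          - filtertype: pattern
            kind: prefix
            value: flog-    # placeholder index prefix
          - filtertype: age
            source: name
            direction: older
            timestring: '%Y.%m.%d'
            unit: days
            unit_count: 30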
Change-Id: Ide9baea9190ae849e61b9d8b6cff3305bdcdd534
It turned out the previous fix ([1]) was incomplete.
Additionally, it seems we have to limit the Tacker server
to one instance, co-located with the conductor.
[1] https://review.opendev.org/684275
commit b96ade3cf01009d822f85744efee523127f2674c
Change-Id: I9ce27d5f68f32ef59e245960e23336ae5c5db905
Closes-bug: #1853715
Related-bug: #1845142
Adds the rabbitmq_server_additional_erl_args variable, which is
appended to the RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS environment
variable in the RabbitMQ server startup script.
This can be used to configure the Erlang schedulers, as shown below.
Docs attached.
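For example, to cap RabbitMQ at two Erlang scheduler threads (a sketch;
+S is a standard Erlang VM flag, pick values suited to the host):

    # /etc/kolla/globals.yml
    # Appended to RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS; "+S 2:2" limits
    # the Erlang VM to 2 scheduler threads (2 online).
    rabbitmq_server_additional_erl_args: "+S 2:2"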
Change-Id: Id683c8cc6dac61354ffd94f3b460335b42136ba2
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Related-bug: #1846467
Tacker requires configuration for storing CSAR VNF packages.
This patch adds it, as well as the relevant docs.
Only one Tacker Conductor is deployed by default due to the
lack of a shared filesystem.
Change-Id: Iad391f35105e79fa9319502256528990915df9b7
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Closes-Bug: #1845142
This also enables Placement when Zun is enabled, as Kolla Ansible
already does with Nova.
Change-Id: Id2a09f702e8503b49d2b9e73e06b2ce9f4d168a9
Closes-bug: #1840573
Add documentation about deploying nova with multiple cells.
Change-Id: I89ee276917e5b9170746e07b7f644c7593b03da1
Depends-On: https://review.opendev.org/#/c/675659/
Related: blueprint bp/support-nova-cells
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts (see the sketch below):
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
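A rough sketch of how the new filters are meant to be used from Ansible
vars; the exact signatures live in the filter plugins, so treat this as
illustrative only:

    # resolve this host's address on the 'api' network
    api_interface_address: "{{ 'api' | kolla_address }}"
    # wrap an (IPv6) address for use inside a URL, e.g. [fd00::10]
    example_api_url: "http://{{ api_interface_address | put_address_in_context('url') }}:9292"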
Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname
  (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion
  (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround
  (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- remove neutron-server ml2_type_vxlan/vxlan_group setting
  as it is not used (let's avoid any confusion)
  and could break setups without proper multicast routing
  if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on ipv6 addresses
TODO:
- ovs-dpdk grabs ipv4 network address (w/ prefix len / submask);
  not supported, invalid by default because neutron_external has no
  address. No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi: Xen is not supported too well.
  This would require working with XenAPI facts.
- rp_filter setting: this would require meddling with ip6tables
  (there is no sysctl param). By default nothing is dropped.
  Unlikely we really need it.
- ironic dnsmasq is configured IPv4-only;
  dnsmasq needs DHCPv6 options and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker like
  we currently do, see: https://github.com/moby/moby/issues/39033
  (docker_registry; docker API 400 - invalid reference format).
  Workaround: use a hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname also resolves to
  IPv4. This is due to old RabbitMQ versions available in images:
  IPv4 is preferred by default and may fail in the IPv6-only scenario.
  This should be no problem in real life as IPv6-only is indeed
  IPv6-only. Also, when new RabbitMQ (3.7.16/3.8+) makes it into
  images, this will no longer be relevant as we supply all the
  necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
  to work well). Older Ansible versions are known to miss IPv6
  addresses in interface facts. This may affect redeploys, reconfigures
  and upgrades which run after the VIP address is assigned.
  See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments.
  See: https://storyboard.openstack.org/#!/story/2006689
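As a usage sketch, the address family is selected in globals.yml;
the variable name below is assumed to match what this change adds
(verify against the release notes):

    # /etc/kolla/globals.yml
    # Assumed AF knob added by this change; "ipv4" remains the default.
    network_address_family: "ipv6"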
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Adds a top-level guide for Nova, with links off to the various virt
driver guides.
Generalises the libvirt TLS guide into a libvirt guide, and adds info on
hardware virtualisation and qemu vs. kvm.
Adds information on configuring consoles.
Change-Id: I36beaaee313bdbc4bcf8cc15c41dda245a5a81ba
Add coordination backend configuration to designate.conf, which is
required in multinode environments. Fixes this warning from designate:
WARNING designate.coordination [-] No coordination backend configured,
assuming we are the only worker. Please configure a coordination backend
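A hedged usage note: in Kolla Ansible the coordination backend is
typically backed by the Redis service, so a multinode deployment would
enable it in globals.yml (a sketch; confirm how the backend is wired up
in your release):

    # /etc/kolla/globals.yml
    # Deploys Redis, which Designate can then use as its tooz
    # coordination backend in multinode environments.
    enable_redis: "yes"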
Change-Id: I23c4d2de7e3f9368795c423000a4f9a6c3a431e2
Closes-Bug: #1843842
Related-Bug: #1840070
The current tasks use a hardcoded list, deploying only the required files.
When using multiple custom storage policies, additional object-*.builder and
object*.gz files need to be deployed as well.
This adds a new, default-empty variable that can be overridden when needed.
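A sketch of overriding the new variable; the name swift_extra_ring_files
is assumed here for illustration and should be checked against the role
defaults:

    # /etc/kolla/globals.yml
    # Assumed variable name (default is an empty list); list the builder
    # and ring files for each additional storage policy.
    swift_extra_ring_files:
      - object-1.builder
      - object-1.ring.gz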
Change-Id: I29c8e349c7cc83e3a2e01ff702d235a0cd97340e
Closes-Bug: #1844752
To securely support live migration between compute nodes we should
enable TLS, with certificate authentication, instead of TCP with no
authentication support.
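In Kolla Ansible this is toggled from globals.yml; a minimal sketch,
assuming the variable is named libvirt_tls as introduced by this
blueprint:

    # /etc/kolla/globals.yml
    # Enable TLS with client certificate auth for the libvirt API used
    # during live migration between compute nodes.
    libvirt_tls: "yes"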
Implements: blueprint libvirt-tls
Change-Id: I22ea6233933c840b853fdcc8e03400b2bf577271
This commit adds the necessary configuration to the Swift account,
container and object configuration files to enable the Swift recon
cli.
In order to give the object server on each Swift host access to the
recon files, a Docker volume is mounted into each container which
generates them. The volume is then mounted read-only into the object
server container. Note that multiple containers append to the same
file. This should not be a problem, since Swift uses a lock when
appending.
Change-Id: I343d8f45a78ebc3c11ed0c68fe8bec24f9ea7929
Co-authored-by: Doug Szumski <doug@stackhpc.com>
After the integration with placement [1], we need to configure how
zun-compute is going to work with nova-compute.
* If zun-compute and nova-compute run on the same compute node,
we need to set 'host_shared_with_nova' to true so that Zun
will use the resource provider (compute node) created by nova.
In this mode, containers and VMs can claim allocations against
the same resource provider.
* If zun-compute runs on a node without nova-compute, no extra
configuration is needed. By default, each zun-compute will create
a resource provider in placement to represent the compute node
it manages.
[1] https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management
Change-Id: I2d85911c4504e541d2994ce3d48e2fbb1090b813