This is a docs amendment to let users know that running
init-runonce is not a required deployment step and that it may not
work if they have modified the defaults.
Change-Id: Ia3922b53d91a1a820447fec6a8074b941edc2ee9
Nova provides a mechanism to set static vendordata via a file [1].
This patch provides support in Kolla Ansible for using this
feature.
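For reference, this maps to the Nova option documented in [1]; a
minimal nova.conf sketch (the file path inside the container is an
illustrative assumption, not taken from this change):
  [api]
  vendordata_jsonfile_path = /etc/nova/vendordata.json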
Arguably this could be part of a generic mechanism for copying
arbitrary config, but:
- It's not clear if there is anything else that would take
advantage of this
- One size might not fit all
[1] https://docs.openstack.org/nova/latest/configuration/config.html#api.vendordata_jsonfile_path
Change-Id: Id420376d96d0c40415c369ae8dd36e845a781820
This change updates documentation, examples and tests to support
Ironic inspection through a DHCP relay. The dnsmasq service should be
configured with the more specific format in the
``ironic_dnsmasq_dhcp_range`` variable. See the dnsmasq manual page [1].
[1] https://thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html
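As an illustration (the values are assumptions; see [1] and the
updated docs for the authoritative format), a range that includes the
netmask and lease time, so that relayed requests for a non-local
subnet can be matched, might look like this in globals.yml:
  ironic_dnsmasq_dhcp_range: "192.168.5.100,192.168.5.110,255.255.255.0,12h"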
Change-Id: I9488a72db588e31289907668f1997596a8ccdec6
Signed-off-by: Maksim Malchuk <maksim.malchuk@gmail.com>
* Register Swift-compatible endpoints in Keystone
* Load balance across RadosGW API servers using HAProxy
The support is exercised in the cephadm CI jobs, but since RGW is
not currently enabled via cephadm, it is not yet tested.
https://docs.ceph.com/en/latest/radosgw/keystone/
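A minimal globals.yml sketch of how this might be enabled (the
variable names are assumptions based on this blueprint, not
confirmed here):
  enable_ceph_rgw: "yes"
  enable_ceph_rgw_loadbalancer: "yes"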
Implements: blueprint ceph-rgw
Change-Id: I891c3ed4ed93512607afe65a42dd99596fd4dbf9
This patch adds an option to control the weight of HAProxy
backends per service via host variables.
Example:
[control]
server1 haproxy_nova_api_weight=10
server2 haproxy_nova_api_weight=2 haproxy_keystone_internal_weight=10
server3 haproxy_keystone_admin_weight=50
If a weight is not defined, everything works as before.
Change-Id: Ie8cc228198651c57f8ffe3eb060875e45d1f0700
Removed the option to set up a pull-through cache, since this is
unsupported for Quay. Docs adapted to match.
Closes-Bug: #1942134
Change-Id: If5a26b1ba4bf35bc29306c24f608396dbf5e3371
To follow best security practices and help fellow operators.
More details inline and in the linked bug report.
Closes-Bug: #1940547
Change-Id: Ide9e9009a6e272f20a43319f27d257efdf315f68
Manila has changed from using subfolders to subvolumes.
We need a bit of a tidy up to prevent deploy errors.
This change also adds the ability to specify the CephFS filesystem
Manila uses, instead of relying on the default "first found".
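A hedged globals.yml sketch (the variable name is an assumption
based on this change, not authoritative):
  manila_cephfs_filesystem_name: "cephfs"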
Closes-Bug: #1938285
Closes-Bug: #1935784
Change-Id: I1d0d34919fbbe74a4022cd496bf84b8b764b5e0f
Basically, there are three main installation scenarios:
Scenario 1:
Ironic is installed together with other OpenStack services,
including Keystone. In this case the variable enable_keystone
is set to true and the Keystone service is installed
together with Ironic. This scenario is already possible;
no fix is needed.
Scenario 2:
Ironic is installed and connected to an already existing
Keystone. In this scenario we have to set enable_keystone
to "no" to prevent a new Keystone service from being installed
during the Ironic installation. On the other hand, we need
the correct sections in ironic.conf to provide all the
information needed to connect to the existing Keystone.
However, the Keystone sections are only added to ironic.conf
if enable_keystone is set to "yes", so it is currently not
possible to realise this scenario. The proposed fix adds
support for it, for example where multiple regions share the
same Keystone service.
Scenario 3:
No Keystone integration; Ironic does not connect to Keystone.
This scenario is already possible; no fix is needed.
The proposed solution also keeps the default behaviour: if
ironic_enable_keystone_integration is not defined manually, it
takes the value of the enable_keystone variable and everything
behaves as before. But if we do not want to install Keystone
and want to connect to an existing one at the same time, it is
now possible to set enable_keystone to "no" (preventing
Keystone from being installed) and at the same time set
ironic_enable_keystone_integration to "yes" so that the needed
sections appear in ironic.conf through templating.
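For example, in globals.yml (a sketch of the scenario 2
configuration described above):
  enable_keystone: "no"
  ironic_enable_keystone_integration: "yes"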
Change-Id: I0c7e9a28876a1d4278fb2ed8555c2b08472864b9
As mentioned in the Iced014acee7e590c10848e73feca166f48b622dc
commit message, in Ussuri+ we can use ``+sbwtdcpu none
+sbwtdio none`` as well. This is due to relying on RMQ-provided
erlang in version 23.x.
This change adds the extra arguments by default.
It should be backported down to Ussuri before we do a release with
Iced014acee7e590c10848e73feca166f48b622dc.
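As a sketch, the combined Erlang VM arguments from
Iced014acee7e590c10848e73feca166f48b622dc plus this change could be
passed via the standard RabbitMQ environment variable (shown for
illustration; the exact mechanism Kolla Ansible uses may differ):
  RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 2:2 +sbwt none +sbwtdcpu none +sbwtdio none"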
Change-Id: I32e247a6cb34d7f6763b544f247fd408dce2b3a2
In Ussuri, nova stopped using separate Ceph keys for the volumes and vms
pools by default. Instead, we set ceph_nova_keyring to the value of
ceph_cinder_keyring by default, which is ceph.client.cinder.keyring.
This is in line with the Ceph OpenStack integration guide [1]. However,
the user used by nova to access the vms pool (ceph_nova_user) defaults
to nova, meaning that nova will still try to use a
ceph.client.nova.keyring, which probably does not exist. We did not see
this issue in CI, because we set ceph_nova_user to cinder.
This change fixes the issue by setting ceph_nova_user to the value of
ceph_cinder_user by default, which is cinder.
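In globals.yml terms, the defaults now effectively look like this
(a sketch; the actual defaults live in the Ansible role):
  ceph_nova_user: "{{ ceph_cinder_user }}"
  ceph_nova_keyring: "{{ ceph_cinder_keyring }}"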
Closes-Bug: #1934145
Related-Bug: #1928690
[1] https://docs.ceph.com/en/latest/rbd/rbd-openstack/
Change-Id: I6aa8db2214e07906f1f3e035411fc80ba911a274
Nova always tries to create the RabbitMQ user, regardless of
whether RabbitMQ is enabled or not.
This patch also adds documentation for using an external RabbitMQ.
Change-Id: Iec517226e4c82ea351889b55689a3efceaadcc76
In the Xena release, Ironic removed the iSCSI driver [1]. The
recommended driver is direct, which uses HTTP to transfer the disk
image. This requires an HTTP server, and the simplest option is to use
the one currently deployed when enable_ironic_ipxe is set to true. For
this reason, this patch always enables the HTTP server running on the
conductor.
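For reference, the kind of upstream Ironic settings involved with the
direct deploy interface look roughly like this (an illustrative
sketch, not necessarily what this patch templates into ironic.conf):
  [DEFAULT]
  enabled_deploy_interfaces = direct
  default_deploy_interface = direct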
iPXE is still enabled separately, since it cannot currently be used at
the same time as PXE.
[1] https://review.opendev.org/c/openstack/ironic/+/789382
Change-Id: I30c2ad2bf2957ac544942aefae8898cdc8a61ec6
The variable octavia_amphora_flavor should be octavia_amp_flavor.
The variables for customising the network and subnet were only
mentioned in the example.
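For example, the amphora flavor can be customised in globals.yml
roughly like this (the field values are illustrative assumptions):
  octavia_amp_flavor:
    name: "amphora"
    is_public: no
    vcpus: 1
    ram: 1024
    disk: 5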
Change-Id: I3ba5a7ccc2c810fea12bc48584c064738e5aa35e
Adds a new variable, 'disable_firewall', which defaults to true. If set
to false, then the host firewall will not be disabled during
kolla-ansible bootstrap-servers.
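For example, to keep the host firewall enabled during
bootstrap-servers, set the following in globals.yml (a sketch; the
exact boolean form follows your deployment's conventions):
  disable_firewall: "false"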
Change-Id: Ie5131013012f89c8c3b91ca359ad17d9cb77efc8
On machines with many cores, we were seeing excessive CPU load on systems
that were not very busy. With the following Erlang VM argument we saw
RabbitMQ CPU usage drop from about 150% to around 20%, on a system with
40 hyperthreads.
+S 2:2
By default RabbitMQ starts N schedulers where N is the number of CPU
cores, including hyper-threaded cores. This is fine when you assume all
your CPUs are dedicated to RabbitMQ. It's not a good idea in a typical
Kolla Ansible setup. Here we go for two scheduler threads.
More details can be found here:
https://www.rabbitmq.com/runtime.html#scheduling
and here:
https://erlang.org/doc/man/erl.html#emulator-flags
+sbwt none
This stops busy waiting of the scheduler, for more details see:
https://www.rabbitmq.com/runtime.html#busy-waiting
Newer versions of RabbitMQ may need additional flags:
"+sbwt none +sbwtdcpu none +sbwtdio none"
But this patch should be backportable to older versions of RabbitMQ
used in Train and Stein.
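One way to pass these flags (a sketch, not necessarily how Kolla
Ansible wires it up) is through the standard RabbitMQ environment
variable:
  RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 2:2 +sbwt none"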
Note that information on this tuning was found by looking at data from:
rabbitmq-diagnostics runtime_thread_stats
More details on that can be found here:
https://www.rabbitmq.com/runtime.html#thread-stats
Related-Bug: #1846467
Change-Id: Iced014acee7e590c10848e73feca166f48b622dc
In the Xena cycle it was decided to remove the Monasca
Grafana fork due to lack of maintenance. This commit removes
the service and provides a limited workaround using the
Monasca Grafana datasource with vanilla Grafana.
Depends-On: I9db7ec2df050fa20317d84f6cea40d1f5fd42e60
Change-Id: I4917ece1951084f6665722ba9a91d47764d3709a