Since I0474324b60a5f792ef5210ab336639edf7a8cd9e the swift role uses
the new service-cert-copy role introduced in
I6351147ddaff8b2ae629179a9bc3bae2ebac9519, but the swift role itself
doesn't contain the handler used by service-cert-copy. Right now,
restarting the swift container isn't necessary, but the handler
should exist. This change also fixes the name of the service used.
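A minimal sketch of the kind of handler involved, following the usual
kolla-ansible handler convention (not the literal patch):

  - name: Restart swift-proxy-server container
    vars:
      service: "{{ swift_services['swift-proxy-server'] }}"
    become: true
    kolla_docker:
      action: "restart_container"
      common_options: "{{ docker_common_options }}"
      name: "{{ service.container_name }}"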
Closes-Bug: #1931097
Change-Id: I2d0615ce6914e1f875a2647c8a95b86dd17eeb22
Signed-off-by: Maksim Malchuk <maksim.malchuk@gmail.com>
The goal of this pull request is to normalize the construction and
use of internal, external, and admin URLs. While extending
Kolla-ansible to enable a more flexible method to manage external
URLs, we noticed that the same URL was constructed multiple times in
different parts of the code. This makes it difficult for people who
want to work with these URLs and, over time, creates inconsistencies
in a large code base. Therefore, we propose using a single
Kolla-ansible variable per endpoint URL, which makes it easier for
people to override or extend these URLs.
As an example, we extended Kolla-ansible to facilitate the "override"
of public (external) URLs with the following standard
"<component/serviceName>.<companyBaseUrl>".
Therefore, the "NAT/redirect" in the SSL termination system (HAproxy,
HTTPD or some other) is done via the service name, and not by the port.
This allows operators to easily and automatically create more friendly
URL names. To develop this feature, we first applied this patch that
we are sending now to the community. We did that to reduce the surface
of changes in Kolla-ansible.
Another example is the integration of Kolla-ansible and Consul, which
we also implemented internally and which also requires URL changes.
Therefore, this change is essential to reduce code duplication and to
make it easier for users/developers to work with and customize the
service URLs.
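For illustration, the intended pattern is one variable per endpoint
URL, defined once and reused everywhere (variable names indicative):

  keystone_internal_url: "{{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ keystone_public_port }}"
  keystone_public_url: "{{ public_protocol }}://{{ kolla_external_fqdn }}:{{ keystone_public_port }}"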
Change-Id: I73d483e01476e779a5155b2e18dd5ea25f514e93
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
For the CentOS 7 to 8 transition, we will have a period where both
CentOS 7 and 8 images are available. We differentiate these images via a
tag - the CentOS 8 images will have a tag of train-centos8 (or
master-centos8 temporarily).
To achieve this, and maintain backwards compatibility for the
openstack_release variable, we introduce a new 'openstack_tag' variable.
This variable is based on openstack_release, but has a suffix of
'openstack_tag_suffix', which is empty except on CentOS 8 where it has a
value of '-centos8'.
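A sketch of the relationship between these variables (simplified
from the actual defaults):

  openstack_tag_suffix: ""  # '-centos8' on CentOS 8
  openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"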
Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
Partially-Implements: blueprint centos-rhel-8
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
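A sketch of how the filters compose in a template (assuming the 'api'
interface and the 'url' context):

  {{ 'api' | kolla_address | put_address_in_context('url') }}
  # IPv4 host: 192.0.2.10    IPv6 host: [2001:db8::10]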
Other changes:
globals.yml - mention just IP in comment
prechecks/port_checks (api_intf) - kolla_address handles validation
3x interface conditional (swift configs: replication/storage)
2x interface variable definition with hostname
(haproxy listens; api intf)
1x interface variable definition with hostname with bifrost exclusion
(baremetal pre-install /etc/hosts; api intf)
neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
basic multinode source CI job for IPv6
prechecks for rabbitmq and qdrouterd use proper NSS database now
MariaDB Galera Cluster WSREP SST mariabackup workaround
(socat and IPv6)
Ceph naming workaround in CI
TODO: probably needs documenting
RabbitMQ IPv6-only proto_dist
Ceph ms switch to IPv6 mode
Remove neutron-server ml2_type_vxlan/vxlan_group setting
as it is not used (let's avoid any confusion)
and could break setups without proper multicast routing
if it started working (also IPv4-only)
haproxy upgrade checks for slaves based on ipv6 addresses
TODO:
ovs-dpdk grabs the IPv4 network address (with prefix length/netmask);
not supported, invalid by default because neutron_external has no
address. No idea whether ovs-dpdk works at all at the moment.
ml2 for xenapi
Xen is not supported very well.
This would require working with XenAPI facts.
rp_filter setting
This would require meddling with ip6tables (there is no sysctl param).
By default nothing is dropped.
Unlikely we really need it.
ironic dnsmasq is configured IPv4-only
dnsmasq needs DHCPv6 options and testing in vivo.
KNOWN ISSUES (beyond us):
One cannot use an IPv6 address to reference the Docker image as we
currently do, see: https://github.com/moby/moby/issues/39033
(docker_registry; docker API 400 - invalid reference format).
Workaround: use a hostname/FQDN.
RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
This is due to old RabbitMQ versions available in images.
IPv4 is preferred by default and may fail in the IPv6-only scenario.
This should be no problem in real life as IPv6-only is indeed IPv6-only.
Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
no longer be relevant as we supply all the necessary config.
See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
The current tasks use a hardcoded list and deploy only the required
files. When using multiple custom policies, additional
object-*.builder and object*.gz files need to be deployed as well.
This adds a new, default-empty variable that can be overridden when
needed.
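For illustration, the override could look like this (the variable
name swift_extra_ring_files is indicative, not necessarily the one
added here):

  swift_extra_ring_files:
    - object-1.builder
    - object-1.ring.gz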
Change-Id: I29c8e349c7cc83e3a2e01ff702d235a0cd97340e
Closes-Bug: #1844752
Use upstream Ansible modules for registration of services, endpoints,
users, projects, roles, and role grants.
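For example, service registration moves to a task of this shape
(illustrative use of the upstream os_keystone_service module):

  - name: Register the swift service
    os_keystone_service:
      name: "swift"
      service_type: "object-store"
      description: "OpenStack Object Storage"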
Change-Id: I7c9138d422cc91c177fd8992347176bb54156b5a
This feature is disabled by default, and can be enabled by setting
'enable_swift_s3api' to 'true' in globals.yml.
Two middlewares are required for Swift S3 - s3api and s3token. Additionally, we
need to configure the authtoken middleware to delay auth decisions to give
s3token a chance to authorise requests using EC2 credentials.
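Conceptually, the proxy-server pipeline then gains the two
middlewares ahead of authtoken (a sketch, not the exact rendered
pipeline):

  [pipeline:main]
  pipeline = ... s3api s3token authtoken ... proxy-server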
Change-Id: Ib8e8e3a1c2ab383100f3c60ec58066e588d3b4db
Adds support to separate Swift access and replication traffic from
other storage traffic.
In a deployment where both Ceph and Swift have been deployed, this
change adds functionality to support optional separation of storage
network traffic. This adds two new network interfaces,
'swift_storage_interface' and 'swift_replication_interface', which
maintain backwards compatibility.
The Swift access network interface is configured via 'swift_storage_interface',
which defaults to 'storage_interface'. The Swift replication network
interface is configured via 'swift_replication_interface', which
defaults to 'swift_storage_interface'.
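For example, to put replication traffic on its own NIC (interface
names illustrative):

  swift_storage_interface: "eth4"
  swift_replication_interface: "eth5"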
If a separate replication network is used, Kolla Ansible now deploys
separate replication servers for accounts, containers and objects,
which listen on this network. In this case, these services handle
only replication traffic, and the original account-, container- and
object-servers handle only storage user requests.
Change-Id: Ib39e081574e030126f2d08f51de89641ddb0d42e
This allows swift service endpoints to use custom hostnames, and adds the
following variables:
* swift_internal_fqdn
* swift_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds a swift_proxy_server_listen_port option, which defaults to
swift_proxy_server_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
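For example (values illustrative):

  swift_internal_fqdn: "swift.internal.example.com"
  swift_external_fqdn: "swift.example.com"
  swift_proxy_server_listen_port: "8081"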
While we're in here, use the ``internal_protocol`` variable for the swift
endpoint in cinder's swift backup driver configuration, instead of hardcoding
to ``http``.
Change-Id: Ibc01618383c26e16c0067f7f6b9cf5160d968d1e
Implements: blueprint service-hostnames
Apply Swift rolling upgrade based on recommendations from Swift PTL John
Dickinson at [1]
[1] https://www.swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/
Co-Authored-By: Surya Prakash <singh.surya64mnnit@gmail.com>
Change-Id: I99f505438916be2f89b24df20506339604e5bd6e
Implements: blueprint apply-service-upgrade-procedure
Having all services in one giant haproxy file makes altering
configuration for a service both painful and dangerous. Each service
should be configured with a simple set of variables and rendered with a
single unified template.
Two new templates are available:
* haproxy_single_service_listen.cfg.j2: close to the original style, but
only one service per file
* haproxy_single_service_split.cfg.j2: using the newer haproxy syntax
for separated frontend and backend
For now the default will be the single listen block, for ease of
transition.
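A sketch of the per-service variables the unified templates consume
(field names follow the kolla-ansible service definition convention;
treat as indicative):

  swift_services:
    swift-proxy-server:
      haproxy:
        swift_api:
          enabled: "{{ enable_swift }}"
          mode: "http"
          external: false
          port: "{{ swift_proxy_server_port }}"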
Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b
The authtoken config variable delay_auth_decision must be set to True.
The default is False, but that breaks public access, StaticWeb, FormPost,
TempURL, and authenticated capabilities requests (using Discoverability).
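In the proxy-server authtoken section this amounts to:

  [filter:authtoken]
  delay_auth_decision = True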
Change-Id: I420a95f5f9fda3321a4acfc5846e40294a8bd588
Closes-Bug: #1768795
The log_level in Swift is fixed to INFO. This patch makes it
configurable according to the value of "openstack_logging_debug".
When "openstack_logging_debug" is "False" (the default), the
log_level is set to "INFO"; otherwise it is set to "DEBUG".
Closes-Bug: #1777982
Change-Id: I62f430abd8f332cc2ece56a6733776fa03b10f77
Signed-off-by: tone.zhang <tone.zhang@arm.com>
This change adds the enable_fluentd option and enables some other log
shippers to be integrated. When enable_fluentd is "no", the syslog
server is also disabled. This change also adds syslog parameters so
that users can point services at a syslog server they have prepared.
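For example (parameter names assumed to follow the syslog_* naming
used elsewhere in kolla-ansible; values illustrative):

  enable_fluentd: "no"
  syslog_server: "192.0.2.50"
  syslog_udp_port: "514"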
Change-Id: I7c83ef7fe30a6b9ab7385bcee953ad07e96b0a83
Implements: blueprint fluentd-enable-option
In case Kolla's users want to deploy with both binary and source
images, we should have an install type variable that defines the
install type for each project. We also add a specific image tag for
each OpenStack project.
This commit implements this for the Sahara, Searchlight, Senlin,
Solum and Swift projects.
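For example, for Swift the pattern is (defaults indicative):

  swift_install_type: "{{ kolla_install_type }}"
  swift_tag: "{{ openstack_release }}"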
Change-Id: I964796b2f9e3eae69d7eccf68e9428ce9390010c
Implements: blueprint mixing-binary-and-source-image
The swift-object-expirer is provided by the 'openstack-swift-proxy'
package and is thus unavailable in the swift-object image. This
change adds a new Docker image to fulfill this requirement and stops
using the swift-object image in this case.
This image is needed while RDO does not fix the packaging. The issue
is being tracked in:
https://bugzilla.redhat.com/show_bug.cgi?id=1382921
Change-Id: Idc7ee92d756d8923da2198ede33abf5ed1142041
Closes-Bug: 1630425
After our switch to keystone-manage bootstrap, Horizon is not happy
because v3 is not set up correctly. This patch fixes that.
It also removes unused variables (transforming them into endpoint URL
variables).
TrivialFix
Change-Id: I1e04db8c24049f80e974c063f03068a2ab32a563
Due to poor planning of our variable names we have a situation where
we have "internal_address", which must be a VIP, but
"external_address", which should be a DNS name. Now, with two VIPs,
"external_vip_address" is a new variable.
This corrects the issue by deprecating kolla_internal_address and
replacing it with 4 clearly named variables:
kolla_internal_vip_address
kolla_internal_fqdn
kolla_external_vip_address
kolla_external_fqdn
The default behaviour will remain the same, and the way the variable
inheritance is set up, the kolla_internal_address variable can still
be set in globals.yml and propagate out to these 4 new variables like
it normally would, but all references to kolla_internal_address have
been completely removed.
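One plausible sketch of the inheritance described above (not the
literal defaults):

  kolla_internal_vip_address: "{{ kolla_internal_address }}"
  kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"
  kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
  kolla_external_fqdn: "{{ kolla_external_vip_address }}"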
Change-Id: I4556dcdbf4d91a8d2751981ef9c64bad44a719e5
Partially-Implements: blueprint ssl-kolla
This change lets Swift detect and use physical disks for storage. The
old named volume for storage isn't really useful for any serious
setup. Also updated swift-guide.rst accordingly.
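For example, a storage disk is prepared with a partition label that
the role looks for (per swift-guide.rst; device name illustrative):

  parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1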
Change-Id: I4f577b7b69d8bcd8b3961500946241c65a16db22
Partially-Implements: blueprint swift-physical-disk
Get rid of swift child images by sharing the same image between the
swift-account, swift-object and swift-container services.
Closes-Bug: #1534476
Change-Id: I929689f93b56396a41b19fda46e4679c4de84ca1
This intentionally leaves out rabbitmq from this patchset. It will
require additional work to remove its data container.
UpgradeImpact
Partially-Implements: blueprint docker-named-volumes
Change-Id: Id68b8e43a3c077ef4f4f4d67ea34d0692e66eef7
Swift replicator services require rsync to function. This patch adds a
new container which is included automatically on each of the Swift
storage nodes.
Change-Id: If10fbe610ca4df21ef0f2c7a1025035d627cb4ba
Partial-Bug: #1477993
This currently deploys the core services for a working Swift which are
account/container/object/proxy.
I've included some basic docs in docs/swift-related.rst, which gives
usage instructions and more context on this patch. These are really to
give an overview of the state of Swift in Kolla as of now, so unless
there's some major inaccuracy there please don't nitpick it.
Change-Id: Id0c54be3e24c46459c40b16b7020f05bddbe1b19
Implements: blueprint ansible-swift