Sometimes, as cloud admins, we want to only update the code that is running
in a cloud, without doing anything else. Add an action to kolla-ansible that
allows us to do that.
Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
Use upstream Ansible modules for registration of services, endpoints,
users, projects, roles, and role grants.
Change-Id: I7c9138d422cc91c177fd8992347176bb54156b5a
This commit adds the functionality for an operator to specify
their own trusted CA certificate file for interacting with the
Keystone API.
Implements: blueprint support-trusted-ca-certificate-file
Change-Id: I84f9897cc8e107658701fb309ec318c0f805883b
Docker has no restart policy named 'never'. It has 'no'.
This has bitten us already (see [1]) and might bite us again whenever
we want to change the restart policy to 'no'.
This patch makes our docker integration honor all valid restart policies
and only valid restart policies.
All relevant docker restart policy usages are patched as well.
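A minimal sketch of passing one of the valid policies through kolla_docker
(the container name, image variable and action are illustrative):

  - name: Run keystone with a valid Docker restart policy
    become: true
    kolla_docker:
      action: "recreate_or_restart_container"
      common_options: "{{ docker_common_options }}"
      name: "keystone"
      image: "{{ keystone_image_full }}"
      # Valid Docker policies: no, on-failure, always, unless-stopped
      restart_policy: "no"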
I added some FIXMEs in places relevant to the kolla-ansible docker
integration. They are not fixed here so as not to alter behavior.
[1] https://review.opendev.org/667363
Change-Id: I1c9764fb9bbda08a71186091aced67433ad4e3d6
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
A common class of problems goes like this:
* kolla-ansible deploy
* Hit a problem, often in ansible/roles/*/tasks/bootstrap.yml
* Re-run kolla-ansible deploy
* Service fails to start
This happens because the DB is created during the first run, but for some
reason we fail before performing the DB sync. This means that on the second run
we don't include ansible/roles/*/tasks/bootstrap_service.yml because the DB
already exists, and therefore still don't perform the DB sync. However this
time, the command may complete without apparent error.
We should be less selective about when we perform the DB sync, and do it
whenever it might be necessary. There is an argument for not doing the sync
during a 'reconfigure' command, although we will not change that here.
This change makes the 'deploy' and 'reconfigure' commands always perform the
DB sync.
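A sketch of the resulting flow, assuming the task layout named above (the
exact include structure differs per role):

  # ansible/roles/<service>/tasks/deploy.yml
  - include_tasks: bootstrap.yml

  # ansible/roles/<service>/tasks/bootstrap.yml
  # ... create the database and user if they do not already exist ...
  # Previously included only when the database had just been created; now it
  # is always included, so the DB sync runs on every deploy/reconfigure:
  - include_tasks: bootstrap_service.yml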
Change-Id: I82d30f3fcf325a3fdff3c59f19a1f88055b566cc
Closes-Bug: #1823766
Closes-Bug: #1797814
Currently, we have a lot of logic for checking if a handler should run,
depending on whether config files have changed and whether the
container configuration has changed. As rm_work pointed out during
the recent haproxy refactor, these conditionals are typically
unnecessary - we can rely on Ansible's handler notification system
to only trigger handlers when they need to run. This removes a lot
of error-prone code.
This patch removes the conditional handler logic for all services. It is now
important to ensure that we do not notify handlers unnecessarily, because
without these checks in place any notification will trigger a restart of the
containers.
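The pattern relied on here, roughly (service and file names are illustrative):

  # tasks/config.yml
  - name: Copying over keystone.conf
    become: true
    template:
      src: "keystone.conf.j2"
      dest: "{{ node_config_directory }}/keystone/keystone.conf"
      mode: "0660"
    notify:
      - Restart keystone container

  # handlers/main.yml - only runs if a notifying task reported "changed"
  - name: Restart keystone container
    become: true
    kolla_docker:
      action: "recreate_or_restart_container"
      common_options: "{{ docker_common_options }}"
      name: "keystone"
      image: "{{ keystone_image_full }}"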
Implements: blueprint simplify-handlers
Change-Id: I4f1aa03e9a9faaf8aecd556dfeafdb834042e4cd
When running deploy or reconfigure for Keystone,
ansible/roles/keystone/tasks/deploy.yml calls init_fernet.yml,
which runs /usr/bin/fernet-rotate.sh, which calls keystone-manage
fernet_rotate.
This means that a token can become invalid if the operator runs
deploy or reconfigure too often.
This change splits out fernet-push.sh from the fernet-rotate.sh
script, then calls fernet-push.sh after the fernet bootstrap
performed in deploy.
Change-Id: I824857ddfb1dd026f93994a4ac8db8f80e64072e
Closes-Bug: #1833729
The task does not change any state; it is only used to set a fact from parsed
output.
Also adjust the task name.
Change-Id: I5fe322546d82a373522645485be18fe7bfc57999
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
The task was duplicated below (and the other instance is conditional).
Additionally, fix the related task names.
Change-Id: I76a6dd84e78277f87b04951eb4e75bbdfc1c38bf
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Many tasks that use Docker already have 'become' specified, but not all. This
change ensures that all tasks which use the following modules have 'become':
* kolla_docker
* kolla_ceph_keyring
* kolla_toolbox
* kolla_container_facts
It also adds 'become' to 'command' tasks that use the docker CLI.
Change-Id: I4a5ebcedaccb9261dbc958ec67e8077d7980e496
Right now every controller rotates fernet keys. This is nice because
should any controller die, we know the remaining ones will rotate the
keys. However, we are currently over-rotating the keys.
When we over-rotate keys, we get logs like this:
This is not a recognized Fernet token <token> TokenNotFound
Most clients can recover and get a new token, but some clients (like Nova
passing tokens to other services) cannot, because they do not have the
password to obtain a new token.
With three controllers, the keystone-fernet crontabs show the once-a-day
rotation correctly staggered across the three controllers:
ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
0 0 * * * /usr/bin/fernet-rotate.sh
ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
0 8 * * * /usr/bin/fernet-rotate.sh
ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
0 16 * * * /usr/bin/fernet-rotate.sh
Currently with three controllers we have this keystone config:
[token]
expiration = 86400 (note: the keystone default is one hour)
allow_expired_window = 172800 (this is the keystone default)
[fernet_tokens]
max_active_keys = 4
Currently, kolla-ansible configures key rotation according to the following:
rotation_interval = token_expiration / num_hosts
This means we rotate keys more quickly the more hosts we have, which doesn't
make much sense.
Keystone docs state:
max_active_keys =
((token_expiration + allow_expired_window) / rotation_interval) + 2
For details see:
https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html
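For example, setting the rotation interval to token_expiration +
allow_expired_window (259200 seconds, i.e. 3 days) with the configuration
above gives:
  max_active_keys = ((86400 + 172800) / 259200) + 2 = 3
that is, one primary key, one secondary key and one staging (buffer) key.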
Rotation is based on pushing out a staging key, so should any server start
using that key, the other servers will consider it valid. Then each server in
turn starts using the staging key, each in turn demoting the existing primary
key to a secondary key. Eventually you prune the secondary keys when there is
no token in the wild that would need to be decrypted using that key. So this
all makes sense.
This change adds new variables, fernet_token_allow_expired_window and
fernet_key_rotation_interval, so that we can correctly calculate the required
number of active keys. We now set the default rotation interval so as to
minimise the number of active keys to 3 - one primary, one secondary, one
buffer.
This change also fixes the fernet cron job generator, which was broken
in the following cases:
* requesting an interval of more than 1 day resulted in no jobs
* requesting an interval of more than 60 minutes, unless an exact
multiple of 60 minutes, resulted in no jobs
It should now be possible to request any interval up to a week divided
by the number of hosts.
Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a
Closes-Bug: #1809469
Since Ansible 2.5, the use of jinja tests as filters has been
deprecated.
I've run the script provided by the ansible team to 'fix' the
jinja filters to conform to the newer syntax.
This fixes the deprecation warnings.
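For example, with a registered result (illustrative):
  when: result|changed      # deprecated: jinja test used as a filter
  when: result is changed   # new syntax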
Change-Id: I844ecb7bec94e561afb09580f58b1bf83a6d00bd
Closes-bug: #1827370
Several config file permissions are incorrect on the host. In general,
files should be 0660, and directories and executables 0770.
Change-Id: Id276ac1864f280554e98b937f2845bb424d521de
Closes-Bug: #1821579
This allows keystone service endpoints to use custom hostnames, and adds the
following variables:
* keystone_internal_fqdn
* keystone_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds the following variables:
* keystone_admin_listen_port
* keystone_public_listen_port
These default to keystone_admin_port and keystone_public_port,
respectively, for backward compatibility.
These options allow the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
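The defaults therefore amount to (a sketch of the group_vars wiring):
  keystone_internal_fqdn: "{{ kolla_internal_fqdn }}"
  keystone_external_fqdn: "{{ kolla_external_fqdn }}"
  keystone_admin_listen_port: "{{ keystone_admin_port }}"
  keystone_public_listen_port: "{{ keystone_public_port }}"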
Change-Id: I50c46c674134f9958ee4357f0f4eed5483af2214
Implements: blueprint service-hostnames
With this change, an operator is able to stop a service's containers without
stopping all services on a host.
This change is the starting point for fast-forward upgrade support.
In subsequent changes, new flags will be introduced to avoid stopping data
plane services during upgrades.
Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
Implements: blueprint support-stop-containers
The variable {{ node_config_directory }} is used for the configuration
directory on the remote hosts, and should not be used for paths on the
deploy host (localhost).
This changes the default value of the TLS certificate and CA file to
reference {{ CONFIG_DIR }}, in line with the directory used for
admin-openrc.sh (as of I0709482ead4b7a67e82796e17f85bde151e71bc0).
This change also introduces a variable, {{ node_config }}, that
references {{ CONFIG_DIR | default('/etc/kolla') }}, to remove
duplication.
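Roughly (the certificate variable and path are shown for illustration only):
  node_config: "{{ CONFIG_DIR | default('/etc/kolla') }}"
  kolla_external_fqdn_cert: "{{ node_config }}/certificates/haproxy.pem"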
Change-Id: Ibd82ac78630ebfff5824c329d7399e1e900c0ee0
Closes-Bug: #1804025
Having all services in one giant haproxy file makes altering
configuration for a service both painful and dangerous. Each service
should be configured with a simple set of variables and rendered with a
single unified template.
Two new templates are available:
* haproxy_single_service_listen.cfg.j2: close to the original style, but
only one service per file
* haproxy_single_service_split.cfg.j2: using the newer haproxy syntax
for separated frontend and backend
For now the default will be the single listen block, for ease of
transition.
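With the unified template, a service role only needs to describe its
frontends/backends with a small dict, along these lines (the exact shape is
illustrative):

  keystone_services:
    keystone:
      # container_name, group, image, volumes, etc. omitted
      haproxy:
        keystone_internal:
          enabled: "{{ enable_keystone }}"
          mode: "http"
          external: false
          port: "{{ keystone_public_port }}"
        keystone_external:
          enabled: "{{ enable_keystone }}"
          mode: "http"
          external: true
          port: "{{ keystone_public_port }}"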
Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b
Currently, kolla dev mode only supports cloning the master branch from git.
Add a version tag to support cloning a dedicated branch.
Change-Id: I88de238e5dc7461ba0662a3ecea9a2d80fd0db60
With more recent versions of Ansible, we should now use "is" instead of "|"
for jinja tests.
This change updates it accordingly.
Change-Id: I6fba56fca182349972e8b0ee5452b37aa4090e0c
This commit applies resource constraints to a few more OpenStack services.
A follow-up commit will apply constraints to the last set of services.
Depends-on: Icafa54baca24d2de64238222a5677b9d8b90e2aa
Change-Id: I39004f54281f97d53dfa4b1dbcf248650ad6f186
1. Add the role-enabled check for some projects
2. Adjust where the file is created for keystone, to keep it consistent with
the other roles
Change-Id: Id2b893ba546b3adf41d97927f8d20dca403a0457
Add become to all tasks that use the module "kolla_docker"
Change-Id: I4309c4011687b88ec31d739fd8f834fe2326ff10
Partial-Implements: blueprint ansible-specific-task-become
- Rename action and serial to kolla_action and kolla_serial
- Use become instead of "sudo <command>" in shell tasks
- Remove quotes from failed_when and changed_when in rabbitmq tasks
Change-Id: I78cb60168aaa40bb6439198283546b7faf33917c
Implements: blueprint migrate-to-ansible-2-2-0
- Remove the useless module_extra_vars; this is a historical leftover. In the
past, we used 'docker exec kolla_toolbox ansible xxx' to run modules on the
target node, so complex data had to be passed through extra_vars. Now that we
are using the kolla_toolbox module, there is no need for extra_vars anymore.
- Remove some useless 'until' retries.
Change-Id: I72ed28001202917f9a82a1c3ea33cd6319911ec8
keystone_* containers are created via the kolla_docker ansible module
with common_options set to docker_common_options. However, when the
containers are checked, common_options are not passed to the
kolla_docker ansible module. This can cause the keystone_* containers
to be restarted during a reconfigure when there are no changes to
keystone configuration.
Add the common_options argument to the kolla_docker ansible module when
checking the keystone containers and set it to docker_common_options.
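The check task then passes the same options as container creation, roughly
(details are illustrative):

  - name: Check keystone containers
    kolla_docker:
      action: "compare_container"
      common_options: "{{ docker_common_options }}"
      name: "{{ item.value.container_name }}"
      image: "{{ item.value.image }}"
      volumes: "{{ item.value.volumes }}"
    when: inventory_hostname in groups[item.value.group]
    with_dict: "{{ keystone_services }}"
    notify:
      - "Restart {{ item.key }} container"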
Change-Id: I44aefcf3d71faecaf1ffe384fd5a2f611e584a37
Closes-Bug: #1759922
This change makes it so that, if preconfigured database users are used, no
attempt is made to change the log_bin_trust_function_creators mysql variable.
The upgrade docs have also been updated.
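A sketch of the guard, assuming the existing kolla-ansible variable names
(module arguments trimmed to the relevant ones):

  - name: Set log_bin_trust_function_creators
    kolla_toolbox:
      module_name: mysql_variables
      module_args:
        login_host: "{{ database_address }}"
        login_port: "{{ database_port }}"
        login_user: "{{ database_user }}"
        login_password: "{{ database_password }}"
        variable: log_bin_trust_function_creators
        value: 1
    when: not use_preconfigured_databases | bool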
Change-Id: I356313952d435de6d3b5444c0dd8a71f45aee452
Closes-Bug: 1748269
Support custom policy files, in YAML or JSON format, for the following
services:
- Keystone
- Glance
- Nova
- Cinder
The policy file is copied only if it exists.
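A sketch of the idea (variable and file names are illustrative):

  - name: Check for a custom policy file
    stat:
      path: "{{ node_custom_config }}/keystone/{{ item }}"
    delegate_to: localhost
    run_once: true
    register: keystone_policy_files
    with_items:
      - "policy.yaml"
      - "policy.json"
  # ... followed by copying whichever file exists, if any.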
Change-Id: I4a9415d82322aed68c9b7650bdf346f58fa49e2a
Implements: blueprint support-custom-policy-yaml
Co-authored-By: Duong Ha-Quang <duonghq@vn.fujitsu.com>
This change allows the following use cases:
1. Using an already-configured MariaDB / MySQL server / Cluster
2. Using already-created DB users, without requiring root DB access.
Update: added external mariadb precheck
Change-Id: I78b0d178306d7c5293b0bf53e445f19f18b4b824
Implements: blueprint external-mariadb-support.
Closes-Bug: #1603121
Provide support for kolla dev mode in Keystone. When the 'kolla_dev_mode' or
'keystone_dev_mode' variables are enabled, the source code of the Keystone
project is cloned and bind-mounted.
Partially implements: blueprint mount-sources
Change-Id: Ie4cf401ecd9a507e739a53dfdf16f65292ab57e5
Implement rolling upgrade for Keystone, following the upgrade-without-downtime
procedure [1]:
1. Expand and migrate the database on the first keystone node
2. Upgrade all nodes sequentially, updating each node's configuration file
with the latest release version
3. Contract the database on the last keystone node
With this patch there is a small downtime while all containers are restarted.
It will be fixed in another patch.
[1] http://docs.openstack.org/developer/keystone/upgrading.html#upgrading-without-downtime
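Illustratively, the three phases map onto keystone-manage as follows (how the
role wraps these in bootstrap containers is omitted):

  - name: Expand the database schema (first keystone node)
    command: keystone-manage db_sync --expand
    run_once: true

  - name: Migrate data to the new schema (first keystone node)
    command: keystone-manage db_sync --migrate
    run_once: true

  # ... upgrade and restart the keystone containers node by node ...

  - name: Contract the database schema (last keystone node)
    command: keystone-manage db_sync --contract
    run_once: true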
Co-Authored-By: Surya Prakash Singh <surya.singh@nectechnologies.in>
Co-Authored-By: Eduardo Gonzalez <dabarren@gmail.com>
Co-Authored-By: Duong Ha-Quang <duonghq@vn.fujitsu.com>
Partially-Implements: blueprint ks-rolling-upgrade-role
Change-Id: I2159af567c40848840ff5e483e7d1f6de760b435
Add become to only the necessary tasks in roles:
- glance
- heat
- horizon
- keystone
- neutron
- nova
- openvswitch
The gate is also updated to use the 'become' feature.
Change-Id: I2f3f27306e9f384148e1ad4d54d8da2ebef34d00
Partial-Implements: blueprint ansible-specific-task-become
When deploying with TLS enabled on the public endpoints, ansible modules fail
because the SSL certificates are self-signed.
This change adds a new variable to allow customization of which endpoints
ansible should connect to.
It defaults to admin, because the admin auth parameters default to the admin
endpoint.
Change-Id: Ic3ed58cf9c9579cae08a11bbfe6fce983b5a9cbc
Closes-Bug: #1720995
Currently, OpenStack service configuration can be overridden using several
files:
- /etc/kolla/config/<< service name >>/<< config file >>
- /etc/kolla/config/<< service name >>/<<host>>/<< config file >>
- /etc/kolla/config/global.conf
- /etc/kolla/config/database.conf
- /etc/kolla/config/messaging.conf
Only per-service configuration is currently documented here:
https://github.com/openstack/kolla-ansible/blob/master/doc/advanced-configuration.rst#L164
Globally modifying service configuration is also possible, but it can be done
in 3 different ways, none of which are documented:
- /etc/kolla/config/global.conf
- /etc/kolla/config/database.conf
- /etc/kolla/config/messaging.conf
database.conf and messaging.conf seem redundant with global.conf.
In order to simplify the codebase, it seems logical to remove them.
Documentation has been added for overriding configuration globally, and a
release note has been added too.
Closes-Bug: #1682479
Change-Id: I5d922dfc0d938173bad34ac64e490b78db1b7e31
The init fernet task fails if the keystone_fernet container is not running
with its ssh port bound.
This change adds a check to ensure all keystone_fernet containers are running
before initializing fernet tokens.
Change-Id: Ib95bb5a47a9174f1a00b82cc8b697c0dc19c848e
Closes-Bug: #1704758