Mark Goddard 6c1442c385 Fix keystone fernet key rotation scheduling
Right now every controller rotates fernet keys. This is useful because,
should any controller die, we know the remaining ones will still rotate
the keys. However, we are currently over-rotating the keys.

When we over-rotate keys, we get logs like this:

 This is not a recognized Fernet token <token> TokenNotFound

Most clients can recover by requesting a new token, but some clients
(like Nova passing tokens to other services) can't do that, because they
don't have the password needed to obtain a new token.

With three controllers, the crontab in keystone-fernet shows the
once-a-day rotation correctly staggered across the three controllers:

ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
0 0 * * * /usr/bin/fernet-rotate.sh
ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
0 8 * * * /usr/bin/fernet-rotate.sh
ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
0 16 * * * /usr/bin/fernet-rotate.sh

Currently with three controllers we have this keystone config:

[token]
expiration = 86400 (note: the keystone default is one hour)
allow_expired_window = 172800 (this is the keystone default)

[fernet_tokens]
max_active_keys = 4

Currently, kolla-ansible configures key rotation according to the following:

   rotation_interval = token_expiration / num_hosts

This means we rotate keys more quickly the more hosts we have, which doesn't
make much sense.
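As an illustration of the problem (a minimal sketch; the variable names
are illustrative, not kolla-ansible's actual ones):

```python
# Sketch of the old kolla-ansible scheduling rule: the rotation interval
# shrinks as hosts are added, so more controllers means faster rotation.
token_expiration = 86400  # seconds ([token] expiration above)

for num_hosts in (1, 3, 5):
    rotation_interval = token_expiration // num_hosts
    print(num_hosts, rotation_interval)
# With 3 hosts, keys rotate every 28800 s (8 hours), matching the
# staggered crontabs shown above.
```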

Keystone docs state:

   max_active_keys =
     ((token_expiration + allow_expired_window) / rotation_interval) + 2

For details see:
https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html
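Plugging the configuration above into that formula shows the mismatch
(a hedged sketch using the values quoted above):

```python
# Required max_active_keys for the old rotation interval, per the keystone
# formula above. With three controllers the old scheme rotates every 8 hours.
token_expiration = 86400
allow_expired_window = 172800
rotation_interval = token_expiration // 3  # 28800 s under the old scheme

required = (token_expiration + allow_expired_window) // rotation_interval + 2
print(required)  # 11, far more than the configured max_active_keys = 4
```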

Rotation is based on pushing out a staging key first, so should any
server start using that key, the other servers will consider it valid.
Then each server in turn promotes the staging key to primary, each in
turn demoting the existing primary key to a secondary key. Eventually
you prune the secondary keys once there is no token in the wild that
would need to be decrypted using them. So this all makes sense.
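That promotion cycle can be sketched as a toy model (this is not
keystone's implementation; keystone stores keys as numbered files, with
key 0 staged and the highest index primary):

```python
# Toy model of keystone-style fernet rotation. Key 0 is the staged key;
# the highest index is the primary; the rest are secondary keys kept so
# older tokens can still be validated.

def rotate(keys, max_active_keys):
    """Promote the staged key to primary and stage a fresh key."""
    new_primary = max(keys) + 1
    keys[new_primary] = keys.pop(0)      # staged key becomes the new primary
    keys[0] = f"new-key-{new_primary}"   # stage a fresh key for the next cycle
    # Prune the oldest secondary keys once we exceed max_active_keys.
    while len(keys) > max_active_keys:
        oldest_secondary = min(k for k in keys if k != 0)
        del keys[oldest_secondary]
    return keys

keys = {0: "staged", 1: "primary"}
for _ in range(4):
    keys = rotate(keys, max_active_keys=3)
print(sorted(keys))  # [0, 4, 5]: one staged, one secondary, one primary
```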

This change adds new variables for fernet_token_allow_expired_window and
fernet_key_rotation_interval, so that we can calculate the correct
number of active keys. We now set the default rotation interval so as to
keep the number of active keys at its minimum of 3 - one primary, one
secondary, one buffer.
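As a sketch of the arithmetic behind that default (variable names are
modelled on the new options and are assumptions, not necessarily the
exact kolla-ansible names):

```python
# Choosing rotation_interval = expiration + allow_expired_window drives the
# keystone formula down to its minimum of 3 active keys.
fernet_token_expiry = 86400
fernet_token_allow_expired_window = 172800
fernet_key_rotation_interval = (fernet_token_expiry
                                + fernet_token_allow_expired_window)  # 259200 s

max_active_keys = ((fernet_token_expiry + fernet_token_allow_expired_window)
                   // fernet_key_rotation_interval) + 2
print(max_active_keys)  # 3: one primary, one secondary, one buffer
```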

This change also fixes the fernet cron job generator, which was broken
in the following cases:

* requesting an interval of more than 1 day resulted in no jobs
* requesting an interval of more than 60 minutes, unless an exact
  multiple of 60 minutes, resulted in no jobs

It should now be possible to request any interval up to a week divided
by the number of hosts.
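The staggering the generator performs can be sketched as follows (an
illustrative reimplementation of the offset arithmetic only, not the
actual fernet_rotate_cron_generator.py script):

```python
# Stagger rotation jobs evenly across hosts. The real generator takes the
# interval in minutes (-t) plus a host index (-i) and host count (-n), and
# emits full crontab entries; this shows only the offset arithmetic.
def stagger_offsets(interval_minutes, num_hosts):
    """Offset, in minutes, at which each host should run its rotation."""
    return [i * interval_minutes // num_hosts for i in range(num_hosts)]

# Three hosts rotating once per day: offsets of 0h, 8h and 16h, matching
# the staggered crontabs shown above.
print(stagger_offsets(24 * 60, 3))  # [0, 480, 960]
```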

Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a
Closes-Bug: #1809469
2019-05-17 14:05:48 +01:00


---
- name: Ensuring config directories exist
  file:
    path: "{{ node_config_directory }}/{{ item.key }}"
    state: "directory"
    owner: "{{ config_owner_user }}"
    group: "{{ config_owner_group }}"
    mode: "0770"
  become: true
  when:
    - inventory_hostname in groups[item.value.group]
    - item.value.enabled | bool
  with_dict: "{{ keystone_services }}"

- name: Check if policies shall be overwritten
  local_action: stat path="{{ item }}"
  run_once: True
  register: keystone_policy
  with_first_found:
    - files: "{{ supported_policy_format_list }}"
      paths:
        - "{{ node_custom_config }}/keystone/"
      skip: true

- name: Set keystone policy file
  set_fact:
    keystone_policy_file: "{{ keystone_policy.results.0.stat.path | basename }}"
    keystone_policy_file_path: "{{ keystone_policy.results.0.stat.path }}"
  when:
    - keystone_policy.results

- name: Check if Keystone Domain specific settings enabled
  local_action: stat path="{{ node_custom_config }}/keystone/domains"
  run_once: True
  register: keystone_domain_directory

- name: Creating Keystone Domain directory
  vars:
    keystone: "{{ keystone_services.keystone }}"
  file:
    dest: "{{ node_config_directory }}/keystone/domains/"
    state: "directory"
    mode: "0770"
  become: true
  when:
    - inventory_hostname in groups[keystone.group]
    - keystone.enabled | bool

- name: Copying over config.json files for services
  template:
    src: "{{ item.key }}.json.j2"
    dest: "{{ node_config_directory }}/{{ item.key }}/config.json"
    mode: "0660"
  register: keystone_config_jsons
  become: true
  with_dict: "{{ keystone_services }}"
  when:
    - inventory_hostname in groups[item.value.group]
    - item.value.enabled | bool
  notify:
    - Restart keystone container
    - Restart keystone-ssh container
    - Restart keystone-fernet container

- name: Copying over keystone.conf
  vars:
    service_name: "{{ item.key }}"
  merge_configs:
    sources:
      - "{{ role_path }}/templates/keystone.conf.j2"
      - "{{ node_custom_config }}/global.conf"
      - "{{ node_custom_config }}/keystone.conf"
      - "{{ node_custom_config }}/keystone/{{ item.key }}.conf"
      - "{{ node_custom_config }}/keystone/{{ inventory_hostname }}/keystone.conf"
    dest: "{{ node_config_directory }}/{{ item.key }}/keystone.conf"
    mode: "0660"
  become: true
  register: keystone_confs
  with_dict: "{{ keystone_services }}"
  when:
    - inventory_hostname in groups[item.value.group]
    - item.key in [ "keystone", "keystone-fernet" ]
    - item.value.enabled | bool
  notify:
    - Restart keystone container
    - Restart keystone-fernet container

- name: Creating Keystone Domain directory
  vars:
    keystone: "{{ keystone_services.keystone }}"
  file:
    dest: "{{ node_config_directory }}/keystone/domains/"
    state: "directory"
    mode: "0770"
  become: true
  when:
    - inventory_hostname in groups[keystone.group]
    - keystone.enabled | bool
    - keystone_domain_directory.stat.exists

- name: Get file list in custom domains folder
  local_action: find path="{{ node_custom_config }}/keystone/domains" recurse=no file_type=file
  register: keystone_domains
  when: keystone_domain_directory.stat.exists

- name: Copying Keystone Domain specific settings
  vars:
    keystone: "{{ keystone_services.keystone }}"
  template:
    src: "{{ item.path }}"
    dest: "{{ node_config_directory }}/keystone/domains/"
    mode: "0660"
  become: true
  register: keystone_domains
  when:
    - inventory_hostname in groups[keystone.group]
    - keystone.enabled | bool
    - keystone_domain_directory.stat.exists
  with_items: "{{ keystone_domains.files|default([]) }}"
  notify:
    - Restart keystone container

- name: Copying over existing policy file
  template:
    src: "{{ keystone_policy_file_path }}"
    dest: "{{ node_config_directory }}/{{ item.key }}/{{ keystone_policy_file }}"
    mode: "0660"
  become: true
  register: keystone_policy_overwriting
  when:
    - inventory_hostname in groups[item.value.group]
    - item.key in [ "keystone", "keystone-fernet" ]
    - item.value.enabled | bool
    - keystone_policy_file is defined
  with_dict: "{{ keystone_services }}"
  notify:
    - Restart keystone container
    - Restart keystone-fernet container

- name: Copying over wsgi-keystone.conf
  vars:
    keystone: "{{ keystone_services.keystone }}"
  template:
    src: "{{ item }}"
    dest: "{{ node_config_directory }}/keystone/wsgi-keystone.conf"
    mode: "0660"
  become: true
  register: keystone_wsgi
  when:
    - inventory_hostname in groups[keystone.group]
    - keystone.enabled | bool
  with_first_found:
    - "{{ node_custom_config }}/keystone/{{ inventory_hostname }}/wsgi-keystone.conf"
    - "{{ node_custom_config }}/keystone/wsgi-keystone.conf"
    - "wsgi-keystone.conf.j2"
  notify:
    - Restart keystone container

- name: Checking whether keystone-paste.ini file exists
  vars:
    keystone: "{{ keystone_services.keystone }}"
  local_action: stat path="{{ node_custom_config }}/keystone/keystone-paste.ini"
  run_once: True
  register: check_keystone_paste_ini
  when:
    - keystone.enabled | bool

- name: Copying over keystone-paste.ini
  vars:
    keystone: "{{ keystone_services.keystone }}"
  template:
    src: "{{ node_custom_config }}/keystone/keystone-paste.ini"
    dest: "{{ node_config_directory }}/keystone/keystone-paste.ini"
    mode: "0660"
  become: true
  register: keystone_paste_ini
  when:
    - inventory_hostname in groups[keystone.group]
    - keystone.enabled | bool
    - check_keystone_paste_ini.stat.exists
  notify:
    - Restart keystone container

- name: Generate the required cron jobs for the node
  command: >
    python {{ role_path }}/files/fernet_rotate_cron_generator.py
    -t {{ (fernet_key_rotation_interval | int) // 60 }}
    -i {{ groups['keystone'].index(inventory_hostname) }}
    -n {{ (groups['keystone'] | length) }}
  register: cron_jobs_json
  when: keystone_token_provider == 'fernet'
  delegate_to: localhost

- name: Save the returned from cron jobs for building the crontab
  set_fact:
    cron_jobs: "{{ (cron_jobs_json.stdout | from_json).cron_jobs }}"
  ignore_errors: "{{ ansible_check_mode }}"
  when: keystone_token_provider == 'fernet'

- name: Copying files for keystone-fernet
  vars:
    keystone_fernet: "{{ keystone_services['keystone-fernet'] }}"
  template:
    src: "{{ item.src }}"
    dest: "{{ node_config_directory }}/keystone-fernet/{{ item.dest }}"
    mode: "0660"
  become: true
  register: keystone_fernet_confs
  ignore_errors: "{{ ansible_check_mode }}"
  with_items:
    - { src: "crontab.j2", dest: "crontab" }
    - { src: "fernet-rotate.sh.j2", dest: "fernet-rotate.sh" }
    - { src: "fernet-node-sync.sh.j2", dest: "fernet-node-sync.sh" }
    - { src: "id_rsa", dest: "id_rsa" }
    - { src: "ssh_config.j2", dest: "ssh_config" }
  when:
    - inventory_hostname in groups[keystone_fernet.group]
    - keystone_fernet.enabled | bool
  notify:
    - Restart keystone-fernet container

- name: Copying files for keystone-ssh
  vars:
    keystone_ssh: "{{ keystone_services['keystone-ssh'] }}"
  template:
    src: "{{ item.src }}"
    dest: "{{ node_config_directory }}/keystone-ssh/{{ item.dest }}"
    mode: "0660"
  become: true
  register: keystone_ssh_confs
  with_items:
    - { src: "sshd_config.j2", dest: "sshd_config" }
    - { src: "id_rsa.pub", dest: "id_rsa.pub" }
  when:
    - inventory_hostname in groups[keystone_ssh.group]
    - keystone_ssh.enabled | bool
  notify:
    - Restart keystone-ssh container

- name: Check keystone containers
  become: true
  kolla_docker:
    action: "compare_container"
    common_options: "{{ docker_common_options }}"
    name: "{{ item.value.container_name }}"
    image: "{{ item.value.image }}"
    volumes: "{{ item.value.volumes|reject('equalto', '')|list }}"
    dimensions: "{{ item.value.dimensions }}"
  when:
    - kolla_action != "config"
    - inventory_hostname in groups[item.value.group]
    - item.value.enabled | bool
  register: check_keystone_containers
  with_dict: "{{ keystone_services }}"
  notify:
    - Restart keystone container
    - Restart keystone-ssh container
    - Restart keystone-fernet container