Keytabs are somewhat longer than what is being tested; up to 100
bytes or so. At that size the base64-encoded data breaks over
multiple lines, so more care is needed about quoting.
Update the testing to a longer keytab (100 bytes of random data) and
fix up the quoting. Also enable no_log to avoid putting key
material into the logs.
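For illustration, the kind of task involved looks something like
this (variable and file names here are assumptions, not the actual
role):

  - name: Install keytab from secret hostvar
    copy:
      # at ~100 bytes the base64 value spans multiple lines, so the
      # whole expression needs quoting
      content: "{{ service_keytab_content | b64decode }}"
      dest: /etc/service.keytab
      mode: '0600'
    no_log: true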
Change-Id: I73c391a2ebd2c962dc9a422f9d44265160210852
This move was prompted by wishing to expose the mirror update logs for
the rsync updates so that debugging problems does not require a root
user (note: not actually done in this change; will be a follow-on).
Rather than start hacking at puppet, the rsync mirror scripts make a
nice delineation point for starting an Ansible-first/Bionic update.
Most magic is included in the scripts, so there is not much more to do
than copy them. The host uses the existing kerberos and openafs roles
and copies the key material into place (to be added before merge).
Note the scripts are removed from the extant puppet so we don't have
two updates happening simultaneously. This will also require a manual
clean to remove the cron jobs as a once-off when merging.
The other part of mirror-update is the reprepro-based scripts for
the various debuntu repositories. They are left as future work for now.
Testing is added to ensure dependencies and scripts are all in place.
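The copy step is essentially along these lines (paths and patterns
assumed, not the actual role layout):

  - name: Copy rsync mirror update scripts
    copy:
      src: "{{ item }}"
      dest: /usr/local/bin/
      mode: '0755'
    with_fileglob:
      - files/*.sh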
Change-Id: I525ac18b55f0e11b0a541b51fa97ee5d6512bf70
Donnyd has kindly offered us access to fortnebula's test cloud. This
adds clouds.yaml entries to bridge and nodepool so that we can take
advantage of these resources.
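The new entries are shaped roughly like this (the auth details shown
are placeholders, with the real values coming from the secret
hostvars):

  clouds:
    fortnebula:
      region_name: RegionOne                        # placeholder
      auth:
        auth_url: https://keystone.example.com:5000 # placeholder
        username: openstackci                       # placeholder
        password: '{{ fortnebula_password }}'       # from secrets
        project_name: zuul                          # placeholder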
Change-Id: I4ebc261c6f548aca0b3f37dc9b60ffac08029e67
This is an intermediate step to having both kafs and openafs testing
in the gate; this just makes it clear which host is which.
Change-Id: I8cd006227ed47ad5f2c5eec664083477dd7ba397
In order to have nodepool build images and upload them to control
plane clouds, add them to the clouds.yaml on the nodepool-builder
hosts. Keep them out of the launcher configs by splitting the config
templates. So that we can keep our copies of things to a minimum,
create a group called "control-plane-clouds" and put bridge and nb0*
in it.
There are mentions of clouds in here that we no longer use; a
followup patch will clean those up.
NOTE: Requires shifting the clouds config dict from
host_vars/bridge.openstack.org.yaml to group_vars/control-plane-clouds.yaml
in the secrets on bridge.
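Schematically, the new group is just (a sketch of the intent; the
exact definition lives in the inventory/group machinery):

  # bridge and the nodepool builders share the control plane
  # cloud credentials
  control-plane-clouds:
    - bridge.openstack.org
    - nb0*.openstack.org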
Needed-By: https://review.opendev.org/640044
Change-Id: Id1161bca8f23129202599dba299c288a6aa29212
This implements mirrors that live in the opendev.org namespace. The
implementation is Ansible-native for deployment on a Bionic node.
The hostname prefix remains the same (mirrorXX.region.provider.) but
the groups.yaml splits the opendev.org mirrors into a separate group.
The matches in the puppet group are also updated so as not to run
puppet on these hosts.
The kerberos and openafs client parts do not need any updating and
work on the Bionic host.
The hosts are setup to provision certificates for themselves from
letsencrypt. Note we've added a new handler for mirror nodes to use
that restarts apache on certificate issue/renewal.
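The handler is wired up along these lines (the handler name is
assumed to follow the letsencrypt role's certificate-name
convention):

  handlers:
    - name: letsencrypt updated mirror01-main
      service:
        name: apache2
        state: restarted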
The new "mirror" role is a port of the existing puppet mirror.pp. It
installs apache, sets up some modules, makes some symlinks, sets up a
cleanup cron job and installs the apache vhost configuration.
The vhost configuration is also ported from the extant puppet. It is
simplified somewhat, but the biggest change is that we have extracted
the main port 80 configuration into a macro which is applied to both
port 80 and 443; i.e. the host will have SSL support. The other ports
are left alone for now, but can be updated in due course.
Thus we should be able to CNAME the existing mirrors to new nodes, and
any existing http access can continue. We can update our mirror setup
scripts to point to https resources as appropriate.
Change-Id: Iec576d631dd5b02f6b9fb445ee600be060f9cf1e
This is a first step toward making smaller playbooks which can be
run by Zuul in CD.
Zuul should be able to handle missing projects now, so move it out
of the puppet_git playbook and into puppet.
Make the base playbook be merely the base roles.
Make service playbooks for each service.
Remove the run-docker job because it's covered by service jobs.
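Each service playbook then reduces to roughly this shape (playbook
path and role names illustrative):

  # playbooks/service-nodepool.yaml (name illustrative)
  - hosts: nodepool
    roles:
      - iptables
      - configure-openstacksdk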
Stop testing in testinfra that puppet is installed. That check is
only passing accidentally, because the non-puppeted hosts all happen
to be bionic nodes and we don't install puppet on bionic. Instead,
we can now rely on actually *running* puppet when it's important,
such as in the eavesdrop job. Also remove the installation of puppet
on the nodes in the base job, since it only verifies that a
synthetic installation of puppet on nodes we don't use works.
Don't run remote_puppet_git on gitea for now - it's too slow. A
followup patch will rework gitea project creation to not take hours.
Change-Id: Ibb78341c2c6be28005cea73542e829d8f7cfab08
This change proposes calling a handler each time a certificate is
created/updated. The handler name is based on the name of the
certificate given in the letsencrypt_certs variable, as described in
the role documentation.
Because Ansible considers calling a handler with no listeners an
error, each letsencrypt user will need to provide a handler.
One simple option illustrated here is just to produce a stamp file.
This can facilitate cross-playbook and even cross-orchestration-tool
communication. For example, puppet or other ansible playbooks can
detect this stamp file and schedule their reloads, etc. then remove
the stamp file. It is conceivable more complex listeners could be
setup via other roles, etc. should the need arise.
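A minimal listener that just produces the stamp file looks like this
(handler name per the certificate-name convention; the stamp path is
an assumption):

  handlers:
    - name: letsencrypt updated graphite01-main
      file:
        path: /var/run/letsencrypt-updated-graphite01-main  # assumed
        state: touch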
A test is added to make sure the stamp file is created for the
letsencrypt test hosts, which are always generating a new certificate
in the gate test.
Change-Id: I4e0609c4751643d6e0c8d9eaa38f184e0ce5452e
Note, this does not have complete tests yet (we will need to update
the job to start a swift for that).
Change-Id: I2ee7a9e4fb503a3431366c16c380cf09327f6050
We currently only have letsencrypt_test_only as a single flag that
sets tests to use the letsencrypt staging environment and also
generates a self-signed certificate.
However, for initial testing we actually want to fully generate
certificates on hosts, but using the staging environment (i.e. *not*
generate self-signed certs). Thus we need to split this option into
two, so the gate tests still use staging+self-signed, but in-progress
production hosts can just use the staging flag.
These variables are split, and graphite01.opendev.org is made to
create staging certificates.
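Assuming names along these lines for the two flags (illustrative;
whatever they are finally called):

  # gate test hosts: staging endpoint and self-signed certificates
  letsencrypt_use_staging: true
  letsencrypt_self_sign_only: true
  # graphite01.opendev.org instead keeps use_staging true but sets
  # letsencrypt_self_sign_only: false to make real staging requests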
Also remove some debugging that is no longer necessary.
Change-Id: I08959ba904f821c9408d8f363542502cd76a30a4
This change contains the roles and testing for deploying certificates
on hosts using letsencrypt with domain authentication.
From a top level, the process is implemented in the roles as follows:
1) letsencrypt-acme-sh-install
This role installs the acme.sh tool on hosts in the letsencrypt
group, along with a small custom driver script to help parse output
that is used by later roles.
2) letsencrypt-request-certs
This role runs on each host, and reads a host variable describing
the certificates required. It uses the acme.sh tool (via the
driver) to request the certificates from letsencrypt. It populates
a global Ansible variable with the authentication TXT records
required.
If the certificate exists on the host and is not within the renewal
period, it should do nothing.
3) letsencrypt-install-txt-record
This role runs on the adns server. It installs the TXT records
generated in step 2 to the acme.opendev.org domain and then
refreshes the server. Hosts wanting certificates will have
pre-provisioned CNAME records for _acme-challenge.host.opendev.org
pointing to acme.opendev.org.
4) letsencrypt-create-certs
This role runs on each host, reading the same variable as in step
2. However this time the acme.sh tool is run to authenticate and
create the certificates, which should now work correctly via the
TXT records from step 3. After this, the host will have the
full certificate material.
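For reference, the per-host variable read in steps 2 and 4 has
roughly this shape (layout assumed from the role documentation):

  letsencrypt_certs:
    hostname-main:               # certificate name
      - hostname01.opendev.org   # primary domain
      - hostname01.openstack.org # additional alt name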
Testing is added via testinfra. For testing purposes requests are
made to the staging letsencrypt servers and a self-signed certificate
is provisioned in step 4 (as the authentication is not available
during CI). We test that the DNS TXT records are created locally on
the CI adns server, however.
Related-Spec: https://review.openstack.org/587283
Change-Id: I1f66da614751a29cc565b37cdc9ff34d70fdfd3f
This adds the concept of an unmanaged domain; for unmanaged domains we
will write out the zone file only if it doesn't already exist.
acme.opendev.org is added as an unmanaged domain. It will be managed
by other ansible roles which add TXT records for ACME authentication.
The initial template comes from the dependent change, and this ensures
the bind configuration is always valid.
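In Ansible terms, "write only if it doesn't already exist" is
essentially a copy with force disabled, e.g. (paths assumed):

  - name: Seed unmanaged zone file if not already present
    copy:
      src: acme.opendev.org.zone.db
      dest: /var/lib/bind/zones/acme.opendev.org/zone.db  # assumed
      force: no   # never clobber records other roles have added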
For flexibility and testing purposes, we allow passing an extra
refspec and version to the git checkout. This is one way to pull in
changes for speculative CI runs (I looked into having the hosts
under test check out from Zuul, but by the time we're three ansible
calls deep on the DNS hosts-under-test it's a real pain. For the
number of times we update this, it's easier to just allow a
speculative change that can take a gerrit URL; for an example see [1])
[1] https://review.openstack.org/#/c/641155/10/playbooks/group_vars/dns.yaml
Testing is enhanced to check for zone files and correct configuration
stanzas.
Depends-On: https://review.openstack.org/641154
Depends-On: https://review.openstack.org/641168
Change-Id: I9ef5cfc850c3458c63aff46cfaa0d49a5d194e87
This allows the zones to load, which is useful in follow-on changes
where we can query them on the host from testinfra to make sure it's
all working.
Change-Id: I9d22c07ce2d1ebad67b0f1ca222c1b457779ce47
The run_all cron running in test jobs is unawesome because it can
cause the inventory overrides we put in for the testing to get
overwritten with the real inventory. We don't want test jobs
attempting to run against real hosts.
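One way to express this (variable name illustrative) is to gate the
cron entry on a flag the test framework overrides:

  - name: Install run_all cron
    cron:
      name: run_all
      minute: '*/15'   # interval illustrative
      job: /opt/system-config/run_all.sh
      disabled: "{{ run_all_cron_disabled | default(false) }}"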
Change-Id: I733f66ff24b329d193799e6063953e88dd6a35b1
This runs a haproxy which is strikingly similar to the one we
currently run for git.openstack.org, but it is run in a docker
container.
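Schematically, the container side is something like (compose layout
assumed):

  # docker-compose.yaml (sketch)
  version: '2'
  services:
    haproxy:
      image: haproxy:latest
      network_mode: host
      restart: always
      volumes:
        - /var/haproxy/etc:/usr/local/etc/haproxy:ro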
Change-Id: I647ae8c02eb2cd4f3db2b203d61a181f7eb632d2
Add the gitea k8s cluster to root's .kube/config file on bridge.
The default context does not exist in order to force us to explicitly
specify a context for all commands (so that we do not inadvertently
deploy something on the wrong k8s cluster).
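That is, the kubeconfig carries the contexts but leaves
current-context empty, something like (names illustrative):

  # ~root/.kube/config (sketch)
  apiVersion: v1
  kind: Config
  current-context: ''   # unset: every command must pass --context
  contexts:
    - name: gitea
      context:
        cluster: gitea
        user: gitea-admin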
Change-Id: I53368c76e6f5b3ab45b1982e9a977f9ce9f08581
This is a role for installing docker on our control-plane servers.
It is based on install-docker from zuul-jobs.
Basic testinfra tests are added; because docker fiddles the iptables
rules in magic ways, the firewall testing is moved out of the base
tests and modified to partially match our base firewall configuration.
Change-Id: Ia4de5032789ff0f2b07d4f93c0c52cf94aa9c25c
This adds connection information for an experimental kubernetes
cluster hosted in vexxhost-sjc1 to the nodepool servers.
Change-Id: Ie7aad841df1779ddba69315ddd9e0ae96a1c8c53
The constructed inventory plugin allows expressing additional groups,
but it's too heavyweight for our needs. Additionally, it is a full
inventory plugin that will add hosts to the inventory if they don't
exist.
What we want instead is something that will associate existing hosts
(that would have come from another source) with groups.
This also switches to using emergency.yaml instead of emergency, which
uses the same format.
We add an extra groups file for gate testing to ensure the CI nodes
get puppet installed.
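The groups file itself is then plain YAML mapping group names to
host patterns, along these lines (contents illustrative):

  groups:
    control-plane-clouds:
      - bridge.openstack.org
      - nb0*.openstack.org
    puppet:
      - '*.openstack.org'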
Change-Id: Iea8b2eb2e9c723aca06f75d3d3307893e320cced
Deployment of the nodepool clouds.yaml file is currently failing with
FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'rackspace_username' is undefined"}
This is because the variables in the group_vars on bridge.o.o are all
prefixed with "nodepool_". Switch the template to use the prefixed
names.
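For example (key names beyond the one in the error message are
assumed):

  # clouds.yaml template fragment (sketch)
  clouds:
    rax:
      auth:
        username: '{{ nodepool_rackspace_username }}'
        api_key: '{{ nodepool_rackspace_api_key }}'  # name assumed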
Change-Id: I524cc628138d85e3a31c216d04e4f49bcfaaa4a8
This manages the clouds.yaml files in ansible so that we can get them
updated automatically on bridge.openstack.org (which does not puppet).
Co-Authored-By: James E. Blair <jeblair@redhat.com>
Depends-On: https://review.openstack.org/598378
Change-Id: I2071f2593f57024bc985e18eaf1ffbf6f3d38140
This adds a group var which should normally be the empty list but
can be overridden by the test framework to inject additional iptables
rules. It's used to add the zuul console streaming port. To
accomplish this, the base+extras pattern is adopted for
iptables public tcp/udp ports. This means all host/group vars should
use the "extra" form of the variable rather than the actual variable
defined by the role.
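So a test group var sets only the extra list; for example (variable
name assumed from the pattern described; 19885 is the zuul console
streaming port):

  # group_vars for CI test nodes
  iptables_extra_public_tcp_ports:
    - 19885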
Change-Id: I33fe2b7de4a4ba79c25c0fb41a00e3437cee5463
This adds a job which creates a bridge-like node and bootstraps it,
and then runs the base playbook against all of the node types we
use in our control plane. It uses testinfra to validate the results.
Change-Id: Ibdbaf511bbdaee46e1335f2c83b95ba1553a1d94
Depends-On: https://review.openstack.org/595905