7 Commits

Author SHA1 Message Date
Ian Wienand
814e4be128 Ansible roles for backup
This introduces two new roles for managing the backup-server and hosts
that we wish to back up.

Firstly, the "backup" role runs on hosts we wish to back up.  This
generates and configures a separate ssh key for running bup and
installs the appropriate cron job to run the backup daily.
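
As a rough illustration, the tasks involved might look something like
the following (module choice, key path and script name here are
assumptions for the sketch, not the actual implementation):

    # Generate a dedicated ssh key used only for bup (illustrative)
    - name: Generate backup ssh key
      openssh_keypair:
        path: /root/.ssh/id_bup
        type: ed25519
      register: bup_key

    # Run the backup once a day (hypothetical script path)
    - name: Add daily backup cron job
      cron:
        name: bup-backup
        special_time: daily
        job: /usr/local/bin/run-bup-backup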

The "backup-server" role runs on the backup server (or, indeed,
servers).  It creates users for each backup host, accepts the remote
keys mentioned above and initialises bup.  It is then ready to receive
backups from the remote hosts.
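
A minimal sketch of the server side, assuming one user per backed-up
host and the key registered in the sketch above (user naming, paths
and group name are assumptions):

    # One local user per host we receive backups from (illustrative)
    - name: Create a user for each backup host
      user:
        name: "bup-{{ item }}"
        home: "/opt/backups/bup-{{ item }}"
      loop: "{{ groups['backup'] }}"

    # Accept the remote ssh key generated on each backup host
    - name: Authorise the remote backup key
      authorized_key:
        user: "bup-{{ item }}"
        key: "{{ hostvars[item]['bup_key']['public_key'] }}"
      loop: "{{ groups['backup'] }}"

    # Initialise the bup repository for each user
    - name: Initialise bup
      command: bup init
      become: true
      become_user: "bup-{{ item }}"
      loop: "{{ groups['backup'] }}"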

This eliminates a fairly long-standing requirement for manual setup of
the backup server users and keys; this section is removed from the
documentation.

testinfra coverage is added.

Change-Id: I9bf74df351e056791ed817180436617048224d2c
2019-08-05 16:59:57 +10:00
Ian Wienand
d33105535a Separate openafs CI mirror
This is an intermediate step to having both kafs and openafs testing
in the gate; this just makes it clear which host is which.

Change-Id: I8cd006227ed47ad5f2c5eec664083477dd7ba397
2019-06-17 15:56:09 +10:00
Ian Wienand
670107045a Create opendev mirrors
This implements mirrors that live in the opendev.org namespace.  The
implementation is Ansible native for deployment on a Bionic node.

The hostname prefix remains the same (mirrorXX.region.provider.) but
the groups.yaml splits the opendev.org mirrors into a separate group.
The matches in the puppet group are also updated so as not to run
puppet on these hosts.

The kerberos and openafs client parts do not need any updating and
work on the Bionic host.

The hosts are set up to provision certificates for themselves from
letsencrypt.  Note we've added a new handler for mirror nodes that
restarts apache on certificate issue/renewal.
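
For illustration, such a handler could be as simple as the following
(the handler name is an assumption; the letsencrypt roles would
notify it when a certificate is issued or renewed):

    # Restart apache so it picks up new certificate material
    - name: letsencrypt updated mirror
      service:
        name: apache2
        state: restarted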

The new "mirror" role is a port of the existing puppet mirror.pp.  It
installs apache, sets up some modules, makes some symlinks, sets up a
cleanup cron job and installs the apache vhost configuration.
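
Sketched in outline, with module and job names as assumptions rather
than the actual role contents:

    - name: Install apache
      package:
        name: apache2
        state: present

    # Enable the modules the vhost needs (illustrative selection)
    - name: Enable apache modules
      apache2_module:
        name: "{{ item }}"
      loop:
        - macro
        - rewrite

    # Nightly cleanup of old mirror content (hypothetical script)
    - name: Install cleanup cron job
      cron:
        name: mirror-cleanup
        special_time: daily
        job: /usr/local/bin/mirror-cleanup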

The vhost configuration is also ported from the extant puppet.  It is
simplified somewhat, but the biggest change is that we have extracted
the main port 80 configuration into a macro which is applied to both
port 80 and port 443; i.e. the host will have SSL support.  The other
ports are left alone for now, but can be updated in due course.

Thus we should be able to CNAME the existing mirrors to new nodes, and
any existing http access can continue.  We can update our mirror setup
scripts to point to https resources as appropriate.

Change-Id: Iec576d631dd5b02f6b9fb445ee600be060f9cf1e
2019-05-21 11:08:25 +10:00
James E. Blair
8ad300927e Split the base playbook into services
This is a first step toward making smaller playbooks which can be
run by Zuul in CD.

Zuul should be able to handle missing projects now, so move it from
the puppet_git playbook into the puppet playbook.

Make the base playbook be merely the base roles.

Make service playbooks for each service.
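
For example, a per-service playbook in this scheme might be as small
as the following (file, group and role names are assumptions):

    # playbooks/service-mirror.yaml (hypothetical name)
    - hosts: mirror
      roles:
        - mirror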

Remove the run-docker job because it's covered by service jobs.

Stop testing that puppet is installed in testinfra.  That test was
only passing by accident: the non-puppeted hosts happened to all be
bionic nodes, and we don't install puppet on bionic.  Instead, we can
now rely on actually *running* puppet when it's important, such as in
the eavesdrop job.  Also remove the installation of puppet on the
nodes in the base job, since it only tested the synthetic case of
installing puppet on nodes we don't use.

Don't run remote_puppet_git on gitea for now - it's too slow. A
followup patch will rework gitea project creation to not take hours.

Change-Id: Ibb78341c2c6be28005cea73542e829d8f7cfab08
2019-05-19 07:31:00 -05:00
Ian Wienand
afd907c16d letsencrypt support
This change contains the roles and testing for deploying certificates
on hosts using letsencrypt with domain authentication.

From a top level, the process is implemented in the roles as follows:

1) letsencrypt-acme-sh-install

   This role installs the acme.sh tool on hosts in the letsencrypt
   group, along with a small custom driver script to help parse output
   that is used by later roles.

2) letsencrypt-request-certs

   This role runs on each host, and reads a host variable describing
   the certificates required (a sketch of such a variable follows
   this list).  It uses the acme.sh tool (via the driver) to request
   the certificates from letsencrypt.  It populates a global Ansible
   variable with the authentication TXT records required.

   If the certificate exists on the host and is not within the renewal
   period, it should do nothing.

3) letsencrypt-install-txt-record

   This role runs on the adns server.  It installs the TXT records
   generated in step 2 to the acme.opendev.org domain and then
   refreshes the server.  Hosts wanting certificates will have
   pre-provisioned CNAME records for _acme-challenge.host.opendev.org
   pointing to acme.opendev.org.

4) letsencrypt-create-certs

   This role runs on each host, reading the same variable as in step
   2.  However this time the acme.sh tool is run to authenticate and
   create the certificates, which should now work correctly via the
   TXT records from step 3.  After this, the host will have the
   full certificate material.
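
As a rough sketch, the host variable consumed in steps 2 and 4 might
map certificate names to the domains they cover, along these lines
(the variable name and layout are assumptions of this sketch):

    # hypothetical per-host variable; one cert, two covered names
    letsencrypt_certs:
      mirror01-opendev-org-main:
        - mirror01.dfw.rax.opendev.org
        - mirror.dfw.rax.opendev.org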

Testing is added via testinfra.  For testing purposes, requests are
made to the staging letsencrypt servers and a self-signed certificate
is provisioned in step 4 (as the DNS authentication is not available
during CI).  We do, however, test that the TXT records are created on
the CI adns server.

Related-Spec: https://review.openstack.org/587283

Change-Id: I1f66da614751a29cc565b37cdc9ff34d70fdfd3f
2019-04-02 15:31:41 +11:00
Ian Wienand
f07bf2a507 Import install-docker role
This is a role for installing docker on our control-plane servers.

It is based on install-docker from zuul-jobs.

Basic testinfra tests are added; because docker fiddles with the
iptables rules in magic ways, the firewall testing is moved out of the
base tests and modified to partially match our base firewall
configuration.
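
Applying the role from a service playbook is then roughly the
following (the group name is an assumption):

    - hosts: docker
      roles:
        - install-docker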

Change-Id: Ia4de5032789ff0f2b07d4f93c0c52cf94aa9c25c
2018-12-14 11:30:47 -08:00
Monty Taylor
e998db36f2 Add yamlgroup inventory plugin
The constructed inventory plugin allows expressing additional groups,
but it's too heavyweight for our needs.  Additionally, it is a full
inventory plugin that will add hosts to the inventory if they don't
exist.

What we want instead is something that will associate existing hosts
(that would have come from another source) with groups.

This also switches to using emergency.yaml instead of emergency, which
uses the same format.
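
For illustration, a groups file in this format might look like the
following (group names and patterns here are assumptions, not the
actual contents):

    plugin: yamlgroup
    groups:
      mirror:
        - mirror*.opendev.org
      puppet:
        - review*.openstack.org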

We add an extra groups file for gate testing to ensure the CI nodes
get puppet installed.

Change-Id: Iea8b2eb2e9c723aca06f75d3d3307893e320cced
2018-11-02 08:19:53 +11:00