Having two groups here was confusing. We seem to use the review group
for most Ansible things, so we prefer that one. We move the contents of
the gerrit group_vars into the review group_vars and then clean up use
of the old group_vars file.
Change-Id: I7fa7467f703f5cec075e8e60472868c60ac031f7
Previously we had set up the test Gerrit instance to use the same
hostname as production: review02.opendev.org. This caused some confusion,
as we had to override settings specifically for testing (like a reduced
heap size) but then also copy settings from the prod host vars, since we
override the host vars file entirely. Using a new hostname allows us to
use a different set of host vars with unique values, reducing confusion.
Change-Id: I4b95bbe1bde29228164a66f2d3b648062423e294
Previously we had a test-specific group_vars file for the review Ansible
group. This provided junk secrets to our test installations of Gerrit;
we then relied on the review02.opendev.org production host_vars file to
set values that are public.
Unfortunately, this meant we were using the production heapLimit value,
which is far too large for our test instances, leading to the occasional
failure:
  There is insufficient memory for the Java Runtime Environment to continue.
  Native memory allocation (mmap) failed to map 9596567552 bytes for committing reserved memory.
We cannot set the heapLimit in the group_vars file because the host_vars
file overrides those values. To fix this we need to replace the
test-specific group_vars contents with a test-specific host_vars file
instead.
To avoid repeating ourselves we also create a new review.yaml group_vars
file to capture common settings between testing and prod. Note we should
look at combining this new file with the gerrit.yaml group_vars.
On the testing side of things we set the heapLimit to 6GB, we change the
serverid value to prevent any unexpected notedb confusion, and we remove
replication config.
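As a rough illustration, the test-specific host_vars ends up looking
something like this (the hostname and variable names are hypothetical,
not the exact keys our roles use):

  # inventory/service/host_vars/review99.opendev.org.yaml (illustrative)
  gerrit_heap_limit: 6g
  # a serverid distinct from production avoids any notedb mixups
  gerrit_serverid: aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb
  # no replication targets in testing
  gerrit_replication: []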
Change-Id: Id8ec5cae967cc38acf79ecf18d3a0faac3a9c4b3
This shifts our Gerrit upgrade testing ahead to the 3.3 to 3.4 upgrade,
as we have upgraded to 3.3 at this point.
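Schematically, the shift is just moving the version pair that the
upgrade job exercises, along these lines (the job and variable names are
assumptions for illustration, not the actual job definition):

  - job:
      name: system-config-run-review-upgrade   # illustrative
      vars:
        gerrit_old_version: '3.3'
        gerrit_new_version: '3.4'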
Change-Id: Ibb45113dd50f294a2692c65f19f63f83c96a3c11
This bumps the gerrit image up to our 3.3 image. Followup changes will
shift upgrade testing to test 3.3 to 3.4 upgrades, clean up no longer
needed 3.2 images, and start building 3.4 images.
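Mechanically this amounts to pointing the compose file at the 3.3 tag,
roughly (the image name and compose layout are illustrative):

  services:
    gerrit:
      image: docker.io/opendevorg/gerrit:3.3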
Change-Id: Id0f544846946d4c50737a54ceb909a0a686a594e
Avoid running the letsencrypt job when other roles add handlers for
their certificates. We don't need to run this job explicitly in that
case.
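As a hedged sketch of the pattern (the handler and service names are
made up): a consuming role defines its own handler keyed to the
certificate, and the letsencrypt roles notify that handler when the cert
is issued or renewed, so there is no need to trigger the letsencrypt job
separately:

  # handlers/main.yaml in a consuming role (illustrative)
  - name: letsencrypt updated someservice-main
    service:
      name: apache2
      state: reloaded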
Change-Id: Ic2e9b7fc81b73ecf7af197b83496e3589bb28bb0
Co-Authored-By: Jeremy Stanley <fungi@yuggoth.org>
Currently we connect to the LE staging environment with acme.sh during
CI to get the DNS-01 tokens (but we never follow through and actually
generate the certificate, as we have nowhere to publish the tokens).
We've known for a while that LE staging isn't really meant to be used
by CI like this, and recent instability has made the issue pronounced.
This modifies the driver script to generate fake tokens, which is enough
to ensure all the DNS processing, etc. is happening correctly.
This is behind a flag, so the letsencrypt job itself still talks to the
real staging environment. I think it is worth that job actually calling
acme.sh to validate this path; this shouldn't be required too often.
Change-Id: I7c0b471a0661aa311aaa861fd2a0d47b07e45a72
As of https://github.com/ansible/ansible/commit/724800c (and now
2.12.0b1), ansible started requiring Python 3.8 or later on
controllers. Switch our representative bridge.openstack.org test
nodes to the ubuntu-focal label, which has 3.8.10 as its default
python3, so we can determine whether it's safe to upgrade production
similarly.
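For illustration, the change boils down to a nodeset label swap in the
affected jobs, something like (the job name is an assumption):

  - job:
      name: system-config-run-base   # illustrative
      nodeset:
        nodes:
          - name: bridge.openstack.org
            label: ubuntu-focal   # previously an older Ubuntu label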
Change-Id: Ie1dc4dfaaf08ab74bf59717610231855926e9d19
This is a bit of spring cleaning. Previously we based our images on
Buster, but Bullseye exists now, so give it a go.
Change-Id: Icc3d79b361e41df2f2f063993fd206ab7d992f75
To do this we also update jinja-init to Bullseye; gitea seems to be the
only user of that image, so the impact should be fairly self-contained
to gitea.
Note this update isn't urgent, but it is good hygiene. We should
coordinate it with the 1.15.x gitea upgrade and do them in a sequence
that lets us identify problems easily if they pop up.
Change-Id: Ia0075416a1d8a067cfecd26c03f8db9641cbcb89
This switches testing of lists.openstack.org to Focal and makes a CGI
env var update to accommodate newer mailman.
Specifically, newer mailman's CGI scripts filter which env vars they
will pass through. We were setting MAILMAN_SITE_DIR to vhost our mailman
installs with apache2, but that doesn't pass the filter and is removed.
HOST is passed through, so we update our scripts, apache vhost configs,
exim, and init scripts to use the HOST env var instead.
Change-Id: I5c8c70c219669e37b7b75a61001a2b7f7bb0bb6c
This uses the opendev assets bundle image created with
I3166679bde6d771276289b9d32e7e4407957b2f8.
The mount options require using BuildKit, hence the Dockerfile update.
Otherwise it's conceptually fairly simple: copy in the files from the
opendevorg/assets image rather than from the local file-system.
Change-Id: I36bdc76471eec5380a676ebcdd885a88d3985976
Move some common assets into a top-level assets/ directory. Services
can reference these assets via
https://opendev.org/opendev/system-config/raw/branch/master/assets/<file>
in <img> tags, etc.
Some services want to embed these into their images, but we wish to
only keep one canonical copy. For this, add a Dockerfile and jobs
that create a simple bundle of assets in opendevorg/assets. This can
be referenced in other builds; the new BuildKit bind-mount is
particularly useful for this
(c.f. I36bdc76471eec5380a676ebcdd885a88d3985976).
Change-Id: I3931566eb86a0618705d276445fa0a5f659692ea
The Open Infrastructure Foundation's developers who maintain the
OpenStackID software are taking over management of the site itself,
and have deployed it on new servers. DNS records have already been
updated to the new IP address, so it's time to clean up our end in
preparation for deleting the old servers we've been running.
OpenStackID is still used by some services we run, like RefStack and
Zanata, and we're still hosting the OpenStackID Git repository and
documentation, so this does not get rid of all references to it.
Change-Id: I1d625d5204f1e9e3a85ba9605465f6ebb9433021
We have a subdir in inventory called base that holds the shared
files we don't have a good way to attribute to any single service.
Limit the file matchers to inventory/base so that we don't trigger
all of the services any time a single service's host_vars file changes.
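A rough sketch of the resulting matcher shape (the job name and exact
patterns are illustrative, not the real project config):

  - job:
      name: infra-prod-service-example   # illustrative
      files:
        - inventory/base/.*
        - inventory/service/host_vars/example.*
        - playbooks/roles/example/.*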
Change-Id: I3f461b4ab56ec55beca29e123186b36513803a44
This order is important to ensure we update the matrix eavesdrop bot
when expected and not later in the day when the daily runs happen.
Change-Id: If8e3f9f34e30cdeb7765e6665d1fb19b339454a3
This will double-check that we can run our Ansible against Focal without
trouble. Once the production server is updated we can land this change
to reflect the server state.
Change-Id: I1a572ee13ea4c3fae38f84e5cc300a610efa94ae
We create a (currently test-only) playbook that upgrades gerrit. The job
then runs through project creation and renaming and testinfra testing on
the upgraded gerrit version.
Future improvements should consider loading state onto the old gerrit
install before we upgrade, which can then be asserted as well.
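Very roughly, the test-only playbook does something along these lines
(the paths, image tags, and task details are assumptions, not the real
playbook):

  - hosts: review
    tasks:
      - name: Stop gerrit on the old version
        command: docker-compose down
        args:
          chdir: /etc/gerrit-compose/
      - name: Point the compose file at the new image tag
        replace:
          path: /etc/gerrit-compose/docker-compose.yaml
          regexp: 'gerrit:3\.3'
          replace: 'gerrit:3.4'
      - name: Start gerrit on the new version
        command: docker-compose up -d
        args:
          chdir: /etc/gerrit-compose/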
Change-Id: I364037232cf0e6f3fa150f4dbb736ef27d1be3f8
These services are all managed with ansible now and don't need to be
triggered when puppet updates.
Change-Id: Ie32b788263724ad9a5ca88a6406290309ec8c87a
Update the file matchers to actually match the current set of puppet
things. This ensures the deploy job runs when we want it to, and we can
catch up daily instead of hourly.
Previously a number of the matchers didn't actually match the puppet
things because the path prefix was wrong or the words in the dir names
were in a different order.
Change-Id: I3510da81d942cf6fb7da998b8a73b0a566ea7411
This is being done because we don't make many changes to the
zuul-preview service, but it runs in the hourly buildset, starving
deploy runs. Since this doesn't change much we can move it to the daily
run instead.
If we need to update it we can run the playbook manually or land a
change to trigger it.
Change-Id: I89d2c712fcfd18bd4f694b2c90067295253b8836