The title.svg logo for opendev and two jquery js files are no longer
managed by ansible nor do they appear to be in our docker image. They
appear to have been lost when we converted from puppet to ansible +
docker. Add them back in. We are also missing the ICLA html content
(but not the other CLAs); add it back in.
We vendor the js contents even though in the past we copied them from a
git repo clone and a distro package installation. This way we don't have
unexpected surprises, record that the files are used, and can always
update them later.
Change-Id: I981b4b0f233ece45d03a80dc1724a4e496f66eb8
This matches the file, which got lost in my original script because I
didn't quote a $. Also add some quotes for better grouping.
Change-Id: I335e89616f093bdd2f0599b1ea1125ec642515ba
There appears to be a gitea bug that causes PATCH updates to projects to
fail when the cache is in a bad state for that project. We use PATCH
updates to a project to set the project descriptions. Since project
descriptions are not critical to gitea functionality (we weren't
updating them until last week) we can treat this as best effort and
ignore these failures.
We'll log these cases to aid in further debugging but continue on. The
next pass can retry.
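For illustration, the best-effort handling amounts to something like
this sketch (the helper and logger names are hypothetical, not the
actual module code):

    import logging

    log = logging.getLogger(__name__)

    try:
        # Hypothetical helper that issues the PATCH to gitea.
        set_project_description(project, description)
    except Exception:
        # Non-critical; log for debugging and continue. The next
        # pass will retry.
        log.exception("Failed to update description for %s", project)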
Change-Id: I625bdc0856caaccb6b55931b0cdc6cf11a0bf3e1
Gitea has added a STACKTRACE_LEVEL config option to set which log level
will also generate stack traces when logging. We want them for Error
and above, so set this to Error for now. In particular there seems to
be a commit cache issue whose resulting errors would be much easier to
debug with stack traces.
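In app.ini terms this is roughly (a sketch; the option lives in the
[log] section per the gitea docs):

    [log]
    ; Emit stack traces for log records at Error level and above.
    STACKTRACE_LEVEL = Error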
Change-Id: I0491373ef143dfa753c011d02e3c670c699d2a52
When we moved projects out of openstack/ into opendev/ we didn't also
move their tarballs.
This redirects affected old directories to their new per-tenant home.
See I5bf2ddf09b3df71a3428a8a0c535b131ecbc0bca for info on how this
list was generated.
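As a purely hypothetical example of the shape of one such rule (the
real project names and targets come from the generated list):

    Redirect permanent /some-project https://tarballs.opendev.org/opendev/some-project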
Change-Id: Ib545a772ecfce475c1007f04c5b5145d375dae23
The upstream mirror may have private contents in dirs like .~tmp~/ or
snapshot/. We exclude these to avoid syncing problems when we don't have
permissions to read them.
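Conceptually the exclusions look something like this (the URL and
paths are placeholders, not the real mirror script):

    rsync -rltvz \
        --exclude=".~tmp~/" \
        --exclude="snapshot/" \
        rsync://upstream.example.org/module/ /path/to/local/mirror/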
Change-Id: I8d366f0e95667bfbe65f259877b13bd0d93cd877
Add the cron job that existed in puppet-graphite to clean up old,
un-updated stats and directories.
Change-Id: Iac4676ae0ea1d5f1b96b6214ab6ab193c71a2d20
This is the storage-schemas configuration file currently deployed by
puppet-graphite. Apply it to the container so we maintain the same
retention, etc.
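For reference, storage-schemas.conf entries have this general shape
(values here are illustrative; the real retentions come from the
puppet-graphite file):

    [carbon]
    pattern = ^carbon\.
    retentions = 60s:90d

    [default]
    pattern = .*
    retentions = 60s:1d,5m:30d,1h:2y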
Change-Id: Ia733bf4a958a559ce3921094bb3f0875365157ce
There is a single Fedora Atomic image used by Magnum at this point that
we mirror. Let's mirror just that one image and then we can manually rm
the others.
Change-Id: I669247beb64bae41afddd0edce02c0b58e45aa6c
We've converted our opensuse leap 15 image to 15.2. If things look good
we should be clear to stop mirroring 15.1.
Change-Id: Id31a3b57f48a5be671c76a76d5c48b4ef5000c3e
We've removed the images from nodepool in the depends-on and now we can
stop mirroring the distro.
Depends-On: https://review.opendev.org/754471
Change-Id: Ifd4b1fbc92514a76ffa86b7cb42a81f97c245604
The Apache proxy on port 3081 allows us to do layer 7 filtering on
incoming requests. However, it was returning 502 errors because it
proxies to https://localhost and the certificate name doesn't match
(see the SSLProxyCheckPeerName directive). We can't simply proxy to the
full hostname in the gate because our self-signed certificate doesn't
cover that.
Add a variable and proxy to localhost in the gate, and the full
hostname in production. This avoids us having to turn off
SSLProxyCheckPeerName.
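The idea, as a sketch (the variable name here is hypothetical): the
gate sets the backend name to "localhost" and production sets it to
the real hostname, so peer name checking can stay on.

    ProxyPass        "/" "https://{{ proxy_backend_name }}/"
    ProxyPassReverse "/" "https://{{ proxy_backend_name }}/"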
Change-Id: Ie12178a692f81781b848beb231f9035ececa3fd8
We had assigned a value of 300 to this setting but gitea ignored it and
continued to use a 30 second timeout instead. Rereading the docs and
code, it appears that we may need a unit to accompany the value. Set it
to 300s instead of 300.
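In other words (the key name below is a placeholder for whichever
timeout setting this is):

    ; Before: parsed without a unit, gitea fell back to its default
    ;SOME_TIMEOUT = 300
    ; After: explicit seconds suffix
    SOME_TIMEOUT = 300s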
Change-Id: I763092c0371a15a417313ed05a9fd27d0e6e7f93
When we decide we don't need to create a project, we set the project
description anyway. The reason for this is that humans like to see
their project descriptions update when they change them.
Rather than get, compare, and set the description, we just set it under
the assumption this will be fewer requests and thus quicker. The impact
on the db likely plays into this too, but our gitea dbs are mostly idle
so should be fine.
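A rough sketch of the unconditional update (variable names are
hypothetical; gitea's repo edit API takes a PATCH with just the fields
to change):

    import requests

    requests.patch(
        f"{gitea_url}/api/v1/repos/{org}/{repo}",
        json={"description": description},
        headers={"Authorization": f"token {api_token}"},
    )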
Change-Id: I04bdd747f8934d0b35bf76aec5d70be01b921285
This wasn't quite fixed right when these were moved into
project-config. Get the projects and install them.
Change-Id: I0f854609fc9aebffc1fa2a2e14d5231cce9b71d0
185797a0e5e46fd0f68f7b423e79f732c8541d68 made graphite01 (the old
server) accidentally do the container restart; this should be for
graphite02.
Change-Id: I881ffecf9af5ee07cc3ebcf34f0e204a6389d16b
This was a host used to transition to docker run nodepool builders. That
transition has been completed for nb01.opendev.org and nb02.opendev.org
and we don't need the third x86 builder.
Change-Id: I93c7fc9b24476527b451415e7c138cd17f3fdf9f
In I4e5f803b9d4fb6c2351cf151a085b93a7fd20f60 I put the wrong thing in
the zuul.openstack.org config; for that site we want to cache
/api/status, not the tenant path.
Change-Id: Iffbd870aeff496b9c259206f866af3a90a4349db
mod_mem_cache was removed in Apache 2.4 so all the bits of
configuration gated by the IfModule are currently irrelevant.
The replacement is socache; the in-memory provider is "shmcb" (it can
also hook up to memcache, etc.). Enable the socache module, and switch
the cache matching parts to use socache and then fall back to disk
cache (this is what the manual says it will do [1]).
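Roughly, per the manual's fallback example (the exact URL paths in our
config differ):

    CacheSocache shmcb
    CacheEnable  socache /
    CacheEnable  disk    /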
The other part of this is to turn the CacheQuickHandler off. The
manual says about this [2]:
    In the default enabled configuration, the cache operates within the
    quick handler phase. This phase short circuits the majority of
    server processing, and represents the most performant mode of
    operation for a typical server. The cache bolts onto the front of
    the server, and the majority of server processing is avoided.
I won't claim to fully understand how our mod_rewrite rules and
mod_proxy all hang together with phases and what-not. But empirically
with this turned on (default) we do not seem to get any caching on the
tenant status pages, and with it turned off we do.
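So we now set:

    CacheQuickHandler off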
I've deliberately removed IfModule gating as well. This actually hid
the problem and made it much more difficult to diagnose; it is much
better if these directives just fail to start Apache if we do not have
the modules we expect to have.
[1] https://httpd.apache.org/docs/2.4/mod/mod_cache_socache.html
[2] https://httpd.apache.org/docs/2.4/mod/mod_cache.html#cachequickhandler
Change-Id: I4e5f803b9d4fb6c2351cf151a085b93a7fd20f60
These two values overwrite each other, move into common configuration.
The "cache-status" is a verbose string, so quote it.
Change-Id: I3cc4627de3d6a0de1adcfed6b424fc3ed0099245
We need a regex to match the url path for zuul statuses. Our existing
setup assumed this would work in a CacheEnable directive but it seems
that it does not. Move this into a LocationMatch which explicitly
supports regexes.
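Something along these lines (the regex is illustrative):

    <LocationMatch "^/api/tenant/[^/]+/status">
        CacheEnable socache
    </LocationMatch>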
Change-Id: I9df06d2af31ce6550e537f4594640487cca1d735
We attempt to cache things served by zuul-web in our apache proxy. This
is to reduce the load on the zuul-web process which has to query
gearman, the sql database, and eventually the zookeeper database to
produce its responses.
Things are currently operating slowly and it isn't clear if we're
caching properly. To better check that, update our logging format to
record cache hits and misses. Also drop an unnecessary .* in the
CacheEnable url-strings for /static/ as it is unclear if the .* is
treated as a regex here.
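That is, roughly (illustrative):

    # Before: unclear whether the trailing .* is a regex or literal
    #CacheEnable socache "/static/.*"
    # After: plain URL prefix matching
    CacheEnable socache "/static/"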
Change-Id: Ib57c085fa15365b89b3276e037339dbeddb094e3
We install docker-compose from PyPI in order to get newer features
(particularly useful for gerrit). On x86 all the deps for this have
wheels and we don't need build deps but on arm64 wheels don't exist for
things like cffi. Add build-essential, python3-dev, libffi-dev, and
libssl-dev to ensure we can build the necessary deps to install
docker-compose on arm64.
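On the arm64 hosts this amounts to roughly:

    apt-get install -y build-essential python3-dev libffi-dev libssl-dev
    pip3 install docker-compose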
Change-Id: Id9c61dc904d34d2f7cbe17c70ad736a9562bb923
This server is going to be our new arm64 nodepool-builder running on the
new arm64 docker images for nodepool.
Depends-On: https://review.opendev.org/750037
Change-Id: I3b46ff901eb92c7f09b79c22441c3f80bc6f9d15
Modules are collected on bridge and then synchronized to remote hosts
where puppet is run. This is done to ensure an atomic run of puppet
across affected hosts.
These modules are described in modules.env and cloned by
install_modules.sh. Currently this is done in install-ansible, but
after some recent refactoring
(I3b1cea5a25974f56ea9202e252af7b8420f4adc9) the best home for it
appears to now be in puppet-setup-ansible, just before the script is
run.
Change-Id: I4b1d709d7037e2851d73be4bc7a202f52858ad4f
It turns out you can't use "run_once" with the "free" strategy in
Ansible. It actually warns you about this, if you're looking in the
right place.
The existing run-puppet role calls two things with "run_once:", both
delegated to localhost -- cloning the ansible-role-puppet repo (so we
can include_role: puppet) and installing the puppet modules (via
install-ansible-roles role), which are copied from bridge to the
remote side and run by ansible-role-puppet.
With remote_puppet_else.yaml we are running all the puppet hosts at
once with the "free" strategy. This means that these two tasks, both
delegated to localhost (bridge) are actually running for every host.
install-ansible-roles does a git clone, and thus we often see one of
the clones bailing out with a git locking error, because the other
host is running simultaneously.
I8585a1af2dcc294c0e61fc45d9febb044e42151d tried to stop this with
"run_once:" -- but as noted because it's running under the "free"
strategy this is silently ignored.
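A sketch of the failure mode (playbook details are illustrative):

    - hosts: puppet
      strategy: free
      tasks:
        # With strategy: free, run_once is silently ignored; this
        # task runs delegated to localhost once per *host*, and the
        # git clones race each other.
        - name: Clone puppet modules on bridge
          git:
            repo: https://opendev.org/opendev/ansible-role-puppet
            dest: /etc/ansible/roles/ansible-role-puppet
          delegate_to: localhost
          run_once: true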
To get around this, split out the two copying steps into a new role
"puppet-setup". To maintain the namespace, the "run-puppet" module is
renamed to "puppet-run". Before each call of (now) "puppet-run", make
sure we run "puppet-setup" just on localhost.
Remove the run_once and delegation on "install-ansible-roles", because
this is now called from the playbook with localhost context.
Change-Id: I3b1cea5a25974f56ea9202e252af7b8420f4adc9
Limestone has updated their self-signed cert and in order to verify it
properly we need to update the cert material we check against.
Maybe we should confirm with logan- that the new cert material looks
correct before landing this just to be sure we're trusting the correct
thing.
Change-Id: Id528716aecb45ffb263850f697c5fb22db3b7969