This reverts commit 05021f11a29a0213c5aecddf8e7b907b7834214a.
This switches Zuul and Nodepool to use Zookeeper TLS. The ZK
cluster is already listening on both ports.
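For reference, the client side looks roughly like the following in
nodepool.yaml (hostname, port and paths here are illustrative, not the
exact production values); Zuul carries the equivalent tls_cert, tls_key
and tls_ca settings in its [zookeeper] section:

  zookeeper-servers:
    - host: zk01.opendev.org
      port: 2281
  zookeeper-tls:
    cert: /etc/nodepool/zk.crt
    key: /etc/nodepool/zk.key
    ca: /etc/nodepool/zk-ca.crt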
Change-Id: I03d28fb75610fbf5221eeee28699e4bd6f1157ea
Fedora 33 is not released yet and the TripleO team would
like to perform some tests on that image.
Change-Id: I39f6bedadc12277739292cf31cc601bc3b6e30ec
The process of switching hosts to Ansible backups got a little
... backed up. I think the idea was that we would move these legacy
hosts to an all-Ansible configuration a little faster than what has
ended up happening.
In the meantime, we have done a better job of merging our environment,
so puppet hosts are now just regular hosts that run a puppet step rather
than separate entities.
So there is no problem running these roles on these older servers.
This will bring consistency to our backup story with everything being
managed from Ansible.
This will currently set up these hosts to back up to the only opendev
backup server, in vexxhost. As a follow-on, we will add another
opendev backup host in a second provider for redundancy.
After that, we can remove the bup::site calls from these hosts and
retire the puppet-based backups.
Change-Id: Ieaea46d312056bf34992826d673356c56abfc87a
Builds running on the new container-based executors started failing to
connect to remote hosts with
Load key "/root/.ssh/id_rsa": invalid format
It turns out the new executor is writing keys in OpenSSH format,
rather than the older PEM format. And it seems that the OpenSSH
format is more picky about the end of the file: the key must keep
its trailing newline after the
-----END OPENSSH PRIVATE KEY-----
line of the id_rsa file. By default, the file lookup runs an rstrip
on the incoming file, which strips that trailing newline. Turn that
off so we generate a valid key.
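The change amounts to passing rstrip=False to the file lookup,
roughly like this (variable and path names here are illustrative):

  executor_ssh_private_key: "{{ lookup('file', '/path/to/id_rsa', rstrip=False) }}"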
Change-Id: I49bb255f359bd595e1b88eda890d04cb18205b6e
With I37dcce3a67477ad3b2c36f2fd3657af18bc25c40 we removed the
configuration management of backups on the zuul server, which was
happening via puppet. So the server continues in its last state, but
if we ever built a fresh server it would not have backups.
Add it to the Ansible backup group, and uncomment the backup-server
group to get a run and set up the Ansible-managed backups.
Change-Id: I0af6b7fedc2f8f5a7f214771918138f72d298325
This includes a number of bugfixes. The most important for us is
likely the one that allows you to create a repo with HEAD set to
something other than master: https://github.com/go-gitea/gitea/pull/12182.
I didn't see any template deltas.
Change-Id: I45fdbf22fb1749d966fc5f6f457e89d40efe5949
I476674036748d284b9f51e30cc2ffc9650a50541 did not open port 3081, so
the proxy isn't reachable. Also, this group variable is a better place
to update the setting.
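Roughly, the group variable ends up looking like this (the exact
variable name consumed by our iptables role is assumed here):

  iptables_extra_public_tcp_ports:
    - 3081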
Change-Id: Iad0696221bb9a19852e4ce7cbe06b06ab360cf11
We have decided to go with the layer 7 reject rules; enable the
reverse proxy for production hosts.
Change-Id: I476674036748d284b9f51e30cc2ffc9650a50541
This brings in the settings added with
I87c85f82f6d38506977bc9bf26d34f6e66746b01 to the container deployment.
As noted there, this stops statsd writing null values for sparsely
updated timers and counters.
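In statsd terms this corresponds to an option along the lines of the
following (hedged; the actual setting used in that change may differ):

  deleteIdleStats: true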
Change-Id: I14b5ee40fc8efddfb7bad4fad8a8ae66746131d9
There is a new release; update the base container. Also add the
promote job that was forgotten in the original commit
Iddfafe852166fe95b3e433420e2e2a4a6380fc64.
Change-Id: Ie0d7febd2686d267903b29dfeda54e7cd6ad77a3
The OpenStack Infrastructure team has disbanded, replaced by the
OpenDev community and the OpenStack TaCT SIG. As OpenStack-specific
community infrastructure discussion now happens under TaCT's banner
and they use the openstack-discuss ML, redirect any future messages
for the openstack-infra ML there so we can close down the old list.
Change-Id: I0aea3b36668a92e47a6510880196589b94576cdf
This deploys graphite from the upstream container.
We override the statsd configuration to have it listen on ipv6.
Similarly we override the nginx config to listen on ipv6, enable ssl,
forward port 80 to 443, and block the /admin page (we don't use it).
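A rough sketch of the shape of the nginx override (certificate
handling and the proxying to graphite-web are elided):

  server {
      listen 80;
      listen [::]:80;
      return 301 https://$host$request_uri;
  }
  server {
      listen 443 ssl;
      listen [::]:443 ssl;
      # ... ssl certificate settings and proxy to graphite-web ...
      location /admin {
          return 403;
      }
  }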
For production we will just want to put some cinder storage in
/opt/graphite/storage on the production host and figure out how to
migrate the old stats. There is also a bit of cleanup that will follow,
because we half-converted grafana01.opendev.org -- so everything can't
be in the same group till that is gone.
Testing has been added to push some stats and ensure they are seen.
Change-Id: Ie843b3d90a72564ef90805f820c8abc61a71017d
This uses the Grafana container created with
Iddfafe852166fe95b3e433420e2e2a4a6380fc64 to run the
grafana.opendev.org service.
We retain the old model of an Apache reverse-proxy: it's well tested
and understood, it's much easier than trying to map all the SSL
termination/renewal/etc. into the Grafana container, and we don't have
to convince ourselves the container is safe to be directly web-facing.
Otherwise this is a fairly straightforward deployment of the
container. As before, it uses the graph configuration kept in
project-config, which is loaded with grafyaml (included in the
container).
One nice advantage is that it makes it quite easy to develop graphs
locally, using the container which can talk to the public graphite
instance. The documentation has been updated with a reference on how
to do this.
Change-Id: I0cc76d29b6911aecfebc71e5fdfe7cf4fcd071a4
LXC3 is usable with CentOS 8, while LXC2 is not available for it
anymore. So it's worth adding it to reduce network-related issues in CI.
Change-Id: I562a7d8000ecda8790da88f08128c35b1ec4a2c9
As described inline, this crawler is causing us problems as it hits
the backends indiscriminately. Block it via its known UA strings,
which luckily are old, so blocking them should not affect real clients.
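One way to do that with the Apache proxy in front of the backends,
as a sketch (the UA pattern below is a placeholder, not one of the
actual strings being blocked):

  SetEnvIfNoCase User-Agent "ExampleOldCrawler" blocked-ua
  <Location />
      <RequireAll>
          Require all granted
          Require not env blocked-ua
      </RequireAll>
  </Location>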
Change-Id: I0d78a8b625b69f600e00e8b3ea64576e0fdb84d9
This adds an option to have an Apache-based reverse proxy on port 3081
forwarding to 3000. The idea is that we can use some of the Apache
filtering rules to reject certain traffic if/when required.
It is off by default, but tested in the gate.
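The proxy itself can be a fairly minimal vhost; as a sketch (not the
exact template):

  Listen 3081
  <VirtualHost *:3081>
      ProxyPass "/" "http://127.0.0.1:3000/" retry=0
      ProxyPassReverse "/" "http://127.0.0.1:3000/"
  </VirtualHost>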
Change-Id: Ie34772878d9fb239a5f69f2d7b993cc1f2142930
We use the Ctx.Req object's RemoteAddr value as it should include the
IP:port combo according to https://golang.org/pkg/net/http/#Request. The
default template uses Ctx.RemoteAddr, which Macaron attempts to parse
for x-forwarded-for values, but this has the problem of stripping out
any port info.
The port info is important for us because we are doing layer 4 load
balancing and not HTTP layer 7 load balancing. That means the ip:port
mappings are necessary to correlate haproxy and gitea logs.
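Abbreviated, the template change looks something like this (the rest
of the NCSA-style fields are unchanged):

  - {{.Ctx.RemoteAddr}} - {{.Identity}} ...
  + {{.Ctx.Req.RemoteAddr}} - {{.Identity}} ...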
Change-Id: Icea0d3d815c9d8dd2afe2b1bae627510c1d76f99
Adding the tcplog option to an haproxy backend definition overrides
the default log format. Remove it so the supplied default (which we
based on the tcplog built-in default with some additions) will be
used instead.
Change-Id: Id302dede950c1c2ab8e74a662cc3cb1186a6593d
When forwarding TCP sockets at OSI layer 4 with haproxy, it helps to
know the ephemeral port from which it sources each connection to the
backend. In this way, backend connections can be mapped to actual
client IP addresses by correlating backend service access logs with
haproxy logs.
Add "[%bi]:%bp" between the frontend name and backend name values
for the default log-format documented here:
https://www.haproxy.com/blog/haproxy-log-customization/
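Ignoring our other local additions, the resulting line is roughly as
below, built from the documented tcplog default plus the new fields:

  log-format "%ci:%cp [%t] %ft [%bi]:%bp %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq"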
Change-Id: Ic2623d483d98cd686a85d40bc4f2e8577fb9087f
This will write an NCSA-style access.log file to the logs volume,
letting us see user agents, etc., to aid in troubleshooting.
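Roughly, this is the app.ini knob involved (other [log] settings are
left out):

  [log]
  ENABLE_ACCESS_LOG = true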
Change-Id: I64457f631861768928038676545067b80ef7a122