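# Zuul project pipeline configuration for system-config.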
- project:
templates:
- system-config-zuul-role-integration
- system-config-gerrit-images
check:
jobs:
- opendev-tox-docs
- opendev-buildset-registry
- tox-linters
- system-config-run-base
- system-config-run-base-ansible-devel:
voting: false
- system-config-run-borg-backup
- system-config-run-dns
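# Jobs that consume container images list the corresponding image-build
# job as a soft dependency: the run job waits for the image build when
# that job is part of the buildset, but still runs on its own when file
# matchers leave the build job out.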
- system-config-run-eavesdrop:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-accessbot
soft: true
- name: system-config-build-image-ircbot
soft: true
- name: system-config-build-image-matrix-eavesdrop
soft: true
- system-config-run-codesearch:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-hound
soft: true
- system-config-run-kerberos
- system-config-run-lists3:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-mailman
soft: true
- system-config-run-nodepool:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-zookeeper-statsd
soft: true
- system-config-run-meetpad
- system-config-run-mirror-x86
- system-config-run-mirror-update
- system-config-run-paste:
dependencies:
- name: opendev-buildset-registry
- system-config-run-static
- system-config-run-docker-registry
- system-config-run-etherpad:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-etherpad
soft: true
- system-config-run-gitea:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-gitea
soft: true
- name: system-config-build-image-haproxy-statsd
soft: true
- system-config-run-grafana:
dependencies:
- name: opendev-buildset-registry
- system-config-run-graphite
- system-config-run-keycloak
- system-config-run-review-3.10:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-gerrit-3.10
soft: true
- system-config-run-review-3.11:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-gerrit-3.11
soft: true
# Disabled until we can test the 3.10 -> 3.11 upgrade
#- system-config-upgrade-review:
# dependencies:
# - name: opendev-buildset-registry
# - name: system-config-build-image-gerrit-3.9
# soft: true
# - name: system-config-build-image-gerrit-3.10
# soft: true
- system-config-build-image-refstack
- system-config-run-refstack:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-refstack
soft: true
- system-config-run-tracing
- system-config-run-zookeeper:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-zookeeper-statsd
soft: true
- system-config-run-zuul:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-zookeeper-statsd
soft: true
- system-config-run-zuul-preview
- system-config-run-letsencrypt
- system-config-build-image-assets
- system-config-build-image-jinja-init:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-python-base-3.11-bookworm
soft: true
- system-config-build-image-gitea-init:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-jinja-init
soft: true
- system-config-build-image-hound:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-python-base-3.11-bookworm
soft: true
- system-config-build-image-etherpad
- system-config-build-image-mailman
- system-config-build-image-gitea:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-assets
soft: true
- system-config-build-image-haproxy-statsd:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-python-base-3.11-bookworm
soft: true
- system-config-build-image-zookeeper-statsd:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-python-base-3.11-bookworm
soft: true
- system-config-build-image-accessbot:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-python-base-3.11-bookworm
soft: true
- system-config-build-image-ircbot:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-python-builder-3.11-bookworm
soft: true
- system-config-build-image-matrix-eavesdrop:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-python-builder-3.11-bookworm
soft: true
- system-config-build-image-python-base-3.10-bookworm
- system-config-build-image-python-base-3.11-bookworm
- system-config-build-image-python-base-3.11-bookworm-debug
- system-config-build-image-python-base-3.12-bookworm
- system-config-build-image-python-base-3.12-bookworm-debug
- system-config-build-image-python-builder-3.10-bookworm
- system-config-build-image-python-builder-3.11-bookworm
- system-config-build-image-python-builder-3.12-bookworm
- system-config-build-image-uwsgi-base-3.10-bookworm
- system-config-build-image-uwsgi-base-3.11-bookworm
- system-config-build-image-uwsgi-base-3.12-bookworm
check-arm64:
jobs:
- system-config-run-base-arm64
- system-config-run-mirror-arm64
gate:
jobs:
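# The gate pipeline mirrors check, but runs the upload-image variants
# of the image jobs so that the images to be promoted on merge are
# uploaded as part of gating.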
- opendev-tox-docs
- opendev-buildset-registry
- tox-linters
- system-config-run-base
- system-config-run-borg-backup
- system-config-run-dns
- system-config-run-eavesdrop:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-accessbot
soft: true
- name: system-config-upload-image-ircbot
soft: true
- name: system-config-upload-image-matrix-eavesdrop
soft: true
- system-config-run-codesearch:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-hound
soft: true
- system-config-run-kerberos
- system-config-run-lists3:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-mailman
soft: true
- system-config-run-nodepool:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-zookeeper-statsd
soft: true
- system-config-run-meetpad
- system-config-run-mirror-x86
- system-config-run-mirror-update
- system-config-run-paste:
dependencies:
- name: opendev-buildset-registry
- system-config-run-static
- system-config-run-docker-registry
- system-config-run-etherpad:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-etherpad
soft: true
- system-config-run-gitea:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-gitea
soft: true
- name: system-config-upload-image-haproxy-statsd
soft: true
- system-config-run-grafana:
dependencies:
- name: opendev-buildset-registry
- system-config-run-graphite
- system-config-run-keycloak
- system-config-run-review-3.10:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-gerrit-3.10
soft: true
- system-config-run-review-3.11:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-gerrit-3.11
soft: true
- system-config-run-refstack:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-refstack
soft: true
- system-config-run-tracing
- system-config-run-zookeeper:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-zookeeper-statsd
soft: true
- system-config-run-zuul:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-zookeeper-statsd
soft: true
- system-config-run-zuul-preview
- system-config-run-letsencrypt
- system-config-upload-image-jinja-init:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-python-base-3.11-bookworm
soft: true
- system-config-upload-image-gitea-init:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-jinja-init
soft: true
- system-config-upload-image-hound:
dependencies:
- name: opendev-buildset-registry
- name: system-config-build-image-python-base-3.11-bookworm
soft: true
- system-config-upload-image-assets
- system-config-upload-image-etherpad
- system-config-upload-image-mailman
- system-config-upload-image-gitea:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-assets
soft: true
- system-config-upload-image-refstack
- system-config-upload-image-haproxy-statsd:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-python-base-3.11-bookworm
soft: true
- system-config-upload-image-zookeeper-statsd:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-python-base-3.11-bookworm
soft: true
- system-config-upload-image-accessbot:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-python-base-3.11-bookworm
soft: true
- system-config-upload-image-ircbot:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-python-builder-3.11-bookworm
soft: true
- system-config-upload-image-matrix-eavesdrop:
dependencies:
- name: opendev-buildset-registry
- name: system-config-upload-image-python-builder-3.11-bookworm
soft: true
- system-config-upload-image-python-base-3.10-bookworm
- system-config-upload-image-python-base-3.11-bookworm
- system-config-upload-image-python-base-3.11-bookworm-debug
- system-config-upload-image-python-base-3.12-bookworm
- system-config-upload-image-python-base-3.12-bookworm-debug
- system-config-upload-image-python-builder-3.10-bookworm
- system-config-upload-image-python-builder-3.11-bookworm
- system-config-upload-image-python-builder-3.12-bookworm
- system-config-upload-image-uwsgi-base-3.10-bookworm
- system-config-upload-image-uwsgi-base-3.11-bookworm
- system-config-upload-image-uwsgi-base-3.12-bookworm
promote:
jobs:
- opendev-promote-docs
deploy:
jobs:
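# The promote-image jobs publish the images uploaded in gate; the
# infra-prod-* jobs below then deploy the services from bridge.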
- system-config-promote-image-assets
- system-config-promote-image-hound
- system-config-promote-image-jinja-init
- system-config-promote-image-gitea-init
- system-config-promote-image-gitea
- system-config-promote-image-etherpad
- system-config-promote-image-mailman
- system-config-promote-image-haproxy-statsd
- system-config-promote-image-zookeeper-statsd
- system-config-promote-image-accessbot
- system-config-promote-image-refstack
- system-config-promote-image-ircbot
- system-config-promote-image-matrix-eavesdrop
- system-config-promote-image-python-base-3.10-bookworm
- system-config-promote-image-python-base-3.11-bookworm
- system-config-promote-image-python-base-3.11-bookworm-debug
- system-config-promote-image-python-base-3.12-bookworm
- system-config-promote-image-python-base-3.12-bookworm-debug
- system-config-promote-image-python-builder-3.10-bookworm
- system-config-promote-image-python-builder-3.11-bookworm
- system-config-promote-image-python-builder-3.12-bookworm
- system-config-promote-image-uwsgi-base-3.10-bookworm
- system-config-promote-image-uwsgi-base-3.11-bookworm
- system-config-promote-image-uwsgi-base-3.12-bookworm
# NOTE: the infra-prod-* jobs have a dependency hierarchy below that
# ensures they can run in parallel. We deliberately keep their
# dependencies here rather than in the job definitions to help keep
# these relationships clear.
# This installs the Ansible on the bridge host that all the infra-prod
# jobs run with. Note the jobs then use this Ansible to run against
# Zuul's checkout of system-config.
- infra-prod-bootstrap-bridge
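# The &anchors on the job entries below are reused as *aliases by the
# periodic and opendev-prod-hourly pipelines at the end of this file,
# so the dependency graph only has to be maintained here.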
# From now on, all jobs should depend on base
- infra-prod-base: &infra-prod-base
dependencies:
- name: infra-prod-bootstrap-bridge
soft: true
# Legacy puppet hosts
- infra-prod-remote-puppet-else: &infra-prod-remote-puppet-else
dependencies:
- name: infra-prod-base
soft: true
#
# These jobs depend only on base, or on each other.
#
- infra-prod-service-bridge: &infra-prod-service-bridge
dependencies:
- name: infra-prod-base
soft: true
- infra-prod-run-cloud-launcher: &infra-prod-run-cloud-launcher
dependencies:
# depends on the cloud config written out by
# service-bridge
- name: infra-prod-service-bridge
soft: true
- infra-prod-service-kerberos: &infra-prod-service-kerberos
dependencies:
- name: infra-prod-base
soft: true
- infra-prod-service-afs: &infra-prod-service-afs
dependencies:
- name: infra-prod-base
soft: true
# NOTE(ianw) in theory we'd want auth changes applied before
# updating services like OpenAFS that use them. Not sure this
# matters much in practice; we very rarely change things here
# anyway.
- name: infra-prod-service-kerberos
soft: true
- infra-prod-service-nameserver: &infra-prod-service-nameserver
dependencies:
- name: infra-prod-base
soft: true
- infra-prod-service-mirror-update: &infra-prod-service-mirror-update
dependencies:
- name: infra-prod-base
soft: true
#
# Hosts using certificates and backups
#
# Hosts that back up should depend on this, as it creates
# the users and deploys the keys required for the borg-backup
# role to work.
- infra-prod-service-borg-backup: &infra-prod-service-borg-backup
dependencies:
- name: infra-prod-base
soft: true
# Hosts that have letsencrypt certs should depend on this, as
# it will write out the key material before they try to start
# services that depend on it. For simplicity, we parent to
# this job.
- infra-prod-letsencrypt: &infra-prod-letsencrypt
dependencies:
- name: infra-prod-base
soft: true
- name: infra-prod-service-nameserver
soft: true
# letsencrypt dependencies. Keep in alphabetical order.
- infra-prod-service-codesearch: &infra-prod-service-codesearch
dependencies:
- name: infra-prod-letsencrypt
soft: true
- name: system-config-promote-image-hound
soft: true
- infra-prod-service-eavesdrop: &infra-prod-service-eavesdrop
dependencies:
- name: infra-prod-service-borg-backup
soft: true
- name: infra-prod-letsencrypt
soft: true
- name: system-config-promote-image-ircbot
soft: true
- name: system-config-promote-image-matrix-eavesdrop
soft: true
- infra-prod-service-etherpad: &infra-prod-service-etherpad
dependencies:
- name: infra-prod-service-borg-backup
soft: true
- name: infra-prod-letsencrypt
soft: true
- name: system-config-promote-image-etherpad
soft: true
- infra-prod-service-gitea: &infra-prod-service-gitea
dependencies:
- name: infra-prod-service-borg-backup
soft: true
- name: infra-prod-letsencrypt
soft: true
- name: system-config-promote-image-gitea
soft: true
- infra-prod-service-gitea-lb: &infra-prod-service-gitea-lb
dependencies:
- name: system-config-promote-image-haproxy-statsd
soft: true
- infra-prod-service-grafana: &infra-prod-service-grafana
dependencies:
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-graphite: &infra-prod-service-graphite
dependencies:
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-keycloak: &infra-prod-service-keycloak
dependencies:
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-meetpad: &infra-prod-service-meetpad
dependencies:
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-lists3: &infra-prod-service-lists3
dependencies:
- name: infra-prod-service-borg-backup
soft: true
- name: infra-prod-letsencrypt
soft: true
- name: system-config-promote-image-mailman
soft: true
- infra-prod-service-mirror: &infra-prod-service-mirror
dependencies:
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-nodepool: &infra-prod-service-nodepool
dependencies:
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-static: &infra-prod-service-static
dependencies:
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-paste: &infra-prod-service-paste
dependencies:
- name: infra-prod-service-borg-backup
soft: true
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-registry: &infra-prod-service-registry
dependencies:
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-refstack: &infra-prod-service-refstack
dependencies:
- name: infra-prod-service-borg-backup
soft: true
- name: infra-prod-letsencrypt
soft: true
- name: system-config-promote-image-refstack
soft: true
- infra-prod-service-review: &infra-prod-service-review
dependencies:
- name: infra-prod-service-borg-backup
soft: true
- name: infra-prod-letsencrypt
soft: true
- name: system-config-promote-image-gerrit-3.10
soft: true
- infra-prod-service-tracing: &infra-prod-service-tracing
dependencies:
- name: infra-prod-letsencrypt
soft: true
- infra-prod-service-zookeeper: &infra-prod-service-zookeeper
dependencies:
- name: infra-prod-letsencrypt
soft: true
- name: system-config-promote-image-zookeeper-statsd
soft: true
- infra-prod-service-zuul: &infra-prod-service-zuul
dependencies:
- name: infra-prod-service-borg-backup
soft: true
- name: infra-prod-letsencrypt
soft: true
# should reconfigure after any project updates
- name: infra-prod-manage-projects
soft: true
- infra-prod-service-zuul-db
- infra-prod-service-zuul-lb: &infra-prod-service-zuul-lb
dependencies:
- name: system-config-promote-image-haproxy-statsd
soft: true
- infra-prod-service-zuul-preview: &infra-prod-service-zuul-preview
dependencies:
- name: infra-prod-letsencrypt
soft: true
#
# Jobs that run as secondary steps
#
# accessbot should run on an already-configured eavesdrop host
- infra-prod-run-accessbot: &infra-prod-run-accessbot
dependencies:
- name: infra-prod-base
soft: true
- name: infra-prod-service-eavesdrop
soft: true
- name: system-config-promote-image-accessbot
soft: true
# manage-projects runs jeepyb etc. and should run on an
# already-configured review host. It also sets up gitea.
- infra-prod-manage-projects: &infra-prod-manage-projects
dependencies:
- name: infra-prod-base
soft: true
- name: infra-prod-service-review
soft: true
- name: infra-prod-service-gitea
soft: true
- name: system-config-promote-image-gerrit-3.10
soft: true
# Note that this job also runs from project-config, so we
# match system-config-specific files here rather than in the
# job definition.
files:
- inventory/.*
- playbooks/manage-projects.yaml
- inventory/service/group_vars/review.yaml
- inventory/service/group_vars/gitea.yaml
- inventory/service/host_vars/gitea
- inventory/service/host_vars/review
- playbooks/roles/gitea-git-repos/
- playbooks/roles/gerrit/defaults/main.yaml
- playbooks/roles/gerrit/tasks/manage-projects.yaml
periodic:
jobs:
- developer-openstack-goaccess-report
- docs-opendev-goaccess-report
- docs-openstack-goaccess-report
- docs-starlingx-goaccess-report
- governance-openstack-goaccess-report
- releases-openstack-goaccess-report
- security-openstack-goaccess-report
- specs-openstack-goaccess-report
- tarballs-opendev-goaccess-report
- zuul-ci-goaccess-report
# Nightly runs of the Ansible deploy jobs for catch-up.
# Keep in the same order as above.
- infra-prod-bootstrap-bridge
- infra-prod-base: *infra-prod-base
- infra-prod-remote-puppet-else: *infra-prod-remote-puppet-else
- infra-prod-service-bridge: *infra-prod-service-bridge
- infra-prod-run-cloud-launcher: *infra-prod-run-cloud-launcher
- infra-prod-service-kerberos: *infra-prod-service-kerberos
- infra-prod-service-afs: *infra-prod-service-afs
- infra-prod-service-nameserver: *infra-prod-service-nameserver
- infra-prod-service-mirror-update: *infra-prod-service-mirror-update
- infra-prod-service-borg-backup: *infra-prod-service-borg-backup
- infra-prod-letsencrypt: *infra-prod-letsencrypt
- infra-prod-service-codesearch: *infra-prod-service-codesearch
- infra-prod-service-eavesdrop: *infra-prod-service-eavesdrop
- infra-prod-service-etherpad: *infra-prod-service-etherpad
- infra-prod-service-gitea: *infra-prod-service-gitea
- infra-prod-service-gitea-lb: *infra-prod-service-gitea-lb
- infra-prod-service-grafana: *infra-prod-service-grafana
- infra-prod-service-graphite: *infra-prod-service-graphite
- infra-prod-service-keycloak: *infra-prod-service-keycloak
- infra-prod-service-meetpad: *infra-prod-service-meetpad
- infra-prod-service-lists3: *infra-prod-service-lists3
- infra-prod-service-mirror: *infra-prod-service-mirror
- infra-prod-service-nodepool: *infra-prod-service-nodepool
- infra-prod-service-static: *infra-prod-service-static
- infra-prod-service-paste: *infra-prod-service-paste
- infra-prod-service-registry: *infra-prod-service-registry
- infra-prod-service-refstack: *infra-prod-service-refstack
- infra-prod-service-review: *infra-prod-service-review
- infra-prod-service-tracing: *infra-prod-service-tracing
- infra-prod-service-zookeeper: *infra-prod-service-zookeeper
- infra-prod-service-zuul: *infra-prod-service-zuul
- infra-prod-service-zuul-db
- infra-prod-service-zuul-lb: *infra-prod-service-zuul-lb
- infra-prod-service-zuul-preview: *infra-prod-service-zuul-preview
- infra-prod-run-accessbot: *infra-prod-run-accessbot
- infra-prod-manage-projects: *infra-prod-manage-projects
opendev-prod-hourly:
jobs:
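# Hourly runs of a small subset of the deploy jobs, reusing the same
# anchored dependencies as the deploy pipeline above.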
- infra-prod-bootstrap-bridge
- infra-prod-service-bridge: *infra-prod-service-bridge
- infra-prod-service-nodepool: *infra-prod-service-nodepool
- infra-prod-service-registry: *infra-prod-service-registry
- infra-prod-service-zuul: *infra-prod-service-zuul
- infra-prod-service-eavesdrop: *infra-prod-service-eavesdrop