We hold off on landing this until we're sure all the cleanup for this
cloud provider is done. Removing it along with everything else would
make cleanup more difficult.
Change-Id: I905a37b430cb313b10a239d7d1b843404af06403
We've been told these resources are going away. We are trying to remove
them gracefully from nodepool first. Once that is done we can remove
our configs here.
Depends-On: https://review.opendev.org/c/openstack/project-config/+/831398
Change-Id: I396ca49ab33c09622dd398012528fe7172c39fe8
The INAP mtl01 region is now owned by iWeb. This updates the cloud
launcher to use the new name, instructs the mirror in this cloud to
provision SSL certs for both the old inap and new iweb names, and
updates the clouds.yaml files.
Change-Id: I1256a2e24df1c79dea06716ae4dfbcfe119c13f8
We have limited IPv4 address space in this cloud. Currently we have
room for about 6 IP addresses for test nodes after we account for
network infrastructure and the mirror. By switching these instances to
use the external network directly we can clean up some of the neutron
network infrastructure, which we think may free up 2 more IP addresses.
That should get us to our originally intended max-servers of 8.
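As a rough sketch, the nodepool provider pool for this cloud ends up
looking something like the following; the provider, cloud, and network
names here are placeholders rather than the real values:

  providers:
    - name: example-provider        # placeholder
      cloud: example-cloud          # placeholder clouds.yaml entry
      pools:
        - name: main
          max-servers: 8
          networks:
            - public                # boot directly on the external network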
Change-Id: I705ff082ff06ae1c97f4c229a22893e6d87d206d
This adds the new inmotion cloud to clouds.yaml files and the cloud
launcher config. This cloud is running on an OpenStack-as-a-service
platform, so we have quite a bit of freedom to make changes here within
the resource limitations if necessary.
Change-Id: I2aed6dffde4a1d6e3044c4bd8df4ca60065ae1ea
The public5 network has the most IP addresses available and is
recommended for use.
This cloud also has fixed public IPs, not floating IPs.
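A hedged sketch of how that tends to be expressed in clouds.yaml; the
cloud name is a placeholder and the exact layout may differ:

  clouds:
    example-cloud:                   # placeholder name
      floating_ip_source: None       # addresses are fixed, nothing to float
      networks:
        - name: public5              # the network with the most addresses
          routes_externally: true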
Change-Id: I7ae1bb0081d3a86149225c3400b53a9561ccffe6
Otherwise you get:
BadRequest: Expecting to find domain in project - the server could
not comply with the request since it is either malformed or otherwise
incorrect. The client is assumed to be in error.
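Keystone v3 raises that error when a project is given without its
domain; the usual fix is to scope the auth section explicitly, roughly
like this (all names and the URL are placeholders):

  clouds:
    example-cloud:
      auth:
        auth_url: https://keystone.example.com/v3    # placeholder
        username: example-user                       # placeholder
        project_name: example-project                # placeholder
        user_domain_name: Default
        project_domain_name: Default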
Change-Id: If8869fe888c9f1e9c0a487405574d59dd3001b65
The Oregon State University Open Source Lab (OSUOSL;
https://osuosl.org/) has kindly donated some ARM64 resources. Add
initial cloud config.
Change-Id: I43ed7f0cb0b193db52d9908e39c04e351b3887e3
The OpenEdge cloud has been offline for five months, initially
disabled in I4e46c782a63279d9c18ff4ba2944c15b3027114b, so go ahead
and clean up lingering references. If it is restored later, this can
be reverted fairly easily.
Depends-On: https://review.opendev.org/783989
Depends-On: https://review.opendev.org/783990
Change-Id: I544895003344bc8202363993b52f978e1c07d061
We had the clouds split up from back when we used the openstack
dynamic inventory plugin. We don't use that anymore, so we don't
need these to be split. Any other usage we have references a cloud
directly.
Change-Id: I5d95bf910fb8e2cbca64f92c6ad4acd3aaeed1a3
There are insufficient IPv4 floating IPs to cover our VM quota; switch
to IPv6 only so all VMs can boot.
Change-Id: I2225fa9ea888bcf167be7139e036a4b5406b1f4f
This is required because RAX has an odd /v2 endpoint that isn't listed
in the catalogue but actually exists (though it isn't really full /v2
support). It became a problem when recent client versions dropped /v1
support, so now we have to force them to v2 like this.
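Assuming the API in question is block storage, the forcing in
clouds.yaml would look roughly like this; since the /v2 endpoint is not
in the catalogue, the real change may also need an endpoint override,
which is not shown here:

  clouds:
    rax-example:                 # placeholder entry name
      volume_api_version: 2      # skip discovery and use /v2 directly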
Depends-On: https://review.opendev.org/714624
Change-Id: I6f139d2b3036ef0ecaddf3a9a225faae3a2b0450
Change I9ca77927046e2b2e3cee9a642d0bc566e3871515 updated the
nodepool-builder_opendev group to deploy its config into
/etc/openstack, but updated the wrong template.
The nodepool_builder_clouds.yaml.j2 file was an old, unreferenced copy
left over from Id1161bca8f23129202599dba299c288a6aa29212, when we
wanted to use nodepool to manage control-plane clouds. That didn't
work out so well and I think we just missed cleaning it up in
I5e72928ec2dec37afa9c8567eff30eb6e9c04f1d.
Remove it now, and port the path changes into the correct config file.
Change-Id: I37af69b342b413df94435e59a7c16bb218183399
Currently we deploy the openstacksdk config into ~nodepool/.config on
the container, and then map this directory back to /etc/openstack in
the docker-compose. The config file still hard-codes the path to the
limestone.pem file under ~nodepool/.config.
Switch the nodepool-builder_opendev group to install to
/etc/openstack, and update the nodepool config file template to use
the configured directory for the .pem path.
Also update the testing paths.
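For illustration only, the moving pieces look roughly like this; the
service name, the variable name, and the assumption that the .pem is
consumed as a CA bundle are all mine, not taken from the actual change:

  # docker-compose.yaml (sketch)
  services:
    nodepool-builder:
      volumes:
        - /etc/openstack:/etc/openstack:ro

  # clouds.yaml.j2 (sketch; variable name illustrative)
  cacert: "{{ openstacksdk_config_dir | default('/etc/openstack') }}/limestone.pem"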
Story: #2007407
Task: #39015
Change-Id: I9ca77927046e2b2e3cee9a642d0bc566e3871515
Sister change for Ia5caff34d3fafaffc459e7572a4eef6bd94422ea, removing
earlier references to the mirror server in preparation for building
and adding the new one.
Change-Id: I7d506be85326835d5e77a0c9c461f2d457b1dfd3
This is a new cloud provided via citycloud that will add resources
capable of running Airship jobs. The goal is to use this as a stepping
stone to having Airship jobs run on our generic CI resources. This cloud
will provide both generic and larger resources to support this.
Change-Id: I63fd9023bc11f1382424c8906dc306cee5b3f58d
Rax APIs don't support newer identity v3 or volume v2/v3. Set identity
to v2 so that catalogs can be listed and volume to v1 so that volumes
can be listed.
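In clouds.yaml terms this amounts to roughly the following (auth and
region details omitted):

  clouds:
    rax-example:                 # placeholder entry name
      identity_api_version: 2    # keystone v3 is not supported here
      volume_api_version: 1      # cinder v2/v3 are not supported here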
Change-Id: I6dddf93fb2c7b1a73315629e4a983a2d5a0142cc
We don't want nodepool to use floating IPs in the fn cloud as it is an
IPv6-only cloud. We explicitly tell it there is no floating IP source
and that the tenant network routes IPv6 externally. This config is
based on the limestone configuration, which is a similar cloud
network-wise.
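Modeled on the limestone entry, the relevant clouds.yaml bits look
roughly like this; the network name is a placeholder:

  clouds:
    fortnebula:
      floating_ip_source: None           # no floating IP source in this cloud
      networks:
        - name: example-tenant-net       # placeholder
          routes_ipv6_externally: true   # v6 addresses are reachable directly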
Change-Id: I4a27a22a5beb9c5fc9d3e16cd2ca5b41aecbb46f
We ended up running into a problem with nodepool built control plane
images (has to do with boot from volume not allowing us to delete images
that are in use by a nova instance). We have decided to clean this up
and go back to not doing this until we can do it more properly.
Note this isn't a revert, because having a group for access to control
plane clouds does seem like a good idea in general, and I believe there
have been changes we'd have to resolve in the clouds.yaml files anyway.
Depends-On: https://review.opendev.org/#/c/665012/
Change-Id: I5e72928ec2dec37afa9c8567eff30eb6e9c04f1d
Donnyd has kindly offered us access to fortnebula's test cloud. This
adds clouds.yaml entries to bridge and nodepool so that we can take
advantage of these resources.
Change-Id: I4ebc261c6f548aca0b3f37dc9b60ffac08029e67
In order to have nodepool build images and upload them to control
plane clouds, add them to the clouds.yaml on the nodepool-builder
hosts. Keep them out of the launcher configs by splitting the config
templates. So that we can keep our copies of things to a minimum,
create a group called "control-plane-clouds" and put bridge and nb0*
in it.
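The grouping itself is just a normal Ansible group; a sketch of the
intent in YAML inventory form (the builder hostnames are assumed
expansions of nb0*, and the real inventory layout may differ):

  control-plane-clouds:
    hosts:
      bridge.openstack.org:
      nb01.openstack.org:       # assumed
      nb02.openstack.org:       # assumed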
There are mentions of clouds in here that we no longer use; a followup
patch will clean those up.
NOTE: Requires shifting the clouds config dict from
host_vars/bridge.openstack.org.yaml to group_vars/control-plane-clouds.yaml
in the secrets on bridge.
Needed-By: https://review.opendev.org/640044
Change-Id: Id1161bca8f23129202599dba299c288a6aa29212
Do this in an attempt to mitigate/work around the DNS resolution
problems we have had in that cloud. One thought is that this could be
IPv6 specific.
Change-Id: I1f9ef4a031749484d06de9427943abac4de33d29
Add the gitea k8s cluster to root's .kube/config file on bridge.
The default context does not exist in order to force us to explicitly
specify a context for all commands (so that we do not inadvertently
deploy something on the wrong k8s cluster).
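A sketch of the shape of the resulting kubeconfig; cluster, user, and
server values are placeholders, and current-context is deliberately
left unset:

  apiVersion: v1
  kind: Config
  clusters:
    - name: gitea-k8s                        # placeholder
      cluster:
        server: https://gitea-k8s.example.org:6443
  users:
    - name: gitea-admin                      # placeholder
      user:
        token: REDACTED
  contexts:
    - name: gitea-k8s
      context:
        cluster: gitea-k8s
        user: gitea-admin
  # no current-context on purpose; every command must name one, e.g.:
  #   kubectl --context gitea-k8s get nodes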
Change-Id: I53368c76e6f5b3ab45b1982e9a977f9ce9f08581
The current-context field needs to reference a defined context. The file
otherwise defines only one context, "vexxhost-sjc1". Set current-context
to that context.
Change-Id: I1d8991efb5d546f007146fd2fa86ce2b2aeed286
This adds connection information for an experimental kubernetes
cluster hosted in vexxhost-sjc1 to the nodepool servers.
Change-Id: Ie7aad841df1779ddba69315ddd9e0ae96a1c8c53
These names were taken from the citycloud web interface RC file, but
actually match what we already have in
playbooks/templates/clouds/nodepool_clouds.yaml.j2.
Testing with this, I can authenticate to openstackzuul-citycloud.
Change-Id: Ic7aeb5c3a96e5594b8c9c396daaad7e79c1f5c63
citycloud is rolling out per-region keystone. There is a change with an
error in it in the latest openstacksdk, so put the right auth_url into
the files directly while we update it and release it again.
Additionally, Sto2 and Lon1 each have different domain ids. The domain
names are the same though, and that's good, because logical names are
nicer in config files anyway.
Restore the config for those clouds.
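A hedged sketch of the shape of the restored entries; the entry names,
URLs, and domain name are all illustrative:

  clouds:
    citycloud-sto2:                                           # illustrative
      auth:
        auth_url: https://sto2.example.citycloud.com:5000/v3  # per-region keystone
        user_domain_name: example-domain      # same logical name in both regions
        project_domain_name: example-domain
    citycloud-lon1:                                           # illustrative
      auth:
        auth_url: https://lon1.example.citycloud.com:5000/v3
        user_domain_name: example-domain
        project_domain_name: example-domain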
Change-Id: If55d27defc164bd38af2ffd1e7739120389422af
This region does not show up in catalog listings anymore and is causing
inventory generation for ansible to fail. This change removes Sto2 from
the management side of things so that we can get ansible and puppet
running again.
This does not clean up nodepool, which we can do in a followup once
ansible and puppet are running again.
Change-Id: Ifeea238592b897aa4cea47b723513d7f38d6374b
This region does not show up in catalog listings anymore and is causing
inventory generation for ansible to fail. This change removes Lon1 from
the management side of things so that we can get ansible and puppet
running again.
This does not clean up nodepool, which we can do in a followup once
ansible and puppet are running again.
Change-Id: Icf3b19381ebba3498dfc204a48dc1ea52ae9d951
Keystone auth and openstacksdk/openstackclient do not do the correct
thing without this setting. They try v2 even though the discovery doc
at the root url does not list that version as valid. Force version 3
so that things will work again.
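A minimal sketch of the setting in question, assuming it lives in a
clouds.yaml entry:

  clouds:
    example-cloud:                 # placeholder
      identity_api_version: 3      # don't fall back to v2; discovery doesn't list it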
Change-Id: I7e1b0189c842bbf9640e2cd50873c9f7992dc8d3