We've been told these resources are going away, so we are trying to
remove them gracefully from nodepool. Once that is done we can remove
our configs here.
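The usual way to drain a provider gracefully is to stop new launches
first; a minimal sketch of that step in the nodepool config (the
provider and pool names here are illustrative, not the real ones):

  providers:
    - name: example-provider
      pools:
        - name: main
          # Stop launching new nodes; existing nodes are cleaned up
          # as their jobs finish, after which the provider stanza and
          # our configs here can be removed outright.
          max-servers: 0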
Depends-On: https://review.opendev.org/c/openstack/project-config/+/831398
Change-Id: I396ca49ab33c09622dd398012528fe7172c39fe8
The INAP mtl01 region is now owned by iWeb. This updates the cloud
launcher to use the new name, instructs the mirror in this cloud to
provision SSL certs for both the old inap and new iweb names, and
updates the clouds.yaml files.
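For illustration, the clouds.yaml side of the rename amounts to
something like this (the cloud name and endpoint below are
placeholders, not the actual values):

  clouds:
    opendev-iweb:        # formerly opendev-inap
      region_name: mtl01
      auth:
        auth_url: https://identity.example.net/v3   # placeholder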
Change-Id: I1256a2e24df1c79dea06716ae4dfbcfe119c13f8
We have limited ipv4 address space in this cloud. Currently we can do
about 6 IP addresses for test nodes after we account for network
infrastructure and the mirror. By switching these instances to use the
external network directly, we can clean up some of the neutron network
infrastructure, which we think may free up 2 more IP addresses. That
should get us our originally intended max-servers of 8.
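A rough sketch of the resulting nodepool pool config (the provider
and network names are illustrative):

  providers:
    - name: example-provider
      pools:
        - name: main
          max-servers: 8
          # Attach instances directly to the external network rather
          # than to a tenant network behind a neutron router, so no
          # addresses are consumed by router/NAT infrastructure.
          networks:
            - external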
Change-Id: I705ff082ff06ae1c97f4c229a22893e6d87d206d
This adds the new inmotion cloud to the clouds.yaml files and the
cloud launcher config. This cloud runs on an OpenStack-as-a-service
platform, so we have quite a bit of freedom to make changes here
within the resource limitations if necessary.
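The new clouds.yaml stanza looks roughly like the following (the
region, endpoint, and account values are placeholders; the real
credentials come from the secrets on bridge):

  clouds:
    opendevci-inmotion:
      region_name: RegionOne                        # placeholder
      auth:
        auth_url: https://inmotion.example.net/v3   # placeholder
        project_name: opendevci
        username: opendevci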
Change-Id: I2aed6dffde4a1d6e3044c4bd8df4ca60065ae1ea
The public5 network has the most IP addresses available and is
recommended for use.
This cloud also uses fixed public IPs, not floating IPs.
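In clouds.yaml terms that maps to something like this sketch
(assuming the os-client-config network options; the cloud name is
illustrative):

  clouds:
    example-cloud:
      # Servers get fixed public addresses, so there is no floating
      # IP to attach at boot.
      floating_ip_source: None
      networks:
        - name: public5
          routes_externally: true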
Change-Id: I7ae1bb0081d3a86149225c3400b53a9561ccffe6
The OpenEdge cloud has been offline for five months, initially
disabled in I4e46c782a63279d9c18ff4ba2944c15b3027114b, so go ahead
and clean up lingering references. If it is restored later, this can
be reverted fairly easily.
Depends-On: https://review.opendev.org/783989
Depends-On: https://review.opendev.org/783990
Change-Id: I544895003344bc8202363993b52f978e1c07d061
There are insufficient IPv4 floating IPs to cover our VM quota;
switch to IPv6 only so all VMs can boot.
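On the clouds.yaml side that looks roughly like this (a sketch; the
cloud and network names are placeholders):

  clouds:
    example-cloud:
      # Don't fall back to IPv4; the tenant network's IPv6 addresses
      # are globally routed and are used to reach the nodes.
      force_ipv4: false
      networks:
        - name: tenant-net-v6
          routes_ipv6_externally: true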
Change-Id: I2225fa9ea888bcf167be7139e036a4b5406b1f4f
This is required because RAX has an odd /v2 endpoint that isn't
listed in the catalogue but actually exists (though it isn't really
full /v2/ support). It became a problem when recent client versions
dropped /v1 support, so now we have to force them to v2 like this.
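The forcing here is presumably the version pin in clouds.yaml, along
these lines (a sketch; the cloud name is illustrative):

  clouds:
    rax-example:
      # The /v2 endpoint is real but unlisted in the catalogue, so
      # name the version explicitly instead of relying on discovery.
      volume_api_version: '2'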
Depends-On: https://review.opendev.org/714624
Change-Id: I6f139d2b3036ef0ecaddf3a9a225faae3a2b0450
Change I9ca77927046e2b2e3cee9a642d0bc566e3871515 updated the
nodepool-builder_opendev group to deploy its config into
/etc/openstack, but updated the wrong template.
The nodepool_builder_clouds.yaml.j2 file was an old, unreferenced copy
left over from Id1161bca8f23129202599dba299c288a6aa29212, when we
wanted to use nodepool to manage control-plane clouds. That didn't
work out so well, and I think we just missed cleaning it up with
I5e72928ec2dec37afa9c8567eff30eb6e9c04f1d.
Remove it now, and port the path changes into the correct config file.
Change-Id: I37af69b342b413df94435e59a7c16bb218183399
This is a sister change for Ia5caff34d3fafaffc459e7572a4eef6bd94422ea,
removing earlier references to the mirror server in preparation for
building and adding the new one.
Change-Id: I7d506be85326835d5e77a0c9c461f2d457b1dfd3
This is a new cloud provided via citycloud that will add resources
capable of running Airship jobs. The goal is to use this as a stepping
stone to having Airship jobs run on our generic CI resources. The
cloud will provide both generic and larger resources to support that
transition.
Change-Id: I63fd9023bc11f1382424c8906dc306cee5b3f58d
RAX APIs don't support the newer identity v3 or volume v2/v3. Set
identity to v2 so that catalogs can be listed and volume to v1 so that
volumes can be listed.
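In clouds.yaml form (a sketch; the cloud name is illustrative):

  clouds:
    rax-example:
      identity_api_version: '2'   # v3 unsupported; catalogs list again
      volume_api_version: '1'     # v2/v3 unsupported; volumes list again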
Change-Id: I6dddf93fb2c7b1a73315629e4a983a2d5a0142cc
We don't want nodepool to use floating IPs in the fn cloud, as it is
an ipv6-only cloud. We explicitly tell it there is no floating IP
source and that the tenant network routes ipv6 externally. This config
is based on the limestone configuration, which is a similar cloud
network-wise.
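Modelled on the limestone entry, the relevant clouds.yaml fragment is
roughly (the network name is a placeholder):

  clouds:
    fn-example:
      # There is no floating IP source in this cloud at all.
      floating_ip_source: None
      networks:
        - name: tenant-net
          routes_ipv6_externally: true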
Change-Id: I4a27a22a5beb9c5fc9d3e16cd2ca5b41aecbb46f
We ended up running into a problem with nodepool-built control plane
images (boot from volume does not allow us to delete images that are
in use by a nova instance). We have decided to clean this up and go
back to not doing this until we can do it more properly.
Note this isn't a revert, because having a group for access to control
plane clouds does seem like a good idea in general, and I believe
there have been changes we'd have to resolve in the clouds.yaml files
anyway.
Depends-On: https://review.opendev.org/#/c/665012/
Change-Id: I5e72928ec2dec37afa9c8567eff30eb6e9c04f1d
In order to have nodepool build images and upload them to control
plane clouds, add them to the clouds.yaml on the nodepool-builder
hosts. Keep them out of the launcher configs by splitting the config
templates. So that we can keep our copies of things to a minimum,
create a group called "control-plane-clouds" and put bridge and nb0*
in it.
There are mentions of clouds in here that we no longer use; a
followup patch will clean those up.
NOTE: Requires shifting the clouds config dict from
host_vars/bridge.openstack.org.yaml to group_vars/control-plane-clouds.yaml
in the secrets on bridge.
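A hedged sketch of the new group definition in the inventory (the
exact file layout and host patterns may differ):

  groups:
    control-plane-clouds:
      - bridge.openstack.org
      - nb0*.openstack.org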
Needed-By: https://review.opendev.org/640044
Change-Id: Id1161bca8f23129202599dba299c288a6aa29212
Do this in an attempt to mitigate/work around the DNS resolution
problems we have had in that cloud. One thought is that this could be
ipv6 specific.
Change-Id: I1f9ef4a031749484d06de9427943abac4de33d29
citycloud is rolling out per-region keystone. The latest openstacksdk
contains a change with an error in it, so put the right auth_url into
the files directly while we fix it and release it again.
Additionally, Sto2 and Lon1 each have different domain IDs. The
domain names are the same though, and that's good, because logical
names are nicer in config files anyway.
Restore the config for those clouds.
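The restored entries look something like this (the auth_url and
domain values are placeholders):

  clouds:
    citycloud-sto2:
      auth:
        # Per-region keystone endpoint, written out explicitly.
        auth_url: https://sto2.identity.example.net:5000/v3
        # Domain IDs differ between Sto2 and Lon1 but the logical
        # name is shared, so use the name.
        user_domain_name: example-domain
        project_domain_name: example-domain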
Change-Id: If55d27defc164bd38af2ffd1e7739120389422af
Deployment of the nodepool clouds.yaml file is currently failing with
FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'rackspace_username' is undefined"}
This is because the variables in the group_vars on bridge.o.o are all
prefixed with "nodepool_". Switch the template to use the prefixed
names.
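In the clouds.yaml template that means referencing the prefixed
variables, e.g. (a sketch; the surrounding keys and the second
variable name are illustrative):

  clouds:
    rax:
      auth:
        username: '{{ nodepool_rackspace_username }}'
        api_key: '{{ nodepool_rackspace_key }}'   # illustrative name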
Change-Id: I524cc628138d85e3a31c216d04e4f49bcfaaa4a8
It's definitely not a priori evident what all these configs that seem
to duplicate each other do; add some inline documentation to each to
hopefully explain what's going on a little more clearly for people
unfamiliar with them.
Change-Id: I0cc2e8773823b7d9b47d3dfd4c80827cd9929075
This manages the clouds.yaml files in Ansible so that we can get them
updated automatically on bridge.openstack.org (which does not run
puppet).
Co-Authored-By: James E. Blair <jeblair@redhat.com>
Depends-On: https://review.openstack.org/598378
Change-Id: I2071f2593f57024bc985e18eaf1ffbf6f3d38140