With the introduction of the DFW3 region, there are new projects
consistent across all regions. We want to switch to using those, but
right now our existing resources are in a legacy project that only
exists in the SJC3 region. Add the new projects to our bridge config
for both regions as new clouds, and remove the nonfunctional DFW3
from the old one for clarity. Once we've built up new resources and
cleaned up the old project in SJC3, we can clean up the entries
associated with it.
Change-Id: I66beaae4a6d53ad07293300153a2d4b8da33cc9f
There is a new DFW3 cloud region that we can use in raxflex. This
updates the two existing raxflex cloud profiles to add the DFW3
region. Note that we must use the Keystone endpoint for each
corresponding region, so we do some hacks that other cloud profiles,
like those for Open Telekom Cloud and citycloud, do within the
openstacksdk built-in profiles. Basically we template the region_name
into the auth url.
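For illustration, a minimal clouds.yaml sketch of the trick (the
endpoint hostname here is hypothetical):

    clouds:
      raxflex:
        regions:
          - SJC3
          - DFW3
        auth:
          # {region_name} is templated in per region, the same hack
          # the built-in Open Telekom Cloud and citycloud profiles use
          auth_url: https://keystone.{region_name}.example.com/v3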
Change-Id: I69ee0c8f4a92cfc100a4e7927a57af933cc287e9
Vexxhost has not been serving the well-known api profile for several
days now. In order to continue using the cloud, stop using the hosted
profile and add the extra configuration settings to our local profiles.
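Roughly, that means replacing the profile reference with explicit
settings in clouds.yaml; a sketch with assumed values:

    clouds:
      openstackci-vexxhost:
        # previously just: profile: vexxhost
        auth:
          auth_url: https://auth.vexxhost.net/v3  # assumed endpoint
        identity_api_version: '3'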
Change-Id: Ia6bac39cd5c140b71b31bd9c1c2742958dece5e6
For some reason (cache lookup timeouts?) using the project name and
domain wasn't working initially (but did begin working some time
after also logging into the Skyline dashboard). As a matter of
robustness, use the project IDs instead, which worked immediately
with no problem.
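In clouds.yaml terms, the auth section changes roughly like this (all
values shown are hypothetical):

    auth:
      # before (intermittently failing):
      #   project_name: openstackci
      #   project_domain_name: default
      # after:
      project_id: 0123456789abcdef0123456789abcdef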
Since we needed to add new values in our private hostvars for this,
go ahead and separate out the hostvars used for other items too as
future-proofing. These have all been added on the bridge now.
While we're here, do some cleanup of unnecessary default values
pointed out on the previous review.
Change-Id: I850ef61932e9818495fa99e1d13360693f82edd8
We're starting to experiment with Rackspace's new Flex cloud. It
uses basically the same credentials as their classic cloud but with
more typical Keystone configuration. Just add it to the clouds.yaml
configs initially so we can more easily interact with it from the
bridge, upload our server image and prepare to start launching a
mirror instance.
Because the credentials and identifiers are still basically the
same, this change relies on our existing private hostvars and
doesn't introduce any new ones for now.
Change-Id: I5d06a97d4ab44f02de59298a135c1f2384a2e18a
After this merges, the temporary credential set opendevci_rax_*
and opendevzuul_rax_* can be removed from hostvars.
Depends-On: https://review.opendev.org/911163
Change-Id: I2e9067aa2f11100d311c86beb4df5bf15c72db69
Rackspace is requiring multi-factor authentication for all users
beginning 2024-03-26. Enabling MFA on our accounts will immediately
render password-based authentication inoperable for the API. In
preparation for this switch, add new cloud entries for the provider
which authenticate by API key so that we can test and move more
smoothly between the two while we work out any unanticipated kinks.
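A sketch of what the new entries can look like, assuming the
rackspaceauth keystoneauth plugin (which provides an API-key auth
type) is installed; names and values here are illustrative:

    clouds:
      openstackci-rax-apikey:        # sits alongside the password entry
        auth_type: rackspace_apikey  # assumes the rackspaceauth plugin
        auth:
          auth_url: https://identity.api.rackspacecloud.com/v2.0/
          username: example-user
          api_key: '<secret>'        # kept in private hostvars
        region_name: DFW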
Change-Id: I787df458aa048ad80e246128085b252bb5888285
At the request of the upstream admins, use the public7 network which
should have more floating IPs we can use for test nodes.
Change-Id: I1ed46a23832a1b9761875d61f2d8779914d061ae
The last iteration of this donor environment was taken down at the
end of 2022; let's proceed with final config removal for it.
Change-Id: Icfa9a681f052f69d96fd76c6038a6cd8784d9d8d
We haven't used the Packethost donor environment in a very long
time, so go ahead and clean up lingering references to it in our
configuration.
Change-Id: I870f667d10cc38de3ee16be333665ccd9fe396b9
The mirror in our Limestone Networks donor environment is now
unreachable, but we ceased using this region years ago due to
persistent networking trouble and the admin hasn't been around for
roughly as long, so it's probably time to go ahead and say goodbye
to it.
Change-Id: Ibad440a3e9e5c210c70c14a34bcfec1fb24e07ce
All references to this cloud have been removed from nodepool, so we
can now remove nb03 and the mirror node.
Change-Id: I4d97f7bbb6392656017b1774b413b58bdb797323
Empirically, raw images start on the new cloud, while qcow2 ones
don't. Let's use raw, which is in line with OSUOSL (the other arm64
cloud) too.
Change-Id: I159c06b710580c36fa16c573bee7302949cf7257
This is just enough to get the cloud-launcher working on the new
Linaro cloud. It's a bit of a manual setup on much newer hardware,
so we're trying to do things in small steps.
Change-Id: Ibd451e80bbc6ba6526ba9470ac48b99a981c1a8d
This provider is going away and the depends-on change should be the last
step to remove it from nodepool. Once that is complete we can stop
trying to manage the mirror there (it will need to be manually shut
down), stop managing our user accounts, and stop writing clouds.yaml
files that include these details for inap/iweb on nodepool nodes.
Note we leave the bridge clouds.yaml content in place so that we can
manually clean up the mirror node. We can safely remove that clouds.yaml
content in the future without much impact.
Depends-On: https://review.opendev.org/c/openstack/project-config/+/867264
Change-Id: I01338712aeae79aa78e7f61d332a2290093c8a1b
Because the internap cloud was renamed to iweb and back again, the
history of the internap clouds.yaml profile is one of change.
Unfortunately, we need to talk to iweb specifically, but the internap
profile in the new openstacksdk talks to internap and things break.
Fix this by removing the use of the profile and setting the values
explicitly in our clouds.yaml files.
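The shape of the fix, with assumed values standing in for the real
ones:

    clouds:
      openstackci-iweb:
        # previously: profile: internap
        auth:
          auth_url: https://identity.api.cloud.iweb.com/v3  # assumed
        region_name: mtl01
        identity_api_version: '3'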
While this cloud is going away in about a month, making this change
is still worthwhile, as it will allow us to use the new openstacksdk
on bridge and nodepool to talk to iweb in the meantime.
Change-Id: I9f6c414115190ec5d25e0654b4da9cd9b9cbb957
This was pinned to v2 in I6dddf93fb2c7b1a73315629e4a983a2d5a0142cc
some time ago.
I have tested with this removed and openstacksdk appears to figure it
out correctly. Removing this is one less small thing we need to think
about.
Change-Id: I85c3df2ebf6a424724a8e6beb0611924097be468
We've incorrectly embedded the project ID in our block storage
endpoint override for Rackspace Public Cloud, which leads to a 404
Not Found response since the SDK appends the supplied project_id
already. Removing this allows us to use the latest versions of the
OpenStack CLI/SDK for volume management in Rackspace Public Cloud,
so long as we pin python-cinderclient<8 (for v2 API support).
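Concretely, the bad and good forms of the override (hostname and
project ID are illustrative):

    # broken: the SDK appends project_id again, yielding .../<id>/<id>
    block_storage_endpoint_override: https://dfw.blockstorage.api.rackspacecloud.com/v2/0123456

    # fixed:
    block_storage_endpoint_override: https://dfw.blockstorage.api.rackspacecloud.com/v2/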
Change-Id: If37f1a848ec4d3128784ed28068bfae9f06e2f14
We hold off on landing this until we're sure all the cleanup is done
with this cloud provider. Removing it with everything else would make
cleanup more difficult.
Change-Id: I905a37b430cb313b10a239d7d1b843404af06403
We've been told these resources are going away. We're trying to
remove them gracefully from nodepool. Once that is done we can remove
our configs here.
Depends-On: https://review.opendev.org/c/openstack/project-config/+/831398
Change-Id: I396ca49ab33c09622dd398012528fe7172c39fe8
INAP mtl01 region is now owned by iWeb. This updates the cloud
launcher to use the new name, instructs the mirror in this cloud to
provision SSL certs for both the old inap and new iweb names, and
updates clouds.yaml files.
Change-Id: I1256a2e24df1c79dea06716ae4dfbcfe119c13f8
We have limited ipv4 address space in this cloud. Currently we can do
about 6 IP addresses for test nodes after we account for network
infrastructure and the mirror. By switching these instances to using the
external network directly we can clean up some of the neutron network
infrastructure which we think may free up 2 more IP addresses. That
should get us our originally intended max-servers of 8.
Change-Id: I705ff082ff06ae1c97f4c229a22893e6d87d206d
This adds the new inmotion cloud to clouds.yaml files and the cloud
launcher config. This cloud is running on an OpenStack-as-a-service
platform, so we have quite a bit of freedom to make changes here
within the resource limitations if necessary.
Change-Id: I2aed6dffde4a1d6e3044c4bd8df4ca60065ae1ea
The public5 network has the most IP addresses available and is
recommended for use.
This cloud also has fixed public IPs, not floating ones.
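In clouds.yaml terms this can be expressed roughly as follows (a
sketch with an assumed cloud name, not the exact change):

    clouds:
      example-cloud:              # name hypothetical
        networks:
          - name: public5
            routes_externally: true
            default_interface: true
        # addresses are assigned directly, no floating IPs
        floating_ip_source: None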
Change-Id: I7ae1bb0081d3a86149225c3400b53a9561ccffe6
Otherwise you get:

    BadRequest: Expecting to find domain in project - the server could
    not comply with the request since it is either malformed or
    otherwise incorrect. The client is assumed to be in error.
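The fix is to fully qualify the project with its domain in the auth
section, along these lines (values illustrative):

    auth:
      project_name: openstackci      # example value
      project_domain_name: Default   # the part Keystone was missing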
Change-Id: If8869fe888c9f1e9c0a487405574d59dd3001b65
The Oregon State University Open Source Lab (OSUOSL;
https://osuosl.org/) has kindly donated some ARM64 resources. Add
initial cloud config.
Change-Id: I43ed7f0cb0b193db52d9908e39c04e351b3887e3
The OpenEdge cloud has been offline for five months, initially
disabled in I4e46c782a63279d9c18ff4ba2944c15b3027114b, so go ahead
and clean up lingering references. If it is restored later, this can
be reverted fairly easily.
Depends-On: https://review.opendev.org/783989
Depends-On: https://review.opendev.org/783990
Change-Id: I544895003344bc8202363993b52f978e1c07d061
We had the clouds split from back when we used the openstack
dynamic inventory plugin. We don't use that anymore, so we don't
need these to be split. Any other usage we have directly references
a cloud.
Change-Id: I5d95bf910fb8e2cbca64f92c6ad4acd3aaeed1a3
There are insufficient ipv4 floating-ips to cover our VM quota;
switch to ipv6 only so all VMs can boot.
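A sketch of how that can look in clouds.yaml (cloud and network names
are hypothetical, and the exact knobs depend on how the network is
modelled):

    clouds:
      example-cloud:
        force_ipv4: false
        networks:
          - name: public-ipv6
            routes_ipv6_externally: true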
Change-Id: I2225fa9ea888bcf167be7139e036a4b5406b1f4f
This is required because RAX have an odd /v2 that isn't listed in the
catalogue but actually exists (but isn't really full /v2/ support).
It became a problem when recent client versions dropped /v1 support,
so now we have to force them to v2 like this.
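i.e., something like this in the affected clouds.yaml entries (entry
name illustrative):

    clouds:
      openstackci-rax:
        # force cinder v2; RAX's /v2 exists but is not in the catalogue
        volume_api_version: '2'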
Depends-On: https://review.opendev.org/714624
Change-Id: I6f139d2b3036ef0ecaddf3a9a225faae3a2b0450
Change I9ca77927046e2b2e3cee9a642d0bc566e3871515 updated the
nodepool-builder_opendev group to deploy its config into
/etc/openstack, but updated the wrong template.
The nodepool_builder_clouds.yaml.j2 file was an old, unreferenced
copy left over from Id1161bca8f23129202599dba299c288a6aa29212, when
we wanted to use nodepool to manage control-plane clouds. That didn't
work out so well, and I think we just missed cleaning it up with
I5e72928ec2dec37afa9c8567eff30eb6e9c04f1d.
Remove it now, and port the path changes into the correct config file.
Change-Id: I37af69b342b413df94435e59a7c16bb218183399
Currently we deploy the openstacksdk config into ~nodepool/.config on
the container, and then map this directory back to /etc/openstack in
the docker-compose. The config file still hard-codes the
limestone.pem path under ~nodepool/.config.
Switch the nodepool-builder_opendev group to install to
/etc/openstack, and update the nodepool config file template to use
the configured directory for the .pem path.
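After this, the template points the key at the configured directory;
roughly (a sketch, not the exact template):

    clouds:
      limestone:
        # was hard-coded under ~nodepool/.config
        key: /etc/openstack/limestone.pem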
Also update the testing paths.
Story: #2007407
Task: #39015
Change-Id: I9ca77927046e2b2e3cee9a642d0bc566e3871515
Sister change for Ia5caff34d3fafaffc459e7572a4eef6bd94422ea,
removing earlier references to the mirror server in preparation for
building and adding the new one.
Change-Id: I7d506be85326835d5e77a0c9c461f2d457b1dfd3
This is a new cloud provided via citycloud that will add resources
capable of running Airship jobs. The goal is to use this as a stepping
stone to having Airship jobs run on our generic CI resources. This cloud
will provide both generic and larger resources to support this.
Change-Id: I63fd9023bc11f1382424c8906dc306cee5b3f58d
Rax APIs don't support newer identity v3 or volume v2/v3. Set identity
to v2 so that catalogs can be listed and volume to v1 so that volumes
can be listed.
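i.e. in the rax clouds.yaml entries (entry name illustrative):

    clouds:
      openstackci-rax:
        identity_api_version: '2'  # so catalogs can be listed
        volume_api_version: '1'    # so volumes can be listed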
Change-Id: I6dddf93fb2c7b1a73315629e4a983a2d5a0142cc