We ran into this when fixing the zuul02 swap partition. Essentially
parted complained that our alignments weren't optimal. After some
googling, the consensus was that using multiples of 8 tends to be
safe. We shifted the partition's start offset from 1MB to 8MB and the
warnings went away.
Add this change into make_swap.sh to automate this for future servers.
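For reference, the parted call this amounts to looks roughly like the
sketch below (make_swap.sh itself is shell; the device and end size
here are illustrative):

  import subprocess

  # Start the swap partition at 8MB instead of 1MB so parted's
  # optimal-alignment check is satisfied.
  subprocess.check_call(
      ['parted', '-s', '-a', 'optimal', '/dev/xvdb',
       'mkpart', 'primary', 'linux-swap', '8MB', '8GB'])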
Change-Id: Iad3ef40cf2c1e064482d49bd722c3de4354ec74d
We just discovered that a number of new servers have rather small swap
sizes. It appears this snuck in via change 782898 which tries to bound
the max swap size to 8GB. Unfortunately parted expects its input in
MB, so we ended up making an 8MB swap instead of an 8GB one.
Bump the min() bound to 8192 to fix this.
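As a sketch of the intended calculation (the function name here is
made up; the real logic lives in make_swap.sh):

  # parted is fed sizes in MB, so the 8GB cap must be 8192, not 8.
  def swap_size_mb(ram_mb):
      return min(ram_mb, 8192)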
Change-Id: I76b5b7dd8ac76c2ecbab9064bcdf956394b3a770
It seems newer focal images force you to use the ubuntu user account.
We already have code that attempts to fall back to ubuntu, but we were
not properly catching the error when the root login fails, which
caused the whole script to fail.
Address this by catching the exception, logging a message, then
continuing to the next possible option. If no option succeeds we
already raise an exception, which handles the worst case.
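The launch script has its own ssh plumbing, but the pattern is roughly
this (a paramiko-based sketch; names are illustrative):

  import paramiko

  def connect_with_fallback(host, users=('root', 'ubuntu')):
      for user in users:
          client = paramiko.SSHClient()
          client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
          try:
              client.connect(host, username=user)
              return client, user
          except Exception as e:
              # Log and fall through to the next candidate user
              print('Login as %s failed (%s); trying next option'
                    % (user, e))
      raise Exception('Could not log in to %s with any known user' % host)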
Change-Id: Ie6013763daff01063840abce193050b33120a7a2
If you're donated a really nice, big server by a friendly provider
like Vexxhost, you need to cap the amount of swap you make, or you
fill up the entire root disk.
Change-Id: Ide965f7df8db84a6bbfe3294c9c5b85f0dd7367f
Previously if you ran `sshfp.py foo.opendev.org x.y.z.a` it would spit
out records that look like:
foo.opendev.org IN SSHFP 1 1 stuffstuffstuff
The problem is that when you copy this output into the zone file,
the lack of a terminating '.' means the record will actually be for
foo.opendev.org.opendev.org.
We address this by splitting on '.' and taking the first element. This
will still be broken for hosts named foo.bar.opendev.org but for now is
a decent improvement.
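In code, the change is essentially:

  def record_name(fqdn):
      # 'foo.opendev.org' -> 'foo'; the zone file's $ORIGIN supplies
      # the rest. Still broken for multi-label hosts like
      # foo.bar.opendev.org, as noted above.
      return fqdn.split('.')[0]

  print('%s IN SSHFP 1 1 stuffstuffstuff' % record_name('foo.opendev.org'))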
Change-Id: Ib12f66c30e20a62d14d0d0ddd485e28f7f7ab518
When launching new nodes with launch-node.py we need to wait for ipv6
addresses to be configured before running the ping6 sanity checks. The
reason is that some clouds rely on router advertisements to configure
ipv6 addrs on VMs. These arrive periodically, so the VM may not have
its ipv6 address configured yet when we try to ping6.
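The wait might look something like this sketch (the timeout and poll
interval are illustrative):

  import subprocess
  import time

  def wait_for_ping6(addr, timeout=120):
      # RAs arrive periodically, so keep retrying rather than failing
      # on the first attempt.
      deadline = time.time() + timeout
      while time.time() < deadline:
          if subprocess.call(['ping6', '-c', '1', addr]) == 0:
              return
          time.sleep(5)
      raise Exception('%s never answered ping6' % addr)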
Change-Id: I77515fec481e4146765630cd230dd3c2c296958f
These don't make any sense in the top-level these days.
Once upon a time we used to use these as node scripts to bring up
testing nodes (I think). The important thing is they're not used now.
Change-Id: Iffa6c6bee647f1a242e9e71241d829c813f2a3e7
It turns out bionic ssh-keygen doesn't have the "-D" option to produce
the sshfp records; switch to logging in and getting these via
"ssh-keygen -r" on the host.
Change-Id: Icb6efd7c4fd9623af24e58c69f8a188a4c1fb4c9
Add a tool to scan a host and generate the sshfp records to go into
DNS. Hook this into the DNS printout from the node launcher.
Change-Id: I686287c3c081debeb6a230e2a3e7b48e5720c65a
As part of our audit to find out what needs to be ported from python2 to
python3 I've discovered that launch-node is already all python3 (because
it runs on bridge) but the shebangs still pointed to `python`. Update
them to reduce confusion while we do the audit and potentially
uplift/port things.
Change-Id: I9a4c9397a1bc9a8b39c60b92ce58c77c0cb3f7f0
We use project-config for gerrit, gitea and nodepool config. That's
cool, because we can clone that from zuul too and make sure that each
prod run we're doing runs with the contents of the patch in question.
Introduce a flag file that can be touched in /home/zuulcd that will
block zuul from running prod playbooks. By default, if the file is
there, zuul will wait for an hour before giving up.
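A sketch of the idea (the flag file path and name here are made up):

  import os
  import time

  FLAG = '/home/zuul/DISABLE-ANSIBLE'  # hypothetical path/name
  waited = 0
  while os.path.exists(FLAG):
      if waited >= 3600:
          raise Exception('Prod playbooks still disabled after an hour')
      time.sleep(10)
      waited += 10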
Rename zuulcd to zuul
To better align prod and test, name the zuul user zuul.
Change-Id: I83c38c9c430218059579f3763e02d6b9f40c7b89
The "PVHVM" image appears to have disappeared from RAX, replaced with
a "Cloud" image.
Maybe I haven't looked in the right place, but I can't find any info
on if, why or when this was updated. But I started a server with the
"Cloud" image and it seems the same as the PVHVM image to me; hdparm
showed read speeds the same as an older server and dd writes to a file
were the same speed (recorded below for posterity).
ianw@nb04:~$ dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.21766 s, 206 MB/s
ianw@nb04:~$ sudo hdparm -Tt /dev/xvda
/dev/xvda:
Timing cached reads: 16428 MB in 1.99 seconds = 8263.05 MB/sec
Timing buffered disk reads: 752 MB in 3.00 seconds = 250.65 MB/sec
From looking at dmesg it has
[ 0.000000] DMI: Xen HVM domU, BIOS 4.1.5 11/28/2013
[ 0.000000] Hypervisor detected: Xen HVM
[ 0.000000] Xen version 4.1.
[ 0.000000] Xen Platform PCI: I/O protocol version 1
[ 0.000000] Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
[ 0.000000] Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
which, if [1] is anything to go by, suggests it is in PVHVM mode
anyway.
tl;dr seems like the image name changed.
[1] https://xen-orchestra.com/blog/debian-pvhvm-vs-pv/
Change-Id: I4ff14e7e36f59a9487c32fdc6940e8b8a93459e6
First of all, we're using RST syntax, so rename it to README.rst.
More importantly, remove mentions of puppetmaster - and puppet in
general - as they are distracting. When reading the file, my eyes
scanned and hit puppetmaster and I almost skipped the section with the
assumption it was out of date.
Change-Id: I294bf17084be7dad46e075ad2a3ef2674276c018
If you happen to be booting a replacement host, you don't want
ansible to pick up the current host from the current inventory. Put
the new server's inventory last in the list so it overrides anything
before it.
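In other words, build the command with the new inventory last (a
sketch; the paths are illustrative):

  import subprocess

  subprocess.check_call([
      'ansible-playbook',
      '-i', '/etc/ansible/hosts',     # may still contain the old host
      '-i', '/tmp/launch/inventory',  # new server last so its entry wins
      'base.yaml'])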
Change-Id: I3f1edfb95924dae0256f969bc740f1141e291c25
In our launch node script we have the option to ignore ipv6 to deal with
clouds like ovh that report an ipv6 address but don't actually provide
that data to the instance so it cannot configure ipv6. When we ignore
ipv6 we should not try to use the ipv6 address at all.
Use the public_v4 address in this case when writing out an ansible
inventory to run the base.yaml playbook when launching the node.
Otherwise we could end up using the ipv6 address, which doesn't work.
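i.e. roughly (a sketch; the variable names are illustrative, though
shade/sdk servers do expose public_v4/public_v6):

  # ignore_ipv6 is the launch-time option; server comes from shade/sdk
  if ignore_ipv6 or not server.public_v6:
      ansible_host = server.public_v4
  else:
      ansible_host = server.public_v6
  print('%s ansible_host=%s' % (server.name, ansible_host))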
Change-Id: I2ce5cc0db9852d3426828cf88965819f88b3ebd5
The launch script is referring to the wrong path for the emergency
inventory. Also correct the references in the sysadmin guide and
update the example for using it.
Change-Id: I80bdbd440ec451bcd6fb1a3eb552ffda32407c44
As noted inline, this needs to be skipped on OVH (and I always forget,
and debug this over and over when launching a mirror node there :).
Change-Id: I07780e29f5fef75cdbab3b504f278387ddc4b13f
The sandbox repos moved from openstack-dev to opendev; the
zone-opendev.org and zone-zuul-ci.org repos did as well.
Follow the rename in this repo.
Depends-On: https://review.opendev.org/657277
Change-Id: I31097568e8791cc49c623fc751bcc575268ad148
This was introduced with Ia67e65d25a1d961b619aa445303015fd577dee57
Passing "-i file1,file2,file.." makes Ansible think that the inventory
argument is a list of hostnames. Separate out the "-i" flags so it
reads each file as desired.
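Sketched (paths illustrative):

  inventories = ['/etc/ansible/hosts', '/tmp/jobdir/inventory']

  # Wrong: ['-i', ','.join(inventories)] -- ansible parses the
  # comma-joined string as a list of hostnames, not inventory files.
  cmd = ['ansible-playbook']
  for inv in inventories:
      cmd += ['-i', inv]  # one -i per file, each read as an inventory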
Change-Id: I92c9a74de6552968da6c919074d84f2911faf4d4
I managed to leave off the "--image" flag for a Xenial host, so the
script created a Bionic host by default. I let that play out, deleted
the host and tried again with the correct image, but what ended up
happening was that the fact cache thought this new host was Bionic,
several ansible roles ran on that assumption, and we ended up with a
bad Xenial/Bionic mashup.
Clear the cache on node launch to avoid this sort of thing again.
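A sketch of the cleanup (the cache path and hostname are illustrative):

  import os

  fact_cache = '/var/cache/ansible/facts'
  hostname = 'newserver01.opendev.org'
  cache_file = os.path.join(fact_cache, hostname)
  if os.path.exists(cache_file):
      os.unlink(cache_file)  # stale facts from a prior launch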
I have launched a node with this new option, and it worked.
Change-Id: Ie37f562402bed3846f27fbdd4441b5f4dcec7eb2
Passing -i with just the jobdir inventory means we're overriding the
global inventory. This means variables that come from the /etc/ansible
vars, like sysadmins, are missing.
Add the global inventory to the command line for ansible-playbook.
We have --limit specified from '-l' - so we should still only run
on the host in question.
Change-Id: Ia67e65d25a1d961b619aa445303015fd577dee57
When we're booting boot-from-volume servers and there are errors,
we leave the root volume around. Clean up after ourselves.
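With a shade-style cloud handle that's roughly (a sketch; error
handling elided):

  # After a failed boot-from-volume launch, delete the orphaned root
  # volume rather than leaking it.
  for volume in cloud.get_volumes(server):
      cloud.delete_volume(volume['id'], wait=True)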
Change-Id: I6341cdbf21d659d043592f92ddf8ecf6be997802
When launching a new server we should make sure that all available
package updates are installed before we reboot the server. This way we
get available security updates applied to things like our kernel.
This change adds a new playbook that runs the unattended-upgrade command
on debuntu servers. Will need to add support for other platforms in a
followup change.
Change-Id: Idc88dc33afdd209c388452493e6a7f5731fa0974
We want to be launching opendev servers more and more now. Update the
launch docs to point out some of the differences with opendev servers.
Additionally point out that we need to update our static inventory file
so that ansible (and puppet) see the new host.
Change-Id: I425377c50007e11aa99cb53f3f5dc3068911ef7f
Some clouds may be a little slower than others building instances,
and to override the create_server default timeout of 3 minutes (180
seconds) you have to hand edit the script -- add a global timeout
option and use that consistently.
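e.g. something like (the value is illustrative):

  TIMEOUT = 600  # seconds; one knob instead of scattered hand edits

  server = cloud.create_server(
      name, image=image, flavor=flavor, wait=True, timeout=TIMEOUT)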
Change-Id: I66032ef929746739d07dca3fd178b8c43bb8174c
Remove the section on launching nodes in the jenkins tenant. That
never happens.
Remove the bits about groups and sudo, as they aren't relevant
any more.
Remove the unused os_client_config import.
Change-Id: I676bb7450ec80df73b76ee7841f78eadbe179183
os.listdir returns dirents relative to the dir being listed. We need
to give the full path to these entries when unlinking them. Do this by
joining
the inventory_cache_dir path to each inventory_cache file.
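i.e.:

  import os

  for entry in os.listdir(inventory_cache_dir):
      # entry is relative to the directory; build the full path
      os.unlink(os.path.join(inventory_cache_dir, entry))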
Change-Id: I78376cfa3b2aa92641f2685b08616660f523dfaf
Update the launch node readme and script to use python3 on the new
bridge node. There is no python2. Also update ansible to pull in
python3 support. The version we had been using wasn't python3 happy.
Change-Id: I6122160eb70eb6b5f299a8adb6478a9046ff1725
Replace launch-node.py with launch-node-ansible.py. Update it to
delete the inventory cache correctly.
Also, update the docs to list Bionic by default rather than Trusty.
Change-Id: Iadda897b7e71dc12c8db4ced120894054169bbb8
The production directory is a relic from the puppet environment concept,
which we do not use. Remove it.
The puppet apply tests run puppet locally, where the production
environment is still needed, so don't update the paths in
tools/prep-apply.sh.
Depends-On: https://review.openstack.org/592946
Change-Id: I82572cc616e3c994eab38b0de8c3c72cb5ec5413
We want to launch a new bastion host to run ansible on. Because we're
working on the transition to ansible, it seems like being able to do
that without needing puppet would be nice. This gets user management,
base repo setup and whatnot installed. It doesn't remove them from the
existing puppet, nor does it change the way we're calling anything that
currently exists.
Add bridge.openstack.org to the disabled group so that we don't try to
run puppet on it.
Change-Id: I3165423753009c639d9d2e2ed7d9adbe70360932
Change I76b1099bf0cf3bfead17f96e456cdce87d0e8a49 altered the name of
the inventory script, so reflect that in the corresponding
subprocess call in launch-node.py and a comment in the
expand-groups.sh script.
Change-Id: I4c2c762716813b5d59dcc1b623f5988c8aa7d490
The dns.py file uses openstack.connect to make the Connection but
launch_node.py was still using shade.OpenStackCloud, so when the
connection was passed to dns.py it was trying to use an SDK property but
getting a Shade object.
This is because while sdk has been updated with all of the shade
functionality, we haven't updated shade yet to provide the sdk version
of the object, so shade-style objects from sdk have things that
objects from shade itself don't yet have.
Update launch_node.py to use the same Connection construction that
dns.py does.
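i.e. something like (the cloud name is illustrative):

  import openstack

  # Build an sdk Connection, as dns.py does, instead of a
  # shade.OpenStackCloud.
  cloud = openstack.connect(cloud='mycloud')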
Change-Id: I1c6bfe54f94effe0e592280ba179f61a6d983e7a