Given there is almost no difference between the releases, we can
use the same vars file and simple conditionals. The package
'software-properties-common' is available for both Trusty and
Xenial, so we just use that and remove the unnecessary extra
package.
We also now add the correct UCA repositories for Trusty and Bionic
so that we get the latest version of libvirt.
Finally, we simplify the conditional for the iptables binary to
make it far easier to read.
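As an illustration only (not the exact tasks in this patch), enabling
a UCA pocket with Ansible might look like the following sketch, which
assumes the Queens pocket on a Xenial host:

    # Hedged sketch; the pocket and release below are assumptions.
    - name: Install the repository management tooling
      package:
        name:
          - software-properties-common
          - ubuntu-cloud-keyring    # trusts the UCA signing key
        state: present

    - name: Add the Ubuntu Cloud Archive repository
      apt_repository:
        repo: "deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/queens main"
        state: present
        update_cache: yes
      when: ansible_distribution_release == 'xenial'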
Change-Id: Id4b3711a4d7a0ccc13db956d41017ac01c97825f
In order to improve the readability and robustness of the MNAIO
feature, I have replaced the tasks that shell out to virsh with the
virt module where available. I have also created a vm-status play that
will hopefully help resolve SSH failures into the VMs. This play uses
the block/rescue pattern to restart a VM once if it fails the initial
SSH check, which should reduce SSH failures caused by a stuck VM. This
adds a new variable, vm_ssh_timeout, which gives the deployer an easy
place to override the default timeout. The python-lxml package is
needed by the virt module.
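The following is a minimal sketch of that block/rescue pattern, not
the actual play; the vm_host variable and the timeout default are
assumptions for illustration:

    - name: Check VM connectivity, restarting once on failure
      block:
        - name: Wait for SSH on the VM
          wait_for:
            host: "{{ ansible_host }}"
            port: 22
            timeout: "{{ vm_ssh_timeout | default(300) }}"
      rescue:
        # Power-cycle the VM via the virt module (needs python-lxml),
        # then give SSH one more chance before failing the play.
        - name: Force the VM off
          virt:
            name: "{{ inventory_hostname }}"
            state: destroyed
          delegate_to: "{{ vm_host }}"    # hypothetical hypervisor host
        - name: Start the VM again
          virt:
            name: "{{ inventory_hostname }}"
            state: running
          delegate_to: "{{ vm_host }}"
        - name: Wait for SSH after the restart
          wait_for:
            host: "{{ ansible_host }}"
            port: 22
            timeout: "{{ vm_ssh_timeout | default(300) }}"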
Change-Id: I027556b71a8c26d08a56b4ffa56b2eeaf1cbabe9
When using an Ubuntu Xenial host, this patch adds the Ubuntu
Cloud Archive Queens repository so that the MNAIO tooling
makes use of libvirt v4. This provides access to a better
snapshot and snapshot-revert implementation, among other
features.
To improve the chances of success during builds, retries
are added to the package install tasks. Also, given that
we're using Ansible > 2.1.x, we forgo the with_items loop
for the package installs and just give the package module
the list so that it installs them all at once.
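A hedged sketch of what those two changes look like together; the
variable name and the retry counts are illustrative:

    - name: Install host packages
      package:
        name: "{{ mnaio_host_distro_packages }}"   # a list, no with_items
        state: present
      register: _pkg_install
      until: _pkg_install is success
      retries: 5
      delay: 10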
Change-Id: I0373e29fb996de1538465277760a0181289cbb44
We normally see SSH connection issues during the LXC container setup
portion of OSA builds. Most people end up tweaking the Ansible SSH
pipelining and retry settings, or nerfing the build by lowering the
Ansible fork count, to work around it. This is an old issue for which
we normally put a more permanent fix in place in our physical
environments by setting the sshd MaxSessions and MaxStartups options.
On MNAIO builds I have been working around this by stopping the build
before deployment and making the changes in a script.
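A minimal sketch of that permanent fix as an Ansible task; the limit
values and the handler name are assumptions, not necessarily what this
patch uses:

    - name: Raise sshd connection limits
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^#?{{ item.key }}"
        line: "{{ item.key }} {{ item.value }}"
      with_items:
        - { key: "MaxSessions", value: "100" }
        - { key: "MaxStartups", value: "100:30:100" }
      notify: restart sshd    # hypothetical handler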
Change-Id: I54c223e1fb9edf6947bc7f76ff689bad22456420
Closes-Bug: 1752914
Currently, the execution of site.yml fails because it installs
python-netaddr in the same run in which it uses it, and
ansible-playbook cannot see the newly installed module.
This commit simply removes python-netaddr from
mnaio_host_distro_packages and adds a new step to build.sh to
install it before site.yml is kicked off.
NOTE: This commit also switches to installing netaddr via pip
instead of the system package, since that does not require
pre-loading OS-dependent vars files.
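A hedged sketch of the build.sh change; the inventory path is an
assumption for illustration:

    # Make netaddr importable before the playbook run that needs it.
    pip install netaddr
    ansible-playbook -i hosts site.yml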
Change-Id: I324ba61a860f5942b40972903ae1c40caa7839e5
This change ensures that the VMs and host systems cache apt packages
locally, which speeds up the boot and deployment process.
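Purely as an illustrative sketch (the cache address, port, and file
name are assumptions), pointing a system's apt at a local cache could
be done with a task like:

    - name: Configure apt to use the local package cache
      copy:
        content: 'Acquire::http::Proxy "http://10.0.2.1:3142";'
        dest: /etc/apt/apt.conf.d/00local-apt-cache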
Change-Id: I234e338b9f1b9f11ff1e81ede8c5717e033fdad8
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
The original MNAIO was built using a lot of bash and was tailored
specifically for Ubuntu 14.04. The new MNAIO was built using a mix of
bash and Ansible and was tailored specifically for Ubuntu 16.04. This
patch takes the two code bases, combines the best things from each
method, and wraps it all up into a single code path written using
Ansible playbooks and basic variables.
While the underlying system has changed, the bash environment variable
syntax for overrides remains the same. This allows users to continue
with what has become their normal work-flow while leveraging the new
structure and capabilities.
High level overview:
* The general performance of the VMs running within the MNAIO will now
  be a lot better. Before, the VMs were built on QCOW2 images; while
  flexible and portable, this was slower. The new capabilities will use
  RAW logical volumes and native IO (see the sketch after this list).
* New repo management starts with preseeds and allows the user to pin
to specific repositories without having to worry about flipping them
post build.
* CPU overhead will be a lot less. The old VM system used an
  unreasonable number of processors per VM, which directly translated
  to sockets. The new system will use cores and a single socket,
  allowing for generally better VM performance with a lot less
  overhead and resource contention on the host.
* Memory consumption has been greatly reduced. Each VM now follows the
  memory restrictions we'd find in the gate as a maximum. Most of the
  VMs use 1-2 GiB of RAM, which should be more than enough for our
  purposes.
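The disk and CPU bullets above roughly correspond to libvirt domain
settings like the sketch below; the volume group, LV name, and core
count are assumptions, not the project's actual templates:

    <!-- Hedged sketch: raw LV disk with native IO, single socket. -->
    <vcpu>4</vcpu>
    <cpu>
      <topology sockets='1' cores='4' threads='1'/>
    </cpu>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vg00/infra1'/>    <!-- hypothetical LV -->
      <target dev='vda' bus='virtio'/>
    </disk>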
Overall the deployment process is simpler and more flexible, and it
will work on both trusty and xenial out of the box, with the hope of
bringing centos7 and suse into the fold some time in the future.
Change-Id: Idc8924452c481b08fd3b9362efa32d10d1b8f707
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>