* We don't need to create the containers as they are created during the
initial run.
* Remove quoting in favor of {% raw %} blocks
Change-Id: Ied696ad0882169d523a60a900788e7c2ba1d3fa3
This change allows the playbook to be run using an older version of
Ansible. It is necessary for my use case, where I am running all
OSA and related playbooks in a local docker container for a Newton
deployment.
The use of Newton OSA's Ansible bootstrap script means that the
openstack-ansible command my workflow uses requires Ansible 2.1,
which does not support `include_tasks`. This change addresses that
problem by replacing `include_tasks` with `include` in the playbook
that needs to be run via openstack-ansible, which produces the same
result.
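For example, a minimal sketch of the replacement (the file name
is illustrative):

    # Before: requires Ansible 2.4 or later
    - include_tasks: prepare-networking.yml

    # After: works on Ansible 2.1
    - include: prepare-networking.yml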
Change-Id: I8b2a0217e851d022ee40cbdd8bc8045e18d5a07d
Allows for the deployment/bootstrap of OSA to be skipped
by skipping run_osa, while still allowing configuration
to be added during pre_config_osa.
Change-Id: I40b0c8209f03c7e9543c7c688f2ef8ba2ebdf72d
There can be situations where a gvwstate.dat file is present
in at least one galera container, but the my_uuid and view_id
do not match in any of them. In this case, we should just pick
any container to be the master.
This patch caters for this situation, ensuring that the cluster
still bootstraps whenever the VM boots.
Change-Id: If87cd9399b6624418f16910e4ddc046aaa22e5c5
Nested virtualization is important for improving VM performance,
and enabling it is crucial to ensuring that VM images built
on one host boot and work on other hosts, because the environment
is consistent.
This patch adds a task to enable it if it is available.
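A minimal sketch of such a task for an Intel host (the guard and
file path are assumptions):

    - name: Check whether the kvm_intel module supports nesting
      command: cat /sys/module/kvm_intel/parameters/nested
      register: kvm_nested
      changed_when: false
      failed_when: false

    - name: Enable nested virtualization on the next module load
      copy:
        content: "options kvm_intel nested=1\n"
        dest: /etc/modprobe.d/kvm-nested.conf
      when:
        - kvm_nested.rc == 0
        - kvm_nested.stdout not in ['Y', '1']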
Change-Id: I812d8399cf45fab94f0f46976c9415591d45e463
Due to the rather terrible virt_net module, only one action
can be done on the virt networks at any one time. This means
that the current action of setting them to autostart has no
effect, because the module does not do it. Also, the current
action of disabling the default network and disabling it from
autostarting also does not take full effect. As such, after a
host reboot, the default network autostarts, and the other
networks are not started and the VMs cannot start. When trying
to resolve this by re-running the host setup, the play ignores
any existing virt networks - so the issue cannot be fixed.
This patch does the following:
1. Ensures that the default network does not autostart. This
is done by splitting the disabling of the network and the
disabling of autostart into two tasks.
2. Changes the define/create action into a single action which
will not change the network configuration if it is defined.
3. Implements the setting of the network as active, and the
setting of it to autostart as two separate tasks. This
ensures that both actions are actually implemented.
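The resulting tasks look roughly like this (the network list and
XML template are assumptions):

    - name: Define the network without altering an existing definition
      virt_net:
        command: define
        name: "{{ item.name }}"
        xml: "{{ lookup('template', 'kvm/network.xml.j2') }}"
      with_items: "{{ virt_networks }}"

    - name: Ensure the network is active
      virt_net:
        state: active
        name: "{{ item.name }}"
      with_items: "{{ virt_networks }}"

    - name: Ensure the network autostarts
      virt_net:
        autostart: yes
        name: "{{ item.name }}"
      with_items: "{{ virt_networks }}"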
Change-Id: I608f2607824fac649f4e018d89094d57047134b3
Filesystem detection currently seems to think that /dev/vmvg00/disk1
is in use by btrfs, so we force this operation to
ensure it is changed to xfs.
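A sketch of the forced operation:

    - name: Format the volume, overriding the detected btrfs signature
      filesystem:
        fstype: xfs
        dev: /dev/vmvg00/disk1
        force: yes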
Change-Id: I0bcc9723fb33b557315422c3259a7ba2b75ceff6
The image downloads may fail, even with aria2's built-in
retry mechanism. With this patch we ensure that Ansible
will delay and retry the download again. This improves the
chances of success.
With this we also remove the '--quiet' default parameter
so that we get console output from the task if it does
ultimately fail. This is useful for diagnostic purposes.
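The retry pattern looks roughly like this (the URL variable and
aria2 flags are assumptions):

    - name: Download the VM image
      command: "aria2c --max-tries=10 {{ image_url }}"
      register: image_download
      until: image_download.rc == 0
      retries: 5
      delay: 10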
Change-Id: Ieed41f06a22effb28463637184980a748791edfe
When the VMs are Ubuntu Trusty, this task causes total failure.
We should only attempt the daemon_reload if the system being
used supports it.
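A sketch of the guarded task, using the service manager fact:

    - name: Reload systemd to pick up unit changes
      command: systemctl daemon-reload
      when: ansible_service_mgr == 'systemd'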
Change-Id: I557856045a7735c8f351df6350f777caae526b10
Unfortunately guestfish may error out silently (exiting without
a non-zero return code), making hunting down the error a bit
obscure. To combat this we add a bunch of stdout output to the
script, and look for the final step's output to validate success.
To make this work, we need to copy the script over and execute it
with the command module, because the script module puts everything
into stderr.
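A sketch of the copy-and-execute pattern (the script name and
marker string are assumptions):

    - name: Copy the guestfish script to the host
      copy:
        src: prepare-image.sh
        dest: /opt/prepare-image.sh
        mode: "0755"

    - name: Execute the script and check for the final marker
      command: /opt/prepare-image.sh
      register: guestfish_script
      failed_when: "'SCRIPT COMPLETE' not in guestfish_script.stdout"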
Change-Id: I8e514ceb2462870721745c9445ec149864a45f4d
Currently the contents of zz-dash-packages are as follows:

    dash rm -rf /var/tmp/.guestfs

It should be:

    dash

This fixes that.
Also, the guestfs location that needs removing has a
UUID appended, so we add a wildcard to the removal.
Change-Id: If53f55c901f5abf19bdfff0c0f17c2f9ed69d915
Currently all the .example and .aio files are copied over,
resulting in a confusing mess. This changes it to copy over
only the files that are actually used.
Change-Id: Ic6a3beb4d0084507e3017ea1663fd79fda3d1c12
In an ideal state, if the galera containers are shut down
cleanly, they will leave behind a gvwstate.dat file on each
node which provides the cluster member details so that it
can automatically start up again without intervention.
However, when imaging the MNAIO systems we only interact
with the hosts, so the galera containers sometimes do not
shut down cleanly.
To cater for this, we inspect the disk images for the
primary component, then build the gvwstate.dat file for
the other galera containers. With those put back into the
image, when the VMs start, the cluster forms immediately.
References:
http://galeracluster.com/documentation-webpages/pcrecovery.html
http://galeracluster.com/documentation-webpages/restartingcluster.html
Change-Id: Icfe067607baefd661147f3c22ce846f06fff7c60
Without the dash package in the supermin appliance, guestfish
is unable to write into the images. Ubuntu has not updated their
package to a recent enough version, so we apply the workaround.
Change-Id: If48045c9b6e0cffe3d6a188e8a09a1e58ee885a8
An MNAIO built with newer releases, like Rocky, shows increased
memory usage exceeding the default 8G INFRA_VM_SERVER_RAM setting.
The MariaDB server innodb buffer cache is drastically reduced to
match the deployment size.
Change-Id: Ifef9ee209aedb882ae14b1d2a29852375de8e7e8
The swift and cinder hosts do not use containers for services,
so there is no need to do the current process of shrinking the
volumes. Instead, we ensure that the lxc & machines mounts are
removed, along with their respective logical volumes.
When setting up the swift logical volumes, we do not need to
create the mount point directories, because the mount task will
do that for us. As such, we remove that task.
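A sketch of the cleanup (mount points and volume names are
assumptions):

    - name: Remove the lxc and machines mounts
      mount:
        name: "{{ item }}"
        state: absent
      with_items:
        - /var/lib/lxc
        - /var/lib/machines

    - name: Remove the related logical volumes
      lvol:
        vg: vmvg00
        lv: "{{ item }}"
        state: absent
        force: yes
      with_items:
        - lxc00
        - machines00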
Change-Id: Ibbe6d0fede6b6965415e421161354e311708d113
To allow a downloaded set of file-backed images to be used on
another host, the new host's public ssh key needs to be injected
into the VM disks so that ansible is able to connect to it and
complete the rest of the preparation.
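A sketch using virt-customize from libguestfs (the disk path and
key location are assumptions):

    - name: Inject the new host's public key into the VM disks
      command: >-
        virt-customize
        -a /data/images/{{ item }}.img
        --ssh-inject root:file:/root/.ssh/id_rsa.pub
      with_items: "{{ vm_list }}"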
Change-Id: I6b9b5efb88283417c15f74f40cfb91943bb8774d
Rather than have to default it in tasks all over the
place, we default it in group_vars. The default is to
enable the feature if file-backed VMs are used.
However, if there are no base images available, the
set_fact task disables it. If a user wishes to force
it not to be used, then an extra-var override is still
usable.
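A sketch of the pattern (all variable names are illustrative):

    # group_vars/all: enabled by default when file-backed VMs are used
    vm_use_base_images: "{{ default_vm_disk_mode | default('lvm') == 'file' }}"

    # set_fact task: disable the feature when no base images are found
    - name: Disable base image use when none are available
      set_fact:
        vm_use_base_images: false
      when: (base_image_find.matched | default(0)) == 0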
Change-Id: I5c916244a02a44da831d2a0fefd8e8aafae829b2
With the previously added ability to save file-backed VMs,
a user is able to put them onto a web host for storage. We
now add a playbook to download the images using the given
manifest URL.
Change-Id: If1435c70d672cdbacd22df99318c59265362011e
When we save the VM disks, we now use compression to prepare
the base disk, which reduces the base disk file sizes
substantially. For the infra hosts this reduces the file
from ~23GB to ~8GB. Once this is done, we then also create a
copy-on-write image in the original disk's place so that the
VM can be booted up to verify functionality or continue work
without having to re-run the 'deploy-vms' playbook.
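A sketch of the two steps (paths and variable names are
assumptions):

    - name: Compress the disk into a base image
      command: >-
        qemu-img convert -c -O qcow2
        /data/images/{{ vm_name }}.img
        /data/base/{{ vm_name }}-base.img

    - name: Replace the original disk with a copy-on-write overlay
      command: >-
        qemu-img create -f qcow2
        -b /data/base/{{ vm_name }}-base.img
        /data/images/{{ vm_name }}.img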
Change-Id: If95b71d8625b4d5b2a036cec13952e4fd73cecd4
Using the discard option for all mount points ensures
that the deletes actually release the blocks on the disk.
This ensures that SSD performance is optimised and that
file-backed images are kept as small as possible.
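For example (the device and mount point are assumptions):

    - name: Mount the data volume with discard enabled
      mount:
        name: /data
        src: /dev/vmvg00/data00
        fstype: xfs
        opts: noatime,discard
        state: mounted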
Change-Id: I648cbaca56d75e355cf6c8af01e2e3ad20dfc398
When using file-backed storage, or SSD storage, any erasing
done in the VM does not actually clear up the space. By using
the virtio-scsi controller the VM is able to use TRIM to clear
any blocks which are deleted.
This also allows us to use fstrim to reduce the size of the
qemu files before we save them for later re-use.
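In the libvirt domain XML this looks roughly as follows (the disk
path is an assumption):

    <controller type='scsi' model='virtio-scsi'/>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' discard='unmap'/>
      <source file='/data/images/infra1.img'/>
      <target dev='sda' bus='scsi'/>
    </disk>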
Change-Id: Ia9001522ce054ee9f8a6dd38270da3e3fd039813
When switching from LVM to a file backing store, the existing pool
cannot be undefined until any existing VMs and LVs are removed.
This change ensures that, if this is the case, they will be cleaned
up so that the switch is effortless.
Change-Id: Ie1460b37593306044f0a63f445c3da1987362d34
Instead of putting the images in the root of the disk,
we use a subdirectory. This prevents silly mistakes
from happening.
Change-Id: I19d22b7e72de88736db410a771ec22664c641c94
When executing the save, a return code of 2 indicates that
a change was implemented - but I forgot to include that
condition in failed_when. As it stands now, the task will
be considered failed, which is a bit useless.
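The corrected conditions look like this (the save command itself
is a placeholder):

    - name: Save the VM disk image
      command: "{{ image_save_command }}"
      register: image_save
      changed_when: image_save.rc == 2
      failed_when: image_save.rc not in [0, 2]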
Change-Id: Ie8b36335048d2dcf6d0f9e66f8440430f4a68398
When not using a file-backed backing store, the variable is not
defined, which results in an error to that effect.
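A guard with the default filter avoids the error (the variable
name is illustrative):

    when: (vm_use_base_images | default(false)) | bool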
Change-Id: I3142a5960bc4521f79bbdfe32b0e7a0f71742b7d
CI testing is experiencing intermittent failures when deploying base glance
images as part of the openstack-image-setup.yml playbook, which is kicked off
as part of the openstack-service-setup.yml playbook in openstack-ansible-ops.
Since the deployment of these resources relies on external URI endpoints, this
type of failure is something that can also occur during a customer deployment.
Change-Id: Ieea0f11482646ea152920a1ff1009a2b03705f1c
To allow us to use the json_query filter, we ensure that
the jmespath distro package is installed onto the host.
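For example (the distro package name may vary per release):

    - name: Install jmespath for the json_query filter
      package:
        name: python-jmespath
        state: present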
Change-Id: Icb9053fd3a7486030f4336130fe6ad503852b07a
In order to more successfully reproduce an environment using
saved images, we include the VM XML definition files and the
output from 'pip freeze'. We capture the list of files, their
checksums and the SHA for the git repo into a json manifest
file.
Change-Id: Ia0bf74d509b4acb10b0dd832a4cfe1bb2afb2503
It's better to shut the VMs down cleanly instead of just turning
them off, so we change 'destroy' to 'shutdown' and use the virt
module for this action instead of the command module.
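The change amounts to using the virt module's shutdown command
(the VM list variable is an assumption):

    - name: Shut down the VMs cleanly
      virt:
        name: "{{ item }}"
        command: shutdown
      with_items: "{{ vm_list }}"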
Change-Id: I896b7794328b91dc59726bf1d5366eeb7112ca21
In order to make use of a data disk, we enable the 'file'
implementation of default_vm_disk_mode to use a data disk
much like the 'lvm' implementation.
To simplify changing from the default_vm_disk_mode of lvm
to file and back again, the setup-host playbook will remove
any previous implementation and replace it. This is useful
when doing testing for these different modes because it
does not require cleaning up by hand.
This patch also fixes the implementation of the virt
storage pool. Currently the tasks only execute if
'virt_data_volume.pools is not defined', but it is always
defined so the tasks never execute. We now ensure that
for both backing stores the 'default' storage pool is
defined, started and set to auto start (as three tasks
because the virt_pool module sucks really bad and can only
do one thing at a time).
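The three pool tasks look roughly like this (the XML template
name is an assumption):

    - name: Define the default storage pool
      virt_pool:
        command: define
        name: default
        xml: "{{ lookup('template', 'kvm/pool.xml.j2') }}"

    - name: Ensure the default storage pool is active
      virt_pool:
        state: active
        name: default

    - name: Ensure the default storage pool autostarts
      virt_pool:
        autostart: yes
        name: default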
The pool implementation for the 'file' backed VMs uses
the largest data disk it can find and creates the /data
mount for it. To cater for a different configuration, we
ensure that all references to the disk files use the path
that is configured in the pool, rather than assuming the
path.
Change-Id: If7e7e37df4d7c0ebe9d003e5b5b97811d41eff22
Rather than installing pip packages onto the host system,
we can just execute the script and it will use the ansible
runtime venv. This works for Ocata onwards. Any earlier
releases can either pre-install the right packages, or
implement the change to the script shebang in a fork.
Change-Id: I88eb4c1bc9fe3a38803c5f0f5d1725dbed74dac7
There is already a default in group_vars/all, so we do not need
to provide a default in every conditional.
Also, we move several LVM data volume tasks into a block given
they have a common set of conditions.
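A sketch of the grouped tasks (names are illustrative):

    - block:
        - name: Create the data volume group
          lvg:
            vg: vmvg00
            pvs: /dev/sdb1
      when: default_vm_disk_mode | default('lvm') == 'lvm'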
Change-Id: Iff0fafefda2bc5dc1596b7198b779f5da763086c
Given there is almost no difference between the releases, we can
use the same vars file and simple conditionals. The package
'software-properties-common' is available for Trusty & Xenial so
we just use that and remove the unnecessary extra package.
We also now add the correct UCA repositories for Trusty and Bionic
so that we get the latest version of libvirt.
Finally, we simplify the conditional for the iptables binary to
make it far easier to read.
Change-Id: Id4b3711a4d7a0ccc13db956d41017ac01c97825f
* Adds support for provisioning a Multi Node AIO using
CentOS 7.
* Cleans up older MNAIO/Compute/Infra image configs
* Increases LB/Logging/Swift VM RAM to allow the CentOS rootfs
to load into RAM (1GB to 2GB)
* Uses systemd-networkd for configuring networks/bridges
* Adds keymap to kvm configuration to alleviate keyboard issues in
virt-manager
Change-Id: I54d903e7c1c70882e8b20a9cef4eafb42be46770