The files and templates we carry are almost always in a state of
maintenance. The upstream services are maintaining these files and
there's really no reason we need to carry duplicate copies of them. This
change removes all of the files we expect to get from the upstream
service. While the focus of this change is to remove configuration file
maintenance burdens, it also allows the role to execute faster.
* Source installs have the configuration files within the venv at
"<<VENV_PATH>>/etc/<<SERVICE_NAME>>". The role will now link the
default configuration path to this directory. When the service is
upgraded, the link will move to the new venv path.
* Distro installs package all of the required configuration files.
To maintain our current ability to override configuration, the
role will fetch files from the disk whenever an override is provided and
then push the fetched file back to the target using `config_template`
(a sketch of both mechanisms follows after this list).
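For illustration only, the two mechanisms could look roughly like the
following tasks. The variable names, paths and conditions shown here are
placeholders, not the role's actual ones:

  # Source installs: point the default config path at the venv's bundled configs.
  - name: Link the default configuration path to the venv
    file:
      src: "{{ venv_install_path }}/etc/{{ service_name }}"
      dest: "/etc/{{ service_name }}"
      state: link
    when: install_method == 'source'

  # Distro installs: fetch the packaged file, then push it back with the
  # deployer's overrides layered on via config_template.
  - name: Fetch the packaged configuration file
    fetch:
      src: "/etc/{{ service_name }}/{{ service_name }}.conf"
      dest: "/tmp/{{ inventory_hostname }}/"
      flat: yes
    when: install_method == 'distro'

  - name: Push the fetched file back with overrides applied
    config_template:
      src: "/tmp/{{ inventory_hostname }}/{{ service_name }}.conf"
      dest: "/etc/{{ service_name }}/{{ service_name }}.conf"
      owner: "root"
      group: "{{ service_group }}"
      mode: "0640"
      config_overrides: "{{ service_config_overrides }}"
      config_type: "ini"
    when: install_method == 'distro'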
Change-Id: I3e7283bf778a9d686f3ae500b289c1fb43b42b92
Signed-off-by: cloudnull <kevin@cloudnull.com>
In order to radically simplify how we prepare the service
venvs, we use a common role to do the wheel builds and the
venv preparation. This makes the process far simpler to
understand, because the role does its own building and
installing. It also reduces the code maintenance burden,
because instead of duplicating the build process in the
repo_build role and the service role, it is all done in a
single place.
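As a rough sketch, a service role now consumes the shared builder along
the lines below; the exact variable names passed to the common role may
differ from what is shown here:

  - name: Build and install the service venv via the common role
    include_role:
      name: python_venv_build
    vars:
      venv_install_destination_path: "/openstack/venvs/{{ service_name }}-{{ venv_tag }}"
      venv_pip_packages: "{{ service_pip_packages }}"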
We also change the role venv tag var to use the integrated
build's common venv tag so that we can remove the role's
venv tag in group_vars in the integrated build. This reduces
memory consumption and also reduces the duplication.
This is by no means the final stop in the simplification
process, but it is a step forward. There will be follow-up work
which:
1. Replaces 'developer mode' with an equivalent mechanism
that uses the common role and is simpler to understand.
We will also simplify the provisioning of pip install
arguments when doing this.
2. Simplifies the installation of optional pip packages.
Right now it's more complicated than it needs to be due
to us needing to keep the py_pkgs plugin working in the
integrated build.
3. Deduplicates the distro package installs. Right now the
role installs the distro packages twice - just before
building the venv, and during the python_venv_build role
execution.
Depends-On: https://review.openstack.org/598957
Change-Id: I18cc964196dbbf5019bc116c41861cb39e466e14
Implements: blueprint python-build-install-simplification
Signed-off-by: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
The systemd unit files are being converted to use common roles to reduce
code sprawl throughout the playbooks. This change allows us to use a
common systemd_mount role as an include, which will give us a consistent
experience when deploying services and setting up their resources on
operating systems that use systemd.
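A rough idea of what consuming the common role looks like; the mount
entry and option keys shown are only an example:

  - name: Set up the service mount via the common systemd_mount role
    include_role:
      name: systemd_mount
    vars:
      systemd_mounts:
        - what: "nfs.example.com:/srv/images"
          where: "/var/lib/glance/images"
          type: "nfs"
          options: "_netdev,auto"
          state: "started"
          enabled: true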
Closes-Bug: #1774037
Change-Id: I11d083788cd388dab0695878193ab18af1b5038b
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
With more recent versions of Ansible, we should now use
"is" instead of the "|" sign for the tests.
This patch updates the tests accordingly.
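For example, a conditional moves from the filter-style test to the new
test syntax; the task shown is purely illustrative:

  # Deprecated filter-style test:
  - name: Restart the service when its config changed
    service:
      name: glance-api
      state: restarted
    when: config_result | changed

  # New test syntax:
  - name: Restart the service when its config changed
    service:
      name: glance-api
      state: restarted
    when: config_result is changed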
Change-Id: I6749670146cc64cb39b67efb26a9226208828ae7
This is causing some trouble with the integrated build. For now,
while we work out the role's build/storage delegation, let's revert
this.
This reverts commit 975675b65949042ea7dea108294f0604fb37fd10, except
for the a-r-r entry, which is left in place as it will be needed later.
Change-Id: I111baaf1e3d70c036508cccc31887e0f328a67ce
Instead of copying a common set of code between all the roles,
switch to using a common role which checks whether a deploy host
already has the appropriate venv package. If it does not, build
it on the fly and pull it to the deploy host.
Implementing this does away with the requirement to do builds
on the repo container. Once this has been implemented into all
roles then the repo_build role will be retired.
Depends-On: https://review.openstack.org/556840
Change-Id: I57e87406bee5c7d10aa824f18d3142f8f3ac6ab4
Systemd has the ability to manage mounts and ensure functionality
/ resource management. Using a systemd mount has the benefit of not
requiring writes to the legacy fstab file, which can impact OS
functionality, especially when deploying on bare metal. This change
moves the glance NFS mount to a systemd unit file allowing systemd
to manage it independently with no potentially breaking impact to
the underlying operating system.
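As a minimal sketch (unit and path names here are illustrative), the
mount is now expressed as a systemd mount unit that the role places and
starts, rather than an fstab line:

  - name: Place the systemd mount unit for the glance images share
    template:
      src: "var-lib-glance-images.mount.j2"
      dest: "/etc/systemd/system/var-lib-glance-images.mount"
      mode: "0644"

  - name: Start and enable the mount unit
    systemd:
      name: "var-lib-glance-images.mount"
      state: started
      enabled: yes
      daemon_reload: yes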
Changes:
- This PR corrects a long-standing issue when using Glance+NFS where
the initial deployment would work, but running the playbooks again
would fail due to the glance images location being an NFS mount
point with a potentially different UID/GID. To correct this, we stat
the directory and, if it does NOT exist, it is created.
- Following the nova pattern, options have been provided to set the UID
and GID of the glance user.
- To ensure our NFS backend solution works with the installation of
glance, a test has been added to deploy glance using an NFS backend.
- An upgrade task has been added to this commit to clean up legacy
mounts. This task should be removed in R.
Change-Id: I716c9fe35391629532e67e212d45ea27a5422d1b
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
When using the glance v2 API the glance-registry service is
optional, and the intention is to remove the glance-registry
service in the S cycle.
The glance v1 API is scheduled to be removed in Queens.
This patch therefore disables the v1 API by default to give
us as much time as possible to identify the impact of that
and to get the issues resolved before it is removed from
the code-base.
The patch also cleans up the glance-registry init files
to handle the transition in an existing environment.
Tests are added to validate that enabling the v1 API still
works, and enabling the v2 registry still works.
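Deployers who still depend on the old behaviour can opt back in through
user_variables.yml; the toggle names below follow the role's naming
convention but should be treated as an assumption:

  # /etc/openstack_deploy/user_variables.yml
  glance_enable_v1_api: True        # re-enable the deprecated v1 API
  glance_enable_v2_registry: True   # keep the optional registry service running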
Change-Id: I4c27aa0ca5b649e4fa76cfd0f326d80f50074db1
From Ansible 2.2 onwards, listen can be used for
handlers instead of chaining notifiers. The
handlers are then executed in the sequence
present in the handler file.
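A minimal illustration of the pattern; handler and service names are
examples only:

  handlers:
    # Both handlers run, in file order, when a task notifies the shared topic.
    - name: Restart glance-api
      service:
        name: glance-api
        state: restarted
      listen: Restart glance services

    - name: Restart glance-registry
      service:
        name: glance-registry
        state: restarted
      listen: Restart glance services

  # A task then simply uses:
  #   notify: Restart glance services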
Change-Id: Ia185ab830005f311f37d4eb7dfab1ae116419e3b
Currently when multiple services share a host, the
restart order is random. This is due to an unordered
dict being used to facilitate the mapping of services
to their groups, names and other options.
This patch implements changes to the role to ensure
that services on the same host are restarted in the
correct order when the software/config changes.
Change-Id: I52fc66f861ce98cc8299c84edcfd5f18d74306b3
Due to the debug message plugin, the handler restart
messages show at the end of the playbook execution,
which is a little confusing. Using debug also
requires setting changed_when to true, which is an
extra bit of code that we do not have to carry.
Instead we use the command module, which is simple,
works, and is less wordy.
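One way this can look, purely as a sketch; the actual handler in the
role may differ, and the variable name is a placeholder:

  # A simple command keeps the handler chain flowing without debug's
  # deferred output or an explicit changed_when.
  - name: Announce the pending service restart
    command: echo "Restarting {{ item }}"
    with_items: "{{ services_to_restart }}"
    listen: Restart glance services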
Change-Id: I7096ca81dd6e1126926c95f3c2b7437d0c9d452f
When the policy file is copied from the templated
file to the active file, it loses its group/mode
settings. This patch ensures that they are properly
replicated during the copy.
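Conceptually the fix amounts to setting the ownership and mode
explicitly on the copy task; the paths below are placeholders:

  - name: Copy the staged policy file into place
    copy:
      src: "/etc/glance/policy.json-staged"
      dest: "/etc/glance/policy.json"
      remote_src: yes
      owner: "root"
      group: "glance"
      mode: "0640"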
Change-Id: I39f3c80244f9565d290f420eadeb28e8b77d2d33
The policy.json file is currently read continually by the
services and is not only read on service start. We therefore
cannot template directly to the file read by the service
(if the service is already running) because the new policies
may not be valid until the service restarts. This is
particularly important during a major upgrade. We therefore
only put the policy file in place after the service restart.
This patch also tidies up the handlers and some of the install
tasks to simplify them and reduce the tasks/code a little.
Change-Id: I81de53d8ddc4b462b878b412e53a6de219b71f86
Change the 'glance_service_names' from a list to a dictionary mapping
of services to the groups that install those services. This brings the
method into line with that used in the os_neutron role in order to
implement a more standardised method.
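The mapping takes roughly the following shape; the keys shown here are
illustrative rather than the role's exact schema:

  glance_services:
    glance-api:
      group: glance_api
      service_name: glance-api
    glance-registry:
      group: glance_registry
      service_name: glance-registry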
The init tasks have been updated to run once and loop through this
mapping rather than being included multiple times and re-run against
each host. This may potentially reduce role run times.
Currently the reload of upstart/systemd scripts may not happen if
only one script changes as the task uses a loop with only one result
register. This patch implements handlers to reload upstart/systemd
scripts to ensure that this happens when any one of the scripts
change.
The handler to reload the services now only tries to restart the
service if the host is in the group for the service according to the
service group mapping. This allows us to ensure that handler
failures are no longer ignored and that no execution time is wasted
trying to restart services which do not exist on the host.
Finally:
- Common variables shared by each service's template files have
been updated to use the service namespaced variables.
- Unused handlers have been removed.
- Unused variables have been removed.
Change-Id: Ia74bbcac35c27928f7e96056b9449932253b75de
glance_nfs_client adds 2 fstab entries, with the handler entry being
incorrectly ordered (src and name are the wrong way around).
This patch removes the redundant handler and moves the existing fstab
task to use the "mount" module which will add the entry to fstab and
ensure the filesystems are mounted.
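A sketch of the mount-module task, assuming each glance_nfs_client
entry carries server, remote_path, local_path, type and options keys:

  - name: Ensure the NFS shares are in fstab and mounted
    mount:
      name: "{{ item.local_path }}"
      src: "{{ item.server }}:{{ item.remote_path }}"
      fstype: "{{ item.type }}"
      opts: "{{ item.options }}"
      state: mounted
    with_items: "{{ glance_nfs_client }}"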
Additionally this fixes a documentation bug where an incorrect variable
is referenced (glance_nfs_mounts).
Change-Id: I6e0f964d4279800d31119f380a239e2c4ae61cb5
Fixes-Bug: #1477081
This change implements the blueprint to convert all roles and plays into
a more generic setup, following upstream ansible best practices.
Items Changed:
* All tasks have tags.
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in favor of a simpler
approach. This change duplicates code within the roles but
ensures that the roles only ever run within their own scope.
* All roles have been built using an ansible galaxy syntax.
* The `*requirement.txt` files have been reformatted to follow upstream
OpenStack practices.
* Dynamically generated inventory is now more organized; this should assist
anyone who may want or need to dive into the JSON blob that is created.
In the inventory, a properties field is used for items that customize containers
within the inventory.
* The environment map has been modified to support additional host groups to
enable the separation of infrastructure pieces. While the old infra_hosts group
will still work, this change allows for groups to be divided up into separate
chunks; e.g. deployment of a swift-only stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has all password/token
variables extracted into the separate file
etc/openstack_deploy/user_secrets.yml in order to allow separate
security settings on that file.
Items Excised:
* All of the roles have had the LXC logic removed from within them which
should allow roles to be consumed outside of the `os-ansible-deployment`
reference architecture.
Note:
* The directory rpc_deployment still exists and is presently pointed at plays
containing a deprecation warning instructing the user to move to the standard
playbooks directory.
* While all of the Rackspace-specific components and variables have been removed
and/or refactored, the repository still relies on an upstream mirror of
OpenStack-built python files and container images. This upstream mirror is hosted
by Rackspace at "http://rpc-repo.rackspace.com", though this is
not locked to and/or tied to Rackspace-specific installations. This repository
contains all of the needed code to create and/or clone your own mirror.
DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e