We have identified an issue with stevedore < 3.3.0 where the
cloud-launcher, running under ansible, makes stevedore hash a /tmp
path into an entry-point cache file it creates, causing a never-ending
expansion of cache files.
This appears to be fixed by [1], which is available in 3.3.0. Ensure
we install this on bridge. For good measure, add a ".disable" file as
we don't really need caches here.
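A minimal sketch of the ".disable" step, assuming stevedore's default
cache directory under the root user's home (the exact path is an
assumption):

  - name: Disable stevedore entry-point caching
    copy:
      content: ''
      dest: /root/.cache/python-entrypoints/.disable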
There are currently 491,089 leaked files, so I didn't think it wise to
delete these in an ansible loop as it would probably time out the job.
We can do this manually once we stop creating them :)
[1] d7cfadbb7d
Change-Id: If5773613f953f64941a1d8cc779e893e0b2dd516
As described inline, installing ansible from source now installs the
"ansible-core" package, instead of "ansible-base". Since they can't
live together nicely, we have to do a manual override for the devel
job.
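A minimal sketch of what the manual override amounts to (the task
layout is illustrative; package names as described above):

  - name: Remove ansible-base so it cannot conflict with ansible-core
    pip:
      name: ansible-base
      state: absent

  - name: Install Ansible from the devel checkout
    pip:
      name: git+https://github.com/ansible/ansible@devel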
Change-Id: I1299ea330e6de048b661fc087f016491758631c7
Modules are collected on bridge and then synchronized to remote hosts
where puppet is run. This is done to ensure an atomic run of puppet
across affected hosts.
These modules are described in modules.env and cloned by
install_modules.sh. Currently this is done in install-ansible, but
after some recent refactoring
(I3b1cea5a25974f56ea9202e252af7b8420f4adc9) the best home for it
now appears to be in puppet-setup-ansible, just before the script is
run.
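In practice this is just moving the invocation; roughly (the path is
an assumption):

  - name: Clone puppet modules described in modules.env
    command: /opt/system-config/install_modules.sh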
Change-Id: I4b1d709d7037e2851d73be4bc7a202f52858ad4f
Allow speculative testing of ansible collections in the -devel test
job by linking in the git checkouts from the dependent change.
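A sketch of the linking, using Zuul's checked-out source for the
dependent change (the collection named here is only an example):

  - name: Link speculative collection checkout into place
    file:
      src: "{{ ansible_user_dir }}/{{ zuul.projects['github.com/ansible-collections/community.general'].src_dir }}"
      dest: "{{ ansible_user_dir }}/.ansible/collections/ansible_collections/community/general"
      state: link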
Depends-On: https://review.opendev.org/747596
Change-Id: I014701f41fb6870360004aa64990e16e278381ed
The Ansible devel branch has pulled in some major changes that have
broken our -devel testing job.
Firstly, installing from a source checkout now installs the package
"ansible-base"; this means that when we install ARA, which has a
dependency on just "ansible", it pulls in the old 2.9 release (which
is what the -devel test is currently testing with -- the reason for
this change).
We could remove ARA, but we quite like its reports for the nested
Ansible runs. So make a dummy "ansible" 2.9 package and install that
to satisfy the dependency.
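A sketch of the dummy-package approach (paths and the exact version
string are illustrative):

  - name: Create a directory for the stand-in "ansible" package
    file:
      path: /tmp/dummy-ansible
      state: directory

  - name: Write a minimal setup.py claiming to be ansible 2.9
    copy:
      dest: /tmp/dummy-ansible/setup.py
      content: |
        from setuptools import setup
        setup(name='ansible', version='2.9.0')

  - name: Install the stand-in package to satisfy ARA's dependency
    pip:
      name: /tmp/dummy-ansible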
Secondly, Ansible devel has split out a lot of things into "community
modules". To keep testing the -devel branch into the future, we need
to pull in the community modules for testing as well [1].
After some very useful discussion with jborean93 in #ansible I believe
the best way to do this is to clone the community projects into place
in the ansible configuration directory. Longer term, we should make
Zuul check these out and use that, then we can speculatively test
changes too -- but for now just KISS.
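For example, something along these lines (repository and destination
are assumptions; Ansible picks up ansible_collections trees under its
configured collection paths):

  - name: Clone community.general into the collections tree
    git:
      repo: https://github.com/ansible-collections/community.general
      dest: /etc/ansible/collections/ansible_collections/community/general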
[1] For reference, upstream bundles all this into the "Ansible
Community Distribution" or ACD, which is what you will get when you
download "ansible" from PyPi or similar. But this job should be
pulling the bleeding edge of ansible and the community modules we use
-- that's what it's for.
Depends-On: https://review.opendev.org/747337
Change-Id: I781e275acb6af85f816ebcaf57a9825b50ca1196
We are currently cloning all of the puppet modules in install-ansible,
but we only need them when we run run-puppet. Move the cloning there
so that we can stop wasting the time in CI jobs that don't need them.
In prod, this should not have much impact.
Change-Id: I641ffc09e9e0801e0bc2469ceec97820ba354160
Make inventory/service for service-specific things, including the
groups.yaml group definitions, and inventory/base for hostvars
related to the base system, including the list of hosts.
Move the existing host_vars into inventory/service, since most of
them are likely service-specific. Move group_vars/all.yaml into
base/group_vars as almost all of it is related to base things,
with the exception of the gerrit public key.
A followup patch will move host-specific values into equivalent
files in inventory/base.
This should let us override hostvars in gate jobs. It should also
allow us to do better file matchers - and to be able to organize
our playbooks more if we want to.
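The resulting layout is roughly (file names illustrative):

  inventory/
    base/
      hosts.yaml
      group_vars/
        all.yaml
    service/
      groups.yaml
      host_vars/
      group_vars/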
Depends-On: https://review.opendev.org/731583
Change-Id: Iddf57b5be47c2e9de16b83a1bc83bee25db995cf
We have two standalone roles, puppet and cloud-launcher, but we
currently install them with galaxy so depends-on patches don't
work. We also install them every time we run anything, even if
we don't need them for the playbook in question.
Add two roles, one to install a set of ansible roles needed by
the host in question, and the other to encapsulate the sequence
of running puppet, which now includes installing the puppet
role, installing puppet, disabling the puppet agent and then
running puppet.
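As a sketch, a playbook using the new installation role might look
like this (role and variable names are illustrative, not the final
interface):

  - hosts: localhost
    roles:
      - role: install-ansible-roles
        vars:
          ansible_roles_to_install:
            - puppet
            - cloud-launcher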
As a followup, we'll do the same thing with the puppet modules,
so that we aren't cloning and rsyncing ALL of the puppet modules
all the time no matter what.
Change-Id: I69a2e99e869ee39a3da573af421b18ad93056d5b
So that we can start running things from the zuul source rather
than update-system-config and /opt/system-config, we need to
install a few things onto the host in install-ansible so that the
ansible env is standalone.
This introduces a split execution path. The ansible config is
now all installed globally onto the machine by install-ansible
and does not reference a git checkout.
For running ad-hoc commands, an ansible.cfg is introduced inside
the root of the system-config dir. If ansible-playbook is executed
with PWD==/opt/system-config it will find that ansible.cfg, which
takes precedence, and so any content from system-config takes
precedence too.
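As a sketch, the in-tree config might carry something like the
following (the option values are placeholders, not the actual
contents):

  [defaults]
  inventory = /opt/system-config/inventory/openstack.yaml
  roles_path = /opt/system-config/roles

ansible-playbook consults an ansible.cfg in the current working
directory ahead of the global configuration, which is what gives the
in-tree file precedence.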
As a followup we'll make /opt/system-config/ansible.cfg written
out by install-ansible from the same template, and we'll update
the split to make ansible only work when executed from one of
the two configured locations, so that it's clear where we're
operating from.
Change-Id: I097694244e95751d96e67304aaae53ad19d8b873
This change enables the installation of the ARA callback plugin in
the install-ansible role. It does not take care of any web reporting
capabilities.
ARA will not be installed and set up by default.
It can be installed and configured by setting
"install_ansible_enable_ara" to "true".
Co-Authored-By: David Moreau-Simard <dmsimard@redhat.com>
Co-Authored-By: Ian Wienand <iwienand@redhat.com>
Change-Id: Iea84ec8e23ca2e3f021aafae4e89c764f2e05bd2
Rename install_openstacksdk to install_ansible_openstacksdk to make it
clear this is part of the install-ansible role, and it's the
openstacksdk version used with ansible (might be important if we
switch to virtualenvs). This also clears up inconsistency when we add
ARA install options too.
Change-Id: Ie8cb3d5651322b3f6d2de9d6d80964b0d2822dce
Similar to the pinning introduced in
Ic465efb637c0a1eb475f04b0b0e356d8797ecdeb, use the "latest"
openstacksdk package and allow for passing of pinned versions if
required.
Update the devel test to also use the master branch of openstacksdk
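With that, a pin might be expressed as follows (assuming the variable
carries a pip requirement specifier; the version shown is only an
example):

  install_ansible_openstacksdk: 'openstacksdk==0.50.0'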
Change-Id: I4b437ca9024c87903bdd3569c8309cde725ce28e
This adds arguments to "install-ansible" to allow us to specify the
package name and version.
This is used to pin bridge.o.o to 2.7.0 (see
I9cf4baf1b15893f0c677567f5afede0d0234f0b2).
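For example, the pin amounts to something like (treat the exact
argument spelling as illustrative):

  install_ansible_name: ansible
  install_ansible_version: 2.7.0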
A new job is added to test against the ansible-devel branch. Added as
voting for now, until it proves to be a concern.
Change-Id: Ic465efb637c0a1eb475f04b0b0e356d8797ecdeb
It's designed to always be used from the latest version.
This trips an ansible-lint rule (ANSIBLE0010) which we can ignore,
as we often have pip packages that we want to automatically install
the latest release of.
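For illustration, this is the shape of task that trips the rule; we
want exactly this behaviour, so the rule is skipped rather than the
task changed:

  - name: Install openstacksdk
    pip:
      name: openstacksdk
      state: latest  # flagged by ANSIBLE0010; "latest" is intentional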
Change-Id: Ieac93ab3a555f2423d4fbcf101d6d9681ae0e497
The constructed inventory plugin allows expressing additional groups,
but it's too heavyweight for our needs. Additionally, it is a full
inventory plugin that will add hosts to the inventory if they don't
exist.
What we want instead is something that will associate existing hosts
(that would have come from another source) with groups.
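Conceptually the input we want is a static mapping of
already-inventoried hosts onto groups; a hypothetical groups file
might look like:

  groups:
    puppet:
      - review01.openstack.org
      - status.openstack.org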
This also switches to using emergency.yaml instead of emergency, which
uses the same format.
We add an extra groups file for gate testing to ensure the CI nodes
get puppet installed.
Change-Id: Iea8b2eb2e9c723aca06f75d3d3307893e320cced
Add a logrotate role that allows basic configuration of a logrotate
configuration for a specific log-file.
Use this role in the ansible-cron and install-ansible roles to ensure
the log output they are generating is rotated.
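Intended usage looks roughly like this (the variable name is the
role's single knob; treat it as illustrative):

  - include_role:
      name: logrotate
    vars:
      logrotate_file_name: /var/log/ansible/ansible-cron.log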
This role is not intended to manage the logrotate package (mostly to
avoid the overhead of frequently checking package state when this is
expected to be called for multiple configuration files on a server).
We add it as a base package to our servers.
Tests are added using testinfra.
Change-Id: I90f59c3e42c1135d6be120de38e942ece608b761
According to the Ubuntu 12.04 release notes, up until Ubuntu 11.10
admin access was granted via the "admin" unix group, but was changed
to the "sudo" group to be more consistent with Debian et al.
Remove the now-unnecessary group.
Modify the install-ansible role to set some directory ownership to
root:root; there didn't seem to be any reason to use admin here.
This means the "users" role is no longer required in the bridge.yaml,
as it is run from the base playbook anyway.
Change-Id: I6a7fdd460fb472f0d3468eb080aebbb010931e11
file: state=touch returns changed every time. Instead, put the log files
into a /var/log/ansible directory.
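Roughly, the key change is:

  - name: Create a directory for ansible log files
    file:
      path: /var/log/ansible
      state: directory

with per-run files created underneath, rather than touching one fixed
file every run.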
Change-Id: I086d803f0e532b9da41cb01d4e7d2ed66245dfc1
Rather than copying these out of system-config inside of
install-ansible, just point the ansible.cfg to them in the system-config
location. This way, as changes with group updates come in, we don't
have to first apply them to the system.
Change-Id: I1cefd7848b7f3f1adc8fbfa080eb9831124a297b
There is a shared caching infrastructure in ansible now for inventory
and fact plugins. It needs to be configured so that our inventory access
isn't slow as dirt.
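For reference, the relevant knobs are along these lines (the jsonfile
plugin and paths are examples; the option names are standard
ansible.cfg settings):

  [inventory]
  cache = True
  cache_plugin = jsonfile
  cache_connection = /var/cache/ansible/inventory

  [defaults]
  fact_caching = jsonfile
  fact_caching_connection = /var/cache/ansible/facts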
Unfortunately the copy of openstack.py in 2.6 is busted WRT caching
because the internal API changed ... and we didn't have any test jobs
set up for it. This also includes a fixed copy of the plugin and
installs it into a plugin dir.
Change-Id: Ie92e5d7eac4b7e4060a4e07cb29c5a6f2a16ae18