The hound project has undergone a small rebirth and moved to
https://github.com/hound-search/hound
which has broken our deployment. We've talked about leaving
codesearch up to gitea, but it's not quite there yet, so there seems
to be little point in working on the puppet deployment now.
This builds a container that runs houndd. It's an opendev-specific
container; the config is pulled from project-config directly.
There are some custom scripts that drive things. Some points for
reviewers:
- update-hound-config.sh uses "create-hound-config" (which is in
jeepyb for historical reasons) to generate the config file. It
grabs the latest projects.yaml from project-config and exits with a
return code that indicates whether anything changed.
- when the container starts, it runs update-hound-config.sh to
populate the initial config. There is a testing-environment flag
and a small config so functional testing doesn't have to clone all
of opendev.
- it runs under supervisord so we can restart the daemon when
projects are updated. Unlike earlier versions, which didn't start
listening until indexing was done, this version serves a "Hound is
not ready yet" message while it is still indexing; so we can drop
all the magic we were doing to probe whether hound was listening
via netstat and to make Apache redirect to a status page.
- resync-hound.sh is run daily from an external cron job and does
this update-and-restart check (the flow is sketched after this
list). Since it only restarts hound when the config actually
changes, restarts should be relatively rare anyway.
- There is a PR to monitor the config file
(https://github.com/hound-search/hound/pull/357) which would make
the restart unnecessary. This would be good to have in the near
term, and would let us remove the cron job.
- playbooks/roles/codesearch is unexciting and deploys the container,
certificates and an Apache proxy back to localhost:6080 where hound
is listening.
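
For illustration, the daily resync flow amounts to something like
the following sketch; the exit-code convention (0 meaning "the
config changed") and the supervisord program name "hound" are
assumptions for the example, not details taken from the real
scripts:

  #!/bin/bash
  # Regenerate hound's config from the latest projects.yaml; the
  # script's exit status tells us whether anything changed.
  if /usr/local/bin/update-hound-config.sh; then
      # Config changed: restart houndd under supervisord so it
      # picks up the new config and reindexes.
      supervisorctl restart hound
  fi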
I've combined removal of the old puppet bits here as the "-codesearch"
namespace was already being used.
Change-Id: I8c773b5ea6b87e8f7dfd8db2556626f7b2500473
It turns out you can't use "run_once" with the "free" strategy in
Ansible. It actually warns you about this, if you're looking in the
right place.
The existing run-puppet role calls two things with "run_once:", both
delegated to localhost -- cloning the ansible-role-puppet repo (so we
can include_role: puppet) and installing the puppet modules (via the
install-ansible-roles role), which are copied from bridge to the
remote side and run by ansible-role-puppet.
With remote_puppet_else.yaml we are running all the puppet hosts at
once with the "free" strategy. This means that these two tasks, both
delegated to localhost (bridge), actually run for every host.
install-ansible-roles does a git clone, so we often see one of the
clones bail out with a git locking error because another host's
clone is running simultaneously.
I8585a1af2dcc294c0e61fc45d9febb044e42151d tried to stop this with
"run_once:", but as noted, because it runs under the "free" strategy
this is silently ignored.
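
For illustration, a play of roughly this shape triggers the problem
(the host group, repo URL and destination path are made up for the
example):

  - hosts: puppet
    strategy: free
    tasks:
      # Under the free strategy each host walks its own task list
      # independently, so this task fires once per host rather than
      # once overall; Ansible only warns that run_once is ignored.
      - name: Clone ansible-role-puppet onto bridge
        git:
          repo: https://opendev.org/opendev/ansible-role-puppet
          dest: /home/zuul/ansible-role-puppet
        run_once: true
        delegate_to: localhost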
To get around this, split the two copying steps out into a new role,
"puppet-setup". To keep the role names in one namespace, the
"run-puppet" role is renamed to "puppet-run". Before each call of
(now) "puppet-run", make sure we run "puppet-setup" just on
localhost. Remove the run_once and delegation on
"install-ansible-roles", since it is now called from the playbook in
a localhost context.
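
Roughly, the resulting playbook structure looks like the following
sketch (play contents and the host group are assumed, not the
literal remote_puppet_else.yaml):

  - hosts: localhost
    # A single-host play, so the clone/copy steps run exactly once
    # regardless of the strategy used by later plays.
    roles:
      - puppet-setup

  - hosts: puppet
    strategy: free
    roles:
      - puppet-run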
Change-Id: I3b1cea5a25974f56ea9202e252af7b8420f4adc9
We were getting this from old test nodes and relying on it, but the
new nodes show we're not actually installing it. Install it.
Change-Id: I3b98041f46f3d3be9f2bd0da5b2f875fa0cef421
The iptables setup is the only part of base that's important to run
when we run a service. Run it in the service playbooks and get rid
of the dependency on infra-prod-base.
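
As a sketch of the resulting shape (the group and role names here
are assumed for the example):

  - hosts: mirror
    roles:
      # Run directly in the service playbook instead of relying on
      # the infra-prod-base job having run first.
      - iptables
      - mirror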
Continue running it in base so that new nodes are brought up
with iptables in place.
Bump the timeout for the mirror job, because the iptables addition
seems to have just pushed it over the edge.
Change-Id: I4608216f7a59cfa96d3bdb191edd9bc7bb9cca39
We have two standalone roles, puppet and cloud-launcher, but we
currently install them with galaxy so depends-on patches don't
work. We also install them every time we run anything, even if
we don't need them for the playbook in question.
Add two roles: one to install the set of ansible roles needed by the
host in question, and the other to encapsulate the sequence of
running puppet, which now includes installing the puppet role,
installing puppet, disabling the puppet agent and then running
puppet (see the sketch below).
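
A rough sketch of that sequence as a task file (the file path, task
names and helper roles are assumptions, not the actual
implementation):

  # roles/run-puppet/tasks/main.yaml (hypothetical)
  - name: Install the puppet role from source
    include_role:
      name: install-ansible-roles

  - name: Install puppet on the target host
    include_role:
      name: install-puppet

  - name: Disable the puppet agent so only explicit runs apply changes
    command: puppet agent --disable

  - name: Run puppet
    include_role:
      name: puppet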
As a followup, we'll do the same thing with the puppet modules, so
that we aren't cloning and rsyncing all of the puppet modules every
time, whether we need them or not.
Change-Id: I69a2e99e869ee39a3da573af421b18ad93056d5b
Make a service playbook, manifest and jobs for codesearch.
Remove openstack_project::server - it doesn't do anything.
Change-Id: I44c140de4ae0b283940f8e23e8c47af983934471