An operator might want to ensure Swift is available during
an upgrade and manually upgrade Swift once the maintenance
completes.
The operator would need to set these variables before running
the upgrade:
export SKIP_SWIFT_UPGRADE=yes
export CONTAINERS_TO_DESTROY=add_!swift_all_exclusion
This would prevent the Swift containers from being torn
down during the upgrade and would skip all Swift upgrade
operations.
Change-Id: Ibf40499750751dd9f41e447b7b90bb77f592cc14
This was a provider-specific command that can be
removed, as it could remove unintended containers.
Change-Id: I179565f84fd8176cbcb79eacc8e63e0fef554223
With more recent versions of Ansible, tests should use
the "is" keyword instead of the "|" filter syntax.
This change updates the affected tasks accordingly.
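For illustration, a task condition moves from the deprecated filter form to the test form roughly like this (the task names and the `result` variable are hypothetical, not taken from this change):

```yaml
# Deprecated: invoking a test through the "|" filter syntax
- name: Report success (old style)
  debug:
    msg: "command succeeded"
  when: result | succeeded

# Preferred: invoking the same test with the "is" keyword
- name: Report success (new style)
  debug:
    msg: "command succeeded"
  when: result is succeeded
```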
Change-Id: I897b918785c34523688c450bec16661f0f6e496e
In situations where self-signed certs are used for the API,
adding the insecure flag is necessary to make
post-redeploy-cleanup work.
Change-Id: Ie5d5b6248feba5c4479567d22e74c76065725fda
Cleans out old MariaDB apt sources before
running the redeploy, to prevent issues with the
Galera client during the leapfrog.
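A minimal sketch of such a cleanup, assuming Debian-style apt source lists; the helper name, directory argument, and matching pattern are illustrative, not taken from this change:

```shell
# Remove any leftover MariaDB apt source lists from a directory
# (on a real host this would be /etc/apt/sources.list.d).
clean_mariadb_sources() {
  dir="$1"
  for f in "$dir"/*.list; do
    [ -e "$f" ] || continue
    case "$(basename "$f")" in
      # Match source lists whose filename mentions MariaDB,
      # in any capitalization
      *[Mm]aria[Dd][Bb]*) rm -f "$f" ;;
    esac
  done
}
```

After removing the stale lists, an `apt-get update` would refresh the package index before the redeploy proceeds.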
Change-Id: Iba91de800d4f1ec66a062e2213344e61c392407b
These containers store logs within them, and those logs
would be lost if the containers were destroyed.
Change-Id: I0b3b114dce89c6e55d54efb351788e0cfe85c3b4
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
The DB migrations use recurring task names,
making it difficult to tell which version's task ran.
Prefixing each task with the actual base OpenStack
version makes them easier to identify, especially
in a leapfrog situation.
Change-Id: I9c2b711452208be28bef421a5e536bd2bf8a9a03
cinder-manage service list does not output a usable, full hostname;
only the short hostname is output. To fix this, we need to query
the database directly.
We also change the behavior of this operation to drop all services. We
do this at the recommendation of upstream cinder. This is safe because
all cinder services are stopped and the services will re-register when
restarted.
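A rough sketch of the kind of direct query involved, assuming the upstream cinder `services` table layout (the column selection here is illustrative, not taken from this change):

```sql
-- Illustrative: fetch the full host value recorded for each
-- active (non-deleted) cinder service
SELECT host, `binary`, updated_at
FROM cinder.services
WHERE deleted = 0;
```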
Related-Bug: #1712372
Change-Id: I6d845165ec22d4c2aeece0636a550e0b57050c22
During the leapfrog all containers are deleted, and Ceph
containers are no exception.
This adds exceptions into the process:
- one on the generic container delete;
- one on the Juno container delete of any remnant containers.
Change-Id: I34812f4472594998f3e40b4a5cb650e396a80421
HAProxy should not have an init script that loads conf.d files
during the haproxy installation, as was done in Kilo and
removed in Newton. Otherwise the installation of the package
will fail, because it will load the conf.d files.
Change-Id: I345089cc3493b90c1c4fbd2d47c51f83c65c94f4
If we interrupt the process during the unarchive, we'll be left
in a broken idempotency state: the leapfrog process will run the
venv prep again, the synchronize will be unchanged, and the
unarchive rewire will never run.
Change-Id: I8e91ef39d4ecbc9ff5a6a4a73cd0ce4679d6ecf0
This change fixes the removal of neutron agent containers so that it no
longer relies on the containers existing on the deploy host.
The file `leapfrog_remove_remaining_old_containers` is only created
on the deploy host; combined with the fact that the original task
could not fail, this resulted in the container removal silently
failing on multi-node builds.
This change gets the list of containers from the file on the deploy host
before trying to delete them on all hosts.
Change-Id: Ic95187fd7e7ff93c796ce01f296cb06a16ba72bd
If leapfrogging from Kilo (or above) with a host named rpc.*,
all the containers will be wiped during the step
``neutron-remove-old-containers.yml``.
Change-Id: I2e1106bcce12547d6ab9e0384cd96d5e0194001d
Instead of ensuring only that the failing package was installed,
we now ensure all the usual packages are installed at the proper
version, to avoid headaches in the future.
Change-Id: Ibf766551a4c17adf7763f8c986b0d39cd7148979
This change:
- discovers the current running version to know what
  to leap from, because we can't assume the source is
  always Juno. At the same time it introduces a human
  verification of the source branch.
- removes the unnecessary "-v" from the runs; its
  verbose output made the interface less
  "user friendly".
Change-Id: I04e4780bf5f58638addbd992eab7152f288532ae
Co-Authored-By: Jean-Philippe Evrard <jean-philippe@evrard.me>
This change adds upgrade tooling that will take a Juno based
OpenStack-Ansible cloud and upgrade it to Newton. The tooling
will run a deployment through all of the needed steps upgrading
the environment and skipping all of the OpenStack releases in
between.
**This tooling should be considered experimental at this time**
Change-Id: I1880794717b9e47786ae255ea1afa57d805cde8e
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>