For CentOS, we should be using the RDO repositories to provide
rabbitmq-server. This version is updated with bug fixes and provides
a more stable experience than the rabbitmq.com builds.
Co-Authored-by: Jeffrey Zhang <zhang.lei.fly@gmail.com>
Co-Authored-by: Michal (inc0) Jastrzebski <inc007@gmail.com>
Closes-Bug: #1621460
Change-Id: Ib0eafc5da4397756fbdd837520b15543180ce229
The collectd-ceilometer-plugin is essential for further, more
detailed metrics collection, smarter scheduling, and service
assurance.
Change-Id: I8da572980de370517ec120d745ad1d36e316b465
Implements: blueprint collectd-ceilometer-plugin
* merge the keystone sections in all.yml
* move the keystone parameters in globals.yml into their own section
(see the sketch below)
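A sketch only (the section header style and variable names follow the
existing all.yml conventions, but the exact merged contents may
differ):

    ####################
    # Keystone options
    ####################
    keystone_admin_port: "35357"
    keystone_public_port: "5000"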
TrivialFix
Change-Id: I72893a44dabd515243175098d5c4da3f8191597b
Added an ansible role for influxdb.
Introduced host groups for monitoring and influxdb and assigned the
role to them. Monitoring is deployed on a separate node called
monitoring01 by default.
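A minimal sketch of the resulting inventory layout, written in
Ansible's YAML inventory format (the shipped inventories may be INI;
only the group and host names come from this change):

    all:
      children:
        monitoring:
          hosts:
            monitoring01:
        influxdb:
          children:
            monitoring: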
Co-Authored-By: zhubingbing <zhubingbing10@gmail.com>
Change-Id: If2465a14b18c6c3fd657af587a0b85f6b7a0191a
Partially-Implements: Blueprint performance-monitoring
A new option, enable_neutron_agent_ha, has been added to
enable/disable dhcp/l3 agent high availability;
dhcp_agents_per_network defaults to 2 and is configurable.
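A usage sketch for globals.yml (the option names come from this
change; the enable value shown is illustrative, 2 is the documented
default):

    enable_neutron_agent_ha: "yes"
    dhcp_agents_per_network: 2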
Implements: blueprint support-network-ha
Change-Id: Id4742aa67c80584634b923195545bf2b654172f3
An unwitting user may apply the KOLLA_CEPH_OSD[_CACHE]_BOOTSTRAP
label to a partition, assuming that only that partition will be used
for Ceph, and end up wiping out their disk.
This change adds a layer of checking for this scenario to help avoid
disaster.
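A rough sketch of the kind of guard this adds, as an Ansible task
(illustrative only; the real task, variable names, and condition
differ):

    - name: Refuse to bootstrap a labelled partition on a disk with other data
      fail:
        msg: >
          Found a Ceph bootstrap label on {{ item }}, but the parent disk
          holds other partitions; aborting to avoid wiping the disk.
      with_items: "{{ suspect_osd_partitions | default([]) }}"  # hypothetical list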
Closes-Bug: #1599103
DocImpact
Change-Id: Ibb9fb42f87a76bc02165ec0b93b60234bad8747a
This addresses the ansible aspects of fernet key bootstrapping as
well as distributed key rotation.
- Bootstrapping is handled in the same way as keystone bootstrap.
- New keystone-fernet and keystone-ssh containers are created to allow
the nodes to communicate with each other (taken from nova-ssh).
- keystone-fernet is a keystone container with crontab installed. It
handles key rotations through keystone-manage and triggers an rsync to
push the new keys to the other nodes.
- Key rotation is set up to be balanced across the keystone nodes in a
round-robin style, so the failure of any one node will not stop the
keys from rotating. It is configured by a desired token expiration
time, which then determines the cron schedule for each node as well as
the number of fernet keys in rotation (see the sketch after this
list).
- Ability for a recovered node to resync with the cluster. When a node
starts, it runs sanity checks to ensure that its fernet keys are not
stale; if they are, it rsyncs with the other nodes to bring its keys
up to date.
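A sketch of the scheduling arithmetic (variable names here are
illustrative, not the exact ones used):

    # With a desired expiry of fernet_token_expiry seconds and N keystone
    # hosts rotating round-robin, each host rotates once per full cycle,
    # offset by its index in the group.
    fernet_token_expiry: 86400
    fernet_rotation_interval: "{{ fernet_token_expiry | int // (groups['keystone'] | length) }}"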
The Docker component is implemented in:
https://review.openstack.org/#/c/349366
Change-Id: I15052c25a1d1149d364236f10ced2e2346119738
Implements: blueprint keystone-fernet-token
The values for 'network_interface' and 'neutron_external_interface'
are missing from all.yml, meaning it is impossible to override them on
a per-node / per-group basis (globals.yml gets top precedence).
Make these consistent with the rest of the variables and move the
defaults into all.yml. Operators can still override / update these in
globals.yml as before, but those wanting more flexibility now have it
via host / group variables.
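For example, an operator can now pin interfaces per host through host
vars (the interface names here are illustrative):

    # host_vars/compute01
    network_interface: "eth1"
    neutron_external_interface: "eth2"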
Change-Id: I2575921f76a8e245106da765757c70353bd6762c
Closes-Bug: #1604129
The keystone_*_url variables are cross-role variables, used in
multiple roles. Move them from the common role to the group vars.
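A sketch of what this looks like in group_vars/all.yml (the exact URL
composition may differ):

    keystone_admin_url: "{{ admin_protocol }}://{{ kolla_internal_fqdn }}:{{ keystone_admin_port }}/v3"
    keystone_internal_url: "{{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ keystone_public_port }}/v3"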
TrivialFix
Change-Id: If451823ed7612bfec7bc797ec9dd2597164c6804
enable_rabbitmq_cluster is now "yes" by default, but you can set it
to "no" to disable clustering entirely.
The agreement made at the OpenStack summit in Austin was that
Kolla-Kubernetes would concentrate on RabbitMQ and MariaDB without
clustering, but with persistent storage and workload migration, and
then examine how to do proper distributed functionality as the
project progresses. I am just following what we had already agreed
upon.
First, it helps us deal with issues of version upgrades without
dealing with clustered version upgrades and the synchronization
thereof.
Second, it provides an alternative model for durability when used in
Kubernetes. Understand that, if we disable RabbitMQ's clustering,
Kubernetes is still able to re-schedule the queue off of a failed node
in ways that Kolla-Ansible is not. There are known issues with
RabbitMQ clustering, especially with auto-heal turned on. For many
small-to-mid-sized clusters, the known potential for a 30-second blip
after a RabbitMQ node failure is a better operator experience than
the known potential for partition and data loss, and/or for manual
operations once auto-heal has been turned off.
Kolla-kubernetes has already turned off host networking for the
RabbitMQ pod; it's safe to set the interface address in the
Kubernetes context.
The question was asked why I don't just set the RabbitMQ cluster to
be a single instance. It's unlikely that Kubernetes RabbitMQ with a
PetSet will be clustered in the same declarative fashion as the
rabbitmq-clusterer plugin. It is easier to just disable it and worry
about how to configure a kube-friendly clustered RabbitMQ at a later
point in time. Furthermore, it is entirely valid for many OpenStack
control planes hosted atop Kolla-Kubernetes to accept the possibility
of a 30-60 second blip rather than relive the long and questionable
history of RabbitMQ clustering in production.
Co-authored-by: Ryan Hallisey <rhallise@redhat.com>
Change-Id: I7f0cb22d29a418fce4af8d69f63739859173d746
Partially-implements: blueprint api-interface-bind-address-override
This change introduces 4 new parameters that make it possible to use
an existing elasticsearch service for central logging (a usage sketch
follows the list):
* elasticsearch_address - address of elasticsearch server
* elasticsearch_protocol - protocol (HTTP/HTTPS) used by elasticsearch server
* enable_elasticsearch - deploy elasticsearch container
* enable_kibana - deploy kibana container
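A usage sketch for pointing central logging at an existing cluster
(the parameter names come from this change; values are illustrative):

    enable_elasticsearch: "no"   # don't deploy our own elasticsearch
    enable_kibana: "yes"
    elasticsearch_address: "es.example.com"
    elasticsearch_protocol: "https"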
Closes-Bug: #1584861
Change-Id: Ia1ff9ae8b6d9929c3826da02693d1e2fc9ea2522
When the orchestration engine is Kubernetes,
ansible_processor_vcpus is not defined.
This patch changes the worker count to a static value when using
Kubernetes.
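A sketch of the idea (the static value of 1 and the exact variable
names are illustrative):

    openstack_service_workers: "{% if orchestration_engine == 'KUBERNETES' %}1{% else %}{{ [ansible_processor_vcpus, 5] | min }}{% endif %}"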
Change-Id: I4d77b2e48ea24c4ca8b86ec5b7e6029c054b247a
Closes-Bug: #1609206
Introduced a nova backend selection flag for Ceph, with priority
rules when multiple backends are configured.
Added a mechanism to deploy arbitrary ceph.conf and keyring files
into the nova-compute and nova-libvirt containers.
Added documentation
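A usage sketch (the flag name follows kolla conventions; the paths
are illustrative):

    nova_backend_ceph: "yes"
    # plus the user-provided files, e.g.:
    #   /etc/kolla/config/nova/ceph.conf
    #   /etc/kolla/config/nova/ceph.client.nova.keyring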
Change-Id: Id010ca9cc2d914e5358ef79edeb600a28220dd4b
Implements: blueprint external-ceph
Remove unnecessary options from the group_vars/all.yml file.
* removed some cinder.conf options like volume_backend_name,
iscsi_helper, iscsi_protocol, etc.; these values can be configured
through a custom cinder.conf file and need not be exported as global
variables
* removed the meaningless iscsi_ip_address, which is not used by the
LVM driver
* force-start the iscsi-related services when
enable_cinder_backend_lvm is yes (see the sketch below)
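The globals.yml side then shrinks to a single toggle (a sketch; the
backend details move into the user's custom cinder.conf):

    enable_cinder_backend_lvm: "yes"
    # volume_backend_name, iscsi_helper, etc. now live in the custom
    # cinder.conf merged at deploy time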
TrivialFix
Change-Id: Ifcbfdad15e4d68bc5f20fc77e0315a09983ef022
This patch adds support for external Ceph clusters for Cinder.
For clean integration the backend configuration mechanism had to be
slightly adjusted.
We now have the option to enable multiple backends for Cinder
independently.
Currently, the flags cinder_backend_iscsi and cinder_backend_ceph are
used to toggle backends.
Documentation on how to use external ceph was added.
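For example, to run Cinder against an external Ceph cluster only
(flag names from this change):

    cinder_backend_ceph: "yes"
    cinder_backend_iscsi: "no"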
Change-Id: I7e0267b90d62d6d881f24f063cdb894422ec8618
Partially-Implements: blueprint external-ceph
Simplest implementation of external ceph support.
We use INI merge to configure RBD backend for Glance and copy
ceph.conf and keyring provided by the user into the container.
set_configs.py had to be extended to support globbing (wildcards) in
order to copy the ceph keyring file, whose name depends on the cephx
user name.
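A sketch of a copy stanza that now accepts a glob, shown in
YAML-compatible form (the real file is the container's config.json;
the values here are illustrative):

    - source: "/var/lib/kolla/config_files/ceph.client.*.keyring"
      dest: "/etc/ceph/"
      owner: "glance"
      perm: "0600"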
Partially-Implements: blueprint external-ceph
Partially-Implements: blueprint selectable-ceph
Change-Id: Iacadbd8ec9956e9f075206ea03b28f044cb6ffb8
This introduces a new configuration parameter, neutron_enable_qos,
to enable the Neutron QoS service plugin.
More details about the Neutron QoS service plugin are available at:
http://docs.openstack.org/liberty/networking-guide/adv-config-qos.html
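To enable it in globals.yml (parameter name from this change):

    neutron_enable_qos: "yes"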
Change-Id: I8525bf4dce5f1e225f72a4e1c3760b64a36b17f6
Closes-Bug: #1593183
Implements: blueprint networking-qos