This PS updates the pod affinity function to allow customisation by
operators at the point of deployment.
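As a rough illustration of the kind of override this enables at deploy
time (the values keys below are indicative only, not the exact schema
used by the charts):

  pod:
    affinity:
      anti:
        topologyKey: kubernetes.io/hostname
        type: preferredDuringSchedulingIgnoredDuringExecution

An operator would supply such a file at deployment with
`helm install --values overrides.yaml`.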
Change-Id: I8b7b2f584e990e068051d9a6d5cc7b1e1adb5aa5
This PS moves the replicas key to be under the pod key in the values.
It brings further consolidation of related configuration params to be
nested under common keys across all charts.
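A before/after sketch of the resulting values layout (the component
keys under replicas are illustrative):

  # before
  replicas:
    api: 1
    engine: 1

  # after
  pod:
    replicas:
      api: 1
      engine: 1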
Change-Id: I420b06debd0a62ba5d83497be43ff6c49c49d339
The existing entrypoint logic used static names to resolve dependencies.
This prevented the service names, and thus the hostnames of services,
from being altered. This PS resolves that issue by looking up the service name
from the endpoints specified in the values for a chart.
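Indicatively, the dependency check now derives the hostname from the
chart's endpoints rather than a hard-coded service name; a sketch of the
values involved (the key names shown are illustrative):

  endpoints:
    oslo_db:
      hosts:
        default: mariadb
  dependencies:
    api:
      services:
        - service: oslo_db
          endpoint: internal

With a layout like this, renaming a backing service only requires
overriding the relevant hosts entry under endpoints.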
Partial-Implements: blueprint enhance-entrypoint-dependency-checking
External-Tracking-Id: OSH-21
Change-Id: Ib49490f332f8cd88e98c50d9335dfd314a170936
This PS fixes some image references to bring them in line with the
style and location used by other services.
Change-Id: I1c42748875170a5a33bed7566382f3e31438dc7d
Currently, RabbitMQ clustering uses the autocluster plugin, and
NODE_TYPE is set to disc by default, so every node joins the cluster as a disc node.
However, there is a need for disc + ram clustering for performance:
if the node type is changed from disc to ram, the cluster is configured as [disc + ram + ram].
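A minimal sketch of the override this enables, assuming the node type is
surfaced as a values key (the exact key name is illustrative):

  # values.yaml override: additional members join the cluster as RAM nodes
  rabbitmq:
    node_type: ram   # default: disc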
Change-Id: Ie83689b0554f0f993bdffac666f0f56db8082992
Implements: blueprint rabbitmq-dns-discovery
Some useful things to note:
1. This uses a StatefulSet instead of a Deployment. The reason for this
is that when RabbitMQ uses DNS for peer discovery, the first thing it
does when trying to join a node is attempt a reverse DNS lookup.
This reverse lookup works when using a StatefulSet, but not a
Deployment.
2. The RabbitMQ configuration was updated to use the new sysctl-style
format (see the sketch after this list). It seems that the new format is
required to configure the new autoclustering features. Additionally, I
found that this generates much clearer error messages than the straight
Erlang format.
3. I removed the `is-node-properly-clustered` test in the liveness and
readiness probes. This probe isn't directly supported in 3.7.0,
and it wasn't clear that a clustering check was appropriate for each
node.
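For reference to point 2, assuming the chart renders rabbitmq.conf from
a values block, the sysctl-style format looks roughly like this (the
discovery hostname is a placeholder):

  conf:
    rabbitmq: |
      cluster_formation.peer_discovery_backend = rabbit_peer_discovery_dns
      cluster_formation.dns.hostname = rabbitmq-discovery.openstack.svc.cluster.local
      cluster_partition_handling = autoheal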
Change-Id: Ieefbb2205bd77fbac04abcd051fb06fce62e8d97
* Add resource limits and requests for each chart (a sketch of the
resulting layout follows this list)
* Refactor the resource limits and requests to follow a pattern
* Fix some coding issues
* Fix issues resulting from feedback on the resources PR
* Reset some variables to a static value in the neutron chart.
* Substitute the entrypoint variable with dependency_check in the affected files
* A few adjustments
* Update deploy-region.yaml
* Update deployment.yaml
* Add resource limits and requests for each chart
Squash all commits into one.
* Add resources limits and requests for some charts
* cleaning
* Fix indentation issue
* Update deployment.yaml
* Update daemonset-ovs-vswitchd.yaml
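As a rough sketch of the pattern the resource settings converge on
(component names, figures, and the enabled toggle are illustrative):

  resources:
    enabled: false   # illustrative toggle; resource blocks omitted when disabled
    api:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "1024Mi"
        cpu: "2000m"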
* Made values.yaml consistent throughout charts. Removed any globals
references in subcharts, as these are difficult to override. Only
ports should be in globals, for building URLs; this can come as part
of a future commit. The hostname endpoint aspect of a service
will come from openstack-base/_hosts.tpl and the port
would come from the chart itself as a global, so other charts
can reference the port to build a complete URL. Putting the
hostnames themselves as globals in individual charts makes it
difficult to make a sweeping top-level FQDN change.
* Cleaned up yaml requirements and incorporated a new _common.tpl
that is distributed to all charts to allow common endpoint naming
while still retaining the ability to install individual charts.
* Fixed keystone URL generation during bootstrap as a correct
URL is critical given keystone uses this to construct all
subsequent URLs in the request. Also allow controlling the
default endpoint version and scheme.
* Added missing NAMESPACE declaration to keystone deployment
as this is required for entrypoint to discover resources
not in the 'default' namespace.
* Refactored all nodeSelector values to be consistent throughout
all charts (an indicative sketch of the resulting layout follows)
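An indicative sketch of the consistent values layout described above
(names, ports, and paths are illustrative, not exact chart contents):

  labels:
    node_selector_key: openstack-control-plane
    node_selector_value: enabled
  endpoints:
    identity:
      scheme: http
      path: /v3
      hosts:
        default: keystone-api
      port:
        api: 5000

The idea is that templates assemble URLs from the scheme, host, and port,
and build nodeSelector entries from the labels pair.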