Also, fixed up the environment variable name to use RABBITMQ_NODE_PORT,
which is what RabbitMQ expects (http://www.rabbitmq.com/configure.html).
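A minimal sketch of the corrected start-script fragment (the default value of 5672 is RabbitMQ's standard AMQP port; the echo is illustrative):

```shell
# RabbitMQ reads its AMQP listening port from RABBITMQ_NODE_PORT
# (see http://www.rabbitmq.com/configure.html), so the start script
# must export exactly that name. Default to 5672 if unset.
export RABBITMQ_NODE_PORT="${RABBITMQ_NODE_PORT:-5672}"
echo "rabbitmq will listen on port ${RABBITMQ_NODE_PORT}"
```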
Implements: blueprint kube-rabbitmq-container
Change-Id: Iacc2ea5d3c4a002e6920ed17cb21733a0cbd8d21
- glance was using the wrong variable name for admin_password
- "\" was missing in several places, breaking multi-line crudini
commands
- glance was using the wrong tenant name
- in the registry container, glance-manage appears to reference
glance-api.conf
- the glance.json config file was not spawning a registry container
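For illustration, the backslash issue looks like this (section and option names hypothetical beyond admin_password; the command is collected into "$@" via `set --` so the example runs without crudini installed):

```shell
# Every continued line must end in "\"; a missing backslash ends the
# command early and silently drops the remaining arguments.
ADMIN_PASSWORD="secret"   # hypothetical placeholder value
set -- crudini --set /etc/glance/glance-api.conf \
    keystone_authtoken \
    admin_password "$ADMIN_PASSWORD"
echo "$@"
```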
Change-Id: I280d1db3ed576988f2bf29ea665e1922a37f8752
This renames the keystone services so that they are named by function,
rather than port number (which would be confusing if they were running
on a different port).
Change-Id: Ibb0263a133c28a104563df431870a9effe584012
This patch updates all the json files that reference the mariadb service
variables to use the new names.
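Kubernetes publishes each service to pods as environment variables derived from the service name, so after the rename the mariadb variables look roughly like this (exact names assumed here, defaults and database URL illustrative):

```shell
# Service-discovery variables injected by kubernetes for the renamed
# mariadb service (names assumed; defaults only for illustration).
MARIADB_SERVICE_HOST="${MARIADB_SERVICE_HOST:-127.0.0.1}"
MARIADB_SERVICE_PORT="${MARIADB_SERVICE_PORT:-3306}"
db_url="mysql://root@${MARIADB_SERVICE_HOST}:${MARIADB_SERVICE_PORT}/glance"
echo "$db_url"
```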
Labelling things foo-master crept into this repository from the
kubernetes guestbook example (which has redis-master and redis-slaves).
We're not running clustered software at the moment so these labels are
unnecessary.
Change-Id: I229d04c89aa13cb6cc2e1c33a0a7b21e1c6e9caa
Labelling things foo-master crept into this repository from the
kubernetes guestbook example (which has redis-master and redis-slaves).
We're not running clustered software at the moment so these labels are
unnecessary.
Change-Id: Ibf4cb2b005cc57bcb11e298dd5109cfe309c9ec3
Let's get that in quickly so we can add a gate. There was some respacing
along the way (done with http://jsonlint.com).
Change-Id: Id18b9f9757306cf3f06e6221a21a9f600db1bd2e
This image configures haproxy to forward connections for all available
kubernetes services. It is meant to be run alongside other containers in
a kubernetes pod to provide access to "remote" services at a consistent
address so that keystone api endpoints can be configured in a sane
fashion.
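A sketch of the kind of haproxy stanza such an image might generate for one service (service name, port, and the KEYSTONE_SERVICE_HOST variable are illustrative, not taken from the actual image):

```shell
# Generate a haproxy "listen" block so clients in the pod always reach
# the service at a fixed local address. All names below are assumed.
service_name="keystone-api"
local_port=5000
remote="${KEYSTONE_SERVICE_HOST:-10.0.0.10}"
cfg=$(cat <<EOF
listen ${service_name}
    bind 127.0.0.1:${local_port}
    server ${service_name} ${remote}:${local_port}
EOF
)
echo "$cfg"
```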
Change-Id: Ic923c6a772f1bdf36b97b05a1d04de9e5b841ddd
This patch introduces the "crux" [1] tool for creating keystone
users, services, and endpoints in an idempotent fashion. E.g., to
create a user that doesn't exist:
$ crux user-create -n lars -t lars -p secret
creating new tenant
created tenant lars (d74cec5023c4428da533066bb11943db)
creating new user lars
created user lars (adf2c2d92e894a3d90a403c5885f192e)
And performing the same operation a second time:
$ crux user-create -n lars -t lars -p secret
using existing tenant lars (d74cec5023c4428da533066bb11943db)
using existing user lars (adf2c2d92e894a3d90a403c5885f192e)
The behavior is similar for creating keystone endpoints.
[1]: https://github.com/larsks/crux
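The check-then-create pattern behind that behavior can be sketched generically (this is not crux's actual implementation; the keystone CLI calls are illustrative):

```shell
# Only create the user if it does not already exist, so the operation
# is safe to repeat. Function is defined but not run here, since it
# needs a reachable keystone.
ensure_user() {
    name="$1"
    if keystone user-list | grep -qw "$name"; then
        echo "using existing user $name"
    else
        keystone user-create --name "$name"
        echo "created user $name"
    fi
}
```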
Change-Id: I694e0c1bdcdde595e1af2ee8ef5d0f239a9ad4cd
We use openssl in many of our start scripts for password generation, so
openssl should probably be part of the base image.
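The sort of invocation those start scripts rely on (the exact length and encoding vary per script):

```shell
# Generate a random password: 16 random bytes, hex-encoded to 32 chars.
password=$(openssl rand -hex 16)
echo "$password"
```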
Change-Id: I893adfa3b7d17249b6814fc161e6f3f1696d8cd6
This patch replaces the collection of individual "build" scripts with a
single script (tools/build-docker-image), made available as "build"
inside each image directory.
The build-docker-image script will, by default, build images tagged with
the current commit id in order to prevent developers from accidentally
stepping on each other or on release images.
Documentation in docs/image-building.md describes the script in more
detail.
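The default tagging behavior can be sketched like this (image name hypothetical; the kube/ namespace appears elsewhere in this repo):

```shell
# Tag the image with the current commit id so developers don't
# overwrite each other's images or release images.
image="kube/base"   # hypothetical image name
commit=$(git rev-parse --short HEAD 2>/dev/null || echo unknown)
tag="${image}:${commit}"
echo "docker build -t ${tag} ."
```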
Change-Id: I444d5c2256a85223f8750a0904cb4b07f18ab67f
- adding db sync and db creation as utf8
- fixing user and role creation
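A hedged sketch of the utf8 database creation (database name taken from the glance context; credentials illustrative, and the command is echoed rather than executed):

```shell
# Create the glance database with a utf8 character set so db sync
# does not fail on non-latin1 data.
sql="CREATE DATABASE IF NOT EXISTS glance DEFAULT CHARACTER SET utf8;"
echo mysql -u root -e "$sql"
```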
Partial-blueprint: kube-glance-container
Change-Id: I15be99f26483e490fccc23d029f39645c13c724b
Not everyone has access to the kube/ docker namespace, so let's push
only when specified with -p.
Change-Id: I49b2b04f8db8ff7ba7c9f6b6dc9b2ec8c30a95c8
This patch creates a keystone service and endpoint so that each separate
service can find keystone. It also makes the sleep handling a bit more
logical, although there are TODOs in this area to remove the sleep
operations.
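A hedged sketch of that registration (URLs, region, and the KEYSTONE_SERVICE_HOST variable are illustrative; commands are echoed rather than executed, since they need a live keystone):

```shell
# Register the identity service and its endpoint so other services can
# discover keystone. Ports 5000/35357 are keystone's standard ports.
KEYSTONE_HOST="${KEYSTONE_SERVICE_HOST:-127.0.0.1}"
publicurl="http://${KEYSTONE_HOST}:5000/v2.0"
adminurl="http://${KEYSTONE_HOST}:35357/v2.0"
echo keystone service-create --name keystone --type identity
echo keystone endpoint-create --region RegionOne \
    --publicurl "$publicurl" --adminurl "$adminurl"
```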
Change-Id: Icfee464f9473686da89bfa8b2106172cbfd4c1a8
Closes-Bug: #1376975
This lays the groundwork for the docker compute container.
The compute node is composed of a libvirt container and a nova-compute
container. We are going to have to sort out how to get k8s to schedule
this pod one per node.
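Schematically, the pod pairs the two containers (image names hypothetical; manifest fields are approximate for the v1beta-era kubernetes API, not an exact file from this repo):

```json
{
  "id": "compute",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "compute",
      "containers": [
        {"name": "libvirt", "image": "kube/libvirt"},
        {"name": "nova-compute", "image": "kube/nova-compute"}
      ]
    }
  }
}
```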
Change-Id: I1e06e4b5f5bde83b582edfc1094084a4ee353371
Partial-blueprint: kube-libvirt-container
Partial-blueprint: kube-nova-container