In order to correctly wind down the 2-node devstack-precise image, add it back at min-ready 0. Once its nodes are deleted we can clean up and remove it.
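A minimal sketch of the sort of nodepool label stanza involved; the image and provider names here are illustrative, not the exact entries from this change:

  labels:
    - name: devstack-precise-2-node
      image: devstack-precise       # illustrative image name
      min-ready: 0                  # keep the label defined without spawning ready nodes
      providers:
        - name: some-provider       # hypothetical provider name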
Change-Id: I4cebaf2b75c67269bed424ead561de43f4165077
The d-g will know it needs to set up more nodes when DEVSTACK_GATE_TOPOLOGY is set and its value is not 'aio' (all in one). Jobs for nova-network and neutron are added, and devstack-precise-2-node is replaced with devstack-trusty-2-node.
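A hedged sketch of how a job might opt in to the multi-node path; the job name is hypothetical and 'multinode' is assumed as a non-'aio' value:

  - job:
      name: example-dsvm-multinode-job    # hypothetical job name
      node: devstack-trusty-2-node        # the 2-node label mentioned above
      builders:
        - shell: |
            #!/bin/bash -xe
            # Any value other than 'aio' tells d-g to set up the extra nodes.
            export DEVSTACK_GATE_TOPOLOGY=multinode
            # ...followed by the usual devstack-gate wrapper invocation.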
Change-Id: Ib7dfd93f95195505a911fbe56a4c65b4a7719328
The logs vhost rewrite rules were passing requests through and failing matches because of apache2's internal rewrites. Stop passing through to avoid apache2 breaking us.
Change-Id: I86fafad9a0c991f00a86c042ff1174ca2ccd8c4d
If the requested file doesn't exist locally, ask the os-loganalyze wsgi app to handle the request anyway in case it can fetch the requested file from swift.
Change-Id: I8ed3a4c7b9a9fa682dbc4c3f3ffee8ddf2c237c6
The container for os-loganalyze to pull from should be configurable, but there was a bug in the wsgi.conf where we tried to hard-code the container and make it configurable at the same time. Just make it configurable so that the value in the config is correct.
Change-Id: I56de3ed87d6c27ac1723ffa2812e53cac31b5f40
Set up small non-master, non-data elasticsearch daemons on the logstash workers to act as local load balancers for the elasticsearch HTTP protocol.
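A rough sketch of the elasticsearch.yml settings that make a daemon act as a non-master, non-data client node; the cluster name is hypothetical:

  # elasticsearch.yml on a logstash worker (illustrative values)
  cluster.name: logstash-cluster    # hypothetical cluster name
  node.master: false                # never eligible to be elected master
  node.data: false                  # holds no shards, only routes requests
  http.enabled: true                # local HTTP endpoint for the worker to talk to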
Change-Id: Ie3729f851ebef3331a6b69f718e57d663209bfc2
The logstash elasticsearch output seems to degrade over time and slow
down. Restarting the logstash daemon temporarily corrects this problem.
Switch to the elasticsearch HTTP output to see if that corrects the
problem as well.
Note that logstash daemons using the elasticsearch HTTP output will not join the elasticsearch cluster, which would force the logstash watchdog to always trip. To avoid this issue, this change disables the watchdog.
Change-Id: I77044b26fa10fb1fc3690a0464d79d55bed2fe00
Instead of a shell script looping over ssh calls, use a simple ansible playbook. The benefit is that we can then also script ad-hoc admin tasks, either via playbooks or on the command line, and get rid of the almost entirely unused salt infrastructure.
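A minimal sketch of the idea, not the actual playbook added by this change; the host group and puppet options are illustrative:

  # run_puppet.yaml (hypothetical name): replaces the shell loop over ssh
  - hosts: all
    become: true
    tasks:
      - name: Run the puppet agent once on each node
        command: puppet agent --test
        register: puppet_run
        # puppet exits 2 when it successfully applied changes
        failed_when: puppet_run.rc not in [0, 2]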
Change-Id: I53112bd1f61d94c0521a32016c8a47c8cf9e50f7
Configure use of the read-only swift creds that pair up with the read-write creds used to push the files.
Change-Id: I53252b3ed0d596b3fe36caef179f253bde1739cb
This change modifies install_puppet.sh to accept a --three option which makes it install the latest puppet available. It also creates a node definition for the puppetmaster.o.o node, the new puppet 3 master and the master of the future. Changes were made to various classes to allow the pinning to version 2.x to be turned off.
Change-Id: I805d6dc50b9de0d8a99cf818d22d06c2dea6090a
We are beginning to migrate projects to bare-trusty as their default test slave nodes. As part of this change, shift the balance of bare nodes from precise to trusty, ending up with a minimum of 21 nodes for all bare nodes.
Change-Id: I0f8113e2d333fe42845555518c0ac9a29ee6fe23
We have trusty running dsvm jobs now by default. Rebalance min-ready numbers in nodepool to reflect the greater reliance on this image flavor.
Change-Id: I7e30aed14d79f3d6ba099ce83d070c27dc4cae83
The non-instance-variable representation of variables in templates is deprecated, so it needs to be changed. This change switches variables to their instance variable representation (e.g. @foo instead of foo). For more details see:
http://docs.puppetlabs.com/guides/templating.html
Change-Id: Ib77827e01011ef6c0380c9ec7a9d147eafd8ce2f
The image name we were using with nodepool for hpcloud precise images is no longer valid. It does not show up in `nova image-list` and nodepool image builds fail. Use the most recent non-deprecated Precise image available to us according to `nova image-list`.
Change-Id: I65a37c70823f8a5d6d08af36f3c7cd1e3ab1021f
Start booting devstack and bare trusty images with nodepool. This will
allow us to start the migration to trusty while we work on adding dib
support to nodepool.
Change-Id: I07b90af0dc4a5cfb5c547c28d05d8d51c59b9c8e
Adjust nodepool limits for rax-dfw, rax-iad and rax-ord to reflect
our current reality. Our primary quota constraints there are
maxTotalInstances and maxTotalRAMSize, the latter needing to be
divided by 8192 to get an upper bound on possible node count. Take
whichever is lower in each region, subtract the number of images
configured for it to make room for image update template instances
while running at capacity, and knock off two more for breathing
room.
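As an illustration of the arithmetic (the numbers below are made up, not the real quotas):

  # Hypothetical region, for illustration only:
  #   maxTotalInstances = 120
  #   maxTotalRAMSize   = 819200 -> 819200 / 8192 = 100 possible nodes
  #   images configured = 3, breathing room = 2
  #   max-servers       = min(120, 100) - 3 - 2 = 95
  providers:
    - name: rax-example        # hypothetical provider name
      max-servers: 95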
The drastic reduction in max-servers for rax-iad is due to a recent
and so far unexplained drop in maxTotalRAMSize there, which is being
investigated separately and can be readjusted upward once the cause
is addressed.
Change-Id: Iec601ed87bae9a048525ebcde37deb373b688f4d
Add devstack-f20 to hpcloud using the 'Fedora 20 Server 64-bit
20140407 - Partner Image'. I have run the devstack preparation
scripts manually and found no problems.
Change-Id: I562c558409ebc630a5d7b26774fe4132d7bd6b61
Since we've been put in a higher rate limit tier by hpcloud, try
reducing the wait time between calls to see if it helps get nodes
there out of building and delete states more efficiently.
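A hedged sketch, assuming nodepool's per-provider rate option (the wait in seconds between API calls); the provider name and value are illustrative:

  providers:
    - name: hpcloud-example    # hypothetical provider name
      rate: 1.0                # illustrative: wait less time between API calls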
Change-Id: I6125a461371d2fa7f6d81f9bdeba01896552633f
Nodepool is growing the ability to specify an AZ when booting nodes. Use this feature in hpcloud region-b as all nodes there are currently being scheduled in AZ2 and we want to spread the load out.
Change-Id: I1f6559e827116dc35ac9e1ef76d0b940c4b584bf
HP is turning off 1.0. Rip it out of our configs so that nodepool
doesn't go crazy when this happens.
Change-Id: I9aebbbbc7b78f2a057d4183568763a6d2d68ac25
Gracefully stop using hpcloud 1.0 so that any remaining nodes there are
cleaned up. This is in preparation for removing hpcloud 1.0 completely.
Change-Id: I7e45c9f646d2077d6e9692f72389bfc955e1d56c
Now that jobs are only running on R2, having 15 nodes ready without knowing the demand for individual types doesn't make sense.
Change-Id: Ie130617c6be1ec32d94718d80c27852f9ec6b736
Temporarily stop using this provider as it is causing various problems; the load on CI has been reduced to allow R2 to handle it solo. Once we redeploy and gain some confidence in this region we will add it back in. Setting max-servers to 0 will allow nodepool to gracefully clean up nodes.
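A minimal sketch of what that looks like in the nodepool provider config; the provider name is hypothetical:

  providers:
    - name: some-provider      # hypothetical provider name
      max-servers: 0           # boot nothing new; existing nodes drain and get cleaned up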
Change-Id: Ib16dbef47b74bb027d47c60b50448d51c0110ca3
The error rate for hpcloud 1.1 when under load with 190 servers per
network:router pair exceeded the success rate. Reduce the number of
servers to 100 in each network:router pair to see if this handles load
better.
Change-Id: I91327d8452e3df30de22ee635fcd68fbb7b24c7f
To test the theory of error rates as they relate to the server:router ratio, use fewer networks with a higher server-per-network count.
Change-Id: I132127519b8eb3f1116753cd51be90dab7e370e3
It has been suggested that we use fewer nodes per network:router pairing in hpcloud region-b with nodepool. Test this by adding nodepool providers for new network:router pairings. This will put up to 118 nodes on each network:router pairing, of which we now have 5.
Change-Id: I126316e04554d044721ae1495a17d207e0de464a
This was increased some time ago to work around a bug; the bug is now fixed. max-servers on the rh1 provider can also be increased to 18 (we have 18 testenvs) as we will now be able to hold more instances.
Change-Id: I23e28b7cf55b099df73f55bf57d7cc0cbc78d969
We have the ability to track failure rates better now, so we should
be able to get some good numbers on whether this is terrible or not.
This reverts commit a906e4b52b8bcddfb6b3346885f99c937b87a0df.
Conflicts:
modules/openstack_project/templates/nodepool/nodepool.yaml.erb
Change-Id: Id9f27f5d4ec8788ea6a4d67fb15aa81cb2e7f1c3
HPCloud deprecated the Precise image we were using in nodepool. This means they renamed it, which broke all of our nodepool precise image builds in hpcloud. They did not replace the image; instead, the indication seems to be that users are expected to use partner images.
We are told the Precise partner images are provided by Canonical and are safe to use. This means that we can in theory also use the Trusty partner image, but that is a change for another day.
Change-Id: I27f9d1e5bfa261610d76edaf8b56df74705cffdd