570 Commits

Author SHA1 Message Date
Clark Boylan
d85f3c004b Use a current Trusty image name in hpcloud
HPCloud "deprecates" images and renames them. This appears to also
change the image uuid in the process. I have checked the image uuid
against
https://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:hpcloud.json
and according to that file's gpg signature the uuid is correct.
Note I don't have a chain of trust to the signature used, so I can
only verify the signature, not the signer. I also looked for checksums
in the signed json file, but they don't seem to exist, so I can only
check the uuid.

Go with the new 14.04.1 image which is current and not deprecated. We
probably want to confirm with ubuntu before we merge this change
though.

Change-Id: I979f1b90fa5dd4e5889b58539208303b78feb115
2014-08-04 19:52:13 +00:00
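A rename like this lands as a one-line change to a nodepool provider image entry. The sketch below assumes the 2014-era nodepool.yaml schema; the provider name, image name string, and min-ram are illustrative, not the real values from the change:

```yaml
# Sketch of a nodepool provider image entry; all names and values are
# illustrative, not taken from the actual change.
providers:
  - name: hpcloud-region-a
    images:
      - name: devstack-trusty
        base-image: 'Ubuntu Server 14.04.1 LTS (amd64 20140724) - Partner Image'
        min-ram: 8192
```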
Jenkins
5794402075 Merge "multinode min ready 2" 2014-08-01 23:19:14 +00:00
Spencer Krum
51fcec1a47 Pinning puppetdb-terminus with other puppet packages
Change-Id: I5025167cc60147258039ac188829337e7907ad0d
2014-07-31 14:36:55 -07:00
Jenkins
b5b124c150 Merge "Remove hpcloud-region-b" 2014-07-31 00:02:21 +00:00
Ian Wienand
8968a45105 Add centos7 image for rax
Rackspace provides a centos7 image, so add it to nodepool so we can
start testing with it.  I've tested the changes from
I80024d1afdb4e40d5fe9793ab2ec443b887c5fa8 manually in this image and
it is OK.

Change-Id: I2c040cc18d61b69a6f13cf0d8d102a5994c130e4
2014-07-28 15:26:44 +10:00
Derek Higgins
59266a3027 Bump up the capacity of the tripleo rh1 cloud
More capacity has been added to the rh1 cloud.

Change-Id: I938d3f346f95999901d7e7afcfed4ce3e269350a
2014-07-22 18:48:12 +01:00
Attila Fazekas
bf89bb65fa multinode min ready 2
Two experimental d-g jobs have been defined, but we
keep only one pair of nodes ready. This can cause
longer wait times on the d-g experimental checks.

Increase the min-ready on devstack-trusty-2-node
from 1 to 2.

Change-Id: I7a51efa9dfbe45264819960da74ab5b8d9874426
2014-07-18 16:48:45 +02:00
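In nodepool's config this is a one-line bump; a sketch, assuming the label-based schema of the time:

```yaml
# Illustrative nodepool label entry; only min-ready changes, 1 -> 2.
labels:
  - name: devstack-trusty-2-node
    min-ready: 2
```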
James E. Blair
46130f618d Add devstack-precise-2-node back at 0
In order to correctly wind down the 2-node devstack-precise image,
add it back at min-ready 0.  Once its nodes are deleted we can clean
up and remove it.

Change-Id: I4cebaf2b75c67269bed424ead561de43f4165077
2014-07-18 07:28:57 -07:00
Jenkins
e2476f8e92 Merge "Add 2 node experimental job to d-g" 2014-07-18 13:52:36 +00:00
Attila Fazekas
9d375137a6 Add 2 node experimental job to d-g
d-g will know it needs to set up more nodes when
DEVSTACK_GATE_TOPOLOGY is set and its value is not
'aio' (all-in-one). Jobs for nova-network and neutron
are added.

devstack-precise-2-node is replaced with devstack-trusty-2-node.

Change-Id: Ib7dfd93f95195505a911fbe56a4c65b4a7719328
2014-07-18 01:02:00 -07:00
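The topology check described above might look roughly like this in devstack-gate's bash; only the DEVSTACK_GATE_TOPOLOGY variable name comes from the commit, the branch bodies are placeholders:

```shell
# Hedged sketch of the d-g topology branch; the actions are
# illustrative placeholders.
DEVSTACK_GATE_TOPOLOGY=${DEVSTACK_GATE_TOPOLOGY:-aio}
if [ "$DEVSTACK_GATE_TOPOLOGY" != "aio" ]; then
    echo "topology $DEVSTACK_GATE_TOPOLOGY: setting up extra subnodes"
else
    echo "topology aio: single all-in-one node"
fi
```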
Monty Taylor
cd816f20e7 Remove hpcloud-region-b
We haven't been using it for a while, because we use all the
sub-regions.

Change-Id: I956c75fee8e12c94aaffe306ce66baeb59130e95
2014-07-18 09:57:08 +02:00
Joshua Hesketh
7244de8baf Fix the apache rules for fetching from swift
Change-Id: I8c1a39a65dc1eefc782664ea5f020150821ebce6
2014-07-17 13:19:54 +00:00
Clark Boylan
d65853f8da Fix logs rewrite passthrough
The logs vhost rewrite rules were passing through and failing matches
because of apache2's internal rewrites. Stop passing through to
avoid apache2 breaking us.

Change-Id: I86fafad9a0c991f00a86c042ff1174ca2ccd8c4d
2014-07-16 05:32:19 -07:00
Jenkins
a1953fd4ae Merge "Add in rewrite rule to check swift" 2014-07-16 10:06:47 +00:00
Joshua Hesketh
cf71602cc7 Add in rewrite rule to check swift
If the requested file doesn't exist locally, ask the os-loganalyze
wsgi app to handle the request anyway in case it can fetch the
requested file from swift.

Change-Id: I8ed3a4c7b9a9fa682dbc4c3f3ffee8ddf2c237c6
2014-07-16 19:52:39 +10:00
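The rewrite described might be sketched like this, assuming os-loganalyze is mounted as a wsgi app under a /htmlify/ alias (the path is illustrative); the [L] flag stops further rewriting so apache's internal re-matching doesn't interfere:

```apache
# If the requested log file is not on local disk, hand the request to
# the os-loganalyze wsgi app so it can try swift instead.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^/(.*)$ /htmlify/$1 [L]
```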
Clark Boylan
14f303d7d1 Don't hard set infra-files in swift wsgi conf
The container for os-loganalyze to pull from should be configurable,
but there was a bug in wsgi.conf where we tried to hard-set the
container and make it configurable at the same time. Just make it
configurable so that the value in the config is correct.

Change-Id: I56de3ed87d6c27ac1723ffa2812e53cac31b5f40
2014-07-16 02:22:55 -07:00
Clark Boylan
9f42006bd1 Use local ES balancers on logstash workers
Set up small non master non data elasticsearch daemons on logstash
workers to act as local load balancers for the elasticsearch http
protocol.

Change-Id: Ie3729f851ebef3331a6b69f718e57d663209bfc2
2014-07-07 17:01:57 -07:00
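In elasticsearch 0.90/1.x terms, a "non master non data" node is a client node: it joins the cluster and routes requests, but is never elected master and holds no shards. A minimal elasticsearch.yml sketch:

```yaml
# Client-only elasticsearch node acting as a local load balancer:
# never master, holds no data, just proxies HTTP requests.
node.master: false
node.data: false
http.enabled: true
```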
Clark Boylan
a339be3b8c Convert logstash ES output to HTTP.
The logstash elasticsearch output seems to degrade over time and slow
down. Restarting the logstash daemon temporarily corrects this problem.
Switch to the elasticsearch HTTP output to see if that corrects the
problem as well.

Note the logstash watchdog is disabled by this change: logstash
daemons using the elasticsearch HTTP output will not join the
elasticsearch cluster, which would force the watchdog to always
trip.

Change-Id: I77044b26fa10fb1fc3690a0464d79d55bed2fe00
2014-07-07 13:14:01 -07:00
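With logstash 1.x this is a swap of output plugins; a minimal sketch, with host and port illustrative:

```
output {
  # elasticsearch_http talks to ES over the HTTP API on port 9200
  # instead of joining the cluster as a node, which is why the
  # watchdog can no longer see the daemon as a cluster member.
  elasticsearch_http {
    host => "127.0.0.1"
    port => 9200
  }
}
```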
Monty Taylor
034f37c32a Use ansible instead of direct ssh calls
Instead of a shell script looping over ssh calls, use a simple
ansible playbook. The benefit this gets is that we can then also
script ad-hoc admin tasks either via playbooks or on the command
line. We can also then get rid of the almost entirely unused
salt infrastructure.

Change-Id: I53112bd1f61d94c0521a32016c8a47c8cf9e50f7
2014-07-04 10:01:08 -07:00
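A minimal playbook of the kind described, assuming a host group named `puppet` in the inventory; the group name and task are illustrative:

```yaml
# Illustrative ansible playbook replacing the ssh loop: run the same
# command on every host in the "puppet" group.
- hosts: puppet
  tasks:
    - name: trigger a puppet agent run
      command: puppet agent --test
```

The same inventory then serves ad-hoc admin tasks, e.g. `ansible puppet -m ping`.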
Jenkins
629d146fea Merge "Add in wsgi.conf for os-loganalyze" 2014-07-03 22:15:46 +00:00
Joshua Hesketh
df4f93b891 Add in wsgi.conf for os-loganalyze
Configure to use the read only swift creds that pair up with the read
write creds used to push the files.

Change-Id: I53252b3ed0d596b3fe36caef179f253bde1739cb
2014-07-03 14:24:25 -07:00
Spencer Krum
b65a2d3afc Allow site.pp to manage ca and ca_server in puppet.conf
This allows us to set ca = false and ca_server = <fqdn> on the
new puppet 3 master.

Change-Id: Iba189bdc4bfb22fd23052f2570f52133ea184126
2014-07-02 15:01:17 -07:00
Spencer Krum
6adda92be8 Add node def for puppet3 master
This change modifies install_puppet.sh to accept a --three option
that makes it install the latest puppet available. It also creates
a node definition for the puppetmaster.o.o node, the new 3 master,
and the master of the future. Changes were made to various classes
to allow the pinning to version 2.x to be turned off.

Change-Id: I805d6dc50b9de0d8a99cf818d22d06c2dea6090a
2014-07-02 13:25:14 -07:00
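The --three handling might be sketched like this; only the flag name comes from the commit, everything else (variable names, version pin) is hypothetical:

```shell
# Hedged sketch of option parsing for install_puppet.sh; the variable
# names and the 2.7 pin are illustrative.
THREE=no
for arg in "$@"; do
    case "$arg" in
        --three) THREE=yes ;;
    esac
done
if [ "$THREE" = "yes" ]; then
    PUPPET_ENSURE=latest      # newest puppet available
else
    PUPPET_ENSURE='2.7*'      # keep the 2.x pin
fi
echo "puppet package ensure: $PUPPET_ENSURE"
```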
Jenkins
a534f166a4 Merge "Revert "Fixing deprecation warnings"" 2014-07-02 17:42:24 +00:00
James E. Blair
f52d2eb2b6 Revert "Fixing deprecation warnings"
This reverts commit 82b9b59522928863ddadeacfec819e287303ef20.

Change-Id: I746d7ae57802dc76618db9024a0cf94c43774c02
2014-07-02 17:35:02 +00:00
Jenkins
c5a83095e6 Merge "Fixing deprecation warnings" 2014-07-02 07:04:07 +00:00
Clark Boylan
c80fbd7f87 Start shifting balance of bare nodes to trusty
We are beginning to migrate projects to bare-trusty as their default
test slave nodes. As part of this change shift the balance of bare nodes
from precise to trusty ending up with a minimum of 21 nodes for all bare
nodes.

Change-Id: I0f8113e2d333fe42845555518c0ac9a29ee6fe23
2014-06-26 14:13:36 -07:00
Jenkins
fc4095cc8d Merge "remove /rechecks" 2014-06-25 22:46:20 +00:00
Jenkins
4f77921acf Merge "Update HPCloud Precise image name" 2014-06-25 22:46:06 +00:00
Sean Dague
9b357c31fb remove /rechecks
and make it redirect to the elastic-recheck page as a courtesy to
users following the old link.

Change-Id: I900418ae152c5568c7418237aa0e30e2b1efdd78
2014-06-25 11:04:29 -04:00
Clark Boylan
12f983df26 Rebalance nodepool min ready numbers for trusty
We have trusty running dsvm jobs now by default. Rebalance min ready
numbers in nodepool to reflect the greater reliance on this image
flavor.

Change-Id: I7e30aed14d79f3d6ba099ce83d070c27dc4cae83
2014-06-24 10:55:50 -07:00
Spencer Krum
82b9b59522 Fixing deprecation warnings
The non-instance-variable representation is deprecated,
so it needs to be changed. This change switches variables
to their instance-variable representation.

For more details see:
http://docs.puppetlabs.com/guides/templating.html

Change-Id: Ib77827e01011ef6c0380c9ec7a9d147eafd8ce2f
2014-06-19 22:41:42 -07:00
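Concretely, the deprecation is about how ERB templates look up puppet variables; the variable name below is illustrative:

```erb
<%# Deprecated bare-variable form: %>
port = <%= logstash_port %>
<%# Instance-variable form the change switches to: %>
port = <%= @logstash_port %>
```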
Clark Boylan
2079764ea5 Update HPCloud Precise image name
The image name we were using with nodepool for hpcloud precise images
is no longer valid: it does not show up in `nova image-list`, and
nodepool image builds fail. Use the most recent non-deprecated Precise
image available to us according to `nova image-list`.

Change-Id: I65a37c70823f8a5d6d08af36f3c7cd1e3ab1021f
2014-06-18 20:40:53 -07:00
Clark Boylan
6247132dd0 Add trusty to nodepool image list
Start booting devstack and bare trusty images with nodepool. This will
allow us to start the migration to trusty while we work on adding dib
support to nodepool.

Change-Id: I07b90af0dc4a5cfb5c547c28d05d8d51c59b9c8e
2014-06-18 13:52:33 -07:00
Jeremy Stanley
653f8b8d67 Adjust nodepool limits for rax providers
Adjust nodepool limits for rax-dfw, rax-iad and rax-ord to reflect
our current reality. Our primary quota constraints there are
maxTotalInstances and maxTotalRAMSize, the latter needing to be
divided by 8192 to get an upper bound on possible node count. Take
whichever is lower in each region, subtract the number of images
configured for it to make room for image update template instances
while running at capacity, and knock off two more for breathing
room.

The drastic reduction in max-servers for rax-iad is due to a recent
and so far unexplained drop in maxTotalRAMSize there, which is being
investigated separately and can be readjusted upward once the cause
is addressed.

Change-Id: Iec601ed87bae9a048525ebcde37deb373b688f4d
2014-06-11 15:12:03 +00:00
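The arithmetic above, as a worked example with made-up quota numbers (the real rax values are not in the commit):

```shell
# Worked example of the max-servers calculation; quota numbers are
# illustrative, not the real rax values.
MAX_INSTANCES=200
MAX_RAM_MB=1024000                        # maxTotalRAMSize
RAM_BOUND=$((MAX_RAM_MB / 8192))          # possible 8GB nodes: 125
LOWER=$((MAX_INSTANCES < RAM_BOUND ? MAX_INSTANCES : RAM_BOUND))
NUM_IMAGES=3                              # images configured in region
MAX_SERVERS=$((LOWER - NUM_IMAGES - 2))   # templates + breathing room
echo "max-servers: $MAX_SERVERS"
```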
Ian Wienand
543576b988 Add devstack-f20 to hpcloud
Add devstack-f20 to hpcloud using the 'Fedora 20 Server 64-bit
20140407 - Partner Image'.  I have run the devstack preparation
scripts manually and found no problems.

Change-Id: I562c558409ebc630a5d7b26774fe4132d7bd6b61
2014-06-06 10:48:59 +10:00
Jeremy Stanley
b1383f1388 Reduce nodepool's wait between calls in hpcloud
Since we've been put in a higher rate limit tier by hpcloud, try
reducing the wait time between calls to see if it helps get nodes
there out of building and delete states more efficiently.

Change-Id: I6125a461371d2fa7f6d81f9bdeba01896552633f
2014-06-03 15:48:30 +00:00
Jenkins
706d86ae6a Merge "Stop using HP Cloud 1.0" 2014-06-03 05:06:24 +00:00
Clark Boylan
c8b0f37bec Add availability zones to hpcloud region b
Nodepool is growing the ability to specify AZ when booting nodes. Use
this feature in hpcloud region b, as all nodes there are currently
being scheduled in AZ2 and we want to spread the load out.

Change-Id: I1f6559e827116dc35ac9e1ef76d0b940c4b584bf
2014-06-02 12:25:33 -07:00
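In nodepool's provider config this shows up as an AZ list; the key name (`azs`) and the AZ values below are my assumption about the then-new option, not verified against the change:

```yaml
# Illustrative sketch: let nodepool spread boots across AZs instead of
# everything landing in AZ2. The `azs` key name is assumed.
providers:
  - name: hpcloud-region-b
    azs:
      - az1
      - az2
      - az3
```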
Jenkins
1ef24d01c7 Merge "Fix variable name" 2014-05-30 22:49:43 +00:00
Jenkins
2ffceb49fa Merge "Reduce min-ready for tripleo nodes" 2014-05-30 22:44:34 +00:00
Monty Taylor
7fe6272b07 Stop using HP Cloud 1.0
HP is turning off 1.0. Rip it out of our configs so that nodepool
doesn't go crazy when this happens.

Change-Id: I9aebbbbc7b78f2a057d4183568763a6d2d68ac25
2014-05-30 14:41:59 -07:00
Clark Boylan
8a01f3ea41 Ease down hpcloud 1.0
Gracefully stop using hpcloud 1.0 so that any remaining nodes there are
cleaned up. This is in preparation for removing hpcloud 1.0 completely.

Change-Id: I7e45c9f646d2077d6e9692f72389bfc955e1d56c
2014-05-30 14:36:08 -07:00
K Jonathan Harker
a30dd52700 Fix variable name
The puppet variables do not have _dev_ in them, only the hiera keys
contain _dev_.

Change-Id: I86d751859eb3ffbdef323f88b31f5d43eb305a34
2014-05-30 13:44:33 -07:00
Derek Higgins
7a8b9bf5f4 Reduce min-ready for tripleo nodes
Now that jobs are only running on R2, keeping 15 nodes ready without
knowing the demand for individual types doesn't make sense.

Change-Id: Ie130617c6be1ec32d94718d80c27852f9ec6b736
2014-05-30 00:52:12 +01:00
Derek Higgins
4daba7bd76 Remove the tripleo-test-cloud provider
Temporarily stop using this provider as it's causing various
problems; the load on CI has been reduced to allow R2 to handle it
solo. Once we redeploy and gain some confidence in this region we
will add it back in. Setting max-servers to 0 allows nodepool
to gracefully clean up nodes.

Change-Id: Ib16dbef47b74bb027d47c60b50448d51c0110ca3
2014-05-29 20:47:52 +01:00
Clark Boylan
77d09ef80e Use 100 servers per hpcloud network:router pair.
The error rate for hpcloud 1.1 when under load with 190 servers per
network:router pair exceeded the success rate. Reduce the number of
servers to 100 in each network:router pair to see if this handles load
better.

Change-Id: I91327d8452e3df30de22ee635fcd68fbb7b24c7f
2014-05-23 15:26:40 -07:00
Monty Taylor
573a5292cc Raise the ratio of servers to routers
To test the theory of error rates as they relate to the server:router
ratio, use fewer networks with a higher server-per-network count.

Change-Id: I132127519b8eb3f1116753cd51be90dab7e370e3
2014-05-23 11:15:03 -04:00
Clark Boylan
4f8acb5ffe Use network specific hpcloud region-b providers
It has been suggested that we use fewer nodes per network:router
pair in hpcloud region-b with nodepool. Test this by adding nodepool
providers for new network:router pairs. This will put up to 118 nodes
on each network:router pair, of which we now have 5.

Change-Id: I126316e04554d044721ae1495a17d207e0de464a
2014-05-22 15:42:06 -07:00
Jenkins
3ef79ce498 Merge "Revert "Revert "Revert "Stop using hpcloud az1-az3 in favor of region-b"""" 2014-05-21 20:01:38 +00:00