Kevin Carter a94f0a9026 Combine our two multi-node-aio processes into one
The original mnaio was built using a lot of bash and was tailored
specifically to Ubuntu 14.04. The new mnaio was built using a mix of
bash and Ansible and was tailored specifically to Ubuntu 16.04. This
patch takes the two code bases, combines the best parts of each
method, and wraps it all up into a single code path written entirely
as Ansible playbooks driven by basic variables.

While the underlying system has changed, the bash environment variable
syntax for overrides remains the same. This allows users to continue
with what has become their normal workflow while leveraging the new
structure and capabilities.
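
As an example (the variable names and entry point below are purely
illustrative, not an authoritative list; check the project's variable
files for the real override names):

    # Hypothetical overrides exported before kicking off the build; the
    # playbooks pick these up just as the old bash scripts did.
    export DEFAULT_IMAGE="ubuntu-16.04-amd64"  # guest OS to install
    export OSA_BRANCH="master"                 # OpenStack-Ansible branch
    ./build.sh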

High level overview:
  * The general performance of the VMs running within the MNAIO will
    now be much better. Previously the VMs were built on top of QCOW2
    images; while that was flexible and portable, it was slower. The
    new capabilities use RAW logical volumes and native IO (see the
    sketch after this list).
  * New repo management starts with preseeds and allows the user to pin
    to specific repositories without having to worry about flipping
    them after the build (a preseed fragment follows this list).
  * CPU overhead will be much lower. The old VM system used an
    unreasonable number of processors per VM, which translated directly
    into sockets. The new system uses cores on a single socket,
    allowing for generally better VM performance with far less overhead
    and resource contention on the host.
  * Memory consumption has been greatly reduced. Each VM now follows
    the memory restrictions we'd find in the gate as a maximum. Most of
    the VMs use 1-2 GiB of RAM, which should be more than enough for
    our purposes.
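
A rough sketch of what the new VM definitions amount to (the names,
sizes, and volume group here are hypothetical; the playbooks template
the real libvirt definitions):

    # Carve a RAW logical volume for the guest instead of a QCOW2 file.
    lvcreate -L 60G -n compute1 vg01

    # Single socket exposing cores, native IO against the raw LV, and a
    # gate-like memory cap.
    virt-install --name compute1 \
      --memory 2048 \
      --vcpus 4,sockets=1,cores=4,threads=1 \
      --disk path=/dev/vg01/compute1,format=raw,cache=none,io=native,bus=virtio \
      --import --os-variant ubuntu16.04 --noautoconsole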
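
On the repo side, the preseed pinning boils down to something like the
following (the mirror values are placeholders, not the project's
defaults):

    # Hypothetical preseed fragment fixing the guest's mirrors at
    # install time so nothing needs to be flipped post build.
    cat >> preseed.cfg <<'EOF'
    d-i mirror/http/hostname string archive.ubuntu.com
    d-i mirror/http/directory string /ubuntu
    d-i apt-setup/security_host string archive.ubuntu.com
    EOF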

Overall, the deployment process is simpler and more flexible, works on
both trusty and xenial out of the box, and we hope to bring centos7
and suse into the fold some time in the future.

Change-Id: Idc8924452c481b08fd3b9362efa32d10d1b8f707
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
2017-07-28 15:35:23 +00:00

---
cidr_networks:
  container: 10.0.236.0/22
  tunnel: 10.0.240.0/22
  storage: 10.0.244.0/22
  flat: 10.0.248.0/22

used_ips:
  - "10.0.236.0,10.0.236.200"
  - "10.0.240.0,10.0.240.200"
  - "10.0.244.0,10.0.244.200"
  - "10.0.248.0,10.0.248.200"
global_overrides:
  internal_lb_vip_address: "10.0.236.112"
  external_lb_vip_address: "10.0.2.150"
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-flat"
        container_type: "veth"
        container_interface: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
          - utility_all
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
          - swift_proxy
swift:
  part_power: 8
  storage_network: 'br-storage'
  replication_network: 'br-storage'
  drives:
    - name: disk1
    - name: disk2
    - name: disk3
  mount_point: /srv
  storage_policies:
    - policy:
        name: default
        index: 0
        default: True
###
### Anchors
###
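# The blocks below are plain YAML anchors (&name). They are reused via
# aliases (*name) in the host group mappings further down so that each
# set of hosts is only defined once.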
infra_block: &infra_block
  infra1:
    ip: 10.0.236.100
  infra2:
    ip: 10.0.236.101
  infra3:
    ip: 10.0.236.102

compute_block: &compute_block
  compute1:
    ip: 10.0.236.105
  compute2:
    ip: 10.0.236.106

cinder_block: &cinder_block
  cinder1:
    ip: 10.0.236.107
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        lvm:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_backend_name: LVM_iSCSI
          iscsi_ip_address: "10.0.244.107"
  cinder2:
    ip: 10.0.236.108
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        lvm:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_backend_name: LVM_iSCSI
          iscsi_ip_address: "10.0.244.108"
swift_block: &swift_block
  swift1:
    ip: 10.0.236.109
  swift2:
    ip: 10.0.236.110
  swift3:
    ip: 10.0.236.111
###
### Infrastructure
###
# galera, memcache, rabbitmq, utility
shared-infra_hosts: *infra_block
# repository (apt cache, python packages, etc)
repo-infra_hosts: *infra_block
# rsyslog server
log_hosts:
  log1:
    ip: 10.0.236.103
# load balancer
haproxy_hosts:
  deploy1:
    ip: 10.0.236.112
###
### OpenStack
###
# keystone
identity_hosts: *infra_block
# cinder api services
storage-infra_hosts: *infra_block
# glance
image_hosts: *infra_block
# nova api, conductor, etc services
compute-infra_hosts: *infra_block
# heat
orchestration_hosts: *infra_block
# horizon
dashboard_hosts: *infra_block
# neutron server, agents (L3, etc)
network_hosts: *infra_block
# ceilometer (telemetry data collection)
metering-infra_hosts: *infra_block
# aodh (telemetry alarm service)
metering-alarm_hosts: *infra_block
# gnocchi (telemetry metrics storage)
metrics_hosts: *infra_block
# ceilometer compute agent (telemetry data collection)
metering-compute_hosts: *compute_block
# nova hypervisors
compute_hosts: *compute_block
# cinder storage host (LVM-backed)
storage_hosts: *cinder_block
# swift storage hosts
swift_hosts: *swift_block
# swift infra hosts
swift-proxy_hosts: *infra_block