diff --git a/doc/training-guides/basic-install-guide/app_reserved_uids.xml b/doc/training-guides/basic-install-guide/app_reserved_uids.xml deleted file mode 100644 index 0905f81b..00000000 --- a/doc/training-guides/basic-install-guide/app_reserved_uids.xml +++ /dev/null @@ -1,93 +0,0 @@ - - - Reserved user IDs - - - OpenStack reserves certain user IDs to run specific services and - own specific files. These user IDs are set up according to the - distribution packages. The following table gives an overview. - - - - Some OpenStack packages generate and assign user IDs - automatically during package installation. In these cases, the - user ID value is not important. The existence of the user ID is - what matters. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Reserved user IDs
Name         Description                    ID        Notes
ceilometer   OpenStack Ceilometer Daemons   166       Assigned during package installation
cinder       OpenStack Cinder Daemons       165       Assigned during package installation
glance       OpenStack Glance Daemons       161       Assigned during package installation
heat         OpenStack Heat Daemons         187       Assigned during package installation
keystone     OpenStack Keystone Daemons     163       Assigned during package installation
neutron      OpenStack Neutron Daemons      164       Assigned during package installation
nova         OpenStack Nova Daemons         162, 96   Assigned during package installation
swift        OpenStack Swift Daemons        160       Assigned during package installation
trove        OpenStack Trove Daemons                  Assigned during package installation
- Each user belongs to a user group with the same name as the - user. - -
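On an installed node you can confirm a reserved UID and its matching group by inspecting the passwd entry. A minimal sketch, using a sample entry so it runs anywhere (the home directory and shell fields are illustrative; on a real node you would query `getent passwd glance` instead):

```shell
# Sample passwd-style entry for the glance service user; on a real node
# replace this with the output of: getent passwd glance
entry="glance:x:161:161:OpenStack Glance Daemons:/var/lib/glance:/bin/false"

# Fields 3 and 4 of a passwd entry are the UID and primary GID
uid=$(echo "$entry" | cut -d: -f3)
gid=$(echo "$entry" | cut -d: -f4)
echo "uid=$uid gid=$gid"
```

Because each user belongs to a group of the same name, the UID and GID values normally match, as they do here.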
diff --git a/doc/training-guides/basic-install-guide/bk-openstack-basic-install-guide.xml b/doc/training-guides/basic-install-guide/bk-openstack-basic-install-guide.xml deleted file mode 100644 index 22ac65f8..00000000 --- a/doc/training-guides/basic-install-guide/bk-openstack-basic-install-guide.xml +++ /dev/null @@ -1,99 +0,0 @@ - - - OpenStack Basic Installation Guide for - <phrase os="rhel;centos;fedora">Red Hat Enterprise Linux 7, CentOS 7, and Fedora 20</phrase> - <phrase os="ubuntu">Ubuntu 14.04</phrase> - <phrase os="debian">Debian 7</phrase> - <phrase os="opensuse">openSUSE 13.1 and SUSE Linux Enterprise Server 11 SP3</phrase> - - - - OpenStack Basic Installation Guide for - Red Hat Enterprise Linux, CentOS, and Fedora - Ubuntu 14.04 - openSUSE and SUSE Linux Enterprise Server - Debian 7 - - - - - - - - - OpenStack - - - - 2012 - 2013 - 2014 - OpenStack Foundation - - juno - OpenStack Basic Installation Guide - - - - Copyright details are filled in by the - template. - - - - Work in progress, please do not work on this patch. - - The OpenStack® system consists of several key - projects that you install separately but that work - together depending on your cloud needs. These projects - include Compute, Identity Service, Networking, Image - Service, Block Storage, Object Storage, Telemetry, - Orchestration, and Database. You can install any of these - projects separately and configure them stand-alone or - as connected entities. This guide walks through an - installation by using packages available through - Debian 7 (code name: Wheezy). - This guide walks through an - installation by using packages available through - Ubuntu 14.04. - This guide shows you - how to install OpenStack by using packages - available through Fedora 20 as well as on Red Hat - Enterprise Linux 7 and its derivatives through the - EPEL repository. 
- This guide shows you how to - install OpenStack by using packages on openSUSE - 13.1 and SUSE Linux Enterprise Server 11 SP3 - through the Open Build Service Cloud - repository. Explanations of configuration - options and sample configuration files are - included. - - - - - - - - - - - - - - - - - - - - - - diff --git a/doc/training-guides/basic-install-guide/ch_basic_environment.xml b/doc/training-guides/basic-install-guide/ch_basic_environment.xml deleted file mode 100644 index 74c05a23..00000000 --- a/doc/training-guides/basic-install-guide/ch_basic_environment.xml +++ /dev/null @@ -1,53 +0,0 @@ - - - - Basic environment - - - - The trunk version of this guide focuses on the future Juno - release and will not work for the current Icehouse release. If - you want to install Icehouse, you must use the Icehouse version - of this guide instead. - - - - This chapter explains how to configure each node in the - example architectures - including the - two-node architecture with legacy networking and - three-node - architecture with OpenStack Networking (neutron). - - Although most environments include OpenStack Identity, Image Service, - Compute, at least one networking service, and the dashboard, OpenStack - Object Storage can operate independently of most other services. If your - use case only involves Object Storage, you can skip to - . However, the dashboard will not run without - at least OpenStack Image Service and Compute. - - - You must use an account with administrative privileges to configure - each node. Either run the commands as the root user - or configure the sudo utility. - - - - The systemctl enable call on openSUSE outputs - a warning message when the service uses SysV Init scripts - instead of native systemd files. This warning can be ignored. 
- - - - - - - - - - diff --git a/doc/training-guides/basic-install-guide/ch_basic_networking.xml b/doc/training-guides/basic-install-guide/ch_basic_networking.xml deleted file mode 100644 index d1ff6636..00000000 --- a/doc/training-guides/basic-install-guide/ch_basic_networking.xml +++ /dev/null @@ -1,37 +0,0 @@ - - - Add a networking component - This chapter explains how to install and configure either - OpenStack Networking (neutron) or the legacy nova-network networking service. - The nova-network service - enables you to deploy one network type per instance and is - suitable for basic network functionality. OpenStack Networking - enables you to deploy multiple network types per instance and - includes plug-ins for a - variety of products that support virtual - networking. - For more information, see the Networking chapter of the OpenStack Cloud - Administrator Guide. -
- OpenStack Networking (neutron) - - - - - -
-
- Next steps - Your OpenStack environment now includes the core components - necessary to launch a basic instance. You can launch an instance or add - more OpenStack services to your environment. -
-
diff --git a/doc/training-guides/basic-install-guide/ch_basics.xml b/doc/training-guides/basic-install-guide/ch_basics.xml deleted file mode 100644 index 0ea28908..00000000 --- a/doc/training-guides/basic-install-guide/ch_basics.xml +++ /dev/null @@ -1,47 +0,0 @@ - - - - Basic environment configuration - - - - The trunk version of this guide focuses on the Icehouse - release and will not work for the current Juno release. If - you want to install Juno, you must use the Juno version - of this guide instead. - - - - This chapter explains how to configure each node in the - example architectures - including the - two-node architecture with legacy networking and - three-node - architecture with OpenStack Networking (neutron). - - Although most environments include OpenStack Identity, Image Service, - Compute, one networking service, and the dashboard, OpenStack - Object Storage can operate independently of most other services. If your - use case only involves Object Storage, you can skip to - . However, the - dashboard will not work without at least the OpenStack Image Service and - Compute. - - - You must use an account with administrative privileges to configure - each node. Either run the commands as the root user - or configure the sudo utility. - - - - - - - - - diff --git a/doc/training-guides/basic-install-guide/ch_ceilometer.xml b/doc/training-guides/basic-install-guide/ch_ceilometer.xml deleted file mode 100644 index 0af09578..00000000 --- a/doc/training-guides/basic-install-guide/ch_ceilometer.xml +++ /dev/null @@ -1,23 +0,0 @@ - - - Add the Telemetry module - Telemetry provides a framework for monitoring and metering - the OpenStack cloud. It is also known as the ceilometer - project. - - - - - - -
- Next steps - Your OpenStack environment now includes Telemetry. You can - launch an instance or add more - services to your environment in the previous chapters. -
-
diff --git a/doc/training-guides/basic-install-guide/ch_cinder.xml b/doc/training-guides/basic-install-guide/ch_cinder.xml deleted file mode 100644 index 28ad1341..00000000 --- a/doc/training-guides/basic-install-guide/ch_cinder.xml +++ /dev/null @@ -1,29 +0,0 @@ - - - Add the Block Storage service - The OpenStack Block Storage service provides block storage devices - to instances using various backends. The Block Storage API and scheduler - services run on the controller node and the volume service runs on one - or more storage nodes. Storage nodes provide volumes to instances using - local block storage devices or SAN/NAS backends with the appropriate - drivers. For more information, see the - Configuration Reference. - - This chapter omits the backup manager because it depends on the - Object Storage service. - - - - -
- Next steps - Your OpenStack environment now includes Block Storage. You can - launch an instance or add more - services to your environment in the following chapters. -
-
diff --git a/doc/training-guides/basic-install-guide/ch_clients.xml b/doc/training-guides/basic-install-guide/ch_clients.xml deleted file mode 100644 index c3f98d19..00000000 --- a/doc/training-guides/basic-install-guide/ch_clients.xml +++ /dev/null @@ -1,45 +0,0 @@ - - - Install and configure the OpenStack clients - The following sections contain information about working - with the OpenStack clients. Recall: in the previous section, - you used the keystone client. - You must install the client tools to complete the rest of - the installation. - Configure the clients on your desktop rather than on the - server so that you have a similar experience to your - users. -
- Create openrc.sh files - - - As explained in , - use the - credentials from and - create the following - PROJECT-openrc.sh - files: - - - - - admin-openrc.sh for the administrative user - - - - - demo-openrc.sh for the normal user: - export OS_USERNAME=demo -export OS_PASSWORD=DEMO_PASS -export OS_TENANT_NAME=demo -export OS_AUTH_URL=http://controller:35357/v2.0 - - - - -
-
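The steps above can be sketched end to end: write the demo-openrc.sh file with the example credentials and source it, after which the OpenStack clients pick the values up from the environment. DEMO_PASS is a placeholder, and the auth URL simply mirrors the example shown above:

```shell
# Create the demo credentials file exactly as described above
# (DEMO_PASS is a placeholder, not a real password)
cat > demo-openrc.sh <<'EOF'
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:35357/v2.0
EOF

# Source it so the OS_* variables are available to the clients
. ./demo-openrc.sh
echo "$OS_USERNAME/$OS_TENANT_NAME"
```

After sourcing, a client invocation such as `keystone token-get` reads these variables instead of requiring --os-username and related flags on every command.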
diff --git a/doc/training-guides/basic-install-guide/ch_debconf.xml b/doc/training-guides/basic-install-guide/ch_debconf.xml deleted file mode 100644 index 850bc1ec..00000000 --- a/doc/training-guides/basic-install-guide/ch_debconf.xml +++ /dev/null @@ -1,14 +0,0 @@ - - - Configure OpenStack with debconf - - - - - - diff --git a/doc/training-guides/basic-install-guide/ch_glance.xml b/doc/training-guides/basic-install-guide/ch_glance.xml deleted file mode 100644 index a085c86c..00000000 --- a/doc/training-guides/basic-install-guide/ch_glance.xml +++ /dev/null @@ -1,30 +0,0 @@ - - - Add the Image Service - The OpenStack Image Service (glance) enables users to discover, - register, and retrieve virtual machine images. It offers a REST API that enables you to - query virtual machine image metadata and retrieve an actual image. - You can store virtual machine images made available through the - Image Service in a variety of locations, from simple file systems - to object-storage systems like OpenStack Object Storage. - - For simplicity, this guide describes configuring the Image Service to - use the file back end, which uploads and stores in a - directory on the controller node hosting the Image Service. By - default, this directory is /var/lib/glance/images/. - - Before you proceed, ensure that the controller node has at least - several gigabytes of space available in this directory. - For information on requirements for other back ends, see Configuration - Reference. - - - - diff --git a/doc/training-guides/basic-install-guide/ch_heat.xml b/doc/training-guides/basic-install-guide/ch_heat.xml deleted file mode 100644 index a1d53b95..00000000 --- a/doc/training-guides/basic-install-guide/ch_heat.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - Add the Orchestration module - The Orchestration module (heat) uses a heat orchestration template - (HOT) to create and manage cloud resources. - - -
- Next steps - Your OpenStack environment now includes Orchestration. You can - launch an instance or add more - services to your environment in the following chapters. -
-
diff --git a/doc/training-guides/basic-install-guide/ch_horizon.xml b/doc/training-guides/basic-install-guide/ch_horizon.xml deleted file mode 100644 index 4a626c3a..00000000 --- a/doc/training-guides/basic-install-guide/ch_horizon.xml +++ /dev/null @@ -1,44 +0,0 @@ - - - Add the dashboard - The OpenStack dashboard, also known as Horizon, is a Web interface that enables cloud - administrators and users to manage various OpenStack resources and - services. - The dashboard enables web-based interactions with the - OpenStack Compute cloud controller through the OpenStack - APIs. - Horizon enables you to customize the brand of the dashboard. - Horizon provides a set of core classes and reusable templates and tools. - This example deployment uses an Apache web server. - - - -
- Next steps - Your OpenStack environment now includes the dashboard. You can - launch an instance or add - more services to your environment in the following chapters. - After you install and configure the dashboard, you can - complete the following tasks: - - - Customize your dashboard. See section Customize the dashboard in the OpenStack Cloud Administrator Guide - for information on setting up colors, logos, and site titles. - - - Set up session storage. See section Set up session storage for the dashboard - in the OpenStack Cloud Administrator Guide for information on user - session data. - - -
-
diff --git a/doc/training-guides/basic-install-guide/ch_keystone.xml b/doc/training-guides/basic-install-guide/ch_keystone.xml deleted file mode 100644 index 653b2605..00000000 --- a/doc/training-guides/basic-install-guide/ch_keystone.xml +++ /dev/null @@ -1,13 +0,0 @@ - - - Add the Identity service - - - - - - diff --git a/doc/training-guides/basic-install-guide/ch_launch-instance.xml b/doc/training-guides/basic-install-guide/ch_launch-instance.xml deleted file mode 100644 index 7fb1a1c4..00000000 --- a/doc/training-guides/basic-install-guide/ch_launch-instance.xml +++ /dev/null @@ -1,33 +0,0 @@ - - - Launch an instance - An instance is a VM that OpenStack provisions on a compute node. - This guide shows you how to launch a minimal instance using the - CirrOS image that you added to your environment - in the chapter. In these steps, you use the - command-line interface (CLI) on your controller node or any system with - the appropriate OpenStack client libraries. To use the dashboard, see the - - OpenStack User Guide. - Launch an instance using - OpenStack Networking (neutron) - or - legacy networking (nova-network) - . For more - information, see the - - OpenStack User Guide. - - These steps reference example components created in previous - chapters. You must adjust certain values such as IP addresses to - match your environment. - - - - diff --git a/doc/training-guides/basic-install-guide/ch_networking.xml b/doc/training-guides/basic-install-guide/ch_networking.xml deleted file mode 100644 index 8cf07d1a..00000000 --- a/doc/training-guides/basic-install-guide/ch_networking.xml +++ /dev/null @@ -1,43 +0,0 @@ - - - Add a networking component - This chapter explains how to install and configure either - OpenStack Networking (neutron) or the legacy nova-network networking service. - The nova-network service - enables you to deploy one network type per instance and is - suitable for basic network functionality. 
OpenStack Networking - enables you to deploy multiple network types per instance and - includes plug-ins for a - variety of products that support virtual - networking. - For more information, see the Networking chapter of the OpenStack Cloud - Administrator Guide. -
- OpenStack Networking (neutron) - - - - - -
-
- Legacy networking (nova-network) - - - -
-
- Next steps - Your OpenStack environment now includes the core components - necessary to launch a basic instance. You can launch an instance or add - more OpenStack services to your environment. -
-
diff --git a/doc/training-guides/basic-install-guide/ch_nova.xml b/doc/training-guides/basic-install-guide/ch_nova.xml deleted file mode 100644 index 0b1211d6..00000000 --- a/doc/training-guides/basic-install-guide/ch_nova.xml +++ /dev/null @@ -1,12 +0,0 @@ - - - - Add the Compute service - - - - diff --git a/doc/training-guides/basic-install-guide/ch_overview.xml b/doc/training-guides/basic-install-guide/ch_overview.xml deleted file mode 100644 index 5c802571..00000000 --- a/doc/training-guides/basic-install-guide/ch_overview.xml +++ /dev/null @@ -1,148 +0,0 @@ - - - - Architecture -
- Overview - The OpenStack project is an open source cloud - computing platform that supports all types of cloud environments. The - project aims for simple implementation, massive scalability, and a rich - set of features. Cloud computing experts from around the world - contribute to the project. - OpenStack provides an Infrastructure-as-a-Service - (IaaS) solution through a variety of complementary - services. Each service offers an application programming interface - (API) that facilitates this integration. The - following table provides a list of OpenStack services: - This guide describes how to deploy these services in a functional - test environment and, by example, teaches you how to build a production - environment.
-
- Conceptual architecture - Launching a virtual machine or instance involves many interactions - among several services. The following diagram provides the conceptual - architecture of a typical OpenStack environment. -
-
- Example architectures - OpenStack is highly configurable to meet different needs with various - compute, networking, and storage options. This guide enables you to - choose your own OpenStack adventure using a combination of basic and - optional services. This guide uses the following example - architectures: - - - - - The basic controller node runs the Identity service, Image - Service, management portions of Compute and Networking, - Networking plug-in, and the dashboard. It also includes - supporting services such as a database, - message broker, and - Network Time Protocol (NTP). - - Optionally, the controller node also runs portions of - Block Storage, Object Storage, Database Service, Orchestration, - and Telemetry. These components provide additional features for - your environment. - - - The network node runs the Networking plug-in, layer-2 agent, - and several layer-3 agents that provision and operate tenant - networks. Layer-2 services include provisioning of virtual - networks and tunnels. Layer-3 services include routing, - NAT, - and DHCP. This node also handles - external (Internet) connectivity for tenant virtual machines - or instances. - - - The compute node runs the hypervisor portion of Compute, - which operates tenant virtual machines or instances. By default - Compute uses KVM as the hypervisor. The compute node also runs - the Networking plug-in and layer-2 agent which operate tenant - networks and implement security groups. You can run more than - one compute node. - Optionally, the compute node also runs the Telemetry - agent. This component provides additional features for - your environment. - - - The optional storage node contains the disks that the Block - Storage service uses to serve volumes. You can run more than one - storage node. - Optionally, the storage node also runs the Telemetry - agent. This component provides additional features for - your environment. 
- - - - When you implement this architecture, skip - To use optional services, you - might need to install additional nodes, as described in - subsequent chapters. - -
- Three-node architecture with OpenStack Networking (neutron) - - - - - -
-
- - Two-node architecture with legacy networking (nova-network). See - - - The basic - controller node - runs the Identity service, Image Service, management portion of - Compute, and the dashboard necessary to launch a simple instance. - It also includes supporting services such as a database, message - broker, and NTP. - Optionally, the controller node also runs portions of - Block Storage, Object Storage, Database Service, Orchestration, - and Telemetry. These components provide additional features for - your environment. - - - The basic compute node runs the - hypervisor portion of Compute, - which operates tenant - virtual machines - or instances. By default, Compute uses - KVM - as the hypervisor. Compute also - provisions and operates tenant networks and implements - security groups. - You can run more than one compute node. - Optionally, the compute node also runs the Telemetry - agent. This component provides additional features for - your environment. - - - - When you implement this architecture, skip - might need to install additional nodes, as described in - subsequent chapters. - -
- Two-node architecture with legacy networking (nova-network) - - - - - -
-
-
-
-
diff --git a/doc/training-guides/basic-install-guide/ch_sahara.xml b/doc/training-guides/basic-install-guide/ch_sahara.xml deleted file mode 100644 index e615a13e..00000000 --- a/doc/training-guides/basic-install-guide/ch_sahara.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - Add the Data processing service - The Data processing service (sahara) enables users to provide a - scalable data processing stack and associated management interfaces. - This includes provision and operation of data processing clusters as - well as scheduling and operation of data processing jobs. - - - This chapter is a work in progress. It may contain - incorrect information, and will be updated frequently. - - - diff --git a/doc/training-guides/basic-install-guide/ch_swift.xml b/doc/training-guides/basic-install-guide/ch_swift.xml deleted file mode 100644 index 341af6f3..00000000 --- a/doc/training-guides/basic-install-guide/ch_swift.xml +++ /dev/null @@ -1,32 +0,0 @@ - - - Add Object Storage - The OpenStack Object Storage services work together to provide - object storage and retrieval through a REST API. For this example - architecture, you must have already installed the Identity - Service, also known as Keystone. - - - - - - - -
- Next steps - Your OpenStack environment now includes Object Storage. You can - launch an instance or add more - services to your environment in the following chapters. -
-
diff --git a/doc/training-guides/basic-install-guide/ch_trove.xml b/doc/training-guides/basic-install-guide/ch_trove.xml deleted file mode 100644 index 4bb8375c..00000000 --- a/doc/training-guides/basic-install-guide/ch_trove.xml +++ /dev/null @@ -1,15 +0,0 @@ - - - Add the Database service - Use the Database - module to create cloud database resources. The - integrated project name is trove. - This chapter is a work in progress. It may contain - incorrect information, and will be updated frequently. - - - diff --git a/doc/training-guides/basic-install-guide/common/figures/SCH_5002_V00_NUAC-Keystone.png b/doc/training-guides/basic-install-guide/common/figures/SCH_5002_V00_NUAC-Keystone.png deleted file mode 100644 index 29678ac1..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/SCH_5002_V00_NUAC-Keystone.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/novnc/SCH_5009_V00_NUAC-VNC_OpenStack.png b/doc/training-guides/basic-install-guide/common/figures/novnc/SCH_5009_V00_NUAC-VNC_OpenStack.png deleted file mode 100644 index 01afb358..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/novnc/SCH_5009_V00_NUAC-VNC_OpenStack.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/objectstorage-accountscontainers.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage-accountscontainers.png deleted file mode 100644 index 4df7326a..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/objectstorage-accountscontainers.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/objectstorage-arch.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage-arch.png deleted file mode 100644 index 3b7978b6..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/objectstorage-arch.png and /dev/null differ diff --git 
a/doc/training-guides/basic-install-guide/common/figures/objectstorage-buildingblocks.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage-buildingblocks.png deleted file mode 100644 index 8499ca1e..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/objectstorage-buildingblocks.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/objectstorage-nodes.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage-nodes.png deleted file mode 100644 index e7a0396f..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/objectstorage-nodes.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/objectstorage-partitions.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage-partitions.png deleted file mode 100644 index 7e319ca0..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/objectstorage-partitions.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/objectstorage-replication.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage-replication.png deleted file mode 100644 index 8ce13091..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/objectstorage-replication.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/objectstorage-ring.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage-ring.png deleted file mode 100644 index 22ef3120..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/objectstorage-ring.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/objectstorage-usecase.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage-usecase.png deleted file mode 100644 index 5d7c8f42..00000000 Binary files 
a/doc/training-guides/basic-install-guide/common/figures/objectstorage-usecase.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/objectstorage-zones.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage-zones.png deleted file mode 100644 index ee5ffbf7..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/objectstorage-zones.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/figures/objectstorage.png b/doc/training-guides/basic-install-guide/common/figures/objectstorage.png deleted file mode 100644 index 9454065c..00000000 Binary files a/doc/training-guides/basic-install-guide/common/figures/objectstorage.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/common/section_compute-configure-vnc.xml b/doc/training-guides/basic-install-guide/common/section_compute-configure-vnc.xml deleted file mode 100644 index f6612bc1..00000000 --- a/doc/training-guides/basic-install-guide/common/section_compute-configure-vnc.xml +++ /dev/null @@ -1,286 +0,0 @@ - -
- VNC console proxy - The VNC proxy is an OpenStack component that enables compute - service users to access their instances through VNC - clients. - The VNC console connection works as follows: - - - A user connects to the API and gets an - access_url such as, - http://ip:port/?token=xyz. - - - - The user pastes the URL in a browser or uses it as a - client parameter. - - - The browser or client connects to the proxy. - - - The proxy talks to nova-consoleauth to authorize the token for - the user, and maps the token to the - private host and port of the VNC server - for an instance. - The compute host specifies the address that the proxy - should use to connect through the - nova.conf file option, - . In this way, - the VNC proxy works as a bridge between the public network and - private host network. - - - The proxy initiates the connection to VNC server and - continues to proxy until the session ends. - - - The proxy also tunnels the VNC protocol over WebSockets so that the - noVNC client can talk to VNC servers. In general, the VNC - proxy: - - - Bridges between the public network where the clients live and the private network where - VNC servers live. - - - Mediates token authentication. - - - Transparently deals with hypervisor-specific connection - details to provide a uniform client experience. -
- noVNC process - - - - - -
-
-
-
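The handshake described above hinges on the access_url: the token embedded in its query string is what the proxy later hands to nova-consoleauth for validation. A minimal sketch of pulling the token out of such a URL (the host, port, and token value are made-up examples):

```shell
# Example access_url as returned by the API; host, port, and token
# are illustrative values, not real endpoints
access_url="http://192.168.1.1:6080/vnc_auto.html?token=xyz"

# Strip everything up to and including "token=" to isolate the token
token="${access_url##*token=}"
echo "$token"
```

This is the value the noVNC client presents when it connects; the proxy maps it back to the private host and port of the instance's VNC server.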
- - About nova-consoleauth - - Both client proxies leverage a shared service, nova-consoleauth, to manage - token authentication. This service must be running - for either proxy to work. Many proxies of either type can be run - against a single nova-consoleauth service in a cluster - configuration. - Do not confuse the nova-consoleauth shared service with - nova-console, which is a XenAPI-specific - service that most recent VNC proxy architectures do not - use. -
-
- Typical deployment - A typical deployment has the following components: - - - A nova-consoleauth process. Typically runs on - the controller host. - - - One or more nova-novncproxy services. Supports - browser-based noVNC clients. For simple deployments, this - service typically runs on the same machine as nova-api because it operates - as a proxy between the public network and the private - compute host network. - - - One or more nova-xvpvncproxy - services. Supports the special Java client discussed here. - For simple deployments, this service typically runs on the - same machine as nova-api because it acts as a proxy between - the public network and the private compute host - network. - - - One or more compute hosts. These compute hosts must have - correctly configured options, as follows. - - -
-
- VNC configuration options - To customize the VNC console, use the following configuration options: - - - To support live migration, you cannot specify a specific IP - address for vncserver_listen, because that - IP address does not exist on the destination host. - - - - - - The vncserver_proxyclient_address defaults to - 127.0.0.1, which is the address of the compute host that - Compute instructs proxies to use when connecting to instance servers. - - - For all-in-one XenServer domU deployments, set this to 169.254.0.1. - For multi-host XenServer domU deployments, set to a dom0 management IP on the - same network as the proxies. - For multi-host libvirt deployments, set to a host management IP on the same - network as the proxies. - - - -
-
- - nova-novncproxy (noVNC) - - You must install the noVNC package, which contains the nova-novncproxy service. As root, run the following - command: - # apt-get install novnc - The service starts automatically on installation. - To restart the service, run: - # service novnc restart - The configuration option parameter should point to your - nova.conf file, which includes the - message queue server address and credentials. - By default, nova-novncproxy binds on - 0.0.0.0:6080. - To connect the service to your Compute deployment, add the following configuration options - to your nova.conf file: - - - - vncserver_listen=0.0.0.0 - - Specifies the address on which the VNC service should - bind. Make sure it is assigned one of the compute node - interfaces. This address is the one used by your domain - file. - <graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/> - - To use live migration, use the - 0.0.0.0 address. - - - - - vncserver_proxyclient_address=127.0.0.1 - - The address of the compute host that Compute instructs proxies to use when connecting - to instance vncservers. - - -
-
-
- Frequently asked questions about VNC access to virtual
- machines
-
- Q: What is the difference between
- nova-xvpvncproxy and nova-novncproxy?
- A: nova-xvpvncproxy, which ships with OpenStack Compute, is a proxy
- that supports a simple Java client. nova-novncproxy uses noVNC to provide VNC support through a web
- browser.
-
- Q: I want VNC support in the OpenStack dashboard. What services
- do I need?
- A: You need nova-novncproxy, nova-consoleauth, and correctly configured
- compute hosts.
-
- Q: When I use nova get-vnc-console or click
- on the VNC tab of the OpenStack dashboard, it hangs. Why?
- A: Make sure you are running nova-consoleauth (in addition to nova-novncproxy). The proxies
- rely on nova-consoleauth to validate tokens, and
- wait for a reply from it until a timeout is reached.
-
- Q: My VNC proxy worked fine during
- my all-in-one test, but now it doesn't work in a multi-host deployment.
- Why?
- A: The default options work for an all-in-one install,
- but changes must be made on your compute hosts once you
- start to build a cluster. As an example, suppose you have
- two servers:
- PROXYSERVER (public_ip=172.24.1.1, management_ip=192.168.1.1)
-COMPUTESERVER (management_ip=192.168.1.2)
- Your nova-compute configuration file must set the
- following values:
- # These flags help construct a connection data structure
-vncserver_proxyclient_address=192.168.1.2
-novncproxy_base_url=http://172.24.1.1:6080/vnc_auto.html
-xvpvncproxy_base_url=http://172.24.1.1:6081/console
-
-# This is the address where the underlying vncserver (not the proxy)
-# will listen for connections.
-vncserver_listen=192.168.1.2
-
- novncproxy_base_url and
- xvpvncproxy_base_url use a public IP;
- this is the URL that is ultimately returned to clients,
- which generally do not have access to your private
- network. Your PROXYSERVER must be able to reach
- vncserver_proxyclient_address,
- because that is the address over which the VNC connection
- is proxied.
- - - - - Q: My noVNC does not work with recent - versions of web browsers. Why? - - A: Make sure you have installed - python-numpy, which is required to - support a newer version of the WebSocket protocol - (HyBi-07+). - - - - Q: How do I adjust the dimensions of - the VNC window image in the OpenStack - dashboard? - A: These values are hard-coded in a Django HTML - template. To alter them, edit the - _detail_vnc.html template file. The - location of this file varies based on Linux distribution. On - Ubuntu 12.04, the file is at - /usr/share/pyshared/horizon/dashboards/nova/instances/templates/instances/_detail_vnc.html. - Modify the and - options, as follows: - <iframe src="{{ vnc_url }}" width="720" height="430"></iframe> - - -
-
diff --git a/doc/training-guides/basic-install-guide/common/section_keystone-concepts.xml b/doc/training-guides/basic-install-guide/common/section_keystone-concepts.xml deleted file mode 100644 index 568db36b..00000000 --- a/doc/training-guides/basic-install-guide/common/section_keystone-concepts.xml +++ /dev/null @@ -1,144 +0,0 @@ - -
-
- OpenStack Identity concepts
- The OpenStack Identity Service performs the
- following functions:
-
- Tracking users and their permissions.
-
- Providing a catalog of available services with their API
- endpoints.
-
- When installing the OpenStack Identity service, you must register
- each service in your OpenStack installation. The Identity service
- can then track which OpenStack services are installed, and
- where they are located on the network.
- To understand OpenStack Identity, you must understand the
- following concepts:
-
- User
-
- Digital representation of a person, system, or
- service who uses OpenStack cloud services. The
- Identity service validates that incoming requests
- are made by the user who claims to be making the
- call. Users have a login and may be assigned
- tokens to access resources. Users can be directly
- assigned to a particular tenant and behave as if
- they are contained in that tenant.
-
- Credentials
-
- Data that confirms the user's identity. For
- example: user name and password, user name and API
- key, or an authentication token provided by the
- Identity Service.
-
- Authentication
-
- The process of confirming the identity of a user.
- OpenStack Identity confirms an incoming request
- by validating a set of credentials supplied by the
- user.
- These credentials are initially a user name and
- password, or a user name and API key. When user
- credentials are validated, OpenStack Identity issues an
- authentication token which the user provides in subsequent
- requests.
-
- Token
-
- An alpha-numeric string of text used to access
- OpenStack APIs and resources. A token may be
- revoked at any time and is valid for a
- finite duration.
- While OpenStack Identity supports token-based
- authentication in this release, the intention is
- to support additional protocols in the future.
- Its main purpose is to be an integration service;
- it does not aspire to be a full-fledged identity store
- and management solution.
- - - - Tenant - - A container used to group or isolate resources. - Tenants also group or isolate identity objects. - Depending on the service operator, a tenant may map - to a customer, account, organization, or project. - - - - Service - - An OpenStack service, such as Compute (nova), - Object Storage (swift), or Image Service (glance). - It provides one or more endpoints in which - users can access resources and perform operations. - - - - Endpoint - - A network-accessible address where you access a service, - usually a URL address. If you are using an extension for - templates, an endpoint template can be created, which - represents the templates of all the consumable services - that are available across the regions. - - - - Role - - A personality with a defined set of user rights and - privileges to perform a specific set of operations. - In the Identity service, a token that is issued - to a user includes the list of roles. Services that are - being called by that user determine how they interpret the - set of roles a user has and to which operations or - resources each role grants access. - - - - Keystone Client - - A command line interface for the OpenStack - Identity API. For example, users can run the - keystone service-create and - keystone endpoint-create commands - to register services in their OpenStack - installations. - - - - The following diagram shows the OpenStack Identity process - flow: - - - - - - - - -
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage-account-reaper.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage-account-reaper.xml deleted file mode 100644 index 4be6751f..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage-account-reaper.xml +++ /dev/null @@ -1,39 +0,0 @@ - -
-
- Account reaper
- In the background, the account reaper removes data from the deleted accounts.
- A reseller marks an account for deletion by issuing a DELETE request on the account's
- storage URL. This action sets the status column of the account_stat table in the account
- database and replicas to DELETED, marking the account's data for deletion.
- Typically, no specific retention time or undelete capability is provided. However, you can set a
- delay_reaping value in the [account-reaper] section of the
- account-server.conf file to delay the actual deletion of data. At this time, to undelete you have
- to update the account database replicas directly, setting the status column to an empty
- string and updating the put_timestamp to be greater than the delete_timestamp.
- It's on the developers' to-do list to write a utility that performs this task, preferably
- through a REST call.
-
- The account reaper runs on each account server and scans the server occasionally for
- account databases marked for deletion. It only fires up on the accounts for which the server
- is the primary node, so that multiple account servers aren't trying to do it simultaneously.
- Using multiple servers to delete one account might improve the deletion speed but requires
- coordination to avoid duplication. Speed really is not a big concern with data deletion, and
- large accounts aren't deleted often.
- Deleting an account is simple. For each account container, all objects are deleted and
- then the container is deleted. Deletion requests that fail do not stop the overall process
- but cause it to fail eventually (for example, if an object delete
- times out, you will not be able to delete the container or the account). The account reaper
- keeps trying to delete an account until it is empty, at which point the database reclaim
- process within the db_replicator will remove the database files.
- A persistent error state may prevent the deletion of an object
- or container.
If this happens, you will see - a message such as “Account <name> has not been reaped - since <date>” in the log. You can control when this is - logged with the reap_warn_after value in the [account-reaper] - section of the account-server.conf file. The default value is 30 - days. -
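The delay_reaping and reap_warn_after options described above both live in the [account-reaper] section of the account-server.conf file. A hedged example with illustrative values (both options are expressed in seconds; the choices below are placeholders, not recommendations):

```ini
[account-reaper]
# Wait two days after the DELETE request before actually
# removing the account's data (illustrative value).
delay_reaping = 172800
# Log "has not been reaped" warnings after 30 days, which
# matches the documented default.
reap_warn_after = 2592000
```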
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage-arch.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage-arch.xml deleted file mode 100644 index 2866625a..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage-arch.xml +++ /dev/null @@ -1,79 +0,0 @@ - - -%openstack; -]> -
- Cluster architecture -
- Access tier - Large-scale deployments segment off an access tier, which is considered the Object Storage - system's central hub. The access tier fields the incoming API requests from clients and - moves data in and out of the system. This tier consists of front-end load balancers, - ssl-terminators, and authentication services. It runs the (distributed) brain of the - Object Storage system: the proxy server processes. -
- Object Storage architecture - - - - - -
- Because access servers are collocated in their own tier, you can scale out read/write - access regardless of the storage capacity. For example, if a cluster is on the public - Internet, requires SSL termination, and has a high demand for data access, you can - provision many access servers. However, if the cluster is on a private network and used - primarily for archival purposes, you need fewer access servers. - Since this is an HTTP addressable storage service, you may incorporate a load balancer - into the access tier. - Typically, the tier consists of a collection of 1U servers. These machines use a - moderate amount of RAM and are network I/O intensive. Since these systems field each - incoming API request, you should provision them with two high-throughput (10GbE) - interfaces - one for the incoming "front-end" requests and the other for the "back-end" - access to the object storage nodes to put and fetch data. -
- Factors to consider - For most publicly facing deployments as well as private deployments available - across a wide-reaching corporate network, you use SSL to encrypt traffic to the - client. SSL adds significant processing load to establish sessions between clients, - which is why you have to provision more capacity in the access layer. SSL may not be - required for private deployments on trusted networks. -
-
-
-
- Storage nodes
- In most configurations, each of the five zones should have an equal amount of storage
- capacity. Storage nodes use a reasonable amount of memory and CPU. Metadata needs to be
- readily available to return objects quickly. The object stores run services not only to
- field incoming requests from the access tier, but also to run replicators, auditors, and
- reapers. You can provision object stores with a single gigabit or 10 gigabit
- network interface depending on the expected workload and desired performance.
- Object Storage (swift) - - - - - -
- Currently, a 2 TB or 3 TB SATA disk delivers - good performance for the price. You can use desktop-grade - drives if you have responsive remote hands in the datacenter - and enterprise-grade drives if you don't. -
-
- Factors to consider
- You should keep in mind the desired I/O performance for single-threaded requests.
- This system does not use RAID, so a single disk handles each request for an object.
- Disk performance impacts single-threaded response rates.
- To achieve higher apparent throughput, the object storage system is designed to
- handle concurrent uploads/downloads. The network I/O capacity (1GbE, bonded 1GbE
- pair, or 10GbE) should match your desired concurrent throughput needs for reads and
- writes.
-
-
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage-characteristics.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage-characteristics.xml deleted file mode 100644 index dbf80a87..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage-characteristics.xml +++ /dev/null @@ -1,58 +0,0 @@ - -
-
- Object Storage characteristics
- The key characteristics of Object Storage are:
-
- All objects stored in Object Storage have a URL.
-
- All objects stored are replicated 3✕ in as-unique-as-possible zones, which
- can be defined as a group of drives, a node, a rack, and so on.
-
- All objects have their own metadata.
-
- Developers interact with the object storage system through a RESTful HTTP
- API.
-
- Object data can be located anywhere in the cluster.
-
- The cluster scales by adding additional nodes without sacrificing performance,
- which allows a more cost-effective linear storage expansion than fork-lift
- upgrades.
-
- Data doesn't have to be migrated to an entirely new storage system.
-
- New nodes can be added to the cluster without downtime.
-
- Failed nodes and disks can be swapped out without downtime.
-
- It runs on industry-standard hardware, such as Dell, HP, and Supermicro.
-
- Object Storage (swift) - - - - - -
-
- Developers can either write directly to the Swift API or use one of the many client
- libraries that exist for all of the popular programming languages, such as Java, Python,
- Ruby, and C#. Amazon S3 and Rackspace Cloud Files users should be very familiar with Object
- Storage. Users new to object storage systems will have to adjust to a different approach and
- mindset than those required for a traditional filesystem.
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage-components.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage-components.xml deleted file mode 100644 index ef53a40d..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage-components.xml +++ /dev/null @@ -1,235 +0,0 @@ - -
- Components - The components that enable Object Storage to deliver high availability, high - durability, and high concurrency are: - - - Proxy servers. Handle all of the incoming - API requests. - - - Rings. Map logical names of data to - locations on particular disks. - - - Zones. Isolate data from other zones. A - failure in one zone doesn’t impact the rest of the cluster because data is - replicated across zones. - - - Accounts and containers. Each account and - container are individual databases that are distributed across the cluster. An - account database contains the list of containers in that account. A container - database contains the list of objects in that container. - - - Objects. The data itself. - - - Partitions. A partition stores objects, - account databases, and container databases and helps manage locations where data - lives in the cluster. - - -
- Object Storage building blocks - - - - - -
-
- Proxy servers - Proxy servers are the public face of Object Storage and handle all of the incoming API - requests. Once a proxy server receives a request, it determines the storage node based - on the object's URL, for example, https://swift.example.com/v1/account/container/object. - Proxy servers also coordinate responses, handle failures, and coordinate - timestamps. - Proxy servers use a shared-nothing architecture and can be scaled as needed based on - projected workloads. A minimum of two proxy servers should be deployed for redundancy. - If one proxy server fails, the others take over. -
-
- Rings - A ring represents a mapping between the names of entities stored on disk and their - physical locations. There are separate rings for accounts, containers, and objects. When - other components need to perform any operation on an object, container, or account, they - need to interact with the appropriate ring to determine their location in the - cluster. - The ring maintains this mapping using zones, devices, partitions, and replicas. Each - partition in the ring is replicated, by default, three times across the cluster, and - partition locations are stored in the mapping maintained by the ring. The ring is also - responsible for determining which devices are used for handoff in failure - scenarios. - Data can be isolated into zones in the ring. Each partition replica is guaranteed to - reside in a different zone. A zone could represent a drive, a server, a cabinet, a - switch, or even a data center. - The partitions of the ring are equally divided among all of the devices in the Object - Storage installation. When partitions need to be moved around (for example, if a device - is added to the cluster), the ring ensures that a minimum number of partitions are moved - at a time, and only one replica of a partition is moved at a time. - You can use weights to balance the distribution of partitions on drives across the - cluster. This can be useful, for example, when differently sized drives are used in a - cluster. - The ring is used by the proxy server and several background processes (like - replication). -
- The <emphasis role="bold">ring</emphasis> - - - - - -
-
- These rings are externally managed, in that the server processes themselves do not
- modify the rings; instead, they are given new rings modified by other tools.
- The ring uses a configurable number of bits from an
- MD5 hash for a path as a partition index that designates a
- device. The number of bits kept from the hash is known as
- the partition power, and 2 to the partition power
- indicates the partition count. Partitioning the full MD5
- hash ring allows other parts of the cluster to work in
- batches of items at once, which ends up either more
- efficient or at least less complex than working with each
- item separately or the entire cluster all at once.
- Another configurable value is the replica count, which indicates how many of the
- partition-device assignments make up a single ring. For a given partition number, each
- replica's device will not be in the same zone as any other replica's device. Zones can
- be used to group devices based on physical locations, power separations, network
- separations, or any other attribute that would improve the availability of multiple
- replicas at the same time.
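The partition lookup described above can be sketched in a few lines of Python. This is a simplification: the real ring also mixes a cluster-wide hash path suffix into the MD5 input, and the path and partition power used here are illustrative.

```python
import hashlib

def get_partition(path, part_power):
    # Hash the object path and keep only the top `part_power` bits of
    # the first four digest bytes; the result indexes one of
    # 2 ** part_power partitions.
    digest = hashlib.md5(path.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - part_power)

# The same path always maps to the same partition, which is what lets
# every component of the cluster agree on where an object lives.
partition = get_partition("/account/container/object", 16)
```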
-
-
- Zones
- Object Storage allows configuring zones in order to isolate failure boundaries.
- Each data replica resides in a separate zone, if possible. At the smallest level, a zone
- could be a single drive or a grouping of a few drives. If there were five object storage
- servers, then each server would represent its own zone. Larger deployments would have an
- entire rack (or multiple racks) of object servers, each representing a zone. The goal of
- zones is to allow the cluster to tolerate significant outages of storage servers without
- losing all replicas of the data.
- As mentioned earlier, everything in Object Storage is stored, by default, three
- times. Swift will place each replica "as-uniquely-as-possible" to ensure both high
- availability and high durability. This means that when choosing a replica location,
- Object Storage chooses a server in an unused zone before an unused server in a zone that
- already has a replica of the data.
- Zones - - - - - -
- When a disk fails, replica data is automatically distributed to the other zones to - ensure there are three copies of the data. -
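The "as-uniquely-as-possible" placement can be sketched as a preference order: a server in a zone holding no replica yet comes first, then an unused server in an already-used zone. The server and zone values below are hypothetical.

```python
def choose_location(candidates, placed):
    """Pick the next replica location from `candidates` (dicts with
    'server' and 'zone' keys), preferring servers in unused zones,
    then unused servers in already-used zones."""
    used_zones = {c["zone"] for c in placed}
    used_servers = {c["server"] for c in placed}
    for c in candidates:            # first choice: an untouched zone
        if c["zone"] not in used_zones:
            return c
    for c in candidates:            # fallback: an untouched server
        if c["server"] not in used_servers:
            return c
    return candidates[0]            # last resort: reuse a server

candidates = [{"server": "s1", "zone": 1},
              {"server": "s2", "zone": 1},
              {"server": "s3", "zone": 2}]
```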
-
- Accounts and containers - Each account and container is an individual SQLite - database that is distributed across the cluster. An - account database contains the list of containers in - that account. A container database contains the list - of objects in that container. -
- Accounts and containers - - - - - -
- To keep track of object data locations, each account in the system has a database - that references all of its containers, and each container database references each - object. -
-
-
- Partitions
- A partition is a collection of stored data, including account databases, container
- databases, and objects. Partitions are core to the replication system.
- Think of a partition as a bin moving throughout a fulfillment center warehouse.
- Individual orders get thrown into the bin. The system treats that bin as a cohesive
- entity as it moves throughout the system. A bin is easier to deal with than many little
- things. It makes for fewer moving parts throughout the system.
- System replicators and object uploads/downloads operate on partitions. As the
- system scales up, its behavior continues to be predictable because the number of
- partitions is a fixed number.
- Implementing a partition is conceptually simple: a partition is just a
- directory sitting on a disk with a corresponding hash table of what it contains.
- Partitions - - - - - -
-
-
-
- Replicators
- In order to ensure that there are three copies of the data everywhere, replicators
- continuously examine each partition. For each local partition, the replicator compares
- it against the replicated copies in the other zones to see if there are any
- differences.
- The replicator knows if replication needs to take place by examining hashes. A hash
- file is created for each partition, which contains hashes of each directory in the
- partition. For a given partition, the hash
- files for each of the partition's copies are compared. If the hashes are different, then
- it is time to replicate, and the directory that needs to be replicated is copied
- over.
- This is where partitions come in handy. With fewer things in the system, larger
- chunks of data are transferred around (rather than lots of little TCP connections, which
- is inefficient) and there is a consistent number of hashes to compare.
- The cluster eventually reaches a consistent state in which the newest data
- wins.
- Replication - - - - - -
- If a zone goes down, one of the nodes containing a replica notices and proactively - copies data to a handoff location. -
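The per-partition hash comparison that drives replication can be sketched as follows; the suffix directory names and digests are made up for illustration.

```python
def suffixes_to_sync(local_hashes, remote_hashes):
    """Return the suffix directories whose content hash differs
    between the local partition and one remote replica; only these
    directories need to be copied over."""
    return sorted(suffix for suffix, digest in local_hashes.items()
                  if remote_hashes.get(suffix) != digest)

local = {"a1f": "h1", "b2c": "h2", "c3d": "h3"}
remote = {"a1f": "h1", "b2c": "stale", "c3d": "h3"}
# Only 'b2c' differs between the two replicas, so only that suffix
# directory would be copied to the remote server.
stale = suffixes_to_sync(local, remote)
```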
-
- Use cases - The following sections show use cases for object uploads and downloads and introduce the components. -
-
- Upload
- A client uses the REST API to make an HTTP request to PUT an object into an existing
- container. The cluster receives the request. First, the system must figure out where
- the data is going to go. To do this, the account name, container name, and object
- name are all used to determine the partition where this object should live.
- Then a lookup in the ring figures out which storage nodes contain the partitions in
- question.
- The data is then sent to each storage node where it is placed in the appropriate
- partition. At least two of the three writes must be successful before the client is
- notified that the upload was successful.
- Next, the container database is updated asynchronously to reflect that there is a new
- object in it.
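The "at least two of the three writes" rule above is a simple majority quorum, which can be sketched as:

```python
def quorum_met(successful_writes, replica_count=3):
    # A write is acknowledged to the client once a simple majority of
    # replica writes have succeeded (2 of 3 with the default replica
    # count).
    return successful_writes >= replica_count // 2 + 1
```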
- Object Storage in use - - - - - -
-
-
- Download - A request comes in for an account/container/object. Using the same consistent hashing, - the partition name is generated. A lookup in the ring reveals which storage nodes - contain that partition. A request is made to one of the storage nodes to fetch the - object and, if that fails, requests are made to the other nodes. -
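The lookup described above can be sketched with a toy ring. The replica2part2dev table and device names below are invented for illustration; a real ring is built by swift-ring-builder and is far larger.

```python
import hashlib

def get_nodes(replica2part2dev, devices, path, part_power):
    """Map an object path to the devices holding its replicas.
    `replica2part2dev` holds one row per replica, each mapping a
    partition number to an index into `devices`."""
    digest = hashlib.md5(path.encode("utf-8")).digest()
    part = int.from_bytes(digest[:4], "big") >> (32 - part_power)
    return [devices[row[part]] for row in replica2part2dev]

devices = ["node1/sdb", "node2/sdb", "node3/sdb", "node4/sdb"]
# 3 replicas x 4 partitions (part_power = 2), arranged so that each
# partition's replicas land on three distinct devices.
ring = [[0, 1, 2, 3],
        [1, 2, 3, 0],
        [2, 3, 0, 1]]
nodes = get_nodes(ring, devices, "/account/container/object", 2)
```

A download request would be sent to the first of these nodes and, on failure, to the others in turn.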
-
-
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage-features.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage-features.xml deleted file mode 100644 index b477f0db..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage-features.xml +++ /dev/null @@ -1,125 +0,0 @@ - -
- Features and benefits - - - - - Features - Benefits - - - - - Leverages commodity hardware - No lock-in, lower price/GB. - - - HDD/node failure agnostic - Self-healing, reliable, data redundancy protects from - failures. - - - Unlimited storage - Large and flat namespace, highly scalable read/write - access, able to serve content directly from storage system. - - - Multi-dimensional scalability - - Scale-out architecture: Scale vertically and - horizontally-distributed storage. Backs up and archives - large amounts of data with linear performance. - - - Account/container/object structure - No nesting, not a traditional file system: Optimized - for scale, it scales to multiple petabytes and billions of - objects. - - - Built-in replication 3✕ + - data redundancy (compared with 2✕ on RAID) - - A configurable number of accounts, containers and - object copies for high availability. - - - Easily add capacity (unlike - RAID resize) - Elastic data scaling with ease - - - No central database - Higher performance, no bottlenecks - - - RAID not required - Handle many small, random reads and writes efficiently - - - Built-in management utilities - Account management: Create, add, verify, and delete - users; Container management: Upload, download, and verify; - Monitoring: Capacity, host, network, log trawling, and - cluster health. - - - Drive auditing - Detect drive failures preempting data corruption - - - Expiring objects - Users can set an expiration time or a TTL on an object - to control access - - - Direct object access - Enable direct browser access to content, such as for - a control panel - - - Realtime visibility into client - requests - Know what users are requesting. - - - Supports S3 API - Utilize tools that were designed for the popular S3 - API. - - - Restrict containers per - account - Limit access to control usage by user. - - - Support for NetApp, Nexenta, - SolidFire - Unified support for block volumes using a variety of - storage systems. 
- - - Snapshot and backup API for - block volumes - Data protection and recovery for VM data. - - - Standalone volume API - available - Separate endpoint and API for integration with other - compute systems. - - - Integration with - Compute - Fully integrated with Compute for attaching block - volumes and reporting on usage. - - - -
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage-intro.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage-intro.xml deleted file mode 100644 index c774fd0f..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage-intro.xml +++ /dev/null @@ -1,22 +0,0 @@ - -
- Introduction to Object Storage - OpenStack Object Storage (code-named swift) is open source software for creating - redundant, scalable data storage using clusters of standardized servers to store petabytes - of accessible data. It is a long-term storage system for large amounts of static data that - can be retrieved, leveraged, and updated. Object Storage uses a distributed architecture - with no central point of control, providing greater scalability, redundancy, and permanence. - Objects are written to multiple hardware devices, with the OpenStack software responsible - for ensuring data replication and integrity across the cluster. Storage clusters scale - horizontally by adding new nodes. Should a node fail, OpenStack works to replicate its - content from other active nodes. Because OpenStack uses software logic to ensure data - replication and distribution across different devices, inexpensive commodity hard drives and - servers can be used in lieu of more expensive equipment. - Object Storage is ideal for cost effective, scale-out storage. It provides a fully - distributed, API-accessible storage platform that can be integrated directly into - applications or used for backup, archiving, and data retention. -
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage-replication.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage-replication.xml deleted file mode 100644 index 7da90def..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage-replication.xml +++ /dev/null @@ -1,111 +0,0 @@ - -
-
- Replication
- Because each replica in Object Storage functions
- independently and clients generally require only a simple
- majority of nodes to respond to consider an operation
- successful, transient failures like network partitions can
- quickly cause replicas to diverge. These differences are
- eventually reconciled by asynchronous, peer-to-peer replicator
- processes. The replicator processes traverse their local file
- systems and concurrently perform operations in a manner that
- balances load across physical disks.
- Replication uses a push model, with records and files
- generally only being copied from local to remote replicas.
- This is important because data on the node might not belong
- there (as in the case of handoffs and ring changes), and a
- replicator cannot know which data it should pull in from
- elsewhere in the cluster. Any node that contains data must
- ensure that data gets to where it belongs. The ring handles
- replica placement.
- To replicate deletions in addition to creations, every
- deleted record or file in the system is marked by a tombstone.
- The replication process cleans up tombstones after a time
- period known as the consistency
- window. This window defines the duration of the
- replication and how long a transient failure can remove a node
- from the cluster. Tombstone cleanup must be tied to
- replication to reach replica convergence.
- If a replicator detects that a remote drive has failed, the
- replicator uses the get_more_nodes
- interface for the ring to choose an alternate node with which
- to synchronize. The replicator can maintain desired levels of
- replication during disk failures, though some replicas might
- not be in an immediately usable location.
-
- The replicator does not maintain desired levels of
- replication when failures such as entire node failures
- occur; most failures are transient.
-
- The main replication types are:
-
- Database
- replication. Replicates containers and
- objects.
- - - Object replication. - Replicates object data. - - -
- Database replication - Database replication completes a low-cost hash - comparison to determine whether two replicas already - match. Normally, this check can quickly verify that most - databases in the system are already synchronized. If the - hashes differ, the replicator synchronizes the databases - by sharing records added since the last synchronization - point. - This synchronization point is a high water mark that - notes the last record at which two databases were known to - be synchronized, and is stored in each database as a tuple - of the remote database ID and record ID. Database IDs are - unique across all replicas of the database, and record IDs - are monotonically increasing integers. After all new - records are pushed to the remote database, the entire - synchronization table of the local database is pushed, so - the remote database can guarantee that it is synchronized - with everything with which the local database was - previously synchronized. - If a replica is missing, the whole local database file - is transmitted to the peer by using rsync(1) and is - assigned a new unique ID. - In practice, database replication can process hundreds - of databases per concurrency setting per second (up to the - number of available CPUs or disks) and is bound by the - number of database transactions that must be - performed. -
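The high-water-mark synchronization described above can be sketched as follows; the record contents and remote database ID are invented for illustration.

```python
def records_to_push(local_records, sync_points, remote_id):
    """Return the records added since the last sync with `remote_id`.
    Record IDs are monotonically increasing integers, so everything
    past the stored high-water mark is new to that peer."""
    last_synced = sync_points.get(remote_id, -1)
    return [(rid, data) for rid, data in local_records
            if rid > last_synced]

records = [(1, "put obj-a"), (2, "put obj-b"), (3, "delete obj-a")]
# The sync point records that this peer has seen everything up to
# record 1 ("db-remote-1" is a hypothetical remote database ID).
sync_points = {"db-remote-1": 1}
pending = records_to_push(records, sync_points, "db-remote-1")
```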
-
- Object replication - The initial implementation of object replication - performed an rsync to push data from a local partition to - all remote servers where it was expected to reside. While - this worked at small scale, replication times skyrocketed - once directory structures could no longer be held in RAM. - This scheme was modified to save a hash of the contents - for each suffix directory to a per-partition hashes file. - The hash for a suffix directory is no longer valid when - the contents of that suffix directory are modified. - The object replication process reads in hash files and - calculates any invalidated hashes. Then, it transmits the - hashes to each remote server that should hold the - partition, and only suffix directories with differing - hashes on the remote server are rsynced. After pushing - files to the remote server, the replication process - notifies it to recalculate hashes for the rsynced suffix - directories. - The number of uncached directories that object - replication must traverse, usually as a result of - invalidated suffix directory hashes, impedes performance. - To provide acceptable replication speeds, object - replication is designed to invalidate around 2 percent of - the hash space on a normal node each day.
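The suffix-hash comparison can be illustrated with a small sketch. This is a simplified model: real Swift hashes on-disk suffix directories and persists the results in a per-partition hashes file, but the comparison logic is the same in spirit.

```python
import hashlib

def suffix_hashes(partition):
    """partition: dict mapping a suffix directory name to the sorted
    list of file names it contains. Returns a hash per suffix."""
    return {
        suffix: hashlib.md5("".join(names).encode()).hexdigest()
        for suffix, names in partition.items()
    }

def suffixes_to_rsync(local, remote):
    """Only suffix directories whose hashes differ on the remote
    server (or that the remote lacks entirely) need to be rsynced."""
    return sorted(s for s, h in local.items() if remote.get(s) != h)

local = suffix_hashes({"0ab": ["f1", "f2"], "3fc": ["f3"]})
remote = suffix_hashes({"0ab": ["f1", "f2"], "3fc": ["f3", "f4"]})
out = suffixes_to_rsync(local, remote)  # only "3fc" differs
```

Because matching hashes let whole suffix directories be skipped, most replication passes touch only the small fraction of the hash space that was invalidated since the last pass.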
-
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage-ringbuilder.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage-ringbuilder.xml deleted file mode 100644 index 68e0f342..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage-ringbuilder.xml +++ /dev/null @@ -1,226 +0,0 @@ - -
- Ring-builder - Use the swift-ring-builder utility to build and manage rings. This - utility assigns partitions to devices and writes an optimized - Python structure to a gzipped, serialized file on disk for - transmission to the servers. The server processes occasionally - check the modification time of the file and reload in-memory - copies of the ring structure as needed. If you use a slightly - older version of the ring, one of the three replicas for a - partition subset will be incorrect because of the way the - ring-builder manages changes to the ring. You can work around - this issue. - The ring-builder also keeps its own builder file with the - ring information and additional data required to build future - rings. It is very important to keep multiple backup copies of - these builder files. One option is to copy the builder files - out to every server while copying the ring files themselves. - Another is to upload the builder files into the cluster - itself. If you lose the builder file, you have to create a new - ring from scratch. Nearly all partitions would be assigned to - different devices and, therefore, nearly all of the stored - data would have to be replicated to new locations. So, - recovery from a builder file loss is possible, but data would - be unreachable for an extended time. -
- Ring data structure - The ring data structure consists of three top level - fields: a list of devices in the cluster, a list of lists - of device ids indicating partition to device assignments, - and an integer indicating the number of bits to shift an - MD5 hash to calculate the partition for the hash. -
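A toy model of the three top-level fields, with illustrative values rather than a real cluster's (a real ring has a much larger partition count and serializes this structure to a gzipped file):

```python
from array import array

ring = {
    # 1. the list of devices in the cluster
    "devs": [
        {"id": 0, "zone": 1, "ip": "10.0.0.1", "device": "sdb"},
        {"id": 1, "zone": 2, "ip": "10.0.0.2", "device": "sdb"},
    ],
    # 2. one array('H') per replica; each entry maps partition -> device id
    "replica2part2dev_id": [array("H", [0, 1, 0, 1]),
                            array("H", [1, 0, 1, 0])],
    # 3. bits to right-shift an MD5 hash to get a partition: 32 - part_power
    "part_shift": 30,  # here part_power = 2, so 4 partitions
}

replicas = len(ring["replica2part2dev_id"])
partitions = len(ring["replica2part2dev_id"][0])
```

Reading the table: partition 2's first replica lives on device 0 and its second replica on device 1.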
-
- Partition assignment list - This is a list of array('H') of - device IDs. The outermost list contains an - array('H') for each replica. Each - array('H') has a length equal to - the partition count for the ring. Each integer in the - array('H') is an index into the - above list of devices. The partition list is known - internally to the Ring class as - _replica2part2dev_id. - So, to create a list of device dictionaries assigned to - a partition, the Python code would look like: - devices = [self.devs[part2dev_id[partition]] for -part2dev_id in self._replica2part2dev_id] - That code is a little simplistic because it does not - account for the removal of duplicate devices. If a ring - has more replicas than devices, a partition will have more - than one replica on a device. - array('H') is used for memory - conservation as there may be millions of - partitions.
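As noted, the one-liner does not remove duplicate devices. A slightly fuller sketch that does (an illustrative standalone function, not the Ring class's actual method):

```python
def devices_for_partition(devs, replica2part2dev_id, partition):
    """Return the distinct device dicts holding a partition's replicas,
    de-duplicating when a device holds more than one replica."""
    seen, result = set(), []
    for part2dev_id in replica2part2dev_id:
        dev_id = part2dev_id[partition]
        if dev_id not in seen:
            seen.add(dev_id)
            result.append(devs[dev_id])
    return result

devs = [{"id": 0}, {"id": 1}]
# More replicas (3) than devices (2): partition 0 lands on device 0 twice.
r2p2d = [[0, 1], [1, 0], [0, 1]]
assigned = devices_for_partition(devs, r2p2d, 0)
```

With three replicas and only two devices, the duplicate assignment of device 0 collapses to a single entry.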
-
- Replica counts - To support the gradual change in replica counts, a ring - can have a real number of replicas and is not restricted - to an integer number of replicas. - A fractional replica count is for the whole ring and not - for individual partitions. It indicates the average number - of replicas for each partition. For example, a replica - count of 3.2 means that 20 percent of partitions have four - replicas and 80 percent have three replicas. - The replica count is adjustable. - Example: - $ swift-ring-builder account.builder set_replicas 4 -$ swift-ring-builder account.builder rebalance - You must rebalance the replica ring in globally - distributed clusters. Operators of these clusters - generally want an equal number of replicas and regions. - Therefore, when an operator adds or removes a region, the - operator adds or removes a replica. Removing unneeded - replicas saves on the cost of disks. - You can gradually increase the replica count at a rate - that does not adversely affect cluster performance. - For example: - $ swift-ring-builder object.builder set_replicas 3.01 -$ swift-ring-builder object.builder rebalance -<distribute rings and wait>... - -$ swift-ring-builder object.builder set_replicas 3.02 -$ swift-ring-builder object.builder rebalance -<distribute rings and wait>... - Changes take effect after the ring is rebalanced. - Therefore, if you intend to change from 3 replicas to 3.01 - but you accidentally type 2.01, no data - is lost. - Additionally, swift-ring-builder - X.builder - create can now take a decimal argument for - the number of replicas.
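The fractional-replica arithmetic can be checked with a small helper (illustrative only; the ring-builder does this bookkeeping internally):

```python
def replica_distribution(replica_count, partitions):
    """For a fractional replica count, report how many partitions get
    the extra replica. E.g. 3.2 over 1000 partitions means 200
    partitions carry four replicas and 800 carry three."""
    base = int(replica_count)
    extra = round((replica_count - base) * partitions)
    return {base: partitions - extra, base + 1: extra} if extra else {base: partitions}

dist = replica_distribution(3.2, 1000)  # {3: 800, 4: 200}
```

The average over all partitions works out to the fractional count: (800*3 + 200*4) / 1000 = 3.2.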
-
- Partition shift value - The partition shift value is known internally to the - Ring class as _part_shift. This value - is used to shift an MD5 hash to calculate the partition - where the data for that hash should reside. Only the top - four bytes of the hash are used in this process. For - example, to compute the partition for the - /account/container/object path, the - Python code might look like the following code: - partition = unpack_from('>I', -md5('/account/container/object').digest())[0] >> -self._part_shift - For a ring generated with part_power P, the partition - shift value is 32 - P.
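The computation above can be made runnable as follows. Note this is a simplified sketch: a real deployment also mixes a cluster-specific hash path prefix and suffix into the hashed path, so the actual partition values will differ.

```python
import hashlib
from struct import unpack_from

PART_POWER = 18
PART_SHIFT = 32 - PART_POWER  # as stated above: 32 - P

def get_partition(path):
    """Take the top four bytes of the MD5 of the path as a big-endian
    unsigned int, then shift right so only part_power bits remain."""
    digest = hashlib.md5(path.encode()).digest()
    return unpack_from(">I", digest)[0] >> PART_SHIFT

part = get_partition("/account/container/object")
assert 0 <= part < 2 ** PART_POWER  # always within the ring's partition count
```

Because the shift discards all but the top `part_power` bits, the result is guaranteed to index into the partition assignment list.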
-
- Build the ring - The ring builder process includes these high-level - steps: - - - The utility calculates the number of partitions to - assign to each device based on the weight of the - device. For example, for a partition power - of 20, the ring has 1,048,576 partitions. One - thousand devices of equal weight each want - 1048.576 partitions (1,048,576 partitions divided - evenly across 1,000 devices). The devices are sorted by - the number of partitions they desire and kept in - order throughout the initialization - process. - - Each device is also assigned a random - tiebreaker value that is used when two devices - desire the same number of partitions. This - tiebreaker is not stored on disk anywhere, and - so two different rings created with the same - parameters will have different partition - assignments. For repeatable partition - assignments, - RingBuilder.rebalance() - takes an optional seed value that seeds the - Python pseudo-random number generator. - - - - The ring builder assigns each partition replica - to the device that requires the most partitions at - that point while keeping it as far away as - possible from other replicas. The ring builder - prefers to assign a replica to a device in a - region that does not already have a replica. If no - such region is available, the ring builder - searches for a device in a different zone, or on a - different server. If it does not find one, it - looks for a device with no replicas. Finally, if - all options are exhausted, the ring builder - assigns the replica to the device that has the - fewest replicas already assigned. - - The ring builder assigns multiple replicas - to one device only if the ring has fewer - devices than it has replicas. - - - - When building a new ring from an old ring, the - ring builder recalculates the desired number of - partitions that each device wants.
- - - The ring builder unassigns partitions and - gathers these partitions for reassignment, as - follows: - - The ring builder unassigns any - assigned partitions from any removed - devices and adds these partitions to - the gathered list. - - - The ring builder unassigns any - partition replicas that can be spread - out for better durability and adds - these partitions to the gathered list. - - - - The ring builder unassigns random - partitions from any devices that have - more partitions than they need and - adds these partitions to the gathered - list. - - - - - - The ring builder reassigns the gathered - partitions to devices by using a similar method to - the one described previously. - - - When the ring builder reassigns a replica to a - partition, the ring builder records the time of - the reassignment. The ring builder uses this value - when it gathers partitions for reassignment so - that no partition is moved twice in a configurable - amount of time. The RingBuilder class knows this - configurable amount of time as - min_part_hours. The ring - builder ignores this restriction for replicas of - partitions on removed devices because removal of a - device happens on device failure only, and - reassignment is the only choice. - - - These steps do not always perfectly rebalance a ring - due to the random nature of gathering partitions for - reassignment. To help reach a more balanced ring, the - rebalance process is repeated until near perfect (less - than 1 percent off) or when the balance does not improve - by at least 1 percent (indicating we probably cannot get - perfect balance due to wildly imbalanced zones or too many - partitions recently moved).
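The repeated-rebalance stopping rule can be sketched as follows (a hypothetical helper; swift-ring-builder applies this logic internally):

```python
def rebalance_until_stable(rebalance_once, max_rounds=10):
    """Repeat rebalancing until the ring is nearly perfect (< 1% off)
    or a round improves balance by less than 1 percentage point.

    rebalance_once: callable returning the ring's balance (percent off
    from a perfectly even distribution) after one rebalance pass.
    """
    last = float("inf")
    for _ in range(max_rounds):
        balance = rebalance_once()
        if balance < 1 or last - balance < 1:
            return balance
        last = balance
    return last

# Simulated balances from successive passes: stops once under 1% off.
passes = iter([12.0, 4.0, 0.5])
final = rebalance_until_stable(lambda: next(passes))  # 0.5
```

The second condition is what gives up gracefully when zones are wildly imbalanced and further passes cannot help.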
-
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage-troubleshoot.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage-troubleshoot.xml deleted file mode 100644 index 01d6f19a..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage-troubleshoot.xml +++ /dev/null @@ -1,107 +0,0 @@ - -
- Troubleshoot Object Storage - For Object Storage, everything is logged in /var/log/syslog (or messages on some distros). - Several settings enable further customization of logging, such as log_name, log_facility, - and log_level, within the object server configuration files. -
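For example, a minimal logging stanza might look like this (a hypothetical excerpt from the object-server configuration; confirm option names and defaults against your release's sample configuration files — the same options exist for the account and container servers):

```ini
[DEFAULT]
# Identifier used in syslog lines for this service
log_name = object-server
# Syslog facility; route LOG_LOCAL0 to a dedicated file in syslog config
log_facility = LOG_LOCAL0
# One of DEBUG, INFO, WARNING, ERROR, CRITICAL
log_level = INFO
```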
- Drive failure - In the event that a drive has failed, the first step is to make sure the drive is - unmounted. This will make it easier for Object Storage to work around the failure until - it has been resolved. If the drive is going to be replaced immediately, then it is just - best to replace the drive, format it, remount it, and let replication fill it up. - If the drive can’t be replaced immediately, then it is best to leave it - unmounted, and remove the drive from the ring. This will allow all the replicas - that were on that drive to be replicated elsewhere until the drive is replaced. - Once the drive is replaced, it can be re-added to the ring. - You can look at error messages in /var/log/kern.log for hints of drive failure. -
-
- Server failure - If a server is having hardware issues, it is a good idea to make sure the - Object Storage services are not running. This will allow Object Storage to - work around the failure while you troubleshoot. - If the server just needs a reboot, or a small amount of work that should only - last a couple of hours, then it is probably best to let Object Storage work - around the failure and get the machine fixed and back online. When the machine - comes back online, replication will make sure that anything that is missing - during the downtime will get updated. - If the server has more serious issues, then it is probably best to remove all - of the server’s devices from the ring. Once the server has been repaired and is - back online, the server’s devices can be added back into the ring. It is - important that the devices are reformatted before putting them back into the - ring as it is likely to be responsible for a different set of partitions than - before. -
-
- Detect failed drives - It has been our experience that when a drive is about to fail, error messages will spew into - /var/log/kern.log. There is a script called swift-drive-audit that can be run via cron - to watch for bad drives. If errors are detected, it will unmount the bad drive, so that - Object Storage can work around it. The script takes a configuration file with the - following settings: - - This script has only been tested on Ubuntu 10.04, so if you are using a - different distro or OS, some care should be taken before using in production. - -
-
- Emergency recovery of ring builder files - You should always keep a backup of swift ring builder files. However, if an - emergency occurs, this procedure may assist in returning your cluster to an - operational state. - Using existing swift tools, there is no way to recover a builder file from a - ring.gz file. However, if you have knowledge of Python, it is possible to - construct a builder file that is pretty close to the one you have lost. The - following is what you will need to do. - - This procedure is a last resort for emergency circumstances. It - requires knowledge of the Swift Python code and may not succeed. - - First, load the ring and a new ringbuilder object in a Python REPL: - >>> from swift.common.ring import RingData, RingBuilder ->>> ring = RingData.load('/path/to/account.ring.gz') - Now, start copying the data we have in the ring into the builder. - ->>> import math ->>> partitions = len(ring._replica2part2dev_id[0]) ->>> replicas = len(ring._replica2part2dev_id) - ->>> builder = RingBuilder(int(math.log(partitions, 2)), replicas, 1) ->>> builder.devs = ring.devs ->>> builder._replica2part2dev = ring._replica2part2dev_id ->>> builder._last_part_moves_epoch = 0 ->>> from array import array ->>> builder._last_part_moves = array('B', (0 for _ in xrange(partitions))) ->>> builder._set_parts_wanted() ->>> for d in builder._iter_devs(): - d['parts'] = 0 ->>> for p2d in builder._replica2part2dev: - for dev_id in p2d: - builder.devs[dev_id]['parts'] += 1 - This is the extent of the recoverable fields. For - min_part_hours you'll either have to remember what the - value you used was, or just make up a new one. - ->>> builder.change_min_part_hours(24) # or whatever you want it to be - Try some validation: if this doesn't raise an exception, you may feel some - hope. Not too much, though. - >>> builder.validate() - Save the builder.
- ->>> import pickle ->>> pickle.dump(builder.to_dict(), open('account.builder', 'wb'), protocol=2) - You should now have a file called 'account.builder' in the current working - directory. Next, run swift-ring-builder account.builder write_ring - and compare the new account.ring.gz to the account.ring.gz that you started - from. They probably won't be byte-for-byte identical, but if you load them up - in a REPL and their _replica2part2dev_id and - devs attributes are the same (or nearly so), then you're - in good shape. - Next, repeat the procedure for container.ring.gz - and object.ring.gz, and you might get usable builder files. -
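The "same or nearly so" check on _replica2part2dev_id can be quantified with a small helper (illustrative; it operates on plain nested lists rather than loaded ring objects, so you would pass in the attributes read from each ring in the REPL):

```python
def assignment_drift(a, b):
    """Fraction of (replica, partition) slots assigned to different
    devices between two rings' _replica2part2dev_id tables."""
    total = diff = 0
    for row_a, row_b in zip(a, b):
        for da, db in zip(row_a, row_b):
            total += 1
            diff += da != db
    return diff / total if total else 0.0

# One slot out of four differs between the two tables:
drift = assignment_drift([[0, 1, 0, 1]], [[0, 1, 1, 1]])  # 0.25
```

A drift near zero means the reconstructed builder will move little data when its ring is distributed; a large drift suggests the recovery went wrong.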
-
diff --git a/doc/training-guides/basic-install-guide/common/section_objectstorage_tenant-specific-image-storage.xml b/doc/training-guides/basic-install-guide/common/section_objectstorage_tenant-specific-image-storage.xml deleted file mode 100644 index a15e305a..00000000 --- a/doc/training-guides/basic-install-guide/common/section_objectstorage_tenant-specific-image-storage.xml +++ /dev/null @@ -1,53 +0,0 @@ - -
- Configure tenant-specific image locations with Object - Storage - For some deployers, it is not ideal to store all images in - one place to enable all tenants and users to access them. You - can configure the Image Service to store image data in - tenant-specific image locations. Then, only the following - tenants can use the Image Service to access the created image: - - The tenant who owns the image - - - Tenants that are defined in - swift_store_admin_tenants and - that have admin-level accounts - - - - To configure tenant-specific image locations - - Configure swift as your - default_store in the - glance-api.conf file. - - - Set these configuration options in the - glance-api.conf file: - - swift_store_multi_tenant. - Set to True to enable - tenant-specific storage locations. Default - is False. - - - swift_store_admin_tenants. - Specify a list of tenant IDs that can - grant read and write access to all Object - Storage containers that are created by the - Image Service. - - - - - With this configuration, images are stored in an - Object Storage service (swift) endpoint that is pulled - from the service catalog for the authenticated - user.
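Taken together, the options might be set as follows (a hypothetical excerpt from glance-api.conf; the tenant ID is a placeholder, and option names should be verified against your release):

```ini
[DEFAULT]
# Store image data in Object Storage
default_store = swift
# Enable per-tenant storage locations
swift_store_multi_tenant = True
# Tenants granted read/write access to all Image Service containers
swift_store_admin_tenants = 3a608e2f9b1a4e7cb02b755a4c7ee82c
```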
diff --git a/doc/training-guides/basic-install-guide/figures/NOVA_ARCH.png b/doc/training-guides/basic-install-guide/figures/NOVA_ARCH.png deleted file mode 100644 index 6206deaf..00000000 Binary files a/doc/training-guides/basic-install-guide/figures/NOVA_ARCH.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/figures/NOVA_ARCH.svg b/doc/training-guides/basic-install-guide/figures/NOVA_ARCH.svg deleted file mode 100644 index 74e22cb3..00000000 --- a/doc/training-guides/basic-install-guide/figures/NOVA_ARCH.svg +++ /dev/null @@ -1,5907 +0,0 @@
[Figure: NOVA_ARCH.svg — Nova architecture diagram. Labels include: AMQP messaging server; nova-api (public API server); nova-compute (uses libvirt or XenAPI to manage guests); nova-network (manages cloud networks, VLANs, and bridges); nova-objectstore (implements an S3-like API using files or, later, Swift); nova-scheduler (plans where to place new guests); cinder (disk images for virtual guests, filesystem or AoE); user authorization (SQL, LDAP, or fake LDAP using Redis); Internet; cloud users using tools to manage virtual guests; Internet end users using services provided by virtual guests; admin network; public network; disk images for virtual guests; virtual guests running in the cloud.]
diff --git a/doc/training-guides/basic-install-guide/figures/NOVA_install_arch.png b/doc/training-guides/basic-install-guide/figures/NOVA_install_arch.png deleted file mode 100644 index c7318156..00000000 Binary files a/doc/training-guides/basic-install-guide/figures/NOVA_install_arch.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/figures/NOVA_install_arch.svg b/doc/training-guides/basic-install-guide/figures/NOVA_install_arch.svg deleted file mode 100644 index 38eef824..00000000 --- a/doc/training-guides/basic-install-guide/figures/NOVA_install_arch.svg +++ /dev/null @@ -1,15676 +0,0 @@
[Figure: NOVA_install_arch.svg — Nova installation architecture diagram by David Pravec <alekibango@danix.org>, released under terms of the Apache License. Labels include: router, switch, KVM switch, VPN gateway, balancer, firewall, CARP (virtual IP), cable and fiber connections; web, DB, mail, FTP, doc (storage), virtual, monitor, app, DW, SMS operator, info client, and spare servers; OpenStack Compute services with a database server on a second node; Internet; a cloud of 2-4 virtual servers in one cluster with self-contained storage of virtual images; router, private switch, public switch.]
diff --git a/doc/training-guides/basic-install-guide/figures/basic-architecture-networking.svg b/doc/training-guides/basic-install-guide/figures/basic-architecture-networking.svg deleted file mode 100644 index fb0cb6d6..00000000 --- a/doc/training-guides/basic-install-guide/figures/basic-architecture-networking.svg +++ /dev/null @@ -1,184 +0,0 @@
[Figure: basic-architecture-networking.svg — two-node layout: controller (cloud controller, 192.168.0.10 / 10.0.0.10) and compute1 (compute node, 192.168.0.11 / 10.0.0.11).]
diff --git a/doc/training-guides/basic-install-guide/figures/basic-architecture.svg b/doc/training-guides/basic-install-guide/figures/basic-architecture.svg deleted file mode 100644 index 95405c16..00000000 --- a/doc/training-guides/basic-install-guide/figures/basic-architecture.svg +++ /dev/null @@ -1,1128 +0,0 @@
[Figure: basic-architecture.svg — controller node running keystone, glance-api, glance-registry, nova-api, nova-cert, nova-consoleauth, nova-novncproxy, nova-scheduler, MySQL, and Qpid/RabbitMQ; cloud nodes running nova-compute, nova-network, and KVM hosting virtual machines; connected by internal and external networks.]
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_1_register_endpoint.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_1_register_endpoint.png deleted file mode 100644 index 2ddbe6fd..00000000 Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_1_register_endpoint.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_2_keystone_server_ip.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_2_keystone_server_ip.png deleted file mode 100644 index 755b6c2d..00000000 Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_2_keystone_server_ip.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_3_keystone_authtoken.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_3_keystone_authtoken.png deleted file mode 100644 index 715d4028..00000000 Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_3_keystone_authtoken.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_4_service_endpoint_ip_address.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_4_service_endpoint_ip_address.png deleted file mode 100644 index 28526dea..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_4_service_endpoint_ip_address.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_5_region_name.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_5_region_name.png
deleted file mode 100644
index bdcd1b09..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/api-endpoint_5_region_name.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_1_configure-with-dbconfig-yes-no.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_1_configure-with-dbconfig-yes-no.png
deleted file mode 100644
index fc3faf32..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_1_configure-with-dbconfig-yes-no.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_2_db-types.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_2_db-types.png
deleted file mode 100644
index a819b3e5..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_2_db-types.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_3_connection_method.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_3_connection_method.png
deleted file mode 100644
index 1ce4bcac..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_3_connection_method.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_4_mysql_root_password.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_4_mysql_root_password.png
deleted file mode 100644
index 0b5a2501..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_4_mysql_root_password.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_5_mysql_app_password.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_5_mysql_app_password.png
deleted file mode 100644
index 4f3e5d9d..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_5_mysql_app_password.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_6_mysql_app_password_confirm.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_6_mysql_app_password_confirm.png
deleted file mode 100644
index df4dbf77..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_6_mysql_app_password_confirm.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_keep_admin_pass.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_keep_admin_pass.png
deleted file mode 100644
index 86f3dc1e..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_keep_admin_pass.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_used_for_remote_db.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_used_for_remote_db.png
deleted file mode 100644
index 7162d81f..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/dbconfig-common_used_for_remote_db.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/glance-common_pipeline_flavor.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/glance-common_pipeline_flavor.png
deleted file mode 100644
index 488b74ed..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/glance-common_pipeline_flavor.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_1_admin_token.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_1_admin_token.png
deleted file mode 100644
index 27877434..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_1_admin_token.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_2_register_admin_tenant_yes_no.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_2_register_admin_tenant_yes_no.png
deleted file mode 100644
index a18be1ed..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_2_register_admin_tenant_yes_no.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_3_admin_user_name.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_3_admin_user_name.png
deleted file mode 100644
index f13e0c00..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_3_admin_user_name.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_4_admin_user_email.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_4_admin_user_email.png
deleted file mode 100644
index a5f234eb..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_4_admin_user_email.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_5_admin_user_pass.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_5_admin_user_pass.png
deleted file mode 100644
index 845b50b4..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_5_admin_user_pass.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_6_admin_user_pass_confirm.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_6_admin_user_pass_confirm.png
deleted file mode 100644
index 68c7c2d8..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_6_admin_user_pass_confirm.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_7_register_endpoint.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_7_register_endpoint.png
deleted file mode 100644
index caa33a06..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/keystone_7_register_endpoint.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/neutron_1_plugin_selection.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/neutron_1_plugin_selection.png
deleted file mode 100644
index 27b5e758..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/neutron_1_plugin_selection.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/neutron_2_networking_type.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/neutron_2_networking_type.png
deleted file mode 100644
index fa74b57c..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/neutron_2_networking_type.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/neutron_3_hypervisor_ip.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/neutron_3_hypervisor_ip.png
deleted file mode 100644
index 3c23e001..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/neutron_3_hypervisor_ip.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/rabbitmq-host.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/rabbitmq-host.png
deleted file mode 100644
index 5ca93ae6..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/rabbitmq-host.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/rabbitmq-password.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/rabbitmq-password.png
deleted file mode 100644
index 94415062..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/rabbitmq-password.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/rabbitmq-user.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/rabbitmq-user.png
deleted file mode 100644
index d66d7f1f..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/rabbitmq-user.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_admin_password.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_admin_password.png
deleted file mode 100644
index e775f69b..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_admin_password.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_admin_tenant_name.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_admin_tenant_name.png
deleted file mode 100644
index a889bd87..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_admin_tenant_name.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_server_hostname.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_server_hostname.png
deleted file mode 100644
index 472b73bf..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_server_hostname.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_tenant_admin_user.png b/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_tenant_admin_user.png
deleted file mode 100644
index a9411231..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/debconf-screenshots/service_keystone_authtoken_tenant_admin_user.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/installguide_arch-neutron.png b/doc/training-guides/basic-install-guide/figures/installguide_arch-neutron.png
deleted file mode 100644
index 68332ec2..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/installguide_arch-neutron.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/installguide_arch-neutron.svg b/doc/training-guides/basic-install-guide/figures/installguide_arch-neutron.svg
deleted file mode 100644
index 95af50de..00000000
--- a/doc/training-guides/basic-install-guide/figures/installguide_arch-neutron.svg
+++ /dev/null
@@ -1,995 +0,0 @@
[deleted SVG source, 995 lines: three-node OpenStack Networking (neutron) architecture figure — controller node (10.0.0.11/24) with supporting services (MySQL or MariaDB, RabbitMQ or Qpid), basic services (Nova management, Neutron server with ML2 plug-in, Glance, Keystone, Horizon), and optional services (Cinder management, Swift proxy, Heat, Ceilometer core, Trove management); network node (management 10.0.0.21/24, instance tunnels 10.0.1.21/24, unnumbered external interface) running the ML2 plug-in, Layer 2 (OVS), Layer 3, and DHCP agents; compute node (management 10.0.0.31/24, instance tunnels 10.0.1.31/24) running the Nova hypervisor (KVM or QEMU), the ML2 plug-in with the OVS Layer 2 agent, and optionally the Ceilometer agent]
diff --git a/doc/training-guides/basic-install-guide/figures/installguide_arch-nova.png b/doc/training-guides/basic-install-guide/figures/installguide_arch-nova.png
deleted file mode 100644
index 5ac7f780..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/installguide_arch-nova.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/installguide_arch-nova.svg b/doc/training-guides/basic-install-guide/figures/installguide_arch-nova.svg
deleted file mode 100644
index fdfdc58a..00000000
--- a/doc/training-guides/basic-install-guide/figures/installguide_arch-nova.svg
+++ /dev/null
@@ -1,737 +0,0 @@
[deleted SVG source, 737 lines: two-node legacy networking (nova-network) architecture figure — controller node (management 10.0.0.11/24) with basic, optional, and supporting services, and a compute node (management 10.0.0.31/24, unnumbered external interface) running the Nova hypervisor (KVM or QEMU), Nova networking, and optionally the Ceilometer agent]
diff --git a/doc/training-guides/basic-install-guide/figures/installguide_neutron-initial-networks.png b/doc/training-guides/basic-install-guide/figures/installguide_neutron-initial-networks.png
deleted file mode 100644
index 8c39dc65..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/installguide_neutron-initial-networks.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/installguide_neutron-initial-networks.svg b/doc/training-guides/basic-install-guide/figures/installguide_neutron-initial-networks.svg
deleted file mode 100644
index 700317ba..00000000
--- a/doc/training-guides/basic-install-guide/figures/installguide_neutron-initial-networks.svg
+++ /dev/null
@@ -1,622 +0,0 @@
[deleted SVG source, 622 lines: initial networks figure — external virtual network ext-net (subnet ext-subnet 203.0.113.0/24, gateway 203.0.113.1) and tenant virtual network demo-net (subnet demo-subnet 192.168.1.0/24, gateway 192.168.1.1) joined by the tenant virtual router demo-router, with the network and compute nodes' instance tunnels interfaces (10.0.1.21/24, 10.0.1.31/24), the external physical router (203.0.113.1), switch, and the Internet]
diff --git a/doc/training-guides/basic-install-guide/figures/networking-interactions-swift.png b/doc/training-guides/basic-install-guide/figures/networking-interactions-swift.png
deleted file mode 100644
index f2e7eca6..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/networking-interactions-swift.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/networking-interactions-swift.svg b/doc/training-guides/basic-install-guide/figures/networking-interactions-swift.svg
deleted file mode 100644
index 8b50b6d4..00000000
--- a/doc/training-guides/basic-install-guide/figures/networking-interactions-swift.svg
+++ /dev/null
@@ -1,790 +0,0 @@
[deleted SVG source, 790 lines: Object Storage networking interactions figure — cloud controller/proxy server (swift-proxy-server, memcached) and storage nodes (swift-account-server, swift-container-server, swift-object-server, rsync, SQLite) connected over the public, storage, and optional replication networks, with the OpenStack dashboard and Identity on the tenant API side]
diff --git a/doc/training-guides/basic-install-guide/figures/nova-external-1.png b/doc/training-guides/basic-install-guide/figures/nova-external-1.png
deleted file mode 100644
index 5d0542d9..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/nova-external-1.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/nova-external-1.svg b/doc/training-guides/basic-install-guide/figures/nova-external-1.svg
deleted file mode 100644
index af239754..00000000
--- a/doc/training-guides/basic-install-guide/figures/nova-external-1.svg
+++ /dev/null
@@ -1,1176 +0,0 @@
[deleted SVG source, 1176 lines: nova-external-1 figure — Compute services (nova-compute, libvirt, KVM, nova-network, dnsmasq, iptables, Linux bridging and VLANs, openstack-dashboard with apache, novnc, memcache, Cinder with LVM, IET, and open-iscsi) interacting with a messaging service (RabbitMQ/Qpid/0MQ) and a database (MySQL/PostgreSQL); all network components interact through the Linux networking stack]
diff --git a/doc/training-guides/basic-install-guide/figures/nova-external-2.png b/doc/training-guides/basic-install-guide/figures/nova-external-2.png
deleted file mode 100644
index 1c1ce604..00000000
Binary files a/doc/training-guides/basic-install-guide/figures/nova-external-2.png and /dev/null differ
diff --git a/doc/training-guides/basic-install-guide/figures/nova-external-2.svg b/doc/training-guides/basic-install-guide/figures/nova-external-2.svg
deleted file mode 100644
index ebba5277..00000000
--- a/doc/training-guides/basic-install-guide/figures/nova-external-2.svg
+++ /dev/null
@@ -1,3 +0,0 @@
[deleted SVG source, 3 lines: nova-external-2 figure — the same compute stack using XCP with xapi, nginx, PostgreSQL, MySQL, a SAN, and Qpid; all Compute services interact with Qpid and PostgreSQL, and all network components interact through the Linux networking stack]
diff --git
a/doc/training-guides/basic-install-guide/figures/nova-external.graffle b/doc/training-guides/basic-install-guide/figures/nova-external.graffle
deleted file mode 100644
index fedd8f08..00000000
--- a/doc/training-guides/basic-install-guide/figures/nova-external.graffle
+++ /dev/null
@@ -1,2840 +0,0 @@
[deleted OmniGraffle Pro source, 2840 lines of plist data: editable source for the nova-external figures (created 2012-05-12, last modified 2012-06-26 by Lorin Hochstein) — two canvases laying out the compute-node service diagrams (nova-compute, nova-network, libvirt, KVM, dnsmasq, iptables, Linux bridging and VLANs, openstack-dashboard, novnc, memcache, cinder, IET/LVM, open-iscsi) backed by RabbitMQ and MySQL on the first canvas and Qpid and PostgreSQL on the second]
- ID - 37 - - ID - 41 - Points - - {475.5, 332.00000000144951} - {532.5, 332.00000000395329} - - Style - - stroke - - HeadArrow - 0 - Legacy - - Pattern - 1 - TailArrow - 0 - - - Tail - - ID - 39 - - - - Class - LineGraphic - Head - - ID - 39 - - ID - 40 - Points - - {369.50001850699363, 332} - {409.49999999300627, 332} - - Style - - stroke - - HeadArrow - 0 - Legacy - - TailArrow - 0 - - - Tail - - ID - 20 - - - - Bounds - {{410, 318}, {65, 28}} - Class - ShapedGraphic - ID - 39 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 open-iscsi} - - - - Class - LineGraphic - Head - - ID - 37 - - ID - 38 - OrthogonalBarAutomatic - - OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {639.49998149317275, 332.00000064555508} - {598.50000000682724, 332.00000135136133} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 2 - TailArrow - 0 - - - Tail - - ID - 34 - - - - Bounds - {{533, 318}, {65, 28}} - Class - ShapedGraphic - ID - 37 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 SAN} - - - - Bounds - {{640, 299}, {74, 66}} - Class - ShapedGraphic - ID - 34 - Shape - Circle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 cinder} - - - - Class - LineGraphic - Head - - ID - 32 - - ID - 33 - OrthogonalBarAutomatic - - 
OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {165.39999163629457, 282.85453153940466} - {165.39999033485893, 226.50000001059664} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 2 - TailArrow - 0 - - - Tail - - ID - 26 - - - - Bounds - {{128.39999, 198}, {74, 28}} - Class - ShapedGraphic - ID - 32 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 MySQL} - - - - Class - LineGraphic - Head - - ID - 17 - - ID - 31 - OrthogonalBarAutomatic - - OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {198.39998772886597, 452.09903960039782} - {299.00000227113406, 452.4009603860336} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 2 - Pattern - 1 - TailArrow - 0 - - - Tail - - ID - 29 - - - - Class - LineGraphic - Head - - ID - 29 - - ID - 30 - OrthogonalBarAutomatic - - OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {165.39999179029778, 357.65456345280523} - {165.39999027515182, 437.49999999719336} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 2 - TailArrow - 0 - - - Tail - - ID - 26 - - - - Bounds - {{132.89999, 438}, {65, 28}} - Class - ShapedGraphic - ID - 29 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 novnc} - - - - Class - LineGraphic - Head - - ID - 26 - - ID - 28 - OrthogonalBarAutomatic - - OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {86.500000018599465, 320.25454926273454} - {125.29997018139933, 320.25454839588974} - - Style - - stroke - - 
HeadArrow - 0 - Legacy - - LineType - 2 - TailArrow - 0 - - - Tail - - ID - 27 - - - - Bounds - {{21, 306.25454999999999}, {65, 28}} - Class - ShapedGraphic - ID - 27 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 nginx} - - - - Bounds - {{125.79998999999999, 283.35455000000002}, {79.200005000000004, 73.799994999999996}} - Class - ShapedGraphic - ID - 26 - Shape - Circle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs22 \cf0 openstack\ -dashboard} - - - - Class - LineGraphic - Head - - ID - 18 - - ID - 25 - OrthogonalBarAutomatic - - OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {332, 365.50001651959224} - {332, 381.49999998040767} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 2 - TailArrow - 0 - - - Tail - - ID - 20 - - - - Class - LineGraphic - Head - - ID - 16 - - ID - 23 - OrthogonalBarAutomatic - - OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {368.50001726494651, 642.5} - {428.49999998505336, 642.5} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 2 - TailArrow - 0 - - - Tail - - ID - 12 - - - - Class - LineGraphic - Head - - ID - 15 - - ID - 22 - OrthogonalBarAutomatic - - OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {333.5, 611.49998474990014} - {333.5, 568.50000000009993} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 2 - TailArrow - 0 - - - Tail - - ID - 12 - - - - Class - LineGraphic - Head - - ID - 14 - - ID - 21 - OrthogonalBarAutomatic - - OrthogonalBarPoint - 
{0, 0} - OrthogonalBarPosition - -1 - Points - - {298.50135809854953, 642.22442014250828} - {239.49998448615352, 641.75984239752881} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 2 - TailArrow - 0 - - - Tail - - ID - 12 - - - - Bounds - {{295, 299}, {74, 66}} - Class - ShapedGraphic - ID - 20 - Shape - Circle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 nova-compute} - - - - Class - LineGraphic - Head - - ID - 17 - - ID - 19 - Points - - {332, 410.5} - {332, 438} - - Style - - stroke - - HeadArrow - 0 - Legacy - - Pattern - 1 - TailArrow - 0 - - - Tail - - ID - 18 - - - - Bounds - {{299.5, 382}, {65, 28}} - Class - ShapedGraphic - ID - 18 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 xapi} - - - - Bounds - {{299.5, 438.5}, {65, 28}} - Class - ShapedGraphic - ID - 17 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 XCP} - - - - Bounds - {{429, 628.5}, {65, 28}} - Class - ShapedGraphic - ID - 16 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Linux\ -VLANs} - - - - Bounds - {{301, 
540}, {65, 28}} - Class - ShapedGraphic - ID - 15 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Linux\ -bridging} - - - - Bounds - {{174, 627.5}, {65, 28}} - Class - ShapedGraphic - ID - 14 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 iptables} - - - - Class - LineGraphic - Head - - ID - 11 - - ID - 13 - OrthogonalBarAutomatic - - OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {333.5, 673.50001525008133} - {333.5, 726.4999999999186} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 2 - TailArrow - 0 - - - Tail - - ID - 12 - - - - Bounds - {{299, 612}, {69, 61}} - Class - ShapedGraphic - ID - 12 - Shape - Circle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 nova-network} - - - - Bounds - {{301, 727}, {65, 28}} - Class - ShapedGraphic - ID - 11 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 dnsmasq} - - - - Bounds - {{601.5, 161}, {65, 61}} - Class - ShapedGraphic - ID - 10 - Magnets - - {0, 1} - {0, -1} - {1, 0} - {-1, 0} - - Shape - Cylinder - Style - - Text 
- - Text - {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470 -{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\qc - -\f0\fs20 \cf0 PostgreSQL} - VerticalPad - 0 - - - - GridInfo - - HPages - 1 - KeepToScale - - Layers - - - Lock - NO - Name - Layer 1 - Print - YES - View - YES - - - LayoutInfo - - Animate - NO - circoMinDist - 18 - circoSeparation - 0.0 - layoutEngine - dot - neatoSeparation - 0.0 - twopiSeparation - 0.0 - - Orientation - 2 - PrintOnePage - - RowAlign - 1 - RowSpacing - 36 - SheetTitle - Canvas 2 - UniqueID - 2 - VPages - 2 - - - SmartAlignmentGuidesActive - YES - SmartDistanceGuidesActive - YES - UseEntirePage - - WindowInfo - - CurrentSheet - 1 - ExpandedCanvases - - - name - Canvas 1 - - - Frame - {{1135, 269}, {1274, 1118}} - ListView - - OutlineWidth - 142 - RightSidebar - - ShowRuler - - Sidebar - - SidebarWidth - 120 - VisibleRegion - {{-192, 0}, {1139, 979}} - Zoom - 1 - ZoomValues - - - Canvas 1 - 1 - 1 - - - Canvas 2 - 1 - 1 - - - - - diff --git a/doc/training-guides/basic-install-guide/figures/swift_install_arch.png b/doc/training-guides/basic-install-guide/figures/swift_install_arch.png deleted file mode 100644 index 20b9ac62..00000000 Binary files a/doc/training-guides/basic-install-guide/figures/swift_install_arch.png and /dev/null differ diff --git a/doc/training-guides/basic-install-guide/figures/swift_install_arch.svg b/doc/training-guides/basic-install-guide/figures/swift_install_arch.svg deleted file mode 100644 index 54cdca18..00000000 --- a/doc/training-guides/basic-install-guide/figures/swift_install_arch.svg +++ /dev/null @@ -1,14932 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - image/svg+xml - - - - - - David Pravec <alekibango@danix.org> - - - - - released under terms of Apache License - - - - - - - - - - - - - - - - - - - - - - - - - - OpenStack Object Storage Stores container databases, account databases, and stored objects - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Proxy node Public Switch - - - - - - - Storage nodes - - - - - - - - - - - - - - - - - - - - - - - Private Switch - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-adding-proxy-server.xml b/doc/training-guides/basic-install-guide/object-storage/section_object-storage-adding-proxy-server.xml deleted file mode 100644 index 41a85b9e..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-adding-proxy-server.xml +++ /dev/null @@ -1,54 +0,0 @@ - -
- Add another proxy server - To provide additional reliability and bandwidth to your cluster, you can add proxy servers. You can set up an additional proxy node the same way that you set up the first proxy node, but with additional configuration steps. - After you have more than one proxy, you must load balance them; your storage endpoint (the URL that clients use to connect to your storage) also changes. You can select from different load-balancing strategies. For example, you could use round-robin DNS, or a software or hardware load balancer (such as pound) in front of the proxies. You can then point your storage URL to the load balancer, configure an initial proxy node, and complete these steps to add proxy servers. - - - Update the list of memcache servers in the /etc/swift/proxy-server.conf file for added proxy servers. If you run multiple memcache servers, use this pattern for the multiple IP:port listings in each proxy server configuration file: - 10.1.2.3:11211,10.1.2.4:11211 - [filter:cache] -use = egg:swift#memcache -memcache_servers = PROXY_LOCAL_NET_IP:11211 - - - Copy ring information to all nodes, including new proxy nodes. Also, ensure that the ring information gets to all storage nodes. - - - After you sync all nodes, make sure that the admin has keys in /etc/swift and that the ownership of the ring files is correct. - - -
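The comma-separated memcache_servers value can be generated from a host list rather than hand-edited, which avoids typos as proxies are added. A minimal sketch using the example addresses from the text above (they are placeholders, not real hosts):

```shell
# Build the memcache_servers value for the [filter:cache] section from a
# space-separated list of proxy-node IPs (placeholder addresses).
MEMCACHE_PORT=11211
PROXY_IPS="10.1.2.3 10.1.2.4"
MEMCACHE_SERVERS=""
for ip in $PROXY_IPS; do
    # Append "IP:port", adding a comma before every entry but the first.
    MEMCACHE_SERVERS="${MEMCACHE_SERVERS:+$MEMCACHE_SERVERS,}$ip:$MEMCACHE_PORT"
done
echo "memcache_servers = $MEMCACHE_SERVERS"
# prints: memcache_servers = 10.1.2.3:11211,10.1.2.4:11211
```

Paste the printed line into the [filter:cache] section of each proxy node's /etc/swift/proxy-server.conf.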
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-example-install-arch.xml b/doc/training-guides/basic-install-guide/object-storage/section_object-storage-example-install-arch.xml deleted file mode 100644 index 504da6fd..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-example-install-arch.xml +++ /dev/null @@ -1,56 +0,0 @@ - -
- Example of Object Storage installation architecture - - - Node: A host machine that runs one or more OpenStack Object Storage services. - - - Proxy node: Runs proxy services. - - - Storage node: Runs account, container, and object services. Contains the SQLite databases. - - - Ring: A set of mappings of OpenStack Object Storage data to physical devices. - - - Replica: A copy of an object. By default, three copies are maintained in the cluster. - - - Zone: A logically separate section of the cluster, related to independent failure characteristics. - - - Region (optional): A logically separate section of the cluster, representing distinct physical locations such as cities or countries. Similar to zones, but representing physical locations of portions of the cluster rather than logical segments. - - - To increase reliability and performance, you can add proxy servers. - This document describes each storage node as a separate zone in the ring. At a minimum, five zones are recommended. A zone is a group of nodes that are as isolated as possible from other nodes (separate servers, network, power, even geography). The ring guarantees that every replica is stored in a separate zone. This diagram shows one possible configuration for a minimal installation: - - - - -
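The guarantee that every replica lands in a separate zone can be pictured with a toy placement rule. This round-robin sketch only illustrates the property; the real ring builder assigns partitions by hash and device weight, not round-robin:

```shell
# Toy illustration: three replicas spread across five zones so that no
# two replicas of a partition share a zone (NOT swift's real algorithm).
ZONES=5
REPLICAS=3
for part in 0 1 2; do
    assigned=""
    r=0
    while [ $r -lt $REPLICAS ]; do
        zone=$(( (part + r) % ZONES + 1 ))
        assigned="${assigned:+$assigned }z$zone"
        r=$((r + 1))
    done
    echo "partition $part -> $assigned"
done
# prints: partition 0 -> z1 z2 z3
#         partition 1 -> z2 z3 z4
#         partition 2 -> z3 z4 z5
```

With five zones and three replicas, losing any single zone leaves at least two replicas of every partition available.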
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-install-config-proxy-node.xml b/doc/training-guides/basic-install-guide/object-storage/section_object-storage-install-config-proxy-node.xml deleted file mode 100644 index 1f4895eb..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-install-config-proxy-node.xml +++ /dev/null @@ -1,200 +0,0 @@ - -
- Install and configure the proxy node - The proxy server takes each request and looks up locations - for the account, container, or object and routes the requests - correctly. The proxy server also handles API requests. You - enable account management by configuring it in the - /etc/swift/proxy-server.conf file. - - The Object Storage processes run under a separate user - and group, set by configuration options, and referred to as - swift:swift. The default - user is swift. - - - - Install swift-proxy service: - # apt-get install swift swift-proxy memcached python-keystoneclient python-swiftclient python-webob - # yum install openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token - # zypper install openstack-swift-proxy memcached python-swiftclient python-keystoneclient python-xml - - - Modify memcached to listen on the default interface - on a local, non-public network. Edit this line in - the /etc/memcached.conf file: - -l 127.0.0.1 - Change it to: - -l PROXY_LOCAL_NET_IP - - - Modify memcached to listen on the default interface - on a local, non-public network. 
Edit - the /etc/sysconfig/memcached file: - OPTIONS="-l PROXY_LOCAL_NET_IP" - MEMCACHED_PARAMS="-l PROXY_LOCAL_NET_IP" - - - Restart the memcached service: - # service memcached restart - - - Start the memcached service and configure it to start when - the system boots: - # service memcached start -# chkconfig memcached on - - - Create - Edit - /etc/swift/proxy-server.conf: - [DEFAULT] -bind_port = 8080 -user = swift - -[pipeline:main] -pipeline = catch_errors gatekeeper healthcheck proxy-logging cache authtoken keystoneauth proxy-logging proxy-server - -[app:proxy-server] -use = egg:swift#proxy -allow_account_management = true -account_autocreate = true - -[filter:keystoneauth] -use = egg:swift#keystoneauth -operator_roles = Member,admin,swiftoperator - -[filter:authtoken] -paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory - -# Delaying the auth decision is required to support token-less -# usage for anonymous referrers ('.r:*'). -delay_auth_decision = true - -# auth_* settings refer to the Keystone server -auth_protocol = http -auth_host = controller -auth_uri = http://controller:5000 - -# the service tenant and swift username and password created in Keystone -admin_tenant_name = service -admin_user = swift -admin_password = SWIFT_PASS - -[filter:healthcheck] -use = egg:swift#healthcheck - -[filter:cache] -use = egg:swift#memcache -set log_name = cache - -[filter:catch_errors] -use = egg:swift#catch_errors - -[filter:gatekeeper] -use = egg:swift#gatekeeper - -[filter:proxy-logging] -use = egg:swift#proxy_logging - - - - If you run multiple memcache servers, put the - multiple IP:port listings in the [filter:cache] - section of the - /etc/swift/proxy-server.conf file: - 10.1.2.3:11211,10.1.2.4:11211 - Only the proxy server uses memcache. - - - keystoneclient.middleware.auth_token: You - must configure auth_uri to point to the public - identity endpoint. Otherwise, clients might not be able to - authenticate against an admin endpoint. 
- - - - - Create the account, container, and object rings. The builder command creates a builder file with a few parameters. The parameter with the value of 18 is the partition power: the ring is sized to 2^18 partitions. Set this partition power based on the total amount of storage you expect your entire ring to use. The value 3 is the number of replicas of each object, and the last value is the minimum number of hours before a partition can be moved again. - # cd /etc/swift -# swift-ring-builder account.builder create 18 3 1 -# swift-ring-builder container.builder create 18 3 1 -# swift-ring-builder object.builder create 18 3 1 - - - For every storage device on each node, add entries to each ring: - # swift-ring-builder account.builder add zZONE-STORAGE_LOCAL_NET_IP:6002[RSTORAGE_REPLICATION_NET_IP:6005]/DEVICE 100 -# swift-ring-builder container.builder add zZONE-STORAGE_LOCAL_NET_IP_1:6001[RSTORAGE_REPLICATION_NET_IP:6004]/DEVICE 100 -# swift-ring-builder object.builder add zZONE-STORAGE_LOCAL_NET_IP_1:6000[RSTORAGE_REPLICATION_NET_IP:6003]/DEVICE 100 - - Omit the optional STORAGE_REPLICATION_NET_IP parameter if you do not want to use a dedicated network for replication. - - For example, suppose a storage node has a partition in Zone 1 on IP 10.0.0.1 and has address 10.0.1.1 on the replication network. If the mount point of this partition is /srv/node/sdb1 and the path in /etc/rsyncd.conf is /srv/node/, then DEVICE is sdb1 and the commands are: - # swift-ring-builder account.builder add z1-10.0.0.1:6002R10.0.1.1:6005/sdb1 100 -# swift-ring-builder container.builder add z1-10.0.0.1:6001R10.0.1.1:6004/sdb1 100 -# swift-ring-builder object.builder add z1-10.0.0.1:6000R10.0.1.1:6003/sdb1 100 - - If you assume five zones with one node for each zone, start ZONE at 1. For each additional node, increment ZONE by 1.
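With one node per zone, the zZONE-IP:PORT/DEVICE arguments follow a regular pattern, so the add commands can be generated rather than typed. A sketch for five hypothetical storage nodes at 10.0.0.1 through 10.0.0.5, each with a single sdb1 device and no separate replication network; the commands are echoed for review rather than executed:

```shell
# Generate the object-ring "add" commands for five single-node zones.
# IPs, port, and device name are illustrative; adjust before running.
WEIGHT=100
for zone in 1 2 3 4 5; do
    ip="10.0.0.$zone"
    echo "swift-ring-builder object.builder add z$zone-$ip:6000/sdb1 $WEIGHT"
done
```

Once the addresses are correct, copy the printed lines (or pipe the output to sh), and repeat with ports 6001 and 6002 for the container and account rings.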
- - - - Verify the ring contents for each ring: - # swift-ring-builder account.builder -# swift-ring-builder container.builder -# swift-ring-builder object.builder - - - Rebalance the rings: - # swift-ring-builder account.builder rebalance -# swift-ring-builder container.builder rebalance -# swift-ring-builder object.builder rebalance - - Rebalancing rings can take some time. - - - - Copy the account.ring.gz, - container.ring.gz, and - object.ring.gz files to each - of the Proxy and Storage nodes in /etc/swift. - - - Make sure the swift user owns all configuration files: - # chown -R swift:swift /etc/swift - - - Restart the Proxy service: - # service swift-proxy restart - - - Start the Proxy service and configure it to start when the - system boots: - # service openstack-swift-proxy start -# chkconfig openstack-swift-proxy on - - -
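As a quick sanity check on the create 18 3 1 sizing used above, the arithmetic works out as follows:

```shell
# Partition power 18 sizes the ring to 2^18 partitions; each partition
# is stored REPLICAS times, one copy per distinct zone.
PART_POWER=18
REPLICAS=3
PARTITIONS=$((1 << PART_POWER))
echo "$PARTITIONS partitions, $((PARTITIONS * REPLICAS)) replica assignments"
# prints: 262144 partitions, 786432 replica assignments
```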
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-install-config-storage-nodes.xml b/doc/training-guides/basic-install-guide/object-storage/section_object-storage-install-config-storage-nodes.xml deleted file mode 100644 index 99031d15..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-install-config-storage-nodes.xml +++ /dev/null @@ -1,117 +0,0 @@ - -
- Install and configure storage nodes - - Object Storage works on any file system that supports Extended Attributes (XATTRS). XFS shows the best overall performance for the swift use case after considerable testing and benchmarking at Rackspace, and it is the only file system that has been thoroughly tested. See the OpenStack Configuration Reference for additional recommendations. - - - - Install the storage node packages: - - # apt-get install swift swift-account swift-container swift-object xfsprogs - # yum install openstack-swift-account openstack-swift-container \ - openstack-swift-object xfsprogs xinetd - # zypper install openstack-swift-account openstack-swift-container \ - openstack-swift-object python-xml xfsprogs xinetd - - - For each device on the node that you want to use for storage, set up the XFS volume (/dev/sdb is used as an example). Use a single partition per drive. For example, in a server with 12 disks, you might use one or two disks for the operating system; do not touch those in this step. Partition each of the other 10 or 11 disks with a single partition, then format it as XFS.
- # fdisk /dev/sdb -# mkfs.xfs /dev/sdb1 -# echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab -# mkdir -p /srv/node/sdb1 -# mount /srv/node/sdb1 -# chown -R swift:swift /srv/node - - - Create /etc/rsyncd.conf: - Replace the content of /etc/rsyncd.conf with: - uid = swift -gid = swift -log file = /var/log/rsyncd.log -pid file = /var/run/rsyncd.pid -address = STORAGE_LOCAL_NET_IP - -[account] -max connections = 2 -path = /srv/node/ -read only = false -lock file = /var/lock/account.lock - -[container] -max connections = 2 -path = /srv/node/ -read only = false -lock file = /var/lock/container.lock - -[object] -max connections = 2 -path = /srv/node/ -read only = false -lock file = /var/lock/object.lock - - - (Optional) If you want to separate rsync and replication traffic onto a dedicated replication network, set STORAGE_REPLICATION_NET_IP instead of STORAGE_LOCAL_NET_IP: - address = STORAGE_REPLICATION_NET_IP - - - Edit the following line in /etc/default/rsync: - RSYNC_ENABLE=true - - - Edit the following line in /etc/xinetd.d/rsync: - disable = no - - - Start the rsync service: - # service rsync start - Start the xinetd service: - # service xinetd start - Start the xinetd service and configure it to start when the system boots: - # service xinetd start -# chkconfig xinetd on - - The rsync service requires no authentication, so run it on a local, private network. - - - - Create the swift recon cache directory and set its permissions: - # mkdir -p /var/swift/recon -# chown -R swift:swift /var/swift/recon - - -
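On nodes with many data disks, the partition, format, and mount steps repeat per device. A sketch that prints the matching /etc/fstab entries for a hypothetical pair of devices (the fdisk, mkfs.xfs, mkdir, and mount commands above still apply to each one):

```shell
# Print one fstab entry per swift data partition. The device names are
# examples; list the partitions that actually exist on your node.
for dev in sdb1 sdc1; do
    echo "/dev/$dev /srv/node/$dev xfs noatime,nodiratime,nobarrier,logbufs=8 0 0"
done
```

Redirect the output into /etc/fstab only after checking the device names against your hardware.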
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-install.xml b/doc/training-guides/basic-install-guide/object-storage/section_object-storage-install.xml deleted file mode 100644 index a3e5f1a3..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-install.xml +++ /dev/null @@ -1,129 +0,0 @@ - -
- Install Object Storage - Though you can install OpenStack Object Storage for development or - testing purposes on one server, a multiple-server installation enables - the high availability and redundancy you want in a production - distributed object storage system. - To perform a single-node installation for development purposes from - source code, use the Swift All In One instructions (Ubuntu) or DevStack - (multiple distros). See http://swift.openstack.org/development_saio.html for manual - instructions or http://devstack.org for all-in-one including authentication - with the Identity Service (keystone) v2.0 API. -
- Before you begin - Have a copy of the operating system installation media available - if you are installing on a new server. - These steps assume you have set up repositories for packages for - your operating system as shown in - . - This document demonstrates how to install a cluster by using the - following types of nodes: - - - One proxy node which runs the - swift-proxy-server - processes. The proxy server proxies requests to the - appropriate storage nodes. - - - - Five storage nodes that run the swift-account-server, - swift-container-server, - and swift-object-server - processes which control storage of the account - databases, the container databases, as well as the - actual stored objects. - - - - Fewer storage nodes can be used initially, but a minimum of - five is recommended for a production cluster. - -
-
- General installation steps - - - Create a swift user that the Object - Storage Service can use to authenticate with the Identity - Service. Choose a password and specify an email address for - the swift user. Use the - service tenant and give the user the - admin role: - $ keystone user-create --name swift --pass SWIFT_PASS -$ keystone user-role-add --user swift --tenant service --role admin - Replace SWIFT_PASS with a - suitable password. - - - Create a service entry for the Object Storage - Service: - $ keystone service-create --name swift --type object-store \ - --description "OpenStack Object Storage" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | OpenStack Object Storage | -| id | eede9296683e4b5ebfa13f5166375ef6 | -| name | swift | -| type | object-store | -+-------------+----------------------------------+ - - The service ID is randomly generated and is different - from the one shown here. - - - - Specify an API endpoint for the Object Storage Service by - using the returned service ID. When you specify an endpoint, - you provide URLs for the public API, internal API, and admin - API. 
In this guide, the controller host - name is used: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ object-store / {print $2}') \ - --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \ - --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \ - --adminurl http://controller:8080 \ - --region regionOne -+-------------+---------------------------------------------------+ -| Property | Value | -+-------------+---------------------------------------------------+ -| adminurl | http://controller:8080/ | -| id | 9e3ce428f82b40d38922f242c095982e | -| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s | -| publicurl | http://controller:8080/v1/AUTH_%(tenant_id)s | -| region | regionOne | -| service_id | eede9296683e4b5ebfa13f5166375ef6 | -+-------------+---------------------------------------------------+ - - - Create the configuration directory on all nodes: - # mkdir -p /etc/swift - - - Create /etc/swift/swift.conf on all - nodes: - - - - - The prefix and suffix value in /etc/swift/swift.conf - should be set to some random string of text to be used as a salt - when hashing to determine mappings in the ring. This file must - be the same on every node in the cluster! - - Next, set up your storage nodes and proxy node. This example uses - the Identity Service for the common authentication piece. -
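The endpoint-create command above embeds an awk filter that pulls the service ID out of the keystone service-list table. A standalone sketch of how that filter works; the table row below only mimics the service-list output format, with the ID taken from the example output in this guide:

```shell
#!/bin/sh
# Mimic one row of `keystone service-list` output (pipe-delimited table).
cat > /tmp/service-list.txt <<'EOF'
| eede9296683e4b5ebfa13f5166375ef6 | swift | object-store | OpenStack Object Storage |
EOF
# Field 2 of the matching row is the service ID, exactly as in the guide:
SERVICE_ID=$(awk '/ object-store / {print $2}' /tmp/service-list.txt)
echo "$SERVICE_ID"
```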
-
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-network-planning.xml b/doc/training-guides/basic-install-guide/object-storage/section_object-storage-network-planning.xml deleted file mode 100644 index be87c193..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-network-planning.xml +++ /dev/null @@ -1,83 +0,0 @@ - -
- Plan networking for Object Storage
- To conserve network resources and to help network
- administrators understand the networks and public IP addresses
- required for access to the APIs and to the storage network,
- this section offers recommendations and
- required minimum sizes. Throughput of at least 1000 Mbps is
- suggested.
- This guide describes the following networks:
- 
- A mandatory public network. Connects to the proxy
- server.
- 
- 
- A mandatory storage network. Not accessible from outside
-the cluster. All nodes connect to this network.
- 
- 
- An optional replication network. Not accessible from
- outside the cluster. Dedicated to replication traffic among
- storage nodes. Must be configured in the Ring.
- 
- 
- This figure shows the basic architecture for the public
- network, the storage network, and the optional replication
- network.
- 
- 
- 
- 
- 
- By default, all of the OpenStack Object Storage services, as
- well as the rsync daemon on the storage nodes, are configured to
- listen on their STORAGE_LOCAL_NET IP
- addresses.
- If you configure a replication network in the Ring, the
- Account, Container and Object servers listen on both the
- STORAGE_LOCAL_NET and
- STORAGE_REPLICATION_NET IP addresses. The
- rsync daemon only listens on the
- STORAGE_REPLICATION_NET IP address.
- 
- 
- Public Network (Publicly routable IP range)
- 
- Provides public IP accessibility to the API endpoints
- within the cloud infrastructure.
- Minimum size: one IP address for each proxy
- server.
- 
- 
- 
- Storage Network (RFC1918 IP Range, not publicly
- routable)
- 
- Manages all inter-server communications within the
- Object Storage infrastructure.
- Minimum size: one IP address for each storage node and
- proxy server.
- Recommended size: as above, with room for expansion to
- your largest expected cluster size: for example, 255
- addresses or a /24 CIDR block.
- - - - Replication Network (RFC1918 IP Range, not publicly - routable) - - Manages replication-related communications among storage - servers within the Object Storage infrastructure. - Recommended size: as for - STORAGE_LOCAL_NET. - - - -
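The sizing guidance above reduces to simple arithmetic. A sketch for the node counts used in this guide (one proxy, five storage nodes); adjust the counts for your deployment, and note the per-network minimum rules are taken from the descriptions above:

```shell
#!/bin/sh
# Minimum address counts per network, per the sizing rules above.
PROXIES=1
STORAGE_NODES=5
PUBLIC_MIN=$PROXIES                         # one public IP per proxy server
STORAGE_MIN=$((PROXIES + STORAGE_NODES))    # every node joins the storage net
REPLICATION_MIN=$STORAGE_NODES              # storage nodes only, if used
echo "public=$PUBLIC_MIN storage=$STORAGE_MIN replication=$REPLICATION_MIN"
```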
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-sys-requirements.xml b/doc/training-guides/basic-install-guide/object-storage/section_object-storage-sys-requirements.xml deleted file mode 100644 index 010f9c79..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-sys-requirements.xml +++ /dev/null @@ -1,103 +0,0 @@ - -
- - System requirements for Object Storage - Hardware: OpenStack Object - Storage is designed to run on commodity hardware. - - When you install only the Object Storage and Identity - Service, you cannot use the dashboard unless you also - install Compute and the Image Service. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Hardware recommendations
ServerRecommended HardwareNotes
Object Storage object servers
- Processor: dual quad
- core
- Memory: 8 or 12 GB RAM
- Disk space: optimized for cost per GB
- Network: one 1 GB Network Interface Card
- (NIC)
- The amount of disk space depends on how much
- you can fit into the rack efficiently. You
- want to optimize these for best cost per GB
- while still getting industry-standard failure
- rates. As an example, Rackspace runs Cloud Files
- storage servers on fairly generic 4U servers
- with 24 2T SATA drives and 8 cores of
- processing power. RAID on the storage drives
- is not required and not recommended. Swift's
- disk usage pattern is the worst case possible
- for RAID, and performance degrades very
- quickly using RAID 5 or 6.
- Most services
- support either a worker or concurrency value
- in the settings. This allows the services to
- make effective use of the cores
- available.
Object Storage container/account
- servers
- Processor: dual quad core
- Memory: 8 or 12 GB RAM
- Network: one 1 GB Network Interface Card
- (NIC)
- Optimized for IOPS due to tracking with
- SQLite databases.
Object Storage proxy server
- Processor: dual quad
- core
- Network: one 1 GB Network
- Interface Card (NIC)
- Higher network throughput offers better
- performance for supporting many API
- requests.
- Optimize your proxy servers for best CPU
- performance. The Proxy Services are more CPU
- and network I/O intensive. If you are using 10
- GB networking to the proxy, or are terminating
- SSL traffic at the proxy, greater CPU power is
- required.
- Operating system: OpenStack
- Object Storage currently runs on Ubuntu, RHEL, CentOS, Fedora,
- openSUSE, or SLES.
- Networking: 1 Gbps or 10
- Gbps is suggested internally. For OpenStack Object Storage, an
- external network should connect the outside world to the proxy
- servers, and the storage network is intended to be isolated on
- a private network or multiple private networks.
- Database: For OpenStack
- Object Storage, a SQLite database is part of the OpenStack
- Object Storage container and account management
- process.
- Permissions: You can
- install OpenStack Object Storage either as root or as a user
- with sudo privileges, provided the sudoers file grants the
- required permissions.
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-verifying-install.xml b/doc/training-guides/basic-install-guide/object-storage/section_object-storage-verifying-install.xml deleted file mode 100644 index 4d5d6d94..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_object-storage-verifying-install.xml +++ /dev/null @@ -1,43 +0,0 @@ - -
- Verify the installation - You can run these commands from the proxy server or any - server that has access to the Identity Service. - - - Make sure that your credentials are set up correctly in the - admin-openrc.sh file and source it: - $ source admin-openrc.sh - - Run the following swift command: - $ swift stat -Account: AUTH_11b9758b7049476d9b48f7a91ea11493 -Containers: 0 - Objects: 0 - Bytes: 0 -Content-Type: text/plain; charset=utf-8 -X-Timestamp: 1381434243.83760 -X-Trans-Id: txdcdd594565214fb4a2d33-0052570383 -X-Put-Timestamp: 1381434243.83760 - - - Run the following swift commands to upload - files to a container. Create the test.txt and - test2.txt test files locally if needed. - $ swift upload myfiles test.txt -$ swift upload myfiles test2.txt - - - Run the following swift command to - download all files from the myfiles - container: - $ swift download myfiles -test2.txt [headers 0.267s, total 0.267s, 0.000s MB/s] -test.txt [headers 0.271s, total 0.271s, 0.000s MB/s] - - -
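The two test files used in the upload step can be created with a couple of shell commands; their contents are arbitrary. A sketch (the swift upload lines are echoed rather than run, since they require a live cluster and sourced credentials):

```shell
#!/bin/sh
# Create the local test files used by the verification steps.
cd /tmp
echo "swift verification file 1" > test.txt
echo "swift verification file 2" > test2.txt
# Dry run: print the upload commands from the guide for each file.
for f in test.txt test2.txt; do
    echo "swift upload myfiles $f"
done
```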
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_start-storage-node-services.xml b/doc/training-guides/basic-install-guide/object-storage/section_start-storage-node-services.xml deleted file mode 100644 index 9b8c601c..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_start-storage-node-services.xml +++ /dev/null @@ -1,40 +0,0 @@ - -
- - Start services on the storage nodes - Now that the ring files are on each storage node, you can - start the services. On each storage node, run the following - command: - # for service in \ - swift-object swift-object-replicator swift-object-updater swift-object-auditor \ - swift-container swift-container-replicator swift-container-updater swift-container-auditor \ - swift-account swift-account-replicator swift-account-reaper swift-account-auditor; do \ - service $service start; done - # for service in \ - openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor \ - openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor \ - openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor; do \ - systemctl enable $service.service; systemctl start $service.service; done - On SLES: - # for service in \ - openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor \ - openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor \ - openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor; do \ - service $service start; chkconfig $service on; done - On openSUSE: - # for service in \ - openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor \ - openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor \ - openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor; do \ - systemctl enable $service.service; systemctl start $service.service; done - - To start all swift services at once, run 
the command:
- # swift-init all start
- To learn more about the swift-init command, run:
- $ man swift-init
- 
-
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_swift-controller-node.xml b/doc/training-guides/basic-install-guide/object-storage/section_swift-controller-node.xml deleted file mode 100644 index 5f14b945..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_swift-controller-node.xml +++ /dev/null @@ -1,195 +0,0 @@ - -
- Install and configure the controller node - This section describes how to install and configure the proxy - service that handles requests for the account, container, and object - services operating on the storage nodes. For simplicity, this - guide installs and configures the proxy service on the controller node. - However, you can run the proxy service on any node with network - connectivity to the storage nodes. Additionally, you can install and - configure the proxy service on multiple nodes to increase performance - and redundancy. For more information, see the - Deployment Guide. - - To configure prerequisites - The proxy service relies on an authentication and authorization - mechanism such as the Identity service. However, unlike other services, - it also offers an internal mechanism that allows it to operate without - any other OpenStack services. However, for simplicity, this guide - references the Identity service in . Before - you configure the Object Storage service, you must create Identity - service credentials including endpoints. - - The Object Storage service does not use a SQL database on - the controller node. - - - To create the Identity service credentials, complete these - steps: - - - Create a swift user: - $ keystone user-create --name swift --pass SWIFT_PASS -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| email | | -| enabled | True | -| id | d535e5cbd2b74ac7bfb97db9cced3ed6 | -| name | swift | -| username | swift | -+----------+----------------------------------+ - Replace SWIFT_PASS with a suitable - password. - - - Link the swift user to the - service tenant and admin - role: - $ keystone user-role-add --user swift --tenant service --role admin - - This command provides no output. 
- - - - Create the swift service: - $ keystone service-create --name swift --type object-store \ - --description "OpenStack Object Storage" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | OpenStack Object Storage | -| enabled | True | -| id | 75ef509da2c340499d454ae96a2c5c34 | -| name | swift | -| type | object-store | -+-------------+----------------------------------+ - - - - - Create the Identity service endpoints: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ object-store / {print $2}') \ - --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \ - --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \ - --adminurl http://controller:8080 \ - --region regionOne -+-------------+---------------------------------------------------+ -| Property | Value | -+-------------+---------------------------------------------------+ -| adminurl | http://controller:8080/ | -| id | af534fb8b7ff40a6acf725437c586ebe | -| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s | -| publicurl | http://controller:8080/v1/AUTH_%(tenant_id)s | -| region | regionOne | -| service_id | 75ef509da2c340499d454ae96a2c5c34 | -+-------------+---------------------------------------------------+ - - - - To install and configure the controller node components - - Install the packages: - - Complete OpenStack environments already include some of these - packages. - - # apt-get install swift swift-proxy python-swiftclient python-keystoneclient memcached - # yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token memcached - # zypper install openstack-swift-proxy python-swiftclient python-keystoneclient memcached python-xml - - - Create the /etc/swift directory. 
- - - Obtain the proxy service configuration file from the Object - Storage source repository: - # curl -o /etc/swift/proxy-server.conf \ - https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample - - - Edit the /etc/swift/proxy-server.conf - file and complete the following actions: - - - In the [DEFAULT] section, configure - the bind port, user, and configuration directory: - [DEFAULT] -... -bind_port = 8080 -user = swift -swift_dir = /etc/swift - - - In the [pipeline] section, enable - the appropriate modules: - [pipeline] -pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server - - For more information on other modules that enable - additional features, see the - Deployment Guide. - - - - In the [app:proxy-server] section, enable - account management: - [app:proxy-server] -... -allow_account_management = true -account_autocreate = true - - - In the [filter:keystoneauth] section, - configure the operator roles: - [filter:keystoneauth] -use = egg:swift#keystoneauth -... -operator_roles = admin,_member_ - - You might need to uncomment this section. - - - - In the [filter:authtoken] section, - configure Identity service access: - [filter:authtoken] -paste.filter_factory = keystonemiddleware.auth_token:filter_factory -... -auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = service -admin_user = swift -admin_password = SWIFT_PASS -delay_auth_decision = true - Replace SWIFT_PASS with the - password you chose for the swift user in the - Identity service. - - You might need to uncomment this section. - - - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - - In the [filter:cache] section, configure - the memcached location: - [filter:cache] -... -memcache_servers = 127.0.0.1:11211 - - - - -
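The [filter:authtoken] edit above lends itself to a here-document with the password substituted once. A sketch that writes to /tmp instead of the real configuration file; the example password and output path are placeholders:

```shell
#!/bin/sh
# Render the [filter:authtoken] section with SWIFT_PASS filled in.
SWIFT_PASS=example-password     # placeholder; use your real password
cat > /tmp/authtoken.sketch <<EOF
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = ${SWIFT_PASS}
delay_auth_decision = true
EOF
grep "admin_password" /tmp/authtoken.sketch
```

On a real controller you would merge this section into /etc/swift/proxy-server.conf rather than overwrite a scratch file.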
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_swift-example-arch.xml b/doc/training-guides/basic-install-guide/object-storage/section_swift-example-arch.xml deleted file mode 100644 index b9e6fe62..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_swift-example-arch.xml +++ /dev/null @@ -1,56 +0,0 @@ - -
- Example architecture - In a production environment, the Object Storage service requires - at least two proxy nodes and five storage nodes. For simplicity, this - guide uses a minimal architecture with the proxy service running on - the existing OpenStack controller node and two storage nodes. However, - these concepts still apply. - - - Node: A host machine that runs one or more OpenStack - Object Storage services. - - - Proxy node: Runs proxy services. - - - Storage node: Runs account, container, and object - services. Contains the SQLite databases. - - - Ring: A set of mappings between OpenStack Object - Storage data to physical devices. - - - Replica: A copy of an object. By default, three - copies are maintained in the cluster. - - - Zone (optional): A logically separate section of the cluster, - related to independent failure characteristics. - - - Region (optional): A logically separate section of - the cluster, representing distinct physical locations - such as cities or countries. Similar to zones, but - representing physical locations of portions of the - cluster rather than logical segments. - - - To increase reliability and performance, you can add - additional proxy servers. - The following diagram shows one possible architecture for a - minimal production environment: - - - - - - - -
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_swift-finalize-installation.xml b/doc/training-guides/basic-install-guide/object-storage/section_swift-finalize-installation.xml deleted file mode 100644 index 629f0ca4..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_swift-finalize-installation.xml +++ /dev/null @@ -1,134 +0,0 @@ - -
- Finalize installation
- 
- Configure hashes and default storage policy
- 
- Obtain the /etc/swift/swift.conf file from
- the Object Storage source repository:
- # curl -o /etc/swift/swift.conf \
- https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/swift.conf-sample
- 
- 
- Edit the /etc/swift/swift.conf file and
- complete the following actions:
- 
- 
- In the [swift-hash] section, configure
- the hash path prefix and suffix for your environment.
- [swift-hash]
-...
-swift_hash_path_suffix = HASH_PATH_SUFFIX
-swift_hash_path_prefix = HASH_PATH_PREFIX
- Replace HASH_PATH_PREFIX and
- HASH_PATH_SUFFIX with unique
- values.
- 
- Keep these values secret and do not change or lose
- them.
- 
- 
- 
- In the [storage-policy:0] section,
- configure the default storage policy:
- [storage-policy:0]
-...
-name = Policy-0
-default = yes
- 
- 
- 
- Copy the swift.conf file to
- the /etc/swift directory on each storage node
- and any additional nodes running the proxy service.
- 
- 
- On all nodes, ensure proper ownership of the configuration
- directory:
- # chown -R swift:swift /etc/swift
- 
- 
- On the controller node and any other nodes running the proxy
- service, restart the Object Storage proxy service including
- its dependencies:
- # service memcached restart
-# service swift-proxy restart
- 
- 
- On the controller node and any other nodes running the proxy
- service, start the Object Storage proxy service including its
- dependencies and configure them to start when the system boots:
- # systemctl enable openstack-swift-proxy.service memcached.service
-# systemctl start openstack-swift-proxy.service memcached.service
- On SLES:
- # service memcached start
-# service openstack-swift-proxy start
-# chkconfig memcached on
-# chkconfig openstack-swift-proxy on
- On openSUSE:
- # systemctl enable openstack-swift-proxy.service memcached.service
-# systemctl start openstack-swift-proxy.service memcached.service
- 
- 
- On the storage nodes, start the Object Storage services:
- 
# swift-init all start - - The storage node runs many Object Storage services and the - swift-init command makes them easier to - manage. You can ignore errors from services not running on the - storage node. - - - - On the storage nodes, start the Object Storage services and - configure them to start when the system boots: - # systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \ - openstack-swift-account-reaper.service openstack-swift-account-replicator.service -# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \ - openstack-swift-account-reaper.service openstack-swift-account-replicator.service -# systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service \ - openstack-swift-container-replicator.service openstack-swift-container-updater.service -# systemctl start openstack-swift-container.service openstack-swift-container-auditor.service \ - openstack-swift-container-replicator.service openstack-swift-container-updater.service -# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \ - openstack-swift-object-replicator.service openstack-swift-object-updater.service -# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \ - openstack-swift-object-replicator.service openstack-swift-object-updater.service - - - On the storage nodes, start the Object Storage services and - configure them to start when the system boots: - On SLES: - # for service in \ - openstack-swift-account openstack-swift-account-auditor \ - openstack-swift-account-reaper openstack-swift-account-replicator; do \ - service $service start; chkconfig $service on; done -# for service in \ - openstack-swift-container openstack-swift-container-auditor \ - openstack-swift-container-replicator openstack-swift-container-updater; do \ - service $service start; chkconfig $service on; done -# for service in \ - 
openstack-swift-object openstack-swift-object-auditor \ - openstack-swift-object-replicator openstack-swift-object-updater; do \ - service $service start; chkconfig $service on; done - On openSUSE: - # systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \ - openstack-swift-account-reaper.service openstack-swift-account-replicator.service -# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \ - openstack-swift-account-reaper.service openstack-swift-account-replicator.service -# systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service \ - openstack-swift-container-replicator.service openstack-swift-container-updater.service -# systemctl start openstack-swift-container.service openstack-swift-container-auditor.service \ - openstack-swift-container-replicator.service openstack-swift-container-updater.service -# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \ - openstack-swift-object-replicator.service openstack-swift-object-updater.service -# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \ - openstack-swift-object-replicator.service openstack-swift-object-updater.service - - -
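The random hash path prefix and suffix configured in the [swift-hash] step above can be generated from /dev/urandom. A sketch; the /tmp output path is for illustration only, since the guide stores the values in /etc/swift/swift.conf, identical on every node:

```shell
#!/bin/sh
# Generate two independent 16-character random hex strings as salts.
PREFIX=$(od -An -tx1 -N8 /dev/urandom | tr -d ' \n')
SUFFIX=$(od -An -tx1 -N8 /dev/urandom | tr -d ' \n')
{
    echo "swift_hash_path_prefix = $PREFIX"
    echo "swift_hash_path_suffix = $SUFFIX"
} > /tmp/swift-hash.sketch
cat /tmp/swift-hash.sketch
```

Keep the generated values secret; changing or losing them makes existing object mappings unrecoverable.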
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_swift-initial-rings.xml b/doc/training-guides/basic-install-guide/object-storage/section_swift-initial-rings.xml deleted file mode 100644 index 05f5b6cf..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_swift-initial-rings.xml +++ /dev/null @@ -1,190 +0,0 @@ - -
- Create initial rings - Before starting the Object Storage services, you must create - the initial account, container, and object rings. The ring builder - creates configuration files that each node uses to determine and - deploy the storage architecture. For simplicity, this guide uses one - region and zone with 2^10 (1024) maximum partitions, 3 replicas of each - object, and 1 hour minimum time between moving a partition more than - once. For Object Storage, a partition indicates a directory on a storage - device rather than a conventional partition table. For more information, - see the - Deployment Guide. -
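The create 10 3 1 arguments used in the ring-builder commands below are the partition power, the replica count, and the minimum hours between partition moves. The partition power is an exponent, so 10 yields 2^10 partitions:

```shell
#!/bin/sh
# part_power is an exponent: the ring holds 2^part_power partitions.
PART_POWER=10
REPLICAS=3
MIN_PART_HOURS=1
PARTITIONS=$((1 << PART_POWER))
echo "$PARTITIONS partitions, $REPLICAS replicas, $MIN_PART_HOURS hour minimum"
```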
- Account ring - The account server uses the account ring to maintain lists - of containers. - - To create the ring - - Perform these steps on the controller node. - - - Change to the /etc/swift directory. - - - Create the base account.builder file: - # swift-ring-builder account.builder create 10 3 1 - - - Add each storage node to the ring: - # swift-ring-builder account.builder \ - add r1z1-STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS:6002/DEVICE_NAME DEVICE_WEIGHT - Replace - STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS - with the IP address of the management network on the storage node. - Replace DEVICE_NAME with a storage - device name on the same storage node. For example, using the first - storage node in - with the - /dev/sdb1 storage device and weight of 100: - # swift-ring-builder account.builder add r1z1-10.0.0.51:6002/sdb1 100 - Repeat this command for each storage device on each storage - node. The example architecture requires four variations of this - command. - - - Verify the ring contents: - # swift-ring-builder account.builder -account.builder, build version 4 -1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance -The minimum number of hours before a partition can be reassigned is 1 -Devices: id region zone ip address port replication ip replication port name weight partitions balance meta - 0 1 1 10.0.0.51 6002 10.0.0.51 6002 sdb1 100.00 768 0.00 - 1 1 1 10.0.0.51 6002 10.0.0.51 6002 sdc1 100.00 768 0.00 - 2 1 1 10.0.0.52 6002 10.0.0.52 6002 sdb1 100.00 768 0.00 - 3 1 1 10.0.0.52 6002 10.0.0.52 6002 sdc1 100.00 768 0.00 - - - Rebalance the ring: - # swift-ring-builder account.builder rebalance - - This process can take a while. - - - -
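The four add commands the account ring needs (two nodes with two devices each, per the example architecture) can be generated with a loop. A dry-run sketch that only prints the commands; the IPs, port, device names, and weight come from the examples above:

```shell
#!/bin/sh
# Emit the four account-ring "add" commands for the example architecture.
: > /tmp/ring-add.sketch
for ip in 10.0.0.51 10.0.0.52; do
    for dev in sdb1 sdc1; do
        echo "swift-ring-builder account.builder add r1z1-${ip}:6002/${dev} 100" \
            >> /tmp/ring-add.sketch
    done
done
cat /tmp/ring-add.sketch
```

The same loop applies to the container and object rings by changing the builder file name and the port (6001 and 6000, respectively).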
-
- Container ring - The container server uses the container ring to maintain lists - of objects. However, it does not track object locations. - - To create the ring - - Perform these steps on the controller node. - - - Change to the /etc/swift directory. - - - Create the base container.builder - file: - # swift-ring-builder container.builder create 10 3 1 - - - Add each storage node to the ring: - # swift-ring-builder container.builder \ - add r1z1-STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS:6001/DEVICE_NAME DEVICE_WEIGHT - Replace - STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS - with the IP address of the management network on the storage node. - Replace DEVICE_NAME with a storage - device name on the same storage node. For example, using the first - storage node in - with the - /dev/sdb1 storage device and weight of 100: - # swift-ring-builder container.builder add r1z1-10.0.0.51:6001/sdb1 100 - Repeat this command for each storage device on each storage - node. The example architecture requires four variations of this - command. - - - Verify the ring contents: - # swift-ring-builder container.builder -container.builder, build version 4 -1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance -The minimum number of hours before a partition can be reassigned is 1 -Devices: id region zone ip address port replication ip replication port name weight partitions balance meta - 0 1 1 10.0.0.51 6001 10.0.0.51 6001 sdb1 100.00 768 0.00 - 1 1 1 10.0.0.51 6001 10.0.0.51 6001 sdc1 100.00 768 0.00 - 2 1 1 10.0.0.52 6001 10.0.0.52 6001 sdb1 100.00 768 0.00 - 3 1 1 10.0.0.52 6001 10.0.0.52 6001 sdc1 100.00 768 0.00 - - - Rebalance the ring: - # swift-ring-builder container.builder rebalance - - This process can take a while. - - - -
-
- Object ring - The object server uses the object ring to maintain lists - of object locations on local devices. - - To create the ring - - Perform these steps on the controller node. - - - Change to the /etc/swift directory. - - - Create the base object.builder file: - # swift-ring-builder object.builder create 10 3 1 - - - Add each storage node to the ring: - # swift-ring-builder object.builder \ - add r1z1-STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS:6000/DEVICE_NAME DEVICE_WEIGHT - Replace - STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS - with the IP address of the management network on the storage node. - Replace DEVICE_NAME with a storage - device name on the same storage node. For example, using the first - storage node in - with the - /dev/sdb1 storage device and weight of 100: - # swift-ring-builder object.builder add r1z1-10.0.0.51:6000/sdb1 100 - Repeat this command for each storage device on each storage - node. The example architecture requires four variations of this - command. - - - Verify the ring contents: - # swift-ring-builder object.builder -object.builder, build version 4 -1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance -The minimum number of hours before a partition can be reassigned is 1 -Devices: id region zone ip address port replication ip replication port name weight partitions balance meta - 0 1 1 10.0.0.51 6000 10.0.0.51 6000 sdb1 100.00 768 0.00 - 1 1 1 10.0.0.51 6000 10.0.0.51 6000 sdc1 100.00 768 0.00 - 2 1 1 10.0.0.52 6000 10.0.0.52 6000 sdb1 100.00 768 0.00 - 3 1 1 10.0.0.52 6000 10.0.0.52 6000 sdc1 100.00 768 0.00 - - - Rebalance the ring: - # swift-ring-builder object.builder rebalance - - This process can take a while. - - - -
-
- Distribute ring configuration files - Copy the account.ring.gz, - container.ring.gz, and - object.ring.gz files to the - /etc/swift directory on each storage node and - any additional nodes running the proxy service. -
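The copy can be scripted as a loop over the three ring files and the storage node hostnames from the example architecture. A dry-run sketch (it prints `scp` commands rather than executing them; `scp` and the `object1`/`object2` hostnames are assumptions from this guide's example environment — any copy mechanism that preserves the files works):

```shell
#!/bin/sh
# Dry run: print the commands that would push the ring files from the
# controller's /etc/swift to each storage node. Remove the "echo" to
# actually copy once SSH access to the nodes is in place.
distribute_rings() {
    for node in object1 object2; do
        for f in account.ring.gz container.ring.gz object.ring.gz; do
            echo scp "/etc/swift/$f" "$node:/etc/swift/"
        done
    done
}
distribute_rings
```

Any additional proxy nodes would simply be appended to the `node` list.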
-
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_swift-storage-node.xml b/doc/training-guides/basic-install-guide/object-storage/section_swift-storage-node.xml deleted file mode 100644 index 09da2dcc..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_swift-storage-node.xml +++ /dev/null @@ -1,256 +0,0 @@ - -
- Install and configure the storage nodes - This section describes how to install and configure storage nodes - that operate the account, container, and object services. For - simplicity, this configuration references two storage nodes, each - containing two empty local block storage devices. Each of the - devices, /dev/sdb and /dev/sdc, - must contain a suitable partition table with one partition occupying - the entire device. Although the Object Storage service supports any - file system with extended attributes (xattr), - testing and benchmarking indicate the best performance and reliability - on XFS. For more information on horizontally - scaling your environment, see the - Deployment Guide. - - To configure prerequisites - You must configure each storage node before you install and - configure the Object Storage service on it. Similar to the controller - node, each storage node contains one network interface on the - management network. Optionally, each storage - node can contain a second network interface on a separate network for - replication. For more information, see - . - - Configure unique items on the first storage node: - - - Configure the management interface: - IP address: 10.0.0.51 - Network mask: 255.255.255.0 (or /24) - Default gateway: 10.0.0.1 - - - Set the hostname of the node to - object1. - - - - - Configure unique items on the second storage node: - - - Configure the management interface: - IP address: 10.0.0.52 - Network mask: 255.255.255.0 (or /24) - Default gateway: 10.0.0.1 - - - Set the hostname of the node to - object2. - - - - - Configure shared items on both storage nodes: - - - Copy the contents of the /etc/hosts file - from the controller node and add the following to it: - # object1 -10.0.0.51 object1 - -# object2 -10.0.0.52 object2 - Also add this content to the /etc/hosts - file on all other nodes in your environment. - - - Install and configure - NTP - using the instructions in - . 
- - - Install the supporting utility packages: - # apt-get install xfsprogs rsync - # yum install xfsprogs rsync - # zypper install xfsprogs rsync xinetd - - - Format the /dev/sdb1 and - /dev/sdc1 partitions as XFS: - # mkfs.xfs /dev/sdb1 -# mkfs.xfs /dev/sdc1 - - - Create the mount point directory structure: - # mkdir -p /srv/node/sdb1 -# mkdir -p /srv/node/sdc1 - - - Edit the /etc/fstab file and add the - following to it: - /dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2 -/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2 - - - Mount the devices: - # mount /srv/node/sdb1 -# mount /srv/node/sdc1 - - - - - Edit the /etc/rsyncd.conf file and add the - following to it: - uid = swift -gid = swift -log file = /var/log/rsyncd.log -pid file = /var/run/rsyncd.pid -address = MANAGEMENT_INTERFACE_IP_ADDRESS - -[account] -max connections = 2 -path = /srv/node/ -read only = false -lock file = /var/lock/account.lock - -[container] -max connections = 2 -path = /srv/node/ -read only = false -lock file = /var/lock/container.lock - -[object] -max connections = 2 -path = /srv/node/ -read only = false -lock file = /var/lock/object.lock - Replace MANAGEMENT_INTERFACE_IP_ADDRESS - with the IP address of the management network on the storage - node. - - The rsync service - requires no authentication, so consider running it on a private - network. 
- - - - Edit the /etc/default/rsync file and enable - the rsync service: - RSYNC_ENABLE=true - - - Edit the /etc/xinetd.d/rsync file and enable - the rsync service: - disable = no - - - Start the rsync - service: - # service rsync start - - - Start the rsyncd service - and configure it to start when the system boots: - # systemctl enable rsyncd.service -# systemctl start rsyncd.service - - - Start the xinetd service - and configure it to start when the system boots: - On SLES: - # service xinetd start -# chkconfig xinetd on - On openSUSE: - # systemctl enable xinetd.service -# systemctl start xinetd.service - - - - Install and configure storage node components - - Perform these steps on each storage node. - - - Install the packages: - # apt-get install swift swift-account swift-container swift-object - # yum install openstack-swift-account openstack-swift-container \ - openstack-swift-object - # zypper install openstack-swift-account openstack-swift-container \ - openstack-swift-object python-xml - - - Obtain the accounting, container, and object service configuration - files from the Object Storage source repository: - # curl -o /etc/swift/account-server.conf \ - https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample - # curl -o /etc/swift/container-server.conf \ - https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample - # curl -o /etc/swift/object-server.conf \ - https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample - - - Edit the - /etc/swift/account-server.conf, - /etc/swift/container-server.conf, and - /etc/swift/object-server.conf files and - complete the following actions: - - - In the [DEFAULT] section, configure the - bind IP address, bind port, user, configuration directory, and - mount point directory: - [DEFAULT] -... 
-bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS -bind_port = 6002 -user = swift -swift_dir = /etc/swift -devices = /srv/node - Replace - MANAGEMENT_INTERFACE_IP_ADDRESS - with the IP address of the management network on the storage - node. - - - In the [pipeline] section, enable - the appropriate modules: - [pipeline] -pipeline = healthcheck recon account-server - - For more information on other modules that enable - additional features, see the - Deployment Guide. - - - - In the [filter:recon] section, configure - the recon (metrics) cache directory: - [filter:recon] -... -recon_cache_path = /var/cache/swift - - - - - Ensure proper ownership of the mount point directory - structure: - # chown -R swift:swift /srv/node - - - Create the recon directory and ensure proper - ownership of it: - # mkdir -p /var/cache/swift -# chown -R swift:swift /var/cache/swift - - -
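The three server configuration files differ mainly in their `bind_port` values: 6002 for the account server, 6001 for the container server, and 6000 for the object server, matching the ports used when the rings were built. A sketch that prints the `[DEFAULT]` fragment for each service (plain shell; the bind IP shown is the first example storage node, used here purely for illustration):

```shell
#!/bin/sh
# Print the [DEFAULT] fragment for one of the three Swift storage services.
#   $1 = service name (account|container|object), $2 = bind port
emit_default_section() {
    cat <<EOF
# /etc/swift/$1-server.conf
[DEFAULT]
bind_ip = 10.0.0.51
bind_port = $2
user = swift
swift_dir = /etc/swift
devices = /srv/node

EOF
}
emit_default_section account 6002
emit_default_section container 6001
emit_default_section object 6000
```

Keeping the port-to-service mapping consistent between these files and the ring builder commands is what lets the proxy reach the right service on each node.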
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_swift-system-reqs.xml b/doc/training-guides/basic-install-guide/object-storage/section_swift-system-reqs.xml deleted file mode 100644 index 0c31e721..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_swift-system-reqs.xml +++ /dev/null @@ -1,103 +0,0 @@ - -
- - System requirements - Hardware: OpenStack Object - Storage is designed to run on commodity hardware. - - When you install only the Object Storage and Identity - Service, you cannot use the dashboard unless you also - install Compute and the Image Service. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Hardware recommendations
ServerRecommended HardwareNotes
Object Storage object servers Processor: dual quad core. Memory: 8 or 12 GB RAM. Disk space: optimized for cost per GB. Network: one 1 Gbps Network Interface Card (NIC). The amount of disk space depends on how much you can fit into the rack efficiently. Optimize for the best cost per GB while still getting industry-standard failure rates. As an example, Rackspace runs its Cloud Files storage servers on fairly generic 4U servers with 24 2 TB SATA drives and 8 cores of processing power. RAID on the storage drives is not required and not recommended: Swift's disk usage pattern is the worst case possible for RAID, and performance degrades very quickly with RAID 5 or 6. Most services support either a worker or concurrency value in their settings, which allows them to make effective use of the available cores.
Object Storage container/account - servers - Processor: dual quad core - Memory: 8 or 12GB RAM - Network: one 1GB Network Interface Card - (NIC)Optimized for IOPS due to tracking with - SQLite databases.
Object Storage proxy server Processor: dual quad core. Network: one 1 Gbps Network Interface Card (NIC). Higher network throughput offers better performance for supporting many API requests. Optimize your proxy servers for the best CPU performance: the proxy services are more CPU and network I/O intensive. If you are using 10 Gbps networking to the proxy, or are terminating SSL traffic at the proxy, greater CPU power is required.
- Operating system: OpenStack - Object Storage currently runs on Ubuntu, RHEL, CentOS, Fedora, - openSUSE, or SLES. - Networking: 1 Gbps or 10 - Gbps is suggested internally. For OpenStack Object Storage, an - external network should connect the outside world to the proxy - servers, and the storage network is intended to be isolated on - a private network or multiple private networks. - Database: For OpenStack - Object Storage, a SQLite database is part of the OpenStack - Object Storage container and account management - process. - Permissions: You can - install OpenStack Object Storage either as root or as a user - with sudo privileges, provided the sudoers file is configured - to grant the required permissions. -
diff --git a/doc/training-guides/basic-install-guide/object-storage/section_swift-verify.xml b/doc/training-guides/basic-install-guide/object-storage/section_swift-verify.xml deleted file mode 100644 index 65d58aa7..00000000 --- a/doc/training-guides/basic-install-guide/object-storage/section_swift-verify.xml +++ /dev/null @@ -1,50 +0,0 @@ - -
- Verify operation - This section describes how to verify operation of the Object - Storage service. - - - Perform these steps on the controller node. - - - Source the demo tenant credentials: - $ source demo-openrc.sh - - - Show the service status: - $ swift stat -Account: AUTH_11b9758b7049476d9b48f7a91ea11493 -Containers: 0 - Objects: 0 - Bytes: 0 -Content-Type: text/plain; charset=utf-8 -X-Timestamp: 1381434243.83760 -X-Trans-Id: txdcdd594565214fb4a2d33-0052570383 -X-Put-Timestamp: 1381434243.83760 - - - Upload a test file: - $ swift upload demo-container1 FILE - Replace FILE with the name of a local - file to upload to the demo-container1 - container. - - - List containers: - $ swift list -demo-container1 - - - Download a test file: - $ swift download demo-container1 FILE - Replace FILE with the name of the - file uploaded to the demo-container1 - container. - - -
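The counters in the `swift stat` output can also be checked mechanically, for example to confirm the account is empty before the first upload. A sketch that parses a captured copy of the sample output above (plain shell; a real check would pipe `swift stat` output directly instead of using a saved string):

```shell
#!/bin/sh
# Parse a captured "swift stat" listing and extract the container and
# object counts. A fresh account should report zero of each.
stat_output='Account: AUTH_11b9758b7049476d9b48f7a91ea11493
Containers: 0
   Objects: 0
     Bytes: 0'

# Split each line on the colon and surrounding spaces, keep the value field.
containers=$(printf '%s\n' "$stat_output" | awk -F': *' '/Containers:/ {print $2}')
objects=$(printf '%s\n' "$stat_output" | awk -F': *' '/Objects:/ {print $2}')

echo "containers=$containers objects=$objects"
```

After the `swift upload` and `swift list` steps, rerunning the same check against live output should show the counts increase accordingly.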
diff --git a/doc/training-guides/basic-install-guide/roadmap.rst b/doc/training-guides/basic-install-guide/roadmap.rst deleted file mode 100644 index 423a246a..00000000 --- a/doc/training-guides/basic-install-guide/roadmap.rst +++ /dev/null @@ -1,33 +0,0 @@ -Roadmap for Install Guides --------------------------- - -This file is stored with the source to offer ideas for what to work on. -Put your name next to a task if you want to work on it and put a WIP -review up on review.openstack.org. - -May 20, 2014 - -This guide has an overall blueprint with spec at: -https://wiki.openstack.org/wiki/Documentation/InstallationGuideImprovements - -To do tasks: - -- Unify chapter and section names (such as Overview) -- Add sample output of each command and highlight important parts -- Mention project as standard but tenant must be used for CLI params -- Refer to generic SQL database and update for MariaDB (RHEL), MySQL, - and PostgreSQL -- Provide sample configuration files for each node -- Compute and network nodes should reference server on controller node -- Update password list -- Add audience information; who is this book intended for - -Ongoing tasks: - -- Ensure it meets conventions and standards -- Continually update with latest release information relevant to install - -Wishlist tasks: - -- Replace all individual client commands (like keystone, nova) with - openstack client commands diff --git a/doc/training-guides/basic-install-guide/samples/account-server-1.conf.txt b/doc/training-guides/basic-install-guide/samples/account-server-1.conf.txt deleted file mode 100644 index 870ce08e..00000000 --- a/doc/training-guides/basic-install-guide/samples/account-server-1.conf.txt +++ /dev/null @@ -1,20 +0,0 @@ -[DEFAULT] -devices = /srv/1/node -mount_check = false -bind_port = 6012 -user = swift -log_facility = LOG_LOCAL2 - -[pipeline:main] -pipeline = account-server - -[app:account-server] -use = egg:swift#account - -[account-replicator] -vm_test_mode = yes - -[account-auditor] 
- -[account-reaper] - \ No newline at end of file diff --git a/doc/training-guides/basic-install-guide/samples/account-server.conf.txt b/doc/training-guides/basic-install-guide/samples/account-server.conf.txt deleted file mode 100644 index b1d55e0b..00000000 --- a/doc/training-guides/basic-install-guide/samples/account-server.conf.txt +++ /dev/null @@ -1,16 +0,0 @@ -[DEFAULT] -bind_ip = 0.0.0.0 -workers = 2 - -[pipeline:main] -pipeline = account-server - -[app:account-server] -use = egg:swift#account - -[account-replicator] - -[account-auditor] - -[account-reaper] - diff --git a/doc/training-guides/basic-install-guide/samples/api-paste.ini b/doc/training-guides/basic-install-guide/samples/api-paste.ini deleted file mode 100644 index ac350ee8..00000000 --- a/doc/training-guides/basic-install-guide/samples/api-paste.ini +++ /dev/null @@ -1,118 +0,0 @@ -############ -# Metadata # -############ -[composite:metadata] -use = egg:Paste#urlmap -/: meta - -[pipeline:meta] -pipeline = ec2faultwrap logrequest metaapp - -[app:metaapp] -paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory - -####### -# EC2 # -####### - -[composite:ec2] -use = egg:Paste#urlmap -/services/Cloud: ec2cloud - -[composite:ec2cloud] -use = call:nova.api.auth:pipeline_factory -noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor -keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor - -[filter:ec2faultwrap] -paste.filter_factory = nova.api.ec2:FaultWrapper.factory - -[filter:logrequest] -paste.filter_factory = nova.api.ec2:RequestLogging.factory - -[filter:ec2lockout] -paste.filter_factory = nova.api.ec2:Lockout.factory - -[filter:ec2keystoneauth] -paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory - -[filter:ec2noauth] -paste.filter_factory = nova.api.ec2:NoAuth.factory - -[filter:cloudrequest] -controller = nova.api.ec2.cloud.CloudController -paste.filter_factory = nova.api.ec2:Requestify.factory - 
-[filter:authorizer] -paste.filter_factory = nova.api.ec2:Authorizer.factory - -[filter:validator] -paste.filter_factory = nova.api.ec2:Validator.factory - -[app:ec2executor] -paste.app_factory = nova.api.ec2:Executor.factory - -############# -# Openstack # -############# - -[composite:osapi_compute] -use = call:nova.api.openstack.urlmap:urlmap_factory -/: oscomputeversions -/v1.1: openstack_compute_api_v2 -/v2: openstack_compute_api_v2 - -[composite:osapi_volume] -use = call:nova.api.openstack.urlmap:urlmap_factory -/: osvolumeversions -/v1: openstack_volume_api_v1 - -[composite:openstack_compute_api_v2] -use = call:nova.api.auth:pipeline_factory -noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2 -keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2 -keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2 - -[composite:openstack_volume_api_v1] -use = call:nova.api.auth:pipeline_factory -noauth = faultwrap sizelimit noauth ratelimit osapi_volume_app_v1 -keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_volume_app_v1 -keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1 - -[filter:faultwrap] -paste.filter_factory = nova.api.openstack:FaultWrapper.factory - -[filter:noauth] -paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory - -[filter:ratelimit] -paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory - -[filter:sizelimit] -paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory - -[app:osapi_compute_app_v2] -paste.app_factory = nova.api.openstack.compute:APIRouter.factory - -[pipeline:oscomputeversions] -pipeline = faultwrap oscomputeversionapp - -[app:osapi_volume_app_v1] -paste.app_factory = nova.api.openstack.volume:APIRouter.factory - -[app:oscomputeversionapp] -paste.app_factory = nova.api.openstack.compute.versions:Versions.factory - 
-[pipeline:osvolumeversions] -pipeline = faultwrap osvolumeversionapp -########## -# Shared # -########## - -[filter:keystonecontext] -paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory - -[filter:authtoken] -paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory -# Workaround for https://bugs.launchpad.net/nova/+bug/1154809 -auth_version = v2.0 diff --git a/doc/training-guides/basic-install-guide/samples/container-server-1.conf.txt b/doc/training-guides/basic-install-guide/samples/container-server-1.conf.txt deleted file mode 100644 index 59d5e2f4..00000000 --- a/doc/training-guides/basic-install-guide/samples/container-server-1.conf.txt +++ /dev/null @@ -1,20 +0,0 @@ -[DEFAULT] -devices = /srv/1/node -mount_check = false -bind_port = 6011 -user = swift -log_facility = LOG_LOCAL2 - -[pipeline:main] -pipeline = container-server - -[app:container-server] -use = egg:swift#container - -[container-replicator] -vm_test_mode = yes - -[container-updater] - -[container-auditor] -[container-sync] \ No newline at end of file diff --git a/doc/training-guides/basic-install-guide/samples/container-server.conf.txt b/doc/training-guides/basic-install-guide/samples/container-server.conf.txt deleted file mode 100644 index 9c0287bd..00000000 --- a/doc/training-guides/basic-install-guide/samples/container-server.conf.txt +++ /dev/null @@ -1,17 +0,0 @@ -[DEFAULT] -bind_ip = 0.0.0.0 -workers = 2 - -[pipeline:main] -pipeline = container-server - -[app:container-server] -use = egg:swift#container - -[container-replicator] - -[container-updater] - -[container-auditor] - -[container-sync] diff --git a/doc/training-guides/basic-install-guide/samples/glance-api-paste.ini b/doc/training-guides/basic-install-guide/samples/glance-api-paste.ini deleted file mode 100644 index 158ce719..00000000 --- a/doc/training-guides/basic-install-guide/samples/glance-api-paste.ini +++ /dev/null @@ -1,57 +0,0 @@ -# Use this pipeline for no auth or image caching - DEFAULT -# 
[pipeline:glance-api] -# pipeline = versionnegotiation unauthenticated-context rootapp - -# Use this pipeline for image caching and no auth -# [pipeline:glance-api-caching] -# pipeline = versionnegotiation unauthenticated-context cache rootapp - -# Use this pipeline for caching w/ management interface but no auth -# [pipeline:glance-api-cachemanagement] -# pipeline = versionnegotiation unauthenticated-context cache cachemanage rootapp - -# Use this pipeline for keystone auth -[pipeline:glance-api-keystone] -pipeline = versionnegotiation authtoken context rootapp - -# Use this pipeline for keystone auth with image caching -# [pipeline:glance-api-keystone+caching] -# pipeline = versionnegotiation authtoken context cache rootapp - -# Use this pipeline for keystone auth with caching and cache management -# [pipeline:glance-api-keystone+cachemanagement] -# pipeline = versionnegotiation authtoken context cache cachemanage rootapp - -[composite:rootapp] -paste.composite_factory = glance.api:root_app_factory -/: apiversions -/v1: apiv1app -/v2: apiv2app - -[app:apiversions] -paste.app_factory = glance.api.versions:create_resource - -[app:apiv1app] -paste.app_factory = glance.api.v1.router:API.factory - -[app:apiv2app] -paste.app_factory = glance.api.v2.router:API.factory - -[filter:versionnegotiation] -paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory - -[filter:cache] -paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory - -[filter:cachemanage] -paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory - -[filter:context] -paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory - -[filter:unauthenticated-context] -paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory - -[filter:authtoken] -paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory -delay_auth_decision = true diff --git 
a/doc/training-guides/basic-install-guide/samples/glance-api.conf b/doc/training-guides/basic-install-guide/samples/glance-api.conf deleted file mode 100644 index d5e01e63..00000000 --- a/doc/training-guides/basic-install-guide/samples/glance-api.conf +++ /dev/null @@ -1,327 +0,0 @@ -[DEFAULT] -# Show more verbose log output (sets INFO log level output) -verbose = True - -# Show debugging output in logs (sets DEBUG log level output) -debug = False - -# Which backend scheme should Glance use by default if one is not specified -# in a request to add a new image to Glance? Known schemes are determined -# by the known_stores option below. -# Default: 'file' -default_store = file - -# List of which store classes and store class locations are -# currently known to glance at startup. -#known_stores = glance.store.filesystem.Store, -# glance.store.http.Store, -# glance.store.rbd.Store, -# glance.store.s3.Store, -# glance.store.swift.Store, - - -# Maximum image size (in bytes) that may be uploaded through the -# Glance API server. Defaults to 1 TB. -# WARNING: this value should only be increased after careful consideration -# and must be set to a value under 8 EB (9223372036854775808). -#image_size_cap = 1099511627776 - -# Address to bind the API server -bind_host = 0.0.0.0 - -# Port to bind the API server to -bind_port = 9292 - -# Log to this file. Make sure you do not set the same log -# file for both the API and registry servers! -log_file = /var/log/glance/api.log - -# Backlog requests when creating socket -backlog = 4096 - -# TCP_KEEPIDLE value in seconds when creating socket. -# Not supported on OS X. -#tcp_keepidle = 600 - -# SQLAlchemy connection string for the reference implementation -# registry server. Any valid SQLAlchemy connection string is fine. 
-# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine -# sql_connection = sqlite:///glance.sqlite -# sql_connection = mysql://glance:YOUR_GLANCEDB_PASSWORD@192.168.206.130/glance - -# Period in seconds after which SQLAlchemy should reestablish its connection -# to the database. -# -# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop -# idle connections. This can result in 'MySQL Gone Away' exceptions. If you -# notice this, you can lower this value to ensure that SQLAlchemy reconnects -# before MySQL can drop the connection. -sql_idle_timeout = 3600 - -# Number of Glance API worker processes to start. -# On machines with more than one CPU increasing this value -# may improve performance (especially if using SSL with -# compression turned on). It is typically recommended to set -# this value to the number of CPUs present on your machine. -workers = 1 - -# Role used to identify an authenticated user as administrator -#admin_role = admin - -# Allow unauthenticated users to access the API with read-only -# privileges. This only applies when using ContextMiddleware. -#allow_anonymous_access = False - -# Allow access to version 1 of glance api -#enable_v1_api = True - -# Allow access to version 2 of glance api -#enable_v2_api = True - -# ================= Syslog Options ============================ - -# Send logs to syslog (/dev/log) instead of to file specified -# by `log_file` -use_syslog = False - -# Facility to use. If unset defaults to LOG_USER. 
-#syslog_log_facility = LOG_LOCAL0 - -# ================= SSL Options =============================== - -# Certificate file to use when starting API server securely -#cert_file = /path/to/certfile - -# Private key file to use when starting API server securely -#key_file = /path/to/keyfile - -# CA certificate file to use to verify connecting clients -#ca_file = /path/to/cafile - -# ================= Security Options ========================== - -# AES key for encrypting store 'location' metadata, including -# -- if used -- Swift or S3 credentials -# Should be set to a random string of length 16, 24 or 32 bytes -#metadata_encryption_key = <16, 24 or 32 char registry metadata key> - -# ============ Registry Options =============================== - -# Address to find the registry server -registry_host = 0.0.0.0 - -# Port the registry server is listening on -registry_port = 9191 - -# What protocol to use when connecting to the registry server? -# Set to https for secure HTTP communication -registry_client_protocol = http - -# The path to the key file to use in SSL connections to the -# registry server, if any. Alternately, you may set the -# GLANCE_CLIENT_KEY_FILE environment variable to a filepath of the key file -#registry_client_key_file = /path/to/key/file - -# The path to the cert file to use in SSL connections to the -# registry server, if any. Alternately, you may set the -# GLANCE_CLIENT_CERT_FILE environment variable to a filepath of the cert file -#registry_client_cert_file = /path/to/cert/file - -# The path to the certifying authority cert file to use in SSL connections -# to the registry server, if any. Alternately, you may set the -# GLANCE_CLIENT_CA_FILE environment variable to a filepath of the CA cert file -#registry_client_ca_file = /path/to/ca/file - -# ============ Notification System Options ===================== - -# Notifications can be sent when images are created, updated or deleted. 
-# There are three methods of sending notifications, logging (via the -# log_file directive), rabbit (via a rabbitmq queue), qpid (via a Qpid -# message queue), or noop (no notifications sent, the default) -notifier_strategy = noop - -# Configuration options if sending notifications via rabbitmq (these are -# the defaults) -rabbit_host = localhost -rabbit_port = 5672 -rabbit_use_ssl = false -rabbit_userid = guest -rabbit_password = guest -rabbit_virtual_host = / -rabbit_notification_exchange = glance -rabbit_notification_topic = glance_notifications -rabbit_durable_queues = False - -# Configuration options if sending notifications via Qpid (these are -# the defaults) -qpid_notification_exchange = glance -qpid_notification_topic = glance_notifications -qpid_hostname = localhost -qpid_port = 5672 -qpid_username = -qpid_password = -qpid_sasl_mechanisms = -qpid_reconnect_timeout = 0 -qpid_reconnect_limit = 0 -qpid_reconnect_interval_min = 0 -qpid_reconnect_interval_max = 0 -qpid_reconnect_interval = 0 -qpid_heartbeat = 5 -# Set to 'ssl' to enable SSL -qpid_protocol = tcp -qpid_tcp_nodelay = True - -# ============ Filesystem Store Options ======================== - -# Directory that the Filesystem backend store -# writes image data to -filesystem_store_datadir = /var/lib/glance/images/ - -# ============ Swift Store Options ============================= - -# Version of the authentication service to use -# Valid versions are '2' for keystone and '1' for swauth and rackspace -swift_store_auth_version = 2 - -# Address where the Swift authentication service lives -# Valid schemes are 'http://' and 'https://' -# If no scheme specified, default to 'https://' -# For swauth, use something like '127.0.0.1:8080/v1.0/' -swift_store_auth_address = 127.0.0.1:5000/v2.0/ - -# User to authenticate against the Swift authentication service -# If you use Swift authentication service, set it to 'account':'user' -# where 'account' is a Swift storage account and 'user' -# is a user in that 
account -swift_store_user = jdoe:jdoe - -# Auth key for the user authenticating against the -# Swift authentication service -swift_store_key = a86850deb2742ec3cb41518e26aa2d89 - -# Container within the account that the account should use -# for storing images in Swift -swift_store_container = glance - -# Do we create the container if it does not exist? -swift_store_create_container_on_put = False - -# What size, in MB, should Glance start chunking image files -# and do a large object manifest in Swift? By default, this is -# the maximum object size in Swift, which is 5GB -swift_store_large_object_size = 5120 - -# When doing a large object manifest, what size, in MB, should -# Glance write chunks to Swift? This amount of data is written -# to a temporary disk buffer during the process of chunking -# the image file, and the default is 200MB -swift_store_large_object_chunk_size = 200 - -# Whether to use ServiceNET to communicate with the Swift storage servers. -# (If you aren't RACKSPACE, leave this False!) -# -# To use ServiceNET for authentication, prefix hostname of -# `swift_store_auth_address` with 'snet-'. -# Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/ -swift_enable_snet = False - -# If set to True enables multi-tenant storage mode which causes Glance images -# to be stored in tenant specific Swift accounts. -#swift_store_multi_tenant = False - -# A list of tenants that will be granted read/write access on all Swift -# containers created by Glance in multi-tenant mode. -#swift_store_admin_tenants = [] - -# The region of the swift endpoint to be used for single tenant. This setting -# is only necessary if the tenant has multiple swift endpoints. 
-#swift_store_region = - -# ============ S3 Store Options ============================= - -# Address where the S3 authentication service lives -# Valid schemes are 'http://' and 'https://' -# If no scheme specified, default to 'http://' -s3_store_host = 127.0.0.1:8080/v1.0/ - -# User to authenticate against the S3 authentication service -s3_store_access_key = <20-char AWS access key> - -# Auth key for the user authenticating against the -# S3 authentication service -s3_store_secret_key = <40-char AWS secret key> - -# Container within the account that the account should use -# for storing images in S3. Note that S3 has a flat namespace, -# so you need a unique bucket name for your glance images. An -# easy way to do this is append your AWS access key to "glance". -# S3 buckets in AWS *must* be lowercased, so remember to lowercase -# your AWS access key if you use it in your bucket name below! -s3_store_bucket = glance - -# Do we create the bucket if it does not exist? -s3_store_create_bucket_on_put = False - -# When sending images to S3, the data will first be written to a -# temporary buffer on disk. By default the platform's temporary directory -# will be used. If required, an alternative directory can be specified here. -#s3_store_object_buffer_dir = /path/to/dir - -# When forming a bucket url, boto will either set the bucket name as the -# subdomain or as the first token of the path. Amazon's S3 service will -# accept it as the subdomain, but Swift's S3 middleware requires it be -# in the path. Set this to 'path' or 'subdomain' - defaults to 'subdomain'. -#s3_store_bucket_url_format = subdomain - -# ============ RBD Store Options ============================= - -# Ceph configuration file path -# If using cephx authentication, this file should -# include a reference to the right keyring -# in a client. 
section -rbd_store_ceph_conf = /etc/ceph/ceph.conf - -# RADOS user to authenticate as (only applicable if using cephx) -rbd_store_user = glance - -# RADOS pool in which images are stored -rbd_store_pool = images - -# Images will be chunked into objects of this size (in megabytes). -# For best performance, this should be a power of two -rbd_store_chunk_size = 8 - -# ============ Delayed Delete Options ============================= - -# Turn on/off delayed delete -delayed_delete = False - -# Delayed delete time in seconds -scrub_time = 43200 - -# Directory that the scrubber will use to remind itself of what to delete -# Make sure this is also set in glance-scrubber.conf -scrubber_datadir = /var/lib/glance/scrubber - -# =============== Image Cache Options ============================= - -# Base directory that the Image Cache uses -image_cache_dir = /var/lib/glance/image-cache/ - -[keystone_authtoken] -auth_host = 127.0.0.1 -auth_port = 35357 -auth_protocol = http -admin_tenant_name = service -admin_user = admin -admin_password = secrete - -[paste_deploy] -# Name of the paste configuration file that defines the available pipelines -config_file = /etc/glance/glance-api-paste.ini - -# Partial name of a pipeline in your paste configuration file with the -# service name removed. For example, if your paste section name is -# [pipeline:glance-api-keystone], you would configure the flavor below -# as 'keystone'. 
-flavor=keystone diff --git a/doc/training-guides/basic-install-guide/samples/glance-cache-paste.ini b/doc/training-guides/basic-install-guide/samples/glance-cache-paste.ini deleted file mode 100644 index 35ab3715..00000000 --- a/doc/training-guides/basic-install-guide/samples/glance-cache-paste.ini +++ /dev/null @@ -1,15 +0,0 @@ -[app:glance-pruner] -paste.app_factory = glance.common.wsgi:app_factory -glance.app_factory = glance.image_cache.pruner:Pruner - -[app:glance-prefetcher] -paste.app_factory = glance.common.wsgi:app_factory -glance.app_factory = glance.image_cache.prefetcher:Prefetcher - -[app:glance-cleaner] -paste.app_factory = glance.common.wsgi:app_factory -glance.app_factory = glance.image_cache.cleaner:Cleaner - -[app:glance-queue-image] -paste.app_factory = glance.common.wsgi:app_factory -glance.app_factory = glance.image_cache.queue_image:Queuer diff --git a/doc/training-guides/basic-install-guide/samples/glance-cache.conf b/doc/training-guides/basic-install-guide/samples/glance-cache.conf deleted file mode 100644 index 8985ea5c..00000000 --- a/doc/training-guides/basic-install-guide/samples/glance-cache.conf +++ /dev/null @@ -1,40 +0,0 @@ -[DEFAULT] -# Show more verbose log output (sets INFO log level output) -verbose = True - -# Show debugging output in logs (sets DEBUG log level output) -debug = False - -log_file = /var/log/glance/image-cache.log - -# Send logs to syslog (/dev/log) instead of to file specified by `log_file` -use_syslog = False - -# Directory that the Image Cache writes data to -image_cache_dir = /var/lib/glance/image-cache/ - -# Number of seconds after which we should consider an incomplete image to be -# stalled and eligible for reaping -image_cache_stall_time = 86400 - -# image_cache_invalid_entry_grace_period - seconds -# -# If an exception is raised as we're writing to the cache, the cache-entry is -# deemed invalid and moved to /invalid so that it can be -# inspected for debugging purposes. 
-# -# This is the number of seconds to leave these invalid images around before they -# are eligible to be reaped. -image_cache_invalid_entry_grace_period = 3600 - -# Max cache size in bytes -image_cache_max_size = 10737418240 - -# Address to find the registry server -registry_host = 0.0.0.0 - -# Port the registry server is listening on -registry_port = 9191 - -# Admin token to use if using Keystone -# admin_token = 123 diff --git a/doc/training-guides/basic-install-guide/samples/glance-registry-paste.ini b/doc/training-guides/basic-install-guide/samples/glance-registry-paste.ini deleted file mode 100644 index 540896f2..00000000 --- a/doc/training-guides/basic-install-guide/samples/glance-registry-paste.ini +++ /dev/null @@ -1,25 +0,0 @@ -# Use this pipeline for no auth - DEFAULT -# [pipeline:glance-registry] -# pipeline = unauthenticated-context registryapp - -# Use this pipeline for keystone auth -[pipeline:glance-registry-keystone] -pipeline = authtoken context registryapp - -[filter:authtoken] -paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory -admin_tenant_name = service -admin_user = glance -admin_password = glance - -[app:registryapp] -paste.app_factory = glance.registry.api.v1:API.factory - -[filter:context] -paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory - -[filter:unauthenticated-context] -paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory - -[filter:authtoken] -paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory diff --git a/doc/training-guides/basic-install-guide/samples/glance-registry.conf b/doc/training-guides/basic-install-guide/samples/glance-registry.conf deleted file mode 100644 index 3886b0ec..00000000 --- a/doc/training-guides/basic-install-guide/samples/glance-registry.conf +++ /dev/null @@ -1,86 +0,0 @@ -[DEFAULT] -# Show more verbose log output (sets INFO log level output) -verbose = True - -# Show debugging output in logs
(sets DEBUG log level output) -debug = False - -# Address to bind the registry server -bind_host = 0.0.0.0 - -# Port to bind the registry server to -bind_port = 9191 - -# Log to this file. Make sure you do not set the same log -# file for both the API and registry servers! -log_file = /var/log/glance/registry.log - -# Backlog requests when creating socket -backlog = 4096 - -# TCP_KEEPIDLE value in seconds when creating socket. -# Not supported on OS X. -#tcp_keepidle = 600 - -# SQLAlchemy connection string for the reference implementation -# registry server. Any valid SQLAlchemy connection string is fine. -# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine -sql_connection = mysql://glance:YOUR_GLANCEDB_PASSWORD@192.168.206.130/glance - -# Period in seconds after which SQLAlchemy should reestablish its connection -# to the database. -# -# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop -# idle connections. This can result in 'MySQL Gone Away' exceptions. If you -# notice this, you can lower this value to ensure that SQLAlchemy reconnects -# before MySQL can drop the connection. -sql_idle_timeout = 3600 - -# Limit the api to return `param_limit_max` items in a call to a container. If -# a larger `limit` query param is provided, it will be reduced to this value. -api_limit_max = 1000 - -# If a `limit` query param is not provided in an api request, it will -# default to `limit_param_default` -limit_param_default = 25 - -# Role used to identify an authenticated user as administrator -#admin_role = admin - -# ================= Syslog Options ============================ - -# Send logs to syslog (/dev/log) instead of to file specified -# by `log_file` -use_syslog = False - -# Facility to use. If unset defaults to LOG_USER.
-#syslog_log_facility = LOG_LOCAL1 - -# ================= SSL Options =============================== - -# Certificate file to use when starting registry server securely -#cert_file = /path/to/certfile - -# Private key file to use when starting registry server securely -#key_file = /path/to/keyfile - -# CA certificate file to use to verify connecting clients -#ca_file = /path/to/cafile - -[keystone_authtoken] -auth_host = 127.0.0.1 -auth_port = 35357 -auth_protocol = http -admin_tenant_name = service -admin_user = admin -admin_password = secrete - -[paste_deploy] -# Name of the paste configuration file that defines the available pipelines -config_file = /etc/glance/glance-registry-paste.ini - -# Partial name of a pipeline in your paste configuration file with the -# service name removed. For example, if your paste section name is -# [pipeline:glance-api-keystone], you would configure the flavor below -# as 'keystone'. -flavor=keystone diff --git a/doc/training-guides/basic-install-guide/samples/glance-scrubber-paste.ini b/doc/training-guides/basic-install-guide/samples/glance-scrubber-paste.ini deleted file mode 100644 index ac342f8f..00000000 --- a/doc/training-guides/basic-install-guide/samples/glance-scrubber-paste.ini +++ /dev/null @@ -1,3 +0,0 @@ -[app:glance-scrubber] -paste.app_factory = glance.common.wsgi:app_factory -glance.app_factory = glance.store.scrubber:Scrubber diff --git a/doc/training-guides/basic-install-guide/samples/glance-scrubber.conf b/doc/training-guides/basic-install-guide/samples/glance-scrubber.conf deleted file mode 100644 index b2c5723c..00000000 --- a/doc/training-guides/basic-install-guide/samples/glance-scrubber.conf +++ /dev/null @@ -1,25 +0,0 @@ -[DEFAULT] -# Show more verbose log output (sets INFO log level output) -verbose = True - -# Show debugging output in logs (sets DEBUG log level output) -debug = False - -# Log to this file. Make sure you do not set the same log -# file for both the API and registry servers! 
-log_file = /var/log/glance/scrubber.log - -# Send logs to syslog (/dev/log) instead of to file specified by `log_file` -use_syslog = False - -# Delayed delete time in seconds -scrub_time = 43200 - -# Should we run our own loop or rely on cron/scheduler to run us -daemon = False - -# Loop time between checking the registry for new items to schedule for delete -wakeup_time = 300 - -[app:glance-scrubber] -paste.app_factory = glance.store.scrubber:app_factory diff --git a/doc/training-guides/basic-install-guide/samples/keystone-paste.ini b/doc/training-guides/basic-install-guide/samples/keystone-paste.ini deleted file mode 100644 index 0f4590a2..00000000 --- a/doc/training-guides/basic-install-guide/samples/keystone-paste.ini +++ /dev/null @@ -1,85 +0,0 @@ -# Keystone PasteDeploy configuration file. - -[filter:debug] -paste.filter_factory = keystone.common.wsgi:Debug.factory - -[filter:token_auth] -paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory - -[filter:admin_token_auth] -paste.filter_factory = keystone.middleware:AdminTokenAuthMiddleware.factory - -[filter:xml_body] -paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory - -[filter:json_body] -paste.filter_factory = keystone.middleware:JsonBodyMiddleware.factory - -[filter:user_crud_extension] -paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory - -[filter:crud_extension] -paste.filter_factory = keystone.contrib.admin_crud:CrudExtension.factory - -[filter:ec2_extension] -paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory - -[filter:s3_extension] -paste.filter_factory = keystone.contrib.s3:S3Extension.factory - -[filter:url_normalize] -paste.filter_factory = keystone.middleware:NormalizingFilter.factory - -[filter:sizelimit] -paste.filter_factory = keystone.middleware:RequestBodySizeLimiter.factory - -[filter:stats_monitoring] -paste.filter_factory = keystone.contrib.stats:StatsMiddleware.factory - -[filter:stats_reporting] -paste.filter_factory 
= keystone.contrib.stats:StatsExtension.factory - -[filter:access_log] -paste.filter_factory = keystone.contrib.access:AccessLogMiddleware.factory - -[app:public_service] -paste.app_factory = keystone.service:public_app_factory - -[app:service_v3] -paste.app_factory = keystone.service:v3_app_factory - -[app:admin_service] -paste.app_factory = keystone.service:admin_app_factory - -[pipeline:public_api] -pipeline = access_log sizelimit url_normalize token_auth admin_token_auth xml_body json_body ec2_extension user_crud_extension public_service - -[pipeline:admin_api] -pipeline = access_log sizelimit url_normalize token_auth admin_token_auth xml_body json_body ec2_extension s3_extension crud_extension admin_service - -[pipeline:api_v3] -pipeline = access_log sizelimit url_normalize token_auth admin_token_auth xml_body json_body ec2_extension s3_extension service_v3 - -[app:public_version_service] -paste.app_factory = keystone.service:public_version_app_factory - -[app:admin_version_service] -paste.app_factory = keystone.service:admin_version_app_factory - -[pipeline:public_version_api] -pipeline = access_log sizelimit url_normalize xml_body public_version_service - -[pipeline:admin_version_api] -pipeline = access_log sizelimit url_normalize xml_body admin_version_service - -[composite:main] -use = egg:Paste#urlmap -/v2.0 = public_api -/v3 = api_v3 -/ = public_version_api - -[composite:admin] -use = egg:Paste#urlmap -/v2.0 = admin_api -/v3 = api_v3 -/ = admin_version_api diff --git a/doc/training-guides/basic-install-guide/samples/network-interfaces.conf.txt b/doc/training-guides/basic-install-guide/samples/network-interfaces.conf.txt deleted file mode 100644 index c5fed3d5..00000000 --- a/doc/training-guides/basic-install-guide/samples/network-interfaces.conf.txt +++ /dev/null @@ -1,15 +0,0 @@ -# The loopback network interface -auto lo -iface lo inet loopback - -# The primary network interface -auto eth0 -iface eth0 inet dhcp - -# Bridge network interface for VM 
networks -auto br100 -iface br100 inet static -address 192.168.100.1 -netmask 255.255.255.0 -bridge_stp off -bridge_fd 0 \ No newline at end of file diff --git a/doc/training-guides/basic-install-guide/samples/object-server-1.conf.txt b/doc/training-guides/basic-install-guide/samples/object-server-1.conf.txt deleted file mode 100644 index 2a0ff66c..00000000 --- a/doc/training-guides/basic-install-guide/samples/object-server-1.conf.txt +++ /dev/null @@ -1,21 +0,0 @@ -[DEFAULT] -devices = /srv/1/node -mount_check = false -bind_port = 6010 -user = swift -log_facility = LOG_LOCAL2 - -[pipeline:main] -pipeline = object-server - -[app:object-server] -use = egg:swift#object - -[object-replicator] -vm_test_mode = yes - -[object-updater] - -[object-auditor] - -[object-expirer] \ No newline at end of file diff --git a/doc/training-guides/basic-install-guide/samples/object-server.conf.txt b/doc/training-guides/basic-install-guide/samples/object-server.conf.txt deleted file mode 100644 index e7a4ea98..00000000 --- a/doc/training-guides/basic-install-guide/samples/object-server.conf.txt +++ /dev/null @@ -1,17 +0,0 @@ -[DEFAULT] -bind_ip = 0.0.0.0 -workers = 2 - -[pipeline:main] -pipeline = object-server - -[app:object-server] -use = egg:swift#object - -[object-replicator] - -[object-updater] - -[object-auditor] - -[object-expirer] \ No newline at end of file diff --git a/doc/training-guides/basic-install-guide/samples/openrc.txt b/doc/training-guides/basic-install-guide/samples/openrc.txt deleted file mode 100644 index a0519f08..00000000 --- a/doc/training-guides/basic-install-guide/samples/openrc.txt +++ /dev/null @@ -1,5 +0,0 @@ -export OS_USERNAME=admin -export OS_TENANT_NAME=demo -export OS_PASSWORD=secrete -export OS_AUTH_URL=http://192.168.206.130:5000/v2.0/ -export OS_REGION_NAME=RegionOne diff --git a/doc/training-guides/basic-install-guide/samples/swift.conf.txt b/doc/training-guides/basic-install-guide/samples/swift.conf.txt deleted file mode 100644 index 
db9cf4b3..00000000 --- a/doc/training-guides/basic-install-guide/samples/swift.conf.txt +++ /dev/null @@ -1,4 +0,0 @@ -[swift-hash] -# random unique string that can never change (DO NOT LOSE) -swift_hash_path_prefix = xrfuniounenqjnw -swift_hash_path_suffix = fLIbertYgibbitZ diff --git a/doc/training-guides/basic-install-guide/samples/test-stack.yml b/doc/training-guides/basic-install-guide/samples/test-stack.yml deleted file mode 100644 index 966d136e..00000000 --- a/doc/training-guides/basic-install-guide/samples/test-stack.yml +++ /dev/null @@ -1,26 +0,0 @@ -heat_template_version: 2013-05-23 - -description: Test Template - -parameters: - ImageID: - type: string - description: Image use to boot a server - NetID: - type: string - description: Network ID for the server - -resources: - server1: - type: OS::Nova::Server - properties: - name: "Test server" - image: { get_param: ImageID } - flavor: "m1.tiny" - networks: - - network: { get_param: NetID } - -outputs: - server1_private_ip: - description: IP address of the server in the private network - value: { get_attr: [ server1, first_address ] } diff --git a/doc/training-guides/basic-install-guide/section_basics-database.xml b/doc/training-guides/basic-install-guide/section_basics-database.xml deleted file mode 100644 index 78269e83..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-database.xml +++ /dev/null @@ -1,91 +0,0 @@ - -
- - Database - Most OpenStack services use an SQL database to store information. - The database typically runs on the controller node. The procedures in - this guide use MariaDB or - MySQL depending on the distribution. - OpenStack services also support other SQL databases including - PostgreSQL. - - To install and configure the database server - - Install the packages: - - The Python MySQL library is compatible with MariaDB. - - # apt-get install mariadb-server python-mysqldb - # apt-get install mysql-server python-mysqldb - # yum install mariadb mariadb-server MySQL-python - On openSUSE: - # zypper install mariadb-client mariadb python-mysql - On SLES: - # zypper install mysql-client mysql python-mysql - - - Choose a suitable password for the database root account. - - - Edit the - /etc/mysql/my.cnf file and complete the - following actions: - Edit the - /etc/my.cnf file and complete the following - actions: - - - In the [mysqld] section, set the - bind-address key to the management IP - address of the controller node to enable access by other - nodes via the management network: - [mysqld] -... -bind-address = 10.0.0.11 - - - In the [mysqld] section, set the - following keys to enable useful options and the UTF-8 - character set: - [mysqld] -... -default-storage-engine = innodb -innodb_file_per_table -collation-server = utf8_general_ci -init-connect = 'SET NAMES utf8' -character-set-server = utf8 - - - - - - To finalize installation - - Restart the database service: - # service mysql restart - - - Start the database service and configure it to start when the - system boots: - # systemctl enable mariadb.service -# systemctl start mariadb.service - On SLES: - # service mysql start -# chkconfig mysql on - On openSUSE: - # systemctl start mysql.service -# systemctl enable mysql.service - - - Secure the database service: - Secure the database - service including choosing a suitable password for the root - account: - # mysql_secure_installation - - -
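The [mysqld] edits above can also be made non-interactively. A minimal sketch, assuming GNU sed; the 10.0.0.11 address is this guide's example management IP, and the script works on a scratch file rather than a live /etc/mysql/my.cnf:

```shell
#!/bin/sh
# Sketch: append "key = value" under the [mysqld] header unless the key
# is already present. Uses a scratch file for illustration.
CONF="$(mktemp)"
printf '[mysqld]\n' > "$CONF"

set_mysqld_option() {
    key="$1"; value="$2"
    # Skip keys that are already configured so the edit stays idempotent.
    if ! grep -q "^${key}" "$CONF"; then
        # GNU sed "a" appends the new line directly after [mysqld].
        sed -i "/^\[mysqld\]/a ${key} = ${value}" "$CONF"
    fi
}

set_mysqld_option bind-address 10.0.0.11
set_mysqld_option character-set-server utf8
grep '^bind-address' "$CONF"   # prints "bind-address = 10.0.0.11"
```

On a real controller node you would point CONF at your distribution's configuration file and restart the database service afterwards, as described above.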
diff --git a/doc/training-guides/basic-install-guide/section_basics-networking-neutron.xml b/doc/training-guides/basic-install-guide/section_basics-networking-neutron.xml deleted file mode 100644 index b6fd3872..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-networking-neutron.xml +++ /dev/null @@ -1,355 +0,0 @@ - -
- - OpenStack Networking (neutron) - The example architecture with OpenStack Networking (neutron) requires - one controller node, one network node, and at least one compute node. - The controller node contains one network interface on the - management network. The network node contains - one network interface on the management network, one on the - instance tunnels network, and one on the - external network. The compute node contains - one network interface on the management network and one on the - instance tunnels network. - - Network interface names vary by distribution. Traditionally, - interfaces use "eth" followed by a sequential number. To cover all - variations, this guide simply refers to the first interface as the - interface with the lowest number, the second interface as the - interface with the middle number, and the third interface as the - interface with the highest number. - -
- Three-node architecture with OpenStack Networking (neutron) - - - - - -
- Unless you intend to use the exact configuration provided in this - example architecture, you must modify the networks in this procedure to - match your environment. Also, each node must resolve the other nodes - by name in addition to IP address. For example, the - controller name must resolve to - 10.0.0.11, the IP address of the management - interface on the controller node. - - Reconfiguring network interfaces will interrupt network - connectivity. We recommend using a local terminal session for these - procedures. - -
- Controller node - - To configure networking: - - Configure the first interface as the management interface: - IP address: 10.0.0.11 - Network mask: 255.255.255.0 (or /24) - Default gateway: 10.0.0.1 - - - Reboot the system to activate the changes. - - - - To configure name resolution: - - Set the hostname of the node to - controller. - - - Edit the /etc/hosts file to contain the - following: - # controller -10.0.0.11 controller - -# network -10.0.0.21 network - -# compute1 -10.0.0.31 compute1 - - You must remove or comment the line beginning with - 127.0.1.1. - - - -
-
- Network node - - To configure networking: - - Configure the first interface as the management interface: - IP address: 10.0.0.21 - Network mask: 255.255.255.0 (or /24) - Default gateway: 10.0.0.1 - - - Configure the second interface as the instance tunnels - interface: - IP address: 10.0.1.21 - Network mask: 255.255.255.0 (or /24) - - - The external interface uses a special configuration without an - IP address assigned to it. Configure the third interface as the - external interface: - Replace INTERFACE_NAME with the - actual interface name. For example, eth2 or - ens256. - - - Edit the /etc/network/interfaces file - to contain the following: - # The external network interface -auto INTERFACE_NAME -iface INTERFACE_NAME inet manual - up ip link set dev $IFACE up - down ip link set dev $IFACE down - - - Edit the - /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME - file to contain the following: - Do not change the HWADDR and - UUID keys. - DEVICE=INTERFACE_NAME -TYPE=Ethernet -ONBOOT="yes" -BOOTPROTO="none" - - - Edit the - /etc/sysconfig/network/ifcfg-INTERFACE_NAME file to - contain the following: - STARTMODE='auto' -BOOTPROTO='static' - - - - - Reboot the system to activate the changes. - - - - To configure name resolution: - - Set the hostname of the node to network. - - - Edit the /etc/hosts file to contain the - following: - # network -10.0.0.21 network - -# controller -10.0.0.11 controller - -# compute1 -10.0.0.31 compute1 - - You must remove or comment the line beginning with - 127.0.1.1. - - - -
-
- Compute node - - To configure networking: - - Configure the first interface as the management interface: - IP address: 10.0.0.31 - Network mask: 255.255.255.0 (or /24) - Default gateway: 10.0.0.1 - - Additional compute nodes should use 10.0.0.32, 10.0.0.33, - and so on. - - - - Configure the second interface as the instance tunnels - interface: - IP address: 10.0.1.31 - Network mask: 255.255.255.0 (or /24) - - Additional compute nodes should use 10.0.1.32, 10.0.1.33, - and so on. - - - - Reboot the system to activate the changes. - - - - To configure name resolution: - - Set the hostname of the node to compute1. - - - Edit the /etc/hosts file to contain the - following: - # compute1 -10.0.0.31 compute1 - -# controller -10.0.0.11 controller - -# network -10.0.0.21 network - - You must remove or comment the line beginning with - 127.0.1.1. - - - -
-
- Verify connectivity - We recommend that you verify network connectivity to the Internet - and among the nodes before proceeding further. - - - From the controller node, - ping a site on the Internet: - # ping -c 4 openstack.org -PING openstack.org (174.143.194.225) 56(84) bytes of data. -64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms -64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms - ---- openstack.org ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3022ms -rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms - - - From the controller node, - ping the management interface on the - network node: - # ping -c 4 network -PING network (10.0.0.21) 56(84) bytes of data. -64 bytes from network (10.0.0.21): icmp_seq=1 ttl=64 time=0.263 ms -64 bytes from network (10.0.0.21): icmp_seq=2 ttl=64 time=0.202 ms -64 bytes from network (10.0.0.21): icmp_seq=3 ttl=64 time=0.203 ms -64 bytes from network (10.0.0.21): icmp_seq=4 ttl=64 time=0.202 ms - ---- network ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3000ms -rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms - - - From the controller node, - ping the management interface on the - compute node: - # ping -c 4 compute1 -PING compute1 (10.0.0.31) 56(84) bytes of data. 
-64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms -64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms -64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms -64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms - ---- compute1 ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3000ms -rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms - - - From the network node, - ping a site on the Internet: - # ping -c 4 openstack.org -PING openstack.org (174.143.194.225) 56(84) bytes of data. -64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms -64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms - ---- openstack.org ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3022ms -rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms - - - From the network node, - ping the management interface on the - controller node: - # ping -c 4 controller -PING controller (10.0.0.11) 56(84) bytes of data. -64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms -64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms -64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms -64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms - ---- controller ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3000ms -rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms - - - From the network node, - ping the instance tunnels interface on the - compute node: - # ping -c 4 10.0.1.31 -PING 10.0.1.31 (10.0.1.31) 56(84) bytes of data.
-64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=1 ttl=64 time=0.263 ms -64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=2 ttl=64 time=0.202 ms -64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=3 ttl=64 time=0.203 ms -64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=4 ttl=64 time=0.202 ms - ---- 10.0.1.31 ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3000ms -rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms - - - From the compute node, - ping a site on the Internet: - # ping -c 4 openstack.org -PING openstack.org (174.143.194.225) 56(84) bytes of data. -64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms -64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms - ---- openstack.org ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3022ms -rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms - - - From the compute node, - ping the management interface on the - controller node: - # ping -c 4 controller -PING controller (10.0.0.11) 56(84) bytes of data. -64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms -64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms -64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms -64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms - ---- controller ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3000ms -rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms - - - From the compute node, - ping the instance tunnels interface on the - network node: - # ping -c 4 10.0.1.21 -PING 10.0.1.21 (10.0.1.21) 56(84) bytes of data. 
-64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=1 ttl=64 time=0.263 ms -64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=2 ttl=64 time=0.202 ms -64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=3 ttl=64 time=0.203 ms -64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=4 ttl=64 time=0.202 ms - ---- 10.0.1.21 ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3000ms -rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms - - -
-
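The connectivity checks above can be run in one pass. A hedged sketch: the host list is this guide's example node names and tunnel addresses, and the `-c`/`-W` options follow common iputils `ping` behavior:

```shell
#!/bin/sh
# Sketch: ping each node once and summarize reachability. A FAIL line
# means the node did not answer within the 2-second timeout.
check_nodes() {
    rc=0
    for host in "$@"; do
        if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
            echo "OK   $host"
        else
            echo "FAIL $host"
            rc=1
        fi
    done
    return $rc
}

# Example names/addresses from this guide's three-node layout.
check_nodes controller network compute1 10.0.1.21 10.0.1.31 \
    || echo "one or more nodes unreachable"
```

Adjust the host list to your environment; a FAIL for a name (rather than an IP) usually points at a missing /etc/hosts entry.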
diff --git a/doc/training-guides/basic-install-guide/section_basics-networking-nova.xml b/doc/training-guides/basic-install-guide/section_basics-networking-nova.xml deleted file mode 100644 index 0e78f71b..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-networking-nova.xml +++ /dev/null @@ -1,218 +0,0 @@ - -
- - Legacy networking (nova-network) - The example architecture with legacy networking (nova-network) - requires a controller node and at least one compute node. The controller - node contains one network interface on the - management network. The compute node contains - one network interface on the management network and one on the - external network. - - Network interface names vary by distribution. Traditionally, - interfaces use "eth" followed by a sequential number. To cover all - variations, this guide simply refers to the first interface as the - interface with the lowest number and the second interface as the - interface with the highest number. - -
- Two-node architecture with legacy networking (nova-network) - - - - - -
- Unless you intend to use the exact configuration provided in this - example architecture, you must modify the networks in this procedure to - match your environment. Also, each node must resolve the other nodes - by name in addition to IP address. For example, the - controller name must resolve to - 10.0.0.11, the IP address of the management - interface on the controller node. - - Reconfiguring network interfaces will interrupt network - connectivity. We recommend using a local terminal session for these - procedures. - -
- Controller node - - To configure networking: - - Configure the first interface as the management interface: - IP address: 10.0.0.11 - Network mask: 255.255.255.0 (or /24) - Default gateway: 10.0.0.1 - - - Reboot the system to activate the changes. - - - - To configure name resolution: - - Set the hostname of the node to - controller. - - - Edit the /etc/hosts file to contain the - following: - # controller -10.0.0.11 controller - -# compute1 -10.0.0.31 compute1 - - You must remove or comment the line beginning with - 127.0.1.1. - - - -
-
- Compute node - - To configure networking: - - Configure the first interface as the management interface: - IP address: 10.0.0.31 - Network mask: 255.255.255.0 (or /24) - Default gateway: 10.0.0.1 - - Additional compute nodes should use 10.0.0.32, 10.0.0.33, - and so on. - - - - The external interface uses a special configuration without an - IP address assigned to it. Configure the second interface as the - external interface: - Replace INTERFACE_NAME with the - actual interface name. For example, eth1 or - ens224. - - - Edit the /etc/network/interfaces file - to contain the following: - # The external network interface -auto INTERFACE_NAME -iface INTERFACE_NAME inet manual - up ip link set dev $IFACE up - down ip link set dev $IFACE down - - - Edit the - /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME - file to contain the following: - Do not change the HWADDR and - UUID keys. - DEVICE=INTERFACE_NAME -TYPE=Ethernet -ONBOOT="yes" -BOOTPROTO="none" - - - Edit the - /etc/sysconfig/network/ifcfg-INTERFACE_NAME - file to contain the following: - STARTMODE='auto' -BOOTPROTO='static' - - - - - Reboot the system to activate the changes. - - - - To configure name resolution: - - Set the hostname of the node to compute1. - - - Edit the /etc/hosts file to contain the - following: - # compute1 -10.0.0.31 compute1 - -# controller -10.0.0.11 controller - - You must remove or comment the line beginning with - 127.0.1.1. - - - -
-
- Verify connectivity - We recommend that you verify network connectivity to the Internet - and among the nodes before proceeding further. - - - From the controller node, - ping a site on the Internet: - # ping -c 4 openstack.org -PING openstack.org (174.143.194.225) 56(84) bytes of data. -64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms -64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms - ---- openstack.org ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3022ms -rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms - - - From the controller node, - ping the management interface on the - compute node: - # ping -c 4 compute1 -PING compute1 (10.0.0.31) 56(84) bytes of data. -64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms -64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms -64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms -64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms - ---- compute1 ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3000ms -rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms - - - From the compute node, - ping a site on the Internet: - # ping -c 4 openstack.org -PING openstack.org (174.143.194.225) 56(84) bytes of data. 
-64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms -64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms -64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms - ---- openstack.org ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3022ms -rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms - - - From the compute node, - ping the management interface on the - controller node: - # ping -c 4 controller -PING controller (10.0.0.11) 56(84) bytes of data. -64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms -64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms -64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms -64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms - ---- controller ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3000ms -rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms - - -
-
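The ping transcripts above can also be screened automatically, for example when checking several nodes. A minimal sketch assuming a POSIX shell and awk; the check_loss helper and the saved sample transcript are our own illustration, not commands from the guide:

```shell
# Sketch only: extract the packet-loss percentage from saved ping output.
check_loss() {
  awk -F, '/packet loss/ {gsub(/[ %]/, "", $3); sub(/packetloss/, "", $3); print $3}'
}

# Sample transcript mirroring the output shown above.
sample='--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms'

loss=$(printf '%s\n' "$sample" | check_loss)
if [ "$loss" -eq 0 ]; then
  echo "connectivity OK"
else
  echo "packet loss: ${loss}%"
fi
```

In practice you would pipe the real command, such as `ping -c 4 controller`, into the helper instead of the saved sample.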
diff --git a/doc/training-guides/basic-install-guide/section_basics-networking.xml b/doc/training-guides/basic-install-guide/section_basics-networking.xml deleted file mode 100644 index b5e09c9f..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-networking.xml +++ /dev/null @@ -1,67 +0,0 @@ - -
-
 - 
 - Networking
 - After installing the operating system on each node for the
 - architecture that you choose to deploy, you must configure the network
 - interfaces. We recommend that you disable any automated network
 - management tools and manually edit the appropriate configuration files
 - for your distribution. For more information on how to configure
 - networking on your distribution, see its documentation.
 - 
 - To disable Network Manager:
 - 
 - Use the YaST network module:
 - # yast2 network
 - For more information, see the
 - SLES or
 - the
 - openSUSE documentation.
 - 
 - 
 - 
 - RHEL and CentOS enable a restrictive
 - firewall by default. During the installation
 - process, certain steps will fail unless you alter or disable the
 - firewall. For more information about securing your environment, refer
 - to the OpenStack
 - Security Guide.
 - openSUSE and SLES enable a restrictive
 - firewall by default. During the installation
 - process, certain steps will fail unless you alter or disable the
 - firewall. For more information about securing your environment, refer
 - to the OpenStack
 - Security Guide.
 - Your distribution does not enable a
 - restrictive firewall by default. For more
 - information about securing your environment, refer to the
 - OpenStack
 - Security Guide.
 - Proceed to network configuration for the example
 - OpenStack Networking (neutron)
 - or legacy
 - networking (nova-network) architecture.
 - 
 -
diff --git a/doc/training-guides/basic-install-guide/section_basics-ntp.xml b/doc/training-guides/basic-install-guide/section_basics-ntp.xml deleted file mode 100644 index 474b49c0..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-ntp.xml +++ /dev/null @@ -1,168 +0,0 @@ - -
- - Network Time Protocol (NTP) - You must install - NTP to - properly synchronize services among nodes. We recommend that you configure - the controller node to reference more accurate (lower stratum) servers and - other nodes to reference the controller node. -
- Controller node - - To install the NTP service - - # apt-get install ntp - # yum install ntp - # zypper install ntp - - - - To configure the NTP service - By default, the controller node synchronizes the time via a pool - of public servers. However, you can optionally edit the - /etc/ntp.conf file to configure alternative - servers such as those provided by your organization. - - Edit the /etc/ntp.conf file and add, - change, or remove the following keys as necessary for your - environment: - server NTP_SERVER iburst -restrict -4 default kod notrap nomodify -restrict -6 default kod notrap nomodify - Replace NTP_SERVER with the - hostname or IP address of a suitable more accurate (lower stratum) - NTP server. The configuration supports multiple - server keys. - - For the restrict keys, you essentially - remove the nopeer and noquery - options. - - - Remove the /var/lib/ntp/ntp.conf.dhcp file - if it exists. - - - - Restart the NTP service: - # service ntp restart - - - Start the NTP service and configure it to start when the system - boots: - # systemctl enable ntpd.service -# systemctl start ntpd.service - On SLES: - # service ntp start -# chkconfig ntp on - On openSUSE: - # systemctl enable ntp.service -# systemctl start ntp.service - - -
-
- Other nodes - - To install the NTP service - - # apt-get install ntp - # yum install ntp - # zypper install ntp - - - - To configure the NTP service - Configure the network and compute nodes to reference the - controller node. - - Edit the /etc/ntp.conf file: - Comment out or remove all but one server - key and change it to reference the controller node. - server controller iburst - - Remove the /var/lib/ntp/ntp.conf.dhcp file - if it exists. - - - - Restart the NTP service: - # service ntp restart - - - Start the NTP service and configure it to start when the system - boots: - # systemctl enable ntpd.service -# systemctl start ntpd.service - On SLES: - # service ntp start -# chkconfig ntp on - On openSUSE: - # systemctl enable ntp.service -# systemctl start ntp.service - - -
-
- Verify operation - We recommend that you verify NTP synchronization before proceeding - further. Some nodes, particularly those that reference the controller - node, can take several minutes to synchronize. - - - Run this command on the controller node: - - # ntpq -c peers - remote refid st t when poll reach delay offset jitter -============================================================================== -*ntp-server1 192.0.2.11 2 u 169 1024 377 1.901 -0.611 5.483 -+ntp-server2 192.0.2.12 2 u 887 1024 377 0.922 -0.246 2.864 - Contents in the remote column should - indicate the hostname or IP address of one or more NTP servers. - - Contents in the refid column typically - reference IP addresses of upstream servers. - - - - Run this command on the controller node: - - # ntpq -c assoc -ind assid status conf reach auth condition last_event cnt -=========================================================== - 1 20487 961a yes yes none sys.peer sys_peer 1 - 2 20488 941a yes yes none candidate sys_peer 1 - Contents in the condition column should - indicate sys.peer for at least one server. - - - Run this command on all other nodes: - # ntpq -c peers - remote refid st t when poll reach delay offset jitter -============================================================================== -*controller 192.0.2.21 3 u 47 64 37 0.308 -0.251 0.079 - Contents in the remote column should - indicate the hostname of the controller node. - - Contents in the refid column typically - reference IP addresses of upstream servers. - - - - Run this command on all other nodes: - - # ntpq -c assoc -ind assid status conf reach auth condition last_event cnt -=========================================================== - 1 21181 963a yes yes none sys.peer sys_peer 3 - Contents in the condition column should - indicate sys.peer. - - -
-
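The sys.peer check described above can be scripted as well. A minimal sketch that scans saved `ntpq -c assoc` output for a sys.peer association; the has_sys_peer helper and the sample text are our own illustration, mirroring the output shown above:

```shell
# Sketch only: count associations whose condition column is sys.peer.
has_sys_peer() {
  grep -c 'sys\.peer'
}

sample='  1 20487  961a   yes   yes  none  sys.peer    sys_peer  1
  2 20488  941a   yes   yes  none  candidate   sys_peer  1'

count=$(printf '%s\n' "$sample" | has_sys_peer)
if [ "$count" -ge 1 ]; then
  echo "NTP synchronized"
else
  echo "no sys.peer association yet"
fi
```

On a live node you would pipe `ntpq -c assoc` into the helper instead of the saved sample, keeping in mind that synchronization can take several minutes.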
diff --git a/doc/training-guides/basic-install-guide/section_basics-packages.xml b/doc/training-guides/basic-install-guide/section_basics-packages.xml deleted file mode 100644 index 222b4823..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-packages.xml +++ /dev/null @@ -1,168 +0,0 @@ - -
- - OpenStack packages - Distributions release OpenStack packages as part of the distribution - or using other methods because of differing release schedules. Perform - these procedures on all nodes. - - Disable or remove any automatic update services because they can - impact your OpenStack environment. - - - To configure prerequisites - - Install the python-software-properties package - to ease repository management: - # apt-get install python-software-properties - - - - To enable the OpenStack repository - - Enable the Ubuntu Cloud archive repository: - # add-apt-repository cloud-archive:juno - - - - To configure prerequisites - - Install the yum-plugin-priorities package to - enable assignment of relative priorities within repositories: - # yum install yum-plugin-priorities - - - Install the epel-release package to enable the - EPEL repository: - # yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm - - Fedora does not require this package. - - - - - To enable the OpenStack repository - - Install the rdo-release-juno package to enable - the RDO repository: - # yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm - - - - To enable the OpenStack repository - - Enable the Open Build Service repositories based on your openSUSE - or SLES version: - On openSUSE 13.1: - # zypper addrepo -f obs://Cloud:OpenStack:Juno/openSUSE_13.1 Juno - On SLES 11 SP3: - # zypper addrepo -f obs://Cloud:OpenStack:Juno/SLE_11_SP3 Juno - - The packages are signed by GPG key 893A90DAD85F9316. You should - verify the fingerprint of the imported GPG key before using - it. 
- Key ID: 893A90DAD85F9316 -Key Name: Cloud:OpenStack OBS Project <Cloud:OpenStack@build.opensuse.org> -Key Fingerprint: 35B34E18ABC1076D66D5A86B893A90DAD85F9316 -Key Created: Tue Oct 8 13:34:21 2013 -Key Expires: Thu Dec 17 13:34:21 2015 - - - - - To use the Debian Wheezy backports archive for - Juno - The Juno release is available - only in Debian Experimental (otherwise called rc-buggy), - as Jessie is frozen soon, and will contain Icehouse. - However, the Debian maintainers - of OpenStack also maintain a non-official Debian repository - for OpenStack containing Wheezy backports. - - On all nodes, install the Debian Wheezy backport repository - Juno: - # echo "deb http://archive.gplhost.com/debian juno-backports main" >>/etc/apt/sources.list - - - Install the Debian Wheezy OpenStack repository for - Juno: - # echo "deb http://archive.gplhost.com/debian juno main" >>/etc/apt/sources.list - - - Update the repository database and install the key: - # apt-get update && apt-get install gplhost-archive-keyring - - - Update the package database, upgrade your system, and reboot - for all changes to take effect: - # apt-get update && apt-get dist-upgrade -# reboot - - - Numerous archive.gplhost.com mirrors are - available around the world. All are available with both FTP and - HTTP protocols (you should use the closest mirror). The list of - mirrors is available at http://archive.gplhost.com/readme.mirrors. - - Manually install python-argparse - The Debian OpenStack packages are maintained on Debian Sid - (also known as Debian Unstable) - the current development - version. Backported packages run correctly on Debian Wheezy with - one caveat: - All OpenStack packages are written in Python. Wheezy uses - Python 2.6 and 2.7, with Python 2.6 as the default interpreter; - Sid has only Python 2.7. There is one packaging change between - these two. In Python 2.6, you installed the - python-argparse package separately. In - Python 2.7, this package is installed by default. 
Unfortunately,
 - in Python 2.7, this package does not include the Provides:
 - python-argparse directive.
 - 
 - Because the packages are maintained in Sid, where the
 - Provides: python-argparse directive causes an
 - error, and the Debian OpenStack maintainer wants to maintain one
 - version of the OpenStack packages, you must manually install the
 - python-argparse package on each OpenStack system
 - that runs Debian Wheezy before you install the other OpenStack
 - packages. Use the following command to install the
 - package:
 - # apt-get install python-argparse
 - This caveat applies to most OpenStack packages in
 - Wheezy.
 - 
 - 
 - 
 - To finalize installation
 - 
 - Upgrade the packages on your system:
 - # apt-get update && apt-get dist-upgrade
 - # yum upgrade
 - # zypper refresh && zypper dist-upgrade
 - 
 - If the upgrade process includes a new kernel, reboot your system
 - to activate it.
 - 
 - 
 - 
 - RHEL and CentOS enable SELinux by
 - default. Install the openstack-selinux package
 - to automatically manage security policies for OpenStack
 - services:
 - # yum install openstack-selinux
 - 
 - Fedora does not require this package.
 - 
 - 
 - The installation process for this package can take a
 - while.
 - 
 - 
 - 
diff --git a/doc/training-guides/basic-install-guide/section_basics-passwords.xml b/doc/training-guides/basic-install-guide/section_basics-passwords.xml deleted file mode 100644 index 4be05c8c..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-passwords.xml +++ /dev/null @@ -1,119 +0,0 @@ - -
-
 - 
 - Passwords
 - The various OpenStack services and the required software, like the
 - database and the messaging server, have to be password protected. These
 - passwords are used when configuring a service and when accessing a
 - service. You have to choose a password when configuring a service and
 - use that same password when accessing the service later.
 - You can generate random passwords with a tool such as
 - pwgen, or create them one at a
 - time by running the following command repeatedly:
 - $ openssl rand -hex 10
 - 
 - This guide uses the convention that
 - SERVICE_PASS is
 - the password to access the service SERVICE and
 - SERVICE_DBPASS is
 - the database password used by the service SERVICE to access the
 - database.
 - 
 - The complete list of passwords you need to define in this guide is:
 -
Passwords
Password name                           Description
--------------------------------------  -----------------------------------------------
Database password (no variable used)    Root password for the database
RABBIT_PASS                             Password of user guest of RabbitMQ
KEYSTONE_DBPASS                         Database password of Identity service
DEMO_PASS                               Password of user demo
ADMIN_PASS                              Password of user admin
GLANCE_DBPASS                           Database password for Image Service
GLANCE_PASS                             Password of Image Service user glance
NOVA_DBPASS                             Database password for Compute service
NOVA_PASS                               Password of Compute service user nova
DASH_DBPASS                             Database password for the dashboard
CINDER_DBPASS                           Database password for the Block Storage service
CINDER_PASS                             Password of Block Storage service user cinder
NEUTRON_DBPASS                          Database password for the Networking service
NEUTRON_PASS                            Password of Networking service user neutron
HEAT_DBPASS                             Database password for the Orchestration service
HEAT_PASS                               Password of Orchestration service user heat
CEILOMETER_DBPASS                       Database password for the Telemetry service
CEILOMETER_PASS                         Password of Telemetry service user ceilometer
TROVE_DBPASS                            Database password of Database service
TROVE_PASS                              Password of Database service user trove
-
-
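The openssl command shown above can be run in a loop to pre-generate a password for each variable in the table. A minimal sketch; the subset of names and the passwords.txt file name are our own choices:

```shell
# Sketch only: generate one random password per variable, using the same
# 'openssl rand -hex 10' command as in the text (20 hex characters each).
for name in RABBIT_PASS KEYSTONE_DBPASS DEMO_PASS ADMIN_PASS GLANCE_DBPASS GLANCE_PASS; do
  printf '%s=%s\n' "$name" "$(openssl rand -hex 10)"
done > passwords.txt

cat passwords.txt
```

Keep the resulting file private; you will need the same values again whenever the guide references the corresponding variable.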
diff --git a/doc/training-guides/basic-install-guide/section_basics-prerequisites.xml b/doc/training-guides/basic-install-guide/section_basics-prerequisites.xml deleted file mode 100644 index d17695c6..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-prerequisites.xml +++ /dev/null @@ -1,63 +0,0 @@ - -
-
 - 
 - Before you begin
 - For a functional environment, OpenStack doesn't require a
 - significant amount of resources. We recommend that your environment meets
 - or exceeds the following minimum requirements, which can support several
 - minimal CirrOS instances:
 - 
 - 
 - Controller Node: 1 processor, 2 GB memory, and 5 GB
 - storage
 - 
 - 
 - Network Node: 1 processor, 512 MB memory, and 5 GB
 - storage
 - 
 - 
 - Compute Node: 1 processor, 2 GB memory, and 10 GB
 - storage
 - 
 - 
 - To minimize clutter and provide more resources for OpenStack, we
 - recommend a minimal installation of your Linux distribution. Also, we
 - strongly recommend that you install a 64-bit version of your distribution
 - on at least the compute node. If you install a 32-bit version of your
 - distribution on the compute node, attempting to start an instance using
 - a 64-bit image will fail.
 - 
 - A single disk partition on each node works for most basic
 - installations. However, you should consider
 - Logical Volume Manager (LVM) for installations
 - with optional services such as Block Storage.
 - 
 - Many users build their test environments on
 - virtual machines
 - (VMs). The primary benefits of VMs include the
 - following:
 - 
 - 
 - One physical server can support multiple nodes, each with almost
 - any number of network interfaces.
 - 
 - 
 - The ability to take periodic "snapshots" throughout the installation
 - process and "roll back" to a working configuration in the event of
 - a problem.
 - 
 - 
 - However, VMs will reduce the performance of your instances, particularly
 - if your hypervisor and/or processor lacks support for hardware
 - acceleration of nested VMs.
 - 
 - If you choose to install on VMs, make sure your hypervisor
 - permits promiscuous mode on the
 - external network.
 - 
 - For more information about system requirements, see the
diff --git a/doc/training-guides/basic-install-guide/section_basics-queue.xml b/doc/training-guides/basic-install-guide/section_basics-queue.xml deleted file mode 100644 index 5cd5cd7d..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-queue.xml +++ /dev/null @@ -1,86 +0,0 @@ - -
- - Messaging server - OpenStack uses a message broker to coordinate - operations and status information among services. The message broker - service typically runs on the controller node. OpenStack supports several - message brokers including RabbitMQ, - Qpid, and ZeroMQ. - However, most distributions that package OpenStack support a particular - message broker. This guide covers the RabbitMQ message broker which is - supported by each distribution. If you prefer to implement a - different message broker, consult the documentation associated - with it. - - - RabbitMQ - - - Qpid - - - ZeroMQ - - - - To install the <application>RabbitMQ</application> message broker service - - # apt-get install rabbitmq-server - - - # yum install rabbitmq-server - - - # zypper install rabbitmq-server - - - - To configure the message broker service - - Start the message broker service and configure it to start when the - system boots: - # systemctl enable rabbitmq-server.service -# systemctl start rabbitmq-server.service - On SLES: - # service rabbitmq-server start -# chkconfig rabbitmq-server on - On openSUSE: - # systemctl enable rabbitmq-server.service -# systemctl start rabbitmq-server.service - - - The message broker creates a default account that uses - guest for the username and password. To simplify - installation of your test environment, we recommend that you use this - account, but change the password for it. - Run the following command: - Replace RABBIT_PASS with a suitable - password. - # rabbitmqctl change_password guest RABBIT_PASS -Changing password for user "guest" ... -...done. - You must configure the rabbit_password key - in the configuration file for each OpenStack service that uses the - message broker. - - For production environments, you should create a unique account - with suitable password. For more information on securing the - message broker, see the - documentation. 
- If you decide to create a unique account with a suitable password
 - for your test environment, you must configure the
 - rabbit_userid and
 - rabbit_password keys in the configuration file
 - of each OpenStack service that uses the message broker.
 - 
 - 
 - 
 - Congratulations, now you are ready to install OpenStack
 - services!
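As an illustration of the rabbit_userid and rabbit_password keys mentioned above, such settings might look like this in a service's configuration file. This is a sketch only: the account name openstack is a hypothetical replacement for guest, and the other keys follow the style used elsewhere in this guide:

```
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
```

Replace RABBIT_PASS with the password you chose for the account.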
diff --git a/doc/training-guides/basic-install-guide/section_basics-security.xml b/doc/training-guides/basic-install-guide/section_basics-security.xml deleted file mode 100644 index 766e7fed..00000000 --- a/doc/training-guides/basic-install-guide/section_basics-security.xml +++ /dev/null @@ -1,130 +0,0 @@ - -
-
 - 
 - Security
 - OpenStack services support various security methods including
 - password, policy, and encryption. Additionally, supporting services
 - including the database server and message broker support at least
 - password security.
 - To ease the installation process, this guide only covers password
 - security where applicable. You can create secure passwords manually,
 - generate them using a tool such as
 - pwgen, or
 - produce them by running the following command:
 - $ openssl rand -hex 10
 - For OpenStack services, this guide uses
 - SERVICE_PASS to reference service account
 - passwords and SERVICE_DBPASS to reference
 - database passwords.
 - The following table provides a list of services that require
 - passwords and their associated references in the guide:
 -
Passwords
Password name                           Description
--------------------------------------  -----------------------------------------------
Database password (no variable used)    Root password for the database
RABBIT_PASS                             Password of user guest of RabbitMQ
KEYSTONE_DBPASS                         Database password of Identity service
DEMO_PASS                               Password of user demo
ADMIN_PASS                              Password of user admin
GLANCE_DBPASS                           Database password for Image Service
GLANCE_PASS                             Password of Image Service user glance
NOVA_DBPASS                             Database password for Compute service
NOVA_PASS                               Password of Compute service user nova
DASH_DBPASS                             Database password for the dashboard
CINDER_DBPASS                           Database password for the Block Storage service
CINDER_PASS                             Password of Block Storage service user cinder
NEUTRON_DBPASS                          Database password for the Networking service
NEUTRON_PASS                            Password of Networking service user neutron
HEAT_DBPASS                             Database password for the Orchestration service
HEAT_PASS                               Password of Orchestration service user heat
CEILOMETER_DBPASS                       Database password for the Telemetry service
CEILOMETER_PASS                         Password of Telemetry service user ceilometer
TROVE_DBPASS                            Database password of Database service
TROVE_PASS                              Password of Database service user trove
-
- OpenStack and supporting services require administrative privileges - during installation and operation. In some cases, services perform - modifications to the host that can interfere with deployment automation - tools such as Ansible, Chef, and Puppet. For example, some OpenStack - services add a root wrapper to sudo that can interfere - with security policies. See the - Cloud Administrator Guide - for more information. Also, the Networking service assumes default values - for kernel network parameters and modifies firewall rules. To avoid most - issues during your initial installation, we recommend using a stock - deployment of a supported distribution on your hosts. However, if you - choose to automate deployment of your hosts, review the configuration - and policies applied to them before proceeding further. -
diff --git a/doc/training-guides/basic-install-guide/section_ceilometer-cinder.xml b/doc/training-guides/basic-install-guide/section_ceilometer-cinder.xml deleted file mode 100644 index da54bba5..00000000 --- a/doc/training-guides/basic-install-guide/section_ceilometer-cinder.xml +++ /dev/null @@ -1,46 +0,0 @@ - -
-
 - 
 - Add the Block Storage service agent for Telemetry
 - 
 - 
 - To retrieve volume samples, you must configure the Block
 - Storage service to send notifications to the bus.
 - Edit /etc/cinder/cinder.conf
 - and add the following to the [DEFAULT] section on the controller
 - and volume nodes:
 - control_exchange = cinder
-notification_driver = cinder.openstack.common.notifier.rpc_notifier
 - 
 - Restart the Block Storage services with their new
 - settings.
 - On the controller node:
 - # service cinder-api restart
-# service cinder-scheduler restart
 - # systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
 - On SLES:
 - # service openstack-cinder-api restart
-# service openstack-cinder-scheduler restart
 - On openSUSE:
 - # systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
 - On the storage node:
 - # service cinder-volume restart
 - # systemctl restart openstack-cinder-volume.service
 - On SLES:
 - # service openstack-cinder-volume restart
 - On openSUSE:
 - # systemctl restart openstack-cinder-volume.service
 - 
 - 
 - If you want to collect OpenStack Block Storage notifications on demand,
 - you can use cinder-volume-usage-audit from OpenStack Block Storage.
 - For more information, see Block Storage audit script setup to get notifications.
 - 
 -
diff --git a/doc/training-guides/basic-install-guide/section_ceilometer-controller.xml b/doc/training-guides/basic-install-guide/section_ceilometer-controller.xml deleted file mode 100644 index 6d70adc2..00000000 --- a/doc/training-guides/basic-install-guide/section_ceilometer-controller.xml +++ /dev/null @@ -1,384 +0,0 @@ - -
- Install and configure controller node - This section describes how to install and configure the Telemetry - module, code-named ceilometer, on the controller node. The Telemetry - module uses separate agents to collect measurements from each OpenStack - service in your environment. - - To configure prerequisites - Before you install and configure Telemetry, you must install - MongoDB, create a MongoDB database, and - create Identity service credentials including endpoints. - - Enable the Open Build Service repositories for MongoDB based on - your openSUSE or SLES version: - On openSUSE: - # zypper addrepo -f obs://server:database/openSUSE_13.1 Database - On SLES: - # zypper addrepo -f obs://server:database/SLE_11_SP3 Database - - The packages are signed by GPG key - 562111AC05905EA8. You should - verify the fingerprint of the imported GPG key before using - it. - Key Name: server:database OBS Project <server:database@build.opensuse.org> -Key Fingerprint: 116EB86331583E47E63CDF4D562111AC05905EA8 -Key Created: Thu Oct 11 20:08:39 2012 -Key Expires: Sat Dec 20 20:08:39 2014 - - - - Install the MongoDB package: - # yum install mongodb-server mongodb - # zypper install mongodb - # apt-get install mongodb-server - - - Edit the /etc/mongodb.conf file and - complete the following actions: - - - Configure the bind_ip key to use the - management interface IP address of the controller node. - bind_ip = 10.0.0.11 - - - By default, MongoDB creates several 1GB journal files - in the /var/lib/mongodb/journal - directory. If you want to reduce the size of each journal file - to 128MB and limit total journal space consumption to - 512MB, assert the smallfiles key: - smallfiles = true - If you change the journaling configuration, - stop the MongoDB service, remove the initial journal files, and - start the service: - # service mongodb stop -# rm /var/lib/mongodb/journal/prealloc.* -# service mongodb start - You can also disable journaling. 
For more information, see - the MongoDB manual. - - - Restart the MongoDB service: - # service mongodb restart - - - Start the MongoDB services and configure them to start when - the system boots: - On SLES: - # service mongodb start -# chkconfig mongodb on - On openSUSE: - # systemctl enable mongodb.service -# systemctl start mongodb.service - - # service mongod start -# chkconfig mongod on - - - - - Create the ceilometer database: - # mongo --host controller --eval ' - db = db.getSiblingDB("ceilometer"); - db.addUser({user: "ceilometer", - pwd: "CEILOMETER_DBPASS", - roles: [ "readWrite", "dbAdmin" ]})' - Replace CEILOMETER_DBPASS with a - suitable password. - - - Source the admin credentials to gain access - to admin-only CLI commands: - $ source admin-openrc.sh - - - To create the Identity service credentials: - - - Create the ceilometer user: - $ keystone user-create --name ceilometer --pass CEILOMETER_PASS - Replace CEILOMETER_PASS with a - suitable password. - - - Link the ceilometer user to the - service tenant and admin - role: - $ keystone user-role-add --user ceilometer --tenant service --role admin - - - Create the ceilometer service: - $ keystone service-create --name ceilometer --type metering \ - --description "Telemetry" - - - Create the Identity service endpoints: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ metering / {print $2}') \ - --publicurl http://controller:8777 \ - --internalurl http://controller:8777 \ - --adminurl http://controller:8777 \ - --region regionOne - - - - - - To configure prerequisites - Before you install and configure Telemetry, you must install - MongoDB. - - Install the MongoDB package: - # apt-get install mongodb-server - - - Edit the /etc/mongodb.conf file and - complete the following actions: - - - Configure the bind_ip key to use the - management interface IP address of the controller node. 
- bind_ip = 10.0.0.11 - - - By default, MongoDB creates several 1GB journal files - in the /var/lib/mongodb/journal - directory. If you want to reduce the size of each journal file - to 128MB and limit total journal space consumption to - 512MB, assert the smallfiles key: - smallfiles = true - If you change the journaling configuration, stop the MongoDB - service, remove the initial journal files, and start the - service: - # service mongodb stop -# rm /var/lib/mongodb/journal/prealloc.* -# service mongodb start - You can also disable journaling. For more information, see - the MongoDB manual. - - - Restart the MongoDB service: - # service mongodb restart - - - - - - To install and configure the Telemetry module components - - Install the packages: - # apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central \ - ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier \ - python-ceilometerclient - # yum install openstack-ceilometer-api openstack-ceilometer-collector \ - openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm \ - python-ceilometerclient - # zypper install openstack-ceilometer-api openstack-ceilometer-collector \ - openstack-ceilometer-agent-notification openstack-ceilometer-agent-central python-ceilometerclient \ - openstack-ceilometer-alarm-evaluator openstack-ceilometer-alarm-notifier - - - Generate a random value to use as the metering secret: - # openssl rand -hex 10 - # openssl rand 10 | hexdump -e '1/1 "%.2x"' - - - Edit the /etc/ceilometer/ceilometer.conf file - and complete the following actions: - - - In the [database] section, - configure database access: - [database] -... -connection = mongodb://ceilometer:CEILOMETER_DBPASS@controller:27017/ceilometer - Replace CEILOMETER_DBPASS with - the password you chose for the Telemetry module database. - - - In the [DEFAULT] section, configure - RabbitMQ message broker access: - [DEFAULT] -... 
-rpc_backend = rabbit
-rabbit_host = controller
-rabbit_password = RABBIT_PASS
 - Replace RABBIT_PASS with the password
 - you chose for the guest account in
 - RabbitMQ.
 - 
 - 
 - In the [DEFAULT] and
 - [keystone_authtoken] sections, configure
 - Identity service access:
 - [DEFAULT]
-...
-auth_strategy = keystone
-
-[keystone_authtoken]
-...
-auth_uri = http://controller:5000/v2.0
-identity_uri = http://controller:35357
-admin_tenant_name = service
-admin_user = ceilometer
-admin_password = CEILOMETER_PASS
 - Replace CEILOMETER_PASS with the
 - password you chose for the ceilometer
 - user in the Identity service.
 - 
 - Comment out any auth_host,
 - auth_port, and
 - auth_protocol options because the
 - identity_uri option replaces them.
 - 
 - 
 - 
 - In the [service_credentials]
 - section, configure service credentials:
 - [service_credentials]
-...
-os_auth_url = http://controller:5000/v2.0
-os_username = ceilometer
-os_tenant_name = service
-os_password = CEILOMETER_PASS
 - Replace CEILOMETER_PASS with
 - the password you chose for the ceilometer
 - user in the Identity service.
 - 
 - 
 - In the [publisher] section, configure
 - the metering secret:
 - [publisher]
-...
-metering_secret = METERING_SECRET
 - Replace METERING_SECRET with the
 - random value that you generated in a previous step.
 - 
 - 
 - In the [DEFAULT] section, configure the log
 - directory:
 - [DEFAULT]
-...
-log_dir = /var/log/ceilometer
 - 
 - 
 - In the [collector] section, configure the
 - dispatcher:
 - 
 - [collector]
-...
-dispatcher = database
 - 
 - 
 - 
 - 
 - 
 - To install and configure the Telemetry module components
 - 
 - Install the packages:
 - # apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central \
 - ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier \
 - python-ceilometerclient
 - 
 - 
 - Respond to prompts for
 - database management,
 - Identity service
 - credentials,
 - service endpoint
 - registration, and
 - message broker
 - credentials.
- - - Generate a random value to use as the metering secret: - # openssl rand -hex 10 - - - Edit the /etc/ceilometer/ceilometer.conf file - and complete the following actions: - - - In the [publisher] section, configure - the metering secret: - [publisher] -... -metering_secret = METERING_SECRET - Replace METERING_SECRET with the - random value that you generated in a previous step. - - - In the [service_credentials] - section, configure service credentials: - [service_credentials] -... -os_auth_url = http://controller:5000/v2.0 -os_username = ceilometer -os_tenant_name = service -os_password = CEILOMETER_PASS - Replace CEILOMETER_PASS with - the password you chose for the ceilometer - user in the Identity service. - - - - - - To finalize installation - - Restart the Telemetry services: - # service ceilometer-agent-central restart -# service ceilometer-agent-notification restart -# service ceilometer-api restart -# service ceilometer-collector restart -# service ceilometer-alarm-evaluator restart -# service ceilometer-alarm-notifier restart - - - Start the Telemetry services and configure them to start when the - system boots: - # systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service \ - openstack-ceilometer-central.service openstack-ceilometer-collector.service \ - openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service -# systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service \ - openstack-ceilometer-central.service openstack-ceilometer-collector.service \ - openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service - On SLES: - # service openstack-ceilometer-api start -# service openstack-ceilometer-agent-notification start -# service openstack-ceilometer-agent-central start -# service openstack-ceilometer-collector start -# service openstack-ceilometer-alarm-evaluator start -# service openstack-ceilometer-alarm-notifier start 
-# chkconfig openstack-ceilometer-api on -# chkconfig openstack-ceilometer-agent-notification on -# chkconfig openstack-ceilometer-agent-central on -# chkconfig openstack-ceilometer-collector on -# chkconfig openstack-ceilometer-alarm-evaluator on -# chkconfig openstack-ceilometer-alarm-notifier on - On openSUSE: - # systemctl enable openstack-ceilometer-api.service -# systemctl enable openstack-ceilometer-agent-notification.service -# systemctl enable openstack-ceilometer-agent-central.service -# systemctl enable openstack-ceilometer-collector.service -# systemctl enable openstack-ceilometer-alarm-evaluator.service -# systemctl enable openstack-ceilometer-alarm-notifier.service -# systemctl start openstack-ceilometer-api.service -# systemctl start openstack-ceilometer-agent-notification.service -# systemctl start openstack-ceilometer-agent-central.service -# systemctl start openstack-ceilometer-collector.service -# systemctl start openstack-ceilometer-alarm-evaluator.service -# systemctl start openstack-ceilometer-alarm-notifier.service - - -
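The metering secret configured above is nothing more than ten random bytes rendered as hex. A minimal sketch, assuming only that openssl is available, of generating the value and sanity-checking its shape before pasting it into the configuration:

```shell
# Generate ten random bytes as a 20-character hex string (same command the guide uses).
METERING_SECRET=$(openssl rand -hex 10)

# Sanity check: exactly 20 lowercase hex characters, no stray whitespace.
echo "$METERING_SECRET" | grep -Eq '^[0-9a-f]{20}$' && echo "secret looks valid"
```

Either form shown in the guide (openssl rand -hex 10, or openssl rand 10 piped through hexdump on SUSE) yields the same shape of value; what matters is that every Telemetry node shares the same hard-to-guess string.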
diff --git a/doc/training-guides/basic-install-guide/section_ceilometer-glance.xml b/doc/training-guides/basic-install-guide/section_ceilometer-glance.xml deleted file mode 100644 index 38eaed86..00000000 --- a/doc/training-guides/basic-install-guide/section_ceilometer-glance.xml +++ /dev/null @@ -1,33 +0,0 @@ - -
- Configure the Image Service for Telemetry - - - To retrieve image samples, you must configure the Image - Service to send notifications to the bus. - Edit - /etc/glance/glance-api.conf and modify the - [DEFAULT] section: - notification_driver = messaging -rpc_backend = rabbit -rabbit_host = controller -rabbit_password = RABBIT_PASS - - - Restart the Image Services with their new - settings: - # service glance-registry restart -# service glance-api restart - # systemctl restart openstack-glance-api.service openstack-glance-registry.service - On SLES: - # service openstack-glance-api restart -# service openstack-glance-registry restart - On openSUSE: - # systemctl restart openstack-glance-api.service openstack-glance-registry.service - - -
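A quick, non-destructive way to confirm the edit took is to grep the keys back out. This sketch runs against a scratch copy rather than the live /etc/glance/glance-api.conf, so the path and the RABBIT_PASS value are illustrative:

```shell
# Scratch stand-in for /etc/glance/glance-api.conf after the [DEFAULT] edit above.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
notification_driver = messaging
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
EOF

# Each required key should be present exactly as "key = value".
for key in notification_driver rpc_backend rabbit_host rabbit_password; do
    grep -q "^${key} = " "$CONF" && echo "${key}: ok"
done
rm -f "$CONF"
```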
diff --git a/doc/training-guides/basic-install-guide/section_ceilometer-install.xml b/doc/training-guides/basic-install-guide/section_ceilometer-install.xml deleted file mode 100644 index a028d893..00000000 --- a/doc/training-guides/basic-install-guide/section_ceilometer-install.xml +++ /dev/null @@ -1,287 +0,0 @@ - - -%openstack; -]> -
- Install the Telemetry module
-
- Telemetry is composed of an API service, a
- collector, and a range of disparate agents. Before
- you can install these agents on nodes such as the compute
- node, you must use this procedure to install the core
- components on the controller node.
-
- Install the Telemetry service on the controller
- node:
- # apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central \
- ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier python-ceilometerclient
- # yum install openstack-ceilometer-api openstack-ceilometer-collector \
- openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm \
- python-ceilometerclient
- # zypper install openstack-ceilometer-api openstack-ceilometer-collector \
- openstack-ceilometer-agent-notification openstack-ceilometer-agent-central python-ceilometerclient \
- openstack-ceilometer-alarm-evaluator openstack-ceilometer-alarm-notifier
-
-
- Respond to the prompts for [keystone_authtoken] settings,
- RabbitMQ credentials,
- and API endpoint
- registration.
-
-
- The Telemetry service uses a database to store information.
- Specify the location of the database in the configuration
- file. The examples use a MongoDB database on the controller
- node:
- # yum install mongodb-server mongodb
- # zypper install mongodb
- # apt-get install mongodb-server
-
-
- By default, MongoDB is configured to create several 1 GB files
- in the /var/lib/mongodb/journal/ directory
- to support database journaling.
-
-
- If you need to minimize the space allocated to support
- database journaling then set the smallfiles
- configuration key to true in the
- /etc/mongodb.conf configuration
- file. This configuration reduces the size of each journal
- file to 128 MB and limits total journal space consumption to 512 MB.
- - - As the files are created, the first time the MongoDB service starts - you must stop the service and remove the files for this change to - take effect: - - # service mongodb stop -# rm /var/lib/mongodb/journal/prealloc.* -# service mongodb start - - For more information on the - configuration key refer to the MongoDB documentation at - . - - - For instructions detailing the steps to disable database journaling - entirely refer to - . - - - - - Configure MongoDB to make it listen on the controller management IP - address. Edit the /etc/mongodb.conf file and modify the - bind_ip key: - bind_ip = 10.0.0.11 - - - Restart the MongoDB service to apply the configuration change: - # service mongodb restart - - - Start the MongoDB server and configure it to start when - the system boots: - # service mongodb start -# chkconfig mongodb on - - # service mongod start -# chkconfig mongod on - - - Create the database and a ceilometer - database user: - # mongo --host controller --eval ' -db = db.getSiblingDB("ceilometer"); -db.addUser({user: "ceilometer", - pwd: "CEILOMETER_DBPASS", - roles: [ "readWrite", "dbAdmin" ]})' - - - Configure the Telemetry service to use the database: - # openstack-config --set /etc/ceilometer/ceilometer.conf \ - database connection mongodb://ceilometer:CEILOMETER_DBPASS@controller:27017/ceilometer - Edit the - /etc/ceilometer/ceilometer.conf file - and change the [database] section: - [database] -# The SQLAlchemy connection string used to connect to the -# database (string value) -connection = mongodb://ceilometer:CEILOMETER_DBPASS@controller:27017/ceilometer - - - - You must define a secret key that is used as a shared - secret among Telemetry service nodes. 
Use - openssl to generate a random token and - store it in the configuration file: - # CEILOMETER_TOKEN=$(openssl rand -hex 10) -# echo $CEILOMETER_TOKEN -# openstack-config --set /etc/ceilometer/ceilometer.conf publisher metering_secret $CEILOMETER_TOKEN - For SUSE Linux Enterprise, run the - following command: - # CEILOMETER_TOKEN=$(openssl rand 10|hexdump -e '1/1 "%.2x"') - # openssl rand -hex 10 - Edit the - /etc/ceilometer/ceilometer.conf file - and change the [publisher] section. Replace - CEILOMETER_TOKEN with the results of - the openssl command: - [publisher] -# Secret value for signing metering messages (string value) -metering_secret = CEILOMETER_TOKEN - - - - Configure the RabbitMQ access: - # openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rabbit_host controller -# openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rabbit_password RABBIT_PASS - Edit the /etc/ceilometer/ceilometer.conf file and update the [DEFAULT] section: - rabbit_host = controller -rabbit_password = RABBIT_PASS - - - - Configure the collector dispatcher: - # openstack-config --set /etc/ceilometer/ceilometer.conf \ - collector dispatcher database - - - - Configure the log directory: - Edit the /etc/ceilometer/ceilometer.conf file - and update the [DEFAULT] section: - [DEFAULT] -log_dir = /var/log/ceilometer - - - - Create a ceilometer user that the - Telemetry service uses to authenticate with the Identity - Service. Use the service tenant and give - the user the admin role: - $ keystone user-create --name=ceilometer --pass=CEILOMETER_PASS --email=ceilometer@example.com -$ keystone user-role-add --user=ceilometer --tenant=service --role=admin - - - Configure the Telemetry service to authenticate with the Identity - service: - Set the value to - keystone in the - /etc/ceilometer/ceilometer.conf file: - # openstack-config --set /etc/ceilometer/ceilometer.conf \ - DEFAULT auth_strategy keystone - [DEFAULT] -... 
-auth_strategy = keystone - - - Add the credentials to the configuration files for the - Telemetry service: - # openstack-config --set /etc/ceilometer/ceilometer.conf \ - keystone_authtoken auth_host controller -# openstack-config --set /etc/ceilometer/ceilometer.conf \ - keystone_authtoken admin_user ceilometer -# openstack-config --set /etc/ceilometer/ceilometer.conf \ - keystone_authtoken admin_tenant_name service -# openstack-config --set /etc/ceilometer/ceilometer.conf \ - keystone_authtoken auth_protocol http -# openstack-config --set /etc/ceilometer/ceilometer.conf \ - keystone_authtoken auth_uri http://controller:5000 -# openstack-config --set /etc/ceilometer/ceilometer.conf \ - keystone_authtoken admin_password CEILOMETER_PASS -# openstack-config --set /etc/ceilometer/ceilometer.conf \ - service_credentials os_auth_url http://controller:5000/v2.0 -# openstack-config --set /etc/ceilometer/ceilometer.conf \ - service_credentials os_username ceilometer -# openstack-config --set /etc/ceilometer/ceilometer.conf \ - service_credentials os_tenant_name service -# openstack-config --set /etc/ceilometer/ceilometer.conf \ - service_credentials os_password CEILOMETER_PASS - Edit the - /etc/ceilometer/ceilometer.conf file - and change the [keystone_authtoken] - section: - [keystone_authtoken] -auth_host = controller -auth_port = 35357 -auth_protocol = http -auth_uri = http://controller:5000 -admin_tenant_name = service -admin_user = ceilometer -admin_password = CEILOMETER_PASS - Also set the - [service_credentials] section: - [service_credentials] -os_auth_url = http://controller:5000/v2.0 -os_username = ceilometer -os_tenant_name = service -os_password = CEILOMETER_PASS - - - Register the Telemetry service with the Identity Service so - that other OpenStack services can locate it. 
Use the
- keystone command to register the service
- and specify the endpoint:
- $ keystone service-create --name=ceilometer --type=metering \
- --description="Telemetry"
-$ keystone endpoint-create \
- --service-id=$(keystone service-list | awk '/ metering / {print $2}') \
- --publicurl=http://controller:8777 \
- --internalurl=http://controller:8777 \
- --adminurl=http://controller:8777
-
-
- Restart the services with their new settings:
- # service ceilometer-agent-central restart
-# service ceilometer-agent-notification restart
-# service ceilometer-api restart
-# service ceilometer-collector restart
-# service ceilometer-alarm-evaluator restart
-# service ceilometer-alarm-notifier restart
-
-
- Start the openstack-ceilometer-api, openstack-ceilometer-agent-central
- (named openstack-ceilometer-central on some distributions),
- openstack-ceilometer-collector,
- openstack-ceilometer-alarm-evaluator,
- and openstack-ceilometer-alarm-notifier
- services and configure them to start when the system boots:
- # service openstack-ceilometer-api start
-# service openstack-ceilometer-agent-notification start
-# service openstack-ceilometer-agent-central start
-# service openstack-ceilometer-collector start
-# service openstack-ceilometer-alarm-evaluator start
-# service openstack-ceilometer-alarm-notifier start
-# chkconfig openstack-ceilometer-api on
-# chkconfig openstack-ceilometer-agent-notification on
-# chkconfig openstack-ceilometer-agent-central on
-# chkconfig openstack-ceilometer-collector on
-# chkconfig openstack-ceilometer-alarm-evaluator on
-# chkconfig openstack-ceilometer-alarm-notifier on
- # service openstack-ceilometer-api start
-# service openstack-ceilometer-notification start
-# service openstack-ceilometer-central start
-# service openstack-ceilometer-collector start
-# service openstack-ceilometer-alarm-evaluator start
-# service openstack-ceilometer-alarm-notifier start
-# chkconfig openstack-ceilometer-api on
-# chkconfig openstack-ceilometer-notification on
-# chkconfig 
openstack-ceilometer-central on -# chkconfig openstack-ceilometer-collector on -# chkconfig openstack-ceilometer-alarm-evaluator on -# chkconfig openstack-ceilometer-alarm-notifier on - - -
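On distributions without the openstack-config helper, the same [publisher] edit can be scripted with sed. This is a sketch against a scratch file, not the live /etc/ceilometer/ceilometer.conf; the CEILOMETER_TOKEN placeholder mirrors the step above:

```shell
# Scratch stand-in for /etc/ceilometer/ceilometer.conf.
CONF=$(mktemp)
printf '[publisher]\nmetering_secret = CEILOMETER_TOKEN\n' > "$CONF"

# Substitute the generated token for the placeholder, then read it back
# to confirm the line now carries the real value.
CEILOMETER_TOKEN=$(openssl rand -hex 10)
sed -i "s/^metering_secret = .*/metering_secret = ${CEILOMETER_TOKEN}/" "$CONF"
grep "^metering_secret = ${CEILOMETER_TOKEN}" "$CONF"
rm -f "$CONF"
```

Reading the value back after the substitution catches the common failure mode where the key sits under the wrong section header and the sed pattern never matched.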
diff --git a/doc/training-guides/basic-install-guide/section_ceilometer-nova.xml b/doc/training-guides/basic-install-guide/section_ceilometer-nova.xml deleted file mode 100644 index 61443036..00000000 --- a/doc/training-guides/basic-install-guide/section_ceilometer-nova.xml +++ /dev/null @@ -1,120 +0,0 @@ - -
- - Install the Compute agent for Telemetry - Telemetry is composed of an API service, a collector and a range - of disparate agents. This section explains how to install and configure - the agent that runs on the compute node. - - To configure prerequisites - - Install the package: - # apt-get install ceilometer-agent-compute - # yum install openstack-ceilometer-compute python-ceilometerclient python-pecan - # zypper install openstack-ceilometer-agent-compute - - - Edit the /etc/nova/nova.conf file and - add the following lines to the [DEFAULT] - section: - [DEFAULT] -... -instance_usage_audit = True -instance_usage_audit_period = hour -notify_on_state_change = vm_and_task_state -notification_driver = nova.openstack.common.notifier.rpc_notifier -notification_driver = ceilometer.compute.nova_notifier - - - Restart the Compute service: - # service nova-compute restart - # systemctl restart openstack-nova-compute.service - On SLES: - # service openstack-nova-compute restart - On openSUSE: - # systemctl restart openstack-nova-compute.service - - - - To configure the Compute agent for Telemetry - Edit the /etc/ceilometer/ceilometer.conf - file and complete the following actions: - - In the [publisher] section, set the - secret key for Telemetry service nodes: - [publisher] -# Secret value for signing metering messages (string value) -metering_secret = CEILOMETER_TOKEN - Replace CEILOMETER_TOKEN with - the ceilometer token that you created previously. - - - In the [DEFAULT] section, configure - RabbitMQ broker access: - [DEFAULT] -rabbit_host = controller -rabbit_password = RABBIT_PASS - Replace RABBIT_PASS with the password - you chose for the guest account in RabbitMQ. 
- - - In the [keystone_authtoken] section,
- configure Identity service access:
- [keystone_authtoken]
-auth_uri = http://controller:5000/v2.0
-identity_uri = http://controller:35357
-admin_tenant_name = service
-admin_user = ceilometer
-admin_password = CEILOMETER_PASS
- Replace CEILOMETER_PASS with the
- password you chose for the ceilometer
- user in the Identity service.
-
- Comment out the auth_host,
- auth_port, and auth_protocol
- keys, since they are replaced by the identity_uri
- and auth_uri keys.
-
-
-
- In the [service_credentials] section,
- configure service credentials:
- [service_credentials]
-os_auth_url = http://controller:5000/v2.0
-os_username = ceilometer
-os_tenant_name = service
-os_password = CEILOMETER_PASS
-os_endpoint_type = internalURL
- Replace CEILOMETER_PASS with the password you chose for the
- ceilometer user in the Identity service.
-
-
- In the [DEFAULT] section, configure the
- log directory:
- [DEFAULT]
-log_dir = /var/log/ceilometer
-
-
- To finalize installation
-
- Restart the service with its new settings:
- # service ceilometer-agent-compute restart
-
-
- Start the service and configure it to start when the
- system boots:
- # systemctl enable openstack-ceilometer-compute.service
-# systemctl start openstack-ceilometer-compute.service
- On SLES:
- # service openstack-ceilometer-agent-compute start
-# chkconfig openstack-ceilometer-agent-compute on
- On openSUSE:
- # systemctl enable openstack-ceilometer-compute.service
-# systemctl start openstack-ceilometer-compute.service
-
-
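Note that the two notification_driver lines added to /etc/nova/nova.conf earlier in this section are intentional: the option accepts multiple values, so both drivers are loaded. A small sketch, run against a scratch copy of the [DEFAULT] block rather than the live file, that checks both lines survived the edit:

```shell
# Scratch stand-in for the [DEFAULT] block of /etc/nova/nova.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier
EOF

# Both notification_driver lines must be present; a config editor that
# deduplicates keys would silently drop one of the two drivers.
[ "$(grep -c '^notification_driver' "$CONF")" -eq 2 ] && echo "both drivers configured"
rm -f "$CONF"
```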
diff --git a/doc/training-guides/basic-install-guide/section_ceilometer-swift.xml b/doc/training-guides/basic-install-guide/section_ceilometer-swift.xml deleted file mode 100644 index abfaa6d2..00000000 --- a/doc/training-guides/basic-install-guide/section_ceilometer-swift.xml +++ /dev/null @@ -1,69 +0,0 @@ - -
- Configure the Object Storage service for Telemetry - - - Install the python-ceilometerclient - package on your Object Storage proxy server: - # apt-get install python-ceilometerclient - # yum install python-ceilometerclient - # zypper install python-ceilometerclient - - - To retrieve object store statistics, the Telemetry service - needs access to Object Storage with the - ResellerAdmin role. Give this role to - your os_username user for the - os_tenant_name tenant: - $ keystone role-create --name ResellerAdmin -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| id | 462fa46c13fd4798a95a3bfbe27b5e54 | -| name | ResellerAdmin | -+----------+----------------------------------+ - - $ keystone user-role-add --tenant service --user ceilometer \ - --role 462fa46c13fd4798a95a3bfbe27b5e54 - - - You must also add the Telemetry middleware to Object - Storage to handle incoming and outgoing traffic. Add - these lines to the - /etc/swift/proxy-server.conf - file: - [filter:ceilometer] -use = egg:ceilometer#swift - - - Add ceilometer to the - pipeline parameter of that same file: - [pipeline:main] -pipeline = healthcheck cache authtoken keystoneauth ceilometer proxy-server - - - Add the system user swift to the system group - ceilometer to give Object Storage access to the - ceilometer.conf file. - # usermod -a -G ceilometer swift - - - Add ResellerAdmin to the - operator_roles parameter of that same file: - operator_roles = Member,admin,swiftoperator,_member_,ResellerAdmin - - - Restart the service with its new settings: - # service swift-proxy restart - # systemctl restart openstack-swift-proxy.service - On SLES: - # service openstack-swift-proxy restart - On openSUSE: - # systemctl restart openstack-swift-proxy.service - - -
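The pipeline edit above is easy to get wrong: ceilometer must appear inside the pipeline value and proxy-server must stay last. A sketch that validates the ordering against a scratch copy of proxy-server.conf (the live file lives at /etc/swift/proxy-server.conf):

```shell
# Scratch stand-in for /etc/swift/proxy-server.conf after the edits above.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth ceilometer proxy-server

[filter:ceilometer]
use = egg:ceilometer#swift
EOF

# ceilometer must sit in the pipeline immediately before the final proxy-server app.
grep -Eq '^pipeline = .* ceilometer proxy-server$' "$CONF" && echo "pipeline ok"
rm -f "$CONF"
```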
diff --git a/doc/training-guides/basic-install-guide/section_ceilometer-verify.xml b/doc/training-guides/basic-install-guide/section_ceilometer-verify.xml deleted file mode 100644 index aaa5b31e..00000000 --- a/doc/training-guides/basic-install-guide/section_ceilometer-verify.xml +++ /dev/null @@ -1,50 +0,0 @@ - -
- Verify the Telemetry installation - To test the Telemetry installation, download an image from the - Image Service, and use the ceilometer command to display usage statistics. - - - - Use the ceilometer meter-list command to test - the access to Telemetry: - $ ceilometer meter-list - +------------+-------+-------+--------------------------------------+---------+----------------------------------+ -| Name | Type | Unit | Resource ID | User ID | Project ID | -+------------+-------+-------+--------------------------------------+---------+----------------------------------+ -| image | gauge | image | acafc7c0-40aa-4026-9673-b879898e1fc2 | None | efa984b0a914450e9a47788ad330699d | -| image.size | gauge | B | acafc7c0-40aa-4026-9673-b879898e1fc2 | None | efa984b0a914450e9a47788ad330699d | -+------------+-------+-------+--------------------------------------+---------+----------------------------------+ - - - Download an image from the Image Service: - $ glance image-download "cirros-0.3.3-x86_64" > cirros.img - - - Call the ceilometer meter-list command again to - validate that the download has been detected and stored by the Telemetry: - $ ceilometer meter-list - +----------------+-------+-------+--------------------------------------+---------+----------------------------------+ -| Name | Type | Unit | Resource ID | User ID | Project ID | -+----------------+-------+-------+--------------------------------------+---------+----------------------------------+ -| image | gauge | image | acafc7c0-40aa-4026-9673-b879898e1fc2 | None | efa984b0a914450e9a47788ad330699d | -| image.download | delta | B | acafc7c0-40aa-4026-9673-b879898e1fc2 | None | efa984b0a914450e9a47788ad330699d | -| image.serve | delta | B | acafc7c0-40aa-4026-9673-b879898e1fc2 | None | efa984b0a914450e9a47788ad330699d | -| image.size | gauge | B | acafc7c0-40aa-4026-9673-b879898e1fc2 | None | efa984b0a914450e9a47788ad330699d | 
-+----------------+-------+-------+--------------------------------------+---------+----------------------------------+ - - - You can now get usage statistics for the various meters: - $ ceilometer statistics -m image.download -p 60 - +--------+---------------------+---------------------+-------+------------+------------+------------+------------+----------+----------------------------+----------------------------+ -| Period | Period Start | Period End | Count | Min | Max | Sum | Avg | Duration | Duration Start | Duration End | -+--------+---------------------+---------------------+-------+------------+------------+------------+------------+----------+----------------------------+----------------------------+ -| 60 | 2013-11-18T18:08:50 | 2013-11-18T18:09:50 | 1 | 13167616.0 | 13167616.0 | 13167616.0 | 13167616.0 | 0.0 | 2013-11-18T18:09:05.334000 | 2013-11-18T18:09:05.334000 | -+--------+---------------------+---------------------+-------+------------+------------+------------+------------+----------+----------------------------+----------------------------+ - - -
diff --git a/doc/training-guides/basic-install-guide/section_cinder-controller-node.xml b/doc/training-guides/basic-install-guide/section_cinder-controller-node.xml deleted file mode 100644 index 03796f12..00000000 --- a/doc/training-guides/basic-install-guide/section_cinder-controller-node.xml +++ /dev/null @@ -1,264 +0,0 @@ - -
- Install and configure controller node - This section describes how to install and configure the Block - Storage service, code-named cinder, on the controller node. This - service requires at least one additional storage node that provides - volumes to instances. - - To configure prerequisites - Before you install and configure the Block Storage service, you must - create a database and Identity service credentials including - endpoints. - - To create the database, complete these steps: - - - Use the database access client to connect to the database - server as the root user: - $ mysql -u root -p - - - Create the cinder database: - CREATE DATABASE cinder; - - - Grant proper access to the cinder - database: - GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; -GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - Replace CINDER_DBPASS with - a suitable password. - - - Exit the database access client. - - - - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - To create the Identity service credentials, complete these - steps: - - - Create a cinder user: - $ keystone user-create --name cinder --pass CINDER_PASS -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| email | | -| enabled | True | -| id | 881ab2de4f7941e79504a759a83308be | -| name | cinder | -| username | cinder | -+----------+----------------------------------+ - Replace CINDER_PASS with a suitable - password. - - - Link the cinder user to the - service tenant and admin - role: - $ keystone user-role-add --user cinder --tenant service --role admin - - This command provides no output. 
- - - - Create the cinder services: - $ keystone service-create --name cinder --type volume \ - --description "OpenStack Block Storage" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | OpenStack Block Storage | -| enabled | True | -| id | 1e494c3e22a24baaafcaf777d4d467eb | -| name | cinder | -| type | volume | -+-------------+----------------------------------+ -$ keystone service-create --name cinderv2 --type volumev2 \ - --description "OpenStack Block Storage" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | OpenStack Block Storage | -| enabled | True | -| id | 16e038e449c94b40868277f1d801edb5 | -| name | cinderv2 | -| type | volumev2 | -+-------------+----------------------------------+ - - The Block Storage service requires two different services - to support API versions 1 and 2. - - - - Create the Block Storage service endpoints: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ volume / {print $2}') \ - --publicurl http://controller:8776/v1/%\(tenant_id\)s \ - --internalurl http://controller:8776/v1/%\(tenant_id\)s \ - --adminurl http://controller:8776/v1/%\(tenant_id\)s \ - --region regionOne -+-------------+-----------------------------------------+ -| Property | Value | -+-------------+-----------------------------------------+ -| adminurl | http://controller:8776/v1/%(tenant_id)s | -| id | d1b7291a2d794e26963b322c7f2a55a4 | -| internalurl | http://controller:8776/v1/%(tenant_id)s | -| publicurl | http://controller:8776/v1/%(tenant_id)s | -| region | regionOne | -| service_id | 1e494c3e22a24baaafcaf777d4d467eb | -+-------------+-----------------------------------------+ -$ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \ - --publicurl http://controller:8776/v2/%\(tenant_id\)s \ - 
--internalurl http://controller:8776/v2/%\(tenant_id\)s \ - --adminurl http://controller:8776/v2/%\(tenant_id\)s \ - --region regionOne -+-------------+-----------------------------------------+ -| Property | Value | -+-------------+-----------------------------------------+ -| adminurl | http://controller:8776/v2/%(tenant_id)s | -| id | 097b4a6fc8ba44b4b10d4822d2d9e076 | -| internalurl | http://controller:8776/v2/%(tenant_id)s | -| publicurl | http://controller:8776/v2/%(tenant_id)s | -| region | regionOne | -| service_id | 16e038e449c94b40868277f1d801edb5 | -+-------------+-----------------------------------------+ - - The Block Storage service requires two different endpoints - to support API versions 1 and 2. - - - - - - - To install and configure Block Storage controller components - - Install the packages: - # apt-get install cinder-api cinder-scheduler python-cinderclient - # yum install openstack-cinder python-cinderclient python-oslo-db - # zypper install openstack-cinder-api openstack-cinder-scheduler python-cinderclient - - - Edit the /etc/cinder/cinder.conf file and - complete the following actions: - - - In the [database] section, configure - database access: - [database] -... -connection = mysql://cinder:CINDER_DBPASS@controller/cinder - Replace CINDER_DBPASS with the - password you chose for the Block Storage database. - - - In the [DEFAULT] section, configure - RabbitMQ message broker access: - [DEFAULT] -... -rpc_backend = rabbit -rabbit_host = controller -rabbit_password = RABBIT_PASS - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. - - - In the [DEFAULT] and - [keystone_authtoken] sections, - configure Identity service access: - [DEFAULT] -... -auth_strategy = keystone - -[keystone_authtoken] -... 
-auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = service -admin_user = cinder -admin_password = CINDER_PASS - Replace CINDER_PASS with the - password you chose for the cinder user in the - Identity service. - - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - - In the [DEFAULT] section, configure the - my_ip option to use the management interface IP - address of the controller node: - [DEFAULT] -... -my_ip = 10.0.0.11 - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] - section: - [DEFAULT] -... -verbose = True - - - - - Populate the Block Storage database: - # su -s /bin/sh -c "cinder-manage db sync" cinder - - - - To install and configure Block Storage controller components - - Install the packages: - # apt-get install cinder-api cinder-scheduler python-cinderclient - - - - To finalize installation - - Restart the Block Storage services: - # service cinder-scheduler restart -# service cinder-api restart - - - Start the Block Storage services and configure them to start when - the system boots: - # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service -# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service - On SLES: - # service openstack-cinder-api start -# service openstack-cinder-scheduler start -# chkconfig openstack-cinder-api on -# chkconfig openstack-cinder-scheduler on - On openSUSE: - # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service -# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service - - - By default, the Ubuntu packages create an SQLite database. - Because this configuration uses a SQL database server, you can - remove the SQLite database file: - # rm -f /var/lib/cinder/cinder.sqlite - - -
diff --git a/doc/training-guides/basic-install-guide/section_cinder-controller.xml b/doc/training-guides/basic-install-guide/section_cinder-controller.xml deleted file mode 100644 index 337c8684..00000000 --- a/doc/training-guides/basic-install-guide/section_cinder-controller.xml +++ /dev/null @@ -1,181 +0,0 @@ - -
- Configure a Block Storage service controller - - This scenario configures OpenStack Block Storage - services on the Controller node - and assumes that a - second node provides storage through the cinder-volume service. - For - instructions on how to configure the second node, see . - - You can configure OpenStack to use various storage systems. - This example uses LVM. - - - Install the appropriate packages for the Block Storage - service: - # apt-get install cinder-api cinder-scheduler - # yum install openstack-cinder - # zypper install openstack-cinder-api openstack-cinder-scheduler - - - Respond to the prompts for database - management, [keystone_authtoken] settings, - RabbitMQ - credentials, and API endpoint registration. - - - Configure Block Storage to use your database. - Run the following command - to set connection option in the - [database] section, which is in the - /etc/cinder/cinder.conf file, replace - CINDER_DBPASS with the password for the - Block Storage database that you will create in a later step: - In the /etc/cinder/cinder.conf - file, set the connection option in the - [database] section and replace - CINDER_DBPASS with the password for the - Block Storage database that you will create in a later step: - # openstack-config --set /etc/cinder/cinder.conf \ - database connection mysql://cinder:CINDER_DBPASS@controller/cinder - [database] -... -connection = mysql://cinder:CINDER_DBPASS@controller/cinder - In some distributions, the /etc/cinder/cinder.conf - file does not include the - [database] section header. You must add this - section header to the end of the file before you proceed. 
- - - Use the password that you set to log in as root to create - a cinder database: - $ mysql -u root -p -mysql> CREATE DATABASE cinder; -mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; -mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - - - Create the database tables for the Block Storage - service: - # su -s /bin/sh -c "cinder-manage db sync" cinder - - - Create a cinder user. - The Block Storage service uses this user to authenticate - with the Identity service. - Use the service tenant and give the - user the admin role: - $ keystone user-create --name=cinder --pass=CINDER_PASS --email=cinder@example.com -$ keystone user-role-add --user=cinder --tenant=service --role=admin - - - Edit the - /etc/cinder/cinder.conf configuration file: - # openstack-config --set /etc/cinder/cinder.conf DEFAULT \ - auth_strategy keystone -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - auth_protocol http -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - auth_port 35357 -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - admin_user cinder -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - admin_password CINDER_PASS - Edit the - /etc/cinder/cinder.conf configuration - file and add this section for keystone credentials: - ... -[keystone_authtoken] -auth_uri = http://controller:5000 -auth_host = controller -auth_port = 35357 -auth_protocol = http -admin_tenant_name = service -admin_user = cinder -admin_password = CINDER_PASS - - - Configure Block Storage to use the RabbitMQ message - broker. 
- In the [DEFAULT] section in - the /etc/cinder/cinder.conf file, set - these configuration keys and replace - RABBIT_PASS with the password you - chose for RabbitMQ: - [DEFAULT] -... -rpc_backend = cinder.openstack.common.rpc.impl_kombu -rabbit_host = controller -rabbit_port = 5672 -rabbit_userid = guest -rabbit_password = RABBIT_PASS - - - Configure Block Storage to use the RabbitMQ message - broker. - Replace RABBIT_PASS with the - password you chose for RabbitMQ: - # openstack-config --set /etc/cinder/cinder.conf \ - DEFAULT rpc_backend cinder.openstack.common.rpc.impl_kombu -# openstack-config --set /etc/cinder/cinder.conf \ - DEFAULT rabbit_host controller -# openstack-config --set /etc/cinder/cinder.conf \ - DEFAULT rabbit_port 5672 -# openstack-config --set /etc/cinder/cinder.conf \ - DEFAULT rabbit_password RABBIT_PASS - - - Register the Block Storage service with the Identity - service so that other OpenStack services can locate it: - $ keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage" -$ keystone endpoint-create \ - --service-id=$(keystone service-list | awk '/ volume / {print $2}') \ - --publicurl=http://controller:8776/v1/%\(tenant_id\)s \ - --internalurl=http://controller:8776/v1/%\(tenant_id\)s \ - --adminurl=http://controller:8776/v1/%\(tenant_id\)s - - - Register a service and endpoint for version 2 of the Block - Storage service API: - $ keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2" -$ keystone endpoint-create \ - --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \ - --publicurl=http://controller:8776/v2/%\(tenant_id\)s \ - --internalurl=http://controller:8776/v2/%\(tenant_id\)s \ - --adminurl=http://controller:8776/v2/%\(tenant_id\)s - - - Restart the Block Storage services with the new - settings: - # service cinder-scheduler restart -# service cinder-api restart - - - Start and configure the Block Storage services to start when - 
the system boots: - # service openstack-cinder-api start -# service openstack-cinder-scheduler start -# chkconfig openstack-cinder-api on -# chkconfig openstack-cinder-scheduler on - - -
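The endpoint-create commands above look up the service ID by piping keystone service-list through awk. The same pipeline can be demonstrated on a captured sample table (the ID below is made up for illustration):

```shell
# awk matches the row containing " volume " (spaces avoid matching
# "volumev2") and prints field 2, which is the service ID column.
sample='+----------------------------------+----------+----------+--------------------------+
|                id                |   name   |   type   |       description        |
+----------------------------------+----------+----------+--------------------------+
| 1f9c1cb2a42b4a6b8d2d6e2d1a111111 |  cinder  |  volume  | OpenStack Block Storage  |
+----------------------------------+----------+----------+--------------------------+'
SERVICE_ID=$(printf '%s\n' "$sample" | awk '/ volume / {print $2}')
echo "$SERVICE_ID"
# 1f9c1cb2a42b4a6b8d2d6e2d1a111111
```

On a live controller, `keystone service-list` replaces the sample variable, which is exactly what the `--service-id=$(...)` substitution in the endpoint-create command does.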
diff --git a/doc/training-guides/basic-install-guide/section_cinder-node.xml b/doc/training-guides/basic-install-guide/section_cinder-node.xml deleted file mode 100644 index adda78a0..00000000 --- a/doc/training-guides/basic-install-guide/section_cinder-node.xml +++ /dev/null @@ -1,223 +0,0 @@ - -
- - Configure a Block Storage service node - After you configure the services on the controller node, - configure a second system to be a Block Storage service node. This - node contains the disk that serves volumes. - You can configure OpenStack to use various storage systems. - This example uses LVM. - - - Use the instructions in to - configure the system. Note the following differences from the - installation instructions for the controller node: - - - Set the host name to block1 and use - 10.0.0.41 as the IP address on the management - network interface. Ensure that the IP addresses and host - names for both the controller node and the Block Storage service - node are listed in the /etc/hosts file - on each system. - - - Follow the instructions in to synchronize from the controller node. - - - - - Install the required LVM packages, if they are not already - installed: - # apt-get install lvm2 - - - Create the LVM physical and logical volumes. This guide - assumes a second disk /dev/sdb that is used - for this purpose: - # pvcreate /dev/sdb -# vgcreate cinder-volumes /dev/sdb - - - Add a filter entry to the devices - section in the /etc/lvm/lvm.conf file to - keep LVM from scanning devices used by virtual - machines: - devices { -... -filter = [ "a/sda1/", "a/sdb/", "r/.*/"] -... -} - - You must add the required physical volumes for LVM on the - Block Storage host. Run the pvdisplay - command to get a list of required volumes. - - Each item in the filter array starts with either an - a for accept, or an r - for reject. Entries for the physical volumes that are required on the - Block Storage host begin with - a. The array must end with - "r/.*/" to reject any device not - listed. - In this example, /dev/sda1 is the - volume where the volumes for the operating system for the node - reside, while /dev/sdb is the volume - reserved for cinder-volumes. 
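The filter line above is mechanical to build: one accept entry per device, then the mandatory reject-everything entry. A hypothetical helper (not part of the guide) makes the pattern explicit:

```shell
# Hypothetical helper: build an lvm.conf filter line from the device
# names LVM should accept, ending with the mandatory "r/.*/" entry.
build_lvm_filter() {
    local entries="" dev
    for dev in "$@"; do
        entries="${entries}\"a/${dev}/\", "
    done
    printf 'filter = [ %s"r/.*/"]\n' "$entries"
}

build_lvm_filter sda1 sdb
# filter = [ "a/sda1/", "a/sdb/", "r/.*/"]
```

After editing the real /etc/lvm/lvm.conf, the guide's suggested `pvdisplay` (or `vgs -vvvv`) run confirms the filter still accepts the required physical volumes.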
- - - After you configure the operating system, install the - appropriate packages for the Block Storage service: - # apt-get install cinder-volume - # yum install openstack-cinder scsi-target-utils - # zypper install openstack-cinder-volume tgt - - - Respond to the debconf prompts about the database - management, [keystone_authtoken] settings, - and RabbitMQ - credentials. Make sure to enter the same details as - you did for your Block Storage service controller node. - Another screen prompts you for the volume-group to use. The Debian - package configuration script detects every active volume group - and tries to use the first one it sees, provided that the - lvm2 package was - installed before Block Storage. This should be the case if you - configured the volume group first, as this guide recommends. - If you have only one active volume group on your Block - Storage service node, you do not need to manually enter its - name when you install the cinder-volume package because it is detected - automatically. If no volume-group is available when you install - cinder-common, you - must use dpkg-reconfigure to manually - configure or re-configure cinder-common. 
- - - Copy the - /etc/cinder/cinder.conf configuration - file from the controller, or perform the following steps to - set the keystone credentials: - # openstack-config --set /etc/cinder/cinder.conf DEFAULT \ - auth_strategy keystone -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - auth_protocol http -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - auth_port 35357 -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - admin_user cinder -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - admin_password CINDER_PASS - Edit the - /etc/cinder/cinder.conf configuration - file and add this section for keystone credentials: - ... -[keystone_authtoken] -auth_uri = http://controller:5000 -auth_host = controller -auth_port = 35357 -auth_protocol = http -admin_tenant_name = service -admin_user = cinder -admin_password = CINDER_PASS - - - Configure Block Storage to use the RabbitMQ message - broker. - In the [DEFAULT] configuration section - of the /etc/cinder/cinder.conf file, set - these configuration keys and replace - RABBIT_PASS with the password you - chose for RabbitMQ: - [DEFAULT] -... -rpc_backend = cinder.openstack.common.rpc.impl_kombu -rabbit_host = controller -rabbit_port = 5672 -rabbit_userid = guest -rabbit_password = RABBIT_PASS - - - Configure Block Storage to use the RabbitMQ message - broker. 
Replace RABBIT_PASS with - the password you chose for RabbitMQ: - # openstack-config --set /etc/cinder/cinder.conf \ - DEFAULT rpc_backend cinder.openstack.common.rpc.impl_kombu -# openstack-config --set /etc/cinder/cinder.conf \ - DEFAULT rabbit_host controller -# openstack-config --set /etc/cinder/cinder.conf \ - DEFAULT rabbit_port 5672 -# openstack-config --set /etc/cinder/cinder.conf \ - DEFAULT rabbit_password RABBIT_PASS - - - Configure Block Storage to use your MySQL database. Edit - the /etc/cinder/cinder.conf file and add - the following key to the [database] - section. Replace CINDER_DBPASS with - the password you chose for the Block Storage database: - # openstack-config --set /etc/cinder/cinder.conf \ - database connection mysql://cinder:CINDER_DBPASS@controller/cinder - [database] -... -connection = mysql://cinder:CINDER_DBPASS@controller/cinder - - In some distributions, the - /etc/cinder/cinder.conf file does not - include the [database] section header. - You must add this section header to the end of the file - before you proceed. - - - - Configure Block Storage to use the Image Service. Block Storage - needs access to images to create bootable volumes. Edit the - /etc/cinder/cinder.conf file and update the - option in the [DEFAULT] - section: - # openstack-config --set /etc/cinder/cinder.conf \ - DEFAULT glance_host controller - [DEFAULT] -... -glance_host = controller - - - Restart the Block Storage services with the new - settings: - # service cinder-volume restart -# service tgt restart - - - Configure the iSCSI target service to discover Block - Storage volumes. Add the following line to the beginning of - the /etc/tgt/targets.conf file, if it is - not already present: - include /etc/cinder/volumes/* - - - Start and configure the Block Storage services to start - when the system boots: - # service openstack-cinder-volume start -# service tgtd start -# chkconfig openstack-cinder-volume on -# chkconfig tgtd on - - -
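The "add the following line to the beginning of /etc/tgt/targets.conf, if it is not already present" step above can be made idempotent so that re-running it is harmless. A hedged sketch, using a temp file in place of the real targets.conf:

```shell
# Prepend the cinder include line only if it is not already present,
# so the edit can be repeated safely. TARGETS stands in for
# /etc/tgt/targets.conf.
TARGETS=$(mktemp)
echo "default-driver iscsi" > "$TARGETS"

LINE='include /etc/cinder/volumes/*'
if ! grep -qxF "$LINE" "$TARGETS"; then
    # expansion happens before the redirection truncates the file,
    # so reading and rewriting the same file here is safe
    printf '%s\n%s\n' "$LINE" "$(cat "$TARGETS")" > "$TARGETS"
fi
head -n 1 "$TARGETS"
# include /etc/cinder/volumes/*
```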
diff --git a/doc/training-guides/basic-install-guide/section_cinder-storage-node.xml b/doc/training-guides/basic-install-guide/section_cinder-storage-node.xml deleted file mode 100644 index cea1faae..00000000 --- a/doc/training-guides/basic-install-guide/section_cinder-storage-node.xml +++ /dev/null @@ -1,264 +0,0 @@ - -
- - Install and configure a storage node - This section describes how to install and configure storage nodes - for the Block Storage service. For simplicity, this configuration - references one storage node with an empty local block storage device - /dev/sdb that contains a suitable partition table with - one partition /dev/sdb1 occupying the entire device. - The service provisions logical volumes on this device using the - LVM driver and provides them to instances via - iSCSI transport. You can follow these instructions with - minor modifications to horizontally scale your environment with - additional storage nodes. - - To configure prerequisites - You must configure the storage node before you install and - configure the volume service on it. Similar to the controller node, - the storage node contains one network interface on the - management network. The storage node also - needs an empty block storage device of suitable size for your - environment. For more information, see - - - Configure the management interface: - IP address: 10.0.0.41 - Network mask: 255.255.255.0 (or /24) - Default gateway: 10.0.0.1 - - - Set the hostname of the node to - block1. - - - Copy the contents of the /etc/hosts file from - the controller node to the storage node and add the following - to it: - # block1 -10.0.0.41 block1 - Also add this content to the /etc/hosts file - on all other nodes in your environment. - - - Install the LVM packages: - # apt-get install lvm2 - # yum install lvm2 - - Some distributions include LVM by default. - - - - Start the LVM metadata service and configure it to start when the - system boots: - # systemctl enable lvm2-lvmetad.service -# systemctl start lvm2-lvmetad.service - - - Create the LVM physical volume /dev/sdb1: - # pvcreate /dev/sdb1 - Physical volume "/dev/sdb1" successfully created - - If your system uses a different device name, adjust these - steps accordingly. 
- - - Create the LVM volume group - cinder-volumes: - # vgcreate cinder-volumes /dev/sdb1 - Volume group "cinder-volumes" successfully created - The Block Storage service creates logical volumes in this - volume group. - - - Only instances can access Block Storage volumes. However, the - underlying operating system manages the devices associated with - the volumes. By default, the LVM volume scanning tool scans the - /dev directory for block storage devices that - contain volumes. If tenants use LVM on their volumes, the scanning - tool detects these volumes and attempts to cache them, which can cause - a variety of problems with both the underlying operating system - and tenant volumes. You must reconfigure LVM to scan only the devices - that contain the cinder-volumes volume group. Edit - the /etc/lvm/lvm.conf file and complete the - following actions: - - - In the devices section, add a filter - that accepts the /dev/sdb device and rejects - all other devices: - devices { -... -filter = [ "a/sdb/", "r/.*/"] - Each item in the filter array begins with a - for accept or r for - reject and includes a regular expression - for the device name. The array must end with - r/.*/ to reject any remaining - devices. You can use the vgs -vvvv - command to test filters. - - If your storage nodes use LVM on the operating system disk, - you must also add the associated device to the filter. For - example, if the /dev/sda device contains - the operating system: - filter = [ "a/sda/", "a/sdb/", "r/.*/"] - Similarly, if your compute nodes use LVM on the operating - system disk, you must also modify the filter in the - /etc/lvm/lvm.conf file on those nodes to - include only the operating system disk. 
For example, if the - /dev/sda device contains the operating - system: - filter = [ "a/sda/", "r/.*/"] - - - - - - Install and configure Block Storage volume components - - Install the packages: - # apt-get install cinder-volume python-mysqldb - # yum install openstack-cinder targetcli python-oslo-db MySQL-python - # zypper install openstack-cinder-volume tgt python-mysql - - - Edit the /etc/cinder/cinder.conf file - and complete the following actions: - - - In the [database] section, configure - database access: - [database] -... -connection = mysql://cinder:CINDER_DBPASS@controller/cinder - Replace CINDER_DBPASS with - the password you chose for the Block Storage database. - - - In the [DEFAULT] section, configure - RabbitMQ message broker access: - [DEFAULT] -... -rpc_backend = rabbit -rabbit_host = controller -rabbit_password = RABBIT_PASS - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. - - - In the [DEFAULT] and - [keystone_authtoken] sections, - configure Identity service access: - [DEFAULT] -... -auth_strategy = keystone - -[keystone_authtoken] -... -auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = service -admin_user = cinder -admin_password = CINDER_PASS - Replace CINDER_PASS with the - password you chose for the cinder user in the - Identity service. - - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - - In the [DEFAULT] section, configure the - my_ip option: - [DEFAULT] -... -my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - Replace - MANAGEMENT_INTERFACE_IP_ADDRESS with - the IP address of the management network interface on your - storage node, typically 10.0.0.41 for the first node in the - example - architecture. - - - In the [DEFAULT] section, configure the - location of the Image Service: - [DEFAULT] -... 
-glance_host = controller - - - In the [DEFAULT] section, configure Block - Storage to use the lioadm iSCSI - service: - [DEFAULT] -... -iscsi_helper = lioadm - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] - section: - [DEFAULT] -... -verbose = True - - - - - - Install and configure Block Storage volume components - - Install the packages: - # apt-get install cinder-volume python-mysqldb - - - Respond to prompts for the volume group to associate with the - Block Storage service. The script scans for volume groups and - attempts to use the first one. If your system only contains the - cinder-volumes volume group, the script should - automatically choose it. - - - - To finalize installation - - Restart the Block Storage volume service including its - dependencies: - # service tgt restart -# service cinder-volume restart - - - Start the Block Storage volume service including its dependencies - and configure them to start when the system boots: - # systemctl enable openstack-cinder-volume.service target.service -# systemctl start openstack-cinder-volume.service target.service - On SLES: - # service tgtd start -# chkconfig tgtd on -# service openstack-cinder-volume start -# chkconfig openstack-cinder-volume on - On openSUSE: - # systemctl enable openstack-cinder-volume.service tgtd.service -# systemctl start openstack-cinder-volume.service tgtd.service - - - By default, the Ubuntu packages create an SQLite database. - Because this configuration uses a SQL database server, remove - the SQLite database file: - # rm -f /var/lib/cinder/cinder.sqlite - - -
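The "Replace MANAGEMENT_INTERFACE_IP_ADDRESS" instruction above is a straightforward placeholder substitution, which can be sketched as a one-line sed edit. A temp file stands in for /etc/cinder/cinder.conf, and 10.0.0.41 is the example architecture's address for block1:

```shell
# Substitute the management IP into a cinder.conf fragment containing
# the placeholder used in this section.
CONF=$(mktemp)   # stands in for /etc/cinder/cinder.conf
cat > "$CONF" <<'EOF'
[DEFAULT]
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
glance_host = controller
iscsi_helper = lioadm
EOF
sed -i 's/MANAGEMENT_INTERFACE_IP_ADDRESS/10.0.0.41/' "$CONF"
grep '^my_ip' "$CONF"
# my_ip = 10.0.0.41
```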
diff --git a/doc/training-guides/basic-install-guide/section_cinder-verify.xml b/doc/training-guides/basic-install-guide/section_cinder-verify.xml deleted file mode 100644 index 76612396..00000000 --- a/doc/training-guides/basic-install-guide/section_cinder-verify.xml +++ /dev/null @@ -1,79 +0,0 @@ - -
- Verify operation - This section describes how to verify operation of the Block Storage - service by creating a volume. - For more information about how to manage volumes, see the OpenStack User Guide. - - Perform these commands on the controller node. - - - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - List service components to verify successful launch of each - process: - $ cinder service-list -+------------------+------------+------+---------+-------+----------------------------+-----------------+ -| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | -+------------------+------------+------+---------+-------+----------------------------+-----------------+ -| cinder-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None | -| cinder-volume | block1 | nova | enabled | up | 2014-10-18T01:30:57.000000 | None | -+------------------+------------+------+---------+-------+----------------------------+-----------------+ - - - Source the demo tenant credentials to perform - the following steps as a non-administrative tenant: - $ source demo-openrc.sh - - - Create a 1 GB volume: - $ cinder create --display-name demo-volume1 1 -+---------------------+--------------------------------------+ -| Property | Value | -+---------------------+--------------------------------------+ -| attachments | [] | -| availability_zone | nova | -| bootable | false | -| created_at | 2014-10-14T23:11:50.870239 | -| display_description | None | -| display_name | demo-volume1 | -| encrypted | False | -| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 | -| metadata | {} | -| size | 1 | -| snapshot_id | None | -| source_volid | None | -| status | creating | -| volume_type | None | -+---------------------+--------------------------------------+ - - - Verify creation and availability of the volume: - $ cinder list 
-+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ -| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | -+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ -| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 | 1 | None | false | | -+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ - If the status does not indicate available, - check the logs in the /var/log/cinder directory - on the controller and volume nodes for more information. - - The - launch an instance - chapter includes instructions for attaching this volume to an - instance. - - - 
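Because a new volume briefly reports "creating" before it becomes "available", the check above is often repeated. A generic polling helper captures that pattern; it is a sketch, demonstrated with a stub function rather than the real cinder CLI:

```shell
# Retry a command until its output contains the wanted word, up to
# MAX_TRIES attempts with a one-second pause between attempts.
wait_for_status() {   # wait_for_status WANTED MAX_TRIES CMD...
    local wanted=$1 tries=$2
    shift 2
    local i=1
    while [ "$i" -le "$tries" ]; do
        if "$@" | grep -q "$wanted"; then
            echo "status: $wanted"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Stub standing in for "cinder list" output (truncated sample row):
fake_cinder_list() { echo "| 158bea89-... | available | demo-volume1 |"; }
wait_for_status available 5 fake_cinder_list
# status: available
```

On the controller you would replace the stub with the real command, for example `wait_for_status available 30 cinder list`.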
diff --git a/doc/training-guides/basic-install-guide/section_dashboard-install.xml b/doc/training-guides/basic-install-guide/section_dashboard-install.xml deleted file mode 100644 index eb61410f..00000000 --- a/doc/training-guides/basic-install-guide/section_dashboard-install.xml +++ /dev/null @@ -1,143 +0,0 @@ - -
- - Install and configure - This section describes how to install and configure the dashboard - on the controller node. - Before you proceed, verify that your system meets the requirements - in . Also, the dashboard - relies on functional core services including Identity, Image Service, - Compute, and either Networking (neutron) or legacy networking - (nova-network). Environments with stand-alone services such as Object - Storage cannot use the dashboard. For more information, see the - developer documentation. - - To install the dashboard components - - Install the packages: - # apt-get install openstack-dashboard apache2 libapache2-mod-wsgi memcached python-memcache - # yum install openstack-dashboard httpd mod_wsgi memcached python-memcached - # zypper install openstack-dashboard apache2-mod_wsgi memcached python-python-memcached \ - openstack-dashboard-test - - Ubuntu installs the - openstack-dashboard-ubuntu-theme package - as a dependency. Some users reported issues with this theme in - previous releases. If you encounter issues, remove this package - to restore the original OpenStack theme. - - - - - To install the dashboard components - - Install the packages: - # apt-get install openstack-dashboard-apache - - - Respond to prompts for web server configuration. - - The automatic configuration process generates a self-signed - SSL certificate. Consider obtaining an official certificate for - production environments. 
- - - - - To configure the dashboard - - Configure the web server: - # cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \ - /etc/apache2/conf.d/openstack-dashboard.conf -# a2enmod rewrite;a2enmod ssl;a2enmod wsgi - - - Edit the - /etc/openstack-dashboard/local_settings.py - file and complete the following actions: - Edit the - /etc/openstack-dashboard/local_settings - file and complete the following actions: - Edit the - /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - file and complete the following actions: - - - Configure the dashboard to use OpenStack services on the - controller node: - OPENSTACK_HOST = "controller" - - - Allow all hosts to access the dashboard: - ALLOWED_HOSTS = ['*'] - - - Configure the memcached session - storage service: - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': '127.0.0.1:11211', - } -} - - Comment out any other session storage configuration. - - - By default, SLES and openSUSE use a SQL database for session - storage. For simplicity, we recommend changing the configuration - to use memcached for session - storage. - - - - Optionally, configure the time zone: - TIME_ZONE = "TIME_ZONE" - Replace TIME_ZONE with an - appropriate time zone identifier. For more information, see the - list of time zones. - - - - - - To finalize installation - - On RHEL and CentOS, configure SELinux to permit the web server - to connect to OpenStack services: - # setsebool -P httpd_can_network_connect on - - - Due to a packaging bug, the dashboard CSS fails to load properly. - Run the following command to resolve this issue: - # chown -R apache:apache /usr/share/openstack-dashboard/static - For more information, see the - bug report. 
- - - Restart the web server and session storage service: - # service apache2 restart -# service memcached restart - - - Start the web server and session storage service and configure - them to start when the system boots: - # systemctl enable httpd.service memcached.service -# systemctl start httpd.service memcached.service - On SLES: - # service apache2 start -# service memcached start -# chkconfig apache2 on -# chkconfig memcached on - On openSUSE: - # systemctl enable apache2.service memcached.service -# systemctl start apache2.service memcached.service - - -
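A hypothetical post-edit check (not part of the guide) can confirm the three settings this section changes are present before restarting the web server. A temp file stands in for the local_settings file, which on Ubuntu would be /etc/openstack-dashboard/local_settings.py:

```shell
# Verify the dashboard settings edited in this section exist at the
# top level of a stand-in local_settings file.
SETTINGS=$(mktemp)
cat > "$SETTINGS" <<'EOF'
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
EOF
ok_count=0
for key in OPENSTACK_HOST ALLOWED_HOSTS CACHES; do
    grep -q "^${key} *=" "$SETTINGS" && ok_count=$((ok_count + 1))
done
echo "settings found: $ok_count/3"
```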
diff --git a/doc/training-guides/basic-install-guide/section_dashboard-system-reqs.xml b/doc/training-guides/basic-install-guide/section_dashboard-system-reqs.xml deleted file mode 100644 index e52616f6..00000000 --- a/doc/training-guides/basic-install-guide/section_dashboard-system-reqs.xml +++ /dev/null @@ -1,59 +0,0 @@ - -
- System requirements - Before you install the OpenStack dashboard, you must meet - the following system requirements: - - - OpenStack Compute installation. Enable the Identity - Service for user and project management. - Note the URLs of the Identity Service and Compute - endpoints. - - - Identity Service user with sudo privileges. Because - Apache does not serve content from a root user, users - must run the dashboard as an Identity Service user - with sudo privileges. - - - Python 2.6 or 2.7. The Python version must support - Django. The Python version should run on any - system, including Mac OS X. Installation prerequisites - might differ by platform. - - - Then, install and configure the dashboard on a node that - can contact the Identity Service. - Provide users with the following information so that they - can access the dashboard through a web browser on their local - machine: - - - The public IP address from which they can access the - dashboard - - - The user name and password with which they can - access the dashboard - - - Your web browser, and that of your users, - must support HTML5 and have cookies and - JavaScript enabled. - - To use the VNC client with the dashboard, the browser - must support HTML5 Canvas and HTML5 WebSockets. - For details about browsers that support noVNC, see https://github.com/kanaka/noVNC/blob/master/README.md, - and https://github.com/kanaka/noVNC/wiki/Browser-support, - respectively. - -
diff --git a/doc/training-guides/basic-install-guide/section_dashboard-verify.xml b/doc/training-guides/basic-install-guide/section_dashboard-verify.xml deleted file mode 100644 index 6214e125..00000000 --- a/doc/training-guides/basic-install-guide/section_dashboard-verify.xml +++ /dev/null @@ -1,24 +0,0 @@ - -
- - Verify operation - This section describes how to verify operation of the - dashboard. - - - Access the dashboard using a web browser: - http://controller/horizon - https://controller/ - http://controller/dashboard - http://controller. - - - Authenticate using admin or - demo user credentials. - - -
diff --git a/doc/training-guides/basic-install-guide/section_debconf-api-endpoints.xml b/doc/training-guides/basic-install-guide/section_debconf-api-endpoints.xml deleted file mode 100644 index 805495b3..00000000 --- a/doc/training-guides/basic-install-guide/section_debconf-api-endpoints.xml +++ /dev/null @@ -1,103 +0,0 @@ - -
- Register API endpoints - All Debian packages for API services, except the - heat-api package, register the service in the - Identity Service catalog. This feature is helpful because API - endpoints are difficult to remember. - - The heat-common package and not the - heat-api package configures the - Orchestration service. - - When you install a package for an API service, you are - prompted to register that service. However, after you install or - upgrade the package for an API service, Debian immediately removes - your response to this prompt from the debconf - database. Consequently, you are prompted to re-register the - service with the Identity Service. If you already registered the - API service, respond no when you - upgrade. - - - - - - - - This screen registers packages in the Identity Service - catalog: - - - - - - - - You are prompted for the Identity Service - admin_token value. The Identity Service uses - this value to register the API service. When you set up the - keystone package, this value is configured - automatically. - - - - - - - - This screen configures the IP addresses for the service. The - configuration script automatically detects the IP address used by - the interface that is connected to the default route - (/sbin/route and /sbin/ip). - Unless you have a unique set up for your network, press - ENTER. - - - - - - - - This screen configures the region name for the service. For - example, us-east-coast or - europe-paris. 
- - - - - - - - The Debian package post-installation scripts then run the - following commands for you: - PKG_SERVICE_ID=$(pkgos_get_id keystone --os-token ${AUTH_TOKEN} \ - --os-endpoint http://${KEYSTONE_ENDPOINT_IP}:35357/v2.0/ service-create \ - --name ${SERVICE_NAME} --type ${SERVICE_TYPE} --description "${SERVICE_DESC}") -keystone --os-token ${AUTH_TOKEN} \ - --os-endpoint http://${KEYSTONE_ENDPOINT_IP}:35357/v2.0/ \ - endpoint-create \ - --region "${REGION_NAME}" --service_id ${PKG_SERVICE_ID} \ - --publicurl http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL} \ - --internalurl http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL} \ - --adminurl http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL} - The values of AUTH_TOKEN, KEYSTONE_ENDPOINT_IP, - PKG_ENDPOINT_IP, and REGION_NAME depend on the - answers you provide to the debconf prompts, but the values of SERVICE_NAME, - SERVICE_TYPE, SERVICE_DESC, and SERVICE_URL - are already pre-wired in each package, so you do not have to remember them. -
diff --git a/doc/training-guides/basic-install-guide/section_debconf-concepts.xml b/doc/training-guides/basic-install-guide/section_debconf-concepts.xml deleted file mode 100644 index 20562f2f..00000000 --- a/doc/training-guides/basic-install-guide/section_debconf-concepts.xml +++ /dev/null @@ -1,98 +0,0 @@ - -
- - debconf concepts - This chapter explains how to use the Debian debconf and dbconfig-common packages to - configure OpenStack services. These packages enable users to - perform configuration tasks. When users install OpenStack - packages, debconf prompts the user for - responses, which seed the contents of configuration files - associated with that package. After package installation, users - can update the configuration of a package by using the - dpkg-reconfigure program. - If you are familiar with these packages and pre-seeding, you - can proceed to . -
- The Debian packages - The rules described here are from the Debian Policy Manual. If any - rule described in this chapter is not respected, you have found - a serious bug that must be fixed. - When you install or upgrade a Debian package, all - configuration file values are preserved. Using the debconf database as a registry is - considered a bug in Debian. If you edit something in any - OpenStack configuration file, the debconf package reads that value when it - prepares to prompt the user. For example, to change the login - name for the RabbitMQ messaging queue for a service, you can - edit its value in the corresponding configuration file. - To opt out of using the debconf package, run the - dpkg-reconfigure command and select - non-interactive mode: - # dpkg-reconfigure -plow debconf - Then, debconf does - not prompt you. - Another way to disable the debconf package is to prefix the - apt command with - DEBIAN_FRONTEND=noninteractive, as - follows: - # DEBIAN_FRONTEND=noninteractive apt-get install nova-api - If you configure a package with debconf incorrectly, you can re-configure it, as - follows: - # dpkg-reconfigure PACKAGE-NAME - This calls the post-installation script for the - PACKAGE-NAME package after the user - responds to all prompts. If you cannot install a Debian package - in a non-interactive way, you have found a release-critical bug - in Debian. Report it to the Debian bug tracking system. - Generally, the -common packages install the configuration - files. For example, the glance-common package - installs the glance-api.conf and - glance-registry.conf files. So, for the - Image Service, you must re-configure the - glance-common package. The same applies for - cinder-common, - nova-common, and - heat-common packages. - In debconf, the - higher the priority for a screen, the - greater the chance that the user sees that screen. 
If a - debconf screen has - medium priority and you configure the - Debian system to show only critical prompts, - which is the default in Debian, the user does not see that - debconf screen. - Instead, the default for the related package is used. In the - Debian OpenStack packages, a number of debconf screens are set with - medium priority. Consequently, if you want - to respond to all debconf screens from the Debian OpenStack - packages, you must run the following command and select the - medium priority before you install any - packages: - # dpkg-reconfigure debconf - - The packages do not require pre-depends. If dbconfig-common is already - installed on the system, the user sees all prompts. However, - you cannot define the order in which the debconf screens appear. The - user must make sense of it even if the prompts appear in an - illogical order. - -
- -
diff --git a/doc/training-guides/basic-install-guide/section_debconf-dbconfig-common.xml b/doc/training-guides/basic-install-guide/section_debconf-dbconfig-common.xml deleted file mode 100644 index 2876e7f3..00000000 --- a/doc/training-guides/basic-install-guide/section_debconf-dbconfig-common.xml +++ /dev/null @@ -1,177 +0,0 @@ - -
- Configure the database with dbconfig-common
 - Many of the OpenStack services need to be configured
 - to access a database. These are configured through a DSN (Data
 - Source Name) directive as follows:
 - [database]
-connection = mysql://keystone:0dec658e3f14a7d@localhost/keystonedb
 - This connection directive will be handled by
 - the dbconfig-common package, which provides a
 - standard Debian interface. It enables you to configure Debian
 - database parameters. It includes localized prompts for many
 - languages and it supports the following database backends:
 - SQLite, MySQL, and PostgreSQL.
 - By default, the dbconfig-common package
 - configures the OpenStack services to use SQLite. So if you use
 - debconf in non-interactive mode and without
 - pre-seeding, the OpenStack services that you install will use
 - SQLite.
 - By default, dbconfig-common does not
 - provide access to database servers over a network. If you want the
 - dbconfig-common package to prompt for remote
 - database servers that are accessed over a network and not through
 - a UNIX socket file, reconfigure it, as follows:
 - # apt-get install dbconfig-common && dpkg-reconfigure dbconfig-common
 - These screens appear when you re-configure the
 - dbconfig-common package:
 - Unlike other debconf prompts, you cannot
 - pre-seed the responses for the dbconfig-common
 - prompts by using debconf-set-selections.
 - Instead, you must create a file in
 - /etc/dbconfig-common.
For example, you
 - might create a keystone configuration file for
 - dbconfig-common that is located in
 - /etc/dbconfig-common/keystone.conf, as
 - follows:
 - dbc_install='true'
-dbc_upgrade='true'
-dbc_remove=''
-dbc_dbtype='mysql'
-dbc_dbuser='keystone'
-dbc_dbpass='PASSWORD'
-dbc_dbserver=''
-dbc_dbport=''
-dbc_dbname='keystonedb'
-dbc_dbadmin='root'
-dbc_basepath=''
-dbc_ssl=''
-dbc_authmethod_admin=''
-dbc_authmethod_user=''
 - After you create this file, run this command:
 - # apt-get install keystone
 - The Identity Service is installed with MySQL as the database
 - back end, keystonedb as the database name, and the
 - localhost socket file. The corresponding DSN will then be:
 - [database]
-connection = mysql://keystone:PASSWORD@localhost/keystonedb
 - The dbconfig-common package will configure
 - MySQL for these access rights, and create the database for you.
 - Since OpenStack 2014.1.1, all OpenStack packages in Debian perform
 - the following MySQL query after database creation (if you decide
 - to use MySQL as a back end):
 - ALTER DATABASE keystone CHARACTER SET utf8 COLLATE utf8_unicode_ci
 - So, if you use Debian, you do not need to worry about database
 - creation, access rights, or character sets. All of that is handled
 - for you by the packages.
 - As an example, here are screenshots from the
 - cinder-common package:
 - By default in Debian, you can access the MySQL server from either
 - localhost through the socket file or 127.0.0.1. To access it over the
 - network, you must edit the /etc/mysql/my.cnf file,
 - and the mysql.user table. To do so, Debian provides
 - a helper script in the openstack-deploy package.
- To use it, install the package and run: - # /usr/share/openstack-deploy/mysql-remote-root - Alternatively, if you do not want to install this package, run - this script to enable remote root access: - #!/bin/sh - -set -e - -SQL="mysql --defaults-file=/etc/mysql/debian.cnf -Dmysql -e" - -ROOT_PASS=`${SQL} "SELECT Password FROM user WHERE User='root' LIMIT 1;" \ - | tail -n 1` -${SQL} "REPLACE INTO user SET host='%', user='root',\ - password='${ROOT_PASS}', Select_priv='Y', Insert_priv='Y',\ - Update_priv='Y', Delete_priv='Y', Create_priv='Y', Drop_priv='Y',\ - Reload_priv='Y', Shutdown_priv='Y', Process_priv='Y', File_priv='Y',\ - Grant_priv='Y', References_priv='Y', Index_priv='Y', Alter_priv='Y',\ - Super_priv='Y', Show_db_priv='Y', Create_tmp_table_priv='Y',\ - Lock_tables_priv='Y', Execute_priv='Y', Repl_slave_priv='Y',\ - Repl_client_priv='Y', Create_view_priv='Y', Show_view_priv='Y',\ - Create_routine_priv='Y', Alter_routine_priv='Y', Create_user_priv='Y',\ - Event_priv='Y', Trigger_priv='Y' " -${SQL} "FLUSH PRIVILEGES" -sed -i 's|^bind-address[ \t]*=.*|bind-address = 0.0.0.0|' /etc/mysql/my.cnf -/etc/init.d/mysql restart - You must enable remote access before you install OpenStack - services on multiple nodes. -
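The dbc_* settings in the keystone.conf example above map directly onto the connection DSN that dbconfig-common writes into the [database] section. As an illustrative sketch (the build_dsn helper is hypothetical, not a tool shipped by the packages), the mapping can be reproduced in shell, with an empty dbc_dbserver falling back to the localhost socket:

```shell
# Illustrative only: reproduce the DSN that dbconfig-common derives
# from the dbc_* settings shown above. An empty dbc_dbserver means
# the local socket file, i.e. localhost.
dbc_dbtype='mysql'
dbc_dbuser='keystone'
dbc_dbpass='PASSWORD'
dbc_dbserver=''
dbc_dbname='keystonedb'

build_dsn() {
    # Fall back to localhost when no server is configured
    server="${dbc_dbserver:-localhost}"
    echo "${dbc_dbtype}://${dbc_dbuser}:${dbc_dbpass}@${server}/${dbc_dbname}"
}

build_dsn   # prints mysql://keystone:PASSWORD@localhost/keystonedb
```

The result matches the connection directive shown earlier for the Identity Service.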
diff --git a/doc/training-guides/basic-install-guide/section_debconf-keystone_authtoken.xml b/doc/training-guides/basic-install-guide/section_debconf-keystone_authtoken.xml deleted file mode 100644 index 68d07ca4..00000000 --- a/doc/training-guides/basic-install-guide/section_debconf-keystone_authtoken.xml +++ /dev/null @@ -1,66 +0,0 @@ - -
- Services and the [keystone_authtoken] - Because most OpenStack services must access the Identity - Service, you must configure the IP address of the - keystone server to be able to access it. You must - also configure the admin_tenant_name, - admin_user, and admin_password options - for each service to work. - Generally, this section looks like this: - [keystone_authtoken] -auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = %SERVICE_TENANT_NAME% -admin_user = %SERVICE_USER% -admin_password = %SERVICE_PASSWORD% - The debconf system helps users configure the - auth_uri, identity_uri, - admin_tenant_name, admin_user and - admin_password options. - The following screens show an example Image Service - configuration: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - This information is stored in the configuration file for each - service. For example: - /etc/ceilometer/ceilometer.conf -/etc/nova/api-paste.ini -/etc/glance/glance-api-paste.ini -/etc/glance/glance-registry.ini -/etc/cinder/cinder.conf -/etc/neutron/neutron.conf - The Debian OpenStack packages offer automation for this, so - OpenStack users do not have to manually edit the configuration - files. -
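Because each service stores these options in its own file, a quick sanity check after installation is to confirm that all five options are present in a given configuration file. A minimal sketch (the check_authtoken helper is hypothetical, written here for illustration):

```shell
# Illustrative: report any [keystone_authtoken] option missing
# from a service configuration file.
check_authtoken() {
    file="$1"
    missing=""
    for opt in auth_uri identity_uri admin_tenant_name admin_user admin_password; do
        grep -q "^${opt}[ =]" "$file" || missing="$missing $opt"
    done
    [ -z "$missing" ] && echo "ok" || echo "missing:$missing"
}

# Demo against a fragment like the Image Service example above:
cat > /tmp/authtoken-demo.conf <<'EOF'
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS
EOF

check_authtoken /tmp/authtoken-demo.conf   # prints ok
```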
diff --git a/doc/training-guides/basic-install-guide/section_debconf-preseeding.xml b/doc/training-guides/basic-install-guide/section_debconf-preseeding.xml deleted file mode 100644 index 3a85b3a8..00000000 --- a/doc/training-guides/basic-install-guide/section_debconf-preseeding.xml +++ /dev/null @@ -1,28 +0,0 @@ - -
- Pre-seed debconf prompts
 - You can pre-seed all debconf prompts. To pre-seed means
 - to store responses in the debconf database so
 - that debconf does not prompt the user for
 - responses. Pre-seeding enables a hands-free installation for
 - users. The package maintainer creates scripts that automatically
 - configure the services.
 - The following example shows how to pre-seed an automated MySQL
 - Server installation:
 - MYSQL_PASSWORD=MYSQL_PASSWORD
-echo "mysql-server-5.5 mysql-server/root_password password ${MYSQL_PASSWORD}
-mysql-server-5.5 mysql-server/root_password seen true
-mysql-server-5.5 mysql-server/root_password_again password ${MYSQL_PASSWORD}
-mysql-server-5.5 mysql-server/root_password_again seen true
-" | debconf-set-selections
-DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes mysql-server
 - The seen true option tells
 - debconf that the user has already seen a specified screen, so
 - the screen is not shown again. This option is useful
 - for upgrades.
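Each selection line in the example above follows the debconf-set-selections format: owner, question, type, value. The echo pattern can be wrapped in a small helper (the preseed_mysql name is hypothetical), which also makes it easy to review the generated lines before piping them into debconf-set-selections:

```shell
# Hypothetical helper: emit the debconf selection lines for the
# MySQL server package. Format per line: <owner> <question> <type> <value>
preseed_mysql() {
    pass="$1"
    printf '%s\n' \
      "mysql-server-5.5 mysql-server/root_password password ${pass}" \
      "mysql-server-5.5 mysql-server/root_password seen true" \
      "mysql-server-5.5 mysql-server/root_password_again password ${pass}" \
      "mysql-server-5.5 mysql-server/root_password_again seen true"
}

# Review the lines first; for a real installation you would run:
#   preseed_mysql "${MYSQL_PASSWORD}" | debconf-set-selections
preseed_mysql secret
```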
diff --git a/doc/training-guides/basic-install-guide/section_debconf-rabbitmq.xml b/doc/training-guides/basic-install-guide/section_debconf-rabbitmq.xml deleted file mode 100644 index dab45380..00000000 --- a/doc/training-guides/basic-install-guide/section_debconf-rabbitmq.xml +++ /dev/null @@ -1,48 +0,0 @@ - -
- RabbitMQ credentials parameters
 - For every package that must connect to a Messaging Server, the
 - Debian package enables you to configure the IP address for that
 - server and the user name and password that are used to connect. The
 - following example shows configuration with the ceilometer-common package:
 - These debconf screens appear in: ceilometer-common, cinder-common, glance-common, heat-common, neutron-common and nova-common.
 - This configures the following directives (example from
 - nova.conf):
 - [DEFAULT]
-rabbit_host=localhost
-rabbit_userid=guest
-rabbit_password=guest
 - All other RabbitMQ directives remain untouched.
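To confirm which values debconf actually wrote, you can read the directives back out of the configuration file. A minimal sketch using awk (illustrative; ini_get is not a tool shipped by the packages, and it assumes the key=value style with no surrounding spaces, as written by the packages above):

```shell
# Illustrative: pull one key out of an oslo-style INI fragment.
ini_get() {
    key="$1"; file="$2"
    awk -F'=' -v k="$key" '$1 == k { print $2 }' "$file"
}

# Demo against the directives shown above:
cat > /tmp/nova-rabbit-demo.conf <<'EOF'
[DEFAULT]
rabbit_host=localhost
rabbit_userid=guest
rabbit_password=guest
EOF

ini_get rabbit_host /tmp/nova-rabbit-demo.conf   # prints localhost
```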
diff --git a/doc/training-guides/basic-install-guide/section_glance-install.xml b/doc/training-guides/basic-install-guide/section_glance-install.xml deleted file mode 100644 index f5cec9a0..00000000 --- a/doc/training-guides/basic-install-guide/section_glance-install.xml +++ /dev/null @@ -1,274 +0,0 @@ - -
- Install and configure - This section describes how to install and configure the Image Service, - code-named glance, on the controller node. For simplicity, this - configuration stores images on the local file system. - - To configure prerequisites - Before you install and configure the Image Service, you must create - a database and Identity service credentials including endpoints. - - To create the database, complete these steps: - - - Use the database access client to connect to the database - server as the root user: - $ mysql -u root -p - - - Create the glance database: - CREATE DATABASE glance; - - - Grant proper access to the glance - database: - GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; -GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - Replace GLANCE_DBPASS with a suitable - password. - - - Exit the database access client. - - - - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - To create the Identity service credentials, complete these - steps: - - - Create the glance user: - $ keystone user-create --name glance --pass GLANCE_PASS -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| email | | -| enabled | True | -| id | f89cca5865dc42b18e2421fa5f5cce66 | -| name | glance | -| username | glance | -+----------+----------------------------------+ - Replace GLANCE_PASS with a suitable - password. - - - Link the glance user to the - service tenant and admin - role: - $ keystone user-role-add --user glance --tenant service --role admin - - This command provides no output. 
- - - - Create the glance service: - $ keystone service-create --name glance --type image \ - --description "OpenStack Image Service" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | OpenStack Image Service | -| enabled | True | -| id | 23f409c4e79f4c9e9d23d809c50fbacf | -| name | glance | -| type | image | -+-------------+----------------------------------+ - - - - - Create the Identity service endpoints: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ image / {print $2}') \ - --publicurl http://controller:9292 \ - --internalurl http://controller:9292 \ - --adminurl http://controller:9292 \ - --region regionOne -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| adminurl | http://controller:9292 | -| id | a2ee818c69cb475199a1ca108332eb35 | -| internalurl | http://controller:9292 | -| publicurl | http://controller:9292 | -| region | regionOne | -| service_id | 23f409c4e79f4c9e9d23d809c50fbacf | -+-------------+----------------------------------+ - - - - To install and configure the Image Service components - - Install the packages: - # apt-get install glance python-glanceclient - # yum install openstack-glance python-glanceclient - # zypper install openstack-glance python-glanceclient - - - Edit the /etc/glance/glance-api.conf - file and complete the following actions: - - - In the [database] section, configure - database access: - [database] -... -connection = mysql://glance:GLANCE_DBPASS@controller/glance - Replace GLANCE_DBPASS with the - password you chose for the Image Service database. - - - In the [keystone_authtoken] and - [paste_deploy] sections, configure Identity - service access: - [keystone_authtoken] -... 
-auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = service -admin_user = glance -admin_password = GLANCE_PASS - -[paste_deploy] -... -flavor = keystone - Replace GLANCE_PASS with the - password you chose for the glance user in the - Identity service. - - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - - In the [glance_store] section, configure - the local file system store and location of image files: - [glance_store] -... -default_store = file -filesystem_store_datadir = /var/lib/glance/images/ - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] - section: - [DEFAULT] -... -verbose = True - - - - - Edit the /etc/glance/glance-registry.conf - file and complete the following actions: - - - In the [database] section, configure - database access: - [database] -... -connection = mysql://glance:GLANCE_DBPASS@controller/glance - Replace GLANCE_DBPASS with the - password you chose for the Image Service database. - - - In the [keystone_authtoken] and - [paste_deploy] sections, configure Identity - service access: - [keystone_authtoken] -... -auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = service -admin_user = glance -admin_password = GLANCE_PASS - -[paste_deploy] -... -flavor = keystone - Replace GLANCE_PASS with the - password you chose for the glance user in the - Identity service. - - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] - section: - [DEFAULT] -... 
-verbose = True - - - - - Populate the Image Service database: - # su -s /bin/sh -c "glance-manage db_sync" glance - - - - To install and configure the Image Service components - - Install the packages: - # apt-get install glance python-glanceclient - - - Select the keystone pipeline to configure the - Image Service to use the Identity service: - - - - - - - - - To finalize installation - - Restart the Image Service services: - # service glance-registry restart -# service glance-api restart - - - Start the Image Service services and configure them to start when - the system boots: - # systemctl enable openstack-glance-api.service openstack-glance-registry.service -# systemctl start openstack-glance-api.service openstack-glance-registry.service - On SLES: - # service openstack-glance-api start -# service openstack-glance-registry start -# chkconfig openstack-glance-api on -# chkconfig openstack-glance-registry on - On openSUSE: - # systemctl enable openstack-glance-api.service openstack-glance-registry.service -# systemctl start openstack-glance-api.service openstack-glance-registry.service - - - By default, the Ubuntu packages create an SQLite database. - Because this configuration uses a SQL database server, you can - remove the SQLite database file: - # rm -f /var/lib/glance/glance.sqlite - - -
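The endpoint-create step earlier captures the service ID with `keystone service-list | awk '/ image / {print $2}'`. Since awk splits on whitespace by default, the table's pipe characters become fields of their own and $2 is the ID column. The same pattern against a canned table (illustrative; the service_id wrapper is hypothetical):

```shell
# Illustrative: how the awk filter in the endpoint-create commands
# pulls the ID column out of keystone's table output.
service_id() {
    # " <type> " with surrounding spaces avoids matching the header row
    awk -v type="$1" '$0 ~ " "type" " { print $2 }'
}

cat <<'EOF' | service_id image
+----------------------------------+--------+-------+
| id                               | name   | type  |
+----------------------------------+--------+-------+
| 23f409c4e79f4c9e9d23d809c50fbacf | glance | image |
+----------------------------------+--------+-------+
EOF
# prints 23f409c4e79f4c9e9d23d809c50fbacf
```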
diff --git a/doc/training-guides/basic-install-guide/section_glance-verify.xml b/doc/training-guides/basic-install-guide/section_glance-verify.xml deleted file mode 100644 index 307fe8a0..00000000 --- a/doc/training-guides/basic-install-guide/section_glance-verify.xml +++ /dev/null @@ -1,93 +0,0 @@ - -
- Verify operation - This section describes how to verify operation of the Image - Service using - CirrOS, a small - Linux image that helps you test your OpenStack deployment. - For more information about how to download and build images, - see OpenStack Virtual Machine Image - Guide. For information about how to manage - images, see the OpenStack User Guide. - - - Create and change into a temporary local directory: - $ mkdir /tmp/images -$ cd /tmp/images - - - Download the image to the temporary local directory: - $ wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img - - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - Upload the image to the Image Service: - $ glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img \ - --disk-format qcow2 --container-format bare --is-public True --progress -[=============================>] 100% -+------------------+--------------------------------------+ -| Property | Value | -+------------------+--------------------------------------+ -| checksum | 133eae9fb1c98f45894a4e60d8736619 | -| container_format | bare | -| created_at | 2014-10-10T13:14:42 | -| deleted | False | -| deleted_at | None | -| disk_format | qcow2 | -| id | acafc7c0-40aa-4026-9673-b879898e1fc2 | -| is_public | True | -| min_disk | 0 | -| min_ram | 0 | -| name | cirros-0.3.3-x86_64 | -| owner | ea8c352d253443118041c9c8b8416040 | -| protected | False | -| size | 13200896 | -| status | active | -| updated_at | 2014-10-10T13:14:43 | -| virtual_size | None | -+------------------+--------------------------------------+ - For information about the parameters for the - glance image-create command, see Image Service command-line client in the - OpenStack Command-Line Interface - Reference. - For information about disk and container formats for - images, see Disk and container formats for images in the - OpenStack Virtual Machine Image Guide. 
- - Because the returned image ID is generated dynamically, - your deployment generates a different ID than the one shown - in this example. - - - - Confirm upload of the image and validate - attributes: - $ glance image-list -+--------------------------------------+---------------------+-------------+------------------+----------+--------+ -| ID | Name | Disk Format | Container Format | Size | Status | -+--------------------------------------+---------------------+-------------+------------------+----------+--------+ -| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.3-x86_64 | qcow2 | bare | 13200896 | active | -+--------------------------------------+---------------------+-------------+------------------+----------+--------+ - - - Remove the temporary local directory: - $ rm -r /tmp/images - - -
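The checksum column in the image-create output above is the MD5 digest of the uploaded image file, so you can compare it against the local download before removing the temporary directory. A minimal sketch, exercised here against a throwaway file rather than the real image (verify_md5 is a hypothetical helper):

```shell
# Illustrative: compare a local file's MD5 digest with the checksum
# reported by glance.
verify_md5() {
    file="$1"; expected="$2"
    actual=$(md5sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then echo match; else echo MISMATCH; fi
}

# Demo with a throwaway file; the real check would use the downloaded
# cirros image and the checksum value from the image-create output.
printf 'demo' > /tmp/md5-demo.bin
expected=$(md5sum /tmp/md5-demo.bin | awk '{print $1}')
verify_md5 /tmp/md5-demo.bin "$expected"   # prints match
```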
diff --git a/doc/training-guides/basic-install-guide/section_heat-install.xml b/doc/training-guides/basic-install-guide/section_heat-install.xml deleted file mode 100644 index 45bdce31..00000000 --- a/doc/training-guides/basic-install-guide/section_heat-install.xml +++ /dev/null @@ -1,292 +0,0 @@ - -
- Install and configure Orchestration - This section describes how to install and configure the - Orchestration module, code-named heat, on the controller node. - - To configure prerequisites - Before you install and configure Orchestration, you must create a - database and Identity service credentials including endpoints. - - To create the database, complete these steps: - - - Use the database access client to connect to the database - server as the root user: - $ mysql -u root -p - - - Create the heat database: - CREATE DATABASE heat; - - - Grant proper access to the heat - database: - GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \ - IDENTIFIED BY 'HEAT_DBPASS'; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' \ - IDENTIFIED BY 'HEAT_DBPASS'; - Replace HEAT_DBPASS with a suitable - password. - - - Exit the database access client. - - - - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - To create the Identity service credentials, complete these - steps: - - - Create the heat user: - $ keystone user-create --name heat --pass HEAT_PASS -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| email | | -| enabled | True | -| id | 7fd67878dcd04d0393469ef825a7e005 | -| name | heat | -| username | heat | -+----------+----------------------------------+ - Replace HEAT_PASS with a suitable - password. - - - Link the heat user to the - service tenant and admin - role: - $ keystone user-role-add --user heat --tenant service --role admin - - This command provides no output. - - - - Create the heat_stack_user and heat_stack_owner roles: - $ keystone role-create --name heat_stack_user -$ keystone role-create --name heat_stack_owner - By default, users created by Orchestration use the - heat_stack_user role. 
- - - Create the heat and - heat-cfn services: - $ keystone service-create --name heat --type orchestration \ - --description "Orchestration" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Orchestration | -| enabled | True | -| id | 031112165cad4c2bb23e84603957de29 | -| name | heat | -| type | orchestration | -+-------------+----------------------------------+ -$ keystone service-create --name heat-cfn --type cloudformation \ - --description "Orchestration" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Orchestration | -| enabled | True | -| id | 297740d74c0a446bbff867acdccb33fa | -| name | heat-cfn | -| type | cloudformation | -+-------------+----------------------------------+ - - - Create the Identity service endpoints: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ orchestration / {print $2}') \ - --publicurl http://controller:8004/v1/%\(tenant_id\)s \ - --internalurl http://controller:8004/v1/%\(tenant_id\)s \ - --adminurl http://controller:8004/v1/%\(tenant_id\)s \ - --region regionOne -+-------------+-----------------------------------------+ -| Property | Value | -+-------------+-----------------------------------------+ -| adminurl | http://controller:8004/v1/%(tenant_id)s | -| id | f41225f665694b95a46448e8676b0dc2 | -| internalurl | http://controller:8004/v1/%(tenant_id)s | -| publicurl | http://controller:8004/v1/%(tenant_id)s | -| region | regionOne | -| service_id | 031112165cad4c2bb23e84603957de29 | -+-------------+-----------------------------------------+ -$ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ cloudformation / {print $2}') \ - --publicurl http://controller:8000/v1 \ - --internalurl http://controller:8000/v1 \ - --adminurl http://controller:8000/v1 \ - --region regionOne 
-+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| adminurl | http://controller:8000/v1 | -| id | f41225f665694b95a46448e8676b0dc2 | -| internalurl | http://controller:8000/v1 | -| publicurl | http://controller:8000/v1 | -| region | regionOne | -| service_id | 297740d74c0a446bbff867acdccb33fa | -+-------------+----------------------------------+ - - - - - - To install and configure the Orchestration components - - Run the following commands to install the packages: - # apt-get install heat-api heat-api-cfn heat-engine python-heatclient - # yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \ - python-heatclient - # zypper install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \ - python-heatclient - - - Edit the /etc/heat/heat.conf file and - complete the following actions: - - - In the [database] section, configure - database access: - [database] -... -connection = mysql://heat:HEAT_DBPASS@controller/heat - Replace HEAT_DBPASS with the - password you chose for the Orchestration database. - - - In the [DEFAULT] section, configure - RabbitMQ message broker access: - [DEFAULT] -... -rpc_backend = rabbit -rabbit_host = controller -rabbit_password = RABBIT_PASS - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. - - - In the [keystone_authtoken] and - [ec2authtoken] sections, configure Identity - service access: - [keystone_authtoken] -... -auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = service -admin_user = heat -admin_password = HEAT_PASS - -[ec2authtoken] -... -auth_uri = http://controller:5000/v2.0 - Replace HEAT_PASS with the - password you chose for the heat user - in the Identity service. - - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. 
-
 - - In the [DEFAULT] section, configure
 - the metadata and wait condition URLs:
 - [DEFAULT]
-...
-heat_metadata_server_url = http://controller:8000
-heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
 -
 - (Optional) To assist with troubleshooting, enable verbose
 - logging in the [DEFAULT] section:
 - [DEFAULT]
-...
-verbose = True
 -
 - Populate the Orchestration database:
 - # su -s /bin/sh -c "heat-manage db_sync" heat
 -
 - To install and configure the Orchestration components
 - Run the following commands to install the packages:
 - # apt-get install heat-api heat-api-cfn heat-engine python-heatclient
 -
 - Respond to prompts for
 - database management,
 - Identity service
 - credentials,
 - service endpoint
 - registration, and
 - message broker
 - credentials.
 -
 - Edit the /etc/heat/heat.conf file and
 - complete the following actions:
 -
 - In the [ec2authtoken] section, configure
 - Identity service access:
 - [ec2authtoken]
-...
-auth_uri = http://controller:5000/v2.0
 -
 - To finalize installation
 - Restart the Orchestration services:
 - # service heat-api restart
-# service heat-api-cfn restart
-# service heat-engine restart
 -
 - Start the Orchestration services and configure them to start when
 - the system boots:
 - # systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service \
 - openstack-heat-engine.service
-# systemctl start openstack-heat-api.service openstack-heat-api-cfn.service \
 - openstack-heat-engine.service
 - On SLES:
 - # service openstack-heat-api start
-# service openstack-heat-api-cfn start
-# service openstack-heat-engine start
-# chkconfig openstack-heat-api on
-# chkconfig openstack-heat-api-cfn on
-# chkconfig openstack-heat-engine on
 - On openSUSE:
 - # systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service \
 - openstack-heat-engine.service
-# systemctl start openstack-heat-api.service openstack-heat-api-cfn.service \
 - openstack-heat-engine.service
 -
 - By default, the
Ubuntu packages create a SQLite database. - Because this configuration uses a SQL database server, you - can remove the SQLite database file: - # rm -f /var/lib/heat/heat.sqlite - - -
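The orchestration endpoints registered earlier use a %(tenant_id)s placeholder; the backslashes in the endpoint-create command only protect the parentheses from the shell, so the stored URL contains the literal placeholder, which the service substitutes with the caller's tenant ID at request time. An illustrative expansion (expand_endpoint is a hypothetical helper):

```shell
# Illustrative: expand the %(tenant_id)s placeholder the way the
# stored endpoint template is expanded at request time.
expand_endpoint() {
    template="$1"; tenant="$2"
    printf '%s\n' "$template" | sed "s/%(tenant_id)s/$tenant/"
}

expand_endpoint 'http://controller:8004/v1/%(tenant_id)s' demo-tenant-id
# prints http://controller:8004/v1/demo-tenant-id
```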
diff --git a/doc/training-guides/basic-install-guide/section_heat-verify.xml b/doc/training-guides/basic-install-guide/section_heat-verify.xml deleted file mode 100644 index 3f6aac02..00000000 --- a/doc/training-guides/basic-install-guide/section_heat-verify.xml +++ /dev/null @@ -1,49 +0,0 @@ - -
- Verify operation - This section describes how to verify operation of the Orchestration - module (heat). - - - Source the demo tenant credentials: - $ source demo-openrc.sh - - - The Orchestration module uses templates to describe stacks. To learn - about the template language, see the Template Guide in the Heat developer - documentation. - Create a test template in the test-stack.yml - file with the following content: - - - - Use the heat stack-create command to create a - stack from the template: - $ NET_ID=$(nova net-list | awk '/ demo-net / { print $2 }') -$ heat stack-create -f test-stack.yml \ - -P "ImageID=cirros-0.3.3-x86_64;NetID=$NET_ID" testStack -+--------------------------------------+------------+--------------------+----------------------+ -| id | stack_name | stack_status | creation_time | -+--------------------------------------+------------+--------------------+----------------------+ -| 477d96b4-d547-4069-938d-32ee990834af | testStack | CREATE_IN_PROGRESS | 2014-04-06T15:11:01Z | -+--------------------------------------+------------+--------------------+----------------------+ - - - Use the heat stack-list command to verify - successful creation of the stack: - $ heat stack-list -+--------------------------------------+------------+-----------------+----------------------+ -| id | stack_name | stack_status | creation_time | -+--------------------------------------+------------+-----------------+----------------------+ -| 477d96b4-d547-4069-938d-32ee990834af | testStack | CREATE_COMPLETE | 2014-04-06T15:11:01Z | -+--------------------------------------+------------+-----------------+----------------------+ - - -
diff --git a/doc/training-guides/basic-install-guide/section_keystone-install.xml b/doc/training-guides/basic-install-guide/section_keystone-install.xml deleted file mode 100644 index 12c4edb0..00000000 --- a/doc/training-guides/basic-install-guide/section_keystone-install.xml +++ /dev/null @@ -1,235 +0,0 @@ - -
- Install and configure - This section describes how to install and configure the OpenStack - Identity service on the controller node. - - To configure prerequisites - Before you configure the OpenStack Identity service, you must create - a database and an administration token. - - To create the database, complete these steps: - - - Use the database access client to connect to the database - server as the root user: - $ mysql -u root -p - - - Create the keystone database: - CREATE DATABASE keystone; - - - Grant proper access to the keystone - database: - GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; -GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - Replace KEYSTONE_DBPASS with a suitable password. - - - Exit the database access client. - - - - - Generate a random value to use as the administration token during - initial configuration: - # openssl rand -hex 10 - # openssl rand 10 | hexdump -e '1/1 "%.2x"' - - - - To configure prerequisites - - Generate a random value to use as the administration token during - initial configuration: - # openssl rand -hex 10 - - - - To install and configure the components - - Run the following command to install the packages: - # apt-get install keystone python-keystoneclient - # yum install openstack-keystone python-keystoneclient - # zypper install openstack-keystone python-keystoneclient - - - Edit the /etc/keystone/keystone.conf file and - complete the following actions: - - - In the [DEFAULT] section, define the value - of the initial administration token: - [DEFAULT] -... -admin_token = ADMIN_TOKEN - Replace ADMIN_TOKEN with the random - value that you generated in a previous step. - - - In the [database] section, configure - database access: - [database] -... -connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone - Replace KEYSTONE_DBPASS with the - password you chose for the database. 
- - - In the [token] section, configure the UUID - token provider and SQL driver: - [token] -... -provider = keystone.token.providers.uuid.Provider -driver = keystone.token.persistence.backends.sql.Token - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] section: - [DEFAULT] -... -verbose = True - - - - - Create generic certificates and keys and restrict access to the - associated files: - # keystone-manage pki_setup --keystone-user keystone --keystone-group keystone -# chown -R keystone:keystone /var/log/keystone -# chown -R keystone:keystone /etc/keystone/ssl -# chmod -R o-rwx /etc/keystone/ssl - - - Populate the Identity service database: - # su -s /bin/sh -c "keystone-manage db_sync" keystone - - - - To install and configure the components - - Run the following command to install the packages: - # apt-get install keystone python-keystoneclient - - - Respond to prompts for - - - Configure the initial administration token: - - - - - - - - Use the random value that you generated in a previous step. If you - install using non-interactive mode or you do not specify this token, - the configuration tool generates a random value. - - - Create the admin tenant and user: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Create the Identity service endpoints: - - - - - - - - - - - To finalize installation - - Restart the Identity service: - # service keystone restart - - - Start the Identity service and configure it to start when the - system boots: - # systemctl enable openstack-keystone.service -# systemctl start openstack-keystone.service - On SLES: - # service openstack-keystone start -# chkconfig openstack-keystone on - On openSUSE: - # systemctl enable openstack-keystone.service -# systemctl start openstack-keystone.service - - - By default, the Ubuntu packages create a SQLite database. 
- Because this configuration uses a SQL database server, you can - remove the SQLite database file: - # rm -f /var/lib/keystone/keystone.db - - - By default, the Identity service stores expired tokens in the - database indefinitely. The accumulation of expired tokens considerably - increases the database size and might degrade service performance, - particularly in environments with limited resources. - We recommend that you use - cron to configure a periodic - task that purges expired tokens hourly: - # (crontab -l -u keystone 2>&1 | grep -q token_flush) || \ - echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \ - >> /var/spool/cron/crontabs/keystone - # (crontab -l -u keystone 2>&1 | grep -q token_flush) || \ - echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \ - >> /var/spool/cron/keystone - # (crontab -l -u keystone 2>&1 | grep -q token_flush) || \ - echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \ - >> /var/spool/cron/tabs/keystone - - -
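The guard in the crontab commands above (`crontab -l … | grep -q token_flush`) makes the append idempotent, so re-running the installation steps does not stack duplicate jobs. A minimal sketch of the same pattern against a scratch file rather than the real crontab:

```shell
#!/bin/sh
# Idempotent append: only add the token_flush job if it is not present yet
CRON_FILE=$(mktemp)
ENTRY='@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1'

grep -q token_flush "$CRON_FILE" || echo "$ENTRY" >> "$CRON_FILE"
grep -q token_flush "$CRON_FILE" || echo "$ENTRY" >> "$CRON_FILE"   # second run is a no-op

grep -c token_flush "$CRON_FILE"   # prints 1: the duplicate append was skipped
rm -f "$CRON_FILE"
```

Without the `grep -q` guard, each run of the installation script would append another `@hourly` line and the flush job would execute multiple times per hour.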
diff --git a/doc/training-guides/basic-install-guide/section_keystone-openrc.xml b/doc/training-guides/basic-install-guide/section_keystone-openrc.xml deleted file mode 100644 index 7c2b67cd..00000000 --- a/doc/training-guides/basic-install-guide/section_keystone-openrc.xml +++ /dev/null @@ -1,52 +0,0 @@ - -
- Create OpenStack client environment scripts - The previous section used a combination of environment variables and - command options to interact with the Identity service via the - keystone client. To increase efficiency of client - operations, OpenStack supports simple client environment scripts also - known as OpenRC files. These scripts typically contain common options for - all clients, but also support unique options. For more information, see the - OpenStack User Guide. - - To create the scripts - Create client environment scripts for the admin - and demo tenants and users. Future portions of this - guide reference these scripts to load appropriate credentials for client - operations. - - Edit the admin-openrc.sh file and add the - following content: - export OS_TENANT_NAME=admin -export OS_USERNAME=admin -export OS_PASSWORD=ADMIN_PASS -export OS_AUTH_URL=http://controller:35357/v2.0 - Replace ADMIN_PASS with the password you chose - for the admin user in the Identity service. - - - Edit the demo-openrc.sh file and add the - following content: - export OS_TENANT_NAME=demo -export OS_USERNAME=demo -export OS_PASSWORD=DEMO_PASS -export OS_AUTH_URL=http://controller:5000/v2.0 - Replace DEMO_PASS with the password you chose - for the demo user in the Identity service. - - - - To load client environment scripts - - To run clients as a certain tenant and user, you can simply load - the associated client environment script prior to running them. For - example, to load the location of the Identity service and - admin tenant and user credentials: - $ source admin-openrc.sh - - -
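Both scripts above follow the same write-then-source pattern. A self-contained sketch that writes a throwaway copy, loads it, and confirms the variables are exported (the password remains a placeholder, as in the guide):

```shell
#!/bin/sh
# Write a minimal client environment script into a scratch directory
cd "$(mktemp -d)"
cat > admin-openrc.sh <<'EOF'
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v2.0
EOF

# Load it; the keystone client reads these OS_* variables automatically
. ./admin-openrc.sh
echo "loaded credentials for $OS_USERNAME against $OS_AUTH_URL"
```

Because the script is sourced (`.` or `source`) rather than executed, the `export` statements take effect in the current shell, which is why subsequent client commands in the same session pick them up.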
diff --git a/doc/training-guides/basic-install-guide/section_keystone-services.xml b/doc/training-guides/basic-install-guide/section_keystone-services.xml deleted file mode 100644 index 0b26041a..00000000 --- a/doc/training-guides/basic-install-guide/section_keystone-services.xml +++ /dev/null @@ -1,84 +0,0 @@ - -
- Create the service entity and API endpoint - After you create tenants, users, and roles, you must create the - service entity and - API endpoint for the Identity service. - - To configure prerequisites - - Set the OS_SERVICE_TOKEN and - OS_SERVICE_ENDPOINT environment variables, as described - in . - - - - To create the service entity and API endpoint - - The Identity service manages a catalog of services in your - OpenStack environment. Services use this catalog to locate other - services in your environment. - Create the service entity for the Identity service: - $ keystone service-create --name keystone --type identity \ - --description "OpenStack Identity" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | OpenStack Identity | -| enabled | True | -| id | 15c11a23667e427e91bc31335b45f4bd | -| name | keystone | -| type | identity | -+-------------+----------------------------------+ - - Because OpenStack generates IDs dynamically, you will see - different values from this example command output. - - - - The Identity service manages a catalog of API endpoints associated - with the services in your OpenStack environment. Services use this - catalog to determine how to communicate with other services in your - environment. - OpenStack provides three API endpoint variations for each service: - admin, internal, and public. In a production environment, the variants - might reside on separate networks that service different types of users - for security reasons. Also, OpenStack supports multiple regions for - scalability. For simplicity, this configuration uses the management - network for all endpoint variations and the - regionOne region. 
- Create the API endpoint for the Identity service: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ identity / {print $2}') \ - --publicurl http://controller:5000/v2.0 \ - --internalurl http://controller:5000/v2.0 \ - --adminurl http://controller:35357/v2.0 \ - --region regionOne -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| adminurl | http://controller:35357/v2.0 | -| id | 11f9c625a3b94a3f8e66bf4e5de2679f | -| internalurl | http://controller:5000/v2.0 | -| publicurl | http://controller:5000/v2.0 | -| region | regionOne | -| service_id | 15c11a23667e427e91bc31335b45f4bd | -+-------------+----------------------------------+ - - This command references the ID of the service that you created - in the previous step. - - - - - Each service that you add to your OpenStack environment requires - adding information such as API endpoints to the Identity service. The - sections of this guide that cover service installation include steps - to add the appropriate information to the Identity service. - -
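The `endpoint-create` command above embeds `$(keystone service-list | awk '/ identity / {print $2}')` to pull the service ID out of the table that `service-list` prints. The awk filter can be checked against canned output (ID copied from the example above):

```shell
#!/bin/sh
# Canned `keystone service-list` output, as printed in the example above
service_list() {
cat <<'EOF'
+----------------------------------+----------+----------+--------------------+
|                id                |   name   |   type   |    description     |
+----------------------------------+----------+----------+--------------------+
| 15c11a23667e427e91bc31335b45f4bd | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+
EOF
}

# '/ identity /' selects the row for the identity service; because awk
# splits on whitespace, the leading '|' is field 1 and the id is field 2
SERVICE_ID=$(service_list | awk '/ identity / {print $2}')
echo "$SERVICE_ID"   # prints 15c11a23667e427e91bc31335b45f4bd
```

The spaces around `identity` in the pattern matter: they prevent the filter from matching the header row or any service whose name merely contains the word.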
diff --git a/doc/training-guides/basic-install-guide/section_keystone-users.xml b/doc/training-guides/basic-install-guide/section_keystone-users.xml deleted file mode 100644 index 0238df3f..00000000 --- a/doc/training-guides/basic-install-guide/section_keystone-users.xml +++ /dev/null @@ -1,199 +0,0 @@ - -
- Create tenants, users, and roles - After you install the Identity service, create - tenants (projects), - users, and - roles for your environment. You - must use the temporary administration token that you created in - and manually configure the location - (endpoint) of the Identity service before you run - keystone commands. - You can pass the value of the administration token to the - keystone command with the --os-token - option or set the temporary OS_SERVICE_TOKEN environment - variable. Similarly, you can pass the location of the Identity service - to the keystone command with the - --os-endpoint option or set the temporary - OS_SERVICE_ENDPOINT environment variable. This guide - uses environment variables to reduce command length. - For more information, see the - Operations Guide - Managing Projects and Users. - - To configure prerequisites - - Configure the administration token: - $ export OS_SERVICE_TOKEN=ADMIN_TOKEN - Replace ADMIN_TOKEN with the - administration token that you generated in - . For example: - $ export OS_SERVICE_TOKEN=294a4c8a8a475f9b9836 - - - Configure the endpoint: - $ export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0 - - - - To create tenants, users, and roles - - Create an administrative tenant, user, and role for - administrative operations in your environment: - - - Create the admin tenant: - $ keystone tenant-create --name admin --description "Admin Tenant" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Admin Tenant | -| enabled | True | -| id | 6f4c1e4cbfef4d5a8a1345882fbca110 | -| name | admin | -+-------------+----------------------------------+ - - Because OpenStack generates IDs dynamically, you will see - different values from this example command output.
- - - - Create the admin user: - $ keystone user-create --name admin --pass ADMIN_PASS --email EMAIL_ADDRESS -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| email | admin@example.com | -| enabled | True | -| id | ea8c352d253443118041c9c8b8416040 | -| name | admin | -| username | admin | -+----------+----------------------------------+ - Replace ADMIN_PASS with a - suitable password and EMAIL_ADDRESS - with a suitable e-mail address. - - - Create the admin role: - $ keystone role-create --name admin -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| id | bff3a6083b714fa29c9344bf8930d199 | -| name | admin | -+----------+----------------------------------+ - - - Add the admin tenant and user to the - admin role: - $ keystone user-role-add --tenant admin --user admin --role admin - - This command provides no output. - - - - By default, the dashboard limits access to users with the - _member_ role. - Create the _member_ role: - $ keystone role-create --name _member_ -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| id | 0f198e94ffce416cbcbe344e1843eac8 | -| name | _member_ | -+----------+----------------------------------+ - - - Add the admin tenant and user to the - _member_ role: - $ keystone user-role-add --tenant admin --user admin --role _member_ - - This command provides no output. - - - - - Any roles that you create must map to roles specified in the - policy.json file included with each OpenStack - service. The default policy for most services grants administrative - access to the admin role. For more information, - see the - Operations Guide - Managing Projects and Users. 
- - - - Create a demo tenant and user for typical operations in your - environment: - - - Create the demo tenant: - $ keystone tenant-create --name demo --description "Demo Tenant" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Demo Tenant | -| enabled | True | -| id | 4aa51bb942be4dd0ac0555d7591f80a6 | -| name | demo | -+-------------+----------------------------------+ - - Do not repeat this step when creating additional - users for this tenant. - - - - Create the demo user: - $ keystone user-create --name demo --pass DEMO_PASS --email EMAIL_ADDRESS -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| email | demo@example.com | -| enabled | True | -| id | 7004dfa0dda84d63aef81cf7f100af01 | -| name | demo | -| username | demo | -+----------+----------------------------------+ - Replace DEMO_PASS with a suitable - password and EMAIL_ADDRESS with a - suitable e-mail address. - - - Add the demo tenant and user to the - _member_ role: - $ keystone user-role-add --tenant demo --user demo --role _member_ - - This command provides no output. - - - - - You can repeat this procedure to create additional tenants - and users. - - - - OpenStack services also require a tenant, user, and role to - interact with other services. You will create a user in the - service tenant for each service that you - install. - - - Create the service tenant: - $ keystone tenant-create --name service --description "Service Tenant" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Service Tenant | -| enabled | True | -| id | 6b69202e1bf846a4ae50d65bc4789122 | -| name | service | -+-------------+----------------------------------+ - - - - -
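Later chapters repeat the service-user pattern once per component. As a dry run, the recurring command pair can be generated in a loop; the component list and `_PASS` placeholders below are illustrative, and dropping the `echo` prefixes would run the commands for real:

```shell
#!/bin/sh
# Dry run: print the two keystone commands each OpenStack component needs,
# following the pattern used throughout this guide. Component names are
# illustrative; replace the *_PASS placeholders with real passwords.
for svc in glance nova neutron; do
    PASS=$(printf '%s_PASS' "$svc" | tr '[:lower:]' '[:upper:]')
    echo keystone user-create --name "$svc" --pass "$PASS"
    echo keystone user-role-add --user "$svc" --tenant service --role admin
done
```

Each service gets its own user in the `service` tenant with the `admin` role, which is exactly what the per-service installation sections instruct you to do by hand.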
diff --git a/doc/training-guides/basic-install-guide/section_keystone-verify.xml b/doc/training-guides/basic-install-guide/section_keystone-verify.xml deleted file mode 100644 index 21273803..00000000 --- a/doc/training-guides/basic-install-guide/section_keystone-verify.xml +++ /dev/null @@ -1,119 +0,0 @@ - -
- Verify operation - This section describes how to verify operation of the Identity - service. - - - Unset the temporary OS_SERVICE_TOKEN and - OS_SERVICE_ENDPOINT environment variables: - $ unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT - - - As the admin tenant and user, request an - authentication token: - $ keystone --os-tenant-name admin --os-username admin --os-password ADMIN_PASS \ - --os-auth-url http://controller:35357/v2.0 token-get - Replace ADMIN_PASS with the password - you chose for the admin user in the Identity - service. You might need to use single quotes (') around your password - if it includes special characters. - Lengthy output that includes a token value verifies operation - for the admin tenant and user. - - - As the - admin tenant and user, list tenants to verify - that the admin tenant and user can execute - admin-only CLI commands and that the Identity service contains the - tenants that you created in : - As the admin tenant and user, list - tenants to verify that the admin tenant and user - can execute admin-only CLI commands and that the Identity service - contains the tenants created by the configuration tool: - $ keystone --os-tenant-name admin --os-username admin --os-password ADMIN_PASS \ - --os-auth-url http://controller:35357/v2.0 tenant-list -+----------------------------------+----------+---------+ -| id | name | enabled | -+----------------------------------+----------+---------+ -| 6f4c1e4cbfef4d5a8a1345882fbca110 | admin | True | -| 4aa51bb942be4dd0ac0555d7591f80a6 | demo | True | -| 6b69202e1bf846a4ae50d65bc4789122 | service | True | -+----------------------------------+----------+---------+ - - Because OpenStack generates IDs dynamically, you will see - different values from this example command output. 
- - - - As the - admin tenant and user, list users to verify - that the Identity service contains the users that you created - in : - As the admin tenant and user, list - users to verify that the Identity service contains the users - created by the configuration tool: - $ keystone --os-tenant-name admin --os-username admin --os-password ADMIN_PASS \ - --os-auth-url http://controller:35357/v2.0 user-list -+----------------------------------+---------+---------+---------------------+ -| id | name | enabled | email | -+----------------------------------+---------+---------+---------------------+ -| ea8c352d253443118041c9c8b8416040 | admin | True | admin@example.com | -| 7004dfa0dda84d63aef81cf7f100af01 | demo | True | demo@example.com | -+----------------------------------+---------+---------+---------------------+ - - - As the - admin tenant and user, list roles to verify - that the Identity service contains the role that you created - in : - As the admin tenant and user, list - roles to verify that the Identity service contains the role - created by the configuration tool: - $ keystone --os-tenant-name admin --os-username admin --os-password ADMIN_PASS \ - --os-auth-url http://controller:35357/v2.0 role-list -+----------------------------------+----------+ -| id | name | -+----------------------------------+----------+ -| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | -| bff3a6083b714fa29c9344bf8930d199 | admin | -+----------------------------------+----------+ - - - As the demo tenant and user, request an - authentication token: - $ keystone --os-tenant-name demo --os-username demo --os-password DEMO_PASS \ - --os-auth-url http://controller:35357/v2.0 token-get -+-----------+----------------------------------+ -| Property | Value | -+-----------+----------------------------------+ -| expires | 2014-10-10T12:51:33Z | -| id | 1b87ceae9e08411ba4a16e4dada04802 | -| tenant_id | 4aa51bb942be4dd0ac0555d7591f80a6 | -| user_id | 7004dfa0dda84d63aef81cf7f100af01 | 
-+-----------+----------------------------------+ - Replace DEMO_PASS with the password - you chose for the demo user in the Identity - service. - - - As the demo tenant and user, attempt to list - users to verify that you cannot execute admin-only CLI - commands: - $ keystone --os-tenant-name demo --os-username demo --os-password DEMO_PASS \ - --os-auth-url http://controller:35357/v2.0 user-list -You are not authorized to perform the requested action, admin_required. (HTTP 403) - - Each OpenStack service references a - policy.json file to determine the operations - available to a particular tenant, user, or role. For more - information, see the - Operations Guide - Managing Projects and Users. - - - -
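When scripting these verification steps, you often want just the token value rather than the whole table that `token-get` prints. A sketch of extracting the `id` row with awk from canned output (values copied from the example above):

```shell
#!/bin/sh
# Canned `keystone token-get` output, as printed in the example above
token_get() {
cat <<'EOF'
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2014-10-10T12:51:33Z       |
|    id     | 1b87ceae9e08411ba4a16e4dada04802 |
| tenant_id | 4aa51bb942be4dd0ac0555d7591f80a6 |
+-----------+----------------------------------+
EOF
}

# Select the row whose Property column is exactly "id" (this skips the
# tenant_id and user_id rows); the Value column is field 4
TOKEN=$(token_get | awk '$2 == "id" {print $4}')
echo "$TOKEN"   # prints 1b87ceae9e08411ba4a16e4dada04802
```

The exact-match comparison `$2 == "id"` is deliberate: a regex like `/id/` would also match the `tenant_id` and `user_id` rows.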
diff --git a/doc/training-guides/basic-install-guide/section_launch-instance-neutron.xml b/doc/training-guides/basic-install-guide/section_launch-instance-neutron.xml deleted file mode 100644 index 39711b51..00000000 --- a/doc/training-guides/basic-install-guide/section_launch-instance-neutron.xml +++ /dev/null @@ -1,367 +0,0 @@ - -
- Launch an instance with OpenStack Networking (neutron) - - To generate a key pair - Most cloud images support - public key authentication rather than conventional - user name/password authentication. Before launching an instance, you must - generate a public/private key pair using ssh-keygen - and add the public key to your OpenStack environment. - - Source the demo tenant credentials: - $ source demo-openrc.sh - - - Generate a key pair: - $ ssh-keygen - - - Add the public key to your OpenStack environment: - $ nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key - - This command provides no output. - - - - Verify addition of the public key: - $ nova keypair-list -+----------+-------------------------------------------------+ -| Name | Fingerprint | -+----------+-------------------------------------------------+ -| demo-key | 6c:74:ec:3a:08:05:4e:9e:21:22:a6:dd:b2:62:b8:28 | -+----------+-------------------------------------------------+ - - - - To launch an instance - To launch an instance, you must at least specify the flavor, image - name, network, security group, key, and instance name. - - A flavor specifies a virtual resource allocation profile which - includes processor, memory, and storage. - List available flavors: - $ nova flavor-list -+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ -| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | -+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ -| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True | -| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | -| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | -| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | -| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | -+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ - Your first instance uses the m1.tiny - flavor. - - You can also reference a flavor by ID. 
- - - - List available images: - $ nova image-list -+--------------------------------------+---------------------+--------+--------+ -| ID | Name | Status | Server | -+--------------------------------------+---------------------+--------+--------+ -| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.3-x86_64 | ACTIVE | | -+--------------------------------------+---------------------+--------+--------+ - Your first instance uses the - cirros-0.3.3-x86_64 image. - - - List available networks: - $ neutron net-list -+--------------------------------------+----------+-------------------------------------------------------+ -| id | name | subnets | -+--------------------------------------+----------+-------------------------------------------------------+ -| 3c612b5a-d1db-498a-babb-a4c50e344cb1 | demo-net | 20bcd3fd-5785-41fe-ac42-55ff884e3180 192.168.1.0/24 | -| 9bce64a3-a963-4c05-bfcd-161f708042d1 | ext-net | b54a8d85-b434-4e85-a8aa-74873841a90d 203.0.113.0/24 | -+--------------------------------------+----------+-------------------------------------------------------+ - Your first instance uses the demo-net tenant - network. However, you must reference this network using the ID instead - of the name. - - - List available security groups: - $ nova secgroup-list -+--------------------------------------+---------+-------------+ -| Id | Name | Description | -+--------------------------------------+---------+-------------+ -| ad8d4ea5-3cad-4f7d-b164-ada67ec59473 | default | default | -+--------------------------------------+---------+-------------+ - Your first instance uses the default security - group. By default, this security group implements a firewall that - blocks remote access to instances. If you would like to permit - remote access to your instance, launch it and then - configure remote access. - - - Launch the instance: - Replace DEMO_NET_ID with the ID of the - demo-net tenant network. 
- $ nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=DEMO_NET_ID \ - --security-group default --key-name demo-key demo-instance1 -+--------------------------------------+------------------------------------------------------------+ -| Property | Value | -+--------------------------------------+------------------------------------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-AZ:availability_zone | nova | -| OS-EXT-STS:power_state | 0 | -| OS-EXT-STS:task_state | scheduling | -| OS-EXT-STS:vm_state | building | -| OS-SRV-USG:launched_at | - | -| OS-SRV-USG:terminated_at | - | -| accessIPv4 | | -| accessIPv6 | | -| adminPass | vFW7Bp8PQGNo | -| config_drive | | -| created | 2014-04-09T19:24:27Z | -| flavor | m1.tiny (1) | -| hostId | | -| id | 05682b91-81a1-464c-8f40-8b3da7ee92c5 | -| image | cirros-0.3.3-x86_64 (acafc7c0-40aa-4026-9673-b879898e1fc2) | -| key_name | demo-key | -| metadata | {} | -| name | demo-instance1 | -| os-extended-volumes:volumes_attached | [] | -| progress | 0 | -| security_groups | default | -| status | BUILD | -| tenant_id | 7cf50047f8df4824bc76c2fdf66d11ec | -| updated | 2014-04-09T19:24:27Z | -| user_id | 0e47686e72114d7182f7569d70c519c9 | -+--------------------------------------+------------------------------------------------------------+ - - - Check the status of your instance: - $ nova list -+--------------------------------------+----------------+--------+------------+-------------+-------------------------+ -| ID | Name | Status | Task State | Power State | Networks | -+--------------------------------------+----------------+--------+------------+-------------+-------------------------+ -| 05682b91-81a1-464c-8f40-8b3da7ee92c5 | demo-instance1 | ACTIVE | - | Running | demo-net=192.168.1.3 | -+--------------------------------------+----------------+--------+------------+-------------+-------------------------+ - The status changes from BUILD to - ACTIVE when your instance finishes the build - 
process. - - - - To access your instance using a virtual console - - Obtain a Virtual Network Computing (VNC) - session URL for your instance and access it from a web browser: - $ nova get-vnc-console demo-instance1 novnc -+-------+------------------------------------------------------------------------------------+ -| Type | Url | -+-------+------------------------------------------------------------------------------------+ -| novnc | http://controller:6080/vnc_auto.html?token=2f6dd985-f906-4bfc-b566-e87ce656375b | -+-------+------------------------------------------------------------------------------------+ - - If your web browser runs on a host that cannot resolve the - controller host name, you can replace - controller with the IP address of the - management interface on your controller node. - - The CirrOS image includes conventional user name/password - authentication and provides these credentials at the login prompt. - After logging into CirrOS, we recommend that you verify network - connectivity using ping. - Verify the demo-net tenant network - gateway: - $ ping -c 4 192.168.1.1 -PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. -64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.357 ms -64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=0.473 ms -64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=0.504 ms -64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=0.470 ms - ---- 192.168.1.1 ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 2998ms -rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms - Verify the ext-net external network: - $ ping -c 4 openstack.org -PING openstack.org (174.143.194.225) 56(84) bytes of data. 
-64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms -64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms -64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms -64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - ---- openstack.org ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3003ms -rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms - - - - To access your instance remotely - - Add rules to the default security group: - - - Permit ICMP (ping): - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 -+-------------+-----------+---------+-----------+--------------+ -| IP Protocol | From Port | To Port | IP Range | Source Group | -+-------------+-----------+---------+-----------+--------------+ -| icmp | -1 | -1 | 0.0.0.0/0 | | -+-------------+-----------+---------+-----------+--------------+ - - - Permit secure shell (SSH) access: - $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 -+-------------+-----------+---------+-----------+--------------+ -| IP Protocol | From Port | To Port | IP Range | Source Group | -+-------------+-----------+---------+-----------+--------------+ -| tcp | 22 | 22 | 0.0.0.0/0 | | -+-------------+-----------+---------+-----------+--------------+ - - - - - Create a floating IP address on the - ext-net external network: - $ neutron floatingip-create ext-net -Created a new floatingip: -+---------------------+--------------------------------------+ -| Field | Value | -+---------------------+--------------------------------------+ -| fixed_ip_address | | -| floating_ip_address | 203.0.113.102 | -| floating_network_id | 9bce64a3-a963-4c05-bfcd-161f708042d1 | -| id | 05e36754-e7f3-46bb-9eaa-3521623b3722 | -| port_id | | -| router_id | | -| status | DOWN | -| tenant_id | 7cf50047f8df4824bc76c2fdf66d11ec | -+---------------------+--------------------------------------+ - - - Associate the floating IP address with your instance: - $ nova floating-ip-associate demo-instance1 
203.0.113.102 - - This command provides no output. - - - Check the status of your floating IP address: - $ nova list -+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ -| ID | Name | Status | Task State | Power State | Networks | -+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ -| 05682b91-81a1-464c-8f40-8b3da7ee92c5 | demo-instance1 | ACTIVE | - | Running | demo-net=192.168.1.3, 203.0.113.102 | -+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - - - Verify network connectivity using ping from the - controller node or any host on the external network: - $ ping -c 4 203.0.113.102 -PING 203.0.113.102 (203.0.113.102) 56(84) bytes of data. -64 bytes from 203.0.113.102: icmp_req=1 ttl=63 time=3.18 ms -64 bytes from 203.0.113.102: icmp_req=2 ttl=63 time=0.981 ms -64 bytes from 203.0.113.102: icmp_req=3 ttl=63 time=1.06 ms -64 bytes from 203.0.113.102: icmp_req=4 ttl=63 time=0.929 ms - ---- 203.0.113.102 ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3002ms -rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms - - - Access your instance using SSH from the controller node or any - host on the external network: - $ ssh cirros@203.0.113.102 -The authenticity of host '203.0.113.102 (203.0.113.102)' can't be established. -RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9. -Are you sure you want to continue connecting (yes/no)? yes -Warning: Permanently added '203.0.113.102' (RSA) to the list of known hosts. -$ - - If your host does not contain the public/private key pair created - in an earlier step, SSH prompts for the default password associated - with the cirros user.
- - - - - To attach a Block Storage volume to your instance - If your environment includes the Block Storage service, you can - attach a volume to the instance. - - Source the demo tenant credentials: - $ source demo-openrc.sh - - - List volumes: - $ nova volume-list -+--------------------------------------+-----------+--------------+------+-------------+-------------+ -| ID | Status | Display Name | Size | Volume Type | Attached to | -+--------------------------------------+-----------+--------------+------+-------------+-------------+ -| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 | 1 | None | | -+--------------------------------------+-----------+--------------+------+-------------+-------------+ - - - Attach the demo-volume1 volume to - the demo-instance1 instance: - $ nova volume-attach demo-instance1 158bea89-07db-4ac2-8115-66c0d6a4bb48 -+----------+--------------------------------------+ -| Property | Value | -+----------+--------------------------------------+ -| device | /dev/vdb | -| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 | -| serverId | 05682b91-81a1-464c-8f40-8b3da7ee92c5 | -| volumeId | 158bea89-07db-4ac2-8115-66c0d6a4bb48 | -+----------+--------------------------------------+ - - You must reference volumes using the IDs instead of - names. 
- - - - List volumes: - $ nova volume-list -+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+ -| ID | Status | Display Name | Size | Volume Type | Attached to | -+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+ -| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | in-use | demo-volume1 | 1 | None | 05682b91-81a1-464c-8f40-8b3da7ee92c5 | -+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+ - The demo-volume1 volume status should indicate - in-use by the ID of the - demo-instance1 instance. - - - Access your instance using SSH from the controller node or any - host on the external network and use the fdisk - command to verify presence of the volume as the - /dev/vdb block storage device: - $ ssh cirros@203.0.113.102 -$ sudo fdisk -l - -Disk /dev/vda: 1073 MB, 1073741824 bytes -255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors -Units = sectors of 1 * 512 = 512 bytes -Sector size (logical/physical): 512 bytes / 512 bytes -I/O size (minimum/optimal): 512 bytes / 512 bytes -Disk identifier: 0x00000000 - - Device Boot Start End Blocks Id System -/dev/vda1 * 16065 2088449 1036192+ 83 Linux - -Disk /dev/vdb: 1073 MB, 1073741824 bytes -16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors -Units = sectors of 1 * 512 = 512 bytes -Sector size (logical/physical): 512 bytes / 512 bytes -I/O size (minimum/optimal): 512 bytes / 512 bytes -Disk identifier: 0x00000000 - -Disk /dev/vdb doesn't contain a valid partition table - - You must create a partition table and file system to use - the volume. - - - - If your instance does not launch or seem to work as you expect, see the - assistance. We want your environment to work! -
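The procedure above runs `nova list` by hand and re-checks until the status changes from BUILD to ACTIVE; scripts usually wrap this in a polling loop. A sketch with a stub standing in for `nova list` (swap the stub for the real client and re-enable the sleep; the two-poll delay is illustrative):

```shell
#!/bin/sh
# Stub for `nova list`: reports BUILD on the first two polls, ACTIVE after.
# Replace this function with the real `nova list` call in a live environment.
nova_list() {
    if [ "$1" -lt 3 ]; then
        echo '| 05682b91 | demo-instance1 | BUILD  | spawning | NOSTATE | demo-net=192.168.1.3 |'
    else
        echo '| 05682b91 | demo-instance1 | ACTIVE | -        | Running | demo-net=192.168.1.3 |'
    fi
}

# Poll until demo-instance1 reports ACTIVE, giving up after 10 tries;
# field 6 is the Status column once awk splits on whitespace
i=1
while [ "$i" -le 10 ]; do
    status=$(nova_list "$i" | awk '/ demo-instance1 / {print $6}')
    [ "$status" = ACTIVE ] && break
    # sleep 5   # uncomment when polling the real client
    i=$((i + 1))
done
echo "instance status after $i polls: $status"
```

A real script should also bail out if the status becomes ERROR, since an instance that failed to schedule never reaches ACTIVE.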
diff --git a/doc/training-guides/basic-install-guide/section_launch-instance-nova.xml b/doc/training-guides/basic-install-guide/section_launch-instance-nova.xml deleted file mode 100644 index 9ea79c22..00000000 --- a/doc/training-guides/basic-install-guide/section_launch-instance-nova.xml +++ /dev/null @@ -1,328 +0,0 @@ - -
- Launch an instance with legacy networking (nova-network) - - To generate a key pair - Most cloud images support - public key authentication rather than conventional - user name/password authentication. Before launching an instance, you must - generate a public/private key pair using ssh-keygen - and add the public key to your OpenStack environment. - - Source the demo tenant credentials: - $ source demo-openrc.sh - - - Generate a key pair: - $ ssh-keygen - - - Add the public key to your OpenStack environment: - $ nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key - - This command provides no output. - - - - Verify addition of the public key: - $ nova keypair-list -+----------+-------------------------------------------------+ -| Name | Fingerprint | -+----------+-------------------------------------------------+ -| demo-key | 6c:74:ec:3a:08:05:4e:9e:21:22:a6:dd:b2:62:b8:28 | -+----------+-------------------------------------------------+ - - - - To launch an instance - To launch an instance, you must at least specify the flavor, image - name, network, security group, key, and instance name. - - A flavor specifies a virtual resource allocation profile which - includes processor, memory, and storage. - List available flavors: - $ nova flavor-list -+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ -| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | -+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ -| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True | -| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | -| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | -| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | -| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | -+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ - Your first instance uses the m1.tiny - flavor. - - You can also reference a flavor by ID. 
- - - - List available images: - $ nova image-list -+--------------------------------------+---------------------+--------+--------+ -| ID | Name | Status | Server | -+--------------------------------------+---------------------+--------+--------+ -| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.3-x86_64 | ACTIVE | | -+--------------------------------------+---------------------+--------+--------+ - Your first instance uses the - cirros-0.3.3-x86_64 image. - - - List available networks: - - You must source the admin tenant credentials - for this step and then source the demo tenant - credentials for the remaining steps. - $ source admin-openrc.sh - - $ nova net-list -+--------------------------------------+----------+------------------+ -| ID | Label | CIDR | -+--------------------------------------+----------+------------------+ -| 7f849be3-4494-495a-95a1-0f99ccb884c4 | demo-net | 203.0.113.24/29 | -+--------------------------------------+----------+------------------+ - Your first instance uses the demo-net tenant - network. However, you must reference this network using the ID instead - of the name. - - - List available security groups: - $ nova secgroup-list -+--------------------------------------+---------+-------------+ -| Id | Name | Description | -+--------------------------------------+---------+-------------+ -| ad8d4ea5-3cad-4f7d-b164-ada67ec59473 | default | default | -+--------------------------------------+---------+-------------+ - Your first instance uses the default security - group. By default, this security group implements a firewall that - blocks remote access to instances. If you would like to permit - remote access to your instance, launch it and then - configure remote access. - - - Launch the instance: - Replace DEMO_NET_ID with the ID of the - demo-net tenant network. 
- $ nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=DEMO_NET_ID \ - --security-group default --key-name demo-key demo-instance1 -+--------------------------------------+------------------------------------------------------------+ -| Property | Value | -+--------------------------------------+------------------------------------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-AZ:availability_zone | nova | -| OS-EXT-STS:power_state | 0 | -| OS-EXT-STS:task_state | scheduling | -| OS-EXT-STS:vm_state | building | -| OS-SRV-USG:launched_at | - | -| OS-SRV-USG:terminated_at | - | -| accessIPv4 | | -| accessIPv6 | | -| adminPass | ThZqrg7ach78 | -| config_drive | | -| created | 2014-04-10T00:09:16Z | -| flavor | m1.tiny (1) | -| hostId | | -| id | 45ea195c-c469-43eb-83db-1a663bbad2fc | -| image | cirros-0.3.3-x86_64 (acafc7c0-40aa-4026-9673-b879898e1fc2) | -| key_name | demo-key | -| metadata | {} | -| name | demo-instance1 | -| os-extended-volumes:volumes_attached | [] | -| progress | 0 | -| security_groups | default | -| status | BUILD | -| tenant_id | 93849608fe3d462ca9fa0e5dbfd4d040 | -| updated | 2014-04-10T00:09:16Z | -| user_id | 8397567baf4746cca7a1e608677c3b23 | -+--------------------------------------+------------------------------------------------------------+ - - - Check the status of your instance: - $ nova list -+--------------------------------------+----------------+--------+------------+-------------+------------------------+ -| ID | Name | Status | Task State | Power State | Networks | -+--------------------------------------+----------------+--------+------------+-------------+------------------------+ -| 45ea195c-c469-43eb-83db-1a663bbad2fc | demo-instance1 | ACTIVE | - | Running | demo-net=203.0.113.26 | -+--------------------------------------+----------------+--------+------------+-------------+------------------------+ - The status changes from BUILD to - ACTIVE when your instance finishes the build - 
process. - - - - To access your instance using a virtual console - - Obtain a Virtual Network Computing (VNC) - session URL for your instance and access it from a web browser: - $ nova get-vnc-console demo-instance1 novnc -+-------+------------------------------------------------------------------------------------+ -| Type | Url | -+-------+------------------------------------------------------------------------------------+ -| novnc | http://controller:6080/vnc_auto.html?token=2f6dd985-f906-4bfc-b566-e87ce656375b | -+-------+------------------------------------------------------------------------------------+ - - If your web browser runs on a host that cannot resolve the - controller host name, you can replace - controller with the IP address of the - management interface on your controller node. - - The CirrOS image includes conventional user name/password - authentication and provides these credentials at the login prompt. - After logging into CirrOS, we recommend that you verify network - connectivity using ping. - Verify the demo-net network: - $ ping -c 4 openstack.org -PING openstack.org (174.143.194.225) 56(84) bytes of data. 
-64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms -64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms -64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms -64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - ---- openstack.org ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3003ms -rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms - - - - To access your instance remotely - - Add rules to the default security group: - - - Permit ICMP (ping): - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 -+-------------+-----------+---------+-----------+--------------+ -| IP Protocol | From Port | To Port | IP Range | Source Group | -+-------------+-----------+---------+-----------+--------------+ -| icmp | -1 | -1 | 0.0.0.0/0 | | -+-------------+-----------+---------+-----------+--------------+ - - - Permit secure shell (SSH) access: - $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 -+-------------+-----------+---------+-----------+--------------+ -| IP Protocol | From Port | To Port | IP Range | Source Group | -+-------------+-----------+---------+-----------+--------------+ -| tcp | 22 | 22 | 0.0.0.0/0 | | -+-------------+-----------+---------+-----------+--------------+ - - - - - Verify network connectivity using ping from the - controller node or any host on the external network: - $ ping -c 4 203.0.113.26 -PING 203.0.113.26 (203.0.113.26) 56(84) bytes of data. 
-64 bytes from 203.0.113.26: icmp_req=1 ttl=63 time=3.18 ms -64 bytes from 203.0.113.26: icmp_req=2 ttl=63 time=0.981 ms -64 bytes from 203.0.113.26: icmp_req=3 ttl=63 time=1.06 ms -64 bytes from 203.0.113.26: icmp_req=4 ttl=63 time=0.929 ms - ---- 203.0.113.26 ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 3002ms -rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms - - - Access your instance using SSH from the controller node or any - host on the external network: - $ ssh cirros@203.0.113.26 -The authenticity of host '203.0.113.26 (203.0.113.26)' can't be established. -RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9. -Are you sure you want to continue connecting (yes/no)? yes -Warning: Permanently added '203.0.113.26' (RSA) to the list of known hosts. -$ - - If your host does not contain the public/private key pair created - in an earlier step, SSH prompts for the default password associated - with the cirros user. - - - - - To attach a Block Storage volume to your instance - If your environment includes the Block Storage service, you can - attach a volume to the instance. 
- - Source the demo tenant credentials: - $ source demo-openrc.sh - - - List volumes: - $ nova volume-list -+--------------------------------------+-----------+--------------+------+-------------+-------------+ -| ID | Status | Display Name | Size | Volume Type | Attached to | -+--------------------------------------+-----------+--------------+------+-------------+-------------+ -| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 | 1 | None | | -+--------------------------------------+-----------+--------------+------+-------------+-------------+ - - - Attach the demo-volume1 volume to - the demo-instance1 instance: - $ nova volume-attach demo-instance1 158bea89-07db-4ac2-8115-66c0d6a4bb48 -+----------+--------------------------------------+ -| Property | Value | -+----------+--------------------------------------+ -| device | /dev/vdb | -| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 | -| serverId | 45ea195c-c469-43eb-83db-1a663bbad2fc | -| volumeId | 158bea89-07db-4ac2-8115-66c0d6a4bb48 | -+----------+--------------------------------------+ - - You must reference volumes using the IDs instead of - names. - - - - List volumes: - $ nova volume-list -+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+ -| ID | Status | Display Name | Size | Volume Type | Attached to | -+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+ -| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | in-use | demo-volume1 | 1 | None | 45ea195c-c469-43eb-83db-1a663bbad2fc | -+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+ - The demo-volume1 volume status should indicate - in-use by the ID of the - demo-instance1 instance. 
-
-
- Access your instance using SSH from the controller node or any
- host on the external network and use the fdisk
- command to verify the presence of the volume as the
- /dev/vdb block storage device:
- $ ssh cirros@203.0.113.102
-$ sudo fdisk -l
-
-Disk /dev/vda: 1073 MB, 1073741824 bytes
-255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
-Units = sectors of 1 * 512 = 512 bytes
-Sector size (logical/physical): 512 bytes / 512 bytes
-I/O size (minimum/optimal): 512 bytes / 512 bytes
-Disk identifier: 0x00000000
-
- Device Boot Start End Blocks Id System
-/dev/vda1 * 16065 2088449 1036192+ 83 Linux
-
-Disk /dev/vdb: 1073 MB, 1073741824 bytes
-16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
-Units = sectors of 1 * 512 = 512 bytes
-Sector size (logical/physical): 512 bytes / 512 bytes
-I/O size (minimum/optimal): 512 bytes / 512 bytes
-Disk identifier: 0x00000000
-
-Disk /dev/vdb doesn't contain a valid partition table
-
- You must create a partition table and file system to use
- the volume.
-
-
-
- If your instance does not launch or does not work as you expect, see the
- OpenStack Operations Guide for more
- information or seek
- assistance. We want your environment to work!
-
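Because volumes must be referenced by ID rather than by name, the lookup can be scripted. This is a sketch that is not part of the original guide: the sample table below stands in for live nova volume-list output, and the awk pattern simply picks the ID column from the row naming the volume.

```shell
# Sample `nova volume-list` output standing in for a live call; on a real
# node you would pipe the command itself through awk instead.
volume_list='+----+-----------+--------------+
| ID | Status | Display Name |
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 |
+----+-----------+--------------+'

# The second whitespace-separated field of the matching row is the volume ID.
VOLUME_ID=$(echo "$volume_list" | awk '/ demo-volume1 / {print $2}')
echo "$VOLUME_ID"
```

With a live environment the same pattern becomes `nova volume-list | awk '/ demo-volume1 / {print $2}'`, and the result can be passed straight to nova volume-attach.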
diff --git a/doc/training-guides/basic-install-guide/section_neutron-compute-node.xml b/doc/training-guides/basic-install-guide/section_neutron-compute-node.xml deleted file mode 100644 index 94bdb359..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-compute-node.xml +++ /dev/null @@ -1,334 +0,0 @@ - -
- Install and configure compute node - The compute node handles connectivity and - security groups - for instances. - - To configure prerequisites - Before you install and configure OpenStack Networking, you - must configure certain kernel networking parameters. - - Edit the /etc/sysctl.conf file to - contain the following parameters: - net.ipv4.conf.all.rp_filter=0 -net.ipv4.conf.default.rp_filter=0 - - - Implement the changes: - # sysctl -p - - - - To install the Networking components - - # apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent - # yum install openstack-neutron-ml2 openstack-neutron-openvswitch - # zypper install --no-recommends openstack-neutron-openvswitch-agent ipset - - SUSE does not use a separate ML2 plug-in package. - - - - - To install and configure the Networking components - - # apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms - - Debian does not use a separate ML2 plug-in package. - - - - Select the ML2 plug-in: - - - - - - - - - Selecting the ML2 plug-in also populates the - and - options in the - /etc/neutron/neutron.conf file with the - appropriate values. - - - - - To configure the Networking common components - The Networking common component configuration includes the - authentication mechanism, message broker, and plug-in. - - Edit the /etc/neutron/neutron.conf file - and complete the following actions: - - - In the [database] section, comment out - any connection options because compute nodes - do not directly access the database. - - - In the [DEFAULT] section, configure - RabbitMQ message broker access: - [DEFAULT] -... -rpc_backend = rabbit -rabbit_host = controller -rabbit_password = RABBIT_PASS - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. - - - In the [DEFAULT] and - [keystone_authtoken] sections, - configure Identity service access: - [DEFAULT] -... -auth_strategy = keystone - -[keystone_authtoken] -... 
-auth_uri = http://controller:5000/v2.0
-identity_uri = http://controller:35357
-admin_tenant_name = service
-admin_user = neutron
-admin_password = NEUTRON_PASS
- Replace NEUTRON_PASS with the
- password you chose for the neutron user in the
- Identity service.
-
- Comment out any auth_host,
- auth_port, and
- auth_protocol options because the
- identity_uri option replaces them.
-
-
-
- In the [DEFAULT] section, enable the
- Modular Layer 2 (ML2) plug-in, router service, and overlapping
- IP addresses:
- [DEFAULT]
-...
-core_plugin = ml2
-service_plugins = router
-allow_overlapping_ips = True
-
-
- (Optional) To assist with troubleshooting,
- enable verbose logging in the [DEFAULT]
- section:
- [DEFAULT]
-...
-verbose = True
-
-
-
-
-
- To configure the Modular Layer 2 (ML2) plug-in
- The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to
- build the virtual networking framework for instances.
-
- Edit the
- /etc/neutron/plugins/ml2/ml2_conf.ini
- file and complete the following actions:
-
-
- In the [ml2] section, enable the
- flat and
- generic routing encapsulation (GRE)
- network type drivers, GRE tenant networks, and the OVS
- mechanism driver:
- [ml2]
-...
-type_drivers = flat,gre
-tenant_network_types = gre
-mechanism_drivers = openvswitch
-
-
- In the [ml2_type_gre] section, configure
- the tunnel identifier (id) range:
- [ml2_type_gre]
-...
-tunnel_id_ranges = 1:1000
-
-
- In the [securitygroup] section, enable
- security groups, enable ipset, and
- configure the OVS iptables firewall
- driver:
- [securitygroup]
-...
-enable_security_group = True
-enable_ipset = True
-firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
-
-
- In the [ovs] section, configure the
- Open vSwitch (OVS) agent:
- [ovs]
-...
-local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS -tunnel_type = gre -enable_tunneling = True - Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS - with the IP address of the instance tunnels network interface - on your compute node. - - - - - - To configure the Open vSwitch (OVS) service - The OVS service provides the underlying virtual networking framework - for instances. - - Start the OVS service and configure it to start when the - system boots: - # systemctl enable openvswitch.service -# systemctl start openvswitch.service - On SLES: - # service openvswitch-switch start -# chkconfig openvswitch-switch on - On openSUSE: - # systemctl enable openvswitch.service -# systemctl start openvswitch.service - - - Restart the OVS service: - # service openvswitch-switch restart - - - - To configure Compute to use Networking - By default, distribution packages configure Compute to use - legacy networking. You must reconfigure Compute to manage - networks through Networking. - - Edit the /etc/nova/nova.conf file and - complete the following actions: - - - In the [DEFAULT] section, configure - the APIs and drivers: - [DEFAULT] -... -network_api_class = nova.network.neutronv2.api.API -security_group_api = neutron -linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver -firewall_driver = nova.virt.firewall.NoopFirewallDriver - - By default, Compute uses an internal firewall service. - Since Networking includes a firewall service, you must - disable the Compute firewall service by using the - nova.virt.firewall.NoopFirewallDriver - firewall driver. - - - - In the [neutron] section, configure - access parameters: - [neutron] -... -url = http://controller:9696 -auth_strategy = keystone -admin_auth_url = http://controller:35357/v2.0 -admin_tenant_name = service -admin_username = neutron -admin_password = NEUTRON_PASS - Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. 
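The [neutron] settings above can also be applied non-interactively. This is a sketch rather than part of the guide: it writes the section to a scratch file instead of /etc/nova/nova.conf, and NEUTRON_PASS remains a placeholder. (On a real node, tools such as crudini or openstack-config are often used for this kind of edit.)

```shell
# Scratch file standing in for /etc/nova/nova.conf; point NOVA_CONF at the
# real file on a live compute node.
NOVA_CONF=$(mktemp)

# Append the [neutron] access parameters from the step above.
cat >> "$NOVA_CONF" <<'EOF'
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
EOF

# Quick sanity check: six option lines were written.
grep -c '=' "$NOVA_CONF"
```

The same here-document pattern works for the other configuration files in this section.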
- - - - - - To finalize the installation - - The Networking service initialization scripts expect a - symbolic link /etc/neutron/plugin.ini - pointing to the ML2 plug-in configuration file, - /etc/neutron/plugins/ml2/ml2_conf.ini. - If this symbolic link does not exist, create it using the - following command: - # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - Due to a packaging bug, the Open vSwitch agent initialization - script explicitly looks for the Open vSwitch plug-in configuration - file rather than a symbolic link - /etc/neutron/plugin.ini pointing to the ML2 - plug-in configuration file. Run the following commands to resolve this - issue: - # cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \ - /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig -# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \ - /usr/lib/systemd/system/neutron-openvswitch-agent.service - - - The Networking service initialization scripts expect the - variable NEUTRON_PLUGIN_CONF in the - /etc/sysconfig/neutron file to - reference the ML2 plug-in configuration file. 
Edit the - /etc/sysconfig/neutron file and add the - following: - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" - - - Restart the Compute service: - # systemctl restart openstack-nova-compute.service - On SLES: - # service openstack-nova-compute restart - On openSUSE: - # systemctl restart openstack-nova-compute.service - # service nova-compute restart - - - Start the Open vSwitch (OVS) agent and configure it to - start when the system boots: - # systemctl enable neutron-openvswitch-agent.service -# systemctl start neutron-openvswitch-agent.service - On SLES: - # service openstack-neutron-openvswitch-agent start -# chkconfig openstack-neutron-openvswitch-agent on - On openSUSE: - # systemctl enable openstack-neutron-openvswitch-agent.service -# systemctl start openstack-neutron-openvswitch-agent.service - - - Restart the Open vSwitch (OVS) agent: - # service neutron-plugin-openvswitch-agent restart - - - - Verify operation - - Perform these commands on the controller node. - - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - List agents to verify successful launch of the - neutron agents: - $ neutron agent-list -+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ -| id | agent_type | host | alive | admin_state_up | binary | -+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ -... -| a5a49051-05eb-4b4f-bfc7-d36235fe9131 | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent | -+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ - - -
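Because the alive column of neutron agent-list uses the :-) marker, the verification step can be scripted. This is a sketch that is not part of the original guide; the sample row below stands in for live output on the controller node.

```shell
# One row of `neutron agent-list` output; replace the echo with the live
# command on a real controller node.
agent_row='| a5a49051-05eb-4b4f-bfc7-d36235fe9131 | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent |'

# Flag whether the compute node's OVS agent reports alive.
agent_alive=no
if echo "$agent_row" | grep 'Open vSwitch agent' | grep -q ':-)'; then
    agent_alive=yes
    echo "compute1 OVS agent is alive"
fi
```

On a live node the first grep would be fed by `neutron agent-list` directly, making the check usable in a post-install script.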
diff --git a/doc/training-guides/basic-install-guide/section_neutron-concepts.xml b/doc/training-guides/basic-install-guide/section_neutron-concepts.xml deleted file mode 100644 index 828f094b..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-concepts.xml +++ /dev/null @@ -1,63 +0,0 @@ - -
-
- Networking concepts
- OpenStack Networking (neutron) manages all networking facets
- for the Virtual Networking Infrastructure (VNI) and the access
- layer aspects of the Physical Networking Infrastructure (PNI) in
- your OpenStack environment. OpenStack Networking enables tenants
- to create advanced virtual network topologies including services
- such as firewalls,
- load balancers,
- and virtual
- private networks (VPNs).
- Networking provides the networks, subnets, and routers object
- abstractions. Each abstraction has functionality that mimics its
- physical counterpart: networks contain subnets, and routers route
- traffic between different subnets and networks.
- Each router has one gateway that connects to a network, and
- many interfaces connected to subnets. Machines on one subnet can
- access machines on other subnets connected to the same router.
- Any given Networking setup has at least one external network.
- Unlike the other networks, the external network is not merely a
- virtually defined network. Instead, it represents a view into a
- slice of the physical, external network accessible outside the
- OpenStack installation. IP addresses on the external network are
- accessible by anybody physically on the outside network. Because
- the external network merely represents a view into the outside
- network, DHCP is disabled on this network.
- In addition to external networks, any Networking setup has
- one or more internal networks. These software-defined networks
- connect directly to the VMs. Only the VMs on any given internal
- network, or those on subnets connected through interfaces to a
- similar router, can access VMs connected to that network
- directly.
- For the outside network to access VMs, and vice versa, routers
- between the networks are needed. Each router has one gateway that
- is connected to a network and many interfaces that are connected
- to subnets.
As with a physical router, machines on one subnet can access
- machines on other subnets that are connected to the same router, and machines
- can access the outside network through the gateway for the
- router.
- Additionally, you can allocate IP addresses on external
- networks to ports on the internal network. Whenever something is
- connected to a subnet, that connection is called a port. You can
- associate external network IP addresses with ports attached to VMs. This
- way, entities on the outside network can access VMs.
- Networking also supports security
- groups. Security groups enable administrators to
- define firewall rules in groups. A VM can belong to one or more
- security groups, and Networking applies the rules in those
- security groups to block or unblock ports, port ranges, or traffic
- types for that VM.
- Each plug-in that Networking uses has its own concepts. While
- not vital to operating the VNI and OpenStack environment,
- understanding these concepts can help you set up Networking.
- All Networking installations use a core plug-in and a security group
- plug-in (or just the No-Op security group plug-in). Additionally,
- Firewall-as-a-Service (FWaaS) and Load-Balancer-as-a-Service (LBaaS)
- plug-ins are available.
-
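The abstractions above map directly onto the neutron CLI. As an illustrative sketch only (the names and CIDRs here are hypothetical, and the commands require a configured environment), an external network, an internal network, and a router connecting them could be created along these lines:

```
$ neutron net-create ext-net --router:external True
$ neutron subnet-create ext-net 203.0.113.0/24 --disable-dhcp
$ neutron net-create int-net
$ neutron subnet-create int-net 192.168.1.0/24
$ neutron router-create router1
$ neutron router-interface-add router1 SUBNET_ID   # attach the internal subnet
$ neutron router-gateway-set router1 EXT_NET_ID    # gateway to the external network
```

Here SUBNET_ID and EXT_NET_ID stand for the IDs returned by the earlier commands. Note that the external subnet is created with DHCP disabled, matching the description above.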
diff --git a/doc/training-guides/basic-install-guide/section_neutron-controller-node.xml b/doc/training-guides/basic-install-guide/section_neutron-controller-node.xml deleted file mode 100644 index b9baf208..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-controller-node.xml +++ /dev/null @@ -1,448 +0,0 @@ - -
- Install and configure controller node - - To configure prerequisites - Before you configure OpenStack Networking (neutron), you must create - a database and Identity service credentials including endpoints. - - To create the database, complete these steps: - - - Use the database access client to connect to the database - server as the root user: - $ mysql -u root -p - - - Create the neutron database: - CREATE DATABASE neutron; - - - Grant proper access to the neutron - database: - GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; -GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - Replace NEUTRON_DBPASS with a - suitable password. - - - Exit the database access client. - - - - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - To create the Identity service credentials, complete these - steps: - - - Create the neutron user: - $ keystone user-create --name neutron --pass NEUTRON_PASS -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| email | | -| enabled | True | -| id | 7fd67878dcd04d0393469ef825a7e005 | -| name | neutron | -| username | neutron | -+----------+----------------------------------+ - Replace NEUTRON_PASS with a suitable - password. - - - Link the neutron user to the - service tenant and admin - role: - $ keystone user-role-add --user neutron --tenant service --role admin - - This command provides no output. 
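Because user-role-add prints nothing on success, a quick check can confirm the assignment. This verification step is a suggestion rather than part of the original guide:

```
$ keystone user-role-list --user neutron --tenant service
```

The output should include the admin role for the neutron user.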
- - - - Create the neutron service: - $ keystone service-create --name neutron --type network \ - --description "OpenStack Networking" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | OpenStack Networking | -| enabled | True | -| id | 6369ddaf99a447f3a0d41dac5e342161 | -| name | neutron | -| type | network | -+-------------+----------------------------------+ - - - Create the Identity service endpoints: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ network / {print $2}') \ - --publicurl http://controller:9696 \ - --adminurl http://controller:9696 \ - --internalurl http://controller:9696 \ - --region regionOne -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| adminurl | http://controller:9696 | -| id | fa18b41938a94bf6b35e2c152063ee21 | -| internalurl | http://controller:9696 | -| publicurl | http://controller:9696 | -| region | regionOne | -| service_id | 6369ddaf99a447f3a0d41dac5e342161 | -+-------------+----------------------------------+ - - - - - - To install the Networking components - - # apt-get install neutron-server neutron-plugin-ml2 python-neutronclient - # yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which - # zypper install openstack-neutron openstack-neutron-server - - SUSE does not use a separate ML2 plug-in package. - - - - - To install and configure the Networking components - - # apt-get install neutron-server - - Debian does not use a separate ML2 plug-in package. - - - - Select the ML2 plug-in: - - - - - - - - - Selecting the ML2 plug-in also populates the - and - options in the - /etc/neutron/neutron.conf file with the - appropriate values. 
-
-
-
-
- To configure the Networking server component
- The Networking server component configuration includes the database,
- authentication mechanism, message broker, topology change notifications,
- and plug-in.
-
- Edit the /etc/neutron/neutron.conf file
- and complete the following actions:
-
-
- In the [database] section, configure
- database access:
- [database]
-...
-connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron
- Replace NEUTRON_DBPASS with the
- password you chose for the database.
-
-
- In the [DEFAULT] section, configure
- RabbitMQ message broker access:
- [DEFAULT]
-...
-rpc_backend = rabbit
-rabbit_host = controller
-rabbit_password = RABBIT_PASS
- Replace RABBIT_PASS with the
- password you chose for the guest account in
- RabbitMQ.
-
-
- In the [DEFAULT] and
- [keystone_authtoken] sections,
- configure Identity service access:
- [DEFAULT]
-...
-auth_strategy = keystone
-
-[keystone_authtoken]
-...
-auth_uri = http://controller:5000/v2.0
-identity_uri = http://controller:35357
-admin_tenant_name = service
-admin_user = neutron
-admin_password = NEUTRON_PASS
- Replace NEUTRON_PASS with the
- password you chose for the neutron user in the
- Identity service.
-
- Comment out any auth_host,
- auth_port, and
- auth_protocol options because the
- identity_uri option replaces them.
-
-
- In the [DEFAULT] section, enable the
- Modular Layer 2 (ML2) plug-in, router service, and overlapping
- IP addresses:
- [DEFAULT]
-...
-core_plugin = ml2
-service_plugins = router
-allow_overlapping_ips = True
-
-
- In the [DEFAULT] section, configure
- Networking to notify Compute of network topology changes:
- [DEFAULT]
-...
-notify_nova_on_port_status_changes = True -notify_nova_on_port_data_changes = True -nova_url = http://controller:8774/v2 -nova_admin_auth_url = http://controller:35357/v2.0 -nova_region_name = regionOne -nova_admin_username = nova -nova_admin_tenant_id = SERVICE_TENANT_ID -nova_admin_password = NOVA_PASS - Replace SERVICE_TENANT_ID with the - service tenant identifier (id) in the Identity - service and NOVA_PASS with the password - you chose for the nova user in the Identity - service. - - To obtain the service tenant - identifier (id): - $ source admin-openrc.sh -$ keystone tenant-get service -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Service Tenant | -| enabled | True | -| id | f727b5ec2ceb4d71bad86dfc414449bf | -| name | service | -+-------------+----------------------------------+ - - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] - section: - [DEFAULT] -... -verbose = True - - - - - - To configure the Modular Layer 2 (ML2) plug-in - The ML2 plug-in uses the - Open vSwitch (OVS) - mechanism (agent) to build the virtual networking framework for - instances. However, the controller node does not need the OVS - components because it does not handle instance network traffic. - - Edit the - /etc/neutron/plugins/ml2/ml2_conf.ini - file and complete the following actions: - - - In the [ml2] section, enable the - flat and - generic routing encapsulation (GRE) - network type drivers, GRE tenant networks, and the OVS - mechanism driver: - [ml2] -... -type_drivers = flat,gre -tenant_network_types = gre -mechanism_drivers = openvswitch - - Once you configure the ML2 plug-in, be aware that disabling - a network type driver and re-enabling it later can lead to - database inconsistency. - - - - In the [ml2_type_gre] section, configure - the tunnel identifier (id) range: - [ml2_type_gre] -... 
-tunnel_id_ranges = 1:1000 - - - In the [securitygroup] section, enable - security groups, enable ipset, and - configure the OVS iptables firewall - driver: - [securitygroup] -... -enable_security_group = True -enable_ipset = True -firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - - - - - - To configure Compute to use Networking - By default, distribution packages configure Compute to use legacy - networking. You must reconfigure Compute to manage networks through - Networking. - - Edit the /etc/nova/nova.conf file and - complete the following actions: - - - In the [DEFAULT] section, configure - the APIs and drivers: - [DEFAULT] -... -network_api_class = nova.network.neutronv2.api.API -security_group_api = neutron -linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver -firewall_driver = nova.virt.firewall.NoopFirewallDriver - - By default, Compute uses an internal firewall service. - Since Networking includes a firewall service, you must - disable the Compute firewall service by using the - nova.virt.firewall.NoopFirewallDriver - firewall driver. - - - - In the [neutron] section, configure - access parameters: - [neutron] -... -url = http://controller:9696 -auth_strategy = keystone -admin_auth_url = http://controller:35357/v2.0 -admin_tenant_name = service -admin_username = neutron -admin_password = NEUTRON_PASS - Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. - - - - - - To finalize installation - - The Networking service initialization scripts expect a - symbolic link /etc/neutron/plugin.ini - pointing to the ML2 plug-in configuration file, - /etc/neutron/plugins/ml2/ml2_conf.ini. 
- If this symbolic link does not exist, create it using the - following command: - # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - - - The Networking service initialization scripts expect the - variable NEUTRON_PLUGIN_CONF in the - /etc/sysconfig/neutron file to - reference the ML2 plug-in configuration file. Edit the - /etc/sysconfig/neutron file and add the - following: - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" - - - Populate the database: - # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron - - Database population occurs later for Networking because the - script requires complete server and plug-in configuration - files. - - - - Restart the Compute services: - # systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service - On SLES: - # service openstack-nova-api restart -# service openstack-nova-scheduler restart -# service openstack-nova-conductor restart - On openSUSE: - # systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service - # service nova-api restart -# service nova-scheduler restart -# service nova-conductor restart - - - Start the Networking service and configure it to start when the - system boots: - # systemctl enable neutron-server.service -# systemctl start neutron-server.service - On SLES: - # service openstack-neutron start -# chkconfig openstack-neutron on - On openSUSE: - # systemctl enable openstack-neutron.service -# systemctl start openstack-neutron.service - - - Restart the Networking service: - # service neutron-server restart - - - - Verify operation - - Perform these commands on the controller node. 
- - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - List loaded extensions to verify successful launch of the - neutron-server process: - $ neutron ext-list -+-----------------------+-----------------------------------------------+ -| alias | name | -+-----------------------+-----------------------------------------------+ -| security-group | security-group | -| l3_agent_scheduler | L3 Agent Scheduler | -| ext-gw-mode | Neutron L3 Configurable external gateway mode | -| binding | Port Binding | -| provider | Provider Network | -| agent | agent | -| quotas | Quota management support | -| dhcp_agent_scheduler | DHCP Agent Scheduler | -| l3-ha | HA Router extension | -| multi-provider | Multi Provider Network | -| external-net | Neutron external network | -| router | Neutron L3 Router | -| allowed-address-pairs | Allowed Address Pairs | -| extraroute | Neutron Extra Route | -| extra_dhcp_opt | Neutron Extra DHCP opts | -| dvr | Distributed Virtual Router | -+-----------------------+-----------------------------------------------+ - - -
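When this verification runs from a script, the extension table can be checked mechanically instead of by eye. A minimal sketch against a canned, truncated copy of the ext-list output above (in practice you would pipe the live neutron ext-list command instead of the here-document):

```shell
# Parse the alias column out of the table (canned, truncated sample of the
# 'neutron ext-list' output shown above).
aliases=$(awk -F'|' '/\|/ && $2 !~ /alias/ {gsub(/ /,"",$2); if ($2) print $2}' <<'EOF'
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| security-group        | security-group                                |
| router                | Neutron L3 Router                             |
| external-net          | Neutron external network                      |
+-----------------------+-----------------------------------------------+
EOF
)
# A healthy neutron-server should have at least these extensions loaded.
for ext in security-group router external-net; do
    echo "$aliases" | grep -qx "$ext" && echo "$ext: loaded"
done
```

A missing alias here usually means the server started without its plug-in configuration, which is worth resolving before creating any networks.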
diff --git a/doc/training-guides/basic-install-guide/section_neutron-initial-networks.xml b/doc/training-guides/basic-install-guide/section_neutron-initial-networks.xml deleted file mode 100644 index fd48f1de..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-initial-networks.xml +++ /dev/null @@ -1,264 +0,0 @@ - -
- Create initial networks - Before launching your first instance, you must create the - necessary virtual network infrastructure to which the instance will - connect, including the - external network - and - tenant network. - See . After - creating this infrastructure, we recommend that you - verify - connectivity and resolve any issues before proceeding further. - -
- Initial networks (figure)
-
- External network - The external network typically provides Internet access for - your instances. By default, this network only allows Internet - access from instances using - Network Address Translation (NAT). You can - enable Internet access to individual instances - using a floating IP address and suitable - security group rules. The admin - tenant owns this network because it provides external network - access for multiple tenants. You must also enable sharing to allow - access by those tenants. - - Perform these commands on the controller node. - - - To create the external network - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - Create the network: - $ neutron net-create ext-net --shared --router:external True \ - --provider:physical_network external --provider:network_type flat -Created a new network: -+---------------------------+--------------------------------------+ -| Field | Value | -+---------------------------+--------------------------------------+ -| admin_state_up | True | -| id | 893aebb9-1c1e-48be-8908-6b947f3237b3 | -| name | ext-net | -| provider:network_type | flat | -| provider:physical_network | external | -| provider:segmentation_id | | -| router:external | True | -| shared | True | -| status | ACTIVE | -| subnets | | -| tenant_id | 54cd044c64d5408b83f843d63624e0d8 | -+---------------------------+--------------------------------------+ - - - Like a physical network, a virtual network requires a - subnet assigned to it. The external network - shares the same subnet and gateway associated - with the physical network connected to the external interface on the - network node. You should specify an exclusive slice of this subnet - for router and floating IP addresses to prevent - interference with other devices on the external network. 
- - To create a subnet on the external network - - Create the subnet: - $ neutron subnet-create ext-net --name ext-subnet \ - --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \ - --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR - Replace FLOATING_IP_START and - FLOATING_IP_END with the first and last - IP addresses of the range that you want to allocate for floating IP - addresses. Replace EXTERNAL_NETWORK_CIDR - with the subnet associated with the physical network. Replace - EXTERNAL_NETWORK_GATEWAY with the gateway - associated with the physical network, typically the ".1" IP address. - You should disable DHCP on this subnet because - instances do not connect directly to the external network and - floating IP addresses require manual assignment. - For example, using 203.0.113.0/24 with - floating IP address range 203.0.113.101 to - 203.0.113.200: - $ neutron subnet-create ext-net --name ext-subnet \ - --allocation-pool start=203.0.113.101,end=203.0.113.200 \ - --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24 -Created a new subnet: -+-------------------+------------------------------------------------------+ -| Field | Value | -+-------------------+------------------------------------------------------+ -| allocation_pools | {"start": "203.0.113.101", "end": "203.0.113.200"} | -| cidr | 203.0.113.0/24 | -| dns_nameservers | | -| enable_dhcp | False | -| gateway_ip | 203.0.113.1 | -| host_routes | | -| id | 9159f0dc-2b63-41cf-bd7a-289309da1391 | -| ip_version | 4 | -| ipv6_address_mode | | -| ipv6_ra_mode | | -| name | ext-subnet | -| network_id | 893aebb9-1c1e-48be-8908-6b947f3237b3 | -| tenant_id | 54cd044c64d5408b83f843d63624e0d8 | -+-------------------+------------------------------------------------------+ - - -
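To double-check the size of the allocation pool before committing to it, you can convert the boundary addresses to integers. A sketch using the example range above (ip2int is a hypothetical helper for illustration, not part of any OpenStack tool):

```shell
# Hypothetical helper: dotted-quad IPv4 address -> integer.
ip2int() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

start=203.0.113.101   # FLOATING_IP_START from the example above
end=203.0.113.200     # FLOATING_IP_END from the example above
pool_size=$(( $(ip2int "$end") - $(ip2int "$start") + 1 ))
echo "floating IP pool holds $pool_size addresses"   # 100 for this range
```

Keeping the pool well inside the physical subnet, as the example does, leaves room for the router gateway and any other devices on the external network.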
-
- Tenant network - The tenant network provides internal network access for instances. - The architecture isolates this type of network from other tenants. The - demo tenant owns this network because it only - provides network access for instances within it. - - Perform these commands on the controller node. - - - To create the tenant network - - Source the demo credentials to gain access to - user-only CLI commands: - $ source demo-openrc.sh - - - Create the network: - $ neutron net-create demo-net -Created a new network: -+-----------------+--------------------------------------+ -| Field | Value | -+-----------------+--------------------------------------+ -| admin_state_up | True | -| id | ac108952-6096-4243-adf4-bb6615b3de28 | -| name | demo-net | -| router:external | False | -| shared | False | -| status | ACTIVE | -| subnets | | -| tenant_id | cdef0071a0194d19ac6bb63802dc9bae | -+-----------------+--------------------------------------+ - - - Like the external network, your tenant network also requires - a subnet attached to it. You can specify any valid subnet because the - architecture isolates tenant networks. By default, this subnet will - use DHCP so your instances can obtain IP addresses. - - To create a subnet on the tenant network - - Create the subnet: - $ neutron subnet-create demo-net --name demo-subnet \ - --gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR - Replace TENANT_NETWORK_CIDR with the - subnet you want to associate with the tenant network and - TENANT_NETWORK_GATEWAY with the gateway - you want to associate with it, typically the ".1" IP address. 
- Example using 192.168.1.0/24: - $ neutron subnet-create demo-net --name demo-subnet \ - --gateway 192.168.1.1 192.168.1.0/24 -Created a new subnet: -+-------------------+------------------------------------------------------+ -| Field | Value | -+-------------------+------------------------------------------------------+ -| allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} | -| cidr | 192.168.1.0/24 | -| dns_nameservers | | -| enable_dhcp | True | -| gateway_ip | 192.168.1.1 | -| host_routes | | -| id | 69d38773-794a-4e49-b887-6de6734e792d | -| ip_version | 4 | -| ipv6_address_mode | | -| ipv6_ra_mode | | -| name | demo-subnet | -| network_id | ac108952-6096-4243-adf4-bb6615b3de28 | -| tenant_id | cdef0071a0194d19ac6bb63802dc9bae | -+-------------------+------------------------------------------------------+ - - - A virtual router passes network traffic between two or more virtual - networks. Each router requires one or more - interfaces and/or gateways - that provide access to specific networks. In this case, you will create - a router and attach your tenant and external networks to it. - - To create a router on the tenant network and attach the external - and tenant networks to it - - Create the router: - $ neutron router-create demo-router -Created a new router: -+-----------------------+--------------------------------------+ -| Field | Value | -+-----------------------+--------------------------------------+ -| admin_state_up | True | -| external_gateway_info | | -| id | 635660ae-a254-4feb-8993-295aa9ec6418 | -| name | demo-router | -| routes | | -| status | ACTIVE | -| tenant_id | cdef0071a0194d19ac6bb63802dc9bae | -+-----------------------+--------------------------------------+ - - - Attach the router to the demo tenant - subnet: - $ neutron router-interface-add demo-router demo-subnet -Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router. 
- - - Attach the router to the external network by setting it as - the gateway: - $ neutron router-gateway-set demo-router ext-net -Set gateway for router demo-router - - -
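The three router commands above must run in this order: the router has to exist before you can attach its tenant interface or set its external gateway. A dry-run sketch that only prints the sequence (set NEUTRON=neutron on a configured controller to execute it for real):

```shell
NEUTRON=echo   # dry run: print each command instead of executing it
plan=$(
  $NEUTRON router-create demo-router
  $NEUTRON router-interface-add demo-router demo-subnet   # tenant side
  $NEUTRON router-gateway-set demo-router ext-net         # external side
)
printf '%s\n' "$plan"
```

Reviewing the printed plan first is a cheap safeguard when you adapt the names (demo-router, demo-subnet, ext-net) to your own environment.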
-
- Verify connectivity - We recommend that you verify network connectivity and resolve any - issues before proceeding further. Following the external network - subnet example using 203.0.113.0/24, the tenant - router gateway should occupy the lowest IP address in the floating - IP address range, 203.0.113.101. If you configured - your external physical network and virtual networks correctly, you - should be able to ping this IP address from any - host on your external physical network. - - If you are building your OpenStack nodes as virtual machines, - you must configure the hypervisor to permit promiscuous mode on the - external network. - - - To verify network connectivity - - Ping the tenant router gateway: - $ ping -c 4 203.0.113.101 -PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data. -64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms -64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms -64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms -64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms - ---- 203.0.113.101 ping statistics --- -4 packets transmitted, 4 received, 0% packet loss, time 2999ms -rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms - - -
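For unattended checks, you can also extract the packet-loss figure from ping's summary line rather than reading the full output. A sketch against the canned statistics line above (pipe a live ping -c 4 203.0.113.101 in practice):

```shell
# Pull the loss percentage out of ping's statistics line (canned sample input).
loss=$(awk -F, '/packet loss/ {gsub(/[^0-9]/, "", $3); print $3}' <<'EOF'
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
EOF
)
[ "$loss" -eq 0 ] && echo "gateway reachable: ${loss}% loss"
```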
-
diff --git a/doc/training-guides/basic-install-guide/section_neutron-ml2-compute-node.xml b/doc/training-guides/basic-install-guide/section_neutron-ml2-compute-node.xml deleted file mode 100644 index 6b8cd34f..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-ml2-compute-node.xml +++ /dev/null @@ -1,377 +0,0 @@ - -
- Configure compute node - Before you install and configure OpenStack Networking, you - must enable certain kernel networking functions. - - To enable kernel networking functions - - Edit the /etc/sysctl.conf file and - add the following lines: - net.ipv4.conf.all.rp_filter=0 -net.ipv4.conf.default.rp_filter=0 - - - Implement the changes: - # sysctl -p - - - - To install the Networking components - - # apt-get install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent \ - openvswitch-datapath-dkms - # yum install openstack-neutron-ml2 openstack-neutron-openvswitch - # zypper install openstack-neutron-openvswitch-agent - - Ubuntu installations that use Linux kernel version 3.11 - or later do not require the - openvswitch-datapath-dkms - package. - - - SUSE does not use a separate ML2 plug-in package. - - - - - To configure the Networking common components - The Networking common component configuration includes the - authentication mechanism, message broker, and plug-in. - - Configure Networking to use the Identity service for - authentication: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - auth_strategy keystone -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_protocol http -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_port 35357 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_user neutron -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_password NEUTRON_PASS - Replace NEUTRON_PASS with the - password you chose for the neutron user in - the Identity service. 
- - - Configure Networking to use the Identity service for - authentication: - - - Edit the - /etc/neutron/neutron.conf file and - add the following key to the [DEFAULT] - section: - [DEFAULT] -... -auth_strategy = keystone - Add the following keys to the - [keystone_authtoken] section: - [keystone_authtoken] -... -auth_uri = http://controller:5000 -auth_host = controller -auth_protocol = http -auth_port = 35357 -admin_tenant_name = service -admin_user = neutron -admin_password = NEUTRON_PASS - Replace NEUTRON_PASS with - the password you chose for the neutron - user in the Identity service. - - - - - Configure Networking to use the message broker: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rpc_backend neutron.openstack.common.rpc.impl_kombu -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_host controller -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_userid guest -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_password RABBIT_PASS - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. - - - Configure Networking to use the message broker: - - - Edit the - /etc/neutron/neutron.conf file and - add the following keys to the [DEFAULT] - section: - Replace RABBIT_PASS with - the password you chose for the guest - account in RabbitMQ. - [DEFAULT] -... -rpc_backend = neutron.openstack.common.rpc.impl_kombu -rabbit_host = controller -rabbit_password = RABBIT_PASS - - - - - Configure Networking to use the Modular Layer 2 (ML2) - plug-in and associated services: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - core_plugin ml2 -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - service_plugins router - - To assist with troubleshooting, add verbose = - True to the [DEFAULT] section - in the /etc/neutron/neutron.conf - file. 
- - - - Configure Networking to use the Modular Layer 2 (ML2) - plug-in and associated services: - - - Edit the - /etc/neutron/neutron.conf file and - add the following keys to the [DEFAULT] - section: - [DEFAULT] -... -core_plugin = ml2 -service_plugins = router -allow_overlapping_ips = True - - To assist with troubleshooting, add verbose - = True to the [DEFAULT] - section in the - /etc/neutron/neutron.conf - file. - - - - - - - To configure the Modular Layer 2 (ML2) plug-in - The ML2 plug-in uses the Open vSwitch (OVS) mechanism - (agent) to build the virtual networking framework for - instances. - - Run the following commands: - # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ - type_drivers gre -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ - tenant_network_types gre -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ - mechanism_drivers openvswitch -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \ - tunnel_id_ranges 1:1000 -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \ - local_ip INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \ - tunnel_type gre -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \ - enable_tunneling True -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \ - firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \ - enable_security_group True - Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS - with the IP address of the instance tunnels network interface - on your compute node. This guide uses - 10.0.1.31 for the IP address of the - instance tunnels network interface on the first compute - node. - - - Edit the - /etc/neutron/plugins/ml2/ml2_conf.ini - file and add the following keys to the - [ml2] section: - [ml2] -... 
-type_drivers = gre -tenant_network_types = gre -mechanism_drivers = openvswitch - Add the following keys to the - [ml2_type_gre] section: - [ml2_type_gre] -... -tunnel_id_ranges = 1:1000 - Add the [ovs] section and the following - keys to it: - Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS - with the IP address of the instance tunnels network interface - on your compute node. - [ovs] -... -local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS -tunnel_type = gre -enable_tunneling = True - Add the [securitygroup] section and the - following keys to it: - [securitygroup] -... -firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver -enable_security_group = True - - - - To configure the Open vSwitch (OVS) service - The OVS service provides the underlying virtual networking framework - for instances. - - Start the OVS service and configure it to start when the - system boots: - # service openvswitch start -# chkconfig openvswitch on - - - Start the OVS service and configure it to start when the - system boots: - # service openvswitch-switch start -# chkconfig openvswitch-switch on - - - Restart the OVS service: - # service openvswitch-switch restart - - - Restart the OVS service: - # service openvswitch restart - - - - To configure Compute to use Networking - By default, most distributions configure Compute to use - legacy networking. You must reconfigure Compute to manage - networks through Networking. 
- - Run the following commands: - # openstack-config --set /etc/nova/nova.conf DEFAULT \ - network_api_class nova.network.neutronv2.api.API -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_url http://controller:9696 -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_auth_strategy keystone -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_tenant_name service -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_username neutron -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_password NEUTRON_PASS -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_auth_url http://controller:35357/v2.0 -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - firewall_driver nova.virt.firewall.NoopFirewallDriver -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - security_group_api neutron - Replace NEUTRON_PASS with the - password you chose for the neutron user in - the Identity service. - - By default, Compute uses an internal firewall service. - Since Networking includes a firewall service, you must - disable the Compute firewall service by using the - nova.virt.firewall.NoopFirewallDriver - firewall driver. - - - - Edit the /etc/nova/nova.conf and add - the following keys to the [DEFAULT] - section: - Replace NEUTRON_PASS with the - password you chose for the neutron user in - the Identity service. - [DEFAULT] -... 
-network_api_class = nova.network.neutronv2.api.API -neutron_url = http://controller:9696 -neutron_auth_strategy = keystone -neutron_admin_tenant_name = service -neutron_admin_username = neutron -neutron_admin_password = NEUTRON_PASS -neutron_admin_auth_url = http://controller:35357/v2.0 -linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver -firewall_driver = nova.virt.firewall.NoopFirewallDriver -security_group_api = neutron - - By default, Compute uses an internal firewall service. - Since Networking includes a firewall service, you must - disable the Compute firewall service by using the - nova.virt.firewall.NoopFirewallDriver - firewall driver. - - - - - To finalize the installation - - The Networking service initialization scripts expect a - symbolic link /etc/neutron/plugin.ini - pointing to the configuration file associated with your chosen - plug-in. Using the ML2 plug-in, for example, the symbolic link - must point to - /etc/neutron/plugins/ml2/ml2_conf.ini. - If this symbolic link does not exist, create it using the - following commands: - # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - - Due to a packaging bug, the Open vSwitch agent - initialization script explicitly looks for the Open vSwitch - plug-in configuration file rather than a symbolic link - /etc/neutron/plugin.ini pointing to the - ML2 plug-in configuration file. Run the following commands to - resolve this issue: - # cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig -# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent - - - The Networking service initialization scripts expect the - variable NEUTRON_PLUGIN_CONF in the - /etc/sysconfig/neutron file to - reference the configuration file associated with your chosen - plug-in. 
Using ML2, for example, edit the - /etc/sysconfig/neutron file and add the - following: - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" - - - Restart the Compute service: - # service openstack-nova-compute restart - # service nova-compute restart - - - Start the Open vSwitch (OVS) agent and configure it to - start when the system boots: - # service neutron-openvswitch-agent start -# chkconfig neutron-openvswitch-agent on - # service openstack-neutron-openvswitch-agent start -# chkconfig openstack-neutron-openvswitch-agent on - - - Restart the Open vSwitch (OVS) agent: - # service neutron-plugin-openvswitch-agent restart - - -
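The openstack-config commands used throughout this section edit ini files in place. As a rough illustration of what such a --set call does, here is a simplified stand-in (it only inserts a key directly under its section header; the real utility also replaces an existing key rather than duplicating it):

```shell
# Simplified stand-in for 'openstack-config --set FILE SECTION KEY VALUE'.
set_opt() {
    awk -v sec="[$2]" -v kv="$3 = $4" '
        { print }
        $0 == sec { print kv }   # insert the key right after its section header
    ' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Demonstration on a scratch file, not the live /etc/neutron/neutron.conf.
printf '[DEFAULT]\n\n[keystone_authtoken]\n' > neutron.conf.demo
set_opt neutron.conf.demo DEFAULT auth_strategy keystone
set_opt neutron.conf.demo keystone_authtoken admin_user neutron
cat neutron.conf.demo
```

Either route, editing the files directly or scripting the changes, produces the same key = value layout that the configuration listings in this section show.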
diff --git a/doc/training-guides/basic-install-guide/section_neutron-ml2-controller-node.xml b/doc/training-guides/basic-install-guide/section_neutron-ml2-controller-node.xml deleted file mode 100644 index 65de0bea..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-ml2-controller-node.xml +++ /dev/null @@ -1,452 +0,0 @@ - -
- Configure controller node - - Prerequisites - Before you configure OpenStack Networking (neutron), you must create - a database and Identity service credentials including a user and - service. - - Connect to the database as the root user, create the - neutron database, and grant the proper - access to it: - Replace NEUTRON_DBPASS with a suitable - password. - $ mysql -u root -p -mysql> CREATE DATABASE neutron; -mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ -IDENTIFIED BY 'NEUTRON_DBPASS'; -mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ -IDENTIFIED BY 'NEUTRON_DBPASS'; - - - Create Identity service credentials for Networking: - - - Create the neutron user: - Replace NEUTRON_PASS with a suitable - password and neutron@example.com with - a suitable e-mail address. - $ keystone user-create --name neutron --pass NEUTRON_PASS --email neutron@example.com - - - Link the neutron user to the - service tenant and admin - role: - $ keystone user-role-add --user neutron --tenant service --role admin - - - Create the neutron service: - $ keystone service-create --name neutron --type network --description "OpenStack Networking" - - - Create the service endpoint: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ network / {print $2}') \ - --publicurl http://controller:9696 \ - --adminurl http://controller:9696 \ - --internalurl http://controller:9696 - - - - - - To install the Networking components - - # apt-get install neutron-server neutron-plugin-ml2 - # apt-get install neutron-server - # yum install openstack-neutron openstack-neutron-ml2 python-neutronclient - # zypper install openstack-neutron openstack-neutron-server - - SUSE does not use a separate ML2 plug-in package. - - - Debian does not use a separate ML2 plug-in package. 
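The endpoint-create step above embeds an awk one-liner to pull the service id out of keystone service-list. A sketch of that extraction against canned output (the id shown is made up for illustration, not a value you will see):

```shell
# Extract the network service id from 'keystone service-list' style output
# (canned sample; the id below is a placeholder).
service_id=$(awk '/ network / {print $2}' <<'EOF'
+----------------------------------+---------+---------+----------------------+
|                id                |   name  |   type  |     description      |
+----------------------------------+---------+---------+----------------------+
| 1a2b3c4d5e6f47089a0b1c2d3e4f5061 | neutron | network | OpenStack Networking |
+----------------------------------+---------+---------+----------------------+
EOF
)
echo "network service id: $service_id"
```

The pattern matches the type column (network), so it keeps working regardless of the id value the Identity service generates.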
- - - - - To configure the Networking server component - The Networking server component configuration includes the database, - authentication mechanism, message broker, topology change notifier, - and plug-in. - - During the installation, you will also be prompted for which - Networking plug-in to use. This will automatically fill the - directive in the - /etc/neutron/neutron.conf file. - - - - - - - - If the ML2 plug-in is selected, then the - option will be filled with - neutron.plugins.ml2.plugin.Ml2Plugin, which is the - full class name for the ML2 plug-in. In Debian, you cannot (yet) use - the short names for the plug-ins. The - and options are filled with the - appropriate values by default, so it is fine to not touch them. - - - Configure Networking to use the database: - Replace NEUTRON_DBPASS with a suitable - password. - # openstack-config --set /etc/neutron/neutron.conf database connection \ - mysql://neutron:NEUTRON_DBPASS@controller/neutron - - - Configure Networking to use the database: - - - Edit the /etc/neutron/neutron.conf - file and add the following key to the - [database] section: - Replace NEUTRON_DBPASS with the - password you chose for the database. - [database] -... -connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron - - - - - Configure Networking to use the Identity service for - authentication: - Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. 
- # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - auth_strategy keystone -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_protocol http -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_port 35357 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_user neutron -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_password NEUTRON_PASS - - - Configure Networking to use the Identity service for - authentication: - - - Edit the /etc/neutron/neutron.conf - file and add the following key to the - [DEFAULT] section: - [DEFAULT] -... -auth_strategy = keystone - Add the following keys to the - [keystone_authtoken] section: - Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. - [keystone_authtoken] -... -auth_uri = http://controller:5000 -auth_host = controller -auth_protocol = http -auth_port = 35357 -admin_tenant_name = service -admin_user = neutron -admin_password = NEUTRON_PASS - - - - - Configure Networking to use the message broker: - Replace RABBIT_PASS with the password - you chose for the guest account in - RabbitMQ. 
- # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rpc_backend neutron.openstack.common.rpc.impl_kombu -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_host controller -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_userid guest -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_password RABBIT_PASS - - - Configure Networking to use the message broker: - - - Edit the /etc/neutron/neutron.conf file - and add the following keys to the [DEFAULT] - section: - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. - [DEFAULT] -... -rpc_backend = neutron.openstack.common.rpc.impl_kombu -rabbit_host = controller -rabbit_password = RABBIT_PASS - - - - - Configure Networking to notify Compute about network topology - changes: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - notify_nova_on_port_status_changes True -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - notify_nova_on_port_data_changes True -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - nova_url http://controller:8774/v2 -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - nova_admin_username nova -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }') -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - nova_admin_password NOVA_PASS -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - nova_admin_auth_url http://controller:35357/v2.0 - - - Configure Networking to notify Compute about network topology - changes: - Replace SERVICE_TENANT_ID with the - service tenant identifier (id) in the Identity - service and NOVA_PASS with the password - you chose for the nova user in the Identity - service. - - - Edit the /etc/neutron/neutron.conf file - and add the following keys to the [DEFAULT] - section: - [DEFAULT] -... 
-notify_nova_on_port_status_changes = True -notify_nova_on_port_data_changes = True -nova_url = http://controller:8774/v2 -nova_admin_username = nova -nova_admin_tenant_id = SERVICE_TENANT_ID -nova_admin_password = NOVA_PASS -nova_admin_auth_url = http://controller:35357/v2.0 - - - - To obtain the service tenant - identifier (id): - $ source admin-openrc.sh -$ keystone tenant-get service -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Service Tenant | -| enabled | True | -| id | f727b5ec2ceb4d71bad86dfc414449bf | -| name | service | -+-------------+----------------------------------+ - - - - Configure Networking to use the Modular Layer 2 (ML2) plug-in - and associated services: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - core_plugin ml2 -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - service_plugins router - - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/neutron.conf to assist with - troubleshooting. - - - - Configure Networking to use the Modular Layer 2 (ML2) plug-in - and associated services: - - - Edit the /etc/neutron/neutron.conf file - and add the following keys to the [DEFAULT] - section: - [DEFAULT] -... -core_plugin = ml2 -service_plugins = router -allow_overlapping_ips = True - - We recommend adding verbose = True to - the [DEFAULT] section in - /etc/neutron/neutron.conf to assist with - troubleshooting. - - - - - - - To configure the Modular Layer 2 (ML2) plug-in - The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to - build the virtual networking framework for instances. However, the - controller node does not need the OVS agent or service because it - does not handle instance network traffic. 
- - Run the following commands: - # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ - type_drivers gre -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ - tenant_network_types gre -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ - mechanism_drivers openvswitch -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \ - tunnel_id_ranges 1:1000 -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \ - firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \ - enable_security_group True - - - Edit the - /etc/neutron/plugins/ml2/ml2_conf.ini - file: - Add the following keys to the [ml2] - section: - [ml2] -... -type_drivers = gre -tenant_network_types = gre -mechanism_drivers = openvswitch - Add the following key to the - [ml2_type_gre] section: - [ml2_type_gre] -... -tunnel_id_ranges = 1:1000 - Add the [securitygroup] section and the - following keys to it: - [securitygroup] -... -firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver -enable_security_group = True - - - - To configure Compute to use Networking - By default, most distributions configure Compute to use legacy - networking. You must reconfigure Compute to manage networks through - Networking. - - Run the following commands: - Replace NEUTRON_PASS with the - password you chose for the neutron user - in the Identity service. 
- # openstack-config --set /etc/nova/nova.conf DEFAULT \
  network_api_class nova.network.neutronv2.api.API
-# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_url http://controller:9696
-# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_auth_strategy keystone
-# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_tenant_name service
-# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_username neutron
-# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_password NEUTRON_PASS
-# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_auth_url http://controller:35357/v2.0
-# openstack-config --set /etc/nova/nova.conf DEFAULT \
  linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
-# openstack-config --set /etc/nova/nova.conf DEFAULT \
  firewall_driver nova.virt.firewall.NoopFirewallDriver
-# openstack-config --set /etc/nova/nova.conf DEFAULT \
  security_group_api neutron
-
- By default, Compute uses an internal firewall service. Since
- Networking includes a firewall service, you must disable the
- Compute firewall service by using the
- nova.virt.firewall.NoopFirewallDriver firewall
- driver.
-
-
-
- Edit the /etc/nova/nova.conf file and add the
- following keys to the [DEFAULT] section:
- Replace NEUTRON_PASS with the
- password you chose for the neutron user
- in the Identity service.
- [DEFAULT]
-...
-network_api_class = nova.network.neutronv2.api.API
-neutron_url = http://controller:9696
-neutron_auth_strategy = keystone
-neutron_admin_tenant_name = service
-neutron_admin_username = neutron
-neutron_admin_password = NEUTRON_PASS
-neutron_admin_auth_url = http://controller:35357/v2.0
-linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
-firewall_driver = nova.virt.firewall.NoopFirewallDriver
-security_group_api = neutron
-
- By default, Compute uses an internal firewall service. 
Since - Networking includes a firewall service, you must disable the - Compute firewall service by using the - nova.virt.firewall.NoopFirewallDriver firewall - driver. - - - - - To finalize installation - - The Networking service initialization scripts expect a symbolic - link /etc/neutron/plugin.ini pointing to the - configuration file associated with your chosen plug-in. Using - ML2, for example, the symbolic link must point to - /etc/neutron/plugins/ml2/ml2_conf.ini. - If this symbolic link does not exist, create it using the - following commands: - # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - - - The Networking service initialization scripts expect the variable - NEUTRON_PLUGIN_CONF in file - /etc/sysconfig/neutron to reference the - configuration file associated with your chosen plug-in. Using - ML2, for example, edit the - /etc/sysconfig/neutron file and add the - following: - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" - - - Restart the Compute services: - # service openstack-nova-api restart -# service openstack-nova-scheduler restart -# service openstack-nova-conductor restart - # service nova-api restart -# service nova-scheduler restart -# service nova-conductor restart - - - Start the Networking service and configure it to start when the - system boots: - # service neutron-server start -# chkconfig neutron-server on - # service openstack-neutron start -# chkconfig openstack-neutron on - - - Restart the Networking service: - # service neutron-server restart - - -
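The keys set in the steps above can be spot-checked after editing. A minimal sketch, shown against a temporary sample file because it assumes no live node is available; on a real controller, point CONF at /etc/neutron/neutron.conf instead:

```shell
# Sketch: confirm the expected keys are present in a neutron.conf-style file.
# The sample file is a stand-in (assumption: no live deployment to query);
# replace CONF with /etc/neutron/neutron.conf on the controller node.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
EOF
for key in auth_strategy core_plugin service_plugins; do
    grep -q "^${key} = " "$CONF" || echo "missing: ${key}"
done
echo "check complete"
rm -f "$CONF"
```

A missing key prints a `missing:` line; a clean run prints only `check complete`.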
diff --git a/doc/training-guides/basic-install-guide/section_neutron-ml2-network-node.xml b/doc/training-guides/basic-install-guide/section_neutron-ml2-network-node.xml deleted file mode 100644 index f8fd94f7..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-ml2-network-node.xml +++ /dev/null @@ -1,518 +0,0 @@ - -
- Configure network node - Before you install and configure OpenStack Networking, you - must enable certain kernel networking functions. - - To enable kernel networking functions - - Edit /etc/sysctl.conf to contain the - following: - net.ipv4.ip_forward=1 -net.ipv4.conf.all.rp_filter=0 -net.ipv4.conf.default.rp_filter=0 - - - Implement the changes: - # sysctl -p - - - - To install the Networking components - - # apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms \ - neutron-l3-agent neutron-dhcp-agent - # yum install openstack-neutron openstack-neutron-ml2 \ - openstack-neutron-openvswitch - # zypper install openstack-neutron-openvswitch-agent openstack-neutron-l3-agent \ - openstack-neutron-dhcp-agent openstack-neutron-metadata-agent - - Ubuntu installations using Linux kernel version 3.11 or - newer do not require the - openvswitch-datapath-dkms - package. - - - SUSE does not use a separate ML2 plug-in package. - - - - - To configure the Networking common components - The Networking common component configuration includes the - authentication mechanism, message broker, and plug-in. - - Configure Networking to use the Identity service for - authentication: - Replace NEUTRON_PASS with the - password you chose for the neutron user in - the Identity service. 
- # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - auth_strategy keystone -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_protocol http -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_port 35357 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_user neutron -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_password NEUTRON_PASS - - - Configure Networking to use the Identity service for - authentication: - - - Edit the - /etc/neutron/neutron.conf file and - add the following key to the [DEFAULT] - section: - [DEFAULT] -... -auth_strategy = keystone - Add the following keys to the - [keystone_authtoken] section: - Replace NEUTRON_PASS with - the password you chose for the neutron - user in the Identity service. - [keystone_authtoken] -... -auth_uri = http://controller:5000 -auth_host = controller -auth_protocol = http -auth_port = 35357 -admin_tenant_name = service -admin_user = neutron -admin_password = NEUTRON_PASS - - - - - Configure Networking to use the message broker: - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. 
- # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rpc_backend neutron.openstack.common.rpc.impl_kombu -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_host controller -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_userid guest -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_password RABBIT_PASS - - - Configure Networking to use the message broker: - - - Edit the - /etc/neutron/neutron.conf file and - add the following keys to the [DEFAULT] - section: - Replace RABBIT_PASS with - the password you chose for the guest - account in RabbitMQ. - [DEFAULT] -... -rpc_backend = neutron.openstack.common.rpc.impl_kombu -rabbit_host = controller -rabbit_password = RABBIT_PASS - - - - - Configure Networking to use the Modular Layer 2 (ML2) - plug-in and associated services: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - core_plugin ml2 -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - service_plugins router - - To assist with troubleshooting, add verbose = - True to the [DEFAULT] section - in the /etc/neutron/neutron.conf - file. - - - - Configure Networking to use the Modular Layer 2 (ML2) - plug-in and associated services: - - - Edit the - /etc/neutron/neutron.conf file and - add the following keys to the [DEFAULT] - section: - [DEFAULT] -... -core_plugin = ml2 -service_plugins = router -allow_overlapping_ips = True - - To assist with troubleshooting, add verbose - = True to the [DEFAULT] - section in the - /etc/neutron/neutron.conf - file. - - - - - - - To configure the Layer-3 (L3) agent - The Layer-3 (L3) agent provides - routing services for instance virtual networks. 
- - Run the following commands: - # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \ - interface_driver neutron.agent.linux.interface.OVSInterfaceDriver -# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \ - use_namespaces True - - To assist with troubleshooting, add verbose = - True to the [DEFAULT] section - in the /etc/neutron/l3_agent.ini - file. - - - - Edit the /etc/neutron/l3_agent.ini - file and add the following keys to the - [DEFAULT] section: - [DEFAULT] -... -interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver -use_namespaces = True - - To assist with troubleshooting, add verbose = - True to the [DEFAULT] section - in the /etc/neutron/l3_agent.ini - file. - - - - - To configure the DHCP agent - The DHCP agent provides - DHCP services for instance virtual - networks. - - Run the following commands: - # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \ - interface_driver neutron.agent.linux.interface.OVSInterfaceDriver -# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \ - dhcp_driver neutron.agent.linux.dhcp.Dnsmasq -# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \ - use_namespaces True - - To assist with troubleshooting, add verbose = - True to the [DEFAULT] section - in the /etc/neutron/dhcp_agent.ini - file. - - - - Edit the /etc/neutron/dhcp_agent.ini - file and add the following keys to the - [DEFAULT] section: - [DEFAULT] -... -interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver -dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq -use_namespaces = True - - To assist with troubleshooting, add verbose = - True to the [DEFAULT] section - in the /etc/neutron/dhcp_agent.ini - file. - - - - - To configure the metadata agent - The metadata agent provides - configuration information such as credentials for remote access - to instances. - - Run the following commands: - Replace NEUTRON_PASS with the - password you chose for the neutron user in - the Identity service. 
Replace - METADATA_SECRET with a suitable - secret for the metadata proxy. - # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - auth_url http://controller:5000/v2.0 -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - auth_region regionOne -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - admin_tenant_name service -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - admin_user neutron -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - admin_password NEUTRON_PASS -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - nova_metadata_ip controller -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - metadata_proxy_shared_secret METADATA_SECRET - - To assist with troubleshooting, add verbose = - True to the [DEFAULT] section - in the /etc/neutron/metadata_agent.ini - file. - - - - Edit the - /etc/neutron/metadata_agent.ini file - and add the following keys to the [DEFAULT] - section: - Replace NEUTRON_PASS with the - password you chose for the neutron user in - the Identity service. Replace - METADATA_SECRET with a suitable - secret for the metadata proxy. - [DEFAULT] -... -auth_url = http://controller:5000/v2.0 -auth_region = regionOne -admin_tenant_name = service -admin_user = neutron -admin_password = NEUTRON_PASS -nova_metadata_ip = controller -metadata_proxy_shared_secret = METADATA_SECRET - - To assist with troubleshooting, add verbose = - True to the [DEFAULT] section - in the /etc/neutron/metadata_agent.ini - file. - - - - - Perform the next two steps on the - controller node. - - - - On the controller node, configure - Compute to use the metadata service: - Replace METADATA_SECRET with - the secret you chose for the metadata proxy. 
- # openstack-config --set /etc/nova/nova.conf DEFAULT \ - service_neutron_metadata_proxy true -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_metadata_proxy_shared_secret METADATA_SECRET - - - On the controller node, edit the - /etc/nova/nova.conf file and add the - following keys to the [DEFAULT] - section: - Replace METADATA_SECRET with - the secret you chose for the metadata proxy. - [DEFAULT] -... -service_neutron_metadata_proxy = true -neutron_metadata_proxy_shared_secret = METADATA_SECRET - - - On the controller node, restart the - Compute API service: - # service openstack-nova-api restart - # service nova-api restart - - - - To configure the Modular Layer 2 (ML2) plug-in - The ML2 plug-in uses the Open vSwitch (OVS) mechanism - (agent) to build virtual networking framework for - instances. - - Run the following commands: - Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS - with the IP address of the instance tunnels network interface - on your network node. This guide uses - 10.0.1.21 for the IP address of the - instance tunnels network interface on the network node. 
- # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ - type_drivers gre -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ - tenant_network_types gre -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \ - mechanism_drivers openvswitch -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \ - tunnel_id_ranges 1:1000 -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \ - local_ip INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \ - tunnel_type gre -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \ - enable_tunneling True -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \ - firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver -# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \ - enable_security_group True - - - Edit the - /etc/neutron/plugins/ml2/ml2_conf.ini - file. - Add the following keys to the [ml2] - section: - [ml2] -... -type_drivers = gre -tenant_network_types = gre -mechanism_drivers = openvswitch - Add the following keys to the - [ml2_type_gre] section: - [ml2_type_gre] -... -tunnel_id_ranges = 1:1000 - Add the [ovs] section and the following - keys to it: - Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS - with the IP address of the instance tunnels network interface - on your network node. - [ovs] -... -local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS -tunnel_type = gre -enable_tunneling = True - Add the [securitygroup] section and the - following keys to it: - [securitygroup] -... -firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver -enable_security_group = True - - - - To configure the Open vSwitch (OVS) service - The OVS service provides the underlying virtual networking - framework for instances. 
The integration bridge - br-int handles internal instance network - traffic within OVS. The external bridge br-ex - handles external instance network traffic within OVS. The - external bridge requires a port on the physical external network - interface to provide instances with external network access. In - essence, this port bridges the virtual and physical external - networks in your environment. - - Start the OVS service and configure it to start when the - system boots: - # service openvswitch start -# chkconfig openvswitch on - - - Start the OVS service and configure it to start when the - system boots: - # service openvswitch-switch start -# chkconfig openvswitch-switch on - - - Restart the OVS service: - # service openvswitch-switch restart - - - Restart the OVS service: - # service openvswitch restart - - - Add the external bridge: - # ovs-vsctl add-br br-ex - - - Add a port to the external bridge that connects to the - physical external network interface: - Replace INTERFACE_NAME with the - actual interface name. For example, eth2 - or ens256. - # ovs-vsctl add-port br-ex INTERFACE_NAME - - Depending on your network interface driver, you may need - to disable Generic Receive Offload - (GRO) to achieve suitable throughput between - your instances and the external network. - To temporarily disable GRO on the external network - interface while testing your environment: - # ethtool -K INTERFACE_NAME gro off - - - - - To finalize the installation - - The Networking service initialization scripts expect a - symbolic link /etc/neutron/plugin.ini - pointing to the configuration file associated with your chosen - plug-in. Using the ML2 plug-in, for example, the symbolic link - must point to - /etc/neutron/plugins/ml2/ml2_conf.ini. 
- If this symbolic link does not exist, create it using the - following commands: - # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - - Due to a packaging bug, the Open vSwitch agent - initialization script explicitly looks for the Open vSwitch - plug-in configuration file rather than a symbolic link - /etc/neutron/plugin.ini pointing to the - ML2 plug-in configuration file. Run the following commands to - resolve this issue: - # cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig -# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent - - - The Networking service initialization scripts expect the - variable NEUTRON_PLUGIN_CONF in the - /etc/sysconfig/neutron file to - reference the configuration file associated with your chosen - plug-in. Using ML2, for example, edit the - /etc/sysconfig/neutron file and add the - following: - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" - - - Start the Networking services and configure them to start - when the system boots: - # service neutron-openvswitch-agent start -# service neutron-l3-agent start -# service neutron-dhcp-agent start -# service neutron-metadata-agent start -# chkconfig neutron-openvswitch-agent on -# chkconfig neutron-l3-agent on -# chkconfig neutron-dhcp-agent on -# chkconfig neutron-metadata-agent on -# chkconfig neutron-ovs-cleanup on - # service openstack-neutron-openvswitch-agent start -# service openstack-neutron-l3-agent start -# service openstack-neutron-dhcp-agent start -# service openstack-neutron-metadata-agent start -# chkconfig openstack-neutron-openvswitch-agent on -# chkconfig openstack-neutron-l3-agent on -# chkconfig openstack-neutron-dhcp-agent on -# chkconfig openstack-neutron-metadata-agent on -# chkconfig openstack-neutron-ovs-cleanup on - - - Restart the Networking services: - # service neutron-plugin-openvswitch-agent restart -# service neutron-l3-agent restart -# service 
neutron-dhcp-agent restart -# service neutron-metadata-agent restart - - -
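The symbolic-link step above is easy to get wrong, so it is worth verifying that plugin.ini actually resolves. A sketch using a temporary directory, since it assumes you may not want to touch /etc/neutron while testing; on a real network node, run readlink against /etc/neutron/plugin.ini:

```shell
# Sketch: verify that plugin.ini is a symbolic link resolving to the ML2
# configuration file. A temporary directory stands in for /etc/neutron
# (assumption: demonstration only, not run on a configured node).
DEMO=$(mktemp -d)
mkdir -p "$DEMO/plugins/ml2"
touch "$DEMO/plugins/ml2/ml2_conf.ini"
ln -s plugins/ml2/ml2_conf.ini "$DEMO/plugin.ini"
readlink "$DEMO/plugin.ini"                       # prints: plugins/ml2/ml2_conf.ini
[ -e "$DEMO/plugin.ini" ] && echo "plugin.ini resolves"
rm -rf "$DEMO"
```

The relative link target mirrors the `ln -s plugins/ml2/ml2_conf.ini` form used above; `[ -e ]` follows the link, so it fails if the target file is absent.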
diff --git a/doc/training-guides/basic-install-guide/section_neutron-network-node.xml b/doc/training-guides/basic-install-guide/section_neutron-network-node.xml deleted file mode 100644 index 1fd4d000..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-network-node.xml +++ /dev/null @@ -1,550 +0,0 @@ - -
- Install and configure network node - The network node primarily handles internal and external routing - and DHCP services for virtual networks. - - To configure prerequisites - Before you install and configure OpenStack Networking, you - must configure certain kernel networking parameters. - - Edit the /etc/sysctl.conf file to - contain the following parameters: - net.ipv4.ip_forward=1 -net.ipv4.conf.all.rp_filter=0 -net.ipv4.conf.default.rp_filter=0 - - - Implement the changes: - # sysctl -p - - - - To install the Networking components - - # apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \ - neutron-l3-agent neutron-dhcp-agent - # yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch - # zypper install --no-recommends openstack-neutron-openvswitch-agent openstack-neutron-l3-agent \ - openstack-neutron-dhcp-agent openstack-neutron-metadata-agent ipset - - SUSE does not use a separate ML2 plug-in package. - - - - - To install and configure the Networking components - - # apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms \ - neutron-l3-agent neutron-dhcp-agent - - Debian does not use a separate ML2 plug-in package. - - - - Select the ML2 plug-in: - - - - - - - - - Selecting the ML2 plug-in also populates the - and - options in the - /etc/neutron/neutron.conf file with the - appropriate values. - - - - - To configure the Networking common components - The Networking common component configuration includes the - authentication mechanism, message broker, and plug-in. - - Edit the /etc/neutron/neutron.conf file - and complete the following actions: - - - In the [database] section, comment out - any connection options because network nodes - do not directly access the database. - - - In the [DEFAULT] section, configure - RabbitMQ message broker access: - [DEFAULT] -... 
-rpc_backend = rabbit
-rabbit_host = controller
-rabbit_password = RABBIT_PASS
- Replace RABBIT_PASS with the
- password you chose for the guest account in
- RabbitMQ.
-
-
- In the [DEFAULT] and
- [keystone_authtoken] sections,
- configure Identity service access:
- [DEFAULT]
-...
-auth_strategy = keystone
-
-[keystone_authtoken]
-...
-auth_uri = http://controller:5000/v2.0
-identity_uri = http://controller:35357
-admin_tenant_name = service
-admin_user = neutron
-admin_password = NEUTRON_PASS
- Replace NEUTRON_PASS with the
- password you chose for the neutron user in the
- Identity service.
-
- Comment out any auth_host,
- auth_port, and
- auth_protocol options because the
- identity_uri option replaces them.
-
-
-
- In the [DEFAULT] section, enable the
- Modular Layer 2 (ML2) plug-in, router service, and overlapping
- IP addresses:
- [DEFAULT]
-...
-core_plugin = ml2
-service_plugins = router
-allow_overlapping_ips = True
-
-
- (Optional) To assist with troubleshooting,
- enable verbose logging in the [DEFAULT]
- section:
- [DEFAULT]
-...
-verbose = True
-
-
-
-
-
- To configure the Modular Layer 2 (ML2) plug-in
- The ML2 plug-in uses the
- Open vSwitch (OVS)
- mechanism (agent) to build the virtual networking framework for
- instances.
-
- Edit the
- /etc/neutron/plugins/ml2/ml2_conf.ini
- file and complete the following actions:
-
-
- In the [ml2] section, enable the
- flat and
- generic routing encapsulation (GRE)
- network type drivers, GRE tenant networks, and the OVS
- mechanism driver:
- [ml2]
-...
-type_drivers = flat,gre
-tenant_network_types = gre
-mechanism_drivers = openvswitch
-
-
- In the [ml2_type_flat] section, configure
- the external network:
- [ml2_type_flat]
-...
-flat_networks = external
-
-
- In the [ml2_type_gre] section, configure
- the tunnel identifier (id) range:
- [ml2_type_gre]
-... 
-tunnel_id_ranges = 1:1000 - - - In the [securitygroup] section, enable - security groups, enable ipset, and - configure the OVS iptables firewall - driver: - [securitygroup] -... -enable_security_group = True -enable_ipset = True -firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - - - In the [ovs] section, configure the - Open vSwitch (OVS) agent: - [ovs] -... -local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS -tunnel_type = gre -enable_tunneling = True -bridge_mappings = external:br-ex - Replace - INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS - with the IP address of the instance tunnels network interface - on your network node. - - - - - - To configure the Layer-3 (L3) agent - The Layer-3 (L3) agent provides - routing services for virtual networks. - - Edit the /etc/neutron/l3_agent.ini file - and complete the following actions: - - - In the [DEFAULT] section, configure - the driver, enable - network - namespaces, and configure the external - network bridge: - [DEFAULT] -... -interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver -use_namespaces = True -external_network_bridge = br-ex - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] - section: - [DEFAULT] -... -verbose = True - - - - - - To configure the DHCP agent - The DHCP agent provides DHCP - services for virtual networks. - - Edit the /etc/neutron/dhcp_agent.ini file - and complete the following actions: - - - In the [DEFAULT] section, configure - the drivers and enable namespaces: - [DEFAULT] -... -interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver -dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq -use_namespaces = True - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] - section: - [DEFAULT] -... 
-verbose = True - - - - - (Optional) - Tunneling protocols such as GRE include additional packet - headers that increase overhead and decrease space available for the - payload or user data. Without knowledge of the virtual network - infrastructure, instances attempt to send packets using the default - Ethernet maximum transmission unit (MTU) of - 1500 bytes. Internet protocol (IP) networks - contain the path MTU discovery (PMTUD) - mechanism to detect end-to-end MTU and adjust packet size - accordingly. However, some operating systems and networks block or - otherwise lack support for PMTUD causing performance degradation - or connectivity failure. - Ideally, you can prevent these problems by enabling - jumbo frames on the - physical network that contains your tenant virtual networks. - Jumbo frames support MTUs up to approximately 9000 bytes which - negates the impact of GRE overhead on virtual networks. However, - many network devices lack support for jumbo frames and OpenStack - administrators often lack control over network infrastructure. - Given the latter complications, you can also prevent MTU problems - by reducing the instance MTU to account for GRE overhead. - Determining the proper MTU value often takes experimentation, - but 1454 bytes works in most environments. You can configure the - DHCP server that assigns IP addresses to your instances to also - adjust the MTU. - - Some cloud images ignore the DHCP MTU option in which case - you should configure it using metadata, script, or other suitable - method. - - - - Edit the /etc/neutron/dhcp_agent.ini - file and complete the following action: - - - In the [DEFAULT] section, enable the - dnsmasq configuration file: - [DEFAULT] -... 
-dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf - - - - - Create and edit the - /etc/neutron/dnsmasq-neutron.conf file and - complete the following action: - - - Enable the DHCP MTU option (26) and configure it to - 1454 bytes: - dhcp-option-force=26,1454 - - - - - Kill any existing - dnsmasq processes: - # pkill dnsmasq - - - - - - To configure the metadata agent - The metadata agent - provides configuration information such as credentials to - instances. - - Edit the /etc/neutron/metadata_agent.ini - file and complete the following actions: - - - In the [DEFAULT] section, configure - access parameters: - [DEFAULT] -... -auth_url = http://controller:5000/v2.0 -auth_region = regionOne -admin_tenant_name = service -admin_user = neutron -admin_password = NEUTRON_PASS - Replace NEUTRON_PASS with the - password you chose for the neutron user in - the Identity service. - - - In the [DEFAULT] section, configure the - metadata host: - [DEFAULT] -... -nova_metadata_ip = controller - - - In the [DEFAULT] section, configure the - metadata proxy shared secret: - [DEFAULT] -... -metadata_proxy_shared_secret = METADATA_SECRET - Replace METADATA_SECRET with a - suitable secret for the metadata proxy. - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] - section: - [DEFAULT] -... -verbose = True - - - - - On the controller node, edit the - /etc/nova/nova.conf file and complete the - following action: - - - In the [neutron] section, enable the - metadata proxy and configure the secret: - [neutron] -... -service_metadata_proxy = True -metadata_proxy_shared_secret = METADATA_SECRET - Replace METADATA_SECRET with - the secret you chose for the metadata proxy. 
- - - - - On the controller node, restart the - Compute API service: - # systemctl restart openstack-nova-api.service - On SLES: - # service openstack-nova-api restart - On openSUSE: - # systemctl restart openstack-nova-api.service - # service nova-api restart - - - - To configure the Open vSwitch (OVS) service - The OVS service provides the underlying virtual networking - framework for instances. The integration bridge - br-int handles internal instance network - traffic within OVS. The external bridge br-ex - handles external instance network traffic within OVS. The - external bridge requires a port on the physical external network - interface to provide instances with external network access. In - essence, this port connects the virtual and physical external - networks in your environment. - - Start the OVS service and configure it to start when the - system boots: - # systemctl enable openvswitch.service -# systemctl start openvswitch.service - On SLES: - # service openvswitch-switch start -# chkconfig openvswitch-switch on - On openSUSE: - # systemctl enable openvswitch.service -# systemctl start openvswitch.service - - - Restart the OVS service: - # service openvswitch-switch restart - - - Add the external bridge: - # ovs-vsctl add-br br-ex - - - Add a port to the external bridge that connects to the - physical external network interface: - Replace INTERFACE_NAME with the - actual interface name. For example, eth2 - or ens256. - # ovs-vsctl add-port br-ex INTERFACE_NAME - - Depending on your network interface driver, you may need - to disable generic receive offload - (GRO) to achieve suitable throughput between - your instances and the external network. 
- To temporarily disable GRO on the external network - interface while testing your environment: - # ethtool -K INTERFACE_NAME gro off - - - - - To finalize the installation - - The Networking service initialization scripts expect a - symbolic link /etc/neutron/plugin.ini - pointing to the ML2 plug-in configuration file, - /etc/neutron/plugins/ml2/ml2_conf.ini. - If this symbolic link does not exist, create it using the - following command: - # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - Due to a packaging bug, the Open vSwitch agent initialization - script explicitly looks for the Open vSwitch plug-in configuration - file rather than a symbolic link - /etc/neutron/plugin.ini pointing to the ML2 - plug-in configuration file. Run the following commands to resolve this - issue: - # cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \ - /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig -# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \ - /usr/lib/systemd/system/neutron-openvswitch-agent.service - - - The Networking service initialization scripts expect the - variable NEUTRON_PLUGIN_CONF in the - /etc/sysconfig/neutron file to - reference the ML2 plug-in configuration file. Edit the - /etc/sysconfig/neutron file and add the - following: - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" - - - Start the Networking services and configure them to start - when the system boots: - # systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \ - neutron-dhcp-agent.service neutron-metadata-agent.service \ - neutron-ovs-cleanup.service -# systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \ - neutron-dhcp-agent.service neutron-metadata-agent.service - - Do not explicitly start the - neutron-ovs-cleanup - service. 
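The symlink step can be rehearsed safely before touching /etc/neutron; this sketch mirrors the guide's ln -s command against a scratch tree (the mktemp root stands in for /etc/neutron):

```shell
# Sketch: create plugin.ini only if it is missing, mirroring the guide's command.
# Runs against a throwaway temp directory, not the real /etc/neutron tree.
root=$(mktemp -d)
mkdir -p "$root/plugins/ml2"
touch "$root/plugins/ml2/ml2_conf.ini"
[ -e "$root/plugin.ini" ] || ln -s "$root/plugins/ml2/ml2_conf.ini" "$root/plugin.ini"
readlink "$root/plugin.ini"
```

The `[ -e ... ] ||` guard makes the step idempotent, so re-running the finalization procedure does not fail on an existing link.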
- - On SLES: - # service openstack-neutron-openvswitch-agent start -# service openstack-neutron-l3-agent start -# service openstack-neutron-dhcp-agent start -# service openstack-neutron-metadata-agent start -# chkconfig openstack-neutron-openvswitch-agent on -# chkconfig openstack-neutron-l3-agent on -# chkconfig openstack-neutron-dhcp-agent on -# chkconfig openstack-neutron-metadata-agent on -# chkconfig openstack-neutron-ovs-cleanup on - On openSUSE: - # systemctl enable openstack-neutron-openvswitch-agent.service openstack-neutron-l3-agent.service \ - openstack-neutron-dhcp-agent.service openstack-neutron-metadata-agent.service \ - openstack-neutron-ovs-cleanup.service -# systemctl start openstack-neutron-openvswitch-agent.service openstack-neutron-l3-agent.service \ - openstack-neutron-dhcp-agent.service openstack-neutron-metadata-agent.service - - Do not explicitly start the - openstack-neutron-ovs-cleanup - service. - - - - Restart the Networking services: - # service neutron-plugin-openvswitch-agent restart -# service neutron-l3-agent restart -# service neutron-dhcp-agent restart -# service neutron-metadata-agent restart - - - - Verify operation - - Perform these commands on the controller node. 
- - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - List agents to verify successful launch of the - neutron agents: - $ neutron agent-list -+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ -| id | agent_type | host | alive | admin_state_up | binary | -+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ -| 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent | network | :-) | True | neutron-metadata-agent | -| 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent | -| 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent | network | :-) | True | neutron-l3-agent | -| 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent | network | :-) | True | neutron-dhcp-agent | -+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ - - -
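Because the agent-list output is a plain ASCII table, a scripted health check is straightforward. This sketch counts alive agents in a trimmed copy of the sample output above:

```shell
# Sketch: parse `neutron agent-list` style output with awk. The rows below are
# a trimmed copy of the sample table in the guide, not live data.
rows='| 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent     | network | :-) | True | neutron-metadata-agent |
| 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent           | network | :-) | True | neutron-l3-agent |
| 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent         | network | :-) | True | neutron-dhcp-agent |'
# Field 5 (with | as separator) is the alive column; ":-)" means the agent is up.
alive=$(printf '%s\n' "$rows" | awk -F'|' '$5 ~ /:-\)/ {n++} END {print n}')
echo "$alive"
```

A count lower than the number of agents you started is the first hint that one of them failed to launch or cannot reach the message broker.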
diff --git a/doc/training-guides/basic-install-guide/section_neutron-ovs-compute-node.xml b/doc/training-guides/basic-install-guide/section_neutron-ovs-compute-node.xml deleted file mode 100644 index 5666ba6b..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-ovs-compute-node.xml +++ /dev/null @@ -1,326 +0,0 @@ - -
- Configure compute node - - This section details set up for any node that runs the - nova-compute component but does not run - the full network stack. - - - By default, the system-config-firewall automated - firewall configuration tool is in place on RHEL. This graphical interface - (and a curses-style interface with -tui on the end of - the name) enables you to configure IP tables as a basic firewall. You - should disable it when you work with OpenStack Networking unless you are - familiar with the underlying network technologies, as, by default, it - blocks various types of network traffic that are important to neutron - services. To disable it, launch the program and clear the - Enabled check box. - After you successfully set up OpenStack Networking with Neutron, you - can re-enable and configure the tool. However, during OpenStack - Networking setup, disable the tool to make it easier to debug network - issues. - - - Prerequisites - - Disable packet destination filtering (route - verification) to let the networking services route traffic - to the VMs. Edit the /etc/sysctl.conf - file and run the following command to activate - changes: - net.ipv4.conf.all.rp_filter=0 -net.ipv4.conf.default.rp_filter=0 - # sysctl -p - - - - Install Open vSwitch plug-in - OpenStack Networking supports a variety of plug-ins. For - simplicity, we chose to cover the most common plug-in, Open - vSwitch, and configure it to use basic GRE tunnels for tenant - network traffic. 
- - Install the Open vSwitch plug-in and its - dependencies: - # apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms - # yum install openstack-neutron-openvswitch - # zypper install openstack-neutron-openvswitch-agent - - - Restart Open vSwitch: - # service openvswitch-switch restart - - - Start Open vSwitch and configure it to start when - the system boots: - # service openvswitch start -# chkconfig openvswitch on - # service openvswitch-switch start -# chkconfig openvswitch-switch on - - - You must set some common configuration options. You - must configure Networking core to use - OVS. Edit the - /etc/neutron/neutron.conf - file: - core_plugin = openvswitch - core_plugin = openvswitch - - - You must configure a firewall as well. You should - use the same firewall plug-in that you chose to use when - you set up the network node. To do this, edit - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - file and set the firewall_driver - value under the securitygroup to the - same value used on the network node. For instance, if - you chose to use the Hybrid OVS-IPTables plug-in, your - configuration looks like this: - [securitygroup] -# Firewall driver for realizing neutron security group function. -firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - - You must use at least the No-Op firewall. - Otherwise, Horizon and other OpenStack services cannot - get and set required VM boot options. - - - - Configure the OVS plug-in to start - on boot. - # chkconfig neutron-openvswitch-agent on - # chkconfig openstack-neutron-openvswitch-agent on - - - Tell the OVS plug-in to use GRE - tunneling with a br-int integration - bridge, a br-tun tunneling bridge, - and a local IP for the tunnel of - DATA_INTERFACE's IP Edit - the - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - file: - [ovs] -... 
-tenant_network_type = gre -tunnel_id_ranges = 1:1000 -enable_tunneling = True -integration_bridge = br-int -tunnel_bridge = br-tun -local_ip = DATA_INTERFACE_IP - - - - Configure common components - - Configure Networking to use keystone for authentication: - - - Set the auth_strategy - configuration key to keystone in the - [DEFAULT] section of the file: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone - - - Set the neutron - configuration for - keystone - authentication: - # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_protocol http -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_port 35357 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_user neutron -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_password NEUTRON_PASS - - - - - To configure neutron - to use keystone - for authentication, edit the - /etc/neutron/neutron.conf file. - - - Set the auth_strategy - configuration key to keystone in the - [DEFAULT] section of the file: - [DEFAULT] -... -auth_strategy = keystone - - - Add these lines to the - [keystone_authtoken] section of the - file: - [keystone_authtoken] -... 
-auth_uri = http://controller:5000 -auth_host = controller -auth_protocol = http -auth_port = 35357 -admin_tenant_name = service -admin_user = neutron -admin_password = NEUTRON_PASS - - - - - Configure access to the RabbitMQ service: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rpc_backend neutron.openstack.common.rpc.impl_kombu -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_host controller -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_userid guest -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_password RABBIT_PASS - - - Configure the RabbitMQ access. - Edit the /etc/neutron/neutron.conf file - to modify the following parameters in the - [DEFAULT] section. - rabbit_host = controller -rabbit_userid = guest -rabbit_password = RABBIT_PASS - - - - Configure Compute services for Networking - - Configure OpenStack Compute to use OpenStack Networking - services. Configure the /etc/nova/nova.conf - file as per instructions below: - # openstack-config --set /etc/nova/nova.conf DEFAULT \ - network_api_class nova.network.neutronv2.api.API -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_url http://controller:9696 -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_auth_strategy keystone -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_tenant_name service -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_username neutron -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_password NEUTRON_PASS -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_auth_url http://controller:35357/v2.0 -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - firewall_driver nova.virt.firewall.NoopFirewallDriver -# openstack-config --set /etc/nova/nova.conf 
DEFAULT \ - security_group_api neutron - Configure OpenStack Compute to use OpenStack Networking - services. Edit the /etc/nova/nova.conf - file: - network_api_class=nova.network.neutronv2.api.API -neutron_url=http://controller:9696 -neutron_auth_strategy=keystone -neutron_admin_tenant_name=service -neutron_admin_username=neutron -neutron_admin_password=NEUTRON_PASS -neutron_admin_auth_url=http://controller:35357/v2.0 -linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver -firewall_driver=nova.virt.firewall.NoopFirewallDriver -security_group_api=neutron - - - - No matter which firewall driver you chose when you - configured the network and compute nodes, you must - edit the /etc/nova/nova.conf file - to set the firewall driver to - nova.virt.firewall.NoopFirewallDriver. - Because OpenStack Networking handles the firewall, - this statement instructs Compute to not use a - firewall. - - - If you want Networking to handle the firewall, - edit the - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - file to set the firewall_driver option to - the firewall for the plug-in. For example, with - OVS, edit the file as - follows: - [securitygroup] -# Firewall driver for realizing neutron security group function. -firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - # openstack-config --set \ - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini securitygroup firewall_driver \ - neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - - - If you do not want to use a firewall in Compute or - Networking, edit both configuration files and set - firewall_driver=nova.virt.firewall.NoopFirewallDriver. - Also, edit the - /etc/nova/nova.conf file and - comment out or remove the - security_group_api=neutron - statement. - Otherwise, when you issue nova - list commands, the ERROR: The - server has either erred or is incapable of - performing the requested operation. (HTTP - 500) error might be returned. 
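A quick way to confirm the edits landed is to read a key back out of the file; this sketch writes a shortened copy of the fragment above to a scratch file (placeholder values kept, as in the guide) and extracts one key with awk, roughly what an openstack-config --get lookup does:

```shell
# Sketch: verify the nova.conf edits by reading a key back. Uses a scratch
# file rather than the real /etc/nova/nova.conf; values are the guide's
# placeholders, shortened to the keys being checked.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
EOF
val=$(awk -F' = ' '$1 == "security_group_api" {print $2}' "$conf")
echo "$val"
```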
- - - - - - - Finalize installation - - The neutron-server - initialization script expects a symbolic link - /etc/neutron/plugin.ini pointing to the - configuration file associated with your chosen plug-in. Using - Open vSwitch, for example, the symbolic link must point to - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. - If this symbolic link does not exist, create it using the - following commands: - # cd /etc/neutron -# ln -s plugins/openvswitch/ovs_neutron_plugin.ini plugin.ini - - - The openstack-neutron - initialization script expects the variable - NEUTRON_PLUGIN_CONF in file - /etc/sysconfig/neutron to reference the - configuration file associated with your chosen plug-in. Using - Open vSwitch, for example, edit the - /etc/sysconfig/neutron file and add the - following: - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini" - - - Restart Networking services. - # service neutron-plugin-openvswitch-agent restart - # service neutron-openvswitch-agent restart - # service openstack-neutron-openvswitch-agent restart - - - Restart the Compute service. - # service nova-compute restart - # service openstack-nova-compute restart - # service openstack-nova-compute restart - - -
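The rp_filter fragment from the prerequisites of this section can be checked the same way against a scratch copy before reloading the real /etc/sysctl.conf:

```shell
# Sketch: confirm the rp_filter lines parse to the expected values before
# running `sysctl -p` for real. Scratch file only; fragment copied from the guide.
f=$(mktemp)
printf 'net.ipv4.conf.all.rp_filter=0\nnet.ipv4.conf.default.rp_filter=0\n' > "$f"
val=$(awk -F= '$1 == "net.ipv4.conf.all.rp_filter" {print $2}' "$f")
echo "$val"
```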
diff --git a/doc/training-guides/basic-install-guide/section_neutron-ovs-controller-node.xml b/doc/training-guides/basic-install-guide/section_neutron-ovs-controller-node.xml deleted file mode 100644 index 6455a45d..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-ovs-controller-node.xml +++ /dev/null @@ -1,332 +0,0 @@ - -
- Configure controller node - - By default, the system-config-firewall - automated firewall configuration tool is in place on RHEL. - This graphical interface (and a curses-style interface with - -tui on the end of the name) enables you - to configure IP tables as a basic firewall. You should disable - it when you work with Neutron unless you are familiar with the - underlying network technologies, as, by default, it blocks - various types of network traffic that are important to - Neutron. To disable it, simply launch the program and clear - the Enabled check box. - After you successfully set up OpenStack with Neutron, you - can re-enable and configure the tool. However, during Neutron - set up, disable the tool to make it easier to debug network - issues. - - - Prerequisites - Before you - configure individual nodes for Networking, you must create the - required OpenStack components: user, service, database, and one or - more endpoints. After you complete these steps on the controller - node, follow the instructions in this guide to set up OpenStack - Networking nodes. - - Connect to the MySQL database as the root user, create the - neutron database, and grant the proper - access to it: - $ mysql -u root -p -mysql> CREATE DATABASE neutron; -mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ -IDENTIFIED BY 'NEUTRON_DBPASS'; -mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ -IDENTIFIED BY 'NEUTRON_DBPASS'; - - - Create the required user, service, and endpoint so that - Networking can interface with the Identity Service. 
- Create a neutron user: - $ keystone user-create --name=neutron --pass=NEUTRON_PASS --email=neutron@example.com - Add the user role to the neutron user: - $ keystone user-role-add --user=neutron --tenant=service --role=admin - Create the neutron service: - $ keystone service-create --name=neutron --type=network \ - --description="OpenStack Networking" - Create a Networking endpoint: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ network / {print $2}') \ - --publicurl http://controller:9696 \ - --adminurl http://controller:9696 \ - --internalurl http://controller:9696 - - - - Install and configure server component - - Install the server component of Networking and any dependencies. - # apt-get install neutron-server - # yum install openstack-neutron python-neutron python-neutronclient - # zypper install openstack-neutron python-neutron python-neutronclient - - - Configure Networking to connect to the database: - # openstack-config --set /etc/neutron/neutron.conf database connection \ - mysql://neutron:NEUTRON_DBPASS@controller/neutron - - - Configure Networking to use your MySQL database. Edit the - /etc/neutron/neutron.conf file and add the - following key under the [database] section. - Replace NEUTRON_DBPASS with the password - you chose for the Neutron database. - [database] -... 
-connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron - - - Configure Networking to use - keystone as the Identity - Service for authentication: - - - Set the auth_strategy - configuration key to keystone in the - DEFAULT section of the file: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone - - - Set the neutron configuration for - keystone authentication: - # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_protocol http -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_port 35357 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_user neutron -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_password NEUTRON_PASS - - - - - Configure Networking to use - keystone as the Identity - Service for authentication. - - - Edit the /etc/neutron/neutron.conf - file and add the following key under the - [DEFAULT] section. - [DEFAULT] -... -auth_strategy = keystone - Add the following keys under the - [keystone_authtoken] section. Replace - NEUTRON_PASS with the password you - chose for the Neutron user in Keystone. - [keystone_authtoken] -... 
-auth_uri = http://controller:5000 -auth_host = controller -auth_protocol = http -auth_port = 35357 -admin_tenant_name = service -admin_user = neutron -admin_password = NEUTRON_PASS - - - - - Configure access to the RabbitMQ - service: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rpc_backend neutron.openstack.common.rpc.impl_kombu -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_host controller -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_userid guest -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_password RABBIT_PASS - - - Configure Networking to use your message broker. Edit the - /etc/neutron/neutron.conf file and add - the following keys under the [DEFAULT] - section. - Replace RABBIT_PASS with the - password you chose for RabbitMQ. - [DEFAULT] -... -rpc_backend = neutron.openstack.common.rpc.impl_kombu -rabbit_host = controller -rabbit_password = RABBIT_PASS - - - - Install and configure Open vSwitch (OVS) plug-in - OpenStack Networking supports a variety of plug-ins. For - simplicity, we chose to cover the most common plug-in, Open - vSwitch, and configure it to use basic GRE tunnels for tenant - network traffic. - - Install the Open vSwitch plug-in: - # apt-get install neutron-plugin-openvswitch - # yum install openstack-neutron-openvswitch - # zypper install openstack-neutron-openvswitch-agent - - - You must set some common configuration options no - matter which networking technology you choose to use - with Open vSwitch. You must configure Networking core to - use OVS. Edit the - /etc/neutron/neutron.conf - file: - core_plugin = openvswitch - - The dedicated controller node does not need to run - Open vSwitch or the Open vSwitch agent. - - - - Configure the OVS plug-in to use GRE - tunneling. 
Edit the - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - file: - [ovs] -tenant_network_type = gre -tunnel_id_ranges = 1:1000 -enable_tunneling = True - - - - Configure Compute services for Networking - - Configure Compute to use - OpenStack Networking services. Configure the - /etc/nova/nova.conf file per the instructions - below: - # openstack-config --set /etc/nova/nova.conf DEFAULT \ - network_api_class nova.network.neutronv2.api.API -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_url http://controller:9696 -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_auth_strategy keystone -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_tenant_name service -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_username neutron -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_password NEUTRON_PASS -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_admin_auth_url http://controller:35357/v2.0 -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - firewall_driver nova.virt.firewall.NoopFirewallDriver -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - security_group_api neutron - Configure Compute to use OpenStack Networking - services. 
Edit the /etc/nova/nova.conf - file: - network_api_class=nova.network.neutronv2.api.API -neutron_url=http://controller:9696 -neutron_auth_strategy=keystone -neutron_admin_tenant_name=service -neutron_admin_username=neutron -neutron_admin_password=NEUTRON_PASS -neutron_admin_auth_url=http://controller:35357/v2.0 -linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver -firewall_driver=nova.virt.firewall.NoopFirewallDriver -security_group_api=neutron - - - - Regardless of which firewall driver you chose when you - configured the network and compute nodes, set this driver - as the No-Op firewall. This firewall is a - nova firewall, - and because neutron - handles the Firewall, you must tell - nova not to use one. - When Networking handles the firewall, the option - firewall_driver should be set according to - the specified plug-in. For example with - OVS, edit the - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - file: - [securitygroup] -# Firewall driver for realizing neutron security group function. -firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - # openstack-config --set \ - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini securitygroup firewall_driver \ - neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - - - If you do not want to use a firewall in Compute or - Networking, set - firewall_driver=nova.virt.firewall.NoopFirewallDriver - in both config files, and comment out or remove - security_group_api=neutron in the - /etc/nova/nova.conf file, otherwise - you may encounter ERROR: The server has either - erred or is incapable of performing the requested - operation. (HTTP 500) when issuing - nova list commands. - - - - - - The neutron-server - initialization script expects a symbolic link - /etc/neutron/plugin.ini pointing to the - configuration file associated with your chosen plug-in. 
Using - Open vSwitch, for example, the symbolic link must point to - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. - If this symbolic link does not exist, create it using the - following commands: - # cd /etc/neutron -# ln -s plugins/openvswitch/ovs_neutron_plugin.ini plugin.ini - - - The openstack-neutron - initialization script expects the variable - NEUTRON_PLUGIN_CONF in file - /etc/sysconfig/neutron to reference the - configuration file associated with your chosen plug-in. Using - Open vSwitch, for example, edit the - /etc/sysconfig/neutron file and add the - following: - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini" - - - - Finalize installation - - Restart the Compute and Networking services: - # service nova-api restart -# service nova-scheduler restart -# service nova-conductor restart -# service neutron-server restart - - - Restart the Compute services: - # service openstack-nova-api restart -# service openstack-nova-scheduler restart -# service openstack-nova-conductor restart - - - Start the Networking service and configure it to start when the - system boots: - # service neutron-server start -# chkconfig neutron-server on - # service openstack-neutron start -# chkconfig openstack-neutron on - - -
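The endpoint-creation step earlier in this section embeds an awk filter to pull the service ID out of the keystone service-list table. This sketch shows what that filter does against an illustrative copy of such a table (the ID below is made up):

```shell
# Sketch: the `awk '/ network / {print $2}'` filter from the endpoint-create
# command, run against a sample service-list table. Illustrative ID only.
table='+----------------------------------+---------+---------+----------------------+
|                id                | name    | type    | description          |
+----------------------------------+---------+---------+----------------------+
| 0f5bcbcf3a4a4f0b8a337ea1071a24f6 | neutron | network | OpenStack Networking |
+----------------------------------+---------+---------+----------------------+'
# The pattern matches only the data row whose type column is " network ";
# $2 (whitespace-split) is the ID cell between the first two pipes.
service_id=$(printf '%s\n' "$table" | awk '/ network / {print $2}')
echo "$service_id"
```

The same table-scraping trick works for any of the keystone list commands in this guide, which is why the endpoint step can be run without copying IDs by hand.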
diff --git a/doc/training-guides/basic-install-guide/section_neutron-ovs-network-node.xml b/doc/training-guides/basic-install-guide/section_neutron-ovs-network-node.xml deleted file mode 100644 index a20daee4..00000000 --- a/doc/training-guides/basic-install-guide/section_neutron-ovs-network-node.xml +++ /dev/null @@ -1,446 +0,0 @@ - -
- Configure network node - - Before you start, set up a machine as a dedicated network - node. Dedicated network nodes have a - MGMT_INTERFACE NIC, a - DATA_INTERFACE NIC, and an - EXTERNAL_INTERFACE NIC. - The management network handles communication among nodes. - The data network handles communication coming to and from VMs. - The external NIC connects the network node, and optionally the - controller node, to the outside world so that your VMs can reach it. - - - By default, the system-config-firewall automated - firewall configuration tool is in place on RHEL. This graphical interface - (and a curses-style interface with -tui on the end of - the name) enables you to configure IP tables as a basic firewall. You - should disable it when you work with Networking unless you are familiar - with the underlying network technologies. By default, it blocks various - types of network traffic that are important to Networking. To disable it, - simply launch the program and clear the Enabled check - box. - After you successfully set up OpenStack Networking, you - can re-enable and configure the tool. However, during - Networking setup, disable the tool to make it easier to debug - network issues. - - - Install agents and configure common components - - Install the Networking packages and any dependencies. - - # apt-get install neutron-dhcp-agent neutron-l3-agent - # yum install openstack-neutron - # zypper install openstack-neutron openstack-neutron-l3-agent \ - openstack-neutron-dhcp-agent openstack-neutron-metadata-agent - - - Configure Networking agents to start at boot time: - # for s in neutron-{dhcp,metadata,l3}-agent; do chkconfig $s on; done - # for s in openstack-neutron-{dhcp,metadata,l3}-agent; do chkconfig $s on; done - - - Enable packet forwarding and disable packet destination - filtering so that the network node can coordinate traffic - for the VMs. 
Edit the /etc/sysctl.conf - file, as follows: - net.ipv4.ip_forward=1 -net.ipv4.conf.all.rp_filter=0 -net.ipv4.conf.default.rp_filter=0 - Use the sysctl command to ensure the - changes made to the /etc/sysctl.conf - file take effect: - # sysctl -p - - It is recommended that the networking service is - restarted after changing values related to the networking - configuration. This ensures that all modified values take - effect immediately: - # service networking restart - # service network restart - - - - Configure Networking to use keystone for authentication: - - - Set the auth_strategy - configuration key to keystone in the - DEFAULT section of the file: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone - - - Set the - neutron - configuration for - keystone - authentication: - # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_protocol http -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - auth_port 35357 -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_user neutron -# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \ - admin_password NEUTRON_PASS - - - - To configure neutron - to use keystone - for authentication, edit the - /etc/neutron/neutron.conf file. - - - Set the auth_strategy - configuration key to keystone in the - DEFAULT section of the file: - auth_strategy = keystone - - - Add these lines to the - [keystone_authtoken] section of the - file: - [keystone_authtoken] -... 
-auth_uri = http://controller:5000 -auth_host = controller -auth_port = 35357 -auth_protocol = http -admin_tenant_name = service -admin_user = neutron -admin_password = NEUTRON_PASS - - - - - Configure access to the RabbitMQ service: - # openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rpc_backend neutron.openstack.common.rpc.impl_kombu -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_host controller -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_userid guest -# openstack-config --set /etc/neutron/neutron.conf DEFAULT \ - rabbit_password RABBIT_PASS - - - Configure the RabbitMQ access. - Edit the /etc/neutron/neutron.conf file - to modify the following parameters in the - DEFAULT section. - rabbit_host = controller -rabbit_userid = guest -rabbit_password = RABBIT_PASS - - - - Install and configure the Open vSwitch (OVS) plug-in - OpenStack Networking supports a variety of plug-ins. For - simplicity, we chose to cover the most common plug-in, Open - vSwitch, and configure it to use basic GRE tunnels for tenant - network traffic. - - Install the Open vSwitch plug-in and its - dependencies: - # apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms - # yum install openstack-neutron-openvswitch - # zypper install openstack-neutron-openvswitch-agent - - - Start Open vSwitch: - # service openvswitch start - # service openvswitch-switch start - # service openvswitch-switch restart - And configure - it to start when the system boots: - # chkconfig openvswitch on - # chkconfig openvswitch-switch on - - - No matter which networking technology you use, you must add the - br-ex external bridge, which - connects to the outside world. 
- # ovs-vsctl add-br br-ex - - - Add a port (connection) from - the EXTERNAL_INTERFACE - interface to br-ex interface: - # ovs-vsctl add-port br-ex EXTERNAL_INTERFACE - - The host must have an IP address associated - with an interface other than - EXTERNAL_INTERFACE, - and your remote terminal session must be associated with - this other IP address. - If you associate an IP address with - EXTERNAL_INTERFACE, - that IP address stops working after you issue the - ovs-vsctl add-port br-ex EXTERNAL_INTERFACE - command. If you associate a remote terminal session with that - IP address, you lose connectivity with the host. - For more details about this behavior, see the - Configuration Problems section of the - Open vSwitch FAQ. - - - - Configure the - EXTERNAL_INTERFACE without - an IP address and in promiscuous mode. Additionally, you - must set the newly created br-ex - interface to have the IP address that formerly belonged - to EXTERNAL_INTERFACE. - - Generic Receive Offload (GRO) should not be - enabled on this interface as it can cause severe - performance problems. It can be disabled with the - ethtool utility. - - Edit the - /etc/sysconfig/network-scripts/ifcfg-EXTERNAL_INTERFACE - file: - DEVICE_INFO_HERE -ONBOOT=yes -BOOTPROTO=none -PROMISC=yes - - - Create and edit the - /etc/sysconfig/network-scripts/ifcfg-br-ex - file: - DEVICE=br-ex -TYPE=Bridge -ONBOOT=no -BOOTPROTO=none -IPADDR=EXTERNAL_INTERFACE_IP -NETMASK=EXTERNAL_INTERFACE_NETMASK -GATEWAY=EXTERNAL_INTERFACE_GATEWAY - - - You must set some common configuration options no - matter which networking technology you choose to use - with Open vSwitch. Configure the L3 and DHCP agents to - use OVS and namespaces. 
Edit the - /etc/neutron/l3_agent.ini and - /etc/neutron/dhcp_agent.ini - files, respectively: - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver -use_namespaces = True - - While the examples in this guide enable network - namespaces by default, you can disable them if issues - occur or your kernel does not support them. Edit the - /etc/neutron/l3_agent.ini and - /etc/neutron/dhcp_agent.ini - files, respectively: - use_namespaces = False - Edit the /etc/neutron/neutron.conf file - to disable overlapping IP addresses: - allow_overlapping_ips = False - Note that when network namespaces are disabled, - you can have only one router for each network node and - overlapping IP addresses are not supported. - You must complete additional steps after you - create the initial Neutron virtual networks and - router. - - - - Similarly, you must also tell Neutron core to use - OVS. Edit the - /etc/neutron/neutron.conf - file: - core_plugin = openvswitch - - - Configure a firewall plug-in. If you do not wish to - enforce firewall rules, called security groups - by OpenStack, you can use - neutron.agent.firewall.NoopFirewall. - Otherwise, you can choose one of the Networking firewall - plug-ins. The most common choice is the Hybrid - OVS-IPTables driver, but you can also use the - Firewall-as-a-Service driver. Edit the - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - file: - [securitygroup] -# Firewall driver for realizing neutron security group function. -firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - - You must use at least the No-Op firewall. - Otherwise, Horizon and other OpenStack services cannot - get and set required VM boot options. - - - - Configure the OVS plug-in to start - on boot. 
- # chkconfig neutron-openvswitch-agent on - # chkconfig openstack-neutron-openvswitch-agent on - - - Configure the OVS plug-in to - use GRE tunneling, the br-int - integration bridge, the br-tun - tunneling bridge, and a local IP for the - DATA_INTERFACE tunnel IP. - Edit the - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - file: - [ovs] -... -tenant_network_type = gre -tunnel_id_ranges = 1:1000 -enable_tunneling = True -integration_bridge = br-int -tunnel_bridge = br-tun -local_ip = DATA_INTERFACE_IP - - - - Configure the agents - - To perform DHCP on the software-defined networks, - Networking supports several different plug-ins. However, in - general, you use the dnsmasq plug-in. - Configure the - /etc/neutron/dhcp_agent.ini file: - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - -# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \ - dhcp_driver neutron.agent.linux.dhcp.Dnsmasq - - - To allow virtual machines to access the Compute metadata - information, the Networking metadata agent must be enabled - and configured. The agent will act as a proxy for the - Compute metadata service. - On the controller, edit the - /etc/nova/nova.conf file to define a - secret key that will be shared between the Compute service - and the Networking metadata agent. - Add to the - [DEFAULT] section: - [DEFAULT] -... -neutron_metadata_proxy_shared_secret = METADATA_PASS -service_neutron_metadata_proxy = true - Set the - neutron_metadata_proxy_shared_secret - key: - # openstack-config --set /etc/nova/nova.conf DEFAULT \ - neutron_metadata_proxy_shared_secret METADATA_PASS -# openstack-config --set /etc/nova/nova.conf DEFAULT \ - service_neutron_metadata_proxy true - Restart the - nova-api service: - # service nova-api restart - # service openstack-nova-api restart - On the network node, modify the metadata agent - configuration. - Edit the - /etc/neutron/metadata_agent.ini file - and modify the [DEFAULT] section: - [DEFAULT] -... 
-auth_url = http://controller:5000/v2.0 -auth_region = regionOne -admin_tenant_name = service -admin_user = neutron -admin_password = NEUTRON_PASS -nova_metadata_ip = controller -metadata_proxy_shared_secret = METADATA_PASS - Set the required - keys: - # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - auth_url http://controller:5000/v2.0 -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - auth_region regionOne -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - admin_tenant_name service -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - admin_user neutron -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - admin_password NEUTRON_PASS -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - nova_metadata_ip controller -# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \ - metadata_proxy_shared_secret METADATA_PASS - - The value of auth_region is - case-sensitive and must match the endpoint region defined - in Keystone. - - - If you serve the OpenStack Networking API over HTTPS with - self-signed certificates, you must perform additional configuration - for the metadata agent because Networking cannot validate the SSL - certificates from the service catalog. - Add this statement to the - [DEFAULT] section: - -neutron_insecure = True - Set the required keys: - # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT neutron_insecure True - - - - - Finalize installation - - The neutron-server - initialization script expects a symbolic link - /etc/neutron/plugin.ini pointing to the - configuration file associated with your chosen plug-in. Using - Open vSwitch, for example, the symbolic link must point to - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. 
- If this symbolic link does not exist, create it using the - following commands: - # cd /etc/neutron -# ln -s plugins/openvswitch/ovs_neutron_plugin.ini plugin.ini - - - The openstack-neutron - initialization script expects the variable - NEUTRON_PLUGIN_CONF in file - /etc/sysconfig/neutron to reference the - configuration file associated with your chosen plug-in. Using - Open vSwitch, for example, edit the - /etc/sysconfig/neutron file and add the - following: - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini" - - - Restart Networking services. - # service neutron-dhcp-agent restart -# service neutron-l3-agent restart -# service neutron-metadata-agent restart -# service neutron-plugin-openvswitch-agent restart - # service neutron-dhcp-agent restart -# service neutron-l3-agent restart -# service neutron-metadata-agent restart -# service neutron-openvswitch-agent restart - # service openstack-neutron-dhcp-agent restart -# service openstack-neutron-l3-agent restart -# service openstack-neutron-metadata-agent restart -# service openstack-neutron-openvswitch-agent restart - - -
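The symlink step above can be sketched and verified as follows. This is a minimal illustration run against a scratch directory standing in for /etc/neutron, so it is safe to execute anywhere; on a real node you would work in /etc/neutron directly.

```shell
# Sketch of the plugin.ini symlink step, using a scratch copy of
# /etc/neutron so the logic can be exercised safely (assumption: the
# scratch directory stands in for the real configuration directory).
NEUTRON_CONF_DIR=$(mktemp -d)
mkdir -p "$NEUTRON_CONF_DIR/plugins/openvswitch"
touch "$NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini"

cd "$NEUTRON_CONF_DIR"
# Create the link only if it is not already present.
[ -e plugin.ini ] || ln -s plugins/openvswitch/ovs_neutron_plugin.ini plugin.ini
# The link should resolve to the OVS plug-in configuration file.
target=$(readlink plugin.ini)
echo "$target"
```

The relative link target keeps the symlink valid even if /etc/neutron is mounted or copied elsewhere.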
diff --git a/doc/training-guides/basic-install-guide/section_nova-compute-install.xml b/doc/training-guides/basic-install-guide/section_nova-compute-install.xml deleted file mode 100644 index a7f445aa..00000000 --- a/doc/training-guides/basic-install-guide/section_nova-compute-install.xml +++ /dev/null @@ -1,205 +0,0 @@ - -
- Install and configure a compute node - This section describes how to install and configure the Compute - service on a compute node. The service supports several - hypervisors to - deploy instances or - VMs. For simplicity, - this configuration uses the - QEMU hypervisor - with the - KVM extension - on compute nodes that support hardware acceleration for virtual machines. - On legacy hardware, this configuration uses the generic QEMU hypervisor. - You can follow these instructions with minor modifications to horizontally - scale your environment with additional compute nodes. - - To install and configure the Compute hypervisor components - - Install the packages: - # apt-get install nova-compute sysfsutils - # yum install openstack-nova-compute sysfsutils - # zypper install openstack-nova-compute genisoimage kvm - - - Edit the /etc/nova/nova.conf file and - complete the following actions: - - - In the [DEFAULT] section, configure - RabbitMQ message broker access: - [DEFAULT] -... -rpc_backend = rabbit -rabbit_host = controller -rabbit_password = RABBIT_PASS - Replace RABBIT_PASS with the password - you chose for the guest account in - RabbitMQ. - - - In the [DEFAULT] and - [keystone_authtoken] sections, - configure Identity service access: - [DEFAULT] -... -auth_strategy = keystone - -[keystone_authtoken] -... -auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = service -admin_user = nova -admin_password = NOVA_PASS - Replace NOVA_PASS with the password - you chose for the nova user in the Identity - service. - - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - - In the [DEFAULT] section, configure the - my_ip option: - [DEFAULT] -... 
-my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - Replace - MANAGEMENT_INTERFACE_IP_ADDRESS with - the IP address of the management network interface on your - compute node, typically 10.0.0.31 for the first node in the - example - architecture. - - - In the [DEFAULT] section, enable and - configure remote console access: - [DEFAULT] -... -vnc_enabled = True -vncserver_listen = 0.0.0.0 -vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS -novncproxy_base_url = http://controller:6080/vnc_auto.html - The server component listens on all IP addresses and the proxy - component only listens on the management interface IP address of - the compute node. The base URL indicates the location where you - can use a web browser to access remote consoles of instances - on this compute node. - Replace - MANAGEMENT_INTERFACE_IP_ADDRESS with - the IP address of the management network interface on your - compute node, typically 10.0.0.31 for the first node in the - example - architecture. - - If the web browser to access remote consoles resides on a - host that cannot resolve the - controller hostname, you must replace - controller with the management - interface IP address of the controller node. - - - - In the [glance] section, configure the - location of the Image Service: - [glance] -... -host = controller - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] section: - [DEFAULT] -... -verbose = True - - - - - - - Ensure the kernel module nbd is - loaded. - # modprobe nbd - - - Ensure the module will be loaded on every boot. - On openSUSE by adding nbd in the - /etc/modules-load.d/nbd.conf file. - On SLES by adding or modifying the following line in the - /etc/sysconfig/kernel file. 
- MODULES_LOADED_ON_BOOT = "nbd" - - - - - - To install and configure the Compute hypervisor components - - Install the packages: - # apt-get install nova-compute - - - - To finalize installation - - Determine whether your compute node supports hardware acceleration - for virtual machines: - $ egrep -c '(vmx|svm)' /proc/cpuinfo - If this command returns a value of - one or greater, your compute node supports - hardware acceleration which typically requires no additional - configuration. - If this command returns a value of zero, - your compute node does not support hardware acceleration and you must - configure libvirt to use QEMU instead of KVM. - - - Edit the [libvirt] - section in the - /etc/nova/nova-compute.conf - /etc/nova/nova.conf file as follows: - [libvirt] -... -virt_type = qemu - - - - - Restart the Compute service: - # service nova-compute restart - - - Start the Compute service including its dependencies and configure - them to start automatically when the system boots: - # systemctl enable libvirtd.service openstack-nova-compute.service -# systemctl start libvirtd.service -# systemctl start openstack-nova-compute.service - On SLES: - # service libvirtd start -# chkconfig libvirtd on -# service openstack-nova-compute start -# chkconfig openstack-nova-compute on - On openSUSE: - # systemctl enable libvirtd.service openstack-nova-compute.service -# systemctl start libvirtd.service -# systemctl start openstack-nova-compute.service - - - By default, the Ubuntu packages create an SQLite database. - Because this configuration uses a SQL database server, you can - remove the SQLite database file: - # rm -f /var/lib/nova/nova.sqlite - - -
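The hardware-acceleration check above reduces to a small decision. The sketch below uses a hypothetical CPUINFO override so the logic can be demonstrated against a sample file; by default it reads the real /proc/cpuinfo.

```shell
# Sketch of the hardware-acceleration check: pick the libvirt virt_type
# based on CPU virtualization flags. CPUINFO is an illustrative override,
# not an OpenStack setting; the real source is /proc/cpuinfo.
CPUINFO="${CPUINFO:-/proc/cpuinfo}"
count=$(grep -E -c '(vmx|svm)' "$CPUINFO" || true)
count=${count:-0}
if [ "$count" -ge 1 ]; then
    virt_type=kvm     # hardware acceleration available
else
    virt_type=qemu    # fall back to plain QEMU emulation
fi
echo "virt_type = $virt_type"
```

The chosen value is what you would place in the [libvirt] section as described above.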
diff --git a/doc/training-guides/basic-install-guide/section_nova-controller-install.xml b/doc/training-guides/basic-install-guide/section_nova-controller-install.xml deleted file mode 100644 index 9f4b34c5..00000000 --- a/doc/training-guides/basic-install-guide/section_nova-controller-install.xml +++ /dev/null @@ -1,282 +0,0 @@ - -
- Install and configure controller node - This section describes how to install and configure the Compute - service, code-named nova, on the controller node. - - To configure prerequisites - Before you install and configure Compute, you must create a database - and Identity service credentials including endpoints. - - To create the database, complete these steps: - - - Use the database access client to connect to the database - server as the root user: - $ mysql -u root -p - - - Create the nova database: - CREATE DATABASE nova; - - - Grant proper access to the nova - database: - GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; -GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - Replace NOVA_DBPASS with a suitable - password. - - - Exit the database access client. - - - - - Source the admin credentials to gain access to - admin-only CLI commands: - $ source admin-openrc.sh - - - To create the Identity service credentials, complete these - steps: - - - Create the nova user: - $ keystone user-create --name nova --pass NOVA_PASS -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| email | | -| enabled | True | -| id | 387dd4f7e46d4f72965ee99c76ae748c | -| name | nova | -| username | nova | -+----------+----------------------------------+ - Replace NOVA_PASS with a suitable - password. - - - Link the nova user to the - service tenant and admin - role: - $ keystone user-role-add --user nova --tenant service --role admin - - This command provides no output. 
- - - - Create the nova service: - $ keystone service-create --name nova --type compute \ - --description "OpenStack Compute" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | OpenStack Compute | -| enabled | True | -| id | 6c7854f52ce84db795557ebc0373f6b9 | -| name | nova | -| type | compute | -+-------------+----------------------------------+ - - - - - Create the Compute service endpoints: - $ keystone endpoint-create \ - --service-id $(keystone service-list | awk '/ compute / {print $2}') \ - --publicurl http://controller:8774/v2/%\(tenant_id\)s \ - --internalurl http://controller:8774/v2/%\(tenant_id\)s \ - --adminurl http://controller:8774/v2/%\(tenant_id\)s \ - --region regionOne -+-------------+-----------------------------------------+ -| Property | Value | -+-------------+-----------------------------------------+ -| adminurl | http://controller:8774/v2/%(tenant_id)s | -| id | c397438bd82c41198ec1a9d85cb7cc74 | -| internalurl | http://controller:8774/v2/%(tenant_id)s | -| publicurl | http://controller:8774/v2/%(tenant_id)s | -| region | regionOne | -| service_id | 6c7854f52ce84db795557ebc0373f6b9 | -+-------------+-----------------------------------------+ - - - - To install and configure Compute controller components - - Install the packages: - # apt-get install nova-api nova-cert nova-conductor nova-consoleauth \ - nova-novncproxy nova-scheduler python-novaclient - # yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \ - openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \ - python-novaclient - # zypper install openstack-nova-api openstack-nova-scheduler openstack-nova-cert \ - openstack-nova-conductor openstack-nova-consoleauth openstack-nova-novncproxy \ - python-novaclient iptables - - - Edit the /etc/nova/nova.conf file and - complete the following actions: - - - In the [database] section, configure - database 
access: - [database] -... -connection = mysql://nova:NOVA_DBPASS@controller/nova - Replace NOVA_DBPASS with the - password you chose for the Compute database. - - - In the [DEFAULT] section, configure - RabbitMQ message broker access: - [DEFAULT] -... -rpc_backend = rabbit -rabbit_host = controller -rabbit_password = RABBIT_PASS - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. - - - In the [DEFAULT] and - [keystone_authtoken] sections, - configure Identity service access: - [DEFAULT] -... -auth_strategy = keystone - -[keystone_authtoken] -... -auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = service -admin_user = nova -admin_password = NOVA_PASS - Replace NOVA_PASS with the password - you chose for the nova user in the Identity - service. - - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - - In the [DEFAULT] section, configure the - my_ip option to use the management interface IP - address of the controller node: - [DEFAULT] -... -my_ip = 10.0.0.11 - - - In the [DEFAULT] section, configure the - VNC proxy to use the management interface IP address of the - controller node: - [DEFAULT] -... -vncserver_listen = 10.0.0.11 -vncserver_proxyclient_address = 10.0.0.11 - - - In the [glance] section, configure the - location of the Image Service: - [glance] -... -host = controller - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] section: - [DEFAULT] -... 
-verbose = True - - - - - Populate the Compute database: - # su -s /bin/sh -c "nova-manage db sync" nova - - - - To install and configure the Compute controller components - - Install the packages: - # apt-get install nova-api nova-cert nova-conductor nova-consoleauth \ - nova-novncproxy nova-scheduler python-novaclient - - - Edit the /etc/nova/nova.conf file and - complete the following actions: - - - In the [DEFAULT] section, configure the VNC - proxy to use the management interface IP address of the controller - node: - [DEFAULT] -... -vncserver_listen = 10.0.0.11 -vncserver_proxyclient_address = 10.0.0.11 - - - - - - To finalize installation - - Restart the Compute services: - # service nova-api restart -# service nova-cert restart -# service nova-consoleauth restart -# service nova-scheduler restart -# service nova-conductor restart -# service nova-novncproxy restart - - - Start the Compute services and configure them to start when the - system boots: - # systemctl enable openstack-nova-api.service openstack-nova-cert.service \ - openstack-nova-consoleauth.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service openstack-nova-novncproxy.service -# systemctl start openstack-nova-api.service openstack-nova-cert.service \ - openstack-nova-consoleauth.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service openstack-nova-novncproxy.service - On SLES: - # service openstack-nova-api start -# service openstack-nova-cert start -# service openstack-nova-consoleauth start -# service openstack-nova-scheduler start -# service openstack-nova-conductor start -# service openstack-nova-novncproxy start -# chkconfig openstack-nova-api on -# chkconfig openstack-nova-cert on -# chkconfig openstack-nova-consoleauth on -# chkconfig openstack-nova-scheduler on -# chkconfig openstack-nova-conductor on -# chkconfig openstack-nova-novncproxy on - On openSUSE: - # systemctl enable openstack-nova-api.service openstack-nova-cert.service \ - 
openstack-nova-consoleauth.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service openstack-nova-novncproxy.service -# systemctl start openstack-nova-api.service openstack-nova-cert.service \ - openstack-nova-consoleauth.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service openstack-nova-novncproxy.service - - - By default, the Ubuntu packages create an SQLite database. - Because this configuration uses a SQL database server, you can - remove the SQLite database file: - # rm -f /var/lib/nova/nova.sqlite - - -
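To confirm that the nova.conf edits above took effect, an option can be read back out of the file. This sketch uses a scratch file in place of /etc/nova/nova.conf, and get_opt is a hypothetical helper for illustration, not part of any OpenStack tooling.

```shell
# Sketch: read an option back out of nova.conf to confirm an edit.
# Assumption: a scratch config stands in for /etc/nova/nova.conf.
NOVA_CONF=$(mktemp)
printf '[DEFAULT]\nmy_ip = 10.0.0.11\nvncserver_listen = 10.0.0.11\n' > "$NOVA_CONF"

get_opt() {
    # Print the value of the first "key = value" line for the given key.
    sed -n "s/^$1[[:space:]]*=[[:space:]]*//p" "$NOVA_CONF" | head -n 1
}

get_opt vncserver_listen    # prints 10.0.0.11
```

A real configuration check would also account for section boundaries; this sketch only matches the key at the start of a line.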
diff --git a/doc/training-guides/basic-install-guide/section_nova-networking-compute-node.xml b/doc/training-guides/basic-install-guide/section_nova-networking-compute-node.xml deleted file mode 100644 index dc9c87b1..00000000 --- a/doc/training-guides/basic-install-guide/section_nova-networking-compute-node.xml +++ /dev/null @@ -1,71 +0,0 @@ - -
- Configure compute node - This section covers deployment of a simple - flat network that provides IP addresses to your - instances via DHCP. If your environment includes - multiple compute nodes, the multi-host feature - provides redundancy by spreading network functions across compute - nodes. - - To install legacy networking components - - # apt-get install nova-network nova-api-metadata - # apt-get install nova-network nova-api - # yum install openstack-nova-network openstack-nova-api - # zypper install openstack-nova-network openstack-nova-api - - - - To configure legacy networking - - Edit the /etc/nova/nova.conf file and - complete the following actions: - - - In the [DEFAULT] section, configure - the network parameters: - [DEFAULT] -... -network_api_class = nova.network.api.API -security_group_api = nova -firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver -network_manager = nova.network.manager.FlatDHCPManager -network_size = 254 -allow_same_net_traffic = False -multi_host = True -send_arp_for_ha = True -share_dhcp_address = True -force_dhcp_release = True -flat_network_bridge = br100 -flat_interface = INTERFACE_NAME -public_interface = INTERFACE_NAME - Replace INTERFACE_NAME with the - actual interface name for the external network. For example, - eth1 or ens224. 
- 
- 
- 
- 
- Restart the services: 
- # service nova-network restart 
-# service nova-api-metadata restart 
- Start the services and 
- configure them to start when the system boots: 
- # systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service 
-# systemctl start openstack-nova-network.service openstack-nova-metadata-api.service 
- On SLES: 
- # service openstack-nova-network start 
-# service openstack-nova-api-metadata start 
-# chkconfig openstack-nova-network on 
-# chkconfig openstack-nova-api-metadata on 
- On openSUSE: 
- # systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service 
-# systemctl start openstack-nova-network.service openstack-nova-metadata-api.service 
- 
- 
-
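A quick sanity check that the legacy-networking options above landed in the file can be sketched as follows. A scratch config stands in for /etc/nova/nova.conf here; on a real node you would point NOVA_CONF at the actual file.

```shell
# Sketch: confirm the legacy-networking options were written to nova.conf.
# Assumption: this scratch file stands in for /etc/nova/nova.conf.
NOVA_CONF=$(mktemp)
cat > "$NOVA_CONF" <<'EOF'
[DEFAULT]
network_manager = nova.network.manager.FlatDHCPManager
flat_network_bridge = br100
flat_interface = eth1
public_interface = eth1
EOF

missing=0
for opt in network_manager flat_network_bridge flat_interface public_interface; do
    grep -q "^$opt[[:space:]]*=" "$NOVA_CONF" || { echo "missing: $opt"; missing=1; }
done
echo "missing=$missing"
```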
diff --git a/doc/training-guides/basic-install-guide/section_nova-networking-controller-node.xml b/doc/training-guides/basic-install-guide/section_nova-networking-controller-node.xml deleted file mode 100644 index 94c9a092..00000000 --- a/doc/training-guides/basic-install-guide/section_nova-networking-controller-node.xml +++ /dev/null @@ -1,43 +0,0 @@ - -
- Configure controller node - Legacy networking primarily involves compute nodes. However, - you must configure the controller node to use legacy - networking. - - To configure legacy networking - - Edit the /etc/nova/nova.conf file and - complete the following actions: - - - In the [DEFAULT] section, configure - the network and security group APIs: - [DEFAULT] -... -network_api_class = nova.network.api.API -security_group_api = nova - - - - - Restart the Compute services: - # systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service - On SLES: - # service openstack-nova-api restart -# service openstack-nova-scheduler restart -# service openstack-nova-conductor restart - On openSUSE: - # systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service - # service nova-api restart -# service nova-scheduler restart -# service nova-conductor restart - - -
diff --git a/doc/training-guides/basic-install-guide/section_nova-networking-initial-network.xml b/doc/training-guides/basic-install-guide/section_nova-networking-initial-network.xml deleted file mode 100644 index 8657553e..00000000 --- a/doc/training-guides/basic-install-guide/section_nova-networking-initial-network.xml +++ /dev/null @@ -1,57 +0,0 @@ - -
- 
- Create initial network 
- Before launching your first instance, you must create the necessary 
- virtual network infrastructure to which the instance will connect. 
- This network typically provides Internet access 
- from instances. You can enable Internet access 
- to individual instances using a 
- floating IP address and suitable 
- security group rules. The admin 
- tenant owns this network because it provides external network access 
- for multiple tenants. 
- This network shares the same subnet 
- associated with the physical network connected to the external 
- interface on the compute node. You should specify 
- an exclusive slice of this subnet to prevent interference with other 
- devices on the external network. 
- 
- Perform these commands on the controller node. 
- 
- 
- To create the network 
- 
- Source the admin tenant credentials: 
- $ source admin-openrc.sh 
- 
- 
- Create the network: 
- Replace NETWORK_CIDR with the subnet 
- associated with the physical network. 
- $ nova network-create demo-net --bridge br100 --multi-host T \ 
- --fixed-range-v4 NETWORK_CIDR 
- For example, using an exclusive slice of 
- 203.0.113.0/24 with IP address range 
- 203.0.113.24 to 203.0.113.31: 
- 
- $ nova network-create demo-net --bridge br100 --multi-host T \ 
- --fixed-range-v4 203.0.113.24/29 
- 
- This command provides no output. 
- 
- 
- 
- Verify creation of the network: 
- $ nova net-list 
-+--------------------------------------+----------+------------------+ 
-| ID | Label | CIDR | 
-+--------------------------------------+----------+------------------+ 
-| 84b34a65-a762-44d6-8b5e-3b461a53f513 | demo-net | 203.0.113.24/29 | 
-+--------------------------------------+----------+------------------+ 
- 
- 
-
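The arithmetic behind the /29 slice can be sketched in shell. This assumes the slice stays within the final octet, as it does here; the network address and broadcast address are excluded from the usable range.

```shell
# Sketch: compute the usable instance addresses inside a /29 slice.
# Assumption: the slice does not cross an octet boundary, so simple
# arithmetic on the last octet is sufficient.
cidr="203.0.113.24/29"
base="${cidr%/*}"                     # 203.0.113.24
prefix="${cidr#*/}"                   # 29
net="${base%.*}"                      # 203.0.113
octet="${base##*.}"                   # 24
size=$(( 1 << (32 - prefix) ))        # 8 addresses in a /29
first=$(( octet + 1 ))                # skip the network address (.24)
last=$(( octet + size - 2 ))          # skip the broadcast address (.31)
echo "usable: $net.$first - $net.$last"
# prints: usable: 203.0.113.25 - 203.0.113.30
```

DHCP hands out addresses from this usable range to instances on demo-net.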
diff --git a/doc/training-guides/basic-install-guide/section_nova-verify.xml b/doc/training-guides/basic-install-guide/section_nova-verify.xml deleted file mode 100644 index 2c36b826..00000000 --- a/doc/training-guides/basic-install-guide/section_nova-verify.xml +++ /dev/null @@ -1,48 +0,0 @@ - -
- 
- Verify operation 
- This section describes how to verify operation of the Compute 
- service. 
- 
- 
- Perform these commands on the controller node. 
- 
- 
- Source the admin credentials to gain access to 
- admin-only CLI commands: 
- $ source admin-openrc.sh 
- 
- 
- List service components to verify successful launch of each 
- process: 
- $ nova service-list 
-+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+ 
-| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | 
-+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+ 
-| 1 | nova-conductor | controller | internal | enabled | up | 2014-09-16T23:54:02.000000 | - | 
-| 2 | nova-consoleauth | controller | internal | enabled | up | 2014-09-16T23:54:04.000000 | - | 
-| 3 | nova-scheduler | controller | internal | enabled | up | 2014-09-16T23:54:07.000000 | - | 
-| 4 | nova-cert | controller | internal | enabled | up | 2014-09-16T23:54:00.000000 | - | 
-| 5 | nova-compute | compute1 | nova | enabled | up | 2014-09-16T23:54:06.000000 | - | 
-+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+ 
- 
- This output should indicate four components enabled on the 
- controller node and one component enabled on the compute node. 
- 
- 
- 
- List images in the Image Service catalog to verify connectivity 
- with the Identity service and Image Service: 
- $ nova image-list 
-+--------------------------------------+---------------------+--------+--------+ 
-| ID | Name | Status | Server | 
-+--------------------------------------+---------------------+--------+--------+ 
-| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.3-x86_64 | ACTIVE | | 
-+--------------------------------------+---------------------+--------+--------+ 
- 
- 
-
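Counting the up services per host can be automated when you have many nodes. The sketch below embeds a two-row sample of the nova service-list table so it is self-contained; in practice you would capture the real command output to the file first.

```shell
# Sketch: tally "up" services per host from saved `nova service-list`
# output. A two-row sample is embedded here for illustration.
listing=$(mktemp)
cat > "$listing" <<'EOF'
| 1  | nova-conductor   | controller | internal | enabled | up    | 2014-09-16T23:54:02.000000 | -               |
| 5  | nova-compute     | compute1   | nova     | enabled | up    | 2014-09-16T23:54:06.000000 | -               |
EOF

# Field 4 is the Host column and field 7 the State column when the
# table is split on the "|" separators.
awk -F'|' '$7 ~ /up/ {gsub(/ /, "", $4); n[$4]++}
           END {for (h in n) print h ": " n[h]}' "$listing" | sort
```

With the full five-row table from this section, the same pipeline would report four services on the controller and one on the compute node.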
diff --git a/doc/training-guides/basic-install-guide/section_sahara-install.xml b/doc/training-guides/basic-install-guide/section_sahara-install.xml deleted file mode 100644 index a1f88f11..00000000 --- a/doc/training-guides/basic-install-guide/section_sahara-install.xml +++ /dev/null @@ -1,97 +0,0 @@ - -
- 
- Install the Data processing service 
- This procedure installs the Data processing service (sahara) on the 
- controller node. 
- To install the Data processing service on the controller: 
- 
- 
- Install required packages: 
- # yum install openstack-sahara python-saharaclient 
- # zypper install openstack-sahara python-saharaclient 
- 
- 
- For now, sahara does not have packages for Ubuntu and Debian. 
- Documentation will be updated once packages are available. The rest 
- of this document assumes that you have the sahara service packages 
- installed on the system. 
- 
- 
- Edit the /etc/sahara/sahara.conf configuration file: 
- 
- First, edit the connection parameter in 
- the [database] section. The URL provided here 
- should point to an empty database. For instance, the connection 
- string for a MySQL database would be: 
- connection = mysql://sahara:SAHARA_DBPASS@controller/sahara 
- 
- Switch to the [keystone_authtoken] 
- section. The auth_uri parameter should point to 
- the public Identity API endpoint, and identity_uri 
- should point to the admin Identity API endpoint. For example: 
- auth_uri = http://controller:5000/v2.0 
-identity_uri = http://controller:35357 
- 
- Next, specify admin_user, 
- admin_password, and 
- admin_tenant_name. These parameters must specify 
- a keystone user which has the admin role in the 
- given tenant. These credentials allow sahara to authenticate and 
- authorize its users. 
- 
- Switch to the [DEFAULT] section and 
- proceed to the networking parameters. If you are using Neutron 
- for networking, set use_neutron=true. 
- Otherwise, if you are using nova-network, set 
- it to false. 
- 
- That should be enough for the first run. If you want to 
- increase the logging level for troubleshooting, there are two parameters 
- in the config: verbose and 
- debug. If the former is set to 
- true, sahara 
- writes logs of INFO level and above. If 
- debug is set to 
- true, sahara writes all logs, including 
- the DEBUG ones. 
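Taken together, the settings described above could look like the following consolidated sketch of /etc/sahara/sahara.conf. The sahara user name and SAHARA_PASS value are illustrative placeholders, not credentials defined elsewhere in this guide.

```ini
# Sketch of /etc/sahara/sahara.conf after the edits above.
# admin_user/admin_password values are assumptions for illustration.
[DEFAULT]
use_neutron = true
verbose = true
debug = false

[database]
connection = mysql://sahara:SAHARA_DBPASS@controller/sahara

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_user = sahara
admin_password = SAHARA_PASS
admin_tenant_name = service
```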
- 
- 
- 
- If you use the Data processing service with a MySQL database, 
- then to store large job binaries in the sahara internal database you must 
- increase the maximum allowed packet size. Edit the my.cnf 
- file and change the parameter: 
- [mysqld] 
-max_allowed_packet = 256M 
- and restart the MySQL server. 
- 
- Create the database schema: 
- # sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head 
- 
- You must register the Data processing service with the Identity 
- service so that other OpenStack services can locate it. Register the 
- service and specify the endpoint: 
- $ keystone service-create --name sahara --type data_processing \ 
- --description "Data processing service" 
-$ keystone endpoint-create \ 
- --service-id $(keystone service-list | awk '/ sahara / {print $2}') \ 
- --publicurl http://controller:8386/v1.1/%\(tenant_id\)s \ 
- --internalurl http://controller:8386/v1.1/%\(tenant_id\)s \ 
- --adminurl http://controller:8386/v1.1/%\(tenant_id\)s \ 
- --region regionOne 
- 
- Start the sahara service: 
- # systemctl start openstack-sahara-all 
- # service openstack-sahara-all start 
- 
- (Optional) Enable the Data processing service to start on boot: 
- # systemctl enable openstack-sahara-all 
- # chkconfig openstack-sahara-all on 
- 
- 
-
diff --git a/doc/training-guides/basic-install-guide/section_sahara-verify.xml b/doc/training-guides/basic-install-guide/section_sahara-verify.xml deleted file mode 100644 index 03440c43..00000000 --- a/doc/training-guides/basic-install-guide/section_sahara-verify.xml +++ /dev/null @@ -1,26 +0,0 @@ - -
Verify the Data processing service installation

To verify that the Data processing service (sahara) is installed and configured
correctly, try requesting the cluster list using the sahara client.

Source the demo tenant credentials:
$ source demo-openrc.sh

Retrieve the sahara cluster list:
$ sahara cluster-list
You should see output similar to this:
+------+----+--------+------------+
| name | id | status | node_count |
+------+----+--------+------------+
+------+----+--------+------------+
diff --git a/doc/training-guides/basic-install-guide/section_trove-install.xml b/doc/training-guides/basic-install-guide/section_trove-install.xml deleted file mode 100644 index 2a462b56..00000000 --- a/doc/training-guides/basic-install-guide/section_trove-install.xml +++ /dev/null @@ -1,251 +0,0 @@ - -
Install the Database service

This procedure installs the Database module on the controller node.

Prerequisites

This chapter assumes that you already have a working OpenStack environment with at least
the following components installed: Compute, Image Service, Identity.

If you want to do backup and restore, you also need Object Storage.

If you want to provision datastores on block-storage volumes, you also need Block
Storage.

To install the Database module on the controller:

Install required packages:
# apt-get install python-trove python-troveclient \
  trove-common trove-api trove-taskmanager trove-conductor
# yum install openstack-trove python-troveclient
# zypper install openstack-trove python-troveclient

Respond to the prompts for database management, [keystone_authtoken] settings, and API
endpoint registration. The trove-manage db_sync command runs automatically.

Prepare OpenStack:

Source the admin-openrc.sh file:
$ source ~/admin-openrc.sh

Create a trove user that the Database service uses to authenticate with the Identity
service. Use the service tenant and give the user the admin role:
$ keystone user-create --name trove --pass TROVE_PASS
$ keystone user-role-add --user trove --tenant service --role admin
Replace TROVE_PASS with a suitable password.

All configuration files should be placed in the /etc/trove directory.
Edit the following configuration files, taking the following actions for each file:

  api-paste.ini
  trove.conf
  trove-taskmanager.conf
  trove-conductor.conf

Take the upstream api-paste.ini and change the following content in it:
[composite:trove]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
auth_host = controller
admin_tenant_name = service
admin_user = trove
admin_password = TROVE_PASS

Edit the [DEFAULT] section of each file (except api-paste.ini) and set appropriate
values for the OpenStack service URLs (these can be handled by the Keystone service
catalog), logging and messaging configuration, and SQL connections:
[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
sql_connection = mysql://trove:TROVE_DBPASS@controller/trove
notifier_queue_hostname = controller

Configure the Database module to use the RabbitMQ message broker by setting the
following options in the [DEFAULT] configuration group of each file:
[DEFAULT]
control_exchange = trove
rabbit_host = controller
rabbit_userid = guest
rabbit_password = RABBIT_PASS
rabbit_virtual_host = /
rpc_backend = trove.openstack.common.rpc.impl_kombu

Edit the trove.conf file so it includes appropriate values for the default datastore and
network label regex as shown below:
[DEFAULT]
# Config option for showing the IP address that nova doles out
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
control_exchange = trove

Edit the trove-taskmanager.conf file so it includes the required settings to connect to
the OpenStack Compute service as shown below:
[DEFAULT]
# Configuration options for talking to nova via the novaclient.
# These options are for an admin user in your keystone config.
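Taken together, the edits above produce a [DEFAULT] section like the following sketch for trove.conf (trove-taskmanager.conf and trove-conductor.conf share everything except the trove.conf-only options at the end; hostnames and passwords are this guide's placeholders):

```ini
[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
sql_connection = mysql://trove:TROVE_DBPASS@controller/trove
notifier_queue_hostname = controller

# RabbitMQ message broker settings (common to all three files)
control_exchange = trove
rabbit_host = controller
rabbit_userid = guest
rabbit_password = RABBIT_PASS
rabbit_virtual_host = /
rpc_backend = trove.openstack.common.rpc.impl_kombu

# trove.conf only: nova-assigned IP reporting and network label regex
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
```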
# It proxies the token received from the user to nova using this admin user's
# credentials, essentially acting as the client via that proxied token.
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
log_file = trove-taskmanager.log

Prepare the trove admin database:
$ mysql -u root -p
mysql> CREATE DATABASE trove;
mysql> GRANT ALL PRIVILEGES ON trove.* TO trove@'localhost' \
IDENTIFIED BY 'TROVE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON trove.* TO trove@'%' \
IDENTIFIED BY 'TROVE_DBPASS';

Prepare the Database service:

Initialize the database:
# trove-manage db_sync

Create a datastore. You need to create a separate datastore for each type of database
you want to use, for example, MySQL, MongoDB, or Cassandra. This example shows you how
to create a datastore for a MySQL database:
# su -s /bin/sh -c "trove-manage datastore_update mysql ''" trove

Create a trove image.

Create an image for the type of database you want to use, for example, MySQL, MongoDB,
or Cassandra. This image must have the trove guest agent installed, and it must have the
trove-guestagent.conf file configured to connect to your OpenStack environment. To
correctly configure the trove-guestagent.conf file, follow these steps on the guest
instance you are using to build your image:

Add the following lines to trove-guestagent.conf:
rabbit_host = controller
rabbit_password = RABBIT_PASS
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
trove_auth_url = http://controller:35357/v2.0
log_file = trove-guestagent.log

Update the datastore and version to use the specific image with the trove-manage
command.
# trove-manage datastore_update datastore_name datastore_version
# trove-manage datastore_version_update datastore_name version_name \
  datastore_manager glance_image_id packages active

This example shows you how to create a MySQL datastore with version 5.5:
# trove-manage datastore_update mysql ''
# trove-manage datastore_version_update mysql 5.5 mysql glance_image_ID mysql-server-5.5 1
# trove-manage datastore_update mysql 5.5

Upload post-provisioning configuration validation rules:
# trove-manage db_load_datastore_config_parameters datastore_name version_name \
  /etc/datastore_name/validation-rules.json

Example of uploading rules for the MySQL datastore:
# trove-manage db_load_datastore_config_parameters \
  mysql 5.5 "$PYBASEDIR"/trove/templates/mysql/validation-rules.json

You must register the Database module with the Identity service so that other OpenStack
services can locate it. Register the service and specify the endpoint:
$ keystone service-create --name trove --type database \
  --description "OpenStack Database Service"
$ keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ trove / {print $2}') \
  --publicurl http://controller:8779/v1.0/%\(tenant_id\)s \
  --internalurl http://controller:8779/v1.0/%\(tenant_id\)s \
  --adminurl http://controller:8779/v1.0/%\(tenant_id\)s \
  --region regionOne

Restart the Database services:
# service trove-api restart
# service trove-taskmanager restart
# service trove-conductor restart

Start the Database services and configure them to start when the system boots:
# systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
# systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service

On SLES:
# service openstack-trove-api start
# service openstack-trove-taskmanager start
# service openstack-trove-conductor start
# chkconfig openstack-trove-api on
# chkconfig openstack-trove-taskmanager on
# chkconfig openstack-trove-conductor on

On openSUSE:
# systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
# systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
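The per-distribution commands above differ only in the init system. The following dry-run helper is a sketch of that dispatch (it prints the commands rather than executing them, so it is safe to run anywhere; the service names are taken from this guide):

```shell
# Print the enable/start commands appropriate for the host's init system.
enable_and_start() {
    svc="$1"
    if command -v systemctl >/dev/null 2>&1; then
        # systemd hosts (RHEL 7 family, recent openSUSE)
        echo "systemctl enable ${svc}.service"
        echo "systemctl start ${svc}.service"
    else
        # SysV hosts (for example, SLES 11)
        echo "chkconfig ${svc} on"
        echo "service ${svc} start"
    fi
}

for svc in openstack-trove-api openstack-trove-taskmanager openstack-trove-conductor; do
    enable_and_start "$svc"
done
```

To actually apply the commands instead of printing them, replace each echo with the command itself.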
diff --git a/doc/training-guides/basic-install-guide/section_trove-verify.xml b/doc/training-guides/basic-install-guide/section_trove-verify.xml deleted file mode 100644 index 1cfe1169..00000000 --- a/doc/training-guides/basic-install-guide/section_trove-verify.xml +++ /dev/null @@ -1,39 +0,0 @@ - -
Verify the Database service installation

To verify that the Database service is installed and configured correctly, try executing
a Trove command:

Source the demo-openrc.sh file:
$ source ~/demo-openrc.sh

Retrieve the Trove instance list:
$ trove list
You should see output similar to this:
+----+------+-----------+-------------------+--------+-----------+------+
| id | name | datastore | datastore_version | status | flavor_id | size |
+----+------+-----------+-------------------+--------+-----------+------+
+----+------+-----------+-------------------+--------+-----------+------+

Assuming you have created an image for the type of database you want, and have updated
the datastore to use that image, you can now create a Trove instance (database). To do
this, use the trove create command.

This example shows you how to create a MySQL 5.5 database:
$ trove create name 2 --size=2 --databases DBNAME \
  --users USER:PASSWORD --datastore_version mysql-5.5 \
  --datastore mysql
diff --git a/doc/training-guides/st-training-guides.xml b/doc/training-guides/st-training-guides.xml index ee3261d4..74d7c270 100644 --- a/doc/training-guides/st-training-guides.xml +++ b/doc/training-guides/st-training-guides.xml @@ -105,5 +105,4 @@ -