Updates basic install guides

Update the basic install guides, derived from the install guides. Needed
as a rough draft for the basic install guides POC, as discussed during
the OpenStack Kilo summit.

Change-Id: Iacbc0297ffe26932a1c1fc847554796c478250f1
Pranav Salunke 2014-11-14 14:14:09 +01:00
parent 016b9561f8
commit 688a2a9f25
85 changed files with 5299 additions and 1506 deletions

View File

@@ -3,14 +3,13 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="reserved_uids">
xml:id="reserved_user_ids">
<title>Reserved user IDs</title>
<para>
In OpenStack, certain user IDs are reserved and used to run
specific OpenStack services and own specific OpenStack
files. These users are set up according to the distribution
packages. The following table gives an overview.
OpenStack reserves certain user IDs to run specific services and
own specific files. These user IDs are set up according to the
distribution packages. The following table gives an overview.
</para>
<note os="debian;opensuse;sles;ubuntu">


@@ -1,85 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<book xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="0.1"
xml:id="openstack-basic-install-manual-juno">
<title>OpenStack Installation Guide for
<phrase os="ubuntu">Ubuntu 12.04/14.04 (LTS)</phrase>
</title>
<?rax
status.bar.text.font.size="40px"
status.bar.text="Juno"?>
<?rax subtitle.font.size="17px" title.font.size="32px"?>
<titleabbrev>OpenStack Installation Guide for
<phrase os="rhel;centos;fedora">Red Hat Enterprise Linux, CentOS, and Fedora</phrase>
<phrase os="ubuntu">Ubuntu 12.04/14.04 (LTS)</phrase>
<phrase os="opensuse">openSUSE and SUSE Linux Enterprise Server</phrase>
<!--phrase os="debian">Debian 7.0 (Wheezy)</phrase-->
</titleabbrev>
<!-- para>
<para>OpenStack® consists of several key
projects that are installed separately but work
together depending on the requirements of the users. These projects
include Compute, Identity Service, Networking, Image
Service, Block Storage, Object Storage, Telemetry,
Orchestration, and Database. These projects can be installed
separately and configured as a stand-alone project or
as connected entities. <phrase
os="debian">This guide walks through the
installation by using packages available through
Debian 7.0 (code name: Wheezy).</phrase>
<phrase os="ubuntu">This guide will walk you through an
installation by using packages available through
Ubuntu 12.04 (LTS) or 14.04 (LTS).</phrase>
<phrase os="rhel;centos;fedora">This guide will show
how to install OpenStack by using packages
available through Fedora 20 as well as with Red Hat
Enterprise Linux and its derivatives through the
EPEL repository.</phrase>
<phrase os="opensuse">This guide shows you how to
install OpenStack by using packages on openSUSE
through the Open Build Service Cloud
repository.</phrase> Explanations of configuration
options and sample configuration files are
included.</para>
</para -->
<!--revhistory>
<revision>
<date>2014-07-16</date>
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Start documentation for Icehouse.
</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
</revhistory-->
<!-- Chapters are referred from the book file through these
include statements. You can add additional chapters using
these types of statements. -->
<!-- xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/ch_preface.xml"/ -->
<xi:include href="ch_overview.xml"/>
<xi:include href="ch_basics.xml"/>
<xi:include href="ch_debconf.xml"/>
<xi:include href="ch_keystone.xml"/>
<xi:include href="ch_clients.xml"/>
<xi:include href="ch_glance.xml"/>
<xi:include href="ch_nova.xml"/>
<xi:include href="ch_networking.xml"/>
<xi:include href="ch_horizon.xml"/>
<xi:include href="ch_cinder.xml"/>
<xi:include href="ch_swift.xml"/>
<xi:include href="ch_heat.xml"/>
<xi:include href="ch_ceilometer.xml"/>
<xi:include href="ch_trove.xml"/>
<xi:include href="ch_launch-instance.xml"/>
<xi:include href="app_reserved_uids.xml"/>
<!-- xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/app_support.xml"/-->
<glossary role="auto"/>
</book>


@@ -0,0 +1,53 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_basic_environment">
<?dbhtml stop-chunking?>
<title>Basic environment</title>
<note>
<para>
The trunk version of this guide focuses on the future Juno
release and will not work for the current Icehouse release. If
you want to install Icehouse, you must use the <link
xlink:href="http://docs.openstack.org">Icehouse version</link>
of this guide instead.
</para>
</note>
<para>This chapter explains how to configure each node in the
<link linkend="architecture_example-architectures">example architectures</link>
including the <link linkend="example-architecture-with-legacy-networking">
two-node architecture with legacy networking</link> and
<link linkend="example-architecture-with-neutron-networking">three-node
architecture with OpenStack Networking (neutron)</link>.</para>
<note>
<para>Although most environments include OpenStack Identity, Image Service,
Compute, at least one networking service, and the dashboard, OpenStack
Object Storage can operate independently of most other services. If your
use case only involves Object Storage, you can skip to
<xref linkend="ch_swift"/>. However, the dashboard will not run without
at least OpenStack Image Service and Compute.</para>
</note>
<note>
<para>You must use an account with administrative privileges to configure
each node. Either run the commands as the <literal>root</literal> user
or configure the <literal>sudo</literal> utility.</para>
</note>
<note>
<para>
The <command>systemctl enable</command> call on openSUSE outputs
a warning message when the service uses SysV Init scripts
instead of native systemd files. This warning can be ignored.
</para>
</note>
<xi:include href="section_basics-prerequisites.xml"/>
<xi:include href="section_basics-security.xml"/>
<xi:include href="section_basics-networking.xml"/>
<xi:include href="section_basics-ntp.xml"/>
<xi:include href="section_basics-packages.xml"/>
<xi:include href="section_basics-database.xml"/>
<xi:include href="section_basics-queue.xml"/>
</chapter>
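The note above says every node must be configured with administrative privileges, either as `root` or through `sudo`. A minimal shell sketch of that check (a hypothetical helper, not part of the guide; on Linux, UID 0 is root):

```shell
# Hypothetical helper: given a numeric UID, report whether the install
# steps can run directly or need sudo/root (UID 0 is root on Linux).
check_privileges() {
  if [ "$1" -eq 0 ]; then
    echo "running as root: proceed with the install steps"
  else
    echo "not root: prefix commands with sudo or switch to root"
  fi
}

check_privileges 0      # e.g. after 'su -'
check_privileges 1000   # a typical unprivileged login
```

In practice you would call it with `$(id -u)` for the current user.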


@@ -0,0 +1,37 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_basic_networking">
<title>Add a networking component</title>
<para>This chapter explains how to install and configure either
OpenStack Networking (neutron) or the legacy <systemitem
class="service">nova-network</systemitem> networking service.
The <systemitem class="service">nova-network</systemitem> service
enables you to deploy one network type per instance and is
suitable for basic network functionality. OpenStack Networking
enables you to deploy multiple network types per instance and
includes <glossterm baseform="plug-in">plug-ins</glossterm> for a
variety of products that support <glossterm>virtual
networking</glossterm>.</para>
<para>For more information, see the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html"
>Networking</link> chapter of the <citetitle>OpenStack Cloud
Administrator Guide</citetitle>.</para>
<section xml:id="section_neutron-networking">
<title>OpenStack Networking (neutron)</title>
<xi:include href="section_neutron-concepts.xml"/>
<xi:include href="section_neutron-controller-node.xml"/>
<xi:include href="section_neutron-network-node.xml"/>
<xi:include href="section_neutron-compute-node.xml"/>
<xi:include href="section_neutron-initial-networks.xml"/>
</section>
<section xml:id="section_networking_next_steps">
<title>Next steps</title>
<para>Your OpenStack environment now includes the core components
necessary to launch a basic instance. You can <link
linkend="launch-instance">launch an instance</link> or add
more OpenStack services to your environment.</para>
</section>
</chapter>


@@ -8,8 +8,7 @@
<para>Telemetry provides a framework for monitoring and metering
the OpenStack cloud. It is also known as the ceilometer
project.</para>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_getstart_telemetry.xml"/>
<xi:include href="section_ceilometer-install.xml"/>
<xi:include href="section_ceilometer-controller.xml"/>
<xi:include href="section_ceilometer-nova.xml"/>
<xi:include href="section_ceilometer-glance.xml"/>
<xi:include href="section_ceilometer-cinder.xml"/>


@@ -5,22 +5,25 @@
version="5.0"
xml:id="ch_cinder">
<title>Add the Block Storage service</title>
<para>The OpenStack Block Storage service works through the
interaction of a series of daemon processes named <systemitem
role="process">cinder-*</systemitem> that reside persistently on
the host machine or machines. You can run the binaries from a
single node or across multiple nodes. You can also run them on the
same node as other OpenStack services. The following sections
introduce Block Storage service components and concepts. They will also show
you how to configure and install the Block Storage service.</para>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_getstart_block-storage.xml"/>
<xi:include href="section_cinder-controller.xml"/>
<xi:include href="section_cinder-node.xml"/>
<para>The OpenStack Block Storage service provides block storage devices
to instances using various backends. The Block Storage API and scheduler
services run on the controller node and the volume service runs on one
or more storage nodes. Storage nodes provide volumes to instances using
local block storage devices or SAN/NAS backends with the appropriate
drivers. For more information, see the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/section_volume-drivers.html"
><citetitle>Configuration Reference</citetitle></link>.</para>
<note>
<para>This chapter omits the backup manager because it depends on the
Object Storage service.</para>
</note>
<xi:include href="section_cinder-controller-node.xml"/>
<xi:include href="section_cinder-storage-node.xml"/>
<xi:include href="section_cinder-verify.xml"/>
<section xml:id="section_cinder_next_steps">
<title>Next steps</title>
<para>Your OpenStack environment now includes Block Storage. You can
<link linkend="launch-instance">launch an instance</link> or add more
services to your environment in the next chapters.</para>
services to your environment in the following chapters.</para>
</section>
</chapter>


@@ -13,10 +13,6 @@
<para>Configure the clients on your desktop rather than on the
server so that you have a similar experience to your
users.</para>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_cli_overview.xml"/>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_cli_install.xml"/>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_cli_openrc.xml"/>
<section xml:id="ch_clients_openrc_files">
<title>Create openrc.sh files</title>


@@ -25,7 +25,6 @@
><citetitle>Configuration
Reference</citetitle></link>.</para>
</important>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_getstart_image.xml"/>
<xi:include href="section_glance-install.xml"/>
<xi:include href="section_glance-verify.xml"/>
</chapter>


@@ -7,7 +7,6 @@
<title>Add the Orchestration module</title>
<para>The Orchestration module (heat) uses a heat orchestration template
(HOT) to create and manage cloud resources.</para>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_getstart_orchestration.xml"/>
<xi:include href="section_heat-install.xml"/>
<xi:include href="section_heat-verify.xml"/>
<section xml:id="section_heat_next_steps">


@@ -7,38 +7,38 @@
<title>Add the dashboard</title>
<para>The OpenStack dashboard, also known as <link
xlink:href="https://github.com/openstack/horizon/"
>Horizon</link>, is a web interface that enables cloud
administrators and users to manage various OpenStack resources and
services.</para>
>Horizon</link>, is a Web interface that enables cloud
administrators and users to manage various OpenStack resources and
services.</para>
<para>The dashboard enables web-based interactions with the
OpenStack Compute cloud controller through the OpenStack
APIs.</para>
<para>These instructions show an example deployment, configured with
an Apache web server.</para>
<para>After you <link linkend="install_dashboard">install and
configure the dashboard</link>, you can complete the following
tasks:</para>
<itemizedlist>
<listitem>
<para>Customize your dashboard. See section <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/ch_install-dashboard.html#dashboard-custom-brand"
>Customize the dashboard</link> in the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/"
><citetitle>OpenStack Cloud Administrator
Guide</citetitle></link>.</para>
</listitem>
<listitem>
<para>Set up session storage for the dashboard. See <xref
linkend="dashboard-sessions"/>.</para>
</listitem>
</itemizedlist>
OpenStack Compute cloud controller through the OpenStack
APIs.</para>
<para>Horizon enables you to customize the brand of the dashboard.</para>
<para>Horizon provides a set of core classes and reusable templates and tools.</para>
<para>This example deployment uses an Apache web server.</para>
<xi:include href="section_dashboard-system-reqs.xml"/>
<xi:include href="section_dashboard-install.xml"/>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_dashboard_sessions.xml"/>
<xi:include href="section_dashboard-verify.xml"/>
<section xml:id="section_horizon_next_steps">
<title>Next steps</title>
<para>Your OpenStack environment now includes the dashboard. You can
<link linkend="launch-instance">launch an instance</link> and add more
services to your environment in the following chapters.</para>
<link linkend="launch-instance">launch an instance</link> or add
more services to your environment in the following chapters.</para>
<para>After you install and configure the dashboard, you can
complete the following tasks:</para>
<itemizedlist>
<listitem>
<para>Customize your dashboard. See section <link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/ch_install-dashboard.html#dashboard-custom-brand"
>Customize the dashboard</link> in the <link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/"
><citetitle>OpenStack Cloud Administrator Guide</citetitle></link>
for information on setting up colors, logos, and site titles.</para>
</listitem>
<listitem>
<para>Set up session storage. See section <link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/dashboard-sessions.html#dashboard-sessions">Set up session storage for the dashboard</link>
in the <link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/"
><citetitle>OpenStack Cloud Administrator Guide</citetitle></link> for information on user
session data.</para>
</listitem>
</itemizedlist>
</section>
</chapter>


@@ -5,9 +5,9 @@
version="5.0"
xml:id="ch_keystone">
<title>Add the Identity service</title>
<xi:include href="common/section_keystone-concepts.xml"/>
<xi:include href="section_keystone-install.xml"/>
<xi:include href="section_keystone-users.xml"/>
<xi:include href="section_keystone-services.xml"/>
<xi:include href="section_keystone-verify.xml"/>
<xi:include href="section_keystone-openrc.xml"/>
</chapter>


@@ -5,16 +5,16 @@
version="5.0"
xml:id="ch_networking">
<title>Add a networking component</title>
<para>This chapter explains how to install and configure
<para>This chapter explains how to install and configure either
OpenStack Networking (neutron) or the legacy <systemitem
class="service">nova-network</systemitem> service.
class="service">nova-network</systemitem> networking service.
The <systemitem class="service">nova-network</systemitem> service
enables you to deploy one network type per instance and is
suitable for basic network functionality. OpenStack Networking
enables you to deploy multiple network types per instance and
includes <glossterm baseform="plug-in">plug-ins</glossterm> for a
variety of products that support <glossterm>virtual
networking</glossterm>.</para>
networking</glossterm>.</para>
<para>For more information, see the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html"
>Networking</link> chapter of the <citetitle>OpenStack Cloud
@@ -22,13 +22,10 @@
<section xml:id="section_neutron-networking">
<title>OpenStack Networking (neutron)</title>
<xi:include href="section_neutron-concepts.xml"/>
<section xml:id="section_neutron-networking-ml2">
<title>Modular Layer 2 (ML2) plug-in</title>
<xi:include href="section_neutron-ml2-controller-node.xml"/>
<xi:include href="section_neutron-ml2-network-node.xml"/>
<xi:include href="section_neutron-ml2-compute-node.xml"/>
<xi:include href="section_neutron-initial-networks.xml"/>
</section>
<xi:include href="section_neutron-controller-node.xml"/>
<xi:include href="section_neutron-network-node.xml"/>
<xi:include href="section_neutron-compute-node.xml"/>
<xi:include href="section_neutron-initial-networks.xml"/>
</section>
<section xml:id="section_nova-networking">
<title>Legacy networking (nova-network)</title>
@@ -39,8 +36,8 @@
<section xml:id="section_networking_next_steps">
<title>Next steps</title>
<para>Your OpenStack environment now includes the core components
necessary to launch an instance. You can <link
linkend="launch-instance">launch an instance</link> and add
necessary to launch a basic instance. You can <link
linkend="launch-instance">launch an instance</link> or add
more OpenStack services to your environment.</para>
</section>
</chapter>


@@ -6,7 +6,6 @@
xml:id="ch_nova">
<?dbhtml stop-chunking?>
<title>Add the Compute service</title>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_getstart_compute.xml"/>
<xi:include href="section_nova-controller-install.xml"/>
<xi:include href="section_nova-compute-install.xml"/>
<xi:include href="section_nova-verify.xml"/>


@@ -18,8 +18,6 @@
services. Each service offers an application programming interface
(<glossterm>API</glossterm>) that facilitates this integration. The
following table provides a list of OpenStack services:</para>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/ch_getstart.xml"
xpointer="element(table1)"/>
<para>This guide describes how to deploy these services in a functional
test environment and, by example, teaches you how to build a production
environment.</para>
@@ -29,15 +27,6 @@
<para>Launching a virtual machine or instance involves many interactions
among several services. The following diagram provides the conceptual
architecture of a typical OpenStack environment.</para>
<figure xml:id="conceptual-architecture">
<title>Conceptual architecture</title>
<mediaobject>
<imageobject>
<imagedata contentwidth="6in"
fileref="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/figures/openstack_havana_conceptual_arch.png"/>
</imageobject>
</mediaobject>
</figure>
</section>
<section xml:id="architecture_example-architectures">
<title>Example architectures</title>
@@ -48,8 +37,6 @@
architectures:</para>
<itemizedlist>
<listitem>
<para>Three-node architecture with OpenStack Networking (neutron).
To be implemented</para>
<itemizedlist>
<listitem>
<para>The basic controller node runs the Identity service, Image
@@ -65,27 +52,41 @@
your environment.</para>
</listitem>
<listitem>
<para>The network node runs the Networking plug-in, layer 2 agent,
and several layer 3 agents that provision and operate tenant
networks. Layer 2 services include provisioning of virtual
networks and tunnels. Layer 3 services include routing,
<glossterm baseform="Network Address Translation (NAT)">NAT</glossterm>
, and <glossterm>DHCP</glossterm>. This node also handles
external (internet) connectivity for tenant virtual machines
<para>The network node runs the Networking plug-in, layer-2 agent,
and several layer-3 agents that provision and operate tenant
networks. Layer-2 services include provisioning of virtual
networks and tunnels. Layer-3 services include routing,
<glossterm baseform="Network Address Translation (NAT)">NAT</glossterm>,
and <glossterm>DHCP</glossterm>. This node also handles
external (Internet) connectivity for tenant virtual machines
or instances.</para>
</listitem>
<listitem>
<para>The compute node runs the hypervisor portion of Compute,
which operates tenant virtual machines or instances. By default
Compute uses KVM as the hypervisor. The compute node also runs
the Networking plug-in and layer 2 agent which operate tenant
the Networking plug-in and layer-2 agent which operate tenant
networks and implement security groups. You can run more than
one compute node.</para>
<para>Optionally, the compute node also runs the Telemetry
agent. This component provides additional features for
your environment.</para>
</listitem>
<listitem>
<para>The optional storage node contains the disks that the Block
Storage service uses to serve volumes. You can run more than one
storage node.</para>
<para>Optionally, the storage node also runs the Telemetry
agent. This component provides additional features for
your environment.</para>
</listitem>
</itemizedlist>
<note>
<para>To use optional services, you might need to install
additional nodes, as described in subsequent chapters.</para>
</note>
<figure xml:id="example-architecture-with-neutron-networking">
<title>Three-node architecture with OpenStack Networking (neutron)</title>
<mediaobject>
@@ -97,6 +98,7 @@
</figure>
</listitem>
<listitem>
<para>Two-node architecture with legacy networking (nova-network).</para>
<itemizedlist>
<listitem>
<para>The basic
@@ -126,6 +128,11 @@
your environment.</para>
</listitem>
</itemizedlist>
<note>
<para>To use optional services, you might need to install
additional nodes, as described in subsequent chapters.</para>
</note>
<figure xml:id="example-architecture-with-legacy-networking">
<title>Two-node architecture with legacy networking (nova-network)</title>
<mediaobject>


@@ -0,0 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_sahara">
<title>Add the Data processing service</title>
<para>The Data processing service (sahara) enables users to provide a
scalable data processing stack and associated management interfaces.
This includes provision and operation of data processing clusters as
well as scheduling and operation of data processing jobs.
</para>
<warning><para>This chapter is a work in progress. It may contain
incorrect information, and will be updated frequently.</para></warning>
<xi:include href="section_sahara-install.xml" />
<xi:include href="section_sahara-verify.xml" />
</chapter>


@@ -7,26 +7,22 @@
<title>Add Object Storage</title>
<para>The OpenStack Object Storage services work together to provide
object storage and retrieval through a REST API. For this example
architecture, as a prerequisite, you should already have the Identity
Service, also known as Keystone, installed.</para>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_getstart_object-storage.xml"/>
architecture, you must have already installed the Identity
Service, also known as Keystone.</para>
<xi:include
href="object-storage/section_object-storage-sys-requirements.xml"/>
href="object-storage/section_swift-system-reqs.xml"/>
<xi:include
href="object-storage/section_object-storage-network-planning.xml"/>
href="object-storage/section_swift-example-arch.xml"/>
<xi:include
href="object-storage/section_object-storage-example-install-arch.xml"/>
<xi:include href="object-storage/section_object-storage-install.xml"/>
href="object-storage/section_swift-controller-node.xml"/>
<xi:include
href="object-storage/section_object-storage-install-config-storage-nodes.xml"/>
href="object-storage/section_swift-storage-node.xml"/>
<xi:include
href="object-storage/section_object-storage-install-config-proxy-node.xml"/>
href="object-storage/section_swift-initial-rings.xml"/>
<xi:include
href="object-storage/section_start-storage-node-services.xml"/>
href="object-storage/section_swift-finalize-installation.xml"/>
<xi:include
href="object-storage/section_object-storage-verifying-install.xml"/>
<xi:include
href="object-storage/section_object-storage-adding-proxy-server.xml"/>
href="object-storage/section_swift-verify.xml"/>
<section xml:id="section_swift_next_steps">
<title>Next steps</title>
<para>Your OpenStack environment now includes Object Storage. You can


@@ -10,7 +10,6 @@
integrated project name is <glossterm>trove</glossterm>.</para>
<warning><para>This chapter is a work in progress. It may contain
incorrect information, and will be updated frequently.</para></warning>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_getstart_trove.xml"/>
<xi:include href="section_trove-install.xml" />
<xi:include href="section_trove-verify.xml" />
</chapter>


@@ -1,8 +1,4 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "https://raw.githubusercontent.com/openstack/openstack-manuals/master/doc/common/entities/openstack.ent">
%openstack;
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"


@@ -17,21 +17,13 @@
instructions or <link xlink:href="http://devstack.org"
>http://devstack.org</link> for all-in-one including authentication
with the Identity Service (keystone) v2.0 API.</para>
<warning>
<para>In this guide we recommend installing and configuring the Identity
service so that it implements Identity API v2.0. The Object Storage
service is unaware of domains when implementing Access Control Lists
(ACLs), so you must use the v2.0 API to avoid having identical user
names in different domains, which would enable two users to access
the same objects.</para>
</warning>
<section xml:id="before-you-begin-swift-install">
<title>Before you begin</title>
<para>Have a copy of the operating system installation media available
if you are installing on a new server.</para>
<para>These steps assume you have set up repositories for packages for
your operating system as shown in <link linkend="basics-packages"
>OpenStack Packages</link>.</para>
your operating system as shown in
<link linkend="basics-packages"/>.</para>
<para>This document demonstrates how to install a cluster by using the
following types of nodes:</para>
<itemizedlist>
@@ -69,15 +61,16 @@
the <literal>swift</literal> user. Use the
<literal>service</literal> tenant and give the user the
<literal>admin</literal> role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name=swift --pass=<replaceable>SWIFT_PASS</replaceable> \
--email=<replaceable>swift@example.com</replaceable></userinput>
<prompt>$</prompt> <userinput>keystone user-role-add --user=swift --tenant=service --role=admin</userinput></screen>
<screen><prompt>$</prompt> <userinput>keystone user-create --name swift --pass <replaceable>SWIFT_PASS</replaceable></userinput>
<prompt>$</prompt> <userinput>keystone user-role-add --user swift --tenant service --role admin</userinput></screen>
<para>Replace <replaceable>SWIFT_PASS</replaceable> with a
suitable password.</para>
</step>
<step>
<para>Create a service entry for the Object Storage
Service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name=swift --type=object-store \
--description="OpenStack Object Storage"</userinput>
<screen><prompt>$</prompt> <userinput>keystone service-create --name swift --type object-store \
--description "OpenStack Object Storage"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
@@ -98,10 +91,11 @@
API. In this guide, the <literal>controller</literal> host
name is used:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ object-store / {print $2}') \
--publicurl='http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--internalurl='http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--adminurl=http://<replaceable>controller</replaceable>:8080</userinput>
--service-id $(keystone service-list | awk '/ object-store / {print $2}') \
--publicurl 'http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--internalurl 'http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--adminurl http://<replaceable>controller</replaceable>:8080 \
--region regionOne</userinput>
<computeroutput>+-------------+---------------------------------------------------+
| Property | Value |
+-------------+---------------------------------------------------+

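The `awk` filter in the `keystone endpoint-create` command above can be exercised on its own. The row below is a mock of one line of `keystone service-list` output (the service ID is invented); with the default whitespace field splitting, the ID is field `$2` of the table row:

```shell
# Mock one row of `keystone service-list` output (the ID is invented)
# and extract the service ID the same way endpoint-create does above.
mock_row='| 1f2d3c4b5a697887960504132231404f | swift | object-store | OpenStack Object Storage |'
service_id=$(echo "$mock_row" | awk '/ object-store / {print $2}')
echo "$service_id"
```

The pattern ` object-store ` (with surrounding spaces) avoids matching a service whose name merely contains that substring.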

@@ -1,8 +1,4 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"


@@ -14,11 +14,23 @@
swift-container swift-container-replicator swift-container-updater swift-container-auditor \
swift-account swift-account-replicator swift-account-reaper swift-account-auditor; do \
service $service start; done</userinput></screen>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>for service in \
<screen os="fedora;rhel;centos"><prompt>#</prompt> <userinput>for service in \
openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor \
openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor \
openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor; do \
systemctl enable $service.service; systemctl start $service.service; done</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>for service in \
openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor \
openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor \
openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor; do \
service $service start; chkconfig $service on; done</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>for service in \
openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor \
openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor \
openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor; do \
systemctl enable $service.service; systemctl start $service.service; done</userinput></screen>
<note>
<para>To start all swift services at once, run the command:</para>
<screen><prompt>#</prompt> <userinput>swift-init all start</userinput></screen>


@ -0,0 +1,195 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-install-controller-node">
<title>Install and configure the controller node</title>
<para>This section describes how to install and configure the proxy
service that handles requests for the account, container, and object
services operating on the storage nodes. For simplicity, this
guide installs and configures the proxy service on the controller node.
However, you can run the proxy service on any node with network
connectivity to the storage nodes. Additionally, you can install and
configure the proxy service on multiple nodes to increase performance
and redundancy. For more information, see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
<procedure>
<title>To configure prerequisites</title>
<para>The proxy service relies on an authentication and authorization
mechanism such as the Identity service. Unlike other services, it also
offers an internal mechanism that allows it to operate without any
other OpenStack services. For simplicity, however, this guide
references the Identity service in <xref linkend="ch_keystone"/>. Before
you configure the Object Storage service, you must create Identity
service credentials, including endpoints.</para>
<note>
<para>The Object Storage service does not use a SQL database on
the controller node.</para>
</note>
<step>
<para>To create the Identity service credentials, complete these
steps:</para>
<substeps>
<step>
<para>Create a <literal>swift</literal> user:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name swift --pass <replaceable>SWIFT_PASS</replaceable></userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | d535e5cbd2b74ac7bfb97db9cced3ed6 |
| name | swift |
| username | swift |
+----------+----------------------------------+</computeroutput></screen>
<para>Replace <replaceable>SWIFT_PASS</replaceable> with a suitable
password.</para>
</step>
<step>
<para>Link the <literal>swift</literal> user to the
<literal>service</literal> tenant and <literal>admin</literal>
role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user swift --tenant service --role admin</userinput></screen>
<note>
<para>This command provides no output.</para>
</note>
</step>
<step>
<para>Create the <literal>swift</literal> service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name swift --type object-store \
--description "OpenStack Object Storage"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Object Storage |
| enabled | True |
| id | 75ef509da2c340499d454ae96a2c5c34 |
| name | swift |
| type | object-store |
+-------------+----------------------------------+</computeroutput></screen>
</step>
</substeps>
</step>
<step>
<para>Create the Identity service endpoints:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ object-store / {print $2}') \
--publicurl 'http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--internalurl 'http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--adminurl http://<replaceable>controller</replaceable>:8080 \
--region regionOne</userinput>
<computeroutput>+-------------+---------------------------------------------------+
| Property | Value |
+-------------+---------------------------------------------------+
| adminurl | http://controller:8080/ |
| id | af534fb8b7ff40a6acf725437c586ebe |
| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| publicurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| region | regionOne |
| service_id | 75ef509da2c340499d454ae96a2c5c34 |
+-------------+---------------------------------------------------+</computeroutput></screen>
</step>
</procedure>
<procedure>
<title>To install and configure the controller node components</title>
<step>
<para>Install the packages:</para>
<note>
<para>Complete OpenStack environments already include some of these
packages.</para>
</note>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install swift swift-proxy python-swiftclient python-keystoneclient memcached</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token memcached</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-swift-proxy python-swiftclient python-keystoneclient memcached python-xml</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>Create the <literal>/etc/swift</literal> directory.</para>
</step>
<step os="ubuntu;debian;rhel;centos;fedora">
<para>Obtain the proxy service configuration file from the Object
Storage source repository:</para>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/proxy-server.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/swift/proxy-server.conf</filename>
file and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
the bind port, user, and configuration directory:</para>
<programlisting language="ini">[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift</programlisting>
</step>
<step>
<para>In the <literal>[pipeline:main]</literal> section, enable
the appropriate modules:</para>
<programlisting language="ini">[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server</programlisting>
<note>
<para>For more information on other modules that enable
additional features, see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
</note>
</step>
<step>
<para>In the <literal>[app:proxy-server]</literal> section, enable
account management:</para>
<programlisting language="ini">[app:proxy-server]
...
allow_account_management = true
account_autocreate = true</programlisting>
</step>
<step>
<para>In the <literal>[filter:keystoneauth]</literal> section,
configure the operator roles:</para>
<programlisting language="ini">[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,_member_</programlisting>
<note os="ubuntu;debian;rhel;centos;fedora">
<para>You might need to uncomment this section.</para>
</note>
</step>
<step>
<para>In the <literal>[filter:authtoken]</literal> section,
configure Identity service access:</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = swift
admin_password = <replaceable>SWIFT_PASS</replaceable>
delay_auth_decision = true</programlisting>
<para>Replace <replaceable>SWIFT_PASS</replaceable> with the
password you chose for the <literal>swift</literal> user in the
Identity service.</para>
<note os="ubuntu;debian;rhel;centos;fedora">
<para>You might need to uncomment this section.</para>
</note>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[filter:cache]</literal> section, configure
the <application>memcached</application> location:</para>
<programlisting language="ini">[filter:cache]
...
memcache_servers = 127.0.0.1:11211</programlisting>
</step>
</substeps>
</step>
</procedure>
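As an illustrative aside (not part of the official procedure), the edits above can also be scripted. The following Python sketch uses <literal>configparser</literal> to write the options shown in this section to a file and read them back; the path is a temporary file here, and on a real node you would target <filename>/etc/swift/proxy-server.conf</filename> after backing it up. Note that regenerating the file this way drops the comments and extra sections of the sample file, so it is a sketch of the settings, not a replacement for editing the sample:

```python
import configparser
import tempfile

# Options from the steps above; "controller" and SWIFT_PASS would be
# placeholders in the [filter:authtoken] section, omitted here.
SETTINGS = {
    "DEFAULT": {"bind_port": "8080", "user": "swift", "swift_dir": "/etc/swift"},
    "app:proxy-server": {"allow_account_management": "true",
                         "account_autocreate": "true"},
    "filter:keystoneauth": {"use": "egg:swift#keystoneauth",
                            "operator_roles": "admin,_member_"},
    "filter:cache": {"memcache_servers": "127.0.0.1:11211"},
}

conf = configparser.ConfigParser()
for section, options in SETTINGS.items():
    if section != "DEFAULT" and not conf.has_section(section):
        conf.add_section(section)
    for key, value in options.items():
        conf.set(section, key, value)

# Write to a temporary file; on a real node this would be
# /etc/swift/proxy-server.conf.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    conf.write(f)
    path = f.name

# Read the file back and confirm a setting survived the round trip.
check = configparser.ConfigParser()
check.read(path)
print(check.get("app:proxy-server", "account_autocreate"))
```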
</section>


@ -0,0 +1,56 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-example-arch">
<title>Example architecture</title>
<para>In a production environment, the Object Storage service requires
at least two proxy nodes and five storage nodes. For simplicity, this
guide uses a minimal architecture with the proxy service running on
the existing OpenStack controller node and two storage nodes. The
concepts in this section nonetheless apply to larger deployments.</para>
<itemizedlist>
<listitem>
<para>Node: A host machine that runs one or more OpenStack
Object Storage services.</para>
</listitem>
<listitem>
<para>Proxy node: Runs proxy services.</para>
</listitem>
<listitem>
<para>Storage node: Runs account, container, and object
services. Contains the SQLite databases.</para>
</listitem>
<listitem>
<para>Ring: A set of mappings between OpenStack Object
Storage data and physical devices.</para>
</listitem>
<listitem>
<para>Replica: A copy of an object. By default, three
copies are maintained in the cluster.</para>
</listitem>
<listitem>
<para>Zone (optional): A logically separate section of the cluster,
related to independent failure characteristics.</para>
</listitem>
<listitem>
<para>Region (optional): A logically separate section of
the cluster, representing distinct physical locations
such as cities or countries. Similar to a zone, but
representing physical rather than logical separation.</para>
</listitem>
</itemizedlist>
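As an illustrative aside (not part of the installation procedure), the ring and replica concepts above — mapping object data to partitions and spreading replicas across zones — can be sketched in a few lines of Python. The device names and placement logic here are deliberately simplified and hypothetical; the real ring builder is far more sophisticated:

```python
import hashlib

# Hypothetical devices for a small two-node cluster:
# (region, zone, device) tuples.
DEVICES = [(1, 1, "object1/sdb1"), (1, 1, "object1/sdc1"),
           (1, 2, "object2/sdb1"), (1, 2, "object2/sdc1")]
ALL_ZONES = {zone for _, zone, _ in DEVICES}

PART_POWER = 10   # 2^10 = 1024 partitions
REPLICAS = 3      # copies of each object kept in the cluster

def partition(name):
    """Map an object path to one of 2^PART_POWER partitions via MD5."""
    digest = hashlib.md5(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)

def place_replicas(name):
    """Toy placement: starting at the partition, pick devices while
    spreading replicas across as many distinct zones as possible."""
    part = partition(name)
    chosen, used_zones = [], set()
    for offset in range(len(DEVICES)):
        region, zone, device = DEVICES[(part + offset) % len(DEVICES)]
        if zone not in used_zones or used_zones == ALL_ZONES:
            chosen.append(device)
            used_zones.add(zone)
        if len(chosen) == REPLICAS:
            break
    return chosen

print(partition("AUTH_test/container/object"))   # an integer in [0, 1024)
print(place_replicas("AUTH_test/container/object"))
```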
<para>To increase reliability and performance, you can add
more proxy servers.</para>
<para>The following diagram shows one possible architecture for a
minimal production environment:</para>
<para>
<inlinemediaobject>
<imageobject>
<imagedata fileref="../figures/swift_install_arch.png"/>
</imageobject>
</inlinemediaobject>
</para>
</section>


@ -0,0 +1,134 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-finalize-installation">
<title>Finalize installation</title>
<procedure>
<title>Configure hashes and default storage policy</title>
<step os="ubuntu;debian;rhel;centos;fedora">
<para>Obtain the <filename>/etc/swift/swift.conf</filename> file from
the Object Storage source repository:</para>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/swift.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/swift.conf-sample</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/swift/swift.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[swift-hash]</literal> section, configure
the hash path prefix and suffix for your environment.</para>
<programlisting language="ini">[swift-hash]
...
swift_hash_path_prefix = <replaceable>HASH_PATH_PREFIX</replaceable>
swift_hash_path_suffix = <replaceable>HASH_PATH_SUFFIX</replaceable></programlisting>
<para>Replace <replaceable>HASH_PATH_PREFIX</replaceable> and
<replaceable>HASH_PATH_SUFFIX</replaceable> with unique
values.</para>
<warning>
<para>Keep these values secret and do not change or lose
them.</para>
</warning>
</step>
<step>
<para>In the <literal>[storage-policy:0]</literal> section,
configure the default storage policy:</para>
<programlisting language="ini">[storage-policy:0]
...
name = Policy-0
default = yes</programlisting>
</step>
</substeps>
</step>
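One way to generate suitable unique values, and to see why they must never change, is sketched below (an illustrative aside; any securely recorded source of randomness will do). The on-disk location of every object is derived from a hash salted with these values, so changing them effectively loses every object in the cluster. The `path_hash` function mimics, but is not identical to, the hashing Object Storage performs:

```python
import hashlib
import secrets

# Generate candidate values once, then record them securely; changing
# them later invalidates every object location in the cluster.
prefix = secrets.token_hex(16)
suffix = secrets.token_hex(16)

def path_hash(account, container, obj, pre, suf):
    """Mimics how Object Storage salts object paths with the
    configured prefix and suffix before hashing."""
    raw = f"{pre}/{account}/{container}/{obj}{suf}".encode()
    return hashlib.md5(raw).hexdigest()

h1 = path_hash("AUTH_test", "c1", "o1", prefix, suffix)
h2 = path_hash("AUTH_test", "c1", "o1", "other-prefix", suffix)
print(h1)
# A different prefix maps the very same object to a different hash,
# and therefore to a different on-disk location.
print(h1 != h2)
```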
<step>
<para>Copy the <filename>swift.conf</filename> file to
the <literal>/etc/swift</literal> directory on each storage node
and any additional nodes running the proxy service.</para>
</step>
<step>
<para>On all nodes, ensure proper ownership of the configuration
directory:</para>
<screen><prompt>#</prompt> <userinput>chown -R swift:swift /etc/swift</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>On the controller node and any other nodes running the proxy
service, restart the Object Storage proxy service including
its dependencies:</para>
<screen><prompt>#</prompt> <userinput>service memcached restart</userinput>
<prompt>#</prompt> <userinput>service swift-proxy restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>On the controller node and any other nodes running the proxy
service, start the Object Storage proxy service including its
dependencies and configure them to start when the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-swift-proxy.service memcached.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-proxy.service memcached.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service memcached start</userinput>
<prompt>#</prompt> <userinput>service openstack-swift-proxy start</userinput>
<prompt>#</prompt> <userinput>chkconfig memcached on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-swift-proxy on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-swift-proxy.service memcached.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-proxy.service memcached.service</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>On the storage nodes, start the Object Storage services:</para>
<screen><prompt>#</prompt> <userinput>swift-init all start</userinput></screen>
<note>
<para>The storage node runs many Object Storage services and the
<command>swift-init</command> command makes them easier to
manage. You can ignore errors from services not running on the
storage node.</para>
</note>
</step>
<step os="rhel;centos;fedora">
<para>On the storage nodes, start the Object Storage services and
configure them to start when the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service</userinput></screen>
</step>
<step os="sles;opensuse">
<para>On the storage nodes, start the Object Storage services and
configure them to start when the system boots:</para>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>for service in \
openstack-swift-account openstack-swift-account-auditor \
openstack-swift-account-reaper openstack-swift-account-replicator; do \
service $service start; chkconfig $service on; done</userinput>
<prompt>#</prompt> <userinput>for service in \
openstack-swift-container openstack-swift-container-auditor \
openstack-swift-container-replicator openstack-swift-container-updater; do \
service $service start; chkconfig $service on; done</userinput>
<prompt>#</prompt> <userinput>for service in \
openstack-swift-object openstack-swift-object-auditor \
openstack-swift-object-replicator openstack-swift-object-updater; do \
service $service start; chkconfig $service on; done</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service</userinput></screen>
</step>
</procedure>
</section>


@ -0,0 +1,190 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-initial-rings">
<title>Create initial rings</title>
<para>Before starting the Object Storage services, you must create
the initial account, container, and object rings. The ring builder
creates configuration files that each node uses to determine and
deploy the storage architecture. For simplicity, this guide uses one
region and one zone with 2^10 (1024) maximum partitions, 3 replicas of
each object, and a minimum of 1 hour between successive moves of any
partition. For Object Storage, a partition indicates a directory on a
storage device rather than a conventional partition table. For more information,
see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
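The ring-builder arguments used in the sections below (`create 10 3 1`) encode exactly these choices. As a small illustrative aside, the arithmetic for this guide's example architecture (four equally weighted devices) works out as follows:

```python
# swift-ring-builder <file> create <part_power> <replicas> <min_part_hours>
part_power = 10      # 2^10 partitions
replicas = 3         # copies of each partition
min_part_hours = 1   # minimum hours between moves of a given partition

partitions = 2 ** part_power
print(partitions)                 # total partitions in the ring

# Each partition is stored `replicas` times, so the ring assigns
# partitions * replicas partition-replicas across all devices.
print(partitions * replicas)

# With 4 devices of equal weight, a balanced ring places an equal
# share of partition-replicas on each device.
print(partitions * replicas // 4)
```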
<section xml:id="swift-initial-rings-account">
<title>Account ring</title>
<para>The account server uses the account ring to maintain lists
of containers.</para>
<procedure>
<title>To create the ring</title>
<note>
<para>Perform these steps on the controller node.</para>
</note>
<step>
<para>Change to the <literal>/etc/swift</literal> directory.</para>
</step>
<step>
<para>Create the base <filename>account.builder</filename> file:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder create 10 3 1</userinput></screen>
</step>
<step>
<para>Add each storage node to the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder \
add r1z1-<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>:6002/<replaceable>DEVICE_NAME</replaceable> <replaceable>DEVICE_WEIGHT</replaceable></userinput></screen>
<para>Replace
<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the management network on the storage node.
Replace <replaceable>DEVICE_NAME</replaceable> with a storage
device name on the same storage node. For example, using the first
storage node in
<xref linkend="swift-install-storage-node"/> with the
<literal>/dev/sdb1</literal> storage device and weight of 100:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder add r1z1-10.0.0.51:6002/sdb1 100</userinput></screen>
<para>Repeat this command for each storage device on each storage
node. The example architecture requires four variations of this
command.</para>
</step>
<step>
<para>Verify the ring contents:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder</userinput>
<computeroutput>account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6002 10.0.0.51 6002 sdb1 100.00 768 0.00
1 1 1 10.0.0.51 6002 10.0.0.51 6002 sdc1 100.00 768 0.00
2 1 1 10.0.0.52 6002 10.0.0.52 6002 sdb1 100.00 768 0.00
3 1 1 10.0.0.52 6002 10.0.0.52 6002 sdc1 100.00 768 0.00</computeroutput></screen>
</step>
<step>
<para>Rebalance the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder rebalance</userinput></screen>
<note>
<para>This process can take a while.</para>
</note>
</step>
</procedure>
</section>
<section xml:id="swift-initial-rings-container">
<title>Container ring</title>
<para>The container server uses the container ring to maintain lists
of objects. However, it does not track object locations.</para>
<procedure>
<title>To create the ring</title>
<note>
<para>Perform these steps on the controller node.</para>
</note>
<step>
<para>Change to the <literal>/etc/swift</literal> directory.</para>
</step>
<step>
<para>Create the base <filename>container.builder</filename>
file:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder create 10 3 1</userinput></screen>
</step>
<step>
<para>Add each storage node to the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder \
add r1z1-<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>:6001/<replaceable>DEVICE_NAME</replaceable> <replaceable>DEVICE_WEIGHT</replaceable></userinput></screen>
<para>Replace
<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the management network on the storage node.
Replace <replaceable>DEVICE_NAME</replaceable> with a storage
device name on the same storage node. For example, using the first
storage node in
<xref linkend="swift-install-storage-node"/> with the
<literal>/dev/sdb1</literal> storage device and weight of 100:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder add r1z1-10.0.0.51:6001/sdb1 100</userinput></screen>
<para>Repeat this command for each storage device on each storage
node. The example architecture requires four variations of this
command.</para>
</step>
<step>
<para>Verify the ring contents:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder</userinput>
<computeroutput>container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6001 10.0.0.51 6001 sdb1 100.00 768 0.00
1 1 1 10.0.0.51 6001 10.0.0.51 6001 sdc1 100.00 768 0.00
2 1 1 10.0.0.52 6001 10.0.0.52 6001 sdb1 100.00 768 0.00
3 1 1 10.0.0.52 6001 10.0.0.52 6001 sdc1 100.00 768 0.00</computeroutput></screen>
</step>
<step>
<para>Rebalance the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder rebalance</userinput></screen>
<note>
<para>This process can take a while.</para>
</note>
</step>
</procedure>
</section>
<section xml:id="swift-initial-rings-object">
<title>Object ring</title>
<para>The object server uses the object ring to maintain lists
of object locations on local devices.</para>
<procedure>
<title>To create the ring</title>
<note>
<para>Perform these steps on the controller node.</para>
</note>
<step>
<para>Change to the <literal>/etc/swift</literal> directory.</para>
</step>
<step>
<para>Create the base <filename>object.builder</filename> file:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder create 10 3 1</userinput></screen>
</step>
<step>
<para>Add each storage node to the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder \
add r1z1-<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>:6000/<replaceable>DEVICE_NAME</replaceable> <replaceable>DEVICE_WEIGHT</replaceable></userinput></screen>
<para>Replace
<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the management network on the storage node.
Replace <replaceable>DEVICE_NAME</replaceable> with a storage
device name on the same storage node. For example, using the first
storage node in
<xref linkend="swift-install-storage-node"/> with the
<literal>/dev/sdb1</literal> storage device and weight of 100:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder add r1z1-10.0.0.51:6000/sdb1 100</userinput></screen>
<para>Repeat this command for each storage device on each storage
node. The example architecture requires four variations of this
command.</para>
</step>
<step>
<para>Verify the ring contents:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder</userinput>
<computeroutput>object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6000 10.0.0.51 6000 sdb1 100.00 768 0.00
1 1 1 10.0.0.51 6000 10.0.0.51 6000 sdc1 100.00 768 0.00
2 1 1 10.0.0.52 6000 10.0.0.52 6000 sdb1 100.00 768 0.00
3 1 1 10.0.0.52 6000 10.0.0.52 6000 sdc1 100.00 768 0.00</computeroutput></screen>
</step>
<step>
<para>Rebalance the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder rebalance</userinput></screen>
<note>
<para>This process can take a while.</para>
</note>
</step>
</procedure>
</section>
<section xml:id="swift-initial-rings-distribute">
<title>Distribute ring configuration files</title>
<para>Copy the <filename>account.ring.gz</filename>,
<filename>container.ring.gz</filename>, and
<filename>object.ring.gz</filename> files to the
<literal>/etc/swift</literal> directory on each storage node and
any additional nodes running the proxy service.</para>
</section>
</section>


@ -0,0 +1,256 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-install-storage-node">
<title>Install and configure the storage nodes</title>
<para>This section describes how to install and configure storage nodes
that operate the account, container, and object services. For
simplicity, this configuration references two storage nodes, each
containing two empty local block storage devices. Each of the
devices, <literal>/dev/sdb</literal> and <literal>/dev/sdc</literal>,
must contain a suitable partition table with one partition occupying
the entire device. Although the Object Storage service supports any
file system with <glossterm>extended attributes (xattr)</glossterm>,
testing and benchmarking indicate the best performance and reliability
on <glossterm>XFS</glossterm>. For more information on horizontally
scaling your environment, see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
<procedure>
<title>To configure prerequisites</title>
<para>You must configure each storage node before you install and
configure the Object Storage service on it. Similar to the controller
node, each storage node contains one network interface on the
<glossterm>management network</glossterm>. Optionally, each storage
node can contain a second network interface on a separate network for
replication. For more information, see
<xref linkend="ch_basic_environment"/>.</para>
<step>
<para>Configure unique items on the first storage node:</para>
<substeps>
<step>
<para>Configure the management interface:</para>
<para>IP address: 10.0.0.51</para>
<para>Network mask: 255.255.255.0 (or /24)</para>
<para>Default gateway: 10.0.0.1</para>
</step>
<step>
<para>Set the hostname of the node to
<replaceable>object1</replaceable>.</para>
</step>
</substeps>
</step>
<step>
<para>Configure unique items on the second storage node:</para>
<substeps>
<step>
<para>Configure the management interface:</para>
<para>IP address: 10.0.0.52</para>
<para>Network mask: 255.255.255.0 (or /24)</para>
<para>Default gateway: 10.0.0.1</para>
</step>
<step>
<para>Set the hostname of the node to
<replaceable>object2</replaceable>.</para>
</step>
</substeps>
</step>
<step>
<para>Configure shared items on both storage nodes:</para>
<substeps>
<step>
<para>Copy the contents of the <filename>/etc/hosts</filename> file
from the controller node and add the following to it:</para>
<programlisting language="ini"># object1
10.0.0.51 object1
# object2
10.0.0.52 object2</programlisting>
<para>Also add this content to the <filename>/etc/hosts</filename>
file on all other nodes in your environment.</para>
</step>
<step>
<para>Install and configure
<glossterm baseform="Network Time Protocol (NTP)">NTP</glossterm>
using the instructions in
<xref linkend="basics-ntp-other-nodes"/>.</para>
</step>
<step>
<para>Install the supporting utility packages:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install xfsprogs rsync</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install xfsprogs rsync</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install xfsprogs rsync xinetd</userinput></screen>
</step>
<step>
<para>Format the <literal>/dev/sdb1</literal> and
<literal>/dev/sdc1</literal> partitions as XFS:</para>
<screen><prompt>#</prompt> <userinput>mkfs.xfs /dev/sdb1</userinput>
<prompt>#</prompt> <userinput>mkfs.xfs /dev/sdc1</userinput></screen>
</step>
<step>
<para>Create the mount point directory structure:</para>
<screen><prompt>#</prompt> <userinput>mkdir -p /srv/node/sdb1</userinput>
<prompt>#</prompt> <userinput>mkdir -p /srv/node/sdc1</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/fstab</filename> file and add the
following to it:</para>
<programlisting language="ini">/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2</programlisting>
</step>
<step>
<para>Mount the devices:</para>
<screen><prompt>#</prompt> <userinput>mount /srv/node/sdb1</userinput>
<prompt>#</prompt> <userinput>mount /srv/node/sdc1</userinput></screen>
</step>
</substeps>
</step>
<step>
<para>Edit the <filename>/etc/rsyncd.conf</filename> file and add the
following to it:</para>
<programlisting language="ini">uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock</programlisting>
<para>Replace <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the management network on the storage
node.</para>
<note>
<para>The <systemitem role="service">rsync</systemitem> service
requires no authentication, so consider running it on a private
network.</para>
</note>
</step>
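Before starting the rsync service, it can help to confirm the edited file actually defines all three Swift modules. A hedged sketch, not part of the official procedure:

```shell
# Sketch: sanity-check that an rsyncd.conf defines the [account],
# [container], and [object] modules configured above. The path is an
# argument so the check can run against any candidate file.
check_rsync_modules() {
    for module in account container object; do
        grep -q "^\[$module\]" "$1" || { echo "missing [$module]" >&2; return 1; }
    done
    echo "rsync modules OK"
}
```

For example, `check_rsync_modules /etc/rsyncd.conf` prints `rsync modules OK` when all three sections are present.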
<step os="ubuntu;debian">
<para>Edit the <filename>/etc/default/rsync</filename> file and enable
the <systemitem role="service">rsync</systemitem> service:</para>
<programlisting language="ini">RSYNC_ENABLE=true</programlisting>
</step>
<step os="sles;opensuse">
<para>Edit the <filename>/etc/xinetd.d/rsync</filename> file and enable
the <systemitem role="service">rsync</systemitem> service:</para>
<programlisting language="ini">disable = no</programlisting>
</step>
<step os="ubuntu;debian">
<para>Start the <systemitem class="service">rsync</systemitem>
service:</para>
<screen><prompt>#</prompt> <userinput>service rsync start</userinput></screen>
</step>
<step os="rhel;centos;fedora">
<para>Start the <systemitem class="service">rsyncd</systemitem> service
and configure it to start when the system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable rsyncd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start rsyncd.service</userinput></screen>
</step>
<step os="sles;opensuse">
<para>Start the <systemitem class="service">xinetd</systemitem> service
and configure it to start when the system boots:</para>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service xinetd start</userinput>
<prompt>#</prompt> <userinput>chkconfig xinetd on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable xinetd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start xinetd.service</userinput></screen>
</step>
</procedure>
<procedure>
<title>Install and configure storage node components</title>
<note>
<para>Perform these steps on each storage node.</para>
</note>
<step>
<para>Install the packages:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install swift swift-account swift-container swift-object</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-swift-account openstack-swift-container \
openstack-swift-object</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-swift-account openstack-swift-container \
openstack-swift-object python-xml</userinput></screen>
</step>
<step os="ubuntu;debian;rhel;centos;fedora">
      <para>Obtain the account, container, and object service configuration
        files from the Object Storage source repository:</para>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/account-server.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample</userinput></screen>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/container-server.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample</userinput></screen>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/object-server.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample</userinput></screen>
</step>
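The three curl commands differ only in the sample file name. A purely illustrative helper (the URL layout matches the commands above):

```shell
# Sketch: build the raw.githubusercontent.com URL for a Swift sample
# configuration file, given a branch and a file name. Illustrative
# only; the curl commands above are the documented procedure.
sample_url() {
    echo "https://raw.githubusercontent.com/openstack/swift/$1/etc/$2-sample"
}

sample_url stable/juno account-server.conf
# → https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample
```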
<step>
<para>Edit the
<filename>/etc/swift/account-server.conf</filename>,
<filename>/etc/swift/container-server.conf</filename>, and
<filename>/etc/swift/object-server.conf</filename> files and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
bind IP address, bind port, user, configuration directory, and
mount point directory:</para>
<programlisting language="ini">[DEFAULT]
...
bind_ip = <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node</programlisting>
          <para>Replace
            <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
            with the IP address of the management network on the storage
            node. Use bind port 6002 in the account server file, 6001 in
            the container server file, and 6000 in the object server
            file.</para>
</step>
        <step>
          <para>In the <literal>[pipeline:main]</literal> section, enable
            the appropriate modules:</para>
          <programlisting language="ini">[pipeline:main]
pipeline = healthcheck recon account-server</programlisting>
          <para>In the container and object server files, replace
            <literal>account-server</literal> with
            <literal>container-server</literal> or
            <literal>object-server</literal>, respectively.</para>
<note>
<para>For more information on other modules that enable
additional features, see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
</note>
</step>
<step>
<para>In the <literal>[filter:recon]</literal> section, configure
the recon (metrics) cache directory:</para>
<programlisting language="ini">[filter:recon]
...
recon_cache_path = /var/cache/swift</programlisting>
</step>
</substeps>
</step>
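Assembled, the account server file edited above would contain roughly the following fragment (the container and object server files are assumed to differ only in the bind port and the final pipeline module):

```ini
# Illustrative fragment of /etc/swift/account-server.conf after the
# edits above; other default settings from the sample file are omitted.
[DEFAULT]
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
recon_cache_path = /var/cache/swift
```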
<step>
<para>Ensure proper ownership of the mount point directory
structure:</para>
<screen><prompt>#</prompt> <userinput>chown -R swift:swift /srv/node</userinput></screen>
</step>
<step>
<para>Create the <literal>recon</literal> directory and ensure proper
ownership of it:</para>
<screen><prompt>#</prompt> <userinput>mkdir -p /var/cache/swift</userinput>
<prompt>#</prompt> <userinput>chown -R swift:swift /var/cache/swift</userinput></screen>
</step>
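A quick ownership check before starting the services can catch a missed chown; this sketch assumes GNU coreutils `stat` on a Linux storage node:

```shell
# Sketch: report the owning user of a directory, e.g. to confirm that
# /srv/node and /var/cache/swift are owned by swift after the chown
# steps above.
owner_of() {
    stat -c %U "$1"
}
```

For example, `owner_of /srv/node` should print `swift` after the steps above.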
</procedure>
</section>


@ -0,0 +1,103 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-system-reqs">
<?dbhtml stop-chunking?>
<title>System requirements</title>
<para><emphasis role="bold">Hardware</emphasis>: OpenStack Object
Storage is designed to run on commodity hardware.</para>
<note>
<para>When you install only the Object Storage and Identity
Service, you cannot use the dashboard unless you also
install Compute and the Image Service.</para>
</note>
<table rules="all">
<caption>Hardware recommendations</caption>
<col width="20%"/>
<col width="23%"/>
<col width="57%"/>
<thead>
<tr>
<td>Server</td>
<td>Recommended Hardware</td>
<td>Notes</td>
</tr>
</thead>
<tbody>
<tr>
<td><para>Object Storage object servers</para></td>
<td>
          <para>Processor: dual quad core</para>
          <para>Memory: 8 or 12 GB RAM</para>
          <para>Disk space: optimized for cost per GB</para>
          <para>Network: one 1 Gbps network interface card
            (NIC)</para></td>
        <td><para>The amount of disk space depends on how much
            you can fit into the rack efficiently. You
            want to optimize for the best cost per GB
            while still achieving industry-standard failure
            rates. As an example, Rackspace runs Cloud Files
            storage servers on fairly generic 4U servers
            with 24 2T SATA drives and 8 cores of
            processing power. RAID on the storage drives
            is not required and not recommended: Swift's
            disk usage pattern is the worst case possible
            for RAID, and performance degrades very
            quickly with RAID 5 or 6.</para>
          <para>Most services support either a worker or
            concurrency value in the settings. This allows
            the services to make effective use of the cores
            available.</para></td>
</tr>
<tr>
<td><para>Object Storage container/account
servers</para></td>
        <td>
          <para>Processor: dual quad core</para>
          <para>Memory: 8 or 12 GB RAM</para>
          <para>Network: one 1 Gbps network interface card
            (NIC)</para></td>
        <td><para>Optimized for IOPS because the account and
            container services track data in SQLite
            databases.</para></td>
</tr>
<tr>
<td><para>Object Storage proxy server</para></td>
        <td>
          <para>Processor: dual quad core</para>
          <para>Network: one 1 Gbps network interface card
            (NIC)</para></td>
        <td><para>Higher network throughput offers better
            performance for supporting many API
            requests.</para>
          <para>Optimize your proxy servers for best CPU
            performance. The proxy services are more CPU
            and network I/O intensive. If you use 10 Gbps
            networking to the proxy, or terminate SSL
            traffic at the proxy, greater CPU power is
            required.</para></td>
</tr>
</tbody>
</table>
<para><emphasis role="bold">Operating system</emphasis>: OpenStack
Object Storage currently runs on Ubuntu, RHEL, CentOS, Fedora,
openSUSE, or SLES.</para>
<para><emphasis role="bold">Networking</emphasis>: 1 Gbps or 10
Gbps is suggested internally. For OpenStack Object Storage, an
external network should connect the outside world to the proxy
servers, and the storage network is intended to be isolated on
a private network or multiple private networks.</para>
<para><emphasis role="bold">Database</emphasis>: For OpenStack
Object Storage, a SQLite database is part of the OpenStack
Object Storage container and account management
process.</para>
  <para><emphasis role="bold">Permissions</emphasis>: You can
    install OpenStack Object Storage either as root or as a user
    with sudo permissions, provided the sudoers file grants the
    required permissions.</para>
</section>


@ -0,0 +1,50 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-verify">
<title>Verify operation</title>
<para>This section describes how to verify operation of the Object
Storage service.</para>
<procedure>
<note>
<para>Perform these steps on the controller node.</para>
</note>
<step>
<para>Source the <literal>demo</literal> tenant credentials:</para>
<screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
</step>
<step>
<para>Show the service status:</para>
<screen><prompt>$</prompt> <userinput>swift stat</userinput>
<computeroutput>Account: AUTH_11b9758b7049476d9b48f7a91ea11493
Containers: 0
Objects: 0
Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1381434243.83760
X-Trans-Id: txdcdd594565214fb4a2d33-0052570383
X-Put-Timestamp: 1381434243.83760</computeroutput></screen>
</step>
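When scripting this verification, a single field can be pulled out of the `swift stat` output with awk; a sketch, not an official tool:

```shell
# Sketch: extract one field from `swift stat` output, e.g. to assert
# that a fresh account reports zero objects in a test script.
stat_field() {
    awk -v k="$1:" '$1 == k { print $2 }'
}

printf 'Containers: 0\nObjects: 0\n' | stat_field Objects   # → 0
```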
<step>
<para>Upload a test file:</para>
<screen><prompt>$</prompt> <userinput>swift upload demo-container1 <replaceable>FILE</replaceable></userinput></screen>
<para>Replace <replaceable>FILE</replaceable> with the name of a local
file to upload to the <literal>demo-container1</literal>
container.</para>
</step>
<step>
<para>List containers:</para>
<screen><prompt>$</prompt> <userinput>swift list</userinput>
<computeroutput>demo-container1</computeroutput></screen>
</step>
<step>
<para>Download a test file:</para>
<screen><prompt>$</prompt> <userinput>swift download demo-container1 <replaceable>FILE</replaceable></userinput></screen>
<para>Replace <replaceable>FILE</replaceable> with the name of the
file uploaded to the <literal>demo-container1</literal>
container.</para>
</step>
</procedure>
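To go one step further than the steps above, the downloaded copy can be compared against the original by checksum; a sketch assuming GNU `md5sum`:

```shell
# Sketch: verify that the file downloaded from demo-container1 matches
# the uploaded original byte-for-byte.
same_checksum() {
    [ "$(md5sum < "$1")" = "$(md5sum < "$2")" ]
}
```

For example, `same_checksum original-FILE downloaded-FILE && echo match` prints `match` only when the round trip preserved the content.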
</section>


@ -11,20 +11,23 @@ This guide has an overall blueprint with spec at:
https://wiki.openstack.org/wiki/Documentation/InstallationGuideImprovements
To do tasks:
- Remove openstack-config (crudini) commands; standardize manual install
- Unify chapter and section names (such as Overview)
- Add sample output of each command and highlight important parts
- Mention project as standard but tenant must be used for CLI params
- Refer to generic SQL database and update for MariaDB (RHEL), MySQL, and
PostgreSQL
- Refer to generic SQL database and update for MariaDB (RHEL), MySQL,
and PostgreSQL
- Provide sample configuration files for each node
- Compute and network nodes should reference server on controller node
- Update password list
- Add audience information; who is this book intended for
Ongoing tasks:
- Ensure it meets conventions and standards
- Continually update with latest release information relevant to install
Wishlist tasks:
- Replace all individual client commands (like keystone, nova) with openstack client commands
- Replace all individual client commands (like keystone, nova) with
openstack client commands


@ -6,111 +6,86 @@
xml:id="basics-database">
<?dbhtml stop-chunking?>
<title>Database</title>
  <para os="ubuntu;debian;rhel;fedora;centos">Most OpenStack
    services require a database to store information. These examples
    use a MySQL database that runs on the controller node. Install
    the MySQL server on the controller node and the MySQL Python
    library on any additional nodes that access MySQL.</para>
<para os="opensuse;sles">Most OpenStack services require a
database to store information. This guide uses a MySQL database
on SUSE Linux Enterprise Server and a compatible database on
openSUSE running on the controller node. This compatible
database for openSUSE is MariaDB. You must install the MariaDB
database on the controller node. You must install the MySQL
Python library on any additional nodes that access MySQL or MariaDB.
</para>
<section xml:id="basics-database-controller">
<title>Controller setup</title>
<para><phrase os="sles">For SUSE Linux Enterprise Server:
</phrase> On the controller node, install the MySQL client and
server packages, and the Python library.</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install mysql-client mysql python-mysql</userinput></screen>
<para os="opensuse">For openSUSE: On the controller node,
install the MariaDB client and database server packages,
and the MySQL Python library.</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install mariadb-client mariadb python-mysql</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install python-mysqldb mysql-server</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install mysql mysql-server MySQL-python</userinput></screen>
<note os="ubuntu;debian">
<para>When you install the server package, you are prompted
for the root password for the database. Choose a strong
password and remember it.</para>
</note>
<para>The MySQL configuration requires some changes to work with
OpenStack.</para>
<procedure>
<step>
<para os="ubuntu;debian">Edit the
<filename>/etc/mysql/my.cnf</filename> file:</para>
<para os="opensuse;sles;rhel;fedora;centos">Edit the
<filename>/etc/my.cnf</filename> file:</para>
<substeps>
<step>
<para>Under the <literal>[mysqld]</literal> section, set the
<literal>bind-address</literal> key to the management IP
address of the controller node to enable access by other
nodes via the management network:</para>
<programlisting>[mysqld]
<para>Most OpenStack services use an SQL database to store information.
The database typically runs on the controller node. The procedures in
this guide use <application>MariaDB</application> or
<application>MySQL</application> depending on the distribution.
OpenStack services also support other SQL databases including
<link xlink:href="http://www.postgresql.org/">PostgreSQL</link>.</para>
<procedure>
<title>To install and configure the database server</title>
<step>
<para>Install the packages:</para>
<note os="ubuntu;rhel;centos;fedora;opensuse">
<para>The Python MySQL library is compatible with MariaDB.</para>
</note>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install mariadb-server python-mysqldb</userinput></screen>
<screen os="debian"><prompt>#</prompt> <userinput>apt-get install mysql-server python-mysqldb</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install mariadb mariadb-server MySQL-python</userinput></screen>
<para os="sles;opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install mariadb-client mariadb python-mysql</userinput></screen>
<para os="sles;opensuse">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>zypper install mysql-client mysql python-mysql</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>Choose a suitable password for the database root account.</para>
</step>
<step>
<para os="ubuntu;debian">Edit the
<filename>/etc/mysql/my.cnf</filename> file and complete the
following actions:</para>
<para os="rhel;centos;fedora;sles;opensuse">Edit the
<filename>/etc/my.cnf</filename> file and complete the following
actions:</para>
<substeps>
<step>
<para>In the <literal>[mysqld]</literal> section, set the
<literal>bind-address</literal> key to the management IP
address of the controller node to enable access by other
nodes via the management network:</para>
<programlisting language="ini">[mysqld]
...
bind-address = 10.0.0.11</programlisting>
</step>
<step>
<para>Under the <literal>[mysqld]</literal> section, set the
following keys to enable InnoDB, UTF-8 character set, and
UTF-8 collation by default:</para>
<programlisting>[mysqld]
</step>
<step>
<para>In the <literal>[mysqld]</literal> section, set the
following keys to enable useful options and the UTF-8
character set:</para>
<programlisting language="ini">[mysqld]
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8</programlisting>
</step>
</substeps>
</step>
</procedure>
<para os="ubuntu;debian">Restart the MySQL service to apply
the changes:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service mysql restart</userinput></screen>
<para os="rhel;centos;fedora;opensuse;sles">Start the <phrase
os="rhel;fedora;centos">MySQL</phrase>
<phrase os="opensuse;sles">MariaDB or MySQL</phrase> database
server and set it to start automatically when the system
boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>service mysqld start</userinput>
<prompt>#</prompt> <userinput>chkconfig mysqld on</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service mysql start</userinput>
</step>
</substeps>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
<step os="ubuntu;debian">
<para>Restart the database service:</para>
<screen><prompt>#</prompt> <userinput>service mysql restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the database service and configure it to start when the
system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable mariadb.service</userinput>
<prompt>#</prompt> <userinput>systemctl start mariadb.service</userinput></screen>
<para os="sles;opensuse">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service mysql start</userinput>
<prompt>#</prompt> <userinput>chkconfig mysql on</userinput></screen>
<para os="rhel;centos;fedora;opensuse;sles">Finally, you should
set a root password for your <phrase os="rhel;fedora;centos"
>MySQL</phrase>
<phrase os="opensuse;sles">MariaDB or MySQL</phrase> database.
The OpenStack programs that set up databases and tables prompt
you for this password if it is set.</para>
<para os="ubuntu;debian;rhel;centos;fedora;opensuse;sles">You must
delete the anonymous users that are created when the database is
first started. Otherwise, database connection problems occur
when you follow the instructions in this guide. To do this, use
the <command>mysql_secure_installation</command> command.
Note that if <command>mysql_secure_installation</command> fails
you might need to use <command>mysql_install_db</command> first:</para>
<screen os="ubuntu;debian;rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>mysql_install_db</userinput>
<prompt>#</prompt> <userinput>mysql_secure_installation</userinput></screen>
<para><phrase os="rhel;centos;fedora;opensuse;sles">If you have
not already set a root database password, press
<keycap>ENTER</keycap> when you are prompted for the
password.</phrase> This command presents a number of options
for you to secure your database installation. Respond
<userinput>yes</userinput> to all prompts unless you have a
good reason to do otherwise.</para>
</section>
<section xml:id="basics-database-node">
<title>Node setup</title>
<para>On all nodes other than the controller node, install the
MySQL Python library:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install python-mysqldb</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install MySQL-python</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install python-mysql</userinput></screen>
</section>
<para os="sles;opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl start mysql.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable mysql.service</userinput></screen>
</step>
<step>
<para os="ubuntu;debian">Secure the database service:</para>
<para os="rhel;centos;fedora;sles;opensuse">Secure the database
service including choosing a suitable password for the root
account:</para>
<screen><prompt>#</prompt> <userinput>mysql_secure_installation</userinput></screen>
</step>
</procedure>
</section>

View File

@ -54,6 +54,9 @@
<para>Network mask: 255.255.255.0 (or /24)</para>
<para>Default gateway: 10.0.0.1</para>
</step>
<step>
<para>Reboot the system to activate the changes.</para>
</step>
</procedure>
<procedure>
<title>To configure name resolution:</title>
@ -133,9 +136,7 @@ BOOTPROTO='static'</programlisting>
</substeps>
</step>
<step>
<para>Restart networking:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service networking stop &amp;&amp; service networking start</userinput></screen>
<screen os="rhel;centos;fedora;sles;opensuse"><prompt>#</prompt> <userinput>service network restart</userinput></screen>
<para>Reboot the system to activate the changes.</para>
</step>
</procedure>
<procedure>
@ -185,6 +186,9 @@ BOOTPROTO='static'</programlisting>
and so on.</para>
</note>
</step>
<step>
<para>Reboot the system to activate the changes.</para>
</step>
</procedure>
<procedure>
<title>To configure name resolution:</title>
@ -211,12 +215,12 @@ BOOTPROTO='static'</programlisting>
</section>
<section xml:id="basics-neutron-networking-verify">
<title>Verify connectivity</title>
<para>We recommend that you verify network connectivity to the internet
<para>We recommend that you verify network connectivity to the Internet
and among the nodes before proceeding further.</para>
<procedure>
<step>
<para>From the <emphasis>controller</emphasis> node,
<command>ping</command> a site on the internet:</para>
<command>ping</command> a site on the Internet:</para>
<screen><prompt>#</prompt> <userinput>ping -c 4 openstack.org</userinput>
<computeroutput>PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
@ -260,7 +264,7 @@ rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms</computeroutput></screen>
</step>
<step>
<para>From the <emphasis>network</emphasis> node,
<command>ping</command> a site on the internet:</para>
<command>ping</command> a site on the Internet:</para>
<screen><prompt>#</prompt> <userinput>ping -c 4 openstack.org</userinput>
<computeroutput>PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
@ -304,7 +308,7 @@ rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms</computeroutput></screen>
</step>
<step>
<para>From the <emphasis>compute</emphasis> node,
<command>ping</command> a site on the internet:</para>
<command>ping</command> a site on the Internet:</para>
<screen><prompt>#</prompt> <userinput>ping -c 4 openstack.org</userinput>
<computeroutput>PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms


@ -50,6 +50,9 @@
<para>Network mask: 255.255.255.0 (or /24)</para>
<para>Default gateway: 10.0.0.1</para>
</step>
<step>
<para>Reboot the system to activate the changes.</para>
</step>
</procedure>
<procedure>
<title>To configure name resolution:</title>
@ -120,13 +123,11 @@ BOOTPROTO="none"</programlisting>
file to contain the following:</para>
<programlisting>STARTMODE='auto'
BOOTPROTO='static'</programlisting>
</step>
</substeps>
</step>
<step>
<para>Restart networking:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service networking stop &amp;&amp; service networking start</userinput></screen>
<screen os="rhel;centos;fedora;sles;opensuse"><prompt>#</prompt> <userinput>service network restart</userinput></screen>
</substeps>
</step>
<step>
<para>Reboot the system to activate the changes.</para>
</step>
</procedure>
<procedure>
@ -151,12 +152,12 @@ BOOTPROTO='static'</programlisting>
</section>
<section xml:id="basics-networking-nova-verify">
<title>Verify connectivity</title>
<para>We recommend that you verify network connectivity to the internet
<para>We recommend that you verify network connectivity to the Internet
and among the nodes before proceeding further.</para>
<procedure>
<step>
<para>From the <emphasis>controller</emphasis> node,
<command>ping</command> a site on the internet:</para>
<command>ping</command> a site on the Internet:</para>
<screen><prompt>#</prompt> <userinput>ping -c 4 openstack.org</userinput>
<computeroutput>PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
@ -185,7 +186,7 @@ rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms</computeroutput></screen>
</step>
<step>
<para>From the <emphasis>compute</emphasis> node,
<command>ping</command> a site on the internet:</para>
<command>ping</command> a site on the Internet:</para>
<screen><prompt>#</prompt> <userinput>ping -c 4 openstack.org</userinput>
<computeroutput>PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms


@ -28,19 +28,8 @@
<link os="sles;opensuse"
xlink:href="http://activedoc.opensuse.org/book/opensuse-reference/chapter-13-basic-networking"
>openSUSE documentation.</link></para>
<procedure os="fedora">
<title>To disable <systemitem class="service">NetworkManager</systemitem>
and enable the <systemitem class="service">network</systemitem>
service:</title>
<step>
<screen><prompt>#</prompt> <userinput>service NetworkManager stop</userinput>
<prompt>#</prompt> <userinput>service network start</userinput>
<prompt>#</prompt> <userinput>chkconfig NetworkManager off</userinput>
<prompt>#</prompt> <userinput>chkconfig network on</userinput></screen>
</step>
</procedure>
<procedure os="sles;opensuse">
<title>To disable <systemitem class="service">NetworkManager</systemitem>:</title>
<title>To disable Network Manager:</title>
<step>
<para>Use the YaST network module:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>yast2 network</userinput></screen>
@ -52,28 +41,23 @@
</para>
</step>
</procedure>
<para os="rhel;centos">RHEL and derivatives including CentOS and Scientific
Linux enable a restrictive <glossterm>firewall</glossterm> by default.
During this installation, certain steps will fail unless you alter or
disable the firewall. For further information about securing your
installation, refer to the
<link xlink:href="http://docs.openstack.org/sec/">
OpenStack Security Guide</link>.</para>
<para os="fedora">On Fedora, <literal>firewalld</literal> replaces
<literal>iptables</literal> as the default firewall system. While you
can use <literal>firewalld</literal> successfully, this guide
references <literal>iptables</literal> for compatibility with other
distributions.</para>
<procedure os="fedora">
<title>To disable <literal>firewalld</literal> and enable
<literal>iptables</literal>:</title>
<step>
<screen><prompt>#</prompt> <userinput>service firewalld stop</userinput>
<prompt>#</prompt> <userinput>service iptables start</userinput>
<prompt>#</prompt> <userinput>chkconfig firewalld off</userinput>
<prompt>#</prompt> <userinput>chkconfig iptables on</userinput></screen>
</step>
</procedure>
<para os="rhel;centos">RHEL and CentOS enable a restrictive
<glossterm>firewall</glossterm> by default. During the installation
process, certain steps will fail unless you alter or disable the
firewall. For more information about securing your environment, refer
to the <link xlink:href="http://docs.openstack.org/sec/">OpenStack
Security Guide</link>.</para>
<para os="opensuse;sles">openSUSE and SLES enable a restrictive
<glossterm>firewall</glossterm> by default. During the installation
process, certain steps will fail unless you alter or disable the
firewall. For more information about securing your environment, refer
to the <link xlink:href="http://docs.openstack.org/sec/">OpenStack
Security Guide</link>.</para>
<para os="ubuntu;debian">Your distribution does not enable a
restrictive <glossterm>firewall</glossterm> by default. For more
information about securing your environment, refer to the
<link xlink:href="http://docs.openstack.org/sec/">OpenStack
Security Guide</link>.</para>
<para>Proceed to network configuration for the example
<link linkend="basics-networking-neutron">OpenStack Networking (neutron)
</link> or <link linkend="basics-networking-nova">legacy


@ -9,10 +9,10 @@
<para>You must install
<glossterm baseform="Network Time Protocol (NTP)">NTP</glossterm> to
properly synchronize services among nodes. We recommend that you configure
the controller node to reference upstream servers and other nodes to
reference the controller node.</para>
the controller node to reference more accurate (lower stratum) servers and
other nodes to reference the controller node.</para>
<section xml:id="basics-ntp-controller-node">
<title>Configure controller node</title>
<title>Controller node</title>
<procedure>
<title>To install the NTP service</title>
<step>
@ -28,12 +28,21 @@
<filename>/etc/ntp.conf</filename> file to configure alternative
servers such as those provided by your organization.</para>
<step>
<para>Edit the <filename>/etc/ntp.conf</filename> file:</para>
<para>Add, change, or remove the <literal>server</literal> keys as
necessary for your environment. Replace
  <replaceable>NTP_SERVER</replaceable> with the hostname or IP address
  of a suitable NTP server.</para>
<programlisting>server <replaceable>NTP_SERVER</replaceable> iburst</programlisting>
<para>Edit the <filename>/etc/ntp.conf</filename> file and add,
change, or remove the following keys as necessary for your
environment:</para>
<programlisting language="ini">server <replaceable>NTP_SERVER</replaceable> iburst
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify</programlisting>
          <para>Replace <replaceable>NTP_SERVER</replaceable> with the
            hostname or IP address of a suitable, more accurate (lower
            stratum) NTP server. The configuration supports multiple
            <literal>server</literal> keys.</para>
          <note>
            <para>These <literal>restrict</literal> lines are the defaults
              with the <literal>nopeer</literal> and
              <literal>noquery</literal> options removed, which permits
              other nodes to query the NTP daemon on the controller
              node.</para>
          </note>
<note os="ubuntu;debian">
<para>Remove the <filename>/var/lib/ntp/ntp.conf.dhcp</filename> file
if it exists.</para>
@ -46,15 +55,19 @@
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the NTP service and configure it to start when the system
boots:</para>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>service ntpd start</userinput>
<prompt>#</prompt> <userinput>chkconfig ntpd on</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service ntp start</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable ntpd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start ntpd.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service ntp start</userinput>
<prompt>#</prompt> <userinput>chkconfig ntp on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable ntp.service</userinput>
<prompt>#</prompt> <userinput>systemctl start ntp.service</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="basics-ntp-other-nodes">
<title>Configure other nodes</title>
<title>Other nodes</title>
<procedure>
<title>To install the NTP service</title>
<step>
@ -71,7 +84,7 @@
<para>Edit the <filename>/etc/ntp.conf</filename> file:</para>
<para>Comment out or remove all but one <literal>server</literal>
key and change it to reference the controller node.</para>
<programlisting>server <replaceable>controller</replaceable> iburst</programlisting>
<programlisting language="ini">server <replaceable>controller</replaceable> iburst</programlisting>
<note os="ubuntu;debian">
<para>Remove the <filename>/var/lib/ntp/ntp.conf.dhcp</filename> file
if it exists.</para>
@@ -84,10 +97,14 @@
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the NTP service and configure it to start when the system
boots:</para>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>service ntpd start</userinput>
<prompt>#</prompt> <userinput>chkconfig ntpd on</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service ntp start</userinput>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>systemctl enable ntpd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start ntpd.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service ntp start</userinput>
<prompt>#</prompt> <userinput>chkconfig ntp on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable ntp.service</userinput>
<prompt>#</prompt> <userinput>systemctl start ntp.service</userinput></screen>
</step>
</procedure>
</section>
@@ -97,7 +114,6 @@
further. Some nodes, particularly those that reference the controller
node, can take several minutes to synchronize.</para>
<procedure>
<title>To verify NTP synchronization</title>
<step>
<para>Run this command on the <emphasis>controller</emphasis> node:
</para>


@@ -6,133 +6,85 @@
xml:id="basics-packages">
<?dbhtml stop-chunking?>
<title>OpenStack packages</title>
<para>Distributions might release OpenStack packages as part of
their distribution or through other methods because the
OpenStack and distribution release times are independent of each
other.</para>
<para>This section describes the configuration you must
complete after you configure machines to install the latest
OpenStack packages.</para>
<para os="fedora;centos;rhel">The examples in this guide use the
OpenStack packages from the RDO repository. These packages work
on Red Hat Enterprise Linux 6, compatible versions of CentOS,
and Fedora 20.</para>
<para os="fedora;centos;rhel">
Install the <package>yum-plugin-priorities</package> plug-in. This package
allows the assignment of relative priorities to the configured software
repositories. This functionality is used by the RDO release packages:
</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install yum-plugin-priorities</userinput></screen>
<para os="fedora;centos;rhel">
To enable the RDO repository, download and
install the <package>rdo-release-juno</package>
package:</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install http://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm</userinput></screen>
<para os="fedora;centos;rhel">The EPEL package includes GPG keys
for package signing and repository information. This should only
be installed on Red Hat Enterprise Linux and CentOS, not Fedora.
Install the latest <package>epel-release</package> package (see
<link
xlink:href="http://download.fedoraproject.org/pub/epel/6/x86_64/repoview/epel-release.html"
>http://download.fedoraproject.org/pub/epel/6/x86_64/repoview/epel-release.html</link>).
For example:</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm</userinput></screen>
<para os="fedora;centos;rhel">The
<package>openstack-utils</package> package contains utility
programs that make installation and configuration easier. These
programs are used throughout this guide. Install
<package>openstack-utils</package>. This verifies that you can
access the RDO repository:</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install openstack-utils</userinput></screen>
<para os="opensuse;sles">Use the Open Build Service repositories
for <glossterm>Juno</glossterm> based on your openSUSE or
SUSE Linux Enterprise Server version.</para>
<para os="opensuse">For openSUSE 13.1 use:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Juno/openSUSE_13.1 Juno</userinput></screen>
<para os="sles">If you use SUSE Linux Enterprise Server 11 SP3,
use:</para>
<screen os="sles"><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Juno/SLE_11_SP3 Juno</userinput></screen>
<para os="opensuse;sles">The packages are signed by GPG key 893A90DAD85F9316. You should verify the fingerprint of the imported GPG key before using it.
<programlisting>Key ID: 893A90DAD85F9316
<para>Distributions release OpenStack packages as part of the distribution
or through other methods because of differing release schedules. Perform
these procedures on all nodes.</para>
<note>
<para>Disable or remove any automatic update services because they can
impact your OpenStack environment.</para>
</note>
<procedure os="ubuntu">
<title>To configure prerequisites</title>
<step>
<para>Install the <package>python-software-properties</package> package
to ease repository management:</para>
<screen><prompt>#</prompt> <userinput>apt-get install python-software-properties</userinput></screen>
</step>
</procedure>
<procedure os="ubuntu">
<title>To enable the OpenStack repository</title>
<step>
<para>Enable the Ubuntu Cloud archive repository:</para>
<screen><prompt>#</prompt> <userinput>add-apt-repository cloud-archive:juno</userinput></screen>
</step>
</procedure>
<procedure os="rhel;centos;fedora">
<title>To configure prerequisites</title>
<step>
<para>Install the <package>yum-plugin-priorities</package> package to
enable assignment of relative priorities within repositories:</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install yum-plugin-priorities</userinput></screen>
</step>
<step>
<para>Install the <package>epel-release</package> package to enable the
<link
xlink:href="http://download.fedoraproject.org/pub/epel/7/x86_64/repoview/epel-release.html">EPEL</link> repository:</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm</userinput></screen>
<note>
<para>Fedora does not require this package.</para>
</note>
</step>
</procedure>
<procedure os="rhel;centos;fedora">
<title>To enable the OpenStack repository</title>
<step>
<para>Install the <package>rdo-release-juno</package> package to enable
the RDO repository:</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm</userinput></screen>
</step>
</procedure>
<procedure os="sles;opensuse">
<title>To enable the OpenStack repository</title>
<step>
<para>Enable the Open Build Service repositories based on your openSUSE
or SLES version:</para>
<para>On openSUSE 13.1:</para>
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Juno/openSUSE_13.1 Juno</userinput></screen>
<para>On SLES 11 SP3:</para>
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Juno/SLE_11_SP3 Juno</userinput></screen>
<note>
<para>The packages are signed by GPG key 893A90DAD85F9316. You should
verify the fingerprint of the imported GPG key before using
it.</para>
<programlisting>Key ID: 893A90DAD85F9316
Key Name: Cloud:OpenStack OBS Project &lt;Cloud:OpenStack@build.opensuse.org&gt;
Key Fingerprint: 35B34E18ABC1076D66D5A86B893A90DAD85F9316
Key Created: Tue Oct 8 13:34:21 2013
Key Expires: Thu Dec 17 13:34:21 2015</programlisting>
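As a sketch of the verification step, you can compare the documented fingerprint against the one you obtain from the key; how you obtain it (for example, with <command>gpg --with-fingerprint</command> on the downloaded key file) depends on your environment:

```shell
# Sketch: EXPECTED comes from this guide; replace ACTUAL with the fingerprint
# reported by your tooling (e.g. gpg --with-fingerprint KEYFILE).
EXPECTED="35B34E18ABC1076D66D5A86B893A90DAD85F9316"
ACTUAL="35B34E18ABC1076D66D5A86B893A90DAD85F9316"
if [ "$EXPECTED" = "$ACTUAL" ]; then
  echo "fingerprint OK"
else
  echo "fingerprint MISMATCH - do not use this repository" >&2
fi
```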
</para>
<para os="opensuse;sles">The <package>openstack-utils</package>
package contains utility programs that make installation and
configuration easier. These programs are used throughout this
guide. Install <package>openstack-utils</package>. This verifies
that you can access the Open Build Service repository:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-utils</userinput></screen>
<warning os="fedora;centos;rhel;opensuse;sles">
<para>The <application>openstack-config</application> program
in the <package>openstack-utils</package> package uses
<application>crudini</application> to manipulate configuration
files. However, <application>crudini</application> version 0.3
does not support multi-valued options. See
<link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1269271"
>https://bugs.launchpad.net/openstack-manuals/+bug/1269271</link>.
As a workaround, you must manually set any multi-valued
options; otherwise, the new value overwrites the previous value instead
of creating a new option.</para>
</warning>
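For example, to keep two values for a multi-valued option, repeat the key manually in the configuration file instead of using <command>openstack-config</command> (the option and values below are only illustrative):

```ini
# Illustrative fragment: repeat the key once per value; crudini 0.3 would
# otherwise overwrite the first value with the second.
[DEFAULT]
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier
```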
<para os="centos;rhel">The
<package>openstack-selinux</package> package includes the
policy files that are required to configure SELinux during
OpenStack installation on RHEL and CentOS. This step is not required during
OpenStack installation on Fedora.
Install <package>openstack-selinux</package>:</para>
<screen os="centos;rhel"><prompt>#</prompt> <userinput>yum install openstack-selinux</userinput></screen>
<para os="fedora;centos;rhel;opensuse;sles">Upgrade your system packages:</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum upgrade</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper refresh</userinput>
<prompt>#</prompt> <userinput>zypper dist-upgrade</userinput></screen>
<para os="fedora;centos;rhel;opensuse;sles">If the upgrade included a new
kernel package, reboot the system to ensure the new kernel is running:</para>
<screen os="fedora;centos;rhel;opensuse;sles"><prompt>#</prompt> <userinput>reboot</userinput></screen>
<procedure xml:id="ubuntu-cloud-archive" os="ubuntu">
<title>To use the Ubuntu Cloud Archive for Juno</title>
<para>The <link
xlink:href="https://wiki.ubuntu.com/ServerTeam/CloudArchive"
>Ubuntu Cloud Archive</link> is a special repository that
allows you to install newer releases of OpenStack on the
stable supported version of Ubuntu.</para>
<step>
<para>Install the Ubuntu Cloud Archive for
<glossterm>Juno</glossterm>:
<screen><prompt>#</prompt> <userinput>apt-get install python-software-properties</userinput>
<prompt>#</prompt> <userinput>add-apt-repository cloud-archive:juno</userinput></screen></para>
</step>
<step>
<para>Update the package database and upgrade your system:</para>
<screen><prompt>#</prompt> <userinput>apt-get update</userinput>
<prompt>#</prompt> <userinput>apt-get dist-upgrade</userinput></screen>
</step>
<step>
<para>If you intend to use OpenStack Networking with Ubuntu 12.04,
you should install a backported Linux kernel to improve the
stability of your system. This installation is not needed if you
intend to use the legacy networking service.</para>
<para>Install the Ubuntu 13.10 backported kernel:</para>
<screen><prompt>#</prompt> <userinput>apt-get install linux-image-generic-lts-saucy linux-headers-generic-lts-saucy</userinput></screen>
</step>
<step>
<para>Reboot the system for all changes to take effect:</para>
<screen><prompt>#</prompt> <userinput>reboot</userinput></screen>
</note>
</step>
</procedure>
<procedure xml:id="debian-cloud-archive" os="debian">
<procedure os="debian">
<title>To use the Debian Wheezy backports archive for
Juno</title>
<para>The <glossterm>Juno</glossterm> release is available
only in Debian Sid
(otherwise called Unstable). However, the Debian maintainers
only in Debian Experimental (otherwise called rc-buggy)
because Jessie is frozen and will contain Icehouse.
However, the Debian maintainers
of OpenStack also maintain a non-official Debian repository
for OpenStack containing Wheezy backports.</para>
<step>
<para>Install the Debian Wheezy backport repository
<para>On all nodes, install the Debian Wheezy backport repository
Juno:</para>
<screen><prompt>#</prompt> <userinput>echo "deb http://archive.gplhost.com/debian juno-backports main" >>/etc/apt/sources.list</userinput></screen>
</step>
@@ -158,7 +110,7 @@ Key Expires: Thu Dec 17 13:34:21 2015</programlisting>
mirrors is available at <link
xlink:href="http://archive.gplhost.com/readme.mirrors"
>http://archive.gplhost.com/readme.mirrors</link>.</para>
<section xml:id="basics-argparse" os="debian">
<procedure xml:id="basics-argparse" os="debian">
<title>Manually install python-argparse</title>
<para>The Debian OpenStack packages are maintained on Debian Sid
(also known as Debian Unstable) - the current development
@@ -172,6 +124,7 @@ Key Expires: Thu Dec 17 13:34:21 2015</programlisting>
Python 2.7, this package is installed by default. Unfortunately,
in Python 2.7, this package does not include the <code>Provides:
python-argparse</code> directive.</para>
<step>
<para>Because the packages are maintained in Sid where the
<code>Provides: python-argparse</code> directive causes an
error, and the Debian OpenStack maintainer wants to maintain one
@@ -183,5 +136,33 @@ Key Expires: Thu Dec 17 13:34:21 2015</programlisting>
<screen><prompt>#</prompt> <userinput>apt-get install python-argparse</userinput></screen>
<para>This caveat applies to most OpenStack packages in
Wheezy.</para>
</section>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
<step>
<para>Upgrade the packages on your system:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get update &amp;&amp; apt-get dist-upgrade</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum upgrade</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper refresh &amp;&amp; zypper dist-upgrade</userinput></screen>
<note>
<para>If the upgrade process includes a new kernel, reboot your system
to activate it.</para>
</note>
</step>
<step os="rhel;centos">
<para>RHEL and CentOS enable <glossterm>SELinux</glossterm> by
default. Install the <package>openstack-selinux</package> package
to automatically manage security policies for OpenStack
services:</para>
<screen os="rhel;centos"><prompt>#</prompt> <userinput>yum install openstack-selinux</userinput></screen>
<note>
<para>Fedora does not require this package.</para>
</note>
<note>
<para>The installation process for this package can take a
while.</para>
</note>
</step>
</procedure>
</section>


@@ -6,7 +6,7 @@
xml:id="basics-prerequisites">
<?dbhtml stop-chunking?>
<title>Before you begin</title>
<para>For a functional environment, OpenStack does not require a
significant amount of resources. We recommend that your environment meets
or exceeds the following minimum requirements which can support several
minimal <glossterm>CirrOS</glossterm> instances:</para>
@@ -28,7 +28,7 @@
recommend a minimal installation of your Linux distribution. Also, we
strongly recommend that you install a 64-bit version of your distribution
on at least the compute node. If you install a 32-bit version of your
distribution on the compute node, starting an instance using
distribution on the compute node, attempting to start an instance using
a 64-bit image will fail.</para>
<note>
<para>A single disk partition on each node works for most basic
@@ -38,20 +38,20 @@
</note>
<para>Many users build their test environments on
<glossterm baseform="virtual machine (VM)">virtual machines
(VMs)</glossterm>. The primary benefits of this method include the
(VMs)</glossterm>. The primary benefits of VMs include the
following:</para>
<itemizedlist>
<listitem>
<para>One physical server can support multiple nodes with almost
<para>One physical server can support multiple nodes, each with almost
any number of network interfaces.</para>
</listitem>
<listitem>
<para>The ability to take periodic "snapshots" throughout the installation
process and "roll back" to a working configuration in the event of
a problem.</para>
</listitem>
</itemizedlist>
<para>However, VMs can reduce the performance of your instances, particularly
if your hypervisor and/or processor lacks support for hardware
acceleration of nested VMs.</para>
<note>
@@ -59,7 +59,5 @@
permits <glossterm>promiscuous mode</glossterm> on the
<glossterm>external network</glossterm>.</para>
</note>
<para>For more information about system requirements, see the <link
xlink:href="http://docs.openstack.org/ops/">OpenStack Operations
Guide</link>.</para>
</section>


@@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="basics-queue">
xml:id="basics-messaging-server">
<?dbhtml stop-chunking?>
<title>Messaging server</title>
<para>OpenStack uses a <glossterm>message broker</glossterm> to coordinate
@@ -11,7 +11,7 @@
service typically runs on the controller node. OpenStack supports several
message brokers including <application>RabbitMQ</application>,
<application>Qpid</application>, and <application>ZeroMQ</application>.
Most distributions that package OpenStack support a particular
However, most distributions that package OpenStack support a particular
message broker. This guide covers the RabbitMQ message broker which is
supported by each distribution. If you prefer to implement a
different message broker, consult the documentation associated
@@ -41,11 +41,17 @@
</procedure>
<procedure>
<title>To configure the message broker service</title>
<step os="sles;opensuse;rhel;centos;fedora">
<para>Start the message broker service and enable it to start when the
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the message broker service and configure it to start when the
system boots:</para>
<screen><prompt>#</prompt> <userinput>service rabbitmq-server start</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable rabbitmq-server.service</userinput>
<prompt>#</prompt> <userinput>systemctl start rabbitmq-server.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service rabbitmq-server start</userinput>
<prompt>#</prompt> <userinput>chkconfig rabbitmq-server on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable rabbitmq-server.service</userinput>
<prompt>#</prompt> <userinput>systemctl start rabbitmq-server.service</userinput></screen>
</step>
<step>
<para>The message broker creates a default account that uses
@@ -55,17 +61,19 @@
<para>Run the following command:</para>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with a suitable
password.</para>
<screen><prompt>#</prompt> <userinput>rabbitmqctl change_password guest <replaceable>RABBIT_PASS</replaceable></userinput></screen>
<screen><prompt>#</prompt> <userinput>rabbitmqctl change_password guest <replaceable>RABBIT_PASS</replaceable></userinput>
<computeroutput>Changing password for user "guest" ...
...done.</computeroutput></screen>
<para>You must configure the <literal>rabbit_password</literal> key
in the configuration file for each OpenStack service that uses the
message broker.</para>
<note>
<para>For production environments, you should create a unique account
with a suitable password. For more information on securing the
message broker, see the
<link xlink:href="https://www.rabbitmq.com/man/rabbitmqctl.1.man.html"
>documentation</link>.</para>
<para>If you decide to create a unique account with a suitable password
for your test environment, you must configure the
<literal>rabbit_userid</literal> and
<literal>rabbit_password</literal> keys in the configuration file
@@ -73,6 +81,6 @@
</note>
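As a sketch, the broker-related keys in a service configuration file (section placement and available keys vary by service and release; the values here are assumptions) would resemble:

```ini
[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
# Only if you created a dedicated account instead of using "guest":
# rabbit_userid = openstack
```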
</step>
</procedure>
<para>Congratulations, you are now ready to install OpenStack
services!</para>
</section>


@@ -0,0 +1,130 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="basics-security">
<?dbhtml stop-chunking?>
<title>Security</title>
<para>OpenStack services support various security methods including
password, policy, and encryption. Additionally, supporting services
including the database server and message broker support at least
password security.</para>
<para>To ease the installation process, this guide only covers password
security where applicable. You can create secure passwords manually,
generate them using a tool such as
<link xlink:href="http://sourceforge.net/projects/pwgen/">pwgen</link>, or
run the following command:</para>
<screen><prompt>$</prompt> <userinput>openssl rand -hex 10</userinput></screen>
<para>For OpenStack services, this guide uses
<replaceable>SERVICE_PASS</replaceable> to reference service account
passwords and <replaceable>SERVICE_DBPASS</replaceable> to reference
database passwords.</para>
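For example, you could pre-generate one password per placeholder into shell variables and substitute them as you follow the guide; the variable names simply mirror the placeholders and are not required by OpenStack:

```shell
# Generate a 20-character hex password for each placeholder (sketch).
RABBIT_PASS=$(openssl rand -hex 10)
KEYSTONE_DBPASS=$(openssl rand -hex 10)
ADMIN_PASS=$(openssl rand -hex 10)
echo "RABBIT_PASS=$RABBIT_PASS"
```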
<para>The following table provides a list of services that require
passwords and their associated references in the guide:
<table rules="all">
<caption>Passwords</caption>
<thead>
<tr>
<th>Password name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Database password (no variable used)</td>
<td>Root password for the database</td>
</tr>
<tr>
<td><literal><replaceable>RABBIT_PASS</replaceable></literal></td>
<td>Password of user guest of RabbitMQ</td>
</tr>
<tr>
<td><literal><replaceable>KEYSTONE_DBPASS</replaceable></literal></td>
<td>Database password of Identity service</td>
</tr>
<tr>
<td><literal><replaceable>DEMO_PASS</replaceable></literal></td>
<td>Password of user <literal>demo</literal></td>
</tr>
<tr>
<td><literal><replaceable>ADMIN_PASS</replaceable></literal></td>
<td>Password of user <literal>admin</literal></td>
</tr>
<tr>
<td><literal><replaceable>GLANCE_DBPASS</replaceable></literal></td>
<td>Database password for Image Service</td>
</tr>
<tr>
<td><literal><replaceable>GLANCE_PASS</replaceable></literal></td>
<td>Password of Image Service user <literal>glance</literal></td>
</tr>
<tr>
<td><literal><replaceable>NOVA_DBPASS</replaceable></literal></td>
<td>Database password for Compute service</td>
</tr>
<tr>
<td><literal><replaceable>NOVA_PASS</replaceable></literal></td>
<td>Password of Compute service user <literal>nova</literal></td>
</tr>
<tr>
<td><literal><replaceable>DASH_DBPASS</replaceable></literal></td>
<td>Database password for the dashboard</td>
</tr>
<tr>
<td><literal><replaceable>CINDER_DBPASS</replaceable></literal></td>
<td>Database password for the Block Storage service</td>
</tr>
<tr>
<td><literal><replaceable>CINDER_PASS</replaceable></literal></td>
<td>Password of Block Storage service user <literal>cinder</literal></td>
</tr>
<tr>
<td><literal><replaceable>NEUTRON_DBPASS</replaceable></literal></td>
<td>Database password for the Networking service</td>
</tr>
<tr>
<td><literal><replaceable>NEUTRON_PASS</replaceable></literal></td>
<td>Password of Networking service user <literal>neutron</literal></td>
</tr>
<tr>
<td><literal><replaceable>HEAT_DBPASS</replaceable></literal></td>
<td>Database password for the Orchestration service</td>
</tr>
<tr>
<td><literal><replaceable>HEAT_PASS</replaceable></literal></td>
<td>Password of Orchestration service user <literal>heat</literal></td>
</tr>
<tr>
<td><literal><replaceable>CEILOMETER_DBPASS</replaceable></literal></td>
<td>Database password for the Telemetry service</td>
</tr>
<tr>
<td><literal><replaceable>CEILOMETER_PASS</replaceable></literal></td>
<td>Password of Telemetry service user <literal>ceilometer</literal></td>
</tr>
<tr>
<td><literal><replaceable>TROVE_DBPASS</replaceable></literal></td>
<td>Database password of Database service</td>
</tr>
<tr>
<td><literal><replaceable>TROVE_PASS</replaceable></literal></td>
<td>Password of Database Service user <literal>trove</literal></td>
</tr>
</tbody>
</table>
</para>
<para>OpenStack and supporting services require administrative privileges
during installation and operation. In some cases, services perform
modifications to the host that can interfere with deployment automation
tools such as Ansible, Chef, and Puppet. For example, some OpenStack
services add a root wrapper to <literal>sudo</literal> that can interfere
with security policies. See the
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/root-wrap-reference.html">Cloud Administrator Guide</link>
for more information. Also, the Networking service assumes default values
for kernel network parameters and modifies firewall rules. To avoid most
issues during your initial installation, we recommend using a stock
deployment of a supported distribution on your hosts. However, if you
choose to automate deployment of your hosts, review the configuration
and policies applied to them before proceeding further.</para>
</section>


@@ -3,37 +3,44 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ceilometer-install-cinder">
xml:id="ceilometer-agent-cinder">
<?dbhtml stop-chunking?>
<title>Add the Block Storage service agent for Telemetry</title>
<procedure>
<step>
<para>To retrieve volume samples, you must configure the Block
Storage service to send notifications to the bus.</para>
<para os="debian;ubuntu">Edit <filename>/etc/cinder/cinder.conf</filename>
<para>Edit <filename>/etc/cinder/cinder.conf</filename>
and add in the <literal>[DEFAULT]</literal> section on the controller
and volume nodes:</para>
<programlisting language="ini" os="debian;ubuntu">control_exchange = cinder
<programlisting language="ini">control_exchange = cinder
notification_driver = cinder.openstack.common.notifier.rpc_notifier</programlisting>
<para os="opensuse;sles;fedora;rhel;centos">Run the following commands on
the controller and volume nodes:</para>
<screen os="opensuse;sles;fedora;rhel;centos"><prompt>#</prompt> <userinput>openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/cinder/cinder.conf DEFAULT notification_driver cinder.openstack.common.notifier.rpc_notifier</userinput></screen>
</step>
<step>
<para>Restart the Block Storage services with their new
settings.</para>
<para>On the controller node:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service cinder-api restart</userinput>
<prompt>#</prompt> <userinput>service cinder-scheduler restart</userinput></screen>
<screen os="rhel;fedora;centos;sles;opensuse"><prompt>#</prompt> <userinput>service openstack-cinder-api restart</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-cinder-api restart</userinput>
<prompt>#</prompt> <userinput>service openstack-cinder-scheduler restart</userinput></screen>
<para>On the volume node:</para>
<screen os="rhel;fedora;centos;sles;opensuse"><prompt>#</prompt> <userinput>service openstack-cinder-volume restart</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service</userinput></screen>
<para>On the storage node:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service cinder-volume restart</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl restart openstack-cinder-volume.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-cinder-volume restart</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-cinder-volume.service</userinput></screen>
</step>
<step>
<para>If you want to collect OpenStack Block Storage notifications on demand,
you can use <command>cinder-volume-usage-audit</command> from OpenStack Block Storage.
For more information, see <link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-cinder-audit-script.html"
><citetitle>Block Storage audit script setup to get notifications</citetitle></link>.</para>
</step>
</procedure>
</section>


@@ -0,0 +1,384 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ceilometer-controller-install">
<title>Install and configure controller node</title>
<para>This section describes how to install and configure the Telemetry
module, code-named ceilometer, on the controller node. The Telemetry
module uses separate agents to collect measurements from each OpenStack
service in your environment.</para>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure prerequisites</title>
<para>Before you install and configure Telemetry, you must install
<application>MongoDB</application>, create a MongoDB database, and
create Identity service credentials including endpoints.</para>
<step os="opensuse;sles">
<para>Enable the Open Build Service repositories for MongoDB based on
your openSUSE or SLES version:</para>
<para>On openSUSE:</para>
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://server:database/openSUSE_13.1 Database</userinput></screen>
<para>On SLES:</para>
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://server:database/SLE_11_SP3 Database</userinput></screen>
<note>
<para>The packages are signed by GPG key
<literal>562111AC05905EA8</literal>. You should
verify the fingerprint of the imported GPG key before using
it.</para>
<programlisting>Key Name: server:database OBS Project &lt;server:database@build.opensuse.org&gt;
Key Fingerprint: 116EB86331583E47E63CDF4D562111AC05905EA8
Key Created: Thu Oct 11 20:08:39 2012
Key Expires: Sat Dec 20 20:08:39 2014</programlisting>
</note>
</step>
<step>
<para>Install the MongoDB package:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install mongodb-server mongodb</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install mongodb</userinput></screen>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install mongodb-server</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/mongodb.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>Configure the <literal>bind_ip</literal> key to use the
management interface IP address of the controller node.</para>
<programlisting language="ini">bind_ip = 10.0.0.11</programlisting>
</step>
<step>
<para>By default, MongoDB creates several 1GB journal files
in the <filename>/var/lib/mongodb/journal</filename>
directory. If you want to reduce the size of each journal file
to 128MB and limit total journal space consumption to
512MB, set the <literal>smallfiles</literal> key:
<programlisting language="ini">smallfiles = true</programlisting>
<para os="ubuntu">If you change the journaling configuration,
stop the MongoDB service, remove the initial journal files, and
start the service:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>service mongodb stop</userinput>
<prompt>#</prompt> <userinput>rm /var/lib/mongodb/journal/prealloc.*</userinput>
<prompt>#</prompt> <userinput>service mongodb start</userinput></screen>
<para>You can also disable journaling. For more information, see
the <link xlink:href="http://docs.mongodb.org/manual/"
>MongoDB manual</link>.</para>
</step>
<step os="ubuntu">
<para>Restart the MongoDB service:</para>
<screen><prompt>#</prompt> <userinput>service mongodb restart</userinput></screen>
</step>
<step os="centos;fedora;opensuse;rhel;sles">
<para>Start the MongoDB services and configure them to start when
the system boots:</para>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service mongodb start</userinput>
<prompt>#</prompt> <userinput>chkconfig mongodb on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable mongodb.service</userinput>
<prompt>#</prompt> <userinput>systemctl start mongodb.service</userinput></screen>
<!-- NB: The use of mongod, and not mongodb, in the below screen is
intentional. -->
<screen os="centos;fedora;rhel"><prompt>#</prompt> <userinput>service mongod start</userinput>
<prompt>#</prompt> <userinput>chkconfig mongod on</userinput></screen>
</step>
</substeps>
</step>
<step>
<para>Create the <literal>ceilometer</literal> database:</para>
<screen><prompt>#</prompt> <userinput>mongo --host <replaceable>controller</replaceable> --eval '
db = db.getSiblingDB("ceilometer");
db.addUser({user: "ceilometer",
pwd: "<replaceable>CEILOMETER_DBPASS</replaceable>",
roles: [ "readWrite", "dbAdmin" ]})'</userinput></screen>
<para>Replace <replaceable>CEILOMETER_DBPASS</replaceable> with a
suitable password.</para>
</step>
<step>
<para>Source the <literal>admin</literal> credentials to gain access
to admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>To create the Identity service credentials:</para>
<substeps>
<step>
<para>Create the <literal>ceilometer</literal> user:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name ceilometer --pass <replaceable>CEILOMETER_PASS</replaceable></userinput></screen>
<para>Replace <replaceable>CEILOMETER_PASS</replaceable> with a
suitable password.</para>
</step>
<step>
<para>Link the <literal>ceilometer</literal> user to the
<literal>service</literal> tenant and <literal>admin</literal>
role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user ceilometer --tenant service --role admin</userinput></screen>
</step>
<step>
<para>Create the <literal>ceilometer</literal> service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name ceilometer --type metering \
--description "Telemetry"</userinput></screen>
</step>
<step>
<para>Create the Identity service endpoints:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ metering / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:8777 \
--internalurl http://<replaceable>controller</replaceable>:8777 \
--adminurl http://<replaceable>controller</replaceable>:8777 \
--region regionOne</userinput></screen>
</step>
</substeps>
</step>
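The `awk` filter inside the endpoint command can be checked offline. A minimal sketch, run against made-up `keystone service-list` rows (the IDs below are placeholders, not real service IDs):

```shell
# Dry run of the service-id lookup used by keystone endpoint-create:
# awk prints the second whitespace-separated field (the ID column) of
# the row whose type column contains " metering ".
printf '%s\n' \
  '| 1a2b3c4d5e6f | ceilometer | metering | Telemetry |' \
  '| 9f8e7d6c5b4a | keystone   | identity | Identity  |' \
  | awk '/ metering / {print $2}'
```

With real output the same pattern selects the ID of the `metering` service row.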
</procedure>
<procedure os="debian">
<title>To configure prerequisites</title>
<para>Before you install and configure Telemetry, you must install
<application>MongoDB</application>.</para>
<step>
<para>Install the MongoDB package:</para>
<screen><prompt>#</prompt> <userinput>apt-get install mongodb-server</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/mongodb.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>Configure the <literal>bind_ip</literal> key to use the
management interface IP address of the controller node.</para>
<programlisting language="ini">bind_ip = 10.0.0.11</programlisting>
</step>
<step>
<para>By default, MongoDB creates several 1GB journal files
in the <filename>/var/lib/mongodb/journal</filename>
directory. If you want to reduce the size of each journal file
to 128MB and limit total journal space consumption to
512MB, assert the <literal>smallfiles</literal> key:</para>
<programlisting language="ini">smallfiles = true</programlisting>
<para>If you change the journaling configuration, stop the MongoDB
service, remove the initial journal files, and start the
service:</para>
<screen><prompt>#</prompt> <userinput>service mongodb stop</userinput>
<prompt>#</prompt> <userinput>rm /var/lib/mongodb/journal/prealloc.*</userinput>
<prompt>#</prompt> <userinput>service mongodb start</userinput></screen>
<para>You can also disable journaling. For more information, see
the <link xlink:href="http://docs.mongodb.org/manual/"
>MongoDB manual</link>.</para>
</step>
<step>
<para>Restart the MongoDB service:</para>
<screen><prompt>#</prompt> <userinput>service mongodb restart</userinput></screen>
</step>
</substeps>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To install and configure the Telemetry module components</title>
<step>
<para>Install the packages:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central \
ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier \
python-ceilometerclient</userinput></screen>
<screen os="centos;fedora;rhel"><prompt>#</prompt> <userinput>yum install openstack-ceilometer-api openstack-ceilometer-collector \
openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm \
python-ceilometerclient</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-ceilometer-api openstack-ceilometer-collector \
openstack-ceilometer-agent-notification openstack-ceilometer-agent-central python-ceilometerclient \
openstack-ceilometer-alarm-evaluator openstack-ceilometer-alarm-notifier</userinput></screen>
</step>
<step>
<para>Generate a random value to use as the metering secret:</para>
<screen os="ubuntu;rhel;centos;fedora"><prompt>#</prompt> <userinput>openssl rand -hex 10</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>openssl rand 10 | hexdump -e '1/1 "%.2x"'</userinput></screen>
</step>
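The secret only needs to be generated once and the same value reused on every Telemetry node; a mismatch causes metering-message signature failures. A minimal sketch, assuming `openssl` is available, that captures the value in a shell variable for the configuration steps that follow:

```shell
# Generate 10 random bytes, hex-encoded, and keep the value so the
# identical secret can be placed into every ceilometer.conf.
METERING_SECRET=$(openssl rand -hex 10)
# 10 bytes encode to exactly 20 hexadecimal characters.
echo "${#METERING_SECRET}"
```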
<step>
<para>Edit the <filename>/etc/ceilometer/ceilometer.conf</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section,
configure database access:</para>
<programlisting language="ini">[database]
...
connection = mongodb://ceilometer:<replaceable>CEILOMETER_DBPASS</replaceable>@<replaceable>controller</replaceable>:27017/ceilometer</programlisting>
<para>Replace <replaceable>CEILOMETER_DBPASS</replaceable> with
the password you chose for the Telemetry module database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the password
you chose for the <literal>guest</literal> account in
<application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections, configure
Identity service access:</para>
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = <replaceable>CEILOMETER_PASS</replaceable></programlisting>
<para>Replace <replaceable>CEILOMETER_PASS</replaceable> with the
password you chose for the <literal>ceilometer</literal>
user in the Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[service_credentials]</literal>
section, configure service credentials:</para>
<programlisting language="ini">[service_credentials]
...
os_auth_url = http://<replaceable>controller</replaceable>:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = <replaceable>CEILOMETER_PASS</replaceable></programlisting>
<para>Replace <replaceable>CEILOMETER_PASS</replaceable> with
the password you chose for the <literal>ceilometer</literal>
user in the Identity service.</para>
</step>
<step>
<para>In the <literal>[publisher]</literal> section, configure
the metering secret:</para>
<programlisting language="ini">[publisher]
...
metering_secret = <replaceable>METERING_SECRET</replaceable></programlisting>
<para>Replace <replaceable>METERING_SECRET</replaceable> with the
random value that you generated in a previous step.</para>
</step>
<step os="ubuntu">
<para>In the <literal>[DEFAULT]</literal> section, configure the log
directory:</para>
<programlisting language="ini">[DEFAULT]
...
log_dir = /var/log/ceilometer</programlisting>
</step>
<step os="opensuse;sles">
<para>In the <literal>[collector]</literal> section, configure the
dispatcher:</para>
<programlisting language="ini">[collector]
...
dispatcher = database</programlisting>
</step>
</substeps>
</step>
</procedure>
<procedure os="debian">
<title>To install and configure the Telemetry module components</title>
<step>
<para>Install the packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central \
ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier \
python-ceilometerclient</userinput></screen>
</step>
<step>
<para>Respond to prompts for
<link linkend="debconf-dbconfig-common">database management</link>,
<link linkend="debconf-keystone_authtoken">Identity service
credentials</link>,
<link linkend="debconf-api-endpoints">service endpoint
registration</link>, and
<link linkend="debconf-rabbitmq">message broker
credentials</link>.</para>
</step>
<step>
<para>Generate a random value to use as the metering secret:</para>
<screen><prompt>#</prompt> <userinput>openssl rand -hex 10</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/ceilometer/ceilometer.conf</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[publisher]</literal> section, configure
the metering secret:</para>
<programlisting language="ini">[publisher]
...
metering_secret = <replaceable>METERING_SECRET</replaceable></programlisting>
<para>Replace <replaceable>METERING_SECRET</replaceable> with the
random value that you generated in a previous step.</para>
</step>
<step>
<para>In the <literal>[service_credentials]</literal>
section, configure service credentials:</para>
<programlisting language="ini">[service_credentials]
...
os_auth_url = http://<replaceable>controller</replaceable>:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = <replaceable>CEILOMETER_PASS</replaceable></programlisting>
<para>Replace <replaceable>CEILOMETER_PASS</replaceable> with
the password you chose for the <literal>ceilometer</literal>
user in the Identity service.</para>
</step>
</substeps>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
<step os="ubuntu;debian">
<para>Restart the Telemetry services:</para>
<screen><prompt>#</prompt> <userinput>service ceilometer-agent-central restart</userinput>
<prompt>#</prompt> <userinput>service ceilometer-agent-notification restart</userinput>
<prompt>#</prompt> <userinput>service ceilometer-api restart</userinput>
<prompt>#</prompt> <userinput>service ceilometer-collector restart</userinput>
<prompt>#</prompt> <userinput>service ceilometer-alarm-evaluator restart</userinput>
<prompt>#</prompt> <userinput>service ceilometer-alarm-notifier restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Telemetry services and configure them to start when the
system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-ceilometer-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-ceilometer-agent-notification start</userinput>
<prompt>#</prompt> <userinput>service openstack-ceilometer-agent-central start</userinput>
<prompt>#</prompt> <userinput>service openstack-ceilometer-collector start</userinput>
<prompt>#</prompt> <userinput>service openstack-ceilometer-alarm-evaluator start</userinput>
<prompt>#</prompt> <userinput>service openstack-ceilometer-alarm-notifier start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-agent-notification on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-agent-central on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-collector on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-alarm-evaluator on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-alarm-notifier on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-ceilometer-api.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-ceilometer-agent-notification.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-ceilometer-agent-central.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-ceilometer-collector.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-ceilometer-alarm-evaluator.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-ceilometer-alarm-notifier.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-ceilometer-api.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-ceilometer-agent-notification.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-ceilometer-agent-central.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-ceilometer-collector.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-ceilometer-alarm-evaluator.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-ceilometer-alarm-notifier.service</userinput></screen>
</step>
</procedure>
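The per-unit `systemctl` lines above are mechanical, so on a systemd distribution they can be driven by a loop. A hypothetical sketch (the unit names are assumed to match the openSUSE packages installed earlier) that prints the equivalent commands rather than executing them:

```shell
# Print (not run) the enable/start commands for every Telemetry unit.
for svc in api agent-notification agent-central collector \
           alarm-evaluator alarm-notifier; do
  unit="openstack-ceilometer-${svc}.service"
  echo "systemctl enable ${unit}"
  echo "systemctl start ${unit}"
done
```

Dropping the `echo`s turns the dry run into the real thing on a matching system.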
</section>


@ -3,35 +3,31 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ceilometer-install-glance">
xml:id="ceilometer-agent-glance">
<title>Configure the Image Service for Telemetry</title>
<procedure>
<step>
<para>To retrieve image samples, you must configure the Image
Service to send notifications to the bus.</para>
<para os="debian;ubuntu">Edit
<para>Edit
<filename>/etc/glance/glance-api.conf</filename> and modify the
<literal>[DEFAULT]</literal> section:</para>
<programlisting language="ini" os="debian;ubuntu">notification_driver = messaging
<programlisting language="ini">notification_driver = messaging
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para os="opensuse;sles;fedora;rhel;centos">Run the following commands:</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver messaging</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf DEFAULT rabbit_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf DEFAULT rabbit_password <replaceable>RABBIT_PASS</replaceable></userinput></screen>
</step>
<step>
<para>Restart the Image Services with their new
settings:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service glance-registry restart</userinput>
<prompt>#</prompt> <userinput>service glance-api restart</userinput></screen>
<screen os="rhel;fedora;centos;opensuse;sles"><prompt>#</prompt> <userinput>service openstack-glance-api restart</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl restart openstack-glance-api.service openstack-glance-registry.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-glance-api restart</userinput>
<prompt>#</prompt> <userinput>service openstack-glance-registry restart</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-glance-api.service openstack-glance-registry.service</userinput></screen>
</step>
</procedure>
</section>


@ -3,143 +3,118 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ceilometer-install-nova">
xml:id="ceilometer-agent-nova">
<?dbhtml stop-chunking?>
<title>Install the Compute agent for Telemetry</title>
<para>Telemetry is composed of an API service, a collector and a range
of disparate agents. This section explains how to install and configure
the agent that runs on the compute node.</para>
<procedure>
<para>Telemetry comprises an API service, a collector, and a
range of disparate agents. This procedure details how to install
and configure the agent that runs on the compute
node.</para>
<title>To configure prerequisites</title>
<step>
<para>Install the Telemetry service on the compute node:</para>
<para>Install the package:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install ceilometer-agent-compute</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-ceilometer-compute python-ceilometerclient python-pecan</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-ceilometer-agent-compute</userinput></screen>
</step>
<step>
<para os="fedora;rhel;centos;opensuse;sles">Set the following
options in the <filename>/etc/nova/nova.conf</filename>
file:</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
instance_usage_audit True</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
instance_usage_audit_period hour</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
notify_on_state_change vm_and_task_state</userinput></screen>
<note os="fedora;rhel;centos;opensuse;sles">
<para>The <option>notification_driver</option> option is a multi
valued option, which
<application>openstack-config</application> cannot set
properly. See <xref linkend="basics-packages"/>.
</para>
</note>
<para>Edit the
<filename>/etc/nova/nova.conf</filename> file and add the
following lines to the <literal>[DEFAULT]</literal>
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and
add the following lines to the <literal>[DEFAULT]</literal>
section:</para>
<programlisting os="ubuntu;debian" language="ini">[DEFAULT]
<programlisting language="ini">[DEFAULT]
...
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier</programlisting>
<programlisting os = "fedora;rhel;centos;opensuse;sles" language="ini">[DEFAULT]
...
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier</programlisting>
</step>
<step>
<para>Restart the Compute service:</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>service openstack-nova-compute restart</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-compute restart</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-compute.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-nova-compute restart</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-compute.service</userinput></screen>
</step>
</procedure>
<procedure>
<title>To configure the Compute agent for Telemetry</title>
<para>Edit the <filename>/etc/ceilometer/ceilometer.conf</filename>
file and complete the following actions:</para>
<step>
<para>You must set the secret key that you defined previously.
The Telemetry service nodes share this key as a shared
secret:</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf publisher \
metering_secret <replaceable>CEILOMETER_TOKEN</replaceable></userinput></screen>
<para os="ubuntu;debian">Edit the
<filename>/etc/ceilometer/ceilometer.conf</filename> file
and change these lines in the <literal>[publisher]</literal>
section. Replace <replaceable>CEILOMETER_TOKEN</replaceable> with
the ceilometer token that you created previously:</para>
<programlisting os="ubuntu;debian" language="ini">[publisher]
<para>In the <literal>[publisher]</literal> section, set the
secret key for Telemetry service nodes:</para>
<programlisting language="ini">[publisher]
# Secret value for signing metering messages (string value)
metering_secret = <replaceable>CEILOMETER_TOKEN</replaceable></programlisting>
<para>Replace <replaceable>CEILOMETER_TOKEN</replaceable> with
the ceilometer token that you created previously.</para>
</step>
<step os="opensuse;sles;ubuntu;rhel;centos;fedora">
<para>Configure the RabbitMQ access:</para>
<screen os="opensuse;sles;rhel;centos;fedora"><prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rabbit_host controller</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rabbit_password <replaceable>RABBIT_PASS</replaceable></userinput></screen>
<para os="ubuntu">Edit the <filename>/etc/ceilometer/ceilometer.conf</filename> file and update the <literal>[DEFAULT]</literal> section:</para>
<programlisting os="ubuntu" language="ini">[DEFAULT]
<step os="centos;fedora;opensuse;rhel;sles;ubuntu">
<para>In the <literal>[DEFAULT]</literal> section, configure
RabbitMQ broker access:</para>
<programlisting language="ini">[DEFAULT]
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the password
you chose for the guest account in RabbitMQ.</para>
</step>
<step>
<para>Add the Identity service credentials:</para>
<screen os="centos;rhel;fedora;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken auth_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken admin_user ceilometer</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken auth_protocol http</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken admin_password <replaceable>CEILOMETER_PASS</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
service_credentials os_username ceilometer</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
service_credentials os_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
service_credentials os_password <replaceable>CEILOMETER_PASS</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
service_credentials os_auth_url http://<replaceable>controller</replaceable>:5000/v2.0</userinput></screen>
<para os="ubuntu;debian">Edit the
<filename>/etc/ceilometer/ceilometer.conf</filename> file
and change the <literal>[keystone_authtoken]</literal>
section:</para>
<programlisting os="ubuntu;debian" language="ini">[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
<step>
<para>In the <literal>[keystone_authtoken]</literal> section,
configure Identity service access:</para>
<programlisting language="ini">[keystone_authtoken]
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = <replaceable>CEILOMETER_PASS</replaceable></programlisting>
<para os="ubuntu;debian">Also set the
<literal>[service_credentials]</literal> section:</para>
<programlisting os="ubuntu;debian" language="ini">[service_credentials]
<para>Replace <replaceable>CEILOMETER_PASS</replaceable> with the
password you chose for the <literal>ceilometer</literal> user in
the Identity service.</para>
<note>
<para>Comment out the <literal>auth_host</literal>,
<literal>auth_port</literal>, and <literal>auth_protocol</literal>
keys, since they are replaced by the <literal>identity_uri</literal>
and <literal>auth_uri</literal> keys.</para>
</note>
</step>
<step>
<para>In the <literal>[service_credentials]</literal> section,
configure service credentials:</para>
<programlisting language="ini">[service_credentials]
os_auth_url = http://<replaceable>controller</replaceable>:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = <replaceable>CEILOMETER_PASS</replaceable></programlisting>
os_password = <replaceable>CEILOMETER_PASS</replaceable>
os_endpoint_type = internalURL</programlisting>
<para>Replace <replaceable>CEILOMETER_PASS</replaceable> with the
password you chose for the <literal>ceilometer</literal> user in
the Identity service.</para>
</step>
<step os="ubuntu">
<para>Configure the log directory.</para>
<para>Edit the <filename>/etc/ceilometer/ceilometer.conf</filename> file
and update the <literal>[DEFAULT]</literal> section:</para>
<programlisting os="ubuntu" language="ini">[DEFAULT]
<para>In the <literal>[DEFAULT]</literal> section, configure the
log directory:</para>
<programlisting language="ini">[DEFAULT]
log_dir = /var/log/ceilometer</programlisting>
</step>
</procedure>
<procedure>
<title>To finish installation</title>
<step os="ubuntu;debian">
<para>Restart the service with its new settings:</para>
<screen><prompt>#</prompt> <userinput>service ceilometer-agent-compute restart</userinput></screen>
</step>
<step os="rhel;fedora;centos;opensuse;sles">
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the service and configure it to start when the
system boots:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service openstack-ceilometer-agent-compute start</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-ceilometer-compute.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-ceilometer-compute.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-ceilometer-agent-compute start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-agent-compute on</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>service openstack-ceilometer-compute start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-compute on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-ceilometer-compute.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-ceilometer-compute.service</userinput></screen>
</step>
</procedure>
</section>


@ -3,16 +3,23 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ceilometer-install-swift">
xml:id="ceilometer-agent-swift">
<title>Configure the Object Storage service for Telemetry</title>
<procedure>
<step>
<para>Install the <package>python-ceilometerclient</package>
package on your Object Storage proxy server:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install python-ceilometerclient</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install python-ceilometerclient</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install python-ceilometerclient</userinput></screen>
</step>
<step>
<para>To retrieve object store statistics, the Telemetry service
needs access to Object Storage with the
<literal>ResellerAdmin</literal> role. Give this role to
your <literal>os_username</literal> user for the
<literal>os_tenant_name</literal> tenant:</para>
<screen><prompt>$</prompt> <userinput>keystone role-create --name=ResellerAdmin</userinput>
<screen><prompt>$</prompt> <userinput>keystone role-create --name ResellerAdmin</userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
@ -38,10 +45,25 @@ use = egg:ceilometer#swift</programlisting>
<programlisting language="ini">[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth ceilometer proxy-server</programlisting>
</step>
<step>
<para>Add the system user <literal>swift</literal> to the system group
<literal>ceilometer</literal> to give Object Storage access to the
<filename>ceilometer.conf</filename> file.</para>
<screen><prompt>#</prompt> <userinput>usermod -a -G ceilometer swift</userinput></screen>
</step>
<step>
<para>Add <literal>ResellerAdmin</literal> to the
<literal>operator_roles</literal> parameter of that same file:</para>
<programlisting language="ini">operator_roles = Member,admin,swiftoperator,_member_,ResellerAdmin</programlisting>
</step>
<step>
<para>Restart the service with its new settings:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service swift-proxy restart</userinput></screen>
<screen os="rhel;fedora;centos;sles;opensuse"><prompt>#</prompt> <userinput>service openstack-swift-proxy restart</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl restart openstack-swift-proxy.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-swift-proxy restart</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-swift-proxy.service</userinput></screen>
</step>
</procedure>
</section>

View File

@@ -22,7 +22,7 @@
</step>
<step>
<para>Download an image from the Image Service:</para>
<screen><prompt>$</prompt> <userinput>glance image-download "cirros-0.3.2-x86_64" > cirros.img</userinput></screen>
<screen><prompt>$</prompt> <userinput>glance image-download "cirros-0.3.3-x86_64" > cirros.img</userinput></screen>
</step>
<step>
<para>Call the <literal>ceilometer meter-list</literal> command again to

View File

@@ -0,0 +1,264 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="cinder-install-controller-node">
<title>Install and configure controller node</title>
<para>This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.</para>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure prerequisites</title>
<para>Before you install and configure the Block Storage service, you must
create a database and Identity service credentials including
endpoints.</para>
<step>
<para>To create the database, complete these steps:</para>
<substeps>
<step>
<para>Use the database access client to connect to the database
server as the <literal>root</literal> user:</para>
<screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput></screen>
</step>
<step>
<para>Create the <literal>cinder</literal> database:</para>
<screen><userinput>CREATE DATABASE cinder;</userinput></screen>
</step>
<step>
<para>Grant proper access to the <literal>cinder</literal>
database:</para>
<screen><userinput>GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY '<replaceable>CINDER_DBPASS</replaceable>';</userinput>
<userinput>GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY '<replaceable>CINDER_DBPASS</replaceable>';</userinput></screen>
<para>Replace <replaceable>CINDER_DBPASS</replaceable> with
a suitable password.</para>
</step>
<step>
<para>Exit the database access client.</para>
</step>
</substeps>
</step>
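The pair of GRANT statements above can be generated mechanically for any service database. The helper below is this example's own construct, not part of any OpenStack tool:

```python
# Hypothetical helper: build the two GRANT statements for a service database.
def grant_statements(database, user, password):
    template = ("GRANT ALL PRIVILEGES ON {db}.* TO '{user}'@'{host}' "
                "IDENTIFIED BY '{pw}';")
    # MySQL treats 'user'@'localhost' and 'user'@'%' as separate accounts,
    # so both grants are needed for local and remote access.
    return [template.format(db=database, user=user, host=h, pw=password)
            for h in ("localhost", "%")]

statements = grant_statements("cinder", "cinder", "CINDER_DBPASS")
```

`CINDER_DBPASS` is the same placeholder used in the procedure; substitute your real password.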
<step>
<para>Source the <literal>admin</literal> credentials to gain access to
admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>To create the Identity service credentials, complete these
steps:</para>
<substeps>
<step>
<para>Create a <literal>cinder</literal> user:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name cinder --pass <replaceable>CINDER_PASS</replaceable></userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 881ab2de4f7941e79504a759a83308be |
| name | cinder |
| username | cinder |
+----------+----------------------------------+</computeroutput></screen>
<para>Replace <replaceable>CINDER_PASS</replaceable> with a suitable
password.</para>
</step>
<step>
<para>Link the <literal>cinder</literal> user to the
<literal>service</literal> tenant and <literal>admin</literal>
role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user cinder --tenant service --role admin</userinput></screen>
<note>
<para>This command provides no output.</para>
</note>
</step>
<step>
<para>Create the <literal>cinder</literal> services:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name cinder --type volume \
--description "OpenStack Block Storage"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 1e494c3e22a24baaafcaf777d4d467eb |
| name | cinder |
| type | volume |
+-------------+----------------------------------+</computeroutput>
<prompt>$</prompt> <userinput>keystone service-create --name cinderv2 --type volumev2 \
--description "OpenStack Block Storage"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 16e038e449c94b40868277f1d801edb5 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+</computeroutput></screen>
<note>
<para>The Block Storage service requires two different services
to support API versions 1 and 2.</para>
</note>
</step>
<step>
<para>Create the Block Storage service endpoints:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volume / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:8776/v1/%\(tenant_id\)s \
--internalurl http://<replaceable>controller</replaceable>:8776/v1/%\(tenant_id\)s \
--adminurl http://<replaceable>controller</replaceable>:8776/v1/%\(tenant_id\)s \
--region regionOne</userinput>
<computeroutput>+-------------+-----------------------------------------+
| Property | Value |
+-------------+-----------------------------------------+
| adminurl | http://controller:8776/v1/%(tenant_id)s |
| id | d1b7291a2d794e26963b322c7f2a55a4 |
| internalurl | http://controller:8776/v1/%(tenant_id)s |
| publicurl | http://controller:8776/v1/%(tenant_id)s |
| region | regionOne |
| service_id | 1e494c3e22a24baaafcaf777d4d467eb |
+-------------+-----------------------------------------+</computeroutput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:8776/v2/%\(tenant_id\)s \
--internalurl http://<replaceable>controller</replaceable>:8776/v2/%\(tenant_id\)s \
--adminurl http://<replaceable>controller</replaceable>:8776/v2/%\(tenant_id\)s \
--region regionOne</userinput>
<computeroutput>+-------------+-----------------------------------------+
| Property | Value |
+-------------+-----------------------------------------+
| adminurl | http://controller:8776/v2/%(tenant_id)s |
| id | 097b4a6fc8ba44b4b10d4822d2d9e076 |
| internalurl | http://controller:8776/v2/%(tenant_id)s |
| publicurl | http://controller:8776/v2/%(tenant_id)s |
| region | regionOne |
| service_id | 16e038e449c94b40868277f1d801edb5 |
+-------------+-----------------------------------------+</computeroutput></screen>
<note>
<para>The Block Storage service requires two different endpoints
to support API versions 1 and 2.</para>
</note>
</step>
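The `%\(tenant_id\)s` in the endpoint URLs above is a shell-escaped Python %-style placeholder: the backslashes only protect the parentheses from the shell, and the stored template is expanded per request. A quick sketch, using a made-up tenant ID:

```python
# The endpoint URL as stored by Keystone, after shell unescaping.
template = "http://controller:8776/v1/%(tenant_id)s"

# Clients substitute the tenant ID at request time; this ID is invented
# for illustration.
url = template % {"tenant_id": "f2b8a39c2e3d4d0caa4a2a2b3c1d9e8f"}
```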
</substeps>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To install and configure Block Storage controller components</title>
<step>
<para>Install the packages:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install cinder-api cinder-scheduler python-cinderclient</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-cinder python-cinderclient python-oslo-db</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-cinder-api openstack-cinder-scheduler python-cinderclient</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/cinder/cinder.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section, configure
database access:</para>
<programlisting language="ini">[database]
...
connection = mysql://cinder:<replaceable>CINDER_DBPASS</replaceable>@controller/cinder</programlisting>
<para>Replace <replaceable>CINDER_DBPASS</replaceable> with the
password you chose for the Block Storage database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the
password you chose for the <literal>guest</literal> account in
<application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = cinder
admin_password = <replaceable>CINDER_PASS</replaceable></programlisting>
<para>Replace <replaceable>CINDER_PASS</replaceable> with the
password you chose for the <literal>cinder</literal> user in the
Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
<literal>my_ip</literal> option to use the management interface IP
address of the controller node:</para>
<programlisting language="ini">[DEFAULT]
...
my_ip = 10.0.0.11</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
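The `cinder.conf` edits above can be modeled with `configparser` to check the final layout. This is a sketch only, with the guide's placeholder passwords; real deployments edit `/etc/cinder/cinder.conf` in place (for example with `crudini`):

```python
import configparser
import io

cp = configparser.ConfigParser()
cp["DEFAULT"] = {
    "rpc_backend": "rabbit",
    "rabbit_host": "controller",
    "rabbit_password": "RABBIT_PASS",   # placeholder password
    "auth_strategy": "keystone",
    "my_ip": "10.0.0.11",
}
cp["database"] = {
    "connection": "mysql://cinder:CINDER_DBPASS@controller/cinder",
}
cp["keystone_authtoken"] = {
    "auth_uri": "http://controller:5000/v2.0",
    "identity_uri": "http://controller:35357",
    "admin_tenant_name": "service",
    "admin_user": "cinder",
    "admin_password": "CINDER_PASS",    # placeholder password
}

# Render to a string instead of writing the real file.
buf = io.StringIO()
cp.write(buf)
rendered = buf.getvalue()
```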
<step>
<para>Populate the Block Storage database:</para>
<screen><prompt>#</prompt> <userinput>su -s /bin/sh -c "cinder-manage db sync" cinder</userinput></screen>
</step>
</procedure>
<procedure os="debian">
<title>To install and configure Block Storage controller components</title>
<step>
<para>Install the packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install cinder-api cinder-scheduler python-cinderclient</userinput></screen>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
<step os="ubuntu;debian">
<para>Restart the Block Storage services:</para>
<screen><prompt>#</prompt> <userinput>service cinder-scheduler restart</userinput>
<prompt>#</prompt> <userinput>service cinder-api restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Block Storage services and configure them to start when
the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-cinder-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-cinder-scheduler start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-cinder-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-cinder-scheduler on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service</userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create an SQLite database.</para>
<para>Because this configuration uses a SQL database server, you can
remove the SQLite database file:</para>
<screen><prompt>#</prompt> <userinput>rm -f /var/lib/cinder/cinder.sqlite</userinput></screen>
</step>
</procedure>
</section>

View File

@@ -0,0 +1,264 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="cinder-install-storage-node">
<?dbhtml stop-chunking?>
<title>Install and configure a storage node</title>
<para>This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device
<literal>/dev/sdb</literal> that contains a suitable partition table with
one partition <literal>/dev/sdb1</literal> occupying the entire device.
The service provisions logical volumes on this device using the
<glossterm>LVM</glossterm> driver and provides them to instances via
<glossterm baseform="Internet Small Computer Systems Interface (iSCSI)"
>iSCSI</glossterm> transport. You can follow these instructions with
minor modifications to horizontally scale your environment with
additional storage nodes.</para>
<procedure>
<title>To configure prerequisites</title>
<para>You must configure the storage node before you install and
configure the volume service on it. Similar to the controller node,
the storage node contains one network interface on the
<glossterm>management network</glossterm>. The storage node also
needs an empty block storage device of suitable size for your
      environment.</para>
<step>
<para>Configure the management interface:</para>
<para>IP address: 10.0.0.41</para>
<para>Network mask: 255.255.255.0 (or /24)</para>
<para>Default gateway: 10.0.0.1</para>
</step>
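The addressing above can be sanity-checked with the standard library `ipaddress` module; the values mirror the example architecture:

```python
import ipaddress

# Management interface of the storage node (block1) and the gateway
# from the step above.
iface = ipaddress.ip_interface("10.0.0.41/24")
gateway = ipaddress.ip_address("10.0.0.1")

# /24 corresponds to the 255.255.255.0 mask, and the gateway must sit
# in the same subnet as the node.
```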
<step>
<para>Set the hostname of the node to
<replaceable>block1</replaceable>.</para>
</step>
<step>
<para>Copy the contents of the <filename>/etc/hosts</filename> file from
the controller node to the storage node and add the following
to it:</para>
<programlisting language="ini"># block1
10.0.0.41 block1</programlisting>
<para>Also add this content to the <filename>/etc/hosts</filename> file
on all other nodes in your environment.</para>
</step>
<step>
<para>Install the LVM packages:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install lvm2</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install lvm2</userinput></screen>
<note>
<para>Some distributions include LVM by default.</para>
</note>
</step>
<step os="rhel;centos;fedora">
<para>Start the LVM metadata service and configure it to start when the
system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable lvm2-lvmetad.service</userinput>
<prompt>#</prompt> <userinput>systemctl start lvm2-lvmetad.service</userinput></screen>
</step>
<step>
<para>Create the LVM physical volume <literal>/dev/sdb1</literal>:</para>
<screen><prompt>#</prompt> <userinput>pvcreate /dev/sdb1</userinput>
<computeroutput> Physical volume "/dev/sdb1" successfully created</computeroutput></screen>
<note>
<para>If your system uses a different device name, adjust these
steps accordingly.</para>
</note>
</step>
<step>
<para>Create the LVM volume group
<literal>cinder-volumes</literal>:</para>
<screen><prompt>#</prompt> <userinput>vgcreate cinder-volumes /dev/sdb1</userinput>
<computeroutput> Volume group "cinder-volumes" successfully created</computeroutput></screen>
<para>The Block Storage service creates logical volumes in this
volume group.</para>
</step>
<step>
<para>Only instances can access Block Storage volumes. However, the
underlying operating system manages the devices associated with
the volumes. By default, the LVM volume scanning tool scans the
<literal>/dev</literal> directory for block storage devices that
contain volumes. If tenants use LVM on their volumes, the scanning
      tool detects these volumes and attempts to cache them, which can cause
a variety of problems with both the underlying operating system
and tenant volumes. You must reconfigure LVM to scan only the devices
      that contain the <literal>cinder-volumes</literal> volume group. Edit
the <filename>/etc/lvm/lvm.conf</filename> file and complete the
following actions:</para>
<substeps>
<step>
<para>In the <literal>devices</literal> section, add a filter
that accepts the <literal>/dev/sdb</literal> device and rejects
all other devices:</para>
<programlisting language="ini">devices {
...
filter = [ "a/sdb/", "r/.*/"]
}</programlisting>
<para>Each item in the filter array begins with <literal>a</literal>
for <emphasis>accept</emphasis> or <literal>r</literal> for
<emphasis>reject</emphasis> and includes a regular expression
for the device name. The array must end with
<literal>r/.*/</literal> to reject any remaining
devices. You can use the <command>vgs -vvvv</command>
command to test filters.</para>
<warning>
<para>If your storage nodes use LVM on the operating system disk,
you must also add the associated device to the filter. For
example, if the <literal>/dev/sda</literal> device contains
the operating system:</para>
            <programlisting language="ini">filter = [ "a/sda/", "a/sdb/", "r/.*/"]</programlisting>
<para>Similarly, if your compute nodes use LVM on the operating
system disk, you must also modify the filter in the
              <filename>/etc/lvm/lvm.conf</filename> file on those nodes to
include only the operating system disk. For example, if the
<literal>/dev/sda</literal> device contains the operating
system:</para>
            <programlisting language="ini">filter = [ "a/sda/", "r/.*/"]</programlisting>
</warning>
</step>
</substeps>
</step>
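The first-match semantics of the LVM filter can be modeled in a few lines. This is a simplified model for reasoning about the examples above, assuming the common case where `/` is the regex delimiter; it is not a reimplementation of LVM's full parser:

```python
import re

def lvm_filter_accepts(device, patterns):
    """First matching pattern wins: 'a/…/' accepts, 'r/…/' rejects."""
    for item in patterns:
        action, regex = item[0], item[2:-1]   # "a/sdb/" -> ("a", "sdb")
        if re.search(regex, device):
            return action == "a"
    return True   # LVM accepts devices that match no pattern

# The filter from the example above: accept sdb, reject everything else.
FILTER = ["a/sdb/", "r/.*/"]
```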
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>Install and configure Block Storage volume components</title>
<step>
<para>Install the packages:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install cinder-volume python-mysqldb</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-cinder targetcli python-oslo-db MySQL-python</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-cinder-volume tgt python-mysql</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/cinder/cinder.conf</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section, configure
database access:</para>
<programlisting language="ini">[database]
...
connection = mysql://cinder:<replaceable>CINDER_DBPASS</replaceable>@<replaceable>controller</replaceable>/cinder</programlisting>
<para>Replace <replaceable>CINDER_DBPASS</replaceable> with
the password you chose for the Block Storage database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the
password you chose for the <literal>guest</literal> account in
RabbitMQ.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = cinder
admin_password = <replaceable>CINDER_PASS</replaceable></programlisting>
<para>Replace <replaceable>CINDER_PASS</replaceable> with the
password you chose for the <literal>cinder</literal> user in the
Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
<literal>my_ip</literal> option:</para>
<programlisting language="ini">[DEFAULT]
...
my_ip = <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable></programlisting>
<para>Replace
<replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable> with
the IP address of the management network interface on your
storage node, typically 10.0.0.41 for the first node in the
<link linkend="architecture_example-architectures">example
architecture</link>.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
location of the Image Service:</para>
<programlisting language="ini">[DEFAULT]
...
glance_host = <replaceable>controller</replaceable></programlisting>
</step>
<step os="rhel;centos;fedora">
<para>In the <literal>[DEFAULT]</literal> section, configure Block
Storage to use the <command>lioadm</command> iSCSI
service:</para>
<programlisting language="ini">[DEFAULT]
...
iscsi_helper = lioadm</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
</procedure>
<procedure os="debian">
<title>Install and configure Block Storage volume components</title>
<step>
<para>Install the packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install cinder-volume python-mysqldb</userinput></screen>
</step>
<step>
<para>Respond to prompts for the volume group to associate with the
Block Storage service. The script scans for volume groups and
attempts to use the first one. If your system only contains the
<literal>cinder-volumes</literal> volume group, the script should
automatically choose it.</para>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
<step os="ubuntu;debian">
<para>Restart the Block Storage volume service including its
dependencies:</para>
<screen><prompt>#</prompt> <userinput>service tgt restart</userinput>
<prompt>#</prompt> <userinput>service cinder-volume restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Block Storage volume service including its dependencies
and configure them to start when the system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable openstack-cinder-volume.service target.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-cinder-volume.service target.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service tgtd start</userinput>
<prompt>#</prompt> <userinput>chkconfig tgtd on</userinput>
<prompt>#</prompt> <userinput>service openstack-cinder-volume start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-cinder-volume on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-cinder-volume.service tgtd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-cinder-volume.service tgtd.service</userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create an SQLite database.
Because this configuration uses a SQL database server, remove
the SQLite database file:</para>
<screen><prompt>#</prompt> <userinput>rm -f /var/lib/cinder/cinder.sqlite</userinput></screen>
</step>
</procedure>
</section>

View File

@@ -4,32 +4,51 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="cinder-verify">
<title>Verify the Block Storage installation</title>
<para>To verify that the Block Storage is installed and configured properly,
create a new volume.</para>
<title>Verify operation</title>
<para>This section describes how to verify operation of the Block Storage
service by creating a volume.</para>
<para>For more information about how to manage volumes, see the <link
xlink:href="http://docs.openstack.org/user-guide/content/index.html"
><citetitle>OpenStack User Guide</citetitle></link>.</para>
><citetitle>OpenStack User Guide</citetitle></link>.</para>
<note>
<para>Perform these commands on the controller node.</para>
</note>
<procedure>
<step>
<para>Source the <filename>demo-openrc.sh</filename> file:</para>
<para>Source the <literal>admin</literal> credentials to gain access to
admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>List service components to verify successful launch of each
process:</para>
<screen><prompt>$</prompt> <userinput>cinder service-list</userinput>
<computeroutput>+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None |
| cinder-volume | block1 | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
+------------------+------------+------+---------+-------+----------------------------+-----------------+</computeroutput></screen>
</step>
<step>
<para>Source the <literal>demo</literal> tenant credentials to perform
the following steps as a non-administrative tenant:</para>
<screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
</step>
<step>
<para>Use the <command>cinder create</command> command to create a new volume:</para>
<screen><prompt>$</prompt> <userinput>cinder create --display-name myVolume 1</userinput>
<para>Create a 1 GB volume:</para>
<screen><prompt>$</prompt> <userinput>cinder create --display-name demo-volume1 1</userinput>
<computeroutput>+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-04-17T10:28:19.615050 |
| created_at | 2014-10-14T23:11:50.870239 |
| display_description | None |
| display_name | myVolume |
| display_name | demo-volume1 |
| encrypted | False |
| id | 5e691b7b-12e3-40b6-b714-7f17550db5d1 |
| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
| metadata | {} |
| size | 1 |
| snapshot_id | None |
@@ -39,18 +58,22 @@
+---------------------+--------------------------------------+</computeroutput></screen>
</step>
<step>
<para>Make sure that the volume has been correctly created with the
<command>cinder list</command> command:</para>
<para>Verify creation and availability of the volume:</para>
<screen><prompt>$</prompt> <userinput>cinder list</userinput>
<computeroutput>+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 5e691b7b-12e3-40b6-b714-7f17550db5d1 | available | myVolume | 1 | None | false | |
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 | 1 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+</computeroutput></screen>
<para>If the status value is not <literal>available</literal>, the volume
creation failed. Check the log files in the
<filename>/var/log/cinder/</filename> directory on the controller and
volume nodes to get information about the failure.</para>
<para>If the status does not indicate <literal>available</literal>,
check the logs in the <filename>/var/log/cinder</filename> directory
on the controller and volume nodes for more information.</para>
<note>
<para>The
<link linkend="launch-instance">launch an instance</link>
chapter includes instructions for attaching this volume to an
instance.</para>
</note>
</step>
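The availability check above can also be done programmatically by parsing the CLI table. The table text below is abbreviated from the example output, and the parsing assumes the standard OpenStack ASCII-table layout:

```python
# Abbreviated `cinder list` output from the example above.
TABLE = """\
+--------------------------------------+-----------+--------------+------+
|                  ID                  |   Status  | Display Name | Size |
+--------------------------------------+-----------+--------------+------+
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 |  1   |
+--------------------------------------+-----------+--------------+------+
"""

# Keep only the '|'-delimited rows, then zip the header with each data row.
rows = [line for line in TABLE.splitlines() if line.startswith("|")]
header = [cell.strip() for cell in rows[0].strip("|").split("|")]
volumes = [
    dict(zip(header, (cell.strip() for cell in row.strip("|").split("|"))))
    for row in rows[1:]
]
```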
</procedure>
</section>

View File

@@ -4,188 +4,140 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="install_dashboard">
<?dbhtml stop-chunking?>
<title>Install the dashboard</title>
<para>Before you can install and configure the dashboard, meet the
requirements in <xref linkend="dashboard-system-requirements"
/>.</para>
<note>
        <para>If you install only Object Storage and the Identity
            Service, the dashboard cannot display projects and is
            unusable.</para>
</note>
<para>For more information about how to deploy the dashboard, see
<link
xlink:href="http://docs.openstack.org/developer/horizon/topics/deployment.html"
>deployment topics in the developer
documentation</link>.</para>
<procedure>
<step>
            <para>As root, install the dashboard on the node that
                can contact the Identity Service:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install memcached python-memcached mod_wsgi openstack-dashboard</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install memcached python-python-memcached apache2-mod_wsgi openstack-dashboard openstack-dashboard-test</userinput></screen>
<note os="ubuntu">
<title>Note for Ubuntu users</title>
<para>Remove the
<literal>openstack-dashboard-ubuntu-theme</literal>
                    package. This theme prevents translations, several
                    menus, and the network map from rendering
                    correctly:
<screen><prompt>#</prompt> <userinput>apt-get remove --purge openstack-dashboard-ubuntu-theme</userinput></screen>
</para>
</note>
<note os="debian">
<title>Note for Debian users</title>
<para>To install the Apache package:</para>
<screen><prompt>#</prompt> <userinput>apt-get install openstack-dashboard-apache</userinput></screen>
<para>This command installs and configures Apache
correctly, provided that the user asks for it
during the <package>debconf</package> prompts. The
default SSL certificate is self-signed, and it is
probably wise to have it signed by a root
Certificate Authority (CA).</para>
</note>
</step>
<step>
<para>Modify the value of
<literal>CACHES['default']['LOCATION']</literal>
in <filename os="ubuntu;debian"
>/etc/openstack-dashboard/local_settings.py</filename><filename
os="centos;fedora;rhel"
>/etc/openstack-dashboard/local_settings</filename><filename
os="opensuse;sles"
>/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>
to match the ones set in <filename os="ubuntu;debian"
>/etc/memcached.conf</filename><filename
os="centos;fedora;rhel;opensuse;sles"
>/etc/sysconfig/memcached</filename>.</para>
<para>Open <filename os="ubuntu;debian"
>/etc/openstack-dashboard/local_settings.py</filename>
<filename os="centos;fedora;rhel"
>/etc/openstack-dashboard/local_settings</filename>
and look for this line:</para>
<programlisting language="python" linenumbering="unnumbered"><?db-font-size 75%?>CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}</programlisting>
<note>
<title>Notes</title>
<itemizedlist>
<listitem>
<para>The address and port must match the ones
set in <filename os="ubuntu;debian"
>/etc/memcached.conf</filename><filename
os="centos;fedora;rhel;opensuse;sles"
>/etc/sysconfig/memcached</filename>.</para>
<para>If you change the memcached settings,
you must restart the Apache web server for
the changes to take effect.</para>
</listitem>
<listitem>
                        <para>You can use session storage options
                            other than memcached. Set the session
                            back end through the
                            <parameter>SESSION_ENGINE</parameter>
                            option.</para>
</listitem>
<listitem>
<para>To change the timezone, use the
dashboard or edit the <filename
os="centos;fedora;rhel"
>/etc/openstack-dashboard/local_settings</filename><filename
os="ubuntu;debian"
>/etc/openstack-dashboard/local_settings.py</filename><filename
os="opensuse;sles"
>/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>
file.</para>
<para>Change the following parameter:
<code>TIME_ZONE = "UTC"</code></para>
</listitem>
</itemizedlist>
</note>
</step>
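The requirement that the dashboard's cache `LOCATION` match the memcached listen settings can be checked mechanically. The following sketch parses an Ubuntu/Debian-style `memcached.conf` fragment (the `-l` and `-p` options) and compares it against the `CACHES` setting; the configuration text is an inline sample standing in for the real files, not read from disk.

```python
import re

# Sample contents standing in for /etc/memcached.conf (Ubuntu/Debian format);
# on a real node you would read the actual file instead.
memcached_conf = """
-p 11211
-l 127.0.0.1
"""

# The CACHES setting from local_settings.py, as shown above.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

def memcached_endpoint(conf_text):
    """Extract the listen address and port from memcached option lines."""
    port = re.search(r'^-p\s+(\d+)', conf_text, re.M).group(1)
    addr = re.search(r'^-l\s+(\S+)', conf_text, re.M).group(1)
    return '%s:%s' % (addr, port)

# The two settings must agree, or session storage silently fails.
assert memcached_endpoint(memcached_conf) == CACHES['default']['LOCATION']
```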
<step>
                    <para>Update <literal>ALLOWED_HOSTS</literal> in
                        <filename>local_settings.py</filename> to include
                        the addresses from which you wish to access the
                        dashboard.</para>
<para>Edit <filename os="centos;fedora;rhel"
>/etc/openstack-dashboard/local_settings</filename><filename
os="ubuntu;debian"
>/etc/openstack-dashboard/local_settings.py</filename><filename
os="opensuse;sles"
>/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>:</para>
<programlisting language="python" linenumbering="unnumbered"><?db-font-size 75%?>ALLOWED_HOSTS = ['localhost', 'my-desktop']
</programlisting>
</step>
<step>
                    <para>This guide assumes that you are running the
                        dashboard on the controller node. You can easily run
                        the dashboard on a separate server by changing the
                        appropriate settings in
                        <filename>local_settings.py</filename>.</para>
<para>Edit <filename os="centos;fedora;rhel"
>/etc/openstack-dashboard/local_settings</filename><filename
os="ubuntu;debian"
>/etc/openstack-dashboard/local_settings.py</filename><filename
os="opensuse;sles"
>/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>
and change <literal>OPENSTACK_HOST</literal> to the
hostname of your Identity Service:</para>
<programlisting language="python" linenumbering="unnumbered"><?db-font-size 75%?>OPENSTACK_HOST = "controller"
</programlisting>
</step>
<step os="opensuse;sles">
                    <para>Set up the Apache configuration:
<screen><prompt>#</prompt> <userinput>cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
/etc/apache2/conf.d/openstack-dashboard.conf</userinput>
<?dbhtml stop-chunking?>
<title>Install and configure</title>
<para>This section describes how to install and configure the dashboard
on the controller node.</para>
<para>Before you proceed, verify that your system meets the requirements
in <xref linkend="dashboard-system-requirements"/>. Also, the dashboard
relies on functional core services including Identity, Image Service,
Compute, and either Networking (neutron) or legacy networking
(nova-network). Environments with stand-alone services such as Object
Storage cannot use the dashboard. For more information, see the
<link xlink:href="http://docs.openstack.org/developer/horizon/topics/deployment.html">developer documentation</link>.</para>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To install the dashboard components</title>
<step>
<para>Install the packages:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install openstack-dashboard apache2 libapache2-mod-wsgi memcached python-memcache</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-dashboard httpd mod_wsgi memcached python-memcached</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-dashboard apache2-mod_wsgi memcached python-python-memcached \
openstack-dashboard-test</userinput></screen>
<note os="ubuntu">
<para>Ubuntu installs the
<package>openstack-dashboard-ubuntu-theme</package> package
as a dependency. Some users reported issues with this theme in
previous releases. If you encounter issues, remove this package
to restore the original OpenStack theme.</para>
</note>
</step>
</procedure>
<procedure os="debian">
<title>To install the dashboard components</title>
<step>
<para>Install the packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install openstack-dashboard-apache</userinput></screen>
</step>
<step>
<para>Respond to prompts for web server configuration.</para>
<note>
<para>The automatic configuration process generates a self-signed
SSL certificate. Consider obtaining an official certificate for
production environments.</para>
</note>
</step>
</procedure>
<procedure>
<title>To configure the dashboard</title>
<step os="sles;opensuse">
<para>Configure the web server:</para>
<screen><prompt>#</prompt> <userinput>cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
/etc/apache2/conf.d/openstack-dashboard.conf</userinput>
<prompt>#</prompt> <userinput>a2enmod rewrite;a2enmod ssl;a2enmod wsgi</userinput></screen>
</para>
</step>
<step os="opensuse;sles">
      <para>By default, the
        <systemitem>openstack-dashboard</systemitem>
        package enables a database as the session store. Before
        you continue, either change the session store setup
        as described in <xref linkend="dashboard-sessions"/>
        or finish setting up the database session store as
        explained in <xref
        linkend="dashboard-session-database"/>.</para>
</step>
<step os="centos;fedora;rhel">
<para>Ensure that the SELinux policy of the system is
configured to allow network connections to the HTTP
server.</para>
<screen><prompt>#</prompt> <userinput>setsebool -P httpd_can_network_connect on</userinput></screen>
</step>
<step>
<para os="ubuntu;debian">Edit the
<filename>/etc/openstack-dashboard/local_settings.py</filename>
file and complete the following actions:</para>
<para os="rhel;centos;fedora">Edit the
<filename>/etc/openstack-dashboard/local_settings</filename>
file and complete the following actions:</para>
<para os="sles;opensuse">Edit the
<filename>/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>
file and complete the following actions:</para>
<substeps>
<step>
<para>Configure the dashboard to use OpenStack services on the
<literal>controller</literal> node:</para>
<programlisting language="python">OPENSTACK_HOST = "<replaceable>controller</replaceable>"</programlisting>
</step>
<step>
<para>Allow all hosts to access the dashboard:</para>
<programlisting language="python">ALLOWED_HOSTS = ['*']</programlisting>
</step>
<step>
<para>Configure the <application>memcached</application> session
storage service:</para>
<programlisting language="python">CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}</programlisting>
<note>
<para>Comment out any other session storage configuration.</para>
</note>
<note os="sles;opensuse">
<para>By default, SLES and openSUSE use a SQL database for session
storage. For simplicity, we recommend changing the configuration
to use <application>memcached</application> for session
storage.</para>
</note>
</step>
<step>
<para>Start the Apache web server and memcached:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service apache2 start</userinput>
<para>Optionally, configure the time zone:</para>
<programlisting language="python">TIME_ZONE = "<replaceable>TIME_ZONE</replaceable>"</programlisting>
<para>Replace <replaceable>TIME_ZONE</replaceable> with an
appropriate time zone identifier. For more information, see the
<link xlink:href="http://en.wikipedia.org/wiki/List_of_tz_database_time_zones"
>list of time zones</link>.</para>
</step>
</substeps>
</step>
</procedure>
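The three edits above all touch the same file, so a quick sanity check before restarting Apache can save a debugging round-trip. This is a minimal sketch, assuming only the three setting names shown in the procedure; it scans the settings text rather than importing it, so it needs no Django installation. The sample string stands in for the real file contents.

```python
# Setting names required by the procedure above.
REQUIRED = ('OPENSTACK_HOST', 'ALLOWED_HOSTS', 'CACHES')

# Inline sample standing in for the contents of local_settings(.py).
sample_settings = '''
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
'''

def missing_settings(text, required=REQUIRED):
    """Return the required setting names that are never assigned in text."""
    return [name for name in required
            if not any(line.strip().startswith(name + ' =')
                       for line in text.splitlines())]

# An empty list means every required setting is present.
assert missing_settings(sample_settings) == []
```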
<procedure>
<title>To finalize installation</title>
<step os="rhel;centos;fedora">
<para>On RHEL and CentOS, configure SELinux to permit the web server
to connect to OpenStack services:</para>
<screen><prompt>#</prompt> <userinput>setsebool -P httpd_can_network_connect on</userinput></screen>
</step>
<step os="rhel;centos;fedora">
<para>Due to a packaging bug, the dashboard CSS fails to load properly.
Run the following command to resolve this issue:</para>
<screen><prompt>#</prompt> <userinput>chown -R apache:apache /usr/share/openstack-dashboard/static</userinput></screen>
<para>For more information, see the
<link xlink:href="https://bugzilla.redhat.com/show_bug.cgi?id=1150678"
>bug report</link>.</para>
</step>
<step os="ubuntu;debian">
<para>Restart the web server and session storage service:</para>
<screen><prompt>#</prompt> <userinput>service apache2 restart</userinput>
<prompt>#</prompt> <userinput>service memcached restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the web server and session storage service and configure
them to start when the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable httpd.service memcached.service</userinput>
<prompt>#</prompt> <userinput>systemctl start httpd.service memcached.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service apache2 start</userinput>
<prompt>#</prompt> <userinput>service memcached start</userinput>
<prompt>#</prompt> <userinput>chkconfig apache2 on</userinput>
<prompt>#</prompt> <userinput>chkconfig memcached on</userinput></screen>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>service httpd start</userinput>
<prompt>#</prompt> <userinput>service memcached start</userinput>
<prompt>#</prompt> <userinput>chkconfig httpd on</userinput>
<prompt>#</prompt> <userinput>chkconfig memcached on</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service apache2 restart</userinput>
<prompt>#</prompt> <userinput>service memcached restart</userinput></screen>
</step>
<step>
<para>You can now access the dashboard at <uri os="ubuntu"
>http://controller/horizon</uri>
<uri os="debian">https://controller/</uri>
<uri os="centos;fedora;rhel"
>http://controller/dashboard</uri>
<uri os="opensuse;sles"
>http://controller</uri>.</para>
            <para>Log in with the credentials of any user that you
                created in the OpenStack Identity Service.</para>
</step>
</procedure>
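The dashboard URL listed above differs by distribution. As a small aid, this helper derives the expected URL for a given host and distribution family; the mapping is taken directly from the URIs in this guide, and `controller` is the hostname the guide uses throughout.

```python
# URL patterns per distribution family, as listed in this guide.
DASHBOARD_PATHS = {
    'ubuntu':   'http://{host}/horizon',
    'debian':   'https://{host}/',
    'rhel':     'http://{host}/dashboard',
    'centos':   'http://{host}/dashboard',
    'fedora':   'http://{host}/dashboard',
    'opensuse': 'http://{host}',
    'sles':     'http://{host}',
}

def dashboard_url(distro, host='controller'):
    """Return the dashboard URL for the given distribution family."""
    return DASHBOARD_PATHS[distro].format(host=host)

assert dashboard_url('ubuntu') == 'http://controller/horizon'
```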
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable apache2.service memcached.service</userinput>
<prompt>#</prompt> <userinput>systemctl start apache2.service memcached.service</userinput></screen>
</step>
</procedure>
</section>

View File

@ -0,0 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard-verify">
<?dbhtml stop-chunking?>
<title>Verify operation</title>
<para>This section describes how to verify operation of the
dashboard.</para>
<procedure>
<step>
<para>Access the dashboard using a web browser:
<uri os="ubuntu">http://controller/horizon</uri>
<uri os="debian">https://controller/</uri>
<uri os="rhel;centos;fedora">http://controller/dashboard</uri>
<uri os="sles;opensuse">http://controller</uri>.</para>
</step>
<step>
<para>Authenticate using <literal>admin</literal> or
<literal>demo</literal> user credentials.</para>
</step>
</procedure>
</section>

View File

@ -7,19 +7,19 @@
<title>Register API endpoints</title>
<para>All Debian packages for API services, except the
<package>heat-api</package> package, register the service in the
Identity service catalog. This feature is helpful because API
endpoints can be difficult to remember.</para>
Identity Service catalog. This feature is helpful because API
endpoints are difficult to remember.</para>
<note>
<para>The <package>heat-common</package> package, not the
<package>heat-api</package> package, configures the
<para>The <package>heat-common</package> package and not the
<package>heat-api</package> package configures the
Orchestration service.</para>
</note>
<para>When you install a package for an API service, you are
prompted to register that service. After you install or
prompted to register that service. However, after you install or
upgrade the package for an API service, Debian immediately removes
your response to this prompt from the <package>debconf</package>
database. Consequently, you are prompted to re-register the
service with the Identity service. If you already registered the
service with the Identity Service. If you already registered the
API service, respond <literal>no</literal> when you
upgrade.</para>
<informalfigure>
@ -31,7 +31,7 @@
</imageobject>
</mediaobject>
</informalfigure>
<para>This screen registers packages in the Identity service
<para>This screen registers packages in the Identity Service
catalog:</para>
<informalfigure>
<mediaobject>
@ -42,8 +42,8 @@
</imageobject>
</mediaobject>
</informalfigure>
<para>You are prompted for the Identity service
<literal>admin_token</literal> value. The Identity service uses
<para>You are prompted for the Identity Service
<literal>admin_token</literal> value. The Identity Service uses
this value to register the API service. When you set up the
<package>keystone</package> package, this value is configured
automatically.</para>
@ -87,17 +87,17 @@
below commands for you:</para>
<programlisting language="ini">PKG_SERVICE_ID=$(pkgos_get_id keystone --os-token ${AUTH_TOKEN} \
--os-endpoint http://${KEYSTONE_ENDPOINT_IP}:35357/v2.0/ service-create \
--name=${SERVICE_NAME} --type=${SERVICE_TYPE} --description="${SERVICE_DESC}")
--name ${SERVICE_NAME} --type ${SERVICE_TYPE} --description "${SERVICE_DESC}")
keystone --os-token ${AUTH_TOKEN} \
--os-endpoint http://${KEYSTONE_ENDPOINT_IP}:35357/v2.0/
endpoint-create \
--region "${REGION_NAME}" --service_id=${PKG_SERVICE_ID} \
--publicurl=http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL} \
--internalurl=http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL} \
--adminurl=http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL})</programlisting>
--region "${REGION_NAME}" --service_id ${PKG_SERVICE_ID} \
--publicurl http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL} \
--internalurl http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL} \
--adminurl http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL})</programlisting>
<para>The values of <literal>AUTH_TOKEN</literal>, <literal>KEYSTONE_ENDPOINT_IP</literal>,
<literal>PKG_ENDPOINT_IP</literal> and <literal>REGION_NAME</literal> depend on the
answer you will provide to the debconf prompts. The values of <literal>SERVICE_NAME</literal>,
answer you will provide to the debconf prompts. But the values of <literal>SERVICE_NAME</literal>,
<literal>SERVICE_TYPE</literal>, <literal>SERVICE_DESC</literal> and <literal>SERVICE_URL</literal>
are already pre-wired in each package, so you don't have to remember them.</para>
</section>
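The shell snippet above builds all three endpoint URLs from the same endpoint IP, service port, and service URL suffix. The arithmetic can be sketched in a few lines; the variable names mirror the debconf snippet, and this only constructs the URL strings, it does not call keystone.

```python
def endpoint_urls(pkg_endpoint_ip, service_port, service_url=''):
    """Build the public/internal/admin endpoint URLs the way the Debian
    maintainer scripts do: all three share the same base URL."""
    base = 'http://%s:%s%s' % (pkg_endpoint_ip, service_port, service_url)
    return {'publicurl': base, 'internalurl': base, 'adminurl': base}

# Example with illustrative values for PKG_ENDPOINT_IP and SERVICE_PORT.
urls = endpoint_urls('10.0.0.11', 9292)
assert urls['publicurl'] == 'http://10.0.0.11:9292'
assert urls['publicurl'] == urls['adminurl'] == urls['internalurl']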

View File

@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="debconf-general-principles">
xml:id="debconf-concepts">
<?dbhtml stop-chunking?>
<title>debconf concepts</title>
<para>This chapter explains how to use the Debian <systemitem

View File

@ -13,16 +13,15 @@
for each service to work.</para>
<para>Generally, this section looks like this:</para>
<programlisting language="ini">[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%</programlisting>
<para>The debconf system helps users configure the
<code>auth_host</code>, <code>admin_tenant_name</code>,
<code>admin_user</code> and <code>admin_password</code>
options.</para>
<code>auth_uri</code>, <code>identity_uri</code>,
<code>admin_tenant_name</code>, <code>admin_user</code> and
<code>admin_password</code> options.</para>
<para>The following screens show an example Image Service
configuration:</para>
<informalfigure>

View File

@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="debconf-preseeding">
xml:id="debconf-preseed-prompts">
<title>Pre-seed debconf prompts</title>
<para>You can pre-seed all <systemitem
class="library">debconf</systemitem> prompts. To pre-seed means

View File

@ -8,12 +8,6 @@
<para>This section describes how to install and configure the Image Service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.</para>
<note>
<para>This section assumes proper installation, configuration, and
operation of the Identity service as described in
<xref linkend="keystone-install"/> and
<xref linkend="keystone-verify"/>.</para>
</note>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure prerequisites</title>
<para>Before you install and configure the Image Service, you must create
@ -28,19 +22,20 @@
</step>
<step>
<para>Create the <literal>glance</literal> database:</para>
<screen><prompt>mysql></prompt> <userinput>CREATE DATABASE glance;</userinput></screen>
<screen><userinput>CREATE DATABASE glance;</userinput></screen>
</step>
<step>
<para>Grant proper access to the <literal>glance</literal>
database:</para>
<screen><prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '<replaceable>GLANCE_DBPASS</replaceable>';</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '<replaceable>GLANCE_DBPASS</replaceable>';</userinput></screen>
<screen><userinput>GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY '<replaceable>GLANCE_DBPASS</replaceable>';</userinput>
<userinput>GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY '<replaceable>GLANCE_DBPASS</replaceable>';</userinput></screen>
<para>Replace <replaceable>GLANCE_DBPASS</replaceable> with a suitable
password.</para>
</step>
<step>
<para>Exit the database access client:</para>
<screen><prompt>mysql></prompt> <userinput>exit</userinput></screen>
<para>Exit the database access client.</para>
</step>
</substeps>
</step>
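The pair of `GRANT` statements above follows the same pattern for every OpenStack service database: one grant for `localhost` and one for any host (`%`). As an illustration, this helper generates the SQL text for a given service name and password; it only builds the statements and does not connect to MySQL.

```python
def grant_statements(db, password):
    """Build the localhost and wildcard-host GRANT statements used in
    this guide for a service whose database and user share a name."""
    template = ("GRANT ALL PRIVILEGES ON {db}.* TO '{db}'@'{host}' "
                "IDENTIFIED BY '{pw}';")
    return [template.format(db=db, host=h, pw=password)
            for h in ('localhost', '%')]

# GLANCE_DBPASS is the placeholder password from the procedure above.
for stmt in grant_statements('glance', 'GLANCE_DBPASS'):
    print(stmt)
```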
@ -50,35 +45,67 @@
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>To create the Identity service credentials, complete these steps:</para>
<para>To create the Identity service credentials, complete these
steps:</para>
<substeps>
<step>
<para>Create the <literal>glance</literal> user:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name=glance --pass=<replaceable>GLANCE_PASS</replaceable> --email=<replaceable>EMAIL_ADDRESS</replaceable></userinput></screen>
<screen><prompt>$</prompt> <userinput>keystone user-create --name glance --pass <replaceable>GLANCE_PASS</replaceable></userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | f89cca5865dc42b18e2421fa5f5cce66 |
| name | glance |
| username | glance |
+----------+----------------------------------+</computeroutput></screen>
<para>Replace <replaceable>GLANCE_PASS</replaceable> with a suitable
password and <replaceable>EMAIL_ADDRESS</replaceable> with
a suitable e-mail address.</para>
password.</para>
</step>
<step>
<para>Link the <literal>glance</literal> user to the
<literal>service</literal> tenant and <literal>admin</literal>
role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user=glance --tenant=service --role=admin</userinput></screen>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user glance --tenant service --role admin</userinput></screen>
<note>
<para>This command provides no output.</para>
</note>
</step>
<step>
<para>Create the <literal>glance</literal> service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name=glance --type=image \
--description="OpenStack Image Service"</userinput></screen>
<screen><prompt>$</prompt> <userinput>keystone service-create --name glance --type image \
--description "OpenStack Image Service"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Image Service |
| enabled | True |
| id | 23f409c4e79f4c9e9d23d809c50fbacf |
| name | glance |
| type | image |
+-------------+----------------------------------+</computeroutput></screen>
</step>
</substeps>
</step>
<step>
<para>Create the Identity service endpoints:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://<replaceable>controller</replaceable>:9292 \
--internalurl=http://<replaceable>controller</replaceable>:9292 \
--adminurl=http://<replaceable>controller</replaceable>:9292</userinput></screen>
--service-id $(keystone service-list | awk '/ image / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:9292 \
--internalurl http://<replaceable>controller</replaceable>:9292 \
--adminurl http://<replaceable>controller</replaceable>:9292 \
--region regionOne</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:9292 |
| id | a2ee818c69cb475199a1ca108332eb35 |
| internalurl | http://controller:9292 |
| publicurl | http://controller:9292 |
| region | regionOne |
| service_id | 23f409c4e79f4c9e9d23d809c50fbacf |
+-------------+----------------------------------+</computeroutput></screen>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
@ -102,18 +129,6 @@ connection = mysql://glance:<replaceable>GLANCE_DBPASS</replaceable>@<replaceabl
<para>Replace <replaceable>GLANCE_DBPASS</replaceable> with the
password you chose for the Image Service database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the password
you chose for the <literal>guest</literal> account in
<application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[keystone_authtoken]</literal> and
<literal>[paste_deploy]</literal> sections, configure Identity
@ -121,9 +136,7 @@ rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<programlisting language="ini">[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
auth_host = <replaceable>controller</replaceable>
auth_port = 35357
auth_protocol = http
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = glance
admin_password = <replaceable>GLANCE_PASS</replaceable>
@ -134,6 +147,28 @@ flavor = keystone</programlisting>
<para>Replace <replaceable>GLANCE_PASS</replaceable> with the
password you chose for the <literal>glance</literal> user in the
Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[glance_store]</literal> section, configure
the local file system store and location of image files:</para>
<programlisting language="ini">[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images/</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
@ -157,25 +192,36 @@ connection = mysql://glance:<replaceable>GLANCE_DBPASS</replaceable>@<replaceabl
<programlisting language="ini">[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
auth_host = <replaceable>controller</replaceable>
auth_port = 35357
auth_protocol = http
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = glance
admin_password = <replaceable>GLANCE_PASS</replaceable>
...
[paste_deploy]
...
flavor = keystone</programlisting>
<para>Replace <replaceable>GLANCE_PASS</replaceable> with the
password you chose for the <literal>glance</literal> user in the
Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
<step>
<para>Populate the Image Service
database:</para>
<para>Populate the Image Service database:</para>
<screen><prompt>#</prompt> <userinput>su -s /bin/sh -c "glance-manage db_sync" glance</userinput></screen>
</step>
</procedure>
@ -185,16 +231,6 @@ flavor = keystone</programlisting>
<para>Install the packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install glance python-glanceclient</userinput></screen>
</step>
<step>
<para>Respond to prompts for
<link linkend="debconf-dbconfig-common">database management</link>,
<link linkend="debconf-keystone_authtoken">Identity service
credentials</link>,
<link linkend="debconf-api-endpoints">service endpoint
registration</link>, and
<link linkend="debconf-rabbitmq">message broker
credentials</link>.</para>
</step>
<step>
<para>Select the <literal>keystone</literal> pipeline to configure the
Image Service to use the Identity service:</para>
@ -217,16 +253,22 @@ flavor = keystone</programlisting>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Image Service services and configure them to start when
the system boots:</para>
<screen><prompt>#</prompt> <userinput>service openstack-glance-api start</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-glance-api.service openstack-glance-registry.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-glance-api.service openstack-glance-registry.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-glance-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-glance-registry start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-glance-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-glance-registry on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-glance-api.service openstack-glance-registry.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-glance-api.service openstack-glance-registry.service</userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create an SQLite database.</para>
<para>Because this configuration uses a SQL database server, you can
remove the SQLite database file:</para>
<screen><prompt>#</prompt> <userinput>rm /var/lib/glance/glance.sqlite</userinput></screen>
<screen><prompt>#</prompt> <userinput>rm -f /var/lib/glance/glance.sqlite</userinput></screen>
</step>
</procedure>
</section>

View File

@ -24,8 +24,8 @@
<prompt>$</prompt> <userinput>cd /tmp/images</userinput></screen>
</step>
<step>
<para>Download the image to the local directory:</para>
<screen><prompt>$</prompt> <userinput>wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img</userinput></screen>
<para>Download the image to the temporary local directory:</para>
<screen><prompt>$</prompt> <userinput>wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img</userinput></screen>
</step>
<step>
<para>Source the <literal>admin</literal> credentials to gain access to
@ -34,14 +34,15 @@
</step>
<step>
<para>Upload the image to the Image Service:</para>
<screen><prompt>$</prompt> <userinput>glance image-create --name "cirros-0.3.2-x86_64" --file cirros-0.3.2-x86_64-disk.img \
<screen><prompt>$</prompt> <userinput>glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img \
--disk-format qcow2 --container-format bare --is-public True --progress</userinput>
<computeroutput>+------------------+--------------------------------------+
<computeroutput>[=============================>] 100%
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 64d7c1cd2b6f60c92c14662941cb7913 |
| checksum | 133eae9fb1c98f45894a4e60d8736619 |
| container_format | bare |
| created_at | 2014-04-08T18:59:18 |
| created_at | 2014-10-10T13:14:42 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
@ -49,12 +50,13 @@
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-0.3.2-x86_64 |
| owner | efa984b0a914450e9a47788ad330699d |
| name | cirros-0.3.3-x86_64 |
| owner | ea8c352d253443118041c9c8b8416040 |
| protected | False |
| size | 13167616 |
| size | 13200896 |
| status | active |
| updated_at | 2014-01-08T18:59:18 |
| updated_at | 2014-10-10T13:14:43 |
| virtual_size | None |
+------------------+--------------------------------------+</computeroutput></screen>
<para>For information about the parameters for the
<command>glance image-create</command> command, see <link
@ -80,7 +82,7 @@
<computeroutput>+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.2-x86_64 | qcow2 | bare | 13167616 | active |
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.3-x86_64 | qcow2 | bare | 13200896 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+</computeroutput></screen>
</step>
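The `checksum` field in the output above is the MD5 digest of the image file, so you can verify a downloaded image independently before or after uploading it. This sketch computes the digest in chunks the way you would for a large file; the byte string here stands in for the actual cirros image contents.

```python
import hashlib

def image_checksum(data, chunk_size=8192):
    """Compute the MD5 hex digest of image bytes, feeding the hash in
    chunks as you would when reading a large file from disk."""
    md5 = hashlib.md5()
    for i in range(0, len(data), chunk_size):
        md5.update(data[i:i + chunk_size])
    return md5.hexdigest()

# Stand-in bytes; compare the result against glance's "checksum" field.
sample = b'not a real image, just sample bytes'
print(image_checksum(sample))
```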
<step>

View File

@ -3,83 +3,158 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="heat-install">
xml:id="heat-install-controller-node">
<title>Install and configure Orchestration</title>
<para>This section describes how to install and configure the
Orchestration module (heat) on the controller node.</para>
Orchestration module, code-named heat, on the controller node.</para>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure prerequisites</title>
<para>Before you install and configure Orchestration, you must create a
database and Identity service credentials including endpoints.</para>
<step>
<para>Connect to the database server as the <literal>root</literal> user:</para>
<screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput></screen>
<para>Create the <literal>heat</literal> database:</para>
<screen><prompt>mysql></prompt> <userinput>CREATE DATABASE heat;</userinput></screen>
<para>Grant the
proper access to the database:</para>
<screen><prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY '<replaceable>HEAT_DBPASS</replaceable>';</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY '<replaceable>HEAT_DBPASS</replaceable>';</userinput>
<prompt>mysql></prompt> <userinput>exit</userinput></screen>
<para>Replace <replaceable>HEAT_DBPASS</replaceable> with a suitable
password.</para>
<para>To create the database, complete these steps:</para>
<substeps>
<step>
<para>Use the database access client to connect to the database
server as the <literal>root</literal> user:</para>
<screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput></screen>
</step>
<step>
<para>Create the <literal>heat</literal> database:</para>
<screen><userinput>CREATE DATABASE heat;</userinput></screen>
</step>
<step>
<para>Grant proper access to the <literal>heat</literal>
database:</para>
<screen><userinput>GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \
IDENTIFIED BY '<replaceable>HEAT_DBPASS</replaceable>';</userinput>
<userinput>GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' \
IDENTIFIED BY '<replaceable>HEAT_DBPASS</replaceable>';</userinput></screen>
<para>Replace <replaceable>HEAT_DBPASS</replaceable> with a suitable
password.</para>
</step>
<step>
<para>Exit the database access client.</para>
</step>
</substeps>
</step>
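The interactive database steps above can also be scripted. A minimal sketch, assuming a POSIX shell on the controller node; `HEAT_DBPASS` below is a placeholder password, not a real credential:

```shell
# Build the SQL from the steps above; HEAT_DBPASS is a placeholder password.
HEAT_DBPASS=heat_dbpass_example
SQL="CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY '${HEAT_DBPASS}';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY '${HEAT_DBPASS}';"
echo "$SQL"
# On a real controller, pipe it to the database server instead of echoing:
#   echo "$SQL" | mysql -u root -p
```

The two GRANT statements are both needed: `'heat'@'localhost'` covers local socket connections, `'heat'@'%'` covers connections from other hosts.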
<step>
<para>Source the <literal>admin</literal> credentials to gain access to
admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>To create the Identity service credentials, complete these
steps:</para>
<substeps>
<step>
<para>Create the <literal>heat</literal> user:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name heat --pass <replaceable>HEAT_PASS</replaceable></userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 7fd67878dcd04d0393469ef825a7e005 |
| name | heat |
| username | heat |
+----------+----------------------------------+</computeroutput></screen>
<para>Replace <replaceable>HEAT_PASS</replaceable> with a suitable
password.</para>
</step>
<step>
<para>Link the <literal>heat</literal> user to the
<literal>service</literal> tenant and <literal>admin</literal>
role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user heat --tenant service --role admin</userinput></screen>
<note>
<para>This command provides no output.</para>
</note>
</step>
<step>
<para>Create the <literal>heat_stack_user</literal> and <literal>heat_stack_owner</literal> roles:</para>
<screen><prompt>$</prompt> <userinput>keystone role-create --name heat_stack_user</userinput>
<prompt>$</prompt> <userinput>keystone role-create --name heat_stack_owner</userinput></screen>
<para>By default, users created by Orchestration use the
<literal>heat_stack_user</literal> role.</para>
</step>
<step>
<para>Create the <literal>heat</literal> and
<literal>heat-cfn</literal> services:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name heat --type orchestration \
--description "Orchestration"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Orchestration |
| enabled | True |
| id | 031112165cad4c2bb23e84603957de29 |
| name | heat |
| type | orchestration |
+-------------+----------------------------------+</computeroutput>
<prompt>$</prompt> <userinput>keystone service-create --name heat-cfn --type cloudformation \
--description "Orchestration"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Orchestration |
| enabled | True |
| id | 297740d74c0a446bbff867acdccb33fa |
| name | heat-cfn |
| type | cloudformation |
+-------------+----------------------------------+</computeroutput></screen>
</step>
<step>
<para>Create the Identity service endpoints:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ orchestration / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:8004/v1/%\(tenant_id\)s \
--internalurl http://<replaceable>controller</replaceable>:8004/v1/%\(tenant_id\)s \
--adminurl http://<replaceable>controller</replaceable>:8004/v1/%\(tenant_id\)s \
--region regionOne</userinput>
<computeroutput>+-------------+-----------------------------------------+
| Property | Value |
+-------------+-----------------------------------------+
| adminurl | http://controller:8004/v1/%(tenant_id)s |
| id | f41225f665694b95a46448e8676b0dc2 |
| internalurl | http://controller:8004/v1/%(tenant_id)s |
| publicurl | http://controller:8004/v1/%(tenant_id)s |
| region | regionOne |
| service_id | 031112165cad4c2bb23e84603957de29 |
+-------------+-----------------------------------------+</computeroutput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ cloudformation / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:8000/v1 \
--internalurl http://<replaceable>controller</replaceable>:8000/v1 \
--adminurl http://<replaceable>controller</replaceable>:8000/v1 \
--region regionOne</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:8000/v1 |
| id | f41225f665694b95a46448e8676b0dc2 |
| internalurl | http://controller:8000/v1 |
| publicurl | http://controller:8000/v1 |
| region | regionOne |
| service_id | 297740d74c0a446bbff867acdccb33fa |
+-------------+----------------------------------+</computeroutput></screen>
</step>
</substeps>
</step>
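The endpoint commands above embed `keystone service-list` output in an `awk` lookup to recover the service ID. A sketch of that extraction against a canned row (the ID is a made-up example, not a real service ID):

```shell
# Sample row in the table format that `keystone service-list` prints;
# the ID value is invented for illustration.
row='| 031112165cad4c2bb23e84603957de29 | heat | orchestration | Orchestration |'
# awk splits on whitespace: $1 is the leading pipe, $2 is the service ID.
# The pattern / orchestration / (with spaces) avoids partial-word matches.
service_id=$(echo "$row" | awk '/ orchestration / {print $2}')
echo "$service_id"   # prints 031112165cad4c2bb23e84603957de29
```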
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To install and configure the Orchestration components</title>
<step>
<para>Run the following commands to install the packages:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install heat-api heat-api-cfn heat-engine python-heatclient</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \
python-heatclient</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \
python-heatclient</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/heat/heat.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section, configure
database access:</para>
<programlisting language="ini">[database]
...
connection = mysql://heat:<replaceable>HEAT_DBPASS</replaceable>@<replaceable>controller</replaceable>/heat</programlisting>
<para>Replace <replaceable>HEAT_DBPASS</replaceable> with the
password you chose for the Orchestration database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the
password you chose for the <literal>guest</literal> account in
<application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[keystone_authtoken]</literal> section,
configure Identity service access:</para>
<programlisting language="ini">[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = heat
admin_password = <replaceable>HEAT_PASS</replaceable></programlisting>
<para>Replace <replaceable>HEAT_PASS</replaceable> with the
password you chose for the <literal>heat</literal> user
in the Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
the metadata and wait condition URLs:</para>
<programlisting language="ini">[DEFAULT]
...
heat_metadata_server_url = http://<replaceable>controller</replaceable>:8000
heat_waitcondition_server_url = http://<replaceable>controller</replaceable>:8000/v1/waitcondition</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting, enable verbose
logging in the <literal>[DEFAULT]</literal> section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
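The edits above can be sketched as a script that writes the required options to a scratch file. All host names and passwords below are placeholders; on a real node you would merge these options into the existing `/etc/heat/heat.conf` rather than overwrite it:

```shell
CONF=heat.conf.sketch      # scratch file; stands in for /etc/heat/heat.conf
CONTROLLER=controller      # placeholder controller host name
HEAT_DBPASS=db_pass        # placeholder passwords
RABBIT_PASS=mq_pass
HEAT_PASS=svc_pass
cat > "$CONF" <<EOF
[DEFAULT]
rpc_backend = rabbit
rabbit_host = $CONTROLLER
rabbit_password = $RABBIT_PASS
heat_metadata_server_url = http://$CONTROLLER:8000
heat_waitcondition_server_url = http://$CONTROLLER:8000/v1/waitcondition
verbose = True

[database]
connection = mysql://heat:$HEAT_DBPASS@$CONTROLLER/heat

[keystone_authtoken]
auth_uri = http://$CONTROLLER:5000/v2.0
identity_uri = http://$CONTROLLER:35357
admin_tenant_name = service
admin_user = heat
admin_password = $HEAT_PASS
EOF
grep -q '^rpc_backend = rabbit$' "$CONF" && echo "options written"
```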
<step>
<para>Populate the Orchestration database:</para>
<screen><prompt>#</prompt> <userinput>su -s /bin/sh -c "heat-manage db_sync" heat</userinput></screen>
</step>
</procedure>
<procedure os="debian">
<title>To install and configure the Orchestration components</title>
<step>
<para>Run the following commands to install the packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install heat-api heat-api-cfn heat-engine python-heatclient</userinput></screen>
</step>
<step>
<para>Respond to prompts for
<link linkend="debconf-dbconfig-common">database management</link>,
<link linkend="debconf-keystone_authtoken">Identity service
credentials</link>, <link linkend="debconf-api-endpoints">service endpoint
registration</link>, and <link linkend="debconf-rabbitmq">message broker
credentials</link>.</para>
</step>
<step>
<para>Edit the <filename>/etc/heat/heat.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[ec2authtoken]</literal> section, configure
Identity service access:</para>
<programlisting language="ini">[ec2authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0</programlisting>
</step>
</substeps>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
<step os="ubuntu;debian">
<para>Restart the Orchestration services:</para>
<screen><prompt>#</prompt> <userinput>service heat-api restart</userinput>
<prompt>#</prompt> <userinput>service heat-api-cfn restart</userinput>
<prompt>#</prompt> <userinput>service heat-engine restart</userinput></screen>
</step>
<step os="rhel;fedora;centos;sles;opensuse">
<para>Start the Orchestration services and configure them to start when
the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-heat-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-heat-api-cfn start</userinput>
<prompt>#</prompt> <userinput>service openstack-heat-engine start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-heat-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-heat-api-cfn on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-heat-engine on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service</userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create a SQLite database.</para>
<para>Because this configuration uses a SQL database server, you
can remove the SQLite database file:</para>
<screen><prompt>#</prompt> <userinput>rm -f /var/lib/heat/heat.sqlite</userinput></screen>
</step>
</procedure>
</section>


<para>Use the <command>heat stack-create</command> command to create a
stack from the template:</para>
<screen><prompt>$</prompt> <userinput>NET_ID=$(nova net-list | awk '/ demo-net / { print $2 }')</userinput>
<prompt>$</prompt> <userinput>heat stack-create -f test-stack.yml \
-P "ImageID=cirros-0.3.3-x86_64;NetID=$NET_ID" testStack</userinput>
<computeroutput>+--------------------------------------+------------+--------------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+------------+--------------------+----------------------+


<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="keystone-install">
<title>Install and configure</title>
<para>This section describes how to install and configure the OpenStack
Identity service on the controller node.</para>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure prerequisites</title>
<para>Before you configure the OpenStack Identity service, you must create
a database and an administration token.</para>
<step>
<para>To create the database, complete these steps:</para>
<substeps>
<step>
<para>Use the database access client to connect to the database
server as the <literal>root</literal> user:</para>
<screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput></screen>
</step>
<step>
<para>Create the <literal>keystone</literal> database:</para>
<screen><userinput>CREATE DATABASE keystone;</userinput></screen>
</step>
<step>
<para>Grant proper access to the <literal>keystone</literal>
database:</para>
<screen><userinput>GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY '<replaceable>KEYSTONE_DBPASS</replaceable>';</userinput>
<userinput>GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY '<replaceable>KEYSTONE_DBPASS</replaceable>';</userinput></screen>
<para>Replace <replaceable>KEYSTONE_DBPASS</replaceable> with a suitable password.</para>
</step>
<step>
<para>Exit the database access client.</para>
</step>
</substeps>
</step>
<step>
<para>Generate a random value to use as the administration token during
initial configuration:</para>
<screen os="ubuntu;rhel;centos;fedora"><prompt>#</prompt> <userinput>openssl rand -hex 10</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>openssl rand 10 | hexdump -e '1/1 "%.2x"'</userinput></screen>
</step>
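Both commands above emit 10 random bytes encoded as hexadecimal, so the resulting token is always 20 hexadecimal characters. A quick sketch (assuming `openssl` is installed):

```shell
# Generate the token the same way as the step above.
ADMIN_TOKEN=$(openssl rand -hex 10)
printf '%s\n' "$ADMIN_TOKEN"
# Each of the 10 random bytes becomes two hex digits, giving 20 characters.
printf '%s' "$ADMIN_TOKEN" | wc -c
```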
</procedure>
<procedure os="debian">
<title>To configure prerequisites</title>
<step>
<para>Generate a random value to use as the administration token during
initial configuration:</para>
<screen><prompt>#</prompt> <userinput>openssl rand -hex 10</userinput></screen>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To install and configure the components</title>
<step>
<para>Run the following command to install the packages:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install keystone python-keystoneclient</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-keystone python-keystoneclient</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-keystone python-keystoneclient</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/keystone/keystone.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, define the value
of the initial administration token:</para>
<programlisting language="ini">[DEFAULT]
...
admin_token = <replaceable>ADMIN_TOKEN</replaceable></programlisting>
<para>Replace <replaceable>ADMIN_TOKEN</replaceable> with the random
value that you generated in a previous step.</para>
</step>
<step>
<para>In the <literal>[database]</literal> section, configure
database access:</para>
<programlisting language="ini">[database]
...
connection = mysql://keystone:<replaceable>KEYSTONE_DBPASS</replaceable>@<replaceable>controller</replaceable>/keystone</programlisting>
<para>Replace <replaceable>KEYSTONE_DBPASS</replaceable> with the
password you chose for the database.</para>
</step>
<step>
<para>In the <literal>[token]</literal> section, configure the UUID
token provider and SQL driver:</para>
<programlisting language="ini">[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal> section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>By default, the Identity service uses public key
infrastructure (PKI).</para>
<para>Create generic certificates and keys and restrict access to the
associated files:</para>
<screen os="rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>keystone-manage pki_setup --keystone-user keystone --keystone-group keystone</userinput>
<prompt>#</prompt> <userinput>chown -R keystone:keystone /var/log/keystone</userinput>
<prompt>#</prompt> <userinput>chown -R keystone:keystone /etc/keystone/ssl</userinput>
<prompt>#</prompt> <userinput>chmod -R o-rwx /etc/keystone/ssl</userinput></screen>
</step>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>Populate the Identity service database:</para>
<screen><prompt>#</prompt> <userinput>su -s /bin/sh -c "keystone-manage db_sync" keystone</userinput></screen>
</step>
</procedure>
<procedure os="debian">
<title>To install and configure the components</title>
<step>
<para>Run the following command to install the packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install keystone python-keystoneclient</userinput></screen>
</step>
<step>
<para>Respond to prompts for <link
linkend="debconf-dbconfig-common">database
management</link>.</para>
</step>
<step>
<para>Configure the initial administration token:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/debconf-screenshots/keystone_1_admin_token.png"/>
</imageobject>
</mediaobject>
</informalfigure>
<para>Use the random value that you generated in a previous step. If you
install using non-interactive mode or you do not specify this token,
the configuration tool generates a random value.</para>
</step>
<step>
<para>Create the <literal>admin</literal> tenant and user:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_2_register_admin_tenant_yes_no.png"/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_3_admin_user_name.png"/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_4_admin_user_email.png"/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_5_admin_user_pass.png"/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_6_admin_user_pass_confirm.png"/>
</imageobject>
</mediaobject>
</informalfigure>
</step>
<step>
<para>Create the Identity service endpoints:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_7_register_endpoint.png"/>
</imageobject>
</mediaobject>
</informalfigure>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
<step os="ubuntu;debian">
<para>Restart the Identity service:</para>
<screen><prompt>#</prompt> <userinput>service keystone restart</userinput></screen>
</step>
<step os="rhel;fedora;centos;opensuse;sles">
<para>Start the Identity service and configure it to start when the
system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-keystone.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-keystone.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-keystone start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-keystone on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-keystone.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-keystone.service</userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create a SQLite database.</para>
<para>Because this configuration uses a SQL database server, you can
remove the SQLite database file:</para>
<screen><prompt>#</prompt> <userinput>rm -f /var/lib/keystone/keystone.db</userinput></screen>
</step>
<step>
<para>By default, the Identity service stores expired tokens in the
database indefinitely. The accumulation of expired tokens considerably
increases the database size and might degrade service performance,
particularly in environments with limited resources.</para>
<para>We recommend that you use
<systemitem class="service">cron</systemitem> to configure a periodic
task that purges expired tokens hourly:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>(crontab -l -u keystone 2>&amp;1 | grep -q token_flush) || \
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&amp;1' \
>> /var/spool/cron/crontabs/keystone</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>(crontab -l -u keystone 2>&amp;1 | grep -q token_flush) || \
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&amp;1' \
>> /var/spool/cron/keystone</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>(crontab -l -u keystone 2>&amp;1 | grep -q token_flush) || \
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&amp;1' \
>> /var/spool/cron/tabs/keystone</userinput></screen>
</step>
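The one-liners above are idempotent: the `grep -q token_flush` guard skips the append when the entry already exists, so re-running installation steps does not duplicate the cron job. The same pattern demonstrated against a scratch file standing in for the crontab:

```shell
CRON=keystone.crontab.sketch   # stands in for /var/spool/cron/crontabs/keystone
ENTRY='@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1'
rm -f "$CRON"
for run in 1 2 3; do           # simulate three repeated runs
  # Append only if no token_flush entry exists yet; 2>/dev/null hides the
  # "no such file" error from grep on the first pass.
  grep -q token_flush "$CRON" 2>/dev/null || echo "$ENTRY" >> "$CRON"
done
grep -c token_flush "$CRON"    # the entry was appended exactly once
```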
</procedure>
</section>


<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="keystone-client-environment-scripts">
<title>Create OpenStack client environment scripts</title>
<para>The previous section used a combination of environment variables and
command options to interact with the Identity service via the
<command>keystone</command> client. To increase efficiency of client
operations, OpenStack supports simple client environment scripts, also
known as OpenRC files. These scripts typically contain common options for
all clients, but also support unique options. For more information, see the
<link xlink:href="http://docs.openstack.org/user-guide/content/cli_openrc.html">OpenStack User Guide</link>.</para>
<procedure>
<title>To create the scripts</title>
<para>Create client environment scripts for the <literal>admin</literal>
and <literal>demo</literal> tenants and users. Future portions of this
guide reference these scripts to load appropriate credentials for client
operations.</para>
<step>
<para>Edit the <filename>admin-openrc.sh</filename> file and add the
following content:</para>
<programlisting language="bash">export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=<replaceable>ADMIN_PASS</replaceable>
export OS_AUTH_URL=http://<replaceable>controller</replaceable>:35357/v2.0</programlisting>
<para>Replace <literal>ADMIN_PASS</literal> with the password you chose
for the <literal>admin</literal> user in the Identity service.</para>
</step>
<step>
<para>Edit the <filename>demo-openrc.sh</filename> file and add the
following content:</para>
<programlisting language="bash">export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=<replaceable>DEMO_PASS</replaceable>
export OS_AUTH_URL=http://<replaceable>controller</replaceable>:5000/v2.0</programlisting>
<para>Replace <literal>DEMO_PASS</literal> with the password you chose
for the <literal>demo</literal> user in the Identity service.</para>
</step>
</procedure>
<procedure>
<title>To load client environment scripts</title>
<step>
<para>To run clients as a specific tenant and user, load the
associated client environment script before you run them. For
example, to load the location of the Identity service and the
<literal>admin</literal> tenant and user credentials:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
</procedure>
</section>


@ -25,14 +25,15 @@
services in your environment.</para>
<para>Create the service entity for the Identity service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name keystone --type identity \
--description="OpenStack Identity"</userinput>
--description "OpenStack Identity"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| id | 15c11a23667e427e91bc31335b45f4bd |
| name | keystone |
| type | identity |
| enabled | True |
| id | 15c11a23667e427e91bc31335b45f4bd |
| name | keystone |
| type | identity |
+-------------+----------------------------------+</computeroutput></screen>
<note>
<para>Because OpenStack generates IDs dynamically, you will see
@ -47,23 +48,26 @@
<para>OpenStack provides three API endpoint variations for each service:
admin, internal, and public. In a production environment, the variants
might reside on separate networks that service different types of users
for security reasons. For simplicity, this configuration uses the
management network for all variations.</para>
for security reasons. Also, OpenStack supports multiple regions for
scalability. For simplicity, this configuration uses the management
network for all endpoint variations and the
<literal>regionOne</literal> region.</para>
<para>Create the API endpoint for the Identity service:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://<replaceable>controller</replaceable>:5000/v2.0 \
--internalurl=http://<replaceable>controller</replaceable>:5000/v2.0 \
--adminurl=http://<replaceable>controller</replaceable>:35357/v2.0</userinput>
--service-id $(keystone service-list | awk '/ identity / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:5000/v2.0 \
--internalurl http://<replaceable>controller</replaceable>:5000/v2.0 \
--adminurl http://<replaceable>controller</replaceable>:35357/v2.0 \
--region regionOne</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:35357/v2.0 |
| id | 11f9c625a3b94a3f8e66bf4e5de2679f |
| adminurl | http://controller:35357/v2.0 |
| id | 11f9c625a3b94a3f8e66bf4e5de2679f |
| internalurl | http://controller:5000/v2.0 |
| publicurl | http://controller:5000/v2.0 |
| region | regionOne |
| service_id | 15c11a23667e427e91bc31335b45f4bd |
| publicurl | http://controller:5000/v2.0 |
| region | regionOne |
| service_id | 15c11a23667e427e91bc31335b45f4bd |
+-------------+----------------------------------+</computeroutput></screen>
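Later commands in this guide extract the service ID from this table with `awk` rather than copying it by hand. A small sketch of that filter, run against a canned sample of the output (the ID is the example value from above, not live data):

```shell
# Sample "keystone service-list" output; in practice this comes from
# the live command, not a hard-coded string.
sample='+----------------------------------+----------+----------+
|                id                |   name   |   type   |
+----------------------------------+----------+----------+
| 15c11a23667e427e91bc31335b45f4bd | keystone | identity |
+----------------------------------+----------+----------+'

# Field 1 is the leading "|", so field 2 is the service ID on the row
# whose type column matches " identity ".
service_id=$(printf '%s\n' "$sample" | awk '/ identity / {print $2}')
echo "$service_id"
```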
<note>
<para>This command references the ID of the service that you created


@ -15,11 +15,11 @@
(endpoint) of the Identity service before you run
<command>keystone</command> commands.</para>
<para>You can pass the value of the administration token to the
<command>keystone</command> command with the <option>--os-token</option>
<command>keystone</command> command with the <parameter>--os-token</parameter>
option or set the temporary <envar>OS_SERVICE_TOKEN</envar> environment
variable. Similarly, you can pass the location of the Identity service
to the <command>keystone</command> command with the
<option>--os-endpoint</option> option or set the temporary
<parameter>--os-endpoint</parameter> option or set the temporary
<envar>OS_SERVICE_ENDPOINT</envar> environment variable. This guide
uses environment variables to reduce command length.</para>
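A minimal sketch of the environment-variable approach; `ADMIN_TOKEN` and the endpoint URL are placeholders for your actual values:

```shell
# Export the bootstrap credentials once; every keystone command in this
# shell then uses them implicitly instead of --os-token/--os-endpoint.
export OS_SERVICE_TOKEN=ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

# ...run the bootstrap keystone commands here...

# Unset both variables when bootstrapping is done, otherwise later
# keystone commands keep using the admin token instead of real users.
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
```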
<para>For more information, see the
@ -96,12 +96,18 @@
</note>
</step>
<step>
<para>By default, the Identity service creates a special
<literal>_member_</literal> role. The OpenStack dashboard
automatically grants access to users with this role. You must
give the <literal>admin</literal> user access to this role in
addition to the <literal>admin</literal> role.
</para>
<para>By default, the dashboard limits access to users with the
<literal>_member_</literal> role.</para>
<para>Create the <literal>_member_</literal> role:</para>
<screen><prompt>$</prompt> <userinput>keystone role-create --name _member_</userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 0f198e94ffce416cbcbe344e1843eac8 |
| name | _member_ |
+----------+----------------------------------+</computeroutput></screen>
</step>
<step>
<para>Add the <literal>admin</literal> tenant and user to the
<literal>_member_</literal> role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --tenant admin --user admin --role _member_</userinput></screen>


@ -87,12 +87,18 @@
<para>As the <literal>demo</literal> tenant and user, request an
authentication token:</para>
<screen><prompt>$</prompt> <userinput>keystone --os-tenant-name demo --os-username demo --os-password <replaceable>DEMO_PASS</replaceable> \
--os-auth-url http://controller:35357/v2.0 token-get</userinput></screen>
--os-auth-url http://controller:35357/v2.0 token-get</userinput>
<computeroutput>+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2014-10-10T12:51:33Z |
| id | 1b87ceae9e08411ba4a16e4dada04802 |
| tenant_id | 4aa51bb942be4dd0ac0555d7591f80a6 |
| user_id | 7004dfa0dda84d63aef81cf7f100af01 |
+-----------+----------------------------------+</computeroutput></screen>
<para>Replace <replaceable>DEMO_PASS</replaceable> with the password
you chose for the <literal>demo</literal> user in the Identity
service.</para>
<para>Lengthy output that includes a token value verifies operation
for the <literal>demo</literal> tenant and user.</para>
</step>
<step>
<para>As the <literal>demo</literal> tenant and user, attempt to list


@ -6,18 +6,18 @@
xml:id="launch-instance-neutron">
<title>Launch an instance with OpenStack Networking (neutron)</title>
<procedure>
<title>To generate a keypair</title>
<title>To generate a key pair</title>
<para>Most cloud images support
<glossterm>public key authentication</glossterm> rather than conventional
user name/password authentication. Before launching an instance, you must
generate a public/private keypair using <command>ssh-keygen</command>
generate a public/private key pair using <command>ssh-keygen</command>
and add the public key to your OpenStack environment.</para>
<step>
<para>Source the <literal>demo</literal> tenant credentials:</para>
<screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
</step>
<step>
<para>Generate a keypair:</para>
<para>Generate a key pair:</para>
<screen><prompt>$</prompt> <userinput>ssh-keygen</userinput></screen>
</step>
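A non-interactive sketch of this step; the key path is a temporary placeholder rather than the default `~/.ssh/id_rsa`:

```shell
# Generate a throwaway key pair without prompts:
#   -N ''  sets an empty passphrase
#   -f     picks the output path explicitly
#   -q     suppresses the banner output
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$keydir/demo-key"

# The private key and its .pub counterpart now exist side by side.
ls "$keydir"
```

The public half (`demo-key.pub`) is what gets uploaded to OpenStack; the private half never leaves your host.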
<step>
@ -67,10 +67,10 @@
<computeroutput>+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.2-x86_64 | ACTIVE | |
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.3-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+</computeroutput></screen>
<para>Your first instance uses the
<literal>cirros-0.3.2-x86_64</literal> image.</para>
<literal>cirros-0.3.3-x86_64</literal> image.</para>
</step>
<step>
<para>List available networks:</para>
@ -97,14 +97,13 @@
group. By default, this security group implements a firewall that
blocks remote access to instances. If you would like to permit
remote access to your instance, launch it and then
<link linkend="launch-instance-neutron-remoteaccess">
configure remote access</link>.</para>
configure remote access.</para>
</step>
<step>
<para>Launch the instance:</para>
<para>Replace <replaceable>DEMO_NET_ID</replaceable> with the ID of the
<literal>demo-net</literal> tenant network.</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=<replaceable>DEMO_NET_ID</replaceable> \
<screen><prompt>$</prompt> <userinput>nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=<replaceable>DEMO_NET_ID</replaceable> \
--security-group default --key-name demo-key <replaceable>demo-instance1</replaceable></userinput>
<computeroutput>+--------------------------------------+------------------------------------------------------------+
| Property | Value |
@ -124,7 +123,7 @@
| flavor | m1.tiny (1) |
| hostId | |
| id | 05682b91-81a1-464c-8f40-8b3da7ee92c5 |
| image | cirros-0.3.2-x86_64 (acafc7c0-40aa-4026-9673-b879898e1fc2) |
| image | cirros-0.3.3-x86_64 (acafc7c0-40aa-4026-9673-b879898e1fc2) |
| key_name | demo-key |
| metadata | {} |
| name | demo-instance1 |
@ -279,16 +278,90 @@ Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.102' (RSA) to the list of known hosts.
$</computeroutput></screen>
<note>
<para>If your host does not contain the public/private keypair created
<para>If your host does not contain the public/private key pair created
in an earlier step, SSH prompts for the default password associated
with the <literal>cirros</literal> user.</para>
</note>
</step>
</procedure>
<procedure xml:id="launch-instance-neutron-volumeattach">
<title>To attach a Block Storage volume to your instance</title>
<para>If your environment includes the Block Storage service, you can
attach a volume to the instance.</para>
<step>
<para>Source the <literal>demo</literal> tenant credentials:</para>
<screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
</step>
<step>
<para>List volumes:</para>
<screen><prompt>$</prompt> <userinput>nova volume-list</userinput>
<computeroutput>+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 | 1 | None | |
+--------------------------------------+-----------+--------------+------+-------------+-------------+</computeroutput></screen>
</step>
<step>
<para>Attach the <literal>demo-volume1</literal> volume to
the <literal>demo-instance1</literal> instance:</para>
<screen><prompt>$</prompt> <userinput>nova volume-attach demo-instance1 158bea89-07db-4ac2-8115-66c0d6a4bb48</userinput>
<computeroutput>+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
| serverId | 05682b91-81a1-464c-8f40-8b3da7ee92c5 |
| volumeId | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
+----------+--------------------------------------+</computeroutput></screen>
<note>
<para>You must reference volumes using the IDs instead of
names.</para>
</note>
</step>
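Since volumes must be referenced by ID, a display name can be resolved with the same `awk` pattern used elsewhere in this guide. The table here is a canned sample of `nova volume-list` output, not live data:

```shell
# Sample "nova volume-list" output, trimmed to the relevant columns.
sample='+--------------------------------------+-----------+--------------+
|                  ID                  |   Status  | Display Name |
+--------------------------------------+-----------+--------------+
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 |
+--------------------------------------+-----------+--------------+'

# Field 1 is the leading "|", so field 2 is the volume ID on the row
# whose Display Name column matches.
volume_id=$(printf '%s\n' "$sample" | awk '/ demo-volume1 / {print $2}')
echo "$volume_id"
```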
<step>
<para>List volumes:</para>
<screen><prompt>$</prompt> <userinput>nova volume-list</userinput>
<computeroutput>+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | in-use | demo-volume1 | 1 | None | 05682b91-81a1-464c-8f40-8b3da7ee92c5 |
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+</computeroutput></screen>
<para>The <literal>demo-volume1</literal> volume status should indicate
<literal>in-use</literal> by the ID of the
<literal>demo-instance1</literal> instance.</para>
</step>
<step>
<para>Access your instance using SSH from the controller node or any
host on the external network and use the <command>fdisk</command>
command to verify presence of the volume as the
<literal>/dev/vdb</literal> block storage device:</para>
<screen><prompt>$</prompt> <userinput>ssh cirros@203.0.113.102</userinput>
<computeroutput>$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/vda1 * 16065 2088449 1036192+ 83 Linux
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table</computeroutput></screen>
<note>
<para>You must create a partition table and file system to use
the volume.</para>
</note>
</step>
</procedure>
  <para>If your instance does not launch or does not work as you expect, see the
<link xlink:href="http://docs.openstack.org/ops">
<citetitle>OpenStack Operations Guide</citetitle></link> for more
information or use one of the
<link linkend="app_community_support">many other options</link> to seek
assistance. We want your environment to work!</para>
</section>


@ -6,18 +6,18 @@
xml:id="launch-instance-nova">
<title>Launch an instance with legacy networking (nova-network)</title>
<procedure>
<title>To generate a keypair</title>
<title>To generate a key pair</title>
<para>Most cloud images support
<glossterm>public key authentication</glossterm> rather than conventional
user name/password authentication. Before launching an instance, you must
generate a public/private keypair using <command>ssh-keygen</command>
generate a public/private key pair using <command>ssh-keygen</command>
and add the public key to your OpenStack environment.</para>
<step>
<para>Source the <literal>demo</literal> tenant credentials:</para>
<screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
</step>
<step>
<para>Generate a keypair:</para>
<para>Generate a key pair:</para>
<screen><prompt>$</prompt> <userinput>ssh-keygen</userinput></screen>
</step>
<step>
@ -67,10 +67,10 @@
<computeroutput>+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.2-x86_64 | ACTIVE | |
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.3-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+</computeroutput></screen>
<para>Your first instance uses the
<literal>cirros-0.3.2-x86_64</literal> image.</para>
<literal>cirros-0.3.3-x86_64</literal> image.</para>
</step>
<step>
<para>List available networks:</para>
@ -102,14 +102,13 @@
group. By default, this security group implements a firewall that
blocks remote access to instances. If you would like to permit
remote access to your instance, launch it and then
<link linkend="launch-instance-nova-remoteaccess">
configure remote access</link>.</para>
configure remote access.</para>
</step>
<step>
<para>Launch the instance:</para>
<para>Replace <replaceable>DEMO_NET_ID</replaceable> with the ID of the
<literal>demo-net</literal> tenant network.</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=<replaceable>DEMO_NET_ID</replaceable> \
<screen><prompt>$</prompt> <userinput>nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=<replaceable>DEMO_NET_ID</replaceable> \
--security-group default --key-name demo-key <replaceable>demo-instance1</replaceable></userinput>
<computeroutput>+--------------------------------------+------------------------------------------------------------+
| Property | Value |
@ -129,7 +128,7 @@
| flavor | m1.tiny (1) |
| hostId | |
| id | 45ea195c-c469-43eb-83db-1a663bbad2fc |
| image | cirros-0.3.2-x86_64 (acafc7c0-40aa-4026-9673-b879898e1fc2) |
| image | cirros-0.3.3-x86_64 (acafc7c0-40aa-4026-9673-b879898e1fc2) |
| key_name | demo-key |
| metadata | {} |
| name | demo-instance1 |
@ -238,16 +237,92 @@ Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.26' (RSA) to the list of known hosts.
$</computeroutput></screen>
<note>
<para>If your host does not contain the public/private keypair created
<para>If your host does not contain the public/private key pair created
in an earlier step, SSH prompts for the default password associated
with the <literal>cirros</literal> user.</para>
</note>
</step>
</procedure>
<procedure xml:id="launch-instance-nova-volumeattach">
<title>To attach a Block Storage volume to your instance</title>
<para>If your environment includes the Block Storage service, you can
attach a volume to the instance.</para>
<step>
<para>Source the <literal>demo</literal> tenant credentials:</para>
<screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
</step>
<step>
<para>List volumes:</para>
<screen><prompt>$</prompt> <userinput>nova volume-list</userinput>
<computeroutput>+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 | 1 | None | |
+--------------------------------------+-----------+--------------+------+-------------+-------------+</computeroutput></screen>
</step>
<step>
<para>Attach the <literal>demo-volume1</literal> volume to
the <literal>demo-instance1</literal> instance:</para>
<screen><prompt>$</prompt> <userinput>nova volume-attach demo-instance1 158bea89-07db-4ac2-8115-66c0d6a4bb48</userinput>
<computeroutput>+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
| serverId | 45ea195c-c469-43eb-83db-1a663bbad2fc |
| volumeId | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
+----------+--------------------------------------+</computeroutput></screen>
<note>
<para>You must reference volumes using the IDs instead of
names.</para>
</note>
</step>
<step>
<para>List volumes:</para>
<screen><prompt>$</prompt> <userinput>nova volume-list</userinput>
<computeroutput>+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | in-use | demo-volume1 | 1 | None | 45ea195c-c469-43eb-83db-1a663bbad2fc |
+--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+</computeroutput></screen>
<para>The <literal>demo-volume1</literal> volume status should indicate
<literal>in-use</literal> by the ID of the
<literal>demo-instance1</literal> instance.</para>
</step>
<step>
<para>Access your instance using SSH from the controller node or any
host on the external network and use the <command>fdisk</command>
command to verify presence of the volume as the
<literal>/dev/vdb</literal> block storage device:</para>
    <screen><prompt>$</prompt> <userinput>ssh cirros@203.0.113.26</userinput>
<computeroutput>$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/vda1 * 16065 2088449 1036192+ 83 Linux
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table</computeroutput></screen>
<note>
<para>You must create a partition table and file system to use
the volume.</para>
</note>
</step>
</procedure>
  <para>If your instance does not launch or does not work as you expect, see the
<link xlink:href="http://docs.openstack.org/ops">
<citetitle>OpenStack Operations Guide</citetitle></link> for more
    <citetitle>OpenStack Operations Guide</citetitle></link> for more
information or use one of the
<link linkend="app_community_support">many other options</link> to seek
assistance. We want your environment to work!</para>
</section>


@ -0,0 +1,334 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="neutron-compute-node">
<title>Install and configure compute node</title>
<para>The compute node handles connectivity and
<glossterm baseform="security group">security groups</glossterm>
for instances.</para>
<procedure>
<title>To configure prerequisites</title>
<para>Before you install and configure OpenStack Networking, you
must configure certain kernel networking parameters.</para>
<step>
<para>Edit the <filename>/etc/sysctl.conf</filename> file to
contain the following parameters:</para>
<programlisting>net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0</programlisting>
</step>
<step>
<para>Implement the changes:</para>
<screen><prompt>#</prompt> <userinput>sysctl -p</userinput></screen>
</step>
</procedure>
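The edit can be sanity-checked before running `sysctl -p`. A harmless sketch, using a temporary file in place of `/etc/sysctl.conf`:

```shell
# Write the two rp_filter settings to a stand-in config file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF

# "sysctl -p $conf" would load these; here we just count that both
# rp_filter keys are present and set to 0.
grep -c 'rp_filter=0' "$conf"
```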
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To install the Networking components</title>
<step>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-neutron-ml2 openstack-neutron-openvswitch</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install --no-recommends openstack-neutron-openvswitch-agent ipset</userinput></screen>
<note os="sles;opensuse">
<para>SUSE does not use a separate ML2 plug-in package.</para>
</note>
</step>
</procedure>
<procedure os="debian">
<title>To install and configure the Networking components</title>
<step>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms</userinput></screen>
<note>
<para>Debian does not use a separate ML2 plug-in package.</para>
</note>
</step>
<step>
<para>Select the ML2 plug-in:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/neutron_1_plugin_selection.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<note>
<para>Selecting the ML2 plug-in also populates the
<option>service_plugins</option> and
<option>allow_overlapping_ips</option> options in the
<filename>/etc/neutron/neutron.conf</filename> file with the
appropriate values.</para>
</note>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure the Networking common components</title>
<para>The Networking common component configuration includes the
authentication mechanism, message broker, and plug-in.</para>
<step>
<para>Edit the <filename>/etc/neutron/neutron.conf</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section, comment out
any <literal>connection</literal> options because compute nodes
do not directly access the database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the
password you chose for the <literal>guest</literal> account in
<application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = neutron
admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<para>Replace <replaceable>NEUTRON_PASS</replaceable> with the
            password you chose for the <literal>neutron</literal> user in the
Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, enable the
Modular Layer 2 (ML2) plug-in, router service, and overlapping
IP addresses:</para>
<programlisting language="ini">[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
</procedure>
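The edits above all follow the same `[section]` / `key = value` ini layout. As a sketch of how a value can be read back to verify an edit, here is a small `awk` helper run against a stand-in file (not a full `neutron.conf`):

```shell
# Stand-in ini file with two sections, mimicking neutron.conf layout.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
[keystone_authtoken]
admin_user = neutron
EOF

# ini_get SECTION KEY FILE: print the value of KEY inside [SECTION].
ini_get() {
  awk -F ' *= *' -v s="[$1]" -v k="$2" '
    $0 == s { in_s = 1; next }   # entered the wanted section
    /^\[/   { in_s = 0 }         # any other section header ends it
    in_s && $1 == k { print $2 } # key match inside the section
  ' "$3"
}

ini_get keystone_authtoken admin_user "$conf"
```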
<procedure>
<title>To configure the Modular Layer 2 (ML2) plug-in</title>
<para>The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to
build the virtual networking framework for instances.</para>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>
file and complete the following actions:</para>
<substeps>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[ml2]</literal> section, enable the
<glossterm baseform="flat network">flat</glossterm> and
<glossterm>generic routing encapsulation (GRE)</glossterm>
network type drivers, GRE tenant networks, and the OVS
mechanism driver:</para>
<programlisting language="ini">[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch</programlisting>
</step>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[ml2_type_gre]</literal> section, configure
the tunnel identifier (id) range:</para>
<programlisting language="ini">[ml2_type_gre]
...
tunnel_id_ranges = 1:1000</programlisting>
</step>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[securitygroup]</literal> section, enable
security groups, enable <glossterm>ipset</glossterm>, and
configure the OVS <glossterm>iptables</glossterm> firewall
driver:</para>
<programlisting language="ini">[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</programlisting>
</step>
<step>
<para>In the <literal>[ovs]</literal> section, configure the
<glossterm>Open vSwitch (OVS) agent</glossterm>:</para>
<programlisting language="ini">[ovs]
...
local_ip = <replaceable>INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS</replaceable>
tunnel_type = gre
enable_tunneling = True</programlisting>
<para>Replace
<replaceable>INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the instance tunnels network interface
on your compute node.</para>
</step>
</substeps>
</step>
</procedure>
<procedure>
<title>To configure the Open vSwitch (OVS) service</title>
<para>The OVS service provides the underlying virtual networking framework
for instances.</para>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the OVS service and configure it to start when the
system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openvswitch.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openvswitch.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openvswitch-switch start</userinput>
<prompt>#</prompt> <userinput>chkconfig openvswitch-switch on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openvswitch.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openvswitch.service</userinput></screen>
</step>
<step os="debian;ubuntu">
<para>Restart the OVS service:</para>
<screen><prompt>#</prompt> <userinput>service openvswitch-switch restart</userinput></screen>
</step>
</procedure>
<procedure>
<title>To configure Compute to use Networking</title>
<para>By default, distribution packages configure Compute to use
legacy networking. You must reconfigure Compute to manage
networks through Networking.</para>
<step>
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and
complete the following actions:</para>
<substeps>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[DEFAULT]</literal> section, configure
the <glossterm baseform="API">APIs</glossterm> and drivers:</para>
<programlisting language="ini">[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver</programlisting>
<note>
<para>By default, Compute uses an internal firewall service.
Since Networking includes a firewall service, you must
disable the Compute firewall service by using the
<literal>nova.virt.firewall.NoopFirewallDriver</literal>
firewall driver.</para>
</note>
</step>
<step>
<para>In the <literal>[neutron]</literal> section, configure
access parameters:</para>
<programlisting language="ini">[neutron]
...
url = http://<replaceable>controller</replaceable>:9696
auth_strategy = keystone
admin_auth_url = http://<replaceable>controller</replaceable>:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<para>Replace <replaceable>NEUTRON_PASS</replaceable> with the
password you chose for the <literal>neutron</literal> user
in the Identity service.</para>
</step>
</substeps>
</step>
</procedure>
<procedure>
<title>To finalize the installation</title>
<step os="rhel;centos;fedora">
<para>The Networking service initialization scripts expect a
symbolic link <filename>/etc/neutron/plugin.ini</filename>
pointing to the ML2 plug-in configuration file,
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>.
If this symbolic link does not exist, create it using the
following command:</para>
<screen><prompt>#</prompt> <userinput>ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini</userinput></screen>
<para>Due to a packaging bug, the Open vSwitch agent initialization
script explicitly looks for the Open vSwitch plug-in configuration
file rather than a symbolic link
<filename>/etc/neutron/plugin.ini</filename> pointing to the ML2
plug-in configuration file. Run the following commands to resolve this
issue:</para>
<screen><prompt>#</prompt> <userinput>cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig</userinput>
<prompt>#</prompt> <userinput>sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service</userinput></screen>
</step>
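The sed expression in the workaround above is a plain text substitution. This sketch runs the same expression against a sample ExecStart line, which is illustrative only and not copied from the real unit file, to show exactly what changes:

```shell
# Illustrative stand-in for one line of the agent's systemd unit file.
line='ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'
# Same substitution as the workaround: commas delimit the sed expression,
# so the slashes inside the paths need no escaping.
fixed=$(printf '%s\n' "$line" | sed 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g')
printf '%s\n' "$fixed"
```

The comma delimiter is the notable design choice here; with the default slash delimiter, every slash in both paths would require a backslash escape.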
<step os="sles;opensuse">
<para>The Networking service initialization scripts expect the
variable <literal>NEUTRON_PLUGIN_CONF</literal> in the
<filename>/etc/sysconfig/neutron</filename> file to
reference the ML2 plug-in configuration file. Edit the
<filename>/etc/sysconfig/neutron</filename> file and add the
following:</para>
<programlisting>NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"</programlisting>
</step>
<step>
<para>Restart the Compute service:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-compute.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-nova-compute restart</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-compute.service</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-compute restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Open vSwitch (OVS) agent and configure it to
start when the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable neutron-openvswitch-agent.service</userinput>
<prompt>#</prompt> <userinput>systemctl start neutron-openvswitch-agent.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-neutron-openvswitch-agent start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-neutron-openvswitch-agent on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-neutron-openvswitch-agent.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-neutron-openvswitch-agent.service</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>Restart the Open vSwitch (OVS) agent:</para>
<screen><prompt>#</prompt> <userinput>service neutron-plugin-openvswitch-agent restart</userinput></screen>
</step>
</procedure>
<procedure>
<title>Verify operation</title>
<note>
<para>Perform these commands on the controller node.</para>
</note>
<step>
<para>Source the <literal>admin</literal> credentials to gain access to
admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>List agents to verify successful launch of the
neutron agents:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput>
<computeroutput>+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
...
| a5a49051-05eb-4b4f-bfc7-d36235fe9131 | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+</computeroutput></screen>
</step>
</procedure>
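When scripting this verification, the alive column is the field to test. A minimal sketch, run here against a saved sample row rather than live neutron agent-list output, extracts it with awk:

```shell
# Sample agent-list row standing in for live `neutron agent-list` output.
row='| a5a49051-05eb-4b4f-bfc7-d36235fe9131 | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent |'
# Split on the table's pipe separators; field 5 is the alive column.
# gsub strips the padding spaces around the smiley.
alive=$(printf '%s\n' "$row" | awk -F'|' '{gsub(/ /, "", $5); print $5}')
printf '%s\n' "$alive"
```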
</section>


@@ -22,13 +22,13 @@
many interfaces connected to subnets. Subnets can access machines
on other subnets connected to the same router.</para>
<para>Any given Networking set up has at least one external network.
This network, unlike the other networks, is not merely a virtually
defined network. Instead, it represents the view into a slice of
the external network that is accessible outside the OpenStack
installation. IP addresses on the Networking external network are
Unlike the other networks, the external network is not merely a
virtually defined network. Instead, it represents a view into a
slice of the physical, external network accessible outside the
OpenStack installation. IP addresses on the external network are
accessible by anybody physically on the outside network. Because
this network merely represents a slice of the outside network,
DHCP is disabled on this network.</para>
the external network merely represents a view into the outside
network, DHCP is disabled on this network.</para>
<para>In addition to external networks, any Networking set up has
one or more internal networks. These software-defined networks
connect directly to the VMs. Only the VMs on any given internal
@@ -54,10 +54,10 @@
security groups to block or unblock ports, port ranges, or traffic
types for that VM.</para>
<para>Each plug-in that Networking uses has its own concepts. While
not vital to operating Networking, understanding these concepts
can help you set up Networking. All Networking installations use a
core plug-in and a security group plug-in (or just the No-Op
security group plug-in). Additionally, Firewall-as-a-Service
(FWaaS) and Load-Balancer-as-a-Service (LBaaS) plug-ins are
available.</para>
not vital to operating the VNI and OpenStack environment,
understanding these concepts can help you set up Networking.
All Networking installations use a core plug-in and a security group
plug-in (or just the No-Op security group plug-in). Additionally,
Firewall-as-a-Service (FWaaS) and Load-Balancer-as-a-Service (LBaaS)
plug-ins are available.</para>
</section>


@@ -0,0 +1,448 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="neutron-controller-node">
<title>Install and configure controller node</title>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure prerequisites</title>
<para>Before you configure OpenStack Networking (neutron), you must create
a database and Identity service credentials including endpoints.</para>
<step>
<para>To create the database, complete these steps:</para>
<substeps>
<step>
<para>Use the database access client to connect to the database
server as the <literal>root</literal> user:</para>
<screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput></screen>
</step>
<step>
<para>Create the <literal>neutron</literal> database:</para>
<screen><userinput>CREATE DATABASE neutron;</userinput></screen>
</step>
<step>
<para>Grant proper access to the <literal>neutron</literal>
database:</para>
<screen><userinput>GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY '<replaceable>NEUTRON_DBPASS</replaceable>';</userinput>
<userinput>GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY '<replaceable>NEUTRON_DBPASS</replaceable>';</userinput></screen>
<para>Replace <replaceable>NEUTRON_DBPASS</replaceable> with a
suitable password.</para>
</step>
<step>
<para>Exit the database access client.</para>
</step>
</substeps>
</step>
<step>
<para>Source the <literal>admin</literal> credentials to gain access to
admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>To create the Identity service credentials, complete these
steps:</para>
<substeps>
<step>
<para>Create the <literal>neutron</literal> user:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name neutron --pass <replaceable>NEUTRON_PASS</replaceable></userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 7fd67878dcd04d0393469ef825a7e005 |
| name | neutron |
| username | neutron |
+----------+----------------------------------+</computeroutput></screen>
<para>Replace <replaceable>NEUTRON_PASS</replaceable> with a suitable
password.</para>
</step>
<step>
<para>Link the <literal>neutron</literal> user to the
<literal>service</literal> tenant and <literal>admin</literal>
role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user neutron --tenant service --role admin</userinput></screen>
<note>
<para>This command provides no output.</para>
</note>
</step>
<step>
<para>Create the <literal>neutron</literal> service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name neutron --type network \
--description "OpenStack Networking"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 6369ddaf99a447f3a0d41dac5e342161 |
| name | neutron |
| type | network |
+-------------+----------------------------------+</computeroutput></screen>
</step>
<step>
<para>Create the Identity service endpoints:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:9696 \
--adminurl http://<replaceable>controller</replaceable>:9696 \
--internalurl http://<replaceable>controller</replaceable>:9696 \
--region regionOne</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:9696 |
| id | fa18b41938a94bf6b35e2c152063ee21 |
| internalurl | http://controller:9696 |
| publicurl | http://controller:9696 |
| region | regionOne |
| service_id | 6369ddaf99a447f3a0d41dac5e342161 |
+-------------+----------------------------------+</computeroutput></screen>
</step>
</substeps>
</step>
</procedure>
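The awk filter embedded in the endpoint-create command above simply grabs the second whitespace-separated field of the row whose type column reads network. A sketch against a sample table row, not live keystone output, shows the mechanics:

```shell
# One row of `keystone service-list` output, used here as a stand-in.
row='| 6369ddaf99a447f3a0d41dac5e342161 | neutron | network | OpenStack Networking |'
# " network " (with surrounding spaces) selects the row; $2 holds the id
# because the leading pipe character counts as field 1.
service_id=$(printf '%s\n' "$row" | awk '/ network / {print $2}')
printf '%s\n' "$service_id"
```

Matching on the padded string " network " rather than bare "network" avoids false hits on the service name column.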
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To install the Networking components</title>
<step>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install neutron-server neutron-plugin-ml2 python-neutronclient</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-neutron openstack-neutron-server</userinput></screen>
<note os="sles;opensuse">
<para>SUSE does not use a separate ML2 plug-in package.</para>
</note>
</step>
</procedure>
<procedure os="debian">
<title>To install and configure the Networking components</title>
<step>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-server</userinput></screen>
<note>
<para>Debian does not use a separate ML2 plug-in package.</para>
</note>
</step>
<step>
<para>Select the ML2 plug-in:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/neutron_1_plugin_selection.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<note>
<para>Selecting the ML2 plug-in also populates the
<option>service_plugins</option> and
<option>allow_overlapping_ips</option> options in the
<filename>/etc/neutron/neutron.conf</filename> file with the
appropriate values.</para>
</note>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure the Networking server component</title>
<para>The Networking server component configuration includes the database,
authentication mechanism, message broker, topology change notifications,
and plug-in.</para>
<step>
<para>Edit the <filename>/etc/neutron/neutron.conf</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section, configure
database access:</para>
<programlisting language="ini">[database]
...
connection = mysql://neutron:<replaceable>NEUTRON_DBPASS</replaceable>@<replaceable>controller</replaceable>/neutron</programlisting>
<para>Replace <replaceable>NEUTRON_DBPASS</replaceable> with the
password you chose for the database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the
password you chose for the <literal>guest</literal> account in
<application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = neutron
admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<para>Replace <replaceable>NEUTRON_PASS</replaceable> with the
password you chose for the <literal>neutron</literal> user in the
Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, enable the
Modular Layer 2 (ML2) plug-in, router service, and overlapping
IP addresses:</para>
<programlisting language="ini">[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True</programlisting>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
Networking to notify Compute of network topology changes:</para>
<programlisting language="ini">[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://<replaceable>controller</replaceable>:8774/v2
nova_admin_auth_url = http://<replaceable>controller</replaceable>:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = <replaceable>SERVICE_TENANT_ID</replaceable>
nova_admin_password = <replaceable>NOVA_PASS</replaceable></programlisting>
<para>Replace <replaceable>SERVICE_TENANT_ID</replaceable> with the
<literal>service</literal> tenant identifier (id) in the Identity
service and <replaceable>NOVA_PASS</replaceable> with the password
you chose for the <literal>nova</literal> user in the Identity
service.</para>
<note>
<para>To obtain the <literal>service</literal> tenant
identifier (id):</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput>
<prompt>$</prompt> <userinput>keystone tenant-get service</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | f727b5ec2ceb4d71bad86dfc414449bf |
| name | service |
+-------------+----------------------------------+</computeroutput></screen>
</note>
</step>
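The service tenant identifier can be pulled out of the keystone tenant-get table the same way. This sketch parses a sample id row, not live output, with awk:

```shell
# Sample id row from `keystone tenant-get service`, used as a stand-in.
row='|      id     | f727b5ec2ceb4d71bad86dfc414449bf |'
# With awk's default whitespace splitting, the value sits in field 4:
# fields 1 and 3 are the pipes, field 2 is the label "id".
tenant_id=$(printf '%s\n' "$row" | awk '{print $4}')
printf '%s\n' "$tenant_id"
```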
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure the Modular Layer 2 (ML2) plug-in</title>
<para>The ML2 plug-in uses the
<glossterm baseform="Open vSwitch">Open vSwitch (OVS)</glossterm>
mechanism (agent) to build the virtual networking framework for
instances. However, the controller node does not need the OVS
components because it does not handle instance network traffic.</para>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>
file and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[ml2]</literal> section, enable the
<glossterm baseform="flat network">flat</glossterm> and
<glossterm>generic routing encapsulation (GRE)</glossterm>
network type drivers, GRE tenant networks, and the OVS
mechanism driver:</para>
<programlisting language="ini">[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch</programlisting>
<warning>
<para>Once you configure the ML2 plug-in, be aware that disabling
a network type driver and re-enabling it later can lead to
database inconsistency.</para>
</warning>
</step>
<step>
<para>In the <literal>[ml2_type_gre]</literal> section, configure
the tunnel identifier (id) range:</para>
<programlisting language="ini">[ml2_type_gre]
...
tunnel_id_ranges = 1:1000</programlisting>
</step>
<step>
<para>In the <literal>[securitygroup]</literal> section, enable
security groups, enable <glossterm>ipset</glossterm>, and
configure the OVS <glossterm>iptables</glossterm> firewall
driver:</para>
<programlisting language="ini">[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</programlisting>
</step>
</substeps>
</step>
</procedure>
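The tunnel_id_ranges value above is a colon-separated pair of tunnel identifiers. A small sketch splits it with shell parameter expansion to show how the range is interpreted:

```shell
range='1:1000'          # value of tunnel_id_ranges from the ML2 config above
start=${range%%:*}      # strip everything from the first ':' onward
end=${range##*:}        # strip everything up to the last ':'
printf '%s %s\n' "$start" "$end"
```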
<procedure>
<title>To configure Compute to use Networking</title>
<para>By default, distribution packages configure Compute to use legacy
networking. You must reconfigure Compute to manage networks through
Networking.</para>
<step>
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and
complete the following actions:</para>
<substeps>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[DEFAULT]</literal> section, configure
the <glossterm baseform="API">APIs</glossterm> and drivers:</para>
<programlisting language="ini">[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver</programlisting>
<note>
<para>By default, Compute uses an internal firewall service.
Since Networking includes a firewall service, you must
disable the Compute firewall service by using the
<literal>nova.virt.firewall.NoopFirewallDriver</literal>
firewall driver.</para>
</note>
</step>
<step>
<para>In the <literal>[neutron]</literal> section, configure
access parameters:</para>
<programlisting language="ini">[neutron]
...
url = http://<replaceable>controller</replaceable>:9696
auth_strategy = keystone
admin_auth_url = http://<replaceable>controller</replaceable>:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<para>Replace <replaceable>NEUTRON_PASS</replaceable> with the
password you chose for the <literal>neutron</literal> user
in the Identity service.</para>
</step>
</substeps>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
<step os="rhel;centos;fedora">
<para>The Networking service initialization scripts expect a
symbolic link <filename>/etc/neutron/plugin.ini</filename>
pointing to the ML2 plug-in configuration file,
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>.
If this symbolic link does not exist, create it using the
following command:</para>
<screen><prompt>#</prompt> <userinput>ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini</userinput></screen>
</step>
<step os="sles;opensuse">
<para>The Networking service initialization scripts expect the
variable <literal>NEUTRON_PLUGIN_CONF</literal> in the
<filename>/etc/sysconfig/neutron</filename> file to
reference the ML2 plug-in configuration file. Edit the
<filename>/etc/sysconfig/neutron</filename> file and add the
following:</para>
<programlisting>NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"</programlisting>
</step>
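The initialization scripts consume that file by sourcing it as shell, so the line must be a valid shell variable assignment. A sketch with a temporary stand-in file demonstrates the mechanism:

```shell
# Stand-in for /etc/sysconfig/neutron, written to a temporary location.
cat > /tmp/neutron.sysconfig <<'EOF'
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"
EOF
# Sourcing the file is all an init script needs to pick up the variable.
. /tmp/neutron.sysconfig
printf '%s\n' "$NEUTRON_PLUGIN_CONF"
```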
<step>
<para>Populate the database:</para>
<screen><prompt>#</prompt> <userinput>su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron</userinput></screen>
<note>
<para>Database population occurs later for Networking because the
script requires complete server and plug-in configuration
files.</para>
</note>
</step>
<step>
<para>Restart the Compute services:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-nova-api restart</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-scheduler restart</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-conductor restart</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-api restart</userinput>
<prompt>#</prompt> <userinput>service nova-scheduler restart</userinput>
<prompt>#</prompt> <userinput>service nova-conductor restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Networking service and configure it to start when the
system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable neutron-server.service</userinput>
<prompt>#</prompt> <userinput>systemctl start neutron-server.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-neutron start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-neutron on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-neutron.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-neutron.service</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>Restart the Networking service:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
<procedure>
<title>Verify operation</title>
<note>
<para>Perform these commands on the controller node.</para>
</note>
<step>
<para>Source the <literal>admin</literal> credentials to gain access to
admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>List loaded extensions to verify successful launch of the
<literal>neutron-server</literal> process:</para>
<screen><prompt>$</prompt> <userinput>neutron ext-list</userinput>
<computeroutput>+-----------------------+-----------------------------------------------+
| alias | name |
+-----------------------+-----------------------------------------------+
| security-group | security-group |
| l3_agent_scheduler | L3 Agent Scheduler |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| provider | Provider Network |
| agent | agent |
| quotas | Quota management support |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| l3-ha | HA Router extension |
| multi-provider | Multi Provider Network |
| external-net | Neutron external network |
| router | Neutron L3 Router |
| allowed-address-pairs | Allowed Address Pairs |
| extraroute | Neutron Extra Route |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| dvr | Distributed Virtual Router |
+-----------------------+-----------------------------------------------+</computeroutput></screen>
</step>
</procedure>
</section>


@@ -20,18 +20,18 @@
<title>Initial networks</title>
<mediaobject>
<imageobject>
<imagedata contentwidth="6in"
<imagedata scale="50"
fileref="figures/installguide_neutron-initial-networks.png"/>
</imageobject>
</mediaobject>
</figure>
<section xml:id="neutron_initial-external-network">
<title>External network</title>
<para>The external network typically provides internet access for
your instances. By default, this network only allows internet
<para>The external network typically provides Internet access for
your instances. By default, this network only allows Internet
access <emphasis>from</emphasis> instances using
<glossterm>Network Address Translation (NAT)</glossterm>. You can
enable internet access <emphasis>to</emphasis> individual instances
enable Internet access <emphasis>to</emphasis> individual instances
using a <glossterm>floating IP address</glossterm> and suitable
<glossterm>security group</glossterm> rules. The <literal>admin</literal>
tenant owns this network because it provides external network
@@ -43,12 +43,14 @@
<procedure>
<title>To create the external network</title>
<step>
<para>Source the <literal>admin</literal> tenant credentials:</para>
<para>Source the <literal>admin</literal> credentials to gain access to
admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>Create the network:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create ext-net --shared --router:external=True</userinput>
<screen><prompt>$</prompt> <userinput>neutron net-create ext-net --shared --router:external True \
--provider:physical_network external --provider:network_type flat</userinput>
<computeroutput>Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
@@ -56,9 +58,9 @@
| admin_state_up | True |
| id | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
| name | ext-net |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 1 |
| provider:network_type | flat |
| provider:physical_network | external |
| provider:segmentation_id | |
| router:external | True |
| shared | True |
| status | ACTIVE |
@@ -74,16 +76,6 @@
network node. You should specify an exclusive slice of this subnet
for <glossterm>router</glossterm> and floating IP addresses to prevent
interference with other devices on the external network.</para>
<para>Replace <replaceable>FLOATING_IP_START</replaceable> and
<replaceable>FLOATING_IP_END</replaceable> with the first and last
IP addresses of the range that you want to allocate for floating IP
addresses. Replace <replaceable>EXTERNAL_NETWORK_CIDR</replaceable>
with the subnet associated with the physical network. Replace
<replaceable>EXTERNAL_NETWORK_GATEWAY</replaceable> with the gateway
associated with the physical network, typically the ".1" IP address.
You should disable <glossterm>DHCP</glossterm> on this subnet because
instances do not connect directly to the external network and floating
IP addresses require manual assignment.</para>
<procedure>
<title>To create a subnet on the external network</title>
<step>
@@ -91,6 +83,16 @@
<screen><prompt>$</prompt> <userinput>neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=<replaceable>FLOATING_IP_START</replaceable>,end=<replaceable>FLOATING_IP_END</replaceable> \
--disable-dhcp --gateway <replaceable>EXTERNAL_NETWORK_GATEWAY</replaceable> <replaceable>EXTERNAL_NETWORK_CIDR</replaceable></userinput></screen>
<para>Replace <replaceable>FLOATING_IP_START</replaceable> and
<replaceable>FLOATING_IP_END</replaceable> with the first and last
IP addresses of the range that you want to allocate for floating IP
addresses. Replace <replaceable>EXTERNAL_NETWORK_CIDR</replaceable>
with the subnet associated with the physical network. Replace
<replaceable>EXTERNAL_NETWORK_GATEWAY</replaceable> with the gateway
associated with the physical network, typically the ".1" IP address.
You should disable <glossterm>DHCP</glossterm> on this subnet because
instances do not connect directly to the external network and
floating IP addresses require manual assignment.</para>
<para>For example, using <literal>203.0.113.0/24</literal> with
floating IP address range <literal>203.0.113.101</literal> to
<literal>203.0.113.200</literal>:</para>
@@ -130,41 +132,42 @@
<procedure>
<title>To create the tenant network</title>
<step>
<para>Source the <literal>demo</literal> tenant credentials:</para>
<para>Source the <literal>demo</literal> credentials to gain access to
user-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
</step>
<step>
<para>Create the network:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create demo-net</userinput>
<computeroutput>Created a new network:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| admin_state_up | True |
| id | ac108952-6096-4243-adf4-bb6615b3de28 |
| name | demo-net |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | cdef0071a0194d19ac6bb63802dc9bae |
+----------------+--------------------------------------+</computeroutput></screen>
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | True |
| id | ac108952-6096-4243-adf4-bb6615b3de28 |
| name | demo-net |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | cdef0071a0194d19ac6bb63802dc9bae |
+-----------------+--------------------------------------+</computeroutput></screen>
</step>
</procedure>
<para>Like the external network, your tenant network also requires
a subnet attached to it. You can specify any valid subnet because the
architecture isolates tenant networks. Replace
<replaceable>TENANT_NETWORK_CIDR</replaceable> with the subnet
you want to associate with the tenant network. Replace
<replaceable>TENANT_NETWORK_GATEWAY</replaceable> with the gateway you
want to associate with this network, typically the ".1" IP address. By
default, this subnet will use DHCP so your instances can obtain IP
addresses.</para>
architecture isolates tenant networks. By default, this subnet will
use DHCP so your instances can obtain IP addresses.</para>
<procedure>
<title>To create a subnet on the tenant network</title>
<step>
<para>Create the subnet:</para>
<screen><prompt>$</prompt> <userinput>neutron subnet-create demo-net --name demo-subnet \
--gateway <replaceable>TENANT_NETWORK_GATEWAY</replaceable> <replaceable>TENANT_NETWORK_CIDR</replaceable></userinput></screen>
<para>Replace <replaceable>TENANT_NETWORK_CIDR</replaceable> with the
subnet you want to associate with the tenant network and
<replaceable>TENANT_NETWORK_GATEWAY</replaceable> with the gateway
you want to associate with it, typically the ".1" IP address.</para>
<para>Example using <literal>192.168.1.0/24</literal>:</para>
<screen><prompt>$</prompt> <userinput>neutron subnet-create demo-net --name demo-subnet \
--gateway 192.168.1.1 192.168.1.0/24</userinput>
@@ -207,6 +210,7 @@
| external_gateway_info | |
| id | 635660ae-a254-4feb-8993-295aa9ec6418 |
| name | demo-router |
| routes | |
| status | ACTIVE |
| tenant_id | cdef0071a0194d19ac6bb63802dc9bae |
+-----------------------+--------------------------------------+</computeroutput></screen>


@@ -42,15 +42,6 @@ net.ipv4.conf.default.rp_filter=0</programlisting>
<title>To configure the Networking common components</title>
<para>The Networking common component configuration includes the
authentication mechanism, message broker, and plug-in.</para>
<step os="debian">
<para>Respond to prompts for <link
linkend="debconf-dbconfig-common">database
management</link>, <link linkend="debconf-keystone_authtoken"
>Identity service credentials</link>, <link
linkend="debconf-api-endpoints">service endpoint
registration</link>, and <link linkend="debconf-rabbitmq"
>message broker credentials</link>.</para>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Configure Networking to use the Identity service for
authentication:</para>


@@ -74,16 +74,6 @@ IDENTIFIED BY '<replaceable>NEUTRON_DBPASS</replaceable>';</userinput></screen>
<para>The Networking server component configuration includes the database,
authentication mechanism, message broker, topology change notifier,
and plug-in.</para>
<step os="debian">
<para>Respond to prompts for
<link linkend="debconf-dbconfig-common">database management</link>,
<link linkend="debconf-keystone_authtoken">Identity service
credentials</link>,
<link linkend="debconf-api-endpoints">service endpoint
registration</link>, and
<link linkend="debconf-rabbitmq">message broker
credentials</link>.</para>
</step>
<step os="debian">
<para>During the installation, you will also be prompted for which
Networking plug-in to use. This will automatically fill the

@@ -45,15 +45,6 @@ net.ipv4.conf.default.rp_filter=0</programlisting>
<title>To configure the Networking common components</title>
<para>The Networking common component configuration includes the
authentication mechanism, message broker, and plug-in.</para>
<step os="debian">
<para>Respond to prompts for <link
linkend="debconf-dbconfig-common">database
management</link>, <link linkend="debconf-keystone_authtoken"
>Identity service credentials</link>, <link
linkend="debconf-api-endpoints">service endpoint
registration</link>, and <link linkend="debconf-rabbitmq"
>message broker credentials</link>.</para>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Configure Networking to use the Identity service for
authentication:</para>

@@ -0,0 +1,550 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="neutron-network-node">
<title>Install and configure network node</title>
<para>The network node primarily handles internal and external routing
and <glossterm>DHCP</glossterm> services for virtual networks.</para>
<procedure>
<title>To configure prerequisites</title>
<para>Before you install and configure OpenStack Networking, you
must configure certain kernel networking parameters.</para>
<step>
<para>Edit the <filename>/etc/sysctl.conf</filename> file to
contain the following parameters:</para>
<programlisting>net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0</programlisting>
</step>
<step>
<para>Implement the changes:</para>
<screen><prompt>#</prompt> <userinput>sysctl -p</userinput></screen>
</step>
</procedure>
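The three parameters set above can be verified without touching a live system. This Python sketch (the parser and constant names are ours) reads sysctl.conf-style text and reports any parameter that is missing or set to the wrong value:

```python
# Expected kernel networking parameters for the network node,
# taken from the /etc/sysctl.conf step above.
REQUIRED = {
    "net.ipv4.ip_forward": "1",
    "net.ipv4.conf.all.rp_filter": "0",
    "net.ipv4.conf.default.rp_filter": "0",
}

def parse_sysctl(text: str) -> dict:
    """Parse key=value lines, ignoring blank lines and # comments."""
    out = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "=" in line:
            key, _, value = line.partition("=")
            out[key.strip()] = value.strip()
    return out

conf = """net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0"""

# Any entry left here is missing or misconfigured.
missing = {k: v for k, v in REQUIRED.items() if parse_sysctl(conf).get(k) != v}
print(missing)  # {}
```

On a real node you would feed it the contents of `/etc/sysctl.conf` (or the output of `sysctl -a`) instead of the inline string.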
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To install the Networking components</title>
<step>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
neutron-l3-agent neutron-dhcp-agent</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install --no-recommends openstack-neutron-openvswitch-agent openstack-neutron-l3-agent \
openstack-neutron-dhcp-agent openstack-neutron-metadata-agent ipset</userinput></screen>
<note os="sles;opensuse">
<para>SUSE does not use a separate ML2 plug-in package.</para>
</note>
</step>
</procedure>
<procedure os="debian">
<title>To install and configure the Networking components</title>
<step>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms \
neutron-l3-agent neutron-dhcp-agent</userinput></screen>
<note>
<para>Debian does not use a separate ML2 plug-in package.</para>
</note>
</step>
<step>
<para>Select the ML2 plug-in:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/neutron_1_plugin_selection.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<note>
<para>Selecting the ML2 plug-in also populates the
<option>service_plugins</option> and
<option>allow_overlapping_ips</option> options in the
<filename>/etc/neutron/neutron.conf</filename> file with the
appropriate values.</para>
</note>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure the Networking common components</title>
<para>The Networking common component configuration includes the
authentication mechanism, message broker, and plug-in.</para>
<step>
<para>Edit the <filename>/etc/neutron/neutron.conf</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section, comment out
any <literal>connection</literal> options because network nodes
do not directly access the database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the
password you chose for the <literal>guest</literal> account in
<application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = neutron
admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<para>Replace <replaceable>NEUTRON_PASS</replaceable> with the
password you chose for the <literal>neutron</literal> user in the
Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, enable the
Modular Layer 2 (ML2) plug-in, router service, and overlapping
IP addresses:</para>
<programlisting language="ini">[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
</procedure>
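Two of the points above are easy to machine-check on a finished `neutron.conf`: network nodes should carry no `connection` option in `[database]`, and the RabbitMQ settings should be present in `[DEFAULT]`. A Python sketch using the standard `configparser` module (the inline config fragment is our own illustration):

```python
import configparser

# Fragment mirroring the neutron.conf produced by the steps above;
# the [database] connection option is deliberately commented out.
NEUTRON_CONF = """
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

[database]
# connection is intentionally absent on network nodes
"""

cp = configparser.ConfigParser()
cp.read_string(NEUTRON_CONF)
print(cp.has_option("database", "connection"))  # False
print(cp.get("DEFAULT", "rpc_backend"))         # rabbit
```

The same check applied to a controller node's `neutron.conf` should instead find the `connection` option present.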
<procedure>
<title>To configure the Modular Layer 2 (ML2) plug-in</title>
<para>The ML2 plug-in uses the
<glossterm baseform="Open vSwitch">Open vSwitch (OVS)</glossterm>
mechanism (agent) to build the virtual networking framework for
instances.</para>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>
file and complete the following actions:</para>
<substeps>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[ml2]</literal> section, enable the
<glossterm baseform="flat network">flat</glossterm> and
<glossterm>generic routing encapsulation (GRE)</glossterm>
network type drivers, GRE tenant networks, and the OVS
mechanism driver:</para>
<programlisting language="ini">[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch</programlisting>
</step>
<step>
<para>In the <literal>[ml2_type_flat]</literal> section, configure
the external network:</para>
<programlisting language="ini">[ml2_type_flat]
...
flat_networks = external</programlisting>
</step>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[ml2_type_gre]</literal> section, configure
the tunnel identifier (id) range:</para>
<programlisting language="ini">[ml2_type_gre]
...
tunnel_id_ranges = 1:1000</programlisting>
</step>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[securitygroup]</literal> section, enable
security groups, enable <glossterm>ipset</glossterm>, and
configure the OVS <glossterm>iptables</glossterm> firewall
driver:</para>
<programlisting language="ini">[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</programlisting>
</step>
<step>
<para>In the <literal>[ovs]</literal> section, configure the
<glossterm>Open vSwitch (OVS) agent</glossterm>:</para>
<programlisting language="ini">[ovs]
...
local_ip = <replaceable>INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS</replaceable>
tunnel_type = gre
enable_tunneling = True
bridge_mappings = external:br-ex</programlisting>
<para>Replace
<replaceable>INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the instance tunnels network interface
on your network node.</para>
</step>
</substeps>
</step>
</procedure>
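A useful rule of thumb for the ML2 settings above (our own consistency check, not an official validation) is that every value in `tenant_network_types` should also appear in `type_drivers`. The following sketch parses an `ml2_conf.ini` fragment and applies that check:

```python
import configparser

# Fragment mirroring the ml2_conf.ini produced by the steps above.
ML2_CONF = """
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000
"""

cp = configparser.ConfigParser()
cp.read_string(ML2_CONF)
drivers = {d.strip() for d in cp.get("ml2", "type_drivers").split(",")}
tenant = {t.strip() for t in cp.get("ml2", "tenant_network_types").split(",")}

# Every tenant network type must have its type driver enabled.
print(tenant <= drivers)  # True
```

With `type_drivers = flat,gre` and `tenant_network_types = gre` the check passes; dropping `gre` from `type_drivers` would make it fail.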
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure the Layer-3 (L3) agent</title>
<para>The <glossterm>Layer-3 (L3) agent</glossterm> provides
routing services for virtual networks.</para>
<step>
<para>Edit the <filename>/etc/neutron/l3_agent.ini</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
the driver, enable
<glossterm baseform="network namespace">network
namespaces</glossterm>, and configure the external
network bridge:</para>
<programlisting language="ini">[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
</procedure>
<procedure>
<title>To configure the DHCP agent</title>
<para>The <glossterm>DHCP agent</glossterm> provides DHCP
services for virtual networks.</para>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>Edit the <filename>/etc/neutron/dhcp_agent.ini</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
the drivers and enable namespaces:</para>
<programlisting language="ini">[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
<step>
<para>(Optional)</para>
<para>Tunneling protocols such as GRE include additional packet
headers that increase overhead and decrease space available for the
payload or user data. Without knowledge of the virtual network
infrastructure, instances attempt to send packets using the default
Ethernet <glossterm>maximum transmission unit (MTU)</glossterm> of
1500 bytes. <glossterm>Internet protocol (IP)</glossterm> networks
contain the <glossterm>path MTU discovery (PMTUD)</glossterm>
mechanism to detect end-to-end MTU and adjust packet size
accordingly. However, some operating systems and networks block or
otherwise lack support for PMTUD, causing performance degradation
or connectivity failure.</para>
<para>Ideally, you can prevent these problems by enabling
<glossterm baseform="jumbo frame">jumbo frames</glossterm> on the
physical network that contains your tenant virtual networks.
Jumbo frames support MTUs up to approximately 9000 bytes, which
negates the impact of GRE overhead on virtual networks. However,
many network devices lack support for jumbo frames and OpenStack
administrators often lack control over network infrastructure.
Given the latter complications, you can also prevent MTU problems
by reducing the instance MTU to account for GRE overhead.
Determining the proper MTU value often takes experimentation,
but 1454 bytes works in most environments. You can configure the
DHCP server that assigns IP addresses to your instances to also
adjust the MTU.</para>
<note>
<para>Some cloud images ignore the DHCP MTU option in which case
you should configure it using metadata, script, or other suitable
method.</para>
</note>
<substeps>
<step>
<para>Edit the <filename>/etc/neutron/dhcp_agent.ini</filename>
file and complete the following action:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, enable the
<glossterm>dnsmasq</glossterm> configuration file:</para>
<programlisting language="ini">[DEFAULT]
...
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf</programlisting>
</step>
</substeps>
</step>
<step>
<para>Create and edit the
<filename>/etc/neutron/dnsmasq-neutron.conf</filename> file and
complete the following action:</para>
<substeps>
<step>
<para>Enable the DHCP MTU option (26) and configure it to
1454 bytes:</para>
<programlisting language="ini">dhcp-option-force=26,1454</programlisting>
</step>
</substeps>
</step>
<step>
<para>Kill any existing
<systemitem role="process">dnsmasq</systemitem> processes:</para>
<screen><prompt>#</prompt> <userinput>pkill dnsmasq</userinput></screen>
</step>
</substeps>
</step>
</procedure>
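The 1454-byte value above can be motivated with back-of-the-envelope arithmetic. One plausible accounting of GRE-over-IPv4 overhead on a standard Ethernet network is sketched below; the specific header breakdown is our assumption (actual overhead varies with GRE options and VLAN tagging), which is why the text recommends experimentation:

```python
# Back-of-the-envelope estimate for the instance MTU on GRE tenant
# networks. Header sizes are one plausible breakdown, not authoritative.
PHYSICAL_MTU = 1500    # default Ethernet MTU on the physical network
OUTER_IPV4 = 20        # outer IPv4 header added by the tunnel
GRE_WITH_KEY = 8       # GRE header including the optional key field
INNER_ETH_VLAN = 18    # encapsulated Ethernet header incl. 802.1Q tag

instance_mtu = PHYSICAL_MTU - OUTER_IPV4 - GRE_WITH_KEY - INNER_ETH_VLAN
print(instance_mtu)  # 1454
```

This matches the `dhcp-option-force=26,1454` setting configured in the dnsmasq step.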
<procedure>
<title>To configure the metadata agent</title>
<para>The <glossterm baseform="Metadata agent">metadata agent</glossterm>
provides configuration information such as credentials to
instances.</para>
<step>
<para>Edit the <filename>/etc/neutron/metadata_agent.ini</filename>
file and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
access parameters:</para>
<programlisting language="ini">[DEFAULT]
...
auth_url = http://<replaceable>controller</replaceable>:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<para>Replace <replaceable>NEUTRON_PASS</replaceable> with the
password you chose for the <literal>neutron</literal> user in
the Identity service.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
metadata host:</para>
<programlisting language="ini">[DEFAULT]
...
nova_metadata_ip = <replaceable>controller</replaceable></programlisting>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
metadata proxy shared secret:</para>
<programlisting language="ini">[DEFAULT]
...
metadata_proxy_shared_secret = <replaceable>METADATA_SECRET</replaceable></programlisting>
<para>Replace <replaceable>METADATA_SECRET</replaceable> with a
suitable secret for the metadata proxy.</para>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
<step>
<para>On the <emphasis>controller</emphasis> node, edit the
<filename>/etc/nova/nova.conf</filename> file and complete the
following action:</para>
<substeps>
<step>
<para>In the <literal>[neutron]</literal> section, enable the
metadata proxy and configure the secret:</para>
<programlisting language="ini">[neutron]
...
service_metadata_proxy = True
metadata_proxy_shared_secret = <replaceable>METADATA_SECRET</replaceable></programlisting>
<para>Replace <replaceable>METADATA_SECRET</replaceable> with
the secret you chose for the metadata proxy.</para>
</step>
</substeps>
</step>
<step>
<para>On the <emphasis>controller</emphasis> node, restart the
Compute <glossterm>API</glossterm> service:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-api.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-nova-api restart</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-api.service</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-api restart</userinput></screen>
</step>
</procedure>
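A frequent misconfiguration here is a mismatched `metadata_proxy_shared_secret` between the network node's `metadata_agent.ini` and the controller's `nova.conf`. A small Python sketch (inline file contents and the helper are our own illustration) that checks the two values agree:

```python
import configparser

# Stand-ins for /etc/neutron/metadata_agent.ini on the network node
# and /etc/nova/nova.conf on the controller node.
METADATA_AGENT_INI = """
[DEFAULT]
metadata_proxy_shared_secret = METADATA_SECRET
"""

NOVA_CONF = """
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
"""

def secret(text: str, section: str) -> str:
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return cp.get(section, "metadata_proxy_shared_secret")

# Both nodes must use the same secret for the metadata proxy to work.
print(secret(METADATA_AGENT_INI, "DEFAULT") == secret(NOVA_CONF, "neutron"))  # True
```

On real nodes you would read the two files instead of the inline strings.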
<procedure>
<title>To configure the Open vSwitch (OVS) service</title>
<para>The OVS service provides the underlying virtual networking
framework for instances. The integration bridge
<literal>br-int</literal> handles internal instance network
traffic within OVS. The external bridge <literal>br-ex</literal>
handles external instance network traffic within OVS. The
external bridge requires a port on the physical external network
interface to provide instances with external network access. In
essence, this port connects the virtual and physical external
networks in your environment.</para>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the OVS service and configure it to start when the
system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openvswitch.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openvswitch.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openvswitch-switch start</userinput>
<prompt>#</prompt> <userinput>chkconfig openvswitch-switch on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openvswitch.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openvswitch.service</userinput></screen>
</step>
<step os="debian;ubuntu">
<para>Restart the OVS service:</para>
<screen><prompt>#</prompt> <userinput>service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Add the external bridge:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-ex</userinput></screen>
</step>
<step>
<para>Add a port to the external bridge that connects to the
physical external network interface:</para>
<para>Replace <replaceable>INTERFACE_NAME</replaceable> with the
actual interface name. For example, <emphasis>eth2</emphasis>
or <emphasis>ens256</emphasis>.</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-port br-ex <replaceable>INTERFACE_NAME</replaceable></userinput></screen>
<note>
<para>Depending on your network interface driver, you may need
to disable <glossterm>generic receive offload
(GRO)</glossterm> to achieve suitable throughput between
your instances and the external network.</para>
<para>To temporarily disable GRO on the external network
interface while testing your environment:</para>
<screen><prompt>#</prompt> <userinput>ethtool -K <replaceable>INTERFACE_NAME</replaceable> gro off</userinput></screen>
</note>
</step>
</procedure>
<procedure>
<title>To finalize the installation</title>
<step os="rhel;centos;fedora">
<para>The Networking service initialization scripts expect a
symbolic link <filename>/etc/neutron/plugin.ini</filename>
pointing to the ML2 plug-in configuration file,
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>.
If this symbolic link does not exist, create it using the
following command:</para>
<screen><prompt>#</prompt> <userinput>ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini</userinput></screen>
<para>Due to a packaging bug, the Open vSwitch agent initialization
script explicitly looks for the Open vSwitch plug-in configuration
file rather than a symbolic link
<filename>/etc/neutron/plugin.ini</filename> pointing to the ML2
plug-in configuration file. Run the following commands to resolve this
issue:</para>
<screen><prompt>#</prompt> <userinput>cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig</userinput>
<prompt>#</prompt> <userinput>sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service</userinput></screen>
</step>
<step os="sles;opensuse">
<para>The Networking service initialization scripts expect the
variable <literal>NEUTRON_PLUGIN_CONF</literal> in the
<filename>/etc/sysconfig/neutron</filename> file to
reference the ML2 plug-in configuration file. Edit the
<filename>/etc/sysconfig/neutron</filename> file and add the
following:</para>
<programlisting>NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"</programlisting>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Networking services and configure them to start
when the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service \
neutron-ovs-cleanup.service</userinput>
<prompt>#</prompt> <userinput>systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service</userinput></screen>
<note os="rhel;centos;fedora">
<para>Do not explicitly start the
<systemitem class="service">neutron-ovs-cleanup</systemitem>
service.</para>
</note>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-neutron-openvswitch-agent start</userinput>
<prompt>#</prompt> <userinput>service openstack-neutron-l3-agent start</userinput>
<prompt>#</prompt> <userinput>service openstack-neutron-dhcp-agent start</userinput>
<prompt>#</prompt> <userinput>service openstack-neutron-metadata-agent start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-neutron-openvswitch-agent on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-neutron-l3-agent on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-neutron-dhcp-agent on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-neutron-metadata-agent on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-neutron-ovs-cleanup on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-neutron-openvswitch-agent.service openstack-neutron-l3-agent.service \
openstack-neutron-dhcp-agent.service openstack-neutron-metadata-agent.service \
openstack-neutron-ovs-cleanup.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-neutron-openvswitch-agent.service openstack-neutron-l3-agent.service \
openstack-neutron-dhcp-agent.service openstack-neutron-metadata-agent.service</userinput></screen>
<note os="sles;opensuse">
<para>Do not explicitly start the
<systemitem class="service">openstack-neutron-ovs-cleanup</systemitem>
service.</para>
</note>
</step>
<step os="ubuntu;debian">
<para>Restart the Networking services:</para>
<screen><prompt>#</prompt> <userinput>service neutron-plugin-openvswitch-agent restart</userinput>
<prompt>#</prompt> <userinput>service neutron-l3-agent restart</userinput>
<prompt>#</prompt> <userinput>service neutron-dhcp-agent restart</userinput>
<prompt>#</prompt> <userinput>service neutron-metadata-agent restart</userinput></screen>
</step>
</procedure>
<procedure>
<title>Verify operation</title>
<note>
<para>Perform these commands on the controller node.</para>
</note>
<step>
<para>Source the <literal>admin</literal> credentials to gain access to
admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>List agents to verify successful launch of the
neutron agents:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput>
<computeroutput>+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent | network | :-) | True | neutron-metadata-agent |
| 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent | network | :-) | True | neutron-l3-agent |
| 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent | network | :-) | True | neutron-dhcp-agent |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+</computeroutput></screen>
</step>
</procedure>
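When scripting this verification, the `neutron agent-list` table can be checked mechanically. The sketch below (the parsing helper is ours; it assumes the plain table layout shown above) flags any agent that is not alive or administratively down:

```python
# Data rows copied from the neutron agent-list output above.
TABLE = """\
| 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent | network | :-) | True | neutron-metadata-agent |
| 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent | network | :-) | True | neutron-l3-agent |
| 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent | network | :-) | True | neutron-dhcp-agent |"""

def unhealthy(table: str) -> list:
    """Return agent types that are not alive (':-)') or not admin-up."""
    bad = []
    for row in table.splitlines():
        cells = [c.strip() for c in row.strip("|").split("|")]
        _id, agent_type, _host, alive, admin_up, _binary = cells
        if alive != ":-)" or admin_up != "True":
            bad.append(agent_type)
    return bad

print(unhealthy(TABLE))  # []
```

An empty list confirms all four agents on the network node launched successfully.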
</section>

@@ -42,17 +42,6 @@
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-neutron openstack-neutron-l3-agent \
openstack-neutron-dhcp-agent openstack-neutron-metadata-agent</userinput></screen>
</step>
<step os="debian">
<para>Respond to prompts for <link
linkend="debconf-dbconfig-common">database
management</link>, <link
linkend="debconf-keystone_authtoken"
><literal>[keystone_authtoken]</literal>
settings</link>, <link linkend="debconf-rabbitmq">RabbitMQ
credentials</link> and <link
linkend="debconf-api-endpoints">API endpoint</link>
registration.</para>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>Configure Networking agents to start at boot time:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>for s in neutron-{dhcp,metadata,l3}-agent; do chkconfig $s on; done</userinput></screen>

@@ -22,23 +22,14 @@
<title>To install and configure the Compute hypervisor components</title>
<step>
<para>Install the packages:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install nova-compute</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-nova-compute</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-nova-compute genisoimage</userinput></screen>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install nova-compute sysfsutils</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-nova-compute sysfsutils</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-nova-compute genisoimage kvm</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section, configure
database access:</para>
<programlisting language="ini">[database]
...
connection = mysql://nova:<replaceable>NOVA_DBPASS</replaceable>@<replaceable>controller</replaceable>/nova</programlisting>
<para>Replace <replaceable>NOVA_DBPASS</replaceable> with the password
you chose for the Compute database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
@@ -52,32 +43,40 @@ rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[keystone_authtoken</literal>] section,
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
auth_host = <replaceable>controller</replaceable>
auth_port = 35357
auth_protocol = http
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = nova
admin_password = <replaceable>NOVA_PASS</replaceable></programlisting>
<para>Replace <replaceable>NOVA_PASS</replaceable> with the password
you chose for the <literal>nova</literal> user in the Identity
service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
<literal>my_ip</literal> key:</para>
<literal>my_ip</literal> option:</para>
<programlisting language="ini">[DEFAULT]
...
my_ip = <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable></programlisting>
<para>Replace
<replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable> with
the IP address of the management network interface on your
first compute node, typically 10.0.0.31 in the
compute node, typically 10.0.0.31 for the first node in the
<link linkend="architecture_example-architectures">example
architecture</link>.</para>
</step>
@@ -98,7 +97,7 @@ novncproxy_base_url = http://<replaceable>controller</replaceable>:6080/vnc_auto
<para>Replace
<replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable> with
the IP address of the management network interface on your
first compute node, typically 10.0.0.31 in the
compute node, typically 10.0.0.31 for the first node in the
<link linkend="architecture_example-architectures">example
architecture</link>.</para>
<note>
@@ -110,11 +109,35 @@ novncproxy_base_url = http://<replaceable>controller</replaceable>:6080/vnc_auto
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
<para>In the <literal>[glance]</literal> section, configure the
location of the Image Service:</para>
<programlisting language="ini">[glance]
...
host = <replaceable>controller</replaceable></programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal> section:</para>
<programlisting language="ini">[DEFAULT]
...
glance_host = <replaceable>controller</replaceable></programlisting>
verbose = True</programlisting>
</step>
</substeps>
</step>
<step os="opensuse;sles">
<substeps>
<step>
<para>Ensure the kernel module <literal>nbd</literal> is
loaded.</para>
<screen><prompt>#</prompt> <userinput>modprobe nbd</userinput></screen>
</step>
<step>
<para>Ensure the module will be loaded on every boot.</para>
<para>On openSUSE, add <literal>nbd</literal> to the
<filename>/etc/modules-load.d/nbd.conf</filename> file.</para>
<para>On SLES, add or modify the following line in the
<filename>/etc/sysconfig/kernel</filename> file:</para>
<programlisting language="ini">MODULES_LOADED_ON_BOOT = "nbd"</programlisting>
</step>
</substeps>
</step>
@@ -125,16 +148,6 @@ glance_host = <replaceable>controller</replaceable></programlisting>
<para>Install the packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install nova-compute</userinput></screen>
</step>
<step>
<para>Respond to the prompts for
<link linkend="debconf-dbconfig-common">database management</link>,
<link linkend="debconf-keystone_authtoken">Identity service
credentials</link>,
<link linkend="debconf-api-endpoints">service endpoint
registration</link>, and
<link linkend="debconf-rabbitmq">message broker
credentials.</link>.</para>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
@@ -159,21 +172,6 @@ glance_host = <replaceable>controller</replaceable></programlisting>
<programlisting language="ini">[libvirt]
...
virt_type = qemu</programlisting>
<warning os="ubuntu">
<para>On Ubuntu 12.04, kernels backported from newer releases may
not automatically load the KVM modules for hardware acceleration
when the compute node boots. In this case, launching an instance
will fail with the following message in the
<filename>/var/log/nova/nova-compute.log</filename> file:</para>
<screen><computeroutput>libvirtError: internal error: no supported architecture for os type 'hvm'</computeroutput></screen>
<para>As a workaround for this issue, you must add the appropriate
module for your compute node to the
<filename>/etc/modules</filename> file.</para>
<para>For systems with Intel processors:</para>
<screen><prompt>#</prompt> <userinput>echo 'kvm_intel' >> /etc/modules</userinput></screen>
<para>For systems with AMD processors:</para>
<screen><prompt>#</prompt> <userinput>echo 'kvm_amd' >> /etc/modules</userinput></screen>
</warning>
</step>
</substeps>
</step>
@ -184,36 +182,24 @@ virt_type = qemu</programlisting>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Compute service including its dependencies and configure
them to start automatically when the system boots:</para>
<stepalternatives os="rhel;centos;fedora">
<step>
<para>For RHEL, CentOS, and compatible derivatives:</para>
<screen><prompt>#</prompt> <userinput>service libvirtd start</userinput>
<prompt>#</prompt> <userinput>service messagebus start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-compute start</userinput>
<prompt>#</prompt> <userinput>chkconfig libvirtd on</userinput>
<prompt>#</prompt> <userinput>chkconfig messagebus on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-compute on</userinput></screen>
</step>
<step>
<para>For Fedora:</para>
<screen><prompt>#</prompt> <userinput>service libvirtd start</userinput>
<prompt>#</prompt> <userinput>service dbus start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-compute start</userinput>
<prompt>#</prompt> <userinput>chkconfig libvirtd on</userinput>
<prompt>#</prompt> <userinput>chkconfig dbus on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-compute on</userinput></screen>
</step>
</stepalternatives>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service libvirtd start</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable libvirtd.service openstack-nova-compute.service</userinput>
<prompt>#</prompt> <userinput>systemctl start libvirtd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-compute.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service libvirtd start</userinput>
<prompt>#</prompt> <userinput>chkconfig libvirtd on</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-compute start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-compute on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable libvirtd.service openstack-nova-compute.service</userinput>
<prompt>#</prompt> <userinput>systemctl start libvirtd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-compute.service</userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create an SQLite database.</para>
<para>Because this configuration uses a SQL database server, you can
remove the SQLite database file:</para>
<screen><prompt>#</prompt> <userinput>rm /var/lib/nova/nova.sqlite</userinput></screen>
<screen><prompt>#</prompt> <userinput>rm -f /var/lib/nova/nova.sqlite</userinput></screen>
</step>
</procedure>
</section>


@ -21,19 +21,20 @@
</step>
<step>
<para>Create the <literal>nova</literal> database:</para>
<screen><prompt>mysql></prompt> <userinput>CREATE DATABASE nova;</userinput></screen>
<screen><userinput>CREATE DATABASE nova;</userinput></screen>
</step>
<step>
<para>Grant proper access to the <literal>nova</literal>
database:</para>
<screen><prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>';</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>';</userinput></screen>
<screen><userinput>GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>';</userinput>
<userinput>GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>';</userinput></screen>
<para>Replace <replaceable>NOVA_DBPASS</replaceable> with a suitable
password.</para>
</step>
<step>
<para>Exit the database access client:</para>
<screen><prompt>mysql></prompt> <userinput>exit</userinput></screen>
<para>Exit the database access client.</para>
</step>
</substeps>
</step>
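The database and access rights created above are consumed later when configuring Compute. As a sketch only (assuming the `controller` hostname and the `NOVA_DBPASS` placeholder used throughout this guide), the matching `[database]` section of `/etc/nova/nova.conf` would look like:

```ini
[database]
# SQLAlchemy connection string for the nova database created above;
# replace NOVA_DBPASS with the password chosen for the nova user.
connection = mysql://nova:NOVA_DBPASS@controller/nova
```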
@ -48,30 +49,62 @@
<substeps>
<step>
<para>Create the <literal>nova</literal> user:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name=nova --pass=<replaceable>NOVA_PASS</replaceable> --email=<replaceable>EMAIL_ADDRESS</replaceable></userinput></screen>
<screen><prompt>$</prompt> <userinput>keystone user-create --name nova --pass <replaceable>NOVA_PASS</replaceable></userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 387dd4f7e46d4f72965ee99c76ae748c |
| name | nova |
| username | nova |
+----------+----------------------------------+</computeroutput></screen>
<para>Replace <replaceable>NOVA_PASS</replaceable> with a suitable
password and <replaceable>EMAIL_ADDRESS</replaceable> with
a suitable e-mail address.</para>
password.</para>
</step>
<step>
<para>Link the <literal>nova</literal> user to the
<literal>service</literal> tenant and <literal>admin</literal>
role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user=nova --tenant=service --role=admin</userinput></screen>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user nova --tenant service --role admin</userinput></screen>
<note>
<para>This command provides no output.</para>
</note>
</step>
<step>
<para>Create the <literal>nova</literal> service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name=nova --type=compute --description="OpenStack Compute"</userinput></screen>
<screen><prompt>$</prompt> <userinput>keystone service-create --name nova --type compute \
--description "OpenStack Compute"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 6c7854f52ce84db795557ebc0373f6b9 |
| name | nova |
| type | compute |
+-------------+----------------------------------+</computeroutput></screen>
</step>
</substeps>
</step>
<step>
<para>Create the Compute service endpoints:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://<replaceable>controller</replaceable>:8774/v2/%\(tenant_id\)s \
--internalurl=http://<replaceable>controller</replaceable>:8774/v2/%\(tenant_id\)s \
--adminurl=http://<replaceable>controller</replaceable>:8774/v2/%\(tenant_id\)s</userinput></screen>
--service-id $(keystone service-list | awk '/ compute / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:8774/v2/%\(tenant_id\)s \
--internalurl http://<replaceable>controller</replaceable>:8774/v2/%\(tenant_id\)s \
--adminurl http://<replaceable>controller</replaceable>:8774/v2/%\(tenant_id\)s \
--region regionOne</userinput>
<computeroutput>+-------------+-----------------------------------------+
| Property | Value |
+-------------+-----------------------------------------+
| adminurl | http://controller:8774/v2/%(tenant_id)s |
| id | c397438bd82c41198ec1a9d85cb7cc74 |
| internalurl | http://controller:8774/v2/%(tenant_id)s |
| publicurl | http://controller:8774/v2/%(tenant_id)s |
| region | regionOne |
| service_id | 6c7854f52ce84db795557ebc0373f6b9 |
+-------------+-----------------------------------------+</computeroutput></screen>
</step>
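The `%\(tenant_id\)s` portion of each endpoint URL is a Python string-format placeholder that gets filled in per tenant (the backslashes only protect the parentheses from the shell). A minimal sketch of the substitution, using a made-up tenant ID for illustration:

```python
# The endpoint URL stored by the Identity service contains a
# %(tenant_id)s placeholder, expanded with ordinary %-formatting.
template = "http://controller:8774/v2/%(tenant_id)s"

def expand(template, tenant_id):
    """Substitute a tenant ID into an endpoint URL template."""
    return template % {"tenant_id": tenant_id}

# Hypothetical tenant ID, for illustration only.
print(expand(template, "6c7854f52ce84db795557ebc0373f6b9"))
# → http://controller:8774/v2/6c7854f52ce84db795557ebc0373f6b9
```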
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
@ -83,9 +116,9 @@
<screen os="fedora;rhel;centos"><prompt>#</prompt> <userinput>yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
python-novaclient</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-nova-api openstack-nova-scheduler \
openstack-nova-cert openstack-nova-conductor openstack-nova-console \
openstack-nova-consoleauth openstack-nova-novncproxy python-novaclient</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-nova-api openstack-nova-scheduler openstack-nova-cert \
openstack-nova-conductor openstack-nova-consoleauth openstack-nova-novncproxy \
python-novaclient iptables</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and
@ -113,25 +146,33 @@ rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[keystone_authtoken]</literal> section,
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
auth_host = <replaceable>controller</replaceable>
auth_port = 35357
auth_protocol = http
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = nova
admin_password = <replaceable>NOVA_PASS</replaceable></programlisting>
<para>Replace <replaceable>NOVA_PASS</replaceable> with the password
you chose for the <literal>nova</literal> user in the Identity
service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
<literal>my_ip</literal> key to use the management interface IP
<literal>my_ip</literal> option to use the management interface IP
address of the controller node:</para>
<programlisting language="ini">[DEFAULT]
...
@ -147,11 +188,18 @@ vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11</programlisting>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
<para>In the <literal>[glance]</literal> section, configure the
location of the Image Service:</para>
<programlisting language="ini">[glance]
...
host = <replaceable>controller</replaceable></programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal> section:</para>
<programlisting language="ini">[DEFAULT]
...
glance_host = <replaceable>controller</replaceable></programlisting>
verbose = True</programlisting>
</step>
</substeps>
</step>
@ -167,16 +215,6 @@ glance_host = <replaceable>controller</replaceable></programlisting>
<screen><prompt>#</prompt> <userinput>apt-get install nova-api nova-cert nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler python-novaclient</userinput></screen>
</step>
<step>
<para>Respond to prompts for
<link linkend="debconf-dbconfig-common">database management</link>,
<link linkend="debconf-keystone_authtoken">Identity service
credentials</link>,
<link linkend="debconf-api-endpoints">service endpoint
registration</link>, and
<link linkend="debconf-rabbitmq">message broker
credentials</link>.</para>
</step>
<step>
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and
complete the following actions:</para>
@ -207,7 +245,14 @@ vncserver_proxyclient_address = 10.0.0.11</programlisting>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Compute services and configure them to start when the
system boots:</para>
<screen><prompt>#</prompt> <userinput>service openstack-nova-api start</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-nova-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-cert start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-consoleauth start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-scheduler start</userinput>
@ -219,12 +264,19 @@ vncserver_proxyclient_address = 10.0.0.11</programlisting>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-scheduler on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-conductor on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-novncproxy on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service</userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create an SQLite database.</para>
<para>Because this configuration uses a SQL database server, you can
remove the SQLite database file:</para>
<screen><prompt>#</prompt> <userinput>rm /var/lib/nova/nova.sqlite</userinput></screen>
<screen><prompt>#</prompt> <userinput>rm -f /var/lib/nova/nova.sqlite</userinput></screen>
</step>
</procedure>
</section>


@ -14,52 +14,22 @@
<procedure>
<title>To install legacy networking components</title>
<step>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install nova-network nova-api-metadata</userinput></screen>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install nova-network nova-api-metadata</userinput></screen>
<screen os="debian"><prompt>#</prompt> <userinput>apt-get install nova-network nova-api</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-nova-network openstack-nova-api</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-nova-network openstack-nova-api</userinput></screen>
</step>
</procedure>
<procedure>
<title>To configure legacy networking</title>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Run the following commands:</para>
<para>Replace <replaceable>INTERFACE_NAME</replaceable> with the
actual interface name for the external network. For example,
<emphasis>eth1</emphasis> or <emphasis>ens224</emphasis>.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
network_api_class nova.network.api.API</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
security_group_api nova</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
network_manager nova.network.manager.FlatDHCPManager</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
network_size 254</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
allow_same_net_traffic False</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
multi_host True</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
send_arp_for_ha True</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
share_dhcp_address True</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
force_dhcp_release True</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
flat_network_bridge br100</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
flat_interface <replaceable>INTERFACE_NAME</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
public_interface <replaceable>INTERFACE_NAME</replaceable></userinput></screen>
</step>
<step os="ubuntu;debian">
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and add the
following keys to the <literal>[DEFAULT]</literal> section:</para>
<para>Replace <replaceable>INTERFACE_NAME</replaceable> with the
actual interface name for the external network. For example,
<emphasis>eth1</emphasis> or <emphasis>ens224</emphasis>.</para>
<programlisting language="ini">[DEFAULT]
<step>
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
the network parameters:</para>
<programlisting language="ini">[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova
@ -74,6 +44,11 @@ force_dhcp_release = True
flat_network_bridge = br100
flat_interface = <replaceable>INTERFACE_NAME</replaceable>
public_interface = <replaceable>INTERFACE_NAME</replaceable></programlisting>
<para>Replace <replaceable>INTERFACE_NAME</replaceable> with the
actual interface name for the external network. For example,
<emphasis>eth1</emphasis> or <emphasis>ens224</emphasis>.</para>
</step>
</substeps>
</step>
<step>
<para os="ubuntu;debian">Restart the services:</para>
@ -81,14 +56,16 @@ public_interface = <replaceable>INTERFACE_NAME</replaceable></programlisting>
<prompt>#</prompt> <userinput>service nova-api-metadata restart</userinput></screen>
<para os="rhel;centos;fedora;sles;opensuse">Start the services and
configure them to start when the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>service openstack-nova-network start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-metadata-api start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-network on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-metadata-api on</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>service openstack-nova-network start</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-network.service openstack-nova-metadata-api.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-nova-network start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-api-metadata start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-network on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-api-metadata on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-network.service openstack-nova-metadata-api.service</userinput></screen>
</step>
</procedure>
</section>


@ -5,30 +5,36 @@
version="5.0"
xml:id="nova-networking-controller-node">
<title>Configure controller node</title>
<para>Legacy networking primarily involves compute nodes. However, you must
configure the controller node to use it.</para>
<para>Legacy networking primarily involves compute nodes. However,
you must configure the controller node to use legacy
networking.</para>
<procedure>
<title>To configure legacy networking</title>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Run the following commands:</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
network_api_class nova.network.api.API</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
security_group_api nova</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and add the
following keys to the <literal>[DEFAULT]</literal> section:</para>
<programlisting language="ini">[DEFAULT]
<step>
<para>Edit the <filename>/etc/nova/nova.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
the network and security group APIs:</para>
<programlisting language="ini">[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova</programlisting>
</step>
</substeps>
</step>
<step>
<para>Restart the Compute services:</para>
<screen os="rhel;centos;fedora;sles;opensuse"><prompt>#</prompt> <userinput>service openstack-nova-api restart</userinput>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-nova-api restart</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-scheduler restart</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-conductor restart</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-api restart</userinput>
<prompt>#</prompt> <userinput>service nova-scheduler restart</userinput>
<prompt>#</prompt> <userinput>service nova-conductor restart</userinput></screen>


@ -7,8 +7,8 @@
<title>Create initial network</title>
<para>Before launching your first instance, you must create the necessary
virtual network infrastructure to which the instance will connect.
This network typically provides internet access
<emphasis>from</emphasis> instances. You can enable internet access
This network typically provides Internet access
<emphasis>from</emphasis> instances. You can enable Internet access
<emphasis>to</emphasis> individual instances using a
<glossterm>floating IP address</glossterm> and suitable
<glossterm>security group</glossterm> rules. The <literal>admin</literal>


@ -20,15 +20,15 @@
<para>List service components to verify successful launch of each
process:</para>
<screen><prompt>$</prompt> <userinput>nova service-list</userinput>
<computeroutput>+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| nova-cert | controller | internal | enabled | up | 2014-06-29T22:23:16.000000 | - |
| nova-consoleauth | controller | internal | enabled | up | 2014-06-29T22:23:10.000000 | - |
| nova-scheduler | controller | internal | enabled | up | 2014-06-29T22:23:14.000000 | - |
| nova-conductor | controller | internal | enabled | up | 2014-06-29T22:23:11.000000 | - |
| nova-compute | compute1 | nova | enabled | up | 2014-06-29T22:23:11.000000 | - |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+</computeroutput></screen>
<computeroutput>+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2014-09-16T23:54:02.000000 | - |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2014-09-16T23:54:04.000000 | - |
| 3 | nova-scheduler | controller | internal | enabled | up | 2014-09-16T23:54:07.000000 | - |
| 4 | nova-cert | controller | internal | enabled | up | 2014-09-16T23:54:00.000000 | - |
| 5 | nova-compute | compute1 | nova | enabled | up | 2014-09-16T23:54:06.000000 | - |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+</computeroutput></screen>
<note>
<para>This output should indicate four service components enabled on
the controller node and one service component enabled on the
compute node.</para>
@ -41,7 +41,7 @@
<computeroutput>+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.2-x86_64 | ACTIVE | |
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.3-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+</computeroutput></screen>
</step>
</procedure>


@ -0,0 +1,97 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="sahara-install">
<title>Install the Data processing service</title>
<para>This procedure installs the Data processing service (sahara) on the
controller node.</para>
<para>To install the Data processing service on the controller:</para>
<procedure>
<step os="rhel;centos;fedora;opensuse;sles">
<para>Install required packages:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-sahara python-saharaclient</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-sahara python-saharaclient</userinput></screen>
</step>
<step os="ubuntu;debian">
<warning><para>Sahara packages are not yet available for Ubuntu and
Debian. This documentation will be updated when packages become
available. The remainder of this section assumes that the sahara
service packages are installed on the system.</para></warning>
</step>
<step>
<para>Edit the <filename>/etc/sahara/sahara.conf</filename> configuration file:</para>
<substeps>
<step><para>First, edit the <option>connection</option> parameter in
the <literal>[database]</literal> section. The URL provided here
should point to an empty database. For instance, the connection
string for a MySQL database would be:
<programlisting language="ini">connection = mysql://sahara:<replaceable>SAHARA_DBPASS</replaceable>@<replaceable>controller</replaceable>/sahara</programlisting>
</para></step>
<step><para>Switch to the <literal>[keystone_authtoken]</literal>
section. The <option>auth_uri</option> parameter should point to
the public Identity API endpoint. <option>identity_uri</option>
should point to the admin Identity API endpoint. For example:
<programlisting language="ini">auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357</programlisting>
</para></step>
<step><para>Next, specify the <literal>admin_user</literal>,
<literal>admin_password</literal>, and
<literal>admin_tenant_name</literal> parameters. These parameters
must reference a Keystone user that has the <literal>admin</literal>
role in the given tenant. These credentials allow sahara to
authenticate and authorize its users.
</para></step>
<step><para>Switch to the <literal>[DEFAULT]</literal> section and
proceed to the networking parameters. If you are using Neutron
for networking, set <literal>use_neutron=true</literal>.
Otherwise, if you are using <systemitem>nova-network</systemitem>,
set the parameter to <literal>false</literal>.
</para></step>
<step><para>These settings are sufficient for a first run. If you
want to increase the logging level for troubleshooting, two
parameters are available in the configuration:
<literal>verbose</literal> and <literal>debug</literal>. If
<literal>verbose</literal> is set to <literal>true</literal>,
sahara writes logs of <literal>INFO</literal> level and above. If
<literal>debug</literal> is set to <literal>true</literal>, sahara
writes all logs, including <literal>DEBUG</literal> ones.
</para></step>
</substeps>
</step>
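Taken together, the edits above produce a `/etc/sahara/sahara.conf` along these lines. This is a sketch only: the `sahara` admin user name and the `SAHARA_PASS` placeholder are assumptions in line with this guide's naming conventions, not values mandated by the text above.

```ini
[DEFAULT]
# true when Neutron provides networking, false for nova-network
use_neutron = false
# optional: raises logging to INFO level and above
verbose = true

[database]
# must point to an empty database
connection = mysql://sahara:SAHARA_DBPASS@controller/sahara

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
# assumed service user with the admin role in the service tenant
admin_user = sahara
admin_password = SAHARA_PASS
admin_tenant_name = service
```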
<step><para>If you use the Data processing service with a MySQL
database, you must increase the maximum allowed packet size so that
large job binaries can be stored in the sahara internal database.
Edit the <filename>my.cnf</filename> file and change the following
parameter:
<programlisting language="ini">[mysqld]
max_allowed_packet = 256M</programlisting>
Then restart the MySQL server.
</para></step>
<step><para>Create the database schema:
<screen><prompt>#</prompt> <userinput>sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head</userinput></screen>
</para></step>
<step><para>You must register the Data processing service with the Identity
service so that other OpenStack services can locate it. Register the
service and specify the endpoint:
<screen><prompt>$</prompt> <userinput>keystone service-create --name sahara --type data_processing \
--description "Data processing service"</userinput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ sahara / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:8386/v1.1/%\(tenant_id\)s \
--internalurl http://<replaceable>controller</replaceable>:8386/v1.1/%\(tenant_id\)s \
--adminurl http://<replaceable>controller</replaceable>:8386/v1.1/%\(tenant_id\)s \
--region regionOne</userinput></screen>
</para></step>
<step><para>Start the sahara service:
<screen os="rhel;centos;fedora;opensuse;ubuntu;debian"><prompt>#</prompt> <userinput>systemctl start openstack-sahara-all</userinput></screen>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-sahara-all start</userinput></screen>
</para></step>
<step><para>(Optional) Enable the Data processing service to start on boot:
<screen os="rhel;centos;fedora;opensuse;ubuntu;debian"><prompt>#</prompt> <userinput>systemctl enable openstack-sahara-all</userinput></screen>
<screen os="sles"><prompt>#</prompt> <userinput>chkconfig openstack-sahara-all on</userinput></screen>
</para></step>
</procedure>
</section>


@ -0,0 +1,26 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="sahara-verify">
<title>Verify the Data processing service installation</title>
<para>To verify that the Data processing service (sahara) is installed
and configured correctly, retrieve the list of clusters with the
sahara client.</para>
<procedure>
<step>
<para>Source the <literal>demo</literal> tenant credentials:</para>
<screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
</step>
<step>
<para>Retrieve the list of sahara clusters:</para>
<screen><prompt>$</prompt> <userinput>sahara cluster-list</userinput></screen>
<para>You should see output similar to this:</para>
<screen><computeroutput>+------+----+--------+------------+
| name | id | status | node_count |
+------+----+--------+------------+
+------+----+--------+------------+</computeroutput></screen>
</step>
</procedure>
</section>


@ -13,26 +13,25 @@
OpenStack environment with at least the following components
installed: Compute, Image Service, Identity.</para>
</formalpara>
<note os="ubuntu">
<title>Ubuntu 14.04 Only</title>
<para>The Database module is only available under Ubuntu 14.04.
Packages are not available for 12.04, or via the Ubuntu Cloud
Archive.</para>
</note>
<itemizedlist>
<listitem>
<para>If you want to do backup and restore, you also need Object Storage.</para>
</listitem>
<listitem>
<para>If you want to provision datastores on block-storage volumes, you also need Block Storage.</para>
</listitem>
</itemizedlist>
<para>To install the Database module on the controller:</para>
<procedure>
<step>
<para>Install required packages:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install python-trove python-troveclient python-glanceclient \
trove-common trove-api trove-taskmanager</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install python-trove python-troveclient \
trove-common trove-api trove-taskmanager trove-conductor</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-trove python-troveclient</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-trove python-troveclient</userinput></screen>
</step>
<step os="debian">
<para>Respond to the prompts for <link
linkend="debconf-dbconfig-common">database management</link> and
<link linkend="debconf-keystone_authtoken"
><literal>[keystone_authtoken]</literal> settings</link>,
<para>Respond to the prompts for <link linkend="debconf-dbconfig-common">database management</link> and <link linkend="debconf-keystone_authtoken"><literal>[keystone_authtoken]</literal> settings</link>,
and <link linkend="debconf-api-endpoints">API endpoint</link>
registration. The <command>trove-manage db_sync</command>
command runs automatically.</para>
@ -51,27 +50,38 @@
<literal>service</literal> tenant and give the user the
<literal>admin</literal> role:
</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name=trove --pass=<replaceable>TROVE_PASS</replaceable> \
--email=<replaceable>trove@example.com</replaceable></userinput>
<prompt>$</prompt> <userinput>keystone user-role-add --user=trove --tenant=service --role=admin</userinput></screen>
<screen><prompt>$</prompt> <userinput>keystone user-create --name trove --pass <replaceable>TROVE_PASS</replaceable></userinput>
<prompt>$</prompt> <userinput>keystone user-role-add --user trove --tenant service --role admin</userinput></screen>
<para>Replace <replaceable>TROVE_PASS</replaceable> with a
suitable password.</para>
</step>
</substeps>
</step>
<step>
<para>Edit the following configuration files, taking the below
<para>All configuration files are located in the <filename>/etc/trove</filename>
directory. Edit the following configuration files, taking the
actions described below for each file:</para>
<itemizedlist>
<listitem><para><filename>api-paste.ini</filename></para></listitem>
<listitem><para><filename>trove.conf</filename></para></listitem>
<listitem><para><filename>trove-taskmanager.conf</filename></para></listitem>
<listitem><para><filename>trove-conductor.conf</filename></para></listitem>
</itemizedlist>
<substeps>
<step>
<para>Edit the upstream <filename>api-paste.ini</filename> file and update the following content in it:</para>
<programlisting language="ini">[composite:trove]
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
auth_host = <replaceable>controller</replaceable>
admin_tenant_name = service
admin_user = trove
admin_password = <replaceable>TROVE_PASS</replaceable></programlisting></step>
<step><para>Edit the <literal>[DEFAULT]</literal> section of
each file and set appropriate values for the OpenStack service
URLs, logging and messaging configuration, and SQL
each file (except <filename>api-paste.ini</filename>) and set appropriate values for the OpenStack service
URLs (these can also be resolved through the Identity service catalog), logging and messaging configuration, and SQL
connections:</para>
<programlisting language="ini">[DEFAULT]
log_dir = /var/log/trove
@ -83,74 +93,38 @@ sql_connection = mysql://trove:<literal>TROVE_DBPASS</literal>@<replaceable>cont
notifier_queue_hostname = <replaceable>controller</replaceable></programlisting>
</step>
<step os="ubuntu">
<step>
<para>Configure the Database module to use the RabbitMQ message broker by
setting the rabbit_password in the <literal>[DEFAULT]</literal>
setting the following options in the <literal>[DEFAULT]</literal>
configuration group of each file:</para>
<programlisting language="ini">[DEFAULT]
...
control_exchange = trove
rabbit_host = <replaceable>controller</replaceable>
rabbit_userid = <replaceable>guest</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable>
...</programlisting>
rabbit_virtual_host = <replaceable>/</replaceable>
rpc_backend = trove.openstack.common.rpc.impl_kombu</programlisting>
</step>
<step os="opensuse;sles;rhel;centos;fedora">
<para>Set these configuration keys to configure the Database module to use
the RabbitMQ message broker:</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/trove/trove.conf \
DEFAULT rpc_backend rabbit</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/trove/trove-taskmanager.conf \
DEFAULT rpc_backend rabbit</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/trove/trove-conductor.conf \
DEFAULT rpc_backend rabbit</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/trove/trove.conf DEFAULT \
rabbit_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT \
rabbit_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/trove/trove-conductor.conf DEFAULT \
rabbit_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/trove/trove.conf DEFAULT \
rabbit_password <replaceable>RABBIT_PASS</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT \
rabbit_password <replaceable>RABBIT_PASS</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/trove/trove-conductor.conf DEFAULT \
rabbit_password <replaceable>RABBIT_PASS</replaceable></userinput></screen>
</step>
</substeps>
</step>
<step os="opensuse;sles;fedora;rhel;centos;ubuntu">
<para>Edit the <literal>[filter:authtoken]</literal> section
of the <filename>api-paste.ini</filename> file so it matches the
listing shown below:</para>
<programlisting language="ini">[filter:authtoken]
auth_host = <replaceable>controller</replaceable>
auth_port = 35357
auth_protocol = http
admin_user = trove
admin_password = <replaceable>ADMIN_PASS</replaceable>
admin_token = <replaceable>ADMIN_TOKEN</replaceable>
admin_tenant_name = service
signing_dir = /var/cache/trove</programlisting>
</step>
<step><para>Edit the <filename>trove.conf</filename> file so it includes
appropriate values for the default datastore and network label
regex as shown below:</para>
<programlisting language="ini">[DEFAULT]
default_datastore = mysql
....
# Config option for showing the IP address that nova doles out
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
....</programlisting>
control_exchange = trove</programlisting>
</step>
<step>
<para>Edit the <filename>trove-taskmanager.conf</filename> file
so it includes the appropriate service credentials required to
so it includes the required settings to
connect to the OpenStack Compute service as shown below:</para>
<programlisting language="ini">[DEFAULT]
....
# Configuration options for talking to nova via the novaclient.
# These options are for an admin user in your keystone config.
# It proxy's the token received from the user to send to nova via this admin users creds,
@ -158,15 +132,19 @@ network_label_regex = ^NETWORK_LABEL$
nova_proxy_admin_user = admin
nova_proxy_admin_pass = <replaceable>ADMIN_PASS</replaceable>
nova_proxy_admin_tenant_name = service
...</programlisting>
taskmanager_manager = trove.taskmanager.manager.Manager
log_file = trove-taskmanager.log</programlisting>
</step>
<step os="opensuse;sles;fedora;rhel;centos;ubuntu">
<para>Prepare the trove admin database:</para>
<screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput>
<prompt>mysql&gt;</prompt> <userinput>CREATE DATABASE trove;</userinput>
<prompt>mysql&gt;</prompt> <userinput>GRANT ALL PRIVILEGES ON trove.* TO trove@'localhost' IDENTIFIED BY 'TROVE_DBPASS';</userinput>
<prompt>mysql&gt;</prompt> <userinput>GRANT ALL PRIVILEGES ON trove.* TO trove@'%' IDENTIFIED BY 'TROVE_DBPASS';</userinput></screen>
<prompt>mysql&gt;</prompt> <userinput>GRANT ALL PRIVILEGES ON trove.* TO trove@'localhost' \
IDENTIFIED BY '<replaceable>TROVE_DBPASS</replaceable>';</userinput>
<prompt>mysql&gt;</prompt> <userinput>GRANT ALL PRIVILEGES ON trove.* TO trove@'%' \
IDENTIFIED BY '<replaceable>TROVE_DBPASS</replaceable>';</userinput></screen>
</step>
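The interactive <command>mysql</command> session above can also be scripted. A minimal sketch that builds the same statements non-interactively (<literal>TROVE_DBPASS</literal> remains a placeholder to substitute):

```shell
# Assemble the trove database setup statements; TROVE_DBPASS is a placeholder
TROVE_DBPASS=TROVE_DBPASS
sql="CREATE DATABASE IF NOT EXISTS trove;
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY '${TROVE_DBPASS}';
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY '${TROVE_DBPASS}';"
printf '%s\n' "$sql"
# When ready, feed the statements to the client:
#   printf '%s\n' "$sql" | mysql -u root -p
```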
<step os="opensuse;sles;fedora;rhel;centos;ubuntu">
@ -174,7 +152,7 @@ nova_proxy_admin_tenant_name = service
<substeps>
<step>
<para>Initialize the database:</para>
<screen><prompt>#</prompt> <userinput>su -s /bin/sh -c "trove-manage db_sync" trove</userinput></screen>
<screen><prompt>#</prompt> <userinput>trove-manage db_sync</userinput></screen>
</step>
<step>
<para>Create a datastore. You need to create a separate datastore for
@ -184,12 +162,6 @@ nova_proxy_admin_tenant_name = service
</step>
</substeps>
</step>
<step os="debian">
<para>Create a datastore. You need to create a separate datastore for
each type of database you want to use, for example, MySQL, MongoDB, Cassandra.
This example shows you how to create a datastore for a MySQL database:</para>
<screen><prompt>#</prompt> <userinput>su -s /bin/sh -c "trove-manage datastore_update mysql ''" trove</userinput></screen>
</step>
<step>
<para>Create a trove image.</para>
<para>Create an image for the type of database you want to use,
@ -209,48 +181,70 @@ rabbit_password = <replaceable>RABBIT_PASS</replaceable>
nova_proxy_admin_user = admin
nova_proxy_admin_pass = <replaceable>ADMIN_PASS</replaceable>
nova_proxy_admin_tenant_name = service
trove_auth_url = http://<replaceable>controller</replaceable>:35357/v2.0</programlisting>
trove_auth_url = http://<replaceable>controller</replaceable>:35357/v2.0
log_file = trove-guestagent.log</programlisting>
</step>
</substeps>
</step>
<step>
<para>Update the datastore to use the new image, using the
<command>trove-manage</command> command.</para>
<para>This example shows you how to create a MySQL 5.5 datastore:</para>
<screen><prompt>#</prompt> <userinput>trove-manage --config-file=/etc/trove/trove.conf datastore_version_update \
mysql mysql-5.5 mysql <replaceable>glance_image_ID</replaceable> mysql-server-5.5 1</userinput></screen>
<para>Update the datastore and version to use the specific image with the <command>trove-manage</command> command.</para>
<screen><prompt>#</prompt> <userinput>trove-manage datastore_update <replaceable>datastore_name</replaceable> <replaceable>datastore_version</replaceable></userinput>
<prompt>#</prompt> <userinput>trove-manage datastore_version_update <replaceable>datastore_name</replaceable> <replaceable>version_name</replaceable> \
<replaceable>datastore_manager</replaceable> <replaceable>glance_image_id</replaceable> <replaceable>packages</replaceable> <replaceable>active</replaceable></userinput></screen>
<para>This example shows you how to create a MySQL datastore with version 5.5:</para>
<screen><prompt>#</prompt> <userinput>trove-manage datastore_update mysql ''</userinput>
<prompt>#</prompt> <userinput>trove-manage datastore_version_update mysql 5.5 mysql <replaceable>glance_image_ID</replaceable> mysql-server-5.5 1</userinput>
<prompt>#</prompt> <userinput>trove-manage datastore_update mysql 5.5</userinput></screen>
<para>
Upload post-provisioning configuration validation rules:
</para>
<screen><prompt>#</prompt> <userinput>trove-manage db_load_datastore_config_parameters <replaceable>datastore_name</replaceable> <replaceable>version_name</replaceable> \
/etc/<replaceable>datastore_name</replaceable>/validation-rules.json</userinput></screen>
<para>Example of uploading rules for the MySQL datastore:</para>
<screen><prompt>#</prompt> <userinput>trove-manage db_load_datastore_config_parameters \
mysql 5.5 "$PYBASEDIR"/trove/templates/mysql/validation-rules.json</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>You must register the Database module with the Identity service so
that other OpenStack services can locate it. Register the
service and specify the endpoint:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name=trove --type=database \
--description="OpenStack Database Service"</userinput>
<screen><prompt>$</prompt> <userinput>keystone service-create --name trove --type database \
--description "OpenStack Database Service"</userinput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ trove / {print $2}') \
--publicurl=http://<replaceable>controller</replaceable>:8779/v1.0/%\(tenant_id\)s \
--internalurl=http://<replaceable>controller</replaceable>:8779/v1.0/%\(tenant_id\)s \
--adminurl=http://<replaceable>controller</replaceable>:8779/v1.0/%\(tenant_id\)s</userinput></screen>
--service-id $(keystone service-list | awk '/ trove / {print $2}') \
--publicurl http://<replaceable>controller</replaceable>:8779/v1.0/%\(tenant_id\)s \
--internalurl http://<replaceable>controller</replaceable>:8779/v1.0/%\(tenant_id\)s \
--adminurl http://<replaceable>controller</replaceable>:8779/v1.0/%\(tenant_id\)s \
--region regionOne</userinput></screen>
</step>
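The <command>awk</command> filter in the endpoint registration above deserves a note: it selects the table row containing <literal> trove </literal> and prints the second whitespace-separated field, which is the service ID because the leading <literal>|</literal> counts as field one. A self-contained sketch against a fabricated sample table:

```shell
# Illustrative `keystone service-list` output (the id value is fabricated)
table='+----------------------------------+-------+----------+----------------------------+
|                id                | name  |   type   |        description         |
+----------------------------------+-------+----------+----------------------------+
| 1b2f3c4d5e6f7a8b9c0d1e2f3a4b5c6d | trove | database | OpenStack Database Service |
+----------------------------------+-------+----------+----------------------------+'

# " trove " (with surrounding spaces) matches only the data row; under default
# whitespace splitting $1 is "|" and $2 is the id column
service_id=$(printf '%s\n' "$table" | awk '/ trove / {print $2}')
echo "$service_id"
```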
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para os="centos;fedora;rhel;opensuse;sles">Start Database
services and configure them to start when the system
boots:</para>
<para os="ubuntu">Restart Database services:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>service trove-api restart</userinput>
<step os="ubuntu;debian">
<para>Restart the Database services:</para>
<screen><prompt>#</prompt> <userinput>service trove-api restart</userinput>
<prompt>#</prompt> <userinput>service trove-taskmanager restart</userinput>
<prompt>#</prompt> <userinput>service trove-conductor restart</userinput></screen>
<screen os="centos;rhel;fedora;opensuse;sles"><prompt>#</prompt> <userinput>service openstack-trove-api start</userinput>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Database services and configure them to start when the
system boots:</para>
<screen os="centos;rhel;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
openstack-trove-conductor.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
openstack-trove-conductor.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-trove-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-trove-taskmanager start</userinput>
<prompt>#</prompt> <userinput>service openstack-trove-conductor start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-trove-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-trove-taskmanager on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-trove-conductor on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
openstack-trove-conductor.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
openstack-trove-conductor.service</userinput></screen>
</step>
</procedure>


@ -31,7 +31,7 @@
</para>
<para>This example shows you how to create a MySQL 5.5
database:</para>
<screen><prompt>$</prompt> <userinput>trove create <replaceable>name</replaceable> 2 --size=2 --databases=<replaceable>DBNAME</replaceable> \
<screen><prompt>$</prompt> <userinput>trove create <replaceable>name</replaceable> 2 --size=2 --databases <replaceable>DBNAME</replaceable> \
--users <replaceable>USER</replaceable>:<replaceable>PASSWORD</replaceable> --datastore_version mysql-5.5 \
--datastore mysql</userinput></screen>
</step>


@ -10,7 +10,7 @@
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/section_dochistory.xml">
</xi:include>
</preface>
<appendix xml:id="under-construction">
<appendix xml:id="preface_under-construction">
<title>OpenStack Training Guides Are Under Construction</title>
<para>We need your help! This is a community-driven project to provide the user group community
access to OpenStack training materials. We cannot make this work without your help.</para>


@ -99,12 +99,10 @@
</revision>
</revhistory>
</info>
<!-- <xi:include href="under-contruction-notice.xml"/> -->
<xi:include href="under-contruction-notice.xml"/>
<xi:include href="bk_preface.xml"/>
<xi:include href="associate-guide/bk_associate-training-guide.xml"/>
<xi:include href="operator-guide/bk_operator-training-guide.xml"/>
<xi:include href="developer-guide/bk_developer-training-guide.xml"/>
<xi:include href="architect-guide/bk_architect-training-guide.xml"/>
<!-- For Juno Release -->
<!-- xi:include href="basic-install-guide/bk-openstack-basic-install-guide.xml"/ -->
</set>