From c2d54b9737cd5839162827eb4b6f3478630dde68 Mon Sep 17 00:00:00 2001
From: chenxing <chason.chan@foxmail.com>
Date: Wed, 28 Feb 2018 19:09:55 +0800
Subject: [PATCH] Upgrade the rst convention of the Reference Guide [2]

We upgrade the RST convention by following the Documentation Contributor
Guide [1].

[1] https://docs.openstack.org/doc-contrib-guide

Change-Id: I825deadefcf996732a03e61c8fd19cfd6a498e77
Partially-Implements: blueprint optimize-the-documentation-format
---
 doc/source/reference/external-ceph-guide.rst  | 308 ++++++++++--------
 .../reference/external-mariadb-guide.rst      | 191 +++++------
 doc/source/reference/hyperv-guide.rst         | 123 ++++---
 doc/source/reference/index.rst                |   8 +-
 doc/source/reference/ironic-guide.rst         |  59 ++--
 doc/source/reference/kuryr-guide.rst          |  51 +--
 6 files changed, 398 insertions(+), 342 deletions(-)

diff --git a/doc/source/reference/external-ceph-guide.rst b/doc/source/reference/external-ceph-guide.rst
index ba92273481..a660347e09 100644
--- a/doc/source/reference/external-ceph-guide.rst
+++ b/doc/source/reference/external-ceph-guide.rst
@@ -9,7 +9,7 @@ cluster instead of deploying it with Kolla. This can be achieved with only a
 few configuration steps in Kolla.
 
 Requirements
-============
+~~~~~~~~~~~~
 
 * An existing installation of Ceph
 * Existing Ceph storage pools
@@ -17,92 +17,103 @@ Requirements
   (Glance, Cinder, Nova, Gnocchi)
 
 Enabling External Ceph
-======================
+~~~~~~~~~~~~~~~~~~~~~~
 
 Using external Ceph with Kolla means that Ceph is not deployed by Kolla.
 Therefore, disable Ceph deployment in ``/etc/kolla/globals.yml``:
 
-::
+.. code-block:: yaml
 
-  enable_ceph: "no"
+   enable_ceph: "no"
+
+.. end
 
 There are flags indicating whether individual services should use Ceph, which
 default to the value of ``enable_ceph``. These flags must be enabled in order
 to activate the external Ceph integration. This can be done individually per
 service in ``/etc/kolla/globals.yml``:
 
-::
+.. code-block:: yaml
 
-  glance_backend_ceph: "yes"
-  cinder_backend_ceph: "yes"
-  nova_backend_ceph: "yes"
-  gnocchi_backend_storage: "ceph"
-  enable_manila_backend_ceph_native: "yes"
+   glance_backend_ceph: "yes"
+   cinder_backend_ceph: "yes"
+   nova_backend_ceph: "yes"
+   gnocchi_backend_storage: "ceph"
+   enable_manila_backend_ceph_native: "yes"
+
+.. end
 
 The combination of ``enable_ceph: "no"`` and ``<service>_backend_ceph: "yes"``
 triggers the activation of the external Ceph mechanism in Kolla.
 
 Edit the Inventory File
-=======================
+~~~~~~~~~~~~~~~~~~~~~~~
 
 When using external Ceph, there may be no nodes defined in the storage group.
 This will cause Cinder and related services relying on this group to fail.
 In this case, the operator should add to the storage group all the
-nodes where cinder-volume and cinder-backup will run:
+nodes where ``cinder-volume`` and ``cinder-backup`` will run:
 
-::
+.. code-block:: ini
 
-  [storage]
-  compute01
+   [storage]
+   compute01
+
+.. end
 
 Configuring External Ceph
-=========================
+~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Glance
 ------
 
 Configuring Glance for Ceph includes three steps:
 
-1) Configure RBD back end in glance-api.conf
-2) Create Ceph configuration file in /etc/ceph/ceph.conf
-3) Create Ceph keyring file in /etc/ceph/ceph.client.<username>.keyring
+#. Configure RBD back end in ``glance-api.conf``
+#. Create Ceph configuration file in ``/etc/ceph/ceph.conf``
+#. Create Ceph keyring file in ``/etc/ceph/ceph.client.<username>.keyring``
 
 Step 1 is done by using Kolla's INI merge mechanism: Create a file in
 ``/etc/kolla/config/glance/glance-api.conf`` with the following contents:
 
-::
+.. code-block:: ini
 
-  [glance_store]
-  stores = rbd
-  default_store = rbd
-  rbd_store_pool = images
-  rbd_store_user = glance
-  rbd_store_ceph_conf = /etc/ceph/ceph.conf
+   [glance_store]
+   stores = rbd
+   default_store = rbd
+   rbd_store_pool = images
+   rbd_store_user = glance
+   rbd_store_ceph_conf = /etc/ceph/ceph.conf
+
+.. end
 
 Now put ``ceph.conf`` and the keyring file (its name depends on the username
 created in Ceph) into the same directory, for example:
 
-/etc/kolla/config/glance/ceph.conf
+.. path /etc/kolla/config/glance/ceph.conf
+.. code-block:: ini
 
-::
+   [global]
+   fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
+   mon_initial_members = ceph-0
+   mon_host = 192.168.0.56
+   auth_cluster_required = cephx
+   auth_service_required = cephx
+   auth_client_required = cephx
 
-  [global]
-  fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
-  mon_initial_members = ceph-0
-  mon_host = 192.168.0.56
-  auth_cluster_required = cephx
-  auth_service_required = cephx
-  auth_client_required = cephx
+.. end
 
-/etc/kolla/config/glance/ceph.client.glance.keyring
+.. code-block:: console
 
-::
+   $ cat /etc/kolla/config/glance/ceph.client.glance.keyring
 
-  [client.glance]
-          key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
+   [client.glance]
+   key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
 
-Kolla will pick up all files named ceph.* in this directory an copy them to the
-/etc/ceph/ directory of the container.
+.. end
+
+Kolla will pick up all files named ``ceph.*`` in this directory and copy them
+to the ``/etc/ceph/`` directory of the container.
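
For reference, after the steps above a listing of the directory might look
like the following sketch (the keyring filename depends on the Ceph username;
``glance-api.conf`` is the merge file created in step 1):

```console
$ ls /etc/kolla/config/glance/
ceph.client.glance.keyring  ceph.conf  glance-api.conf
```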
 
 Cinder
 ------
@@ -110,61 +121,68 @@ Cinder
 Configuring external Ceph for Cinder works very similarly to
 Glance.
 
-Edit /etc/kolla/config/cinder/cinder-volume.conf with the following content:
+Modify the ``/etc/kolla/config/cinder/cinder-volume.conf`` file according to
+the following configuration:
 
-::
+.. code-block:: ini
 
-  [DEFAULT]
-  enabled_backends=rbd-1
+   [DEFAULT]
+   enabled_backends=rbd-1
 
-  [rbd-1]
-  rbd_ceph_conf=/etc/ceph/ceph.conf
-  rbd_user=cinder
-  backend_host=rbd:volumes
-  rbd_pool=volumes
-  volume_backend_name=rbd-1
-  volume_driver=cinder.volume.drivers.rbd.RBDDriver
-  rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
+   [rbd-1]
+   rbd_ceph_conf=/etc/ceph/ceph.conf
+   rbd_user=cinder
+   backend_host=rbd:volumes
+   rbd_pool=volumes
+   volume_backend_name=rbd-1
+   volume_driver=cinder.volume.drivers.rbd.RBDDriver
+   rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
+
+.. end
 
 .. note::
 
-    ``cinder_rbd_secret_uuid`` can be found in ``/etc/kolla/passwords.yml`` file.
+   ``cinder_rbd_secret_uuid`` can be found in the ``/etc/kolla/passwords.yml`` file.
 
-Edit /etc/kolla/config/cinder/cinder-backup.conf with the following content:
+Modify the ``/etc/kolla/config/cinder/cinder-backup.conf`` file according to
+the following configuration:
 
-::
+.. code-block:: ini
 
-  [DEFAULT]
-  backup_ceph_conf=/etc/ceph/ceph.conf
-  backup_ceph_user=cinder-backup
-  backup_ceph_chunk_size = 134217728
-  backup_ceph_pool=backups
-  backup_driver = cinder.backup.drivers.ceph
-  backup_ceph_stripe_unit = 0
-  backup_ceph_stripe_count = 0
-  restore_discard_excess_bytes = true
+   [DEFAULT]
+   backup_ceph_conf=/etc/ceph/ceph.conf
+   backup_ceph_user=cinder-backup
+   backup_ceph_chunk_size = 134217728
+   backup_ceph_pool=backups
+   backup_driver = cinder.backup.drivers.ceph
+   backup_ceph_stripe_unit = 0
+   backup_ceph_stripe_count = 0
+   restore_discard_excess_bytes = true
 
-Next, place the ceph.conf file into
-/etc/kolla/config/cinder/ceph.conf:
+.. end
 
-::
+Next, copy the ``ceph.conf`` file into ``/etc/kolla/config/cinder/``:
 
-  [global]
-  fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
-  mon_initial_members = ceph-0
-  mon_host = 192.168.0.56
-  auth_cluster_required = cephx
-  auth_service_required = cephx
-  auth_client_required = cephx
+.. code-block:: ini
+
+   [global]
+   fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
+   mon_initial_members = ceph-0
+   mon_host = 192.168.0.56
+   auth_cluster_required = cephx
+   auth_service_required = cephx
+   auth_client_required = cephx
+
+.. end
 
 Separate configuration options can be defined for
 ``cinder-volume`` and ``cinder-backup`` by adding ``ceph.conf`` files to
-/etc/kolla/config/cinder/cinder-volume and
-/etc/kolla/config/cinder/cinder-backup respectively. They
-will be merged with /etc/kolla/config/cinder/ceph.conf.
+``/etc/kolla/config/cinder/cinder-volume`` and
+``/etc/kolla/config/cinder/cinder-backup`` respectively. They
+will be merged with ``/etc/kolla/config/cinder/ceph.conf``.
 
 Ceph keyrings are deployed per service and placed into
-cinder-volume and cinder-backup directories, put the keyring files
+``cinder-volume`` and ``cinder-backup`` directories. Put the keyring files
 into these directories, for example:
 
 .. note::
@@ -172,111 +190,131 @@ to these directories, for example:
     ``cinder-backup`` requires two keyrings, for accessing the volumes
     pool and the backups pool.
 
-/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
+.. code-block:: console
 
-::
+   $ cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
 
-  [client.cinder]
-          key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
+   [client.cinder]
+   key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
 
-/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
+.. end
 
-::
+.. code-block:: console
 
-  [client.cinder-backup]
-          key = AQC9wNBYrD8MOBAAwUlCdPKxWZlhkrWIDE1J/w==
+   $ cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
 
-/etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
+   [client.cinder-backup]
+   key = AQC9wNBYrD8MOBAAwUlCdPKxWZlhkrWIDE1J/w==
 
-::
+.. end
 
-  [client.cinder]
-          key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
+.. code-block:: console
 
-It is important that the files are named ceph.client*.
+   $ cat /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
+
+   [client.cinder]
+   key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
+
+.. end
+
+It is important that the files are named ``ceph.client*``.
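
Putting the pieces above together, a sketch of the expected layout under
``/etc/kolla/config/cinder/`` (all paths taken from the examples above) is:

```console
$ find /etc/kolla/config/cinder/ -type f
/etc/kolla/config/cinder/ceph.conf
/etc/kolla/config/cinder/cinder-volume.conf
/etc/kolla/config/cinder/cinder-backup.conf
/etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
```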
 
 Nova
-------
+----
 
 Put ``ceph.conf``, the nova client keyring file and the cinder client keyring
 file into ``/etc/kolla/config/nova``:
 
-::
+.. code-block:: console
 
-  $ ls /etc/kolla/config/nova
-  ceph.client.cinder.keyring ceph.client.nova.keyring ceph.conf
+   $ ls /etc/kolla/config/nova
+   ceph.client.cinder.keyring ceph.client.nova.keyring ceph.conf
+
+.. end
 
 Configure nova-compute to use Ceph as the ephemeral back end by creating
 ``/etc/kolla/config/nova/nova-compute.conf`` and adding the following
-contents:
+configuration:
 
-::
+.. code-block:: ini
 
-  [libvirt]
-  images_rbd_pool=vms
-  images_type=rbd
-  images_rbd_ceph_conf=/etc/ceph/ceph.conf
-  rbd_user=nova
+   [libvirt]
+   images_rbd_pool=vms
+   images_type=rbd
+   images_rbd_ceph_conf=/etc/ceph/ceph.conf
+   rbd_user=nova
 
-.. note:: ``rbd_user`` might vary depending on your environment.
+.. end
+
+.. note::
+
+   ``rbd_user`` might vary depending on your environment.
 
 Gnocchi
 -------
 
-Edit ``/etc/kolla/config/gnocchi/gnocchi.conf`` with the following content:
+Modify the ``/etc/kolla/config/gnocchi/gnocchi.conf`` file according to
+the following configuration:
 
-::
+.. code-block:: ini
 
-  [storage]
-  driver = ceph
-  ceph_username = gnocchi
-  ceph_keyring = /etc/ceph/ceph.client.gnocchi.keyring
-  ceph_conffile = /etc/ceph/ceph.conf
+   [storage]
+   driver = ceph
+   ceph_username = gnocchi
+   ceph_keyring = /etc/ceph/ceph.client.gnocchi.keyring
+   ceph_conffile = /etc/ceph/ceph.conf
+
+.. end
 
 Put ``ceph.conf`` and the gnocchi client keyring file in
 ``/etc/kolla/config/gnocchi``:
 
-::
+.. code-block:: console
 
-  $ ls /etc/kolla/config/gnocchi
-  ceph.client.gnocchi.keyring ceph.conf gnocchi.conf
+   $ ls /etc/kolla/config/gnocchi
+   ceph.client.gnocchi.keyring ceph.conf gnocchi.conf
+
+.. end
 
 Manila
 ------
 
 Configuring Manila for Ceph includes four steps:
 
-1) Configure CephFS backend, setting enable_manila_backend_ceph_native
-2) Create Ceph configuration file in /etc/ceph/ceph.conf
-3) Create Ceph keyring file in /etc/ceph/ceph.client.<username>.keyring
-4) Setup Manila in the usual way
+#. Configure the CephFS backend by setting ``enable_manila_backend_ceph_native``
+#. Create Ceph configuration file in ``/etc/ceph/ceph.conf``
+#. Create Ceph keyring file in ``/etc/ceph/ceph.client.<username>.keyring``
+#. Set up Manila in the usual way
 
-Step 1 is done by using setting enable_manila_backend_ceph_native=true
+Step 1 is done by setting ``enable_manila_backend_ceph_native: "yes"``.
 
 Now put ``ceph.conf`` and the keyring file (its name depends on the username
 created in Ceph) into the same directory, for example:
 
-/etc/kolla/config/manila/ceph.conf
+.. path /etc/kolla/config/manila/ceph.conf
+.. code-block:: ini
 
-::
+   [global]
+   fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
+   mon_host = 192.168.0.56
+   auth_cluster_required = cephx
+   auth_service_required = cephx
+   auth_client_required = cephx
 
-  [global]
-  fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
-  mon_host = 192.168.0.56
-  auth_cluster_required = cephx
-  auth_service_required = cephx
-  auth_client_required = cephx
+.. end
 
-/etc/kolla/config/manila/ceph.client.manila.keyring
+.. code-block:: console
 
-::
+   $ cat /etc/kolla/config/manila/ceph.client.manila.keyring
 
-  [client.manila]
-  key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
+   [client.manila]
+   key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
+
+.. end
 
 For more details on the rest of the Manila setup, such as creating the share
-type ``default_share_type``, please see:
-https://docs.openstack.org/kolla-ansible/latest/reference/manila-guide.html
+type ``default_share_type``, please see `Manila in Kolla
+<https://docs.openstack.org/kolla-ansible/latest/reference/manila-guide.html>`__.
 
-For more details on the CephFS Native driver, please see:
-https://docs.openstack.org/manila/latest/admin/cephfs_driver.html
+For more details on the CephFS Native driver, please see `CephFS driver
+<https://docs.openstack.org/manila/latest/admin/cephfs_driver.html>`__.
diff --git a/doc/source/reference/external-mariadb-guide.rst b/doc/source/reference/external-mariadb-guide.rst
index 144510a492..9fb288a41b 100644
--- a/doc/source/reference/external-mariadb-guide.rst
+++ b/doc/source/reference/external-mariadb-guide.rst
@@ -9,7 +9,7 @@ it might be necessary to use an externally managed database.
 This use case can be achieved by simply taking some extra steps:
 
 Requirements
-============
+~~~~~~~~~~~~
 
 * An existing MariaDB cluster / server, reachable from all of your
   nodes.
@@ -23,7 +23,7 @@ Requirements
   user accounts for all enabled services.
 
 Enabling External MariaDB support
-=================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 In order to enable external MariaDB support,
 you will first need to disable MariaDB deployment,
@@ -31,186 +31,163 @@ by ensuring the following line exists within ``/etc/kolla/globals.yml`` :
 
 .. code-block:: yaml
 
-  enable_mariadb: "no"
+   enable_mariadb: "no"
 
 .. end
 
-There are two ways in which you can use
-external MariaDB:
+There are two ways in which you can use external MariaDB:
+
+* Using an already load-balanced MariaDB address
+* Using an external MariaDB cluster
 
 Using an already load-balanced MariaDB address (recommended)
 ------------------------------------------------------------
 
-If your external database already has a
-load balancer, you will need to do the following:
+If your external database already has a load balancer, you will
+need to do the following:
 
-* Within your inventory file, just add the hostname
-  of the load balancer within the mariadb group,
-  described as below:
+#. Edit the inventory file, replacing ``control`` in the ``mariadb`` group
+   with the hostname of the load balancer, as below:
 
-Change the following
+   .. code-block:: ini
+
+      [mariadb]
+      myexternalmariadbloadbalancer.com
+
+   .. end
+
+
+#. Define ``database_address`` in ``/etc/kolla/globals.yml`` file:
+
+   .. code-block:: yaml
+
+      database_address: myexternalloadbalancer.com
+
+   .. end
+
+.. note::
+
+   If ``enable_external_mariadb_load_balancer`` is set to ``no``
+   (default), the external DB load balancer should be accessible
+   from all nodes during your deployment.
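
Before deploying, it can be useful to check that the load balancer is
reachable from the nodes, for example with the ``mysql`` client (a sketch
assuming the client is installed on the node and using the example address
from above):

```console
$ mysql -h myexternalloadbalancer.com -u root -p -e 'SELECT 1;'
```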
+
+Using an external MariaDB cluster
+---------------------------------
+
+To use an external MariaDB cluster, you need to adjust the inventory file:
 
 .. code-block:: ini
 
-  [mariadb:children]
-  control
-
-.. end
-
-so that it looks like below:
-
-.. code-block:: ini
-
-  [mariadb]
-  myexternalmariadbloadbalancer.com
-
-.. end
-
-* Define **database_address** within ``/etc/kolla/globals.yml``
-
-.. code-block:: yaml
-
-  database_address: myexternalloadbalancer.com
-
-.. end
-
-Please note that if **enable_external_mariadb_load_balancer** is
-set to "no" - **default**, the external DB load balancer will need to be
-accessible from all nodes within your deployment, which might
-connect to it.
-
-Using an external MariaDB cluster:
-----------------------------------
-
-Then, you will need to adjust your inventory file:
-
-Change the following
-
-.. code-block:: ini
-
-  [mariadb:children]
-  control
-
-.. end
-
-so that it looks like below:
-
-.. code-block:: ini
-
-  [mariadb]
-  myexternaldbserver1.com
-  myexternaldbserver2.com
-  myexternaldbserver3.com
+   [mariadb]
+   myexternaldbserver1.com
+   myexternaldbserver2.com
+   myexternaldbserver3.com
 
 .. end
 
 If you choose to use haproxy for load balancing between the
 members of the cluster, every node within this group
-needs to be resolvable and reachable and resolvable from all
-the hosts within the **[haproxy:children]**  group
-of your inventory (defaults to **[network]**).
+needs to be resolvable and reachable from all
+the hosts within the ``[haproxy:children]`` group
+of your inventory (defaults to ``[network]``).
 
-In addition to that, you also need to set the following within
-``/etc/kolla/globals.yml``:
+In addition, configure the ``/etc/kolla/globals.yml`` file
+according to the following configuration:
 
 .. code-block:: yaml
 
-  enable_external_mariadb_load_balancer: yes
+   enable_external_mariadb_load_balancer: yes
 
 .. end
 
 Using External MariaDB with a privileged user
-=============================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 In case your MariaDB user is root, just leave
 everything as it is within ``globals.yml`` (except the
 internal MariaDB deployment, which should be disabled),
-and set the **database_password** field within
-``/etc/kolla/passwords.yml``
+and set ``database_password`` in the ``/etc/kolla/passwords.yml`` file:
 
 .. code-block:: yaml
 
-  database_password: mySuperSecurePassword
+   database_password: mySuperSecurePassword
 
 .. end
 
-In case your username is other than **root**, you will
-need to also set it, within ``/etc/kolla/globals.yml``
+If the MariaDB username is not ``root``, set ``database_username`` in the
+``/etc/kolla/globals.yml`` file:
 
 .. code-block:: yaml
 
+   database_username: "privilegeduser"
+   database_username: "privillegeduser"
 
 .. end
 
 Using preconfigured databases / users
-======================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The first step you need to take is the following:
-
-Within ``/etc/kolla/globals.yml``, set the following:
+The first step you need to take is to set ``use_preconfigured_databases`` to
+``yes`` in the ``/etc/kolla/globals.yml`` file:
 
 .. code-block:: yaml
 
-  use_preconfigured_databases: "yes"
+   use_preconfigured_databases: "yes"
 
 .. end
 
-.. note:: Please note that when the ``use_preconfigured_databases`` flag
-  is set to ``"yes"``, you need to have the ``log_bin_trust_function_creators``
-  mysql variable set to ``1`` by your database administrator before running the
-  ``upgrade`` command.
+.. note::
+
+   When the ``use_preconfigured_databases`` flag is set to ``"yes"``, you need
+   to make sure that the MySQL variable ``log_bin_trust_function_creators``
+   is set to ``1`` by the database administrator before running the
+   :command:`upgrade` command.
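
For example, the database administrator could set the variable like this (a
sketch assuming a ``mysql`` client session with sufficient privileges):

```console
$ mysql -u root -p -e 'SET GLOBAL log_bin_trust_function_creators = 1;'
```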
 
 Using External MariaDB with separated, preconfigured users and databases
 ------------------------------------------------------------------------
 
-In order to achieve this, you will need to define the user names within
-``/etc/kolla/globals.yml``, as illustrated by the example below:
+In order to achieve this, you will need to define the user names in the
+``/etc/kolla/globals.yml`` file, as illustrated by the example below:
 
 
 .. code-block:: yaml
 
-  keystone_database_user: preconfigureduser1
-  nova_database_user: preconfigureduser2
+   keystone_database_user: preconfigureduser1
+   nova_database_user: preconfigureduser2
 
 .. end
 
-You will need to also set the passwords for all databases within
-``/etc/kolla/passwords.yml``
-
-
-However, fortunately, using a common user across
-all databases is also possible.
+Also, you will need to set the passwords for all databases in the
+``/etc/kolla/passwords.yml`` file.
 
+Fortunately, using a common user across all databases is also possible.
 
 Using External MariaDB with a common user across databases
 ----------------------------------------------------------
 
 In order to use a common, preconfigured user across all databases,
-all you need to do is the following:
+all you need to do is follow these steps:
 
-* Within ``/etc/kolla/globals.yml``, add the following:
+#. Edit the ``/etc/kolla/globals.yml`` file and add the following:
 
-.. code-block:: yaml
+   .. code-block:: yaml
 
-  use_common_mariadb_user: "yes"
+      use_common_mariadb_user: "yes"
 
-.. end
+   .. end
 
-* Set the database_user within ``/etc/kolla/globals.yml`` to
-  the one provided to you:
+#. Set ``database_user`` within ``/etc/kolla/globals.yml`` to
+   the one provided to you:
 
-.. code-block:: yaml
+   .. code-block:: yaml
 
-  database_user: mycommondatabaseuser
+      database_user: mycommondatabaseuser
 
-.. end
+   .. end
 
-* Set the common password for all components within ``/etc/kolla/passwords.yml```.
-  In order to achieve that you could use the following command:
+#. Set the common password for all components within ``/etc/kolla/passwords.yml``.
+   To achieve that, you could use the following command:
 
-.. code-block:: console
+   .. code-block:: console
 
-  sed -i -r -e 's/([a-z_]{0,}database_password:+)(.*)$/\1 mycommonpass/gi' /etc/kolla/passwords.yml
+      sed -i -r -e 's/([a-z_]{0,}database_password:+)(.*)$/\1 mycommonpass/gi' /etc/kolla/passwords.yml
 
-.. end
\ No newline at end of file
+   .. end
\ No newline at end of file
diff --git a/doc/source/reference/hyperv-guide.rst b/doc/source/reference/hyperv-guide.rst
index 3b175267b2..6ac4886083 100644
--- a/doc/source/reference/hyperv-guide.rst
+++ b/doc/source/reference/hyperv-guide.rst
@@ -5,7 +5,7 @@ Nova-HyperV in Kolla
 ====================
 
 Overview
-========
+~~~~~~~~
+
 Currently, Kolla can deploy the following OpenStack services for Hyper-V:
 
 * nova-compute
@@ -24,30 +24,28 @@ virtual machines from Horizon web interface.
 
 .. note::
 
-    HyperV services are not currently deployed as containers. This functionality
-    is in development. The current implementation installs OpenStack services
-    via MSIs.
+   HyperV services are not currently deployed as containers. This functionality
+   is in development. The current implementation installs OpenStack services
+   via MSIs.
 
 
 .. note::
 
-    HyperV services do not currently support outside the box upgrades. Manual
-    upgrades are required for this process. MSI release versions can be found
-    `here
-    <https://cloudbase.it/openstack-hyperv-driver/>`__.
-    To upgrade an existing MSI to a newer version, simply uninstall the current
-    MSI and install the newer one. This will not delete the configuration files.
-    To preserve the configuration files, check the Skip configuration checkbox
-    during installation.
+   HyperV services do not currently support out-of-the-box upgrades. Manual
+   upgrades are required for this process. MSI release versions can be found
+   `here <https://cloudbase.it/openstack-hyperv-driver/>`__.
+   To upgrade an existing MSI to a newer version, simply uninstall the current
+   MSI and install the newer one. This will not delete the configuration files.
+   To preserve the configuration files, check the Skip configuration checkbox
+   during installation.
 
 
 Preparation for Hyper-V node
-============================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Ansible communicates with the Hyper-V host via the WinRM protocol. An HTTPS
 WinRM listener needs to be configured on the Hyper-V host, which can be easily
-created with
-`this PowerShell script
+created with `this PowerShell script
 <https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1>`__.
 
 
@@ -57,13 +55,16 @@ Virtual Interface the following PowerShell may be used:
 
 .. code-block:: console
 
-    PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
-    PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false
+   PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
+   PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false
+
+.. end
 
 .. note::
 
-    It is very important to make sure that when you are using a Hyper-V node with only 1 NIC the
-    -AllowManagementOS option is set on True, otherwise you will lose connectivity to the Hyper-V node.
+   It is very important to make sure that, when you are using a Hyper-V node
+   with only one NIC, the ``-AllowManagementOS`` option is set to ``True``,
+   otherwise you will lose connectivity to the Hyper-V node.
 
 
 To prepare the Hyper-V node to be able to attach to volumes provided by
@@ -72,72 +73,83 @@ running and started automatically.
 
 .. code-block:: console
 
-    PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
-    PS C:\> Start-Service MSiSCSI
-
+   PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
+   PS C:\> Start-Service MSiSCSI
 
+.. end
 
 Preparation for Kolla deployer node
-===================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Hyper-V role is required, enable it in ``/etc/kolla/globals.yml``:
 
-.. code-block:: console
+.. code-block:: yaml
 
-    enable_hyperv: "yes"
+   enable_hyperv: "yes"
+
+.. end
 
 Hyper-V options are also required in ``/etc/kolla/globals.yml``:
 
-.. code-block:: console
+.. code-block:: yaml
 
-    hyperv_username: <HyperV username>
-    hyperv_password: <HyperV password>
-    vswitch_name: <HyperV virtual switch name>
-    nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
+   hyperv_username: <HyperV username>
+   hyperv_password: <HyperV password>
+   vswitch_name: <HyperV virtual switch name>
+   nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
+
+.. end
 
 If tenant networks are to be built using VLAN, add the corresponding type in
 ``/etc/kolla/globals.yml``:
 
-.. code-block:: console
+.. code-block:: yaml
 
-    neutron_tenant_network_types: 'flat,vlan'
+   neutron_tenant_network_types: 'flat,vlan'
+
+.. end
 
 The virtual switch is the same one created in the HyperV setup part.
 For ``nova_msi_url``, different Nova MSI (Mitaka/Newton/Ocata) versions can
 be found on the `Cloudbase website
 <https://cloudbase.it/openstack-hyperv-driver/>`__.
 
-
 Add the Hyper-V node in ``ansible/inventory`` file:
 
-.. code-block:: console
+.. code-block:: none
 
-    [hyperv]
-    <HyperV IP>
+   [hyperv]
+   <HyperV IP>
 
-    [hyperv:vars]
-    ansible_user=<HyperV user>
-    ansible_password=<HyperV password>
-    ansible_port=5986
-    ansible_connection=winrm
-    ansible_winrm_server_cert_validation=ignore
+   [hyperv:vars]
+   ansible_user=<HyperV user>
+   ansible_password=<HyperV password>
+   ansible_port=5986
+   ansible_connection=winrm
+   ansible_winrm_server_cert_validation=ignore
+
+.. end
 
 The ``pywinrm`` package needs to be installed in order for Ansible to work
 on the HyperV node:
 
 .. code-block:: console
 
-    pip install "pywinrm>=0.2.2"
+   pip install "pywinrm>=0.2.2"
+
+.. end
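
Once ``pywinrm`` is installed and the inventory entries above are in place,
connectivity to the Hyper-V node can be verified with Ansible's ``win_ping``
module (the inventory path is the example one from above):

```console
$ ansible hyperv -i ansible/inventory -m win_ping
```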
 
 .. note::
 
-    In case of a test deployment with controller and compute nodes as virtual machines
-    on Hyper-V, if VLAN tenant networking is used, trunk mode has to be enabled on the
-    VMs:
+   In case of a test deployment with controller and compute nodes as
+   virtual machines on Hyper-V, if VLAN tenant networking is used,
+   trunk mode has to be enabled on the VMs:
 
 .. code-block:: console
 
-    Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList <VLAN ID> -NativeVlanId 0 <VM name>
+   Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList <VLAN ID> -NativeVlanId 0 <VM name>
+
+.. end
 
 The ``networking-hyperv`` mechanism driver is needed for neutron-server to
 communicate with HyperV nova-compute. This can be built with source
@@ -146,7 +158,9 @@ container with pip:
 
 .. code-block:: console
 
-    pip install "networking-hyperv>=4.0.0"
+   pip install "networking-hyperv>=4.0.0"
+
+.. end
 
 For ``neutron_extension_drivers``, ``port_security`` and ``qos`` are
 currently supported by the networking-hyperv mechanism driver.
@@ -154,20 +168,23 @@ By default only ``port_security`` is set.
 
 
 Verify Operations
-=================
+~~~~~~~~~~~~~~~~~
 
 OpenStack HyperV services can be inspected and managed from PowerShell:
 
 .. code-block:: console
 
-    PS C:\> Get-Service nova-compute
-    PS C:\> Get-Service neutron-hyperv-agent
+   PS C:\> Get-Service nova-compute
+   PS C:\> Get-Service neutron-hyperv-agent
+
+.. end
 
 .. code-block:: console
 
-    PS C:\> Restart-Service nova-compute
-    PS C:\> Restart-Service neutron-hyperv-agent
+   PS C:\> Restart-Service nova-compute
+   PS C:\> Restart-Service neutron-hyperv-agent
 
+.. end
 
 For more information on OpenStack HyperV, see
 `Hyper-V virtualization platform
diff --git a/doc/source/reference/index.rst b/doc/source/reference/index.rst
index 7c9dbaa5bd..ffc9e12967 100644
--- a/doc/source/reference/index.rst
+++ b/doc/source/reference/index.rst
@@ -1,12 +1,12 @@
-Reference
-=========
+Projects Deployment References
+==============================
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    ceph-guide
-   central-logging-guide
    external-ceph-guide
+   central-logging-guide
    external-mariadb-guide
    cinder-guide
    cinder-guide-hnas
diff --git a/doc/source/reference/ironic-guide.rst b/doc/source/reference/ironic-guide.rst
index 80e9b7ed00..e9e1fc503c 100644
--- a/doc/source/reference/ironic-guide.rst
+++ b/doc/source/reference/ironic-guide.rst
@@ -5,7 +5,7 @@ Ironic in Kolla
 ===============
 
 Overview
-========
+~~~~~~~~
+
 Currently, Kolla can deploy the following Ironic services:
 
 - ironic-api
@@ -16,61 +16,72 @@ Currently Kolla can deploy the Ironic services:
 As well as a required PXE service, deployed as ``ironic-pxe``.
 
 Current status
-==============
+~~~~~~~~~~~~~~
+
 The Ironic implementation is "tech preview", so currently instances can only be
 deployed on baremetal. Further work will be done to allow scheduling for both
 virtualized and baremetal deployments.
 
 Pre-deployment Configuration
-============================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Enable Ironic role in ``/etc/kolla/globals.yml``:
 
-.. code-block:: console
+.. code-block:: yaml
 
-    enable_ironic: "yes"
+   enable_ironic: "yes"
 
-Beside that an additional network type 'vlan,flat' has to be added to a list of
+.. end
+
+Besides that, the additional network types ``vlan`` and ``flat`` have to be
+added to the list of
 tenant network types:
 
-.. code-block:: console
+.. code-block:: yaml
 
-    neutron_tenant_network_types: "vxlan,vlan,flat"
+   neutron_tenant_network_types: "vxlan,vlan,flat"
+
+.. end
 
 Configuring Web Console
-=======================
-Configuration based off upstream web_console_documentation_.
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Configuration is based on the upstream `Node web console
+<https://docs.openstack.org/ironic/latest/admin/console.html#node-web-console>`__.
 
 The serial speed must be the same as the serial configuration in the BIOS
 settings. The default value is 115200 bps, 8 bit, non-parity.
 
 If your serial speed is different, set ``ironic_console_serial_speed`` in
 ``/etc/kolla/globals.yml``:
 
-::
+.. code-block:: yaml
 
-    ironic_console_serial_speed: 9600n8
+   ironic_console_serial_speed: 9600n8
 
-.. _web_console_documentation: https://docs.openstack.org/ironic/latest/admin/console.html#node-web-console
+.. end
 
 Post-deployment configuration
-=============================
-Configuration based off upstream documentation_.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The configuration is based on the upstream `Ironic installation documentation
+<https://docs.openstack.org/ironic/latest/install/index.html>`__.
 
 Again, remember that enabling Ironic reconfigures nova compute (driver and
 scheduler) and changes neutron network settings. Further neutron setup is
 required, as outlined below.
 
 Create the flat network to launch the instances:
-::
 
-    neutron net-create --tenant-id $TENANT_ID sharednet1 --shared \
-    --provider:network_type flat --provider:physical_network physnet1
+.. code-block:: console
 
-    neutron subnet-create sharednet1 $NETWORK_CIDR --name $SUBNET_NAME \
-    --ip-version=4 --gateway=$GATEWAY_IP --allocation-pool \
-    start=$START_IP,end=$END_IP --enable-dhcp
+   neutron net-create --tenant-id $TENANT_ID sharednet1 --shared \
+   --provider:network_type flat --provider:physical_network physnet1
 
-And then the above ID is used to set cleaning_network in the neutron
-section of ironic.conf.
+   neutron subnet-create sharednet1 $NETWORK_CIDR --name $SUBNET_NAME \
+   --ip-version=4 --gateway=$GATEWAY_IP --allocation-pool \
+   start=$START_IP,end=$END_IP --enable-dhcp
+
+.. end
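+
+The ``neutron`` commands above are deprecated; the same networks can also be
+created with the ``openstack`` client (a sketch assuming a recent
+python-openstackclient):
+
+.. code-block:: console
+
+   openstack network create --share --project $TENANT_ID \
+   --provider-network-type flat --provider-physical-network physnet1 sharednet1
+
+   openstack subnet create --network sharednet1 --subnet-range $NETWORK_CIDR \
+   --gateway $GATEWAY_IP --allocation-pool start=$START_IP,end=$END_IP \
+   --dhcp $SUBNET_NAME
+
+.. end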
+
+The ID of the network created above is then used to set ``cleaning_network`` in
+the ``[neutron]`` section of ``ironic.conf``.
 
-.. _documentation: https://docs.openstack.org/ironic/latest/install/index.html
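+A minimal ``ironic.conf`` fragment for this could look as follows (the UUID is
+a placeholder for the real network ID):
+
+.. code-block:: ini
+
+   [neutron]
+   cleaning_network = <UUID of sharednet1>
+
+.. end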
diff --git a/doc/source/reference/kuryr-guide.rst b/doc/source/reference/kuryr-guide.rst
index 2be34fd7a7..c7a4e9120b 100644
--- a/doc/source/reference/kuryr-guide.rst
+++ b/doc/source/reference/kuryr-guide.rst
@@ -1,25 +1,28 @@
+==============
 Kuryr in Kolla
 ==============
 
 "Kuryr is a Docker network plugin that uses Neutron to provide networking
 services to Docker containers. It provides containerized images for the common
-Neutron plugins" [1]. Kuryr requires at least Keystone and neutron. Kolla makes
+Neutron plugins". Kuryr requires at least Keystone and Neutron. Kolla makes
 Kuryr deployment faster and more accessible.
 
 Requirements
-------------
+~~~~~~~~~~~~
 
 * A minimum of 3 hosts for a vanilla deploy
 
 Preparation and Deployment
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 To allow the Docker daemon to connect to etcd, add the following to the
-docker.service file.
+``docker.service`` file.
 
-::
+.. code-block:: none
 
-  ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
+   ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
+
+.. end
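+
+After editing the unit file, reload systemd and restart Docker so that the new
+options take effect (typical commands on a systemd host):
+
+.. code-block:: console
+
+   systemctl daemon-reload
+   systemctl restart docker
+
+.. end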
 
 The IP address is that of the host running the etcd service. ``2375`` is the port that
 allows the Docker daemon to be accessed remotely. ``2379`` is the etcd listening
@@ -29,36 +32,46 @@ By default etcd and kuryr are disabled in the ``group_vars/all.yml``.
 In order to enable them, you need to edit the file ``globals.yml`` and set the
 following variables:
 
-::
+.. code-block:: yaml
 
-  enable_etcd: "yes"
-  enable_kuryr: "yes"
+   enable_etcd: "yes"
+   enable_kuryr: "yes"
+
+.. end
 
 Deploy the OpenStack cloud and the Kuryr network plugin:
 
-::
+.. code-block:: console
 
-  kolla-ansible deploy
+   kolla-ansible deploy
+
+.. end
 
 Create a Virtual Network
---------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~
 
-::
+.. code-block:: console
 
-    docker network create -d kuryr --ipam-driver=kuryr --subnet=10.1.0.0/24 --gateway=10.1.0.1 docker-net1
+   docker network create -d kuryr --ipam-driver=kuryr --subnet=10.1.0.0/24 --gateway=10.1.0.1 docker-net1
+
+.. end
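+
+To check connectivity, you can start a throwaway container attached to the new
+network (the ``busybox`` image is just an example):
+
+.. code-block:: console
+
+   docker run --rm -it --net docker-net1 busybox sh
+
+.. end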
 
 To list the created networks:
 
-::
+.. code-block:: console
 
-    docker network ls
+   docker network ls
+
+.. end
 
 The created network is also visible from the OpenStack CLI:
 
-::
+.. code-block:: console
 
-    openstack network list
+   openstack network list
+
+.. end
 
 For more information about how kuryr works, see
-`kuryr, OpenStack Containers Networking
+`kuryr (OpenStack Containers Networking)
 <https://docs.openstack.org/kuryr/latest/>`__.