From 377d8704174bba5326f08edf45b3143789626d94 Mon Sep 17 00:00:00 2001
From: Pete Birley
Date: Wed, 12 Jul 2017 11:52:50 -0500
Subject: [PATCH] Ceph: Update multinode doc

This PS updates the Multinode Doc for Ceph deployment now that we have
bootstrap capability within the chart.

Change-Id: I40110db926bbbcbfb5a08300784e6a9735d32955
---
 doc/source/install/multinode.rst | 21 ++-------------------
 1 file changed, 2 insertions(+), 19 deletions(-)

diff --git a/doc/source/install/multinode.rst b/doc/source/install/multinode.rst
index a8d7b9f994..92f307d8dc 100644
--- a/doc/source/install/multinode.rst
+++ b/doc/source/install/multinode.rst
@@ -354,7 +354,8 @@ the following command to install Ceph:
   helm install --namespace=ceph local/ceph --name=ceph \
     --set manifests_enabled.client_secrets=false \
     --set network.public=$osd_public_network \
-    --set network.cluster=$osd_cluster_network
+    --set network.cluster=$osd_cluster_network \
+    --set bootstrap.enabled=true
 
 You may want to validate that Ceph is deployed successfully. For more
 information on this, please see the section entitled `Ceph
@@ -378,24 +379,6 @@ deploy the client keyring and ``ceph.conf`` to the ``openstack`` namespace:
     --set network.public=$osd_public_network \
     --set network.cluster=$osd_cluster_network
 
-Ceph pool creation
-------------------
-
-Once Ceph has been deployed the pools for OpenStack services to consume can be
-created, using the following commands:
-
-::
-
-  kubectl exec -n ceph ceph-mon-0 -- ceph osd pool create volumes 8
-  kubectl exec -n ceph ceph-mon-0 -- ceph osd pool create images 8
-  kubectl exec -n ceph ceph-mon-0 -- ceph osd pool create vms 8
-
-The number of placement groups can be altered by replacing the 8
-to meet your needs. It is important to note that using too large
-of a number for your placement groups may result in Ceph
-becoming unhealthy. For more information on this topic, reference
-Ceph's documentation `here `_.
-
 MariaDB Installation and Verification
 -------------------------------------
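
As background for reviewers: the doc section removed by this patch warned that oversizing placement groups can make Ceph unhealthy. Ceph's general guidance targets roughly 100 PGs per OSD, divided by the replica count and rounded up to a power of two. A minimal shell sketch of that rule of thumb (the OSD and replica counts are illustrative assumptions, not values from this patch):

```shell
# Rule-of-thumb PG sizing per Ceph's ~100-PGs-per-OSD guidance.
# osds and replicas below are illustrative assumptions.
osds=6
replicas=3
target=$(( osds * 100 / replicas ))   # 200
pg=1
# Round up to the next power of two.
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"                            # prints 256
```

With `bootstrap.enabled=true`, this sizing decision moves from the operator's manual `ceph osd pool create` commands into the chart's bootstrap job.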