From 24bf92da57cc30214e2c3ea723576b1fd3b8b2ea Mon Sep 17 00:00:00 2001
From: Sayali Lunkad
Date: Wed, 27 Aug 2014 17:43:18 +0530
Subject: [PATCH] Updates README.md in training-guides/labs

Adds details for using the osbash scripts in the labs section
for training purposes.

Partial-bug: #1362088
Change-Id: I639432e18c46efbcd851cee188870b10c0822d5a
---
 labs/README.md | 158 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 156 insertions(+), 2 deletions(-)

diff --git a/labs/README.md b/labs/README.md
index 21fcfa3e..429c68f8 100644
--- a/labs/README.md
+++ b/labs/README.md
@@ -1,3 +1,4 @@
+
 Training Labs
 =============
 
@@ -10,11 +11,164 @@ Environment.
 **Note:** Training Labs are specifically meant for OpenStack Training and are
 specifically tuned as per Training Manuals repo.
 
+
+Prerequisites
+-------------
+
+* Download and install [VirtualBox](https://www.virtualbox.org/wiki/Downloads).
+
+
+How to run the scripts
+----------------------
+
+1. Clone the training-guides repository, whose labs section contains the
+   scripts that install multi-node OpenStack automatically:
+
+        $ git clone git://git.openstack.org/openstack/training-guides
+
+2. Go to the labs folder:
+
+        $ cd training-guides/labs
+
+3. Run the script:
+
+        $ ./osbash -b cluster
+
+This performs the complete installation for all the nodes: Controller,
+Network, and Compute.
+
+To build one node at a time, run:
+
+    $ ./osbash -b <node>
+
+**Note:** `<node>` can be 'controller', 'compute', or 'network'.
+
+**Note:** The Controller node must be installed and running while you build
+either of the other nodes.
+
+Controller node VM:
+
+    $ ./osbash -b controller
+
+Network node VM:
+
+    $ ./osbash -b network
+
+Compute node VM:
+
+    $ ./osbash -b compute
+
+For more help, run:
+
+    $ ./osbash --help
+
+The build will take some time to run the first time.
+
+
+What the script installs
+------------------------
+
+Running this automatically spins up three virtual machines in VirtualBox:
+
+* Controller node
+* Network node
+* Compute node
+
+You then have a multi-node deployment of OpenStack running with the services
+listed below installed.
+
+OpenStack services installed on the Controller node:
+
+* Keystone
+* Horizon
+* Glance
+* Nova
+
+    * nova-api
+    * nova-scheduler
+    * nova-consoleauth
+    * nova-cert
+    * nova-novncproxy
+    * python-novaclient
+
+* Neutron
+
+    * neutron-server
+
+* Cinder
+
+OpenStack services installed on the Network node:
+
+* Neutron
+
+    * neutron-plugin-openvswitch-agent
+    * neutron-l3-agent
+    * neutron-dhcp-agent
+    * neutron-metadata-agent
+
+OpenStack services installed on the Compute node:
+
+* Nova
+
+    * nova-compute
+
+* Neutron
+
+    * neutron-plugin-openvswitch-agent
+
+
+How to access the services
+--------------------------
+
+There are two ways to access the services:
+
+* OpenStack Dashboard (horizon)
+
+You can access the dashboard at: http://192.168.100.51/horizon
+
+Admin login:
+
+*Username:* `admin`
+
+*Password:* `admin_pass`
+
+Demo user login:
+
+*Username:* `demo`
+
+*Password:* `demo_pass`
+
+* SSH
+
+You can ssh to each of the nodes:
+
+    $ ssh controller@10.10.10.51
+
+    $ ssh compute@10.10.10.51
+
+    $ ssh network@10.10.10.51
+
+Credentials for all nodes:
+
+*Username:* `osbash`
+
+*Password:* `osbash`
+
+After you have ssh access, source the OpenStack credentials to access the
+services. Two credential files are present on each of the nodes:
+
+    demo-openstackrc.sh
+    admin-openstackrc.sh
+
+For admin user privileges:
+
+    $ source admin-openstackrc.sh
+
+For demo user privileges:
+
+    $ source demo-openstackrc.sh
+
+Now you can access the OpenStack services via the CLI.
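+
+For example, a quick sanity check after sourcing the admin credentials could
+look like the following sketch. It assumes the keystone, nova, and neutron
+command-line clients are available on the node you are logged in to:
+
+    $ source admin-openstackrc.sh
+    # List registered services, compute services, and neutron agents
+    $ keystone service-list
+    $ nova service-list
+    $ neutron agent-list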
+
+
 BluePrints
 ----------
-Training Manuals : https://blueprints.launchpad.net/openstack-manuals/+spec/training-manuals
-Training Labs : https://blueprints.launchpad.net/openstack-training-guides/+spec/openstack-training-labs
+* Training Manuals : https://blueprints.launchpad.net/openstack-manuals/+spec/training-manuals
+* Training Labs : https://blueprints.launchpad.net/openstack-training-guides/+spec/openstack-training-labs
 
 Mailing Lists, IRC
 ------------------