Roger Luethi 8a6a88b6b8 labs: add test script and helpers
Add script launch_instance.sh for basic instance VM tests. The script
tries to deal with a number of failures that have turned up in testing
(e.g., services failing to start, instance not launching).

The changeset includes three scripts in a new tools directory.

1) To run a test once, use test-once.sh:
   $ ./tools/test-once.sh scripts/test/launch_instance.sh

2) To restore (and boot) the cluster to an earlier snapshot, use
   restore-cluster.sh.

   The argument selects the snapshot used for the controller node VM.

   To select the most recently used snapshot:
   $ ./tools/restore-cluster.sh current

   To select the controller snapshot, "controller_node_installed":
   $ ./tools/restore-cluster.sh controller_node_installed

3) To run the same test repeatedly, use repeat-test.sh. The test script
   name is hard-coded (launch_instance.sh). The argument determines
   whether the cluster is rebuilt for each test or restored from a
   snapshot.

   The controller snapshot is hard-coded (controller_node_installed);
   this particular snapshot is of interest because it does not seem to
   result in a reliable cluster.

   Log files are stored in log/test-results. Repeat-test.sh also
   saves log files from each node's /var/log/upstart to help with
   analyzing failures.

   $ ./tools/repeat-test.sh restore

   After running a number of tests, you can get some simple stats
   using a command like this:

   $ grep -h SUM log/test-results/*/test.log|LC_ALL=C sort|uniq -c
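The stats pipeline above can be exercised on fabricated sample logs to see what it produces; the directory layout matches the log/test-results path mentioned above, but the SUM line text is made up for illustration:

```shell
# Sketch: exercise the stats pipeline on fabricated sample logs.
# The SUM line text is an assumption for illustration; adapt it to
# the actual contents of your test.log files.
tmp=$(mktemp -d)
mkdir -p "$tmp/log/test-results/run1" "$tmp/log/test-results/run2"
echo "SUM instance launched" > "$tmp/log/test-results/run1/test.log"
echo "SUM instance launched" > "$tmp/log/test-results/run2/test.log"
# Same pipeline as above: count identical SUM lines across all runs.
# Here it prints a count of 2 for the single distinct SUM line.
grep -h SUM "$tmp"/log/test-results/*/test.log | LC_ALL=C sort | uniq -c
rm -rf "$tmp"
```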

Co-Authored-By: Pranav Salunke <dguitarbite@gmail.com>
Change-Id: I20b7273683b281bf7822ef66e311b955b8c5ec8a
2015-03-08 13:59:31 +01:00

Training Labs

About

Training Labs provides scripts to automate the creation of the training environment.

Note: Training Labs is specifically meant for OpenStack training and is tuned to match the Training Manuals repository.

Prerequisites

How to run the scripts

  1. Clone the training-guides repository, which contains the labs scripts that install multi-node OpenStack automatically.

     $ git clone git://git.openstack.org/openstack/training-guides
    
  2. Go to the labs folder

     $ cd training-guides/labs
    
  3. Run the script:

     $ ./osbash -b cluster
    

This performs the complete installation for all three nodes: Controller, Compute, and Network.

For more help, run:

    $ ./osbash --help

The first run will take some time.

What the script installs

Running this will automatically spin up three virtual machines in VirtualBox:

  • Controller node
  • Network node
  • Compute node
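One way to sanity-check that the three VMs were created is to parse the output of `VBoxManage list vms`. The sketch below wraps the check in a function and, since a live VirtualBox install may not be at hand, demonstrates it on a hypothetical listing; the exact VM names and UUIDs are assumptions, so compare with your real `VBoxManage list vms` output:

```shell
# Sketch: verify the three cluster node VMs exist in VirtualBox.
# Feed it the real listing with: check_nodes "$(VBoxManage list vms)"
check_nodes() {
    # $1: output of `VBoxManage list vms`, one line per VM: "name" {uuid}
    for node in controller network compute; do
        echo "$1" | grep -q "\"$node\"" && echo "$node: found" \
                                        || echo "$node: MISSING"
    done
}
# Hypothetical listing for illustration (not real VBoxManage output):
sample='"controller" {1111}
"network" {2222}
"compute" {3333}'
check_nodes "$sample"
```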

You now have a multi-node deployment of OpenStack running with the following services installed.

OpenStack services installed on Controller node:

  • Keystone

  • Horizon

  • Glance

  • Nova

    • nova-api
    • nova-scheduler
    • nova-consoleauth
    • nova-cert
    • nova-novncproxy
    • python-novaclient
  • Neutron

    • neutron-server
  • Cinder

OpenStack services installed on Network node:

  • Neutron

    • neutron-plugin-openvswitch-agent
    • neutron-l3-agent
    • neutron-dhcp-agent
    • neutron-metadata-agent

OpenStack services installed on Compute node:

  • Nova

    • nova-compute
  • Neutron

    • neutron-plugin-openvswitch-agent
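Since the nodes use Upstart (the commit message above mentions /var/log/upstart), one way to verify these services is to run `status <job>` on each node over SSH. The sketch below only prints the commands to run; the addresses match the SSH section later in this README, and the job lists are abridged for brevity:

```shell
# Sketch: print the `status` commands to check the Upstart jobs on each
# node. Job lists are abridged; `sudo status <job>` assumes Upstart,
# consistent with the /var/log/upstart logs mentioned above.
print_checks() {
    local host=$1; shift
    for job in "$@"; do
        echo "ssh osbash@$host sudo status $job"
    done
}
print_checks 10.10.10.51 nova-api nova-scheduler neutron-server
print_checks 10.10.10.52 neutron-l3-agent neutron-dhcp-agent
print_checks 10.10.10.53 nova-compute neutron-plugin-openvswitch-agent
```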

How to access the services

There are two ways to access the services:

  • OpenStack Dashboard (horizon)

You can access the dashboard at: http://192.168.100.51/horizon

Admin Login:

Username: admin

Password: admin_pass

Demo User Login:

Username: demo

Password: demo_pass

  • SSH

You can SSH to each of the nodes:

    # Controller node
    $ ssh osbash@10.10.10.51

    # Network node
    $ ssh osbash@10.10.10.52

    # Compute node
    $ ssh osbash@10.10.10.53

Credentials for all nodes:

Username: osbash

Password: osbash

After you have SSH access, you need to source the OpenStack credentials in order to access the services.

Two credential files are present on each of the nodes: demo-openstackrc.sh and admin-openstackrc.sh.

Source the appropriate credential file.

For Admin user privileges:

    $ source admin-openstackrc.sh

For Demo user privileges:

    $ source demo-openstackrc.sh

Now you can access the OpenStack services via CLI.
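Sourcing an rc file simply exports OS_* environment variables that the CLI clients read. The sketch below shows the mechanism with a mock file; the variable values, and the auth URL in particular, are placeholders, not the actual contents of admin-openstackrc.sh on the nodes:

```shell
# Sketch: what sourcing an openstackrc file does, using a mock file.
# Values are placeholders; the real admin-openstackrc.sh on each node
# sets the actual credentials listed above.
cat > /tmp/mock-openstackrc.sh <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.10.10.51:5000/v2.0
EOF
source /tmp/mock-openstackrc.sh
# CLI clients (nova, glance, ...) pick these up from the environment:
echo "User: $OS_USERNAME  Auth: $OS_AUTH_URL"
rm /tmp/mock-openstackrc.sh
```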

Blueprints

Mailing Lists, IRC

  • To contribute, join the #openstack-doc channel on freenode IRC, or write an e-mail to the OpenStack Manuals mailing list, openstack-docs@lists.openstack.org.

NOTE: Consider registering on the OpenStack Manuals mailing list if you want your posts to appear instantly; e-mails from unregistered users require an admin's approval and may take some time to appear.

Sub-team leads

Feel free to ping Roger or Pranav on the IRC channel #openstack-doc with any queries about the Labs section.

  • Roger Luethi (Email: rl@patchworkscience.org, IRC: rluethi)

  • Pranav Salunke (Email: dguitarbite@gmail.com, IRC: dguitarbite)

Meetings

To follow the weekly IRC meetings for OpenStack Training, refer to the training manuals wiki page: https://wiki.openstack.org/wiki/Meetings/training-manual

Wiki

Find various links on OpenStack Training Manuals here: https://wiki.openstack.org/wiki/Training-guides