
Add script launch_instance.sh for basic instance VM tests.

The script tries to deal with a number of failures that have turned up
in testing (e.g., services failing to start, instance not launching).

The changeset includes three scripts in a new tools directory.

1) To run a test once, use test-once.sh:

   $ ./tools/test-once.sh scripts/test/launch_instance.sh

2) To restore (and boot) the cluster to an earlier snapshot, use
   restore-cluster.sh. The argument selects the snapshot used for the
   controller node VM.

   To select the most recently used snapshot:

   $ ./tools/restore-cluster.sh current

   To select the controller snapshot, "controller_node_installed":

   $ ./tools/restore-cluster.sh controller_node_installed

3) To run the same test repeatedly, use repeat-test.sh (sketched
   below). The test script name is hard-coded (launch_instance.sh).
   The argument determines whether the cluster is rebuilt for each test
   or restored from a snapshot. The controller snapshot is hard-coded
   (controller_node_installed); this particular snapshot is of interest
   because it does not seem to result in a reliable cluster.

   Log files are stored in log/test-results. repeat-test.sh also saves
   log files from each node's /var/log/upstart to help with analyzing
   failures.

   $ ./tools/repeat-test.sh restore

After running a number of tests, you can get some simple stats using a
command like this:

   $ grep -h SUM log/test-results/*/test.log | LC_ALL=C sort | uniq -c

Co-Authored-By: Pranav Salunke <dguitarbite@gmail.com>
Change-Id: I20b7273683b281bf7822ef66e311b955b8c5ec8a
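For orientation, the per-iteration flow of repeat-test.sh described above might look roughly like the sketch below. This is not the actual tools/repeat-test.sh: the script names, the snapshot name, and the log directory come from the message above, while the per-run directory naming and the rebuild branch are assumptions.

#!/bin/bash
# Rough sketch of the repeat-test loop described above -- the real logic
# lives in tools/repeat-test.sh.
set -o nounset

MODE=${1:-restore}          # "restore" reuses the snapshot, otherwise rebuild
RESULT_DIR=log/test-results

while true; do
    run_dir=$RESULT_DIR/$(date +%Y%m%d-%H%M%S)
    mkdir -p "$run_dir"

    if [ "$MODE" = "restore" ]; then
        # Roll the controller VM back to the hard-coded snapshot.
        ./tools/restore-cluster.sh controller_node_installed
    else
        # Rebuild the cluster from scratch for this iteration
        # (actual rebuild command omitted here).
        :
    fi

    # Run the hard-coded test and keep the output so the SUM lines can be
    # aggregated later with grep/sort/uniq -c.
    ./tools/test-once.sh scripts/test/launch_instance.sh 2>&1 |
        tee "$run_dir/test.log"
done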
57 lines | 1.5 KiB | Bash | Executable File
#!/bin/bash
set -o errexit -o nounset

TOP_DIR=$(cd "$(dirname "$0")/.." && pwd)

source "$TOP_DIR/config/paths"
source "$CONFIG_DIR/deploy.osbash"
source "$OSBASH_LIB_DIR/functions.host"

# Get remote ssh port of target node (VM_SSH_PORT)
source "$CONFIG_DIR/config.controller"
if [ $# -eq 0 ]; then
    echo "Purpose: Copy one script to target node and execute it via ssh."
    echo "Usage: $0 <script>"
    exit 1
fi

SCRIPT_SRC=$1
if [ ! -f "$SCRIPT_SRC" ]; then
    echo "File not found: $SCRIPT_SRC"
    exit 1
fi

SCRIPT=$(basename "$SCRIPT_SRC")

wait_for_ssh "$VM_SSH_PORT"
function get_remote_top_dir {
    if vm_ssh "$VM_SSH_PORT" "test -d /osbash"; then
        # The installation uses a VirtualBox shared folder.
        echo >&2 -n "Waiting for shared folder."
        # Wait until the shared folder content (the lib directory) is
        # visible on the target node.
        until vm_ssh "$VM_SSH_PORT" "test -d /osbash/lib"; do
            sleep 1
            echo >&2 -n .
        done
        echo >&2
        echo /osbash
    else
        # Copy and execute the script with scp/ssh.
        echo /home/osbash
    fi
}

REMOTE_TOP_DIR=$(get_remote_top_dir)
EXE_DIR_NAME=test_tmp

mkdir -p "$TOP_DIR/$EXE_DIR_NAME"
cp -u "$SCRIPT_SRC" "$TOP_DIR/$EXE_DIR_NAME"

if [[ "$REMOTE_TOP_DIR" = "/home/osbash" ]]; then
    # Not using a shared folder, so we need to scp the script to the target node.
    vm_scp_to_vm "$VM_SSH_PORT" "$TOP_DIR/$EXE_DIR_NAME/$SCRIPT"
fi
vm_ssh "$VM_SSH_PORT" "bash -c $REMOTE_TOP_DIR/$EXE_DIR_NAME/$SCRIPT" || \
|
|
rc=$?
|
|
echo "$SCRIPT returned status: ${rc:-0}"
|
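The script relies on wait_for_ssh, vm_ssh, and vm_scp_to_vm, which are sourced from $OSBASH_LIB_DIR/functions.host. As rough orientation, they can be thought of as thin ssh/scp wrappers around the forwarded port; the sketch below is an assumption about their shape (user name, key path, and the relative-path handling in vm_scp_to_vm are guesses), not the actual library code.

# Hypothetical sketches of the helpers used above; the real implementations
# are in $OSBASH_LIB_DIR/functions.host. TOP_DIR is assumed to be set as in
# the script above.

SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
SSH_KEY=$HOME/.ssh/osbash_key    # assumed key location

function vm_ssh {
    local port=$1
    shift
    # Run a command as the osbash user via the forwarded localhost port.
    ssh $SSH_OPTS -i "$SSH_KEY" -p "$port" osbash@localhost "$@"
}

function wait_for_ssh {
    local port=$1
    # Poll until the node accepts ssh logins.
    until vm_ssh "$port" exit >/dev/null 2>&1; do
        sleep 1
    done
}

function vm_scp_to_vm {
    local port=$1
    shift
    local src rel
    for src in "$@"; do
        # Recreate the path relative to TOP_DIR under the remote home, so
        # $TOP_DIR/test_tmp/foo.sh ends up as ~/test_tmp/foo.sh.
        rel=${src#$TOP_DIR/}
        vm_ssh "$port" "mkdir -p \"\$(dirname '$rel')\""
        scp $SSH_OPTS -i "$SSH_KEY" -P "$port" "$src" "osbash@localhost:$rel"
    done
}

With helpers of this shape, the non-shared-folder case above ends up running roughly ssh -p $VM_SSH_PORT osbash@localhost "bash -c /home/osbash/test_tmp/<script>" on the target node.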