[Fuego] [PATCH] docs: .rst files for pages categorized as Explanation, Tutorials and How Tos.

Pooja pooja.sm at pathpartnertech.com
Tue Sep 22 07:03:38 UTC 2020


From: Pooja More <pooja.sm at pathpartnertech.com>

 Convert the following pages from the fuegotest wiki
 into rst format and add them to the Fuego rst documentation directory:
 Adding_a_Board, Adding_a_new_test, Adding_a_toolchain,
 Adding_or_Customizing_a_Distribution, Adding_test_jobs_to_Jenkins,
 Adding_views_to_Jenkins, Architecture, Artwork, Building_Documentation,
 FAQ, FrontPage, Fuego_Quickstart_Guide, Fuego_naming_rules,
 Installing_Fuego, License_And_Contribution_Policy,
 OSS_Test_Vision, Parser_module_API, Quick_Setup_Guide,
 Raspberry_Pi_Fuego_Setup, Test_variables,
 Using_Batch_tests, Using_the_qemuarm_target.

Signed-off-by: Pooja More <pooja.sm at pathpartnertech.com>
---
 docs/rst_src/Adding_a_Board.rst                    | 242 +++++++------
 docs/rst_src/Adding_a_new_test.rst                 | 230 ++++++++-----
 docs/rst_src/Adding_a_toolchain.rst                | 224 +++++++++++-
 .../Adding_or_Customizing_a_Distribution.rst       | 130 ++++---
 docs/rst_src/Adding_test_jobs_to_Jenkins.rst       | 136 +++++++-
 docs/rst_src/Adding_views_to_Jenkins.rst           |  68 ++--
 docs/rst_src/Architecture.rst                      | 375 +++++++++++----------
 docs/rst_src/Artwork.rst                           |   7 +-
 docs/rst_src/Building_Documentation.rst            |  25 +-
 docs/rst_src/FAQ.rst                               |  49 +++
 docs/rst_src/FrontPage.rst                         |  43 ++-
 docs/rst_src/Fuego_Quickstart_Guide.rst            | 255 ++++++++++++++
 docs/rst_src/Fuego_naming_rules.rst                | 126 ++++---
 docs/rst_src/Installing_Fuego.rst                  | 235 +++++++------
 docs/rst_src/License_And_Contribution_Policy.rst   | 137 ++++----
 docs/rst_src/OSS_Test_Vision.rst                   | 349 +++++++++++++++++++
 docs/rst_src/Parser_module_API.rst                 | 114 ++++---
 docs/rst_src/Quick_Setup_Guide.rst                 | 161 +++++++++
 docs/rst_src/Raspberry_Pi_Fuego_Setup.rst          |  73 ++--
 docs/rst_src/Test_variables.rst                    | 206 ++++++-----
 docs/rst_src/Using_Batch_tests.rst                 | 240 +++++++------
 docs/rst_src/Using_the_qemuarm_target.rst          |  38 ++-
 docs/rst_src/Working_with_remote_boards.rst        |  63 ++--
 docs/rst_src/index.rst                             |   4 +-
 docs/rst_src/integration_with_ttc.rst              |  83 +++--
 25 files changed, 2575 insertions(+), 1038 deletions(-)
 create mode 100644 docs/rst_src/FAQ.rst
 create mode 100644 docs/rst_src/Fuego_Quickstart_Guide.rst
 create mode 100644 docs/rst_src/OSS_Test_Vision.rst
 create mode 100644 docs/rst_src/Quick_Setup_Guide.rst

diff --git a/docs/rst_src/Adding_a_Board.rst b/docs/rst_src/Adding_a_Board.rst
index 5ad2a10..8bed1ac 100644
--- a/docs/rst_src/Adding_a_Board.rst
+++ b/docs/rst_src/Adding_a_Board.rst
@@ -1,7 +1,8 @@
 .. _adding_board:
 
+
 #################
-Adding a Board
+Adding a board
 #################
 
 ==============
@@ -10,25 +11,28 @@ Overview
 
 To add your own board to Fuego, there are five main steps:
 
- * 1. Make sure you can access the target via ssh, serial or some other connection
- * 2. Decide whether to use an existing user account, or to create a user account specifically for testing
- * 3. create a test directory on the target
+ * 1. Make sure you can access the target via ssh, serial or some
+   other connection 
+ * 2. Decide whether to use an existing user account, or to create a 
+   user account specifically for testing 
+ * 3. create a test directory on the target 
  * 4. create a board file (on the host)
  * 5. add your board as a node in the Jenkins interface
 
 1 - Set up communication to the target board
 ==============================================
 
-In order for Fuego to test a board, it needs to communicate with it from
-the host machine where Fuego is running.
+In order for Fuego to test a board, it needs to communicate with it 
+from the host machine where Fuego is running.
 
 The most common way to do this is to use 'ssh' access over a network
 connection.  The target board needs to run an ssh server, and the host
 machine connects to it using the 'ssh' client.
 
-The method of setting an ssh server up on a board varies from system to system,
-but sample instructions for setting up an ssh server on a raspberry pi are
-located here:  :ref:`Raspberry Pi Fuego Setup <raspPiFuegoSetup>`
+The method of setting up an ssh server on a board varies from system
+to system, but sample instructions for setting up an ssh server on a
+Raspberry Pi are located here:
+:ref:`Raspberry Pi Fuego Setup <raspPiFuegoSetup>`
 
 Another method that can work is to use a serial connection between
 the host and the board's serial console.  Setting this up is outside
@@ -40,8 +44,8 @@ package to accomplish this.  I
 
 On your target board, a user account is required in order to run tests.
 
-The user account used by Fuego is determined by your board file, which you
-will configure manually in step 4.  You need
+The user account used by Fuego is determined by your board file, which
+you will configure manually in step 4.  You need
 to decide which account to use.  There are three options:
 
  * use the root account
@@ -50,21 +54,23 @@ to decide which account to use.  There are three options:
 
 There are pros and cons to each approach.
 
-My personal preference is to use the root account.  Several tests in Fuego
-require root privileges.  If you are working with a test board, that you
-can re-install easily, using the 'root' account will allow you to run the
-greatest number of tests.  However, this should not be used to test machines
-that are in production.  A Fuego test can run all kinds of commands, and
-you should not trust that tests will not destroy your machine (either
-accidentally or via some malicious intent).
+My personal preference is to use the root account.  Several tests in
+Fuego require root privileges.  If you are working with a test board
+that you can re-install easily, using the 'root' account will allow
+you to run the greatest number of tests.  However, this should not be
+used to test machines that are in production.  A Fuego test can run
+all kinds of commands, and you should not trust that tests will not
+destroy your machine (either accidentally or via some malicious
+intent).
 
-If you don't use 'root', then you can either use an existing account, or
-create a new account.  In most circumstances it is worthwhile to create a new
-account dedicated to testing.  However, you may not have sufficient privileges
-on your board to do this.
+If you don't use 'root', then you can either use an existing account, 
+or create a new account.  In most circumstances it is worthwhile to 
+create a new account dedicated to testing.  However, you may not have 
+sufficient privileges on your board to do this.
 
-In any event, at this point, decide which account you will use for testing
-with Fuego, and note it to include in the board file, described later.
+In any event, at this point, decide which account you will use for
+testing with Fuego, and note it to include in the board file, 
+described later.
 
 
 3 - Create test directory on target
@@ -98,7 +104,8 @@ Create board file
 
 Now, create your board file.
 The board files reside in <fuego-source-dir>/fuego-ro/boards, and
-each file has a filename with the name of the board, with the extension ".board".
+each file has a filename with the name of the board, with the 
+extension ".board".
 
 The easiest way to create a board file is to copy an existing one,
 and edit the variables to match those of your board.  The following
@@ -129,7 +136,8 @@ with that transport type.
 
  * TRANSPORT - this specifies the transport to use with the target
 
-   * there are three transport types currently supported: 'ssh', 'serial', 'ttc'
+   * there are three transport types currently supported: 'ssh', 
+     'serial', 'ttc'
    * Most boards will use the 'ssh' or 'serial' transport type
    * ex: TRANSPORT="ssh" 
 
@@ -146,12 +154,13 @@ For targets using ssh:
  * SSH_PORT
  * SSH_KEY
 
-IPADDR is the network address of your board.  SSH_PORT is the port where
-the ssh daemon is listening for connections.  By default this is 22, but
-you should set this to whatever your target board uses.  SSH_KEY is the
-absolute path where an SSH key file
-may be found (to allow password-less access to a target machine).  An
-example would be:
+IPADDR is the network address of your board.  SSH_PORT is the port 
+where the ssh daemon is listening for connections.  By default this is
+22, but you should set this to whatever your target board uses.  
+SSH_KEY is the absolute path where an SSH key file may be found (to 
+allow password-less access to a target machine).  
+
+An example would be:
 
  * SSH_KEY="/fuego-ro/boards/myboard_id_rsa"
 
@@ -163,28 +172,40 @@ For targets using serial:
  * BAUD
  * IO_TIME_SERIAL
 
-SERIAL is serial port name used to access the target from the host.  This
-is the name of the serial device node on the host (or in the container).
-this is specified without the /dev/ prefix.  Some examples are:
+SERIAL is the serial port name used to access the target from the
+host.  This is the name of the serial device node on the host (or in
+the container).  This is specified without the /dev/ prefix.
+
+Some examples are:
 
  * ttyACM0
  * ttyACM1
  * ttyUSB0
 
-BAUD is the baud-rate used for the serial communication, for eg. "115200".  
+BAUD is the baud rate used for the serial communication, e.g.
+"115200".
 
-IO_TIME_SERIAL is the time required to catch the command's response from the target. This is specified as a decimal fraction of a second, and is usually
-very short.  A time that usually works is "0.1" seconds.
+IO_TIME_SERIAL is the time required to catch the command's response
+from the target. This is specified as a decimal fraction of a second,
+and is usually very short.  A time that usually works is "0.1"
+seconds.
 
  * ex: IO_TIME_SERIAL="0.1"
 
-This value directly impacts the speed of operations over the serial port, so
-it should be adjusted with caution.  However, if you find that some operations
-are not working over the serial port, try increasing this value (in small increments - 0.15, 0.2, etc.)
-
-*Note: In the case of TRANSPORT="serial", Please make sure that docker container and Fuego have sufficient permissions to access the specified serial port. You may need to modify docker-create-usb-privileged-container.sh prior to making your docker image, in order to make sure the container can access the ports.  Also, if check that the host filesystem permissions on the device node (e.g /dev/ttyACM0 allows access. From inside the container
-you can try using the sersh or sercp commands directly, to test access to
-the target.*
+This value directly impacts the speed of operations over the serial
+port, so it should be adjusted with caution.  However, if you find
+that some operations are not working over the serial port, try
+increasing this value (in small increments - 0.15, 0.2, etc.)
+
+*Note: In the case of TRANSPORT="serial", please make sure that the
+docker container and Fuego have sufficient permissions to access the
+specified serial port. You may need to modify
+docker-create-usb-privileged-container.sh prior to making your docker
+image, in order to make sure the container can access the ports.
+Also, check that the host filesystem permissions on the device node
+(e.g. /dev/ttyACM0) allow access. From inside the container you can
+try using the sersh or sercp commands directly, to test access to the
+target.*
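
Putting the serial settings together, a board-file fragment for a
serial-connected board might look like the following sketch (the
device name and baud rate here are placeholder values; adjust them to
your hardware):

```shell
# Hypothetical serial transport settings for a board file.
# ttyUSB0 and 115200 are example values, not defaults.
TRANSPORT="serial"
SERIAL="ttyUSB0"          # device node, without the /dev/ prefix
BAUD="115200"
IO_TIME_SERIAL="0.1"      # seconds to wait for command output
```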
 
 For targets using ttc:
 
@@ -202,44 +223,52 @@ Other parameters
  * DISTRIB
  * BOARD_CONTROL
 
-The BOARD_TESTDIR directory is an absolute path in the filesystem on the
-target board where the Fuego tests are run.
-Normally this is set to something like "/home/fuego", but you can set it to
-anything.  The user you specify for LOGIN should have access rights to
-this directory.
+The BOARD_TESTDIR directory is an absolute path in the filesystem on 
+the target board where the Fuego tests are run.
+Normally this is set to something like "/home/fuego", but you can set 
+it to anything.  The user you specify for LOGIN should have access 
+rights to this directory.
 
-The ARCHITECTURE is a string describing the architecture used by toolchains to build the tests for the target.
+The ARCHITECTURE is a string describing the architecture used by
+toolchains to build the tests for the target.
 
-The TOOLCHAIN variable indicates the toolchain to use to build the tests
-for the target.  If you are using an ARM target, set this to "qemu-armv7hf".
-This is a default ARM toolchain installed in the docker container, and should
-work for most ARM boards.
+The TOOLCHAIN variable indicates the toolchain to use to build the 
+tests for the target.  If you are using an ARM target, set this to 
+"debian-armhf". This is a default ARM toolchain installed in the 
+docker container, and should work for most ARM boards.
 
-If you are not using ARM, or for some reason the pre-installed arm toolchains
-don't work for the Linux distribution installed on your board, then 
-you will need to install your own SDK or toolchain.  In this case, follow
-the steps in [[Adding a toolchain]], then come back to this step and set
-the TOOLCHAIN variable to the name you used for that operation.
+If you are not using ARM, or for some reason the pre-installed arm
+toolchains don't work for the Linux distribution installed on your
+board, then you will need to install your own SDK or toolchain.
+In this case, follow the steps in :ref:`Adding a toolchain
+<adding_toolchain>`, then come back to this step and set the
+TOOLCHAIN variable to the name you used for that operation.
 
 For other variables in the board file, see the section below.
 
 The DISTRIB variable specifies attributes of the Linux distribution
-running on the board, that are used by Fuego.  Currently, this is mainly 
-used to tell Fuego what kind of system logger the operating system on
-the board has.  Here are some options that are available:
+running on the board that are used by Fuego.  Currently, this is
+mainly used to tell Fuego what kind of system logger the operating
+system on the board has.
 
- * base.dist - a "standard" distribution that implements syslogd-style system logging.  It should have the commands: logread, logger, and /var/log/messages
- * nologread.dist - a distribution that has no 'logread' command, but does have /var/log/messages
- * nosyslogd.dist - a distribution that does not have syslogd-style system logging.
+Here are some options that are available:
 
-If DISTRIB is not specified, Fuego will default to using "nosyslogd.dist".
+ * base.dist - a "standard" distribution that implements syslogd-style
+   system logging.  It should have the commands: logread, logger, and
+   /var/log/messages
+ * nologread.dist - a distribution that has no 'logread' command, but
+   does have /var/log/messages
+ * nosyslogd.dist - a distribution that does not have syslogd-style
+   system logging.
 
-The BOARD_CONTROL variable specifies the name of the system used to control
-board hardware operations.  When Fuego is used in conjunction with board
-control hardware, it can automate more testing functionality.  Specifically,
-it can reboot the board, or re-provision the board, as needed for testing.
-As of the 1.3 release, Fuego only supports the 'ttc' board control system.
-Other board control systems will be introduced and supported over time.
+If DISTRIB is not specified, Fuego will default to using 
+"nosyslogd.dist".
+
+The BOARD_CONTROL variable specifies the name of the system used to
+control board hardware operations.  When Fuego is used in conjunction
+with board control hardware, it can automate more testing
+functionality.  Specifically, it can reboot the board, or re-provision
+the board, as needed for testing.  As of the 1.3 release, Fuego only
+supports the 'ttc' board control system.  Other board control systems
+will be introduced and supported over time.
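
As a sketch, assembling the variables described above into a complete
hypothetical board file might look like the following (the address,
account, and architecture are placeholder values; substitute your own
board's settings):

```shell
# Hypothetical board file: fuego-ro/boards/myboard.board
# All values below are examples, not defaults.
TRANSPORT="ssh"
IPADDR="192.168.1.45"         # placeholder network address
SSH_PORT="22"
SSH_KEY="/fuego-ro/boards/myboard_id_rsa"
LOGIN="fuego"                 # test account decided in step 2
BOARD_TESTDIR="/home/fuego"   # test directory created in step 3
ARCHITECTURE="arm"
TOOLCHAIN="debian-armhf"
DISTRIB="base.dist"
```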
 
 Add node to Jenkins interface
 ================================
@@ -252,13 +281,17 @@ You can see a list of the boards that Fuego knows about using:
 
  * $ ftc list-boards
 
-When you run this command, you should see the name of the board you just
-created.
+When you run this command, you should see the name of the board you
+just created.
+
+You can see the nodes that have already been installed in Jenkins
+with:
 
-You can see the nodes that have already been installed in Jenkins with:
  * $ ftc list-nodes
 
-To actually add the board as a node in jenkins, inside the docker container, run the following command at a shell prompt:
+To actually add the board as a node in Jenkins, inside the docker
+container, run the following command at a shell prompt:
+
  * $ ftc add-nodes -b <board_name>
 
 ==============================
@@ -266,11 +299,13 @@ Board-specific test variables
 ==============================
 
 The following other variables can also be defined in the board file:
+
  * MAX_REBOOT_RETRIES
  * FUEGO_TARGET_TMP
  * FUEGO_BUILD_FLAGS
 
-See :ref:`Variables <variables>` for the definition and usage of these variables.
+See :ref:`Variables <variables>` for the definition and usage of these
+variables.
 
 General Variables
 ====================
@@ -278,33 +313,36 @@ General Variables
 File System test variables (SATA, USB, MMC)
 =============================================
 
-If running filesystem tests, you will want to declare the Linux device name
-and mountpoint path, for the filesystems to be tested.  There are three
-different device/mountpoint options available depending on the testplan you
-select (SATA, USB, or MMC).  Your board may have all of these types of
-storage available, or only one.
+If running filesystem tests, you will want to declare the Linux device
+name and mountpoint path, for the filesystems to be tested.  There are 
+three different device/mountpoint options available depending on the 
+testplan you select (SATA, USB, or MMC).  Your board may have all of 
+these types of storage available, or only one.
 
 To prepare to run a test on a filesystem on a sata device, define the
 SATA device and mountpoint variables for your board.
 
-For example, if you had a SATA device with a mountable filesystem accessible
-on device /dev/sdb1, and you have a directory on your target of /mnt/sata
-that can be used to mount this device at, you could declare the following
-variables in your board file.
+For example, if you had a SATA device with a mountable filesystem 
+accessible on device /dev/sdb1, and you have a directory on your 
+target of /mnt/sata that can be used to mount this device at, you 
+could declare the following variables in your board file.
 
  * SATA_DEV="/dev/sdb1"
  * SATA_MP="/mnt/sata"
 
-You can define variables with similar names (USB_DEV and USB_MP, or MMC_DEV and MMC_MP) for USB-based filesystems or MMC-based filesystems.
+You can define variables with similar names (USB_DEV and USB_MP, or
+MMC_DEV and MMC_MP) for USB-based filesystems or MMC-based
+filesystems.
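
By analogy with the SATA example above, hypothetical USB and MMC
declarations (the device names and mountpoints here are placeholders)
would be:

```shell
# Example USB and MMC filesystem test variables for a board file.
# Device names and mountpoints are placeholders.
USB_DEV="/dev/sda1"
USB_MP="/mnt/usb"
MMC_DEV="/dev/mmcblk0p1"
MMC_MP="/mnt/mmc"
```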
 
 LTP test variables
 ======================
 
-LTP (the Linux Test Project) test suite is a large collection of tests that
-require some specialized handling, due to the complexity and diversity of 
-the suite. LTP has a large number of tests, some of which may not work correctly on your board.  Some of the LTP tests
-depend on the kernel configuration or on aspects of your Linux distribution
-or your configuration.
+LTP (the Linux Test Project) test suite is a large collection of tests
+that require some specialized handling, due to the complexity and
+diversity of the suite. LTP has a large number of tests, some of which
+may not work correctly on your board.  Some of the LTP tests depend on
+the kernel configuration or on aspects of your Linux distribution or
+your configuration.
 
 You can control whether the LTP posix test succeeds by indicating the
 number of positive and negative results you expect for your board.
@@ -313,19 +351,23 @@ These numbers are indicated in test variables in the board file:
  * LTP_OPEN_POSIX_SUBTEST_COUNT_POS
  * LTP_OPEN_POSIX_SUBTEST_COUNT_NEG
 
-You should run the LTP test yourself once, to see what your baseline values
-should be, then set these to the correct values for your board (configuration
-and setup).
+You should run the LTP test yourself once, to see what your baseline
+values should be, then set these to the correct values for your board
+(configuration and setup).
 
 Then, Fuego will report any deviation from your accepted numbers, for 
 LTP tests on your board.
 
 LTP may also use these other test variables defined in the board file:
 
- * FUNCTIONAL_LTP_HOMEDIR - If this variable is set, it indicates where a pre-installed version of LTP resides in the board's filesystem.  This can be used to avoid a lengthy deploy phase on each execution of LTP.
- * FUNCTIONAL_LTP_BOARD_SKIPLIST - This variable has a list of individual LTP test programs to skip.
+ * FUNCTIONAL_LTP_HOMEDIR - If this variable is set, it indicates
+   where a pre-installed version of LTP resides in the board's
+   filesystem.  This can be used to avoid a lengthy deploy phase on
+   each execution of LTP.  
+ * FUNCTIONAL_LTP_BOARD_SKIPLIST - This variable has a list of 
+   individual LTP test programs to skip.
 
-See :ref:`Functional.LTP <functionalLTP>` for more information about the LTP test, and test
-variables used by it.
+See :ref:`Functional.LTP <functionalLTP>` for more information about 
+the LTP test, and test variables used by it.
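
For illustration, the LTP-related board-file entries might look like
the following sketch (the counts and skiplist entries are invented
placeholders; use the baseline numbers from your own first run):

```shell
# Hypothetical LTP test variables for a board file.
# The counts and test names below are placeholders only.
LTP_OPEN_POSIX_SUBTEST_COUNT_POS="1232"
LTP_OPEN_POSIX_SUBTEST_COUNT_NEG="158"
FUNCTIONAL_LTP_HOMEDIR="/opt/ltp"               # optional pre-installed LTP
FUNCTIONAL_LTP_BOARD_SKIPLIST="fork12 mtest06"  # placeholder test names
```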
 
 
diff --git a/docs/rst_src/Adding_a_new_test.rst b/docs/rst_src/Adding_a_new_test.rst
index 644a37a..0bb8de5 100644
--- a/docs/rst_src/Adding_a_new_test.rst
+++ b/docs/rst_src/Adding_a_new_test.rst
@@ -16,29 +16,40 @@ To add a new test to Fuego, you need to perform the following steps:
  * 4. Write a test script for the test
  * 5. Add the test_specs (if any) for the test
  * 6. Add log processing to the test
- * 6-a. (if a benchmark) Add parser.py and criteria and reference files
+ * 6-a. (if a benchmark) Add parser.py and criteria and reference 
+   files
  * 7. Create the Jenkins test configuration for the test
 
 ==========================
 Decide on a test name 
 ==========================
 
-The first step to creating a test is deciding the test name.  There are two
-types of tests supported by Fuego: functional tests and benchmark tests.
-A functional test either passes or fails, while a benchmark test produces one or more numbers representing some performance measurements for the system.
+The first step to creating a test is deciding the test name.  There
+are two types of tests supported by Fuego: functional tests and
+benchmark tests.  A functional test either passes or fails, while a
+benchmark test produces one or more numbers representing some
+performance measurements for the system.
 
 Usually, the name of the test will be a combination of the test type
-and a name to identify the test itself.  Here are some examples: *bonnie* is a popular disk performance test.  The name of this test in the fuego system is *Benchmark.bonnie*.  A test which runs portions of the posix test suite is a functional test (it either passes or fails), and in Fuego is named *Functional.posixtestsuite*.  The test name should be all one word (no spaces).
+and a name to identify the test itself.  Here are some examples:
+*bonnie* is a popular disk performance test.  The name of this test in
+the fuego system is *Benchmark.bonnie*.  A test which runs portions of
+the posix test suite is a functional test (it either passes or fails),
+and in Fuego is named *Functional.posixtestsuite*.  The test name
+should be all one word (no spaces).
 
-This name is used as the directory name where the test materials will live in the Fuego system.
+This name is used as the directory name where the test materials will
+live in the Fuego system.
 
 ======================================
 Create the directory for the test 
 ======================================
 
-The main test directory is located in /fuego-core/engine/tests/*<test_name>*
+The main test directory is located in
+/fuego-core/engine/tests/*<test_name>*
 
-So if you just created a new Functional test called 'foo', you would create the directory:
+So if you just created a new Functional test called 'foo', you would
+create the directory:
 
  * /fuego-core/engine/tests/Functional.foo
 
@@ -50,31 +61,34 @@ The actual creation of the test program itself is outside
 the scope of Fuego.  Fuego is intended to execute an existing
 test program, for which source code or a script already exists.
 
-This page describes how to integrate such a test program into the Fuego test system.
+This page describes how to integrate such a test program into the
+Fuego test system.
 
 A test program in Fuego is provided in source form so that it can
 be compiled for whatever processor architecture is used by the 
 target under test. This source may be in the form of a tarfile,
 or a reference to a git repository, and one or more patches.
 
-Create a tarfile for the test, by downloading the test source manually, and
-creating the tarfile.  Or, note the reference for the git repository for the
-test source.
+Create a tarfile for the test, by downloading the test source
+manually, and creating the tarfile.  Or, note the reference for the
+git repository for the test source.
 
 tarball source 
 ================
 
-If you are using source in the form of a tarfile, you add the name of the
-tarfile (called 'tarball') to the test script.
+If you are using source in the form of a tarfile, you add the name of
+the tarfile (called 'tarball') to the test script.
 
-The tarfile may be compressed.  Supported compression schemes, and their associated extensions are:
+The tarfile may be compressed.  Supported compression schemes, and
+their associated extensions are:
  
  * uncompressed (extension='.tar')
  * compressed with gzip (extension='.tar.gz' or '.tgz')
  * compressed with bzip2 (extension='.bz2')
 
-For example, if the source for your test was in the tarfile 'foo-1.2.tgz' you
-would add the following line to your test script, to reference this source: ::
+For example, if the source for your test was in the tarfile
+'foo-1.2.tgz' you would add the following line to your test script, to
+reference this source: ::
 
   tarball=foo-1.2.tgz
 
@@ -82,17 +96,18 @@ would add the following line to your test script, to reference this source: ::
 git source 
 ===============
 
-If you are using source from an online git repository, you reference this
-source by adding the variables 'gitrepo' and 'gitref' to the test script.
+If you are using source from an online git repository, you reference
+this source by adding the variables 'gitrepo' and 'gitref' to the test
+script.
 
 In this case, the 'gitrepo' is the URL used to access the source, and
-the 'gitref' refers to a commit id (hash, tag, version, etc.) that refers
-to a particular version of the code.
+the 'gitref' refers to a commit id (hash, tag, version, etc.) that
+refers to a particular version of the code.
 
-For example, if your test program is built from source in an online 'foo' repository,
-and you want to use version 1.2 of that (which is tagged in the repository as 'v1.2',
-on the master branch,  you might have some lines like the following in the test's
-script. ::
+For example, if your test program is built from source in an online
+'foo' repository, and you want to use version 1.2 of that (which is
+tagged in the repository as 'v1.2' on the master branch), you might
+have some lines like the following in the test's script. ::
 
   gitrepo=http://github.com/sampleuser/foo.git
   gitref=master/v1.2
@@ -101,16 +116,17 @@ script. ::
 script-based source 
 =====================
 
-Some tests are simple enough to be implemented as a single script (that runs on the board).
-For these tests, no additional source is necessary, and the script
-can just be placed directly in the test's home directory. In *fuego_test.sh* you must set the following variable: ::
+Some tests are simple enough to be implemented as a single script
+(that runs on the board).  For these tests, no additional source is
+necessary, and the script can just be placed directly in the test's
+home directory. In *fuego_test.sh* you must set the following
+variable: ::
 
 
  local_source=1
 
 
-During
-the deploy phase, the script is sent to the board directly from
+During the deploy phase, the script is sent to the board directly from
 the test home directory instead of from the test build directory.
 
 
@@ -118,7 +134,10 @@ the test home directory instead of from the test build directory.
 Test script 
 =================
 
-The test script is a small shell script called ``fuego_test.sh``. It specifies the source tarfile containing the test program, and provides implementations for the functions needed to build, deploy, execute, and evaluate the results from the test program.
+The test script is a small shell script called ``fuego_test.sh``. It
+specifies the source tarfile containing the test program, and provides
+implementations for the functions needed to build, deploy, execute,
+and evaluate the results from the test program.
 
 The test script for a functional test should contain the following:
 
@@ -136,7 +155,9 @@ in order to run the test.
 Sample test script 
 ========================
 
-Here is the ``fuego_test.sh`` script for the test Functional.hello_world.  This script demonstrates a lot of the core elements of a test script.::
+Here is the ``fuego_test.sh`` script for the test
+Functional.hello_world.  This script demonstrates a lot of the core
+elements of a test script. ::
 
 
 	#!/bin/bash
@@ -152,7 +173,8 @@ Here is the ``fuego_test.sh`` script for the test Functional.hello_world.  This
 	}
 
 	function test_run {
-	    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello $FUNCTIONAL_HELLO_WORLD_ARG"
+	    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; \
+	        ./hello $FUNCTIONAL_HELLO_WORLD_ARG"
 	}
 
 	function test_processing {
@@ -163,9 +185,12 @@ Here is the ``fuego_test.sh`` script for the test Functional.hello_world.  This
 Description of base test functions
 =========================================
 
-The base test functions (test_build, test_deploy, test_run, and test_processing) are fairly simple.  Each one contains a few statements to accomplish that phase of the test execution.
+The base test functions (test_build, test_deploy, test_run, and
+test_processing) are fairly simple.  Each one contains a few
+statements to accomplish that phase of the test execution.
 
-You can find more information about each of these functions at the following links:
+You can find more information about each of these functions at the
+following links:
 
  * :ref:`test_pre_check <func_test_pre_check>`
  * :ref:`test_build <func_test_build>`
@@ -182,7 +207,8 @@ Another element of every test is the *test spec*.  A file is used
 to define a set of parameters that are used to customize the test
 for a particular use case.
 
-You must define the test spec(s) for this test, and add an entry to the appropriate testplan for it.
+You must define the test spec(s) for this test, and add an entry to
+the appropriate testplan for it.
 
 Each test in the system must have a test spec file.  This file
 is used to list customizable variables for the test.
@@ -195,12 +221,14 @@ The test spec file is:
 
  * named 'spec.json' in the test directory,
  * in JSON format, 
- * provides a ``testName`` attribute, and a ``specs`` attribute, which is a list,
- * may include any named spec you want, but must define at least the 'default' spec for the test
+ * provides a ``testName`` attribute, and a ``specs`` 
+   attribute, which is a list,
+ * may include any named spec you want, but must define at least the 
+   'default' spec for the test
 
    * Note that the 'default' spec can be empty, if desired.
 
-Here is an example one that defines no variables.::
+Here is an example spec that defines no variables. ::
 
 
 	{
@@ -211,7 +239,8 @@ Here is an example one that defines no variables.::
 	}
 
 
-And here is the spec.json of the Functional.hello_world example, which defines three specs: ::
+And here is the spec.json of the Functional.hello_world example, which
+defines three specs: ::
 
 
 	{
@@ -230,14 +259,20 @@ And here is the spec.json of the Functional.hello_world example, which defines t
 	}
 
 
-Next, you may want to add an entry to one of the testplan files.  These files are located in the directory ``/fuego-core/engine/overlays/testplans``.
+Next, you may want to add an entry to one of the testplan files.
+These files are located in the directory
+``/fuego-core/engine/overlays/testplans``.
 
-Choose a testplan you would like to include this test, and edit the corresponding file. For example, to add your test to the list of tests executed when the 'default' testplan is used, add an entry ``default`` to the 'testplan_default.json' file.
+Choose a testplan in which you would like to include this test, and
+edit the corresponding file. For example, to add your test to the list
+of tests executed when the 'default' testplan is used, add an entry
+for your test to the 'testplan_default.json' file.
 
-Note that you should add a comma after your entry, if it is not the last
-one in the list of *tests*.
+Note that you should add a comma after your entry, if it is not the 
+last one in the list of *tests*.
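+
For illustration, an entry in a testplan file pairs a test name with a spec. Here is a pared-down sketch of such a file; the attribute names follow the existing testplan files, but the test name is hypothetical, so consult an existing testplan_*.json file for the authoritative format:

```json
{
    "testPlanName": "testplan_default",
    "tests": [
        { "testName": "Functional.mytest", "spec": "default" }
    ]
}
```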
 
-Please read :ref:`Test Specs and Plans <test_specs_and_plans>` for more details.
+Please read :ref:`Test Specs and Plans <test_specs_and_plans>` for
+more details.
 
 
 ========================
@@ -246,10 +281,12 @@ Test results parser
 Each test should also provide some mechanism to parse the results
 from the test program, and determine the success of the test.
 
-For a simple Functional test, you can use the :ref:`log_compare <func_log_compare>` function to specify a pattern to search
-for in the test log, and the number of times that pattern should be found
-in order to indicate success of the test.  This is done from the
-:ref:`test_processing <func_test_processing>` function in the test script.
+For a simple Functional test, you can use the :ref:`log_compare
+<func_log_compare>` function to specify a pattern to search for in the
+test log, and the number of times that pattern should be found in
+order to indicate success of the test.  This is done from the
+:ref:`test_processing <func_test_processing>` function in the test
+script.
 
 Here is an example of a call to log_compare: ::
 
@@ -258,55 +295,58 @@ Here is an example of a call to log_compare: ::
 	}
 
 
-This example looks for the pattern *^TEST.*OK*, which finds lines in the
-test log that start with the word 'TEST' and are followed by the string 'OK'
-on the same line.  It looks for this pattern 11 times.
+This example looks for the pattern ``^TEST.*OK``, which finds lines in
+the test log that start with the word 'TEST' and are followed by the
+string 'OK' on the same line.  It expects to find this pattern 11
+times.
 
 :ref:`log_compare <func_log_compare>` can be used to parse the logs of
 simple tests with line-oriented output.
 
-For tests with more complex output, and for Benchmark tests that produce
-numerical results, you must add a python program called 'parser.py',
-which scans the test log and produces a data structure used by 
-other parts of the Fuego system.
+For tests with more complex output, and for Benchmark tests that
+produce numerical results, you must add a python program called
+'parser.py', which scans the test log and produces a data structure
+used by other parts of the Fuego system.
 
 See :ref:`parser.py <parser>` for information about this program.
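+
To give a feel for what such a parser does, here is a minimal, self-contained sketch in Python.  It demonstrates only the log-scanning step; a real parser.py uses Fuego's common parser library to record results, and the log format and test case names below are invented for illustration:

```python
import re

def scan_log(log_text):
    """Map each matching test case in a log to a PASS/FAIL result."""
    results = {}
    for line in log_text.splitlines():
        # hypothetical log format: "TEST-<name>: OK" or "TEST-<name>: FAIL"
        m = re.match(r"^TEST-(\w+): (OK|FAIL)", line)
        if m:
            results["default." + m.group(1)] = \
                "PASS" if m.group(2) == "OK" else "FAIL"
    return results

log = "TEST-alpha: OK\nsome unrelated output\nTEST-beta: FAIL\n"
print(scan_log(log))  # {'default.alpha': 'PASS', 'default.beta': 'FAIL'}
```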
 
 
 
-====================================
-Pass criteria and reference info 
-====================================
-You should also provide information to Fuego to indicate how to evaluate
-the ultimate resolution of the test.
+==================================== 
+Pass criteria and reference info
+==================================== 
 
-For a Functional test, it is usually the case that the whole test passes
-only if all individual test cases in the test pass.  That is, one error in
-a test case indicates overall test failure.  However, for Benchmark tests,
-the evaluation of the results is more complicated.  It is required to specify
-what numbers constitute success vs. failure for the test.
+You should also provide information to Fuego to indicate how to 
+evaluate the ultimate resolution of the test.
+
+For a Functional test, it is usually the case that the whole test
+passes only if all individual test cases in the test pass.  That is,
+one error in a test case indicates overall test failure.  However, for
+Benchmark tests, the evaluation of the results is more complicated.
+It is required to specify what numbers constitute success vs. failure
+for the test.
 
 Also, for very complicated Functional tests, there may be complicated
 results, where, for example, some results should be ignored.
 
-You can specify the criteria used to evaluate the test results, by creating
-a ':ref:`criteria.json <criteria.json>`' file for the test.
+You can specify the criteria used to evaluate the test results, by
+creating a ':ref:`criteria.json <criteria.json>`' file for the test.
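+
As a rough illustration (see the criteria.json reference page for the authoritative schema), a criteria file for a Benchmark test might declare a numeric threshold along these lines; the test id and threshold value here are invented:

```json
{
    "schema_version": "1.0",
    "criteria": [
        {
            "tguid": "default.Dhrystone.Score",
            "reference": {
                "value": 1000000,
                "operator": "ge"
            }
        }
    ]
}
```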
 
-Finally, you may wish to add a file that indicates certain information about
-the test results.  This information is placed in the ':ref:`reference.json <reference_json>`' file
-for a test.
+Finally, you may wish to add a file that indicates certain information
+about the test results.  This information is placed in the
+':ref:`reference.json <reference_json>`' file for a test.
 
-Please see the links for those files to learn more about what they are and
-how to write them, and customize them for your system.
+Please see the links for those files to learn more about what they are
+and how to write them, and customize them for your system.
 
 =================================
 Jenkins job definition file 
 =================================
 
-The last step in creating the test is to create the Jenkins job for it.
+The last step in creating the test is to create the Jenkins job for
+it.
 
-A Jenkins job describes to Jenkins what board to run the test on,
-what variables to pass to the test (including the test spec (or variant),
+A Jenkins job describes to Jenkins what board to run the test on, what
+variables to pass to the test (including the test spec, or variant),
 and what script to run for the test.
 
 Jenkins jobs are created using the command-line tool 'ftc'.
@@ -321,32 +361,37 @@ The ftc 'add-jobs' sub-command uses '-b' to specify the board,
 '-t' to specify the test, and '-s' to specify the test spec that
 will be used for this Jenkins job.
 
-In this case, the name of the Jenkins job that would be created would be:
+In this case, the name of the Jenkins job that would be created would
+be:
 
  * myboard.default.Functional.mytest
 
-This results in the creation of a file called config.xml, in the /var/lib/jenkins/jobs/<job_name> directory.
+This results in the creation of a file called config.xml, in the
+/var/lib/jenkins/jobs/<job_name> directory.
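+
Putting the pieces together, the job name is simply the board, spec, and test name joined with dots.  A small sketch, using the hypothetical board and test names from above:

```shell
#!/bin/sh
# Compose the Jenkins job name the way Fuego does: <board>.<spec>.<test>
BOARD=myboard
SPEC=default
TEST=Functional.mytest

JOB_NAME="${BOARD}.${SPEC}.${TEST}"
echo "${JOB_NAME}"
# the job definition would then be stored as:
#   /var/lib/jenkins/jobs/${JOB_NAME}/config.xml
```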
+
 
 
 
 
 
+========================= 
+Publishing the test
+========================= 
 
-=========================
-Publishing the test 
-=========================
-Tests that are of general interest should be submitted for inclusion into fuego-core.
+Tests that are of general interest should be
+submitted for inclusion into fuego-core.
 
-Right now, the method of doing this is to create a commit and send that commit
-to the fuego mailing list, for review, and hopefully acceptance and
-integration by the fuego maintainers.
+Right now, the method of doing this is to create a commit and send
+that commit to the fuego mailing list, for review, and hopefully
+acceptance and integration by the fuego maintainers.
 
-In the future, a server will be provided where test developers can share
-tests that they have created in a kind of "test marketplace".  Tests will
-be available for browsing and downloading, with results from other
-developers available to compare with your own results.  There is already
-preliminary support for packaging a test using the 'ftc package-test' feature.
-More information about this service will be made available in the future.
+In the future, a server will be provided where test developers can
+share tests that they have created in a kind of "test marketplace".
+Tests will be available for browsing and downloading, with results
+from other developers available to compare with your own results.
+There is already preliminary support for packaging a test using the
+'ftc package-test' feature.  More information about this service will
+be made available in the future.
 
 =======================
 Technical Details 
@@ -357,7 +402,8 @@ This section has technical details about a test.
 Directory structure 
 ========================
 
-The directory structure used by Fuego is documented at [[Fuego directories]]
+The directory structure used by Fuego is documented on the
+*Fuego directories* page.
  
 
 
diff --git a/docs/rst_src/Adding_a_toolchain.rst b/docs/rst_src/Adding_a_toolchain.rst
index ae3700e..265b912 100644
--- a/docs/rst_src/Adding_a_toolchain.rst
+++ b/docs/rst_src/Adding_a_toolchain.rst
@@ -1,6 +1,224 @@
 .. _addtoolchain:
 
 
-#################
-Adding Toolchain
-#################
+###################
+Adding a toolchain
+###################
+
+==================
+Introduction 
+==================
+
+In order to build tests for your target board, you need to install a
+toolchain (often in the form of an SDK) into the Fuego system, and let
+Fuego know how to access it.
+
+Adding a toolchain to Fuego consists of these steps:
+
+ 1. obtain (generate or retrieve) the toolchain
+ 2. copy the toolchain to the container
+ 3. install the toolchain inside the container
+ 4. create a -tools.sh file for the toolchain
+ 5. reference the toolchain in the appropriate board file
+
+========================
+Obtain a toolchain 
+========================
+
+First, you need to obtain a toolchain that will work with your board.
+You should have a toolchain that produces software which will work
+with the Linux distribution on your board.  This is usually obtained
+from your build tool, if you are building the distribution yourself,
+or from your semiconductor supplier or embedded Linux OS vendor, if
+you have been provided the Linux distribution from an external source.
+
+
+Installing a Debian cross-toolchain
+==============================================
+
+If you are using a Debian-based target, then to get started, you may
+use a script to install a cross-compiler toolchain into the container.
+For example, for an ARM target, you might want to install the Debian
+armv7hf toolchain.  You can even try a Debian toolchain with other
+Linux distributions.  However, if you are not using Debian on your
+target board, there is no guarantee that this will produce correct
+software for your board.  It is much better to install your own SDK
+for your board into the fuego system.
+
+To install a Debian cross toolchain into the container, get to the
+shell prompt in the container and use the following script:
+
+ * /fuego-ro/toolchains/install_cross_toolchain.sh
+
+To use the script, pass it the argument naming the cross-compile
+architecture you are using.  Available values are:
+
+ * arm64 armel armhf mips mipsel powerpc ppc64el
+
+Execute the script, inside the docker container, with a single
+command-line option to indicate the cross-toolchain to install.  You
+can use the script more than once, if you wish to install multiple
+toolchains.
+
+Example:
+
+ * # /fuego-ro/toolchains/install_cross_toolchain.sh armhf
+
+The Debian packages for the specified toolchain will be installed into
+the docker container.
+
+Building a Yocto Project SDK
+===============================
+
+When you build an image in the Yocto Project, you can also build an
+SDK to go with that image using the '-c do_populate_sdk' build step
+with bitbake.
+
+To build the SDK in Yocto Project, inside your yocto build directory
+do:
+
+ * bitbake <image-name> -c do_populate_sdk
+
+This will build an SDK archive (containing the toolchain, header files
+and libraries needed for creating software on your target), and put it
+into the directory <build-root>/tmp/deploy/sdk/
+
+For example, if you are building the 'core-image-minimal' image, you
+would execute: ::
+
+  $ bitbake core-image-minimal -c do_populate_sdk
+
+At this step, look in tmp/deploy/sdk and note the name of the SDK
+install package (the file ending with .sh).
+
+===========================================
+Install the SDK in the docker container 
+===========================================
+
+To allow fuego to use the SDK, you need to install it into the fuego
+docker container.  First, transfer the SDK into the container using
+docker cp.
+
+With the container running, on the host machine do:
+
+ * docker ps (note the container id)
+ * docker cp tmp/deploy/sdk/<sdk-install-package> <container-id>:/tmp
+
+This last command will place the SDK install package into the /tmp
+directory in the container.
+
+Now, install the SDK into the container, wherever you would like.
+Many toolchains install themselves under /opt.
+
+At the shell inside the container, run the SDK install script
+(which is a self-extracting archive):
+
+  * /tmp/poky-....sh
+
+    * during the installation, select a toolchain installation 
+      location, like: /opt/poky/2.0.1
+
+These instructions are for an SDK built by the Yocto Project.  Similar
+instructions would apply for installing a different toolchain or SDK.
+That is, get the SDK into the container, then install it inside the
+container.
+
+==============================================
+Create a -tools.sh file for the toolchain
+==============================================
+
+Now, fuego needs to be told how to interact with the toolchain.
+During test execution, the fuego system determines what toolchain to
+use based on the value of the TOOLCHAIN variable in the board file for
+the target under test.  The TOOLCHAIN variable is a string that is
+used to select the appropriate '<TOOLCHAIN>-tools.sh' file in
+/fuego-ro/toolchains.
+
+You need to determine a name for this TOOLCHAIN, and then create a
+file with that name, called $TOOLCHAIN-tools.sh.  So, for example if
+you created an SDK with poky for the qemuarm image, you might call the
+TOOLCHAIN "poky-qemuarm".  You would create a file called
+"poky-qemuarm-tools.sh".
+
+The -tools.sh file is used by Fuego to define the environment
+variables needed to interact with the SDK.  This includes things like
+CC, AR, and LD.  The complete list of variables that this script
+needs to provide is described on the *tools.sh* page.
+
+Inside the -tools.sh file, you execute instructions that will set the
+environment variables needed to build software with that SDK.  For an
+SDK built by the Yocto Project, this involves setting a few variables,
+and calling the environment-setup... script that comes with the SDK.
+For SDKs from other sources, you can define the needed variables by
+directly exporting them.
+
+Here is an example of the tools.sh script for poky-qemuarm.  This is
+in the sample file /fuego-ro/toolchains/poky-qemuarm-tools.sh: ::
+
+
+	# fuego toolchain script
+	# this sets up the environment needed for fuego to use a 
+	# toolchain
+	# this includes the following variables:
+	# CC, CXX, CPP, CXXCPP, CONFIGURE_FLAGS, AS, LD, ARCH
+	# CROSS_COMPILE, PREFIX, HOST, SDKROOT
+	# CFLAGS and LDFLAGS are optional
+	# 
+	# this script is sourced by /fuego-ro/toolchains/tools.sh
+
+	POKY_SDK_ROOT=/opt/poky/2.0.1
+	export SDKROOT=${POKY_SDK_ROOT}/sysroots/armv5e-poky-linux-gnueabi
+
+	# the Yocto Project environment setup script changes PATH so
+	# that python uses libs from sysroot, which is not what we
+	# want, so save the original path and use it later
+	ORIG_PATH=$PATH
+
+	PREFIX=arm-poky-linux-gnueabi
+	source ${POKY_SDK_ROOT}/environment-setup-armv5e-poky-linux-gnueabi
+
+	HOST=arm-poky-linux-gnueabi
+
+	# don't use PYTHONHOME from environment setup script
+	unset PYTHONHOME
+	env -u PYTHONHOME
+
+
+
+===============================================
+Reference the toolchain in a board file
+===============================================
+
+Now, to use that SDK for building test software for a particular
+target board, set the value of the TOOLCHAIN variable in the board
+file for that target.
+
+Edit the board file:
+
+ * vi /fuego-ro/boards/myboard.board
+
+And add (or edit) the line:
+
+ * TOOLCHAIN="poky-qemuarm"
+
+============
+Notes
+============
+
+Python execution
+==================
+
+You may notice that some of the example scripts set the environment
+variable ORIG_PATH.  This is used internally by the *run_python*
+function to execute the container's default python interpreter,
+instead of the interpreter that was built by the Yocto Project.
+
+
+
+
+
+
diff --git a/docs/rst_src/Adding_or_Customizing_a_Distribution.rst b/docs/rst_src/Adding_or_Customizing_a_Distribution.rst
index 6e067c9..0384ac7 100644
--- a/docs/rst_src/Adding_or_Customizing_a_Distribution.rst
+++ b/docs/rst_src/Adding_or_Customizing_a_Distribution.rst
@@ -8,31 +8,33 @@ Adding or Customizing a Distribution
 Introduction
 =====================
 
-Although Fuego is configured to execute on a standard Linux distribution,
-Fuego supports customizing certain aspects of its interaction with the system
-under test.  Fuego uses several features of the operating system on the
-board to perform
-aspects of its test execution.  This includes things like accessing the system
-log, flushing file system caches, and rebooting the board.  The ability 
-to customize Fuego's interaction with the system under test is useful in
-case you have a non-standard Linux distribution (where, say, certain features
-of Linux are missing or changed), or when you are trying to use Fuego with
-a non-Linux system.
-
-A developer can customize the distribution layer of Fuego in one of two ways:
- * adding overlay functions to a board file
- * by creating a new distribution overlay file
+Although Fuego is configured to execute on a standard Linux
+distribution, Fuego supports customizing certain aspects of its
+interaction with the system under test.  Fuego uses several features
+of the operating system on the board to perform aspects of its test
+execution.  This includes things like accessing the system log,
+flushing file system caches, and rebooting the board.  The ability to
+customize Fuego's interaction with the system under test is useful in
+case you have a non-standard Linux distribution (where, say, certain
+features of Linux are missing or changed), or when you are trying to
+use Fuego with a non-Linux system.
+
+A developer can customize the distribution layer of Fuego in one of
+two ways:
+
+ * adding overlay functions to a board file
+ * creating a new distribution overlay file
 
 ==============================
 Distribution overlay file
 ==============================
 
-A distribution overlay file can be added to Fuego, by adding a new ''.dist''
-file to the directory: fuego-core/overlays/distrib
+A distribution overlay file can be added to Fuego, by adding a new
+``.dist`` file to the directory: fuego-core/overlays/distrib
 
 The *distribution* functions are defined in the file:
-fuego-core/overlays/base/base-distrib.fuegoclass
-These include functions for doing certain operations on your board, including:
+fuego-core/overlays/base/base-distrib.fuegoclass
+These include functions for doing certain operations on your board,
+including:
 
  - :ref:`ov_get_firmware <func_ov_get_firmware>`
  - :ref:`ov_rootfs_reboot <func_ov_rootfs_reboot>`
@@ -54,23 +56,29 @@ You can look up what each override function should do by
 reading the fuegoclass code, or looking at the function documentation
 at: :ref:`Test Script APIs <test_script_apis>`
 
-The inheritance mechanism and syntax for Fuego overlay files is described
-at: :ref:`Overlay Generation <overlay_generation>`
+The inheritance mechanism and syntax for Fuego overlay files is
+described at: :ref:`Overlay Generation <overlay_generation>`
 
-The goal of the distribution abstraction layer in Fuego is to allow you to
-customize Fuego operations to match what is available on your target board.
-For example, the default (base class) :ref:`ov_rootfs_logread() <func_ov_rootfs_logread>` function assumes
+The goal of the distribution abstraction layer in Fuego is to allow
+you to customize Fuego operations to match what is available on your
+target board.  For example, the default (base class)
+:ref:`ov_rootfs_logread() <func_ov_rootfs_logread>` function assumes
 that the target board has the command "/sbin/logread" that can be used
-to read the system log.  If your distribution does not have "/sbin/logread", or indeed
-if there is no system log, then you would need to override ov_rootfs_logread()
-to do something appropriate for your distribution or OS.
-
-*Note: In fact, this is a common enough situation that there is already a 'nologread.dist' file already in the overlay/distribs directory.*
-
-Similarly, :ref:`ov_rootfs_kill <func_ov_rootfs_kill>` uses the /proc filesystem, /proc/$pid/status, and the
-cat, grep, kill and sleep commands on the target board to do its work.  If our distribution
-is missing any of these, then you would need to override ov_rootfs_kill()
-with a function that did the appropriate thing on your distribution (or OS).
+to read the system log.  If your distribution does not have
+"/sbin/logread", or indeed if there is no system log, then you would
+need to override ov_rootfs_logread() to do something appropriate for
+your distribution or OS.
+
+*Note: In fact, this is a common enough situation that there is*
+*already a 'nologread.dist' file in the overlay/distribs*
+*directory.*
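+
As a hypothetical sketch of such an override (the log file path is invented; see the existing .dist files, such as nologread.dist, for real examples), a replacement for ov_rootfs_logread in a custom .dist file might look like:

```
override-func ov_rootfs_logread() {
    # read a log file directly instead of using /sbin/logread
    # (the path here is board-specific and invented for illustration)
    cmd "cat /var/log/messages"
}
```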
+
+Similarly, :ref:`ov_rootfs_kill <func_ov_rootfs_kill>` uses the /proc
+filesystem, /proc/$pid/status, and the cat, grep, kill and sleep
+commands on the target board to do its work.  If your distribution is
+missing any of these, then you would need to override ov_rootfs_kill()
+with a function that did the appropriate thing on your distribution
+(or OS).
 
 Existing distribution overlay files
 =====================================
@@ -87,16 +95,17 @@ that commonly occur in embedded Linux testing.
 Referencing the distribution in the board file 
 ===========================================================
 
-Inside the board file for your board, indicate the distribution overlay you are using
-by setting the *DISTRIB* variable.
+Inside the board file for your board, indicate the distribution
+overlay you are using by setting the *DISTRIB* variable.
 
-If the DISTRIB variable is not set, then the default distribution overlay
-functions are used.
+If the DISTRIB variable is not set, then the default distribution
+overlay functions are used.
 
-For example, if your embedded distribution of Linux does not have a system
-logger, you can override the normal logging interaction of Fuego by using
-the 'nosyslogd.dist' distribution overlay.  To do this, add the following
-line to the board file for target board where this is the case: ::
+For example, if your embedded distribution of Linux does not have a
+system logger, you can override the normal logging interaction of
+Fuego by using the 'nosyslogd.dist' distribution overlay.  To do this,
+add the following line to the board file for the target board where
+this is the case: ::
 
 
   DISTRIB="nosyslogd.dist"
@@ -123,16 +132,16 @@ Notes
 =========
 
 Fuego does not yet fully support testing non-Linux operating systems.
-There is work-in-progress to support testing of NuttX, but that feature
-is not complete as of this writing. In any event, Fuego does include
-a 'NuttX' distribution overlay, which may provide some ideas if you wish
-to write your own overlay for a non-Linux OS.
+There is work-in-progress to support testing of NuttX, but that 
+feature is not complete as of this writing. In any event, Fuego does 
+include a 'NuttX' distribution overlay, which may provide some ideas 
+if you wish to write your own overlay for a non-Linux OS.
 
 NuttX distribution overlay
 ============================
 
-By way of illustration, here are the contents of the NuttX
-distribution overlay file (fuego-core/overlays/distribs/nuttx.dist).::
+By way of illustration, here are the contents of the NuttX 
+distribution overlay file (fuego-core/overlays/distribs/nuttx.dist). ::
 
 
 	override-func ov_get_firmware() {
@@ -196,34 +205,23 @@ Hypothetical QNX distribution
 Say you wanted to add support for testing QNX with Fuego.
 
 Here are some first steps to add a QNX distribution overlay:
+
  * set up your board file
- * create a custom QNX.dist (stubbing out or replacing base class functions as needed)
+ * create a custom QNX.dist (stubbing out or replacing base class 
+   functions as needed)
 
-    * you could copy null.dist to QNX.dist, and deciding which items to replace with QNX-specific functionality
+    * you could copy null.dist to QNX.dist, and decide which items 
+      to replace with QNX-specific functionality
  
  * add DISTRIB="QNX.dist" to your board file
- * run the Functional.fuego_board_check test (using ftc, or adding the node and job to Jenkins
-and building the job using the Jenkins interface), and
+ * run the Functional.fuego_board_check test (using ftc, or adding 
+   the node and job to Jenkins and building the job using the Jenkins 
+   interface), and
  * examine the console log to see what issues surface
 
 
 
-.. toctree::
-   :hidden:
 
-   
-   function_ov_get_firmware
-   function_ov_rootfs_reboot
-   function_ov_rootfs_state
-   function_ov_logger
-   function_ov_rootfs_sync
-   function_ov_rootfs_drop_caches
-   function_ov_rootfs_oom
-   function_ov_rootfs_kill
-   function_ov_rootfs_logread
-   Test_Script_APIs
-   Overlay_Generation
-  
 
 
 
diff --git a/docs/rst_src/Adding_test_jobs_to_Jenkins.rst b/docs/rst_src/Adding_test_jobs_to_Jenkins.rst
index cb0d23f..ca715b3 100644
--- a/docs/rst_src/Adding_test_jobs_to_Jenkins.rst
+++ b/docs/rst_src/Adding_test_jobs_to_Jenkins.rst
@@ -1,5 +1,139 @@
 .. _addtestjob:
 
 ############################
-Adding test jobs to jenkins
+Adding test jobs to Jenkins
 ############################
+
+Before performing any tests with Fuego, you first need to
+add Jenkins jobs for those tests in Jenkins.
+
+To add jobs to Jenkins, you use the 'ftc' command line tool.
+
+Fuego comes with over a hundred different tests, and not
+all of them will be useful for your environment or testing needs.
+
+In order to add jobs to Jenkins, you first need to have
+created a Jenkins node for the board for which you wish to add
+the test.  If you have not already added a board definition,
+or added your board to Jenkins, please see:
+:ref:`Adding a board <adding_board>`
+
+Once your board is defined as a Jenkins node, you can add test
+jobs for it.
+
+There are two ways of adding test jobs: individually, or
+using testplans.  In both cases, you use the 'ftc add-jobs'
+command.
+
+============================
+Selecting tests or plans
+============================
+
+The list of all tests that are available can be seen
+by running the command 'ftc list-tests'.
+
+Run this command inside the docker container, by going to
+the shell prompt inside the Fuego docker container, and typing ::
+
+
+  (container_prompt)$ ftc list-tests
+
+
+To see the list of plans that come pre-configured with Fuego,
+use the command 'ftc list-plans'. ::
+
+  (container_prompt)$ ftc list-plans
+
+
+A plan lists a set of tests to execute.  You can examine the
+list of tests that a testplan includes, by examining the testplan
+file. The testplan files are in JSON format, and are in the
+directory ``fuego-core/engine/overlays/testplans``.
+
+============================
+Adding individual tests 
+============================
+
+To add an individual test, add it using the 'ftc add-jobs'
+command.  For example, to add the test "Functional.hello_world"
+for the board "beaglebone", you would use the following command: ::
+
+
+  (container prompt)$ ftc add-jobs -b beaglebone -t Functional.hello_world
+
+
+Configuring job options
+=========================
+
+When Fuego executes a test job, several options are available to 
+control aspects of job execution.  These can be configured on the 
+'ftc add-jobs' command line.
+
+The options available are:
+
+ * timeout
+ * rebuild flag
+ * reboot flag
+ * precleanup flag
+ * postcleanup flag
+
+See 'ftc add-jobs help' for details about these options and how to 
+specify them.
+
+Adding tests for more than one board 
+======================================
+
+If you want to add tests for more than one board at a time, you can do
+so by specifying multiple board names after the '-b' option with 
+'ftc add-jobs'.  Board names should be a single string argument, with 
+individual board names separated by commas.
+
+For example, the following would add a job for Functional.hello_world 
+to each of the boards rpi1, rpi2 and beaglebone. ::
+
+
+  (container prompt)$ ftc add-jobs -b rpi1,rpi2,beaglebone -t Functional.hello_world
+
+
+
+================================
+Adding jobs based on testplans 
+================================
+
+A testplan is a list of Fuego tests with some options for each one.
+You can see the list of testplans in your
+system with the following command: ::
+
+
+  (container prompt)$ ftc list-plans
+
+
+To create a set of jobs related to docker image testing, for the 
+'docker' board on the system, do the following: ::
+
+
+  (container prompt)$ ftc add-jobs -b docker -p testplan_docker
+
+
+To create a set of jobs for a board called 'beaglebone', 
+do the following: ::
+
+
+  (container prompt)$ ftc add-jobs -b beaglebone -p testplan_smoketest
+
+
+The "smoketest" testplan has about 20 tests that exercise a variety of
+features on a Linux system.  After running these commands, a set of 
+jobs will appear in the Jenkins interface.
+
+Once this is done, your Jenkins interface should look something like 
+this:
+
+.. image:: ../images/fuego-1.1-jenkins-dashboard-beaglebone-jobs.png
+   :width: 900
+
+
+
+
diff --git a/docs/rst_src/Adding_views_to_Jenkins.rst b/docs/rst_src/Adding_views_to_Jenkins.rst
index e61bbff..8d0a679 100644
--- a/docs/rst_src/Adding_views_to_Jenkins.rst
+++ b/docs/rst_src/Adding_views_to_Jenkins.rst
@@ -5,14 +5,15 @@
 Adding views to Jenkins
 #########################
 
-It is useful to organize your Jenkins test jobs into "views".  These appear
-as tabs in the main Jenkins interface. Jenkins always provides a tab
-that lists all of the installed jobs, call "All".  Other views that you
-create will appear on tabs next to this, on the main Jenkins page.
-
-You can define new Jenkins views using the Jenkins interface,
-but Fuego provides a command that allows you to easily create views
-for boards, or for sets of related tests (by name and wildcard), from the
+It is useful to organize your Jenkins test jobs into "views".  These
+appear as tabs in the main Jenkins interface. Jenkins always provides
+a tab that lists all of the installed jobs, called "All".  Other views
+that you create will appear on tabs next to this, on the main Jenkins
+page.
+
+You can define new Jenkins views using the Jenkins interface, but
+Fuego provides a command that allows you to easily create views for
+boards, or for sets of related tests (by name and wildcard), from the
 Linux command line (inside the container).
 
 The usage line for this command is: ::
@@ -20,22 +21,22 @@ The usage line for this command is: ::
   Usage: ftc add-view <view-name> [<job_spec>]
 
 
-The view-name parameter indicates the name of the view in Jenkins,
-and the job-spec parameter is used to select the jobs which appear in
-that view.
+The view-name parameter indicates the name of the view in Jenkins, and
+the job-spec parameter is used to select the jobs which appear in that
+view.
 
-If the job_spec is provided and starts with an '=', then it is 
-interpreted as one or more specific job names.  Otherwise, the view
-is created using a regular expression statement that Jenkins uses to select
-the jobs to include in the view.
+If the job_spec is provided and starts with an '=', then it is
+interpreted as one or more specific job names.  Otherwise, the view is
+created using a regular expression statement that Jenkins uses to
+select the jobs to include in the view.
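
This dispatch on the leading '=' can be sketched as a small shell
fragment (a hypothetical helper for illustration only - not the
actual ftc source):

```shell
# Sketch of how a job_spec argument might be classified (hypothetical
# helper; the real logic is inside ftc).
classify_job_spec() {
    spec="$1"
    case "$spec" in
        =*)
            # a leading '=' means an explicit, comma-separated list of
            # complete job names
            echo "names:${spec#=}"
            ;;
        *)
            # anything else is treated as a regular expression for
            # Jenkins to match against job names
            echo "regex:$spec"
            ;;
    esac
}

classify_job_spec "=jobA,jobB"   # -> names:jobA,jobB
classify_job_spec "Bench.*"      # -> regex:Bench.*
```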
 
 ======================
 Adding a board view 
 ======================
 
 By convention, most Fuego users populate their Jenkins interface with
-a view for each board in their system (well, for labs with a small number
-of boards, anyway).
+a view for each board in their system (well, for labs with a small
+number of boards, anyway).
 
 The simplest way to add a view for a board is to just specify the
 board name, like so: ::
@@ -52,28 +53,28 @@ Customizing regular expressions
 ==================================
 
 Note that if your board name is not unique enough, or is a string
-contained in some tests, then
-you might see some test jobs listed that were not specific
-to that board.  For example, if you had a board name "Bench",
-then a view you created with the view-name of "Bench", would also
-include Benchmarks.  You can work around this by specifying a more
-details regular expression for your job spec.
+contained in some tests, then you might see some test jobs listed that
+were not specific to that board.  For example, if you had a board name
+"Bench", then a view you created with the view-name of "Bench" would
+also include Benchmarks.  You can work around this by specifying a
+more detailed regular expression for your job spec.
 
 For example: ::
 
   (container_prompt)$ ftc add-view Bench "Bench.*"
 
 
-This would only include the jobs that started with "Bench" in the "Bench"
-view.  Benchmark jobs for other boards would not be included, since they
-only have "Benchmark" somewhere in the middle of their job name - not at the
-beginning.
+This would only include the jobs that started with "Bench" in the
+"Bench" view.  Benchmark jobs for other boards would not be included,
+since they only have "Benchmark" somewhere in the middle of their job
+name - not at the beginning.
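
The effect of anchoring can be reproduced from the shell with grep
(the job names below are invented for illustration):

```shell
# Two hypothetical job names: one for a board named 'Bench', and one
# Benchmark test job for another board.
jobs="Bench.default.Functional.hello
myboard.default.Benchmark.dbench"

# The view regex must match from the start of the job name; a leading
# '^' emulates that here.
echo "$jobs" | grep -cE '^Bench.*'   # anchored: only the 'Bench' board job
echo "$jobs" | grep -cE 'Bench'      # unanchored: both names match
```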
 
 ===============================================
 Add view by test name regular expression
 ===============================================
 
-This command would create a view to show LTP results for multiple boards: ::
+This command would create a view to show LTP results for multiple
+boards: ::
 
  (container_prompt)$ ftc add-view LTP
 
@@ -86,7 +87,8 @@ prefixed with  *"fuego_"*.  ::
   (container_prompt)$ ftc add-view fuego ".*fuego_.*"
 
 
-And the following command will show all the batch jobs defined in the system: ::
+And the following command will show all the batch jobs defined in the
+system: ::
 
   (container_prompt)$ ftc add-view ".*.batch"
 
@@ -101,11 +103,13 @@ list of job names.  The job names must be complete, including the
 board name, spec name and full test name. ::
 
 
-  (container_prompt)$ ftc add-view network-tests =docker.default.Functional.ipv6connect,docker.default.Functional.netperf
+  (container_prompt)$ ftc add-view network-tests =docker.default.Functional.ipv6connect,docker.default.Functional.netperf
 
 
-In this command, the view would be named "network-tests", and it would show
-the jobs "docker.default.Functional.ipv6connect" and "docker.default.Functional.netperf".
+In this command, the view would be named "network-tests", and it would
+show the jobs "docker.default.Functional.ipv6connect" and
+"docker.default.Functional.netperf".
 
 
 
diff --git a/docs/rst_src/Architecture.rst b/docs/rst_src/Architecture.rst
index 84fc4ab..e85d9c7 100644
--- a/docs/rst_src/Architecture.rst
+++ b/docs/rst_src/Architecture.rst
@@ -5,9 +5,9 @@
 Architecture
 ################
 
-Fuego consists of a continuous integration system,
-along with some pre-packaged test programs and a shell-based
-test harness, running in a Docker container.::
+Fuego consists of a continuous integration system, along with some
+pre-packaged test programs and a shell-based test harness, running in
+a Docker container::
 
    Fuego = (Jenkins + abstraction scripts + pre-packed tests)
           inside a container
@@ -23,21 +23,22 @@ Major elements
 
 The major elements in the Fuego architecture are:
 
- * Host system
+ * host system
+
+     * container build system
+     * fuego container instance
 
-     * Fuego container instance
-     * Container build system
      * Jenkins continuous integration system
 
-        * web-based user interface (web server on port 8090)
-        * plugins
+       * web-based user interface (web server on port 8090)
+       * plugins
 
-     * Test programs
-     * Build environment (not shown in the diagram above)
-     * Fuego core system
+     * test programs
+     * abstraction scripts (test scripts)
+     * build environment (not shown in the diagram above)
 
- * Target system
- * Web client, for interaction with the system
+ * target system
+ * web client, for interaction with the system
 
 ==============
 Jenkins 
@@ -82,13 +83,6 @@ system).
 =========================
 Pre-packaged tests 
 =========================
-Fuego contains over 100 pre-packaged tests, ready for you to start
-testing with these tests "out-of-the-box".  The tests individual
-tests like a test of 'iputils' or 'pmqtest', as well as several
-Benchmarks in the area of CPU performance, networking, graphics
-and realtime.  Fuego also includes some full test suites, like
-LTP (Linux Test Project).  Finally Fuego includes a set of selftest
-tests, to validate board operation or Fuego core functionality.
 
 =========================
 Abstraction scripts 
@@ -108,9 +102,10 @@ Fuego uses a set of shell script fragments to support abstractions for
 Container
 ==========================
 
-By default, Fuego runs inside a Docker container.  This provides two benefits:
+By default, Fuego runs inside a Docker container.  This provides two
+benefits:
 
- * It makes it easy to run the system on a variety of different Linux
+ * It makes it easy to run the system on a variety of different Linux 
    distributions
  * It makes the build environment for the test programs consistent
 
@@ -131,63 +126,66 @@ utilities and tools available for performing tests
 Different objects in Fuego 
 ============================
 
-It is useful to give an overview of the major objects used in Fuego, as
-they will be referenced many times:
+It is useful to give an overview of the major objects used in Fuego, 
+as they will be referenced many times:
 
 Fuego core objects:
 
- * board - a description of the device under test
- * test - materials for conducting a test
- * spec - one or more sets of variables for describing a test variant
- * plan - a collection of tests, with additional test settings for their
-   execution
- * run - the results from a individual execution of a test on a board
+ * board - a description of the device under test
+ * test - materials for conducting a test
+ * spec - one or more sets of variables for describing a test variant
+ * plan - a collection of tests, with additional test settings for
+   their execution
+ * run - the results from an individual execution of a test on a
+   board
 
 Jenkins objects:
 
- * node - the Jenkins object corresponding to a Fuego board
- * job - a Jenkins object corresponding to a combination of board, spec,
-   and test
- * build - the test results, from Jenkins perspective - corresponding to
-   a Fuego 'run'
-
-There are both a front-end and a back-end to the system, and different
-names are used to describe the front-end and back-end objects used by
-the system, to avoid confusion.  In general, Jenkins objects have
-rough counterparts in the Fuego system:
-
-  +------------------+-------------------------------+
-  | Jenkins object   | Corresponds to fuego object   |
-  +==================+===============================+
-  | node             | board                         |
-  +------------------+-------------------------------+
-  | job              | test                          |
-  +------------------+-------------------------------+
-  | build            | run                           |
-  +------------------+-------------------------------+
+ * node - the Jenkins object corresponding to a Fuego board 
+ * job - a Jenkins object corresponding to a combination of board, 
+   spec, and test 
+ * build - the test results, from the Jenkins perspective,
+   corresponding to a Fuego 'run'
+
+There are both a front-end and a back-end to the system, and different
+names are used to describe the front-end and back-end objects used by
+the system, to avoid confusion.  In general, Jenkins objects have
+rough counterparts in the Fuego system:
+
+        +------------------+-------------------------------+
+        | Jenkins object   | corresponds to fuego object   |
+        +==================+===============================+
+        | node             | board                         |
+        +------------------+-------------------------------+
+        | job              | test                          |
+        +------------------+-------------------------------+
+        | build            | run                           |
+        +------------------+-------------------------------+
      
 =======================
  Jenkins operations 
 =======================
 
 How does Jenkins work?
- * When the a job is initiated, Jenkins starts a slave process to run
+ * When a job is initiated, Jenkins starts a slave process to run
    the test that corresponds to that job
  * Jenkins records stdout from slave process
- * The slave (slave.jar) runs a script specified in the config.xml
-   for the job
+ * the slave (slave.jar) runs a script specified in the config.xml for
+   the job
 
-   * This script sources functions from the scripts and overlays
-     directory of Fuego, and does the actual building, deploying and
+   * this script sources functions from the scripts and overlays 
+     directory of Fuego, and does the actual building, deploying and 
      test executing
-   * Also, the script does results analysis on the test logs, and calls
-     the post_test operation to collect additional information and cleanup
-     after the test
-
- * While a test is running, Jenkins accumulates the log output from the
-   generated test script and displays it to the user (if they are watching
-   the console log)
- * Jenkins provides a web UI for browsing the nodes, jobs, and test
+   * Also, the script does results analysis on the test logs, and 
+     calls the post_test operation to collect additional information 
+     and clean up after the test
+
+ * while a test is running, Jenkins accumulates the log output from 
+   the generated test script and displays it to the user (if they are 
+   watching the console log)
+
+ * Jenkins provides a web UI for browsing the nodes, jobs, and test 
    results (builds), and displaying graphs for benchmark data
 
 ======================
@@ -200,103 +198,112 @@ How do the Fuego scripts work?
 Test execution 
 ======================
 
- * Each test has a base script, that defines a few functions specific
+ * each test has a base script that defines a few functions specific
    to that test (see below)
- * Upon execution, this base script loads additional test variables
-   and function definitions from other files using something called
+ * upon execution, this base script loads additional test variables 
+   and function definitions from other files using something called 
    the overlay generator
- * The overlay generator creates a script containing test variables
+ * the overlay generator creates a script containing test variables 
    for this test run
 
-    * The script is created in the run directory for the test
-    * The script is called prolog.sh
-    * The overlay generator is called ovgen.py
- * The base script (with the test variable script sourced into it)
-   runs on the host, and uses fuego functions to perform different
+   * the script is created in the run directory for the test
+   * the script is called prolog.sh
+   * the overlay generator is called ovgen.py
+
+ * the base script (with the test variable script sourced into it) 
+   runs on the host, and uses fuego functions to perform different 
    phases of the test
- * For a detailed flow graph of normal test execution see:
+ * for a detailed flow graph of normal test execution see:  
    :ref:`test execution flow outline <Outline>`
 
 ================================
-Test variable file generation 
+Test variable file generation
 ================================
 
- * The generator takes the following as input:
-    * environment variables passed by Jenkins
-    * board file for the target (specified with NODE_NAME)
-    * tools.sh (vars from tools.sh are selected with TOOLCHAIN,
-      from the board file)
-    * the distribution file, and (selected with DISTRIB)
-    * the testplans for the test (selected with TESTPLAN)
-    * test specs for the test
-
-The generator produces the test variable file, which it places
-in the "run" directory for a test, which has the name ``prolog.sh``
-This generation happens on the host, inside the docker container.
-This test variable file has all the functions which are available to
-be called by the base test script, as well as test variables
-from various source in the test system.
+ * the generator takes the following as input:
+
+   * environment variables passed by Jenkins
+   * board file for the target (specified with NODE_NAME)
+   * tools.sh (vars from tools.sh are selected with TOOLCHAIN, from 
+     the board file)
+   * the distribution file, and (selected with DISTRIB)
+   * the testplans for the test (selected with TESTPLAN)
+   * test specs for the test
+
+ * the generator produces the test variable file
+ * the test variable file is in the "run" directory for a test, and
+   has the name: prolog.sh
+ * this generation happens on the host, inside the docker container
+ * the test variable file has functions which are available to be 
+   called by the base test script
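
As a rough sketch, a generated prolog.sh consists mainly of variable
assignments (and function definitions) gathered from those inputs; the
values below are invented for illustration:

```shell
# Hypothetical excerpt of a generated prolog.sh (the real file is
# produced by ovgen.py; these values are invented).
NODE_NAME="myboard"                # from the Jenkins environment
TOOLCHAIN="arm-linux-gnueabihf"    # selects toolchain vars in tools.sh
DISTRIB="base.dist"                # distribution file
TESTPLAN="testplan_smoketest"      # testplan used for this run
BOARD_TESTDIR="/home/fuego"        # where tests are placed on the target

# test variables from the spec, exported for use on the target
# (this variable name is made up for the example)
export FUNCTIONAL_HELLO_ARG="-quiet"
```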
 
 .. image:: ../images/fuego-script-generation.png
    :width: 600
 
 Input
 ======
- * Input descriptions:
-    * the board file has variables defining attributes of the board,
-      like the toolchain, network address, method of accessing the
-      board, etc.
-    * The tools.sh script has variables which are used for identifying the
-      toolchain used to build binary test programs
-
-       * It uses the TOOLCHAIN variable to determine the set of variables
-         to define
-
-   * A testplan lists multiple tests to run
-      * It specifies a test name and spec for each one
-      * a spec file holds the a set of variable declarations which
-        are used by the tests themselves.
-        These are put into environment variables on the target.
-
- * ovgen.py reads the plans, board files, distrib files and specs,
-   and produces a single prolog.sh file that has all the information
-   for the test 
+
+ * input descriptions:
+
+   * the board file has variables defining attributes of the board, 
+     like the toolchain, network address, method of accessing the 
+     board, etc.
+   * tools.sh has variables which are used for identifying the 
+     toolchain used to build binary test programs
+
+     * it uses the TOOLCHAIN variable to determine the set of 
+       variables to define
+
+   * a testplan lists multiple tests to run
+
+     * it specifies a test name and spec for each one
+
+     * a spec file holds a set of variable declarations which are
+       used by the tests themselves.
+       These are put into environment variables on the target.
+
+ * ovgen.py reads the plans, board files, distrib files and specs,
+   and produces a single prolog.sh file that has all the information
+   for the test
 
  * Each test in the system has a fuego shell script
 
-    * This must have the same name as the base name of the test:
-       * \<base_test_name>.sh
+   * this must have the same name as the base name of the test:
+
+     * \<base_test_name>.sh
 
  * Most (but not all) tests have an additional test program
 
-    * this program is executed on the board (the device under test)
-    * it is often a compiled program, or set of programs
-    * it can be a simple shell script
-    * it is optional - sometime the base script can execute the
-      needed commands for a test without an additional program
-      placed on the board
+   * this program is executed on the board (the device under test)
+   * it is often a compiled program, or set of programs
+   * it can be a simple shell script
+   * it is optional - sometimes the base script can execute the
+     needed commands for a test without an additional program placed
+     on the board
+
+ * the base script declares the tarfile for the test, and has 
+   functions for: test_build(), test_deploy() and test_run()
 
- * The base script declares the tarfile for the test, and has functions
-   for: test_build(), test_deploy() and test_run()
+   * the test script is run on host (in the container)
 
-    * The test script is run on host (in the container)
-       * but it can include commands that will run on the board
-    * tarball has the tarfile 
-    * test_build() has commands (which run in the container) to compile
-      the test program
-    * test_deploy() has commands to put the test programs on the target
-    * test_run() has commands to define variables, execute the actual
-      test, and log the results.
+     * but it can include commands that will run on the board
 
- * The test program is run on the target
+   * tarball has the tarfile 
+   * test_build() has commands (which run in the container) to compile 
+     the test program
+   * test_deploy() has commands to put the test programs on the target
+   * test_run() has commands to define variables, execute the actual 
+     test, and log the results.
 
-    * This is the actual test program that runs and produces a result
+ * the test program is run on the target
+
+   * this is the actual test program that runs and produces a result
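
Putting these pieces together, a minimal base script might look like
the sketch below.  The test name and commands are invented; 'put' and
'report' are the Fuego functions this section describes, so the
script only does useful work when run under Fuego:

```shell
# Hypothetical base script sketch: Functional.hello_world.sh
# 'put' and 'report' are provided by the Fuego core at run time.
tarball=hello_world.tar.gz

test_build() {
    # runs on the host, in the container; often just 'make'
    make
}

test_deploy() {
    # copy the built program into the test directory on the target
    put hello_world $BOARD_TESTDIR/fuego.$TESTDIR/
}

test_run() {
    # execute the test program on the target and log its output
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello_world"
}
```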
 
 ====================
 fuego test phases 
 ====================
 
-A test execution in fuego runs through several phases, some of which
+A test execution in fuego runs through several phases, some of which 
 are optional, depending on the test.
 
 The test phases are:
@@ -339,12 +346,12 @@ software.
 
 This phase is split into multiple parts:
  * pre_build - build workspace is created, a build lock is acquired
-   and the tarball is unpacked
+   and the tarball is unpacked
 
-    * :ref:`unpack <unpack>` is called during pre_build
+   * :ref:`unpack <unpack>` is called during pre_build
  * test_build - this function, from the base script, is called
 
-    * Usually this consists of 'make', or 'configure ; make'
+   * usually this consists of 'make', or 'configure ; make'
  * post_build - (empty for now)
 
 deploy
@@ -356,10 +363,11 @@ required supporting files, to the target.
 This consists of 3 sub-phases:
  * pre_deploy - cd's to the build directory
  * test_deploy - the base script's 'test_deploy' function is called.
-    * Usually this consists of tarring up needed files, copying them
-      to the target with 'put', and then extracting them there 
-    * Items should be placed in the directory
-      $BOARD_TESTDIR/fuego.$TESTDIR/ directory on the target
+
+   * Usually this consists of tarring up needed files, copying them to 
+     the target with 'put', and then extracting them there 
+   * Items should be placed in the directory 
+     $BOARD_TESTDIR/fuego.$TESTDIR/ directory on the target
  * post_deploy - removes the build lock
 
 run
@@ -415,47 +423,49 @@ Also, a final analysis is done on the system logs is done in this step
 phase relation to base script functions
 ============================================================
 
-Some of the phases are automatically performed by Fuego, and some end
-up calling a routine in the base script (or use data from the base
-script) to perform their actions.  This table shows the relation
-between the phases and the data and routines that should be defined in
-the base script.
+Some of the phases are automatically performed by Fuego, and some end 
+up calling a routine in the base script (or use data from the base 
+script) to perform their actions.  This table shows the relation 
+between the phases and the data and routines that should be defined 
+in the base script.
 
 It also shows the most common commands utilized by base script
 functions for this phase.
 
 
-  +------------+-------------------------------+------------------------------+
-  | phase      | relationship to base script   | common operations            |
-  +============+===============================+==============================+
-  | pre_test   | calls 'test_pre_check'        |assert_define, is_on_target,  |
-  |            |                               |check_process_is_running      |
-  +------------+-------------------------------+------------------------------+
-  | build      | uses the 'tarfile' definition,|patch,configure,make          |
-  |            | calls'test_build'             |                              |
-  +------------+-------------------------------+------------------------------+
-  | deploy     | Calls 'test_deploy'           | put                          |
-  +------------+-------------------------------+------------------------------+
-  | run        | calls 'test_run'              | cmd,report,report_append     |
-  +------------+-------------------------------+------------------------------+
-  |get_testlog |(none)                         |                              |
-  +------------+-------------------------------+------------------------------+
-  |processing  |calls 'test_processing'        | log_compare                  |
-  +------------+-------------------------------+------------------------------+
-  |post_test   |calls 'test_cleanup'           | kill procs                   |
-  +------------+-------------------------------+------------------------------+
+        +------------+-------------------------------+---------------------------+
+        | phase      | relationship to base script   | common operations         |
+        +============+===============================+===========================+
+        | pre_test   | calls 'test_pre_check'        | assert_define,            |
+        |            |                               | is_on_target,             |
+        |            |                               | check_process_is_running  |
+        +------------+-------------------------------+---------------------------+
+        | build      | uses the 'tarfile' definition,| patch, configure, make    |
+        |            | calls 'test_build'            |                           |
+        +------------+-------------------------------+---------------------------+
+        | deploy     | calls 'test_deploy'           | put                       |
+        +------------+-------------------------------+---------------------------+
+        | run        | calls 'test_run'              | cmd, report,              |
+        |            |                               | report_append             |
+        +------------+-------------------------------+---------------------------+
+        | get_testlog| (none)                        |                           |
+        +------------+-------------------------------+---------------------------+
+        | processing | calls 'test_processing'       | log_compare               |
+        +------------+-------------------------------+---------------------------+
+        | post_test  | calls 'test_cleanup'          | kill procs                |
+        +------------+-------------------------------+---------------------------+
 
 
 other scripts and programs 
 ==============================
 
  * parser.py is used for benchmark tests
-    * It is run against the test log, on the host
-    * It extracts the values from the test log and puts them in a
-      normalized format
-    * These values, called benchmark 'metrics', are compared against
-      pre-defined threshholds to determine test pass or failure
-    * The values are saved for use by plotting software
+
+   * it is run against the test log, on the host
+   * it extracts the values from the test log and puts them in a 
+     normalized format
+   * these values, called benchmark 'metrics', are compared against 
+     pre-defined thresholds to determine test pass or failure
+   * the values are saved for use by plotting software
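
A shell analogue of that extraction, with an invented log format (the
real work is done per-test by parser.py, in Python):

```shell
# Hypothetical benchmark log line and a shell analogue of the metric
# extraction and threshold check that parser.py performs on the host.
log="Throughput 123.45 MB/sec  procs 4"

# pull the numeric value (the 'metric') out of the log
metric=$(echo "$log" | awk '/Throughput/ { print $2 }')

# compare it against a pre-defined threshold to decide pass or fail
threshold=100
if awk -v m="$metric" -v t="$threshold" 'BEGIN { exit !(m >= t) }'; then
    result=PASS
else
    result=FAIL
fi
# the metric name below is made up for the example
echo "dbench.throughput=$metric ($result)"
```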
+
 
 ==============
  Data Files 
@@ -480,7 +490,7 @@ The base shell script should:
  * execute the tests
  * read the log data from the test
 
-The base shell script can handle host/target tests (because it runs on
+The base shell script can handle host/target tests (because it runs on 
 the host).
 
 (That is, tests that involve actions on both the host and target.
@@ -500,9 +510,13 @@ specifies the board, spec and base script for the test.
 ========
 
 Human roles:
- * test program author - person who creates a new standalone test program
- * test integrator - person who integrates a standalone test into fuego
- * fuego developer - person who modifies Fuego (including the fuego system scripts or Jenkins) to support more test scenarios or additional features
+ * test program author - person who creates a new standalone test 
+   program
+ * test integrator - person who integrates a standalone test into 
+   fuego
+ * fuego developer - person who modifies Fuego (including the fuego 
+   system scripts or Jenkins) to support more test scenarios or 
+   additional features
  * tester - person who executes tests and evaluates results
 
 =================
@@ -514,3 +528,24 @@ their interactions at:
 
  * :ref:`Fuego Developer Notes <Devref>`
 
+
+
diff --git a/docs/rst_src/Artwork.rst b/docs/rst_src/Artwork.rst
index b4b89d6..f5868fe 100644
--- a/docs/rst_src/Artwork.rst
+++ b/docs/rst_src/Artwork.rst
@@ -1,9 +1,11 @@
+.. _artwork:
 
 #########
 Artwork
 #########
 
-This page has artwork (logos, photos and images) for use in Fuego presentations and documents.
+This page has artwork (logos, photos and images) for use in Fuego
+presentations and documents.
 
 =======
 Logos 
@@ -83,7 +85,8 @@ images
 Photos 
 =========
 
- * Tim Bird presenting Introduction to Fuego at ELC 2016 (Youtube video framegrab) (png, 370x370)
+ * Tim Bird presenting Introduction to Fuego at ELC 2016 (Youtube
+   video framegrab) (png, 370x370)
 
   .. image:: ../images/Youtube-Intro-to-Fuego-ELC-2016-square.png
      :height: 100
diff --git a/docs/rst_src/Building_Documentation.rst b/docs/rst_src/Building_Documentation.rst
index 4a56a22..861a44a 100644
--- a/docs/rst_src/Building_Documentation.rst
+++ b/docs/rst_src/Building_Documentation.rst
@@ -4,16 +4,19 @@
 Building Documentation
 ##########################
 
-As of July, 2020, the Fuego documentation is currently available in 3 places:
+As of July, 2020, the Fuego documentation is currently available in 3
+places:
 
- * the fuego-docs.pdf generated from TEX files in the fuego/docs/source directory
- * the Fuegotest wiki, located at: `<https://fuegotest.org/wiki/Documentation>`_
+ * the fuego-docs.pdf generated from TEX files in the fuego/docs/source 
+   directory
+ * the Fuegotest wiki, located at: 
+   `<https://fuegotest.org/wiki/Documentation>`_
  * .rst files in fuego/docs
 
-The fuego-docs.pdf file is a legacy file that is several years old.  It
-is only kept around for backwards compatibility.  It might be worthwhile
-to scan it and see if any information is in it that is not in the wiki
-and migrate it to the wiki.
+The fuego-docs.pdf file is a legacy file that is several years old.
+It is only kept around for backwards compatibility.  It might be
+worthwhile to scan it and see if any information is in it that is not
+in the wiki and migrate it to the wiki.
 
 The fuegotest wiki has the currently-maintained documentation for the
 project.  But there are several issues with this documentation:
@@ -26,9 +29,9 @@ project.  But there are several issues with this documentation:
 
  - there is a mixture of information in the wiki
 
-   - not just documentation, but a crude issues tracker, random technical notes
-     testing information, release information and other data that should not be
-     part of official documentation
+   - not just documentation, but a crude issues tracker, random
+     technical notes, testing information, release information and
+     other data that should not be part of official documentation
 
 The .rst files are intended to be the future documentation source for
 the project.
@@ -49,7 +52,7 @@ building the RST docs
 ===========================
 
 The RST docs can be built in several different formats, including
-text, html, and pdf.  You can type 'make help' to get a list of the 
+text, html, and pdf.  You can type 'make help' to get a list of the
 possible build targets for this documentation.  Output is always
 directed to a directory under fuego/docs/_build.
 
diff --git a/docs/rst_src/FAQ.rst b/docs/rst_src/FAQ.rst
new file mode 100644
index 0000000..001a008
--- /dev/null
+++ b/docs/rst_src/FAQ.rst
@@ -0,0 +1,49 @@
+.. _faq:
+
+#####
+FAQ
+#####
+
+Here is a list of Frequently Asked Questions and Answers about Fuego:
+
+===========================
+Languages and formats used
+===========================
+
+Q. Why does Fuego use shell scripting as the language for tests?
+==================================================================
+
+There are other computer languages which have more advanced features
+(such as data structures, object orientation, rich libraries,
+concurrency, etc.) than shell scripting.  It might seem odd that shell
+scripting was chosen as the language for implementing the base scripts
+for the tests in fuego, given the availability of these other
+languages.
+
+The Fuego architecture is specifically geared toward host/target
+testing.  In particular, tests often perform a variety of operations
+on the target in addition to the operations that are performed on the
+host.  When the base script for a test runs on the host machine,
+portions of the test are invoked on the target.  It is still true
+today that the most common execution environment (besides native code)
+that is available on almost every embedded Linux system is a
+POSIX-compliant shell.  Even devices with very tight memory
+requirements usually have a busybox 'ash' shell available.
+
+In order to keep the base script consistent, Fuego uses shell
+scripting on both the host and target systems.  Shell operations are
+performed on the target using 'cmd', 'report' and 'report_append'
+functions provided by Fuego.
+
+Note that Fuego officially uses 'bash' as the shell on the host, but
+does not require a particular shell implementation to be available on
+the target.  Therefore, it is important to use only POSIX-compatible
+shell features for those aspects of the test that run on target.
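
For example, the following constructs work in both bash and a POSIX
shell such as busybox 'ash', unlike the bash-only counterparts noted
in the comments (the values are arbitrary):

```shell
#!/bin/sh
# POSIX-safe constructs for the target side of a test
# (bash-only forms such as [[ ... ]], arrays and ${var,,} are avoided).

status=ok

# POSIX test: single brackets, '=' for string comparison
if [ "$status" = "ok" ]; then
    msg="target is up"
fi

# POSIX arithmetic expansion instead of bash's 'let' or (( ))
count=$(( 2 + 3 ))

# POSIX parameter expansion instead of bash substring syntax
file="results.log"
base=${file%.log}

echo "$msg count=$count base=$base"
```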
+
+
diff --git a/docs/rst_src/FrontPage.rst b/docs/rst_src/FrontPage.rst
index d68cafa..4f9e7a3 100644
--- a/docs/rst_src/FrontPage.rst
+++ b/docs/rst_src/FrontPage.rst
@@ -1,3 +1,4 @@
+.. _front-page:
 
 ##################
 Fuego Test System
@@ -28,7 +29,9 @@ in 2016.  The slides and a video are provided below, if you want
 to see an overview and introduction to Fuego.
 
 The slides are here:
-`Introduction-to-Fuego-LCJ-2016.pdf <http://fuegotest.org/ffiles/Introduction-to-Fuego-LCJ-2016.pdf>`_, along with a 
+`Introduction-to-Fuego-LCJ-2016.pdf 
+<http://fuegotest.org/ffiles/Introduction-to-Fuego-LCJ-2016.pdf>`_, 
+along with a 
 `YouTube video <https://youtu.be/AueBSRN4wLk>`_.
 You can find more presentations about Fuego on our wiki at:
 `<http://fuegotest.org/wiki/Presentations>`_.
@@ -39,12 +42,13 @@ Getting Started
 ================
 
 There are a few different ways to get started with Fuego:
- 1. Use the `Fuego Quickstart Guide <quickstart_guide>`_ to
+ 1. Use the :ref:`Fuego Quickstart Guide <quickstart>` to
    get Fuego up and running quickly.
- 2. Or go through our `Install and First Test <install_and_first_test>`_
+ 2. Or go through our :ref:`Install and First Test 
+    <install_and_first_test>`
     tutorial to install Fuego and run a test on a single "fake" board.
-    This will give you an idea of basic Fuego operations, without having to
-    configure Fuego for your own board
+    This will give you an idea of basic Fuego operations, without 
+    having to configure Fuego for your own board
  3. Work through the documentation for :ref:`Installation <installfuego>`
 
 Where to download 
@@ -57,20 +61,20 @@ Code for the test framework is available in 2 git repositories:
 The fuego-core directory resides inside the fuego directory.
 But normally you do not clone that repository directly.  It is cloned
 for you during the Fuego install process.  See the
-`Fuego Quickstart Guide <quickstart_guide>`_ or the
+:ref:`Fuego Quickstart Guide <quickstart>` or the
 :ref:`Installing Fuego <installfuego>` page for more information.
 
 ===============
 Documentation 
 ===============
-For more complete documentation, see the following areas:
+See the index below for links to the major sections of the documentation
+for Fuego.  The major sections are:
 
- * Installation_ has information about installation and
-   administration documentation for Fuego.
- * `User Guides <user-guides>`_ has User documentation for Fuego.
- * `Developer Info <developer_info>`_ has information for test developers and
-   people who want to extend Fuego
- * `Reference Material <reference_material>`_ has APIs and other material about Fuego
+ * :ref:`Tutorials <tutor>`
+ * :ref:`Installation and Administration <admin>`
+ * :ref:`User Guides <user_guides>`
+ * :ref:`Developer Resources <dev_res>`
+ * :ref:`API Reference <api_rex>`
 
 ============
 Resources
@@ -112,7 +116,9 @@ It can be summed up like this:
 ..
    FIXTHIS - 'admonition:: Vision' didn't work with rtd theme
 
-.. Note:: Do for testing what open source has done for coding
+.. note::
+   Do for testing
+   what open source has done for coding
 
 There are numerous aspects of testing that are still done in an ad-hoc
 and company-specific way.  Although there are open source test
@@ -164,7 +170,7 @@ deploy and run them, and tools to analyze, track, and visualize test
 results.
 
 For more details about a high-level vision of open source testing,
-please see `OSS Test Vision <oss>`_.
+please see  :ref:`OSS Test Vision <oss>`.
 
 ================
 Other Resources 
@@ -174,13 +180,14 @@ Historical information
 ----------------------
 
 
-`<http://elinux.org/Fuego>`_ has some historical information about Fuego.
+`<http://elinux.org/Fuego>`_ has some historical information about 
+Fuego.
 
 Related systems
 ---------------
  
-See :ref:`Other test systems <ots>` for notes about other test frameworks
-and comparisons between Fuego and those other systems.
+See :ref:`Other test systems <ots>` for notes about other test 
+frameworks and comparisons between Fuego and those other systems.
 
 Things to do 
 ------------
diff --git a/docs/rst_src/Fuego_Quickstart_Guide.rst b/docs/rst_src/Fuego_Quickstart_Guide.rst
new file mode 100644
index 0000000..9d68c04
--- /dev/null
+++ b/docs/rst_src/Fuego_Quickstart_Guide.rst
@@ -0,0 +1,255 @@
+.. _quickstart:
+
+#######################
+Fuego Quickstart Guide
+#######################
+
+
+Running tests from Fuego on your hardware can be accomplished in a few
+simple steps.
+
+*Note: this is the quickstart guide.  More details and explanations*
+*can be found on the* :ref:`Installing Fuego <installfuego>` page.
+
+=========
+Overview
+=========
+
+The overview of the steps is:
+ 1. install pre-requisite software
+ 2. download the fuego repository
+ 3. build your fuego container
+ 4. start the container
+ 5. access the interface
+ 6. add your board to fuego
+ 7. run a test
+
+These steps are described below.
+
+===============================
+Install pre-requisite software
+===============================
+
+To retrieve the fuego software and create the docker image for it, you
+need to have git and docker installed on your system.
+
+On Ubuntu, try the following commands: ::
+
+   $ sudo apt install git docker.io
+
+===================================
+Download, build, start and access
+===================================
+
+To accomplish steps 2 through 5, do the following from a Linux
+command prompt: ::
+
+  $ git clone https://bitbucket.org/fuegotest/fuego.git
+  $ cd fuego
+  $ ./install.sh
+  $ ./start.sh
+  $ firefox http://localhost:8090/fuego
+
+
+The './install.sh' step will take some time - about 45
+minutes on my machine.  This is the main step that builds the Fuego
+docker container.
+
+When you run the 'start.sh' script, the terminal where this is run
+will be placed at a shell prompt, as the root user, inside the docker
+container.  The container will run until you exit this shell.  You
+should leave it running for the duration of your testing.
+
+*Note: If you are experimenting with the unreleased version of Fuego*
+*in the 'next' branch, then please replace the 'git clone' command in*
+*the instructions above with this:*
+
+ * git clone -b next https://bitbucket.org/fuegotest/fuego.git
+
+On the last step, to access the Fuego interface you can use any
+browser - not just Firefox.  By default the Fuego interface runs on
+your host machine, on port 8090, with URL path "/fuego".
+
+In your browser, you should see a screen similar to the following:
+
+.. image:: ../images/fuego-1.1-jenkins-dashboard-new.png
+   :width: 900
+
+We will now add items to Fuego (and this screen) so you can begin
+testing.
+
+==========================
+Add your board to fuego
+==========================
+
+To add your own board to Fuego, there are three main steps:
+ 1. create a test directory on the target
+ 2. create a board file (on the host)
+ 3. add your board to the Jenkins interface
+
+You can find detailed instructions for adding a board at:
+:ref:`Adding a board <adding_board>`
+
+However, here is a quick list of steps you can follow to add
+your own board, and a sample 'docker' board, to Fuego:
+
+Create a test directory on your board 
+========================================
+
+Log in to your board, and create a directory to use for testing: ::
+
+ $ ssh root@your_board
+ <board>$ mkdir /home/a
+ <board>$ exit
+
+
+If not using ssh, use whatever method you normally use to
+access the board.
+
+Create board file
+===================
+
+Now, create your board file.  The board file resides in
+<fuego-dir>/fuego-ro/boards, and has a filename with the name of the
+board, with the extension ".board".
+
+Do the following: ::
+
+ $ cd fuego-ro/boards
+ $ cp template-dev.board myboard.board
+ $ vi myboard.board
+
+
+Edit the variables in the board file to match your board.
+Most variables can be left alone, but you will need
+to change the IPADDR, TOOLCHAIN and ARCHITECTURE variables,
+and set the BOARD_TESTDIR to the directory
+you just created above.
+
+For other variables in the board file, or specifically to use
+a different transport than SSH, see more complete instructions
+at: :ref:`Adding a board <adding_board>`
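As an illustrative sketch (the values below are made up; consult template-dev.board for the authoritative variable list), an edited board file might contain entries like:

```shell
# myboard.board - example values only; adjust for your own hardware
IPADDR="192.168.1.45"          # network address of the board
BOARD_TESTDIR="/home/a"        # test directory created on the target above
TOOLCHAIN="debian-armhf"       # must match a toolchain installed in the container
ARCHITECTURE="arm"             # CPU architecture of the target
```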
+
+Add boards to the Jenkins interface
+====================================
+
+Finally, add the board in the Jenkins interface.
+
+In the Jenkins interface, boards are referred to as "Nodes".
+
+At the container shell prompt, run the following command:
+
+ * (container prompt)$ ftc add-nodes -b myboard docker
+
+This will add your board as a node, as well as a 'docker' node in the
+Jenkins interface.
+
+=====================
+Install a toolchain
+=====================
+
+If you just wish to experiment with Fuego, without installing your
+own board, you can use the existing 'docker' board.  This will run the
+tests inside the docker container on your host machine. This requires
+little setup, and is intended to let people try Fuego to see how the
+interface and tests work, without having to set up their own board.
+
+If you are running an ARM board with a Debian-based distribution on it,
+you can install the Debian ARM cross-compilers into the docker container
+with the following command (inside the container):
+
+ * (container prompt)$ /fuego-ro/toolchains/install_armhf_toolchain.sh
+
+If you are installing some other kind of board (different
+architecture, different root filesystem layout, or different shared
+library set), you will need to install a toolchain for your board
+inside the docker container.
+
+Please follow the instructions at:
+:ref:`Adding a toolchain <addtoolchain>` to do this.
+
+======================
+Now select some tests
+======================
+
+In order to execute tests using the Jenkins interface, you need to
+create Jenkins "jobs" for them.  You can do this using the 'ftc
+add-jobs' command.
+
+These commands are also executed at the shell prompt in the docker
+container.
+
+You can add jobs individually, or you can add a set of jobs all at
+once based on something called a 'testplan'.  A testplan is a list of
+Fuego tests with some options for each one.  You can see the list of
+testplans in your system with the following command:
+
+ * (container prompt)$ ftc list-plans
+
+To create a set of jobs for the 'docker' board on the system, do the
+following:
+
+ * (container prompt)$ ftc add-jobs -b docker -p testplan_docker
+
+To create a set of jobs for your own board (assuming you called it
+'myboard'), do the following:
+
+ * (container prompt)$ ftc add-jobs -b myboard -p testplan_smoketest
+
+The "smoketest" testplan has about 20 tests that exercise a variety of
+features in a Linux system.  After running these commands, a set of
+jobs will appear in the Jenkins interface.
+
+Once this is done, your Jenkins interface should look something like 
+this:
+
+.. image:: ../images/fuego-1.1-jenkins-dashboard-beaglebone-jobs.png
+   :width: 900
+
+=============
+Run a test 
+=============
+
+To run a job manually, you can do the following:
+ * Go to the Jenkins dashboard (in the main Jenkins web page),
+ * Select the job (which includes the board name and the test name)
+ * Click “Build job”  (Jenkins refers to running a test as "building" 
+   it.)
+
+You can also click on the circle with a green triangle, on the far 
+right of the line with the job name, in the Jenkins dashboard.
+
+When the test has completed, the status will be shown by a colored
+ball by the side of the test in the dashboard.  Blue means success,
+red means failure, and grey means the test did not complete (was not
+run or was aborted).  You can get details about the test run by
+clicking on the link in the history list.
+
+==================
+Additional Notes
+==================
+
+Other variables in the board file
+==================================
+
+Depending on the test you want to run, you may need to define some
+other variables that are specific to your board or the configuration
+of the filesystem on it.  Please see 
+:ref:`Adding a board <adding_board>` for detailed instructions and a 
+full list of variables that may be used on the target.
+
+The Jenkins interface
+========================
+
+See :ref:`Jenkins User Interface <jUsrinterface>` for more screenshots
+of the Jenkins web interface.  This will help familiarize you with
+some of the features of Jenkins, if you are new to using this tool.
+
+=================
+Troubleshooting
+=================
+
+If you have problems installing or using Fuego, please see our
+:ref:`Troubleshooting Guide <troubleshootingguide>`.
+
+
diff --git a/docs/rst_src/Fuego_naming_rules.rst b/docs/rst_src/Fuego_naming_rules.rst
index 3cb172a..8cd4446 100644
--- a/docs/rst_src/Fuego_naming_rules.rst
+++ b/docs/rst_src/Fuego_naming_rules.rst
@@ -18,7 +18,8 @@ Fuego test name
    * 'Functional.'
    * 'Benchmark.'
 
- * the name following the prefix is known as the base test name, and has the following rules:
+ * the name following the prefix is known as the base test name, and 
+   has the following rules:
 
    * it may only use letters, numbers and underscores
 
@@ -26,7 +27,8 @@ Fuego test name
 
    * it may use upper and lower case letters
 
- * All test definition materials reside in a directory with the full test name:
+ * All test definition materials reside in a directory with the full 
+   test name:
 
    * e.g. Functional.hello_world
 
@@ -75,7 +77,8 @@ The following sections describe the names used for these elements.
 Node name
 ===================
 
- * A Jenkins node corresponding to a board must have the same name as the board.
+ * A Jenkins node corresponding to a board must have the same name as 
+   the board.
 
    * e.g. beaglebone
 
@@ -83,7 +86,8 @@ Job name
 =============
 
  * A Jenkins job is used to execute a test.
- * Jenkins job names should consist of these parts: <board>.<spec>.<test_name>
+ * Jenkins job names should consist of these parts:
+   <board>.<spec>.<test_name>
 
    * e.g. beaglebone.default.Functional.hello_world
 
@@ -93,14 +97,17 @@ Job name
 Run identifier 
 ===================
 
-A Fuego run identifier is used to refer to a "run" of a test - that is a particular invocation of a test and it's resulting output, logs and artifacts.
-A run identifier should be unique throughout the world, as these are used
-in servers where data from runs from different hosts are stored.
+A Fuego run identifier is used to refer to a "run" of a test - that is
+a particular invocation of a test and its resulting output, logs and
+artifacts.  A run identifier should be unique throughout the world, as
+these are used in servers where data from runs from different hosts
+are stored.
 
-The parts of a run id are separated by dashes, except that the separator
-between the host and the board is a colon.
+The parts of a run id are separated by dashes, except that the 
+separator between the host and the board is a colon.
 
-A fully qualified (global) run identifier consist of the following parts:
+A fully qualified (global) run identifier consists of the following
+parts:
 
  * test name
  * spec name
@@ -109,16 +116,17 @@ A fully qualified (global) run identifier consist of the following parts:
  * host
  * board
 
-FIXTHIS - global run ids should include timestamps to make them globally unique for all time
+FIXTHIS - global run ids should include timestamps to make them
+globally unique for all time
 
 
 Example:
 Functional.LTP-quickhit-3-on-timdesk:beaglebone
 
 
-A shortened run identifier may omit the *on* and *host*.  This is referred to
-as a local run id, and is only valid on the host where the run was
-produced.
+A shortened run identifier may omit the *on* and *host*.  This is
+referred to as a local run id, and is only valid on the host where the
+run was produced.
 
 Example:
  * Functional.netperf-default-2-minnow
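The composition rules above (dash separators, with a colon between host and board) can be sketched with a pair of hypothetical helpers; these are illustrative only and not actual Fuego APIs:

```shell
#!/bin/sh
# Illustrative helpers composing run identifiers per the rules above.
# Arguments: test name, spec name, run number, [host,] board.

make_global_run_id() {
    # dashes between parts, 'on' before the host, colon before the board
    echo "$1-$2-$3-on-$4:$5"
}

make_local_run_id() {
    # local form omits 'on' and the host
    echo "$1-$2-$3-$4"
}

make_global_run_id Functional.LTP quickhit 3 timdesk beaglebone
# -> Functional.LTP-quickhit-3-on-timdesk:beaglebone
make_local_run_id Functional.netperf default 2 minnow
# -> Functional.netperf-default-2-minnow
```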
@@ -131,17 +139,20 @@ timestamp
 
    * e.g. 2017-03-29_10:25:14
 
- * times are expressed in localtime (relative to the host where they are created)
+ * times are expressed in localtime (relative to the host where they 
+   are created)
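For reference, the timestamp format shown above corresponds to the following strftime pattern (an aside for orientation; Fuego generates these identifiers itself):

```shell
# Produce a timestamp in the same local-time format as above,
# e.g. 2017-03-29_10:25:14
date +%Y-%m-%d_%H:%M:%S
```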
 
 ====================
 test identifiers 
 ====================
 
Also known as TGUIDs (or test globally unique identifiers), a test
-identifier refers to a single unit of test operation or result from the test system.
-A test identifier may refer to a testcase or an individual test measure.
+identifier refers to a single unit of test operation or result from
+the test system.  A test identifier may refer to a testcase or an
+individual test measure.
 
-They consist of a several parts, some of which may be omitted in some circumstances
+They consist of several parts, some of which may be omitted in some
+circumstances.
 
 The parts are:
 
@@ -154,14 +165,15 @@ Legal characters for these parts are letters, numbers, and underscores.
 Only testset names may include a period ("."), as that is used as the
 separator between constituent parts of the identifier. 
 
-testcase identifiers should be consistent from run-to-run of a test, and
-should be globally unique.
+testcase identifiers should be consistent from run-to-run of a test, 
+and should be globally unique.
 
 test identifiers may be in fully-qualified form, or in shortened
 form - missing some parts.  The following rules are used to convert
from shortened forms to fully-qualified forms.
 
-If the testsuite name is missing, then the base name of the test is used.
+If the testsuite name is missing, then the base name of the test is 
+used.
 
  * e.g. Functional.jpeg has a default testsuite name of "jpeg"
 
@@ -172,23 +184,34 @@ A test id may refer to one of two different items:
  * a testcase id
  * a measure id
 
-A fully qualified test identifier consists of a testsuite name, testset name and a testcase name.  Shortened names may be used, in which case default values will be used for some parts, as follows:
+A fully qualified test identifier consists of a testsuite name,
+testset name and a testcase name.  Shortened names may be used, in
+which case default values will be used for some parts, as follows:
 
-If a result id has only 1 part, it is the testcase name. The testset name is considered to be *default*, and the testsuite name is the base name of the test.
+If a result id has only 1 part, it is the testcase name. The testset
+name is considered to be *default*, and the testsuite name is the base
+name of the test.
 
-That is, for the fuego test Functional.jpeg, a shortened tguid of *test4*, the fully qualified
-name would be:
+That is, for the fuego test Functional.jpeg, a shortened tguid of
+*test4*, the fully qualified name would be:
 
  * jpeg.default.test4
 
-If a result id has 2 parts, then the first part is the testset name and the second is the testcase name, and the testsuite name is the base name of the test.
+If a result id has 2 parts, then the first part is the testset name
+and the second is the testcase name, and the testsuite name is the
+base name of the test.
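The expansion rules above can be sketched in a few lines of shell; 'expand_tguid' is a hypothetical helper written for illustration, not part of Fuego itself:

```shell
#!/bin/sh
# Illustrative sketch of the shortened-form expansion rules above.
# Usage: expand_tguid <base_test_name> <shortened_id>
expand_tguid() {
    base="$1"; id="$2"
    nparts=$(echo "$id" | awk -F. '{print NF}')
    case "$nparts" in
        1) echo "$base.default.$id" ;;   # testcase only: default testset
        2) echo "$base.$id" ;;           # testset.testcase
        *) echo "$id" ;;                 # already fully qualified
    esac
}

expand_tguid jpeg test4
# -> jpeg.default.test4
```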
 
 measure id
 ===============
 
-A measure identifier consists of a testsuide id, testset id, testcase id and measure name.
+A measure identifier consists of a testsuite id, testset id, testcase
+id and measure name.
 
-A shortened measure id may not consist of less than 2 parts.  If it only has 2 parts, the first part is the testcase id, and the second part is the measure name.  In all cases the last part of the name is the measure name, the second-to-last part of the name is the testcase name.
+A shortened measure id may not consist of fewer than 2 parts.  If it
+only has 2 parts, the first part is the testcase id, and the second
+part is the measure name.  In all cases, the last part of the name is
+the measure name, and the second-to-last part of the name is the
+testcase name.
 
 If there are three parts, the first part is the testset name.
 
@@ -209,22 +232,43 @@ Dependency check variables
 The following is the preferred format for variables used in dependency
 checking code:
 
- * **PROGRAM_FOO** - require program 'foo' on target.  The program name is upper-cased, punctuation or spaces are replaced with '_', and the name is prefixed with 'PROGRAM\_'.  The value of variable is full path on target where program resides.
+ * **PROGRAM_FOO** - require program 'foo' on target.  The program
+   name is upper-cased, punctuation or spaces are replaced with '_',
+   and the name is prefixed with 'PROGRAM\_'.  The value of variable
+   is full path on target where program resides.
 
     * ex: PROGRAM_BC=/usr/bin/bc
 
- * **HEADER_FOO** - require header file 'foo' in SDK.  The header filename is stripped of its suffix (I don’t know if that's a good idea or not), upper-cased, punctuation or spaces are replaced with '_', and the name is prefixed with 'HEADER\_'. The value of variable is the full path in the SDK of the header file:
-
-    * ex: HEADER_FOO=/opt/poky2.1.2/sysroots/x86_64-pokysdk-linux/usr/include/foo.h
-
-
- * **SDK_LIB_FOO** - require 'foo' library in SDK.  The library filename is stripped of the 'lib' prefix and .so suffix, upper-cased, punctuation and spaces are replaced with '_', and the name is prefixed with 'SDK_LIB\_'.  The value of the variable is the full path in the SDK of the library.
-
-   * ex: SDK_LIB_FOO=/opt/poky2.1.2/sysroots/x86_64-pokysdk-linux/usr/lib/libfoo.so
-   * Note that in case a static library is required (.a), then the variable name should include that suffix:
-   * ex: SDK_LIB_FOO_A=/opt/poky1.2.1/sysroots/x86_64-pokysdk-linux/usr/lib/libfoo.a
-
- * **TARGET_LIB_FOO** - require 'foo' library on target.  The library filename is stripped of the 'lib' prefix and .so suffix (not sure this is a good idea, as we potentially lose a library version requirement), upper-cased, punctuation and spaces are replaced with '_', and the name is prefixed with 'TARGET_LIB\_'. The value of the variable is  the full path of the library on the target board.
+ * **HEADER_FOO** - require header file 'foo' in SDK.  The header
+   filename is stripped of its suffix (I don’t know if that's a good
+   idea or not), upper-cased, punctuation or spaces are replaced with
+   '_', and the name is prefixed with 'HEADER\_'. The value of
+   variable is the full path in the SDK of the header file:
+
+    * ex:
+      HEADER_FOO=/opt/poky2.1.2/sysroots/x86_64-pokysdk-linux/usr/include/foo.h
+
+
+ * **SDK_LIB_FOO** - require 'foo' library in SDK.  The library
+   filename is stripped of the 'lib' prefix and .so suffix,
+   upper-cased, punctuation and spaces are replaced with '_', and the
+   name is prefixed with 'SDK_LIB\_'.  The value of the variable is
+   the full path in the SDK of the library.
+
+   * ex:
+     SDK_LIB_FOO=/opt/poky2.1.2/sysroots/x86_64-pokysdk-linux/usr/lib/libfoo.so
+   * Note that in case a static library is required (.a), then the 
+     variable name should include that suffix:
+   * ex:
+     SDK_LIB_FOO_A=/opt/poky1.2.1/sysroots/x86_64-pokysdk-linux/usr/lib/libfoo.a
+
+ * **TARGET_LIB_FOO** - require 'foo' library on target.  The library
+   filename is stripped of the 'lib' prefix and .so suffix (not sure
+   this is a good idea, as we potentially lose a library version
+   requirement), upper-cased, punctuation and spaces are replaced with
+   '_', and the name is prefixed with 'TARGET_LIB\_'. The value of the
+   variable is the full path of the library on the target board.
 
    * ex: TARGET_LIB_FOO=/usr/lib/libfoo.so
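The name-mangling convention described above (upper-case the name, replace punctuation and spaces with '_', add the prefix) can be sketched as follows; 'prog_var_name' is a hypothetical illustration of the convention and only derives the variable name, not the path that would be stored in it:

```shell
#!/bin/sh
# Derive a PROGRAM_* dependency-check variable name from a program name:
# upper-case it, replace anything outside [A-Z0-9_] with '_', and
# prepend 'PROGRAM_'.
prog_var_name() {
    printf 'PROGRAM_%s\n' "$(printf '%s' "$1" | tr 'a-z' 'A-Z' | tr -c 'A-Z0-9_' '_')"
}

prog_var_name bc
# -> PROGRAM_BC
```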
 
diff --git a/docs/rst_src/Installing_Fuego.rst b/docs/rst_src/Installing_Fuego.rst
index 379f9ad..87fb140 100644
--- a/docs/rst_src/Installing_Fuego.rst
+++ b/docs/rst_src/Installing_Fuego.rst
@@ -8,8 +8,8 @@ This page describes the steps to install Fuego on your Linux machine.
 It includes detailed descriptions of the operations, for both users
 and developers.
 
-.. Tip:: If you are interested in a quick outline of steps, please see the
-   :ref:`Fuego Quickstart Guide <quickstart_guide>` instead.
+If you are interested in a quick outline of steps, please see the
+:ref:`Fuego Quickstart Guide <quickstart>` instead.
 
 ===========
 Overview
@@ -17,18 +17,18 @@ Overview
 
 The overview of the steps is:
 
- 1. install pre-requisite software
- 2. download the Fuego repository
- 3. build your Fuego container
- 4. start the container
- 5. access the Jenkins interface
+ 1. install pre-requisite software
+ 2. download the Fuego repository
+ 3. build your Fuego container
+ 4. start the container
+ 5. access the Jenkins interface
 
 =================================
 Install pre-requisite software
 =================================
 
-To retrieve the Fuego software and create the docker image for it, you need
-to have git and docker installed on your system.
+To retrieve the Fuego software and create the docker image for it, you
+need to have git and docker installed on your system.
 
 On Ubuntu, try the following commands: ::
 
@@ -94,11 +94,12 @@ of tests, and the main Fuego command line tool 'ftc'.
 Downloading the repository
 ============================
 
-You can use 'git clone' to download the main 'fuego' repository, like so: ::
+You can use 'git clone' to download the main 'fuego' repository, like
+so: ::
 
 
-  $ git clone https://bitbucket.org/fuegotest/fuego.git
-  $ cd fuego
+  $ git clone https://bitbucket.org/fuegotest/fuego.git
+  $ cd fuego
 
 
 After downloading the repositories, switch to the 'fuego' directory,
@@ -110,16 +111,16 @@ repository, which is the current main released version of Fuego.
 Downloading a different branch
 --------------------------------
 
-If you are experimenting with an unreleased version of Fuego
-in the 'next' branch, then please replace the 'git clone' command in the
-instructions above with these: ::
+*NOTE:* If you are experimenting with an unreleased version of Fuego
+in the 'next' branch, then please replace the 'git clone' command in
+the instructions above with these: ::
 
-  $ git clone -b next https://bitbucket.org/fuegotest/fuego.git
-  $ cd fuego
+  $ git clone -b next https://bitbucket.org/fuegotest/fuego.git
+  $ cd fuego
 
 
-This uses '-b next' to indicate a different branch to check out during the
-clone operation.
+This uses '-b next' to indicate a different branch to check out during
+the clone operation.
 
 ============================
 Create the Fuego container
@@ -129,39 +130,41 @@ The third step of the installation is to run install.sh to create the
 Fuego docker container.  While in the 'fuego' directory,
 run the script from the current directory, like so: ::
 
-  $ ./install.sh
+
+  $ ./install.sh
 
 
 install.sh uses docker and the Dockerfile in the fuego directory to
 create a docker container with the Fuego Linux distribution.
 
 This operation may take a long time.  It takes about 45 minutes on my
-machine.  This is due to building a nearly complete distribution of Linux,
-from binary packages obtained from the Internet.
+machine.  This is due to building a nearly complete distribution of
+Linux, from binary packages obtained from the Internet.
 
 This step requires Internet access.  You need to make sure that
 you have proxy access to the Internet if you are behind a corporate
 firewall.
 
-Please see the section
-:ref:`Alternative Installation Configurations <alt_install>` below
-for other arguments to ``install.sh``, or for alternative installation scripts.
+Please see the section "Alternative Installation Configurations" below
+for other arguments to *install.sh*, or for alternative installation
+scripts.
 
 
 Fuego Linux distribution
 ===========================
 
-The Fuego Linux distribution is a distribution of Linux based on Debian Linux,
-with many additional packages and tools installed.  These
-additional packages and tools are required for aspects of Fuego operation,
-and to support host-side processes and services needed by the tests
-included with Fuego.
+The Fuego Linux distribution is a distribution of Linux based on
+Debian Linux, with many additional packages and tools installed.
+These additional packages and tools are required for aspects of Fuego
+operation, and to support host-side processes and services needed by
+the tests included with Fuego.
 
 For example, the Fuego distribution includes
  * the 'Jenkins' continuous integration server
  * the 'netperf' server, for testing network performance.
  * the 'ttc' command, which is a tool for board farm management
- * the python 'jenkins' module, for interacting with Fuego's Jenkins instance
+ * the python 'jenkins' module, for interacting with Fuego's Jenkins 
+   instance
  * and many other tools, programs and modules used by Fuego and its tests
 
 Fuego commands execute inside the Fuego docker container, and Fuego
@@ -186,22 +189,22 @@ the '--priv' options with install.sh: ::
 Customizing the privileged container
 -------------------------------------
 
-Note that using '--priv' causes install.sh to use a different container
-creation script.
-Normally (in the non --priv case), install.sh uses ``fuego-host-scripts/docker-create-container.sh``.
+Note that using '--priv' causes install.sh to use a different
+container creation script.  Normally (in the non --priv case),
+install.sh uses ``fuego-host-scripts/docker-create-container.sh``.
 
-When --priv is used, Fuego uses ``fuego-host-scripts/docker-create-usb-privileged-container.sh``.
+When --priv is used, Fuego uses
+``fuego-host-scripts/docker-create-usb-privileged-container.sh``.
 
 
 ``docker-create-usb-privileged-container.sh`` can be edited, before
 running install.sh, to change the set of hardware devices
 that the docker container will have privileged access to.
 
-This is done
-by adding more bind mount options to the 'docker create' command inside
-this script.  Explaining exactly how to do this is outside the scope
-of this documentation.  Please see documentation and online resources for
-the 'docker' system for information about this.
+This is done by adding more bind mount options to the 'docker create'
+command inside this script.  Explaining exactly how to do this is
+outside the scope of this documentation.  Please see documentation and
+online resources for the 'docker' system for information about this.
 
 The script currently creates bind mounts for:
  * /dev/bus/usb - USB ports, and newly created ports
@@ -210,24 +213,24 @@ The script currently creates bind mounts for:
  * /dev/serial - general serial ports, and newly created ports
 
 If you experience problems with Fuego accessing hardware on your host
-system, you may need to build the Fuego docker container using additional
-bind mounts that are specific to your configuration.  Do so by 
-editing docker-create-used-privileged-container.sh, removing the old container,
-and re-running './install.sh --priv' to build a new container with the
-desired privileges.
+system, you may need to build the Fuego docker container using
+additional bind mounts that are specific to your configuration.  Do so
+by editing docker-create-usb-privileged-container.sh, removing the
+old container, and re-running './install.sh --priv' to build a new
+container with the desired privileges.
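As a sketch of the kind of change involved (abbreviated and illustrative; the device path and container name are examples, and the real 'docker create' line in the script carries more options), adding a bind mount means adding one more '-v' option:

```shell
# Excerpt-style sketch of docker-create-usb-privileged-container.sh:
# add one '-v host_path:container_path' bind-mount option per extra
# device the container needs privileged access to.
docker create -it --name=fuego-container \
    -v /dev/bus/usb:/dev/bus/usb \
    -v /dev/ttyUSB0:/dev/ttyUSB0 \
    fuego /bin/bash
```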
 
 Using an different container name
 ======================================
 
 By default, install.sh creates a docker image called 'fuego' and a
 docker container called 'fuego-container'.  There are some situations
-where it is desirable to use different names.  For example, having different
-container names is useful for Fuego self-testing.  It can also used
-to do A/B testing when
-migrating from one release of Fuego to the next.
+where it is desirable to use different names.  For example, having
+different container names is useful for Fuego self-testing.  It can
+also be used to do A/B testing when migrating from one release of
+Fuego to the next.
+to the next.
 
-You can provide a different name for the Fuego image and container,
-by supplying one on the command line for install.sh, like so: ::
+You can provide a different name for the Fuego image and container, by
+supplying one on the command line for install.sh, like so: ::
 
   $ ./install.sh my-fuego
 
@@ -240,7 +243,8 @@ container named 'my-fuego-container'
 Start the Fuego container 
 ===========================
 
-To start the Fuego docker container, use the 'start.sh' script. ::
+To start the Fuego docker container, use the 'start.sh' script. ::
+
 
   $ ./start.sh
 
@@ -266,16 +270,18 @@ ran 'start.sh' from) running for the duration of your testing.
 Access the Fuego Jenkins web interface
 =========================================
 
-Fuego includes a version of Jenkins and a set of plugins as part of its
-system. Jenkins is running inside the Fuego docker container.
-By default the Fuego Jenkins interface runs on port 8090, with an URL path "/fuego".
+Fuego includes a version of Jenkins and a set of plugins as part of
+its system. Jenkins is running inside the Fuego docker container.  By
+default the Fuego Jenkins interface runs on port 8090, with a URL
+path of "/fuego".
 
-Here is an example showing use of firefox to access the Jenkins interface
-with Fuego ::
+Here is an example showing the use of Firefox to access the Jenkins
+interface with Fuego: ::
 
   $ firefox http://localhost:8090/fuego
 
-To access the Fuego interface you can use any browser - not just Firefox.  
+To access the Fuego interface, you can use any browser - not just
+Firefox.
 
 In your browser, you should see a screen similar to the following:
 
@@ -283,16 +289,19 @@ In your browser, you should see a screen similar to the following:
    :width: 900
 
 Note that this web interface is available from any machine that has
-access to your host machine via the network.  This means that test operations and test results are available to anyone with access to your machine.
-You can configure Jenkins with different security to avoid this.
+access to your host machine via the network.  This means that test
+operations and test results are available to anyone with access to
+your machine.  You can configure Jenkins with different security to
+avoid this.
 
 ======================================
 Access the Fuego docker command line 
 ======================================
 
-For some Fuego operations, it is handy to use the command line (shell prompt)
-inside the docker container.  In particular, parts of the remaining
-setup of your Fuego system involve running the 'ftc' command line tool.
+For some Fuego operations, it is handy to use the command line (shell
+prompt) inside the docker container.  In particular, parts of the
+remaining setup of your Fuego system involve running the 'ftc' command
+line tool.
 
 Some 'ftc' commands can be run outside the container, but others
 require that you execute the command inside the container.
@@ -315,8 +324,8 @@ Fuego docker container, like so: ::
 Remaining steps 
 ===================
 
-Fuego is now installed and ready for test operations.  However, some steps
-remain in order to use it with your hardware.  You need to:
+Fuego is now installed and ready for test operations.  However, some
+steps remain in order to use it with your hardware.  You need to:
 
  * add one or more hardware boards (board definition files)
  * add a toolchain
@@ -325,18 +334,18 @@ remain in order to use it with your hardware.  You need to:
 These steps are described in subsequent sections of this documentation.
 
 See:
- * :ref:`Adding a Board <adding_board>`
+ * :ref:`Adding a board <adding_board>`
  * :ref:`Adding a toolchain <addtoolchain>`
  * :ref:`Adding test jobs to Jenkins <addtestjob>`
 
-.. _alt_install:
-
 ================================================
 Alternative installation configurations 
 ================================================
 
-The default installation of Fuego installs the entire Fuego system, including Jenkins and the Fuego core, into a docker container running on a host system, which Jenkins running on port 8090.  However, it is possible
-to install Fuego in other configurations.
+The default installation of Fuego installs the entire Fuego system,
+including Jenkins and the Fuego core, into a docker container running
+on a host system, with Jenkins running on port 8090.  However, it is
+possible to install Fuego in other configurations.
 
 The configuration alternatives that are supported are:
  * install using a different TCP/IP port for Jenkins
@@ -346,50 +355,76 @@ The configuration alternatives that are supported are:
 with a different Jenkins TCP/IP port
 ===========================================
 
-By default the Fuego uses TCP/IP port 8090, but this can be changed to another port.  This can be used to avoid a conflict with a service already using port 8090 on your host machine, or so that multiple instances of Fuego can be run simultaneously.
+By default, Fuego uses TCP/IP port 8090, but this can be changed to
+another port.  This can be used to avoid a conflict with a service
+already using port 8090 on your host machine, or so that multiple
+instances of Fuego can be run simultaneously.
 
-To use a different port than 8090 for Jenkins, specify it after the image name on the command line when you run install.sh. Note that this means that you must specify a Docker image name in order to specify a non-default port. For example: ::
+To use a different port than 8090 for Jenkins, specify it after the
+image name on the command line when you run install.sh. Note that this
+means that you must specify a Docker image name in order to specify a
+non-default port. For example: ::
 
 
   $ ./install.sh fuego 7777
 
 
-This would install Fuego, with an docker image name of 'fuego', a docker container name of 'fuego-container', and with Jenkins configured to run on port 7777
+This would install Fuego, with a docker image name of 'fuego', a
+docker container name of 'fuego-container', and with Jenkins
+configured to run on port 7777.
 
 without Jenkins
 ==================
 
-Some Fuego users have their own front-ends or back-ends, and don't need to
-use the Jenkins CI server to control Fuego tests, or visualize Fuego test
-results. ``install.sh`` supports the option '--nojenkins' which produces a docker container without the Jenkins server. This reduces the overhead of the docker container by quite a bit, for those users.
-
-Inside the docker container, the Fuego core is still available.  Boards, toolchains, and tests are configured normally, but the 'ftc' command line
-tool is used to execute tests.  There is no need to use any of the 'ftc'
-functions to manage nodes, jobs or views in the Jenkins system.  'ftc'
-is used to directly execute tests using 'ftc run-test', and results can be
-queried using 'ftc list-runs' and 'ftc gen-report'.
-
-When using Fuego with a different results visualization backend, the user will
-use 'ftc put-run' to send the test result data to the configured back end.
+Some Fuego users have their own front-ends or back-ends, and don't
+need to use the Jenkins CI server to control Fuego tests, or visualize
+Fuego test results. ``install.sh`` supports the option '--nojenkins'
+which produces a docker container without the Jenkins server. This
+reduces the overhead of the docker container by quite a bit, for those
+users.
+
+Inside the docker container, the Fuego core is still available.
+Boards, toolchains, and tests are configured normally, but the 'ftc'
+command line tool is used to execute tests.  There is no need to use
+any of the 'ftc' functions to manage nodes, jobs or views in the
+Jenkins system.  'ftc' is used to directly execute tests using 'ftc
+run-test', and results can be queried using 'ftc list-runs' and 'ftc
+gen-report'.
+
+When using Fuego with a different results visualization backend, the
+user will use 'ftc put-run' to send the test result data to the
+configured back end.
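The Jenkins-less workflow described above can be sketched as a few 'ftc' invocations (a sketch only; the board name 'myboard' and the test name are hypothetical examples, and exact arguments may vary by release):

```shell
# Run a test directly from the command line, then inspect the results.
# 'myboard' and 'Functional.hello_world' are example names.
ftc run-test -b myboard -t Functional.hello_world
ftc list-runs        # list the stored test runs
ftc gen-report       # generate a report from the results
ftc put-run          # (with a different back end) push run data to it
```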
 
 without a container
 ===========================
 
-Usually, for security and test reproducibility reasons, Fuego is executed inside a docker container on your host machine. That is, the default installation of Fuego will create a docker container using all the software that is needed for Fuego's tests.
-However, in some configurations it is desirable to execute Fuego directly on a host machine (not inside a docker container). A user may have a dedicated machine, or they may want to avoid the overhead of running a docker container.
-
-A separate install script, called 'install-debian.sh' can be used in place
-of 'install.sh' to install the Fuego system onto a Debian-based Linux distribution.
-
-Please note that installing without a container is not advised unless you know exactly what you are doing. In this configuration, Fuego will not be able to manage host-side test dependencies for you correctly.
-
-Please note also that executing without a container presents a possible
-security risk for your host. Fuego tests can run arbitrary bash
-instruction sequences as part of their execution. So there is a danger when running tests from unknown third parties that they will execute something on your test host that breaches the security, or that inadvertently damages
-you filesystem or data.
-
-However, despite these drawbacks, there are test scenarios (such as installing
-Fuego directly to a target board), where this configuration makes sense.
+Usually, for security and test reproducibility reasons, Fuego is
+executed inside a docker container on your host machine. That is, the
+default installation of Fuego will create a docker container using all
+the software that is needed for Fuego's tests.  However, in some
+configurations it is desirable to execute Fuego directly on a host
+machine (not inside a docker container). A user may have a dedicated
+machine, or they may want to avoid the overhead of running a docker
+container.
+
+A separate install script, called 'install-debian.sh', can be used in
+place of 'install.sh' to install the Fuego system onto a Debian-based
+Linux distribution.
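A minimal invocation might look like this (a sketch; it assumes you have cloned the fuego repository onto the Debian-based machine, and that root privileges are needed for package installation):

```shell
# Install Fuego directly onto this Debian-based host,
# without creating a docker container.
cd fuego
sudo ./install-debian.sh
```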
+
+Please note that installing without a container is not advised unless
+you know exactly what you are doing. In this configuration, Fuego will
+not be able to manage host-side test dependencies for you correctly.
+
+Please note also that executing without a container presents a
+possible security risk for your host. Fuego tests can run arbitrary
+bash instruction sequences as part of their execution. So there is a
+danger when running tests from unknown third parties that they will
+execute something on your test host that breaches the security, or
+that inadvertently damages your filesystem or data.
+
+However, despite these drawbacks, there are test scenarios (such as
+installing Fuego directly to a target board), where this configuration
+makes sense.
 
 
 
diff --git a/docs/rst_src/License_And_Contribution_Policy.rst b/docs/rst_src/License_And_Contribution_Policy.rst
index eea8bb0..6a34887 100644
--- a/docs/rst_src/License_And_Contribution_Policy.rst
+++ b/docs/rst_src/License_And_Contribution_Policy.rst
@@ -11,8 +11,8 @@ License
 
 Fuego has the following license policy.
 
-Fuego consists of several parts, and includes source code from
-a number of different external test projects.
+Fuego consists of several parts, and includes source code from a
+number of different external test projects.
 
 Default license
 ==================
@@ -22,15 +22,15 @@ indicated in the LICENSE file at the top of the 'fuego' and
 'fuego-core' source repositories.
 
 If a file does not have an explicit license, or license indicator
-(such as SPDX identifier) in the file, than that file is
-covered by the default license for the project, with the exceptions
-noted below for "external test materials".
+(such as an SPDX identifier) in the file, then that file is covered by
+the default license for the project, with the exceptions noted below
+for "external test materials".
 
 When making contributions, if you do NOT indicate an alternative
-license for your contribution, the contribution will be assigned
-the license of the file to which the contribution applies (which
-may be the default license, if the file contains no existing
-license indicator).
+license for your contribution, the contribution will be assigned the
+license of the file to which the contribution applies (which may be
+the default license, if the file contains no existing license
+indicator).
 
 Although we may allow for other licenses within the Fuego project
 in order to accommodate external software added to our system, our
@@ -45,25 +45,26 @@ engine/tests/<test_name> (which is known as the test home directory),
 and may include two types of materials:
 
  * 1) Fuego-specific files
- * 2) files obtained from external sources, which have their own license.
+ * 2) files obtained from external sources, which have their own 
+   license.
 
-The Fuego-specific materials consist of files such as:
-fuego_test.sh, spec.json, reference.json, test.yaml, chart_config.json,
-and possibly others as created for use in the Fuego project.
-External test materials may consist of tar files, helper scripts
-and patches against the source in the tar files.
+The Fuego-specific materials consist of files such as: fuego_test.sh,
+spec.json, reference.json, test.yaml, chart_config.json, and possibly
+others as created for use in the Fuego project.  External test
+materials may consist of tar files, helper scripts and patches against
+the source in the tar files.
 
 Unless otherwise indicated, the Fuego-specific materials are
 licensed under the Fuego default license, and the external test
 materials are licensed under their own individual project
 license - as indicated in the test source.
 
-In some cases, there is no external source code, but only source
-that is originally written for Fuego and stored in the test
-home directory. This commonly includes tests based on a single
-shell script, that is written to be deployed to the Device Under
-Test by fuego_test.sh.  Unless otherwise indicated, these files
-(source and scripts) are licensed under the Fuego default license.
+In some cases, there is no external source code, but only source that
+is originally written for Fuego and stored in the test home directory.
+This commonly includes tests based on a single shell script that is
+written to be deployed to the Device Under Test by fuego_test.sh.
+Unless otherwise indicated, these files (source and scripts) are
+licensed under the Fuego default license.
 
 If there is any ambiguity in the category of a particular file
 (external or Fuego-specific), please designate the intended license
@@ -72,20 +73,19 @@ clearly in the file itself, when making a contribution.
 Copyright statements 
 ======================
 
-Copyrights for individual contributions should be added to
-individual files, when the contributions warrant copyright
-assignment.  Some trivial fixes to existing code may not need
-to have copyright assignment, and thus not every change to a file
-needs to include a copyright notice for the contributor.
+Copyrights for individual contributions should be added to individual
+files, when the contributions warrant copyright assignment.  Some
+trivial fixes to existing code may not need to have copyright
+assignment, and thus not every change to a file needs to include a
+copyright notice for the contributor.
 
 License tags
 =============
 
-Our preference is to use SPDX license identifier, rather than a license
-notice, to indicate the license
-of any materials in Fuego.  Such identifiers and notices are only
-desired if the materials are not contributed under the default Fuego
-license of "BSD-3-Clause".
+Our preference is to use an SPDX license identifier, rather than a
+license notice, to indicate the license of any materials in Fuego.
+Such identifiers and notices are only desired if the materials are not
+contributed under the default Fuego license of "BSD-3-Clause".
 
 In a test.yaml, please indicate the license of the upstream
 test program.  If there is no upstream test program (ie, the
@@ -110,30 +110,45 @@ indicates agreement to the following: ::
 
 	By making a contribution to this project, I certify that:
 
-		      (a) The contribution was created in whole or in part by me and I
-		          have the right to submit it under the open source license
-		          indicated in the file; or
-
-		      (b) The contribution is based upon previous work that, to the best
-		          of my knowledge, is covered under an appropriate open source
-		          license and I have the right under that license to submit that
-		          work with modifications, whether created in whole or in part
-		          by me, under the same open source license (unless I am
-		          permitted to submit under a different license), as indicated
+		      (a) The contribution was created in whole or in
+                          part by me and I have the right to submit it
+                          under the open source license indicated in 
+                          the file; or
+
+		      (b) The contribution is based upon previous work
+                          that, to the best of my knowledge, is 
+                          covered under an appropriate open source
+		          license and I have the right under that 
+                          license to submit that work with 
+                          modifications, whether created in whole or 
+                          in part by me, under the same open source 
+                          license (unless I am permitted to submit 
+                          under a different license), as indicated
 		          in the file; or
 
-		      (c) The contribution was provided directly to me by some other
-		          person who certified (a), (b) or (c) and I have not modified
-		          it.
-
-		      (d) I understand and agree that this project and the contribution
-		          are public and that a record of the contribution (including all
-		          personal information I submit with it, including my sign-off) is
-		          maintained indefinitely and may be redistributed consistent with
-		          this project or the open source license(s) involved.
-
-
-*Note: Please note that an "official" DCO at the web site* `<https://developercertificate.org/>`_  *has additional text (an LF copyright, address, and statement of non-copyability).All of these are either nonsense or problematical in some legalsense. The above is a quote of a portion of the document found in the Linuxkernel guide for submitting patches.  See* `<https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/submitting-patches.rst>`_ *(copied in March, 2018).*
+		      (c) The contribution was provided directly to me 
+                          by some other person who certified (a), (b) 
+                          or (c) and I have not modified it.
+
+		      (d) I understand and agree that this project and
+                          the contribution are public and that a 
+                          record of the contribution (including all
+		          personal information I submit with it, 
+                          including my sign-off) is maintained 
+                          indefinitely and may be redistributed 
+                          consistent with this project or the open 
+                          source license(s) involved.
+
+
+*Note*: Please note that an "official" DCO at the web site
+`<https://developercertificate.org/>`_ has additional text (an LF
+copyright, address, and statement of non-copyability).  All of these
+are either nonsense or problematical in some legal sense.  The above
+is a quote of a portion of the document found in the Linux kernel
+guide for submitting patches.  See
+`<https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/submitting-patches.rst>`_
+(copied in March, 2018).
 
 Each commit must include a DCO which looks like this ::
 
@@ -156,13 +171,15 @@ Submitting contributions
 ==========================
 
 Please format contributions as a patch, and send the patch to the
-`Fuego mailing list <https://lists.linuxfoundation.org/mailman/listinfo/fuego>`_
+`Fuego mailing list <https://lists.linuxfoundation.org/mailman/listinfo/fuego>`_
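One common way to prepare and send such a patch is with git (a sketch; the list address is inferred from the mailing-list URL above, and the patch filename is an example):

```shell
# Create a patch from the most recent commit and mail it to the list.
git format-patch -1
git send-email --to=fuego@lists.linuxfoundation.org 0001-*.patch
```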
 
-Before making the patch, please verify that you have followed our preferred
-:ref:`Coding style <coding_style>`.
+Before making the patch, please verify that you have followed our 
+preferred :ref:`Coding style <coding_style>`.
 
-We follow the style of patches used by the Linux kernel, which is described
-here: `<https://www.kernel.org/doc/html/latest/process/submitting-patches.html>`_
+We follow the style of patches used by the Linux kernel, which is
+described here:
+`<https://www.kernel.org/doc/html/latest/process/submitting-patches.html>`_
 
 Not everything described there applies, but please do the following:
 - use a Signed-off-by line
@@ -176,7 +193,9 @@ Not everything described there applies, but please do the following:
 
    - the test name can be the short name, if it is unambiguous
 
-     - That is, please don't use the 'Functional' or 'Benchmark' prefix unless there are both types of tests with the same short name
+     - That is, please don't use the 'Functional' or 'Benchmark' 
+       prefix unless there are both types of tests with the same 
+       short name
 
  - describe your changes in the commit message body
 
diff --git a/docs/rst_src/OSS_Test_Vision.rst b/docs/rst_src/OSS_Test_Vision.rst
new file mode 100644
index 0000000..bc8c317
--- /dev/null
+++ b/docs/rst_src/OSS_Test_Vision.rst
@@ -0,0 +1,349 @@
+.. _oss:
+
+#################
+OSS Test Vision
+#################
+
+This page describes aspects of the Open Source Test vision for the
+Fuego project, along with some ideas for implementing specific ideas
+related to this vision.
+
+=====================
+Overview of concepts
+=====================
+
+
+Letter to ksummit-discuss
+==========================
+
+Here's an e-mail Tim sent to the ksummit-discuss list in October,
+2016 ::
+ 
+	I have some ideas on Open Source testing that I'd like to
+        throw out there for discussion.  Some of these I have been 
+	stewing on for a while, while some came to mind after talking 
+	to people at recent conference events.
+
+	Sorry - this is going to be long...
+
+	First, it would be nice to increase the amount of testing we 
+        do, by having more test automation. (ok, that's a no-brainer). 
+        Recently there has been a trend towards more centralized 
+        testing facilities, like the zero-day stuff or board farms 
+        used by kernelci. That makes sense, as this requires 
+ 	specialized hardware, setup,  or skills to operate certain
+	kinds of test environments.  As one example, an automated test 
+	of kernel boot requires automated control of power to a board
+ 	or platform, which is not very common among kernel developers.
+	A centralized test facility has the expertise and hardware to 
+	add new test nodes relatively cheaply. They can do this more 
+	quickly	and much less expensively than the first such node by 
+	an individual new to testing.
+
+	However, I think to make great strides in test quantity and 
+        coverage, it's important to focus on ease of use for 
+ 	individual test nodes. My vision would be to have tens of 
+	thousands of individual test nodes running automated tests on 
+	thousands of different hardware platforms and configurations 
+        and workloads.
+
+	The kernel selftest project is a step in the right direction 
+	for this, because it allows any kernel developer to easily 
+	(in theory) run automated unit tests for the kernel.  However,
+        this is still a manual process.  I'd like to see improved 
+        standards and infrastructure for automating tests. 
+
+	It turns out there are lots of manual steps in the testing
+	and bug-fixing process with the kernel (and other 
+ 	Linux-related software).  It would be nice if a new system 
+	allowed us to capture manual steps, and over time convert 
+	them to automation.
+
+	Here are some problems with the manual process that I think 
+	need addressing:
+
+	1) How does an individual know what tests are valid for their 
+	platform? Currently, this is a manual decision.  In a world 
+        with thousands or tens of thousands of tests, this will be 
+        very difficult.  We need to have automated mechanisms to 
+        indicate which tests are relevant for a platform.  Test
+	definitions should include a description of the hardware
+	they need, or the test setup they need.  For example, it would
+	be nice to have tests indicate that they need to be run on a 
+	node with USB gadget support,
+	or on a node with the gadget hardware from a particular vendor 
+        (e.g. a particular SOC), or with a particular hardware phy 
+ 	(e.g. Synopsis).  As another example, if a test requires that 
+        the hardware physically reboot,then that should be indicated 
+        in the test.  If a test requires that a particular button be 
+        pressed (and that the button be available to be pressed), it
+	should be listed.  Or if the test requires that an external 
+	node be available to participate in the test (such as a wifi 
+        endpoint, CANbus endpoint, or
+	i2C device) be present, that should be indicated.  
+	There should be a way for the test nodes which provide those 
+ 	hardware capabilities, setups, or external resources to 
+        identify themselves.  Standards should
+	be developed for how a test node and a test can express these 
+	capabilities and requirements.  Also, standards need to be 
+        developed so that a test can control those external resources 
+	to participate in tests.  Right now each test framework handles
+	this in its own way (if it provides
+	support for it at all).
+
+	I heard of a neat setup at one company where the video
+	output from a system was captured by another video system,
+	and the results analyzed automatically.  This type of test
+	setup currently requires an enormous investment of
+	expertise, and possibly specialized hardware.  Once such a
+	setup is performed in a few locations, it makes much more
+	sense to direct tests that need such facilities to those
+	locations, than it does to try to spread the expertise to
+	lots of different individuals (although that certainly has
+	value also).
+
+	For a first pass, I think the kernel CONFIG variables needed
+	by a test should be indicated, and they could be compared
+	with the config for the device under test.  This would be a
+	start on the expression of the dependencies between a test
+	and the features of the test node.
+
+	2) how do you connect people who are interested in a
+	particular test with a node that can perform that test?
+
+	My proposal here is simple - for every subsystem of the
+	kernel, put a list of test nodes in the MAINTAINERS file, to
+	indicate nodes that are available to test that subsystem.
+	Tests can be scheduled to run on those nodes, either
+	whenever new patches are received for that sub-system, or
+	when a bug is encountered and developers for that subsystem
+	want to investigate it by writing a new test.  Tests or data
+	collection instructions that are now provided manually would
+	be converted to formal test definitions, and added to a
+	growing body of tests.  This should help people re-use test
+	operations that are common.  Capturing test operations that
+	are done manually into a script would need to be very easy
+	(possibly itself automated), and it would need to be easy to
+	publish the new test for others to use.
+
+	Basically, in the future, it would be nice if when a person
+	reported a bug, instead of the maintainer manually walking
+	someone through the steps to identify the bug and track down
+	the problem, they could point the user at an existing test
+	that the user could easily run.
+
+	I imagine a kind of "test app store", where a tester can
+	select from thousands of tests according to their interest.
+	Also, people could rate the tests, and maintainers could
+	point people to tests that are helpful to solve specific
+	problems.
+
+	3) How does an individual know how to execute a test and how
+	to interpret the results?
+
+	For many features or sub-systems, there are existing tools
+	(e.g bonnie for filesystem tests, netperf for networking
+	tests, or cyclictest for realtime), but these tools have a
+	variety of options for testing different aspects of a
+	problem or for dealing with different configurations or
+	setups.  Online you can find tutorials for running each of
+	these, and for helping people interpret the results. A new
+	test system should take care of running these tools with the
+	proper command line arguments for different test aspects,
+	and for different test targets ('device-under-test's).
+
+	For example, when someone figures out a set of useful
+	arguments to cyclictest for testing realtime on a beaglebone
+	board, they should be able to easily capture those arguments
+	to allow another developer using the same board to easily
+	re-use those test parameters, and interpret the cylictest
+	results, in an automated fashion.  Basically we want to
+	automate the process of finding out "what options do I use
+	for this test on this board, and what the heck number am I
+	supposed to look at in this output, and what should its
+	value be?".
+
+	Another issue is with interpretation of test results from
+	large test suites.  One notorious example of this is LTP.
+	It produces thousands of results, and almost always produces
+	failures or results that can be safely  ignored on a
+	particular board or in a particular environment. It requires
+	a large amount of manual evaluation and expertise to
+	determine which items to pay attention to from LTP.  It
+	would be nice to be able to capture this evaluation, and
+	share it with others with either the same board, or the same
+	test environment, to allow them to avoid duplicating this
+	work.
+
+	Of course, this should not be used to gloss over bugs in LTP
+	or bugs that LTP is reporting correctly and actually need to
+	be paid attention to.
+
+	4) How should this test collateral be expressed, and how
+	should it be collected, stored, shared and re-used?
+
+	There are a multitude of test frameworks available.  I am
+	proposing that as a community we develop standards for test
+	packaging which include this type of information (test
+	dependencies, test parameters, results interpretation).  I
+	don't know all the details yet.  For this reason I am coming
+	to the community to see how others are solving these problems
+	and to get ideas for how to solve them in a way that would
+	be useful for multiple frameworks.  I'm personally working
+	on the Fuego test framework - see http://fuegotest.org/wiki,
+	but I'd like to create something that could be used with any
+	test framework.
+
+	5) How to trust test collateral from other sources (tests,
+	interpretation)
+
+	One issue which arises with this type of sharing (or with
+	any type of sharing) is how to trust the materials involved.
+	If a user puts up a node with their own hardware, and trusts
+	the test framework to automatically download and execute a
+	never-before-seen test, this creates a security and trust
+	issue.  I believe this will require the same types of
+	authentication and trust mechanisms (e.g. signing,
+	validation and trust relationships) that we use to manage
+	code in the kernel.
+
+	I think this is more important than it sounds.  I think the
+	real value of this system will come when tens of thousands
+	of nodes are running tests where the system owners can
+	largely ignore the operation of the system, and instead the
+	test scheduling and priorities can be driven by the needs of
+	developers and maintainers who the test node owners have
+	never interacted with.
+
+	Finally, 6) What is the motivation for someone to run a test
+	on their hardware?
+
+	Well, there's an obvious benefit to executing a test if you
+	are personally interested in the result.  However, I think
+	the benefit of running an enormous test system needs to be
+	de-coupled from that immediate direct benefit.  I think we
+	should look at this the same way  we look at other
+	crowd-sourced initiatives, like Wikipedia.  While there is
+	some small benefit for someone producing an individual page
+	edit, we need to move beyond that to the benefit to the
+	community of the cumulative effort.
+
+	I think that if we want tens of thousands of people to run
+	tests, then we need to increase the cost/benefit ratio for
+	the system.  First, you need to reduce the cost so that it
+	is very cheap, in all of [time|money|expertise| ongoing
+	attention], to set up and maintain a test node.  Second,
+	there needs to be a real benefit that people can measure
+	from the cumulative effect of participating in the system.
+	I think it would be valuable to report bugs found and fixed
+	by the system as a whole, and possibly to attribute positive
+	results to the output provided by individual nodes.  (Maybe
+	you could 'game-ify' the operation of test nodes.)
+
+	Well, if you are still reading by now, I appreciate it.  I
+	have more ideas, including more details for how such a
+	system might work, and what types of things it could
+	accomplish. But I'll save that for smaller groups who might
+	be more directly interested in this topic.
+
+	To get started, I will begin working on a prototype of a
+	test packaging system that includes some of the ideas
+	mentioned here: inclusion of test collateral, and package
+	validation.  I would also like to schedule a "test summit"
+	of some kind (maybe associated with ELC or Linaro Connect,
+	or some other event), to discuss standards in the area I
+	propose.
+
+	I welcome any response to these ideas.  I plan to discuss
+	them at the upcoming test framework mini-jamboree in Tokyo
+	next week, and at Plumbers (particularly during the 'testing
+	and fuzzing' session) the week following.  But feel free to
+	respond to this e-mail as well.
+
+	Thanks.
+	-- Tim Bird
+
+
+=============================
+Ideas related to the vision
+=============================
+
+
+Capturing tests easily
+========================
+ 
+ * should be easy to capture a command line sequence, and test the 
+   results
+ * maybe do an automated capture and format into a clitest file that
+   can be used as a here document inside a fuego test script?
+
+==================
+test collateral
+==================
+
+ * does it need to be board-specific
+ * elements of test collateral:
+
+   * test dependencies:
+
+     * kernel config values needed
+     * kernel features needed:
+
+       * proc filesystem
+       * sys filesystem
+       * trace filesystem
+     * test hardware needed
+     * test node setup features
+
+       * ability to reboot the board
+       * ability to soft-reset the board
+       * ability to install a new kernel
+     * presence of certain programs on target
+
+       * bc
+       * top, ps, /bin/sh, bash?
+ * already have:
+
+    * CAPABILITIES?
+    * pn and reference logs
+    * positive and negative result counts (specific to board)
+    * test specs indicate parameters for the test
+    * test plans indicate different profiles (method to match test to 
+      test environment - e.g. filesystem test with type of filesystem 
+      hardware)
+
+=================
+test app store
+=================
+
+ * need a repository where tests can be downloaded
+
+   * like Jenkins plugin repository
+   * like debian package feed
+
+ * need a client for browsing tests, installing tests, updating tests
+ * store a test in github, and just refer to different tests in 
+   different git repositories?
+ * test ratings
+ * test metrics (how many bugs found)
+
+======================
+authenticating tests
+======================
+
+ * need to prevent malicious tests
+ * packages should be signed by an authority, after review by someone
+
+   * who? the Fuego maintainers?  This would turn into a bottleneck
+
+======================
+test system metrics
+======================
+
+ * number of bugs found and fixed in upstream software
+ * number of bugs found and fixed in test system
+ * bug categories (See :ref:`Metrics <metrics>`)
+
+
+
+
diff --git a/docs/rst_src/Parser_module_API.rst b/docs/rst_src/Parser_module_API.rst
index 617c0ff..e7f81e7 100644
--- a/docs/rst_src/Parser_module_API.rst
+++ b/docs/rst_src/Parser_module_API.rst
@@ -7,45 +7,60 @@ Parser module API
 
 
 
-The file common.py is the python module for performing benchmark log file processing, and results processing and aggregation.
+The file common.py is the python module for performing benchmark log
+file processing, and results processing and aggregation.
 
 It is used by the parser.py program from the test directory, to process
 the log after each test run.  The data from a test run is processed to:
 
- * check numeric values for pass/fail result(by checking against a reference threshold values)
- * determine the overall result of the test, based on potentially complex results criteria
+ * check numeric values for pass/fail result (by checking against
+   reference threshold values)
+ * determine the overall result of the test, based on potentially 
+   complex results criteria
  * save the data for use in history and comparison charts
 
 ===============
 Parser API
 ===============
 
-The following are functions used during log processing, by a test's parser.py program.
+The following are functions used during log processing, by a test's
+parser.py program.
   
- * :ref:`parse_log() <parser_func_parse_log>` - parse the data from a test log
+ * :ref:`parse_log() <parser_func_parse_log>` - parse the data from a 
+   test log
 
-    * this routine takes a regular expression, with one or more groups, and results a list of tuples for lines that matched the expression
-    * the tuples consist of the strings from the matching line corresponding to the regex groups
+    * this routine takes a regular expression, with one or more
+      groups, and returns a list of tuples for lines that matched
+      the expression
+    * the tuples consist of the strings from the matching line 
+      corresponding to the regex groups
 
  * :ref:`process() <parser_func_process>` - process results from a test
 
-    * this routine taks a dictionary of test results, and does 3 things:
+    * this routine takes a dictionary of test results, and does 3
+      things:
 
       * formats them into the run.json file (run results file)
       * detects pass or fail by using the specified pass criteria
       * formats the data into charts (plots and tables)
 
- * :ref:`split_output_per_testcase() <parser_func_split_output_per_testcase>` - split testlog into chunks accessible from the Jenkins user interface (one per testcase)
+ * :ref:`split_output_per_testcase() 
+   <parser_func_split_output_per_testcase>` 
 
-In general, a parser module will normally call **parse_log()**, then take
-the resulting list of matching groups to construct a dictionary to pass
-to the **process()** routine.
+   - split testlog into chunks accessible from the Jenkins user 
+     interface (one per testcase)
+
+In general, a parser module will normally call **parse_log()**, then
+take the resulting list of matching groups to construct a dictionary
+to pass to the **process()** routine.
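As a rough illustration of this flow (the log content, regular expression, and test name below are hypothetical; a real parser.py would obtain its matches via Fuego's `parse_log()` from common.py rather than calling `re` directly), the core pattern of building a measurements dictionary from regex group tuples looks like this:

```python
import re

# Hypothetical test log; in a real parser.py, parse_log() scans the
# actual test log with the given regular expression and returns a
# list of group tuples, one tuple per matching line.
log = """Iteration 1: score 1234
Iteration 2: score 5678
"""

regex_string = r"^Iteration (\d+): score (\d+)"
matches = re.findall(regex_string, log, re.MULTILINE)

# Build the measurements dictionary that would be passed to process():
# key = test_case_id, value = list of measures (for a benchmark test)
measurements = {}
measurements["default.myscore"] = [
    {"name": "score.%s" % iteration, "measure": int(score)}
    for iteration, score in matches
]
print(measurements)
```

In a real parser module, the final step would be to hand this dictionary to `process()`, which writes the run results file and evaluates the pass criteria.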
 
 If the log file format is amenable, the parser module may also call
-split_output_per_testcase() to generate a set of files from the testlog,
-that can be referenced from the charts generated by the charting module.
+split_output_per_testcase() to generate a set of files from the
+testlog, that can be referenced from the charts generated by the
+charting module.
 
-Please see :ref:`parser.py <parser>` for more details and examples of use of the API.
+Please see :ref:`parser.py <parser>` for more details and examples of
+use of the API.
 
 
 
@@ -54,11 +69,12 @@ Please see :ref:`parser.py <parser>` for more details and examples of use of the
 Deprecated API
 ===================
 
-*Note: The following information is for historical purposes only.Although the API is still present in Fuego, these APIs are deprecated.*
+*Note: The following information is for historical purposes only.*
+*Although the API is still present in Fuego, these APIs are deprecated.*
 
 In Fuego version 1.1 and prior, the following functions were used.
-These are still available for backwards compatibility with tests written
-for these versions of Fuego.
+These are still available for backwards compatibility with tests
+written for these versions of Fuego.
 
  * parse()
  * process_data()
@@ -74,26 +90,28 @@ parse()
 
  * output:
 
-    * list of regular expression matches for each line matching the specified pattern
+    * list of regular expression matches for each line matching the 
+      specified pattern
 
-This routine scans the current log file, using a regular expression.  It 
-returns an re match object for each line of the log file that matches the
-expression.
+This routine scans the current log file, using a regular expression.
+It returns an re match object for each line of the log file that
+matches the expression.
 
-This list is used to populate a dictionary of metric/value pairs that can
-be passed to the process_data function.
+This list is used to populate a dictionary of metric/value pairs that
+can be passed to the process_data function.
 
 process_data
 =============
 
-This is the main routine of the module.  It processes the list of metrics,
-and populates various output files for test.
+This is the main routine of the module.  It processes the list of
+metrics, and populates various output files for the test.
 
  * input:
 
    * ref_section_pat - regular expression used to read reference.log
    * cur_dict - dictionary of metric/value pairs
-   * m - indicates the size of the plot. It should be one of: 's', 'm', 'l', 'xl'
+   * m - indicates the size of the plot. It should be one of: 's', 
+     'm', 'l', 'xl'
 
      * if 'm', 'l', or 'xl' are used, then a multiplot is created
 
@@ -116,13 +134,17 @@ functions in common.py
 ========================
 
  * hls - print a big warning or error message
- * parse_log(regex_str) - specify a regular expression string to use to parse lines in the log
+ * parse_log(regex_str) - specify a regular expression string to use 
+   to parse lines in the log
 
-   * this is a helper function that returns a list of matches (with groups) that the parser.py can use to populate its dictionary of measurements
+   * this is a helper function that returns a list of matches 
+     (with groups) that the parser.py can use to populate its 
+     dictionary of measurements
 
  * parse(regex_compiled_object)
 
-   * similar to parse_log, but it takes a compiled regular expression object, and returns a list of matches (with groups)
+   * similar to parse_log, but it takes a compiled regular expression 
+     object, and returns a list of matches (with groups)
 
    * this is deprecated, but left to support legacy tests
 
@@ -149,7 +171,8 @@ functions in common.py
 
      * key=test_case_id (not including measure name)
 
-       * for a functional test, the test_case_id is usually "default.<test_name>"
+       * for a functional test, the test_case_id is usually 
+         "default.<test_name>"
 
      * value=list of measures (for a benchmark)
      * or value=string (PASS|FAIL|SKIP) (for a functional test)
@@ -191,35 +214,44 @@ call trees
 miscellaneous notes
 ========================
 
- * create_default_ref_tim (for docker.hello-fail.Functional.hello_world)
+ * create_default_ref_tim 
+   (for docker.hello-fail.Functional.hello_world)
 
-   * ref={'test_sets': [{'test_cases': [{'measurements': [{'status': 'FAIL', 'name': 'Functional'}], 'name': 'default'}], 'name': 'default'}]}
+   * ref={'test_sets': [{'test_cases': [{'measurements': 
+     [{'status': 'FAIL', 'name': 'Functional'}], 'name': 'default'}], 
+     'name': 'default'}]}
 
  * create_default_ref
 
-   * ref={'test_sets': [{'test_cases': [{'status': 'FAIL', 'name': 'default'}], 'name': 'default'}]}
+   * ref={'test_sets': [{'test_cases': [{'status': 'FAIL', 
+     'name': 'default'}], 'name': 'default'}]}
 
 data format and tguid rules
 ====================================
 
-The current API and the old parser API take different data and allow different
-test identifiers.  This sections explains the difference:
+The current API and the old parser API take different data and allow
+different test identifiers.  This section explains the difference:
 
 Data format for benchmark test with new API
 
- * measurements[test_case_id] = [{"name": measure_name, "measure": value}]
+ * measurements[test_case_id] = [{"name": measure_name, 
+   "measure": value}]
 
 Data format for benchmark test with old API:
 
  * in reference.log
 
-    * if tguid is a single word, then use that word as the  measure name and "default" as the test_case.
+    * if tguid is a single word, then use that word as the  
+      measure name and "default" as the test_case.
 
-      * e.g. for benchmark.arm, the reference.log has "short".  This becomes the fully-qualified tguid: arm.default.arm.short:
+      * e.g. for benchmark.arm, the reference.log has "short".  
+        This becomes the fully-qualified tguid: arm.default.arm.short:
 
-        * test_name = arm, test_case = default, test_case_id = arm, measure = short
+        * test_name = arm, test_case = default, test_case_id = arm, 
+          measure = short
 
-Data format for functional tests with new API and the old API is the same:
+Data format for functional tests with new API and the old API is the 
+same:
 
  * e.g. measurements["status"] = "PASS|FAIL"
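As a small sketch of the two dictionary shapes described above (the test and measure names here are hypothetical examples, not fixed Fuego names):

```python
# Benchmark test (new API): each test_case_id maps to a list of
# measures, where each measure has a name and a numeric value
bench_measurements = {
    "default.mybench": [{"name": "loops", "measure": 5678}],
}

# Functional test (same shape in the new and old API): each
# test_case_id maps to a result string
func_measurements = {
    "default.hello_world": "PASS",
}

print(bench_measurements)
print(func_measurements)
```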
 
diff --git a/docs/rst_src/Quick_Setup_Guide.rst b/docs/rst_src/Quick_Setup_Guide.rst
new file mode 100644
index 0000000..86c7de5
--- /dev/null
+++ b/docs/rst_src/Quick_Setup_Guide.rst
@@ -0,0 +1,161 @@
+.. _quickSetupguide:
+
+##################
+Quick Setup Guide
+##################
+
+This page has some really quick setup instructions if you just want to
+get a taste of what Fuego is like.  This allows you to experiment with
+Fuego and try out some tests to see what it looks like and how it
+works, without investing a lot of time.
+
+In this configuration, we will show you how to install Fuego and run a
+test on a 'docker' board, which is the docker container where Fuego
+itself is running on your host machine.
+
+Obviously, this is not useful for testing any real hardware.  It is
+intended only as a demonstration of Fuego functionality.
+
+For instructions to set up a real board, try the :ref:`Fuego
+Quickstart Guide <quickstart>` or the :ref:`Installing Fuego
+<installfuego>` page.
+
+=============
+Overview
+=============
+
+The overview of the steps is:
+
+ 1. install pre-requisite software
+ 2. download the Fuego repository
+ 3. build your Fuego container
+ 4. start the container
+ 5. add the 'docker' board to Jenkins
+ 6. add some sample tests
+ 7. access the Jenkins interface
+ 8. run a test
+These steps are described below.
+
+==================
+Step details
+==================
+
+To retrieve the Fuego software and create the docker image for it, you
+need to have git and docker installed on your system.
+
+On Ubuntu, try the following commands: ::
+
+  $ sudo apt install git docker.io
+
+
+To download Fuego, and build and start the container,
+type the following commands at a Linux shell prompt: ::
+
+	$ git clone https://bitbucket.org/fuegotest/fuego.git
+	$ cd fuego
+	$ ./install.sh
+	$ ./start.sh
+
+
+The third step (with ./install.sh) will take some time - about 45
+minutes on an average Linux machine.  This is building the "Fuego"
+distribution of Linux (based on Debian) and putting it in the Fuego
+docker container.  You will also need a connection to the Internet
+with fairly decent bandwidth.
+
+When you run the 'start.sh' script, the terminal will be placed at a
+shell prompt, as the root user inside the docker container.  The
+container will run until you exit this shell.  You should leave it
+running for the duration of your testing.
+
+The next steps populate the Jenkins system objects used for testing:
+
+At the shell prompt inside the container type the following: ::
+
+	<container-prompt># ftc add-node -b docker
+	<container-prompt># ftc add-jobs -b docker -t Functional.batch_smoketest
+
+
+This will add the 'docker' node (representing the 'docker' Fuego
+board) in the Jenkins interface and a small set of tests.
+
+The "smoketest" batch test has about 20 tests that exercise a variety
+of features in a Linux system.  After running these commands, a set of
+jobs will appear in the Jenkins interface.
+
+To access the Fuego interface (Jenkins) you can use any browser -
+not just Firefox.  By default, the Fuego interface runs on your host
+machine, on port 8090, with URL path "/fuego". ::
+
+	$ firefox http://localhost:8090/fuego
+
+In your browser, you should see a screen similar to the following:
+
+ .. image:: ../images/fuego-1.1-jenkins-dashboard-beaglebone-jobs.png
+    :width: 900
+
+
+=================
+Run a test 
+=================
+
+To run a job manually, do the following:
+
+ * Go to the Jenkins dashboard (in the main Jenkins web page)
+ * Select the job (which includes the board name and the test name)
+ * Click “Build job” (Jenkins refers to running a test as "building"
+   it.)
+
+A few very simple jobs you might start with are:
+
+ * Functional.hello_world
+ * Benchmark.Dhrystone
+
+You can also start a test manually by clicking on the circle with
+a green triangle, on the far right of the line with the job name,
+in the Jenkins dashboard.
+
+When you run a test, the test software is built from source,
+sent to the machine (in this case the Fuego docker container), and
+executed.  Then the results are collected, analyzed, and displayed
+in the Jenkins interface.
+
+When the test has completed, the status will be shown by a colored
+ball by the side of the test in the dashboard.  Green means success,
+red means failure, and grey means the test did not complete
+(it was not run or it was aborted).  You can get details about the test
+run by clicking on the links in the history list.  You can see the
+test log (what the actual test program
+output on the target), by clicking on "testlog".  You can see the steps
+Fuego took to execute the test by clicking on the "console log" link on
+the job page.  And you can see the formatted results for a job, and job
+details (like start time, test information, board information, and
+results) in the 'run.json' file.
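To give a rough idea of the kind of information recorded there, a heavily abbreviated run.json might look something like the following (the field names and values here are illustrative only, not a verbatim Fuego schema):

```json
{
    "test_name": "Functional.hello_world",
    "board": "docker",
    "start_time": "2020-09-22 07:03:38",
    "status": "PASS",
    "test_sets": [
        {
            "name": "default",
            "test_cases": [
                {"name": "hello_world", "status": "PASS"}
            ]
        }
    ]
}
```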
+
+==================
+What to do next?
+==================
+
+In order to use Fuego in a real Continuous Integration loop, you need
+to do a few things:
+
+ * configure Fuego to work with your own board or product
+ * customize benchmark thresholds and functional baselines for your 
+   board
+ * configure Fuego jobs to be triggered after the board is installed
+   with new software to test
+
+Fuego does not currently have support for provisioning boards (that
+is, installing the "software under test" to the board).  Usually,
+Fuego users create their own Jenkins job which provisions the board,
+and then triggers Fuego jobs, after the new software is installed on
+the board.
+
+For further instructions, see the :ref:`Fuego Quickstart Guide
+<quickstart>`, :ref:`Adding a board <adding_board>`, :ref:`Adding a
+toolchain <addtoolchain>` or the :ref:`Installing Fuego
+<installfuego>` page.
+
+
diff --git a/docs/rst_src/Raspberry_Pi_Fuego_Setup.rst b/docs/rst_src/Raspberry_Pi_Fuego_Setup.rst
index be8516e..7ce2a43 100644
--- a/docs/rst_src/Raspberry_Pi_Fuego_Setup.rst
+++ b/docs/rst_src/Raspberry_Pi_Fuego_Setup.rst
@@ -4,14 +4,14 @@
 Raspberry Pi Fuego Setup
 #########################
 
-This is a list of instructions for setting up a Raspberry Pi board
-for use with Fuego.  These instructions will help you set up the
-ssh server, used by Fuego to communicate with the board, and the 
-test directory on the machine, that Fuego will use to store programs
-and files during a test.
+This is a list of instructions for setting up a Raspberry Pi board for
+use with Fuego.  These instructions will help you set up the ssh
+server, used by Fuego to communicate with the board, and the test
+directory on the machine, that Fuego will use to store programs and
+files during a test.
 
-These instructions and the screen shots are for a Raspberry Pi Model 3 B,
-running "Raspbian 9 (stretch)".
+These instructions and the screen shots are for a Raspberry Pi Model 3
+B, running "Raspbian 9 (stretch)".
 
 This assumes that the Raspberry Pi is already installed, and that
 networking is already configured and running.
@@ -20,21 +20,22 @@ networking is already configured and running.
 Obtain your network address
 ==============================
 
-First, determine what your Pi's network address is.  You can see this by using
-the command 'ifconfig' in a terminal window, and checking for the 'inet' address.
+First, determine what your Pi's network address is.  You can see this
+by using the command 'ifconfig' in a terminal window, and checking for
+the 'inet' address.
 
-Or, move your mouse cursor over the network icon in the desktop panel bar.
-If you leave the mouse there for a second or two, a box will appear showing
-information about your current network connection.
+Or, move your mouse cursor over the network icon in the desktop panel
+bar.  If you leave the mouse there for a second or two, a box will
+appear showing information about your current network connection.
 
-This is what the network information box looks like (in the upper right corner
-of this screen shot):
+This is what the network information box looks like (in the upper
+right corner of this screen shot):
 
 .. image:: ../images/rpi-network-address.png
    :height: 400
 
-In this case, my network address is 10.0.1.103.
-Your address might start with 192.168, which is common for home or local networks.
+In this case, my network address is 10.0.1.103.  Your address might
+start with 192.168, which is common for home or local networks.
 
 Note this address for use later.
 
@@ -46,14 +47,14 @@ Configure the SSH server
 In order for other machines to access the Pi remotely, you need to
 enable the ssh server.
 
-This is done by enabling the SSH interface in the Raspberry Pi Configuration
-dialog.
+This is done by enabling the SSH interface in the Raspberry Pi
+Configuration dialog.
 
-To access this dialog, click on the raspberry logo in the upper right corner
-of the main desktop window.  Then click on "Preferences", then on
-"Raspberry Pi Configuration".  In the dialog that appears, click on the
-"Interfaces" tab, and on the list of interfaces click on the "Enable"
-radio button for the SSH interface.
+To access this dialog, click on the raspberry logo in the upper right
+corner of the main desktop window.  Then click on "Preferences", then
+on "Raspberry Pi Configuration".  In the dialog that appears, click on
+the "Interfaces" tab, and on the list of interfaces click on the
+"Enable" radio button for the SSH interface.
 
 Here is the menu:
 
@@ -69,8 +70,8 @@ The configuration dialog looks something like this:
 Try connecting
 ================
 
-Now, close this dialog, and make sure you can access the Pi using
-SSH from your host machine.
+Now, close this dialog, and make sure you can access the Pi using SSH
+from your host machine.
 
 
 Try the following command, from your host machine:
@@ -91,8 +92,8 @@ This is not recommended on machines that are in production, as it is
 a significant security risk.  However, for test machines it may be
 acceptable to allow root access over ssh.
 
-To do this, on the Raspberry Pi, with root permissions, edit the file /etc/ssh/sshd_config
-and add the following line: ::
+To do this, on the Raspberry Pi, with root permissions, edit the file
+/etc/ssh/sshd_config and add the following line: ::
 
    PermitRootLogin yes
 
@@ -126,9 +127,9 @@ information.
 
 ``$ adduser fuego``
 
-Answer the questions, including setting the password for this
-account. Remember the password you select, and use that in the board
-file when configuring Fuego to access this board.
+Answer the questions, including setting the password for this account.
+Remember the password you select, and use that in the board file when
+configuring Fuego to access this board.
 
 This will create the directory ``/home/fuego``.
 
@@ -196,7 +197,8 @@ Inside the Fuego container, run: ::
   $ ftc add-job -b rpi -t Functional.fuego_board_check
 
 
-An easy way to populate Jenkins with a set of tests is to install a batch test.
+An easy way to populate Jenkins with a set of tests is to install a
+batch test.
 
 Install the "smoketest" batch test, as follows:
 
@@ -211,11 +213,12 @@ Run a board check
 To see if everything is set up correctly, execute the test:
 Functional.fuego_board_check.
 
-In the Jenkins interface, select "rpi.default.Functional.fuego_board_check"
-and select the menu item "Build Now" on the left hand side of the screen.
+In the Jenkins interface, select
+"rpi.default.Functional.fuego_board_check" and select the menu item
+"Build Now" on the left hand side of the screen.
 
-Wait a few moments for the test to complete. when the test completes, check
-the log for the test by clicking on the link to the 'testlog'.
+Wait a few moments for the test to complete.  When the test completes,
+check the log for the test by clicking on the link to the 'testlog'.
 
 
 
diff --git a/docs/rst_src/Test_variables.rst b/docs/rst_src/Test_variables.rst
index 51717c0..a20c262 100644
--- a/docs/rst_src/Test_variables.rst
+++ b/docs/rst_src/Test_variables.rst
@@ -12,12 +12,13 @@ When Fuego executes a test, shell environment variables are used to
 provide information about the test environment, test execution
 parameters, communications methods and parameters, and other items.
 
-These pieces of information are originate from numerous different places.
-An initial set of test variables comes in the shell environment from
-either Jenkins or from the shell in which ftc is executed (depending
-on which one is used to invoke the test).
+These pieces of information originate from numerous different
+places.  An initial set of test variables comes in the shell
+environment from either Jenkins or from the shell in which ftc is
+executed (depending on which one is used to invoke the test).
 
-The information about the board being tested comes primarily from two sources:
+The information about the board being tested comes primarily from two
+sources:
 
  * the board file
  * the stored board variables file
@@ -37,41 +38,57 @@ test execution.
 Board file
 ==============
  
-The board file contains static information about a board.  It is processed
-by the overlay system, and the values inside it appear as variables
-in the environment of a test, during test execution.
+The board file contains static information about a board.  It is
+processed by the overlay system, and the values inside it appear as
+variables in the environment of a test, during test execution.
 
 The board file resides in:
 
  * /fuego-ro/boards/$BOARD.board
 
-There are a number of variables which are used by the Fuego system itself,
-and there may also be variables that are used by individual tests.
+There are a number of variables which are used by the Fuego system
+itself, and there may also be variables that are used by individual
+tests.
 
 Common board variables 
 =========================
+
 Here is a list of the variables which might be found in a board file:
+
  * ARCHITECTURE - specifies the architecture of the board
  * BAUD - baud rate for serial device (if using 'serial' transport)
  * BOARD_TESTDIR - directory on board where tests are executed
- * BOARD_CONTROL - the mechanism used to control board hardware (e.g. hardware reboot)
- * DISTRIB - filename of distribution overlay file (if not the default)
- * IO_TIME_SERIAL - serial port delay parameter (if using 'serial' transport)
+ * BOARD_CONTROL - the mechanism used to control board hardware 
+   (e.g. hardware reboot)
+ * DISTRIB - filename of distribution overlay file 
+   (if not the default)
+ * IO_TIME_SERIAL - serial port delay parameter 
+   (if using 'serial' transport)
  * IPADDR - network address of the board
  * LOGIN - specifies the user account to use for Fuego operations
- * PASSWORD - specifies the password for the user account on the board used by Fuego
+ * PASSWORD - specifies the password for the user account on the board 
+   used by Fuego
  * PLATFORM - specifies the toolchain to use for the platform
- * SATA_DEV - specifies a filesystem device node (on the board) for SATA filesystem tests
- * SATA_MP - specifies a filesystem mount point (on the board) for SATA filesystem tests
- * SERIAL - serial device on host for board's serial console (if using 'serial' transport)
- * SRV_IP - network address of server endpoint, for networking tests (if not the same as the host)
- * SSH_KEY - the absolute path to key file  with ssh key for password-less ssh operations (e.g. "/fuego-ro/board/myboard_id_rsa")
- * SSH_PORT - network port of ssh daemon on board (if using ssh transport)
+ * SATA_DEV - specifies a filesystem device node (on the board) for 
+   SATA filesystem tests
+ * SATA_MP - specifies a filesystem mount point (on the board) 
+   for SATA filesystem tests
+ * SERIAL - serial device on host for board's serial console 
+   (if using 'serial' transport)
+ * SRV_IP - network address of server endpoint, for networking tests 
+   (if not the same as the host)
+ * SSH_KEY - the absolute path to key file  with ssh key for 
+   password-less ssh operations (e.g. "/fuego-ro/board/myboard_id_rsa")
+ * SSH_PORT - network port of ssh daemon on board (if using 
+   ssh transport)
  * TRANSPORT - this specifies the transport to use with the target
- * USB_DEV - specifies a filesystem device node (on the board) for USB filesystem tests
- * USB_MP - specifies a filesystem mount point (on the board) for USB filesystem tests
+ * USB_DEV - specifies a filesystem device node (on the board) for 
+   USB filesystem tests
+ * USB_MP - specifies a filesystem mount point (on the board) for 
+   USB filesystem tests
 
-See :ref:`Adding a Board <adding_board>` for more details about these variables.
+See :ref:`Adding a board <adding_board>` for more details about these 
+variables.
 
 A board may also have additional variables, including variables that
 are used for results evaluation for specific tests.
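To make this concrete, a minimal board file might look something like the following sketch (all of the values are made-up examples for a hypothetical board named 'myboard'; see the variable list above for the meaning of each setting):

```shell
# /fuego-ro/boards/myboard.board  (hypothetical example)

# how Fuego communicates with the board
TRANSPORT="ssh"
IPADDR="192.168.1.50"
SSH_PORT="22"

# account used on the board for Fuego operations
LOGIN="fuego"
PASSWORD="fuego"

# build and execution settings
ARCHITECTURE="arm"
PLATFORM="debian-armhf"
BOARD_TESTDIR="/home/fuego"
```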
@@ -102,25 +119,30 @@ The overlay system is described in greater detail here:
 Stored variables 
 =======================
 
-Stored board variables are test variables that are defined on a per-board
-basis, and can be modified and managed under program control.
+Stored board variables are test variables that are defined on a
+per-board basis, and can be modified and managed under program
+control.
 
 Stored variables allow the Fuego system, a test, or a user to store
-information that can be used by tests.  This essentially
-creates an information cache about the board, that can be both
-manually and programmatically generated and managed.
-
-The information that needs to be held for a particular board depends on the tests that are installed in the system. Thus the system needs to support
-ad-hoc collections of variables.  Just putting everything into the static board
-file would not scale, as the number of tests increases.
-
-''Note: the LAVA test framework has a similar concept called a *board dictionary*.''
-
-One use case for this to have a "board setup" test, that scans for lots of 
-different items, and populates the stored variables with values that
-are used by other tests.  Some items that are useful to know about a board
-take time to discover (using e.g. 'find' on the target board), and using
-a board dynamic variable can help reduce the time required to check these items.
+information that can be used by tests.  This essentially creates an
+information cache about the board, that can be both manually and
+programmatically generated and managed.
+
+The information that needs to be held for a particular board depends
+on the tests that are installed in the system. Thus the system needs
+to support ad-hoc collections of variables.  Just putting everything
+into the static board file would not scale, as the number of tests
+increases.
+
+*Note: the LAVA test framework has a similar concept called*
+*a board dictionary.*
+
+One use case for this is to have a "board setup" test that scans for
+lots of different items, and populates the stored variables with
+values that are used by other tests.  Some items that are useful to
+know about a board take time to discover (using e.g. 'find' on the
+target board), and using a board dynamic variable can help reduce the
+time required to check these items.
 
 The board stored variables are kept in the file:
  * /fuego-rw/boards/$BOARD.vars
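+
+For illustration, a stored variables file might contain simple
+name=value assignments like the following (the variable names and
+values here are hypothetical): ::
+
+	PROGRAM_BC="/usr/bin/bc"
+	SATA_DEV="/dev/sda1"
+	SATA_MP="/mnt/sata"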
@@ -130,21 +152,27 @@ These variables are included in the test by the overlay generator.
 Commands for interacting with stored variables 
 ====================================================
 
-A user or a test can manipulate a board stored variable using the ftc command.
-The following commands can be used to set, query and delete variables:
+A user or a test can manipulate a board stored variable using the ftc
+command.  The following commands can be used to set, query and delete
+variables:
 
- *  **tc query-board** - to see test variables (both regular board variables and stored variables)
+ *  **ftc query-board** - to see test variables (both regular board 
+    variables and stored variables)
  *  **ftc set-var** - to add or update a stored variable
  *  **ftc delete-var** - to delete a stored variable
 
 ftc query-board
 ------------------
 
-'ftc query-board' is used to view the variables associated with a Fuego board.
-You can use the command to see all the variables, or just a single variable.
+'ftc query-board' is used to view the variables associated with a
+Fuego board.  You can use the command to see all the variables, or
+just a single variable.
 
-Note that 'ftc query-board' shows the variables for a test that come from both the board file and board stored variables file (that is, both 'static' board
-variables and stored variables).  It does not show variables which come from testplans or spec files, as those are specific to a test.
+Note that 'ftc query-board' shows the variables for a test that come
+from both the board file and board stored variables file (that is,
+both 'static' board variables and stored variables).  It does not show
+variables which come from testplans or spec files, as those are
+specific to a test.
 
 The usage is:
  * ftc query-board <board> [-n <VARIABLE>]
@@ -154,18 +182,20 @@ Examples:
  $ ftc query-board myboard -n PROGRAM_BC
 
 The first example would show all board variables, including functions.
-The second example would show only the variable PROGRAM_BC, if it existed, for board 'myboard'.
+The second example would show only the variable PROGRAM_BC, if it
+existed, for board 'myboard'.
 
 ftc set-var
 ------------
 
-'ftc set-var' allows setting or updating the value of a board stored variable.
+'ftc set-var' allows setting or updating the value of a board stored
+variable.
 
 The usage is:
  * ftc set-var <board> <VARIABLE>=<value>
 
-By convention, variable names are all uppercase, and function names are
-lowercase, with words separated by underscores.
+By convention, variable names are all uppercase, and function names
+are lowercase, with words separated by underscores.
 
 Example:
 $ ftc set-var myboard PROGRAM_BC=/usr/bin/bc
@@ -181,40 +211,43 @@ Example:
 Example usage
 ==============
 
-Functional.fuego_board_check could detect the path for the 'foo' binary,
-(e.g. is_on_target foo PROGRAM_FOO)
-and call 'ftc set-var $NODE_NAME PROGRAM_FOO=$PROGRAM_FOO'.
-This would stay persistently defined as a test variable, so other
-tests could use $PROGRAM_FOO (with assert_defines, or in
-'report' or 'cmd' function calls.)
+Functional.fuego_board_check could detect the path for the 'foo'
+binary (e.g. is_on_target foo PROGRAM_FOO) and call 'ftc set-var
+$NODE_NAME PROGRAM_FOO=$PROGRAM_FOO'.  This would stay persistently
+defined as a test variable, so other tests could use $PROGRAM_FOO
+(with assert_defines, or in 'report' or 'cmd' function calls).
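+
+As a sketch, the relevant commands inside such a board-check test
+might look like this (the test body is hypothetical; 'is_on_target'
+and 'ftc set-var' are used as described above): ::
+
+	# look for 'foo' on the board; sets PROGRAM_FOO if found
+	is_on_target foo PROGRAM_FOO
+	# persist the result as a board stored variable
+	ftc set-var $NODE_NAME PROGRAM_FOO=$PROGRAM_FOO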
 
 
 Example Stored variables
 =========================
 
-Here are some examples of variables that can be kept as stored variables,
-rather than static variables from the board file:
+Here are some examples of variables that can be kept as stored
+variables, rather than static variables from the board file:
 
  * SATA_DEV = Linux device node for SATA file system tests
  * SATA_MP = Linux mount point for SATA file system tests
- * LTP_OPEN_POSIX_SUBTEST_COUNT_POS = expected number of pass results for LTP OpenPosix test
- * LTP_OPEN_POSIX_SUBTEST_COUNT_NEG = expected number of fail results for LTP OpenPosix test
+ * LTP_OPEN_POSIX_SUBTEST_COUNT_POS = expected number of pass results 
+   for LTP OpenPosix test
+ * LTP_OPEN_POSIX_SUBTEST_COUNT_NEG = expected number of fail results 
+   for LTP OpenPosix test
  * PROGRAM_BC = path to 'bc' program on the target board
- * MAX_REBOOT_RETRIES = number of retries to use when rebooting a board
+ * MAX_REBOOT_RETRIES = number of retries to use when rebooting a 
+   board
 
 ===================
 Spec variables 
 ===================
-A test spec can define one or more variables to be used with a test.  These are commonly
-used to control test variations, and are specified in a spec.json file.
+A test spec can define one or more variables to be used with a test.  
+These are commonly used to control test variations, and are specified 
+in a spec.json file.
 
 When a spec file defines a variable associated with a named test spec,
-the variable is read by the overlay generator on test execution, and the
-variable name is prefixed with the name of the test, and converted to
-all upper case.
+the variable is read by the overlay generator on test execution, and
+the variable name is prefixed with the name of the test, and converted
+to all upper case.
 
-For example, support a test called "Functional.foo" had a test spec that
-defined the variable 'args' with a line
+For example, suppose a test called "Functional.foo" had a test spec
+that defined the variable 'args' with a line
 like the following in its spec.json file: ::
 
 	 "default": {
@@ -223,9 +256,11 @@ like the following in its spec.json file: ::
 
 
 When the test was run with this spec (the "default" spec), then the
-variable FUNCTIONAL_FOO_ARGS would be defined, with the value "-v -p2".
+variable FUNCTIONAL_FOO_ARGS would be defined, with the value
+"-v -p2".
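+
+For reference, a complete spec.json for this hypothetical test might
+look like the following (the "testName" and "specs" layout shown here
+is a sketch): ::
+
+	{
+	    "testName": "Functional.foo",
+	    "specs": {
+	        "default": {
+	            "args": "-v -p2"
+	        }
+	    }
+	}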
 
-See  :ref:`Test_Specs_and_Plans <test_specs_and_plans>` for more information about specs and plans.
+See  :ref:`Test_Specs_and_Plans <test_specs_and_plans>` for more
+information about specs and plans.
 
 Note that spec variables are overridden by dynamic variables.
 
@@ -233,21 +268,22 @@ Note that spec variables are overridden by
 Dynamic variables
 =========================
 
-Another category of variables used during testing are dynamic variables.
-These variables are defined on the command line of 'ftc run-test' using
-the '--dynamic-vars' option.
+Another category of variables used during testing are dynamic
+variables.  These variables are defined on the command line of 'ftc
+run-test' using the '--dynamic-vars' option.
 
 The purpose of these variables is to allow scripted variations when
-running 'ftc run-test'  The scripted variables are processed and presented
-the same way as Spec variables, which is to say that the variable name
-is prefixed with the test name, and converted to all upper case.
+running 'ftc run-test'.  The scripted variables are processed and
+presented the same way as Spec variables, which is to say that the
+variable name is prefixed with the test name, and converted to all
+upper case.
 
 For example, if the following command was issued:
 
 * ftc run-test -b beaglebone -t Functional.foo --dynamic-vars *ARGS=-p*
 
-then during test execution the variable *FUNCTIONAL_FOO_ARGS* would be defined
-with the value *-p*.
+then during test execution the variable *FUNCTIONAL_FOO_ARGS* would be
+defined with the value *-p*.
 
 See :ref:`Dynamic Variables <dynamic_variables>` for more information.
 
@@ -255,10 +291,12 @@ See :ref:`Dynamic Variables <dynamic_variables>` for more information.
 Variable precedence 
 ========================
 
-Here is the precedence of variable definition for Fuego, during test execution:
+Here is the precedence of variable definition for Fuego, during test
+execution:
 
 (from lowest to highest)
- * environment variable (from Jenkins or shell where 'ftc run-test' is invoked)
+ * environment variable (from Jenkins or shell where 'ftc run-test' is 
+   invoked)
  * board variable (from fuego-ro/boards/$BOARD.board file)
  * stored variable (from fuego-rw/boards/$BOARD.vars file)
  * spec variable (from spec.json file)
@@ -266,9 +304,9 @@ Here is the precedence of variable definition for Fuego, during test execution:
  * core variable (from Fuego scripts)
  * fuego_test variable (from fuego_test.sh)
 
-Spec and dynamic variables are prefixed with the test name, and converted
-to upper case.  That tends to keep them in a separate name space from the
-rest of the test variables.
+Spec and dynamic variables are prefixed with the test name, and 
+converted to upper case.  That tends to keep them in a separate name 
+space from the rest of the test variables.
 
 
 
diff --git a/docs/rst_src/Using_Batch_tests.rst b/docs/rst_src/Using_Batch_tests.rst
index 985221c..176dfd4 100644
--- a/docs/rst_src/Using_Batch_tests.rst
+++ b/docs/rst_src/Using_Batch_tests.rst
@@ -5,28 +5,31 @@ Using Batch Tests
 ##########################
 
 
-A "batch test" in Fuego is a Fuego test that runs a series of other tests
-as a group.  The results of the individual tests are consolidated into
-a list of testcase results for the batch test.
+A "batch test" in Fuego is a Fuego test that runs a series of other
+tests as a group.  The results of the individual tests are
+consolidated into a list of testcase results for the batch test.
 
-Prior to Fuego version 1.5, there was a different feature, called "testplans",
-which allowed users to compose sets of tests into logical groups, and run
-them together.  The batch test system, introduced in Fuego version 1.5
-replaces the testplan system.
+Prior to Fuego version 1.5, there was a different feature, called
+"testplans", which allowed users to compose sets of tests into logical
+groups, and run them together.  The batch test system, introduced in
+Fuego version 1.5 replaces the testplan system.
 
 =============================
 How to make a batch test
 =============================
  
-A batch test consists of a Fuego test that runs other tests.  A Fuego batch
-test is similar to other Fuego tests, in that the test definition lives
-in fuego-core/tests/<test-name>, and it consists of a fuego_test.sh file,
-a spec file, a parser.py, a test.yaml file and possibly other files.
+A batch test consists of a Fuego test that runs other tests.  A Fuego
+batch test is similar to other Fuego tests, in that the test
+definition lives in fuego-core/tests/<test-name>, and it consists of a
+fuego_test.sh file, a spec file, a parser.py, a test.yaml file and
+possibly other files.
 
-The difference is that a Fuego batch test runs other Fuego tests, as a group.
-The batch test has a few elements that are different from other tests.
+The difference is that a Fuego batch test runs other Fuego tests, as a
+group.  The batch test has a few elements that are different from
+other tests.
 
-Inside the fuego_test.sh file, a batch test must define two main elements:
+Inside the fuego_test.sh file, a batch test must define two main
+elements:
 
  * the testplan element
  * the test_run function, with commands to run other tests
@@ -34,17 +37,19 @@ Inside the fuego_test.sh file, a batch test must define two main elements:
 Testplan element
 =========================
 
-The testplan element consists of data assigned to the shell variable BATCH_TESTPLAN. This variable contains lines
-that specify, in machine-readable form, the tests that are part of the batch job.
-The testplan is specified in json format, and is used to specify the
-attributes (such as timeout, flags, and specs) for each test.
-The testplan element is used by 'ftc add-jobs' to create Jenkins jobs for
+The testplan element consists of data assigned to the shell variable
+BATCH_TESTPLAN. This variable contains lines that specify, in
+machine-readable form, the tests that are part of the batch job.  The
+testplan is specified in json format, and is used to specify the
+attributes (such as timeout, flags, and specs) for each test.  The
+testplan element is used by 'ftc add-jobs' to create Jenkins jobs for
 each sub-test that is executed by this batch test.
 
-The BATCH_TESTPLAN variable must be defined in the fuego_test.sh file. The
-definition must begin with a
-line starting with the string 'BATCH_TESTPLAN=' and end with a line starting with the string 'END_TESTPLAN'.
-By convention this is defined as a shell "here document", like this example: ::
+The BATCH_TESTPLAN variable must be defined in the fuego_test.sh file.
+The definition must begin with a line starting with the string
+'BATCH_TESTPLAN=' and end with a line starting with the string
+'END_TESTPLAN'.  By convention this is defined as a shell "here
+document", like this example: ::
 
 
 	BATCH_TESTPLAN=$(cat <<END_TESTPLAN
@@ -59,13 +64,17 @@ By convention this is defined as a shell "here document", like this example: ::
 	)
 
 
-The lines of the testplan follow the format described at :ref:`Testplan_Reference <testplan_reference>`.  Please see that page for details about the plan fields and structure (the schema for the testplan data).
+The lines of the testplan follow the format described at
+:ref:`Testplan_Reference <testplan_reference>`.  Please see that page
+for details about the plan fields and structure (the schema for the
+testplan data).
 
 test_run function
 ====================
 
-The other element in a batch test's fuego_test.sh is a test_run function.
-This function is used to actually execute the tests in the batch.
+The other element in a batch test's fuego_test.sh is a test_run
+function.  This function is used to actually execute the tests in the
+batch.
 
 There are two functions that are available to help with this:
 
@@ -76,14 +85,17 @@ The body of the test_run function for a batch test usually has a few
 common elements:
 
  * setting of the FUEGO_BATCH_ID
- * execution of the sub-tests, using a call to the function :ref:`run_test <func_run_test>` for each one
+ * execution of the sub-tests, using a call to the function 
+   :ref:`run_test <func_run_test>` for each one
 
-Here are the commands in the test_run function for the test ``Functional.batch_hello``: ::
+Here are the commands in the test_run function for the test
+``Functional.batch_hello``: ::
 
 	function test_run {
 		  export TC_NUM=1
 		  DEFAULT_TIMEOUT=3m
-		  export FUEGO_BATCH_ID="hello-$(allocate_next_batch_id)"
+		  export FUEGO_BATCH_ID="hello-$(allocate_next_batch_id)"
 
 		  # don't stop on test errors
 		  set +e
@@ -101,19 +113,22 @@ Here are the commands in the test_run function for the test ``Functional.batch_h
 Setting the batch_id
 ----------------------------
 
-Fuego uses a 'batch id' to indicate that a group of test runs are related.
-Since a single Fuego test can be run in many different ways (e.g. from
-the command line or from Jenkins, triggered manually or automatically,
-or as part of one batch test or another), it is helpful for the run data for
-a test to be assigned a batch_id that can be used to generate reports or visualize data for the group of tests that are part of the batch.
-
-A batch test should set the FUEGO_BATCH_ID for the run to a unique string
-for that run of the batch test.  Each sub-test will store the batch id 
-in its run.json file, and this can be used to filter run data in subsequent
-test operations.  The Fuego system can provide a unique number, via the
-routine :ref:`allocate_next_batch_id <func_allocate_next_batch_id>`.  By
-convention, the batch_id for a test is created by combining a test-specific
-prefix string with the number returned from ``allocate_next_batch_id``.
+Fuego uses a 'batch id' to indicate that a group of test runs are
+related.  Since a single Fuego test can be run in many different ways
+(e.g. from the command line or from Jenkins, triggered manually or
+automatically, or as part of one batch test or another), it is helpful
+for the run data for a test to be assigned a batch_id that can be used
+to generate reports or visualize data for the group of tests that are
+part of the batch.
+
+A batch test should set the FUEGO_BATCH_ID for the run to a unique
+string for that run of the batch test.  Each sub-test will store the
+batch id in its run.json file, and this can be used to filter run data
+in subsequent test operations.  The Fuego system can provide a unique
+number, via the routine :ref:`allocate_next_batch_id
+<func_allocate_next_batch_id>`.  By convention, the batch_id for a
+test is created by combining a test-specific prefix string with the
+number returned from ``allocate_next_batch_id``.
 
 In the example above, the prefix used is 'hello-', and this would be
 followed by a number returned by allocate_next_batch_id.
@@ -121,74 +136,82 @@ followed by a number returned by allocate_next_batch_id.
 Executing sub-tests
 ----------------------
 
-The :ref:`run_test <func_run_test>` function is used to execute the sub-tests
-that are part of the batch.  The other portions of the example above
-show setting various shell variables that are used by 'run_test', and
-turning off 'errexit' mode while the sub-tests are running.
+The :ref:`run_test <func_run_test>` function is used to execute the
+sub-tests that are part of the batch.  The other portions of the
+example above show setting various shell variables that are used by
+'run_test', and turning off 'errexit' mode while the sub-tests are
+running.
 
-In the example above, TC_NUM, TC_NAME, and DEFAULT_TIMEOUT are used for
-various effects.  These variables are optional, and in most cases a
-batch test can be written without having to set them.  Fuego will generate
-automatic strings or values for these variables if they are not defined
-by the batch test.
+In the example above, TC_NUM, TC_NAME, and DEFAULT_TIMEOUT are used
+for various effects.  These variables are optional, and in most cases
+a batch test can be written without having to set them.  Fuego will
+generate automatic strings or values for these variables if they are
+not defined by the batch test.
 
-Please see the documentation for :ref:`run_test <func_run_test>` for details
-about the environment and arguments used when calling the function.
+Please see the documentation for :ref:`run_test <func_run_test>` for
+details about the environment and arguments used when calling the
+function.
 
 Avoiding stopping on errors
 ----------------------------------------
 
-The example above shows use of 'set +e' and 'set -e' to control the shell's
-'errexit' mode.  By default, Fuego runs tests with the shell errexit
-mode enabled.  However, a batch test should anticipate that some of its
-sub-tests might fail.  If you want all of the tests in the batch to run,
-even if some of them fail, they you should use 'set +e' to disable
-errexit mode, and 'set -e' to re-enable it when you are done.
-
-Of course, if you want the batch test to stop if one of the sub-tests fails, they you should control the errexit mode accordingly (for example, leaving it
-set during all sub-test executions, or disabling it or enabling it
-only during the execution of particular sub-tests).
-
-Whether to manipulate the shell errexit mode or not depends on what the
-batch test is doing.  If it is implementing a sequence of dependent test
-stages, the errexit mode should be left enabled.  If a batch test is 
-implementing a series of unrelated, independent tests, the errexit mode
-should be disabled and re-enabled as shown.
+The example above shows use of 'set +e' and 'set -e' to control the
+shell's 'errexit' mode.  By default, Fuego runs tests with the shell
+errexit mode enabled.  However, a batch test should anticipate that
+some of its sub-tests might fail.  If you want all of the tests in the
+batch to run, even if some of them fail, then you should use 'set +e'
+to disable errexit mode, and 'set -e' to re-enable it when you are
+done.
+
+Of course, if you want the batch test to stop if one of the sub-tests
+fails, then you should control the errexit mode accordingly (for
+example, leaving it set during all sub-test executions, or disabling
+it or enabling it only during the execution of particular sub-tests).
+
+Whether to manipulate the shell errexit mode or not depends on what
+the batch test is doing.  If it is implementing a sequence of
+dependent test stages, the errexit mode should be left enabled.  If a
+batch test is implementing a series of unrelated, independent tests,
+the errexit mode should be disabled and re-enabled as shown.
 
 ================
 test output
 ================
 
 The run_test function logs test results in a format similar to TAP13.
-This consists
-of the test output, followed by a line starting with the batch id
-(inside double brackets), then "ok" or "not ok" to indicate the sub-test result, followed by the testcase number and testcase name.
+This consists of the test output, followed by a line starting with the
+batch id (inside double brackets), then "ok" or "not ok" to indicate
+the sub-test result, followed by the testcase number and testcase
+name.
 
-A standard parser.py for this syntax is available and used by other 
-batch tests in the system (See fuego-core/tests/Functional.batch_hello/parser.py)
+A standard parser.py for this syntax is available and used by other
+batch tests in the system (See
+fuego-core/tests/Functional.batch_hello/parser.py)
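+
+For example, with a hypothetical batch id of "hello-12", the result
+lines in the log might look like: ::
+
+	[[hello-12]] ok 1 Functional.hello_world
+	[[hello-12]] not ok 2 Functional.bc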
 
 ========================================
 Preparing the system for a batch job 
 ========================================
 
-In order to run a batch test from Jenkins, you need to define a Jenkins
-job for the batch test, and jobs for all of the sub-tests that are called
-by the batch test.
+In order to run a batch test from Jenkins, you need to define a
+Jenkins job for the batch test, and jobs for all of the sub-tests that
+are called by the batch test.
 
-You can use 'ftc add-jobs' with the batch test, and Fuego will create the
-job for the batch test itself as well as jobs for all of its sub-tests.
+You can use 'ftc add-jobs' with the batch test, and Fuego will create
+the job for the batch test itself as well as jobs for all of its
+sub-tests.
 
-It is possible to run a batch test from the command line using 'ftc run-test', without
-creating Jenkins jobs.  However if you want to see the results of the test in
-the Jenkins interface, then the Jenkins test jobs need to be defined prior to
-running the batch test from the command line.
+It is possible to run a batch test from the command line using 'ftc
+run-test', without creating Jenkins jobs.  However if you want to see
+the results of the test in the Jenkins interface, then the Jenkins
+test jobs need to be defined prior to running the batch test from the
+command line.
 
 ===========================
 Executing a batch test 
 ===========================
 
-A batch test is executed the same way as any other Fuego test.
-Once installed as a Jenkins job, you can execute it using the Jenkins
+A batch test is executed the same way as any other Fuego test.  Once
+installed as a Jenkins job, you can execute it using the Jenkins
 interface (manually), or use Jenkins features to cause it to trigger
 automatically.  Or, you can run the test from the command line using
 'ftc run-test'.
@@ -205,25 +228,29 @@ You can view results from a batch test in two ways:
 Jenkins batch test results tables
 =====================================
 
-Inside the Jenkins interface, a batch job will display the list of sub-tests,
-and the PASS/FAIL status of each one.  In addition, if there is a Jenkins
-job associated with a particular sub-test, there will be a link in the
-table cell for that test run, that you can click to see that individual
-test's result and data in the Jenkins interface.
+Inside the Jenkins interface, a batch job will display the list of
+sub-tests, and the PASS/FAIL status of each one.  In addition, if
+there is a Jenkins job associated with a particular sub-test, there
+will be a link in the table cell for that test run, that you can click
+to see that individual test's result and data in the Jenkins
+interface.
 
 
 Generating a report 
 ======================
 
-You can view a report for a batch test, by specifying the batch_id with
-the  'ftc gen-report' command.
+You can view a report for a batch test by specifying the batch_id
+with the 'ftc gen-report' command.
 
-To determine the batch_id, look at the log for the batch test (testlog.txt file).  Or, generate a report listing the batch_ids for the batch test, like so:
+To determine the batch_id, look at the log for the batch test
+(testlog.txt file).  Or, generate a report listing the batch_ids for
+the batch test, like so:
  
- * $ ``ftc gen-report --where test=batch_<name> --fields timestamp,batch_id``
+ * $ ``ftc gen-report --where test=batch_<name> --fields timestamp,batch_id``
 
-Select an appropriate batch_id from the list that appears, and note it for
-use in the next command.
+Select an appropriate batch_id from the list that appears, and note it 
+for use in the next command.
 
 Now, to see the results from the individual sub-tests in the batch, use
 the desired batch_id as part of a ''where'' clause, like so:
@@ -239,20 +266,21 @@ Miscelaneous notes
 Timeouts
 ==========
 
-The timeout for a batch test should be long enough for all sub-tests to complete.  When a batch test is launched from Jenkins, the board on which
-it will run is reserved and will be unavailable for tests until the entire
-batch is complete.  Keep this in mind when executing batch tests that 
-call sub-tests that have a long duration.
+The timeout for a batch test should be long enough for all sub-tests
+to complete.  When a batch test is launched from Jenkins, the board on
+which it will run is reserved and will be unavailable for tests until
+the entire batch is complete.  Keep this in mind when executing batch
+tests that call sub-tests that have a long duration.
 
 The timeout for individual sub-tests can be specified multiple ways.
 First, the timeout listed in the testplan (embedded in fuego_test.sh
-as the BATCH_TESTPLAN variable) is the one assigned to the Jenkins
-job for the sub-test, when jobs are created during test installation into Jenkins.
-These take effect when a sub-test is run independently from the batch
-test.
+as the BATCH_TESTPLAN variable) is the one assigned to the Jenkins job
+for the sub-test, when jobs are created during test installation into
+Jenkins.  These take effect when a sub-test is run independently from
+the batch test.
 
-If you want to specify a non-default timeout for a test, then you
-must use a --timeout argument to the run_test function, for that sub-test.
+If you want to specify a non-default timeout for a test, then you must
+use a --timeout argument to the run_test function, for that sub-test.
 
 
 
diff --git a/docs/rst_src/Using_the_qemuarm_target.rst b/docs/rst_src/Using_the_qemuarm_target.rst
index 02cdcc6..0978199 100644
--- a/docs/rst_src/Using_the_qemuarm_target.rst
+++ b/docs/rst_src/Using_the_qemuarm_target.rst
@@ -7,8 +7,8 @@ Using the qemuarm target
 Here are some quick instructions for using the qemuarm target that is
 preinstalled in fuego.
 
-Fuego does not ship with a qemuarm image in the repository, but assumes
-that you have built one with the Yocto Project.
+Fuego does not ship with a qemuarm image in the repository, but
+assumes that you have built one with the Yocto Project.
 
 If you don't have one lying around, you will need to build one.  Then
 you should follow the other steps on this  page to configure it to run
@@ -18,14 +18,18 @@ with Fuego.
 Build a qemuarm image
 =========================
 
-Here are some quick steps for building a qemuarm image using the Yocto Project:
-(See the `Project Quick Start <http://www.yoctoproject.org/docs/2.1/yocto-project-qs/yocto-project-qs.html|Yocto>`_, for more information)
+Here are some quick steps for building a qemuarm image using the
+Yocto Project (see the `Yocto Project Quick Start
+<http://www.yoctoproject.org/docs/2.1/yocto-project-qs/yocto-project-qs.html>`_
+for more information).
 
 Note that these steps are for Ubuntu.
 
  * make sure you have required packages for building the software
 
-   * sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat libsdl1.2-dev xterm
+   * sudo apt-get install gawk wget git-core diffstat unzip texinfo 
+     gcc-multilib build-essential chrpath socat libsdl1.2-dev xterm
 
  * install the qemu software
 
@@ -41,7 +45,8 @@ Note that these steps are for Ubuntu.
    * source oe-init-build-env build-qemuarm build-qemuarm
    * edit conf/local.conf
 
-     * Under the comment about "Machine Selection", uncomment the line 'MACHINE ?= "qemuarm"'
+     * Under the comment about "Machine Selection", uncomment the line
+       'MACHINE ?= "qemuarm"'
 
  * build a minimal image (this will take a while)
 
@@ -74,8 +79,9 @@ Of course, substitute the correct IP address in the commands above.
 
 Once you know that things are working, directly connecting from the
 host to the qemuarm image, make sure the correct values are in the
-qemu-arm.board file.  You can edit this file inside the fuego container
-at /fuego-ro/boards/qemu-arm.board, or on your host in fuego-ro/boards/qemu-arm.board
+qemu-arm.board file.  You can edit this file inside the fuego
+container at /fuego-ro/boards/qemu-arm.board, or on your host in
+fuego-ro/boards/qemu-arm.board.
 
 Here are the values you should set:
 
@@ -88,21 +94,23 @@ Here are the values you should set:
 Test building software
 ==========================
 
-It is important to be able to build the test software for the image you
-are using with qemu.
+It is important to be able to build the test software for the image
+you are using with qemu.
 
 The toolchain used to compile programs for a board is controlled via
 the PLATFORM variable in the board file.  Currently the qemu-arm.board
-file specifies PLATFORM="qemu-armv7hf".  Unfortunately, in my own testing
-that toolchain won't produce a binary that runs with a core-image-minimal
-image from YP Poky.
+file specifies PLATFORM="qemu-armv7hf".  Unfortunately, in my own
+testing that toolchain won't produce a binary that runs with a
+core-image-minimal image from YP Poky.
 
 You may need to install your Yocto Project SDK into fuego, in order to
 successfully build programs for the platforms.
 
-See :ref:`Adding a toolchain <addtoolchain>` for information about how to do that.
+See :ref:`Adding a toolchain <addtoolchain>` for information about how
+to do that.
 
-Try building a simple program, like hello_world, as a test for the new system, and see what happens.
+Try building a simple program, like hello_world, as a test for the new
+system, and see what happens.
 
 
 
diff --git a/docs/rst_src/Working_with_remote_boards.rst b/docs/rst_src/Working_with_remote_boards.rst
index 40f377b..7e4e325 100644
--- a/docs/rst_src/Working_with_remote_boards.rst
+++ b/docs/rst_src/Working_with_remote_boards.rst
@@ -4,69 +4,78 @@
 Working with remote boards
 ###################################
 
-Here are some general tips for working with remote boards
-(that is, boards in remote labs)
+Here are some general tips for working with remote boards (that is,
+boards in remote labs).
 
 ==========================
 using a jump server
 ==========================
 
-If you have an SSH jump server, then you can access
-machine directly in another lab, using the ssh ProxyCommand
-in the host settings for a board.
+If you have an SSH jump server, then you can access a machine in
+another lab directly, using the ssh ProxyCommand option in the host
+settings for a board.
 
 I found this page to be helpful:
 `<https://www.tecmint.com/access-linux-server-using-a-jump-host/>`_
 
-You should try to make each leg of the jump (from local machine to jump server,
-and from jump server to remote machine) password-less.
+You should try to make each leg of the jump (from local machine to
+jump server, and from jump server to remote machine) password-less.
 
-I found that if my local machine's public key was in the remote machine's
-authorized keys file, then I could log in without a password, even if
-the jump server's public key was not in the remote machine's authorized keys
-file.
+I found that if my local machine's public key was in the remote
+machine's authorized keys file, then I could log in without a
+password, even if the jump server's public key was not in the remote
+machine's authorized keys file.
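For illustration, the corresponding ~/.ssh/config entries might look
like the following (the host names, user names and addresses here are
hypothetical):

```
# reach the jump server directly
Host jumpbox
    HostName jump.example.com
    User labuser

# reach the remote board by tunneling through the jump server
Host remote-board
    HostName 10.0.0.25
    User root
    ProxyCommand ssh -W %h:%p jumpbox
```

With the local machine's public key in the remote machine's
authorized_keys file, 'ssh remote-board' then works without a
password.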
 
 ==================================
 Using ttc transport remotely
 ==================================
 
-If you have a server that already has ttc configured for a bunch of board,
-you can accomplish a lot just by referencing ttc commands on that server.
+If you have a server that already has ttc configured for a bunch of
+boards, you can accomplish a lot just by referencing ttc commands on
+that server.
 
 For example, in your local ttc.conf, you can put: ::
 
 	PASSWORD=foo
 	USER=myuser
-	SSH_ARGS=-o UserKnownHostsFile=/dev/null -o StrictHostKeychecking=no -o LogLevel=QUIET
+	SSH_ARGS=-o UserKnownHostsFile=/dev/null -o StrictHostKeychecking=no -o LogLevel=QUIET
 
 	pos_cmd=ssh timdesk ttc %%(target)s pos
 	off_cmd=ssh timdesk ttc %%(target)s off
 	on_cmd=ssh timdesk ttc %%(target)s on
 	reboot_cmd=ssh timdesk ttc %%(target)s reboot
 
-	login_cmd=sshpass -p %%(PASSWORD)s ssh %%(SSH_ARGS)s -x %%(USER)s@%%(target)s
-	run_cmd=sshpass -p %%(PASSWORD)s ssh %%(SSH_ARGS)s -x %%(USER)s@%%(target)s "$COMMAND"
-	copy_to_cmd=sshpass -p %%(PASSWORD)s scp %%(SSH_ARGS)s $src %%(USER)s@%%(target)s:/$dest
-	copy_from_cmd=sshpass -p %%(PASSWORD)s scp %%(SSH_ARGS)s %%(USER)s@%%(target)s:/$src $dest
+	login_cmd=sshpass -p %%(PASSWORD)s ssh %%(SSH_ARGS)s -x %%(USER)s@%%(target)s
+	run_cmd=sshpass -p %%(PASSWORD)s ssh %%(SSH_ARGS)s -x %%(USER)s@%%(target)s "$COMMAND"
+	copy_to_cmd=sshpass -p %%(PASSWORD)s scp %%(SSH_ARGS)s $src %%(USER)s@%%(target)s:/$dest
+	copy_from_cmd=sshpass -p %%(PASSWORD)s scp %%(SSH_ARGS)s %%(USER)s@%%(target)s:/$src $dest
 
 
-Please note that 'ttc status <remote-board>' does not work with ttc version 1.4.4.
-This is due to internal usage of %%(ip_addr)s in the function network_status(),
-which will not be correct for the remote-board.
+Please note that 'ttc status <remote-board>' does not work with ttc
+version 1.4.4.  This is due to internal usage of %%(ip_addr)s in the
+function network_status(), which will not be correct for the
+remote-board.
 
 =============================================================
 setting up ssh ProxyCommand in the Fuego docker container 
 =============================================================
 
-Please note that tests in Fuego are executed inside the docker container as user 'jenkins'.
+Please note that tests in Fuego are executed inside the docker
+container as user 'jenkins'.
 
-In order to set up password-less operation, or use of a jump server or ProxyCommand,
-you have to add appropriate items (config and keys) to:
+In order to set up password-less operation, or use of a jump server or
+ProxyCommand, you have to add appropriate items (config and keys) to:
 /var/lib/jenkins/.ssh
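As a sketch of what that setup might look like (the container name
'fuego-container' below is an assumption; use the name of your own
container), the config and keys can be copied in from the host:

```shell
# copy an ssh config and key pair from the host into the container
docker cp ~/.ssh/config fuego-container:/var/lib/jenkins/.ssh/config
docker cp ~/.ssh/id_rsa fuego-container:/var/lib/jenkins/.ssh/id_rsa

# make sure the jenkins user owns the files, with safe permissions
docker exec fuego-container chown -R jenkins:jenkins /var/lib/jenkins/.ssh
docker exec fuego-container chmod 600 /var/lib/jenkins/.ssh/id_rsa
```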
 
-Please note that this may make your docker container a security risk, as it may expose
-your private keys to tests.  Please use caution when adding private keys or other 
-sensitive security information to the docker container.
+Please note that this may make your docker container a security risk,
+as it may expose your private keys to tests.  Please use caution when
+adding private keys or other sensitive security information to the
+docker container.
 
 
 
diff --git a/docs/rst_src/index.rst b/docs/rst_src/index.rst
index e359897..5a3dbff 100644
--- a/docs/rst_src/index.rst
+++ b/docs/rst_src/index.rst
@@ -83,6 +83,7 @@ Index
    :hidden:
 
    Fuego_Quickstart_Guide
+   Quick_Setup_Guide
    Raspberry_Pi_Fuego_Setup
    Using_the_qemuarm_target
 
@@ -169,7 +170,4 @@ Indices and tables
    The following is to hide a warning.
    FrontPage.rst is included rather than referenced using toctree
 
-.. toctree::
-  :hidden:
 
-  FrontPage
diff --git a/docs/rst_src/integration_with_ttc.rst b/docs/rst_src/integration_with_ttc.rst
index 1d53784..1dc484b 100644
--- a/docs/rst_src/integration_with_ttc.rst
+++ b/docs/rst_src/integration_with_ttc.rst
@@ -11,8 +11,8 @@ board farms, and for doing kernel development on multiple different
 target boards at a time (including especially boards with varying
 processors and architectures.)
 
-This page describes how ttc and fuego can be integrated, so that the fuego
-test framework can use 'ttc' as it's transport mechanism.
+This page describes how ttc and fuego can be integrated, so that the
+fuego test framework can use 'ttc' as its transport mechanism.
 
 You can find more information about 'ttc' on the linux wiki at:
 http://elinux.org/Ttc_Program_Usage_Guide
@@ -23,39 +23,51 @@ Outline of supported functionality
 
 Here is a rough outline of the support for 'ttc' in fuego:
 
- * Integration for the tool and helper utilities in the container build
+ * Integration for the tool and helper utilities in the container 
+   build
 
-   * When the docker container is built, ttc is downloaded from github and installed into the docker image.
-   * During this process, the path to the ttc.conf file is changed from /etc/ttc.conf to /fuego-ro/conf/ttc.conf
+   * When the docker container is built, ttc is downloaded from github 
+     and installed into the docker image.
+   * During this process, the path to the ttc.conf file is changed 
+     from /etc/ttc.conf to /fuego-ro/conf/ttc.conf
 
  * 'ttc' is now a valid transport option
 
-   * You can specify ttc as the 'transport' for a board, instead of ssh
+   * You can specify ttc as the 'transport' for a board, instead of 
+     ssh
 
  * ttc now supports -r as an option to the 'ttc cp' command
 
-   * this is required since fuego uses -r extensively to do recursive directory copies (See :ref:`Transport_notes <transport_notes>` for details)
+   * this is required since fuego uses -r extensively to do recursive 
+     directory copies (See :ref:`Transport_notes <transport_notes>` 
+     for details)
 
- * fuego-core has been modified to avoid using wildcards on 'get' operations
+ * fuego-core has been modified to avoid using wildcards on 'get' 
+   operations
 
  * a new test called Functional.fuego_transport has been added
 
-   * this tests use of wildcards, multiple files and directories and directory recursion with the 'put' command.
-   * it also indirectly tests the 'get' command, because logs are obtained during the test.
+   * this tests use of wildcards, multiple files and directories and 
+     directory recursion with the 'put' command.
+   * it also indirectly tests the 'get' command, because logs are 
+     obtained during the test.
 
 
 ==========================
 Supported operations 
 ==========================
 
-ttc has several sub-commands.  Fuego currently only uses the following ttc sub-commands:
+ttc has several sub-commands.  Fuego currently only uses the following
+ttc sub-commands:
 
  * 'ttc run' - to run a command on the target
- * 'ttc cp' - to get a file from the target, and to put files to the target
+ * 'ttc cp' - to get a file from the target, and to put files to the 
+   target
 
-Note that some other commands, such as 'ttc reboot' are not used, in spite of there
-being similar functionality provided in fuego (see
-:ref:`function target reboot <func_target_reboot>` and :ref:`function ov rootfs reboot <func_ov_rootfs_reboot>`.
+Note that some other commands, such as 'ttc reboot', are not used, in
+spite of there being similar functionality provided in fuego (see
+:ref:`function target reboot <func_target_reboot>` and :ref:`function
+ov rootfs reboot <func_ov_rootfs_reboot>`).
 
 Finally, other commands, such as 'ttc get_kernel', 'ttc get_config',
 'ttc kbuild'  and 'ttc kinstall' are not used currently.  These may be
@@ -80,22 +92,32 @@ Steps to use ttc with a target board
 
 Here is a list of steps to set up a target board to use ttc.
 These steps assume you have already added a board to fuego
-following the steps described in :ref:`Adding a Board <adding_board>`.
+following the steps described in :ref:`Adding a board <adding_board>`.
 
- * If needed, create your docker container using 'docker-create-usb-privileged-container.sh
+ * If needed, create your docker container using
+   'docker-create-usb-privileged-container.sh'
 
-    * This may be needed if you are using ttc with board controls that require access to USB devices (such as the Sony debug board)
-    * substitute this command in place of 'docker-create-container.sh' in the `Fuego Quickstart Guide <http://fuegotest.org/wiki/Fuego_Quickstart_Guide#Download,_build,_start_and_access>`_.
+    * This may be needed if you are using ttc with board controls that
+      require access to USB devices (such as the Sony debug board)
+    * substitute this command in place of 'docker-create-container.sh'
+      in the `Fuego Quickstart Guide
+      <http://fuegotest.org/wiki/Fuego_Quickstart_Guide#Download,_build,_start_and_access>`_.
 
- * Make sure that /userdata/conf/ttc.conf has the definitions required for your target board
+ * Make sure that /userdata/conf/ttc.conf has the definitions required
+   for your target board
 
-   * Validate this by doing 'ttc list' to see that the board is present, and 'ttc run' and 'ttc cp' commands, to test that these operations work with the      board, from inside the container.
+   * Validate this by doing 'ttc list' to see that the board is 
+     present, and 'ttc run' and 'ttc cp' commands, to test that these 
+     operations work with the board, from inside the container.
 
- * Edit the fuego board file (found in /userdata/conf/boards/<somthing>.board
+ * Edit the fuego board file (found in
+   /userdata/conf/boards/<something>.board)
 
    * Set the TRANSPORT to 'ttc'
-   * Set the TTC_TARGET variable is set to the name for the target used by ttc
-   * See the following example, for a definition for a target named 'bbb' (for my beaglebone black board)::
+   * Set the TTC_TARGET variable to the name of the target used
+     by ttc
+   * See the following example of a definition for a target named
+     'bbb' (for my beaglebone black board)::
 
 
 	TRANSPORT=ttc
@@ -105,10 +127,11 @@ following the steps described in :ref:`Adding a Board <adding_board>`.
 modify your copy_to_cmd
 ===========================
 
-In your ttc.conf file, you may need to make changes to any copy_to_cmd definitions.  Fuego allows programs to pass a '-r' argument to its internal
-'put' command, which in turn invokes ttc's cp command, with the source as target
-and destination as the host.  In other words, it ends up invokings ttc's
-'copy_from_cmd' for the indicated target.
+In your ttc.conf file, you may need to make changes to any copy_to_cmd
+definitions.  Fuego allows programs to pass a '-r' argument to its
+internal 'put' command, which in turn invokes ttc's cp command, with
+the source as target and destination as the host.  In other words, it
+ends up invoking ttc's 'copy_from_cmd' for the indicated target.
 
 All versions of copy_to_cmd should be modified to
 reference a new environment variable $copy_args.
@@ -122,11 +145,7 @@ use to execute a copy_to_cmd.
 See examples in ttc.conf.sample and ttc.conf.sample2 for usage examples.
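For example, an sshpass-based copy_to_cmd referencing $copy_args might
look like this (a sketch only; the sample files mentioned above are
authoritative):

```
copy_to_cmd=sshpass -p %%(PASSWORD)s scp $copy_args %%(SSH_ARGS)s $src %%(USER)s@%%(target)s:/$dest
```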
 
 
-.. toctree::
-   :hidden:
 
-   Transport_notes
- 
 
 
 
-- 
2.7.4

