[Fuego] [PATCH] docs: .rst files for pages categorized as Explanation, Tutorials and How Tos.

Bird, Tim Tim.Bird at sony.com
Tue Sep 22 22:51:32 UTC 2020


First - thanks for getting this into 'patch in email message body' format.
I know that's a pain to get set up, but it's helpful for me.

One thing to watch out for is trailing spaces on lines, and empty lines
at the bottom of the wiki pages.  There are lots of them.  There is a
bug in my wiki software that adds an extra newline whenever a section
is edited (a bug I've been meaning to fix, but that's another story).
Anyway, a lot of pages have long trails of empty lines.  These can
all be removed.

Also, I'd like to remove all whitespace at the end of lines, as a general
policy.  I think there was a lot of whitespace at the end of lines in the
wiki pages, so I'm not sure whether you introduced some or not
with your edits.  I'd like the following grep to come up empty
on the rst_src directory: 'grep -P " $"'.

After my review of the pages, I'll do a general trailing whitespace
cleanup and push that, so we should be starting with a clean slate
from here on out.
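The check and cleanup can be sketched in shell; the rst_src path below
is a scratch copy for illustration, not the real tree:

```shell
# Make a scratch directory with one offending file (illustrative only).
mkdir -p rst_src
printf 'ends with a space \nclean line\n' > rst_src/sample.rst

# The policy check: this grep should come up empty on a clean tree.
grep -rP ' $' rst_src/ && echo "trailing whitespace found"

# Strip trailing spaces and tabs in place (GNU sed).
find rst_src -name '*.rst' -exec sed -i 's/[ \t]*$//' {} +

# The check now finds nothing.
grep -rP ' $' rst_src/ || echo "clean"
```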

I have found the following setting in my ~/.vimrc file to be helpful
to highlight trailing whitespace while I'm editing:
highlight ExtraWhitespace ctermbg=red guibg=red
autocmd ColorScheme * highlight ExtraWhitespace ctermbg=red guibg=red
match ExtraWhitespace /\s\+$/
autocmd BufWinEnter * match ExtraWhitespace /\s\+$/
autocmd InsertEnter * match ExtraWhitespace /\s\+\%#\@<!$/
autocmd InsertLeave * match ExtraWhitespace /\s\+$/
autocmd BufWinLeave * call clearmatches()

Finally - this is a lot of files to put into a single patch.  If you could
limit this to a more manageable 2 or 3 files per patch, that would
be good.

Please see other comments inline below.

> -----Original Message-----
> From: Pooja <pooja.sm at pathpartnertech.com>
> 
>  convert the following pages from the fuegotest wiki
>  into rst format and add to the Fuegl rst documentation directory:
Fuegl -> Fuego

>  Adding_a_Board, Adding_a_new_test, Adding_a_toolchain,
>  Adding_or_Customizing_a_Distribution, Adding_test_jobs_to_Jenkins,
>  Adding_views_to_Jenkins,Architecture,Artwork, Building_Documentation,
>  FAQ.rst, FrontPage, Fuego_Quickstart_Guide, Fuego_naming_rules,
>  Installing_Fuego, License_And_Contribution_Policy
>  OSS_Test_Vision, Parser_module_API, Quick_Setup_Guide,
>  Raspberry_Pi_Fuego_Setup,Test_variables,
>  Using_Batch_tests, Using_the_qemuarm_target.
> 
> Signed-off-by: Pooja More <pooja.sm at pathpartnertech.com>
> ---
>  docs/rst_src/Adding_a_Board.rst                    | 242 +++++++------
>  docs/rst_src/Adding_a_new_test.rst                 | 230 ++++++++-----
>  docs/rst_src/Adding_a_toolchain.rst                | 224 +++++++++++-
>  .../Adding_or_Customizing_a_Distribution.rst       | 130 ++++---
>  docs/rst_src/Adding_test_jobs_to_Jenkins.rst       | 136 +++++++-
>  docs/rst_src/Adding_views_to_Jenkins.rst           |  68 ++--
>  docs/rst_src/Architecture.rst                      | 375 +++++++++++----------
>  docs/rst_src/Artwork.rst                           |   7 +-
>  docs/rst_src/Building_Documentation.rst            |  25 +-
>  docs/rst_src/FAQ.rst                               |  49 +++
>  docs/rst_src/FrontPage.rst                         |  43 ++-
>  docs/rst_src/Fuego_Quickstart_Guide.rst            | 255 ++++++++++++++
>  docs/rst_src/Fuego_naming_rules.rst                | 126 ++++---
>  docs/rst_src/Installing_Fuego.rst                  | 235 +++++++------
>  docs/rst_src/License_And_Contribution_Policy.rst   | 137 ++++----
>  docs/rst_src/OSS_Test_Vision.rst                   | 349 +++++++++++++++++++
>  docs/rst_src/Parser_module_API.rst                 | 114 ++++---
>  docs/rst_src/Quick_Setup_Guide.rst                 | 161 +++++++++
>  docs/rst_src/Raspberry_Pi_Fuego_Setup.rst          |  73 ++--
>  docs/rst_src/Test_variables.rst                    | 206 ++++++-----
>  docs/rst_src/Using_Batch_tests.rst                 | 240 +++++++------
>  docs/rst_src/Using_the_qemuarm_target.rst          |  38 ++-
>  docs/rst_src/Working_with_remote_boards.rst        |  63 ++--
>  docs/rst_src/index.rst                             |   4 +-
>  docs/rst_src/integration_with_ttc.rst              |  83 +++--
>  25 files changed, 2575 insertions(+), 1038 deletions(-)
>  create mode 100644 docs/rst_src/FAQ.rst
>  create mode 100644 docs/rst_src/Fuego_Quickstart_Guide.rst
>  create mode 100644 docs/rst_src/OSS_Test_Vision.rst
>  create mode 100644 docs/rst_src/Quick_Setup_Guide.rst
> 
> diff --git a/docs/rst_src/Adding_a_Board.rst b/docs/rst_src/Adding_a_Board.rst
> index 5ad2a10..8bed1ac 100644
> --- a/docs/rst_src/Adding_a_Board.rst
> +++ b/docs/rst_src/Adding_a_Board.rst
> @@ -1,7 +1,8 @@
>  .. _adding_board:
> 
> +
>  #################
> -Adding a Board
> +Adding a board
>  #################
> 
>  ==============
> @@ -10,25 +11,28 @@ Overview
> 
>  To add your own board to Fuego, there are five main steps:
> 
> - * 1. Make sure you can access the target via ssh, serial or some other connection
> - * 2. Decide whether to use an existing user account, or to create a user account specifically for testing
> - * 3. create a test directory on the target
> + * 1. Make sure you can access the target via ssh, serial or some
> +   other connection
> + * 2. Decide whether to use an existing user account, or to create a
> +   user account specifically for testing
> + * 3. create a test directory on the target
>   * 4. create a board file (on the host)
>   * 5. add your board as a node in the Jenkins interface

When converting numbered lists from tbwiki markup to rst, you
can drop the leading bullets.  tbwiki handling of numbered lists
was weird, so I used bullets to compensate, but Sphinx handles
them better, so it would be good to get rid of the leading bullets
on lists like these.
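For example, a plain rst enumerated list for the steps above might look
like this (rst also accepts '#.' markers if you prefer auto-numbering):

```rst
1. Make sure you can access the target via ssh, serial or some
   other connection
2. Decide whether to use an existing user account, or to create a
   user account specifically for testing
3. Create a test directory on the target
4. Create a board file (on the host)
5. Add your board as a node in the Jenkins interface
```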

> 
>  1 - Set up communication to the target board
>  ==============================================
> 
> -In order for Fuego to test a board, it needs to communicate with it from
> -the host machine where Fuego is running.
> +In order for Fuego to test a board, it needs to communicate with it
> +from the host machine where Fuego is running.
> 
>  The most common way to do this is to use 'ssh' access over a network
>  connection.  The target board needs to run an ssh server, and the host
>  machine connects to it using the 'ssh' client.
> 
> -The method of setting an ssh server up on a board varies from system to system,
> -but sample instructions for setting up an ssh server on a raspberry pi are
> -located here:  :ref:`Raspberry Pi Fuego Setup <raspPiFuegoSetup>`
> +The method of setting an ssh server up on a board varies from system
> +to system, but sample instructions for setting up an ssh server on a
> +raspberry pi are located here:
> +:ref:`Raspberry Pi Fuego Setup <raspPiFuegoSetup>`
> 
>  Another method that can work is to use a serial connection between
>  the host and the board's serial console.  Setting this up is outside
> @@ -40,8 +44,8 @@ package to accomplish this.  I
> 
>  On your target board, a user account is required in order to run tests.
> 
> -The user account used by Fuego is determined by your board file, which you
> -will configure manually in step 4.  You need
> +The user account used by Fuego is determined by your board file, which
> +you will configure manually in step 4.  You need
>  to decide which account to use.  There are three options:
> 
>   * use the root account
> @@ -50,21 +54,23 @@ to decide which account to use.  There are three options:
> 
>  There are pros and cons to each approach.
> 
> -My personal preference is to use the root account.  Several tests in Fuego
> -require root privileges.  If you are working with a test board, that you
> -can re-install easily, using the 'root' account will allow you to run the
> -greatest number of tests.  However, this should not be used to test machines
> -that are in production.  A Fuego test can run all kinds of commands, and
> -you should not trust that tests will not destroy your machine (either
> -accidentally or via some malicious intent).
> +My personal preference is to use the root account.  Several tests in
> +Fuego require root privileges.  If you are working with a test board,
> +that you can re-install easily, using the 'root' account will allow
> +you to run the greatest number of tests.  However, this should not be
> +used to test machines that are in production.  A Fuego test can run
> +all kinds of commands, and you should not trust that tests will not
> +destroy your machine (either accidentally or via some malicious
> +intent).
> 
> -If you don't use 'root', then you can either use an existing account, or
> -create a new account.  In most circumstances it is worthwhile to create a new
> -account dedicated to testing.  However, you may not have sufficient privileges
> -on your board to do this.
> +If you don't use 'root', then you can either use an existing account,
> +or create a new account.  In most circumstances it is worthwhile to
> +create a new account dedicated to testing.  However, you may not have
> +sufficient privileges on your board to do this.
> 
> -In any event, at this point, decide which account you will use for testing
> -with Fuego, and note it to include in the board file, described later.
> +In any event, at this point, decide which account you will use for
> +testing with Fuego, and note it to include in the board file,
> +described later.
> 
> 
>  3 - Create test directory on target
> @@ -98,7 +104,8 @@ Create board file
> 
>  Now, create your board file.
>  The board files reside in <fuego-source-dir>/fuego-ro/boards, and
> -each file has a filename with the name of the board, with the extension ".board".
> +each file has a filename with the name of the board, with the
> +extension ".board".
> 
>  The easiest way to create a board file is to copy an existing one,
>  and edit the variables to match those of your board.  The following
> @@ -129,7 +136,8 @@ with that transport type.
> 
>   * TRANSPORT - this specifies the transport to use with the target
> 
> -   * there are three transport types currently supported: 'ssh', 'serial', 'ttc'
> +   * there are three transport types currently supported: 'ssh',
> +     'serial', 'ttc'
>     * Most boards will use the 'ssh' or 'serial' transport type
>     * ex: TRANSPORT="ssh"
> 
> @@ -146,12 +154,13 @@ For targets using ssh:
>   * SSH_PORT
>   * SSH_KEY
> 
> -IPADDR is the network address of your board.  SSH_PORT is the port where
> -the ssh daemon is listening for connections.  By default this is 22, but
> -you should set this to whatever your target board uses.  SSH_KEY is the
> -absolute path where an SSH key file
> -may be found (to allow password-less access to a target machine).  An
> -example would be:
> +IPADDR is the network address of your board.  SSH_PORT is the port
> +where the ssh daemon is listening for connections.  By default this is
> +22, but you should set this to whatever your target board uses.
> +SSH_KEY is the absolute path where an SSH key file may be found (to
> +allow password-less access to a target machine).
> +
> +An example would be:
> 
>   * SSH_KEY="/fuego-ro/boards/myboard_id_rsa"
> 
> @@ -163,28 +172,40 @@ For targets using serial:
>   * BAUD
>   * IO_TIME_SERIAL
> 
> -SERIAL is serial port name used to access the target from the host.  This
> -is the name of the serial device node on the host (or in the container).
> -this is specified without the /dev/ prefix.  Some examples are:
> +SERIAL is serial port name used to access the target from the host.
> +This is the name of the serial device node on the host (or in the
> +container).this is specified without the /dev/ prefix.
> +
> +Some examples are:
> 
>   * ttyACM0
>   * ttyACM1
>   * ttyUSB0
> 
> -BAUD is the baud-rate used for the serial communication, for eg. "115200".
> +BAUD is the baud-rate used for the serial communication, for eg.
> +"115200".
> 
> -IO_TIME_SERIAL is the time required to catch the command's response from the target. This is specified as a decimal fraction of a second,
> and is usually
> -very short.  A time that usually works is "0.1" seconds.
> +IO_TIME_SERIAL is the time required to catch the command's response
> +from the target. This is specified as a decimal fraction of a second,
> +and is usually very short.  A time that usually works is "0.1"
> +seconds.
> 
>   * ex: IO_TIME_SERIAL="0.1"
> 
> -This value directly impacts the speed of operations over the serial port, so
> -it should be adjusted with caution.  However, if you find that some operations
> -are not working over the serial port, try increasing this value (in small increments - 0.15, 0.2, etc.)
> -
> -*Note: In the case of TRANSPORT="serial", Please make sure that docker container and Fuego have sufficient permissions to access the
> specified serial port. You may need to modify docker-create-usb-privileged-container.sh prior to making your docker image, in order to
> make sure the container can access the ports.  Also, if check that the host filesystem permissions on the device node (e.g /dev/ttyACM0
> allows access. From inside the container
> -you can try using the sersh or sercp commands directly, to test access to
> -the target.*
> +This value directly impacts the speed of operations over the serial
> +port, so it should be adjusted with caution.  However, if you find
> +that some operations are not working over the serial port, try
> +increasing this value (in small increments - 0.15, 0.2, etc.)
> +
> +*Note: In the case of TRANSPORT="serial", Please make sure that docker
> +container and Fuego have sufficient permissions to access the
> +specified serial port. You may need to modify
> +docker-create-usb-privileged-container.sh prior to making your docker
> +image, in order to make sure the container can access the ports.
> +Also, if check that the host filesystem permissions on the device node
> +(e.g /dev/ttyACM0 allows access. From inside the container you can try
> +using the sersh or sercp commands directly, to test access to the
> +target.*

Notes like these should be converted to rst note format:
.. note::
   blah, blah
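For instance, the serial-port note from the diff above might be
converted along these lines (a sketch; the continuation lines must be
indented to line up under the directive text):

```rst
.. note::
   In the case of TRANSPORT="serial", please make sure that the docker
   container and Fuego have sufficient permissions to access the
   specified serial port.  ...
```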
> 
>  For targets using ttc:
> 
> @@ -202,44 +223,52 @@ Other parameters
>   * DISTRIB
>   * BOARD_CONTROL
> 
> -The BOARD_TESTDIR directory is an absolute path in the filesystem on the
> -target board where the Fuego tests are run.
> -Normally this is set to something like "/home/fuego", but you can set it to
> -anything.  The user you specify for LOGIN should have access rights to
> -this directory.
> +The BOARD_TESTDIR directory is an absolute path in the filesystem on
> +the target board where the Fuego tests are run.
> +Normally this is set to something like "/home/fuego", but you can set
> +it to anything.  The user you specify for LOGIN should have access
> +rights to this directory.
> 
> -The ARCHITECTURE is a string describing the architecture used by toolchains to build the tests for the target.
> +The ARCHITECTURE is a string describing the architecture used by
> +toolchains to build the tests for the target.
> 
> -The TOOLCHAIN variable indicates the toolchain to use to build the tests
> -for the target.  If you are using an ARM target, set this to "qemu-armv7hf".
> -This is a default ARM toolchain installed in the docker container, and should
> -work for most ARM boards.
> +The TOOLCHAIN variable indicates the toolchain to use to build the
> +tests for the target.  If you are using an ARM target, set this to
> +"debian-armhf". This is a default ARM toolchain installed in the
> +docker container, and should work for most ARM boards.
> 
> -If you are not using ARM, or for some reason the pre-installed arm toolchains
> -don't work for the Linux distribution installed on your board, then
> -you will need to install your own SDK or toolchain.  In this case, follow
> -the steps in [[Adding a toolchain]], then come back to this step and set
> -the TOOLCHAIN variable to the name you used for that operation.
> +If you are not using ARM, or for some reason the pre-installed arm
> +toolchains don't work for the Linux distribution installed on your
> +board, then you will need to install your own SDK or toolchain.
> +In this case, follow the steps in [[Adding a toolchain]], then come
> +back to this step and set the TOOLCHAIN variable to the name you used
> +for that operation.
> 
>  For other variables in the board file, see the section below.
> 
>  The DISTRIB variable specifies attributes of the Linux distribution
> -running on the board, that are used by Fuego.  Currently, this is mainly
> -used to tell Fuego what kind of system logger the operating system on
> -the board has.  Here are some options that are available:
> +running on the board, that are used by Fuego.  Currently, this is
> +mainly  used to tell Fuego what kind of system logger the operating
> +system on the board has.
> 
> - * base.dist - a "standard" distribution that implements syslogd-style system logging.  It should have the commands: logread, logger, and
> /var/log/messages
> - * nologread.dist - a distribution that has no 'logread' command, but does have /var/log/messages
> - * nosyslogd.dist - a distribution that does not have syslogd-style system logging.
> +Here are some options that are available:
> 
> -If DISTRIB is not specified, Fuego will default to using "nosyslogd.dist".
> + * base.dist - a "standard" distribution that implements syslogd-style
> + * system logging.  It should have the commands: logread, logger, and
> + * /var/log/messages nologread.dist - a distribution that has no
> + * 'logread' command, but does have /var/log/messages nosyslogd.dist -
> + * a distribution that does not have syslogd-style system logging.

This bullet list got mangled with this edit.  I think you may have done an
automatic word wrap here, and the editor introduced a bullet for every new
line created.  Please make sure the original bullets and lines are kept
intact.

> 
> -The BOARD_CONTROL variable specifies the name of the system used to control
> -board hardware operations.  When Fuego is used in conjunction with board
> -control hardware, it can automate more testing functionality.  Specifically,
> -it can reboot the board, or re-provision the board, as needed for testing.
> -As of the 1.3 release, Fuego only supports the 'ttc' board control system.
> -Other board control systems will be introduced and supported over time.
> +If DISTRIB is not specified, Fuego will default to using
> +"nosyslogd.dist".
> +
> +The BOARD_CONTROL variable specifies the name of the system used to
> +control board hardware operations.  When Fuego is used in conjunction
> +with board control hardware, it can automate more testing
> +functionality.  Specifically, it can reboot the board, or re-provision
> +the board, as needed for testing.  As of the 1.3 release, Fuego only
> +supports the 'ttc' board control system.  Other board control systems
> +will be introduced and supported over time.
> 
>  Add node to Jenkins interface
>  ================================
> @@ -252,13 +281,17 @@ You can see a list of the boards that Fuego knows about using:
> 
>   * $ ftc list-boards
> 
> -When you run this command, you should see the name of the board you just
> -created.
> +When you run this command, you should see the name of the board you
> +just created.
> +
> +You can see the nodes that have already been installed in Jenkins
> +with:
> 
> -You can see the nodes that have already been installed in Jenkins with:
>   * $ ftc list-nodes
> 
> -To actually add the board as a node in jenkins, inside the docker container, run the following command at a shell prompt:
> +To actually add the board as a node in jenkins, inside the docker
> +container, run the following command at a shell prompt:
> +
>   * $ ftc add-nodes -b <board_name>
> 
>  ==============================
> @@ -266,11 +299,13 @@ Board-specific test variables
>  ==============================
> 
>  The following other variables can also be defined in the board file:
> +
>   * MAX_REBOOT_RETRIES
>   * FUEGO_TARGET_TMP
>   * FUEGO_BUILD_FLAGS
> 
> -See :ref:`Variables <variables>` for the definition and usage of these variables.
> +See :ref:`Variables <variables>` for the definition and usage of these
> +variables.
> 
>  General Variables
>  ====================
> @@ -278,33 +313,36 @@ General Variables
>  File System test variables (SATA, USB, MMC)
>  =============================================
> 
> -If running filesystem tests, you will want to declare the Linux device name
> -and mountpoint path, for the filesystems to be tested.  There are three
> -different device/mountpoint options available depending on the testplan you
> -select (SATA, USB, or MMC).  Your board may have all of these types of
> -storage available, or only one.
> +If running filesystem tests, you will want to declare the Linux device
> +name and mountpoint path, for the filesystems to be tested.  There are
> +three different device/mountpoint options available depending on the
> +testplan you select (SATA, USB, or MMC).  Your board may have all of
> +these types of storage available, or only one.
> 
>  To prepare to run a test on a filesystem on a sata device, define the
>  SATA device and mountpoint variables for your board.
> 
> -For example, if you had a SATA device with a mountable filesystem accessible
> -on device /dev/sdb1, and you have a directory on your target of /mnt/sata
> -that can be used to mount this device at, you could declare the following
> -variables in your board file.
> +For example, if you had a SATA device with a mountable filesystem
> +accessible on device /dev/sdb1, and you have a directory on your
> +target of /mnt/sata that can be used to mount this device at, you
> +could declare the following variables in your board file.
> 
>   * SATA_DEV="/dev/sdb1"
>   * SATA_MP="/mnt/sata"
> 
> -You can define variables with similar names (USB_DEV and USB_MP, or MMC_DEV and MMC_MP) for USB-based filesystems or MMC-
> based filesystems.
> +You can define variables with similar names (USB_DEV and USB_MP, or
> +MMC_DEV and MMC_MP) for USB-based filesystems or MMC-based
> +filesystems.
> 
>  LTP test variables
>  ======================
> 
> -LTP (the Linux Test Project) test suite is a large collection of tests that
> -require some specialized handling, due to the complexity and diversity of
> -the suite. LTP has a large number of tests, some of which may not work correctly on your board.  Some of the LTP tests
> -depend on the kernel configuration or on aspects of your Linux distribution
> -or your configuration.
> +LTP (the Linux Test Project) test suite is a large collection of tests
> +that require some specialized handling, due to the complexity and
> +diversity of the suite. LTP has a large number of tests, some of which
> +may not work correctly on your board.  Some of the LTP tests depend on
> +the kernel configuration or on aspects of your Linux distribution or
> +your configuration.
> 
>  You can control whether the LTP posix test succeeds by indicating the
>  number of positive and negative results you expect for your board.
> @@ -313,19 +351,23 @@ These numbers are indicated in test variables in the board file:
>   * LTP_OPEN_POSIX_SUBTEST_COUNT_POS
>   * LTP_OPEN_POSIX_SUBTEST_COUNT_NEG
> 
> -You should run the LTP test yourself once, to see what your baseline values
> -should be, then set these to the correct values for your board (configuration
> -and setup).
> +You should run the LTP test yourself once, to see what your baseline
> +values should be, then set these to the correct values for your board
> +(configuration and setup).
> 
>  Then, Fuego will report any deviation from your accepted numbers, for
>  LTP tests on your board.
> 
>  LTP may also use these other test variables defined in the board file:
> 
> - * FUNCTIONAL_LTP_HOMEDIR - If this variable is set, it indicates where a pre-installed version of LTP resides in the board's filesystem.
> This can be used to avoid a lengthy deploy phase on each execution of LTP.
> - * FUNCTIONAL_LTP_BOARD_SKIPLIST - This variable has a list of individual LTP test programs to skip.
> + * FUNCTIONAL_LTP_HOMEDIR - If this variable is set, it indicates
> +   where a pre-installed version of LTP resides in the board's
> +   filesystem.  This can be used to avoid a lengthy deploy phase on
> +   each execution of LTP.
> + * FUNCTIONAL_LTP_BOARD_SKIPLIST - This variable has a list of
> +   individual LTP test programs to skip.
> 
> -See :ref:`Functional.LTP <functionalLTP>` for more information about the LTP test, and test
> -variables used by it.
> +See :ref:`Functional.LTP <functionalLTP>` for more information about
> +the LTP test, and test variables used by it.
> 
> 
> diff --git a/docs/rst_src/Adding_a_new_test.rst b/docs/rst_src/Adding_a_new_test.rst
> index 644a37a..0bb8de5 100644
> --- a/docs/rst_src/Adding_a_new_test.rst
> +++ b/docs/rst_src/Adding_a_new_test.rst
> @@ -16,29 +16,40 @@ To add a new test to Fuego, you need to perform the following steps:
>   * 4. Write a test script for the test
>   * 5. Add the test_specs (if any) for the test
>   * 6. Add log processing to the test
> - * 6-a. (if a benchmark) Add parser.py and criteria and reference files
> + * 6-a. (if a benchmark) Add parser.py and criteria and reference
> +   files
>   * 7. Create the Jenkins test configuration for the test

Since there's a 6-a here, maybe it's OK to leave this as a bullet list.
I believe rst can do nested numbered lists, with different symbol
series for the numbers, but I don't know off the top of my head
how to do it.  If you can figure it out, please use it here.
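For what it's worth, rst does support nested enumerated lists: indent
the sub-list under its parent item, separate it with blank lines, and
use a different numbering style (e.g. letters) for the inner level.  A
sketch using the steps above:

```rst
5. Add the test_specs (if any) for the test
6. Add log processing to the test

   a. (if a benchmark) Add parser.py and criteria and reference
      files

7. Create the Jenkins test configuration for the test
```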

> 
>  ==========================
>  Decide on a test name
>  ==========================
> 
> -The first step to creating a test is deciding the test name.  There are two
> -types of tests supported by Fuego: functional tests and benchmark tests.
> -A functional test either passes or fails, while a benchmark test produces one or more numbers representing some performance
> measurements for the system.
> +The first step to creating a test is deciding the test name.  There
> +are two types of tests supported by Fuego: functional tests and
> +benchmark tests.  A functional test either passes or fails, while a
> +benchmark test produces one or more numbers representing some
> +performance measurements for the system.
> 
>  Usually, the name of the test will be a combination of the test type
> -and a name to identify the test itself.  Here are some examples: *bonnie* is a popular disk performance test.  The name of this test in the
> fuego system is *Benchmark.bonnie*.  A test which runs portions of the posix test suite is a functional test (it either passes or fails), and in
> Fuego is named *Functional.posixtestsuite*.  The test name should be all one word (no spaces).
> +and a name to identify the test itself.  Here are some examples:
> +*bonnie* is a popular disk performance test.  The name of this test in
> +the fuego system is *Benchmark.bonnie*.  A test which runs portions of
> +the posix test suite is a functional test (it either passes or fails),
> +and in Fuego is named *Functional.posixtestsuite*.  The test name
> +should be all one word (no spaces).
> 
> -This name is used as the directory name where the test materials will live in the Fuego system.
> +This name is used as the directory name where the test materials will
> +live in the Fuego system.
> 
>  ======================================
>  Create the directory for the test
>  ======================================
> 
> -The main test directory is located in /fuego-core/engine/tests/*<test_name>*
> +The main test directory is located in
> +/fuego-core/engine/tests/*<test_name>*
> 
> -So if you just created a new Functional test called 'foo', you would create the directory:
> +So if you just created a new Functional test called 'foo', you would
> +create the directory:
> 
>   * /fuego-core/engine/tests/Functional.foo
> 
> @@ -50,31 +61,34 @@ The actual creation of the test program itself is outside
>  the scope of Fuego.  Fuego is intended to execute an existing
>  test program, for which source code or a script already exists.
> 
> -This page describes how to integrate such a test program into the Fuego test system.
> +This page describes how to integrate such a test program into the
> +Fuego test system.
> 
>  A test program in Fuego is provided in source form so that it can
>  be compiled for whatever processor architecture is used by the
>  target under test. This source may be in the form of a tarfile,
>  or a reference to a git repository, and one or more patches.
> 
> -Create a tarfile for the test, by downloading the test source manually, and
> -creating the tarfile.  Or, note the reference for the git repository for the
> -test source.
> +Create a tarfile for the test, by downloading the test source
> +manually, and creating the tarfile.  Or, note the reference for the
> +git repository for the test source.
> 
>  tarball source
>  ================
> 
> -If you are using source in the form of a tarfile, you add the name of the
> -tarfile (called 'tarball') to the test script.
> +If you are using source in the form of a tarfile, you add the name of
> +the tarfile (called 'tarball') to the test script.
> 
> -The tarfile may be compressed.  Supported compression schemes, and their associated extensions are:
> +The tarfile may be compressed.  Supported compression schemes, and
> +their associated extensions are:
> 
>   * uncompressed (extension='.tar')
>   * compressed with gzip (extension='.tar.gz' or '.tgz')
>   * compressed with bzip2 (extension='.bz2')
> 
> -For example, if the source for your test was in the tarfile 'foo-1.2.tgz' you
> -would add the following line to your test script, to reference this source: ::
> +For example, if the source for your test was in the tarfile
> +'foo-1.2.tgz' you would add the following line to your test script, to
> +reference this source: ::
> 
>    tarball=foo-1.2.tgz
> 
> @@ -82,17 +96,18 @@ would add the following line to your test script, to reference this source: ::
>  git source
>  ===============
> 
> -If you are using source from an online git repository, you reference this
> -source by adding the variables 'gitrepo' and 'gitref' to the test script.
> +If you are using source from an online git repository, you reference
> +this source by adding the variables 'gitrepo' and 'gitref' to the test
> +script.
> 
>  In this case, the 'gitrepo' is the URL used to access the source, and
> -the 'gitref' refers to a commit id (hash, tag, version, etc.) that refers
> -to a particular version of the code.
> +the 'gitref' refers to a commit id (hash, tag, version, etc.) that
> +refers to a particular version of the code.
> 
> -For example, if your test program is built from source in an online 'foo' repository,
> -and you want to use version 1.2 of that (which is tagged in the repository as 'v1.2',
> -on the master branch,  you might have some lines like the following in the test's
> -script. ::
> +For example, if your test program is built from source in an online
> +'foo' repository, and you want to use version 1.2 of that (which is
> +tagged in the repository as 'v1.2', on the master branch,  you might
> +have some lines like the following in the test's script. ::
> 
>    gitrepo=http://github.com/sampleuser/foo.git
>    gitref=master/v1.2
> @@ -101,16 +116,17 @@ script. ::
>  script-based source
>  =====================
> 
> -Some tests are simple enough to be implemented as a single script (that runs on the board).
> -For these tests, no additional source is necessary, and the script
> -can just be placed directly in the test's home directory. In *fuego_test.sh* you must set the following variable: ::
> +Some tests are simple enough to be implemented as a single script
> +(that runs on the board).  For these tests, no additional source is
> +necessary, and the script can just be placed directly in the test's
> +home directory. In *fuego_test.sh* you must set the following
> +variable: ::
> 
> 
>   local_source=1
> 
> 
> -During
> -the deploy phase, the script is sent to the board directly from
> +During the deploy phase, the script is sent to the board directly from
>  the test home directory instead of from the test build directory.
> 
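It might help readers to see a complete minimal script-based test at this
point.  Here is a sketch (the test and script names are hypothetical; the
report function and the BOARD_TESTDIR/TESTDIR variables are the standard
Fuego ones used in the hello_world example below):

```shell
# hypothetical fuego_test.sh for a script-based (local_source) test
local_source=1

function test_run {
    # mytest.sh was deployed from the test's home directory
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./mytest.sh"
}
```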
> 
> @@ -118,7 +134,10 @@ the test home directory instead of from the test build directory.
>  Test script
>  =================
> 
> -The test script is a small shell script called ``fuego_test.sh``. It specifies the source tarfile containing the test program, and provides
> implementations for the functions needed to build, deploy, execute, and evaluate the results from the test program.
> +The test script is a small shell script called ``fuego_test.sh``. It
> +specifies the source tarfile containing the test program, and provides
> +implementations for the functions needed to build, deploy, execute,
> +and evaluate the results from the test program.
> 
>  The test script for a functional test should contain the following:
> 
> @@ -136,7 +155,9 @@ in order to run the test.
>  Sample test script
>  ========================
> 
> -Here is the ``fuego_test.sh`` script for the test Functional.hello_world.  This script demonstrates a lot of the core elements of a test script.::
> +Here is the ``fuego_test.sh`` script for the test
> +Functional.hello_world.  This script demonstrates a lot of the core
> +elements of a test script. ::
> 
> 
>  	#!/bin/bash
> @@ -152,7 +173,8 @@ Here is the ``fuego_test.sh`` script for the test Functional.hello_world.  This
>  	}
> 
>  	function test_run {
> -	    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello $FUNCTIONAL_HELLO_WORLD_ARG"
> +	    report "cd $BOARD_TESTDIR/fuego.$TESTDIR;
> +            ./hello $FUNCTIONAL_HELLO_WORLD_ARG"
>  	}
> 

This script excerpt should NOT be word-wrapped.  It should appear exactly
as it does in the script being quoted.  In this case, the quoted script was
actually changed to use a line-continuation character (so the original
tbwiki page was incorrect, but your wrap here is also incorrect).


>  	function test_processing {
> @@ -163,9 +185,12 @@ Here is the ``fuego_test.sh`` script for the test Functional.hello_world.  This
>  Description of base test functions
>  =========================================
> 
> -The base test functions (test_build, test_deploy, test_run, and test_processing) are fairly simple.  Each one contains a few statements to
> accomplish that phase of the test execution.
> +The base test functions (test_build, test_deploy, test_run, and
> +test_processing) are fairly simple.  Each one contains a few
> +statements to accomplish that phase of the test execution.
> 
> -You can find more information about each of these functions at the following links:
> +You can find more information about each of these functions at the
> +following links:
> 
>   * :ref:`test_pre_check <func_test_pre_check>`
>   * :ref:`test_build <func_test_build>`
> @@ -182,7 +207,8 @@ Another element of every test is the *test spec*.  A file is used
>  to define a set of parameters that are used to customize the test
>  for a particular use case.
> 
> -You must define the test spec(s) for this test, and add an entry to the appropriate testplan for it.
> +You must define the test spec(s) for this test, and add an entry to
> +the appropriate testplan for it.
> 
>  Each test in the system must have a test spec file.  This file
>  is used to list customizable variables for the test.
> @@ -195,12 +221,14 @@ The test spec file is:
> 
>   * named 'spec.json' in the test directory,
>   * in JSON format,
> - * provides a ``testName`` attribute, and a ``specs`` attribute, which is a list,
> - * may include any named spec you want, but must define at least the 'default' spec for the test
> + * provides a ``testName`` attribute, and a ``specs``
> +   attribute, which is a list,
> + * may include any named spec you want, but must define at least the
> +   'default' spec for the test
> 
>     * Note that the 'default' spec can be empty, if desired.
> 
> -Here is an example one that defines no variables.::
> +Here is an example spec file that defines no variables. ::
> 
> 
>  	{
> @@ -211,7 +239,8 @@ Here is an example one that defines no variables.::
>  	}
> 
> 
> -And here is the spec.json of the Functional.hello_world example, which defines three specs: ::
> +And here is the spec.json of the Functional.hello_world example, which
> +defines three specs: ::
> 
> 
>  	{
> @@ -230,14 +259,20 @@ And here is the spec.json of the Functional.hello_world example, which defines t
>  	}
> 
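It may be worth noting here how a spec variable reaches the test script.
Assuming the specs define a variable named ARG, it shows up in the
script's environment with the test name and variable name uppercased --
that is how $FUNCTIONAL_HELLO_WORLD_ARG in the sample script above gets
its value.  A quick sketch of the name transformation:

```shell
# illustration: deriving the environment variable name for a spec
# variable (e.g. ARG) of a test (e.g. Functional.hello_world)
test_name="Functional.hello_world"
var_name="ARG"
env_name="$(echo "$test_name" | tr 'a-z.' 'A-Z_')_${var_name}"
echo "$env_name"    # FUNCTIONAL_HELLO_WORLD_ARG
```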
> 
> -Next, you may want to add an entry to one of the testplan files.  These files are located in the directory ``/fuego-
> core/engine/overlays/testplans``.
> +Next, you may want to add an entry to one of the testplan files.
> +These files are located in the directory
> +``/fuego-core/engine/overlays/testplans``.
> 
> -Choose a testplan you would like to include this test, and edit the corresponding file. For example, to add your test to the list of tests
> executed when the 'default' testplan is used, add an entry ``default`` to the 'testplan_default.json' file.
> +Choose a testplan you would like to include this test, and edit the
> +corresponding file. For example, to add your test to the list of tests
> +executed when the 'default' testplan is used, add an entry ``default``
> +to the 'testplan_default.json' file.
> 
> -Note that you should add a comma after your entry, if it is not the last
> -one in the list of *tests*.
> +Note that you should add a comma after your entry, if it is not the
> +last one in the list of *tests*.
> 
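To make this concrete, a testplan entry might look like the following
(I am inferring the surrounding fields from the existing testplan files;
check testplan_default.json for the exact schema):

```json
{
    "testPlanName": "testplan_default",
    "tests": [
        {
            "testName": "Functional.hello_world",
            "spec": "default"
        },
        {
            "testName": "Functional.mytest",
            "spec": "default"
        }
    ]
}
```

Note the comma between the two entries, as described above.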
> -Please read :ref:`Test Specs and Plans <test_specs_and_plans>` for more details.
> +Please read :ref:`Test Specs and Plans <test_specs_and_plans>` for
> +more details.
> 
> 
>  ========================
> @@ -246,10 +281,12 @@ Test results parser
>  Each test should also provide some mechanism to parse the results
>  from the test program, and determine the success of the test.
> 
> -For a simple Functional test, you can use the :ref:`log_compare <func_log_compare>` function to specify a pattern to search
> -for in the test log, and the number of times that pattern should be found
> -in order to indicate success of the test.  This is done from the
> -:ref:`test_processing <func_test_processing>` function in the test script.
> +For a simple Functional test, you can use the :ref:`log_compare
> +<func_log_compare>` function to specify a pattern to search for in the
> +test log, and the number of times that pattern should be found in
> +order to indicate success of the test.  This is done from the
> +:ref:`test_processing <func_test_processing>` function in the test
> +script.
> 
>  Here is an example of a call to log_compare: ::
> 
> @@ -258,55 +295,58 @@ Here is an example of a call to log_compare: ::
>  	}
> 
> 
> -This example looks for the pattern *^TEST.*OK*, which finds lines in the
> -test log that start with the word 'TEST' and are followed by the string 'OK'
> -on the same line.  It looks for this pattern 11 times.
> +This example looks for the pattern ``^TEST.*OK``, which finds lines in
> +the test log that start with the word 'TEST' and are followed by the
> +string 'OK' on the same line.  It looks for this pattern 11 times.
> 
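For readers unfamiliar with log_compare, its positive check behaves
roughly like counting pattern matches in the log and comparing with the
expected count.  This is a simplified illustration, not the actual
implementation:

```shell
# simulated test log
cat > /tmp/testlog.txt << 'EOF'
TEST-1: pinmux ... OK
TEST-2: gpio ... OK
TEST-3: uart ... FAIL
EOF

# log_compare's check is roughly: does the number of lines matching
# the pattern equal the expected count?
count=$(grep -cE '^TEST.*OK' /tmp/testlog.txt)
echo "matches: $count"    # matches: 2
```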
>  :ref:`log_compare <func_log_compare>` can be used to parse the logs of
>  simple tests with line-oriented output.
> 
> -For tests with more complex output, and for Benchmark tests that produce
> -numerical results, you must add a python program called 'parser.py',
> -which scans the test log and produces a data structure used by
> -other parts of the Fuego system.
> +For tests with more complex output, and for Benchmark tests that
> +produce numerical results, you must add a python program called
> +'parser.py', which scans the test log and produces a data structure
> +used by other parts of the Fuego system.
> 
>  See :ref:`parser.py <parser>` for information about this program.
> 
> 
> 
> -====================================
> -Pass criteria and reference info
> -====================================
> -You should also provide information to Fuego to indicate how to evaluate
> -the ultimate resolution of the test.
> +====================================
> +Pass criteria and reference info
> +====================================
> 
> -For a Functional test, it is usually the case that the whole test passes
> -only if all individual test cases in the test pass.  That is, one error in
> -a test case indicates overall test failure.  However, for Benchmark tests,
> -the evaluation of the results is more complicated.  It is required to specify
> -what numbers constitute success vs. failure for the test.
> +You should also provide information to Fuego to indicate how to
> +evaluate the ultimate resolution of the test.
> +
> +For a Functional test, it is usually the case that the whole test
> +passes only if all individual test cases in the test pass.  That is,
> +one error in a test case indicates overall test failure.  However, for
> +Benchmark tests, the evaluation of the results is more complicated.
> +It is required to specify what numbers constitute success vs. failure
> +for the test.
> 
>  Also, for very complicated Functional tests, there may be complicated
>  results, where, for example, some results should be ignored.
> 
> -You can specify the criteria used to evaluate the test results, by creating
> -a ':ref:`criteria.json <criteria.json>`' file for the test.
> +You can specify the criteria used to evaluate the test results, by
> +creating a ':ref:`criteria.json <criteria.json>`' file for the test.
> 
> -Finally, you may wish to add a file that indicates certain information about
> -the test results.  This information is placed in the ':ref:`reference.json <reference_json>`' file
> -for a test.
> +Finally, you may wish to add a file that indicates certain information
> +about the test results.  This information is placed in the
> +':ref:`reference.json <reference_json>`' file for a test.
> 
> -Please see the links for those files to learn more about what they are and
> -how to write them, and customize them for your system.
> +Please see the links for those files to learn more about what they are
> +and how to write them, and customize them for your system.
> 
>  =================================
>  Jenkins job definition file
>  =================================
> 
> -The last step in creating the test is to create the Jenkins job for it.
> +The last step in creating the test is to create the Jenkins job for
> +it.
> 
> -A Jenkins job describes to Jenkins what board to run the test on,
> -what variables to pass to the test (including the test spec (or variant),
> +A Jenkins job describes to Jenkins what board to run the test on, what
> +variables to pass to the test (including the test spec (or variant),
>  and what script to run for the test.
> 
>  Jenkins jobs are created using the command-line tool 'ftc'.
> @@ -321,32 +361,37 @@ The ftc 'add-jobs' sub-command uses '-b' to specify the board,
>  '-t' to specify the test, and '-s' to specify the test spec that
>  will be used for this Jenkins job.
> 
> -In this case, the name of the Jenkins job that would be created would be:
> +In this case, the name of the Jenkins job that would be created would
> +be:
> 
>   * myboard.default.Functional.mytest
> 
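Since the job-name convention is easy to miss, it may be worth spelling
out: the name is built as board.spec.test.  A sketch (the ftc command
itself must be run inside the container, so it is shown as a comment):

```shell
board=myboard
spec=default
test=Functional.mytest

# inside the container, the job would be created with something like:
#   ftc add-jobs -b $board -t $test -s $spec
# producing a Jenkins job named:
job_name="${board}.${spec}.${test}"
echo "$job_name"    # myboard.default.Functional.mytest
```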
> -This results in the creation of a file called config.xml, in the /var/lib/jenkins/jobs/<job_name> directory.
> +This results in the creation of a file called config.xml, in the
> +/var/lib/jenkins/jobs/<job_name> directory.
> +
> 
> 
> 
> 
> 
> +=========================
> +Publishing the test
> +=========================
> 
> -=========================
> -Publishing the test
> -=========================
> -Tests that are of general interest should be submitted for inclusion into fuego-core.
> +Tests that are of general interest should be submitted for
> +inclusion into fuego-core.
> 
> -Right now, the method of doing this is to create a commit and send that commit
> -to the fuego mailing list, for review, and hopefully acceptance and
> -integration by the fuego maintainers.
> +Right now, the method of doing this is to create a commit and send
> +that commit to the fuego mailing list, for review, and hopefully
> +acceptance and integration by the fuego maintainers.
> 
> -In the future, a server will be provided where test developers can share
> -tests that they have created in a kind of "test marketplace".  Tests will
> -be available for browsing and downloading, with results from other
> -developers available to compare with your own results.  There is already
> -preliminary support for packaging a test using the 'ftc package-test' feature.
> -More information about this service will be made available in the future.
> +In the future, a server will be provided where test developers can
> +share tests that they have created in a kind of "test marketplace".
> +Tests will be available for browsing and downloading, with results
> +from other developers available to compare with your own results.
> +There is already preliminary support for packaging a test using the
> +'ftc package-test' feature.  More information about this service will
> +be made available in the future.
> 
>  =======================
>  Technical Details
> @@ -357,7 +402,8 @@ This section has technical details about a test.
>  Directory structure
>  ========================
> 
> -The directory structure used by Fuego is documented at [[Fuego directories]]
> +The directory structure used by Fuego is documented at
> +[[Fuego directories]]

This will need to be converted to a ref when that file is added.

> 
> 
> 
> diff --git a/docs/rst_src/Adding_a_toolchain.rst b/docs/rst_src/Adding_a_toolchain.rst
> index ae3700e..265b912 100644
> --- a/docs/rst_src/Adding_a_toolchain.rst
> +++ b/docs/rst_src/Adding_a_toolchain.rst
> @@ -1,6 +1,224 @@
>  .. _addtoolchain:
> 
> 
> -#################
> -Adding Toolchain
> -#################
> +###################
> +Adding a toolchain
> +###################
> +
> +==================
> +Introduction
> +==================
> +
> +In order to build tests for your target board, you need to install a
> +toolchain (often in the form of an SDK) into the Fuego system, and let
> +Fuego know how to access it.
> +
> +Adding a toolchain to Fuego consists of these steps:
> +
> + * 1. obtain (generate or retrieve) the toolchain
> + * 2. copy the toolchain to the container
> + * 3. install the toolchain inside the container
> + * 4. create a -tools.sh file for the toolchain
> + * 5. reference the toolchain in the appropriate board file

Numbered list can drop the leading bullets.

> +
> +========================
> +Obtain a toolchain
> +========================
> +
> +First, you need to obtain a toolchain that will work with your board.
> +You should have a toolchain that produces software which will work
> +with the Linux distribution on your board.  This is usually obtained
> +from your build tool, if you are building the distribution yourself,
> +or from your semiconductor supplier or embedded Linux OS vendor, if
> +you have been provided the Linux distribution from an external source.
> +
> +
> +Installing a Debian cross-toolchain target
> +==============================================
> +
> +If you are using a Debian-based target, then to get started, you may
> +use a script to install a cross-compiler toolchain into the container.
> +For example, for an ARM target, you might want to install the Debian
> +armv7hf toolchain.  You can even try a Debian toolchain with other
> +Linux distributions.  However, if you are not using Debian on your
> +target board, there is no guarantee that this will produce correct
> +software for your board.  It is much better to install your own SDK
> +for your board into the fuego system.
> +
> +To install a Debian cross toolchain into the container, get to the
> +shell prompt in the container and use the following script:
> +
> + * /fuego-ro/toolchains/install_cross_toolchain.sh
> +
> +To use the script, pass it the argument naming the cross-compile
> +architecture you are using.  Available values are:
> +
> + * arm64 armel armhf mips mipsel powerpc ppc64el
> +
> +Execute the script, inside the docker container, with a single
> +command-line option to indicate the cross-toolchain to install.  You
> +can use the script more than once, if you wish to install multiple
> +toolchains.
> +
> +Example:
> +
> + * # /fuego-ro/toolchains/install_cross_toolchain.sh armhf
> +
> +The Debian packages for the specified toolchain will be installed into
> +the docker container.
> +
> +Building a Yocto Project SDK
> +===============================
> +
> +When you build an image in the Yocto Project, you can also build an
> +SDK to go with that image using the '-c do_populate_sdk' build step
> +with bitbake.
> +
> +To build the SDK in Yocto Project, inside your yocto build directory
> +do:
> +
> + * bitbake <image-name> -c do_populate_sdk
> +
> +This will build an SDK archive (containing the toolchain, header files
> +and libraries needed for creating software on your target), and put it
> +into the directory <build-root>/tmp/deploy/sdk/
> +
> +For example, if you are building the 'core-image-minimal' image, you
> +would execute: ::
> +
> +  $ bitbake core-image-minimal -c do_populate_sdk
> +
> +At this step look in tmp/deploy/sdk and note the name of the sdk
> +install package (the file ending with .sh).
> +
> +===========================================
> +Install the SDK in the docker container
> +===========================================
> +
> +To allow fuego to use the SDK, you need to install it into the fuego
> +docker container.  First, transfer the SDK into the container using
> +docker cp.
> +
> +With the container running, on the host machine do:
> +
> + * docker ps (note the container id)
> + * docker cp tmp/deploy/sdk/<sdk-install-package> <container-id>:/tmp
> +
> +This last command will place the SDK install package into the /tmp
> +directory in the container.
> +
> +Now, install the SDK into the container, wherever you would like.
> +Many toolchains install themselves under /opt.
> +
> +At the shell inside the container, run the SDK install script
> +(which is a self-extracting archive):
> +
> +  * /tmp/poky-....sh
> +
> +    * during the installation, select a toolchain installation
> +      location, like: /opt/poky/2.0.1
> +
> +These instructions are for an SDK built by the Yocto Project.  Similar
> +instructions would apply for installing a different toolchain or SDK.
> +That is, get the SDK into the container, then install it inside the
> +container.
> +
> +==============================================
> +Create a -tools.sh file for the toolchain
> +==============================================
> +
> +Now, fuego needs to be told how to interact with the toolchain.
> +During test execution, the fuego system determines what toolchain to
> +use based on the value of the TOOLCHAIN variable in the board file for
> +the target under test.  The TOOLCHAIN variable is a string that is
> +used to select the appropriate '<TOOLCHAIN>-tools.sh' file in
> +/fuego-ro/toolchains.
> +
> +You need to determine a name for this TOOLCHAIN, and then create a
> +file with that name, called $TOOLCHAIN-tools.sh.  For example, if you
> +created an SDK with poky for the qemuarm image, you might call the
> +TOOLCHAIN "poky-qemuarm".  You would create a file called
> +"poky-qemuarm-tools.sh".
> +
> +The -tools.sh file is used by Fuego to define the environment
> +variables needed to interact with the SDK.  This includes things like
> +CC, AR, and LD.  The complete list of variables that this script
> +needs to provide is described on the page [[tools.sh]]
> +
> +Inside the -tools.sh file, you execute instructions that will set the
> +environment variables needed to build software with that SDK.  For an
> +SDK built by the Yocto Project, this involves setting a few variables,
> +and calling the environment-setup... script that comes with the SDK.
> +For SDKs from other sources, you can define the needed variables by
> +directly exporting them.
> +
> +Here is an example of the tools.sh script for poky-qemuarm.  This is
> +in the sample file /fuego-ro/toolchains/poky-qemuarm-tools.sh: ::
> +
> +
> +	# fuego toolchain script
> +	# this sets up the environment needed for fuego to use a
> +	# toolchain
> +	# this includes the following variables:
> +	# CC, CXX, CPP, CXXCPP, CONFIGURE_FLAGS, AS, LD, ARCH
> +	# CROSS_COMPILE, PREFIX, HOST, SDKROOT
> +	# CFLAGS and LDFLAGS are optional
> +	#
> +	# this script is sourced by /fuego-ro/toolchains/tools.sh
> +
> +	POKY_SDK_ROOT=/opt/poky/2.0.1
> +	export SDKROOT=${POKY_SDK_ROOT}/sysroots/armv5e-poky-linux-gnueabi
> +
> +	# the Yocto Project environment setup script changes PATH so
> +	# that python uses libs from sysroot, which is not what we want,
> +	# so save the original path and use it later
> +	ORIG_PATH=$PATH
> +
> +	PREFIX=arm-poky-linux-gnueabi
> +	source ${POKY_SDK_ROOT}/environment-setup-armv5e-poky-linux-gnueabi
> +
> +	HOST=arm-poky-linux-gnueabi
> +
> +	# don't use PYTHONHOME from environment setup script
> +	unset PYTHONHOME
> +	env -u PYTHONHOME
> +
> +
> +
> +===============================================
> +Reference the toolchain in a board file
> +===============================================
> +
> +Now, to use that SDK for building test software for a particular
> +target board, set the value of the TOOLCHAIN variable in the board
> +file for that target.
> +
> +Edit the board file:
> + * vi /fuego-ro/boards/myboard.board
> +
> +And add (or edit) the line:
> +
> + * TOOLCHAIN="poky-qemuarm"
> +
> +============
> +Notes
> +============
> +
> +Python execution
> +==================
> +
> +You may notice that some of the example scripts set the environment
> +variable ORIG_PATH.  This is used by the function
> +[[function_run_python|run_python]] internally to execute the

This should be a :ref:

> +container's default python interpreter, instead of the interpreter
> +that was built by the Yocto Project.
> +
> diff --git a/docs/rst_src/Adding_or_Customizing_a_Distribution.rst b/docs/rst_src/Adding_or_Customizing_a_Distribution.rst
> index 6e067c9..0384ac7 100644
> --- a/docs/rst_src/Adding_or_Customizing_a_Distribution.rst
> +++ b/docs/rst_src/Adding_or_Customizing_a_Distribution.rst
> @@ -8,31 +8,33 @@ Adding or Customizing a Distribution
>  Introduction
>  =====================
> 
> -Although Fuego is configured to execute on a standard Linux distribution,
> -Fuego supports customizing certain aspects of its interaction with the system
> -under test.  Fuego uses several features of the operating system on the
> -board to perform
> -aspects of its test execution.  This includes things like accessing the system
> -log, flushing file system caches, and rebooting the board.  The ability
> -to customize Fuego's interaction with the system under test is useful in
> -case you have a non-standard Linux distribution (where, say, certain features
> -of Linux are missing or changed), or when you are trying to use Fuego with
> -a non-Linux system.
> -
> -A developer can customize the distribution layer of Fuego in one of two ways:
> - * adding overlay functions to a board file
> - * by creating a new distribution overlay file
> +Although Fuego is configured to execute on a standard Linux
> +distribution, Fuego supports customizing certain aspects of its
> +interaction with the system under test.  Fuego uses several features
> +of the operating system on the board to perform aspects of its test
> +execution.  This includes things like accessing the system log,
> +flushing file system caches, and rebooting the board.  The ability to
> +customize Fuego's interaction with the system under test is useful in
> +case you have a non-standard Linux distribution (where, say, certain
> +features of Linux are missing or changed), or when you are trying to
> +use Fuego with a non-Linux system.
> +
> +A developer can customize the distribution layer of Fuego in one of
> +two ways:
> +
> + * adding overlay functions to a board file
> + * creating a new distribution overlay file
> 
>  ==============================
>  Distribution overlay file
>  ==============================
> 
> -A distribution overlay file can be added to Fuego, by adding a new ''.dist''
> -file to the directory: fuego-core/overlays/distrib
> +A distribution overlay file can be added to Fuego, by adding a new
> +''.dist'' file to the directory: fuego-core/overlays/distrib
> 
>  The *distribution* functions are defined in the file:
> -fuego-core/overlays/base/base-distrib.fuegoclass
> -These include functions for doing certain operations on your board, including:
> +fuego-core/overlays/base/base-distrib.fuegoclass
> +These include functions for doing certain operations on your board,
> +including:
> 
>   - :ref:`ov_get_firmware <func_ov_get_firmware>`
>   - :ref:`ov_rootfs_reboot <func_ov_rootfs_reboot>`
> @@ -54,23 +56,29 @@ You can look up what each override function should do by
>  reading the fuegoclass code, or looking at the function documentation
>  at: :ref:`Test Script APIs <test_script_apis>`
> 
> -The inheritance mechanism and syntax for Fuego overlay files is described
> -at: :ref:`Overlay Generation <overlay_generation>`
> +The inheritance mechanism and syntax for Fuego overlay files is
> +described at: :ref:`Overlay Generation <overlay_generation>`
> 
> -The goal of the distribution abstraction layer in Fuego is to allow you to
> -customize Fuego operations to match what is available on your target board.
> -For example, the default (base class) :ref:`ov_rootfs_logread() <func_ov_rootfs_logread>` function assumes
> +The goal of the distribution abstraction layer in Fuego is to allow
> +you to customize Fuego operations to match what is available on your
> +target board.  For example, the default (base class)
> +:ref:`ov_rootfs_logread() <func_ov_rootfs_logread>` function assumes
>  that the target board has the command "/sbin/logread" that can be used
> -to read the system log.  If your distribution does not have "/sbin/logread", or indeed
> -if there is no system log, then you would need to override ov_rootfs_logread()
> -to do something appropriate for your distribution or OS.
> -
> -*Note: In fact, this is a common enough situation that there is already a 'nologread.dist' file already in the overlay/distribs directory.*
> -
> -Similarly, :ref:`ov_rootfs_kill <func_ov_rootfs_kill>` uses the /proc filesystem, /proc/$pid/status, and the
> -cat, grep, kill and sleep commands on the target board to do its work.  If our distribution
> -is missing any of these, then you would need to override ov_rootfs_kill()
> -with a function that did the appropriate thing on your distribution (or OS).
> +to read the system log.  If your distribution does not have
> +"/sbin/logread", or indeed if there is no system log, then you would
> +need to override ov_rootfs_logread() to do something appropriate for
> +your distribution or OS.
> +
> +*Note: In fact, this is a common enough situation that there is
> +already a 'nologread.dist' file in the overlay/distribs
> +directory.*
> +
> +Similarly, :ref:`ov_rootfs_kill <func_ov_rootfs_kill>` uses the /proc
> +filesystem, /proc/$pid/status, and the cat, grep, kill and sleep
> +commands on the target board to do its work.  If your distribution is
> +missing any of these, then you would need to override ov_rootfs_kill()
> +with a function that did the appropriate thing on your distribution
> +(or OS).
> 
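An example override might make this more concrete.  Here is a sketch of
a dist-file override for a board without /sbin/logread, falling back to
the kernel log (the override-func syntax is the one used in the overlay
files; the function body is hypothetical):

```
override-func ov_rootfs_logread() {
    # board has no /sbin/logread; read the kernel log instead
    cmd "dmesg" || abort_job "Error reading kernel log on board"
}
```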
>  Existing distribution overlay files
>  =====================================
> @@ -87,16 +95,17 @@ that commonly occur in embedded Linux testing.
>  Referencing the distribution in the board file
>  ===========================================================
> 
> -Inside the board file for your board, indicate the distribution overlay you are using
> -by setting the *DISTRIB* variable.
> +Inside the board file for your board, indicate the distribution
> +overlay you are using by setting the *DISTRIB* variable.
> 
> -If the DISTRIB variable is not set, then the default distribution overlay
> -functions are used.
> +If the DISTRIB variable is not set, then the default distribution
> +overlay functions are used.
> 
> -For example, if your embedded distribution of Linux does not have a system
> -logger, you can override the normal logging interaction of Fuego by using
> -the 'nosyslogd.dist' distribution overlay.  To do this, add the following
> -line to the board file for target board where this is the case: ::
> +For example, if your embedded distribution of Linux does not have a
> +system logger, you can override the normal logging interaction of
> +Fuego by using the 'nosyslogd.dist' distribution overlay.  To do this,
> +add the following line to the board file for the target board where
> +this is the case: ::
> 
> 
>    DISTRIB="nosyslogd.dist"
> @@ -123,16 +132,16 @@ Notes
>  =========
> 
>  Fuego does not yet fully support testing non-Linux operating systems.
> -There is work-in-progress to support testing of NuttX, but that feature
> -is not complete as of this writing. In any event, Fuego does include
> -a 'NuttX' distribution overlay, which may provide some ideas if you wish
> -to write your own overlay for a non-Linux OS.
> +There is work-in-progress to support testing of NuttX, but that
> +feature is not complete as of this writing. In any event, Fuego does
> +include a 'NuttX' distribution overlay, which may provide some ideas
> +if you wish to write your own overlay for a non-Linux OS.
> 
>  NuttX distribution overlay
>  ============================
> 
> -By way of illustration, here are the contents of the NuttX
> -distribution overlay file (fuego-core/overlays/distribs/nuttx.dist).::
> +By way of illustration, here are the contents of the NuttX
> +distribution overlay file (fuego-core/overlays/distribs/nuttx.dist). ::
> 
> 
>  	override-func ov_get_firmware() {
> @@ -196,34 +205,23 @@ Hypothetical QNX distribution
>  Say you wanted to add support for testing QNX with Fuego.
> 
>  Here are some first steps to add a QNX distribution overlay:
> +
>   * set up your board file
> - * create a custom QNX.dist (stubbing out or replacing base class functions as needed)
> + * create a custom QNX.dist (stubbing out or replacing base class
> +   functions as needed)
> 
> -    * you could copy null.dist to QNX.dist, and deciding which items to replace with QNX-specific functionality
> +    * you could copy null.dist to QNX.dist, and decide which items
> +      to replace with QNX-specific functionality
> 
>   * add DISTRIB="QNX.dist" to your board file
> - * run the Functional.fuego_board_check test (using ftc, or adding the node and job to Jenkins
> -and building the job using the Jenkins interface), and
> + * run the Functional.fuego_board_check test (using ftc, or adding
> +   the node and job to Jenkins and building the job using the Jenkins
> +   interface), and
>   * examine the console log to see what issues surface
> 
> 
> 
> -.. toctree::
> -   :hidden:
> 
> -
> -   function_ov_get_firmware
> -   function_ov_rootfs_reboot
> -   function_ov_rootfs_state
> -   function_ov_logger
> -   function_ov_rootfs_sync
> -   function_ov_rootfs_drop_caches
> -   function_ov_rootfs_oom
> -   function_ov_rootfs_kill
> -   function_ov_rootfs_logread
> -   Test_Script_APIs
> -   Overlay_Generation
> -
> 
> 
> 
> diff --git a/docs/rst_src/Adding_test_jobs_to_Jenkins.rst b/docs/rst_src/Adding_test_jobs_to_Jenkins.rst
> index cb0d23f..ca715b3 100644
> --- a/docs/rst_src/Adding_test_jobs_to_Jenkins.rst
> +++ b/docs/rst_src/Adding_test_jobs_to_Jenkins.rst
> @@ -1,5 +1,139 @@
>  .. _addtestjob:
> 
>  ############################
> -Adding test jobs to jenkins
> +Adding test jobs to Jenkins
>  ############################
> +
> +Before performing any tests with Fuego, you first need to
> +add jobs for those tests in Jenkins.
> +
> +To add jobs to Jenkins, you use the 'ftc' command line tool.
> +
> +Fuego comes with over a hundred different tests, and not
> +all of them will be useful for your environment or testing needs.
> +
> +In order to add jobs to Jenkins, you first need to have
> +created a Jenkins node for the board for which you wish to add
> +the test.  If you have not already added a board definition,
> +or added your board to Jenkins, please see:
> +:ref:`Adding a board <adding_board>`
> +
> +Once your board is defined as a Jenkins node, you can add test
> +jobs for it.
> +
> +There are two ways of adding test jobs: individually, or using
> +testplans.  In both cases, you use the 'ftc add-jobs' command.
> +
> +============================
> +Selecting tests or plans
> +============================
> +
> +The list of all tests that are available can be seen
> +by running the command 'ftc list-tests'.
> +
> +Run this command from the shell prompt inside the Fuego docker
> +container: ::
> +
> +
> +  (container_prompt)$ ftc list-tests
> +
> +
> +To see the list of plans that come pre-configured with Fuego,
> +use the command 'ftc list-plans': ::
> +
> +  (container_prompt)$ ftc list-plans
> +
> +
> +A plan lists a set of tests to execute.  You can see which tests a
> +testplan includes by examining the testplan file.  The testplan files
> +are in JSON format, and are in the directory
> +``fuego-core/engine/overlays/testplans``.
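It might also help to show a miniature testplan at this point.  The field
names below are my guess at the schema - readers should check a real file
under fuego-core/engine/overlays/testplans for the actual format:

```shell
# A minimal, hypothetical testplan file (field names illustrative).
cat > /tmp/testplan_example.json << 'EOF'
{
    "testPlanName": "testplan_example",
    "tests": [
        { "testName": "Functional.hello_world" },
        { "testName": "Benchmark.Dhrystone" }
    ]
}
EOF

# list the tests the plan includes
grep '"testName"' /tmp/testplan_example.json
```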
> +
> +============================
> +Adding individual tests
> +============================
> +
> +To add an individual test, use the 'ftc add-jobs' command.
> +For example, to add the test "Functional.hello_world"
> +for the board "beaglebone", you would use the following command: ::
> +
> +
> +  (container prompt)$ ftc add-jobs -b beaglebone -t Functional.hello_world
> +
> +
> +Configuring job options
> +=========================
> +
> +When Fuego executes a test job, several options are available to
> +control aspects of job execution.  These can be configured on the
> +'ftc add-jobs' command line.
> +
> +The options available are:
> +
> + * timeout
> + * rebuild flag
> + * reboot flag
> + * precleanup flag
> + * postcleanup flag
> +
> +See 'ftc add-jobs help' for details about these options and how to
> +specify them.
> +
> +Adding tests for more than one board
> +======================================
> +
> +If you want to add tests for more than one board at a time, you can do
> +so by specifying multiple board names after the '-b' option with
> +'ftc add-jobs'.  Board names should be a single string argument, with
> +individual board names separated by commas.
> +
> +For example, the following would add a job for Functional.hello_world
> +to each of the boards rpi1, rpi2 and beaglebone. ::
> +
> +
> +  (container prompt)$ ftc add-jobs -b rpi1,rpi2,beaglebone -t Functional.hello_world
> +
> +
> +
> +================================
> +Adding jobs based on testplans
> +================================
> +
> +A testplan is a list of Fuego tests with some options for each one.
> +You can see the list of testplans in your
> +system with the following command: ::
> +
> +
> +  (container prompt)$ ftc list-plans
> +
> +
> +To create a set of jobs related to docker image testing, for the
> +'docker' board on the system, do the following: ::
> +
> +
> +  (container prompt)$ ftc add-jobs -b docker -p testplan_docker
> +
> +
> +To create a set of jobs for a board called 'beaglebone',
> +do the following: ::
> +
> +
> +  (container prompt)$ ftc add-jobs -b beaglebone -p testplan_smoketest
> +
> +
> +The "smoketest" testplan has about 20 tests that exercise a variety of
> +features on a Linux system.  After running these commands, a set of
> +jobs will appear in the Jenkins interface.
> +
> +Once this is done, your Jenkins interface should look something like
> +this:
> +
> +.. image:: ../images/fuego-1.1-jenkins-dashboard-beaglebone-jobs.png
> +   :width: 900
> +
> +
> +
> +
> diff --git a/docs/rst_src/Adding_views_to_Jenkins.rst b/docs/rst_src/Adding_views_to_Jenkins.rst
> index e61bbff..8d0a679 100644
> --- a/docs/rst_src/Adding_views_to_Jenkins.rst
> +++ b/docs/rst_src/Adding_views_to_Jenkins.rst
> @@ -5,14 +5,15 @@
>  Adding views to Jenkins
>  #########################
> 
> -It is useful to organize your Jenkins test jobs into "views".  These appear
> -as tabs in the main Jenkins interface. Jenkins always provides a tab
> -that lists all of the installed jobs, call "All".  Other views that you
> -create will appear on tabs next to this, on the main Jenkins page.
> -
> -You can define new Jenkins views using the Jenkins interface,
> -but Fuego provides a command that allows you to easily create views
> -for boards, or for sets of related tests (by name and wildcard), from the
> +It is useful to organize your Jenkins test jobs into "views".  These
> +appear as tabs in the main Jenkins interface. Jenkins always provides
> +a tab that lists all of the installed jobs, called "All".  Other views
> +that you create will appear on tabs next to this, on the main Jenkins
> +page.
> +
> +You can define new Jenkins views using the Jenkins interface, but
> +Fuego provides a command that allows you to easily create views for
> +boards, or for sets of related tests (by name and wildcard), from the
>  Linux command line (inside the container).
> 
>  The usage line for this command is: ::
> @@ -20,22 +21,22 @@ The usage line for this command is: ::
>    Usage: ftc add-view <view-name> [<job_spec>]
> 
> 
> -The view-name parameter indicates the name of the view in Jenkins,
> -and the job-spec parameter is used to select the jobs which appear in
> -that view.
> +The view-name parameter indicates the name of the view in Jenkins, and
> +the job-spec parameter is used to select the jobs which appear in that
> +view.
> 
> -If the job_spec is provided and starts with an '=', then it is
> -interpreted as one or more specific job names.  Otherwise, the view
> -is created using a regular expression statement that Jenkins uses to select
> -the jobs to include in the view.
> +If the job_spec is provided and starts with an '=', then it is
> +interpreted as one or more specific job names.  Otherwise, the view is
> +created using a regular expression statement that Jenkins uses to
> +select the jobs to include in the view.
> 
>  ======================
>  Adding a board view
>  ======================
> 
>  By convention, most Fuego users populate their Jenkins interface with
> -a view for each board in their system (well, for labs with a small number
> -of boards, anyway).
> +a view for each board in their system (well, for labs with a small
> +number of boards, anyway).
> 
>  The simplest way to add a view for a board is to just specify the
>  board name, like so: ::
> @@ -52,28 +53,28 @@ Customizing regular expressions
>  ==================================
> 
>  Note that if your board name is not unique enough, or is a string
> -contained in some tests, then
> -you might see some test jobs listed that were not specific
> -to that board.  For example, if you had a board name "Bench",
> -then a view you created with the view-name of "Bench", would also
> -include Benchmarks.  You can work around this by specifying a more
> -details regular expression for your job spec.
> +contained in some test names, then you might see some test jobs
> +listed that were not specific to that board.  For example, if you
> +had a board named "Bench", then a view created with the view-name
> +"Bench" would also include Benchmark jobs.  You can work around this
> +by specifying a more detailed regular expression for your job spec.
> 
>  For example: ::
> 
>    (container_prompt)$ ftc add-view Bench "Bench.*"
> 
> 
> -This would only include the jobs that started with "Bench" in the "Bench"
> -view.  Benchmark jobs for other boards would not be included, since they
> -only have "Benchmark" somewhere in the middle of their job name - not at the
> -beginning.
> +This would only include the jobs that started with "Bench" in the
> +"Bench" view.  Benchmark jobs for other boards would not be included,
> +since they only have "Benchmark" somewhere in the middle of their job
> +name - not at the beginning.
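Since Jenkins matches the view regex against the whole job name (it is
effectively anchored at the start), this behavior is easy to demonstrate
with grep.  The job names below are made up:

```shell
# A few fake job names: one starts with "Bench", two merely contain
# "Benchmark" in the middle.
cat > /tmp/jobs.txt << 'EOF'
Bench.default.Functional.hello_world
rpi2.default.Benchmark.Dhrystone
beaglebone.default.Benchmark.iperf
EOF

# Emulate the anchored view match: only names starting with "Bench" hit.
grep -E '^Bench.*' /tmp/jobs.txt
# prints: Bench.default.Functional.hello_world
```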
> 
>  ===============================================
>  Add view by test name regular expression
>  ===============================================
> 
> -This command would create a view to show LTP results for multiple boards: ::
> +This command would create a view to show LTP results for multiple
> +boards: ::
> 
>   (container_prompt)$ ftc add-view LTP
> 
> @@ -86,7 +87,8 @@ prefixed with  *"fuego_"*.  ::
>    (container_prompt)$ ftc add-view fuego ".*fuego_.*"
> 
> 
> -And the following command will show all the batch jobs defined in the system: ::
> +And the following command will show all the batch jobs defined in the
> +system: ::
> 
>    (container_prompt)$ ftc add-view .*.batch
> 
> @@ -101,11 +103,13 @@ list of job names.  The job names must be complete, including the
>  board name, spec name and full test name. ::
> 
> 
> -  (container_prompt)$ ftc add-view network-tests =docker.default.Functional.ipv6connect,docker.default.Functional.netperf
> +  (container_prompt)$ ftc add-view network-tests =docker.default.Functional.ipv6connect,docker.default.Functional.netperf
> 
> 
> -In this command, the view would be named "network-tests", and it would show
> -the jobs "docker.default.Functional.ipv6connect" and "docker.default.Functional.netperf".
> +In this command, the view would be named "network-tests", and it would
> +show the jobs "docker.default.Functional.ipv6connect" and
> +"docker.default.Functional.netperf".
> 
> 
> 
> diff --git a/docs/rst_src/Architecture.rst b/docs/rst_src/Architecture.rst
> index 84fc4ab..e85d9c7 100644
> --- a/docs/rst_src/Architecture.rst
> +++ b/docs/rst_src/Architecture.rst
> @@ -5,9 +5,9 @@
>  Architecture
>  ################
> 
> -Fuego consists of a continuous integration system,
> -along with some pre-packaged test programs and a shell-based
> -test harness, running in a Docker container.::
> +Fuego consists of a continuous integration system, along with some
> +pre-packaged test programs and a shell-based test harness, running in
> +a Docker container: ::
> 
>     Fuego = (Jenkins + abstraction scripts + pre-packed tests)
>            inside a container
> @@ -23,21 +23,22 @@ Major elements
> 
>  The major elements in the Fuego architecture are:
> 
> - * Host system
> + * host system
> +
> +   * container build system
> +   * fuego container instance
> 
> -     * Fuego container instance
> -     * Container build system
>       * Jenkins continuous integration system
> 
> -        * web-based user interface (web server on port 8090)
> -        * plugins
> +       * web-based user interface (web server on port 8090)
> +       * plugins
> 
> -     * Test programs
> -     * Build environment (not shown in the diagram above)
> -     * Fuego core system
> +     * test programs
> +     * abstraction scripts (test scripts)
> +     * build environment (not shown in the diagram above)
> 
> - * Target system
> - * Web client, for interaction with the system
> + * target system
> + * web client, for interaction with the system
> 
>  ==============
>  Jenkins
> @@ -82,13 +83,6 @@ system).
>  =========================
>  Pre-packaged tests
>  =========================
> -Fuego contains over 100 pre-packaged tests, ready for you to start
> -testing with these tests "out-of-the-box".  The tests individual
> -tests like a test of 'iputils' or 'pmqtest', as well as several
> -Benchmarks in the area of CPU performance, networking, graphics
> -and realtime.  Fuego also includes some full test suites, like
> -LTP (Linux Test Project).  Finally Fuego includes a set of selftest
> -tests, to validate board operation or Fuego core functionality.
> 
>  =========================
>  Abstraction scripts
> @@ -108,9 +102,10 @@ Fuego uses a set of shell script fragments to support abstractions for
>  Container
>  ==========================
> 
> -By default, Fuego runs inside a Docker container.  This provides two benefits:
> +By default, Fuego runs inside a Docker container.  This provides two
> +benefits:
> 
> - * It makes it easy to run the system on a variety of different Linux
> + * It makes it easy to run the system on a variety of different Linux
>     distributions
>   * It makes the build environment for the test programs consistent
> 
> @@ -131,63 +126,66 @@ utilities and tools available for performing tests
>  Different objects in Fuego
>  ============================
> 
> -It is useful to give an overview of the major objects used in Fuego, as
> -they will be referenced many times:
> +It is useful to give an overview of the major objects used in Fuego,
> +as they will be referenced many times:
> 
>  Fuego core objects:
> 
> - * board - a description of the device under test
> - * test - materials for conducting a test
> - * spec - one or more sets of variables for describing a test variant
> - * plan - a collection of tests, with additional test settings for their
> -   execution
> - * run - the results from a individual execution of a test on a board
> + * board - a description of the device under test
> + * test - materials for conducting a test
> + * spec - one or more sets of variables for describing a test variant
> + * plan - a collection of tests, with additional test settings for
> +   their execution
> + * run - the results from an individual execution of a test on a board
> 
>  Jenkins objects:
> 
> - * node - the Jenkins object corresponding to a Fuego board
> - * job - a Jenkins object corresponding to a combination of board, spec,
> -   and test
> - * build - the test results, from Jenkins perspective - corresponding to
> -   a Fuego 'run'
> -
> -There are both a front-end and a back-end to the system, and different
> -names are used to describe the front-end and back-end objects used by
> -the system, to avoid confusion.  In general, Jenkins objects have
> -rough counterparts in the Fuego system:
> -
> -  +------------------+-------------------------------+
> -  | Jenkins object   | Corresponds to fuego object   |
> -  +==================+===============================+
> -  | node             | board                         |
> -  +------------------+-------------------------------+
> -  | job              | test                          |
> -  +------------------+-------------------------------+
> -  | build            | run                           |
> -  +------------------+-------------------------------+
> + * node - the Jenkins object corresponding to a Fuego board
> + * job - a Jenkins object corresponding to a combination of board,
> +   spec, and test
> + * build - the test results, from Jenkins perspective - corresponding
> +   to a Fuego 'run'
> +
> +There are both a front-end and a back-end to the system, and
> +different names are used to describe the front-end and back-end
> +objects used by the system, to avoid confusion.  In general, Jenkins
> +objects have rough counterparts in the Fuego system:
> +
> +        +------------------+-------------------------------+
> +        | Jenkins object   | corresponds to fuego object   |
> +        +==================+===============================+
> +        | node             | board                         |
> +        +------------------+-------------------------------+
> +        | job              | test                          |
> +        +------------------+-------------------------------+
> +        | build            | run                           |
> +        +------------------+-------------------------------+
> 
>  =======================
>   Jenkins operations
>  =======================
> 
>  How does Jenkins work?
> - * When the a job is initiated, Jenkins starts a slave process to run
> + * When a job is initiated, Jenkins starts a slave process to run
>     the test that corresponds to that job
>   * Jenkins records stdout from slave process
> - * The slave (slave.jar) runs a script specified in the config.xml
> -   for the job
> + * the slave (slave.jar) runs a script specified in the config.xml for
> +   the job
> 
> -   * This script sources functions from the scripts and overlays
> -     directory of Fuego, and does the actual building, deploying and
> +   * this script sources functions from the scripts and overlays
> +     directory of Fuego, and does the actual building, deploying and
>       test executing
> -   * Also, the script does results analysis on the test logs, and calls
> -     the post_test operation to collect additional information and cleanup
> -     after the test
> -
> - * While a test is running, Jenkins accumulates the log output from the
> -   generated test script and displays it to the user (if they are watching
> -   the console log)
> - * Jenkins provides a web UI for browsing the nodes, jobs, and test
> +   * Also, the script does results analysis on the test logs, and
> +     calls the post_test operation to collect additional information
> +     and clean up after the test
> +
> + * while a test is running, Jenkins accumulates the log output from
> +   the generated test script and displays it to the user (if they are
> +   watching the console log)
> +
> + * Jenkins provides a web UI for browsing the nodes, jobs, and test
>     results (builds), and displaying graphs for benchmark data
> 
>  ======================
> @@ -200,103 +198,112 @@ How do the Fuego scripts work?
>  Test execution
>  ======================
> 
> - * Each test has a base script, that defines a few functions specific
> + * each test has a base script that defines a few functions specific
>     to that test (see below)
> - * Upon execution, this base script loads additional test variables
> -   and function definitions from other files using something called
> + * upon execution, this base script loads additional test variables
> +   and function definitions from other files using something called
>     the overlay generator
> - * The overlay generator creates a script containing test variables
> + * the overlay generator creates a script containing test variables
>     for this test run
> 
> -    * The script is created in the run directory for the test
> -    * The script is called prolog.sh
> -    * The overlay generator is called ovgen.py
> - * The base script (with the test variable script sourced into it)
> -   runs on the host, and uses fuego functions to perform different
> +   * the script is created in the run directory for the test
> +   * the script is called prolog.sh
> +   * the overlay generator is called ovgen.py
> +
> + * the base script (with the test variable script sourced into it)
> +   runs on the host, and uses fuego functions to perform different
>     phases of the test
> - * For a detailed flow graph of normal test execution see:
> + * for a detailed flow graph of normal test execution see:
>     :ref:`test execution flow outline <Outline>`
> 
>  ================================
> -Test variable file generation
> +test variable file generation
>  ================================
> 
> - * The generator takes the following as input:
> -    * environment variables passed by Jenkins
> -    * board file for the target (specified with NODE_NAME)
> -    * tools.sh (vars from tools.sh are selected with TOOLCHAIN,
> -      from the board file)
> -    * the distribution file, and (selected with DISTRIB)
> -    * the testplans for the test (selected with TESTPLAN)
> -    * test specs for the test
> -
> -The generator produces the test variable file, which it places
> -in the "run" directory for a test, which has the name ``prolog.sh``
> -This generation happens on the host, inside the docker container.
> -This test variable file has all the functions which are available to
> -be called by the base test script, as well as test variables
> -from various source in the test system.
> + * the generator takes the following as input:
> +
> +   * environment variables passed by Jenkins
> +   * board file for the target (specified with NODE_NAME)
> +   * tools.sh (vars from tools.sh are selected with TOOLCHAIN, from
> +     the board file)
> +   * the distribution file (selected with DISTRIB)
> +   * the testplans for the test (selected with TESTPLAN)
> +   * test specs for the test
> +
> + * the generator produces the test variable file
> + * the test variable file is in the "run" directory for a test, and
> +   has the name: prolog.sh
> + * this generation happens on the host, inside the docker container
> + * the test variable file has functions which are available to be
> +   called by the base test script
> 
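A tiny excerpt of a generated prolog.sh could make this concrete.  Every
name and value below is made up for illustration; the real file is
produced by ovgen.py from the board file, spec, and overlays:

```shell
# Hypothetical fragment of a generated prolog.sh (values illustrative).
# Test variables gathered from the board file and spec:
NODE_NAME="beaglebone"
BOARD_TESTDIR="/home/fuego"
TOOLCHAIN="debian-armhf"
DISTRIB="base.dist"

# Function definitions gathered from the overlay files, available to
# the base test script (body is a placeholder):
ov_get_firmware() {
    cat /proc/version
}
```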
>  .. image:: ../images/fuego-script-generation.png
>     :width: 600
> 
>  Input
>  ======
> - * Input descriptions:
> -    * the board file has variables defining attributes of the board,
> -      like the toolchain, network address, method of accessing the
> -      board, etc.
> -    * The tools.sh script has variables which are used for identifying the
> -      toolchain used to build binary test programs
> -
> -       * It uses the TOOLCHAIN variable to determine the set of variables
> -         to define
> -
> -   * A testplan lists multiple tests to run
> -      * It specifies a test name and spec for each one
> -      * a spec file holds the a set of variable declarations which
> -        are used by the tests themselves.
> -        These are put into environment variables on the target.
> -
> - * ovgen.py reads the plans, board files, distrib files and specs,
> -   and produces a single prolog.sh file that has all the information
> -   for the test
> +
> + * input descriptions:
> +
> +   * the board file has variables defining attributes of the board,
> +     like the toolchain, network address, method of accessing the
> +     board, etc.
> +   * tools.sh has variables which are used for identifying the
> +     toolchain used to build binary test programs
> +
> +     * it uses the TOOLCHAIN variable to determine the set of
> +       variables to define
> +
> +   * a testplan lists multiple tests to run
> +
> +     * it specifies a test name and spec for each one
> +
> +     * a spec file holds a set of variable declarations which are
> +       used by the tests themselves.
> +       These are put into environment variables on the target.
> +
> + * ovgen.py reads the plans, board files, distrib files and specs,
> +   and produces a single prolog.sh file that has all the information
> +   for the test
> 
>   * Each test in the system has a fuego shell script
> 
> -    * This must have the same name as the base name of the test:
> -       * \<base_test_name>.sh
> +   * this must have the same name as the base name of the test:
> +
> +     * \<base_test_name>.sh
> 
>   * Most (but not all) tests have an additional test program
> 
> -    * this program is executed on the board (the device under test)
> -    * it is often a compiled program, or set of programs
> -    * it can be a simple shell script
> -    * it is optional - sometime the base script can execute the
> -      needed commands for a test without an additional program
> -      placed on the board
> +   * this program is executed on the board (the device under test)
> +   * it is often a compiled program, or set of programs
> +   * it can be a simple shell script
> +   * it is optional - sometimes the base script can execute the needed
> +     commands for a test without an additional program placed on the
> +     board
> +
> + * the base script declares the tarfile for the test, and has
> +   functions for: test_build(), test_deploy() and test_run()
> 
> - * The base script declares the tarfile for the test, and has functions
> -   for: test_build(), test_deploy() and test_run()
> +   * the test script is run on host (in the container)
> 
> -    * The test script is run on host (in the container)
> -       * but it can include commands that will run on the board
> -    * tarball has the tarfile
> -    * test_build() has commands (which run in the container) to compile
> -      the test program
> -    * test_deploy() has commands to put the test programs on the target
> -    * test_run() has commands to define variables, execute the actual
> -      test, and log the results.
> +     * but it can include commands that will run on the board
> 
> - * The test program is run on the target
> +   * tarball has the tarfile
> +   * test_build() has commands (which run in the container) to compile
> +     the test program
> +   * test_deploy() has commands to put the test programs on the target
> +   * test_run() has commands to define variables, execute the actual
> +     test, and log the results.
> 
> -    * This is the actual test program that runs and produces a result
> + * the test program is run on the target
> +
> +   * this is the actual test program that runs and produces a result
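It might be worth closing this section with a skeletal base script.  The
function bodies below are illustrative sketches, not a real test; 'put'
and 'report' are the Fuego helpers mentioned elsewhere on this page:

```shell
# Hypothetical base script sketch for a test "Functional.hello_world".
# Defining these functions does not execute them; Fuego calls them
# during the corresponding test phases.
tarball=hello_world.tar.gz

test_build() {
    # runs in the container; typically 'make' or 'configure; make'
    make
}

test_deploy() {
    # copy the built program to the board with the Fuego 'put' helper
    put hello_world $BOARD_TESTDIR/fuego.$TESTDIR/
}

test_run() {
    # execute on the board and capture output with 'report'
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello_world"
}
```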
> 
>  ====================
>  fuego test phases
>  ====================
> 
> -A test execution in fuego runs through several phases, some of which
> +A test execution in fuego runs through several phases, some of which
>  are optional, depending on the test.
> 
>  The test phases are:
> @@ -339,12 +346,12 @@ software.
> 
>  This phase is split into multiple parts:
>   * pre_build - build workspace is created, a build lock is acquired
> -   and the tarball is unpacked
> + * and the tarball is unpacked

This bullet is incorrect.

> 
> -    * :ref:`unpack <unpack>` is called during pre_build
> +   * :ref:`unpack <unpack>` is called during pre_build
>   * test_build - this function, from the base script, is called
> 
> -    * Usually this consists of 'make', or 'configure ; make'
> +   * usually this consists of 'make', or 'configure ; make'
>   * post_build - (empty for now)
> 
>  deploy
> @@ -356,10 +363,11 @@ required supporting files, to the target.
>  This consists of 3 sub-phases:
>   * pre_deploy - cd's to the build directory
>   * test_deploy - the base script's 'test_deploy' function is called.
> -    * Usually this consists of tarring up needed files, copying them
> -      to the target with 'put', and then extracting them there
> -    * Items should be placed in the directory
> -      $BOARD_TESTDIR/fuego.$TESTDIR/ directory on the target
> +
> +   * Usually this consists of tarring up needed files, copying them to
> +     the target with 'put', and then extracting them there
> +   * Items should be placed in the
> +     $BOARD_TESTDIR/fuego.$TESTDIR/ directory on the target
>   * post_deploy - removes the build lock
> 
>  run
> @@ -415,47 +423,49 @@ Also, a final analysis is done on the system logs is done in this step
>  phase relation to base script functions
>  ============================================================
> 
> -Some of the phases are automatically performed by Fuego, and some end
> -up calling a routine in the base script (or use data from the base
> -script) to perform their actions.  This table shows the relation
> -between the phases and the data and routines that should be defined in
> -the base script.
> +Some of the phases are automatically performed by Fuego, and some end
> +up calling a routine in the base script (or use data from the base
> +script) to perform their actions.  This table shows the relation
> +between the phases and the data and routines that should be defined
> +in the base script.
> 
>  It also shows the most common commands utilized by base script
>  functions for this phase.
> 
> 
> -  +------------+-------------------------------+------------------------------+
> -  | phase      | relationship to base script   | common operations            |
> -  +============+===============================+==============================+
> -  | pre_test   | calls 'test_pre_check'        |assert_define, is_on_target,  |
> -  |            |                               |check_process_is_running      |
> -  +------------+-------------------------------+------------------------------+
> -  | build      | uses the 'tarfile' definition,|patch,configure,make          |
> -  |            | calls'test_build'             |                              |
> -  +------------+-------------------------------+------------------------------+
> -  | deploy     | Calls 'test_deploy'           | put                          |
> -  +------------+-------------------------------+------------------------------+
> -  | run        | calls 'test_run'              | cmd,report,report_append     |
> -  +------------+-------------------------------+------------------------------+
> -  |get_testlog |(none)                         |                              |
> -  +------------+-------------------------------+------------------------------+
> -  |processing  |calls 'test_processing'        | log_compare                  |
> -  +------------+-------------------------------+------------------------------+
> -  |post_test   |calls 'test_cleanup'           | kill procs                   |
> -  +------------+-------------------------------+------------------------------+
> +        +------------+-------------------------------+---------------------------+
> +        | phase      | relationship to base script   | common operations         |
> +        +============+===============================+===========================+
> +        | pre_test   | calls 'test_pre_check'        |assert_define,is_on_target |
> +        |            |                               |,check_process_is_running  |
> +        +------------+-------------------------------+---------------------------+
> +        | build      | uses the 'tarfile' definition,|patch,configure,make       |
> +	|            | calls'test_build'             | 				 |
> +        +------------+-------------------------------+---------------------------+
> +        | deploy     | Calls 'test_deploy'           | put                       |
> +        +------------+-------------------------------+---------------------------+
> +        | run        | calls 'test_run'              | cmd,report,report_append  |
> +        +------------+-------------------------------+---------------------------+
> +        |get_testlog |(none)                         |                           |
> +        +------------+-------------------------------+---------------------------+
> +        |processing  |calls 'test_processing'        | log_compare               |
> +        +------------+-------------------------------+---------------------------+
> +        |post_test   |calls 'test_cleanup'           | kill procs                |
> +	+------------+-------------------------------+---------------------------+

I had moved this table left a few spaces, to allow two of the rows
to be longer.  It looks overall like a lot of my
changes in the file got removed, and a lot of trailing whitespace got
added.  I'll have to review this one in more detail to see if I want
to revert some of these changes.

> 
>  other scripts and programs
>  ==============================
> 
>   * parser.py is used for benchmark tests
> -    * It is run against the test log, on the host
> -    * It extracts the values from the test log and puts them in a
> -      normalized format
> -    * These values, called benchmark 'metrics', are compared against
> -      pre-defined threshholds to determine test pass or failure
> -    * The values are saved for use by plotting software
> +
> +   * it is run against the test log, on the host
> +   * it extracts the values from the test log and puts them in a
> +     normalized format
> +   * these values, called benchmark 'metrics', are compared against
> +     pre-defined thresholds to determine test pass or failure
> +   * the values are saved for use by plotting software
> +
> 
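The extract-metrics-and-compare-to-thresholds flow described in these bullets boils down to something like this (a generic sketch, not Fuego's actual parser.py interface; the metric name and the 40.0 limit are invented for illustration):

```shell
# Pull 'name: value' metrics out of a test log and compare one
# against a pre-defined threshold to decide pass/fail.
tmp=$(mktemp)
printf 'throughput: 42.5\nlatency: 9.1\n' > "$tmp"
awk -F': *' '$1 == "throughput" {
    if ($2 >= 40.0) print "PASS"; else print "FAIL"
}' "$tmp"
rm -f "$tmp"
```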
>  ==============
>   Data Files
> @@ -480,7 +490,7 @@ The base shell script should:
>   * execute the tests
>   * read the log data from the test
> 
> -The base shell script can handle host/target tests (because it runs on
> +The base shell script can handle host/target tests (because it runs on
>  the host).
> 
>  (That is, tests that involve actions on both the host and target.
> @@ -500,9 +510,13 @@ specifies the board, spec and base script for the test.
>  ========
> 
>  Human roles:
> - * test program author - person who creates a new standalone test program
> - * test integrator - person who integrates a standalone test into fuego
> - * fuego developer - person who modifies Fuego (including the fuego system scripts or Jenkins) to support more test scenarios or
> additional features
> + * test program author - person who creates a new standalone test
> +   program
> + * test integrator - person who integrates a standalone test into
> +   fuego
> + * fuego developer - person who modifies Fuego (including the fuego
> +   system scripts or Jenkins) to support more test scenarios or
> +   additional features
>   * tester - person who executes tests and evaluates results
> 
>  =================
> @@ -514,3 +528,24 @@ their interactions at:
> 
>   * :ref:`Fuego Developer Notes <Devref>`
> 
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +
> +

These empty lines at the end of the file can all be removed.
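One way to do that mechanically (a sketch, shown on a scratch file; the same awk works on the wiki-exported pages) is to buffer blank lines and only emit them when a non-blank line follows, so a trailing run is dropped while blank lines in the middle survive:

```shell
# Drop the run of empty lines at the end of a file, keeping
# blank lines that separate content in the middle.
tmp=$(mktemp)
printf 'title\n\nbody\n\n\n\n' > "$tmp"
awk 'NF { printf "%s", blank; print; blank = "" }
    !NF { blank = blank "\n" }' "$tmp"
rm -f "$tmp"
```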

> diff --git a/docs/rst_src/Artwork.rst b/docs/rst_src/Artwork.rst
> index b4b89d6..f5868fe 100644
> --- a/docs/rst_src/Artwork.rst
> +++ b/docs/rst_src/Artwork.rst
> @@ -1,9 +1,11 @@
> +.. _artwork:
> 
>  #########
>  Artwork
>  #########
> 
> -This page has artwork (logos, photos and images) for use in Fuego presentations and documents.
> +This page has artwork (logos, photos and images) for use in Fuego
> +presentations and documents.
> 
>  =======
>  Logos
> @@ -83,7 +85,8 @@ images
>  Photos
>  =========
> 
> - * Tim Bird presenting Introduction to Fuego at ELC 2016 (Youtube video framegrab) (png, 370x370)
> + * Tim Bird presenting Introduction to Fuego at ELC 2016 (Youtube
> +   video framegrab) (png, 370x370)

It's not where I would have broken the lines, but whatever.

> 
>    .. image:: ../images/Youtube-Intro-to-Fuego-ELC-2016-square.png
>       :height: 100

########################################
I'm going to stop here, because this message is getting too long, and I
want to communicate my feedback so far.  I'll send feedback on the rest
of the patch in another e-mail.

Thanks for all this work.  I think we're narrowing in on a set of conventions for
the conversion, and a look and feel for the docs that will be very nice.

I've accepted this patch, and will likely fix a bunch of these issues.  So this feedback
is not intended to have you go back and fix anything (unless you see that I missed
some change in my commits).  That is, you don't have to re-do this patch or these
files, but please use this feedback for additional page conversions going forward.

Regards,
 -- Tim


