[Fuego] [PATCH] Docs: Convert the Fuego wiki pages into rst docs categorized under "feature references"

Bird, Tim Tim.Bird at sony.com
Tue Nov 10 22:19:50 UTC 2020



> -----Original Message-----
> From: Pooja <pooja.sm at pathpartnertech.com>
> 
> From: Pooja More <pooja.sm at pathpartnertech.com>
> 
> To avoid the duplicate section heading problem (Test_Specs_and_Plans and Testplan_Reference
> have the same section heading, "Testplan Reference"), the ''autosectionlabel_prefix_document''
> variable is set to true in the conf.py file, which will prefix each section label with the
> name of the document it is in.  This patch refers to pages as :ref:`<filepath>:Section heading`
> in order to avoid duplicate section headings.

Well.  I haven't approved that yet, but it looks like it only affects one link, in Test_Specs_and_Plans.
$ grep :ref:\`Testplan_Reference

Test_Specs_and_Plans.rst:on the page:  :ref:`Testplan_Reference:Testplan Reference`

I have NOT updated the sphinx configuration, but I'm going to leave this reference alone
and we'll decide whether to leave it as is, or use some other method to solve the duplicate
link problem.

> 
> Fuego wiki pages converted:
> Jenkins_Visualization
> Overlay_Generation
> Target_Packages
> Test_Specs_and_Plans
> Testplan_Reference
> 
> Signed-off-by: Pooja More <pooja.sm at pathpartnertech.com>
> ---
>  docs/rst_src/Jenkins_Visualization.rst | 437 +++++++++++++++++++++++++++++++++
>  docs/rst_src/Overlay_Generation.rst    | 222 +++++++++++++++++
>  docs/rst_src/Target_Packages.rst       |  71 ++++++
>  docs/rst_src/Test_Specs_and_Plans.rst  | 317 ++++++++++++++++++++++++
>  docs/rst_src/Testplan_Reference.rst    | 204 +++++++++++++++
>  5 files changed, 1251 insertions(+)
>  create mode 100644 docs/rst_src/Jenkins_Visualization.rst
>  create mode 100644 docs/rst_src/Overlay_Generation.rst
>  create mode 100644 docs/rst_src/Target_Packages.rst
>  create mode 100644 docs/rst_src/Test_Specs_and_Plans.rst
>  create mode 100644 docs/rst_src/Testplan_Reference.rst
> 
> diff --git a/docs/rst_src/Jenkins_Visualization.rst b/docs/rst_src/Jenkins_Visualization.rst
> new file mode 100644
> index 0000000..629ec5f
> --- /dev/null
> +++ b/docs/rst_src/Jenkins_Visualization.rst
> @@ -0,0 +1,437 @@
> +#######################
> +Jenkins Visualization
> +#######################
> +
> +Fuego test results are presented in the Jenkins interface via a number
> +of mechanisms.
> +
> +===========================
> +built-in Jenkins status
> +===========================
> +
> +Jenkins automatically presents the status of the last few tests (jobs)
> +that have been executed, on the job page for those jobs.
> +
> +A list of previous builds of the job are shown in the left-hand pane
> +of the page for the job, showing colored balls indicating the test
> +status.
> +
> +FIXTHIS - add more details here
I'll fix this section.

> +
> + * What are the different job statuses
> +
> +   * Pass, fail, unstable vs. stable
> +   * What do the colors mean
> +
> +================
> +flot plugin
> +================
> +
> +The flot plugin for Jenkins provides visualization of Fuego test
> +results.
> +
> +In Fuego 1.0 and previous (JTA), this plugin only showed plot data for
> +Benchmark results.  In Fuego 1.2, all tests have charts presented,
> +showing recent test results.  For benchmarks, the results are shown as
> +plots (graphs) of measure data, and for functional tests, tables are
> +shown with either individual results for each testcase, or summary
> +data for the testsets in the test.


I updated the wording for this.

> +
> +See :ref:`flot` for more information.
> +
> +====================
> +Charting details
> +====================
> +
> +Fuego results charts consist of either plots (a graph of results
> +versus build number) or tables (a table of results versus build
> +number).
> +
> +There are 3 different chart output options in Fuego 1.2:
> +
> + 1) A plot of benchmark measures (called "measure_plot")
> + 2) A table of testcase results (called "testcase_table")
> + 3) A table of testcase summary counts per testset (called "testset_summary_table")
> +
> +A user can control what type of visualization is used for a test using
> +a file called :ref:`chart_config.json`.  This file is in the test
> +directory for each individual test.  See the wiki page for that file
> +for additional details.
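> +
> +For illustration, a minimal ``chart_config.json`` for a benchmark
> +might look something like the following (the key names shown here are
> +illustrative; see the chart_config.json page and existing tests for
> +the authoritative format): ::
> +
> +  {
> +      "chart_type": "measure_plot",
> +      "measures": [ "main.shell_random" ]
> +  }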
> +
> +Scope of data displayed
> +============================
> +
> +The page for a particular job, in Jenkins, shows the data for all of
> +the specs and boards related to the test. This can be confusing, but
> +it allows users to compare results between boards, and between
> +different test specs for the same test.
> +
> +For example, a job that runs the test Benchmark.bonnie using the
> +'default' test spec (e.g. the job board1.default.Benchmark.bonnie)
> +shows results for:
> +
> + * Boards: board1, and also other boards
> + * Specs: default, noroot
> + * Measures: (the ones specified in ``chart_config.json``)
> +
> +============================
> +Planned for the future
> +============================
> +
> +In future releases of Fuego, additional chart types are envisioned.
> +A fourth chart type would be:
> +
> +  4) A plot of testcase summary counts per testset (called "testset_summary_plot")
> +
> +=============================
> +Detailed chart information
> +=============================
> +
> +Internally, output (by the :ref:`flot` module) is controlled by a file
> +called: ``flot_chart_data.json``
> +
> +Inside that, there is a data structure indicating the configuration
> +for one or more charts, called the chart_config.  This is placed there
> +during chart processing, by the results parser system.  A section of
> +that file, the chart_config element, is a direct copy of the data from
> +``chart_config.json`` that comes from the test directory for the test.
> +
> +Information flow
> +======================
> +
> +The internal module ``fuego_parser_results.py`` is used to generate
> +``results.json``.  That module takes the results from multiple ``run.json``
> +files, and puts them into a single results file.
> +
> +The internal module ``prepare_chart_data.py`` is used to generate
> +``flat_plot_data.txt``.  The data in this file is stored as a series of
> +text lines, one per result for every testcase in every run of the
> +test.
> +
> +This file is then used to create a file called ``flot_chart_data.json``,
> +which has the data pre-formatted as either 'flot' data structures or
> +HTML tables.
> +
> +A file called ``chart_config.json`` is used to determine what type of
> +charts to include in the file, and what data to include.
> +
> +Here's an ASCII diagram of this flow:
> +
> +.. note::
> +   Programs are in rectangles (with '+' corners), and data files are in
> +   "double-line rounded rectangles" with '/' and '\' corners.
> +
> +::
> +
> +  +-------------+    //===========\\    +---------+    //========\\
> +  |test program | -> ||testlog.txt|| -> |parser.py| -> ||run.json|| ----+
> +  +-------------+    \\===========//    +---------+    \\========//     |
> +                                                                        |
> +    +-------------------------------------------------------------------+
> +    |
> +    |   +---------------------+     //==================\\
> +    +-> |prepare_chart_data.py| <-> ||flat_plot_data.txt||
> +        +---------------------+     \\==================//
> +            ^              |
> +            |              |
> +  //=================\\    |   //===============================\\    +------+
> +  ||chart_config.json||    +-> ||  flot_chart_data.json         || -> |mod.js| -> (table or graph)
> +  \\=================//        ||(HTML table or flot graph data)||    +------+
> +                               \\===============================//
> +

I replaced this ASCII diagram with an image.

> +The flot program mod.js is used to draw the actual plots and tables
> +based on ``flot_chart_data.json``.  mod.js is included in the web page for
> +the job view by Jenkins (along with the base flot libraries and jquery
> +library, which flot uses).
> +
> +measure_plot
> +===============
> +
> +A measure_plot is a graph of measures for a benchmark, with the
> +following attributes: ::
> +
> +  title=<test>-<testset>
> +  X series=build number
> +  Y1 series=result
> +  Y1 label=<board>-<spec>-<test>-<kernel>-<tguid>
> +  Y2 series=ref
> +  Y2 label=<board>-<spec>-<test>-<kernel>-<tguid>-ref
> +
> +
> +It plots measures (y) versus build_numbers.
> +
> +Here's example data for this: ::
> +
> + "charts": [
> +    {  # definition of chart 1
> +      "title": "Benchmark.fuego_check_plots-main.shell_random"
> +      "chart": {
> +         "chart_type": "measure_plot",
> +         "data": [
> +           {
> +              "label": "min1-default_spec-Benchmark.fuego_check_plots-v4.4-main.shell_random",
> +              "data": [ ["1","1006"],["2","1116"] ],
> +              "points": {"symbol": "circle"}
> +           },
> +           {
> +              "label": "min1-default_spec-Benchmark.fuego_check_plots-v4.4-main.shell_random-ref",
> +              "data": [ ["1","800"],["2","800"] ],
> +              "points": ["symbol":"cross"}
> +           }
> +         ]
> +         # note: could put flot config object here
> +      }
> +  }
> + ]
> +
> +
> +FIXTHIS - add testset, and draw one plot per testset.
> +
> +measure_table
> +===================
> +
> +A measure_table is a table of measure values per test spec, with the
> +following attributes:
> +
> + * row=(one per line with matching testspec/build-number in flat_chart_data.txt)
> + * columns=test set, build_number, testcase value, testcase ref value, testcase
> +   result(PASS/FAIL), duration
> + * Sort rows by testspec, then by build_number
> +
> +Here was the format of the first attempt: ::
> +
> +  title=<board>-<test>-<spec> (kernel)
> +  headers:
> +     board:
> +     kernel(s):
> +     test spec:
> +  ---------------------------------------------------------------
> +                            |    build number
> +  measure items  | test set |   b1   |   b2   |   b3   |   bN   |
> +  X1             |  <ts1>   | value1 | value2 | value3 | valueN |
> +  X1(ref)        |  <ts1>   | ref(X1)| ref(X1)| ref(X1)| ref(X1)|
> +  <bn>           |  <ts2>   |                ...
> +    (row-span    |  <ts2>   |                ...
> +  as appropriate)|  <ts3>   |                ...
> +  <b2n>          |  <ts3>   |                ...
> +
> +
> +Also, 'valueN' is displayed in an appropriate color: GREEN if the value
> +is within the expectation interval specified by 'ref', and RED
> +otherwise, so that more information can be conveyed in the chart.
> +
> +testcase_table
> +====================
> +
> +A testcase_table is a table of testcases (usually for a functional
> +test), with the following attributes: ::
> +
> +  title=<board>-<spec>-<test>-<kernel>-<tguid>
> +  headers:
> +     board:
> +     test:
> +     kernel:
> +     tguid:
> +  row=(one per line with matching tguid in flat_chart_data.txt)
> +  columns=build_number, start_time/timestamp, duration, result
> +
> +
> +It shows testcase results by build_id (runs).
> +
> +Daniel's table has: ::
> +
> +  overall title=<test>
> +    chart title=<board>-<spec>-<testset>-<testcase>
> +    headers:
> +       board:
> +       kernel_version:
> +       test_spec:
> +       test_case:
> +       test_plan:
> +       test_set:
> +       toolchain:
> +   build number | status | start_time | duration
> +
> +
> +Cai's table has: ::
> +
> +   overall title=<test>
> +   summary:
> +      latest total:
> +      latest pass:
> +      latest fail
> +      latest untest:
> +   table:
> +   "no" | <test-name>  | test time |
> +                               | start-time |
> +                               | end-time |
> +                               | board version |
> +                               | test dir |
> +                               | test device |
> +                               | filesystem |
> +                               | command line |
> +   --------------------------------------------
> +   testcase number | testcase     | result |
> +
> +
> +This shows the result of only one run (the latest).
> +
> +Tim's testcase table has:
> +(one table per board-testname-testset) ::
> +
> +   overall title=<test>
> +   header:
> +     board
> +     kernel version
> +     spec?
> +     filesystem
> +     test directory?
> +     command line?
> +   --------------------------------------------
> +   tguid | results
> +         | build_number |
> +         | b1 | b2 | bn |
> +   <tguid1>|result1|result2|resultn|
> +        totals
> +   pass: |    |    |    |
> +   fail: |    |    |    |
> +   skip: |    |    |    |
> +   error:|    |    |    |
> +   --------------------------------------------

I removed this material about alternative data layouts.

> +
> +testset_summary_table
> +==========================
> +
> +A testset_summary_table is a table of testsets (usually for a complex
> +functional test) with the following attributes:
> +
> + * row=(one per line with matching testset/build-number in flat_chart_data.txt)
> + * columns=test set, build_number, start_time/timestamp, testset pass count, testset fail count, duration
> + * Sort rows by testset, then by build_number
> +
> +::
> +
> +  title=<board>
> +  headers:
> +     board:
> +     kernel(s):
> +  -----------------------------------------------------
> +                            |    counts
> +  build number   | test set | pass | fail| skip | err |
> +  <bn>           |  <ts1>   |
> +    (row-span    |  <ts2>   |
> +  as appropriate)|  <ts3>   |
> +  <b2n>          |  <ts1>   |
> +                 |  <ts2>   |
> +
> +
> +Here was the format of the first attempt: ::
> +
> +  title=<board>-<spec>-<test>-<kernel>
> +  headers:
> +     board:
> +     test:
> +     kernel:
> +  -------------------------------------------------------------------------
> +                                                |    counts
> +  testset | build_number | start time| duration | pass | fail| skip | err |
> +  <ts>    | ...|
> +
> +
> +It shows testset summary results by run.
> +
> +Here's an alternate testset summary arrangement, that I'm not using at
> +the moment: ::
> +
> +   --------------------------------------------
> +   testset | results
> +           | b1                      | b2    | bn    |
> +           | pass | fail | skip | err |p|f|s|e|p|f|s|e|
> +   <ts>    | <cnt>| <cnt>| <cnt>| <cnt>...            |
> +        totals
> +   --------------------------------------------
> +
> +
> +
> +testset_summary_plot
> +==========================
> +
> +A testset_summary_plot is a graph of testsets (usually for a complex
> +functional test) with the following attributes: ::
> +
> +  title=<board>-<spec>-<test>-<kernel>
> +  X series=build number
> +  Y1 series=pass_count
> +  Y1 label=<board>-<spec>-<test>-<kernel>-<testset>-pass
> +  Y2 series=fail_count
> +  Y2 label=<board>-<spec>-<test>-<kernel>-<testset>-fail
> +
> +
> +It graphs testset summary results versus build_ids.
> +
> +structure of chart_data.json
> +==================================
> +
> +Here's an example: ::
> +
> + {
> +  "chart_config": {
> +     "type": "measure_plot",
> +     "title": "min1-Benchmark.fuego_check_plots-default",
> +     "chart_data": {
> +        data
> +     }
> +  }
> + }
> +
> +
> +Features deferred to a future release
> +========================================
> +
> + * Ability to specify the axes for plots
> + * Ability to specify multiple charts in chart_config
> +
> +  * Current Daniel code tries to automatically do this based on test_sets
> +
> +========================================
> +Architecture for generic charting
> +========================================
> +
> +Assuming you have a flat list of entries with attributes for
> +board, testname, spec, tguid, result, etc., then you can treat this like
> +a SQL database, and do the following:
> +
> + * Make a list of charts to build
> +
> +   * Have a chart-loopover-key = type of data to use for loop over charts
> +   * Or, specify a list of charts
> +
> + * Define a key to use to extract data for a chart (the chart-key)
> + * For each chart:
> +
> +   * Make a list of rows to build
> +
> +     * Have a row-loopover-key = filter for rows to include
> +     * Or, specify a list of rows
> +
> +   * Define a key to use to extract data for each row
> +   * If sub-columns are defined:
> +
> +     * Make a sub-column-key
> +     * Make a two-dimensional array to hold the sub-column data
> +
> +   * For each entry:
> +
> +     * Add the entry to the correct row and sub-column
> +
> +   * Sort by the desired column
> +   * Output the data in table format
> +
> +     * Loop over rows in sort order
> +     * Generate the html for each row
> +
> +       * Loop over sub-columns, if defined
> +
> +   * Return html
> +
> +There's a similar set of data (keys, looping) for defining plot data,
> +with keys selecting the axes.
> diff --git a/docs/rst_src/Overlay_Generation.rst b/docs/rst_src/Overlay_Generation.rst
> new file mode 100644
> index 0000000..df72771
> --- /dev/null
> +++ b/docs/rst_src/Overlay_Generation.rst
> @@ -0,0 +1,222 @@
> +####################
> +Overlay Generation
> +####################
> +
> +Overlay generation refers to the process of converting overlay files
> +into a test variable script.  This allows for board files and base
> +test scripts to override functions and variables in the base fuego
> +system with customized versions.  This implements a weak form of
> +object-oriented programming.
> +
> +At run time, the base test script is sourced.  This in turn sources
> +the fuego test system.  During that 'source' operation, environment
> +variables (NODE_NAME and DISTRIB) are used to select the .board and
> +.dist files for the target.  These files, in turn, can inherit and
> +include definitions of variables and functions from other files
> +(called "overlay" or "class" files).
> +
> +The program ``ovgen.py`` is called to read the .board and .dist files,
> +combine the information in them with the overlay files, and
> +finally add information from the testplan and test spec
> +files, to create a single unified ``prolog`` script.  This script is
> +called the ``test variables file`` and is sourced into the running
> +script, to provide final definitions for functions and variables used
> +during test execution.
> +
> +The call to ``ovgen.py`` looks like this: ::
> +
> + * $OF_OVGEN $OF_CLASSDIR_ARGS $OF_OVFILES_ARGS $OF_TESTPLAN_ARGS $OF_SPECDIR_ARGS $OF_OUTPUT_FILE_ARGS

Leading '*' not needed here.

> +
> +Which expands to something like: ::
> +
> + /fuego-core/engine/scripts/ovgen/ovgen.py \
> +   --classdir /fuego-core/engine/overlays//base \
> +   --ovfiles /fuego-core/engine/overlays//distribs/nologger.dist \
> +             /fuego-core/engine/overlays//boards/qemu-arm.board \
> +   --testplan /fuego-core/engine/overlays//testplans/testplan_default.json \
> +   --output /fuego-rw/work/qemu-test-arm_prolog.sh
> +
> +
> +This says to take the 2 ovfiles mentioned: ``nologger.dist`` and
> +``qemu-arm.board``, and process them using the indicated classdir,
> +testplan and specdir, to produce the output ``qemu-test-arm_prolog.sh``.
> +
> +The result will be a single file containing all the functions and
> +variables defined in the combined files, taking into account any
> +overrides encountered.
> +
> +The classdir defines where base ``fuegoclass`` files are located, which
> +can be included or inherited into the environment space.
> +
> +The testplan and specdir are used to augment the environment space
> +with variables for the indicated testplan.
> +
> +========================================
> +Inheritance, inclusion and overrides
> +========================================
> +
> +The system implements a weak form of object-oriented behavior
> +(specifically function and variable polymorphism), by allowing
> +functions and variables from the base Fuego system to be overridden
> +during execution of the program.
> +
> +A ``class`` file has the same syntax as a shell script, but has the
> +extension ``.fuegoclass``.  To include the material from a class file
> +into another file, you use either the 'inherit' keyword or the
> +'include' keyword.
> +
> +If you 'inherit' a class file, then the variables and functions in the
> +file may be overridden by local definitions in your shell script.
> +
> +The functions which are intended to be overridable start with the
> +prefix ov_ (usually), and reside in the 'class' files in the classdir.
> +Variables can also be overridden.  These have no special identifying
> +prefix.
> +
> +If you 'include' a class file, then the variables and functions in
> +that file may NOT be overridden by local definitions in your shell
> +script.
> +
> +It is presumed that these overrides will be specified in the .board
> +and .dist files.
> +
> +nologger.dist
> +=====================
> +
> +One example of an override is provided in the system already, in the
> +form of nologger.dist.  Every target node defined in the system (in
> +the Jenkins interface) defines both a board file and a dist file.
> +These are intended to define parameters and functions for accessing
> +the board, and for executing certain functions based on the type of
> +distribution on the board (e.g. poky vs debian).
> +
> +The ``base.dist`` file is the default .dist file used by targets, and it
> +does not override any functions or variables provided by the fuego
> +system.  It merely inherits all pre-defined functions from
> +``base-distrib.fuegoclass``.
> +
> +However, the ``nologger.dist`` file is intended for use when there is no
> +command 'logread' provided on the target.  It uses 'cat' instead to
> +retrieve the log information during the test.  It inherits the
> +pre-defined functions from ``base-distrib.fuegoclass``, but then overrides
> +the function ov_rootfs_logread.
> +
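> +As a rough sketch, such an override might look like the following
> +(this uses the 'cmd' helper to run a command on the board; the log
> +file location is just an example, and the actual nologger.dist in
> +fuego-core may differ in its details): ::
> +
> +  # for boards that have no 'logread' command
> +  inherit "base-distrib"
> +
> +  override-func ov_rootfs_logread() {
> +      # read the system log directly, instead of using 'logread'
> +      cmd "cat /var/log/messages"
> +  }
> +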
> +Here is a list of overridable functions:
> +
> +From ``base-board.fuegoclass``:
> +
> + * ov_transport_get
> + * ov_transport_put
> + * ov_transport_cmd
> +
> +From ``base-distrib.fuegoclass``:
> +
> + * ov_get_firmware
> + * ov_rootfs_reboot
> + * ov_rootfs_state
> + * ov_logger
> + * ov_rootfs_sync
> + * ov_rootfs_drop_caches
> + * ov_rootfs_oom
> + * ov_rootfs_kill
> + * ov_rootfs_logread
> +
> +From ``base-funcs.fuegoclass``:
> +
> + * default_target_route_setup
> +
> +The following variables can be overridden:
> +
> +From ``base-params.fuegoclass``:
> +
> + * DEVICE
> + * PATH
> + * SSH
> + * SCP
> +
> +=======================================
> +How to use the override/class system
> +=======================================
> +
> +Board and distribution files are referenced in the Jenkins definition
> +for a test node (target).  These files are interpreted by Fuego as
> +overlay files, which can use values and functions from other files
> +(fuegoclass files), and override them if necessary for a particular
> +board.
> +
> +Inheriting and including other variables
> +==============================================
> +
> +An overlay file (board or distribution file) incorporates variables and
> +functions from other base class files in the system using the
> +'inherit' and 'include' directives.
> +
> +The inherit directive is used to read items from a fuegoclass file
> +that can be overridden.
> +
> +Items that are read from a fuegoclass file using the 'include'
> +directive cannot be overridden in the overlay file.
> +
> +For example, a board file usually uses the following directives:
> +
> + * inherit "base-board"
> + * include "base-params"
> +
> +This means that the functions and variables declared in the
> +``base-board.fuegoclass`` file can be overridden in the board file.
> +However, the functions and variables declared in the
> +``base-params.fuegoclass`` file can not be overridden in the board file.
> +
> +Syntax for overriding variables and functions
> +===================================================
> +
> +To override a variable that is defined in another file, you re-declare
> +the variable in the board or distrib file using the normal syntax
> +(NAME="value"), but put an "override" prefix on the line, like so:
> +
> +::
> +
> + override NAME="value"
> +
> +
> +To override a function, use the syntax as follows: ::
> +
> +
> +  override-func func_name() {
> +      function commands...
> +  }
> +
> +
> +The syntax must be precise, including the number of spaces in the
> +first line and the brace placement (on the same line as the function
> +name for the opening brace, and at the start of the line for the
> +closing brace).
> +
> +
> +==========================
> +System Developer Notes
> +==========================
> +
> +Outline of ovgen operation
> +===========================
> +
> +Here is an outline of ovgen operation:
> +
> + * run
> +
> +   * Parse command line arguments
> +   * Parse test specs, if specdir is specified on command line
> +   * Parse test plans, if testplan is specified on command line
> +   * Parse all the base fuegoclass files (from classdir directory)
> +   * Parse classes out of the override file
> +
> +     * This processes inherited values and overrides during the parse
> +
> +   * Generate the prolog (test variable script) from the data read
> +
> +.. note::
> +   testplans and testspecs are simple maps internally (in ovgen.py).
> +   However, parseBaseDir() and parseOverrideFile() return class objects
> +   that are put in a list.
> +
> +For additional developer notes on the overlay system, see
> +:ref:`ovgen feature notes`.

I fixed a few grammar errors (not related to your conversion) in a fixup commit.

> diff --git a/docs/rst_src/Target_Packages.rst b/docs/rst_src/Target_Packages.rst
> new file mode 100644
> index 0000000..729aa7a
> --- /dev/null
> +++ b/docs/rst_src/Target_Packages.rst
> @@ -0,0 +1,71 @@
> +####################
> +Target Packages
> +####################
> +
> +A 'target package' is a binary archive file that contains the
> +materials that are needed on a board, to execute a test.  It is in
> +'tar' format, and contains the materials that would normally be
> +deployed to the board during the 'deploy' phase of test execution.
``deploy``

In general, test phases should be literal-quoted.

> +
> +=============================
> +Building a target package
> +=============================
> +
> +To build a target package, use "ftc run-test" and specify a subset of
``ftc run-test``

> +phases to run.  Specifically, specify to run the 'pretest', 'build',
> +'deploy' and 'makepkg' phases.

I reworded this slightly, and literal-quoted all the phase names.

> +
> +Example: ::
> +
> +   ftc run-test -b beaglebone -t Functional.hello_world -p "pbdm"
> +
> +The package will be created and placed in the directory ``/fuego-rw/cache/``
> +with the name: ``$TOOLCHAIN-$TESTNAME-board-package.tar.gz``
> +
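> +For example, for the ``ftc run-test`` invocation shown above, and
> +assuming (purely for illustration) that the beaglebone board uses a
> +toolchain named 'arm', the resulting file would be: ::
> +
> +   /fuego-rw/cache/arm-Functional.hello_world-board-package.tar.gz
> +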
> +==========================================
> +Making a full cache of target packages
> +==========================================
> +
> +To make all of the target packages for a particular board, use the
> +script ``make_cache.sh``.
> +
> +This script is located at ``fuego-core/engine/scripts/make_cache.sh``.  To
> +use it, provide a board name as a command line argument.
> +
> +It will call 'ftc' to create all of the target package files that it
``ftc``
> +can (i.e., those that build successfully).
> +
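> +For example, to build the full set of target packages for the
> +'beaglebone' board used in the example above: ::
> +
> +   /fuego-core/engine/scripts/make_cache.sh beaglebone
> +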
> +===================
> +Developer notes
> +===================
> +
> +To support this feature, a new test execution phase was added to
> +Fuego.  The new phase is called 'makepkg', and the letter 'm' is used
``makepkg``

> +in the phase string used with the '-p' option to 'ftc run-test'. By
> +default, the "makepkg" phase is not executed (that is, during a
``makepkg``

> +"normal" run of a test).  This phase must be specifically requested in
> +order for Fuego to execute it during a test run.
> +
> +If the 'makepkg' phase is specified, then deploy is altered to put the
``makepkg``
> +materials into the directory ``/fuego-rw/stage/fuego.<testname>``.
> +Then, after deployment the internal function 'makepkg' is called to
``makepkg``
(functions are also literal-quoted)

> +create the target package file.  The file is called
> +``/fuego-rw/cache/$TOOLCHAIN-$TESTNAME-board-package.tar.gz``.
> +
> +Outstanding work
> +=======================
> +
> +This system captures the materials that would be in
> +``$BOARD_TESTDIR/fuego.$TESTNAME`` after the deploy phase.  If a
> +test's 'test_deploy' function manipulates items on the target board
``test_deploy``

> +that are outside this directory, those changes will not be captured in
> +the target package.
> +
> +For that, we will need to add 2 things:
> +
> + - ability to specify multiple target locations for files
> + - pre-install and post-install scripts, just like Debian and RedHat packages
> +
> +Note that by default, the packages are relocatable, since they omit
> +absolute paths for the files contained in them.  They should all be
> +relative to the ``$BOARD_TESTDIR/fuego.$TESTNAME`` directory.
> diff --git a/docs/rst_src/Test_Specs_and_Plans.rst b/docs/rst_src/Test_Specs_and_Plans.rst
> new file mode 100644
> index 0000000..2a750e1
> --- /dev/null
> +++ b/docs/rst_src/Test_Specs_and_Plans.rst
> @@ -0,0 +1,317 @@
> +########################
> +Test Specs and Plans
> +########################

This feature is in the process of being refactored (and is deprecated in
favor of batch tests).  I put a note at the top of the page reflecting
that status.

> +
> +================
> +Introduction
> +================
> +
> +Fuego provides a mechanism to control test execution using something
> +called "test specs" and "test plans".  Together, these allow for
> +customizing the invocation and execution of tests in the system.  A
> +test spec lists variables, with their possible values for different
> +test scenarios, and the test plan lists a set of tests, and the set of
> +variable values (the spec) to use for each one.
> +
> +There are numerous cases where you might want to run the same test
> +program, but in a different configuration (i.e. with different
> +settings), in order to measure or test some different aspect of the
> +system.  One of the simplest distinctions in test settings you might
> +choose is whether to run a quick test or a thorough (long) test.
> +Selecting between quick and long is a high-level concept, and
> +corresponds to the concept of a test plan.  The test plan selects
> +different arguments for the tests where this makes sense.
> +
> +For example, the arguments to run a long filesystem stress test are
> +different from the arguments to run a long network benchmark test.
> +For each of these individual tests, the arguments will be different
> +for different plans.
> +
> +Another broad category of test difference is what kind of hardware
> +device or media you are running a filesystem test on.  For example,
> +you might want to run a filesystem test on a USB-based device, but the
> +results will likely not be comparable with the results for an
> +MMC-based device. This is due to differences in how the devices
> +operate at a hardware layer and how they are accessed by the system.
> +Therefore, depending on what you are trying to measure, you may wish
> +to measure only one or the other type of hardware.
> +
> +The different settings for these different plans are stored in the
> +test spec file.  Each test in the system has a test spec file, which
> +lists different specifications (or "specs") that can be incorporated
> +into a plan.  The specs list a set of variables for each spec.  When a
> +testplan references a particular spec, the variable values for that
> +spec are set by the Fuego overlay generator during the test execution.
> +
> +In general, test plan files are global and have the names of
> +categories of tests.
> +
> +.. note::
> +    Note that a test plan may not apply to every test. In fact the
> +    only one that does is the default test plan.  It is important
> +    for the user to recognize which test plans may be suitably used
> +    with which tests.
> +
> +::
> +
> +  FIXTHIS - the Fuego system should handle this, by examining the
> +  test plans and specs, and only presenting to the user the plans that
> +  apply to a particular test. (what?)
> +
> +===============
> +Test plans
> +===============
> +
> +The Fuego  "test plan" feature is provided as an aid to organizing
> +testing activities.
> +
> +There are only a few "real" testplans provided in Fuego (as of early
> +2019).  There is a "default" testplan, which includes a smattering of
> +different tests, and some test plans that allow for selecting between
> +different kinds of hardware devices that provide file systems.  Fuego
> +includes a number of different file system tests, and these plans
> +allow customizing each test to run with filesystems on either USB,
> +SATA, or MMC devices.
> +
> +These test plans allow this selection:
> +
> + * testplan_usbstor
> + * testplan_sata
> + * testplan_mmc
> +
> +These plans select test specs named: 'usb', 'sata', and 'mmc'
> +respectively.
> +
> +Fuego also includes some test-specific test plans (for the
> +Functional.bc and Functional.hello_world tests), but these are there
> +more as examples to show how the test plan and spec system works, than
> +for any real utility.
> +
> +A test plan is specified by a file in JSON format that indicates the
> +test plan name and, for each test to which it applies, the specs which
> +should be used for that test when run with this plan.  The test plan
> +file should have a descriptive name starting with 'testplan_' and
> +ending in the suffix '.json', and the file must be placed in the
> +``engine/overlays/testplans`` directory.
> +
> +Example
> +=============
> +
> +The test program from the hello_world test allows for selecting
> +whether the test succeeds, always fails, or fails randomly.  It does
> +this using a command line argument.
> +
> +The Fuego system includes test plans that select these different
> +behaviors.  These test plan files are named:
> +
> + * ``testplan_default.json``
> + * ``testplan_hello_world_fail.json``
> + * ``testplan_hello_world_random.json``
> +
> +Here is ``testplan_hello_world_random.json``
> +
> +::
> +
> +  {
> +       "testPlanName": "testplan_hello_world_random",
> +       "tests": [
> +           {
> +               "testName": "Functional.hello_world",
> +               "spec": "hello-random"
> +           }    ]
> +   }
> +
> +
> +Testplan Reference
> +========================
> +
> +A testplan can include several different fields, at the "top" level of
> +the file, and associated with an individual test.  These are described
> +on the page:  :ref:`Testplan_Reference:Testplan Reference`
> +
> +==============
> +Test Specs
> +==============
> +
> +Each test in the system should have a 'test spec' file, which lists
> +different specifications, and the variables for each one that can be
> +customized for that test.  Every test is required, at a minimum, to
> +define the "default" test spec, which is the default set of test
> +variables used when running the test.  Note that a test spec can
> +define no test variables, if none are required.
> +
> +The set of variables, and what they contain is highly test-specific.
> +
> +The test spec file is in JSON format, and has the name "spec.json".
> +
> +The test_spec file is placed in the test's home directory, which is
> +based on the test's name:
> +
> + *  ``/fuego-core/engine/tests/$TESTNAME/spec.json``

removed 'engine', as it's now deprecated.

> +
> +Example
> +=============
> +
> +The Functional.hello_world test has a test spec that provides options
> +for executing the test normally (the 'default' spec), for succeeding
> +or failing randomly (the 'hello-random' spec) or for always failing
> +(the 'hello-fail' spec).
> +
> +This file is located in
> +``engine/tests/Functional.hello_world/spec.json``
> +
> +Here is the complete spec for this test: ::
> +
> +   {
> +       "testName": "Functional.hello_world",
> +       "specs": {
> +           "hello-fail": {
> +               "ARG":"-f"
> +           },
> +           "hello-random": {
> +               "ARG":"-r"
> +           },
> +           "default": {
> +               "ARG":""
> +           }
> +       }
> +   }
> +
> +
> +During test execution, the variable FUNCTIONAL_HELLO_WORLD_ARG will be
> +set to one of the three values shown, depending on which testplan is
> +used to run the test.
> +
> +======================================
> +Variable use during test execution
> +======================================
> +
> +Variables from the test spec are expanded by the overlay generator
> +during test execution.  The variables declared in the test spec files
> +may reference other variables from the environment, such as from the
> +board file, relating to the toolchain, or from the fuego system
> +itself.
> +
> +The name of the argument is appended to the end of the test name to
> +form the environment variable for the test.  This can then be used in
> +the base script as arguments to the test program (or for any other
> +use).
> +
> +Example
> +=============
> +
> +In this hello-world example, here is what the actual program
> +invocation looks like.  This is an excerpt from the base script for
> +this test
> +(``/home/jenkins/tests/Functional.hello_world/hello_world.sh``).
> +
> +::
> +
> +   function test_run {
> +       report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello $FUNCTIONAL_HELLO_WORLD_ARG"
> +   }
> +
> +
> +Note that in the default spec for hello_world, the variable ('ARG' in
> +the test spec) is left empty.  This means that during execution of
> +this test with testplan_default, the program 'hello' is called with no
> +arguments, which will cause it to perform its default operation.  The
> +default operation for 'hello' is a dummy test that always succeeds.
> +
> +===============================
> +Specifying failure cases
> +===============================
> +
> +The test spec file can also specify one or more failure cases.  These
> +represent string patterns that are scanned for in the test log, to
> +detect error conditions indicating that the test failed.  The syntax
> +for this is described next.
> +
> +Example of fail case
> +==========================
> +
> +The following example of a test spec (from the Functional.bc test),
> +shows how to declare an array of failure tests for this test.
> +
> +There should be a variable named "fail_case" declared in the test
> +spec JSON file, and it should consist of an array of objects, each one
> +specifying a 'fail_regexp' and a 'fail_message', with an optional
> +variable (use_syslog) indicating to search for the item in the system
> +log instead of the test log.
> +
> +The regular expression is used with grep to scan lines in the test
> +log.  If a match is found, then the associated message is printed, and
> +the test is aborted.
> +
> +::
> +
> +   {
> +       "testName": "Functional.bc",
> +       "fail_case": [
> +           {
> +               "fail_regexp": "some test regexp",
> +               "fail_message": "some test message"
> +           },
> +           {
> +               "fail_regexp": "Bug",
> +               "fail_message": "Bug or Oops detected in system log",
> +               "use_syslog": 1
> +           }
> +           ],
> +       "specs":
> +       [
> +           {
> +               "name":"default",
> +               "EXPR":"3+3",
> +               "RESULT":"6"
> +           }
> +       ]
> +   }
> +
> +
> +These variables are turned into environment variables by the overlay
> +generator and are used with the function
> +:ref:`fail_check_cases <function_fail_check_cases>`  which is called during
> +the 'post test' phase of the test.
> +
> +Note that the above items would be turned into the following
> +environment variables internally in the fuego system:
> +
> + * FUNCTIONAL_BC_FAIL_CASE_COUNT=2
> + * FUNCTIONAL_BC_FAIL_PATTERN_0="some test regexp"
> + * FUNCTIONAL_BC_FAIL_MESSAGE_0="some test message"
> + * FUNCTIONAL_BC_FAIL_PATTERN_1="Bug"
> + * FUNCTIONAL_BC_FAIL_MESSAGE_1="Bug or Oops detected in system log"
> + * FUNCTIONAL_BC_FAIL_1_SYSLOG=true
> +
> +=============================
> +Catalog of current plans
> +=============================
> +
> +Fuego, as of January 2017, only has a few testplans defined.
> +
> +Here is the full list:
> +
> + * testplan_default
> + * testplan_mmc
> + * testplan_sata
> + * testplan_usbstor
> + * testplan_bc_add
> + * testplan_bc_mult
> + * testplan_hello_world_fail
> + * testplan_hello_world_random
> +
> +The storage-related testplans (mmc, sata, and usbstor) allow the test
> +spec to configure the following variables (a hypothetical example spec
> +is sketched below):
> +
> + * MOUNT_BLOCKDEV
> + * MOUNT_POINT
> + * TIMELIMIT
> + * NPROCS
> +
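> +For illustration only, an 'mmc' spec for a filesystem test might
> +define these variables something like this (the device node, mount
> +point and values below are hypothetical, and vary per test and
> +board): ::
> +
> +   "mmc": {
> +       "MOUNT_BLOCKDEV": "/dev/mmcblk0p1",
> +       "MOUNT_POINT": "/mnt/mmc",
> +       "TIMELIMIT": "300",
> +       "NPROCS": "2"
> +   }
> +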
> +Both the 'bc' and 'hello_world' testplans are example testplans to
> +demonstrate how the testplan system works.
> +
> +The 'bc' testplans are for selecting different operations to test in
> +'bc'.  The 'hello_world' testplans are for selecting different results
> +to test in 'hello_world'.
> diff --git a/docs/rst_src/Testplan_Reference.rst b/docs/rst_src/Testplan_Reference.rst
> new file mode 100644
> index 0000000..e188999
> --- /dev/null
> +++ b/docs/rst_src/Testplan_Reference.rst
> @@ -0,0 +1,204 @@
> +####################
> +Testplan Reference
> +####################
> +
> +In Fuego, a testplan is used to specify a set of tests to execute, and
> +the settings to use for each one.
> +
> +========================
> +Filename and location
> +========================
> +
> +A testplan is in JSON format, and can be located in two places:
> +
> + * As a file located in the directory ``fuego-core/engine/overlays/testplans``.
> + * As a here document embedded in a batch test script
> +   (``fuego_test.sh``)
> +
> +A testplan file name should start with the prefix "testplan_", and end
> +with the  extension ".json".
> +
> +A testplan here document should be preceded by a line starting with
> +BATCH_TESTPLAN= and followed by a line starting with "END_TESTPLAN".
> +
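> +As a rough sketch, an embedded testplan in a batch test's
> +``fuego_test.sh`` might look something like this (the exact shell
> +wrapper may differ from what existing batch tests use; the plan name
> +here is hypothetical): ::
> +
> +  BATCH_TESTPLAN=$(cat <<END_TESTPLAN
> +  {
> +      "testPlanName": "testplan_mybatch",
> +      "tests": [
> +          { "testName": "Functional.hello_world" }
> +      ]
> +  }
> +  END_TESTPLAN
> +  )
> +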
> +========================
> +Top level attributes
> +========================
> +
> +The top level objects that may be defined in a testplan are:
> +
> + * testPlanName
> + * tests
> + * default_timeout
> + * default_spec
> + * default_reboot
> + * default_rebuild
> + * default_precleanup
> + * default_postcleanup
> +
> +
> +Each of these attributes, except for 'tests', has a value that is a string.
> +Here are their meanings and legal values:
> +
> +The testPlanName is the name of this testplan.  It must match the
> +filename that holds this testplan (without the "testplan_" prefix or
> +".json" extension.  This object is required.
> +
> +'tests' is a list of tests that are part of this testplan.  See below
> +for a detailed description of the format of an element in the 'tests'
> +list.  This object is required.
> +
> +Default test settings
> +==========================
> +
> +The testplan may also include a set of default values for test settings.
> +The test settings for which defaults may be specified are:
> +
> + * timeout
> + * spec
> + * reboot
> + * rebuild
> + * precleanup
> + * postcleanup
> +
> +These values are used if the related setting is not specified in the
> +individual test definition.
> +
> +For example, the testplan might define a default_timeout of "15m"
> +(meaning 15 minutes).  The plan could indicate timeouts different from
> +this (say 5 minutes or 30 minutes) for individual tests, but if a test
> +in the testplan doesn't indicate its own timeout, it would default to
> +the one specified as the default at the top level of the testplan.
> +
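> +For example, a testplan using a plan-wide default timeout with a
> +per-test override might look something like this (the plan name and
> +test list are illustrative): ::
> +
> +  {
> +      "testPlanName": "testplan_example",
> +      "default_timeout": "15m",
> +      "default_spec": "default",
> +      "tests": [
> +          { "testName": "Functional.hello_world" },
> +          { "testName": "Benchmark.bonnie", "timeout": "30m" }
> +      ]
> +  }
> +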
> +The ability to specify per-plan and per-test settings makes it easier
> +to manage these settings to fit the needs of your Fuego board or lab.
> +
> +Note that if a value for a setting is provided by neither the
> +individual test nor the testplan, then a Fuego global default value
> +for that setting will be used.
> +
> +Note that the default_spec specifies the name of the test spec to use
> +for the test (if one is not specified for the individual test
> +definition).  The name should match a spec that is defined for every
> +test listed in the plan.  Usually this will be something like
> +"default", but it could be something that is common for a set of
> +tests, like 'mmc' or 'usb' for filesystem tests.
> +
> +See the individual test definitions for descriptions of these
> +different test settings objects.
> +
> +===============================
> +Individual test definitions
> +===============================
> +
> +The 'tests' object is a list of objects, each of which indicates a
> +test that is part of the plan.  The objects included in each list
> +element are:
> +
> + * testName
> + * spec
> + * timeout
> + * reboot
> + * rebuild
> + * precleanup
> + * postcleanup
> +
> +All object values are strings.
> +
> +TestName
> +==============
> +
> +The 'testName' object has the name of a Fuego test included in this
> +plan.  It should be the fully-qualified name of the test (that is, it
> +should include the "Benchmark." or "Functional." prefix).  This
> +attribute of the test element is required.
> +
> +Spec
> +==========
> +
> +The 'spec' object has the name of the spec to use for this test. It
> +should match the name of a valid test spec for this test.  If 'spec'
> +is not specified, then the value of "default_spec" for this testplan
> +will be used.
> +
> +Timeout
> +=========
> +
> +The timeout object has a string indicating the timeout to use for a
> +test.  The string is a positive integer followed by a single-character
> +units-suffix.  The units suffixes available are:
> +
> + * 's' for seconds
> + * 'm' for minutes
> + * 'h' for hours
> + * 'd' for days
> +
> +Most commonly, a number of minutes is specified, like so:
> +
> + * "default_timeout" : "15m",
> +
> +If no 'timeout' is specified, then the value of 'default_timeout' for
> +this testplan is used.
> +
> +Reboot
> +============
> +
> +The 'reboot' object has a string indicating whether to reboot the
> +board prior to the test.  It should have a string value of 'true' or
> +'false'.
> +
> +Rebuild
> +===============
> +
> +The 'rebuild' object has a string indicating whether to rebuild the
> +test software, prior to executing the test.  The object value must be
> +a string of either 'true' or 'false'.
> +
> +If the value is 'false', then Fuego will do the following, when
> +executing the test:
> +
> + * If the test program is not built, then build it
> + * If the test program is already built, then use the existing test program
> +
> +If the value is 'true', then Fuego will do the following:
> +
> + * Remove any existing program build directory and assets
> + * Build the program (including fetching the source, unpacking it,
> +   and executing the instructions in the test's "test_build" function)
> +
> +Precleanup
> +===============
> +
> +The 'precleanup' flag indicates whether to remove all previous test
> +materials on the target board, prior to deploying and executing the test.
> +The object value must be a string of either 'true' or 'false'.
> +
> +Postcleanup
> +=================
> +
> +The 'postcleanup' flag indicates whether to remove all test materials
> +on the target board, after the test is executed.
> +The flag value must be a string of either 'true' or 'false'.
> +
> +============================
> +Test setting precedence
> +============================
> +
> +Note that the test settings are used by the plan at job creation time,
> +to set the command line arguments that will be passed to 'ftc run-test'
> +by the Jenkins job, when it is eventually run.
> +
> +A user can always edit a Jenkins job (for a Fuego test), to
> +override the test settings for that job.
> +
> +The precedence of the settings encoded into the job definition at job
> +creation time is:
> +
> + - Testplan individual test setting (highest priority)
> + - Testplan default setting
> + - Fuego default setting
> +
> +The precedence of settings at job execution time is:
> +
> + - 'ftc run-test' command line option setting (highest priority)
> + - Fuego default setting
> --
> 2.7.4
> 
> 
> --

Thanks.
 -- Tim


