[Fuego] [PATCH] Docs: Convert tbwiki pages into .rst files

Pooja pooja.sm at pathpartnertech.com
Fri Nov 20 05:31:54 UTC 2020


From: Pooja More <pooja.sm at pathpartnertech.com>

Following are the pages converted:
run.json.rst
spec.json.rst
test.yaml.rst

Signed-off-by: Pooja More <pooja.sm at pathpartnertech.com>
---
 docs/rst_src/run.json.rst  | 323 +++++++++++++++++++++++++++++++++++++++++++++
 docs/rst_src/spec.json.rst | 183 +++++++++++++++++++++++++
 docs/rst_src/test.yaml.rst | 177 +++++++++++++++++++++++++
 3 files changed, 683 insertions(+)
 create mode 100644 docs/rst_src/run.json.rst
 create mode 100644 docs/rst_src/spec.json.rst
 create mode 100644 docs/rst_src/test.yaml.rst

diff --git a/docs/rst_src/run.json.rst b/docs/rst_src/run.json.rst
new file mode 100644
index 0000000..d128fb3
--- /dev/null
+++ b/docs/rst_src/run.json.rst
@@ -0,0 +1,323 @@
+###########
+run.json
+###########
+
+===========
+Summary
+===========
+
+The ``run.json`` file has data about a particular test run, including
+information about the test and its results.
+
+The format of portions of this file was inspired by the KernelCI API.
+See `<https://api.kernelci.org/schema-test-case.html>`_
+
+The results are included in an array of test_set objects, which can
+contain arrays of test_case objects, which themselves may contain
+measurement objects.
+
+
+===================
+Field details
+===================
+
+ * **duration_ms** - the amount of time, in milliseconds, that the test
+   took to execute
+
+   * If the test included a build, the build time is included in this number
+
+ * **metadata** - various fields that are specific to Fuego
+
+   * **attachments** - a list of the files that are available for this
+     test - usually logs and such
+   * **batch_id** - a string indicating the batch of tests this test was run
+     in (if applicable)
+   * **board** - the board the test was executed on
+   * **build_number** - the Jenkins build number
+   * **compiled_on** - indicates the location where the test was compiled
+   * **fuego_core_version** - version of the fuego core system
+   * **fuego_version** - version of the fuego container system
+   * **host_name** - the host.  If not configured, it may be 'local_host'
+   * **job_name** - the Jenkins job name for this test run
+   * **keep_log** - indicates whether the log is kept (???)
+   * **kernel_version** - the version of the kernel running on the board
+   * **reboot** - indicates whether a reboot was requested for this test run
+   * **rebuild** - indicates whether it was requested to rebuild the source
+     for this run
+   * **start_time** - time when this test run was started (in milliseconds
+     since Jan 1, 1970)
+   * **target_postcleanup** - indicates whether cleanup of test materials on the
+     board was requested for after test execution
+   * **target_precleanup** - indicates whether cleanup of test materials on the
+     board was requested for before test execution
+   * **test_plan** - test plan being executed for this test run.  May be 'None'
+     if the test was not executed in the context of a larger plan
+   * **test_spec** - test spec used for this run
+   * **testsuite_version** - version of the source program used for this run
+
+     * FIXTHIS - testsuite_version is not calculated properly yet
+
+   * **timestamp** - time when this test run was started (in ISO 8601 format)
+   * **toolchain** - the toolchain (or PLATFORM) used to build the test program
+   * **workspace** - a directory on the host where test materials were extracted
+     and built for this test.
+
+     * This is the parent directory used, not the specific directory used for
+       this test.
+
+ * **name** - the name of the test
+ * **status** - the test result as a string.  This can be one of:
+
+   * PASS
+   * FAIL
+   * ERROR
+   * SKIP
+
+ * **test_sets** - list of test_set objects, containing test results
+ * **test_cases** - list of test_case objects within a test_set,
+   containing test results
+
+   * Each test_case object has:
+
+     * **name** - the test case name
+     * **status** - the result for that test case
+
+ * **measurements** - list of measurement objects within a test_case,
+   containing test results
+
+   * For each measurement, the following attributes may be present:
+
+     * **name** - the measurement name
+     * **status** - the pass/fail result for that measurement
+     * **measure** - the numeric result for that measurement
+
+============
+Examples
+============
+
+Here are some sample ``run.json`` files, from Fuego 1.2.
+
+
+Functional test results
+=============================
+
+This was generated using
+
+::
+
+ ftc run-test -b docker -t Functional.hello_world
+
+This example only has a single test_case.
+
+::
+
+  {
+      "duration_ms": 1245,
+      "metadata": {
+          "attachments": [
+              {
+                  "name": "devlog",
+                  "path": "devlog.txt"
+              },
+              {
+                  "name": "devlog",
+                  "path": "devlog.txt"
+              },
+              {
+                  "name": "syslog.before",
+                  "path": "syslog.before.txt"
+              },
+              {
+                  "name": "syslog.after",
+                  "path": "syslog.after.txt"
+              },
+              {
+                  "name": "testlog",
+                  "path": "testlog.txt"
+              },
+              {
+                  "name": "consolelog",
+                  "path": "consolelog.txt"
+              },
+              {
+                  "name": "test_spec",
+                  "path": "spec.json"
+              }
+          ],
+          "board": "docker",
+          "build_number": "3",
+          "compiled_on": "docker",
+          "fuego_core_version": "v1.1-805adb0",
+          "fuego_version": "v1.1-5ad677b",
+          "host_name": "fake_host",
+          "job_name": "docker.default.Functional.hello_world",
+          "keep_log": true,
+          "kernel_version": "3.19.0-47-generic",
+          "reboot": "false",
+          "rebuild": "false",
+          "start_time": "1509662455755",
+          "target_postcleanup": true,
+          "target_precleanup": "true",
+          "test_plan": "None",
+          "test_spec": "default",
+          "testsuite_version": "v1.1-805adb0",
+          "timestamp": "2017-11-02T22:40:55+0000",
+          "toolchain": "x86_64",
+          "workspace": "/fuego-rw/buildzone"
+      },
+      "name": "Functional.hello_world",
+      "schema_version": "1.0",
+      "status": "PASS",
+      "test_sets": [
+          {
+              "name": "default",
+              "status": "PASS",
+              "test_cases": [
+                  {
+                      "name": "hello_world",
+                      "status": "PASS"
+                  }
+              ]
+          }
+      ]
+  }
+
+
+
+Benchmark results
+=======================
+
+Here is the run.json file for a run of the test ``Benchmark.netperf``
+on the board 'ren1' (which is a Renesas board in my lab).
+
+::
+
+  {
+      "duration_ms": 33915,
+      "metadata": {
+          "attachments": [
+              {
+                  "name": "devlog",
+                  "path": "devlog.txt"
+              },
+              {
+                  "name": "devlog",
+                  "path": "devlog.txt"
+              },
+              {
+                  "name": "syslog.before",
+                  "path": "syslog.before.txt"
+              },
+              {
+                  "name": "syslog.after",
+                  "path": "syslog.after.txt"
+              },
+              {
+                  "name": "testlog",
+                  "path": "testlog.txt"
+              },
+              {
+                  "name": "consolelog",
+                  "path": "consolelog.txt"
+              },
+              {
+                  "name": "test_spec",
+                  "path": "spec.json"
+              }
+          ],
+          "board": "ren1",
+          "build_number": "3",
+          "compiled_on": "docker",
+          "fuego_core_version": "v1.2.0",
+          "fuego_version": "v1.2.0",
+          "host_name": "local_host",
+          "job_name": "ren1.default.Benchmark.netperf",
+          "keep_log": true,
+          "kernel_version": "4.9.0-yocto-standard",
+          "reboot": "false",
+          "rebuild": "false",
+          "start_time": "1509669904085",
+          "target_postcleanup": true,
+          "target_precleanup": "true",
+          "test_plan": "None",
+          "test_spec": "default",
+          "testsuite_version": "v1.1-805adb0",
+          "timestamp": "2017-11-03T00:45:04+0000",
+          "toolchain": "poky-aarch64",
+          "workspace": "/fuego-rw/buildzone"
+      },
+      "name": "Benchmark.netperf",
+      "schema_version": "1.0",
+      "status": "PASS",
+      "test_sets": [
+          {
+              "name": "default",
+              "status": "PASS",
+              "test_cases": [
+                  {
+                      "measurements": [
+                          {
+                              "measure": 928.51,
+                              "name": "net",
+                              "status": "PASS"
+                          },
+                          {
+                              "measure": 59.43,
+                              "name": "cpu",
+                              "status": "PASS"
+                          }
+                      ],
+                      "name": "MIGRATED_TCP_STREAM",
+                      "status": "PASS"
+                  },
+                  {
+                      "measurements": [
+                          {
+                              "measure": 934.1,
+                              "name": "net",
+                              "status": "PASS"
+                          },
+                          {
+                              "measure": 56.61,
+                              "name": "cpu",
+                              "status": "PASS"
+                          }
+                      ],
+                      "name": "MIGRATED_TCP_MAERTS",
+                      "status": "PASS"
+                  }
+              ]
+          }
+      ]
+  }
+
+
+==========
+Ideas
+==========
+
+Some changes to the fields might be useful:
+
+ * We don't have anything that records the 'cause' from Jenkins
+
+   * This is supposed to indicate what triggered the test
+   * The Jenkins strings are somewhat indecipherable:
+
+     * Here is a Jenkins cause: ``<hudson.model.Cause_-UserIdCause/><int>1</int>``
+
+ * It might be worthwhile to add some fields from the board or target:
+
+   * Architecture
+   * Transport - not sure about this one
+   * Distrib
+   * File system
+
+ * If we add monitors or side-processes (stressors), it would be good to add
+   information about those as well
+
+======================
+Use of flat data
+======================
+
+Parsing the tree-structured data has turned out to be a real pain, and
+it might be better to do most of the work in a flat format.  The
+charting code uses a mixture of structured (nested objects) and flat
+testcase names, and there is a lot of duplicate code lying around that
+handles the conversion back and forth, which could probably be
+coalesced into a single set of library routines.
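+
+As a rough illustration, here is a minimal sketch (in Python, and not
+Fuego's actual library code) of flattening the run.json tree into flat
+testcase names; the dot-separated naming convention is an assumption
+for illustration:
+
+::
+
+  import json
+
+  def flatten_run(run):
+      # Flatten test_sets/test_cases/measurements into
+      # (flat_name, status, measure) tuples.
+      results = []
+      for ts in run.get("test_sets", []):
+          for tc in ts.get("test_cases", []):
+              tc_name = "%s.%s" % (ts["name"], tc["name"])
+              results.append((tc_name, tc["status"], None))
+              for m in tc.get("measurements", []):
+                  m_name = "%s.%s" % (tc_name, m["name"])
+                  results.append((m_name, m["status"], m.get("measure")))
+      return results
+
+  with open("run.json") as f:
+      for name, status, measure in flatten_run(json.load(f)):
+          print(name, status, measure)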
+
diff --git a/docs/rst_src/spec.json.rst b/docs/rst_src/spec.json.rst
new file mode 100644
index 0000000..6296776
--- /dev/null
+++ b/docs/rst_src/spec.json.rst
@@ -0,0 +1,183 @@
+############
+spec.json
+############
+
+================
+Introduction
+================
+
+The file ``spec.json`` is defined for each test.  This file allows for
+the same test to be used in multiple different ways.  This is often
+referred to as a parameterized test.
+
+The ``spec.json`` file indicates a list of "specs" for the test, and
+for each spec the values for test variables (parameters) that the test
+will use to configure its behavior.
+
+The variables declared in a spec are made available as shell variables
+to the test at test runtime.  To avoid naming collisions, the test
+variables are prefixed with the name of the test.  They are also
+converted to all upper-case.
+
+So, for example, for a test called Functional.mytest, if the spec
+declared a variable called 'loops', with a value of "10", the
+following test variable would be defined: ``FUNCTIONAL_MYTEST_LOOPS=10``
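+
+A minimal sketch of this name mangling (a hypothetical helper, not
+Fuego's own code):
+
+::
+
+  def spec_var_name(test_name, var_name):
+      # e.g. ("Functional.mytest", "loops") -> "FUNCTIONAL_MYTEST_LOOPS"
+      prefix = test_name.replace(".", "_").upper()
+      return "%s_%s" % (prefix, var_name.upper())
+
+  assert spec_var_name("Functional.mytest", "loops") == "FUNCTIONAL_MYTEST_LOOPS"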
+
+The intent is to allow a test author or some other party to declare a
+set of parameters to run the test in a different configuration.
+
+Fuego is often used to wrap existing test programs and benchmarks,
+which have command line options for controlling various test execution
+parameters.  Setting these command line options is one of the primary
+purposes of specs, and the spec.json file.
+
+==========
+Schema
+==========
+
+``spec.json`` holds a single object, with a 'testName' attribute, and an
+attribute called 'specs' that is a collection of spec definitions.
+Each spec definition has a name and a collection of named attributes.
+
+ * **testName** - this indicates the test that these specs apply to
+ * **specs** - this indicates the collection of specs
+ * **fail_case** - this allows a test to provide a list of failure
+   expressions that will be checked for in the test or system logs
+   (see the sketch below)
+
+   * **fail_regexp** - a regular expression that indicates a failure.
+     This is grep'ed for in the testlog (unless use_syslog is set)
+   * **fail_message** - a message to output when the regular expression is
+     found
+   * **use_syslog** - a flag indicating to scan for the fail_regexp in the
+     system log rather than the test log
+
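+As a rough illustration of how a fail_case entry might be applied (a
+sketch of the documented behavior, not Fuego's actual implementation):
+
+::
+
+  import re
+
+  def check_fail_cases(fail_cases, testlog, syslog):
+      # Scan the appropriate log for each fail_regexp and collect
+      # the fail_message of each expression that matches.
+      messages = []
+      for fc in fail_cases:
+          log = syslog if fc.get("use_syslog") else testlog
+          if re.search(fc["fail_regexp"], log):
+              messages.append(fc["fail_message"])
+      return messages
+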
+Within each spec, there should be a collection of name/value pairs.
+Note that the values in a name/value pair are expanded in test context,
+so that the value may reference other test variables (such as from
+the board file, or the stored variables file for a board).
+
+Special variables
+=======================
+
+There are some special variables that can be defined that are
+recognized by the Fuego core system.
+
+One of these is:
+
+ * **PER_JOB_BUILD** - if this variable is defined, then Fuego will create
+   a separate build area for each job that this test is included in, even if
+   a board or another job uses the same toolchain.  This is used when the test
+   variables are used in the ``build`` phase, and can affect the binary that is
+   compiled during this phase.
+
+============
+Examples
+============
+
+Here is an example, from the test ``Functional.bc``:
+
+::
+
+  {
+      "testName": "Functional.bc",
+      "fail_case": [
+          {
+              "fail_regexp": "syntax error",
+              "fail_message": "Text expression has a syntax error"
+          },
+          {
+              "fail_regexp": "Bug",
+              "fail_message": "Bug or Oops detected in system log",
+              "use_syslog": "1"
+          }
+      ],
+      "specs": {
+         "default": {
+              "EXPR":"3+3",
+              "RESULT":"6"
+          },
+          "bc-mult": {
+              "EXPR":"2*2",
+              "RESULT": "4"
+          },
+          "bc-add": {
+              "EXPR":"3+3",
+              "RESULT":"6"
+          },
+           "bc-by2": {
+              "PER_JOB_BUILD": "true",
+              "tarball": "by2.tar.gz",
+              "EXPR":"3+3",
+              "RESULT":"12"
+          },
+          "bc-fail": {
+              "EXPR":"3 3",
+              "RESULT":"6"
+          },
+      }
+  }
+
+In this example, the ``EXPR`` variable is used as input to the program
+``bc``, and ``RESULT`` gives the expected output from bc.
+
+This particular ``spec.json`` file is deliberately complex, for
+instructional purposes, and this particular test is somewhat overly
+parameterized.
+
+
+Here is an example, from the test ``Functional.synctest``:
+
+::
+
+  {
+      "testName": "Functional.synctest",
+      "specs": {
+          "sata": {
+              "MOUNT_BLOCKDEV":"$SATA_DEV",
+              "MOUNT_POINT":"$SATA_MP",
+              "LEN":"10",
+              "LOOP":"10"
+          },
+          "mmc": {
+              "MOUNT_BLOCKDEV":"$MMC_DEV",
+              "MOUNT_POINT":"$MMC_MP",
+              "LEN":"10",
+              "LOOP":"10"
+          },
+          "usb": {
+              "MOUNT_BLOCKDEV":"$USB_DEV",
+              "MOUNT_POINT":"$USB_MP",
+              "LEN":"10",
+              "LOOP":"10"
+          },
+          "default": {
+              "MOUNT_BLOCKDEV":"ROOT",
+              "MOUNT_POINT":"$BOARD_TESTDIR/work",
+              "LEN":"30",
+              "LOOP":"10"
+          }
+      }
+  }
+
+
+Note the use of variable references for ``MOUNT_BLOCKDEV`` and
+``MOUNT_POINT``.  These use values ($SATA_DEV, $MMC_DEV or $USB_DEV) that
+should be defined in a board file for filesystem-related tests.
+
+When a test defines variables, they should be documented in the test's
+``test.yaml`` file.
+
+============
+Defaults
+============
+
+If a test has no ``spec.json``, then a default set of values is used:
+a single spec with the name "default", and no values defined.
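+
+In other words, a missing ``spec.json`` behaves as if the test had one
+like the following (the test name here is illustrative):
+
+::
+
+  {
+      "testName": "Functional.mytest",
+      "specs": {
+          "default": {}
+      }
+  }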
+
+============
+See also
+============
+
+ * See :ref:`Test Spec and Plans` for more information about
+   Fuego's test spec and testplan system.
+
diff --git a/docs/rst_src/test.yaml.rst b/docs/rst_src/test.yaml.rst
new file mode 100644
index 0000000..1db55f6
--- /dev/null
+++ b/docs/rst_src/test.yaml.rst
@@ -0,0 +1,177 @@
+############
+test.yaml
+############
+
+The ``test.yaml`` file is used to hold meta-information about a test.  This is
+used by the :ref:`Test package system` for packaging a test and providing
+information for viewing and searching for tests in a proposed "test store".
+The ``test.yaml`` file can also be used by human maintainers to preserve
+information (in a structured format) about a test that is not included in the
+other test materials.
+
+As an overview, the ``test.yaml`` file indicates where the source for the
+test comes from, its license, the name of the test maintainer, a
+description of the test, tags for categorizing the test, and a
+formal list of parameters that are used by the test (what they mean
+and how to use them).
+
+=====================
+test.yaml fields
+=====================
+
+Here are the fields supported in a ``test.yaml`` file:
+
+``fuego_package_version``
+
+  Indicates the version of the package format
+  (in case of changes to the package schema).  For now, this is always 1.
+
+``name``
+
+  Has the full Fuego name of the test.  Ex: ``Benchmark.iperf``
+
+``description``
+
+  Has an English description of the test
+
+``license``
+
+  Has an SPDX identifier for the test.  This is the main
+  license of the test project that the Fuego test uses, if the project
+  has a tarfile or git repo.  Otherwise it reflects the license of any
+  non-Fuego-specific materials in the test directory.  In such a case,
+  the test directory should include a LICENSE file.  Fuego materials
+  (``fuego_test.sh``, ``spec.json``, ``chart_config.json``, etc.) are
+  considered to be under the default Fuego license (which is BSD-3-Clause)
+  unless otherwise specifically indicated in these files.  The license
+  identifier for this field should be obtained from
+  `<https://spdx.org/licenses/>`_
+
+``author``
+
+  The author or authors of the base test
+
+``maintainer``
+
+  The maintainer of the Fuego materials for this test
+
+``version``
+
+  The version of the base test
+
+``fuego_release``
+
+  The version of Fuego materials for this test.  This is a monotonically
+  incrementing integer, starting at 1 for each new version of the base test.
+
+``type``
+
+  Either Benchmark or Functional
+
+``tags``
+
+  A list of tags used to categorize this test.  This is intended to be
+  used in an eventual online test store.
+
+``tarball_src``
+
+  A URL where the tarball was originally obtained from
+
+
+``gitrepo``
+
+  A git URL where the source may be obtained from
+
+``host_dependencies``
+
+  A list of Debian package names that must be installed in the docker
+  container in order for this test to work properly.  This field is
+  optional, and indicates packages needed that are beyond those included in
+  the standard Fuego host distribution in the Fuego docker container.
+
+``params``
+
+  A list of parameters that may be used with this test, including their
+  descriptions, whether they are optional or required, and an example
+  value for each one
+
+``data_files``
+
+  A list of the files that are included in this test.  This is used as the
+  manifest for packaging the test (``fuego_test.sh`` and ``test.yaml`` are
+  implicitly included in the packaging manifest).
+
+
+More on params
+===================
+
+The 'params' field in the test.yaml file is a list of dictionaries,
+with one item per test variable used by the test.
+
+The name of the parameter is the short name of the parameter, without
+the test name prefix (e.g. ``FUNCTIONAL_LTP``).  The parameter name is
+the key for the dictionary with that parameter's attributes.
+
+Each parameter has a dictionary with attributes describing it.  The
+dictionary has the following fields (keys):
+
+ - 'description' - text description of the parameter
+ - 'example' - an example value for the parameter
+ - 'optional' - indicates whether the test requires this parameter
+   (test variable) to be set or not.  The value of the 'optional'
+   field must be one of 'yes' or 'no'.
+
+The test variables described by the ``test.yaml`` file can be
+defined in one of multiple locations in the Fuego test system.  Most
+commonly the test variables are defined in a spec for the test, but
+they can also be defined in the board file, or as a dynamic board
+variable.
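+
+As a rough sketch (a hypothetical helper, not part of Fuego) of how the
+'params' list might be checked against the variables actually provided
+for a run:
+
+::
+
+  import yaml  # requires PyYAML
+
+  def missing_required_params(test_yaml_path, provided):
+      # Return the names of required parameters (optional: no) that
+      # are not among the provided variable names.
+      with open(test_yaml_path) as f:
+          info = yaml.safe_load(f)
+      missing = []
+      for entry in info.get("params", []):
+          for name, attrs in entry.items():
+              # unquoted yes/no load as booleans in YAML 1.1, so
+              # accept both the string and boolean forms
+              if attrs.get("optional") in ("no", False) and name not in provided:
+                  missing.append(name)
+      return missing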
+
+=========
+Example
+=========
+
+Here is an example ``test.yaml`` file, for the package ``Benchmark.iperf3``:
+
+::
+
+  fuego_package_version: 1
+  name: Benchmark.iperf3
+  description: |
+      iPerf3 is a tool for active measurements of the maximum achievable
+      bandwidth on IP networks.
+  license: BSD-3-Clause
+  author: |
+      Jon Dugan, Seth Elliott, Bruce A. Mah, Jeff Poskanzer, Kaustubh Prabhu,
+      Mark Ashley, Aaron Brown, Aeneas Jaißle, Susant Sahani, Bruce Simpson,
+      Brian Tierney.
+  maintainer: Daniel Sangorrin <daniel.sangorrin at toshiba.co.jp>
+  version: 3.1.3
+  fuego_release: 1
+  type: Benchmark
+  tags: ['network', 'performance']
+  tarball_src: https://iperf.fr/download/source/iperf-3.1.3-source.tar.gz
+  gitrepo: https://github.com/esnet/iperf.git
+  params:
+      - server_ip:
+          description: |
+              IP address of the server machine. If not provided, then SRV_IP
+              _must_ be provided in the board file; otherwise the test will fail.
+              If the server IP is assigned to the host, the test automatically
+              starts the iperf3 server daemon. Otherwise, the tester _must_ make
+              sure that "iperf3 -V -s -D" is already running on the server machine.
+          example: 192.168.1.45
+          optional: yes
+      - client_params:
+          description: extra parameters for the client
+          example: -p 5223 -u -b 10G
+          optional: yes
+  data_files:
+      - chart_config.json
+      - fuego_test.sh
+      - parser.py
+      - spec.json
+      - criteria.json
+      - iperf-3.1.3-source.tar.gz
+      - reference.json
+      - test.yaml
-- 
2.7.4

