[Fuego] [PATCH] Docs: Convert the following tbwiki pages into rst format

Pooja pooja.sm at pathpartnertech.com
Mon Oct 12 06:26:04 UTC 2020


From: Pooja More <pooja.sm at pathpartnertech.com>

Following is a list of pages categorized under "feature reference":
Board_Snapshot
Dependencies
Dynamic_Variables
Fuego_Developer_Notes

Pages are cross referenced directly using page titles.
No anchor labels have been used.

Signed-off-by: Pooja More <pooja.sm at pathpartnertech.com>
---
 docs/rst_src/Board_Snapshot.rst        |  44 +++
 docs/rst_src/Dependencies.rst          | 241 +++++++++++++
 docs/rst_src/Dynamic_Variables.rst     |  51 +++
 docs/rst_src/Fuego_Developer_Notes.rst | 603 +++++++++++++++++++++++++++++++++
 4 files changed, 939 insertions(+)
 create mode 100644 docs/rst_src/Board_Snapshot.rst
 create mode 100644 docs/rst_src/Dependencies.rst
 create mode 100644 docs/rst_src/Dynamic_Variables.rst
 create mode 100644 docs/rst_src/Fuego_Developer_Notes.rst

diff --git a/docs/rst_src/Board_Snapshot.rst b/docs/rst_src/Board_Snapshot.rst
new file mode 100644
index 0000000..be0d2d2
--- /dev/null
+++ b/docs/rst_src/Board_Snapshot.rst
@@ -0,0 +1,44 @@
+
+
+###################
+Board Snapshot
+###################
+
+Fuego provides a feature to grab a "snapshot" of board status
+information and save it along with the other data associated with the
+run. The idea is that this status information might be helpful for
+diagnosing a problem encountered during a test (either a test failure
+or an error during test execution).
+
+This status information is obtained during the 'snapshot' phase of
+test execution.  The 'snapshot' phase of execution is turned on by
+default, but if the phases are manually enumerated, this phase can be
+omitted.
+
+The letter 's' is used to specify the 'snapshot' phase of test
+execution, with the '-p' option in 'ftc run-test'.
+
+The default snapshot operation calls
+:ref:`ov_rootfs_state <function ov rootfs state>`.
+
+The board status data is saved in the file: ``machine_snapshot.txt``
+in the log directory for a run (that is, under /fuego-rw/logs).
+
+==================================
+Overriding snapshot operations
+==================================
+
+There are two ways to override the snapshot operation, one on a
+per-board basis and one on a per-test basis.
+
+To override the operation on a per-board basis, the function
+ov_rootfs_state can be overridden.  This is done by creating a
+custom distribution overlay file, and then using the DISTRIB variable
+in the board file for a board.
+
+To override the operation on a per-test basis, a test can define its
+own 'test_snapshot' function.  If defined, then this function will be
+called in place of the default snapshot function in the Fuego core.
+
+See :ref:`test_snapshot  <function_test_snapshot>` for information about how
+to define this function in a ``fuego_test.sh`` script.
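+
+For illustration, here is a minimal sketch of such an override in a
+``fuego_test.sh`` script.  The ``test_snapshot`` function name comes
+from this page; the body is only an example (the ``cmd`` function runs
+a command on the board, and which commands are worth capturing depends
+on the test): ::
+
+  function test_snapshot {
+      # example only: capture memory and storage status from the board
+      cmd "free ; df -h ; uptime"
+  }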
diff --git a/docs/rst_src/Dependencies.rst b/docs/rst_src/Dependencies.rst
new file mode 100644
index 0000000..b3f72e8
--- /dev/null
+++ b/docs/rst_src/Dependencies.rst
@@ -0,0 +1,241 @@
+
+
+####################
+Dependencies
+####################
+
+===============
+Introduction
+===============
+
+Fuego includes a test dependency system that allows Fuego to determine
+whether a test can be run on a board or not.  The test dependency
+system provides an opportunity for Fuego to do an early abort of a
+test in case required conditions are not met for the test.
+
+The dependency system allows short-circuiting of test execution.  That
+is, these dependencies are checked during a pre_test phase, and if the
+dependencies are not met, Fuego aborts the test, before the test is
+built, deployed and executed on the target.
+
+The dependency system consists of 2 parts:
+
+ 1) A set of test variables in the base script that specify needed attributes
+    of the device under test. These dependencies are expressed as statically
+    declared "NEED_" variables in ``fuego_test.sh``
+
+ 2) The ability to define a test function, :ref:`test_pre_check
+    <function test pre check>`, that is called before the test executes,
+    which can test for arbitrary conditions
+
+In the future, it is intended that this feature will allow for
+automatically detecting which tests are applicable to particular
+boards.
+
+==================
+NEED variables
+==================
+
+A test can declare variables (called 'NEED' variables) in the
+``fuego_test.sh`` script that describe required attributes of the
+device under test.
+
+The following NEED variables are currently supported:
+
+ * NEED_MEMORY
+ * NEED_FREE_STORAGE
+ * NEED_ROOT
+ * NEED_KCONFIG
+ * NEED_MODULE
+
+These variables are usually declared after the source reference
+definition, and before the first function declaration, in
+``fuego_test.sh``.
+
+Declaration example
+=======================
+
+Here is an example, from Benchmark.signaltest:
+
+This shows the first few lines of ``fuego_test.sh`` for this test ::
+
+  tarball=signaltest.tar.gz
+
+  NEED_ROOT=1
+
+  function test_build {
+    make CC="$CC" LD="$LD" LDFLAGS="$LDFLAGS" CFLAGS="$CFLAGS"
+  }
+  ...
+
+
+NEED_MEMORY
+============
+
+The NEED_MEMORY variable is used to require that the board have a
+certain amount of free memory, for the test to run.  The value is
+expressed in either bytes, kilobytes, megabytes, gigabytes or
+terabytes.
+
+The value is declared as an integer number (base 10) followed by an
+optional suffix - one of 'K', 'M', 'G', 'T'.
+
+Here are some examples:
+
+ * NEED_MEMORY=500K
+ * NEED_MEMORY=2G
+ * NEED_MEMORY=1500000
+
+As a technical detail, the value specified is compared with the value
+of MemFree in /proc/meminfo on the target board.
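+
+For reference, that value can be inspected manually on the target like
+this (plain shell, not Fuego code): ::
+
+  $ grep MemFree /proc/meminfo
+  MemFree:          123456 kB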
+
+NEED_FREE_STORAGE
+======================
+
+The NEED_FREE_STORAGE variable is used to require that the board have
+a certain amount of free storage, in the filesystem where the test
+needs it, for the test to run.  The value is expressed in either
+bytes, kilobytes, megabytes, gigabytes or terabytes.  The value of
+NEED_FREE_STORAGE is usually provided with 2 strings - a string
+indicating the required size, and a directory to check.
+
+Most tests are executed in the directory specified by $BOARD_TESTDIR,
+so that is often the second string provided.  However, if a test needs
+space somewhere else in the filesystem (besides where the test
+normally runs), this can be specified directly (statically), or via
+some other test variable.  If no second string is provided, the
+specified value of free storage required is compared with the amount
+of available storage in the root filesystem.
+
+The value required is declared as an integer number (base 10) followed
+by an optional suffix - one of 'K', 'M', 'G', 'T'.
+
+Here are some examples:
+
+ * NEED_FREE_STORAGE=2G
+ * NEED_FREE_STORAGE="50M $BOARD_TESTDIR"
+ * NEED_FREE_STORAGE="5T /media/raid-array"
+
+As a technical detail, the value specified is compared with the value
+of the 'Available' column returned by 'df' for the indicated
+directory.
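+
+For reference, that is the column shown here (plain shell, not Fuego
+code; the directory is just an example): ::
+
+  $ df -k $BOARD_TESTDIR
+  # the 'Available' column (in 1K blocks) is the value that
+  # NEED_FREE_STORAGE is checked against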
+
+NEED_ROOT
+==========
+
+This variable is declared if a test is required to be executed with
+'root' privileges.  In that case, the following should be added to the
+test script:
+
+ * NEED_ROOT=1
+
+NEED_KCONFIG
+=============
+
+This variable is used to check that one or more kernel configuration
+options have specified values.
+
+The NEED_KCONFIG line can list more than one kernel config option.
+Each option is of the form: CONFIG_OPTION={value}.  Currently, the
+value must be one of: 'y' or 'n'.
+
+Here are some examples:
+
+ * NEED_KCONFIG="CONFIG_PRINTK=y"
+ * NEED_KCONFIG="CONFIG_LOCKDEP_SUPPORT=n"
+ * NEED_KCONFIG="CONFIG_USB=y CONFIG_USB_EHCI_MV_U20=y"
+
+Technical detail:  The kernel configuration is searched for in the
+following locations, in order.  On the target:
+
+ * /proc/config.gz
+ * /boot/config-$(uname -r)
+
+and on the host:
+
+ * $KERNEL_SRC/.config
+
+If NEED_KCONFIG is defined, but if the kernel configuration of the
+target board can not be found, then the dependency check fails.
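+
+For illustration, the first of those checks corresponds roughly to
+shell logic like the following (a sketch, not Fuego's actual
+implementation): ::
+
+  # check one option in the target's /proc/config.gz (if present)
+  if zcat /proc/config.gz 2>/dev/null | grep -q "^CONFIG_PRINTK=y$"; then
+      echo "CONFIG_PRINTK=y is set"
+  fi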
+
+.. note::
+   it is intended that the kernel build system will set the board
+   variable KERNEL_SRC, for use by the Fuego system (but this is not
+   implemented yet).
+
+NEED_MODULE
+===============
+
+This variable indicates that a test needs a particular module loaded
+on the system, in order to run.
+
+Here is an example:
+
+ * NEED_MODULE=bitrev
+
+.. note::
+   it's unclear that this is a good way to detect a kernel feature
+   needed for a test.  Any module that is upstream can also be included in
+   the kernel statically.  Testing for a driver or feature as a module would
+   miss that configuration.
+
+==================
+test_pre_check
+==================
+
+A test base script (fuego_test.sh) can provide a function called
+:ref:`test_pre_check <function test pre check>`  where arbitrary commands
+can be run, to determine if the device under test (the board) has the
+required features or hardware for the test.
+
+This function, if present, is run during the pre_test phase of test
+execution.  Thus, if prerequisite conditions are not met, the test can
+abort early and avoid the additional test phases (build, deploy, run,
+etc.)
+
+The following functions are commonly used in the test_pre_check routine:
+
+ * :ref:`assert_define <function assert define>`
+ * :ref:`is_on_target <function is on target>`
+ * :ref:`is_on_target_path <function is on target path>`
+ * :ref:`assert_has_program <function assert has program>`
+ * :ref:`is_on_sdk <function is on sdk>`
+ * :ref:`abort_job <function_abort_job>`
+
+For examples of how to use these functions, see the individual
+documentation pages for the functions (linked above).
+
+Additionally, the test_pre_check function can execute any additional
+code it wants (using the 'cmd' function) to query the target for
+required capabilities during this phase.  Or, it might check
+conditions on the host, network, or extended test environment, to
+verify that conditions are ready for the test.
+
+This might include checking for things like:
+
+ * mounted file systems
+ * network connections
+ * required hardware
+ * auxiliary test harness availability and preparation
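+
+As a concrete (hypothetical) illustration, a test_pre_check function
+combining some of the helpers above might look like this.  The helper
+names come from the list above; the specific arguments and variable
+names are made up for this sketch: ::
+
+  function test_pre_check {
+      # make sure the required test variables were defined by the spec
+      assert_define FUNCTIONAL_MYTEST_MOUNT_POINT
+
+      # make sure a program the test depends on exists on the target
+      assert_has_program bc
+
+      # any other arbitrary condition can abort the job early
+      if ! cmd "test -d $FUNCTIONAL_MYTEST_MOUNT_POINT"; then
+          abort_job "mount point is missing on the target"
+      fi
+  }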
+
+======================
+Envisioned features
+======================
+
+In the future, the test dependency system may be used to allow a Fuego
+user to select tests which are appropriate for the hardware or
+distribution that they have.
+
+Fuego does not currently have thousands of tests.  But in a future
+where there ARE thousands of tests, it will be overwhelming to the
+test user to select those tests which are of interest for their
+hardware.  The test dependency system will allow Fuego to
+automatically compare the features or hardware required for a test,
+with the features and hardware of a board, and decide if a test is
+compatible or relevant for that board.
+
+When Fuego has a test "server", this can be used as a matching service
+to select tests for execution on boards that have specific features or
+hardware.  When Fuego has a test "store", then the dependency system
+can be used to filter the tests to only those that are relevant to
+a user's testing needs.
+
+The NEED variables are declarative, rather than imperative (like the
+test_pre_check function), so that it will be possible to develop an
+automated system to do this matching (between test and board).
+
diff --git a/docs/rst_src/Dynamic_Variables.rst b/docs/rst_src/Dynamic_Variables.rst
new file mode 100644
index 0000000..97f741a
--- /dev/null
+++ b/docs/rst_src/Dynamic_Variables.rst
@@ -0,0 +1,51 @@
+
+
+###########################
+Dynamic Variables
+###########################
+
+"Dynamic variables" in Fuego are variables that can be passed to a
+test on the command line, and used to customize the operation of a
+test, for a particular test run.
+
+In general testing nomenclature this is referred to as test
+parameterization.
+
+The purpose of dynamic variables is to support "variant" testing,
+where a script can loop over a test multiple times, changing the
+variable to different values.
+
+In Fuego, during test execution, dynamic variable names are expanded
+to full variable names that are prefixed with the name of the test.  A
+dynamic variable overrides a spec variable of the same name.
+
+Here is an example of using dynamic variables: ::
+
+  $ ftc run-test -b beaglebone -t Benchmark.Dhrystone --dynamic-vars "LOOPS=100000000"
+
+
+This would override the default value for BENCHMARK_DHRYSTONE_LOOPS,
+setting it to 100000000 (100 million) for this run.  Normally, the
+default spec for Benchmark.Dhrystone specifies a value of 10000000
+(10 million) for LOOPS.
+
+This feature is intended to be useful for doing a 'git bisect' of a
+bug, passing a different git commit id for each iteration of the test.
+
+See :ref:`Test_variables <Test variables>` for more information.
+
+Notes
+==========
+
+Note that dynamic vars are added to the runtime ``spec.json`` file, which
+is saved in the log directory for the run being executed.
+
+This spec.json file is copied from the one specified for the run
+(usually from the test's home directory).
+
+If dynamic variables have been defined for a test, then they are
+listed by name in the run-specific spec.json file, as the value of the
+variable "dyn_vars".  The reason for this is to allow someone who
+reviews the test results later to easily see whether a particular test
+variable had a value that derived from the spec, or from a dynamic
+variable.  This is important for proper results interpretation.
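+
+For illustration, the run-specific ``spec.json`` for the Dhrystone
+example above might then contain entries along these lines.  This is
+a hypothetical reconstruction - the exact field layout may differ: ::
+
+  {
+      "testName": "Benchmark.Dhrystone",
+      "specName": "default",
+      "LOOPS": "100000000",
+      "dyn_vars": ["LOOPS"]
+  }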
diff --git a/docs/rst_src/Fuego_Developer_Notes.rst b/docs/rst_src/Fuego_Developer_Notes.rst
new file mode 100644
index 0000000..5c1ee70
--- /dev/null
+++ b/docs/rst_src/Fuego_Developer_Notes.rst
@@ -0,0 +1,603 @@
+
+################################
+Fuego Developer Notes
+################################
+
+
+This page has some detailed notes about Fuego, Jenkins and how they
+interact:
+
+=============
+Resources
+=============
+
+Here are some pages in this documentation with developer information:
+
+ * :ref:`Coding style <Coding style>`
+ * :ref:`Core_interfaces <Core interfaces>`
+ * :ref:`Glossary`
+ * :ref:`Fuego test results determination`
+ * :ref:`Fuego_naming_rules <Fuego naming rules>`
+ * :ref:`Fuego Object Details`
+ * :ref:`Integration with ttc`
+ * :ref:`Jenkins User Interface`
+ * :ref:`Jenkins Plugins`
+ * :ref:`License And Contribution Policy`
+ * :ref:`Log files`
+ * :ref:`Metrics`
+ * :ref:`Overlay Generation`
+ * :ref:`ovgen feature notes`
+ * :ref:`Parser module API`
+ * :ref:`Test Script APIs`
+ * :ref:`Test package system`
+ * :ref:`Test server system`
+ * :ref:`Transport notes`
+ * :ref:`Variables`
+
+
+==========
+Notes
+==========
+
+specific questions to answer
+===============================
+
+What happens when you click on the "run test" button:
+
+ * what processes start on the host
+
+    * java -jar ``/home/jenkins/slave.jar``, executing a shell running
+      the contents of the job.xml "hudson.tasks.Shell/command" block:
+
+      * this block is labeled: "Execute shell: Command" in the "Build"
+        section of the job, in the configure page for the job in the
+        Jenkins user interface.
+
+ * what interface is used between ``prolog.sh`` and the Jenkins processes
+
+   * stop appears to be by issuing "http://localhost:8090/...stop"
+   * see :ref:`Fuego-Jenkins`
+
+Each Jenkins node is defined in Jenkins in:
+``/var/lib/jenkins/nodes/config.xml``
+
+ * The name of the node is used as the "Device" and "NODE_NAME" for a
+   test.
+
+   * These environment variables are passed to the test agent, which is
+     always "java -jar /home/jenkins/slave.jar"
+
+ * Who calls ovgen.py - it is included indirectly, when the base
+   script sources the shell script for its test type (``functional.sh``
+   or ``benchmark.sh``)
+
+  * base_script sources: ``functional.sh``
+
+     * functional.sh sources: ``overlays.sh``
+
+       * overlays.sh calls: ``ovgen.py``
+
+Jenkins calls:
+
+  * java -jar /fuego-core/engine/slave.jar
+
+     * with variables:
+
+       * Device
+       * Reboot
+       * Rebuild
+       * Target_PreCleanup
+       * Target_PostCleanup
+       * TESTDIR
+       * TESTNAME
+       * TESTSPEC
+       * FUEGO_DEBUG
+
+   * the Test Run section of the job for a test configuration has a
+     command with the following shell commands: ::
+
+       export Reboot=false
+       export Rebuild=true
+       export Target_PreCleanup=true
+       export Target_PostCleanup=true
+       export TESTDIR=Functional.bc
+       export TESTNAME=bc
+       export TESTSPEC=default
+       #export FUEGO_DEBUG=1
+       timeout --signal=9 100m /bin/bash $FUEGO_CORE/engine/tests/${TESTDIR}/${TESTNAME}.sh
+
+Some Jenkins notes: Jenkins stores its configuration in plain files
+under JENKINS_HOME.  You can edit the data in these files using the
+web interface, or manually from the command line (and have the changes
+take effect at runtime by selecting "Reload configuration from
+disk").
+
+By default, Jenkins assumes you are doing a continuous integration
+action of "build the product, then test the product".   It has default
+support for Java projects.
+
+Fuego seems to use distributed builds (configured in a master/slave
+fashion).
+
+Jenkins home has (from 2007 docs):
+
+  * config.xml - has stuff for the main user interface
+  * *.xml
+  * fingerprints - directory for artifact fingerprints
+  * jobs
+
+    * <JOBNAME>
+
+      * config.xml
+      * workspace
+      * latest
+      * builds
+
+        * <ID>
+        * build.xml
+        * log
+        * changelog.xml
+
+The docker container interfaces to the outside host filesystem via the
+following links:
+
+ * /fuego-ro -> <host-fuego-location>/fuego-ro
+ * /fuego-rw -> <host-fuego-location>/fuego-rw
+ * /fuego-core -> <host-fuego-core-location>
+
+What are all the fields in the "configure node" dialog?
+Specifically:
+
+ * where is "Description" used?
+ * what is "# of executors"?
+ * how is "Remote FS root" used?
+
+   * is this a path inside the Fuego container, or on the target?
+
+     * I presume that the slave program is actually 'xxx_prolog.sh', which runs
+       on host, and that ``/tmp/dev-slave1`` would be where builds for the target
+       would occur.
+
+ * what are Labels used for?
+
+   * as tags for grouping builds
+
+ * Launch method: Fuego uses the Jenkins option "Launch slave via
+   execution of command on the Master".
+   The command is "java -jar /fuego-core/engine/slave.jar"
+
+     * NOTE: slave.jar comes from the jta-core git repository, under
+       engine/slave.jar
+
+
+The fuego-core repository has: ::
+
+
+ engine
+   overlays - has the base classes for fuego functions
+     base - has core shell functions
+     testplans - has json files for parameter specifications
+     distribs - has shell functions related to the distro
+   scripts - has fuego scripts and programs
+    (things like overlays.sh, loggen.py, parser/common.py, ovgen.py, etc.)
+   slave.jar - java program that Jenkins calls to execute a test
+   tests - has a directory for each test
+     Benchmark.foo
+       Benchmark.foo.spec
+       foo.sh
+       test.yaml
+       reference.log
+       parser.py
+     Functional.bar
+     LTP
+     etc.
+
+
+What is groovy?
+
+ * an interpreted language for Java, used by the scriptler plugin to
+   extend Jenkins
+
+What plugins are installed with Jenkins in the JTA configuration?
+
+ * Jenkins Mailer, LDAP, External Monitor Job Type, PAM, Ant, Javadoc
+ * Jenkins Environment File (special)
+ * Credentials, SSH Credentials, Jenkins SSH Slaves, SSH Agent
+ * Git Client, Subversion, Token Macro, Maven Integration, CVS
+ * Parameterized Trigger (special)
+ * Git, Groovy Label Assignment, Extended Choice Parameter
+ * Rebuilder...
+ * Groovy Postbuild, ez-templates, HTML Publisher (special)
+ * JTA Benchmark show plot plugin (special)
+ * Log Parser Plugin (special)
+ * Dashboard view (special)
+ * Compact Columns (special)
+ * Jenkins Dynamic Parameter (special)
+ * flot (special) - benchmark graphs plotting plug-in for Fuego
+
+Which of these did Cogent write?
+
+ * the flot plugin (not flot itself)
+
+What scriptler scripts are included in JTA?
+
+ * getTargets
+ * getTestplans
+ * getTests
+
+What language are scriptler scripts in?
+
+ * Groovy
+
+What is the Maven plugin for Jenkins?
+
+ * Maven is an apache project to build and manage Java projects
+
+   * I don't think the plugin is needed for Fuego
+
+Jenkins refers to a "slave" - what does this mean?
+
+ * it refers to a sub-process that can be delegated work.  Roughly
+   speaking, Fuego uses the term 'target' instead of 'slave', and
+   modifies the Jenkins interface to support this.
+
+
+How the tests work
+===================
+
+A simple test that requires no building is Functional.bc.
+
+  * the test script and test program source are found in the
+    directory: ``/home/jenkins/tests/Functional.bc``
+
+This runs a shell script on the target to test the 'bc' program.
+
+Functional.bc has the files: ::
+
+    bc-script.sh
+       declares "tarball=bc-script.tar.gz"
+       defines shell functions:
+         test_build - calls 'echo' (does nothing)
+         test_deploy - calls 'put bc-device.sh'
+         test_run - calls 'assert_define', 'report'
+           report references bc-device.sh
+         test_processing - calls 'log_compare'
+           looking for "OK"
+       sources $JTA_SCRIPTS_PATH/functional.sh
+     bc-script.tar.gz
+       bc-script/bc-device.sh
+
+
+Variables used (in bc-script.sh): ::
+
+  BOARD_TESTDIR
+  TESTDIR
+  FUNCTIONAL_BC_EXPR
+  FUNCTIONAL_BC_RESULT
+
+
+
+A simple test that requires simple building is Functional.synctest.
+
+This test tries to call fsync to write data to a file, but is
+interrupted with a kill command during the fsync().  If the child dies
+before the fsync() completes, it is considered success.
+
+It requires shared memory (shmget, shmat) and semaphore IPC (semget
+and semctl) support in the kernel.
+
+Functional.synctest has the files: ::
+
+     synctest.sh
+       declares "tarball=synctest.tar.gz"
+       defines shell functions:
+         test_build - calls 'make'
+         test_deploy - calls 'put'
+         test_run - calls 'assert_define', hd_test_mount_prepare, and 'report'
+         test_processing - calls 'log_compare'
+           looking for "PASS : sync interrupted"
+       sources $JTA_SCRIPTS_PATH/functional.sh
+     synctest.tar.gz
+       synctest/synctest.c
+       synctest/Makefile
+     synctest_p.log
+       has "PASS : sync interrupted"
+
+
+Variables used (by synctest.sh) ::
+
+  CFLAGS
+  LDFLAGS
+  CC
+  LD
+  BOARD_TESTDIR
+  TESTDIR
+  FUNCTIONAL_SYNCTEST_MOUNT_BLOCKDEV
+  FUNCTIONAL_SYNCTEST_MOUNT_POINT
+  FUNCTIONAL_SYNCTEST_LEN
+  FUNCTIONAL_SYNCTEST_LOOP
+
+.. note::
+
+  could be improved by checking for CONFIG_SYSVIPC in /proc/config.gz
+  to verify that the required kernel features are present
+
+MOUNT_BLOCKDEV and MOUNT_POINT are used by 'hd_test_mount_prepare', but
+are prefaced with FUNCTIONAL_SYNCTEST or BENCHMARK_BONNIE.
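+
+For reference, the test_processing step above boils down to a
+log_compare call along these lines (a sketch of the usual calling
+convention: test directory, expected match count, pattern, and 'p'
+for positive results): ::
+
+  function test_processing {
+      # expect exactly 1 occurrence of the PASS line in the test log
+      log_compare "$TESTDIR" "1" "^PASS : sync interrupted" "p"
+  }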
+
+
+From clicking "Run Test" to executing code on the target:
+``config.xml`` has the slave command ``/home/jenkins/slave.jar``,
+which is a link to ``/home/jenkins/jta/engine/slave.jar``.
+
+``overlays.sh`` has "run_python $OF_OVGEN ...",
+where OF_OVGEN is set to "$JTA_SCRIPTS_PATH/ovgen/ovgen.py"
+
+How is overlays.sh called?  It is sourced by
+``/home/jenkins/scripts/benchmarks.sh`` and
+``/home/jenkins/scripts/functional.sh``.
+
+``functional.sh`` is sourced by each Functional.foo script.
+
+
+For Functional.synctest: ::
+
+  Functional.synctest/config.xml
+    for the attribute <hudson.tasks.Shell> (in <builders>)
+      <command>....
+        source $JTA_TESTS_PATH/$JOB_NAME/synctest.sh</command>
+
+  synctest.sh
+    '. $JTA_SCRIPTS_PATH/functional.sh'
+       'source $JTA_SCRIPTS_PATH/overlays.sh'
+       'set_overlay_vars'
+           (in overlays.sh)
+           run_python $OF_OVGEN ($JTA_SCRIPTS_PATH/ovgen/ovgen.py) ...
+                  $OF_OUTPUT_FILE ($JTA_SCRIPTS_PATH/work/${NODE_NAME}_prolog.sh)
+             generate xxx_prolog.sh
+           SOURCE xxx_prolog.sh
+
+       functions.sh pre_test()
+
+       functions.sh build()
+          ... test_build()
+
+       functions.sh deploy()
+
+       test_run()
+         assert_define()
+         functions.sh report()
+
+
+
+
+NOTES about ovgen.py
+======================
+
+What does this program do?
+
+Here is a sample command line from a test console output: ::
+
+  python /home/jenkins/scripts/ovgen/ovgen.py \
+    --classdir /home/jenkins/overlays//base \
+    --ovfiles /home/jenkins/overlays//distribs/nologger.dist /home/jenkins/overlays//boards/bbb.board \
+    --testplan /home/jenkins/overlays//testplans/testplan_default.json \
+    --specdir /home/jenkins/overlays//test_specs/ \
+    --output /home/jenkins/work/bbb_prolog.sh
+
+
+So, ovgen.py takes a classdir, a list of ovfiles, a testplan and a
+specdir, and produces an xxx_prolog.sh file, which is then sourced by
+the main test script.
+
+Here is information about ovgen.py source: ::
+
+  Classes:
+   OFClass
+   OFLayer
+   TestSpecs
+
+
+::
+
+  Functions:
+   parseOFVars - parse Overlay Framework variables and definitions
+   parseVars - parse variables definitions
+   parseFunctionBodyName
+   parseFunction
+   baseParseFunction
+   parseBaseFile
+   parseBaseDir
+   parseInherit
+   parseInclude
+   parseLayerVarOverride
+   parseLayerFuncOverride
+   parseLayerVarDefinition
+   parseLayerCapList - look for BOARD.CAP_LIST
+   parseOverrideFile
+   generateProlog
+   generateSpec
+   parseGenTestPlan
+   parseSpec
+   parseSpecDir
+   run
+
+
+
+Sample generated test script
+==================================
+
+bbb_prolog.sh is 195 lines, and has the following vars and functions:
+::
+
+
+   from class:base-distrib:
+     ov_get_firmware()
+     ov_rootfs_kill()
+     ov_rootfs_drop_caches()
+     ov_rootfs_oom()
+     ov_rootfs_sync()
+     ov_rootfs_reboot()
+     ov_rootfs_state()
+     ov_logger()
+     ov_rootfs_logread()
+
+   from class:base-board:
+    LTP_OPEN_POSIX_SUBTEST_COUNT_POS
+    MMC_DEV
+    SRV_IP
+    SATA_DEV
+    ...
+    JTA_HOME
+    IPADDR
+    PLATFORM=""
+    LOGIN
+    PASSWORD
+    TRANSPORT
+    ov_transport_cmd()
+    ov_transport_put()
+    ov_transport_get()
+
+   from class:base-params:
+    DEVICE
+    PATH
+    SSH
+    SCP
+
+   from class:base-funcs:
+    default_target_route_setup()
+
+   from testplan:default:
+    BENCHMARK_DHRYSTONE_LOOPS
+    BENCHMARK_<TESTNAME>_<VARNAME>
+    ...
+    FUNCTIONAL_<TESTNAME>_<VARNAME>
+
+
+========
+Logs
+========
+
+When a test is executed, several different kinds of logs are
+generated: the devlog, the system logs, the test logs, and the console
+log.
+
+
+created by Jenkins
+====================
+
+ * console log
+
+   * this is located in ``/var/lib/jenkins/jobs/<test_name>/builds/<build_id>/log``
+   * it has the output from running the test script (on the host)
+
+
+created by ftc
+=====================
+
+ * console log
+
+   * if 'ftc' was used to run the test, then the console log is
+     created in the log directory
+
+   * it is called consolelog.txt
+
+
+
+created by the test script
+================================
+
+ * these are created in the directory:
+   ``/fuego-rw/logs/<test_name>/<board>.<spec>.<build_id>.<build_number>/``
+ * devlog has a list of commands run on the board during the test
+
+   * named devlog.txt
+
+ * system logs have the log data from the board (e.g.
+   /var/log/messages) before and after the test run:
+
+   * named: ``syslog.before.txt`` and ``syslog.after.txt``
+
+ * the test logs have the actual output from the test program on the target
+
+   * this is completely dependent on what the test program outputs
+   * named: ``testlog.txt``
+
+     * this is the 'raw' log
+
+   * there may be 'parsed' logs, which are the test log filtered by log_compare operations:
+
+      * this is named: ``testlog.p.txt`` or ``testlog.n.txt``
+      * the 'p' indicates positive results and the 'n' indicates negative results
+
+================
+Core scripts
+================
+
+The test script is sourced by the Fuego main.sh script.
+
+This script sources several other scripts, and ends up including
+``fuego_test.sh`` via the Fuego overlay mechanism.
+
+ * load overlays and set_overlay vars
+ * pre_test $TEST_DIR
+ * build
+ * deploy
+ * test_run
+ * set_testres_file, bench_processing, check_create_logrun (if a benchmark)
+ * get_testlog $TESTDIR, test_processing (if a functional test)
+ * get_testlog $TESTDIR (if a stress test)
+ * test_processing (if a regular test)
+
+For the functions available to test scripts, see
+:ref:`Test Script APIs`.
+
+
+Benchmark tests must provide a parser.py file, which extracts the
+benchmark results from the log data.
+
+It does this by doing the following (in outline): ::
+
+  import common as plib
+  f = open(plib.TEST_LOG)
+  lines = f.readlines()
+  # (parse the data)
+  # create a dictionary with a key and value, where the key matches
+  # the string in the reference.log file
+
+The ``parser.py`` program builds a dictionary of values by parsing
+the log from the test (basically the test output).
+It then sends the dictionary, and the pattern for matching the
+reference log test criteria, to the routine
+``common.py:process_data()``.
+
+It defines ref_section_pat, and passes that to process_data().
+Here are the different patterns used for ref_section_pat: ::
+
+  9  "\[[\w]+.[gle]{2}\]"
+  1  "\[\w*.[gle]{2}\]"
+  1  "^\[[\d\w_ .]+.[gle]{2}\]"
+  1  "^\[[\d\w_.-]+.[gle]{2}\]"
+  1  "^\[[\w\d&._/()]+.[gle]{2}\]"
+  4  "^\[[\w\d._]+.[gle]{2}\]"
+  2  "^\[[\w\d\s_.-]+.[gle]{2}\]"
+  3  "^\[[\w\d_ ./]+.[gle]{2}\]"
+  5  "^\[[\w\d_ .]+.[gle]{2}\]"
+  1  "^\[[\w\d_\- .]+.[gle]{2}\]"
+  1  "^\[[\w]+.[gle]{2}\]"
+  1  "^\[[\w_ .]+.[gle]{2}\]"
+
+
+Why are so many different ones needed?
+Why couldn't the syntax be: <var-name> <test> <value> on one line?
+
+It turns out this is processed by an 'awk' script, thus the weird
+syntax.  We should get rid of the awk script and use python instead.
+
+
+How is benchmark graphing done?
+===================================
+
+See :ref:`Benchmark parser note`
+
+
+docker tips
+============
+
+See :ref:`Docker Tips`
-- 
2.7.4

