[Fuego] AGL-JTA XML output investigation

Daniel Sangorrin daniel.sangorrin at toshiba.co.jp
Thu Nov 24 07:29:45 UTC 2016


Hi Tim,

> -----Original Message-----
> From: fuego-bounces at lists.linuxfoundation.org [mailto:fuego-bounces at lists.linuxfoundation.org] On Behalf Of Daniel Sangorrin
> Sent: Thursday, November 17, 2016 5:56 PM
> From a first glance at the AGL-JTA-core repository, they have a "common" folder
> that includes almost the same tests as Fuego (some tests are new and some are missing).
> Most of them have unmodified parser.py files (apart from the JTA-Fuego renaming).
> However, I observe that each test also has a file "create_xml_testname.py". From a quick
> read it looks like they are parsing the logs into xml files, and at the same time saving information
> such as timing or build number.  I haven't checked yet but my guess is they are later
> converting the XML files into HTML so they can be read from the Jenkins interface.
> # I will check what the XML format looks like next week and report

After installing AGL-JTA, I have been investigating how the XML output actually works
and how it relates to the old parser and the flot plotter plugin. I will try to
illustrate the steps using dbench as an example.

It all starts when Jenkins launches dbench.sh, which in turn sources benchmark.sh.
Once the test execution on the target ends, benchmark.sh calls
"bench_processing", which does the following:
    - Outputs a "RESULT ANALYSIS" message
    - Fetches the test execution log:
        $ cat /home/jenkins/logs/Benchmark.dbench/testlogs/<board.job>.log
            ...
            Throughput 138.451 MB/sec 2 procs
    - It calls dbench's parser.py, which uses a regex to match the lines in
      the log that contain the benchmark results, and then calls into the
      common parser code (engine/scripts/parser/common.py), which appends
      the result to a plot.data file, generates a plot.png file, and decides
      whether the test passes based on a threshold comparison (using
      reference.log). A rough sketch of this step follows the list below.
        -> /home/jenkins/logs/Benchmark.dbench/plot.data
        -> /home/jenkins/tests/common/Benchmark.dbench/reference.log
        -> /home/jenkins/logs/Benchmark.dbench/plot.png
    - It then calls engine/scripts/parser/dataload.py which transforms the
      information from plot.data into two json files:
        -> /home/jenkins/logs/Benchmark.dbench/Benchmark.dbench.Throughput.json
        -> /home/jenkins/logs/Benchmark.dbench/Benchmark.dbench.info.json
      [Note] These json files are supposed to be used by the flot-plotter plugin,
             but as far as I know that plugin is not available in AGL-JTA.
      [Note] If you want the output format to be json instead of XML,
             this would be a good place to do that.
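
Here is the rough sketch mentioned above of what the parser step boils down to.
This is not the actual JTA/Fuego code: the paths, the regex and the
reference.log format are assumptions for illustration only.

    #!/usr/bin/env python
    # Sketch only: extract the dbench throughput from the test log and
    # compare it against a threshold taken from reference.log.
    import re
    import sys

    LOG = '/home/jenkins/logs/Benchmark.dbench/testlogs/board.job.log'   # hypothetical
    REF = '/home/jenkins/tests/common/Benchmark.dbench/reference.log'    # hypothetical

    # e.g. "Throughput 138.451 MB/sec 2 procs"
    throughput_re = re.compile(r'^Throughput\s+([\d.]+)\s+MB/sec', re.MULTILINE)

    with open(LOG) as f:
        m = throughput_re.search(f.read())
    if not m:
        sys.exit(1)                      # nothing to parse -> fail the build
    value = float(m.group(1))

    # Assume reference.log holds a single lower bound for this metric.
    with open(REF) as f:
        threshold = float(f.read().split()[0])

    print('Throughput: %.3f MB/sec (threshold %.3f)' % (value, threshold))
    sys.exit(0 if value >= threshold else 1)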

After that, the "POST BUILD TASK" checks the syslogs and cleans directories
on the target as necessary.

So far this is very similar to Fuego. The real difference is a new
"Post-build Actions" step, "Execute set of scripts", which has been added and
contains the following calls to python code:
    RET=0
    python $JTA_TESTS_PATH/$TEST_CATEGORY/$JOB_NAME/create_xml_dbench.py || RET=$?
    python $JTA_ENGINE_PATH/scripts/detailed_results/get_detailed_results.py \
       $JTA_LOGS_PATH/$JOB_NAME/test_result.${BUILD_ID}.${BUILD_NUMBER}.html \
       $JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_NUMBER/test_result.xml
    exit $RET

Each test in AGL-JTA has its own create_xml_<test>.py (which, by the way,
duplicates a lot of code). This script uses the information stored in
"build.xml" (start time, end time, result, ...), "config_default" (avg, min
and max values), and the _complete_ execution log, and produces a
"test_result.xml" file. A rough sketch follows the file list below.
    - jta/job_conf/common/Benchmark.dbench/builds/5/build.xml
    - jta/engine/tests/common/Benchmark.dbench/config_default
    - jta/jobs/Benchmark.dbench/builds/5/log
      [IMPORTANT] it parses the messages printed out by parser.py, so there is
      a hidden dependency between this script and the old parser.
    - jta/job_conf/common/Benchmark.dbench/builds/5/test_result.xml
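
Here is the rough sketch mentioned above of what such a create_xml_<test>.py
could look like. This is not the actual AGL-JTA code: the tag names read from
build.xml, the paths and the log parsing are assumptions for illustration only.

    #!/usr/bin/env python
    # Sketch only: assemble a test_result.xml like the examples below from
    # the build metadata and the console log of the old parser.
    import re
    import xml.etree.ElementTree as ET

    BUILD_XML = 'builds/5/build.xml'        # hypothetical path
    LOG = 'builds/5/log'                    # hypothetical path
    OUT = 'builds/5/test_result.xml'        # hypothetical path

    build = ET.parse(BUILD_XML).getroot()

    report = ET.Element('report')
    ET.SubElement(report, 'name').text = 'Benchmark.dbench'
    ET.SubElement(report, 'starttime').text = build.findtext('startTime', '')  # assumed tag
    ET.SubElement(report, 'endtime').text = build.findtext('endTime', '')      # assumed tag
    ET.SubElement(report, 'result').text = build.findtext('result', 'FAILURE')

    items = ET.SubElement(report, 'items')

    # Re-parse the console log for the line printed by the old parser.py
    # (the hidden dependency mentioned above).
    m = re.search(r'Throughput\s+([\d.]+)\s+MB/sec', open(LOG).read())
    if m:
        item = ET.SubElement(items, 'item')
        ET.SubElement(item, 'name').text = 'Throughput'
        ET.SubElement(item, 'output').text = m.group(1)
        ET.SubElement(item, 'unit').text = 'MB/s'
        ET.SubElement(item, 'result').text = 'PASS'

    ET.ElementTree(report).write(OUT, encoding='utf-8', xml_declaration=True)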

This is a test_result.xml example (failure):
    <?xml version="1.0" encoding="utf-8"?>
    <report>
        <name>Benchmark.dbench</name>
        <starttime>2016-11-24 02:15:52</starttime>
        <endtime>2016-11-24 02:18:22</endtime>
        <result>FAILURE</result>
        <items>
        </items>
    </report>

And this is another one (success):
    <?xml version="1.0" encoding="utf-8"?>
    <report>
        <name>Benchmark.dbench</name>
        <starttime>2016-11-24 03:00:23</starttime>
        <endtime>2016-11-24 03:00:51</endtime>
        <result>SUCCESS</result>
        <items>
            <item>
                <name>Throughput</name>
                <average>99.48</average>
                <unit>MB/s</unit>
                <criterion>0.00 ~ 100.00</criterion>
                <output>138.451</output>
                <rate>1.39</rate>
                <result>PASS</result>
            </item>
        </items>
        <test_dir>/tests/jta.Benchmark.dbench</test_dir>
        <command_line>./dbench -t 10 -D /a/jta.Benchmark.dbench -c /a/jta.Benchmark.dbench/client.txt 2</command_line>
    </report>

The script "get_detailed_results.py" takes the "test_result.xml" and a
template html ("detailed_results_tpl.html") as inputs, and renders the results
in html format. In the process it also summarizes the pass/fail results.
    - engine/scripts/detailed_results/get_detailed_results.py
    - jta/engine/scripts/detailed_results/detailed_results_tpl.html
    - /userdata/logs/Benchmark.dbench/test_result.5.5.html
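
Here is the rough sketch mentioned above of that rendering step (again, not
the actual get_detailed_results.py: the real script fills in
detailed_results_tpl.html, whereas the HTML below is just a stand-in to show
the idea).

    #!/usr/bin/env python
    # Sketch only: read test_result.xml, summarize pass/fail, and emit HTML.
    import sys
    import xml.etree.ElementTree as ET

    xml_path, html_out = sys.argv[1], sys.argv[2]
    report = ET.parse(xml_path).getroot()

    items = report.findall('./items/item')
    passed = sum(1 for i in items if i.findtext('result') == 'PASS')
    failed = len(items) - passed

    rows = '\n'.join('<tr><td>%s</td><td>%s</td></tr>'
                     % (i.findtext('name'), i.findtext('result')) for i in items)

    html = ('<html><body><h1>%s: %s</h1>'
            '<p>%d passed / %d failed</p>'
            '<table>%s</table></body></html>'
            % (report.findtext('name'), report.findtext('result'),
               passed, failed, rows))

    with open(html_out, 'w') as f:
        f.write(html)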

Finally, the resulting HTML, as well as the plot.png file, are linked
in the "Set build description" section using these lines:
    Dbench benchmark<p><b><a href="/userContent/jta.logs/Benchmark.dbench/plot.png">
    Graph</a></b> <b><a href="/userContent/jta.logs/${JOB_NAME}/test_result.${BUILD_ID}.
    ${BUILD_NUMBER}.html">Test Result</a></b></p>

Other test_result.xml examples:
    <?xml version="1.0" encoding="utf-8"?>
    <report>
        <name>Benchmark.Dhrystone</name>
        <starttime>2016-11-24 03:05:30</starttime>
        <endtime>2016-11-24 03:05:45</endtime>
        <result>SUCCESS</result>
        <items>
            <item>
                <name>Dhrystone</name>
                <average>5555555.50</average>
                <unit>Dhrystones/s</unit>
                <criterion>0.00 ~ 100.00</criterion>
                <output>909090.9</output>
                <rate>0.16</rate>
                <result>PASS</result>
            </item>
        </items>
        <test_dir>/tests/jta.Benchmark.Dhrystone</test_dir>
        <command_line>./dhrystone 10000000</command_line>
    </report>

    <?xml version="1.0" encoding="utf-8"?>
    <report>
        <name>Functional.LTP.Syscalls</name>
        <starttime>2016-11-24 04:17:40</starttime>
        <endtime>2016-11-24 04:51:36</endtime>
        <result>FAILURE</result>
        <items>
            <item>
                <name>abort01</name>
                <result>PASS</result>
            </item>
            <item>
                <name>accept01</name>
                <result>PASS</result>
            </item>
            ...OMITTED...
            <item>
                <name>futex_wait_bitset01</name>
                <result>PASS</result>
            </item>
            <item>
                <name>futex_wait_bitset02</name>
                <result>PASS</result>
            </item>
        </items>
        <test_dir>/tmp/jta.LTP/target_bin</test_dir>
        <command_line>./runltp -f syscalls -g /tmp/jta.LTP/syscalls.html</command_line>
    </report>

I hope that was clear and easy to understand. If you have any questions, let me know.

Regards,
Daniel




