[Fuego] [PATCH 3/3] backfire: Add a new test of the rt-tests

Daniel Sangorrin daniel.sangorrin at toshiba.co.jp
Tue Jan 30 01:25:07 UTC 2018


Hi Tim

> -----Original Message-----
> From: fuego-bounces at lists.linuxfoundation.org [mailto:fuego-bounces at lists.linuxfoundation.org] On Behalf Of Bird, Timothy
> Sent: Saturday, January 27, 2018 10:59 AM
> To: Hoang Van Tuyen; fuego at lists.linuxfoundation.org
> Subject: Re: [Fuego] [PATCH 3/3] backfire: Add a new test of the rt-tests
...
> > +function test_run {
> > +    # sendme does not support an option for printing only a summary on
> > +    # exit, so we take the last three lines of the command's output as
> > +    # the summary for the report.
> > +    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; insmod ./backfire.ko; ./sendme $BENCHMARK_SENDME_PARAMS | tail -3"
> 
> I mentioned doing a pre-test check for insmod.  I'm not sure about tail.
> Hmmm.  It looks like a few other tests use tail.  But they all use
> the -n syntax.  ... Checking busybox, it does not support the
> syntax 'tail -<num_lines>', but rather requires 'tail -n <num_lines>'.
> Please use 'tail -n 3' here.
> (and don't worry about checking for tail.  I'm going to add it to
> the list of commands required on the board).
> 
> Daniel - do you agree with this, or should we not have Fuego require 'tail',
> and have individual tests check for it before using it?  It's a pretty standard command,
> and I don't know of many busybox instances that don't have it compiled in.
> 
> On the other hand, where we're using it could probably be replaced with
> host-side processing.
> 
> Let me know what you think.

I would rather call tail on the host side, just in case. But as you
mentioned, most systems do have tail compiled in, so it is not a strong
opinion.
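
If we do go the host-side route, test_run could drop tail on the board
entirely and let the log parser extract the summary on the host. A
minimal sketch (assuming the parser's regex matches only the final
summary lines, so the extra per-iteration output in the log is
harmless):

    function test_run {
        # No tail on the board: the full sendme output is fetched back
        # to the host, where parser.py pulls out Min/Avg/Max.
        report "cd $BOARD_TESTDIR/fuego.$TESTDIR; insmod ./backfire.ko; ./sendme $BENCHMARK_SENDME_PARAMS"
    }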


> 
> > +}
> > +
> > +function test_cleanup {
> > +    cmd "rmmod backfire &> /dev/null"
> > +}
> Nice.  Good cleanup.
> 
> However we should also check for rmmod in test_pre_check.
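
Something like this sketch could work for the pre-check (assuming
Fuego's assert_has_program helper, which other tests use for this kind
of check):

    function test_pre_check {
        # Fail early if the board lacks the module utilities that
        # test_run and test_cleanup rely on.
        assert_has_program insmod
        assert_has_program rmmod
    }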
> 
> > diff --git a/engine/tests/Benchmark.backfire/parser.py b/engine/tests/Benchmark.backfire/parser.py
> > new file mode 100755
> > index 0000000..e8416f1
> > --- /dev/null
> > +++ b/engine/tests/Benchmark.backfire/parser.py
> > @@ -0,0 +1,23 @@
> > +#!/usr/bin/python
> > +import os, re, sys
> > +sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
> > +import common as plib
> > +
> > +regex_string = ".* Min\s+(\d+).*, Avg\s+(\d+), Max\s+(\d+)"
> > +measurements = {}
> > +matches = plib.parse_log(regex_string)
> > +
> > +if matches:
> > +    min_intervals = []
> > +    avg_intervals = []
> > +    max_intervals = []
> > +    for thread in matches:
> > +        min_intervals.append(float(thread[0]))
> > +        avg_intervals.append(float(thread[1]))
> > +        max_intervals.append(float(thread[2]))
> > +    measurements['default.intervals'] = [
> > +        {"name": "max_interval", "measure" : max(max_intervals)},
> > +        {"name": "min_interval", "measure" : min(min_intervals)},
> > +        {"name": "avg_interval", "measure" : sum(avg_intervals)/len(avg_intervals)}]
> > +
> > +sys.exit(plib.process(measurements))
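
To sanity-check the pattern from a shell, a roughly equivalent POSIX
expression can be tried against a captured log (the sample line below
is hypothetical; the exact sendme summary format should be verified
against real output):

    # Prints the line (exit 0) when the shape matches the parser's regex.
    echo "RT: Min   42, Avg   58, Max  118" | grep -E 'Min +[0-9]+.*, Avg +[0-9]+, Max +[0-9]+'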
> > diff --git a/engine/tests/Benchmark.backfire/reference.json b/engine/tests/Benchmark.backfire/reference.json
> > new file mode 100644
> > index 0000000..d1dd0bc
> > --- /dev/null
> > +++ b/engine/tests/Benchmark.backfire/reference.json
> > @@ -0,0 +1,26 @@
> > +{
> > +    "test_sets":[
> > +        {
> > +            "name":"default",
> > +            "test_cases":[
> > +                {
> > +                    "name":"intervals",
> > +                    "measurements":[
> > +                        {
> > +                            "name":"max_interval",
> > +                            "unit":"us"
> > +                        },
> > +                        {
> > +                            "name":"min_interval",
> > +                            "unit":"us"
> > +                        },
> > +                        {
> > +                            "name":"avg_interval",
> > +                            "unit":"us"
> > +                        }
> > +                    ]
> > +                }
> > +            ]
> > +        }
> > +    ]
> > +}
> > diff --git a/engine/tests/Benchmark.backfire/spec.json b/engine/tests/Benchmark.backfire/spec.json
> > new file mode 100644
> > index 0000000..3103935
> > --- /dev/null
> > +++ b/engine/tests/Benchmark.backfire/spec.json
> > @@ -0,0 +1,14 @@
> > +{
> > +    "testName": "Benchmark.backfire",
> > +    "specs": {
> > +        "default": {
> > +            "PARAMS": "-a -p99 -l100"
> > +        },
> > +        "latest": {
> > +            "PER_JOB_BUILD": "true",
> > +            "gitrepo": "https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git",
> > +            "gitref": "unstable/devel/v1.1.1",
> > +            "PARAMS": "-a -p99 -l100"
> > +        }
> > +    }
> 
> If we only ever use one set of PARAMS, do we need to include them here?
> Are you planning on adding additional specs for this test, with other parameter
> values?
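
If additional specs are planned, keeping PARAMS here seems worthwhile;
a hypothetical longer-running spec might look like:

    "stress": {
        "PARAMS": "-a -p99 -l100000"
    }

If not, the default parameters could simply move into the test's base
script.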
> 
> > +}
> > --
> > 2.1.4
> 
> Please revise as indicated, and see my comments on the Dockerfile patch.
> We may need to discuss some things before this is accepted.
>  -- Tim

Thanks,
Daniel




