[Fuego] [PATCH 1/2] ptsematest: Add a new test for the rt-tests

Bird, Timothy Tim.Bird at sony.com
Mon Jan 22 23:49:29 UTC 2018


Great addition.  See comments below.

> -----Original Message-----
> From: Hoang Van Tuyen on Monday, January 15, 2018 8:46 PM
>
> The ptsematest starts two threads and measures the latency of interprocess
> communication with POSIX mutexes.
> Currently, ptsematest does not support an option for printing a summary
> output, so we take some lines from the end of ptsematest's output.
> The number of lines taken is twice the number of CPUs on the target.
> 
> Signed-off-by: Hoang Van Tuyen <tuyen.hoangvan at toshiba-tsdv.com>
> ---
>   .../tests/Benchmark.ptsematest/chart_config.json   |  5 +++++
>   engine/tests/Benchmark.ptsematest/criteria.json    | 26
> ++++++++++++++++++++++
>   engine/tests/Benchmark.ptsematest/fuego_test.sh    | 25
> +++++++++++++++++++++
>   engine/tests/Benchmark.ptsematest/parser.py        | 23
> +++++++++++++++++++
>   engine/tests/Benchmark.ptsematest/reference.json   | 26
> ++++++++++++++++++++++
>   engine/tests/Benchmark.ptsematest/spec.json        | 14 ++++++++++++
>   6 files changed, 119 insertions(+)
>   create mode 100644
> engine/tests/Benchmark.ptsematest/chart_config.json
>   create mode 100644 engine/tests/Benchmark.ptsematest/criteria.json
>   create mode 100755 engine/tests/Benchmark.ptsematest/fuego_test.sh
>   create mode 100755 engine/tests/Benchmark.ptsematest/parser.py
>   create mode 100644 engine/tests/Benchmark.ptsematest/reference.json
>   create mode 100644 engine/tests/Benchmark.ptsematest/spec.json
> 
> diff --git a/engine/tests/Benchmark.ptsematest/chart_config.json
> b/engine/tests/Benchmark.ptsematest/chart_config.json
> new file mode 100644
> index 0000000..cdaf6a2
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/chart_config.json
> @@ -0,0 +1,5 @@
> +{
> +    "chart_type": "measure_plot",
> +    "measures": ["default.latencies.max_latency",
> +        "default.latencies.avg_latency"]
> +}
> diff --git a/engine/tests/Benchmark.ptsematest/criteria.json
> b/engine/tests/Benchmark.ptsematest/criteria.json
> new file mode 100644
> index 0000000..a023558
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/criteria.json
> @@ -0,0 +1,26 @@
> +{
> +    "schema_version":"1.0",
> +    "criteria":[
> +        {
> +            "tguid":"default.latencies.max_latency",
> +            "reference":{
> +                "value":100,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"default.latencies.min_latency",
> +            "reference":{
> +                "value":100,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"default.latencies.avg_latency",
> +            "reference":{
> +                "value":100,
> +                "operator":"le"
> +            }
> +        }
> +    ]
> +}
> diff --git a/engine/tests/Benchmark.ptsematest/fuego_test.sh
> b/engine/tests/Benchmark.ptsematest/fuego_test.sh
> new file mode 100755
> index 0000000..1260602
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/fuego_test.sh
> @@ -0,0 +1,25 @@
> +tarball=../rt-tests/rt-tests-v1.1.1.tar.gz
> +
> +NEED_ROOT=1
> +
> +function test_pre_check {
> +    assert_define BENCHMARK_PTSEMATEST_PARAMS
> +}
> +
> +function test_build {
> +    patch -p1 -N -s <
> $TEST_HOME/../rt-tests/0001-Add-scheduling-policies-for-old-kernels.patch
> +    make NUMA=0 ptsematest
> +}
> +
> +function test_deploy {
> +    put ptsematest  $BOARD_TESTDIR/fuego.$TESTDIR/
> +}
> +
> +function test_run {
> +    # ptsematest does not support an option for printing only a
> +    # summary on exit. So we take some lines from the end of the
> +    # command's output. The number of lines to take depends on the
> +    # number of CPUs on the target machine.
> +    target_cpu_number=`cmd "nproc"`

This is neat.  I've wondered if we could do direct assignment of
a shell variable from the output of 'cmd'.  That would save intermediate
temp files in a few other scripts.

However, I don't like using backticks.  I accepted your patch, but
then added another on top to convert this to:
target_cpu_number=$(cmd "nproc")
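As a minimal sketch of the $(...) form (using plain 'nproc' here,
since Fuego's 'cmd' helper that runs commands on the target isn't
available outside a test context):

```shell
# Capture a command's stdout directly into a variable with $(...).
# Unlike backticks, $(...) nests cleanly and is easier to read,
# and no intermediate temp file is needed.
target_cpu_number=$(nproc)
echo "target has $target_cpu_number cpu(s)"
```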

> +    getting_line_number=`expr $target_cpu_number +
> $target_cpu_number`

Bash can handle arithmetic inline, using $(( ... )).  I modified
this as well, after accepting the patch.  It saves having to
call out to an external command just to do some arithmetic.
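For illustration, the two forms side by side (the example value of 4
CPUs is hypothetical; on the target it would come from 'nproc'):

```shell
target_cpu_number=4   # hypothetical value; on the target: $(cmd "nproc")

# Old form: forks an external 'expr' process just to add two numbers.
getting_line_number=$(expr $target_cpu_number + $target_cpu_number)

# New form: shell-builtin arithmetic, no fork.
getting_line_number=$((target_cpu_number + target_cpu_number))
echo "$getting_line_number"
```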

> +    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./ptsematest
> $BENCHMARK_PTSEMATEST_PARAMS | tail -$getting_line_number"
> +}
> diff --git a/engine/tests/Benchmark.ptsematest/parser.py
> b/engine/tests/Benchmark.ptsematest/parser.py
> new file mode 100755
> index 0000000..edc77ff
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/parser.py
> @@ -0,0 +1,23 @@
> +#!/usr/bin/python
> +import os, re, sys
> +sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
> +import common as plib
> +
> +regex_string = ".*, Min\s+(\d+).*, Avg\s+(\d+), Max\s+(\d+)"
> +measurements = {}
> +matches = plib.parse_log(regex_string)
> +
> +if matches:
> +    min_latencies = []
> +    avg_latencies = []
> +    max_latencies = []
> +    for thread in matches:
> +        min_latencies.append(float(thread[0]))
> +        avg_latencies.append(float(thread[1]))
> +        max_latencies.append(float(thread[2]))
> +    measurements['default.latencies'] = [
> +        {"name": "max_latency", "measure" : max(max_latencies)},
> +        {"name": "min_latency", "measure" : min(min_latencies)},
> +        {"name": "avg_latency", "measure" :
> sum(avg_latencies)/len(avg_latencies)}]
> +
> +sys.exit(plib.process(measurements))
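One side note on the parser: the regex can be sanity-checked against a
sample per-thread line.  The sample below is only an assumed example of
the "Min/Cur/Avg/Max" summary format ptsematest prints; the exact
fields on a given target may differ:

```python
import re

# Hypothetical sample of a ptsematest per-thread summary line.
sample = "#1 -> #0, Min 1, Cur 2, Avg 3, Max 22"

# Same pattern as in parser.py, written as a raw string.
regex_string = r".*, Min\s+(\d+).*, Avg\s+(\d+), Max\s+(\d+)"

m = re.match(regex_string, sample)
# Groups are (min, avg, max) as strings.
print(m.groups())  # ('1', '3', '22')
```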
> diff --git a/engine/tests/Benchmark.ptsematest/reference.json
> b/engine/tests/Benchmark.ptsematest/reference.json
> new file mode 100644
> index 0000000..415a8dd
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/reference.json
> @@ -0,0 +1,26 @@
> +{
> +    "test_sets":[
> +        {
> +            "name":"default",
> +            "test_cases":[
> +                {
> +                    "name":"latencies",
> +                    "measurements":[
> +                        {
> +                            "name":"max_latency",
> +                            "unit":"us"
> +                        },
> +                        {
> +                            "name":"min_latency",
> +                            "unit":"us"
> +                        },
> +                        {
> +                            "name":"avg_latency",
> +                            "unit":"us"
> +                        }
> +                    ]
> +                }
> +            ]
> +        }
> +    ]
> +}
> diff --git a/engine/tests/Benchmark.ptsematest/spec.json
> b/engine/tests/Benchmark.ptsematest/spec.json
> new file mode 100644
> index 0000000..8fd2db9
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/spec.json
> @@ -0,0 +1,14 @@
> +{
> +    "testName": "Benchmark.ptsematest",
> +    "specs": {
> +        "default": {
> +            "PARAMS": "-a -t -p99 -i100 -d25 -l100000"
> +        },
> +        "latest": {
> +            "PER_JOB_BUILD": "true",
> +            "gitrepo":
> "https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git",
> +            "gitref": "unstable/devel/v1.1.1",
> +            "PARAMS": "-a -t -p99 -i100 -d25 -l100000"
> +        }
> +    }
> +}
> --
> 2.1.4

Looks good.  Applied and pushed.
 -- Tim


