[Fuego] [PATCH 1/2] ptsematest: Add a new test for the rt-tests

Bird, Timothy Tim.Bird at sony.com
Wed Jan 17 06:04:21 UTC 2018


Hoang,

I just wanted to acknowledge receipt of these patches.  I haven't had time to review
or test them, but hope to in the next day or two.  I've been busy with some other
work activities, and some of my lab machines are getting updates related
to Meltdown and Spectre, so that's causing some delays in my Fuego patch pipeline.

Thanks.
  -- Tim


> -----Original Message-----
> From: fuego-bounces at lists.linuxfoundation.org [mailto:fuego-
> bounces at lists.linuxfoundation.org] On Behalf Of Hoang Van Tuyen
> Sent: Monday, January 15, 2018 8:46 PM
> To: fuego at lists.linuxfoundation.org
> Subject: [Fuego] [PATCH 1/2] ptsematest: Add a new test for the rt-tests
> 
> ptsematest starts two threads and measures the latency of interprocess
> communication with POSIX mutexes.
> Currently, ptsematest does not support an option for printing a summary
> of its output.
> Instead, we capture some lines at the end of ptsematest's output.
> The number of lines captured is twice the target's CPU count.
> 
> Signed-off-by: Hoang Van Tuyen <tuyen.hoangvan at toshiba-tsdv.com>
> ---
>   .../tests/Benchmark.ptsematest/chart_config.json   |  5 +++++
>   engine/tests/Benchmark.ptsematest/criteria.json    | 26 ++++++++++++++++++++++
>   engine/tests/Benchmark.ptsematest/fuego_test.sh    | 25 +++++++++++++++++++++
>   engine/tests/Benchmark.ptsematest/parser.py        | 23 +++++++++++++++++++
>   engine/tests/Benchmark.ptsematest/reference.json   | 26 ++++++++++++++++++++++
>   engine/tests/Benchmark.ptsematest/spec.json        | 14 ++++++++++++
>   6 files changed, 119 insertions(+)
>   create mode 100644 engine/tests/Benchmark.ptsematest/chart_config.json
>   create mode 100644 engine/tests/Benchmark.ptsematest/criteria.json
>   create mode 100755 engine/tests/Benchmark.ptsematest/fuego_test.sh
>   create mode 100755 engine/tests/Benchmark.ptsematest/parser.py
>   create mode 100644 engine/tests/Benchmark.ptsematest/reference.json
>   create mode 100644 engine/tests/Benchmark.ptsematest/spec.json
> 
> diff --git a/engine/tests/Benchmark.ptsematest/chart_config.json b/engine/tests/Benchmark.ptsematest/chart_config.json
> new file mode 100644
> index 0000000..cdaf6a2
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/chart_config.json
> @@ -0,0 +1,5 @@
> +{
> +    "chart_type": "measure_plot",
> +    "measures": ["default.latencies.max_latency",
> +        "default.latencies.avg_latency"]
> +}
> diff --git a/engine/tests/Benchmark.ptsematest/criteria.json b/engine/tests/Benchmark.ptsematest/criteria.json
> new file mode 100644
> index 0000000..a023558
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/criteria.json
> @@ -0,0 +1,26 @@
> +{
> +    "schema_version":"1.0",
> +    "criteria":[
> +        {
> +            "tguid":"default.latencies.max_latency",
> +            "reference":{
> +                "value":100,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"default.latencies.min_latency",
> +            "reference":{
> +                "value":100,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"default.latencies.avg_latency",
> +            "reference":{
> +                "value":100,
> +                "operator":"le"
> +            }
> +        }
> +    ]
> +}
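
For illustration only (not part of the patch): a minimal Python sketch of the
pass/fail comparison these criteria describe. The 100 us threshold and measure
names come from the criteria.json above; the measured values are hypothetical,
and the real evaluation is done by Fuego's criteria processing, not this code.

    # Each latency measure passes when it is "le" (less than or equal to) 100 us.
    measured = {"max_latency": 57.0, "min_latency": 2.0, "avg_latency": 3.0}  # hypothetical
    threshold_us = 100

    results = {name: value <= threshold_us for name, value in measured.items()}
    print(results)  # e.g. {'max_latency': True, 'min_latency': True, 'avg_latency': True}
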
> diff --git a/engine/tests/Benchmark.ptsematest/fuego_test.sh b/engine/tests/Benchmark.ptsematest/fuego_test.sh
> new file mode 100755
> index 0000000..1260602
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/fuego_test.sh
> @@ -0,0 +1,25 @@
> +tarball=../rt-tests/rt-tests-v1.1.1.tar.gz
> +
> +NEED_ROOT=1
> +
> +function test_pre_check {
> +    assert_define BENCHMARK_PTSEMATEST_PARAMS
> +}
> +
> +function test_build {
> +    patch -p1 -N -s < $TEST_HOME/../rt-tests/0001-Add-scheduling-policies-for-old-kernels.patch
> +    make NUMA=0 ptsematest
> +}
> +
> +function test_deploy {
> +    put ptsematest  $BOARD_TESTDIR/fuego.$TESTDIR/
> +}
> +
> +function test_run {
> +    # ptsematest does not support an option for printing only a summary on exit.
> +    # So, we capture some lines at the end of the command's output.
> +    # The number of lines to capture depends on the number of CPUs on the target machine.
> +    target_cpu_number=`cmd "nproc"`
> +    getting_line_number=`expr $target_cpu_number + $target_cpu_number`
> +    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./ptsematest $BENCHMARK_PTSEMATEST_PARAMS | tail -$getting_line_number"
> +}
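
For illustration only (not part of the patch): test_run above keeps the last
2 * nproc lines of ptsematest's output. A minimal Python sketch of the same
idea follows; the sample lines are hypothetical and only approximate the shape
of ptsematest output (one status line plus one latency line per thread pair,
one pair per CPU).

    import multiprocessing

    # Hypothetical captured output; real output comes from running ptsematest
    # on the target. Only the general shape matters here.
    output_lines = [
        "#0: ID1234, P99, CPU0, I100; #1: ID1235, P99, CPU0, Cycles 100000",
        "#1 -> #0, Min    2, Cur    3, Avg    3, Max   57",
    ]

    # Two summary lines are expected per CPU, hence 2 * CPU count, matching
    # `expr $target_cpu_number + $target_cpu_number` in test_run.
    lines_to_keep = 2 * multiprocessing.cpu_count()
    summary = output_lines[-lines_to_keep:]
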
> diff --git a/engine/tests/Benchmark.ptsematest/parser.py b/engine/tests/Benchmark.ptsematest/parser.py
> new file mode 100755
> index 0000000..edc77ff
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/parser.py
> @@ -0,0 +1,23 @@
> +#!/usr/bin/python
> +import os, re, sys
> +sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
> +import common as plib
> +
> +regex_string = ".*, Min\s+(\d+).*, Avg\s+(\d+), Max\s+(\d+)"
> +measurements = {}
> +matches = plib.parse_log(regex_string)
> +
> +if matches:
> +    min_latencies = []
> +    avg_latencies = []
> +    max_latencies = []
> +    for thread in matches:
> +        min_latencies.append(float(thread[0]))
> +        avg_latencies.append(float(thread[1]))
> +        max_latencies.append(float(thread[2]))
> +    measurements['default.latencies'] = [
> +        {"name": "max_latency", "measure" : max(max_latencies)},
> +        {"name": "min_latency", "measure" : min(min_latencies)},
> +        {"name": "avg_latency", "measure" :
> sum(avg_latencies)/len(avg_latencies)}]
> +
> +sys.exit(plib.process(measurements))
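
For illustration only (not part of the patch): how the regex in parser.py
decomposes one latency line. The sample line is hypothetical; the parser only
relies on the ", Min ...", ", Avg ...", ", Max ..." fields the pattern captures.

    import re

    # Same pattern as parser.py, written as a raw string.
    regex_string = r".*, Min\s+(\d+).*, Avg\s+(\d+), Max\s+(\d+)"
    sample = "#1 -> #0, Min    2, Cur    3, Avg    3, Max   57"  # hypothetical line

    match = re.match(regex_string, sample)
    if match:
        min_us, avg_us, max_us = (float(g) for g in match.groups())
        print(min_us, avg_us, max_us)  # 2.0 3.0 57.0
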
> diff --git a/engine/tests/Benchmark.ptsematest/reference.json b/engine/tests/Benchmark.ptsematest/reference.json
> new file mode 100644
> index 0000000..415a8dd
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/reference.json
> @@ -0,0 +1,26 @@
> +{
> +    "test_sets":[
> +        {
> +            "name":"default",
> +            "test_cases":[
> +                {
> +                    "name":"latencies",
> +                    "measurements":[
> +                        {
> +                            "name":"max_latency",
> +                            "unit":"us"
> +                        },
> +                        {
> +                            "name":"min_latency",
> +                            "unit":"us"
> +                        },
> +                        {
> +                            "name":"avg_latency",
> +                            "unit":"us"
> +                        }
> +                    ]
> +                }
> +            ]
> +        }
> +    ]
> +}
> diff --git a/engine/tests/Benchmark.ptsematest/spec.json b/engine/tests/Benchmark.ptsematest/spec.json
> new file mode 100644
> index 0000000..8fd2db9
> --- /dev/null
> +++ b/engine/tests/Benchmark.ptsematest/spec.json
> @@ -0,0 +1,14 @@
> +{
> +    "testName": "Benchmark.ptsematest",
> +    "specs": {
> +        "default": {
> +            "PARAMS": "-a -t -p99 -i100 -d25 -l100000"
> +        },
> +        "latest": {
> +            "PER_JOB_BUILD": "true",
> +            "gitrepo":
> "https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git",
> +            "gitref": "unstable/devel/v1.1.1",
> +            "PARAMS": "-a -t -p99 -i100 -d25 -l100000"
> +        }
> +    }
> +}
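
A quick duration estimate for the spec values above, assuming the usual
rt-tests meanings of -i (wakeup interval in microseconds) and -l (number of
loops); this is only a rough sanity check, not part of the patch.

    # Rough run-time estimate for "-i100 -l100000".
    interval_us = 100
    loops = 100000
    print(loops * interval_us / 1e6, "seconds")  # ~10 seconds of measurement
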
> --
> 2.1.4
> 
> 
> --
> ================================================================
> Hoang Van Tuyen (Mr.)
> TOSHIBA SOFTWARE DEVELOPMENT (VIETNAM) CO., LTD.
> 16th Floor, VIT Building, 519 Kim Ma Str., Ba Dinh Dist., Hanoi, Vietnam
> Tel: 84-4-22208801 (Company) - Ext.251
> Fax: 84-4-22208802 (Company)
> Email: tuyen.hoangvan at toshiba-tsdv.com
> ================================================================