[Fuego] [PATCH] tests: add support for Linaro test-definitions

daniel.sangorrin at toshiba.co.jp daniel.sangorrin at toshiba.co.jp
Wed Feb 13 07:06:58 UTC 2019


Hi Tim,

I think you misunderstood Linaro test-definitions. See below.

> -----Original Message-----
> From: Tim.Bird at sony.com <Tim.Bird at sony.com>
> Sent: Wednesday, February 13, 2019 3:40 PM
> To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin at toshiba.co.jp>;
> fuego at lists.linuxfoundation.org
> Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitions
> 
> I don't know enough about LAVA test mechanics to judge parts of this,
> but overall this looks good.  It looks much simpler than I thought it would.

LAVA has nothing to do with this. These tests run on Fuego like any other test; you don't need to set up LAVA.
Linaro test-definitions _can_ be used in LAVA, but it is not necessary.

> > -----Original Message-----
> > From: Daniel Sangorrin
> >
> > This adds initial support for reusing Linaro test-definitions.
> > It is still a proof of concept and only tested with
> > smoke tests. I have written a few FIXTHIS to indicate what
> > is left.
> >
> > To try it follow these steps:
> >
> > - prepare SSH_KEY for your board
> >     Eg: Inside fuego's docker container do
> >     > su jenkins
> >     > cp path/to/bbb_id_rsa ~/.ssh/
> >     > vi ~/.ssh/config
> >     >  Host 192.167.1.99 <- replace with your board's IP address ($IPADDR)
> >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > - ftc add-job -b bbb -t Functional.linaro
> > - execute the job from jenkins
> > - expected results
> > 	- table with each test case and the results (PASS/FAIL/SKIP)
> > 	- run.json
> > 	- csv
> >
> > Signed-off-by: Daniel Sangorrin <daniel.sangorrin at toshiba.co.jp>
> > ---
> >  tests/Functional.linaro/chart_config.json |  3 ++
> >  tests/Functional.linaro/fuego_test.sh     | 59
> > +++++++++++++++++++++++++++++++
> >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> >  tests/Functional.linaro/spec.json         | 16 +++++++++
> >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> >  5 files changed, 130 insertions(+)
> >  create mode 100644 tests/Functional.linaro/chart_config.json
> >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> >  create mode 100755 tests/Functional.linaro/parser.py
> >  create mode 100644 tests/Functional.linaro/spec.json
> >  create mode 100644 tests/Functional.linaro/test.yaml
> >
> > diff --git a/tests/Functional.linaro/chart_config.json
> > b/tests/Functional.linaro/chart_config.json
> > new file mode 100644
> > index 0000000..b8c8fb6
> > --- /dev/null
> > +++ b/tests/Functional.linaro/chart_config.json
> > @@ -0,0 +1,3 @@
> > +{
> > +    "chart_type": "testcase_table"
> > +}
> > diff --git a/tests/Functional.linaro/fuego_test.sh
> > b/tests/Functional.linaro/fuego_test.sh
> > new file mode 100755
> > index 0000000..17b56a9
> > --- /dev/null
> > +++ b/tests/Functional.linaro/fuego_test.sh
> > @@ -0,0 +1,59 @@
> > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > +
> > +# Root permissions required for
> > +# - installing dependencies on the target (debian/centos) when -s is not
> > specified
> > +# - executing some of the tests
> > +# FIXTHIS: don't force root permissions for tests that do not require them
> > +NEED_ROOT=1
> > +
> > +function test_pre_check {
> > +    # linaro parser dependencies
> > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > +    assert_has_program sed
> > +    assert_has_program awk
> > +    assert_has_program grep
> > +    assert_has_program egrep
> > +    assert_has_program tee
> > +
> > +    # test-runner requires a password-less connection
> > +    # Eg: Inside fuego's docker container do
> > +    # su jenkins
> > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > +    # vi ~/.ssh/config
> > +    #  Host 192.167.1.99 <- replace with your board's IP address ($IPADDR)
> > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > +    assert_define SSH_KEY "Please set up SSH_KEY in your board file (fuego-
> > ro/boards/$NODE_NAME.board)"
> > +}
> > +
> > +function test_build {
> > +    source ./automated/bin/setenv.sh
> > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > +}
> > +
> > +function test_run {
> > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > +
> > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > "automated/linux/smoke/smoke.yaml"}
> 
> Just to be clear, the spec would define a "yaml" variable
> with the full path to the Linaro yaml file?  

OK, so smoke.yaml is a test set with several test cases inside.
Like in LTP, we can run a whole test set or individual test cases.
If you want to run a different test set, you specify it with the yaml variable.
You can choose from automated/linux/*/*.yaml:
https://github.com/Linaro/test-definitions/tree/master/automated/linux/
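
For example, to run cyclictest instead of the smoke test set, you could add a spec entry along these lines (a sketch following the spec.json format in this patch; the spec name "cyclictest" is just an illustration):

    "cyclictest": {
        "yaml": "automated/linux/cyclictest/cyclictest.yaml"
    }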

> Is more than one
> supported, or is this targeted at executing exactly one Linaro test?
You can test more than one by supplying a Linaro test plan yaml file instead of a test set yaml file.
https://github.com/Linaro/test-definitions/blob/master/plans/linux-test-plan-example.yaml
From Fuego, it will look like a single test with multiple test sets, and multiple test cases per set.
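
Roughly, a test plan is a yaml file that lists several test set yaml files. From memory (check the example plan above for the exact schema), it looks something like:

    metadata:
        format: "Linaro Test Plan"
        name: my-test-plan
    tests:
        automated:
            - path: ./automated/linux/smoke/smoke.yaml
            - path: ./automated/linux/cyclictest/cyclictest.yaml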

> That is, would you create a spec per Linaro test, and add a bunch
> of Fuego jobs for the different Linaro jobs you wanted to run?
You can do it that way.
You can also use a Linaro test plan.
And you can always use dynamic-variables to specify the yaml and the parameters for the test.

> > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > +            abort_job "$yaml_file not found"
> > +    fi
> > +
> > +    if startswith "$yaml_file" "plans"; then
> > +            echo "using test plan: $yaml_file"
> > +            test_or_plan_flag="-p"
> > +    else
> > +            echo "using test definition: $yaml_file"
> > +            test_or_plan_flag="-d"
> > +    fi
> > +
> > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > +    else
> > +        PARAMS=""
> > +    fi
> > +
> > +    # FIXTHIS: don't use -s for targets with debian/centos
> > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > $PARAMS -g $LOGIN@$IPADDR -s -e
> 
> I'm going to have to read up on test-runner, and what these different options are.
https://github.com/Linaro/test-definitions/blob/master/automated/utils/test-runner.py
Summarized:
usage: test-runner [-h] [-o OUTPUTFOLDER] [-p TEST_PLAN] [-d TEST_DEF]
                   [-r PARAM1=VALUE [PARAM2=VALUE ...]] [-k {automated,manual}]
                   [-t TIMEOUT] [-g root@sshhostip] [-s] [-e] [-l]
                   [-O TEST_PLAN_OVERLAY] [-v]
  -s, --skip_install      skip install section defined in test definition.
  -e, --skip_environment  skip environmental data collection (board name, distro, etc.)
  -l, --lava_run          send test result to LAVA with lava-test-case.
  -v, --verbose           set log level to DEBUG.
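
For reference, with the default spec, fuego_test.sh ends up invoking test-runner roughly like this (illustrative values; 192.168.1.99 stands in for your board's $IPADDR):

    test-runner -o /path/to/logdir -d ./automated/linux/smoke/smoke.yaml -g root@192.168.1.99 -s -e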
> 
> > +}
> > +
> > +# FIXTHIS: the log directory is populated with a copy of the whole
> > repository, clean unnecessary files
> > diff --git a/tests/Functional.linaro/parser.py
> > b/tests/Functional.linaro/parser.py
> > new file mode 100755
> > index 0000000..48b502b
> > --- /dev/null
> > +++ b/tests/Functional.linaro/parser.py
> > @@ -0,0 +1,25 @@
> > +#!/usr/bin/python
> > +
> > +import os, sys, collections
> > +import common as plib
> > +import json
> > +
> > +# allocate variable to store the results
> > +measurements = {}
> > +measurements = collections.OrderedDict()
> > +
> > +# read results from linaro result.json format
> > +with open(plib.LOGDIR + "/result.json") as f:
> > +    data = json.load(f)[0]
> 
> This can't be this easy, can it?

:D

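For reference, the result.json the parser consumes looks roughly like this (a sketch showing only the fields the parser reads; other top-level fields are omitted and the values are illustrative):

    [
        {
            "metrics": [
                {"test_case_id": "pwd", "result": "pass", "measurement": "", "units": ""},
                {"test_case_id": "uname", "result": "pass", "measurement": "", "units": ""}
            ]
        }
    ]
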
> > +for test_case in data['metrics']:
> > +    test_case_id = test_case['test_case_id']
> > +    result = test_case['result']
> > +    # FIXTHIS: add measurements when available
> > +    # measurement = test_case['measurement']
> > +    # units = test_case['units']
> > +    measurements['default.' + test_case_id] = result.upper()
> > +
> > +# FIXTHIS: think about how to get each test's log from stdout.log
> > +
> > +sys.exit(plib.process(measurements))
> > diff --git a/tests/Functional.linaro/spec.json
> > b/tests/Functional.linaro/spec.json
> > new file mode 100644
> > index 0000000..561e2ab
> > --- /dev/null
> > +++ b/tests/Functional.linaro/spec.json
> > @@ -0,0 +1,16 @@
> > +{
> > +    "testName": "Functional.linaro",
> > +    "specs": {
> > +        "default": {
> > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > +            "extra_success_links": {"csv": "result.csv"},
> > +            "extra_fail_links": {"csv": "result.csv"}
> > +        },
> > +        "smoke": {
> > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > +            "params": "TESTS='pwd'",
> > +            "extra_success_links": {"csv": "result.csv"},
> > +            "extra_fail_links": {"csv": "result.csv"}
> > +        }
> 
> I'd kind of like to add a "hello world" test, or a cyclictest test to
> Linaro's test definitions, if they're not already there, so we have
> something like a rosetta stone to compare the tests.  This is something
> I'll look into in my test definitions analysis.

Cyclictest is there.
All you need to do is:
docker# su jenkins
docker$ (set up the SSH key for your board, as I wrote in the commit message above)
docker$ ftc run-test -b bbb -t Functional.linaro --dynamic-vars "yaml=./automated/linux/cyclictest/cyclictest.yaml"

You can also execute other tests such as these; they should work:
docker$ ftc run-test -b bbb -t Functional.linaro --dynamic-vars "yaml=./automated/linux/unixbench/unixbench.yaml"
docker$ ftc run-test -b bbb -t Functional.linaro --dynamic-vars "yaml=./automated/linux/lshw/lshw.yaml"
docker$ ftc run-test -b bbb -t Functional.linaro --dynamic-vars "yaml=./automated/linux/iozone/iozone.yaml"
docker$ ftc run-test -b bbb -t Functional.linaro --dynamic-vars "yaml=./automated/linux/hackbench/hackbench.yaml"

I haven't verified all of the tests yet, though, so some might fail.
 
> I guess I need to bite the bullet and set up LAVA for my lab.
> My problem is that I only have one board that has a serial port,
> and I thought that was a requirement for using LAVA.  Apparently
> I was wrong, if this test works.

No, this does not require LAVA at all.

Now I am working on the opposite: running Fuego tests on Linaro. I am getting close to running a hello world.
Then I will see what we need to do for our tests to support LAVA properly. It seems pretty easy: we just have to translate our run.json and echo it on the serial output in a format that LAVA recognizes.
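
For example, something along these lines might be enough (a sketch only; the run.json traversal below assumes a test_sets/test_cases layout, and the exact LAVA signal syntax should be checked against LAVA's documentation):

    import json

    # walk Fuego's run.json and echo one LAVA test-case signal per test case
    with open('run.json') as f:
        run = json.load(f)
    for test_set in run.get('test_sets', []):
        for test_case in test_set.get('test_cases', []):
            name = test_case['name']
            result = test_case['status'].lower()  # PASS/FAIL -> pass/fail
            print('<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=%s RESULT=%s>' % (name, result))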

Thanks,
Daniel

> > +    }
> > +}
> > diff --git a/tests/Functional.linaro/test.yaml
> > b/tests/Functional.linaro/test.yaml
> > new file mode 100644
> > index 0000000..a2efee8
> > --- /dev/null
> > +++ b/tests/Functional.linaro/test.yaml
> > @@ -0,0 +1,27 @@
> > +fuego_package_version: 1
> > +name: Functional.linaro
> > +description: |
> > +    Linaro test-definitions
> > +license: GPL-2.0
> > +author: Milosz Wasilewski, Chase Qi
> > +maintainer: Daniel Sangorrin <daniel.sangorrin at toshiba.co.jp>
> > +version: latest git commits
> > +fuego_release: 1
> > +type: Functional
> > +tags: ['kernel', 'linaro']
> > +git_src: https://github.com/Linaro/test-definitions
> > +params:
> > +    - YAML:
> > +        description: test definition or plan.
> > +        example: "automated/linux/smoke/smoke.yaml"
> > +        optional: no
> > +    - PARAMS:
> > +        description: List of params for the test PARAM1=VALUE1
> > [PARAM2=VALUE2]
> > +        example: "TESTS='pwd'"
> > +        optional: yes
> > +data_files:
> > +    - chart_config.json
> > +    - fuego_test.sh
> > +    - parser.py
> > +    - spec.json
> > +    - test.yaml
> > --
> > 2.7.4
> >
> > _______________________________________________
> > Fuego mailing list
> > Fuego at lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/fuego

