[Fuego] [PATCH] kselftest: add support for Linux kernel selftests

Bird, Timothy Tim.Bird at sony.com
Thu Nov 9 02:11:56 UTC 2017


I'm not sure how this one fell through the cracks.  Here's the slowest reply ever.

Looks good.  Applied to master.

I had to add an additional patch to convert the parser.py to the new parser API.

I have a few comments inline below.

> -----Original Message-----
> From: Daniel Sangorrin on Tuesday, May 23, 2017 12:44 AM
> 
> The source code for these tests comes from the Linux kernel
> sources, see folder 'tools/testing/selftests/'.
> 
> Test source from kernels older than v4.1 is not supported, but
> you can still test older kernels if you use the test source
> code from a kernel version greater than or equal to v4.1.
> The reason is that v4.1 introduced features (notably the
> 'install' make target) that make the tests simple to use from Fuego.

In general, most of the kselftests are not unique to a specific
kernel version, so using 4.1 or greater seems OK.
> 
> You can define which tests to execute by using the variable
> 'targets'. If it is not set, all available tests will be
> executed (except for the hotplug test).
> 
> Some tests may partially fail. You can use the variable
> "fail_count" to specify the maximum number of tests that
> can fail without causing the overall Fuego test to fail.
> For example, fail_count=3 tolerates up to three failing tests.
> 
> Some tests require root permissions, and some other tests
> have dependencies such as perl or tput. Neither requirement
> is checked at the moment, so just look at the test log for
> failed cases and adjust your targets/fail_count variables
> accordingly.
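
It might be nice to eventually check for those dependencies in
test_pre_check.  An untested sketch, assuming assert_has_program is
appropriate here (other Fuego tests use it to check for a program on
the target):

    function test_pre_check {
        assert_define ARCHITECTURE
        if [ ! "$ARCHITECTURE" = "x86_64" ]; then
            assert_define CROSS_COMPILE
        fi
        # hypothetical: fail early when target-side dependencies are missing
        assert_has_program perl
        assert_has_program tput
    }

Not a blocker for this patch.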
> 
> Signed-off-by: Daniel Sangorrin <daniel.sangorrin at toshiba.co.jp>
> ---
>  engine/overlays/testplans/testplan_docker.json  |  5 +++
>  engine/tests/Functional.kselftest/fuego_test.sh | 41 +++++++++++++++++++++++++
>  engine/tests/Functional.kselftest/parser.py     | 18 +++++++++++
>  engine/tests/Functional.kselftest/spec.json     | 22 +++++++++++++
>  4 files changed, 86 insertions(+)
>  create mode 100755 engine/tests/Functional.kselftest/fuego_test.sh
>  create mode 100755 engine/tests/Functional.kselftest/parser.py
>  create mode 100644 engine/tests/Functional.kselftest/spec.json
> 
> diff --git a/engine/overlays/testplans/testplan_docker.json b/engine/overlays/testplans/testplan_docker.json
> index 02772e6..9f43d28 100644
> --- a/engine/overlays/testplans/testplan_docker.json
> +++ b/engine/overlays/testplans/testplan_docker.json
> @@ -51,6 +51,11 @@
>              "timeout": "100m"
>          },
>          {
> +            "testName": "Functional.kselftest",
> +            "spec": "docker",
> +            "timeout": "100m"
> +        },
> +        {
>              "testName": "Functional.LTP",
>              "spec": "docker",
>              "timeout": "100m"
> diff --git a/engine/tests/Functional.kselftest/fuego_test.sh b/engine/tests/Functional.kselftest/fuego_test.sh
> new file mode 100755
> index 0000000..81f7a4c
> --- /dev/null
> +++ b/engine/tests/Functional.kselftest/fuego_test.sh
> @@ -0,0 +1,41 @@
> +function test_pre_check {
> +    assert_define ARCHITECTURE
> +    if [ ! "$ARCHITECTURE" = "x86_64" ]; then
> +        assert_define CROSS_COMPILE
> +    fi
> +}
> +
> +function test_build {
> +    make ARCH=$ARCHITECTURE defconfig
> +    make ARCH=$ARCHITECTURE headers_install
Does this pollute the container for other tests? 
Is there a race condition here?
Where do the headers get installed?
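
(Partially answering my own question: headers_install defaults to
INSTALL_HDR_PATH=./usr, so the headers should land in usr/include under
the kernel source/build tree, not in the container's /usr/include.
If so, making that explicit would at least document the behavior:

    make ARCH=$ARCHITECTURE INSTALL_HDR_PATH=usr headers_install

but I'd want to double-check before requiring it.)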

> +    make ARCH=$ARCHITECTURE -C tools/testing/selftests
> +}
> +
> +function test_deploy {
> +    pushd tools/testing/selftests
Is pushd/popd a bashism?  Is it guaranteed to be there?
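
(To answer my own question: pushd/popd are bash builtins, not POSIX sh,
but as far as I know fuego_test.sh is always sourced under bash, so they
should be available.  A portable alternative would be a subshell, which
also makes the popd unnecessary:

    (
        cd tools/testing/selftests
        # ... build, install and deploy steps ...
    )

Not worth respinning the patch for, though.)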

> +    mkdir fuego
> +    # kselftest targets (see tools/testing/selftests/Makefile)
> +    if [ ! -z ${FUNCTIONAL_KSELFTEST_TARGETS+x} ]; then
> +        INSTALL_PATH=fuego make TARGETS="$FUNCTIONAL_KSELFTEST_TARGETS" install || \
> +            abort_job "This test depends on tools/testing/selftests/Makefile having 'install' target support, which was introduced in kernel 4.1"
> +    else
> +        INSTALL_PATH=fuego make install || \
> +            abort_job "This test depends on tools/testing/selftests/Makefile having 'install' target support, which was introduced in kernel 4.1"
> +    fi
> +    put ./fuego/* $BOARD_TESTDIR/fuego.$TESTDIR/
> +    rm -rf fuego
> +    popd
> +}
> +
> +function test_run {
> +    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./run_kselftest.sh"
> +}
> +
> +function test_processing {
> +    echo "Processing kselftest log"
> +    if [ ! -z ${FUNCTIONAL_KSELFTEST_FAIL_COUNT+x} ]; then
> +        log_compare "$TESTDIR" "$FUNCTIONAL_KSELFTEST_FAIL_COUNT" "^selftests: .* \[FAIL\]" "n"
> +    else
> +        log_compare "$TESTDIR" "0" "^selftests: .* \[FAIL\]" "n"
> +    fi
> +}
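
For reference, the lines that log_compare matches here look like this
in the run_kselftest.sh output (illustrative sample, not from a real
run):

    selftests: kcmp [PASS]
    selftests: timers [FAIL]

so fail_count is an upper bound on the number of '[FAIL]' lines
tolerated before the whole Fuego test is marked failed.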
> diff --git a/engine/tests/Functional.kselftest/parser.py b/engine/tests/Functional.kselftest/parser.py
> new file mode 100755
> index 0000000..3380635
> --- /dev/null
> +++ b/engine/tests/Functional.kselftest/parser.py
> @@ -0,0 +1,18 @@
> +#!/usr/bin/env python
> +
> +import os, re, sys
> +
> +sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
> +import common as plib
> +
> +test_results = {}
> +with open(plib.TEST_LOG,'r') as f:
> +    for line in f:
> +        m = re.match("^selftests: (.*) \[(.*)\]$", line)
> +        if m:
> +            test_results[m.group(1)] = 0 if m.group(2) == 'PASS' else 1
> +
> +plib.process_data(test_results=test_results, plot_type='l', label='PASS=0 FAIL=1')

I converted this over to the new parser API.
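
For the archives, the converted parser looks roughly like this (a
from-memory sketch of the new API using plib.parse_log and
plib.process; see the follow-up patch in master for the real thing):

    #!/usr/bin/python
    import os, sys

    sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
    import common as plib

    # one PASS/FAIL result per selftest reported in the log
    measurements = {}
    for m in plib.parse_log('^selftests: (.*) \[(.*)\]$'):
        measurements['default.' + m[0]] = 'PASS' if m[1] == 'PASS' else 'FAIL'

    sys.exit(plib.process(measurements))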

> +
> +# The parser always returns 0 (success) because the global result is calculated with log_compare
> +sys.exit(0)
> diff --git a/engine/tests/Functional.kselftest/spec.json b/engine/tests/Functional.kselftest/spec.json
> new file mode 100644
> index 0000000..f6026f1
> --- /dev/null
> +++ b/engine/tests/Functional.kselftest/spec.json
> @@ -0,0 +1,22 @@
> +{
> +    "testName": "Functional.kselftest",
> +    "specs": {
> +        "default": {
> +            "gitrepo": "https://github.com/torvalds/linux.git"
> +        },
> +        "docker": {
> +            "gitrepo": "https://github.com/torvalds/linux.git",
> +            "targets": "exec kcmp timers",
> +            "fail_count": "3"
> +        },
> +        "cip": {
> +            "gitrepo": "https://github.com/cip-project/linux-cip.git"
> +        },
It's not clear to me that the linux-cip git will have different kselftests
than mainline.  This is OK as an example, but in practice, is this needed?
If anything, it would be better for the CIP project to use mainline
kselftest, so that as additional tests are written, they will be used here.

> +        "template": {
> +            "gitrepo": "https://xxx/yyy.git",
> +            "gitref": "my_tag_branch_or_commit_id",
> +            "targets": "breakpoints cpu-hotplug efivarfs exec firmware ftrace
> futex kcmp lib membarrier memfd memory-hotplug mount mqueue net
> powerpc pstore ptrace seccomp size static_keys sysctl timers user vm x86
> zram",
> +            "fail_count": "3"
> +        }
> +    }
> +}
> --
> 2.7.4

Thanks very much for this test!
 -- Tim
