[Fuego] [PATCH] kselftest: add support for Linux kernel selftests

Daniel Sangorrin daniel.sangorrin at toshiba.co.jp
Tue May 23 07:43:55 UTC 2017


The source code for these tests comes from the Linux kernel
sources, see folder 'tools/testing/selftests/'.

Test source from kernels older than v4.1 is not supported, but
you can still test older kernels as long as you use the test
source code from a kernel version greater than or equal to v4.1.
The reason is that v4.1 introduced the selftests 'install'
target, which makes deploying the tests from Fuego very simple.
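
For reference, this is roughly how the 'install' target is used
by hand (the install path below is only an example):

    cd tools/testing/selftests
    INSTALL_PATH=/tmp/kselftest make install
    # /tmp/kselftest now holds the test programs plus a generated
    # run_kselftest.sh runner script, which is what this Fuego
    # test deploys to the board and executes.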

You can define which tests to execute by setting the variable
'targets'. If it is not set, all available tests are executed
(except for the hotplug tests).

Some tests may partially fail. You can use the variable
'fail_count' to specify the maximum number of tests that may
fail without causing the overall Fuego test to fail.
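
For example, the 'docker' spec added by this patch sets:

    "targets": "exec kcmp timers",
    "fail_count": "3"

Inside fuego_test.sh these spec variables appear as
FUNCTIONAL_KSELFTEST_TARGETS and FUNCTIONAL_KSELFTEST_FAIL_COUNT,
following Fuego's usual <TESTNAME>_<VARIABLE> naming.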

Some tests require root permissions, and other tests have
dependencies such as perl or tput. Neither is checked at the
moment, so look at the test log for failed cases and adjust
your 'targets' and 'fail_count' variables accordingly.
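
The lines that matter in the test log are the per-test
summaries printed by run_kselftest.sh; the test names below are
just examples:

    selftests: kcmp [PASS]
    selftests: timers [FAIL]

log_compare counts the [FAIL] lines against 'fail_count', and
parser.py records a per-test 0/1 result from the same lines.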

Signed-off-by: Daniel Sangorrin <daniel.sangorrin at toshiba.co.jp>
---
 engine/overlays/testplans/testplan_docker.json  |  5 +++
 engine/tests/Functional.kselftest/fuego_test.sh | 41 +++++++++++++++++++++++++
 engine/tests/Functional.kselftest/parser.py     | 18 +++++++++++
 engine/tests/Functional.kselftest/spec.json     | 22 +++++++++++++
 4 files changed, 86 insertions(+)
 create mode 100755 engine/tests/Functional.kselftest/fuego_test.sh
 create mode 100755 engine/tests/Functional.kselftest/parser.py
 create mode 100644 engine/tests/Functional.kselftest/spec.json

diff --git a/engine/overlays/testplans/testplan_docker.json b/engine/overlays/testplans/testplan_docker.json
index 02772e6..9f43d28 100644
--- a/engine/overlays/testplans/testplan_docker.json
+++ b/engine/overlays/testplans/testplan_docker.json
@@ -51,6 +51,11 @@
             "timeout": "100m"
         },
         {
+            "testName": "Functional.kselftest",
+            "spec": "docker",
+            "timeout": "100m"
+        },
+        {
             "testName": "Functional.LTP",
             "spec": "docker",
             "timeout": "100m"
diff --git a/engine/tests/Functional.kselftest/fuego_test.sh b/engine/tests/Functional.kselftest/fuego_test.sh
new file mode 100755
index 0000000..81f7a4c
--- /dev/null
+++ b/engine/tests/Functional.kselftest/fuego_test.sh
@@ -0,0 +1,41 @@
+function test_pre_check {
+    assert_define ARCHITECTURE
+    if [ ! "$ARCHITECTURE" = "x86_64" ]; then
+        assert_define CROSS_COMPILE
+    fi
+}
+
+function test_build {
+    make ARCH=$ARCHITECTURE defconfig
+    make ARCH=$ARCHITECTURE headers_install
+    make ARCH=$ARCHITECTURE -C tools/testing/selftests
+}
+
+function test_deploy {
+    pushd tools/testing/selftests
+    mkdir fuego
+    # kselftest targets (see tools/testing/selftests/Makefile)
+    if [ ! -z ${FUNCTIONAL_KSELFTEST_TARGETS+x} ]; then
+        INSTALL_PATH=fuego make TARGETS="$FUNCTIONAL_KSELFTEST_TARGETS" install || \
+            abort_job "This test depends on tools/testing/selftests/Makefile having 'install' target support, which was introduced in kernel 4.1"
+    else
+        INSTALL_PATH=fuego make install || \
+            abort_job "This test depends on tools/testing/selftests/Makefile having 'install' target support, which was introduced in kernel 4.1"
+    fi
+    put ./fuego/* $BOARD_TESTDIR/fuego.$TESTDIR/
+    rm -rf fuego
+    popd
+}
+
+function test_run {
+    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./run_kselftest.sh"
+}
+
+function test_processing {
+    echo "Processing kselftest log"
+    if [ ! -z ${FUNCTIONAL_KSELFTEST_FAIL_COUNT+x} ]; then
+        log_compare "$TESTDIR" "$FUNCTIONAL_KSELFTEST_FAIL_COUNT" "^selftests: .* \[FAIL\]" "n"
+    else
+        log_compare "$TESTDIR" "0" "^selftests: .* \[FAIL\]" "n"
+    fi
+}
diff --git a/engine/tests/Functional.kselftest/parser.py b/engine/tests/Functional.kselftest/parser.py
new file mode 100755
index 0000000..3380635
--- /dev/null
+++ b/engine/tests/Functional.kselftest/parser.py
@@ -0,0 +1,18 @@
+#!/usr/bin/env python
+
+import os, re, sys
+
+sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
+import common as plib
+
+test_results = {}
+with open(plib.TEST_LOG,'r') as f:
+    for line in f:
+        m = re.match(r"^selftests: (.*) \[(.*)\]$", line)
+        if m:
+            test_results[m.group(1)] = 0 if m.group(2) == 'PASS' else 1
+
+plib.process_data(test_results=test_results, plot_type='l', label='PASS=0 FAIL=1')
+
+# The parser always returns 0 (success) because the global result is calculated with log_compare
+sys.exit(0)
diff --git a/engine/tests/Functional.kselftest/spec.json b/engine/tests/Functional.kselftest/spec.json
new file mode 100644
index 0000000..f6026f1
--- /dev/null
+++ b/engine/tests/Functional.kselftest/spec.json
@@ -0,0 +1,22 @@
+{
+    "testName": "Functional.kselftest",
+    "specs": {
+        "default": {
+            "gitrepo": "https://github.com/torvalds/linux.git"
+        },
+        "docker": {
+            "gitrepo": "https://github.com/torvalds/linux.git",
+            "targets": "exec kcmp timers",
+            "fail_count": "3"
+        },
+        "cip": {
+            "gitrepo": "https://github.com/cip-project/linux-cip.git"
+        },
+        "template": {
+            "gitrepo": "https://xxx/yyy.git",
+            "gitref": "my_tag_branch_or_commit_id",
+            "targets": "breakpoints cpu-hotplug efivarfs exec firmware ftrace futex kcmp lib membarrier memfd memory-hotplug mount mqueue net powerpc pstore ptrace seccomp size static_keys sysctl timers user vm x86 zram",
+            "fail_count": "3"
+        }
+    }
+}
-- 
2.7.4



