[Fuego] [kselftest] Separating testcases into their corresponding group

Bird, Timothy Tim.Bird at sony.com
Wed Nov 15 03:42:24 UTC 2017


> -----Original Message-----
> From: Hoang Van Tuyen on Tuesday, November 14, 2017 12:27 AM
> I would like to resend the patch following the suggestion of Daniel-san.
> 
> Please check and consider merging the patch.

It's easier to give feedback on a patch when it's included inline in the message body.
You can attach it as well, if you'd like, since an attachment is actually easier
(for me) to apply.

Just a note - I've accepted the patch, but I made some changes to it.
I hope that's OK.  See my feedback and discussion below.

>
> The current parser.py is NOT putting testcases into their corresponding group.
> So, modify the parser.py for separating testcases into groups.
> Also, add criteria.json so that the test passes on docker.

Thanks very much - this is very useful.

>
> Signed-off-by: Hoang Van Tuyen <tuyen.hoangvan at toshiba-tsdv.com>
> ---
>  engine/tests/Functional.kselftest/criteria.json | 13 +++++++++++++
>  engine/tests/Functional.kselftest/parser.py     | 22 ++++++++++++++--------
>  2 files changed, 27 insertions(+), 8 deletions(-)
>  create mode 100644 engine/tests/Functional.kselftest/criteria.json
>
> diff --git a/engine/tests/Functional.kselftest/criteria.json b/engine/tests/Functional.kselftest/criteria.json
> new file mode 100644
> index 0000000..c2feb2c
> --- /dev/null
> +++ b/engine/tests/Functional.kselftest/criteria.json
> @@ -0,0 +1,13 @@
> +{
> +    "schema_version":"1.0",
> +    "criteria":[
> +        {
> +            "tguid":"exec",
> +            "fail_ok_list": ["execveat"]
> +        },
> +        {
> +            "tguid":"timers",
> +            "fail_ok_list": ["rtctest"]
> +        }
> +    ]
> +}

This is good for docker, but may not be correct for other boards.
I accepted this as is (as the generic criteria file for Functional.kselftest).
But I actually think we should move the file to the fuego repository, as:
fuego/fuego-ro/boards/docker-Functional.kselftest-criteria.json.

Fuego will read a per-board criteria file from that location, and use it in preference
to the general criteria file.  (This one would be our very first board-specific
criteria file!)  See fuego-core/engine/scripts/parser/common.py:load_criteria()
and http://fuegotest.org/wiki/criteria.json#Customizing_the_criteria.json_file_for_a_board

Let me know if you have any objections to moving this there.  We would still
then need a generic criteria.json file for Functional.kselftest.
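For anyone following along, the effect of a fail_ok_list is roughly this (again a sketch of the concept, not Fuego's actual evaluation code): failures named in a fail_ok_list don't count against the overall verdict.

```python
# Conceptual sketch of how a fail_ok_list could be applied when judging
# parsed results against a criteria structure like the one in this patch.
def judge(results, criteria):
    # Collect "testset.testcase" ids whose failure is acceptable
    fail_ok = set()
    for c in criteria["criteria"]:
        for name in c.get("fail_ok_list", []):
            fail_ok.add(c["tguid"] + "." + name)
    # The run passes if every FAIL is in the fail_ok set
    unexpected = [t for t, r in results.items()
                  if r == "FAIL" and t not in fail_ok]
    return len(unexpected) == 0
```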

> diff --git a/engine/tests/Functional.kselftest/parser.py b/engine/tests/Functional.kselftest/parser.py
> index fbec735..5e9a107 100755
> --- a/engine/tests/Functional.kselftest/parser.py
> +++ b/engine/tests/Functional.kselftest/parser.py
> @@ -5,13 +5,19 @@ import os, re, sys
>  sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
>  import common as plib
>
> -matches = plib.parse_log("^selftests: (.*) \[(.*)\]$")
>  results = {}
> -for m in matches:
> -    testcase = m[0]
> -    results[testcase] = "PASS" if m[1] == 'PASS' else "FAIL"
> +testset_regex = re.compile("^Running tests in")
> +testcase_regex = re.compile("^selftests:")
Using regexes here is overkill, given that there are no
actual wildcards or regular expressions in the patterns.

> +with open(plib.TEST_LOG,'r') as f:
'r' mode is not needed - it's the default.

> +    for line in f:
> +        fields = line.split()
> +        m = testset_regex.match(line)
> +        if m is not None:
> +            test_set = fields[3]
> +            continue
> +        m = testcase_regex.match(line)
> +        if m is not None:
> +            test_case = fields[1]
> +            results[test_set +'.' + test_case] = "PASS" if fields[2] == '[PASS]' else "FAIL"

I replaced this with the following, which uses no regexes and is, I think, simpler:
+with open(plib.TEST_LOG) as f:
+    for line in f:
+        if line.startswith("Running tests in"):
+            test_set = line.split()[3]
+            continue
+        if line.startswith("selftests:"):
+            fields = line.split()
+            test_case = fields[1]
+            results[test_set+'.'+test_case] = "PASS" if fields[2] == '[PASS]' else "FAIL"
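To show what the version above produces, here it is run against a couple of made-up log lines (the log format is my approximation of kselftest output, not a captured log):

```python
# Illustration of the startswith-based parsing, against sample input.
sample_log = """\
Running tests in exec ========
selftests: execveat [PASS]
Running tests in timers ========
selftests: rtctest [FAIL]
"""

results = {}
for line in sample_log.splitlines():
    if line.startswith("Running tests in"):
        # fourth whitespace-separated field is the test set name
        test_set = line.split()[3]
        continue
    if line.startswith("selftests:"):
        fields = line.split()
        test_case = fields[1]
        results[test_set + '.' + test_case] = "PASS" if fields[2] == '[PASS]' else "FAIL"

# results is now {'exec.execveat': 'PASS', 'timers.rtctest': 'FAIL'}
```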

>
> -plib.process(results)
> -
> -# The parser always returns 0 (success) because the global result is calculated with log_compare
> -sys.exit(0)
> +sys.exit(plib.process(results))
> --
> 2.1.4

I pushed a modified commit as baf518c on my master branch.  Can you check it out
and give it a test?

We may want to change the chart_config.json for this, since it has lots of test_sets.
Let me know what you think.
 -- Tim


