[Fuego] [PATCH v2] Functional.lava: add a new testsuite to do remote lava test

Tim.Bird at sony.com Tim.Bird at sony.com
Wed Dec 12 07:05:10 UTC 2018


See my comments inline below.

The comments that follow are from a quick review of the patch today.
I don't have time to do a more thorough review tonight, but I'll take a
closer look tomorrow and let you know if there are issues I'm concerned
about.

> -----Original Message-----
> From: Li Xiaoming
> 
> Use the AGL LAVA CI tool "releng-scripts" to generate/customize a lava
> test yaml file, then use lava-tool to submit the LAVA job and fetch the
> test results.
> 
> It's easy to add your own board/test template to this test as an
> extension.
> 
> Keep in mind, though, that this test can only exercise tools that
> already exist on the target.
> 
> If you want to test artifacts built within Fuego, other methods should
> be considered.

Just as a note here, please see http://fuegotest.org/wiki/Target_Packages

This is a new feature that builds a binary package of the materials that Fuego
would normally deploy to the board.  I think we may be able to work out
a mechanism to distribute a target package with the lava job, so that any
Fuego test can be run by LAVA.  (Well, at least those tests that can
successfully make target packages - there are some issues that might cause
problems.  See the section "Outstanding work" on the Target Packages wiki
page for details.)

This current patch is useful as-is, but the Target Packages feature might
be a way to add further functionality going forward; a rough sketch of the
idea follows.
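
For example, here is an untested sketch of what the LAVA job's test shell
might run on the target (the file server URL, package name, and entry-point
script are all placeholders, not the actual target package layout):

    # fetch and unpack a Fuego target package on the DUT, then run it
    wget http://fileserver.example.com/packages/Functional.foo-target.tar.gz
    tar -xzf Functional.foo-target.tar.gz -C /home/fuego
    cd /home/fuego/Functional.foo
    sh ./run.sh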
> 
> Signed-off-by: Li Xiaoming <lixm.fnst at cn.fujitsu.com>
> ---
>  engine/scripts/functions.sh                        |  11 +++
>  engine/scripts/main.sh                             |   8 +-
>  engine/tests/Functional.lava/fuego_test.sh         | 108 +++++++++++++++++++++
>  engine/tests/Functional.lava/parser.py             |  78 +++++++++++++++
>  engine/tests/Functional.lava/releng-scripts.tar.gz | Bin 0 -> 216399 bytes
>  engine/tests/Functional.lava/spec.json             |   7 ++
>  engine/tests/Functional.lava/test.yaml             |  19 ++++
>  7 files changed, 230 insertions(+), 1 deletion(-)
>  create mode 100755 engine/tests/Functional.lava/fuego_test.sh
>  create mode 100755 engine/tests/Functional.lava/parser.py
>  create mode 100644 engine/tests/Functional.lava/releng-scripts.tar.gz
>  create mode 100644 engine/tests/Functional.lava/spec.json
>  create mode 100644 engine/tests/Functional.lava/test.yaml
> 
> diff --git a/engine/scripts/functions.sh b/engine/scripts/functions.sh
> index b0cdb0e..8dac0a0 100755
> --- a/engine/scripts/functions.sh
> +++ b/engine/scripts/functions.sh
> @@ -575,6 +575,17 @@ function pre_test {
>      ov_rootfs_drop_caches
>  }
> 
> +function pre_check {
> +    # Make sure the target is alive, and prepare workspace for the test
> +    source $FUEGO_RO/toolchains/tools.sh
> +

You may need a call to make_pre_run_file here, if function "pre_test"
is not called.

If I understand correctly, this might end up calling test_pre_check twice
in the normal case (when a test other than Functional.lava is running).
That's because, in the normal case, both the pre_check and pre_test phases
are specified: main.sh calls the new pre_check function, which calls
test_pre_check, and then main.sh calls pre_test (in functions.sh), which
calls test_pre_check as well.

I don't have time to test this, but this should be avoided.  Let me know if 
I've got this wrong.

It might be good to completely refactor the relationship between pre_test
and pre_check (and possibly build as well, if we support pushing target
packages to a LAVA-managed board).  However, that can wait for a future
release (post-v1.4 of Fuego).  An untested sketch of a short-term guard
is below.
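
As a strawman, something like this might avoid the double call (the
FUEGO_PRE_CHECK_DONE variable is made up, and this is untested):

    # in functions.sh - untested sketch
    function pre_check {
        # run the pre-check work only once per test invocation
        if [ -n "$FUEGO_PRE_CHECK_DONE" ] ; then
            return
        fi
        FUEGO_PRE_CHECK_DONE=1

        source $FUEGO_RO/toolchains/tools.sh
        make_pre_run_file    # per my note above; needed when pre_test will not run

        # support backwards-compatibility with old *-tools.sh files
        if [ -z "$TOOLCHAIN" ] ; then
            TOOLCHAIN="$PLATFORM"
        fi
        call_if_present test_pre_check
    }

pre_test could then call pre_check instead of calling test_pre_check
directly, and the guard would make the second invocation a no-op.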

> +    # support backwards-compatibility with old *-tools.sh files
> +    if [ -z "$TOOLCHAIN" ] ; then
> +        TOOLCHAIN="$PLATFORM"
> +    fi
> +    call_if_present test_pre_check
> +}
> +
>  function fetch_results {
>      local fuego_test_tmp=${FUEGO_TARGET_TMP:-/tmp}/fuego.$TESTDIR
> 
> diff --git a/engine/scripts/main.sh b/engine/scripts/main.sh
> index baa6115..0476c7c 100644
> --- a/engine/scripts/main.sh
> +++ b/engine/scripts/main.sh
> @@ -61,12 +61,18 @@ else
>      true
>  fi
> 
> +if [[ "$FUEGO_TEST_PHASES" == *pre_check* ]] ; then
> +    set_phase pre_check
> +    pre_check
> +else
> +    true
> +fi
> +

>  if [[ "$FUEGO_TEST_PHASES" == *build* ]] ; then
>      set_phase build
>      build
>  fi
> 
> -
>  if [[ "$FUEGO_TEST_PHASES" == *deploy* ]] ; then
>      set_phase deploy
>      deploy
> diff --git a/engine/tests/Functional.lava/fuego_test.sh b/engine/tests/Functional.lava/fuego_test.sh
> new file mode 100755
> index 0000000..903dfb0
> --- /dev/null
> +++ b/engine/tests/Functional.lava/fuego_test.sh
> @@ -0,0 +1,108 @@
> +tarball=releng-scripts.tar.gz
> +
> +# phase "pre_test" should be skipped for a lava test, because the target
> +# board might not be connected to Fuego, and might be powered off.
> +FUEGO_TEST_PHASES="pre_check build deploy run processing"
> +FUNCTIONAL_LAVA_PER_JOB_BUILD=true
> +source /fuego-ro/boards/$NODE_NAME.lava
> +LAVA_HOST_URL=$LAVA_HOST_PROTOCOL://${LAVA_USER}@${LAVA_HOST}
> +
> +#TODO: test_pre_check should be called outside pre_test?
> +function test_pre_check {
> +    # phase "pre_test" is skipped, so we need to set the following env variables manually.
> +    # TODO: source toolchain.sh outside phase "pre_test" and get the FWVER from LAVA outputs

In general, the rest of the Fuego code uses FIXTHIS for items that need
additional work.  We even have some wiki code that scans the source for
FIXTHIS lines and produces a report (see
http://fuegotest.org/wiki/CodeFixthisList, but be prepared for a long
wait while the code is scanned).

So, please use "FIXTHIS" instead of "TODO" throughout.
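
For a quick local approximation of that report, something like this works
from the top of the fuego-core tree:

    grep -rn "FIXTHIS" engine/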

> +    export PLATFORM="unknow"
should be "unknown" (needs a trailing 'n').
I would prefer that you use "TOOLCHAIN" here.

> +    export FWVER="unknow"
should be "unknown" (needs a trailing 'n').
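In other words, something like:

    export TOOLCHAIN="unknown"
    export FWVER="unknown"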
> +
> +    assert_define LAVA_USER
> +    assert_define LAVA_HOST
> +
> +    which lava-tool > /dev/null || echo "Warning: lava-tool missing in your test framework."
> +
> +    #echo "default keyring config"
> +    if [ ! -d ~/.local/share/python_keyring/ ] ; then
> +        rm -rf ~/.local/share/python_keyring/ || true
> +        mkdir -p ~/.local/share/python_keyring/
> +    fi
> +
> +    cat <<EOF >  ~/.local/share/python_keyring/keyringrc.cfg
> +[backend]
> +default-keyring=keyring.backends.file.PlaintextKeyring
> +EOF
> +
> +    cat <<EOF > ./token
> +$LAVA_TOKEN
> +EOF
> +    echo "LAVA_HOST_URL:"  $LAVA_HOST_URL
> +    lava-tool auth-add --token-file ./token $LAVA_HOST_URL
> +    rm ./token
> +}
> +
> +function test_build {
> +    [ -n "$FUNCTIONAL_LAVA_FILE_SERVER" ] || FUNCTIONAL_LAVA_FILE_SERVER=$DEFAULT_FILE_SERVER
> +    [ -n "$FUNCTIONAL_LAVA_BOOT_TYPE" ] || FUNCTIONAL_LAVA_BOOT_TYPE=$DEFAULT_LAVA_BOOT_TYPE
> +
> +    JOB_OPTS="--test health-test"
> +    [ -n $LAVA_MACHINE_TYPE ] && JOB_OPTS="${JOB_OPTS} --machine $LAVA_MACHINE_TYPE" \
> +                              || abort_job "Please define LAVA_MACHINE_TYPE in your lava board file."
> +    [ -n "$FUNCTIONAL_LAVA_BOOT_TYPE" ] && JOB_OPTS="${JOB_OPTS} --boot ${FUNCTIONAL_LAVA_BOOT_TYPE}"
> +    [ -n "$FUNCTIONAL_LAVA_FILE_SERVER" ] && JOB_OPTS="${JOB_OPTS} --url $FUNCTIONAL_LAVA_FILE_SERVER"
> +
> +    #TODO: add more options, e.g. to use --kernel-img to specify the name of the kernel to boot
> +    echo "Job creation: ./utils/create-jobs.py ${JOB_OPTS}"
> +    ./utils/create-jobs.py ${JOB_OPTS} > test.yaml
> +
> +    if [ -n "$FUNCTIONAL_LAVA_TESTSUITE_REPO" -a -n "$FUNCTIONAL_LAVA_TESTSUITE_PATH" ]; then
> +        echo "-test:" >> test.yaml
> +        echo "    definitions:" >> test.yaml
> +        echo "    - repository: $FUNCTIONAL_LAVA_TESTSUITE_REPO" >>
> test.yaml
> +        echo "      from: git" >> test.yaml
> +        echo "      path: $FUNCTIONAL_LAVA_TESTSUITE_PATH" >> test.yaml
> +        echo "      name: ${FUNCTIONAL_LAVA_TESTSUITE_NAME:-default-
> tests}" >> test.yaml
> +    fi
> +
> +    # TODO: add support to validate the definition of test.yaml.
> +}
> +
> +function test_deploy {
> +    echo "test_deploy finished successfully."
> +}
> +
> +function test_run {
> +    LAVA_JOB_ID=0
> +    LAVA_JOB_STATUS="Submitted"
> +
> +    #  It would be more flexible not to use "lava-tool block" here.
> +    echo "lava-tool: submit-job test.yaml to $LAVA_HOST with user($LAVA_USER)..."
> +    lava-tool submit-job $LAVA_HOST_URL test.yaml | tee tmp_submit.txt
> +    LAVA_JOB_ID=`cat tmp_submit.txt | grep -i "job id" | awk '{print $5}' | tr -d '\r'`
> +
> +    while [[ "$LAVA_JOB_STATUS" =~ "Submitted" || "$LAVA_JOB_STATUS" =~ "Running" ]]; do
> +        sleep 5
> +        lava-tool job-status $LAVA_HOST_URL $LAVA_JOB_ID > tmp_status.txt
> +        LAVA_JOB_STATUS=`cat tmp_status.txt | grep "Job Status" | awk '{print $3}' | tr -d '\r'`
> +
> +        [[ "$LAVA_JOB_STATUS" =~ "Submitted" ]] && continue
> +
> +        lava-tool job-output $LAVA_HOST_URL $LAVA_JOB_ID --overwrite > tmp_output.txt
> +        if [ -f $LAVA_JOB_ID\_output.txt ]; then
> +            if [ -f $LAVA_JOB_ID\_output.txt.ori ]; then
> +                diff $LAVA_JOB_ID\_output.txt $LAVA_JOB_ID\_output.txt.ori \
> +                     | grep -v "\"results\"" | awk -F '"' '{print "LAVA>> "$12}'
> +            else
> +                cat $LAVA_JOB_ID\_output.txt | grep -v "\"results\"" | awk -F '"' '{print "LAVA>> "$12}'
> +            fi
> +            mv $LAVA_JOB_ID\_output.txt $LAVA_JOB_ID\_output.txt.ori
> +        fi
> +    done
> +
> +    if [ ! -f ${LOGDIR}/testlog.txt ]; then
> +        echo "generate ${LOGDIR}/testlog.txt..."
> +        [ -f $LAVA_JOB_ID\_output.txt.ori  ] && mv $LAVA_JOB_ID\_output.txt.ori ${LOGDIR}/testlog.txt
> +    fi
> +}
> +
> +function test_processing {
> +    # TODO: get more useful outputs from LAVA side
> +    log_compare "$TESTDIR" "0" "OK" "p"
> +}
> diff --git a/engine/tests/Functional.lava/parser.py b/engine/tests/Functional.lava/parser.py
> new file mode 100755
> index 0000000..52eaaf3
> --- /dev/null
> +++ b/engine/tests/Functional.lava/parser.py
> @@ -0,0 +1,78 @@
> +#!/usr/bin/python
> +# -*- coding: UTF-8 -*-
> +import os, os.path, re, sys, collections
> +sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
> +import common as plib
> +
> +SAVEDIR=os.getcwd()
> +LOGDIR=os.environ["LOGDIR"]
> +test_results = {}
> +test_results = collections.OrderedDict()
> +regex_string = '.*"case": "([^ ]*)".*"definition": "([^ ]*)".*"result": "([^ ]*)".*'
> +
> +def make_dirs(dir_path):
> +    if os.path.exists(dir_path):
> +        return
> +
> +    try:
> +        os.makedirs(dir_path)
> +    except OSError:
> +        pass
> +
> +## Check testlog.txt
> +try:
> +    f = open("%s/testlog.txt" % LOGDIR)
> +except IOError:
> +    print '"testlog.txt" cannot be opened.'
> +
> +lines = f.readlines()
> +f.close()
> +result_dir = '%s/result/default/outputs' % (LOGDIR)
> +make_dirs(result_dir)
> +in_loop = 0
> +
> +regc = re.compile(regex_string)
> +for line in lines:
> +    if in_loop == 0:
> +        try:
> +            output_each = open(result_dir+"/tmp.log", "w")
> +            in_loop = 1
> +        except IOError:
> +            print('"%s" cannot be created or "%s/tmp.log" cannot be opened.' % (out_dir, out_dir))
> +
> +    target_outputs = line.split("\"")[11]
> +    if target_outputs != 'case':
> +        output_each.write("%s\n" % target_outputs)
> +
> +    m = regc.match(line)
> +    if m is not None:
> +        test_case = m.group(1)
> +        test_set = m.group(2)
> +        result = m.group(3)
> +        fin_case = test_case
> +        icnt = 1
> +
> +        if result == "pass":
> +            status = "PASS"
> +        else:
> +            status = "FAIL"
> +
> +        #TODO: it will be better if the LAVA related testcases can be
> +        #      listed in execution time order in the final test report.
> +        while test_set + '.' + fin_case in test_results.keys():
> +            fin_case = test_case + '-' + str(icnt)
> +            icnt += 1
> +
> +        if "_" in test_set:
> +            test_set = test_set.split('_')[1]
> +
> +        test_results[test_set + '.' + fin_case] = status
> +        output_each.close()
> +
> +        output_dir = '%s/result/%s/outputs' % (LOGDIR, test_set)
> +        make_dirs(output_dir)
> +
> +        os.rename(result_dir+"/tmp.log", output_dir+"/%s.log" % fin_case)
> +        in_loop = 0
> +
> +sys.exit(plib.process(test_results))
> diff --git a/engine/tests/Functional.lava/releng-scripts.tar.gz b/engine/tests/Functional.lava/releng-scripts.tar.gz
> new file mode 100644
> index 0000000000000000000000000000000000000000..dbd24d443a2a96c1bcb3954b14a07eb7126bd53c
> GIT binary patch
> literal 216399
... binary data omitted...
> 
> literal 0
> Hc$@<O00001
> 
> diff --git a/engine/tests/Functional.lava/spec.json b/engine/tests/Functional.lava/spec.json
> new file mode 100644
> index 0000000..c479475
> --- /dev/null
> +++ b/engine/tests/Functional.lava/spec.json
> @@ -0,0 +1,7 @@
> +{
> +    "testName": "Functional.lava",
> +    "specs": {
> +        "default": {}
> +    }
> +}
> +
> diff --git a/engine/tests/Functional.lava/test.yaml b/engine/tests/Functional.lava/test.yaml
> new file mode 100644
> index 0000000..1aa49b0
> --- /dev/null
> +++ b/engine/tests/Functional.lava/test.yaml
> @@ -0,0 +1,19 @@
> +fuego_package_version: 1
> +name: Functional.lava
> +description: |
> +    Lava CI tools and templates of AGL
> +license: MIT
> +author: AGL(Automotive Grade Linux)
> +maintainer: Liu Wenlong <liuwl.fnst at cn.fujitsu.com>
> +# this version number is kind of made-up
> +version: 5.99.5
> +fuego_release: 1
> +type: Functional
> +tags: ['LAVA']
> +gitrepo: https://git.automotivelinux.org/AGL/releng-scripts
> +data_files:
> +    - releng-scripts.tar.gz
> +    - spec.json
> +    - parser.py
> +    - fuego_test.sh
> +    - test.yaml
> --
> 2.7.4

If this is working for you as-is, then I think this is a good start on
an alternative method of executing LAVA jobs (compared to the current
prototype).  Since the Functional.lava code is isolated from the rest of
Fuego, I see no problem adding it, as long as the issues above are
addressed (and the pre_test refactoring doesn't break anything else in
testing).

Please respond to the issues raised, and we'll go from there.
 -- Tim


