[Fuego] Fuego's version up and other changes

Jan-Simon Möller dl9pf at gmx.de
Wed Feb 1 10:46:21 UTC 2017


Hi !

On Wednesday, 1 February 2017, 09:56:03, Daniel Sangorrin wrote:
> Thanks for the detailed example. A few observations:
> 
> - Normally the timeout value depends on the board's performance and the test
> duration.  In my next branch, timeouts are defined for each test in the
> testplan_board.json file. Here it looks like the timeout needs to be the
> same for all of the tests. I was wondering if it is possible to put a list
> of timeouts (same amount as the number of tests) and then use timeout
> --signal=9 {timeout}. Doing this by hand would be too tedious but we could
> generate the yaml files from the testplans using a script.

Oh, that was just me taking a shortcut. But yes, the template doesn't directly know the board- or test-specific values.
One way would be to put these into bash env files and source them before running the script.
Note that a list of timeouts with the same length as the list of tests would not pair them up: jjb would expand it into a matrix [ tests x timeouts ], which is not what we're looking for.
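A minimal sketch of the env-file idea (the `conf/boards` path and the `JOB_TIMEOUT` variable name are illustrative assumptions here, not necessarily Fuego's real conventions):

```shell
# Hypothetical per-board env file; written to /tmp only for this demo.
mkdir -p /tmp/fuego-demo/conf/boards
cat > /tmp/fuego-demo/conf/boards/raspberrypi3.sh <<'EOF'
# board-specific settings, sourced by every job for this machine
export JOB_TIMEOUT=15m
EOF

# The job's shell step would source the board file before the test call:
. /tmp/fuego-demo/conf/boards/raspberrypi3.sh
echo "would run: timeout --signal=9 $JOB_TIMEOUT <test script>"
```

That way the template stays generic and the timeout lives with the board.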

> - What would we do if we want to execute the same test with several
> test_specs? I was thinking about creating separate jobs for each
> combination of test and test_spec because it's the simplest, but I
> wonder if that can be implemented with jjb.
That would then be:
- project:
    name: fuego-tests-smoke
    testplan:
        - smoke
    testspec:
        - testspec1
        - testspec2
    machine:
        - raspberrypi3
    testname:
        - Benchmark.Dhrystone
        - Benchmark.IOzone
        - Functional.hello_world
        - Functional.ipv6connect
        - Functional.stress
    jobs:
        - fuego-{machine}-{testplan}-{testspec}-batch
        - fuego-{machine}-{testplan}-{testspec}-{testname}
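jjb expands each `{variable}` axis as a cross product, so the project above yields one batch job per testspec plus one job per (testspec, testname) pair. A quick shell sketch of the resulting job names (mirroring the definition above):

```shell
# Cross-product expansion of the jjb axes: testspec x testname,
# plus one batch job per testspec.
machine=raspberrypi3
testplan=smoke
specs="testspec1 testspec2"
tests="Benchmark.Dhrystone Benchmark.IOzone Functional.hello_world \
Functional.ipv6connect Functional.stress"

count=0
for spec in $specs; do
    echo "fuego-$machine-$testplan-$spec-batch"
    count=$((count + 1))
    for t in $tests; do
        echo "fuego-$machine-$testplan-$spec-$t"
        count=$((count + 1))
    done
done
# 2 batch jobs + 2x5 per-test jobs = 12 jobs in total
```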

> Another approach would be to create a single
> job and then have fuego-core take care of reading the tests specs of each
> test from the testplan, and repeat the test for each test spec. Then a
> modified parser would display and compare the results of the test across
> the different test specs.
> [Note] I think that AGL's fuego has a similar parser.

Having everything in one job means you can't easily execute one specific test.
But if we simplify the shell call that Jenkins has to make, creating jobs
with specific names and parameters/env variables is trivial.

E.g.:
    builders:
      - shell: |
          #
          source $FUEGO_RO/conf/boards/{machine}.sh
          source $FUEGO_CORE/engine/env/${{DISTRO}}.sh         # DISTRO set in machine.sh
          source $FUEGO_CORE/engine/testplans/{testplan}.sh    # taken from jjb 
          #
          export Reboot=false
          export Rebuild=true
          export Target_Cleanup=true
          export TESTDIR={testname}
          export TESTNAME=$(echo "{testname}" | sed -e "s#.*\.##")
          timeout --signal=9 ${{JOB_TIMEOUT}} /bin/bash $FUEGO_CORE/engine/tests/${{TESTDIR}}/${{TESTNAME}}.sh

In this case, ${{JOB_TIMEOUT}} can be defined in the $FUEGO_RO/conf/boards/{machine}.sh  env script.
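For reference, the `sed` call in the builder above just strips everything up to the last dot, turning the fully qualified test name into the script name:

```shell
# Same sed expression as in the builder: strip everything up to the
# last dot of the fully qualified test name.
TESTDIR=Benchmark.Dhrystone
TESTNAME=$(echo "$TESTDIR" | sed -e "s#.*\.##")
echo "$TESTNAME"    # Dhrystone
```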

> - In my next branch, test specs have the ability to define which links
> should be available after the test is complete. I'm using the Description
> Setter plugin for that. Do you think that would be possible with the jjb
> approach?.

Links ... let me see:
http://docs.openstack.org/infra/jenkins-job-builder/publishers.html#publishers.description-setter
That should work.

For reference:
http://docs.openstack.org/infra/jenkins-job-builder/definition.html
http://docs.openstack.org/infra/jenkins-job-builder/builders.html
http://docs.openstack.org/infra/jenkins-job-builder/publishers.html
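A publishers entry along these lines should do it (the regexp and link text are placeholders; I haven't checked them against an actual Fuego console log):

```yaml
publishers:
  - description-setter:
      # placeholder regexp, would need to match the real log line
      regexp: "Fuego results: (.*)"
      description: '<a href="\1">test results</a>'
```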


> 
> > While running tests I also found:
> >
> > - we need to split the build of the test and the postprocessing to be run
> > separately
> >
> > -- this is b/c it is not useful to keep the board waiting on us all the
> > time.
> I wasn't aware that the board is waiting on us while we do the
> post-processing. Could you elaborate a bit more?

We access the board very early, even before the build (pre_test).
Then we build, deploy, run, fetch results, and analyse. At that point,
once the results are fetched, the board itself is idle again, but the
job keeps it reserved until the analysis is done as well.

> > - Integration of board up/down could happen in a few ways:
> > -- Either ppl should just generate a wrapper around the (blocking) batch
> > job and trigger it with their pre/post to their needs. Done.
> > -- Or we allow hooks like the current "TARGET_SETUP_LINK" and amend it
> > with a matching "TARGET_TEARDOWN_LINK"
> > - If a model like the "TARGET_SETUP_LINK" is used, we add a delay here as
> > this call must block until the board is up.
> > In this case our predefined timeouts are bogus as they track not just the
> > test run, but all processing.
> Good point. I wasn't using the reboot feature so I hadn't thought about it.
> I guess we would need to define a timeout for the board to reboot, and then
> the timeout for each test. The timeout for the board could be called
> BOARD_REBOOT_TIMEOUT for example and be defined in the asdf.board file. And
> the timeout for each test defined in another variable such as TEST_TIMEOUT.

As I said, there are multiple ways to integrate that:
a) A wrapper job triggering the batch leaves fuego out of the picture. But we need to block the whole chain to make flow control possible.
b) TARGET_SETUP_LINK gives us a _per job_ way of bringing up the board, not just for the wrapped batch.

Most users might get away with, and be happy with, a). But per-job control as in b) might scale better if you consider that you could have
multiple target boards of the same type (multiple executors!) and could run jobs in parallel!

> BOARD_REBOOT_TIMEOUT=1m
> TEST_TIMEOUT=15m
> /bin/bash $FUEGO_CORE/engine/tests/${{TESTDIR}}/${{TESTNAME}}.sh
> 
> Then, internally the timeouts would be used separately.
> # we could also split timeouts between execution time and post-processing
> time

Yep ... we probably should split the build, test and postprocessing into separate scripts.
Or, instead of calling $FUEGO_CORE/engine/tests/${{TESTDIR}}/${{TESTNAME}}.sh from Jenkins, we could call ftc just as you would on the terminal.
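The two-timeout idea from above could look like this (demo values and echo stand-ins only; `timeout` is the coreutils tool already used in the builder):

```shell
# Demo values in seconds; real boards would use e.g. 1m and 15m.
BOARD_REBOOT_TIMEOUT=2
TEST_TIMEOUT=5

# Board bring-up is measured against its own timeout ...
timeout --signal=9 "$BOARD_REBOOT_TIMEOUT" sh -c 'echo "board is up"'
reboot_rc=$?
# ... and only the test run is measured against TEST_TIMEOUT.
timeout --signal=9 "$TEST_TIMEOUT" sh -c 'echo "running Dhrystone"'
test_rc=$?
```

That keeps the reboot delay from eating into the per-test budget.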

Also, the above should be in separate jobs ... think about it:
- the build phase doesn't need to block and can run in parallel,
  e.g. on the master node!
- the test batch needs to block to allow flow control and
  run only on the specified executor. No concurrency, but multiple jobs
  of the *same* batch could run in parallel, given we can handle multiple
  boards of the same type at the same time (which is true with an 'orchestrator' like LAVA).
- the batch is only deploy/run/fetch-results, but it can trigger the postprocessing
- the postprocessing is another job and can run in parallel, independent of the board, e.g. on master
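The phase split above can be sketched like this (phase names and commands are placeholders, not real Fuego scripts):

```shell
# Sketch of the job split; only run_batch would occupy the board.
build()     { echo "build $1"; }            # parallel-safe, e.g. on master
run_batch() { echo "deploy/run/fetch $1"; } # the only phase holding the board
postproc()  { echo "postprocess $1"; }      # board-independent again

build Functional.hello_world &    # build need not block the board
wait
run_batch Functional.hello_world  # serialized on the executor/board
postproc Functional.hello_world & # triggered by the batch, runs off-board
wait
```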

JS
-- 
Jan-Simon Möller
dl9pf at gmx.de

