[Fuego] Parameter passing to tests

Bird, Timothy Tim.Bird at sony.com
Sat Jun 3 03:23:19 UTC 2017



> -----Original Message-----
> From: Rafael Gago Castano on Friday, June 02, 2017 6:30 PM
> 
> > At the moment, what you can do is to configure your job:
> > Jenkins > Testname > configure
> >     export PARAM1="hello param 1"
> >     export PARAM2="hello param 2"
> 
> What I was doing was adding the parameters to make a parameterized build
> instead, because we favor working from the CLI (we have to automate
> everything and do CI), but this is a valid solution too.
> 
> > That should achieve the same result as using the parameterized build
> > plugin. But it would be more convenient to add an option to ftc:
> > ftc add-jobs -p PARAM1="..."
> 
> Now you can use add-jobs to add multiple jobs at once, so a real call would
> look like:
> 
> ftc add-jobs job1 -p PARAM1="p1" -p PARAM2="ps" job2 -p PARAM1="p1"
> ...
> 
> What I don't like about this approach is that the extra parameters are far
> away from where they are defined, so adding a parameter to a test needs a
> modification in three places (the test, the spec and the script that adds
> the jobs). I suspect that this wouldn't bail out early when you make a typo
> or forget that you renamed a parameter.
> 
> What I had thought was to modify the JSON file:
> 
> {
>   "testName": "Functional.serial",
>   "params":
>   [
>     {
>       "name"        : "DEV",
>       "description" : "Serial device to test",
>       "default"     : "/dev/ttyPS0"
>     },
>     {
>       "name"        : "BAUDRATES",
>       "description" : "Space separated baudrate list in bps",
>       "default"     : ""
>     }
>   ],
>   "specs":
>   [
>     {
>       "name"      : "minimal",
>       "BAUDRATES" : "9600 115200",
>     },
>     {
>       "name"      : "full",
>       "BAUDRATES" : "<big list with all Linux baudrates>",
>     }
>   ]
> }
> 
> By doing it this way:
> 
> -Everything is kept compatible with what's in place now.
> -All the definitions are still done in one single file.
> -All available test parameters are integrated into both the Jenkins web and
>  command line interfaces and can be overridden for a single run.
> -The different specs generate different project names in jenkins, and this
>  can be used to our advantage when different specs test different things
>  (more baudrates in this example). At the same time it still keeps
>  user-specific data out of the fuego repository.
> -The parameter description acts as test documentation (and is shown on the
>  Jenkins web interface too).
> -All the parameters in the spec can be validated (for typos).
> -The cli for add-jobs is kept as it is now (a bit simpler).
> -The user can override one parameter while still using the spec (and
>  test/jenkins project name), e.g. if a serial device uses the full spec (and
>  should appear on jenkins as "board.Functional.serial") but doesn't support
>  all the baud rates in the full baud rate list.

OK - I like this approach, but need to think through it some more.
I'm just getting ready to head to the airport, so I'm not giving
this the full consideration it needs, but I like it so far.

> Implementation remarks (that I can think of just now):
> 
> -All the test parameter values under "params" should be echoed before
>  initialization, so they end up in the jenkins log.
Agreed.  It might be worth adding them to the run.json file as well.
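
Just to sketch what I have in mind (a rough illustration only; the helper
name and the "test_params" key below are made up, not anything that exists
today):

    import json, os

    def record_params(run_json_path, params):
        # echo each declared test parameter so the values show up in the
        # Jenkins console log...
        for name, value in sorted(params.items()):
            print("PARAM %s=%s" % (name, value))
        # ...and fold the same values into run.json for later inspection
        data = {}
        if os.path.exists(run_json_path):
            with open(run_json_path) as f:
                data = json.load(f)
        data["test_params"] = params
        with open(run_json_path, "w") as f:
            json.dump(data, f, indent=2)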

> -All the parameters would take the same names they have now, but they
>  would be defined by jenkins (or at least the parameter in the scripts would
>  get its value from a jenkins parameter with a similar name).
> -The default value in jenkins can come from the "default" field in the
>  parameter map or from the "spec". So we could say that the spec values
>  just override the default value of the parameter, which is enough for
>  launching tests without parameters.

Having a param be configurable from the add-jobs command line, from
the run-test command line, inside the spec file, and inside the Jenkins job,
and maybe from the testplan as well, would appear to give all the flexibility
required. :-)
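
For example, the layering could resolve along these lines (purely an
illustrative sketch of one possible precedence, not actual ftc code):

    def resolve_param(name, test_default, spec, overrides):
        # an explicit override (add-jobs/run-test command line, Jenkins job,
        # or testplan) beats the spec value, which beats the test's declared
        # default
        for source in (overrides, spec):
            if name in source:
                return source[name]
        return test_default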

Adding a param on the ftc CLI is pretty easy, IMHO, and could make the 1.2
release.  But adding the extra code for supporting PARAMS in the specs and/or
testplans, and integrating it with the parameterized build plugin in Jenkins,
is probably too much to get into this release.

> -If the params array is defined, all the parameters in the different
>  specifications are validated for existence (typos). If the params array
>  isn't defined, the parameters array is deduced from all the parameters
>  populated in the different test specs.
> 
> > Configuring your job with new params is OK for debugging but in the long
> > run you may also want to save those values into your board's testplan.
> 
> In our case it would be more convenient to skip the testplan feature
> completely. We maintain 3 releases at the same time for many devices, and
> that would end up with tedious duplication if we don't generate the files
> dynamically. If we have to write a script to generate the files anyway, it's
> the same effort to output each entry of the script's processing to the
> jenkins CLI as to an intermediate testplan file.
> 
> We are dependent on test run ordering too, because we have a board
> setup/teardown test, but I guess that we hit a bug in this case (things
> stored inside a python dictionary?).
We definitely need to do something to ensure test run ordering.
Users will expect that the jobs will execute in the order they appear in
the testplan (IMHO - at least that's what I expect, and control of ordering
is too handy to not support).
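
If the ordering really is getting lost in a dictionary somewhere, the generic
fix on the parsing side should be small; something like the following sketch
(the file name and keys are only for illustration):

    import json
    from collections import OrderedDict

    # plain dicts did not guarantee insertion order before Python 3.7, so
    # tests parked in one can come back out in arbitrary order;
    # object_pairs_hook keeps the JSON objects in the order they appear
    # in the file
    with open("testplan_default.json") as f:
        plan = json.load(f, object_pairs_hook=OrderedDict)

    for test in plan.get("tests", []):
        print(test["testName"])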

> 
> > My proposal is as follows:
> >   - testplans should generally be created by the user. We provide some
> testplans for reference only. In
> > other words, a testplan is where you put your "dirty" changes.
> 
> I agree. Fuego should aim to keep its repositories free of user-local
> modifications.
> 
> > - The idea is that you could add 3 jobs to your testplan using the same
> >   iperf spec, and then _OVERWRITE_ those variables that you want to change
> >   (e.g. IP address)
> 
> IMO the JSON approach described + RAW jenkins enqueueing is preferable for
> us, but either your proposed solution or mine renders the testplan feature
> incomplete. If I understand correctly, the _OVERWRITE_ would be a new
> keyword/parameter in the testplan files to fix exactly this.
> 
> I guess that the implementation of this would just generate one jenkins
> project, and the .batch job would invoke it 3 times with the different
> values in some way. Generating three different projects would be a bit
> brute-force-ish and perceived by the user as bloat.
> 
> >  - testplans should be moved to /fuego/fuego-ro/boards.
> 
> IMO, and a bit unrelated, but one simple solution to these kinds of issues
> would be to have a fuego config parameter that points to another folder
> (that in practice would be a user-managed git repository) containing a
> fixed folder layout with the user config.

I guess I need to understand what's in that user config, and why it varies
from user to user.  

> That folder should be mounted as rw on the docker container (to be able to
> git pull from inside the container, e.g. on a nightly CI test run; currently
> we are forced to start the jobs on the container's host machines to do git
> pull on the repositories).
> 
> So the folder structure could be something like:
> .
> ├── boards
> ├── toolchains
> ├── tests
> └── testplans
> 
> And then fuego should be able to detect extra definitions in this folder.

The overlay generator is capable of reading data from more than one folder
at a time.  That's how I recently added the dynamic variables functionality
(with relative ease).  So, in principle it shouldn't be too big a problem to
read from system-level specs and plans, as well as user-level specs and plans.
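
In other words, wherever we look up a spec or plan today, the lookup would
just walk an ordered list of base directories, roughly like this (the paths
and the function name are made up for illustration):

    import os

    # user-level definitions take precedence over the ones shipped with fuego
    SEARCH_DIRS = ["/path/to/user-config/tests", "/fuego-core/engine/tests"]

    def find_spec(test_name):
        for base in SEARCH_DIRS:
            path = os.path.join(base, test_name, "spec.json")
            if os.path.exists(path):
                return path
        raise RuntimeError("no spec.json found for test %s" % test_name)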

 -- Tim


