[Fuego] Parameter passing to tests

Daniel Sangorrin daniel.sangorrin at toshiba.co.jp
Fri Jun 2 13:18:51 UTC 2017


> -----Original Message-----
> From: Rafael Gago Castano [mailto:RGC at hms.se]
> Sent: Friday, June 02, 2017 6:30 PM
> To: Daniel Sangorrin; fuego at lists.linuxfoundation.org
> Subject: Re: [Fuego] Parameter passing to tests
> 
> > At the moment, what you can do is to configure your job:
> > Jenkins > Testname > configure
> >     export PARAM1="hello param 1"
> >     export PARAM2="hello param 2"
> 
> What I was doing was adding the parameters to make a parameterized build instead,
> because we favor working from the CLI (we have to automate everything and do
> CI), but this is a valid solution too.
> 
> > That should achieve the same result as using the parameterized build plugin. But it would be more
> > convenient to add an option to ftc:
> > ftc add-jobs -p PARAM1="..."
> 
> Now you can use add-jobs to add multiple jobs at once, so a real call would
> look like:
> 
> ftc add-jobs job1 -p PARAM1="p1" -p PARAM2="ps" job2 -p PARAM1="p1" ...
> 
> What I don't like about this approach is that the extra parameters are far away
> from where they are defined, so adding a parameter to a test needs a
> modification in three places (the test, the spec, and the script that adds the
> jobs). I suspect that this wouldn't bail out early when you make a typo or
> forget that you renamed a parameter.
> 
> What I had thought was to modify the JSON file:
> 
> {
>   "testName": "Functional.serial",
>   "params":
>   [
>     {
>       "name"        : "DEV",
>       "description" : "Serial device to test",
>       "default"     : "/dev/ttyPS0"
>     },
>     {
>       "name"        : "BAUDRATES",
>       "description" : "Space-separated baud rate list in bps",
>       "default"     : ""
>     }
>   ],
>   "specs":
>   [
>     {
>       "name"      : "minimal",
>       "BAUDRATES" : "9600 115200"
>     },
>     {
>       "name"      : "full",
>       "BAUDRATES" : "<big list with all Linux baudrates>"
>     }
>   ]
> }
> 
> By doing it this way:
> 
> -Everything is kept compatible with what's in place now.
> -All the definitions are still done in one single file.
> -All available test parameters are integrated into both the jenkins web and
>  command line interfaces, and can be overridden for a single run.
> -The different specs generate different project names in jenkins, and this can
>  be used to our advantage when different specs test different things (more
>  baud rates in this example). At the same time it still keeps user-specific
>  data out of the fuego repository.
> -There is the parameter description that acts as test documentation (and is
>  shown on the Jenkins web interface too).
> -All the parameters in the spec can be validated (for typos).
> -The cli for add-jobs is kept as it is now (a bit simpler).
> -The user can override one parameter while still using the spec (and its
>  test/jenkins project name), e.g. if a serial device uses the full spec (and
>  wants to appear on jenkins as "board.Functional.serial") but doesn't support
>  all the baud rates on the full baud rate list.

I like the fact that we can have defaults for each parameter, and that the
"params" array serves both as an explanation of what each parameter does and
as the list of parameters supported by the test (so it would deprecate the
yaml files, I guess).

My only concern then is how to avoid Jenkins job name collisions, for
example if you create 2 jobs from the same spec but override the parameters.
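
To make the concern concrete, consider this hypothetical sequence (it uses
the -p option proposed in this thread, and the job naming scheme is my
assumption):

ftc add-jobs Functional.serial --spec full
ftc add-jobs Functional.serial --spec full -p BAUDRATES="9600"
# presumably both map to the same jenkins job name
# (e.g. board.Functional.serial), so the second would clobber the first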

> Implementation remarks (that I can think of just now):
> 
> -All the test parameter values under "params" should be echoed before
>  initialization, so they end up in the jenkins log.

Currently they are echoed to prolog.sh and the jenkins log automatically.
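
For reference, the spec values end up as plain variable assignments in the
generated prolog.sh, roughly like this (illustrative, reusing your serial
example; the exact form may differ):

DEV="/dev/ttyPS0"
BAUDRATES="9600 115200"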

> -All the parameters would keep the same names they have now, but they
>  would be defined by jenkins (or at least the parameters in the scripts would
>  get their values from jenkins parameters with similar names).
> -The default value on jenkins can come from the "default" field in the
>  parameter map or from the spec, so we could say that the spec values just
>  override the default value of the parameter, which is enough for launching
>  tests without parameters.
> -If the params array is defined, all the parameters in the different
>  specs are validated for existence (typos). If the params array isn't
>  defined, the parameter list is deduced from all the parameters populated
>  in all the different test specs.

I think there are two types of validation:
1) Check that there are no misspellings. This can be done in a generic way for
any test with the "params" list that you propose (see the sketch after this
list).
2) Check parameter dependencies. For example, if PARAM1 is defined then PARAM2
should not be empty, or something like that. We could do part of this
generically by adding some syntax to the JSON spec, or leave it to each
fuego_test.sh.
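
For (1), a minimal sketch of the check, assuming the JSON layout from your
example and using jq (the file name and error message are illustrative):

#!/bin/sh
# verify that every variable set in a spec entry is declared in the
# "params" array of the same JSON file
SPEC_FILE=Functional.serial/spec.json
declared=$(jq -r '.params[].name' "$SPEC_FILE")
jq -r '.specs[] | keys[] | select(. != "name")' "$SPEC_FILE" |
while read -r key; do
    echo "$declared" | grep -qx "$key" ||
        echo "ERROR: spec parameter '$key' is not declared in params" >&2
done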
 
> > Configuring your job with new params is OK for debugging but in the long run
> > you may also want to save those values into your board's testplan.
> 
> In our case it would be more convenient to skip the testplan feature completely.
> We maintain 3 releases at the same time for many devices, and that would end up
> in tedious duplication if we don't generate the files dynamically. If we are
> scripting the file generation anyway, it's the same effort to feed each entry
> to the jenkins CLI as to an intermediate testplan file.

Yes, a script calling "ftc add-jobs" with the parameters is another perfectly valid option.
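
For example, something along these lines (a sketch; it assumes a -b option to
select the board, so adjust to the real ftc syntax):

#!/bin/sh
# generate the serial test job for several boards in one go
for board in board-a board-b board-c; do
    ftc add-jobs -b "$board" Functional.serial --spec full \
        -p DEV=/dev/ttyUSB0 -p BAUDRATES="9600 115200"
done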
 
> We are dependent on test run ordering too, because we have a board
> setup/teardown test, but I guess that we hit a bug in this case (things stored
> inside a python dictionary?).

Ordering is currently not implemented.

One option is to add an --after parameter like this:
ftc add-jobs Functional.Kselftest --after Functional.kernel_build --spec 4.4.y
# and the same for the testplans

This can be easily implemented in Jenkins.

> > My proposal is as follows:
> >   - testplans should generally be created by the user. We provide some testplans for reference only. In
> > other words, a testplan is where you put your "dirty" changes.
> 
> I agree. Fuego should aim to keep its repositories free of user-local
> modifications.
> 
> > - The idea is that you could add 3 jobs to your testplan using the same iperf spec, and then _OVERWRITE_
> > those variables that you want to change (e.g. IP address)
> 
> IMO the JSON approach described + raw jenkins enqueueing is preferable for us,
> but either your proposed solution or mine renders the testplan feature
> incomplete. If I understand correctly, the _OVERWRITE_ would be a new
> keyword/parameter in the testplan files to fix exactly this.
> 

There would be 2 ways of overriding a parameter:
1) by specifying the value in "ftc add-jobs -p PARAM1=xxx".
2) by specifying the same parameter in the testplan. The testplan parameter
would then take precedence over the one defined in the spec (see the sketch
after this list).
# however I still have the concern about job name collisions
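
For example, option 2) could look like this in a testplan file (the schema is
from memory, so take it as a sketch; the per-test "BAUDRATES" override is the
proposed part):

{
  "testPlanName": "testplan_myboard",
  "tests": [
    {
      "testName": "Functional.serial",
      "spec": "full",
      "BAUDRATES": "9600 19200"
    }
  ]
}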

> I guess that the implementation of this would just generate one jenkins project
> and the .batch job would invoke it 3 times with the different values in some
> way. Generating three different projects would be a bit brute-force-ish and
> perceived by the user as bloat.
>

It would generate 3 jenkins jobs. Is that what you mean by project?
The idea is that once you have the 3 jenkins jobs, you set up the trigger
as you want for each of the jobs (e.g. every day or when a new commit arrives).
 
> >  - testplans should be moved to /fuego/fuego-ro/boards.
> 
> IMO, and a bit unrelated, but one simple solution to these kinds of issues
> would be to have a fuego config parameter that points to another folder (which
> in practice would be a user-managed git repository) containing a fixed folder
> layout with the user config. That folder should be mounted as rw on the
> docker container (to be able to git pull from inside the container, e.g. on a
> nightly CI test run; right now we are forced to start the jobs on the
> container's host machine to do git pull on the repositories).

Yeah, a configuration file sounds good.
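
Something like this, perhaps (the variable name, paths and mount point are
purely illustrative):

# hypothetical entry in a fuego configuration file:
USER_DATA_DIR=/path/to/fuego-user-data

# corresponding bind mount when starting the container:
docker run ... -v /path/to/fuego-user-data:/fuego-rw/user-data ...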
 
> So the folder structure could be something like:
> .
> ├── boards
> ├── toolchains
> ├── tests
> └── testplans
> 
> And then fuego should be able to detect extra definitions in this folder. The
> objective of this would be to keep the fuego repositories totally free from user
> data (except for the config file, which should be .gitignored).
> 
> > To achieve this we would need to update the testplans path throughout Fuego and then modify the overlay  generator so that
> > testplan definitions overwrite those in the spec when prolog.sh is generated.
> >
> > Please let me know what you think about this solution.
> 
> I don't have complete knowledge of the fuego internals yet, sorry.

Thanks a lot for your suggestions. You know quite a lot already!
Daniel