[Fuego] [PATCH 24/30] run-test: support dynamic variables
Daniel Sangorrin
daniel.sangorrin at toshiba.co.jp
Thu Jun 7 01:39:08 UTC 2018
> -----Original Message-----
> From: Tim.Bird at sony.com [mailto:Tim.Bird at sony.com]
> Sent: Thursday, June 7, 2018 4:56 AM
> To: daniel.sangorrin at toshiba.co.jp; fuego at lists.linuxfoundation.org
> Subject: RE: [Fuego] [PATCH 24/30] run-test: support dynamic variables
>
> A few comments inline below...
>
> > -----Original Message-----
> > From: Daniel Sangorrin
> >
> > A spec is created and saved in the logdir folder. This is
> > good not only for dynamic variables but also for static specs
> > because we can easily package the logdir folder with all
> > the important data.
> >
> > Signed-off-by: Daniel Sangorrin <daniel.sangorrin at toshiba.co.jp>
> > ---
> >  engine/scripts/ftc | 47 ++++++++++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 46 insertions(+), 1 deletion(-)
> >
> > diff --git a/engine/scripts/ftc b/engine/scripts/ftc
> > index 7da4f27..bfdc3d4 100755
> > --- a/engine/scripts/ftc
> > +++ b/engine/scripts/ftc
> > @@ -3164,6 +3164,18 @@ def do_run_test(conf, options):
> >      else:
> >          error_out("Run-test command requires a test name")
> >
> > +    if '--dynamic-vars' in options:
> > +        try:
> > +            import ast
> > +            dyn_vars_str = options[options.index('--dynamic-vars') + 1]
> > +            dyn_vars = ast.literal_eval(dyn_vars_str)
> Interesting. This means that the argument for --dynamic-vars
> is in Python syntax, not JSON syntax. Off the top of my head,
> I can't think of any differences that will cause us problems in
> our common Fuego use cases.
> But Python's ast does allow '\n' and other character escapes that
> JSON does not. That this is Python syntax is something that needs
> to be documented, IMHO.
I thought about using JSON, but specs are basically a set of key-value pairs, so I thought that using a Python dictionary would be good enough.
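For illustration, here is a minimal sketch (not from the patch; the variable names are made up) of how such an argument parses with ast.literal_eval:

    import ast

    # A --dynamic-vars argument is a Python dictionary literal:
    dyn_vars_str = "{'LOOPS': '100000000', 'ARGS': '-p \\x31'}"

    # literal_eval parses Python literals without executing any code
    dyn_vars = ast.literal_eval(dyn_vars_str)
    print(dyn_vars['LOOPS'])   # -> 100000000

    # Single-quoted strings and escapes such as \x31 are valid Python
    # literals, but a strict JSON parser would reject them.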
>
> Just a note - using an existing syntax beats us creating our own new
> one with its own set of potential issues. So, good job! I wouldn't
> have thought to do it this way. This is nice and compact for parsing
> potentially multiple vars in a single shot.
>
> > +        except IndexError:
> > +            error_out('Dynamic vars not provided after --dynamic-vars.')
> > +        options.remove(dyn_vars_str)
> > +        options.remove('--dynamic-vars')
> > +    else:
> > +        dyn_vars = None
> > +
> >      if '-s' in options:
> >          try:
> >              spec_name = options[options.index('-s') + 1]
> > @@ -3171,7 +3183,8 @@ def do_run_test(conf, options):
> >              error_out('Testspec not provided after -s.')
> >          spec_list = get_specs(conf, test_name)
> >          if spec_name not in spec_list:
> > -            error_out('Unknown spec %s' % spec_name)
> > +            if not dyn_vars:
> > +                error_out('Unknown spec %s' % spec_name)
> >          options.remove(spec_name)
> >          options.remove('-s')
> >      else:
> > @@ -3369,6 +3382,38 @@ def do_run_test(conf, options):
> >      build_data.timestamp = timestamp
> >      os.environ["BUILD_TIMESTAMP"] = build_data.timestamp
> >
> > +    # create a folder for this run
> > +    run_dir = job_dir + "/builds/" + build_data.build_number
> > +    if not os.path.isdir(run_dir):
> > +        if fuego_caller == "jenkins":
> > +            error_out("Jenkins did not create run folder " + run_dir)
> > +        os.mkdir(run_dir)
> > +
> > +    # prepare a per-run spec.json file
> > +    specpath = '%s/engine/tests/%s/spec.json' % (conf.FUEGO_CORE, test_name)
> > +    with open(specpath) as f:
> > +        try:
> > +            test_spec_data = json.load(f)
> > +        except:
> > +            error_out("Error parsing spec file %s" % specpath)
> > +
> > +    for key in test_spec_data['specs'].keys():
> > +        if key != spec_name:
> > +            del test_spec_data['specs'][key]
> > +
> > +    if dyn_vars:
> > +        if spec_name not in test_spec_data['specs']:
> > +            test_spec_data['specs'][spec_name] = dyn_vars
> > +        else:
> > +            for key in dyn_vars.keys():
> > +                test_spec_data['specs'][spec_name][key] = dyn_vars[key]
> > +
> > +    # FIXTHIS: use a more pythonic way
> > +    os.system("mkdir -p " + build_data.test_logdir)
> > +
> > +    with open(build_data.test_logdir + '/spec.json', 'w+') as spec_file:
> > +        json.dump(test_spec_data, spec_file)
>
> OK - one issue with this approach is that it could be misleading.
> If we are using a 'default' spec, and applying some dynamic variables,
> then the spec.json in the logdir will have a spec name of 'default', but with
> variable values that are NOT those of the default spec (from the original spec.json).
>
> I think it would be good to somehow distinguish any variables that
> have dynamic (changed) values from those in the original spec.
> I'm not sure if it would be better to add an annotation per variable,
> or to add a list of names of changed variables. But it would be good
> to capture that, IMHO, to avoid confusion. Maybe even just adding
> a single attribute indicating "some variables were changed" would
> be good. That way a user would know that any difference in test
> results (from some other run using the same spec name) might be due
> to changes in the variables.
OK, I have added a new key ("dyn_vars") to specs generated from dynamic variables. It contains an array with the names of the variables that were modified.
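For example (made-up test and variable names), the test_spec_data that gets dumped to the logdir spec.json could then look like this:

    # hypothetical contents of the per-run spec, after applying
    # --dynamic-vars on top of the 'default' spec
    test_spec_data = {
        'testName': 'Benchmark.Dhrystone',
        'specs': {
            'default': {
                'LOOPS': '100000000',    # value overridden at run time
                'dyn_vars': ['LOOPS'],   # names of the modified variables
            }
        }
    }

That way anyone inspecting the log directory can tell that LOOPS did not come from the original default spec.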
> BTW - is this spec (the one in logdir) the one used by the overlay
> generator during the actual test run? Or is it just recording the
> values for review after the test?
It is used by the overlay generator, not just kept as a record. The idea is that we want to be able to reproduce a test from the contents of the log directory.
> > +
> >      # create log file
> >      console_log_filename = "consolelog.txt"
> >      log_filename = run_dir + os.sep + console_log_filename
> > --
> > 2.7.4
>
> I'm applying this, but will do some testing before I push it out to
> the master branch.
OK. I will wait until they are pushed to add my fixes.
> One other note. The dynamic variables feature is nice for people
> doing manual testing and wanting to change a single variable for
> some purpose.
Just to clarify: it is not necessarily a single variable; you can change as many variables as you want, or even create a whole new spec.
And the purpose is to find new bugs. In other words, using the same parameters all the time only catches "regression" bugs; it is not going to find new ones. Being able to randomize the parameters enables fuzz testing (I plan to add scripts for fuzz testing; see the sketch below).
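As a rough sketch of what such a script might look like (the test name, spec, and variable here are hypothetical, and I am assuming the usual 'ftc run-test -t ... -s ...' invocation):

    import random
    import subprocess

    # Run the same test repeatedly with a randomized variable value,
    # passed as a Python dictionary literal to --dynamic-vars
    for i in range(10):
        dyn_vars = {'LOOPS': str(random.randint(1, 100000000))}
        subprocess.call(['ftc', 'run-test', '-t', 'Benchmark.Dhrystone',
                         '-s', 'default', '--dynamic-vars', repr(dyn_vars)])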
> But if people end up using this in a scripted way,
> that just puts the variable definitions into the script, rather than
> into a proper spec.json file, which defeats the goal of sharing
> useful test variable values and combinations with other Fuego users.
When results are shared, we will publish the artifacts from the log directory, including the spec.json that was used for the test. Once it is on our cloud app, any user can download that spec from the dashboard and use it.
Developers can then check which specs are interesting and include them
in Fuego as board-dependent specs.
By the way, I am thinking about the command line options for publishing results. Currently we have:
ftc put-run run_id | file.frp
I was thinking that we may need to be compatible with different web apps (e.g. the Fuego custom dashboard, SQUAD, KernelCI, and others), so perhaps something like this would be better:
ftc put-run -r run_id -w squad --params {squadparams}
> The proper way to codify a permanent change in the test variables
> is to put them into spec files. That's what spec files exist for. And
> if a test variable is being determined at runtime based on some
> conditions at the time of the test, that is something that really
> should be pushed into the test itself.
>
> I'm not opposed to the feature - just noting that it could have
> detrimental side effects if misused.
> -- Tim
>
Thanks,
Daniel