[Fuego] Patches for merge of Toshiba/Sony work

Daniel Sangorrin daniel.sangorrin at toshiba.co.jp
Fri Mar 24 00:40:01 UTC 2017


Hi Tim,

> -----Original Message-----
> From: Bird, Timothy [mailto:Tim.Bird at sony.com]
> Sent: Thursday, March 23, 2017 2:10 AM
> > It seems that in the new version plot.png is linked. Since plot.png already
> > preserves history, I don't mind not having them separated, but I think creating
> > that link in functions.sh is a bit ugly. I will think about how to express
> > that through the json extralinks property.
> Good idea.   plot.png is really not a per-run concept, but something that
> applies to multiple runs.  Jenkins should point to the plot.png, not the link.
> Arguably, it would be better if the job page pointed at it, but it's quite handy
> to have a 'build' page point to it also.

Yeah, you're right. It should be on the build page.

I wonder if we can add a PDF/SVG export button to the flot plugin. That
would produce a vector-format plot instead of the current PNG.

> > Yesterday, during the AGL meeting, you mentioned a method for
> > distinguishing
> > between logs from the same test/board but different specs/testplans.
> > Could you explain it again if you don't mind?
> 
> Sorry - I wasn't clear in the meeting.  I think you are referring to what I was
> saying about the job names.  Currently a job name is:
> <board>.<testplan>.<test_name>
> 
> Jenkins puts build information (what I call a 'run') under:
> /var/lib/jenkins/jobs/<jobname>/builds/<build_num>
> (which is the equivalent of:
> /var/lib/jenkins/jobs/<board>.<testplan>.<test_name>/builds/<build_num>)
> 
> I put logs under /fuego-rw/logs/<testname>/<board>.<timestamp>.<build_num>
> 
> So there's a slight mismatch here.  I don't have the plan anywhere in my path,
> and Jenkins doesn't have the timestamp.  It used to, but they switched BUILD_ID
> from timestamp to build number.
> 
> In considering the tests, the things that should affect the results are obviously
> the test, the board, and the spec.  The spec has the potential to affect the
> results dramatically.
> 
> I was thinking it would be better to have the spec name be part of the job
> name, rather than the testplan name.  So the job name would be:
> <board>.<spec>.<testname>

Yes, I agree. That's a better solution.
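
For what it's worth, here's a quick sketch of how I picture the two naming
schemes (illustration only; the helper names are made up):

    # illustration only: hypothetical helpers, not actual Fuego code
    def old_job_name(board, testplan, test_name):
        # current scheme: runs of the same test with different specs
        # all land under the same job
        return "{}.{}.{}".format(board, testplan, test_name)

    def new_job_name(board, spec, test_name):
        # proposed scheme: the spec becomes part of the job identity
        return "{}.{}.{}".format(board, spec, test_name)

    # new_job_name("bbb", "default", "Benchmark.Dhrystone")
    # -> "bbb.default.Benchmark.Dhrystone"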

> Even if multiple plans contain the same test, if they use the same spec
> for that test, the results should be comparable.  Also, using the spec
> for the job name means that a single plan can contain the same test, but
> with different configurations (e.g. LTP with spec 'syscalls', or LTP with spec 'filesystems')
> The spec is really an extension of the test name, whereas the plan is not.

Yes, you are right.

> I haven't figured out whether I would add the spec somewhere in the fuego
> log path, but it seems like it would be a good idea.  Maybe something like:
> /fuego-rw/logs/<test_name>/<spec>/<board>.<timestamp>.<build_num>

Looks good to me.

If possible, I'd rather have the "timestamp" in a file under the <build_num>
directory, to reduce the number of links.
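
Just to illustrate what I mean, here's a sketch (made-up helper names,
assuming the <spec> directory you proposed):

    # sketch only: hypothetical layout helper, not actual Fuego code
    import os

    LOGDIR = "/fuego-rw/logs"

    def run_dir(test_name, spec, board, build_num):
        # the directory name encodes only the board and build number;
        # the timestamp would be stored in a file inside it instead
        return os.path.join(LOGDIR, test_name, spec,
                            "{}.{}".format(board, build_num))

    # e.g. /fuego-rw/logs/Benchmark.Dhrystone/default/bbb.3/timestamp
    # would be a small file containing the run's timestamp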

> It doesn't make sense, IMHO, to plot benchmark data from the same test
> but different specs in the same plot.  (well, maybe in some situations it would,
> but as a general rule I think the spec essentially makes a different test).

I tend to disagree here. 

I think that comparing the results of a test run with different parameters
can be useful in several scenarios. For example, suppose that you have a
network test and you want to compare a spec that uses normal frames with
another one that uses jumbo frames. Then, suppose you find that jumbo frames
are much slower than normal ones. That may indicate a problem with a driver
or with the kernel configuration, for example.

At the moment <plot.data> includes information about the board, architecture, and kernel.
Adding an entry indicating the spec's name should be trivial.
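
For illustration, an entry with the extra field could look something like
this (a sketch only; these field names are made up, not the real plot.data
schema):

    # illustrative only: made-up field names, not the actual
    # plot.data format
    entry = {
        "board": "bbb",
        "arch": "arm",
        "kernel": "4.4.0",
        "spec": "jumbo-frames",  # the new entry
    }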

What I'm not sure about is how Fuego users should use this functionality.
I think this is fairly advanced functionality, so one option would be to
restrict it to the command-line interface, so that normal users do not need
to care about it:

$ ftc compare --testname mytest --spec default --spec optimized
  -> matplotlib's plot comparing the test results for different specs

$ ftc compare --testname mytest --spec default --board bbb --board raspi
  -> matplotlib's plot comparing the test results for different boards
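
As a rough idea of what such a command could do internally (a sketch only:
the data is faked and the names are made up; the real code would parse the
run logs):

    # sketch of what 'ftc compare' might do internally; the input
    # data is faked, real code would read it from the run logs
    import matplotlib
    matplotlib.use("Agg")  # headless, e.g. inside the container
    import matplotlib.pyplot as plt

    def compare(results_by_spec):
        # results_by_spec: {spec_name: [(build_num, value), ...]}
        for spec, points in sorted(results_by_spec.items()):
            builds = [b for b, _ in points]
            values = [v for _, v in points]
            plt.plot(builds, values, marker="o", label=spec)
        plt.xlabel("build number")
        plt.ylabel("metric value")
        plt.legend()
        plt.savefig("compare.png")

    compare({
        "default":   [(1, 10.2), (2, 10.4), (3, 10.1)],
        "optimized": [(1, 12.8), (2, 13.0), (3, 12.9)],
    })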

> I don't have a solution yet, but I'm thinking about this.
> 
> > > > I didn't fix anything about flot with this (although the dataload.py module
> > > > has been updated with new paths for the metric data (for benchmarks)).
> >
> > Flot isn't working for me. Did you overwrite the plugin?
> I'll double-check this.  I don't think flot ever worked for me - I thought it was
> broken for you.  I didn't change anything that I know of.

I had fixed flot and it was working fine for me. I will fix it again and send
you a pull request.

> > > > ftc still has a few nagging issues with executing tests independent of a
> > jenkins
> > > > job (that is, from the command line).  However, most things seem to
> > work.
> >
> > I tried
> > $ ftc run-test Benchmark.Dhrystone docker testplan_docker
> > and it seems it worked.
> Good.  There are a few issues that I didn't finish - like reading the
> timeout from the testplan file, but it was mostly working for me.
> 
> >
> > However, I tried
> > $ ftc list-runs
> > and got no runs. Is this supposed to be correct?
> This is a command I didn't finish fixing.  I'll get to it shortly.
> I was working on fixing up ftc, and then realized that the
> higher priority was making sure all the existing Jenkins jobs
> and workflow worked correctly.
> 
> I saw that ftc list-runs was broken (but should be easy to fix).
> I haven’t tested a bunch of the ftc sub-commands, but plan to
> work on those shortly.

Roger!

Thanks,
Daniel
