[Fuego] Update on recent activity

Daniel Sangorrin daniel.sangorrin at toshiba.co.jp
Thu Jun 22 08:31:31 UTC 2017


Hi Tim

> -----Original Message-----
> From: fuego-bounces at lists.linuxfoundation.org [mailto:fuego-bounces at lists.linuxfoundation.org] On Behalf Of Bird, Timothy
> Sent: Thursday, June 22, 2017 7:59 AM
> To: fuego at lists.linuxfoundation.org
> Subject: [Fuego] Update on recent activity
> 
> Hello all,
> 
> Here is an update on recent Fuego activity.
> 
> There were numerous side meetings and presentations held at the Automotive Linux Summit and Open Source Summit Japan in
> Tokyo on May 31-June 2. In particular, Khiem Nguyen gave a good presentation there on how Renesas has complemented Fuego with
> additional functionality to create their full-blown CI infrastructure.  His slides are online and can be found on the Fuego presentations
> page:
> http://bird.org/fuego/Presentations
> 
> I held a testing Birds-of-a-Feather session, answered some questions about Fuego, and we had a good discussion at the BOF about various
> issues related to Linux testing.
> 
> Jan-Simon, Daniel and I had a private meeting to discuss 1.2 release features and priorities.  I am still hoping for a release candidate for
> 1.2 sometime in July.  But there is a lot to do before then.  Luckily, my family vacations and business trips are over for a bit.  So I hope
> to dig into Fuego work in the next few weeks.  Priorities for the 1.2 release are:
>  1) support for LAVA as a transport
>  2) test dependency system
>  3) uniform test output
>  4) documentation improvements
> 
> These were all discussed in Japan, and Jan-Simon even sent some preliminary patches for 1).
> 
> At an LTSI meeting in Japan, I agreed to take over primary testing for the LTSI project, using Fuego.
> I will be setting up some LTSI-related reference boards in my own Fuego lab shortly, as part of
> this work.
> 
> As part of the discussion, Greg Kroah-Hartman and I discussed the value of having kernel selftest tests
> produce a common output format.  I sent him a proposal and some information about TAP13, and
> he lined up some volunteers to work on this.  Patches have been submitted to Shuah Khan, the
> kselftest maintainer, and some preliminary work in this area will likely appear in the 4.13 kernel.
> I think this is a great development, and I plan to continue to support this work in the kernel as it moves
> forward.
> 
> I also think that TAP13 would make a good 'default' recommended format for Fuego test output
> (for those cases where we are creating new tests and new output).  So I may send around a formal
> recommendation and/or write some code to facilitate this, and include it in the Fuego test framework.
> TAP13 is very easy to parse, and we essentially already have built-in support for it with our current
> routines.
> 
> Finally, in China I presented some information about embedded Linux at LinuxCon.  I had a chance to
> see Fengguang Wu at the event (but unfortunately not enough time to discuss anything).  I plan to follow
> up with him to see if there is a way to have Fuego features complement his 0-day kernel testing effort.
> Also, a developer from China offered to help us arrange Fuego hackathons in that country.  This is something
> I hope to follow up on (sometime in the future).

Sounds like a lot of progress to me.

I've been working on the third topic (uniform test output) for a bit. So far, this is what I have been able to achieve:
- I refactored the parser to produce a single run.json file for each job build. This run.json contains both metadata about the run and the actual results (a rough sketch of the layout is shown below, after this list).
   + I am extending the parser to handle additional per-testcase data, such as error messages and durations.
- The parser still produces a results.json file that merges the most important data from the run.json files for the flot visualizer plugin.
- I have added support for JUnit output. This basically consists of "translating" part of the data stored in run.json into the JUnit format that
  the Jenkins plugin "Test Results Analyzer" [1] understands (see the second sketch below). This plugin is quite neat because it displays the results as an HTML table and allows searching,
  generating graphs (which you can download in PNG/SVG format), and more. However, it is not as flexible as I wanted, so I might leave it as an option
  and fix the custom HTML table code created by AGL.
- After studying the JUnit format and TAP13, I've concluded that a new common test output format is required, and I have written a first draft. I will share it
  once it is ready, once I know it works on a variety of tests (both Functional/Unit tests and Benchmark tests), and once I can provide a Jenkins plugin for it.
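
For reference, here is a rough sketch of what a run.json could look like. The field names (metadata, test_cases, status, duration_ms, etc.) are only illustrative assumptions for this mail, not the actual schema:

    {
      "metadata": {
        "test_name": "Functional.hello_world",
        "board": "beaglebone",
        "build_number": 12,
        "start_time": "2017-06-22T08:00:00Z"
      },
      "test_cases": [
        { "name": "hello.check_output", "status": "PASS", "duration_ms": 10 },
        { "name": "hello.check_rc", "status": "FAIL", "duration_ms": 12,
          "error_message": "unexpected return code" }
      ]
    }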
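
The JUnit "translation" mentioned above is conceptually just a mapping from run.json entries to testsuite/testcase elements. A minimal sketch in Python, assuming the illustrative run.json layout from the previous example (not the actual Fuego parser code):

    # Sketch: convert an illustrative run.json into JUnit XML that the
    # "Test Results Analyzer" Jenkins plugin can display.
    import json
    import xml.etree.ElementTree as ET

    def run_to_junit(run_json_path, junit_xml_path):
        with open(run_json_path) as f:
            run = json.load(f)
        meta = run.get("metadata", {})
        suite = ET.Element("testsuite", name=meta.get("test_name", "unknown"))
        for case in run.get("test_cases", []):
            tc = ET.SubElement(suite, "testcase", name=case["name"],
                               time=str(case.get("duration_ms", 0) / 1000.0))
            if case.get("status") != "PASS":
                failure = ET.SubElement(tc, "failure")
                failure.text = case.get("error_message", "")
        ET.ElementTree(suite).write(junit_xml_path, encoding="utf-8",
                                    xml_declaration=True)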
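
On the TAP13 point in Tim's mail above: part of its appeal is that result lines are trivial to pick out. A sample stream looks like this (invented for illustration):

    TAP version 13
    1..3
    ok 1 - open01
    not ok 2 - open02
    ok 3 - open03 # SKIP not supported on this board

and a deliberately simplified parser sketch (it ignores directives and the YAML diagnostic blocks that a full parser would handle) fits in a few lines of Python:

    import re

    # Minimal TAP13 result-line parser; illustrative only.
    RESULT_RE = re.compile(r'^(ok|not ok)\s+(\d+)\s*-?\s*(.*)$')

    def parse_tap13(lines):
        results = []
        for line in lines:
            m = RESULT_RE.match(line.strip())
            if m:
                status, num, desc = m.groups()
                results.append({"num": int(num),
                                "result": "PASS" if status == "ok" else "FAIL",
                                "description": desc})
        return results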

Thanks,
Daniel

[1] https://wiki.jenkins.io/display/JENKINS/Test+Results+Analyzer+Plugin
