[Fuego] [LTSI-dev] LTSI-4.9 RC1 is out for testing!

Bird, Timothy Tim.Bird at sony.com
Wed Aug 30 04:57:03 UTC 2017

> -----Original Message-----
> From: Khiem Nguyen on August 27, 2017 8:43 PM
> > Our team tested this RC1 using Fuego 1.0 (sorry, we have trouble when we
> > use Fuego 1.1. So, we used Fuego 1.0) on R-Car H3 Salvator-X.
I would like to hear what problems were encountered with Fuego 1.1
that made it necessary to stay with 1.0. 

> Latest Fuego has not officially supported R-Car Gen3 boards (Salvator-X or
> Starter Kit) yet.
> Therefore, we have added board support locally for testing LTSI 4.9 RC1.
Previously I was hesitant to add board files to Fuego, as doing so required
the Jenkins board definitions to be present in the code repository.
Under the new Fuego architecture, the Jenkins board definitions are no
longer shipped in our repository, and Jenkins boards can be added and
removed independently of the Fuego board definition files with a simple
ftc command.

Therefore, I think we can add as many board definition files as we
would like (and not clutter up the Jenkins interface).  That is, Fuego
users will create Jenkins board definitions only for the boards
they want to use.
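As a sketch of that workflow (the board name "myboard" is hypothetical, and the exact option spellings should be checked against ftc's built-in help), adding and removing a Jenkins board looks roughly like this:

```
# Hypothetical board "myboard"; a matching board file must already
# exist among the Fuego board definitions for this to succeed.
ftc list-boards                  # show the board definition files Fuego knows about
ftc add-nodes -b myboard         # create the Jenkins node for just this board
ftc add-jobs -b myboard -t Functional.hello_world   # add a job for it
ftc rm-nodes myboard             # remove the Jenkins node again
```

The board definition file stays in the repository either way; only the Jenkins-side node comes and goes.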

> There are other issues about test scripts in Fuego, so that some test cases
> could not be run successfully.
> We have also fixed the issues.
> We will send out the patches to fix test cases as well as adding new boards to
> Fuego.
> i.e after Fuego 1.2 is released in coming weeks.

That's great.  I look forward to hearing about the issues and seeing your
fixes.  I would be happy to add whatever board files you have to Fuego,
to make it easier for users to work with those boards.

> > And, I wrote the result in the end of this email as csv format.
> >
> > Also, our team found some issues (the board cannot resume well) and the
> > following patches can resolve it.
> > Would you do cherry-pick them?
> > Or, should I send the patches like general Linux development role? :)

My preference is to have the patches in a repository that I can cherry-pick from,
but also have them posted to the Fuego list so that we can discuss individual
patch elements on the mailing list.  Since I'm not integrating the patches
from the mailing list, they don't have to be as strictly formatted as they
would otherwise need to be.  However, I assume you can just use something
like git send-email to send to the list.   The standard Linux rules for
posting patches apply: please send messages in plain text, with the patch
inline in the message body (and not as an attachment).
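For example, something along these lines would do it (the list address shown is an assumption on my part; use the one from the list info page):

```
# Send the last three commits as inline plain-text patches to the list.
git send-email --to="fuego@lists.linuxfoundation.org" -3
```

git send-email takes care of keeping the patch inline in the body rather than attaching it.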

> We did perform LTP test, BSP drivers test with Fuego.
> 1. LTP test (based on latest version, 20170516) has been executed.
>       The test result shows no big issues on LTSI 4.9 RC1.
>       I also attached the result for reference.

Thanks for posting the result spreadsheets - it is very useful to see
the report style you are using and the information you wished to see.
Were these generated automatically or manually?

We have some new abilities in Fuego 1.2 that may be helpful:
 - the ability to skip executing a subtest in LTP (via a spec file entry)
 - the ability to ignore a failure of a testcase as part of the pass
criteria for a test (via a criteria.json entry)
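As a rough sketch of the criteria.json mechanism (the field names here are from my recollection of the 1.2 material, and the tguid value is hypothetical; check the examples shipped with Fuego before relying on them), ignoring a known-bad testcase might look like:

```
{
    "schema_version": "1.0",
    "criteria": [
        {
            "tguid": "Functional.netperf",
            "fail_ok_list": [ "TCP_STREAM" ]
        }
    ]
}
```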

> > No.,Test item,Remarks,Status
> > 1,Benchmark.aim7,Failure if executed bench processing step,Success
> > 2,Benchmark.blobsallad,Build failed,Failure
> > 3,Benchmark.bonnie,,Success
> > 4,Benchmark.cyclictest,,Success
> > 5,Benchmark.dbench,,Success
> > 6,Benchmark.Dhrystone,,Success
> > 7,Benchmark.ebizzy,,Success
> > 8,Benchmark.ffsb,Build failed,Failure
> > 9,Benchmark.fio,,Success
> > 10,Benchmark.GLMark,Build failed,Failure
> > 11,Benchmark.gtkperf,Ditto,Failure
> > 12,Benchmark.hackbench,,Success
> > 13,Benchmark.himeno,,Success
> > 14,Benchmark.Interbench,Build failed,Failure
> > 15,Benchmark.IOzone,Ditto,Failure
> > 16,Benchmark.iperf,Ditto,Failure
> > 17,Benchmark.Java,Ditto,Failure
> > 18,Benchmark.linpack,Failure if executed bench processing step,Success
> > 19,Benchmark.lmbench2,Hang up when executed,Aborted
> > 20,Benchmark.nbench_byte,Hang up when executed,Aborted
> > 21,Benchmark.netperf,Build failed,Failure
> > 22,Benchmark.netpipe,No such file patch,Failure
> > 23,Benchmark.OpenSSL,Build failed,Failure
> > 24,Benchmark.reboot,Failure if executed bench processing step,Success
> > 25,Benchmark.signaltest,Ditto,Success
> > 26,Benchmark.Stream,Ditto,Success
> > 27,Benchmark.tiobench,Build failed,Failure
> > 28,Benchmark.Whetstone,Ditto,Failure
> > 29,Benchmark.x11perf,Ditto,Failure
> >
> > No.,Test item,Remarks,Status
> > 1,Functional.aiostress,,Success
> > 2,Functional.arch_timer,Need to change test specification,Success
> > 3,Functional.bc,,Success
> > 4,Functional.bzip2,,Success
> > 5,Functional.cmt,Test case failed,Failure
> > 6,Functional.crashme,,Success
> > 7,Functional.expat,Build failed,Failure
> > 8,Functional.fontconfig,,Success
> > 9,Functional.ft2demos,Build failed,Failure
> > 10,Functional.glib,Ditto,Failure
> > 11,Functional.hello_world,,Success
> > 12,Functional.ipv6connect,,Success
> > 13,Functional.jpeg,,Success
> > 14,Functional.libpng,Build failed,Failure
> > 15,Functional.linus_stress,,Success
> > 16,Functional.LTP.Devices,Build failed,Failure
> > 17,Functional.LTP.Filesystem,Build failed,Failure
> > 18,Functional.LTP.Open_Posix,Build failed,Failure
> > 19,Functional.netperf,3/5 failed,Failure
> > 20,Functional.OpenSSL,Hang up when executed,Aborted
> > 21,Functional.pi_tests,,Success
> > 22,Functional.posixtestsuite,,Success
> > 23,Functional.rmaptest,,Success
> > 24,Functional.scifab,Need to change test specification,Success
> > 25,Functional.scrashme,Build failed,Failure
> > 26,Functional.sdhi_0,Need to change test specification,Success
> > 27,Functional.stress,,Success
> > 28,Functional.synctest,,Success
> > 29,Functional.zlib,,Success

Fuego still needs some work in the area of identifying and capturing
the location of a failure (particularly build failures) and putting
that into the log data.  Some errors show up only in the console log, and
humans have to read it to interpret the failure.  After 1.2, I hope
to improve this so that Fuego can 1) add Fuego phase data to the test
results output file (run.json), and 2) parse the build portion of the log
to isolate the error.  This will help users avoid having to read through
the console log in order to find and identify some problems.
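Until that lands, a quick scan along these lines can pull the first build error out of a log by hand; the error patterns are guesses at typical gcc/make output, and build.log is a stand-in file created here for illustration, not Fuego's actual log path:

```shell
# Create a small stand-in for a console build log (hypothetical content).
cat > build.log <<'EOF'
CC   aim7.c
aim7.c:212:10: fatal error: sys/ipc.h: No such file or directory
make: *** [aim7.o] Error 1
EOF

# Print the first line that looks like a compiler or make error,
# with its line number, instead of reading the whole log top to bottom.
grep -n -m1 -E 'error:|No such file|\*\*\* .*Error' build.log
```

Here grep stops at the first match, so the output is the single line `2:aim7.c:212:10: fatal error: sys/ipc.h: No such file or directory`.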

Thanks very much for the information and feedback.
 -- Tim
