[Fuego] showstopper for us

Bird, Timothy Tim.Bird at sony.com
Sat Jun 3 04:09:09 UTC 2017



> -----Original Message-----
> From: Rafael Gago Castano on Friday, June 02, 2017 11:26 PM
> Just now I showed my proof of concept with fuego to our lead dev.
> Unfortunately he found (and I agree) one showstopper.
> 
> It's a bit related to how the fuego repositories are stored on the server.
> 
> We can't develop everything locally on our machines and then push to the
> server, as the test rig has some device networks and external hardware that
> we don't have on our machines.
> 
> This means that to develop a test we may need to do it directly on the server
> filesystem, affecting other scheduled runs (by other users) if we happen to
> touch a common file (e.g. the board setup/teardown files). This should
> happen rarely IMO.
Indeed, if you have to modify board files for test execution, something
is wrong.  From previous e-mails, it looks like we are missing some flexibility
in handling other configuration parameters for your tests, and we should
work on the best way to handle those.

We have been talking a lot about current features, almost-complete features,
and features planned for the future, and Fuego is undergoing some churn at the
moment, which makes it difficult to see whether what we have now will support
your use case.

As Daniel said, we have already prototyped features to:
1) package a test for remote execution
2) push the test to a central server
3) install a test package (obtained from a central server)
4) execute a test 
5) publish the run data to a central server

Possibly this would solve the issue you have.  It is our goal to support
tests that are created and managed outside of Fuego's core repository.

If tests that are developed on one machine cannot be safely installed
and executed on another machine, that's a problem we need to address.
 
> 
> This also implies that only one developer can develop tests on the test rig
> at a time (git commits); this is IMO the real showstopper.

Is this due to conflicts in updating the board files and spec files?  Different
developers should not need to update the board file, as a general practice (but
maybe I'm being naïve about what needs to be stored there).

The hardware for a board should not be changing, but as tests are written to
address new hardware, I can see how descriptions of that hardware would need
to be added to the board file, causing possible conflicts.  This should only
happen when the first test for a piece of hardware is written; subsequent
tests should then use that same definition.

> 
> The way LAVA works is that each test is a YAML file that contains
> information saying where to git pull the test from (which I don't like, as
> it has two levels of indirection).
> 
> Fulfilling this requirement would need severe refactoring in fuego.
I think supporting similar functionality (or at least 'close enough' functionality)
might be easier than expected.

> 
> Just to share my random ideas: if I were given an infinite time budget and
> the ability to break as much compatibility as I wish, I'd do the following:
> 
> -Build nodes/board files: Similar to what they are now, with all the
>  parameters defined as jenkins variables available from the scripts. The
>  build nodes would use jenkins labels as I wrote in my previous mail.
> 
> -Each node would have only two jenkins projects named "Benchmark.run" and
>  "Functional.run" that would take two jenkins parameters. One would serve
>  as a URI:
> 
>  "git=https://my_user_repo/;branch=master;sha=234df;path=tests/Functional.my_test"
> 
>  or for inbuilt fuego tests and modifications on the local filesystem (while
>  developing):
> 
>  "path=engine/tests/Benchmark.java"
> 
>  And another one that would receive a KV list with all the parameters:
> 
>  "BAUDRATES='9600 115200'; DEV='/dev/ttyUSB0'"
> 
> Then the test would be fetched if it is non-local/git-prefixed, and the
> generation scripts would run (this may need some fuego pregeneration steps).
> 
> Then ftc would not be required to start jobs; just the jenkins CLI would do.
> 
> This is a very spontaneous idea that doesn't solve a lot of problems
> (reporting, how to tell tests from each other, device setup/teardown, etc.).
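For what it's worth, the parsing side of that idea is the easy part.  Here is a
rough, untested Python sketch of a parser for "key=value;key=value" strings like
the ones above (the git/branch/sha/path field names are just the ones from your
example; none of this is existing Fuego or ftc code):

    # Toy parser for the proposed "key=value;key=value" spec strings.  The
    # field names used here come from the example above; this is not Fuego code.
    def parse_spec(spec):
        params = {}
        for field in spec.split(";"):
            field = field.strip()
            if field:
                key, _, value = field.partition("=")
                params[key.strip()] = value.strip().strip("'\"")
        return params

    loc = parse_spec("git=https://my_user_repo/;branch=master;"
                     "sha=234df;path=tests/Functional.my_test")
    if "git" in loc:
        print("fetch %s (branch %s, sha %s), then run %s" %
              (loc["git"], loc.get("branch"), loc.get("sha"), loc["path"]))
    else:
        print("run local test at %s" % loc["path"])

    env = parse_spec("BAUDRATES='9600 115200'; DEV='/dev/ttyUSB0'")
    print(env)   # {'BAUDRATES': '9600 115200', 'DEV': '/dev/ttyUSB0'}

As you say, the hard parts are everything around it (reporting, telling tests
apart, device setup/teardown), not the string format itself.
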
> 
> So it seems like we will abandon fuego for now, as we can't justify one-time
> refactors that big (supposing everyone agreed to them) to management, when my
> colleagues have already invested a lot of time getting the existing LAVA
> setup working.
> 
> It's a shame, as I like the simplicity and the potential it has. Thank you all
> for your help. I'm going to take a small vacation until the 15th, so I won't
> be answering until then.
> 
> Thanks for all the help. Hopefully we will be able to revisit the project some
> time. I hope that you find the feedback useful.

I'm going on vacation myself, so I won't be able to answer anything until after June 10.
I'm sorry that Fuego isn't able to meet your needs currently.  But I really appreciate
the feedback and the detailed descriptions of your situation, your use cases, and your
ideas for how Fuego could better meet your needs.  Reporting usage scenarios is sometimes
an undervalued activity in Open Source projects, but it is extremely valuable in helping
us identify and fix deficiencies.

Thanks for your contributions to Fuego.

Best regards,
 -- Tim


