[Fuego] Configure: error: C preprocessor "aarch64-linux-gnu-cpp" fails sanity check (netperf, dbench tests)

Tim.Bird at sony.com
Tue Mar 27 00:08:02 UTC 2018



> -----Original Message-----
> From: Daniel Sangorrin on Monday, March 26, 2018 4:53 PM
> > -----Original Message-----
> > From: fuego-bounces at lists.linuxfoundation.org
> > [mailto:fuego-bounces at lists.linuxfoundation.org] On Behalf Of
> > Tim.Bird at sony.com
> > > -----Original Message-----
> > > From: Dhinakar Kalyanasundaram
> > > Hi Tim,
> > >
> > > Thanks for reporting back the status.  Let us know if any tests behave
> > > strangely
> > > with the different toolchains.
> > >
> > > ----> Sure I will keep you informed on this
> > >
> > > I am running kernel and dtb images built using the Linaro toolchain,
> > > and the RFS is from Buildroot.
> > >
> > > Earlier I used to use my own shell script configured as given below.
> > >
> > > But later I started using the toolchain installed inside the docker
> > > container.
> >
> > Interesting.  Does buildroot produce an SDK or toolchain?
> >
> > > I am not sure if the script below is correct/sufficient, or whether
> > > I need to set all of the FLAGS as well.
> >
> > Your call to 'export_tools' will set the other required environment
> > variables.
> >
> > The biggest problems we see with mis-matched toolchains and distros
> > are with libraries.  Some tests require libraries that the SDK or
> > distro on the board doesn't provide.
> >
> > If the tests are building OK and the test binaries are running OK on
> > the board, I think you're in good shape.  It's possible, but much less
> > likely, that a toolchain mismatch will produce subtle errors.  Usually
> > the build fails or the test execution (in test_run()) fails.
> 
> I see two solutions to the problems with toolchains (people using
> mis-matched toolchains, or lacking the libraries to build/run the tests):
> - Modify all fuego_test.sh files (as in dbench) to preferentially use
>   test binaries that already exist on the target.  This way people could
>   add their tests to the filesystem (e.g. with yocto or debian) and not
>   have to care about a more advanced SDK setup.  We would probably need
>   to check that the fuego_test.sh/parser.py support the test version
>   installed on the target.
Yeah.  Siemens asked quite a while ago to be able to use test programs
already in the filesystem.  If we have something like a single program name,
then we can do something in pre-test to check for that program name, and
if it's present have Fuego skip the build and deploy.  This can be done
more simply than what we did for LTP, since most other tests have fewer
dependencies.

What I envision is basically the same operations you did in dbench, but
maybe a little more automatic.  For example, we could declare:
TEST_PROGRAM=dbench
and have the dependency system check for it automatically, with the build
and deploy steps short-circuited automatically if it's found on the board.
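
Roughly like this (an untested sketch; TEST_PROGRAM is the proposed
declaration, the 'cmd' and 'put' helpers are the existing Fuego ones, but
the PROGRAM_ON_TARGET flag is a made-up name, not an existing Fuego
variable):

    TEST_PROGRAM=dbench

    function test_pre_check {
        # 'cmd' runs the command on the target board; if the program is
        # already installed there, remember that so build/deploy can skip
        if cmd "command -v $TEST_PROGRAM >/dev/null 2>&1" ; then
            PROGRAM_ON_TARGET=true
        fi
    }

    function test_build {
        # short-circuit the cross-build when the board already has the binary
        [ "$PROGRAM_ON_TARGET" = "true" ] && return 0
        make dbench
    }

    function test_deploy {
        [ "$PROGRAM_ON_TARGET" = "true" ] && return 0
        put dbench $BOARD_TESTDIR/fuego.$TESTDIR/
    }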

But the second issue you raise, making sure that the Fuego test works
correctly with the test program already on the target, is important.
Doing so may be problematic in the general case.  If we don't do good
checking, we may run into subtle bugs.  But checking should not be too
difficult for most tests.
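
For most tests a version gate like this would probably be enough (again
just a sketch; I'm assuming the program can report its version somehow,
and using abort_job to bail out of the run):

    function test_pre_check {
        # ask the on-target binary for its version, and refuse to run if
        # our parser.py doesn't know how to handle it
        local ver=$(cmd "dbench --version 2>/dev/null")
        case "$ver" in
            *4.*) ;;   # versions the dbench parser is known to handle
            *) abort_job "unsupported dbench version on target: $ver" ;;
        esac
    }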

> - When building a test, try to use static linking to avoid library mismatches as
> much as possible (as in iperf3).


Agreed.

This is good practice for tests in general, and we should encourage it and
use build instructions that do static linking when we can.
A test should have as few dependencies on the target as possible (besides
the things it is actually testing).
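
For an autotools-based test that can be as simple as the following (a
sketch, assuming the usual $HOST cross variable from the toolchain setup;
whether a given configure script honors a static LDFLAGS varies from test
to test, so it's worth checking the result with 'file' or 'ldd'):

    function test_build {
        # request a fully static binary so the test doesn't depend on
        # shared libraries being present on the board
        ./configure --host=$HOST LDFLAGS="-static"
        make
    }
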
 -- Tim


