[Fuego] [PATCH 04/10] kernel_build: add some pre_checks

Tim.Bird at sony.com Tim.Bird at sony.com
Wed Mar 28 01:30:36 UTC 2018



> -----Original Message-----
> From: Daniel Sangorrin
> 
> > -----Original Message-----
> > From: Tim Bird
> >
> > OK - this is a bit different.
> >
> > So if I understand correctly, you would have jobs like the following:
> > docker.arm.Functional.kernel_build
> > and
> > docker.arm64.Functional.kernel_build
> >
> > and something like:
> > beaglebone.arm.Functional.kernel_build
> > is not allowed.  That seems strange to me.
> >
> > Why not just always build inside the container, but ignore the
> > node name?
> 
> The reason is because Fuego will try to ssh into the node name and
> run the test there.
You can run all kinds of host-side stuff in test_run(), without contacting
the client board.  It runs in the docker container.

I have several tests that only use the target to reflect the testlog back to
the fuego system.  This is a bit awkward, and I've been thinking of adding
support for host-side-only tests.  (See Functional.fuego_lint for an example.)

There's nothing in Fuego that requires anything to execute on the target board
(with the exception of this weird log bouncing).
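Just to illustrate what I mean, a host-side-only test would look roughly like
the sketch below.  This is written from memory, so treat the variable names
(LOGDIR) and the hypothetical run_some_host_check helper as assumptions, not
as the real Functional.fuego_lint code:

    function test_build {
        # nothing to build; all the work happens on the host/container side
        :
    }

    function test_run {
        # hypothetical host-side check (runs in the container, not on the board)
        run_some_host_check > $LOGDIR/check.log 2>&1 || true
        # the only use of the board: bounce the result back into the testlog
        report "echo host_check_result=$(grep -c ERROR $LOGDIR/check.log)"
    }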

Now, there's an issue with Fuego's pre_test phase contacting the target board
to gather information (doing things like ps, free, mount, cat /proc/interrupts, etc.,
that is, stuff from ov_rootfs_state()).  It would be nice to avoid that, and even
skip checking whether the target is alive, when we're not actually going to
communicate with the board for the test.
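Just to make the idea concrete, something along these lines in the core
pre_test logic is what I'm imagining (HOST_ONLY_TEST is a made-up name here,
not an existing Fuego variable):

    # hypothetical: skip the board-side information gathering (and the
    # target-alive check) when a test declares itself host-side-only
    if [ "$HOST_ONLY_TEST" = "true" ]; then
        return 0
    fi
    ov_rootfs_state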


> We don't want to build the kernel in the node,
> we want to build it in docker.
Agreed.  It's built in the fuego container, but is not "running" on the
docker virtual test node (at least in my mind).

> I know why you are confused, but please
> notice that test_build here only _clones_ the initial source code, it does
> not build anything. The real test (building the kernel) is done in "test_run"
> which needs to run in docker, not in other nodes.
> 
> Previously I put everything in test_build, but that wasn't a good idea:
> 1) if the first time the build failed, the second time we would have to
>      clone the kernel from scratch again (slow).
> 2) if the first time the build was successful, the second time it would skip
>      the build phase and would do nothing. And we wouldn't be able to test
>      the kernel incrementally either.

I think putting the actual kernel compilation into test_run is the correct
thing.  I just think we should find a way to avoid doing it in the overall
context of the docker test node.
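To make sure we're talking about the same split, something roughly like this
is what I understand you're doing (FUNCTIONAL_KERNEL_BUILD_REPO is just a
placeholder name for whatever spec variable holds the git URL):

    function test_build {
        # only fetch the sources; keeping them around means a failed compile
        # doesn't force a re-clone, and later runs can build incrementally
        if [ ! -d linux ]; then
            git clone $FUNCTIONAL_KERNEL_BUILD_REPO linux
        fi
    }

    function test_run {
        # the real test: configure and build the kernel inside the container
        # (CROSS_COMPILE omitted here for brevity)
        cd linux
        make ARCH=$FUNCTIONAL_KERNEL_BUILD_ARCH $FUNCTIONAL_KERNEL_BUILD_CONFIG
        make ARCH=$FUNCTIONAL_KERNEL_BUILD_ARCH -j$(nproc)
    }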

OK - let me think about these problems.
 
> > Do you intend to ultimately support cross-building
> > from other architectures?
> 
> No, that is too much I think.
OK, agreed.
> 
> > See more questions below.
> >
> >
> > > -----Original Message-----
> > > From: Daniel Sangorrin
> > >
> > > - The first precheck is about the node being docker. This can
> > > be hard to understand at first, but this is a build test and
> > > it occurs in docker for now.
> > > - The second set of prechecks is about the SDK for the
> > > x86_64 builds. This was pointed out by Tim on 22nov2017.
> > > Sorry for the super slow answer.
> > >
> > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin at toshiba.co.jp>
> > > ---
> > >  engine/tests/Functional.kernel_build/fuego_test.sh | 22
> > > ++++++++++++++++++----
> > >  1 file changed, 18 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/engine/tests/Functional.kernel_build/fuego_test.sh
> > > b/engine/tests/Functional.kernel_build/fuego_test.sh
> > > index d8b3980..b88d07e 100755
> > > --- a/engine/tests/Functional.kernel_build/fuego_test.sh
> > > +++ b/engine/tests/Functional.kernel_build/fuego_test.sh
> > > @@ -3,14 +3,28 @@ FUNCTIONAL_KERNEL_BUILD_PER_JOB_BUILD="true"
> > >  function test_pre_check {
> > >      echo "Doing a pre_check"
> > >      # FIXTHIS: if making uImage, check for mkimage
> > > -}
> > >
> > > -function test_build {
> > > -    # Configuration
> > > -    if [ -z ${FUNCTIONAL_KERNEL_BUILD_ARCH+x} ]; then
> > > +    if [ "$NODE_NAME" != "docker" ]; then
> > > +        abort_job "This test can only run on docker currently."
> > > +    fi
> > > +
> > > +    if [ -z "$FUNCTIONAL_KERNEL_BUILD_ARCH" ]; then
> > >          FUNCTIONAL_KERNEL_BUILD_ARCH="x86_64"
> > >      fi
> > >
> > > +    if [ "$FUNCTIONAL_KERNEL_BUILD_ARCH" = "x86_64" ]; then
> > > +        is_on_sdk libelf.a LIBELF /usr/lib/$FUNCTIONAL_KERNEL_BUILD_ARCH-linux-*/
> > > +        assert_define LIBELF
> > > +        is_on_sdk libssl.a LIBSSL /usr/lib/$FUNCTIONAL_KERNEL_BUILD_ARCH-linux-*/
> > FUNCTIONAL_KERNEL_BUILD_ARCH is always x86_64, so why use a variable
> > here?  I think the line reads easier without the variable.
> 
> No, ARCH is the variable that goes in the kernel compilation line (e.g.: make
> ARCH=arm CROSS_COMPILE=... uImage).
> So it is not always x86_64. You can see it in the spec.json.
It is inside an if statement that requires it to be x86_64.  So, no matter
what's in the spec.json, for these uses of the variable it will always
have the value 'x86_64'.
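That is, inside that branch the check could just be written literally, e.g.:

    is_on_sdk libelf.a LIBELF /usr/lib/x86_64-linux-*/
    assert_define LIBELF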

> 
> > Is /usr/lib/arm-linux-*/libssl.a required for building the ARM kernel?
> 
> It is not, that is why the is_on_sdk are only checked for x86_64.
I'm curious what libelf.a and libssl.a are actually used for in the kernel
source.  The kernel code itself is self-contained, so I suspect these
are being used to build compilation helper tools or some other host-side
tools, and should always be x86_64.

> 
> > That seems odd.  Does that code end up in the kernel image or in some
> > tool?  If it's a tool, are you sure it isn't a host-side tool, that should
> > be x86_64, and not $FUNCTIONAL_KERNEL_BUILD_ARCH?
> >
> > I'm a bit confused by this.
> >
> > > +        assert_define LIBSSL
> > > +        is_on_sdk bison PROGRAM_BISON /usr/bin/
> > > +        assert_define PROGRAM_BISON
> > > +        is_on_sdk flex PROGRAM_FLEX /usr/bin/
> > These are obviously generic, and the same for any kernel.
> 
> I will check if they are actually needed for arm. Here I am only
> checking for x86_64.

I'm pretty sure they are needed for any architecture.  There are
lex and yacc files in the kernel source tree.  (bison and flex
are the GNU replacements).
 tlinux:~/work/torvalds/linux$ find . -name "*.y"
./tools/bpf/bpf_exp.y
./tools/perf/util/pmu.y
./tools/perf/util/expr.y
./tools/perf/util/parse-events.y
./scripts/genksyms/parse.y
./scripts/kconfig/zconf.y
./scripts/dtc/dtc-parser.y
./drivers/scsi/aic7xxx/aicasm/aicasm_gram.y
./drivers/scsi/aic7xxx/aicasm/aicasm_macro_gram.y
tlinux:~/work/torvalds/linux$ find . -name "*.l"
./tools/bpf/bpf_exp.l
./tools/perf/util/pmu.l
./tools/perf/util/parse-events.l
./scripts/genksyms/lex.l
./scripts/kconfig/zconf.l
./scripts/dtc/dtc-lexer.l
./drivers/scsi/aic7xxx/aicasm/aicasm_macro_scan.l
./drivers/scsi/aic7xxx/aicasm/aicasm_scan.l

Most of these are for host-side tools (compilation aids), but
some can end up in the kernel on the target.  Either way, the lex
and yacc tools produce parser code that should be architecture
neutral (and is later compiled by the cross-toolchain for
the correct architecture).
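If that's right, the bison/flex checks could simply move outside the
x86_64-only branch so they run for every architecture; a rough sketch using
the same helpers as in your patch:

    # host-side build tools, needed regardless of the target architecture
    is_on_sdk bison PROGRAM_BISON /usr/bin/
    assert_define PROGRAM_BISON
    is_on_sdk flex PROGRAM_FLEX /usr/bin/
    assert_define PROGRAM_FLEX

    if [ "$FUNCTIONAL_KERNEL_BUILD_ARCH" = "x86_64" ]; then
        # the libelf/libssl checks can stay arch-specific for now
        is_on_sdk libelf.a LIBELF /usr/lib/x86_64-linux-*/
        assert_define LIBELF
        is_on_sdk libssl.a LIBSSL /usr/lib/x86_64-linux-*/
        assert_define LIBSSL
    fi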

 -- Tim

> 
> Thanks,
> Daniel
> 
> >
> > > +        assert_define PROGRAM_FLEX
> > > +    fi
> > > +}
> > > +
> > > +function test_build {
> > >      if [ -z ${FUNCTIONAL_KERNEL_BUILD_CONFIG+x} ]; then
> > >          FUNCTIONAL_KERNEL_BUILD_CONFIG="defconfig"
> > >      fi
> > > --
> > > 2.7.4
> > >
> > >
> 
> 
> 

