[Fuego] Adding new test case to fuego

Kumar Thangavel thangavel.k at hcl.com
Mon Sep 30 05:40:14 UTC 2019


Thanks for your valuable suggestions.

I am working on this test case; the coding is partially complete.


Thanks,
Kumar.
________________________________
From: Tim.Bird at sony.com <Tim.Bird at sony.com>
Sent: 28 September 2019 06:37
To: Kumar Thangavel <thangavel.k at hcl.com>; fuego at lists.linuxfoundation.org <fuego at lists.linuxfoundation.org>
Subject: RE: Adding new test case to fuego

Kumar,

Are you doing anything with this idea?

It's been a while, so I presume not, but I'll comment on the ideas below, just in case.

> -----Original Message-----
> From: Kumar Thangavel on Tuesday, August 13, 2019 7:45 PM
>
> Yes, nice idea. Thanks for your valuable feedback.
>
> As per your suggestions, my test spec idea will be like,
>
> 1.    The test spec will get the expected mount configuration of the board
> from the user.
How?  I would suggest specifying these in a list in a text file, one per line,
that is placed in the rw board directory for a board.  That is, for a board called
'min1', I would put the file in /fuego-rw/boards/min1/expected_mounts.txt
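For illustration, such a file might look like the following. The exact format (here: device, mountpoint, type, options per line) is an assumption; the test author would need to settle on one:

```
# /fuego-rw/boards/min1/expected_mounts.txt (hypothetical format)
# device  mountpoint  fstype  options
/dev/mmcblk0p2  /      ext4  rw,relatime
/dev/mmcblk0p1  /boot  vfat  rw,relatime
```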

For extra checking (at some point in the future) you could extend this and
check additional contents of each filesystem, as follows:
For each filesystem, you could gather the filesystem information and
put it in its own file, as baseline data.  For example, I would put the
root data into a file called:
/fuego-rw/boards/min1/root_fs_data.txt
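A rough sketch of gathering that baseline data with plain shell follows. The board name and output paths mirror the suggestion above; reading /proc/self/mounts instead of running 'mount' is just one way to do it:

```shell
#!/bin/sh
# Sketch: save baseline data for a board.  Paths follow the suggestion
# above (inside the Fuego container this would be /fuego-rw/boards/...).
BOARD=min1
OUTDIR="fuego-rw/boards/$BOARD"
mkdir -p "$OUTDIR"

# Baseline 1: the full mount table, as seen by the kernel
cat /proc/self/mounts > "$OUTDIR/expected_mounts.txt"

# Baseline 2: per-filesystem data, e.g. usage of the root filesystem
df -P / > "$OUTDIR/root_fs_data.txt"
```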

> 2.    User may not be knowing all the expected file systems. But user
> might be knowing some important expected file systems. So, Test spec
> would compare each and every expected filesystems with mounted
> filesystems list.
OK, for the actual test you would create something in
fuego-core/tests/Functional.check_mounts
(creating fuego_test.sh, parser.py, and spec.json)
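A minimal fuego_test.sh skeleton along those lines might look like this. The function names follow Fuego's test API, and helpers like assert_has_program and report exist in Fuego, but the details here are a sketch, not the final test:

```shell
# Sketch of fuego-core/tests/Functional.check_mounts/fuego_test.sh

function test_pre_check {
    # make sure the board can report its mounts at all
    assert_has_program mount
}

function test_run {
    # capture the board's mount table for the parser to compare
    report "cat /proc/mounts"
}
```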

the spec.json file should include the following specs:
'default' - which performs a normal test of the filesystems, outputting pass or fail for each one
'save_baseline' - which collects the information about the mounts and sets the new
baseline data for it, by writing it into expected_mounts.txt
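A minimal spec.json with those two specs might look like this (the structure follows Fuego's spec.json format; the SAVE_BASELINE variable name is an invented example):

```json
{
    "testName": "Functional.check_mounts",
    "specs": {
        "default": {},
        "save_baseline": {
            "SAVE_BASELINE": "true"
        }
    }
}
```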

> 3.    If an expected file system is not present in the mounted file system
> list, the test will display an error and fail.
I would output the pass/fail results in TAP format.
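For example, the test's output for three expected filesystems might look like this (a hypothetical sample, just to show the TAP shape):

```
TAP version 13
1..3
ok 1 - / mounted as ext4 with expected options
ok 2 - /boot mounted as vfat with expected options
not ok 3 - /data expected mount not found
```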

> 4.    If an expected file system is present in the mounted file system
> list, the test will pass and ask the user to save the configuration.
If the data is as expected, then there should be no need to save anything.
I'm not sure I'm following this.

> 5.    If the user would like to save their configuration, the spec will save
> it in the path you mentioned.
See the 'save_baseline' spec above.

> 6.    If the user doesn't want to save their configuration, it will not be
> saved.
> 7.    The next time the user runs the test for the same board, it will take
> the expected file systems from that path and compare them with the mounted
> filesystems.
I've added a feature to not just examine the mounts, but the
actual filesystem contents as well.  But maybe it would be good to
start with processing the mounts only.

> 8.    The test will display the status of all expected filesystems.
There should be one testcase per filesystem.
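That per-filesystem loop can be sketched in shell as follows. The two-column expected file format (mountpoint, fstype) is an assumption for illustration; the block creates its own demo input so it is self-contained:

```shell
#!/bin/sh
# Sketch: emit one TAP testcase per expected filesystem.
cat > expected_mounts.txt <<'EOF'
/ ext4
/proc proc
EOF

{
    echo "1..$(wc -l < expected_mounts.txt)"
    n=0
    while read mnt fstype; do
        n=$((n + 1))
        # /proc/mounts fields: device mountpoint fstype options dump pass
        if awk -v m="$mnt" -v t="$fstype" \
              '$2 == m && $3 == t { found = 1 } END { exit !found }' /proc/mounts
        then
            echo "ok $n - $mnt ($fstype)"
        else
            echo "not ok $n - $mnt ($fstype) not mounted as expected"
        fi
    done < expected_mounts.txt
} | tee results.tap
```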

>
>           Could you please check these and provide your suggestions.
>
>
>            Also, to make things easier for users: instead of getting mount
> configurations from the user, the test spec could provide default or
> common file systems when they enter a board name. I am not sure whether
> this will work for all boards. Any suggestions on this?

Yes, Fuego could store the 'expected values' for mounted filesystems
for some common boards in fuego-ro/boards/<board_name>/expected_mounts.txt
And people could share these between each other.

I hope these suggestions are helpful.
 -- Tim

> ________________________________
>
> From: Tim.Bird at sony.com <Tim.Bird at sony.com>
> Sent: 03 August 2019 05:42:49
> To: Kumar Thangavel <thangavel.k at hcl.com>;
> fuego at lists.linuxfoundation.org <fuego at lists.linuxfoundation.org>
> Subject: RE: Adding new test case to fuego
>
>
>
> > -----Original Message-----
> > From: Kumar Thangavel
> >
> > Hi All,
> >
> >           I would like to contribute to the Fuego framework, so I am planning
> > to add a test for "file systems were mounted with correct permissions and
> > attributes".
> >
> > Is this OK to start with, or could you suggest any good test cases/ideas
> > to start working on?
>
> Thank you for wanting to contribute to Fuego.
>
> Here is some feedback on your idea.
>
> I think many people would like a simple test that verifies that file systems
> are mounted correctly.  This can easily be done using the 'mount'
> command, or by looking at mtab.
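As a concrete starting point, the mount table can be gathered like this. On modern systems /etc/mtab is typically a symlink to /proc/self/mounts, so reading the latter avoids any mtab staleness:

```shell
#!/bin/sh
# Sketch: extract mountpoint, fstype and options from the kernel's mount
# table (/proc/self/mounts fields: device mountpoint fstype options ...).
awk '{ print $2, $3, $4 }' /proc/self/mounts > current_mounts.txt
head -n 3 current_mounts.txt
```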
>
> In order to make this test general-purpose, you will probably want to
> allow the user to hand in board-specific data to the test, that reflects
> what their board is supposed to have mounted.  If the comparison
> is just done with static code, it will be hard for others to use this test
> in their scenario.
>
> Also, you may want to consider whether you want to test all mounted
> filesystems, or just the "real" ones (like those of type ext4, nfs, etc.
> as opposed to pseudo filesystems of type tmpfs, cgroup, etc., or the
> weird snap ones of type squashfs used by Ubuntu).
>
> So you might define multiple test specs (variants) that let the user
> choose whether to only check 'real' filesystems, or to check all
> filesystems, or filesystems of a particular type.
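The real-vs-pseudo filtering could be done with a skip list like the one below. The list of pseudo types is an assumption; a test spec variant could make it configurable:

```shell
#!/bin/sh
# Sketch: keep only "real" filesystems by skipping common pseudo types.
PSEUDO="proc sysfs tmpfs devtmpfs devpts cgroup cgroup2 squashfs overlay debugfs tracefs securityfs pstore mqueue"

awk -v skip="$PSEUDO" '
    BEGIN { n = split(skip, a, " "); for (i = 1; i <= n; i++) s[a[i]] = 1 }
    !($3 in s) { print $2, $3 }
' /proc/self/mounts > real_mounts.txt
wc -l real_mounts.txt
```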
>
> I have been thinking for a while about how to make it easy for people
> to generalize tests for their own use.  I think that the local customization
> of tests (with expected values for the local use case) is one of the big
> barriers to people sharing tests.
>
> I've been thinking it would be a good idea to allow the user to provide
> data about their expected values.  Also, I think it would be good to
> have a way to very easily update the expected values to ones that
> match their configuration of Linux.
>
> One thing I've considered is adding a spec to perform an "expected value
> update".
> What this would do is take the current data from the system, and set the
> expected value for the test to that data.
>
> For example, if your test had the spec "default", that did a mount command
> and compared with a text file that had the expected results for 'mount',
> then you could easily detect if there was a difference in the data.
>
> If your test had another spec "update", that did a mount command and set
> the expected results from the data that was returned, then the following
> flow would allow a user with a different mounted filesystem configuration
> to use your test:
>
> 1) you publish the test with the expected mount configuration for your board
> 2) another user runs the test and sees errors, because the mount
> configuration for their board is different
> 3) if the user verifies that their current mount configuration is actually OK, then
> 4) the user can run the test with the "update" spec, to save their mount
> configuration data to the expected data file (saving it into, say, the
> /fuego-rw/boards/<board>/ directory)
> 5) the user can then use the test to verify the mount status of their board(s)
> 6) the user could potentially publish their expected results, to augment your
> test, for other people to use with boards that had a similar configuration as theirs
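The default/update pair can be sketched end-to-end in plain shell. The file names and spec handling here are placeholders; a real Fuego test would route this through fuego_test.sh and the spec.json variants:

```shell
#!/bin/sh
# Sketch of the two-spec flow: "update" saves the current mount table as
# the expected baseline; "default" compares the current table against it.
SPEC=${1:-default}
BASELINE=expected_mounts.txt

if [ "$SPEC" = "update" ] || [ ! -e "$BASELINE" ]; then
    cat /proc/self/mounts > "$BASELINE"
    echo "baseline saved to $BASELINE"
fi

cat /proc/self/mounts > current_mounts.txt
if diff -q "$BASELINE" current_mounts.txt > /dev/null; then
    echo "PASS: mounts match baseline"
else
    echo "FAIL: mounts differ from baseline"
fi
```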
>
> Does that all make sense?
>
> Please let me know your ideas for making a mounted filesystem verification test.
> I'd be happy to discuss with you ideas for making it a nice, generic,
> reusable test, and a nice addition to Fuego.
>  -- Tim
>
>
> --------------------------------------------------------------
