[Fuego] Patch for Functional.check_mounts

Tim.Bird at sony.com
Sat Jan 18 00:12:23 UTC 2020


Kumar,

OK - I accepted your test, and have committed it to the master branch.

I made several additions to the test in subsequent commits, which you
can review in the master branch.

Here are some features I added:
 - Using the 'save_baseline' spec is no longer required.  If the
 baseline data for the mounted filesystems is missing, it will
 automatically be created on the first run of the test.  A user
 can still create a "save_baseline" job, if they want to be able
 to automatically update the baseline data from the Jenkins
 interface (e.g. after some change is made which alters the
 mount points or options, and the user wants to update the
 baseline data).
 - I don't filter the mount data down to just the first field.
 For things like cgroups, tmpfs, or snaps, that would eliminate
 a lot of data about the mounted filesystem that a user might
 want to verify has not changed.
 - I added the ability to filter the mount data with a regular expression.
 You can now specify a spec which will filter the data to only a specific
 set of filesystems, mount devices, or attributes (anything which shows
 up in the 'mount' line).
 - I use the output from 'mount' rather than /proc/mounts.
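
The behavior described above (auto-creating a missing baseline, and
filtering the 'mount' output with a regular expression) could be sketched
roughly as follows.  This is an illustrative sketch only, not the actual
Fuego test code; the function name, file paths, and variable names here
are all hypothetical:

```shell
# Hypothetical sketch of the check_mounts logic; not Fuego's actual code.
# $1 = baseline file, $2 = optional filter regex ('.' matches every line)
check_mounts() {
    baseline="$1"
    filter="${2:-.}"

    # Capture the current 'mount' output, keeping only lines that
    # match the filter regex (filesystem, device, or mount attributes)
    mount | grep -E "$filter" | sort > /tmp/mounts.current

    if [ ! -f "$baseline" ]; then
        # First run: auto-create the baseline instead of aborting
        cp /tmp/mounts.current "$baseline"
        echo "baseline created"
        return 0
    fi

    # Succeed only if the mounted filesystems match the baseline
    if diff -u "$baseline" /tmp/mounts.current; then
        echo "match"
    else
        echo "mismatch"
        return 1
    fi
}

check_mounts /tmp/mounts.baseline 'tmpfs'
```

A real Fuego test would report results through the framework's
PASS/FAIL functions rather than echo, but the control flow is similar.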

Please run the test, and let me know if it works for you as you would like.
Let me know if you see any problems with the test.

I think this is a nice addition to Fuego's test suite.

Thanks and regards,
 -- Tim


> -----Original Message-----
> From: Bird, Tim
> 
> > From: Kumar T <kumar.hcl.ers.epl at gmail.com>
> > Subject: Re: [Fuego] Patch for Functional.check_mounts
> >
> > Hi Tim,
> >
> >        I have restructured the code as per your suggestions. I have taken one of your patches as a reference and created this "check_mounts" test
> > case without using C code.  I have fixed all the review comments.
> >
> >        Please find the attached PATCH and other source files.
> >        Could you please review this patch.
> 
> I took a look.  I modified a few minor things, but overall the code looks good.
> The submission is a bit weird.  You've got a patch against previous materials,
> along with all the raw files that comprise the test.  I was able to figure out
> where stuff goes and install it myself, but I couldn't use my normal patch
> application and review steps.  In the future, patches should be submitted
> as patches against pristine upstream code (either from the master branch
> or next branch, whichever makes sense).  I think you have part of this
> test already committed in your git tree.  You should move the code to a
> pristine upstream source tree, commit it, format the patch from
> that commit, and then send that for review.
> 
> Because the code is not inline in the email message body, it's hard to review
> by quoting specific lines, but I'll put some general notes below.
> 
> 
> Here are a few items I saw and just fixed myself:
> I modified the code so that the 'awk' command is only run on the host, not on the
> target.  Then I removed the assert_has_program awk.
> 
> I modified the wording in a few places to refer to 'mounted filesystems'
> rather than 'mount points'.  And I changed 'is' to 'are' in a few places.
> 
> I had an idea to simplify usage of the test, which is to save a baseline file
> instead of abort, if no baseline file is found.  The saved baseline file
> would have a warning saying it was autogenerated and should be reviewed
> by a human for correctness before being used for testing for regressions.
> 
> This would mean that the flow of usage of the test would change:
> 
> Currently, the flow is:
>  - user creates a save_baseline job and a default job for check_mounts
>  - user verifies that filesystem mounts are correct on their system
>  - user executes the save_baseline spec of Functional.check_mounts
>    - this creates a baseline file
>  - user periodically executes the default spec of Functional.check_mounts
>    - the test succeeds if the mounts are the same as the baseline, and fails otherwise
> 
> The new flow would be:
>  - user creates a default job for check_mounts
>  - user executes Functional.check_mounts
>    - since no baseline file exists, it creates one with a warning message
>    that the data needs to be verified
>  - user reviews the baseline data to verify that the mounts are correct
>    and removes the warning message from the baseline data
>  - user periodically executes Functional.check_mounts
>    - test succeeds if the mounts are the same as the baseline, and fails otherwise
>    - Note that the test fails with a warning that the baseline data has not
>    been verified, if the warning message has not been removed from the baseline data.
> 
> This avoids the need for a separate spec (the 'save_baseline' spec) for the test,
> but still requires manual intervention in order to make the test succeed.
> At some point the user needs to review that the baseline data is correct, before
> it can be used to test for regressions.  But now the test (and Fuego) automatically
> generate the first draft of the baseline data, when the test is executed the first
> time.
> 
> Let me know what you think.
> 
> Finally I wanted to test it before committing it, but I'm having problems with
> access to my lab today.  I inadvertently wiped out some of my containers and
> I'm having to rebuild some things, AND I'm accessing my lab remotely, so things
> are a bit complicated today.
> 
> I may not finish my testing today, and I'm traveling tomorrow, so unfortunately
> I may not be able to commit the code until later this week.  Ping me if you don't
> hear back from me in a few days.
> 
> Thanks for the submission.
>  -- Tim
> 



More information about the Fuego mailing list