[Ksummit-discuss] [MAINTAINERS SUMMIT] & [TECH TOPIC] Improve regression tracking

Eric W. Biederman ebiederm at xmission.com
Wed Aug 2 17:33:27 UTC 2017


Shuah Khan <shuahkh at osg.samsung.com> writes:

> On 07/31/2017 10:54 AM, Eric W. Biederman wrote:
>> Steven Rostedt <rostedt at goodmis.org> writes:
>> 
>>> On Wed, 5 Jul 2017 09:48:31 -0700
>>> Guenter Roeck <linux at roeck-us.net> wrote:
>>>
>>>> On 07/05/2017 08:27 AM, Steven Rostedt wrote:
>>>>> On Wed, 5 Jul 2017 08:16:33 -0700
>>>>> Guenter Roeck <linux at roeck-us.net> wrote:  
>>>> [ ... ]
>>>>>>
>>>>>> If we start shaming people for not providing unit tests, all we'll accomplish is
>>>>>> that people will stop providing bug fixes.  
>>>>>
>>>>> I need to be clearer on this. What I meant was: if there's a bug
>>>>> for which someone has a test that easily reproduces it, and no test
>>>>> for that bug gets added to selftests, then we should shame them into
>>>>> doing so.
>>>>>   
>>>>
>>>> I don't think that public shaming of kernel developers is going to work
>>>> any better than public shaming of children or teenagers.
>>>>
>>>> Maybe a friendlier approach would be more useful?
>>>
>>> I'm a friendly shamer ;-)
>>>
>>>>
>>>> If a test to reproduce a problem exists, it might be more beneficial to suggest
>>>> to the patch submitter that it would be great if that test were submitted
>>>> as a unit test, instead of shaming that person for not doing so. Acknowledging
>>>> and praising kselftest submissions might help more than shaming for non-submissions.
>>>>
>>>>> Bugs found by inspection, or ones with hard-to-reproduce test cases,
>>>>> are not applicable, as they don't have tests that can show a regression.
>>>>>   
>>>>
>>>> My concern would be that once the shaming starts, it won't stop.
>>>
>>> I think this is a communication issue. By "shaming" I meant calling
>>> out a developer for not submitting a test. It wasn't about making fun
>>> of them, or anything like that. I was only making a point about how
>>> to teach people to be more aware of the testing infrastructure, not
>>> about actually demeaning people.
>>>
>>> Let's take a hypothetical example. Say someone posted a bug report with
>>> an associated reproducer for it. The developer then runs the reproducer,
>>> sees the bug, makes a fix and sends it to Linus and stable. Now the
>>> developer forgets this and continues on their merry way. Along comes
>>> someone like myself who sees a reproducing test case for a bug, but
>>> sees no test added to kselftests. I would send an email along the lines
>>> of "Hi, I noticed that there was a reproducer for this bug you fixed.
>>> How come there was no test added to the kselftests to make sure it
>>> doesn't appear again?" There, I "shamed" them ;-)
>> 
>> I just want to point out that kselftests are hard to build and run.
>> 
>> As I was looking at another issue, I found a bug in one of the tests: it
>> had defined a constant incorrectly.  I have a patch.  It took me a week of
>> poking at the kselftest code and trying one thing or another (between
>> working on other things) before I could figure out which combination of
>> things would let the test build and run.
>> 
>> Until kselftests get easier to run, I don't think they are something we
>> want to push too hard.
>> 
>
> I would say it is easy to run kselftests - "make kselftest" from the
> main Makefile does this for you. You can also run individual tests:
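
(Roughly, the commands being referred to - the exact target names and the
available TARGETS values vary by kernel version, and "timers" below is
just one example test suite:)

$ make kselftest
$ make -C tools/testing/selftests TARGETS=timers run_tests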

On 4.13-rc1 that doesn't work:

$ make O=$PWD-build -j8 kselftests
make[1]: Entering directory 'linux-build'
make[1]: *** No rule to make target 'kselftests'.  Stop.
make[1]: Leaving directory 'linux-build'
Makefile:145: recipe for target 'sub-make' failed
make: *** [sub-make] Error 2

And why I have to use some esoteric command and not just the
traditional "make path/to/test/output" to run an individual
test is beyond me.
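
(For comparison, using one test as an illustrative example: the
"traditional" style would be something like
"make tools/testing/selftests/ptrace/peeksiginfo", which is what does
not work here, whereas the documented route is

$ make -C tools/testing/selftests TARGETS=ptrace run_tests

with the available TARGETS values depending on the kernel version.)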

Eric

