[Ksummit-2013-discuss] [ATTEND] static checking; COMPILE_TEST

Alexey Khoroshilov khoroshilov at ispras.ru
Fri Jul 19 16:05:04 UTC 2013


> On 07/17/2013 11:59 AM, Mark Brown wrote:
>> On Wed, Jul 17, 2013 at 12:21:13PM +0300, Dan Carpenter wrote:
>>
>>> I would prefer if people don't silence false positives unless it
>>> makes the code more readable to do that.
>>
>> Indeed, and this has been a real problem with some of the reports
>> from checkers - I see a lot of patches which do things like just
>> set a variable to some value without any analysis to see if there's
>> code paths that should be setting it but don't.
>
> Yes, this is exactly my point. There are outputs of analyzers (I take
> coverity as an example), but maintainers ignore those (one random
> example is at [1]). Then people who do not understand the code well
> enough come up with inappropriate fixes [2] (again a random example),
> when the issue should instead be marked as a false positive and
> dismissed (again, coverity allows that in their web interface).
> Ignoring the reports effectively means we are not fixing 0-day bugs
> like the one where gcc removed a NULL pointer check, allowing root
> access. That one was reported by coverity three months before the
> exploit was even made public [3].
>
> From my point of view, I can only confirm this. I recall that when I
> provided kernel developers with the output of our static analyzer
> (stanse), only a few people looked at it. I had to send patches on my
> own, and of course some of them were wrong (inappropriately hiding a
> false positive).
>
> OTOH the false positive rate is very high due to the complexity of
> the code. It tends to be above 50%.
> This is why I would like to discuss this.
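
The NULL pointer case mentioned above is worth spelling out, because the
pattern is easy to miss in review. Roughly (a minimal user-space sketch
of that bug class, not the actual driver code), it looks like this:

#include <stdio.h>

struct ctx {
	int value;
};

/*
 * The pointer is dereferenced before the NULL check, so the compiler
 * is entitled to assume it cannot be NULL and to delete the check as
 * dead code.
 */
static int read_value(struct ctx *ctx)
{
	int v = ctx->value;	/* dereference happens first... */

	if (!ctx)		/* ...so gcc may drop this test entirely */
		return -1;

	return v;
}

int main(void)
{
	struct ctx c = { .value = 42 };

	printf("%d\n", read_value(&c));
	/*
	 * read_value(NULL) would crash: the check comes too late to
	 * protect anything, and the compiler may remove it anyway.
	 */
	return 0;
}

Because the dereference comes first, the check protects nothing, and a
static checker that flags dereference-before-check finds this class of
bug much more reliably than a human reviewer does. If I remember
correctly, that incident is also why the kernel started building with
-fno-delete-null-pointer-checks.
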
I would also be interested in discussing this topic from the point of
view of our LDV experience [1].

In my opinion, to make static analysis tools more effective within the
kernel development process, we have to shift the focus of analysis to
patches while they are still circulating, before they land in mainline.
At that stage an analysis tool can help reviewers consider a patch from
various points of view, and since the reviewer is already focused on
that particular chunk of code, it is easy to make a decision about a
related report from the tool. And if the tool supports a database of
known false positives, information about that decision can help improve
subsequent analyses. By the way, that is the direction Aiaiai [2] takes.
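
To illustrate what I mean by such a database (a minimal sketch only,
not how Aiaiai actually implements it; the file name and the
fingerprint format are made up for the example): a small wrapper around
the checker can drop reports a reviewer has already dismissed, so that
only new findings reach the patch review.

#include <stdio.h>
#include <string.h>

#define MAX_KNOWN	4096
#define LINE_LEN	512

/* Fingerprints of reports a reviewer has already dismissed. */
static char known[MAX_KNOWN][LINE_LEN];
static int nknown;

static void load_false_positives(const char *path)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return;	/* no database yet: nothing to suppress */

	while (nknown < MAX_KNOWN && fgets(known[nknown], LINE_LEN, f)) {
		known[nknown][strcspn(known[nknown], "\n")] = '\0';
		nknown++;
	}
	fclose(f);
}

static int is_known_false_positive(const char *report)
{
	int i;

	for (i = 0; i < nknown; i++)
		if (strcmp(report, known[i]) == 0)
			return 1;
	return 0;
}

int main(void)
{
	char line[LINE_LEN];

	load_false_positives("known-false-positives.txt");

	/*
	 * Analyzer reports arrive on stdin, one fingerprint per line,
	 * e.g. "drivers/foo/bar.c:123:possible NULL dereference".
	 * Only reports the reviewer has not yet dismissed are printed.
	 */
	while (fgets(line, LINE_LEN, stdin)) {
		line[strcspn(line, "\n")] = '\0';
		if (!is_known_false_positive(line))
			puts(line);
	}
	return 0;
}

A reviewer who decides a report is a false positive just appends its
fingerprint to the file, and the next run over a rebased version of the
patch stays quiet about it.
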

[1] http://linuxtesting.org/results/ldv
[2] http://git.infradead.org/users/dedekind/aiaiai.git

--
Alexey Khoroshilov
Linux Verification Center, ISPRAS
http://linuxtesting.org

