[Ksummit-discuss] [CORE TOPIC] Recruitment (Reviewers, Testers, Maintainers, Hobbyists)

Theodore Ts'o tytso at mit.edu
Thu Jul 9 23:56:45 UTC 2015


On Thu, Jul 09, 2015 at 04:13:52PM -0700, josh at joshtriplett.org wrote:
> 
> That assumes the patch actually has issues.  To use the reviews I do on
> RCU patches as an example, in a patch series, I might reply to a few
> patches with "here are some issues; with those fixed, Reviewed-by...",
> and then reply to the remaining unproblematic patches (individually or
> in aggregate) with just the Reviewed-by.

Right, that's why I talked about doing this in a holistic way.  It's
true that individual patches, and maybe even all of the patches in a
patch series, might be *perfect*.  But presumably that won't be true
for *all* of the patches you review.

This is why creating a system for patch reviewer ranking is so
complicated.  You need to look at the entire corpus of patches
reviewed by each reviewer, so (for example) you can find out which
reviewers are letting bad patches get through to Linus by giving a
thumbs-up to patches that later had to be reverted.
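
Just to make that concrete, here's a rough sketch of what such a scan
could look like.  This is purely hypothetical (the names are mine);
it assumes it's run inside a kernel git checkout, and that reverts
carry the standard "This reverts commit <sha>" line that git revert
generates:

#!/usr/bin/env python3
# Hypothetical sketch: rank reviewers by how often commits carrying
# their Reviewed-by tag were later reverted.  Assumes a kernel git
# checkout and the standard "This reverts commit <sha>" revert line.
import re
import subprocess
from collections import Counter

def git_log(*args):
    return subprocess.run(["git", "log", "--no-merges", *args],
                          capture_output=True, text=True,
                          check=True).stdout

# Map each commit sha to its Reviewed-by trailers.  "--end--" is just
# a record separator that shouldn't appear in commit messages.
out = git_log("--format=%H%n%(trailers:key=Reviewed-by,valueonly)%n--end--")
reviewed = {}
for record in out.split("--end--"):
    lines = [l.strip() for l in record.strip().splitlines() if l.strip()]
    if lines:
        reviewed[lines[0]] = lines[1:]

# Collect the shas named by revert commits, normalizing abbreviated
# shas against the full ones we already have.
revert_re = re.compile(r"This reverts commit ([0-9a-f]{12,40})")
reverted = set()
for sha in revert_re.findall(git_log("--grep=This reverts commit",
                                     "--format=%B")):
    reverted.update(full for full in reviewed if full.startswith(sha))

# Per-reviewer totals and revert counts.
total, bad = Counter(), Counter()
for sha, reviewers in reviewed.items():
    for r in reviewers:
        total[r] += 1
        if sha in reverted:
            bad[r] += 1

for r, n in total.most_common():
    print(f"{bad[r]:4d}/{n:<5d} {100 * bad[r] / n:5.1f}%  {r}")

A real system would of course need to be much smarter than this
(weighting by subsystem, by patch complexity, by how long the
regression took to surface, and so on), but even a crude revert rate
like this would give you something to start from.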

(At $WORK, if someone creates a CL that causes a massive failure, it's
not just the engineer who submitted the CL who is at fault, but also
the reviewers who gave the CL a positive review --- and that's as it
should be.  Of course we also ask if there is something about the
development and regression test environment that might be a systematic
cause of problems, but if someone is consistently giving LGTM's
without doing enough due diligence, that's an issue that ultimately
needs to be addressed by the engineer's manager.  The question is how to
deal with this kind of failure mode in an open source, volunteer
world.)

					- Ted

