[Ksummit-discuss] [CORE TOPIC] Recruitment (Reviewers, Testers, Maintainers, Hobbyists)

Luis R. Rodriguez mcgrof at do-not-panic.com
Thu Jul 9 23:30:04 UTC 2015


On Thu, Jul 9, 2015 at 4:11 PM,  <josh at joshtriplett.org> wrote:
> On Thu, Jul 09, 2015 at 05:24:06PM -0400, Julia Lawall wrote:
>> On Thu, 9 Jul 2015, josh at joshtriplett.org wrote:
>>
>> > On Thu, Jul 09, 2015 at 10:38:30PM +0200, Luis R. Rodriguez wrote:
>> > > On Thu, Jul 09, 2015 at 01:11:27PM -0700, josh at joshtriplett.org wrote:
>> > > > Bonus if this is also wired into the 0day bot, so that you also find out
>> > > > if you introduce a new warning or error.
>> > >
>> > > No reason to make bots do stupid work; if we really wanted to consider
>> > > this a bit more seriously, the pipeline could be:
>> > >
>> > >   mailing-list | coccinelle coccicheck | smatch | sparse | 0-day-bot
>> >
>> > That would effectively make the bot duplicate part of 0-day.  Seems
>> > easier to have some way to tell 0-day "if you see obvious procedural
>> > issues, don't bother with full-scale testing, just reject".
>>
>> Not sure I understand.  Isn't it better to have the most feedback
>> possible?
>
> If 0-day has enough bandwidth, sure.  However, if this is going to
> encourage a large number of new contributors to quickly iterate a pile
> of patches, many of which are likely to have basic procedural issues in
> the first few iterations, then that may waste quite a lot of build time
> in 0-day.

I'm not being empathetic towards machines, as they are not [yet]
sentient, but has anyone estimated the average cost of a full 0-day
test cycle on a full kernel tree? I'm all for using machines, when
available, to do our bidding; I was simply trying to consider how
we can be more efficient about it. That's all.
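
For concreteness, the individual stages of such a pre-filter can
already be run by hand today; a rough sketch of what that could look
like (the patch file name, the drivers/foo/ directory and the smatch
install path are just placeholders):

  # procedural sanity check of the patch itself
  ./scripts/checkpatch.pl --strict 0001-my-change.patch

  # Coccinelle semantic checks, limited to the touched directory
  make coccicheck MODE=report M=drivers/foo/

  # sparse over the files being rebuilt
  make C=1 drivers/foo/

  # same build, but with smatch as the checker
  make CHECK="/path/to/smatch/smatch -p=kernel" C=1 drivers/foo/

The pipeline above would essentially chain these stages in front of
the full 0-day build, so obviously broken submissions get bounced
before any heavy build time is spent.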

 Luis

