[Ksummit-discuss] [CORE TOPIC] Reviewing new API/ABI

Andy Lutomirski luto at amacapital.net
Tue May 6 19:16:49 UTC 2014


On Tue, May 6, 2014 at 12:12 PM, Shuah Khan <shuahkhan at gmail.com> wrote:
> On Tue, May 6, 2014 at 11:58 AM,  <josh at joshtriplett.org> wrote:
>>
>> I'm interested in this topic, and I'll second that nomination.  I'd
>> like to participate in this discussion.
>>
>> We need to have better processes for vetting new syscalls and ABIs far
>> more carefully than we currently do.  Right now, we require benchmarks
>> for any claimed performance increase; it's almost a given that if you
>> post an optimization without including benchmarks in the commit message,
>> it'll get rejected with a request to come back with numbers.  We need
>> similar standards for new syscalls or other userspace ABIs: come back
>> with test programs, test coverage information, etc.
>>
>
> I am interested in this topic as well. To be effective and to keep the
> momentum going long term, we will need a way to regression-test new
> APIs, syscalls, and ABIs as they are introduced. That would require a
> look at the existing tests, and at putting some kind of framework in
> place to make testing for regressions easy. It would also mean that
> when new APIs or ABIs get added, we "strongly" encourage developers to
> add documentation and test cases.

I think there was some discussion about in-tree kernel tests.  This
might fit in.

--Andy
