[Ksummit-discuss] [TOPIC] Application performance: regressions, controlling preemption

Jan Kara jack at suse.cz
Tue May 13 12:27:24 UTC 2014


On Tue 13-05-14 01:16:48, Greg KH wrote:
> On Mon, May 12, 2014 at 10:32:27AM -0400, Chris Mason wrote:
> > Hi everyone,
> > 
> > We're in the middle of upgrading the tiers here from older kernels 
> > (2.6.38, 3.2) into 3.10 and higher.
> > 
> > I've been doing this upgrade game for a number of years now, with 
> > different business cards taped to my forehead and with different target 
> > workloads.
> > 
> > The result is always the same...if I'm really lucky the system isn't 
> > slower, but usually I'm left with a steaming pile of 10-30% regressions.
> 
> How long have we been having this discussion?  8 years?  It's not like
> people don't know that performance testing needs to be constantly
> happening, we've been saying that for a long time.  It's just that no
> one seems to listen to us :(
  I agree with you that this has been coming up over and over again for
quite a few years.

> And that is the larger problem, what can we do about that issue.
> Honestly, I don't think much, as it takes money from companies to commit
> to do this work, which no one seems to ever want to do.  What makes this
> year the year that something different happens?
  So what I found interesting about the topic is:
1) What has regressed this time in particular? This might be interesting
for a broader audience, I believe. With my SUSE hat on, I can learn which
problems I should expect our customers to see that we didn't find during
our testing.  With my community hat on, I can possibly learn some things
which are easy enough to test for...

2) We are looking into doing more continuous testing at SUSE, so learning
what others have tested would be interesting to me.
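As a rough illustration of the kind of check such continuous testing could
automate (the benchmark names, the numbers, and the 10% threshold below are
all hypothetical, not taken from this thread), a minimal sketch:

```python
# Hypothetical sketch: flag benchmarks whose mean runtime regressed
# beyond a threshold between two kernel versions. All names and numbers
# are illustrative examples, not real measurements.

def find_regressions(baseline, current, threshold=0.10):
    """Return {name: fractional slowdown} for benchmarks that got slower
    than `threshold` (0.10 = 10%) relative to the baseline kernel."""
    regressions = {}
    for name, old in baseline.items():
        new = current.get(name)
        if new is None:
            continue  # benchmark was not run on the new kernel
        slowdown = (new - old) / old
        if slowdown > threshold:
            regressions[name] = slowdown
    return regressions

# Made-up runtimes in seconds on an old vs. a new kernel:
old_times = {"fio-randread": 10.0, "dbench": 20.0, "compile": 30.0}
new_times = {"fio-randread": 12.5, "dbench": 19.0, "compile": 33.5}
print(find_regressions(old_times, new_times))
# → {'fio-randread': 0.25, 'compile': 0.11666666666666667}
```

In practice one would of course want repeated runs and some statistics to
separate real regressions from run-to-run noise; this only shows the shape
of the comparison.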

								Honza
-- 
Jan Kara <jack at suse.cz>
SUSE Labs, CR
