[Bugme-new] [Bug 4379] New: Default sampling rates for ondemand governor are too high on an amd64

bugme-daemon at osdl.org bugme-daemon at osdl.org
Mon Mar 21 04:22:12 PST 2005


http://bugme.osdl.org/show_bug.cgi?id=4379

           Summary: Default sampling rates for ondemand governor are too
                    high on an amd64
    Kernel Version: 2.6.11
            Status: NEW
          Severity: normal
             Owner: cpufreq at www.linux.org.uk
         Submitter: gpiez at web.de


Distribution: Gentoo   
Hardware Environment: AMD Athlon(tm) 64 Processor 3200+, Clawhammer, 754   
Software Environment:   
Problem Description: Default sampling rates for ondemand governor are too 
high on an amd64. This leads to bad responsiveness for desktop apps.
   
Steps to reproduce: 
 
I gave the ondemand governor a try and noticed a slow system (bad 
responsiveness).
  
The frequencies/voltages seemed to switch fine, so I wondered where this 
sluggishness came from.
 
I tried  
  
# watch -n 0.1 cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq  
  
and watched the output while starting "kdevelop" (huge, bloated, very nice 
app). At maximum frequency it takes about 3 seconds to start if everything 
is in the disk cache. Now it needed 5-6 seconds. I noticed the frequency 
didn't go up at the moment I started it, but about 1-2 seconds later, when 
the app was almost loaded.
Investigating further, I inspected the values 
at /sys/devices/system/cpu/cpu0/cpufreq/ondemand.
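
For reference, the tunables discussed below are plain sysfs files in that 
directory, readable with cat and writable (within limits) with echo:

# ls /sys/devices/system/cpu/cpu0/cpufreq/ondemand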
  
It turns out that on my system (Clawhammer 754, 2 GHz, 1 MB L2 cache) the 
default sampling rates are quite high:
 
ondemand # cat sampling_rate  
1240000  
ondemand # cat sampling_down_factor  
10  
  
This means that for an upward frequency transition, the CPU usage is sampled 
every 1.24 seconds, and for scaling downwards every 12.4 (!) seconds. 
Consequently, any action that takes less than about a second (for instance 
opening a konqueror window) is likely to run entirely at the slow speed 
(800 MHz in my case), while actions that take longer than the sampling 
period reach full speed only after a noticeable latency and so need up to 
3 seconds longer to complete (1.24 seconds * 2.5 speed factor, 2.5 being 
the ratio between 2 GHz and 800 MHz). This is exactly what I observed.
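
The arithmetic, for illustration (sampling_rate is in microseconds; 2.5 is 
the 2000 MHz / 800 MHz speed ratio):

# echo "scale=2; 1240000 / 1000000" | bc        # upward sampling interval: 1.24 s
# echo "scale=2; 1240000 * 10 / 1000000" | bc   # downward sampling interval: 12.40 s
# echo "scale=2; 1.24 * 2.5" | bc               # the "3 seconds longer" estimate: ~3.1 s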
 
Additionally, scaling down takes 12.4 seconds; during this time the CPU 
runs at the full 2 GHz while doing essentially nothing, which is unnecessary 
power consumption.
 
I think sampling_rate_min and sampling_rate_max are way too high. What use 
could a 620-second sampling interval possibly have? At the other end, the 
rate can't be set faster than 0.62 s, which only cuts the startup lag to 
about 1.5 seconds (0.62 s * 2.5). For me this is not sufficient.
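
Both bounds are exported next to sampling_rate; on this machine they read 
back as follows (microseconds, matching the 0.62 s and 620 s figures above):

ondemand # cat sampling_rate_min
620000
ondemand # cat sampling_rate_max
620000000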
 
Intel CPUs seem to have faster transition times (by a factor of 10); in 
that case the effect is probably not noticeable.
 
I suggest a fixed default sampling time (say 50 ms); deriving it as a 
multiple (currently 1000) of cpuinfo.transition_latency leads to bad 
behavior if the transition latency is not in the "Intel range". The value 
of sampling_rate_min should be adjusted downwards accordingly.
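
In the meantime the defaults can be overridden at runtime; a sketch of a 
workaround (the kernel only accepts sampling_rate values within 
[sampling_rate_min, sampling_rate_max]):

# cd /sys/devices/system/cpu/cpu0/cpufreq/ondemand
# cat sampling_rate_min > sampling_rate     # drop to the fastest allowed rate
# echo 1 > sampling_down_factor             # scale down after one sample period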
