[Netem] Rate throttling behaves unexpectedly

Nik Unger njunger at uwaterloo.ca
Tue Feb 21 20:24:56 UTC 2017


Hello,

I am using netem as part of some network emulation software based on 
network namespaces. However, the rate throttling (applied via tc's 
"rate" argument), does not behave as I would expect. I could not find 
any clues in the man page, and the online documentation about rate 
throttling is sparse (since it is comparatively new), so I am not sure 
if it is working as intended.

Specifically:
- The measured link bandwidth appears higher than the specified limit
- The measured link bandwidth *increases* when a higher delay is added
- The measured link bandwidth is substantially different from what I 
get with a netem/tbf qdisc combination
- The measured link bandwidth for the same very slow settings varies 
significantly across machines
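
For reference, after each configuration change I also sanity-check what 
tc itself reports for the qdisc, to confirm the parameters were accepted 
(run inside the namespace that has the qdisc; the exact output format 
varies between iproute2 versions):

ip netns exec net2 tc qdisc show dev veth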

Here are the steps to reproduce these observations:
====================================================================
# Set up two network namespaces and link them with a veth pair
# This uses static ARP entries to avoid ARP lookup delays
ip netns add net1
ip netns add net2
ip link add name veth address 00:00:00:00:00:01 netns net1 type veth \
   peer name veth address 00:00:00:00:00:02 netns net2
ip netns exec net1 ip addr add 10.0.0.1/24 dev veth
ip netns exec net2 ip addr add 10.0.0.2/24 dev veth
ip netns exec net1 ip link set dev veth up
ip netns exec net2 ip link set dev veth up
ip netns exec net1 ip neigh add 10.0.0.2 lladdr 00:00:00:00:00:02 \
   dev veth
ip netns exec net2 ip neigh add 10.0.0.1 lladdr 00:00:00:00:00:01 \
   dev veth

# Test the delay and rate without any qdisc applied. I'm using iperf
# to measure the bandwidth here. The server should remain running when
# testing with the iperf client
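# (In my runs the server sits in a second terminal; backgrounding it,
# e.g. "ip netns exec net1 iperf -s &", and killing it between tests
# works just as well.)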
ip netns exec net2 ping 10.0.0.1 -c 4
ip netns exec net1 iperf -s
ip netns exec net2 iperf -c 10.0.0.1
# On my machine: rtt min/avg/max/mdev = 0.049/0.052/0.062/0.010 ms
#                Bandwidth: 31.2 Gbits/sec
# (Results are as expected)
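# (Optional sanity check: with no qdisc added yet, tc should only show
# the interface default here, e.g. noqueue on a veth, though the exact
# default depends on the kernel configuration)
ip netns exec net2 tc qdisc show dev veth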

# Now test with a 512 kbit/s netem rate throttle
ip netns exec net2 tc qdisc add dev veth root netem rate 512kbit
ip netns exec net2 ping 10.0.0.1 -c 4
ip netns exec net1 iperf -s
ip netns exec net2 iperf -c 10.0.0.1
# On my machine: rtt min/avg/max/mdev = 1.662/1.664/1.667/0.028 ms
#                Bandwidth: 640 Kbits/sec
# (Expected results: bandwidth should be less than 512 Kbits/sec since
# TCP won't perfectly saturate the link)
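# (Optional: the netem statistics give an independent view of the
# effective rate; the sent byte counter divided by the test duration
# should roughly match what iperf reports)
ip netns exec net2 tc -s qdisc show dev veth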

# Add 100ms delay to the rate throttle
ip netns exec net2 tc qdisc change dev veth root netem rate 512kbit \
   delay 100ms
ip netns exec net2 ping 10.0.0.1 -c 4
ip netns exec net1 iperf -s
ip netns exec net2 iperf -c 10.0.0.1
# On my machine: rtt min/avg/max/mdev = 101.597/101.658/101.708/0.039 ms
#                Bandwidth: 1.17 Mbits/sec
# (Expected results: bandwidth should be less than the previous test)
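# (For scale, 1.17 Mbit/s is roughly 2.3x the configured 512 kbit/s cap.
# Optional check that the change took effect and that both rate and
# delay are now listed on the qdisc:)
ip netns exec net2 tc qdisc show dev veth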

# Now test the same condition using tbf for rate throttling instead
ip netns exec net2 tc qdisc delete dev veth root
ip netns exec net2 tc qdisc add dev veth root handle 1:0 netem \
   delay 100ms
ip netns exec net2 tc qdisc add dev veth parent 1:1 handle 10: tbf \
   rate 512kbit latency 5ms burst 2048
ip netns exec net2 ping 10.0.0.1 -c 4
ip netns exec net1 iperf -s
ip netns exec net2 iperf -c 10.0.0.1
# On my machine: rtt min/avg/max/mdev = 100.069/100.110/100.152/0.031 ms
#                Bandwidth: 270 Kbits/sec
# (Results are as expected)
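# (Optional: confirm the hierarchy; tc should list both qdiscs here,
# netem at the root and tbf as its child)
ip netns exec net2 tc -s qdisc show dev veth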

# Cleanup
ip netns del net1
ip netns del net2
====================================================================

My uname -a:
Linux 4.8.0-1-amd64 #1 SMP Debian 4.8.7-1 (2016-11-13) x86_64 GNU/Linux

I get similar results on my faster machine running 4.9.0-2-amd64, except 
that the same commands produce more dramatic numbers: roughly 80 
Gbits/sec unthrottled, about 1 Mbit/sec with the 512 kbit throttle and 
no delay (roughly twice the configured cap), and almost 5 Mbits/sec with 
the 512 kbit throttle and 100ms delay (nearly ten times the cap).

Applying qdiscs on both ends of the veth pair does not substantially 
affect the results.
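
For concreteness, by "both ends" I mean mirroring the same qdisc on the 
other side of the veth pair, along these lines (shown here for the plain 
rate throttle case):

ip netns exec net1 tc qdisc add dev veth root netem rate 512kbit
ip netns exec net2 tc qdisc add dev veth root netem rate 512kbit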

Am I missing something about the way that netem's rate throttling works 
in relation to tbf, network namespaces, and iperf?

Thanks,
~Nik

