[Fuego] [PATCH 06/11] tests: add new iperf3

Tim.Bird at sony.com
Thu Mar 1 20:52:21 UTC 2018


See a few comments inline below.  Nothing that blocked acceptance.

> -----Original Message-----
> From: Daniel Sangorrin
> iperf3 is the next version of iperf2 for which we already have
> a test in Fuego. However, iperf3 was rewritten from scratch
> and is not fully compatible with iperf2. For that reason,
> it needs a separate test.
> 
> The tarball is sent in the next patch. It was downloaded
> from https://iperf.fr/download/source/iperf-3.1.3-source.tar.gz

It would be good to document the source of the tarball inside
fuego_test.sh (or in test.yaml, but we're not really using that yet).

What do you think about adding the following, to fuego_test.sh:
tarball_src=https://iperf.fr/download/source/iperf-3.1.3-source.tar.gz

I'm not sure about the variable name we should use for this, or I would
have added it myself.  The idea is NOT to have Fuego automatically
download the source, but rather just to give maintainers (and users) a
reference to the source for test maintenance work.  This could also be
put into a comment, to indicate more clearly that it's not (currently)
processed by Fuego.  Let me know what you think.
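
To make the comment form concrete, the top of fuego_test.sh could look
something like this (just a sketch - nothing in Fuego parses such a
comment today):

    # source: https://iperf.fr/download/source/iperf-3.1.3-source.tar.gz
    tarball=iperf-3.1.3-source.tar.gz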

> Signed-off-by: Daniel Sangorrin <daniel.sangorrin at toshiba.co.jp>
> ---
>  engine/tests/Benchmark.iperf3/chart_config.json |  4 ++
>  engine/tests/Benchmark.iperf3/criteria.json     | 82 ++++++++++++++++++++++++
>  engine/tests/Benchmark.iperf3/fuego_test.sh     | 48 ++++++++++++++
>  engine/tests/Benchmark.iperf3/parser.py         | 57 +++++++++++++++++
>  engine/tests/Benchmark.iperf3/reference.json    | 83 +++++++++++++++++++++++++
>  engine/tests/Benchmark.iperf3/spec.json         | 18 ++++++
>  6 files changed, 292 insertions(+)
>  create mode 100644 engine/tests/Benchmark.iperf3/chart_config.json
>  create mode 100644 engine/tests/Benchmark.iperf3/criteria.json
>  create mode 100755 engine/tests/Benchmark.iperf3/fuego_test.sh
>  create mode 100755 engine/tests/Benchmark.iperf3/parser.py
>  create mode 100644 engine/tests/Benchmark.iperf3/reference.json
>  create mode 100644 engine/tests/Benchmark.iperf3/spec.json
> 
> diff --git a/engine/tests/Benchmark.iperf3/chart_config.json b/engine/tests/Benchmark.iperf3/chart_config.json
> new file mode 100644
> index 0000000..d7756e7
> --- /dev/null
> +++ b/engine/tests/Benchmark.iperf3/chart_config.json
> @@ -0,0 +1,4 @@
> +{
> +    "chart_type": "measure_plot",
> +    "measures": ["sum.sent.bits_per_second",
> "cpu_utilization.remote.total", "sum.udp.bits_per_second"]
> +}
> diff --git a/engine/tests/Benchmark.iperf3/criteria.json b/engine/tests/Benchmark.iperf3/criteria.json
> new file mode 100644
> index 0000000..8a50e1e
> --- /dev/null
> +++ b/engine/tests/Benchmark.iperf3/criteria.json
> @@ -0,0 +1,82 @@
> +{
> +    "schema_version":"1.0",
> +    "criteria":[
> +        {
> +            "tguid":"sum.sent.bits_per_second",
> +            "reference":{
> +                "value":1,
> +                "operator":"ge"
> +            }
> +        },
> +        {
> +            "tguid":"sum.sent.retransmits",
> +            "reference":{
> +                "value":100,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"sum.received.bits_per_second",
> +            "reference":{
> +                "value":1,
> +                "operator":"ge"
> +            }
> +        },
> +        {
> +            "tguid":"sum.udp.bits_per_second",
> +            "reference":{
> +                "value":1,
> +                "operator":"ge"
> +            }
> +        },
> +        {
> +            "tguid":"sum.udp.jitter_ms",
> +            "reference":{
> +                "value":10,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"cpu_utilization.host.total",
> +            "reference":{
> +                "value":90,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"cpu_utilization.host.system",
> +            "reference":{
> +                "value":90,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"cpu_utilization.host.user",
> +            "reference":{
> +                "value":90,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"cpu_utilization.remote.total",
> +            "reference":{
> +                "value":90,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"cpu_utilization.remote.system",
> +            "reference":{
> +                "value":90,
> +                "operator":"le"
> +            }
> +        },
> +        {
> +            "tguid":"cpu_utilization.remote.user",
> +            "reference":{
> +                "value":90,
> +                "operator":"le"
> +            }
> +        }
> +    ]
> +}
> diff --git a/engine/tests/Benchmark.iperf3/fuego_test.sh b/engine/tests/Benchmark.iperf3/fuego_test.sh
> new file mode 100755
> index 0000000..4bbfc22
> --- /dev/null
> +++ b/engine/tests/Benchmark.iperf3/fuego_test.sh
> @@ -0,0 +1,48 @@
> +# gitrepo=https://github.com/esnet/iperf.git
> +tarball=iperf-3.1.3-source.tar.gz

My proposed new line indicating the tarball source would go here.

> +
> +# Spec parameters
> +#  - [Optional] server_ip (BENCHMARK_IPERF3_SERVER_IP): ip address of the server machine
> +#        - if not provided, then SRV_IP _must_ be provided in the board file. Otherwise the test will fail.
> +#        - if the server ip is assigned to the host, the test automatically starts the iperf3 server daemon
> +#            - otherwise, the tester _must_ make sure that 'iperf3 -V -s -D' is already running on the server machine.
> +#              FIXTHIS: add functionality to set up the server using a board file
Indeed - that would be nice.  Good FIXTHIS to help remind us of a nice
feature to add in the future.
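
Just to sketch the idea (every variable name below is invented for
illustration - none of these exist in Fuego today): if the board file
carried a login account for the server machine, test_run could start
the daemon itself before launching the client:

    # hypothetical: board file provides a login on the server machine
    if [ -n "$IPERF3_SERVER_LOGIN" ]; then
        ssh $IPERF3_SERVER_LOGIN@$IPERF3_SERVER_IP \
            "killall iperf3 2>/dev/null; iperf3 -V -s -D"
    fi

Material for a follow-up patch, perhaps.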

> +#  - [Optional] server_params (BENCHMARK_IPERF3_SERVER_PARAMS): extra parameters for the server
> +#  - [Optional] client_params (BENCHMARK_IPERF3_CLIENT_PARAMS): extra parameters for the client
> +
> +function test_build {
> +    ./configure --host=$HOST --prefix=$(pwd)/build
> +    make
> +    make install
> +}
> +
> +function test_deploy {
> +    put build/bin/iperf3  $BOARD_TESTDIR/fuego.$TESTDIR/
> +}
> +
> +function test_run {
> +    # FIXTHIS: validate the server and client ips, and make sure they can communicate
> +
> +    # Get the server ip
> +    IPERF3_SERVER_IP=${BENCHMARK_IPERF3_SERVER_IP:-$SRV_IP}

I like the way this is done - defaulting to the host SRV_IP that Fuego
detects, but allowing override via a test (/spec) variable.
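
For anyone reading along: ${BENCHMARK_IPERF3_SERVER_IP:-$SRV_IP} is
plain shell parameter expansion - it yields $BENCHMARK_IPERF3_SERVER_IP
when that variable is set and non-empty, and falls back to $SRV_IP
otherwise.  For example:

    SRV_IP=192.168.1.10
    unset BENCHMARK_IPERF3_SERVER_IP
    echo ${BENCHMARK_IPERF3_SERVER_IP:-$SRV_IP}   # prints 192.168.1.10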

> +    if [ -z "$IPERF3_SERVER_IP" ]; then
> +        echo "ERROR: set the server ip on the spec or board file"
> +        return 1
> +    fi
> +    echo "Using server ip address: $IPERF3_SERVER_IP"
> +
> +    # Check if the server ip belongs to us
> +    ROUTE=$(ip route get $IPERF3_SERVER_IP | cut -f 1 -d " ")
> +    if [ "$ROUTE" = "local" ]; then
> +        echo "Starting iperf3 server on localhost (SERVER IP:
> $IPERF3_SERVER_IP)"
> +        killall iperf3 2> /dev/null || true
> +        iperf3 -V -s -D -f M $BENCHMARK_IPERF3_SERVER_PARAMS
> +    fi
> +
> +    echo "Starting iperf3 client on the target (CLIENT IP: $IPADDR)"
> +    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./iperf3 -V -c
> $IPERF3_SERVER_IP -f M -J --get-server-output
> $BENCHMARK_IPERF3_CLIENT_PARAMS"
> $BOARD_TESTDIR/fuego.$TESTDIR/${TESTDIR}.log
> +}
> +
> +function test_cleanup {
> +    kill_procs iperf3
> +}
> diff --git a/engine/tests/Benchmark.iperf3/parser.py b/engine/tests/Benchmark.iperf3/parser.py
> new file mode 100755
> index 0000000..1097803
> --- /dev/null
> +++ b/engine/tests/Benchmark.iperf3/parser.py
> @@ -0,0 +1,57 @@
> +#!/usr/bin/python
> +# FIXTHIS: support parsing streams
> +
> +import os, sys
> +import json
> +import matplotlib
> +matplotlib.use('Agg')
> +import pylab as plt
I suppose both of these (matplotlib and pylab) are guaranteed to be in the container, so this is OK.

Interesting to have a test that provides its own visualization.  Also
interesting to see how you've integrated the visualization into the
overall Fuego framework.  Nice!

> +
> +sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
> +import common as plib
> +
> +measurements = {}
> +with open(plib.TEST_LOG) as f:
> +    data = json.load(f)
> +
> +    # Measurements to apply criteria
> +    if 'sum_sent' in data['end']:
> +        measurements["sum.sent"] = [
> +            {"name": "bits_per_second", "measure" :
> float(data['end']['sum_sent']['bits_per_second'])},
> +            {"name": "retransmits", "measure" :
> float(data['end']['sum_sent']['retransmits'])}
> +        ]
> +    if 'sum_received' in data['end']:
> +        measurements["sum.received"] = [
> +            {"name": "bits_per_second", "measure" :
> float(data['end']['sum_received']['bits_per_second'])}
> +        ]
> +    if 'sum' in data['end']:
> +        measurements["sum.udp"] = [
> +            {"name": "bits_per_second", "measure" :
> float(data['end']['sum']['bits_per_second'])},
> +            {"name": "jitter_ms", "measure" :
> float(data['end']['sum']['jitter_ms'])}
> +        ]
> +    measurements["cpu_utilization.host"] = [
> +        {"name": "total", "measure" :
> float(data['end']['cpu_utilization_percent']['host_total'])},
> +        {"name": "system", "measure" :
> float(data['end']['cpu_utilization_percent']['host_system'])},
> +        {"name": "user", "measure" :
> float(data['end']['cpu_utilization_percent']['host_user'])}
> +    ]
> +    measurements["cpu_utilization.remote"] = [
> +        {"name": "total", "measure" :
> float(data['end']['cpu_utilization_percent']['remote_total'])},
> +        {"name": "system", "measure" :
> float(data['end']['cpu_utilization_percent']['remote_system'])},
> +        {"name": "user", "measure" :
> float(data['end']['cpu_utilization_percent']['remote_user'])}
> +    ]
> +    # Create graph with interval data
> +    time = []
> +    bits_per_second = []
> +    for interval in data['intervals']:
> +        if interval['sum']['omitted']:
> +            continue
> +        time.append(interval['sum']['start'])
> +        bits_per_second.append(interval['sum']['bits_per_second'])
> +    fig = plt.figure()
> +    plt.plot(time, bits_per_second)
> +    plt.title('iperf3 client results')
> +    plt.xlabel('time (s)')
> +    plt.ylabel('Mbps')
> +    fig.savefig(os.environ['LOGDIR'] + '/iperf3.png')
> +
> +sys.exit(plib.process(measurements))
> diff --git a/engine/tests/Benchmark.iperf3/reference.json b/engine/tests/Benchmark.iperf3/reference.json
> new file mode 100644
> index 0000000..930a94d
> --- /dev/null
> +++ b/engine/tests/Benchmark.iperf3/reference.json
> @@ -0,0 +1,83 @@
> +{
> +    "test_sets":[
> +        {
> +            "name":"sum",
> +            "test_cases":[
> +                {
> +                    "name":"sent",
> +                    "measurements":[
> +                        {
> +                            "name":"bits_per_second",
> +                            "unit":"Mbps"
> +                        },
> +                        {
> +                            "name":"retransmits",
> +                            "unit":"retransmits"
> +                        }
> +                    ]
> +                },
> +                {
> +                    "name":"received",
> +                    "measurements":[
> +                        {
> +                            "name":"bits_per_second",
> +                            "unit":"Mbps"
> +                        }
> +                    ]
> +                },
> +                {
> +                    "name":"udp",
> +                    "measurements":[
> +                        {
> +                            "name":"bits_per_second",
> +                            "unit":"Mbps"
> +                        },
> +                        {
> +                            "name":"jitter_ms",
> +                            "unit":"ms"
> +                        }
> +                    ]
> +                }
> +            ]
> +        },
> +        {
> +            "name":"cpu_utilization",
> +            "test_cases":[
> +                {
> +                    "name":"host",
> +                    "measurements":[
> +                        {
> +                            "name":"total",
> +                            "unit":"%"
> +                        },
> +                        {
> +                            "name":"system",
> +                            "unit":"%"
> +                        },
> +                        {
> +                            "name":"user",
> +                            "unit":"%"
> +                        }
> +                    ]
> +                },
> +                {
> +                    "name":"remote",
> +                    "measurements":[
> +                        {
> +                            "name":"total",
> +                            "unit":"%"
> +                        },
> +                        {
> +                            "name":"system",
> +                            "unit":"%"
> +                        },
> +                        {
> +                            "name":"user",
> +                            "unit":"%"
> +                        }
> +                    ]
> +                }
> +            ]
> +        }
> +    ]
> +}
> diff --git a/engine/tests/Benchmark.iperf3/spec.json b/engine/tests/Benchmark.iperf3/spec.json
> new file mode 100644
> index 0000000..95d02c2
> --- /dev/null
> +++ b/engine/tests/Benchmark.iperf3/spec.json
> @@ -0,0 +1,18 @@
> +{
> +    "testName": "Benchmark.iperf3",
> +    "specs": {
> +        "default": {
> +            "client_params": "-O 4 -t 64",
> +            "extra_success_links": {"png": "iperf3.png"}
> +        },
> +        "zerocopy": {
> +            "client_params": "-O 4 -t 64 -Z",
> +            "extra_success_links": {"png": "iperf3.png"}
> +        },
> +        "udp": {
> +            "client_params": "-t 60 -u -b 400M",
> +            "extra_success_links": {"png": "iperf3.png"}
> +        }
> +    }
> +}
> +
> --
> 2.7.4
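
For anyone who wants to try the non-default specs: assuming the usual
ftc command syntax (check 'ftc run-test --help' on your installation),
selecting the udp spec should look something like:

    ftc run-test -b <board> -t Benchmark.iperf3 -s udp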

Thanks.  Lots of interesting stuff here.

Applied and pushed, but not tested yet.  I'm under some time pressure today.
Please test and let me know if everything works as expected.
 -- Tim


