NetPerfMeter
A TCP/MPTCP/UDP/SCTP/DCCP Network Performance Meter Tool

https://www.nntb.no/~dreibh/netperfmeter
NetPerfMeter is a network performance meter for the TCP, MPTCP, SCTP, UDP, and DCCP transport protocols over IPv4 and IPv6. It simultaneously transmits bidirectional flows to an endpoint and measures the resulting flow bandwidths and QoS. Flows can be saturated (i.e. "send as much as possible") or non-saturated with a configured frame rate and frame size (like a multimedia transmission). Non-saturated flows can be configured with constant or variable frame rate/frame size, i.e. to realise Constant Bit Rate (CBR) or Variable Bit Rate (VBR) traffic. For both frame rate and frame size, it is not only possible to set constant values but also to use random distributions. Furthermore, flows can be set up as on/off flows. Of course, the flow parameters can be configured individually per flow and flow direction. The measurement results can be recorded as scalar files (summary of the run) and vector files (time series). These files can be processed further, e.g. for detailed analysis and plotting of the results. The Wireshark network protocol analyser provides out-of-the-box support for analysing NetPerfMeter packet traffic.
A NetPerfMeter Run with an SCTP Flow
The key goal of NetPerfMeter is to provide a tool for the performance comparison of multiple transport connections, which are further denoted as Flows. That is, it is possible to configure different flows between two systems using varying parameters, in order to run a configured measurement, collect the obtained results and post-process them for statistical analyses. Particularly, all five relevant IETF Transport Layer protocols are supported:
- UDP (User Datagram Protocol; see RFC 768),
- DCCP (Datagram Congestion Control Protocol; see RFC 4340),
- TCP (Transmission Control Protocol; see RFC 793),
- MPTCP (Multipath TCP; see RFC 8684),
- SCTP (Stream Control Transmission Protocol; see RFC 9260).
Of course, this support includes the possibility to parametrise various protocol-specific options. Note that the protocol support by NetPerfMeter depends on the underlying operating system. DCCP, MPTCP, as well as some SCTP extensions are not yet available on all platforms.
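For example, on Linux, the available transports can be checked quickly from the shell. A minimal sketch (module names and sysctl keys are Linux-specific assumptions and may vary by kernel and distribution):
lsmod | grep -E '^(sctp|dccp)'    # are the SCTP/DCCP kernel modules loaded?
cat /proc/net/protocols           # transport protocols registered with the kernel
sysctl net.mptcp.enabled          # MPTCP availability toggle (Linux ≥ 5.6)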
Furthermore, each flow is able to apply its specific traffic behaviour:
- Each flow may use its own Transport Layer protocol (i.e. UDP, DCCP, TCP, MPTCP or SCTP).
- Bidirectional data transfer is possible, with individual parameters for each direction.
- Flows may either be saturated (i.e. try to send as much as possible) or non-saturated. In the latter case, a frame rate and a frame size have to be configured. Both may be distributed randomly, using a certain distribution (like uniform, negative exponential, etc.). This feature makes it possible to mimic multimedia traffic.
- For the stream-oriented SCTP, an independent traffic configuration is possible for each stream.
- Support for on/off traffic is provided by specifying a sequence of time stamps at which to start, stop and restart a flow or stream.
- Also, for SCTP, it is possible to configure partial reliability (see RFC 3758) as well as ordered and unordered delivery (see RFC 9260). A configuration sketch follows this list.
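Per-flow options for on/off traffic and unordered delivery could look as follows. This is a hedged sketch only: the onoff= and unordered= option names and their syntax are assumptions here; consult the manual page for the exact format.
# Hypothetical on/off SCTP flow: start at t=10s, stop at t=30s, restart at
# t=40s; half of the messages are sent unordered:
netperfmeter <SERVER>:9000 -sctp "const10:const512:const0:const0:onoff=+10,+30,+40:unordered=0.5"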
Clearly, the NetPerfMeter application provides features similar to the NetPerfMeter simulation model in OMNeT++. It is therefore relatively easy – from the parametrisation perspective – to reproduce NetPerfMeter simulation scenarios in reality.
The Concept of a NetPerfMeter Measurement
Similar to the NetPerfMeter simulation model in OMNeT++, an application instance may either be in Active Mode (client side) or Passive Mode (server side). The figure above illustrates the concept of a NetPerfMeter measurement. The passive instance accepts incoming NetPerfMeter connections from the active instance. The active instance controls the passive instance by using a control protocol denoted as NetPerfMeter Control Protocol (NPMP-CONTROL). That is, the passive instance may run as a daemon; no manual interaction by the user – e.g. to restart it before a new measurement run – is required. This feature is highly practical for a setup distributed over multiple Internet sites (e.g. the NorNet Testbed) and allows for parameter studies consisting of many measurement runs.
The payload data between active and passive instances is transported using the NetPerfMeter Data Protocol (NPMP-DATA). The figure below shows the protocol stack of a NetPerfMeter node.
The NetPerfMeter Protocol Stack
The NPMP-DATA protocol transmits data as frames, with a given frame rate. In case of a saturated sender, the flow tries to send as many frames as possible (i.e. as allowed by the underlying transport and its flow and congestion control). Otherwise, the configured frame rate is used (e.g. 25 frames/s as for typical video transmissions). NPMP-DATA breaks down frames into messages, to make sure that large frames can be transported over the underlying transport protocol. The maximum message size can be configured. Frames larger than the message size limit are split into multiple messages before sending them; for example, with a message size limit of 1400 B, a 5000 B frame is split into four messages (three of 1400 B and one of 800 B). On the receiving side, the messages are combined back into frames. The underlying transport protocol handles the messages as its payload.
The following figure presents the message sequence of a NetPerfMeter measurement run:
A Measurement Run with NetPerfMeter
Note that the Wireshark network protocol analyser provides out-of-the-box support for NetPerfMeter. That is, it is able to dissect and further analyse NPMP-CONTROL and NPMP-DATA packets using all supported Transport Layer protocols.
A new measurement run setup is initiated by the active NetPerfMeter instance by first establishing an NPMP-CONTROL association to the passive instance. The NPMP-CONTROL association by default uses SCTP for transport. If SCTP is not possible in the underlying networks (e.g. due to firewalling restrictions), it is optionally possible to use TCP for the NPMP-CONTROL association instead. Then, the configured NPMP-DATA connections are established using their configured Transport Layer protocols. For the connection-less UDP, the message transfer is just started. The passive NetPerfMeter instance is informed about the identification and parameters of each new flow by NPMP-CONTROL Add Flow messages. On startup of an NPMP-DATA flow, an NPMP-DATA Identify message allows the passive instance to map a newly incoming connection to a configured flow. The passive instance acknowledges each newly set-up flow with an NPMP-CONTROL Acknowledge message. After setting up all flows, the scenario is ready to start the measurement run.
The actual measurement run is initiated from the active NetPerfMeter instance using an NPMP-CONTROL Start Measurement message, which is also acknowledged by an NPMP-CONTROL Acknowledge message. Then, both instances start running the configured scenario by transmitting NPMP-DATA Data messages over their configured flows.
During the measurement run, incoming and outgoing flow bandwidths may be recorded as vectors – i.e. time series – at both instances, since NPMP-DATA Data traffic may be bidirectional. Furthermore, the CPU utilisations – separately for each CPU and CPU core – are also tracked. This makes it possible to identify performance bottlenecks, which is particularly useful when debugging and comparing transport protocol implementation performance. Furthermore, the one-way delay of messages can be recorded. Of course, in order to use this feature, the clocks of both nodes need to be appropriately synchronised, e.g. by using the Network Time Protocol (NTP).
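For example, the synchronisation state can be checked on both nodes before a measurement (a sketch, assuming chrony or classic ntpd provides the NTP synchronisation):
chronyc tracking   # clock offset and synchronisation status when using chrony
ntpq -p            # peer status when using classic ntpd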
The end of a measurement run is initiated – from the active NetPerfMeter instance – by using an NPMP-CONTROL Stop Measurement message. Again, it is acknowledged by an NPMP-CONTROL Acknowledge message. At the end of the measurement, average bandwidth and one-way delay of each flow and stream are recorded as scalars (i.e. single values). They may provide an overview of the long-term system performance.
After stopping the measurement, the passive NetPerfMeter instance sends its global vector and scalar results (i.e. over all flows) to the active instance, by using one or more NPMP-CONTROL Results messages. Then, the active NetPerfMeter instance sequentially removes the flows by using NPMP-CONTROL Remove Flow messages, which are acknowledged by NPMP-CONTROL Acknowledge messages. On flow removal, the passive instance sends its per-flow results for the corresponding flow, again by using NPMP-CONTROL Results messages.
The active instance archives its local vector and scalar results as well, and stores them – together with the results received from its peer – locally. All result data is compressed by using BZip2 compression (see bzip2), which may save a significant amount of bandwidth (of course, the passive node compresses the data before transfer) and disk space.
By using shell scripts, it is possible to apply NetPerfMeter for parameter studies, i.e. to create a set of runs for each input parameter combination. For example, a script could iterate over a send buffer size σ from 64 KiB to 192 KiB in steps of 64 KiB as well as a path bandwidth ρ from 10 Mbit/s to 100 Mbit/s in steps of 10 Mbit/s and perform 5 measurement runs for each parameter combination.
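A minimal sketch of such a script is shown below. Assumptions: the path bandwidth ρ is shaped externally via tc(8) on interface eth0, the passive instance is already running on the server, and the send buffer σ is set with a sndbuf flow option; adjust names and options to the actual setup and the manual page.
#!/bin/sh
SERVER=10.1.1.2                                       # passive instance (assumed address)

for SNDBUF in 65536 131072 196608 ; do                # sigma: 64, 128, 192 KiB
   for RATE in 10 20 30 40 50 60 70 80 90 100 ; do    # rho: 10 ... 100 Mbit/s
      # Shape the path bandwidth with a token bucket filter (hypothetical setup):
      sudo tc qdisc replace dev eth0 root tbf rate ${RATE}mbit burst 32kbit latency 50ms
      for RUN in 1 2 3 4 5 ; do                       # 5 runs per parameter combination
         netperfmeter ${SERVER}:9000 -runtime=60 \
            -scalar=run-s${SNDBUF}-r${RATE}-${RUN}.sca \
            -tcp "const0:const1400:const0:const1400:sndbuf=${SNDBUF}"
      done
   done
done
sudo tc qdisc del dev eth0 root                       # remove the shaper again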
When all measurement runs have eventually been processed, the results have to be visualised for analysis and interpretation. The NetPerfMeter package provides support for visualising the scalar results, which are distributed over the scalar files written by the measurement runs. The first necessary step is therefore to bring the data from the various scalar files into an appropriate form for further post-processing. This step is denoted as Summarisation; an introduction is also provided in "SimProcTC – The Design and Realization of a Powerful Tool-Chain for OMNeT++ Simulations".
The summarisation task is performed by the tool createsummary. An external program – instead of just using GNU R itself to perform this step – is used due to the requirements on memory and CPU power. createsummary iterates over all scalar files of a measurement M. Each file is read – with on-the-fly BZip2 decompression – and each scalar value, as well as the configuration m∈M having led to this value, is stored in memory. Depending on the number of scalars, the required storage space may grow to multiple GiB.
Since usually not all scalars of a measurement are required for analysis (e.g. for an SCTP measurement, it may be unnecessary to include unrelated statistics), a list of scalar name prefixes to be excluded from summarisation can be provided to createsummary, in the form of the so-called Summary Skip List. This feature may significantly reduce the memory and disk space requirements of the summarisation step. Since the skipped scalars still remain stored in the scalar files themselves, it is possible to simply re-run createsummary with an updated Summary Skip List later, in order to include them as well.
Once all relevant scalars are stored in memory, a data file – which can be processed by GNU R, LibreOffice or other programs – is written for each scalar. The data file is simply a table in text form, containing the column names on the first line. Each following line contains the data, with line number and an entry for each column (all separated by spaces); an example is provided in Listing 3 of "SimProcTC – The Design and Realization of a Powerful Tool-Chain for OMNeT++ Simulations". That is, each line consists of the settings of all parameters and the resulting scalar value. The data files are also BZip2-compressed on the fly, in order to reduce the storage space requirements.
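Such a data file can be inspected directly from the shell; a sketch (the file name here is hypothetical, actual names depend on the summarised scalars):
bzcat example.data.bz2 | head -n 5   # header line with column names, then one line per run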
NetPerfMeter uses the SCTP protocol. It may therefore be necessary to load the SCTP kernel module first, if it is not already enabled. The following code blocks show how to enable it permanently.
echo "sctp" | sudo tee /etc/modules-load.d/sctp.conf if [ -e /etc/modprobe.d/sctp-blacklist.conf ] ; then sudo sed -e 's/^blacklist sctp/# blacklist sctp/g' -i /etc/modprobe.d/sctp-blacklist.conf fi sudo modprobe sctp lsmod | grep sctp
FreeBSD:
echo 'sctp_load="YES"' | sudo tee --append /boot/loader.conf
sudo kldload sctp
kldstat | grep sctp
- Run a passive instance (i.e. server side), using port 9000:
user@server:~$ netperfmeter 9000
⚠️ Important: By default, SCTP transport is used for the NPMP-CONTROL control communication. In certain setups, this can cause problems, making it necessary to use control over TCP (or MPTCP) instead (shown in the next example):
- Firewalls blocking SCTP traffic, e.g. many public Wi-Fi networks.
- Routing over NAT/PAT may not work well due to lack of support for SCTP.
- The Docker daemon, by default, creates a local interface docker0 with IP address 172.17.0.1 for the default bridge network setup. If this is the case on both the active and the passive side, the SCTP out-of-the-blue (OOTB) message handling causes the SCTP association to be aborted, since both devices have an identical IP address.
- Run a passive instance (i.e. server side), using port 9000, and allowing NPMP-CONTROL control communication over TCP:
user@server:~$ netperfmeter 9000 -control-over-tcp
- Run an active instance (i.e. client side), with a saturated bidirectional TCP flow:
user@client:~$ netperfmeter <SERVER>:9000 -tcp const0:const1400:const0:const1400
Replace <SERVER> by the IP address or hostname of the passive instance!
The flow parameter specifies a saturated flow (frame rate 0 – send as much as possible) with a constant frame size of 1400 B. The first block specifies the direction from active (client) to passive (server) instance, the second block specifies the direction from passive (server) to active (client) instance.
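Random distributions use the same positional format. A hedged sketch (the exp prefix for a negative exponential distribution is an assumption here; check man netperfmeter for the exact distribution names and syntax):
# Hypothetical VBR-like UDP flow: exponentially distributed frame rate (mean
# 25 frames/s) and frame size (mean 1000 B), from client to server only:
netperfmeter <SERVER>:9000 -udp exp25:exp1000:const0:const0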
- Run an active instance (i.e. client side), with a saturated bidirectional TCP flow, using NPMP-CONTROL control communication over TCP:
user@client:~$ netperfmeter <SERVER>:9000 -control-over-tcp -tcp const0:const1400:const0:const1400
Note: The passive instance must be started with -control-over-tcp as well!
- Run an active instance (i.e. client side), with a saturated bidirectional TCP flow, using NPMP-CONTROL control communication over SCTP (this is the default):
user@client:~$ netperfmeter <SERVER>:9000 -tcp const0:const1400:const0:const1400
- Run an active instance (i.e. client side), with a download-only TCP flow (server to client):
user@client:~$ netperfmeter <SERVER>:9000 -tcp const0:const0:const0:const1400
Setting both frame rate and frame size to 0 means sending nothing in the corresponding direction.
- Run an active instance (i.e. client side), with an upload-only TCP flow (client to server):
user@client:~$ netperfmeter <SERVER>:9000 -tcp const0:const1400:const0:const0
- Run an active instance (i.e. client side), with a bidirectional UDP flow:
- Active to passive instance: constant 2 frames/s, constant 200 B/frame;
- Passive to active instance: constant 25 frames/s, constant 5000 B/frame.
user@client:~$ netperfmeter <SERVER>:9000 -udp const2:const200:const25:const5000
Note: UDP does not have flow and congestion control. A saturated UDP flow is therefore not possible!
- Run an active instance (i.e. client side), with a bidirectional DCCP flow:
- Active to passive instance: constant 10 frames/s, constant 128 B/frame;
- Passive to active instance: constant 25 frames/s, constant 1200 B/frame.
user@client:~$ netperfmeter <SERVER>:9000 -dccp const10:const128:const25:const1200
Note: DCCP is only available when provided by the operating system kernel!
- Run an active instance (i.e. client side), with 2 bidirectional SCTP flows over a single SCTP association (i.e. 2 streams):
Stream 0:
- Active to passive instance: constant 2 frames/s, constant 200 B/frame;
- Passive to active instance: constant 25 frames/s, constant 5000 B/frame.
Stream 1:
- Active to passive instance: constant 10 frames/s, constant 128 B/frame;
- Passive to active instance: constant 25 frames/s, constant 1200 B/frame.
user@client:~$ netperfmeter <SERVER>:9000 -sctp const2:const200:const25:const5000 const10:const128:const25:const1200
- Run an active instance (i.e. client side), with a saturated bidirectional MPTCP flow:
user@client:~$ netperfmeter <SERVER>:9000 -mptcp const0:const1400:const0:const1400
Notes:
- MPTCP is only available when provided by the operating system kernel!
- NetPerfMeter ≥2.0 is required! Older versions <2.0 only support the experimental Linux MPTCP implementation with an incompatible API!
- Run an active instance (i.e. client side), with 7 flows, stopping the measurement after 60 s:
- TCP flow, constant 10 frames/s, constant 4096 B/frame, in both directions;
- UDP flow, constant 10 frames/s, constant 1024 B/frame, in both directions;
- SCTP flows on 5 streams of a single SCTP association: for each stream, constant 1, 2, 3, 4, or 5 frames/s with constant 512 B/frame, in both directions, but with reversed frame rate order in the backwards direction.
user@client:~$ netperfmeter <SERVER>:9000 \
   -runtime=60 \
   -tcp const10:const4096:const10:const4096 \
   -udp const10:const1024:const10:const1024 \
   -sctp \
      const1:const512:const5:const512 \
      const2:const512:const4:const512 \
      const3:const512:const3:const512 \
      const4:const512:const2:const512 \
      const5:const512:const1:const512
- Run an active instance (i.e. client side), with 9 flows, stopping the measurement after 60 s:
- TCP flow, constant 10 frames/s, constant 4096 B/frame, in both directions;
- MPTCP flow, constant 10 frames/s, constant 4096 B/frame, in both directions;
- UDP flow, constant 10 frames/s, constant 1024 B/frame, in both directions;
- DCCP flow, constant 10 frames/s, constant 1024 B/frame, in both directions;
- SCTP flows on 5 streams of a single SCTP association: for each stream, constant 1, 2, 3, 4, or 5 frames/s with constant 512 B/frame, in both directions, but with reversed frame rate order in the backwards direction.
This example, but additionally recording measurement data and flow information into files, including descriptions for the active/passive instances and the flows:
- Configuration file: multi.config;
- Vector files: multi-<active|passive>-<FLOW>-<STREAM>.vec;
- Scalar files: multi-<active|passive>.sca.
user@client:~$ netperfmeter <SERVER>:9000 \
   -runtime=60 \
   -config=multi.config \
   -vector=multi.vec \
   -scalar=multi.sca \
   -activenodename "Active Instance" \
   -passivenodename "Passive Instance" \
   -tcp const10:const4096:const10:const4096:description="TCP" \
   -mptcp const10:const4096:const10:const4096:description="MPTCP" \
   -udp const10:const1024:const10:const1024:description="UDP" \
   -dccp const10:const1024:const10:const1024:description="DCCP" \
   -sctp \
      const1:const512:const5:const512:description="SCTP Stream 0" \
      const2:const512:const4:const512:description="SCTP Stream 1" \
      const3:const512:const3:const512:description="SCTP Stream 2" \
      const4:const512:const2:const512:description="SCTP Stream 3" \
      const5:const512:const1:const512:description="SCTP Stream 4"
Notes:
- DCCP and MPTCP are only available when provided by the operating system kernel!
- NetPerfMeter ≥2.0 is required! Older versions <2.0 only support the experimental Linux MPTCP implementation with an incompatible API!
- An example run of the multi-flow setup above, measured in a multi-homed testbed setup, produces the following output:
- The configuration file multi.config: it contains the flows and their parameters, and can be used to further process the scalar and vector output.
- Scalar files (i.e. summaries of the single measurement run) from active side (multi-active.sca) and passive side (multi-passive.sca). The scalar file format is the same as used by OMNeT++.
- Vector files (i.e. time series) for each flow, from active and passive side:
- Flow 0 (TCP flow): multi-active-00000000-0000.vec, multi-passive-00000000-0000.vec.
- Flow 1 (MPTCP flow): multi-active-00000001-0000.vec, multi-passive-00000001-0000.vec.
- Flow 2 (UDP flow): multi-active-00000002-0000.vec, multi-passive-00000002-0000.vec.
- Flow 3 (DCCP flow): multi-active-00000003-0000.vec, multi-passive-00000003-0000.vec.
- Flow 4 (SCTP flow for SCTP stream 0): multi-active-00000004-0000.vec, multi-passive-00000004-0000.vec.
- Flow 5 (SCTP flow for SCTP stream 1): multi-active-00000005-0001.vec, multi-passive-00000005-0001.vec.
- Flow 6 (SCTP flow for SCTP stream 2): multi-active-00000006-0002.vec, multi-passive-00000006-0002.vec.
- Flow 7 (SCTP flow for SCTP stream 3): multi-active-00000007-0003.vec, multi-passive-00000007-0003.vec.
- Flow 8 (SCTP flow for SCTP stream 4): multi-active-00000008-0004.vec, multi-passive-00000008-0004.vec.
- The vector file format is a table, which can be imported as CSV into tools like GNU R, LibreOffice, etc.
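A quick look at such a vector file from the shell (a sketch; the file name is taken from the list above – prepend a bzcat step if the file was stored BZip2-compressed):
head -n 5 multi-active-00000000-0000.vec | column -t   # header line and first rows, aligned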
- Run T-Shark (the command-line version of the Wireshark network protocol analyser) to record a PCAP trace:
user@client:~$ sudo tshark -i any -n -w output.pcap \
   -f '(sctp port 9001) or ((tcp port 9000) or (tcp port 8999) or (udp port 9000) or (sctp port 9000) or (ip proto 33))'
Notes:
- Filter parameters for protocols and ports ensure that only the relevant NetPerfMeter traffic is recorded; a display-filter sketch for inspecting the trace follows these notes.
- In case of using port 9000 for NetPerfMeter, use:
- SCTP, port 9000 and 9001 (data and control traffic over SCTP);
- TCP, port 8999, 9000 and 9001 (data and control traffic over TCP and MPTCP);
- UDP, port 9000;
- DCCP, port 9000 (ip proto 33).
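For inspecting the recorded trace afterwards, the NetPerfMeter display filter can be used (a sketch; the filter name netperfmeter is an assumption here – the first command verifies it):
tshark -G protocols | grep -i netperfmeter   # look up the dissector's display filter name
tshark -r output.pcap -Y netperfmeter        # show only NetPerfMeter packets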
- Run the Wireshark network protocol analyser to display the packet flow of the multi-flow example above in the PCAP file multi.pcap.gz:
user@client:~$ wireshark multi.pcap.gz
A Wireshark Run with NetPerfMeter Traffic from multi.pcap.gz
Notes:
- Wireshark provides out-of-the-box support for NetPerfMeter, i.e. a dissector is included in all recent Wireshark packages.
- Color filtering rules can colorise NetPerfMeter traffic, e.g. to mark different packet types or flows/streams. An example configuration is provided in colorfilters (it needs to be merged into one's own configuration, usually in ~/.config/wireshark/colorfilters).
- Take a look into the manual page of NetPerfMeter for further information and options:
man netperfmeter
- Obtain the NetPerfMeter version:
netperfmeter -version
Note: NetPerfMeter ≥2.0 is required!
Please use the issue tracker at https://github.com/dreibh/netperfmeter/issues to report bugs and issues!
For ready-to-install Ubuntu Linux packages of NetPerfMeter, see Launchpad PPA for Thomas Dreibholz!
sudo apt-add-repository -sy ppa:dreibh/ppa
sudo apt-get update
sudo apt-get install netperfmeter
For ready-to-install Fedora Linux packages of NetPerfMeter, see COPR PPA for Thomas Dreibholz!
sudo dnf copr enable -y dreibh/ppa
sudo dnf install netperfmeter
For ready-to-install FreeBSD packages of NetPerfMeter, it is included in the ports collection; see the FreeBSD ports tree index of benchmarks/netperfmeter/!
pkg install netperfmeter
Alternatively, to compile it from the ports sources:
cd /usr/ports/benchmarks/netperfmeter
make
make install
NetPerfMeter is released under the GNU General Public Licence (GPL).
The Git repository of the NetPerfMeter sources can be found at https://github.com/dreibh/netperfmeter:
git clone https://github.com/dreibh/netperfmeter
cd netperfmeter
cmake .
make
Contributions:
- Issue tracker: https://github.com/dreibh/netperfmeter/issues. Please submit bug reports, issues, questions, etc. in the issue tracker!
- Pull Requests for NetPerfMeter: https://github.com/dreibh/netperfmeter/pulls. Your contributions to NetPerfMeter are always welcome!
- CI build tests of NetPerfMeter: https://github.com/dreibh/netperfmeter/actions.
- Coverity Scan analysis of NetPerfMeter: https://scan.coverity.com/projects/dreibh-netperfmeter.
See https://www.nntb.no/~dreibh/netperfmeter/#current-stable-release for release packages!
NetPerfMeter BibTeX entries can be found in netperfmeter.bib!
- Dreibholz, Thomas; Becke, Martin; Adhari, Hakim and Rathgeb, Erwin Paul: «Evaluation of A New Multipath Congestion Control Scheme using the NetPerfMeter Tool-Chain» (PDF, 360 KiB, 6 pages, 🇬🇧), in Proceedings of the 19th IEEE International Conference on Software, Telecommunications and Computer Networks (SoftCOM), pp. 1–6, ISBN 978-953-290-027-9, Hvar, Dalmacija/Croatia, September 16, 2011.
- Dreibholz, Thomas: «Evaluation and Optimisation of Multi-Path Transport using the Stream Control Transmission Protocol» (PDF, 36779 KiB, 264 pages, 🇬🇧), Habilitation Treatise, University of Duisburg-Essen, Faculty of Economics, Institute for Computer Science and Business Information Systems, URN urn:nbn:de:hbz:464-20120315-103208-1, March 13, 2012.
- HiPerConTracer – High-Performance Connectivity Tracer
- Dynamic Multi-Homing Setup (DynMHS)
- SubNetCalc – An IPv4/IPv6 Subnet Calculator
- TSCTP – An SCTP test tool
- System-Tools – Tools for Basic System Management
- Thomas Dreibholz's Multi-Path TCP (MPTCP) Page
- Thomas Dreibholz's SCTP Page
- Michael Tüxen's SCTP page
- NorNet – A Real-World, Large-Scale Multi-Homing Testbed
- GAIA – Cyber Sovereignty
- NEAT – A New, Evolutive API and Transport-Layer Architecture for the Internet
- Wireshark