1. Installing ViNePerf¶
1.1. Downloading ViNePerf¶
ViNePerf can be downloaded from its official git repository, which is hosted by OPNFV. It is necessary to install git on your DUT before downloading vineperf. Installation of git is specific to the packaging system used by the Linux OS installed on the DUT.
Example of installation of the git package and its dependencies:

On an OS based on RedHat Linux:

sudo yum install git

On Ubuntu or Debian:

sudo apt-get install git
Once git is successfully installed on the DUT, vineperf can be downloaded by:

git clone https://gerrit.opnfv.org/gerrit/vineperf
The last command will create a directory vineperf with a local copy of the ViNePerf repository.
1.2. Supported Operating Systems¶
Fedora 24 (kernel 4.8 requires DPDK 16.11 and newer)
Fedora 25 (kernel 4.9 requires DPDK 16.11 and newer)
RedHat 7.2 Enterprise Linux
RedHat 7.3 Enterprise Linux
RedHat 7.5 Enterprise Linux
Ubuntu 16.10 (kernel 4.8 requires DPDK 16.11 and newer)
1.3. Supported vSwitches¶
The vSwitch must support OpenFlow 1.3 or greater.
Open vSwitch with DPDK support
TestPMD application from DPDK (supports p2p and pvp scenarios)
1.4. Supported Hypervisors¶
QEMU version 2.3 or greater (version 2.5.0 is recommended)
1.5. Supported VNFs¶
In theory, it is possible to use any VNF image that is compatible with a supported hypervisor. However, such a VNF must ensure that the appropriate number of network interfaces is configured and that traffic is properly forwarded among them. New ViNePerf users are recommended to start with the official vloop-vnf image, which is maintained by the ViNePerf community.
The official VM image is called vloop-vnf and it is available for free download from the OPNFV artifactory. This image is based on the Ubuntu Linux distribution and it supports the following applications for traffic forwarding:
Custom l2fwd module
The vloop-vnf image can be downloaded to the DUT, for example with wget.
NOTE: If wget is not installed on your DUT, you can install it on an RPM based system with sudo yum install wget, or on a DEB based system with sudo apt-get install wget.
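A sketch of the download step; the URL below is only a placeholder, as the exact image file name changes between releases and should be taken from the OPNFV artifactory:

# placeholder path -- substitute the current vloop-vnf image published on the OPNFV artifactory
$ wget http://artifacts.opnfv.org/<path-to-vloop-vnf-image>.qcow2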
Changelog of vloop-vnf:
only 1 NIC is configured by default to speed up boot with 1 NIC setup
security updates applied
Linux kernel 4.4.0 installed
security updates applied
snmpd service is disabled by default to avoid error messages during VM boot
security updates applied
version with development tools required for build of DPDK and l2fwd
1.6. Installation¶
The test suite requires Python 3.3 or newer and relies on a number of other system and Python packages. These need to be installed for the test suite to function.
An updated kernel and certain development packages are required by DPDK, OVS (especially Vanilla OVS) and QEMU. It is necessary to check that the versions of these packages are not being held back, and that the DNF/APT/YUM configuration does not prevent their modification by enforcing settings such as “exclude-kernel”.
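As a quick sanity check (a sketch, assuming the standard configuration file locations), you can look for hold/exclude settings before running the installation:

# RPM based systems: look for exclude settings such as exclude=kernel*
$ grep -i exclude /etc/yum.conf /etc/dnf/dnf.conf 2>/dev/null
# DEB based systems: list packages held back from upgrades
$ apt-mark showhold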
Installation of the required packages, preparation of the Python 3 virtual environment and compilation of OVS, DPDK and QEMU are performed by the script systems/build_base_machine.sh. It should be executed under the user account which will be used for vsperf execution.
NOTE: Password-less sudo access must be configured for given user account before the script is executed.
$ cd systems
$ ./build_base_machine.sh
NOTE: You don’t need to go into any of the systems subdirectories; simply run the top level build_base_machine.sh and your OS will be detected automatically.
Script build_base_machine.sh will install all the vsperf dependencies in terms of system packages, Python 3.x and required Python modules. In case of CentOS 7 or RHEL it will install Python 3.8 from an additional repository provided by Software Collections. The installation script will also use virtualenv to create a vsperf virtual environment, which is isolated from the default Python environment, using the python3 package located in /usr/bin/python3. This environment will reside in a directory called vsperfenv in $HOME.
This ensures that the system-wide Python installation is not modified or broken by the ViNePerf installation.
The complete list of Python packages installed inside the virtualenv can be found in the file requirements.txt, which is located in the ViNePerf repository.
NOTE: For RHEL 7.3 Enterprise and CentOS 7.3, OVS Vanilla is not built from upstream source due to kernel incompatibilities. Please see the instructions in the ViNePerf_design document for details on configuring OVS Vanilla for binary package usage.
NOTE: For RHEL 7.5 Enterprise, DPDK and Open vSwitch are not built from upstream sources due to kernel incompatibilities. Please use subscription channels to obtain binary equivalents of the openvswitch and dpdk packages, or build binaries using instructions from openvswitch.org and dpdk.org.
1.6.1. VPP installation¶
VPP installation is now included as part of the VSPerf installation scripts.
In case of an error message about a missing file such as “Couldn’t open file /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7” you can resolve this issue by simply downloading the file.
$ wget https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
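Since the error message refers to the /etc/pki/rpm-gpg/ directory, a reasonable follow-up (a sketch, not an official step of the installation scripts) is to place the downloaded key there and import it:

$ sudo mv RPM-GPG-KEY-EPEL-7 /etc/pki/rpm-gpg/
$ sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7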
1.7. Using ViNePerf¶
You will need to activate the virtual environment every time you start a new shell session. Its activation is specific to your OS:
CentOS 7 and RHEL
$ scl enable rh-python34 bash
$ source $HOME/vsperfenv/bin/activate
Fedora and Ubuntu
$ source $HOME/vsperfenv/bin/activate
After the virtual environment is configured, ViNePerf can be used. For example:
(vsperfenv) $ cd vineperf
(vsperfenv) $ ./vsperf --help
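Further options are described by the built-in help. For instance, to list the available tests (assuming the --list option of the vsperf command line in your release):

(vsperfenv) $ ./vsperf --list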
If you see the following error during environment activation:
$ source $HOME/vsperfenv/bin/activate
Badly placed ()s.
then check what type of shell you are using:
$ echo $SHELL
/bin/tcsh
See what scripts are available in the $HOME/vsperfenv/bin directory:
$ ls $HOME/vsperfenv/bin/
activate  activate.csh  activate.fish  activate_this.py
Source the appropriate script:
$ source bin/activate.csh
1.7.2. Working Behind a Proxy¶
If you’re behind a proxy, you’ll likely want to configure this before running any of the above. For example:
export http_proxy=proxy.mycompany.com:123
export https_proxy=proxy.mycompany.com:123
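If some destinations (for example the DUT itself or a local mirror) must be reached directly, a no_proxy entry can be added as well; a sketch with placeholder addresses:

export no_proxy=localhost,127.0.0.1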
1.7.3. Bind Tools DPDK¶
VSPerf supports the default DPDK bind tool, but also supports driverctl. The driverctl tool allows driver bindings to persist across reboots. It is not provided by VSPerf, but can be downloaded from upstream sources. Once installed, set the bind tool to driverctl to allow ViNePerf to correctly bind cards for DPDK tests.
PATHS['dpdk']['src']['bind-tool'] = 'driverctl'
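With driverctl selected, the binding itself is a standard driverctl override. A sketch with a hypothetical PCI address and the vfio-pci driver:

# bind the NIC at the given (hypothetical) PCI address to vfio-pci;
# the override persists across reboots and can be removed with unset-override
$ sudo driverctl set-override 0000:05:00.0 vfio-pci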
1.8. Hugepage Configuration¶
Systems running vsperf with DPDK and/or with tests involving guests must configure enough hugepages to support these configurations. It is recommended to use 1 GB hugepages as the page size.
The amount of hugepages needed depends on your configuration files in vsperf.
Each guest image requires 2048 MB by default, according to the default setting:
GUEST_MEMORY = ['2048']
The DPDK startup parameters also require an amount of hugepages, depending on your configuration:
DPDK_SOCKET_MEM = ['1024', '0']
DPDK_SOCKET_MEM is used by all vSwitches with DPDK support, i.e. Open vSwitch, VPP and TestPMD.
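As a rough worked example based on the defaults above: one guest (GUEST_MEMORY = 2048 MB) plus DPDK_SOCKET_MEM = 1024 MB on socket 0 requires at least 3 GB of hugepage memory, i.e. three 1 GB hugepages or 1536 2 MB hugepages, ideally with some headroom (see the NOTE below).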
VSPerf will verify hugepage amounts are free before executing test environments. In case of hugepage amounts not being free, test initialization will fail and testing will stop.
NOTE: In some instances, after a test failure, DPDK resources may not release the hugepages used in the DPDK configuration. It is recommended to configure a few extra hugepages to prevent VSPerf from falsely detecting that not enough free hugepages are available to execute the test environment. Normally DPDK will reuse previously allocated hugepages upon initialization.
Depending on your OS selection configuration of hugepages may vary. Please refer to your OS documentation to set hugepages correctly. It is recommended to set the required amount of hugepages to be allocated by default on reboots.
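For example, on many distributions persistent 1 GB hugepages can be reserved through kernel boot parameters (a sketch; the page count is illustrative and must be adjusted to your configuration):

default_hugepagesz=1G hugepagesz=1G hugepages=8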
Information on hugepage requirements for dpdk can be found at http://doc.dpdk.org/guides/linux_gsg/sys_reqs.html
You can review your hugepage amounts by executing the following command:
cat /proc/meminfo | grep Huge
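Illustrative output for a system with eight 1 GB hugepages reserved (the exact lines and values will differ on your DUT):

AnonHugePages:         0 kB
HugePages_Total:       8
HugePages_Free:        8
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB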
If no hugepages are available, vsperf will try to allocate some automatically. Allocation is controlled by the HUGEPAGE_RAM_ALLOCATION configuration parameter in the 02_vswitch.conf file. The default is 2 GB, resulting in either two 1 GB hugepages or 1024 2 MB hugepages.
1.9. Tuning Considerations¶
With the large amount of tuning guides available online on how to properly tune a DUT, it becomes difficult to achieve consistent numbers for DPDK testing. VSPerf recommends a simple approach that has been tested by different companies to achieve proper CPU isolation.
The idea behind CPU isolation when running DPDK based tests is to achieve as few interruptions to a PMD process as possible. There is a utility available on most Linux systems that achieves proper CPU isolation with very little effort and customization. The tool is called tuned-adm and is most likely installed by default on the Linux DUT.
VSPerf recommends the latest tuned-adm package, which can be obtained from your distribution or from the upstream tuned project.
Follow the instructions to install the latest tuned-adm onto your system. Current RHEL customers should already have the most current version; you just need to install the cpu-partitioning profile:
yum install -y tuned-profiles-cpu-partitioning.noarch
Proper CPU isolation starts with knowing which NUMA node your NIC is installed on. You can identify this by checking the output of the following command:
cat /sys/class/net/<NIC NAME>/device/numa_node
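For example, with a hypothetical NIC name (a value of 0 or 1 identifies the NUMA node; -1 means the platform does not report NUMA affinity for the device):

$ cat /sys/class/net/ens1f0/device/numa_node
1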
You can then use utilities such as lscpu or cpu_layout.py, which is located in the src dpdk area of VSPerf. These tools show the CPU layout, i.e. which cores/hyperthreads are located on the same NUMA node.
Determine which CPUs/hyperthreads will be used for PMD threads and for the VCPUs of the VNFs. Then modify /etc/tuned/cpu-partitioning-variables.conf and add the CPUs to the isolated_cores variable in the form x-y, x,y,z or x-y,z, etc., as sketched below.
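A sketch of the variables file, assuming PMD threads and guest VCPUs will be pinned to cores 2-7 and their hyperthread siblings 26-31 (hypothetical core numbers; pick cores on the NIC’s NUMA node):

# /etc/tuned/cpu-partitioning-variables.conf
isolated_cores=2-7,26-31

Then apply the profile: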
tuned-adm profile cpu-partitioning
After applying the profile, reboot your system.
After rebooting the DUT, you can verify that the profile is active as shown below.
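A quick check with tuned-adm (the exact output format may vary between tuned versions):

$ tuned-adm active
Current active profile: cpu-partitioning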
Now you should have proper CPU isolation active and can achieve consistent results with DPDK based tests.
The last consideration: when running TestPMD inside a VNF, it may make sense to enable enough cores to run each PMD thread on a separate core/hyperthread. To achieve this, set the number of VCPUs to 3 and enable enough nb-cores in the TestPMD config. You can modify these options in the conf files:
GUEST_SMP = ['3']
GUEST_TESTPMD_PARAMS = ['-l 0,1,2 -n 4 --socket-mem 512 -- '
                        '--burst=64 -i --txqflags=0xf00 '
                        '--disable-hw-vlan --nb-cores=2']
Verify that you set the VCPU core locations appropriately, on the same NUMA node as used for your PMD mask for OVS-DPDK.