dpdk-testpmd command executes and then hangs - dpdk

I set up a DPDK-compatible environment and then tried to send packets using dpdk-testpmd, expecting to see them received on another server.
I am using the vfio-pci driver in no-IOMMU (unsafe) mode.
I ran
$./dpdk-testpmd -l 11-15 -- -i
which produced output like
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_i40e (8086:1572) device: 0000:01:00.1 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_1>: n=179456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
Port 0: E4:43:4B:4E:82:00
Checking link statuses...
Done
then, at the testpmd prompt,
testpmd> set nbcore 4
Number of forwarding cores set to 4
testpmd> show config fwd
txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 12 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=BE:A6:27:C7:09:B4
My nbcore is not being set correctly, and even 'txonly' mode was not being set before I set the eth-peer address, though some parameters are working. Moreover, if I don't change the burst delay, my server crashes as soon as I start transmitting, even though it has a 10G Ethernet port (80 MBps available bandwidth by calculation). Hence, I am not seeing packets at the receiving server when tailing tcpdump on the corresponding receiving interface. What is happening here and what am I doing wrong?

Based on the question and the answers in the comments, the real intention is to send packets from DPDK testpmd using an Intel Fortville NIC (net_i40e) to the remote server.
The real reason traffic is not being generated is that neither the application command line nor any interactive option is set up to create packets via dpdk-testpmd.
In order to generate packets there are two options in testpmd:
start tx_first: this sends out a default burst of 32 packets as soon as the port is started.
forward mode txonly: this puts the port under dpdk-testpmd into transmission-only mode. Once the port is started, it transmits packets with the default packet size.
Neither of these options is utilized, hence my suggestions are:
please walk through the DPDK documentation on testpmd and its configuration
make use of either --tx-first or --forward-mode=txonly, as per the DPDK Testpmd Command-line Options
make use of start tx_first, set fwd txonly, or set fwd flowgen in interactive mode; refer to the Testpmd Runtime Functions (a short interactive example follows the parameter list below)
With this, traffic will be generated from testpmd and sent to the device (remote server). A quick example of the same is "dpdk-testpmd --file-prefix=test1 -a81:00.0 -l 7,8 --socket-mem=1024 -- --burst=128 --txd=8192 --rxd=8192 --mbcache=512 --rxq=1 --txq=1 --nb-cores=2 -a --forward-mode=io --rss-udp --enable-rx-cksum --no-mlockall --no-lsc-interrupt --enable-drop-en --no-rmv-interrupt -i"
From the above example, the relevant config parameters are:
the number of packets per RX-TX burst is set by --burst=128
the number of RX-TX queues is configured by --rxq=1 --txq=1
the number of cores to use for RX-TX is set by --nb-cores=2
the forwarding mode (flowgen, txonly, rxonly, or io) is selected by --forward-mode=io
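For the interactive route, a minimal sketch (assuming port 0 and that the link is up) would be:
testpmd> set fwd txonly
testpmd> start
testpmd> show port stats 0
testpmd> stop
After start, show port stats 0 should report an increasing TX-packets counter if traffic is actually leaving the port.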
Hence, as mentioned in the comments, neither set nbcore 4 nor any other setting in the testpmd arguments or interactive commands shows that the application was configured for TX-only.
The second part of the query is really confusing, because it states:
Moreover, if I don't change the burst delay, my server crashes as soon
as I start transmitting, even though it has a 10G Ethernet port (80 MBps
available bandwidth by calculation). Hence, I am not seeing packets at
the receiving server when tailing tcpdump on the corresponding receiving
interface. What is happening here and what am I doing wrong?
Assuming "my server" is the remote server to which packets are being sent by dpdk-testpmd, since there is a mention of checking packets with tcpdump (an Intel Fortville X710 bound to a UIO driver has no kernel network interface on the DPDK host, so tcpdump could not be run there).
The mention of 80 MBps of available bandwidth, which is far below the 10 Gbps line rate, is really strange. If the remote interface is set to promiscuous mode and an AF_XDP or raw-socket application is configured to receive traffic, it can keep up at line rate (10 Gbps). Since there are no logs or crash dumps from the remote server, and it is highly unlikely that any traffic was actually generated from testpmd, this looks more like a configuration or setup issue on the remote server.
[EDIT-1] Based on the live debug, it is confirmed that:
DPDK was not installed - fixed by running ninja install
the DPDK NIC port eno2 is not connected to the remote server directly
the DPDK NIC port eno2 is connected through a switch
the DPDK application testpmd is not crashing - confirmed with pgrep testpmd
instead, when used with set fwd txonly, packets flood the switch and SSH packets from the other port are dropped
Solution: please use another switch for data-path testing, or use a direct connection to the remote server.

Related

TCP packet loss occurs occasionally when using DPDK 19.11 with an i40e NIC

I am using an XL710 (i40e) NIC with DPDK 19.11. I found that the NIC occasionally loses packets when I enable the TSO feature.
The detailed information is as follows:
https://github.com/JiangHeng12138/dpdk-issue/issues/1
I guessed that the packet loss is caused by the i40e NIC driver, but I don't know how to debug the i40e driver code. Could you please suggest an effective way?
Based on the problem statement (TCP packet loss occurs occasionally when using DPDK 19.11 with an i40e NIC), one first needs to isolate whether it is the client (peer system) or the server (DPDK DUT) that causes the packet loss. To debug the issue on the DPDK server side, one needs to evaluate both RX and TX. The DPDK tool dpdk-procinfo can retrieve port statistics, which can be used for the analysis.
Diagnose the issue:
Run the application (the DPDK primary process) in terminal-1 to reproduce the issue.
In terminal-2, run the command dpdk-procinfo -- --stats; refer to the link for more details.
Check the RX-errors counter; this shows whether faulty packets were dropped at the PMD level.
Check the RX-nombuf counter; this shows whether packets from the NIC could not be DMA-ed to DDR memory on the host (typically because no mbufs were available).
Check the TX-errors counter; this shows whether copying the packet descriptors (DMA descriptors) to the NIC was faulty.
Also check the HW NIC statistics with dpdk-procinfo -- --xstats for any error or drop counter updates.
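As a rough sketch (the binary may be named dpdk-proc-info depending on the DPDK version and build, and if the primary process was started with a custom --file-prefix, the same prefix must be passed as an EAL option here):
$ dpdk-procinfo -- --stats
$ dpdk-procinfo -- --xstats | grep -iE 'err|miss|nombuf|drop'
The grep filter simply narrows the xstats output to counters whose names contain error, missed, nombuf, or drop.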
(Sample capture of the stats and xstats counters on the desired NIC.)
Note:
"tx_good_packets" is the number of packets sent out by the DPDK NIC. If the number of packets the application tried to send equals "tx_good_packets", no packets were dropped on the sending side.
"rx_missed_errors" indicates packet loss at the receiver; it means more packets are arriving than the current CPU can handle. Either increase the CPU frequency or use additional cores to distribute the traffic.
If none of these counters is incrementing and no errors are found, then the issue is on the peer (non-DPDK client) side.

How to send packets to another server using DPDK?

My question is how to send packets to another physical server from my computer using DPDK.
I have already looked at the example code rxtx_callbacks and I want to use this code.
But there is no place to enter a specific IP and port for the other server.
How can I send packets to another server using DPDK with a specified IP and port?
And how can I receive packets using DPDK?
Is l3fwd the right example, or is that a different concept?
Please help me.
DPDK is an open-source library that allows one to bypass the kernel and the ETH-IP-TCP stack and send packets from user space directly to a NIC or other custom hardware. There are multiple examples and projects, such as pktgen and TRex, that use it to generate user-defined packets (with the desired MAC address, VLAN, IP, and TCP/UDP payload).
For the queries
How can I send packets to another server using DPDK with a specified IP and port?
[Answer] Make use of DPDK pktgen as an easy way to generate traffic. Other options are pcap-based burst replay and TRex.
But the easiest way to generate and send traffic is to use scapy with the DPDK sample application skeleton. The following steps are required to achieve this (see the example session after the list):
Install DPDK on the desired platform (preferably Linux).
Build the DPDK example skeleton found at [dpdk root folder]/examples/skeleton.
Bind a physical NIC (if traffic needs to be sent out of the server) to a userspace driver such as igb_uio, uio_pci_generic, or vfio-pci.
Start the application with the options '-l 1 --vdev=net_tap0,iface=scapyEth'. This creates a TAP interface named scapyEth.
Using scapy, now create your custom packet with the desired MAC, VLAN, IP, and port numbers.
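A minimal sketch of such a scapy session (the MAC address, VLAN ID, IP, and port numbers below are placeholders; the interface name scapyEth comes from the --vdev option above):
$ sudo scapy
>>> pkt = Ether(dst="aa:bb:cc:dd:ee:ff")/Dot1Q(vlan=10)/IP(dst="192.168.1.2")/UDP(sport=4000, dport=5000)/Raw(load="x"*64)
>>> sendp(pkt, iface="scapyEth", count=100)
sendp transmits the crafted frames on the TAP interface; the skeleton example forwards traffic between its two ports, so the frames go out of the bound physical NIC.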
And how can I receive packets using DPDK?
[Answer] On the receiver side, run a DPDK application such as testpmd, l2fwd, or skeleton if the packets need to be received by a userspace DPDK application; otherwise, any Linux socket can receive the UDP packets.
Note: the easiest way to check whether packets are received is to run tcpdump, for example tcpdump -eni eth1 -Q in (where eth1 is the physical interface on the receiver server).
Note: since the request "how can I send packets to another server" is not fully specific:
Using DPDK, one can send packets out of a physical interface using a dedicated NIC, FPGA, or wireless device.
DPDK can send packets between applications using the memif interface.
DPDK can send packets between VMs using virtio and vhost.
DPDK can send and receive packets to and from the kernel, where the kernel routing stack and ARP table determine which kernel interface forwards the packets.

How to test basicfwd application in DPDK

I am using an Ubuntu 16.04 virtual machine with kernel 4.4 and DPDK version 17.11. I managed to configure the igb_uio driver using the setup utility.
Then I compiled the DPDK basicfwd application. I also bound two ports to the igb_uio driver and verified that they are associated with DPDK and no longer shown in the Linux kernel.
The basicfwd application is listening on the two ports, and their MAC addresses are displayed.
I am not sure how to send packets to a MAC address. Can anyone suggest how to create packets for a given MAC address using a command or utility? Windows is the host OS on my laptop.
I also see that the PMD and packetgen applications are used for testing purposes. I am not sure whether they can be used to test the basicfwd application.
I would also like to know how to assign an IP address to a DPDK port so that it can receive packets in a live environment. I need to study DPDK more on these aspects.
Would appreciate any help on this.
DPDK is an alternative to kernel stack processing, so any port bound to DPDK via uio_pci_generic/vfio-pci/igb_uio will not support an IPv4/IPv6 address the way a kernel netdev does. Hence the expectation of "how to assign an IP address to a DPDK port" is incorrect.
With respect to sending packets into a virtual machine, there are a couple of combinations:
complete NIC passthrough to the VM (PF/VF) - in this scenario, one needs to send packets through the physical interface connected to it.
a port representation such as a TAP/veth pair passed as a virtio interface - in this scenario there will be a representation port on the host machine, so you can use tools like ping/arping/packeth/pktgen/scapy/ostinato to generate packets for you.
Note: if you are using the testpmd DPDK application, you can make it run in promiscuous mode. For examples like l2fwd/skeleton, the ports are set to promiscuous mode by default.
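For reference, a minimal sketch of enabling promiscuous mode from the testpmd interactive prompt (applied to all ports here; a single port id can be given instead of all):
testpmd> set promisc all on
testpmd> show port info 0
show port info 0 reports, among other things, whether promiscuous mode is enabled on that port.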

DPDK: Zero Tx or Rx packets while running TestPMD

I have set up DPDK 20.11. While running the basic testpmd code, the numbers of transmitted and received packets are zero. I need help, and I am new to this.
I have attached the terminal screenshot of running testpmd. I would like to know where I am making a mistake.
OS: Ubuntu 16.04.6 LTS (Xenial Xerus)
testpmd was run with no arguments (just the 'sudo ./dpdk-testpmd' command)
Physical NIC
Firmware Details:
The driver details and NIC firmware have been provided in the link.
[Edit 1] Port info of the first port
Port info of the second port
Had a live debug on the setup: the ports were not physically connected to another NIC or a switch. In the Linux kernel, ethtool shows the links as down. Hence, the DPDK application reports the same state, link down.
Solution: connect the interfaces to either another NIC or a switch to bring the port state up.
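A quick way to cross-check the link state (a sketch; eth1 and port 0 are placeholders for your kernel interface name and DPDK port id, and ethtool only works while the port is bound to the kernel driver):
$ ethtool eth1 | grep "Link detected"
testpmd> show port info 0
In testpmd, show port info prints the link status the PMD sees for that port.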

DPDK getting too many rx_crc_errors on one port

What may cause rx_crc_errors on DPDK ports?
Is it a software thing, or a hardware thing related to the port or to the traffic coming from the other end?
DPDK Version: 19.02
PMD: I40E
This port is running on a customer network. It is worth mentioning that this is the only port (out of 4) showing this behaviour, so it may be a router/traffic thing, but I couldn't verify that.
I used dpdk-proc-info to get this data.
I could not do any additional activity as this is running on a customer site.
The DPDK i40e PMD only has an option to enable or disable CRC stripping on the port. Hence the assumption that the DPDK i40e PMD is causing CRC errors on 1 port out of 4 can be fully ruled out.
RX packets are validated for CRC by the ASIC on each port and then DMA-ed to the mbuf packet buffer. The PMD copies the descriptor status fields into the mbuf struct (one of them being the CRC result). The packet descriptor thus indicates the CRC result of the packet buffer to the driver (kernel or DPDK PMD). So a CRC error on a given port can arise for the following reasons:
the port connected to the ASIC is faulty (a very rare case)
the SFP+ is not properly connected (possible)
the SFP+ is not the recommended one (possible)
the traffic coming from the other end is arriving with faulty CRCs
One needs to isolate the issue by:
binding the port to the Linux i40e driver and checking the statistics via ethtool -S [port] (see the command sketch at the end of this answer)
checking SFP+ compatibility on the faulty port by swapping it with a working one
re-seating the SFP+
swapping the data cables between a working and the faulty port, then checking whether the error follows
If in all four of the above cases the error appears only on the faulty port, then the NIC indeed has only 3 working ports out of 4. The NIC needs replacement, or one should ignore the faulty port altogether. Hence this is not a DPDK PMD or library issue.
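A minimal sketch of the first isolation step (the PCI address and port name are placeholders; dpdk-devbind.py is found in the DPDK usertools directory):
$ dpdk-devbind.py -b i40e [PCI address]
$ ethtool -S [port] | grep -iE 'crc|error'
If the CRC error counters keep increasing under the kernel driver as well, the problem lies in the hardware or the incoming traffic, not in DPDK.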