What may cause rx_crc_errors on DPDK ports?
Is it a software thing, or a hardware thing related to the port or to the traffic coming from the other end?
DPDK Version: 19.02
PMD: I40E
This port is running on a customer network. Worth mentioning that this is the only port (out of 4) showing this behaviour, so it may be a router/traffic issue, but I couldn't verify that.
Used dpdk-proc-info to get this data.
Could not do any additional activity, as this is running on a customer site.
The DPDK I40E PMD only exposes an option to keep or strip the CRC on the port. Hence the assumption that the DPDK I40E PMD is causing CRC errors on 1 port out of 4 can be ruled out.
RX packets are validated for CRC by the ASIC per port and then DMAed to the mbuf packet buffer. The PMD copies the descriptor status into the mbuf struct (the CRC result being one of those fields). The packet descriptor thus indicates the CRC result of the packet buffer to the driver (kernel i40e or DPDK PMD). So a CRC error on a given port can arise for the following reasons:
the port connected to the ASIC is faulty (very rare);
the SFP+ is not properly connected (possible);
the SFP+ is not the recommended one (possible);
the traffic coming from the other end contains packets with faulty CRCs (possible).
One needs to isolate the issue by:
binding the port to the Linux kernel driver i40e and checking the statistics via ethtool -S [port];
checking the SFP+ for compatibility on the faulty port by swapping it with a working one;
re-seating the SFP+;
swapping the data cables between a working and the faulty port, then checking whether the error follows.
If in all four cases above the error appears only on the faulty port, then the NIC indeed has only 3 working ports out of 4. The NIC needs replacement, or the faulty port should be avoided altogether. Hence this is not a DPDK PMD or library issue.
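If one also wants to confirm the counter from inside the DPDK application itself (in addition to ethtool -S after binding to the kernel driver), the extended statistics exposed by the PMD can be scanned for CRC-related counters. A minimal sketch, assuming the port is already initialised; the helper name is illustrative and the exact counter name ("rx_crc_errors" on i40e) can differ between PMDs:

```c
/*
 * Sketch only: scan a port's extended statistics for CRC/error counters
 * from inside a DPDK application. Assumes the port is already started;
 * the helper name is illustrative.
 */
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

static void dump_error_xstats(uint16_t port_id)
{
    /* first call with NULL returns the number of xstats available */
    int n = rte_eth_xstats_get_names(port_id, NULL, 0);
    if (n <= 0)
        return;

    struct rte_eth_xstat_name names[n];
    struct rte_eth_xstat vals[n];

    rte_eth_xstats_get_names(port_id, names, n);
    rte_eth_xstats_get(port_id, vals, n);

    for (int i = 0; i < n; i++) {
        /* print anything that looks like a CRC or error counter */
        if (strstr(names[i].name, "crc") || strstr(names[i].name, "error"))
            printf("port %u: %s = %" PRIu64 "\n",
                   port_id, names[i].name, vals[i].value);
    }
}
```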
Related
I am using an XL710 (i40e) NIC on DPDK 19.11. I found that the NIC occasionally loses packets when I enable the TSO feature.
The detailed information is as follows:
https://github.com/JiangHeng12138/dpdk-issue/issues/1
I guessed the packet loss is caused by the i40e NIC driver, but I don't know how to debug the i40e driver code. Could you please suggest an effective way?
Based on the problem statement (occasional TCP packet loss when using DPDK 19.11 with an i40e NIC), one first needs to isolate whether it is the client (peer system) or the server (DPDK DUT) that causes the loss. To debug the issue on the DPDK server side, one needs to evaluate both RX and TX. The DPDK tool dpdk-procinfo can retrieve port statistics, which can be used to analyze the issue.
Diagnose the issue:
Run the application (the DPDK primary process) in terminal 1 to reproduce the issue.
In terminal 2, run the command dpdk-procinfo -- --stats (refer to the linked documentation for more details).
Check the RX-errors counter; it shows whether faulty packets were dropped at the PMD level.
Check the RX-nombuf counter; it shows whether packets from the NIC could not be DMAed into host memory (no mbufs available).
Check the TX-errors counter; it shows whether copying packet descriptors (DMA descriptors) to the NIC failed.
Also check the HW NIC statistics with dpdk-procinfo -- --xstats for any error or drop counter updates.
[Sample capture of the stats and xstats counters on the NIC in question]
Note:
"tx_good_packets" means the number of packets sent by the dpdk NIC. if the number of packets tried to be sent is equal to "tx_good_packets", there is no packet dropped at the sent client.
"rx-missed-errors" means packets loss at the receiver; this means you are processing packets more than what the Current CPU can handle. So either you will need to increase CPU frequency, or use additional cores to distribute the traffic.
If none of these counters is updated or errors are found, then the issue is at the peer (client non-dpdk) side.
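For reference, the counters named above map to fields of struct rte_eth_stats, so the same check that dpdk-procinfo performs can be done from inside the application. A minimal sketch, assuming port_id refers to an initialised port and with an illustrative helper name:

```c
/*
 * Sketch only: read the basic per-port counters that dpdk-procinfo -- --stats
 * reports (RX-errors, RX-nombuf, TX-errors, rx-missed) via rte_eth_stats_get.
 */
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void check_port_drops(uint16_t port_id)
{
    struct rte_eth_stats st;

    if (rte_eth_stats_get(port_id, &st) != 0)
        return;

    printf("port %u: ipackets=%" PRIu64 " opackets=%" PRIu64 "\n",
           port_id, st.ipackets, st.opackets);
    printf("  ierrors   (RX-errors) = %" PRIu64 "\n", st.ierrors);
    printf("  rx_nombuf (RX-nombuf) = %" PRIu64 "\n", st.rx_nombuf);
    printf("  oerrors   (TX-errors) = %" PRIu64 "\n", st.oerrors);
    printf("  imissed   (rx-missed) = %" PRIu64 "\n", st.imissed);
}
```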
My question is how to send packets to another physical server from my computer using DPDK.
I have already looked at the example code rxtx_callbacks and I want to use it.
But there is no place to enter a specific IP and port for another server.
How can I send packets to a server using DPDK with a specified IP and port?
And how can I receive packets using DPDK?
Is l3fwd the right example, or is that a different concept?
Please help me.
DPDK is an open-source library that allows one to bypass the kernel and its ETH-IP-TCP stack and send packets from userspace directly on a NIC or other custom hardware. There are multiple examples and projects, like pktgen and TRex, which use it to generate user-defined packets (desired MAC addresses, VLAN, IP and TCP/UDP headers and payload).
For the queries
How can I send packets to a server using DPDK with a specified IP and port?
[Answer] Make use of DPDK pktgen as an easy way to generate traffic. Other options are pcap-based burst replay and TRex.
But the easiest way to generate and send traffic is to use scapy together with the DPDK sample application skeleton. The following steps achieve this:
Install DPDK on the desired platform (preferably Linux).
Build the DPDK example skeleton found under [dpdk root folder]/examples/skeleton.
Bind a physical NIC (if traffic needs to be sent out of the server) to a userspace driver such as igb_uio, uio_pci_generic or vfio-pci.
Start the application with the options '-l 1 --vdev=net_tap0,iface=scapyEth'. This creates a TAP interface named scapyEth.
Using scapy, now create your custom packets with the desired MAC, VLAN, IP and port numbers and send them on scapyEth.
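If the goal is instead to set the IP address and UDP port directly from a DPDK application (without scapy or pktgen), the Ethernet/IPv4/UDP headers can simply be written into an mbuf before calling rte_eth_tx_burst. A minimal sketch with placeholder MAC/IP/port values; struct and field names follow DPDK 19.11/20.11 headers, and the mempool and port are assumed to be initialised as in the skeleton example:

```c
/*
 * Sketch only: build one UDP packet with a chosen destination IP and port
 * and transmit it on queue 0. All addresses and ports are placeholders.
 */
#include <string.h>
#include <netinet/in.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_mbuf.h>

static int send_udp(uint16_t port_id, struct rte_mempool *pool)
{
    struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
    if (m == NULL)
        return -1;

    uint16_t pkt_len = sizeof(struct rte_ether_hdr) +
                       sizeof(struct rte_ipv4_hdr) +
                       sizeof(struct rte_udp_hdr);
    struct rte_ether_hdr *eth =
        (struct rte_ether_hdr *)rte_pktmbuf_append(m, pkt_len);
    if (eth == NULL) {
        rte_pktmbuf_free(m);
        return -1;
    }

    /* Ethernet: destination MAC of the peer NIC (placeholder). */
    struct rte_ether_addr dst = {{0x00, 0x11, 0x22, 0x33, 0x44, 0x55}};
    rte_eth_macaddr_get(port_id, &eth->s_addr);
    rte_ether_addr_copy(&dst, &eth->d_addr);
    eth->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);

    /* IPv4: source/destination addresses of your choice (placeholders). */
    struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);
    memset(ip, 0, sizeof(*ip));
    ip->version_ihl = 0x45;
    ip->time_to_live = 64;
    ip->next_proto_id = IPPROTO_UDP;
    ip->src_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 1));
    ip->dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 2));
    ip->total_length = rte_cpu_to_be_16(pkt_len - sizeof(*eth));
    ip->hdr_checksum = rte_ipv4_cksum(ip);

    /* UDP: the "specified port" from the question (placeholders). */
    struct rte_udp_hdr *udp = (struct rte_udp_hdr *)(ip + 1);
    udp->src_port = rte_cpu_to_be_16(5000);
    udp->dst_port = rte_cpu_to_be_16(5001);
    udp->dgram_len = rte_cpu_to_be_16(sizeof(*udp));
    udp->dgram_cksum = 0; /* optional for IPv4 */

    uint16_t sent = rte_eth_tx_burst(port_id, 0, &m, 1);
    if (sent == 0)
        rte_pktmbuf_free(m);
    return sent == 1 ? 0 : -1;
}
```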
And how can I receive packets using DPDK?
[Answer] On the receiver side, run a DPDK application such as testpmd, l2fwd or skeleton if the packets need to be received by a userspace DPDK application; otherwise any Linux socket can receive the UDP packets.
Note: the easiest way to check whether packets are arriving is to run tcpdump, for example tcpdump -eni eth1 -Q in (where eth1 is the physical interface on the receiver server).
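On the DPDK receive side, the core of any of those applications is simply polling an RX queue. A minimal sketch, assuming the port and queue 0 were set up as in the skeleton example:

```c
/*
 * Sketch only: poll one RX queue and free the received packets.
 * Packet inspection/processing would go where the comment indicates.
 */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void rx_loop(uint16_t port_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        uint16_t nb = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb; i++) {
            /* inspect bufs[i] here (e.g. rte_pktmbuf_mtod to get the data) */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}
```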
Note: since the request "how can I send packets to a server" is not fully specified:
Using DPDK one can send packets out of a physical interface using a dedicated NIC, FPGA or wireless device.
DPDK can send packets between applications using the memif interface.
DPDK can send packets between VMs using virtio and vhost.
DPDK can send and receive packets to/from the kernel, where the kernel routing stack and ARP table determine which kernel interface forwards the packets.
I have set up DPDK 20.11. While running the basic testpmd application, the numbers of transmitted and received packets are zero. I need help; I am new to this.
I have attached a terminal screenshot of running testpmd. I would like to know where I am making a mistake.
OS: Ubuntu 16.04.6 LTS (Xenial Xerus)
testpmd was run with no arguments (just the command 'sudo ./dpdk-testpmd').
Physical NIC
Firmware Details:
The driver details and NIC firmware have been provided in the link.
[Edit 1] Port info of the first port
Port info of the second port
Had a live debug session on the setup: the ports were not physically connected to another NIC or a switch. In the Linux kernel, ethtool reports the links as down. Hence the DPDK application shows the same thing: link down.
Solution: connect the interfaces to either a NIC or a switch to bring the port state up.
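The same link check can be done programmatically from a DPDK 20.11 application, mirroring what testpmd reports at start-up; a minimal sketch with an illustrative helper name:

```c
/*
 * Sketch only: report the link state of a port, the same information
 * testpmd prints at start-up ("link down" in the case described above).
 */
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

static void print_link(uint16_t port_id)
{
    struct rte_eth_link link;

    memset(&link, 0, sizeof(link));
    rte_eth_link_get_nowait(port_id, &link);
    printf("port %u: link %s, %u Mbps\n", port_id,
           link.link_status ? "up" : "down", link.link_speed);
}
```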
I have set up testpmd on DPDK 17. I have two high-end servers whose NICs are connected by a direct physical link. The issue is that when I try to send traffic from one server to the other using testpmd, it either does not send traffic at all or sends a very small number of packets. I have checked multiple documents and nothing seems to work. My configuration also seems correct.
What am I doing wrong? Please help.
For anyone who faces similar issues: it is important that both servers have testpmd running and the correct NICs bound to DPDK.
I want to develop a bandwidth allocator for a network that will sit behind my machine.
Now, I've read about NDIS, but I am not sure whether network traffic that neither originates from my machine nor is destined for my machine will enter my TCP/IP stack, so that I can block/unblock packets via NDIS on a Windows machine.
NDIS (kernel) drivers live in the Windows network stack, and so can only intercept packets which are handled by this stack.
You cannot filter packets which are not sent to your computer.
(When the computer acts as a router, the packets are sent to the computer and the computer forwards them to the actual recipient, if that was the question.)
In normal operation, the irrelevant traffic will be dropped by the NIC driver/firmware, as pointed out above. However, this is a software behaviour, so it can be changed by adding appropriate logic to the device driver and/or firmware. This is how sniffers operate, for example.