DPDK driver compatible with Intel X710 NIC

Could you please suggest which DPDK driver in the Virtual Machine is compatible with the Intel X710 NIC driver on the Host? The igb_uio driver which we are currently using may only be compatible with Intel NICs like the 82599.

Since the question is not clear, I have to make certain assumptions.
Assumptions:
You want to run your DPDK application in the guest OS.
You have an X710 (Fortville) NIC on the host.
To achieve this, you have 3 options:
a. X710 passed through to the guest OS.
b. X710 as an SR-IOV VF to the guest OS.
c. Using an intermediate application such as OVS, VPP or Snabb (i.e. a virtual switch) to connect to the guest OS.
For cases a and b you can still use igb_uio or vfio-pci, as the kernel driver is still i40e and the device is seen as an X710. For case c you can use igb_uio with virtio-pci as the kernel driver.

Thanks for updating the details, as this makes the environment and setup clear. Please find below the answers to the queries and what can be done to fix things.
Environment:
host OS: RHEL 7.6, X710 PF (let's call it eno1), kernel PF driver is i40e
guest OS: RHEL 7.6, X710 VFs created from eno1 (let's call them eno2 and eno3), passed through to the VM and bound with igb_uio
expected behaviour: Ingress (RX) and Egress (TX) should work
observed behaviour: Egress (TX) works, but Ingress (RX) to the VM ports does not
The fix for "incoming packets from the Host's physical port are not reaching the VM via VF" is to redirect traffic from the physical X710 to the required SR-IOV port. We have 2 options:
using virtual switches like OVS, Snabb Switch or VPP
using PF flow director rules (a sketch follows below).
From the current description I am not able to tell which of these is in place.
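For illustration, here is a minimal sketch, assuming the host PF is bound to DPDK and programmed through rte_flow (which is how the i40e flow director is driven from DPDK). The port id, destination MAC, VF index and the pre-21.02 rte_flow_item_eth field layout are all assumptions; whether the PMD accepts this exact pattern/action combination depends on your DPDK version.

```c
/* Hedged sketch: steer frames destined to the VF's MAC from the X710 PF
 * into VF 0. Run on the host with the PF port under DPDK control.
 * The MAC, port id and VF index below are illustrative placeholders. */
#include <string.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_flow.h>

static int redirect_to_vf(uint16_t pf_port_id, const struct rte_ether_addr *vf_mac)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_eth eth_spec, eth_mask;
    struct rte_flow_error err;

    memset(&eth_spec, 0, sizeof(eth_spec));
    memset(&eth_mask, 0, sizeof(eth_mask));
    rte_ether_addr_copy(vf_mac, &eth_spec.dst);   /* match on destination MAC */
    memset(&eth_mask.dst, 0xff, sizeof(eth_mask.dst));

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_vf vf = { .id = 0 };   /* assumed VF index */
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    if (rte_flow_validate(pf_port_id, &attr, pattern, actions, &err) != 0)
        return -1;                                /* PMD rejected the rule */
    return rte_flow_create(pf_port_id, &attr, pattern, actions, &err) ? 0 : -1;
}
```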
Answer to your queries
Why does the X710 NIC VF driver remove the VLAN tag without the RX offload VLAN strip flag set? Is this unexpected VLAN removal behaviour of the X710 VF driver under vfio-pci a known bug?
I believe this is to do with the port init configuration you pass, as you might be passing eth_conf in the API rte_eth_dev_configure as default. This will use the default RX offload behaviour, which is dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_QINQ_STRIP (see the configuration sketch after these answers).
The outgoing packets from the DPDK application are leaving the VM via VF towards the Host's physical ports.
This is because, if you use the default config for rte_eth_dev_configure, the TX offload supports VLAN insertion.
But the incoming packets from the Host's physical port are not reaching the VM via VF.
This is dictated by the host PF, the flow director rules and the VF settings. I assume you are not using flow director on the host and have set rte_eth_dev_configure to default values in the guest OS.
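As a sketch of the fix on the guest side, one can pass an explicit rte_eth_conf instead of the defaults and additionally clear the VLAN strip bit at runtime. Port 0, a single queue pair, and the pre-21.11 flag spellings (DEV_RX_OFFLOAD_*, ETH_VLAN_*) are assumptions here.

```c
/* Hedged sketch: configure the VF port so that VLAN tags are NOT stripped
 * on RX. Port id and queue counts are illustrative placeholders. */
#include <string.h>
#include <rte_ethdev.h>

static int configure_keep_vlan(uint16_t port_id)
{
    struct rte_eth_conf conf;
    int mask, ret;

    memset(&conf, 0, sizeof(conf));
    /* Request no RX offloads: DEV_RX_OFFLOAD_VLAN_STRIP stays unset, so
     * the VLAN tag should reach the application inside the mbuf data. */
    conf.rxmode.offloads = 0;

    ret = rte_eth_dev_configure(port_id, 1 /* rxq */, 1 /* txq */, &conf);
    if (ret != 0)
        return ret;

    /* Belt and braces: clear the strip bit in the runtime VLAN offload mask. */
    mask = rte_eth_dev_get_vlan_offload(port_id);
    mask &= ~ETH_VLAN_STRIP_OFFLOAD;
    return rte_eth_dev_set_vlan_offload(port_id, mask);
}
```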

How to add veth interfaces in DPDK

I need to create a veth device for the slow path for control packets.
What I tried till now:
I have created veth interfaces using the command below:
sudo ip link add veth1-2 type veth peer name veth2-1
Then I used the command sudo dpdk-devbind.py --bind=igb_uio veth1-2 to bind the veth to DPDK.
It gives me an error: "Unknown device: veth2-1".
Is there any way we can add veth interfaces in DPDK?
If you want a "dpdk-solution", then what you'll want to look at is KNI: https://doc.dpdk.org/guides/prog_guide/kernel_nic_interface.html
From their docs:
The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.
The benefits of using the DPDK KNI are:
Faster than existing Linux TUN/TAP interfaces (by eliminating system calls and copy_to_user()/copy_from_user() operations).
Allows management of DPDK ports using standard Linux net tools such as
ethtool, ifconfig and tcpdump.
Allows an interface with the kernel network stack.
If you're fine using a non-DPDK solution, then a TUN/TAP device is a typical way to interface with the networking stack. Your application would receive packets on the DPDK-controlled NIC, and if a packet is a control packet you would simply forward it on to the TUN/TAP device (or KNI if using DPDK's version of TUN/TAP). Similarly for the other direction: if the TUN/TAP/KNI device receives a packet from the networking stack, you would simply send it out the DPDK physical NIC.
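A minimal sketch of that dispatch loop, assuming a KNI device has already been created with rte_kni_alloc and that is_control_pkt() is a user-supplied classifier (both hypothetical names here):

```c
/* Hedged sketch: control packets from the DPDK port go to the kernel via
 * KNI; kernel-originated packets come back out of the physical port. */
#include <rte_ethdev.h>
#include <rte_kni.h>
#include <rte_mbuf.h>

#define BURST 32

extern int is_control_pkt(const struct rte_mbuf *m); /* assumed classifier */

static void slowpath_loop(uint16_t port_id, struct rte_kni *kni)
{
    struct rte_mbuf *bufs[BURST];

    for (;;) {
        /* NIC -> kernel: divert control traffic into the network stack. */
        uint16_t nb = rte_eth_rx_burst(port_id, 0, bufs, BURST);
        for (uint16_t i = 0; i < nb; i++) {
            unsigned sent = is_control_pkt(bufs[i])
                ? rte_kni_tx_burst(kni, &bufs[i], 1)          /* slow path */
                : rte_eth_tx_burst(port_id, 0, &bufs[i], 1);  /* fast path */
            if (sent == 0)
                rte_pktmbuf_free(bufs[i]);                    /* drop on full queue */
        }
        /* Kernel -> NIC: forward stack-originated packets to the wire. */
        nb = rte_kni_rx_burst(kni, bufs, BURST);
        uint16_t tx = rte_eth_tx_burst(port_id, 0, bufs, nb);
        while (tx < nb)
            rte_pktmbuf_free(bufs[tx++]);
        rte_kni_handle_request(kni);   /* service ifconfig/ethtool requests */
    }
}
```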

How to test basicfwd application in DPDK

I am using an Ubuntu 16.04 virtual machine with kernel 4.4 and DPDK version 17.11. I managed to configure the igb_uio drivers using the setup utility.
Then I compiled the basicfwd application in DPDK. I also configured two ports with the igb_uio driver and verified that they are associated with DPDK and not shown in the Linux kernel.
The basicfwd application is listening on two ports, whose MAC addresses are displayed.
I am not sure how to send packets to a MAC address. Can anyone suggest how to create packets for a given MAC address using a command or utility? Windows is the host OS on my laptop.
I also see that the testpmd and pktgen applications are used for testing purposes. I am not sure whether they can be used to test the basicfwd application.
I would also like to know how to assign an IP address to a DPDK port so that it can receive packets in a live environment. I need to study DPDK more on these aspects.
Would appreciate any help on this.
DPDK is an alternative to kernel stack processing, so any port bound to DPDK via uio_pci_generic/vfio-pci/igb_uio will not support an IPv4/IPv6 address the way a kernel netdev does. Hence the expectation of assigning an IP address to a DPDK port is incorrect.
With respect to sending packets into a virtual machine, there are a couple of combinations:
complete NIC pass-through to the VM (PF/VF) - in this scenario, one needs to send packets in through the physical interface connected to it.
a port representation like a TAP/veth pair passed as a virtio interface - in this scenario there will be a representor port on the host machine, so you can use tools like ping/arping/packeth/pktgen/scapy/ostinato to generate packets (see also the raw-socket sketch below).
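If a Linux-side peer interface is available, a minimal raw-socket sender can also inject a frame toward the MAC that basicfwd prints. This is only a sketch; the interface name and MAC are placeholders:

```c
/* Hedged sketch: send one minimal Ethernet frame to a given destination MAC
 * through an AF_PACKET raw socket (requires root). "tap0" and the MAC are
 * illustrative placeholders. */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const unsigned char dst[6] = {0x00,0x11,0x22,0x33,0x44,0x55}; /* basicfwd port MAC */
    const char *ifname = "tap0";                                  /* host-side peer */

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = if_nametoindex(ifname);
    addr.sll_halen   = 6;
    memcpy(addr.sll_addr, dst, 6);

    unsigned char frame[60];                   /* minimum Ethernet frame size */
    memset(frame, 0, sizeof(frame));
    memcpy(frame, dst, 6);                     /* destination MAC */
    /* bytes 6..11 (source MAC) left zeroed for brevity */
    frame[12] = 0x08; frame[13] = 0x00;        /* EtherType IPv4, dummy payload */

    if (sendto(fd, frame, sizeof(frame), 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0)
        perror("sendto");
    close(fd);
    return 0;
}
```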
Note: if you are using the testpmd DPDK application you can make it run in promiscuous mode. For examples like l2fwd/skeleton the ports are set into promiscuous mode by default.

DPDK: Zero Tx or Rx packets while running TestPMD

I have set up DPDK 20.11. While running the basic testpmd code, the numbers of transmitted and received packets are zero. I need help and I am new to this.
I have attached the terminal screenshot of running testpmd. I would like to know where I am making a mistake.
OS: Ubuntu 16.04.6 LTS (Xenial Xerus)
testpmd was given no arguments (just sudo ./dpdk-testpmd).
Physical NIC firmware details: the driver details and NIC firmware have been provided in the link.
[Edit 1] Port info of the first and second ports attached.
Had a live debug on the setup: the ports were not physically connected to another NIC or switch. In the Linux kernel, ethtool shows the links as down; hence the DPDK application reports the same link-down state.
Solution: connect the interfaces to another NIC or switch to bring the port state up.
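For reference, the link state that testpmd reports can also be checked from application code; a minimal sketch against the DPDK 20.11 API, with port 0 assumed:

```c
/* Hedged sketch: print the link status of a port, mirroring what
 * testpmd's "show port info" reports. Port id is a placeholder. */
#include <stdio.h>
#include <rte_ethdev.h>

static void print_link(uint16_t port_id)
{
    struct rte_eth_link link;

    if (rte_eth_link_get(port_id, &link) != 0) {  /* waits briefly for link */
        printf("port %u: failed to read link\n", port_id);
        return;
    }
    printf("port %u: link %s, %u Mbps, %s-duplex\n", port_id,
           link.link_status ? "up" : "down",
           link.link_speed,
           link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full" : "half");
}
```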

DPDK getting too many rx_crc_errors on one port

What may cause rx_crc_errors on DPDK ports?
Is it a software thing, or a hardware thing related to the port or to the traffic coming from the other end?
DPDK Version: 19.02
PMD: I40E
This port is running on a customer network. Worth mentioning that this is the only port (out of 4) having this behaviour, so it may be a router/traffic thing, but I couldn't verify that.
I used dpdk-proc-info to get this data.
I could not do any additional activity, as this is running on a customer site.
The DPDK I40E PMD only has an option to enable or disable CRC on the port. Hence the assumption that the DPDK I40E PMD is causing CRC errors on 1 port out of 4 can be ruled out fully.
RX packets are validated by the ASIC per port for CRC and then DMAed to the mbuf packet buffer. The PMD copies the descriptor state into the mbuf struct (one item among them being CRC). The packet descriptor indicates the CRC result of the packet buffer to the driver (kernel/DPDK PMD). So a CRC error on a given port can arise due to the following reasons:
the port connected to the ASIC is faulty (very rare case).
the SFP+ is not properly connected (possible).
the SFP+ is not the recommended one (possible).
the traffic coming from the other end contains packets with faulty CRC (possible).
One needs to isolate the issue by
binding the port to the Linux driver i40e and checking the statistics via ethtool -S [port].
checking SFP+ compatibility on the faulty port by swapping with a working one.
re-seating the SFP+.
swapping the data cables between a working and the faulty port, then checking whether the error follows.
If in all the above 4 cases the error only shows on the faulty port, then the NIC indeed has only 3 working ports among the 4. The NIC card needs replacement, or one should ignore the faulty port altogether. Hence this is not a DPDK PMD or library issue.
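As a DPDK-side cross-check of the first isolation step, here is a hedged sketch that dumps the port's extended statistics (the same counters dpdk-proc-info reads) and filters for CRC-related names; the port id and the exact xstats names are PMD-dependent assumptions:

```c
/* Hedged sketch: list xstats whose name contains "crc", e.g. the i40e
 * rx_crc_errors counter. Port id is a placeholder. */
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static void dump_crc_errors(uint16_t port_id)
{
    int n = rte_eth_xstats_get(port_id, NULL, 0);   /* query counter count */
    if (n <= 0)
        return;

    struct rte_eth_xstat *stats = malloc(n * sizeof(*stats));
    struct rte_eth_xstat_name *names = malloc(n * sizeof(*names));
    if (stats == NULL || names == NULL)
        goto out;

    rte_eth_xstats_get(port_id, stats, n);
    rte_eth_xstats_get_names(port_id, names, n);

    for (int i = 0; i < n; i++)
        if (strstr(names[i].name, "crc") != NULL)
            printf("%s: %" PRIu64 "\n", names[i].name, stats[i].value);
out:
    free(stats);
    free(names);
}
```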

VirtualBox bridged interface won't connect to the internet

I'm trying to connect an Ubuntu 12.04 guest to my local network and the internet, and I would like it to be directly reachable from the local network. A NAT interface doesn't do that, so I tried setting up a bridge (in the VirtualBox GUI). But while I can now access any other host on the LAN from the guest, I can't access the internet. DHCP seems to work fine, since my guest OS gets an IP in the correct range and with the correct mask. However, I can't even ping the router which connects me to the internet (the same machine as the DHCP server).
Here's my configuration:
host machine: Linux Mint Debian Edition x86-64
guests: Win7 64-bit and Ubuntu Server 12.04 x86-64 (both have the same issue)
router-gateway-DHCP: Livebox from the Orange ISP
host network interface: WiFi USB dongle with an RTL8191SU chipset (which works fine for my host)
I know bridged mode isn't supported by all wireless adapters, but isn't it weird that I can access the local network but not the internet?
Maybe the problem comes from the gateway itself?
Any advice would be much appreciated.
In general, there is no problem using VirtualBox in bridged mode on a laptop. I use it very often and on different platforms, yet the issue can be related to your network adapter.
Can you try to debug your network using tcpdump (if familiar) or Wireshark? I once had the problem that the MAC address got rejected by the router. Can you check whether the router is permissive of unauthorized MAC addresses or anything alike?