I am trying to bind this i40e NIC:
Ethernet Controller X710 for 10GbE backplane 1581
My OS is Ubuntu 18.04.
Kernel: 4.15.0-74-generic
I used dpdk-setup.sh to insert the VFIO module.
I also added iommu=on to the GRUB file.
Running the devbind command:
sudo ./dpdk-devbind.py -b vfio-pci 02:00.1
I got this error:
Error: bind failed for 0000:02:00.1 - Cannot bind to driver vfio-pci
dmesg output:
[ 5091.393436] vfio-pci: probe of 0000:02:00.1 failed with error -22
There is no issue binding vfio-pci to the Ethernet Controller X710. I have followed these steps successfully:
DPDK version: dpdk-20.11
NIC: driverversion=2.1.14-k, firmware=6.01
modprobe vfio-pci
confirm via lsmod
bind with sudo ./usertools/dpdk-devbind.py -b vfio-pci [PCIe B:D:F]
[EDIT-1] As per the DPDK documentation, a workaround for VT-d/IOMMU is suggested.
Note:
If VT-d is not enabled in the BIOS and on the kernel command line (GRUB option), the workaround is to use echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, then follow the steps above.
If the interface is still in use by the kernel you will get Warning: routing table indicates that interface [PCIe B:D:F] is active. Not modifying. Simply bring the interface down with ifconfig [interface name] down and bind again.
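A consolidated sketch of the workaround sequence, reusing the PCIe address 0000:02:00.1 from the question (the interface name is a placeholder):
sudo modprobe vfio-pci
echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
sudo ifconfig [interface name] down                # only if the interface is still active in the kernel
sudo ./usertools/dpdk-devbind.py -b vfio-pci 0000:02:00.1
./usertools/dpdk-devbind.py --status               # confirm the device now shows drv=vfio-pci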
I have a board with one ethernet interface (eth0) running Linux.
I'm trying to forward all incoming traffic from eth0 to my PMD, using the dpdk-l2fwd example application.
Here is what I've tried:
./dpdk-l2fwd -c 0x3 --vdev={my_pmd}0 -- -p 0x3 -T 0
I can see that my rx_pkt_burst callback is polled by the application, but that's it.
How can I forward all incoming eth0 packets to my PMD?
I tried to use net_tap, using the following command:
./dpdk-l2fwd -c 0xff --vdev=net_tap0 --vdev={my_pmd}0 -- -p 0x7 -T 0 --portmap="(1,2)"
And my tx_pkt_burst callback is called occasionally, but not when I think it should be called.
For example, if I ping this board from another one, the ping is successful, but the tx_pkt_burst callback is not being called.
I tried to use the devbind tool, but no devices are detected:
./usertools/dpdk-devbind.py --status
No 'Network' devices detected
=============================
No 'Baseband' devices detected
==============================
No 'Crypto' devices detected
============================
No 'Eventdev' devices detected
==============================
No 'Mempool' devices detected
=============================
No 'Compress' devices detected
==============================
No 'Misc (rawdev)' devices detected
===================================
No 'Regex' devices detected
===========================
Update
DPDK version - 20.11.
My HW is an embedded device based on NXP's Layerscape.
$ lshw -class network
*-network
description: Ethernet interface
physical id: 3
logical name: eth0
serial: 00:11:22:44:11:44
size: 1Gbit/s
capacity: 1Gbit/s
capabilities: ethernet physical tp mii 10bt-fd 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=fsl_dpaa2_eth driverversion=5.10.35-00002-g3434eea0e1e7-dir duplex=full firmware=7.17 ip=192.168.15.157 link=yes multicast=yes port=twisted pair speed=1Gbit/s
I'm trying to bypass all traffic to the PMD I'm currently developing.
Thanks.
[EDIT-1] Clarification on using the same interface for DPDK and kernel routing
Answer> As discussed in the comments, please refer to DPDK + kernel on the same interface.
Based on the information shared, there are multiple questions within the single query "I'm trying to bypass all traffic to the PMD I'm currently developing". Addressing each one separately below:
Question 1: using the dpdk-l2fwd example application
Answer> The DPDK l2fwd example application uses basic APIs with almost no HW offloads. Based on your environment (a board with one Ethernet interface, eth0), the right set of parameters should be -p 0x1 --no-mac-updating -T 1. This configures the application to receive and transmit packets using a single DPDK interface (that is, eth0 on your board).
Note: a DPDK application can work with both physical and virtual DPDK PMDs.
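A minimal sketch of the resulting l2fwd invocation (the core mask 0x3 is carried over from the question; {my_pmd}0 is the custom vdev named there):
./dpdk-l2fwd -c 0x3 --vdev={my_pmd}0 -- -p 0x1 --no-mac-updating -T 1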
Question 2: I tried to use net_tap, using the following command
Answer> If the intent is to intercept traffic from the physical interface and forward it to a TAP interface, then one needs to modify the EAL arguments as ./build/l2fwd --vdev=net_tap0,iface="my_eth0" -- -p 0x3 -T 1 --no-mac-updating. This allows the application to probe the physical NXP interface (eth0) and use a Linux TAP interface as the secondary interface. Any traffic between the NXP and TAP ports is then cross-connected, i.e. NXP (eth0) <==> TAP (my_eth0).
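A hedged way to verify the cross-connect once l2fwd is running (my_eth0 is the TAP name passed above; the address 192.0.2.1/24 is only a placeholder for a quick test):
ip link show my_eth0                               # the TAP device created by net_tap should be visible
sudo ip addr add 192.0.2.1/24 dev my_eth0          # placeholder address for ping/tcpdump tests
sudo ip link set my_eth0 up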
Question 3: ./usertools/dpdk-devbind.py --status returns empty
Answer> From the DPDK supported-NIC list, NXP provides the dpaa, dpaa2, enetc, enetfec and pfe PMDs. Cross-checking the kernel driver fsl_dpaa2_eth, I think it is safe to assume the dpaa2 PMD is the one that applies. As you have mentioned the NIC is not enumerated, it looks like there are certain caveats such as model revision, supported board, BSP package, vendor/sub-vendor ID checks, etc. More details can be found in the Board Support Package and the DPAA2 NIC guide.
Debug & Alternative solutions:
To start with, use the kernel driver to bring in packets.
Use extra logging and debugging to identify why the NIC is not shown in the application.
Approach 1:
Make sure the NIC is bound to the kernel driver fsl_dpaa2_eth.
Ensure the NIC is connected and the link is up with ethtool eth0.
Set promiscuous mode with ifconfig eth0 promisc up.
Start the DPDK application with the PCAP PMD: ./build/l2fwd --vdev=net_pcap0,iface=eth0 -- -p 1 --no-mac-updating -T 1
Check that packets are received and redirected to the PCAP eth0 PMD by checking the statistics (see the sketch below).
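A hedged way to confirm packets are actually flowing (the address 192.168.15.157 is taken from the lshw output above):
ping 192.168.15.157                                # run from another host on the same segment
# watch the periodic port statistics printed by l2fwd (enabled by -T 1); the receive counters should increase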
Approach 2:
Ideally, the NIC should be categorized as a network device in order to be probed by devbind.py.
Check the device details with lshw -c net -businfo.
Try checking with lspci -Dvmmnnk -s [PCIe B:D:F] for the network details.
If the above does not report the device as a network device, this might be the reason it is not getting listed.
Suggestion/workaround: you can try to forcefully bind it with igb_uio or vfio-pci (I am not very familiar with NXP SoCs) using dpdk-devbind.py -b vfio-pci [PCIe B:D:F]. Then cross-check with lspci -ks [PCIe B:D:F]. Once that succeeds, one can start the DPDK l2fwd in PMD debug mode with ./build/l2fwd -a [PCIe B:D:F] --log-level=pmd,8 -- -p 1 --no-mac-updating | more. By intercepting and interpreting the logs one can identify what is going on; the commands are consolidated below.
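Consolidating the above into one command sequence ([PCIe B:D:F] is a placeholder; this is a sketch, not a flow verified on NXP hardware):
sudo ./usertools/dpdk-devbind.py -b vfio-pci [PCIe B:D:F]
lspci -ks [PCIe B:D:F]                             # confirm which kernel driver is now in use
sudo ./build/l2fwd -a [PCIe B:D:F] --log-level=pmd,8 -- -p 1 --no-mac-updating | more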
Note:
It is assumed the application is built with static libraries and not dynamic ones. To build with static libraries, use make static for l2fwd (see the sketch below).
For the described use case, the recommended application is basicfwd (skeleton) rather than l2fwd.
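A sketch of the static build for a DPDK 20.11 example, assuming DPDK itself is already installed and visible to pkg-config:
cd examples/l2fwd
make static                                        # should produce build/l2fwd-static and link build/l2fwd to it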
Found the problem.
I had to unbind eth0 from the Linux kernel.
Now I can simply run:
./dpdk-l2fwd -c 0x3 --vdev={MY_PMD}0 -- -p 0x3 -T 1
And all traffic on the physical port is forwarded to my PMD.
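For reference, a generic sketch of detaching an interface from its kernel driver before handing it to DPDK (on an NXP dpaa2 platform the exact mechanism may differ; <B:D:F> is a placeholder):
sudo ip link set eth0 down
sudo ./usertools/dpdk-devbind.py --status          # locate the device entry
sudo ./usertools/dpdk-devbind.py -u <B:D:F>        # unbind from the current kernel driver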
Some context first:
I installed and configured OVS-DPDK on VM0 [Ubuntu + QEMU/KVM + OVS-DPDK]. As a guest running on top of VM0 I have VM1 [Ubuntu + DPDK].
After a bit of googling I was able to create a vhost-user-client port in OVS and consume it with VM1.
At this point I can see the result of the vhost-user-client port by running:
$ sudo dpdk-devbind.py --status
Network devices using kernel driver
===================================
0000:01:00.0 'Virtio network device 1041' if=enp1s0 drv=virtio-pci unused=vfio-pci *Active*
0000:07:00.0 'Virtio network device 1041' if=enp7s0 drv=virtio-pci unused=vfio-pci *Active* <--- this is it
However, after binding it using the command:
sudo dpdk-devbind.py --bind=virtio-pci 0000:07:00.0
and running dpdk-testpmd, I get nb forwarding ports=0:
$ sudo dpdk-testpmd
EAL: Detected 3 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:01:00.0 (socket 0)
testpmd: No probed ethernet devices
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=0
Press enter to exit
I don't see any clues for this in dmesg and am not sure where else to look.
How can I debug this issue?
As hinted in the comments, DPDK ports work with uio_pci_generic | vfio-pci | igb_uio. Hence, for a virtio network device to be used with DPDK, it needs to be bound to vfio-pci.
Please follow these steps:
ifconfig enp7s0 down
sudo modprobe uio
sudo modprobe vfio-pci (for newer kernel this can be skipped)
echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
sudo dpdk-devbind.py --bind=vfio-pci 0000:07:00.0
Once done successfully, a DPDK application can use the virtio net device.
Note: as mentioned in the comments, it is not clear why the virtio net device 0000:07:00.0 was bound to virtio-pci in the first place.
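A quick way to verify the re-bind (a sketch; the -a allow-list option assumes DPDK 20.11 option naming):
sudo dpdk-devbind.py --status                      # 0000:07:00.0 should now appear under the DPDK-compatible drivers
sudo dpdk-testpmd -a 0000:07:00.0 -- -i            # testpmd should now report nb forwarding ports=1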
I am facing the issue "master lcore cannot be used for a port" while trying to run the default configuration of run.py in pktgen-dpdk. When I run the same command using sudo, it gives the error that pktgen is not available. I have searched all over the internet and did not come across anyone facing such an error. Kindly guide me about this error. I am attaching a screenshot of the error: Master lcore not available for port error
default configuration file part 1
default configuration file part 2
One cannot pass the master core as an RX or TX core for any port in pktgen, since it runs the periodic handlers for the CLI and other services.
Please change the core-to-port mapping from [0:1].0-3 to [1].0. This will make port 0 use core 1 for both RX and TX (see the sketch below).
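A hedged example of a matching pktgen invocation (the lcore list and EAL options are assumptions; adjust them to your platform):
sudo ./pktgen -l 0-1 -n 4 -- -P -m "[1].0"         # core 0 stays free for the CLI, core 1 handles RX/TX for port 0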
We're trying to run DPDK example apps in a guest machine running Centos 7.5. The host is ESXi version 6.5.
I'm building dpdk on the guest machine where I'm trying to run it. I've tried both DPDK versions 18.05 and 18.08.
We have created five interfaces on ESXi for connection to our guest: one management port and four data ports. We're binding these four data ports to DPDK. The ports are all VMXNET3 interfaces. They are basically set up like the VMXNET3 interfaces in https://doc.dpdk.org/guides/nics/vmxnet3.html, using a vswitch to connect to a physical interface. However, note that we do not have any VF interfaces as shown in that document, only VMXNET3 interfaces. Unfortunately the document does not give any details on how to do the setup.
This document from vmware also shows a very similar setup. But again no details on how to setup.
Fundamentally, the roadblock we are hitting is that the VMXNET3 interfaces are failing initialization when starting the DPDK example app. Here is what we see:
[root@rg-vm ~]# ./dpdk-18.08/examples/packet_ordering/build/packet_ordering -c 0x0e0 -- -p 0xf
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
eth_vmxnet3_dev_init(): Incompatible hardware version: 0
EAL: Requested device 0000:04:00.0 cannot be used
We see this for all four interfaces that we are trying to bind to DPDK. However, strangely, sometimes after a reboot, the first two interfaces initialize correctly. But after that first attempt, all four interfaces then fail the same way.
Here are the commands we're using to setup DPDK.
modprobe uio
insmod ./dpdk-18.08/build/build/kernel/linux/igb_uio/igb_uio.ko
./dpdk-18.08/usertools/dpdk-devbind.py --bind=igb_uio 04:00.0
./dpdk-18.08/usertools/dpdk-devbind.py --bind=igb_uio 0c:00.0
./dpdk-18.08/usertools/dpdk-devbind.py --bind=igb_uio 13:00.0
./dpdk-18.08/usertools/dpdk-devbind.py --bind=igb_uio 1b:00.0
Note that we have also tried using uio_pci_generic with the same results. We have not been able to get the vfio-pci driver to bind to the VMXNET3 interfaces.
I'm not sure it matters, but the physical interfaces on the other side of the vswitch that we're connecting to are:
17:00.0 Ethernet controller: Intel Corporation I350 Gigabit Fiber Network Connection (rev 01)
We have also tried Ethernet cards based on the Intel 82576 chipset (the chipset DPDK shows being used in its documentation) and one based on the Intel X710. We see the same error using either of these cards as we did with the I350. So I think that eliminates the Ethernet hardware, which makes sense, as using the vswitch between us and the Ethernet controller should make us agnostic to what it actually is.
We are running on a Dell R540. Also note that when we run CentOS 7.5 with DPDK on this hardware without VMware, everything works fine. Likewise, if we run in VMware but pass through the I350 interfaces to the VM (instead of using the vswitch and vmxnet), everything also works fine.
I've tried updating the kernel (3.10) to the latest (4.18) but still get the same error.
If I try to read the version register (VRRS, the one that causes this error) in the VMXNET3 PCI BAR registers using ethtool before binding to DPDK, it looks fine (0xf). I've googled around a lot but can't seem to find much help on this. It is very possible the issue is with how I'm setting things up, but I can't find any info that gives details on how else to do it.
Any help would be greatly appreciated. Thanks!
Try these steps:
cd /etc/default
vi grub
Edit GRUB_CMDLINE_LINUX and add "nopku":
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet nopku transparent_hugepage=never log_buf_len=8M"
Regenerate the GRUB config: sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Reboot the VM and try DPDK.
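After the reboot, a quick sanity check that the option took effect (a sketch):
cat /proc/cmdline | grep -o nopku                  # should print "nopku"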
To configure DHCPv6 in my application I am using the command below. The same command executes fine with dhclient version 4.2.5-P1, but my application has dhclient version 4.3.0 and the command fails to execute.
Can anyone guide me on this? What differences are there between the two versions?
dhclient -6 -pf /var/run/udhcpc6.eth0.pid eth0 -e hostname:"ubuntu" -nw