Hi all,
I tried to run the vhost app in examples/, and I am facing the issue below:
[]# examples/vhost/build/app/vhost-switch -l 0-3 -n 4 -- --socket-file /tmp/sock0 --client -p 0x1 --stats 20
EAL: Detected 24 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: WARNING: Master core has no memory on local socket!
EAL: PCI device 0000:07:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:07:00.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:09:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10d3 net_e1000_em
VHOST_PORT:
Specified port number(1) exceeds total system port number(0)
EAL: Error - exiting with code: 1
Cause: Cannot create mbuf pool
The HugePage information is like this:
[]# sudo cat /proc/meminfo | grep Huge
AnonHugePages: 3129344 kB
HugePages_Total: 4096
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
I tried to run other apps: helloworld works fine, but ptpclient has the same problem. Rebooting does not help. How can I fix it?
Any help is appreciated.
Thanks in advance.
The real issue is here:
Specified port number(1) exceeds total system port number(0)
This means no Ethernet ports have been detected. Please make sure you have bound at least one Ethernet device to a UIO or VFIO driver as described in the DPDK Getting Started Guide:
https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
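As a minimal sketch, assuming you run from the DPDK source tree and want the in-kernel vfio-pci driver, binding the first igb port from your log would look something like this (adjust the PCI address and paths for your setup):
sudo modprobe vfio-pci
sudo ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:07:00.0
./usertools/dpdk-devbind.py --status
Note that VFIO requires the IOMMU to be enabled in the BIOS and kernel; if it is not, bind to igb_uio instead after building and loading that module.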
Related
I am trying to execute the example code in DPDK and I am getting the following error. I have two versions of DPDK installed on the system. Please suggest a solution for the issue.
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 16777216 kB hugepages reported
EAL: 512 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Couldn't get fd on hugepage file
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
EAL: Error - exiting with code: 1
Cause: Error with EAL initialization
Edit: This solves the issue:
echo 512|sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
[EDIT-1] Based on the limited information
EAL: No available 16777216 kB huge pages reported
EAL: 512 huge pages of size 2097152 reserved, but no mounted hugetlbfs
this looks more like either a sudo (privilege) issue or a hugepage-not-available issue.
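For completeness, a typical sequence that covers both points, reserving 2 MB hugepages and mounting hugetlbfs (the mount point below is only an example), would be:
echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge
grep Huge /proc/meminfo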
[ima] Thanks, Vipin. Yes, it was a hugepage-related issue.
Setup
Pktgen version 21.01.0
DPDK version 20.11
OS: ubuntu 18.04
NIC: Mellanox
driver: mlx5_core
version: 5.1-2.5.8
firmware-version: 16.28.2006 (MT_0000000012)
expansion-rom-version:
bus-info: 0000:03:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
Issue
When I run dpdk-testpmd from DPDK 20.11 and ./dpdk-devbind -s, they can both find the Mellanox (mlx5) ports:
Network devices using kernel driver
===================================
0000:03:00.0 'MT27800 Family [ConnectX-5] 1017' if=ens1f0 drv=mlx5_core unused=vfio-pci
0000:03:00.1 'MT27800 Family [ConnectX-5] 1017' if=ens1f1 drv=mlx5_core unused=vfio-pci
0000:05:00.0 'I210 Gigabit Network Connection 1533' if=enp5s0 drv=igb unused=vfio-pci *Active*
0000:06:00.0 'I210 Gigabit Network Connection 1533' if=enp6s0 drv=igb unused=vfio-pci
0000:07:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=enp7s0f0 drv=ixgbe unused=vfio-pci
0000:07:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=enp7s0f1 drv=ixgbe unused=vfio-pci
$sudo build/app/dpdk-testpmd -c7 --vdev=net_pcap0,iface=eth0 --vdev=net_pcap1,iface=eth1 -- -i --nb-cores=2 --nb-ports=2 --total-num-mbufs=2048
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:03:00.0 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:03:00.1 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
open_iface_live(): Couldn't open eth0: eth0: No such device exists (SIOCGIFHWADDR: No such device)
open_single_iface(): Couldn't open interface eth0
vdev_probe(): failed to initialize net_pcap0 device
open_iface_live(): Couldn't open eth1: eth1: No such device exists (SIOCGIFHWADDR: No such device)
open_single_iface(): Couldn't open interface eth1
vdev_probe(): failed to initialize net_pcap1 device
EAL: Bus (vdev) probe failed.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 98:03:9B:06:AB:34
Configuring Port 1 (socket 0)
Port 1: 98:03:9B:06:AB:35
Checking link statuses...
Done
testpmd> quit
But when I run pktgen, it doesn't work.
$sudo ./Builddir/app/pktgen -c 0xff -n 3 -a 0000:03:00.1 -- -p 0x1 -P -m "[1:2].0"
Copyright (c) <2010-2020>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: No legacy callbacks, legacy socket not created
*** Copyright (c) <2010-2020>, Intel Corporation. All rights reserved.
*** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
Port: Name IfIndex Alias NUMA PCI
!PANIC!: *** Did not find any ports to use ***
PANIC in pktgen_config_ports():
*** Did not find any ports to use ***
6: [./Builddir/app/pktgen(+0x977a) [0x556f614b477a]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f1721223bf7]]
4: [./Builddir/app/pktgen(+0x9319) [0x556f614b4319]]
3: [./Builddir/app/pktgen(+0x31fa7) [0x556f614dcfa7]]
2: [/usr/local/lib/x86_64-linux-gnu/librte_eal.so.21(__rte_panic+0xc5) [0x7f172221d285]]
1: [/usr/local/lib/x86_64-linux-gnu/librte_eal.so.21(rte_dump_stack+0x2e) [0x7f172223ef2e]]
Aborted
[Update-1]
I have found librte_net_mlx5.so present on the system:
/usr/local/lib/x86_64-linux-gnu/librte_net_mlx5.so.21.0
/usr/local/lib/x86_64-linux-gnu/librte_net_mlx5.so.21
/usr/local/lib/x86_64-linux-gnu/librte_net_mlx5.so
/usr/local/lib/x86_64-linux-gnu/dpdk/pmds-21.0/librte_net_mlx5.so.21.0
/usr/local/lib/x86_64-linux-gnu/dpdk/pmds-21.0/librte_net_mlx5.so.21
/usr/local/lib/x86_64-linux-gnu/dpdk/pmds-21.0/librte_net_mlx5.so
/usr/local/lib/x86_64-linux-gnu/librte_net_mlx5.a
/home/dpdk-20.11/build/drivers/librte_net_mlx5.so.21.0
/home/dpdk-20.11/build/drivers/librte_net_mlx5.so.21
/home/dpdk-20.11/build/drivers/librte_net_mlx5.so
/home/dpdk-20.11/build/drivers/librte_net_mlx5.a.p
/home/dpdk-20.11/build/drivers/librte_net_mlx5.so.21.0.p
/home/dpdk-20.11/build/drivers/librte_net_mlx5.a
I tried adding -d librte_net_mlx5.so as per @Vipin Varghese's advice, and got the following output:
$sudo ./Builddir/app/pktgen -c 0xff -n 3 -a 0000:03:00.1 -d librte_net_mlx5.so -- -p 0x1 -P -m "[1:2].0"
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:03:00.1 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
*** Copyright (c) <2010-2020>, Intel Corporation. All rights reserved.
*** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
Port: Name IfIndex Alias NUMA PCI
0: mlx5_pci 11 0 15b3:1017/03:00.1
Initialize Port 0 -- TxQ 1, RxQ 1
MBUF: error setting mempool handler
!PANIC!: Cannot create mbuf pool (Default RX 0:0) port 0, queue 0, nb_mbufs 4096, socket_id 0: Invalid argument
PANIC in pktgen_mbuf_pool_create():
Cannot create mbuf pool (Default RX 0:0) port 0, queue 0, nb_mbufs 4096, socket_id 0: Invalid argument
6: [./Builddir/app/pktgen(+0x977a) [0x55fd3145f77a]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f5479069bf7]]
4: [./Builddir/app/pktgen(+0x9319) [0x55fd3145f319]]
3: [./Builddir/app/pktgen(+0x3198b) [0x55fd3148798b]]
2: [/usr/local/lib/x86_64-linux-gnu/librte_eal.so.21(__rte_panic+0xc5) [0x7f547a063285]]
1: [/usr/local/lib/x86_64-linux-gnu/librte_eal.so.21(rte_dump_stack+0x2e) [0x7f547a084f2e]]
Aborted
Based on the logs, it is evident that the pktgen utility is
either not built with the Mellanox mlx5 PMD,
or not being passed the shared library needed to initialize the MLX5 PMD.
Since the DPDK used for building is version 20.11, the probability that pktgen was built against shared libraries is high. Passing the EAL argument -d librte_net_mlx5.so should resolve the shared-library issue.
The reason for not suggesting the static-library path is that the testpmd logs show MLX5 being identified, while eth0 and eth1 are non-existent interfaces, so only the PCAP PMD is skipped:
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:03:00.0 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:03:00.1 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
[EDIT-1] Proper details were also requested in the comments, but none were made available.
[EDIT-2] To check whether Pktgen 21.01 is built with the MLX5 PMD, run the following in a console:
pkg-config --cflags --libs --static libdpdk | grep -i mlx5
nm [pktgen-application] | grep -i mlx5
Based on the updated logs, pktgen is not built against the DPDK that is installed in /usr/local/lib/x86_64-linux-gnu.
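To confirm which DPDK installation the pktgen binary is actually linked against (assuming a shared-library build), one could also check, for example:
ldd ./Builddir/app/pktgen | grep librte
pkg-config --modversion libdpdk
The first command lists the librte_* libraries the binary resolves at runtime; the second reports the DPDK version visible to pkg-config, assuming the intended libdpdk.pc is on PKG_CONFIG_PATH.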
I'm new to DPDK, and I'm installing a DPDK version of suricata on the server. When I run suricata --list-dpdkports, it shows
EAL: Detected 128 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /tmp/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: Couldn't get fd on hugepage file
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
5/11/2020 -- 21:41:45 - <Error> - [ERRCODE: SC_ERR_DPDK_CONFIG(319)] - DPDK init failed
What does EAL: No available hugepages reported in hugepages-1048576kB mean? No matter how many hugepages I set, it always shows that.
AnonHugePages: 104448 kB
HugePages_Total: 8192
HugePages_Free: 8191
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
I'm new to DPDK, and most of the solutions I found online are about "No free hugepages reported". I really want to know what this message means. Thank you for your help.
@Ericsun, the log EAL: No available hugepages reported in hugepages-1048576kB is normal, as your hugepage size is 2048 kB. DPDK on x86 can use either 2 MB or 1 GB hugepages, and rte_eal_init probes for both. In your current setup no 1 GB hugepages are found, hence rte_eal_init logs that message.
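You can see which hugepage sizes are actually reserved by checking sysfs, for example:
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
(the hugepages-1048576kB directory only exists if the CPU and kernel support 1 GB pages).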
Your actual error, however, is:
EAL: Couldn't get fd on hugepage file
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
Use sudo to elevate privileges so the hugepages can be accessed via mmap.
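For example, assuming suricata is on the PATH and the hugepages shown above are already reserved:
sudo suricata --list-dpdkports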
I'm trying to use dpdk-pdump with dpdk-stable-18.02.1.
My configuration:
CONFIG_RTE_LIBRTE_BNX2X_PMD=y
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_RX=y
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_TX=y
CONFIG_RTE_LIBRTE_BNX2X_MF_SUPPORT=y
CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=y
CONFIG_RTE_LIBRTE_PMD_PCAP=y
CONFIG_RTE_LIBRTE_PDUMP=y
I bound the device to DPDK:
Network devices using DPDK-compatible driver
============================================
0000:03:00.1 'NetXtreme II BCM57810 10 Gigabit Ethernet 168e' drv=igb_uio unused=vfio-pci
And I start the primary process first:
# ./testpmd -c 3 -n 4 -- -i --total-num-mbufs=16384 --port-topology=chained
EAL: Detected 32 lcore(s)
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL: probe driver: 14e4:168e net_bnx2x
EAL: PCI device 0000:03:00.1 on NUMA socket 0
EAL: probe driver: 14e4:168e net_bnx2x
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=16384, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
rte_mempool_ops_get_capabilities: Function not supported
rte_mempool_ops_register_memory_area: Function not supported
[...]
Configuring Port 0 (socket 0)
PMD: bnx2x_interrupt_action(): Interrupt handled
PMD: bnx2x_interrupt_action(): Interrupt handled
PMD: bnx2x_interrupt_action(): Interrupt handled
Port 0: C4:34:6B:B0:EA:64
Checking link statuses...
Done
PMD: bnx2x_interrupt_action(): Interrupt handled
testpmd>
However, the secondary pdump process failed.
# ./dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=./capture.pcap'
EAL: Detected 32 lcore(s)
EAL: Multi-process socket /var/run/.rte_unix_14636_fe8ed726aadaf
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in the kernel.
EAL: This may cause issues with mapping memory into secondary processes
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL: probe driver: 14e4:168e net_bnx2x
EAL: PCI device 0000:03:00.1 on NUMA socket 0
EAL: probe driver: 14e4:168e net_bnx2x
dpdk-pdump: /root/dpdk-stable-18.02.1/drivers/net/bnx2x/bnx2x_ethdev.c:563: bnx2x_common_dev_init: Assertion `sc->bar[0].base_addr' failed.
Aborted (core dumped)
Have I missed something? Please give me some information to understand this issue.
I found that I can run dpdk-pdump with the ixgbe driver.
After comparing bnx2x_ethdev.c and ixgbe_ethdev.c, I found that bnx2x doesn't support multi-process:
https://dpdk.org/doc/guides/nics/overview.html
Since dpdk-pdump runs as a secondary process, it seems that it is not able to run with bnx2x.
I have configured the NIC cards as below:
[root@localhost ethtool]# ../../tools/dpdk-devbind.py -s
Network devices using DPDK-compatible driver
============================================
0000:81:00.0 'NetXtreme BCM5722 Gigabit Ethernet PCI Express' drv=igb_uio unused=tg3
Network devices using kernel driver
===================================
0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens513f0 drv=ixgbe unused=igb_uio
0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens513f1 drv=ixgbe unused=igb_uio
0000:04:00.0 'I350 Gigabit Network Connection' if=enp4s0f0 drv=igb unused=igb_uio
0000:04:00.3 'I350 Gigabit Network Connection' if=enp4s0f3 drv=igb unused=igb_uio
Other network devices
=====================
<none>
Crypto devices using DPDK-compatible driver
===========================================
<none>
Crypto devices using kernel driver
==================================
0000:84:00.0 'DH895XCC Series QAT' drv=dh895xcc unused=qat_dh895xcc,igb_uio
Other crypto devices
====================
<none>
When I run the ethtool sample application, it errors out with 0 NIC ports, as shown below:
[root@localhost ethtool]# ./ethtool-app/ethtool-app/x86_64-native-
EAL: Detected 47 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:04:00.3 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
Number of NICs: 0
EAL: Error - exiting with code: 1
Cause: No available NIC ports!
Can someone help me configure the ports if the port configuration is wrong, or point out whatever else might be the issue?
The above error was occurring because the device below is not supported:
0000:81:00.0 'NetXtreme BCM5722 Gigabit Ethernet PCI Express' drv=igb_uio unused=tg3
So binding a supported device to the DPDK driver solved the problem.
The dpdk-devbind.py tool might be a bit misleading here. Not all of the devices listed under "Network devices using DPDK-compatible driver" are in fact supported by DPDK.
Here is the list of supported Broadcom NICs in DPDK:
http://dpdk.org/doc/guides/nics/bnxt.html
Looks like the BCM5722 is not there.
On the other hand, it looks like you have four other NICs which are supported by DPDK:
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:04:00.3 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
So you need to bind one of those to igb_uio and try to run the example again.
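For example, using one of the ixgbe ports from your listing (PCI address and interface name taken from your output; run from the same tools directory you used above):
ifconfig ens513f0 down
../../tools/dpdk-devbind.py --bind=igb_uio 0000:02:00.0
../../tools/dpdk-devbind.py -s
Bringing the kernel interface down first is usually needed, since dpdk-devbind.py will refuse to rebind an active interface.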