I tried running the DPDK ipsec-secgw sample app with the following setup:
DPDK version: dpdk-stable-19.11.5
OS: CentOS Linux release 7.7.1908 (Core)
Kernel: 3.10.0-1062.el7.x86_64
NIC type and driver:
0000:00:04.0 'Virtio network device 1000' drv=igb_uio unused=virtio_pci,uio_pci_generic
Command and command-line args used to run the app:
./build/ipsec-secgw -l 6 -w 00:04.0 -w 00:05.0 --vdev "crypto_null" --log-level 8 \
--socket-mem 1024 -- -p 0xf -P -u 0x2 \
--config="(0,0,6),(1,0,6)" -f /root/config_file
Output:
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 1af4:1000 net_virtio
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 1af4:1000 net_virtio
CRYPTODEV: Creating cryptodev crypto_null
CRYPTODEV: Initialisation parameters - name: crypto_null,socket id: 0, max queue pairs: 8
Promiscuous mode selected
librte_ipsec usage: disabled
replay window size: 0
ESN: disabled
SA flags: 0
Frag TTL: 10000000000 ns
Allocated mbuf pool on socket 0
CRYPTODEV: elt_size 64 is expanded to 176
Allocated session pool on socket 0
Allocated session priv pool on socket 0
Configuring device port 0:
Address: 52:54:00:A5:82:2D
Creating queues: nb_rx_queue=1 nb_tx_queue=1...
EAL: Error - exiting with code: 1
Cause: Error: port 0 required RX offloads: 0xe, avaialbe RX offloads: 0xa1d
Config file contents:
#SP IPv4 rules
sp ipv4 out esp protect 1005 pri 1 dst 192.168.105.0/24 sport 0:65535 dport 0:65535
#SA rules
sa out 1005 aead_algo aes-128-gcm aead_key 2b:7e:15:16:28:ae:d2:a6:ab:f7:15:88:09:cf:4f:3d:de:ad:be:ef \
mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
port_id 1 \
type inline-crypto-offload \
sa in 5 aead_algo aes-128-gcm aead_key 2b:7e:15:16:28:ae:d2:a6:ab:f7:15:88:09:cf:4f:3d:de:ad:be:ef \
mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
port_id 1 \
type inline-crypto-offload \
#Routing rules
rt ipv4 dst 172.16.2.5/32 port 1
rt ipv4 dst 192.168.105.10/32 port 0
It says that certain offload capabilities are missing.
I got the config file details and command line arguments from a DPDK test plan for Niantic NICs. Is the app only supposed to work with Niantic PFs/VFs? Is there any way to get it to work with virtio paravirtualized NICs?
Instructions link followed:
Instructions
The DPDK example ipsec-secgw makes use of the RX offload .offloads = DEV_RX_OFFLOAD_CHECKSUM. For DPDK 19.11.5 LTS, the following devices support it:
axgbe
dpaa2
e1000
enic
hinic
ixgbe
mlx4
mlx5
mvneta
mvpp2
netvsc
octeontx
octeontx2
sfc
tap
thunderx
vmxnet3
The DPDK RX checksum offload is defined as #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM). Based on the error log (Cause: Error: port 0 required RX offloads: 0xe, available RX offloads: 0xa1d), it looks like DEV_RX_OFFLOAD_IPV4_CKSUM is not supported by the PMD.
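For reference, in DPDK 19.11 DEV_RX_OFFLOAD_IPV4_CKSUM is 0x2, DEV_RX_OFFLOAD_UDP_CKSUM is 0x4 and DEV_RX_OFFLOAD_TCP_CKSUM is 0x8, so the required mask is 0x2 | 0x4 | 0x8 = 0xe. The advertised mask 0xa1d includes 0x4 and 0x8 but not 0x2, which is why the missing capability is specifically the IPv4 checksum offload.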
For the question of whether ipsec-secgw only works with Niantic NICs: that assumption is not correct, because the application can run on any NIC that has the RX checksum offload available. The list is shared above.
For the question "Is there any way to get it to work with virtio para-virtualized NICs?": one can always drop the RX checksum requirement and verify the IPv4 checksum in software. But you will need to edit the application and use the rte_ipv4_cksum() helper.
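As a rough illustration of that edit (a sketch, not the actual ipsec-secgw patch: rte_ipv4_cksum(), rte_pktmbuf_mtod_offset() and the structs are standard DPDK 19.11 API, but where this hooks into the datapath and the assumption of plain Ethernet + IPv4 framing are mine):

#include <stdint.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* Verify the IPv4 header checksum in software for a packet received on a
 * port that cannot offload DEV_RX_OFFLOAD_IPV4_CKSUM. Assumes an untagged
 * Ethernet header followed directly by IPv4; real code must also handle
 * VLANs, IPv6, etc. Returns 1 if the checksum is good, 0 otherwise. */
static int
sw_ipv4_cksum_ok(struct rte_mbuf *m)
{
    struct rte_ipv4_hdr *ip;
    uint16_t stored, computed;

    ip = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
                                 sizeof(struct rte_ether_hdr));
    stored = ip->hdr_checksum;
    ip->hdr_checksum = 0;            /* field must be zero while summing */
    computed = rte_ipv4_cksum(ip);   /* recompute the IPv4 header checksum */
    ip->hdr_checksum = stored;       /* restore the original header */

    return computed == stored;
}

You would also have to remove DEV_RX_OFFLOAD_CHECKSUM from the RX offloads the application requests for the port; otherwise port configuration still fails before any packet is received.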
Related
I am trying to run l2fwd in two containers, both on the same host. Starting container1 and running l2fwd works fine, but as soon as I start l2fwd in container2, both of them get a Segmentation fault. Has anyone met this error? Thanks.
Host: 4 sriov-vf enabled, driver: vfio-pci
container1: docker run --privileged --name="vhost_user" -v /dev:/dev -v /tmp:/tmp -itd centos-cu:v3
container2: docker run --privileged --name="virtio_user" -v /dev:/dev -v /tmp:/tmp -itd centos-cu:v3
l2fwd logs:
container1:
# ./l2fwd -l 2-3 -n 2 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:10.5 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: using IOMMU type 1 (Type 1)
EAL: PCI device 0000:04:10.7 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
MAC updating enabled
Lcore 2: RX port 0
Lcore 3: RX port 1
Initializing port 0... done:
Port 0, MAC address: 02:09:C0:11:47:97
Initializing port 1... done:
Port 1, MAC address: 02:09:C0:00:2C:47
Checking link status... done
Port0 Link Up. Speed 10000 Mbps - full-duplex
Port1 Link Up. Speed 10000 Mbps - full-duplex
L2FWD: entering main loop on lcore 3
L2FWD: -- lcoreid=3 portid=1
L2FWD: entering main loop on lcore 2
L2FWD: -- lcoreid=2 portid=0
Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent: 0
Packets received: 0
Packets dropped: 0
Statistics for port 1 ------------------------------
Packets sent: 0
Packets received: 5
Packets dropped: 0
Aggregate statistics ===============================
Total packets sent: 0
Total packets received: 5
Total packets dropped: 0
====================================================
Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent: 23
Packets received: 16
Packets dropped: 0
Statistics for port 1 ------------------------------
Packets sent: 16
Packets received: 26
Packets dropped: 0
Aggregate statistics ===============================
Total packets sent: 39
Total packets received: 42
Total packets dropped: 0
====================================================
(start to run l2fwd in container2)
./run_l2fwd.sh: line 3: 116 Segmentation fault (core dumped) ./l2fwd -l 2-3 -n 2 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3
container2:
# /l2fwd -l 0-1 -n 2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:10.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: using IOMMU type 1 (Type 1)
EAL: PCI device 0000:04:10.3 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
MAC updating enabled
Lcore 0: RX port 0
Lcore 1: RX port 1
Initializing port 0... ./run_l2fwd.sh: line 3: 90 Segmentation fault (core dumped) ./l2fwd -l 0-1 -n 2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3
Mapping hugepages from files in hugetlbfs is essential for multi-process support, because secondary processes need to map the same hugepages. EAL creates files like rtemap_0 in directories specified with the --huge-dir option (or in the mount point for a specific hugepage size). The rte prefix can be changed using --file-prefix, and this is needed when running multiple primary processes that share a hugetlbfs mount point. Since both containers here mount the same /dev (and therefore the same /dev/hugepages) and both use the default rte prefix, the second l2fwd instance maps over the first one's backing files, which is consistent with both processes crashing; give each container its own --file-prefix (or its own hugetlbfs mount). Note also that each backing file by default corresponds to one hugepage and is opened and locked for the entire time the hugepage is used, which may exhaust the open-files limit (NOFILE).
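A minimal sketch of that separation for the two containers above (the prefix names are illustrative; everything else matches the commands already shown):

container1: ./l2fwd -l 2-3 -n 2 -w 0000:04:10.7 -w 0000:04:10.5 --file-prefix c1 -- -p 0x3
container2: ./l2fwd -l 0-1 -n 2 -w 0000:04:10.3 -w 0000:04:10.1 --file-prefix c2 -- -p 0x3

With distinct prefixes each primary process creates its own rtemap_* files and its own /var/run/dpdk/<prefix>/ runtime directory, so they no longer clobber each other.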
I am getting a DPDK MLX5 probing issue.
I have installed the mlx5/OFED driver.
I have loaded the kernel modules.
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: Probe PCI driver: mlx5_pci (15b3:1013) device: 0000:5e:00.0 (socket 0)
mlx5_pci: unable to recognize master/representors on the multiple IB devices
common_mlx5: Failed to load driver = mlx5_pci.
EAL: Requested device 0000:5e:00.0 cannot be used
EAL: Bus (pci) probe failed.
As for the "Failed to load driver = mlx5_pci" part, I can see that the mlx5_core driver is loaded.
dpdk-devbind.py -s
Network devices using kernel driver
===================================
0000:5e:00.0 'MT27700 Family [ConnectX-4]' if=enp94s0 drv=mlx5_core unused=
I am assuming both of them are the same?
What does failing to recognize master/representors on multiple IB devices mean?
My configuration is: CentOS 7.9, Linux Kernel 5.12, OFED 4.9 (LTS), DPDK 21.02
lsmod | egrep 'mlx|ib'
libceph 413696 1 ceph
ib_isert 49152 0
iscsi_target_mod 315392 1 ib_isert
ib_srpt 61440 0
target_core_mod 372736 3 iscsi_target_mod,ib_srpt,ib_isert
ib_srp 61440 0
scsi_transport_srp 28672 1 ib_srp
ib_iser 45056 0
ib_umad 36864 0
rdma_cm 114688 6 rpcrdma,ib_srpt,ib_srp,ib_iser,ib_isert,rdma_ucm
ib_ipoib 114688 0
libiscsi 65536 1 ib_iser
scsi_transport_iscsi 126976 2 ib_iser,libiscsi
ib_cm 122880 4 rdma_cm,ib_ipoib,ib_srpt,ib_srp
mlx5_ib 331776 0
mlx4_ib 196608 0
ib_uverbs 147456 3 mlx4_ib,rdma_ucm,mlx5_ib
ib_core 356352 14 rdma_cm,ib_ipoib,rpcrdma,mlx4_ib,ib_srpt,ib_srp,iw_cm,ib_iser,ib_umad,ib_isert,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,xfs,libceph
mlx4_en 118784 0
mlx4_core 319488 2 mlx4_ib,mlx4_en
mlx5_core 700416 1 mlx5_ib
mlxfw 32768 1 mlx5_core
pci_hyperv_intf 16384 1 mlx5_core
ptp 28672 3 igb,mlx4_en,mlx5_core
libahci 36864 1 ahci
libata 253952 2 libahci,ahci
After cloning the dpdk git repository and building the helloworld application, I get the following error:
$ ./examples/helloworld/build/helloworld
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Couldn't get fd on hugepage file
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
PANIC in main():
Cannot init EAL
5: [./examples/helloworld/build/helloworld(+0x11de) [0x56175faac1de]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f31f60fe0b3]]
3: [./examples/helloworld/build/helloworld(+0x111c) [0x56175faac11c]]
2: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(__rte_panic+0xc5) [0x7f31f62d537e]]
1: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(rte_dump_stack+0x32) [0x7f31f62ecc52]]
Aborted (core dumped)
Checked hugepage support and it seems fine:
$ cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 256
HugePages_Free: 255
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 524288 kB
$ mount | grep huge
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
$ cat /proc/filesystems | grep huge
nodev hugetlbfs
$ cat /proc/sys/vm/nr_hugepages
256
I saw a workaround in a related question; run it with the --no-huge option, which works:
$ ./examples/helloworld/build/helloworld --no-huge
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
hello from core 1
hello from core 2
hello from core 3
hello from core 0
But this is a limiting solution.
TL;DR Use sudo
Running with --log-level=eal,8 as suggested by @VipinVarghese revealed that this was a permissions issue:
$ ./examples/helloworld/build/helloworld --log-level=eal,8
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: open shared lib /usr/lib/x86_64-linux-gnu/dpdk/pmds-20.0/librte_pmd_qede.so
EAL: Registered [vdev] bus.
EAL: Registered [pci] bus.
EAL: Registered [eth] device class.
EAL: open shared lib /usr/lib/x86_64-linux-gnu/dpdk/pmds-20.0/librte_pmd_aesni_mb.so
...
EAL: Ask a virtual area of 0x61000 bytes
EAL: Virtual area found at 0xd00600000 (size = 0x61000)
EAL: Memseg list allocated: 0x800kB at socket 0
EAL: Ask a virtual area of 0x400000000 bytes
EAL: Virtual area found at 0xd00800000 (size = 0x400000000)
EAL: TSC frequency is ~2590000 KHz
EAL: Master lcore 0 is ready (tid=7fc11ed01d00;cpuset=[0])
EAL: lcore 2 is ready (tid=7fc116ffd700;cpuset=[2])
EAL: lcore 3 is ready (tid=7fc1167fc700;cpuset=[3])
EAL: lcore 1 is ready (tid=7fc1177fe700;cpuset=[1])
EAL: Trying to obtain current memory policy.
EAL: Setting policy MPOL_PREFERRED for socket 0
EAL: get_seg_fd(): open failed: Permission denied
EAL: Couldn't get fd on hugepage file
EAL: attempted to allocate 1 segments, but only 0 were allocated
EAL: Restoring previous memory policy: 0
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
PANIC in main():
Cannot init EAL
5: [./examples/helloworld/build/helloworld(+0x11de) [0x56459e5391de]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7fc11eed00b3]]
3: [./examples/helloworld/build/helloworld(+0x111c) [0x56459e53911c]]
2: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(__rte_panic+0xc5) [0x7fc11f0a737e]]
1: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(rte_dump_stack+0x32) [0x7fc11f0bec52]]
Aborted (core dumped)
Tried solving the permissions problem (EAL: get_seg_fd(): open failed: Permission denied), but it only worked when I ran it as root:
$ sudo ./examples/helloworld/build/helloworld
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
hello from core 1
hello from core 2
hello from core 3
hello from core 0
As it turns out, this is the correct approach, even though the documentation appears to assume that this is obvious. There is no mention of root privileges being required in section "6.2. Running a Sample Application"; excerpt below:
Copy the DPDK application binary to your target, then run the
application as follows (assuming the platform has four memory channels
per processor socket, and that cores 0-3 are present and are to be
used for running the application):
./dpdk-helloworld -l 0-3 -n 4
However, this point is mentioned later in the documentation, see "8.2. Running DPDK Applications Without Root Privileges" where there's a clear note:
The instructions below will allow running DPDK as non-root with older
Linux kernel versions. However, since version 4.0, the kernel does not
allow unprivileged processes to read the physical address information
from the pagemaps file, making it impossible for those processes to
use HW devices which require physical addresses
There is also a hint in the FAQ:
What does “EAL: map_all_hugepages(): open failed: Permission denied Cannot init memory” mean? This is most likely due to the test
application not being run with sudo to promote the user to a
superuser. Alternatively, applications can also be run as regular
user. For more information, please refer to DPDK Getting Started
Guide.
And an email touching on this topic:
2016-07-07 16:47, Jez Higgins:
> Is it possible to get DPDK up and running as non-root - if so, can
> anyone guide me to what I'm missing? Or should I be giving this up as a
> bad job?
You can try the --no-huge option.
But most of drivers won't work without hugepage currently.
A rework of the memory allocation is needed to make it work better.
That was four years ago. Perhaps there is already a solution which does not require sudo or --no-huge? If so, other answers are most welcome. For now, I'm going with this.
@Nagev, I request you to check the "DPDK as non-root" Stack Overflow question from Nov 2020.
[EDIT-1] Noticed the above question has been removed, hence access to the details is limited; updating with the answer for how to run without sudo or root privilege.
Note: I have been running DPDK applications as non-root with 18.11.5 LTS and 19.11.3 LTS.
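A minimal sketch of the kind of setup that makes non-root work (my assumptions: the NIC is bound to vfio-pci, a dedicated hugetlbfs mount is used, and the mount point and VFIO group number below are only placeholders):

# hugetlbfs mount owned by the unprivileged user
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs -o pagesize=2M,uid=$(id -u),gid=$(id -g) nodev /mnt/huge
# let the user open the VFIO group the NIC belongs to
sudo chown $(id -u):$(id -g) /dev/vfio/<group-number>
# point the application at that mount
./examples/helloworld/build/helloworld --huge-dir /mnt/huge

With vfio-pci the EAL can use IOVA as VA, so the pagemap (physical address) restriction mentioned in the documentation quote above does not apply.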
I am building the Helloworld application in DPDK. I get an error that says No free hugepages reported in hugepages-1048576kB.
(1) I build DPDK-18.11 using RTE_TARGET=x86_64-native-linuxapp-gcc.
(2) I run usertools/dpdk-setup.sh, run [15] (build DPDK).
(3) run [22], allocate hugepages. I set 1024 hugepages.
(4) run [18], insert igb_uio module.
(5) run [24], bind my NIC (e1000e) to igb_uio module.
Then, I go to examples/helloworld/ and run make to build the app. When I run
./build/app/helloworld -l 0-1 -n 4, I get the following notification (No free hugepages):
xiarui@wcf-OptiPlex-7060:~/dpdk/dpdk-18.11/examples/helloworld/build/app$ sudo ./helloworld -l 0 -n 4
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
hello from core 0
I have already allocated hugepages in the setup script, and get the following output:
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Furthermore, I find that e1000e cannot be bound to VFIO, so I only use the igb_uio driver.
Network devices using DPDK-compatible driver
============================================
0000:00:1f.6 'Ethernet Connection (7) I219-LM 15bb' drv=igb_uio unused=e1000e
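For reference, the equivalent manual bind outside dpdk-setup.sh would look something like this (PCI address taken from the listing above):

./usertools/dpdk-devbind.py --bind=igb_uio 0000:00:1f.6
./usertools/dpdk-devbind.py --status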
My host profile is:
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Stepping: 10
CPU MHz: 800.493
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 6384.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-11
Memory:
xiarui@wcf-OptiPlex-7060:~/dpdk/dpdk-18.11/examples/helloworld/build/app$ free -h
total used free shared buff/cache available
Mem: 7.6G 2.4G 4.4G 159M 809M 4.8G
Swap: 2.0G 0B 2.0G
Things get worse when I run pktgen-3.6.0. I get the following error:
>>> sdk '/home/xiarui/dpdk/dpdk-18.11', target 'x86_64-native-linuxapp-gcc'
Trying ./app/x86_64-native-linuxapp-gcc/pktgen
sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-1 -n 4 --proc-type auto --log-level 7 --file-prefix pg -- -T -P --crc-strip -m 1.0 -f themes/black-yellow.theme
Copyright (c) <2010-2019>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/pg/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
Lua 5.3.3 Copyright (C) 1994-2016 Lua.org, PUC-Rio
*** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
*** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 8c:ec:4b:a5:17:4f
eth_em_start(): Unable to initialize the hardware
!PANIC!: rte_eth_dev_start: port=0, Input/output error
PANIC in pktgen_config_ports():
rte_eth_dev_start: port=0, Input/output error6: [./app/x86_64-native-linuxapp-gcc/pktgen(_start+0x2a) [0x56038a3d29ba]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7fe0b33a3b97]]
4: [./app/x86_64-native-linuxapp-gcc/pktgen(main+0xe52) [0x56038a3ca782]]
3: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_config_ports+0x1ef1) [0x56038a403761]]
2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc5) [0x56038a3bb544]]
1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2e) [0x56038a4f5f4e]]
Could you share some ideas? Thank you for your time.
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
hello from core 0
No free 1 GB hugepages is not an error; it is just informational.
You got the hello from core 0 output, so your hello world application works just fine, congratulations!
I run testpmd from usertools/dpdk-setup.sh and things go badly. I get the following error:
Launching app
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory
It seems like the app cannot allocate memory from hugepages.
Thank you for your time.
EDIT
I guess I was too stingy with the hugepage allocation, so I tried allocating 2000 * 2 MB hugepages. Then everything works fine.
bitmask: 0x0f
Launching app
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
testpmd: No probed ethernet devices
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd>
I find that in my VirtualBox VM, where I only have two lcores and only allocate 128 hugepages, everything works fine. However, when I use a desktop with 12 lcores, 128 hugepages are not sufficient.
Could you share some principles for allocating hugepages, or is it simply "the more, the better"? Thank you for sharing your ideas.
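A rough sizing check against the numbers already in the logs (exact per-mbuf overheads vary by version): the desktop run asked for an mbuf pool with n=203456 elements of size 2176 bytes, i.e. roughly 203456 * 2176 ≈ 443 MB of hugepage memory, while 128 * 2 MB pages provide only 256 MB, so the pool creation fails; 2000 * 2 MB = 4000 MB leaves plenty of headroom. testpmd sizes its default pool from the number of lcores and ports it may have to serve, which is presumably why the two-lcore VirtualBox setup fits comfortably in 128 pages.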
I have a quad port Intel 1G network card. I am using DPDK to send data on one physical port and receive on another.
I saw a few examples in the DPDK code, but could not make them work. If anybody knows how to do that, please send me simple instructions that I can follow and understand. I set up my PC properly for hugepages, loading the driver, assigning the network ports to the DPDK driver, etc. I can run helloworld from DPDK, so the system setup looks OK to me.
Thanks in advance.
After building DPDK:
cd to the DPDK directory.
Run sudo build/app/testpmd -- --interactive
You should see output like this:
$ sudo build/app/testpmd -- --interactive
EAL: Detected 8 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0002:00:02.0 on NUMA socket 0
EAL: probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: PCI information matches, using device "mlx4_0" (VF: true)
PMD: net_mlx4: 1 port(s) detected
PMD: net_mlx4: port 1 MAC address is 00:0d:3a:f4:6e:17
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.
Configuring Port 0 (socket 0)
Port 0: 00:0D:3A:F4:6E:17
Checking link statuses...
Done
testpmd>
Don't worry about the "No free hugepages" message. It means it couldn't find any 1024 MB hugepages, but since it continued OK, it must have found some 2 MB hugepages. It'd be nice if it said "EAL: Using 2 MB huge pages" instead.
At the prompt, type start tx_first, then quit. You should see something like:
testpmd> start tx_first
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0:
CRC stripping enabled
RX queues=1 - RX desc=1024 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX queues=1 - TX desc=1024 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ offloads=0x0
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 32 TX-dropped: 0 TX-total: 32
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 32 TX-dropped: 0 TX-total: 32
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In my system there is only one DPDK port, so I sent 32 packets but did not receive any. If I had a multi-port card with a cable directly between the ports, then I'd see the RX count also increase.
You can use testpmd to test DPDK.
testpmd can work as a packet generator (tx_only mode), a receiver (rx_only mode), or a forwarder (io mode).
You will need generator nodes connected to your box if you want to use testpmd as a forwarder only.
I propose that you start with the following example setup:
generator(pktgen) ------> testPMD (io mode )----------> recevier (testPMD rx_only mode).
At the pktgen generator, set the destination MAC address to the MAC address of the receiver's receiving port.
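A sketch of the three roles (exact option and command names can differ between testpmd/pktgen versions, and the MAC below is a placeholder):

receiver:  ./testpmd -l 0-1 -n 4 -- --forward-mode=rxonly
forwarder: ./testpmd -l 0-1 -n 4 -- --forward-mode=io
generator, inside the pktgen prompt:
  set 0 dst mac <receiver-port-MAC>
  start 0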
Pktgen and how it works is explained in more detail at this link:
http://pktgen.readthedocs.io/en/latest/getting_started.html
testpmd and how it works is explained here:
http://www.intel.com/content/dam/www/public/us/en/documents/guides/dpdk-testpmd-application-user-guide.pdf
I hope this helps.