l2fwd fails when run in two containers - dpdk

I am trying to run l2fwd in two containers on the same host. Starting container1 and running l2fwd works fine, but as soon as I start l2fwd in container2, both instances crash with a segmentation fault. Has anyone met this error? Thanks.
Host: 4 sriov-vf enabled, driver: vfio-pci
container1: docker run --privileged --name="vhost_user" -v /dev:/dev -v /tmp:/tmp -itd centos-cu:v3
container2: docker run --privileged --name="virtio_user" -v /dev:/dev -v /tmp:/tmp -itd centos-cu:v3
l2fwd logs:
container1:
# ./l2fwd -l 2-3 -n 2 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:10.5 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: using IOMMU type 1 (Type 1)
EAL: PCI device 0000:04:10.7 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
MAC updating enabled
Lcore 2: RX port 0
Lcore 3: RX port 1
Initializing port 0... done:
Port 0, MAC address: 02:09:C0:11:47:97
Initializing port 1... done:
Port 1, MAC address: 02:09:C0:00:2C:47
Checking link statusdone
Port0 Link Up. Speed 10000 Mbps - full-duplex
Port1 Link Up. Speed 10000 Mbps - full-duplex
L2FWD: entering main loop on lcore 3
L2FWD: -- lcoreid=3 portid=1
L2FWD: entering main loop on lcore 2
L2FWD: -- lcoreid=2 portid=0
Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent: 0
Packets received: 0
Packets dropped: 0
Statistics for port 1 ------------------------------
Packets sent: 0
Packets received: 5
Packets dropped: 0
Aggregate statistics ===============================
Total packets sent: 0
Total packets received: 5
Total packets dropped: 0
====================================================
Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent: 23
Packets received: 16
Packets dropped: 0
Statistics for port 1 ------------------------------
Packets sent: 16
Packets received: 26
Packets dropped: 0
Aggregate statistics ===============================
Total packets sent: 39
Total packets received: 42
Total packets dropped: 0
====================================================
(at this point l2fwd is started in container2)
./run_l2fwd.sh: line 3: 116 Segmentation fault (core dumped) ./l2fwd -l 2-3 -n 2 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3
container2:
# ./l2fwd -l 0-1 -n 2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:10.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: using IOMMU type 1 (Type 1)
EAL: PCI device 0000:04:10.3 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
MAC updating enabled
Lcore 0: RX port 0
Lcore 1: RX port 1
Initializing port 0... ./run_l2fwd.sh: line 3: 90 Segmentation fault (core dumped) ./l2fwd -l 0-1 -n 2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3

Mapping hugepages from files in hugetlbfs is essential for multi-process support, because secondary processes need to map the same hugepages. EAL creates files like rtemap_0 in directories specified with the --huge-dir option (or in the mount point for a specific hugepage size). The rte prefix can be changed using --file-prefix; this may be needed for running multiple primary processes that share a hugetlbfs mount point. By default each backing file corresponds to one hugepage, and it is opened and locked for the entire time the hugepage is used. This may exhaust the open files limit (NOFILE).
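In this setup both containers most likely share the same hugetlbfs mount (through the -v /dev:/dev and -v /tmp:/tmp bind mounts), and both l2fwd instances run as primary processes with the default rte prefix, so the second instance ends up reusing the first one's rtemap_* files. A minimal, untested sketch of separating the two runs with distinct prefixes (the names c1 and c2 are only illustrative):
container1: ./l2fwd -l 2-3 -n 2 --file-prefix c1 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3
container2: ./l2fwd -l 0-1 -n 2 --file-prefix c2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3
Alternatively, each container can be given its own hugetlbfs mount and pointed at it with --huge-dir.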

Related

dpdk l2fwd-nv: Match error CPU and GPU pointers

I run l2fwd-nv with the following command:
./l2fwdnv -l 0-3 -n 8 -a 07:00.0,txq_inline_max=0 -- m 1 -w 2 -b 64 -p 1 -v 0 z 0
Program output:
************ L2FWD-NV ************
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING! Base virtual address hint (0x100a96000 != 0x7f3b1fe00000) not respected!
EAL: This may cause issues with mapping memory into secondary processes
EAL: WARNING! Base virtual address hint (0x1016f7000 != 0x7f371fc00000) not respected!
EAL: This may cause issues with mapping memory into secondary processes
EAL: WARNING! Base virtual address hint (0x102358000 != 0x7f331fa00000) not respected!
EAL: This may cause issues with mapping memory into secondary processes
EAL: WARNING! Base virtual address hint (0x102fb9000 != 0x7f2f1f800000) not respected!
EAL: This may cause issues with mapping memory into secondary processes
EAL: Invalid NUMA socket, default to 0
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:07:00.0 (socket 0)
common_mlx5: RTE_MEM is selected.
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Device driver name in use: mlx5_pci...
EAL: Error - exiting with code: 1
Cause: GPU pointer does not match CPU pointer pDev=0x2030c0000 pBuf=0x7f3b20400000
What is the error?
ps: https://github.com/NVIDIA/l2fwd-nv
Are you actually running with -m 1 on the latest version of l2fwd-nv? The device memory path, enabled with -m 1, should work in your case. In your command the - of -m is missing; is that just a typo?
Your error comes from the host pinned memory path, which according to the documentation is enabled with -m 0. This path allocates memory on the CPU and lets the GPU access it through cudaHostGetDevicePointer.
I don't know why, but l2fwd-nv requires the device pointer returned by cudaHostGetDevicePointer to be equal to the initial host pointer. This is the case when your GPU has the attribute cudaDevAttrCanUseHostPointerForRegisteredMem. In your case, it looks like your GPU does not have this attribute.
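As an illustration of that check (a minimal standalone sketch, not code taken from l2fwd-nv; error handling is omitted), you can query the attribute and compare the two pointers yourself with the CUDA runtime API:
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int dev = 0, can_use = 0;

    /* Can this GPU use a registered host pointer directly as a device pointer? */
    cudaDeviceGetAttribute(&can_use, cudaDevAttrCanUseHostPointerForRegisteredMem, dev);
    printf("cudaDevAttrCanUseHostPointerForRegisteredMem = %d\n", can_use);

    /* Allocate mapped pinned host memory and ask for its device alias. */
    void *host_ptr = NULL, *dev_ptr = NULL;
    cudaHostAlloc(&host_ptr, 1 << 20, cudaHostAllocMapped);
    cudaHostGetDevicePointer(&dev_ptr, host_ptr, 0);

    /* The host-pinned path of l2fwd-nv expects these two values to be identical. */
    printf("host=%p device=%p -> %s\n", host_ptr, dev_ptr,
           host_ptr == dev_ptr ? "match" : "mismatch");

    cudaFreeHost(host_ptr);
    return 0;
}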

Unable to recognize master/representor on the multiple IB devices

I am getting a DPDK MLX5 probing issue.
I have installed the mlx5/OFED driver and loaded the kernel modules.
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: Probe PCI driver: mlx5_pci (15b3:1013) device: 0000:5e:00.0 (socket 0)
mlx5_pci: unable to recognize master/representors on the multiple IB devices
common_mlx5: Failed to load driver = mlx5_pci.
EAL: Requested device 0000:5e:00.0 cannot be used
EAL: Bus (pci) probe failed.
As for the 'failing to load mlx5_pci driver' part, I can see that the mlx5_core driver is loaded:
dpdk-devbind.py -s
Network devices using kernel driver
===================================
0000:5e:00.0 'MT27700 Family [ConnectX-4]' if=enp94s0 drv=mlx5_core unused=
I am assuming both of them are the same?
What does failing to recognize master/representors on multiple IB devices mean?
My configuration is: CentOS 7.9, Linux Kernel 5.12, OFED 4.9 (LTS), DPDK 21.02
lsmod | egrep 'mlx|ib'
libceph 413696 1 ceph
ib_isert 49152 0
iscsi_target_mod 315392 1 ib_isert
ib_srpt 61440 0
target_core_mod 372736 3 iscsi_target_mod,ib_srpt,ib_isert
ib_srp 61440 0
scsi_transport_srp 28672 1 ib_srp
ib_iser 45056 0
ib_umad 36864 0
rdma_cm 114688 6 rpcrdma,ib_srpt,ib_srp,ib_iser,ib_isert,rdma_ucm
ib_ipoib 114688 0
libiscsi 65536 1 ib_iser
scsi_transport_iscsi 126976 2 ib_iser,libiscsi
ib_cm 122880 4 rdma_cm,ib_ipoib,ib_srpt,ib_srp
mlx5_ib 331776 0
mlx4_ib 196608 0
ib_uverbs 147456 3 mlx4_ib,rdma_ucm,mlx5_ib
ib_core 356352 14 rdma_cm,ib_ipoib,rpcrdma,mlx4_ib,ib_srpt,ib_srp,iw_cm,ib_iser,ib_umad,ib_isert,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,xfs,libceph
mlx4_en 118784 0
mlx4_core 319488 2 mlx4_ib,mlx4_en
mlx5_core 700416 1 mlx5_ib
mlxfw 32768 1 mlx5_core
pci_hyperv_intf 16384 1 mlx5_core
ptp 28672 3 igb,mlx4_en,mlx5_core
libahci 36864 1 ahci
libata 253952 2 libahci,ahci

DPDK sample app ipsec-secgw failing with virtio NIC

I tried running the DPDK ipsec-secgw sample app with the following versions
DPDK version dpdk-stable-19.11.5
OS CentOS Linux release 7.7.1908 (Core)
Kernel 3.10.0-1062.el7.x86_64
NIC type and driver
0000:00:04.0 'Virtio network device 1000' drv=igb_uio unused=virtio_pci,uio_pci_generic
Command and cmd line args used to run the app
./build/ipsec-secgw -l 6 -w 00:04.0 -w 00:05.0 --vdev "crypto_null" --log-level 8 \
--socket-mem 1024 -- -p 0xf -P -u 0x2 \
--config="(0,0,6),(1,0,6)" -f /root/config_file
Output:
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 1af4:1000 net_virtio
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 1af4:1000 net_virtio
CRYPTODEV: Creating cryptodev crypto_null
CRYPTODEV: Initialisation parameters - name: crypto_null,socket id: 0, max queue pairs: 8
Promiscuous mode selected
librte_ipsec usage: disabled
replay window size: 0
ESN: disabled
SA flags: 0
Frag TTL: 10000000000 ns
Allocated mbuf pool on socket 0
CRYPTODEV: elt_size 64 is expanded to 176
Allocated session pool on socket 0
Allocated session priv pool on socket 0
Configuring device port 0:
Address: 52:54:00:A5:82:2D
Creating queues: nb_rx_queue=1 nb_tx_queue=1...
EAL: Error - exiting with code: 1
Cause: Error: port 0 required RX offloads: 0xe, avaialbe RX offloads: 0xa1d
Config file contents:
#SP IPv4 rules
sp ipv4 out esp protect 1005 pri 1 dst 192.168.105.0/24 sport 0:65535 dport 0:65535
#SA rules
sa out 1005 aead_algo aes-128-gcm aead_key 2b:7e:15:16:28:ae:d2:a6:ab:f7:15:88:09:cf:4f:3d:de:ad:be:ef \
mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
port_id 1 \
type inline-crypto-offload \
sa in 5 aead_algo aes-128-gcm aead_key 2b:7e:15:16:28:ae:d2:a6:ab:f7:15:88:09:cf:4f:3d:de:ad:be:ef \
mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
port_id 1 \
type inline-crypto-offload \
#Routing rules
rt ipv4 dst 172.16.2.5/32 port 1
rt ipv4 dst 192.168.105.10/32 port 0
It says that certain offload capabilities are missing.
I got the config file details and command line arguments from a DPDK test plan for Niantic NICs. Is the app only supposed to work with Niantic PFs/VFs? Is there any way to get it to work with virtio paravirtualized NICs?
Instructions link followed:
Instructions
The DPDK example ipsec-secgw makes use of the RX offload .offloads = DEV_RX_OFFLOAD_CHECKSUM. For DPDK 19.11.5 LTS, the following devices support it:
axgbe
dpaa2
e1000
enic
hinic
ixgbe
mlx4
mlx5
mvneta
mvpp2
netvsc
octeontx
octeontx2
sfc
tap
thunderx
vmxnet3
The DPDK RX checksum offload is defined as #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM), i.e. 0x2 | 0x4 | 0x8 = 0xe. In the error log (required RX offloads: 0xe, available RX offloads: 0xa1d), the available mask has the UDP (0x4) and TCP (0x8) bits set but not IPv4 (0x2), so it looks like DEV_RX_OFFLOAD_IPV4_CKSUM is not present in the PMD.
For the question whether ipsec-secgw only works with Niantic NICs: that assumption is not correct, because the ipsec-secgw application can run on any NIC which has the RX checksum offload available (the list is shared above).
For the question "Is there any way to get it to work with virtio para-virtualized NICs?": one can always disable the RX checksum offload and do the IPv4 checksum in software, but you will need to edit the application and use rte_ipv4_cksum.
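A rough illustration of that workaround (a minimal sketch that assumes plain IPv4 headers without options and Ethernet framing; it is not the actual ipsec-secgw patch): configure the port without requesting the checksum offload and validate the IPv4 header checksum per packet with rte_ipv4_cksum from rte_ip.h.
#include <string.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* Configure a port without requesting DEV_RX_OFFLOAD_CHECKSUM. */
static int
port_configure_no_rx_cksum(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
    struct rte_eth_conf conf;

    memset(&conf, 0, sizeof(conf));
    conf.rxmode.offloads = 0;           /* no HW RX checksum requested */
    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

/* Software check of the IPv4 header checksum of one received mbuf. */
static int
ipv4_hdr_cksum_ok(struct rte_mbuf *m)
{
    struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
                                                      sizeof(struct rte_ether_hdr));

    /* rte_ipv4_cksum() folds the whole 20-byte header, including the
     * hdr_checksum field, so a valid header yields 0. */
    return rte_ipv4_cksum(ip) == 0;
}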

DPDK sample application aborts after EAL: Couldn't get fd on hugepage file

After cloning the dpdk git repository and building the helloworld application, I get the following error:
$ ./examples/helloworld/build/helloworld
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Couldn't get fd on hugepage file
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
PANIC in main():
Cannot init EAL
5: [./examples/helloworld/build/helloworld(+0x11de) [0x56175faac1de]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f31f60fe0b3]]
3: [./examples/helloworld/build/helloworld(+0x111c) [0x56175faac11c]]
2: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(__rte_panic+0xc5) [0x7f31f62d537e]]
1: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(rte_dump_stack+0x32) [0x7f31f62ecc52]]
Aborted (core dumped)
Checked hugepage support and it seems fine:
$ cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 256
HugePages_Free: 255
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 524288 kB
$ mount | grep huge
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
$ cat /proc/filesystems | grep huge
nodev hugetlbfs
$ cat /proc/sys/vm/nr_hugepages
256
I saw a workaround in a related question; run it with the --no-huge option, which works:
$ ./examples/helloworld/build/helloworld --no-huge
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
hello from core 1
hello from core 2
hello from core 3
hello from core 0
But this is a limiting solution.
TL;DR Use sudo
Running with --log-level=eal,8 as suggested by @VipinVarghese revealed that this was a permissions issue:
$ ./examples/helloworld/build/helloworld --log-level=eal,8
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: open shared lib /usr/lib/x86_64-linux-gnu/dpdk/pmds-20.0/librte_pmd_qede.so
EAL: Registered [vdev] bus.
EAL: Registered [pci] bus.
EAL: Registered [eth] device class.
EAL: open shared lib /usr/lib/x86_64-linux-gnu/dpdk/pmds-20.0/librte_pmd_aesni_mb.so
...
EAL: Ask a virtual area of 0x61000 bytes
EAL: Virtual area found at 0xd00600000 (size = 0x61000)
EAL: Memseg list allocated: 0x800kB at socket 0
EAL: Ask a virtual area of 0x400000000 bytes
EAL: Virtual area found at 0xd00800000 (size = 0x400000000)
EAL: TSC frequency is ~2590000 KHz
EAL: Master lcore 0 is ready (tid=7fc11ed01d00;cpuset=[0])
EAL: lcore 2 is ready (tid=7fc116ffd700;cpuset=[2])
EAL: lcore 3 is ready (tid=7fc1167fc700;cpuset=[3])
EAL: lcore 1 is ready (tid=7fc1177fe700;cpuset=[1])
EAL: Trying to obtain current memory policy.
EAL: Setting policy MPOL_PREFERRED for socket 0
EAL: get_seg_fd(): open failed: Permission denied
EAL: Couldn't get fd on hugepage file
EAL: attempted to allocate 1 segments, but only 0 were allocated
EAL: Restoring previous memory policy: 0
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
PANIC in main():
Cannot init EAL
5: [./examples/helloworld/build/helloworld(+0x11de) [0x56459e5391de]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7fc11eed00b3]]
3: [./examples/helloworld/build/helloworld(+0x111c) [0x56459e53911c]]
2: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(__rte_panic+0xc5) [0x7fc11f0a737e]]
1: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(rte_dump_stack+0x32) [0x7fc11f0bec52]]
Aborted (core dumped)
Tried solving the permissions problem (EAL: get_seg_fd(): open failed: Permission denied), but it only worked when I ran it as root:
$ sudo ./examples/helloworld/build/helloworld
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
hello from core 1
hello from core 2
hello from core 3
hello from core 0
As it turns out, this is the correct approach, even though the documentation appears to assume that this is obvious. There is no mention of root privileges being required in section "6.2. Running a Sample Application"; excerpt below:
Copy the DPDK application binary to your target, then run the
application as follows (assuming the platform has four memory channels
per processor socket, and that cores 0-3 are present and are to be
used for running the application):
./dpdk-helloworld -l 0-3 -n 4
However, this point is mentioned later in the documentation, see "8.2. Running DPDK Applications Without Root Privileges" where there's a clear note:
The instructions below will allow running DPDK as non-root with older
Linux kernel versions. However, since version 4.0, the kernel does not
allow unprivileged processes to read the physical address information
from the pagemaps file, making it impossible for those processes to
use HW devices which require physical addresses
There is also a hint in the FAQ:
What does “EAL: map_all_hugepages(): open failed: Permission denied Cannot init memory” mean? This is most likely due to the test
application not being run with sudo to promote the user to a
superuser. Alternatively, applications can also be run as regular
user. For more information, please refer to DPDK Getting Started
Guide.
And an email touching on this topic:
2016-07-07 16:47, Jez Higgins:
> Is it possible to get DPDK up and running as non-root - if so, can
> anyone guide me to what I'm missing? Or should I be giving this up as a
> bad job?
You can try the --no-huge option.
But most of drivers won't work without hugepage currently.
A rework of the memory allocation is needed to make it work better.
That was four years ago. Perhaps there is already a solution which does not require sudo or --no-huge? If so, other answers are most welcome. For now, I'm going with this.
@Nagev, I request you to check the "DPDK as non-root" Stack Overflow question from Nov 2020.
[EDIT-1] Noticed the above question has been removed, hence access to the details is limited; updating with the answer on how to run without sudo or root privileges.
Note: I have been running DPDK applications as non-root with 18.11.5 LTS and 19.11.3 LTS.
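For reference, the usual recipe (an illustrative sketch only; the mount point, uid/gid and page count are assumptions, and devices that do DMA additionally need VFIO access and IOVA-VA mode) is to give the unprivileged user its own hugetlbfs mount and point EAL at it:
# as root: create a hugetlbfs mount owned by the unprivileged user
mkdir -p /mnt/huge-user
mount -t hugetlbfs -o uid=1000,gid=1000,pagesize=2M nodev /mnt/huge-user
echo 256 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# as the non-root user: use that mount instead of the root-owned default
./examples/helloworld/build/helloworld --huge-dir /mnt/huge-user --file-prefix user1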

DPDK application cannot work due to no free hugepages

I am building the helloworld application in DPDK. I get an error that says No free hugepages reported in hugepages-1048576kB.
(1) I build DPDK-18.11 using RTE_TARGET=x86_64-native-linuxapp-gcc.
(2) I run usertools/dpdk-setup.sh, run [15] (build DPDK).
(3) run [22], allocate hugepages. I set 1024 hugepages.
(4) run [18], insert igb_uio module.
(5) run [24], bind my NIC (e1000e) to igb_uio module.
Then I go to examples/helloworld/ and run make to build the app. When I run
./build/app/helloworld -l 0-1 -n 4, I get the following notification (No free hugepages):
xiarui@wcf-OptiPlex-7060:~/dpdk/dpdk-18.11/examples/helloworld/build/app$ sudo ./helloworld -l 0 -n 4
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
hello from core 0
I have already allocated hugepages in the setup script, and get the following output:
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Furthermore, I find e1000e cannot bind to VFIO, so I only use igb_uio driver.
Network devices using DPDK-compatible driver
============================================
0000:00:1f.6 'Ethernet Connection (7) I219-LM 15bb' drv=igb_uio unused=e1000e
My host profile is :
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Stepping: 10
CPU MHz: 800.493
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 6384.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-11
Memory:
xiarui@wcf-OptiPlex-7060:~/dpdk/dpdk-18.11/examples/helloworld/build/app$ free -h
total used free shared buff/cache available
Mem: 7.6G 2.4G 4.4G 159M 809M 4.8G
Swap: 2.0G 0B 2.0G
Things get worse when I run pktgen-3.6.0. I get the following error:
>>> sdk '/home/xiarui/dpdk/dpdk-18.11', target 'x86_64-native-linuxapp-gcc'
Trying ./app/x86_64-native-linuxapp-gcc/pktgen
sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-1 -n 4 --proc-type auto --log-level 7 --file-prefix pg -- -T -P --crc-strip -m 1.0 -f themes/black-yellow.theme
Copyright (c) <2010-2019>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/pg/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
Lua 5.3.3 Copyright (C) 1994-2016 Lua.org, PUC-Rio
*** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
*** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 8c:ec:4b:a5:17:4f
eth_em_start(): Unable to initialize the hardware
!PANIC!: rte_eth_dev_start: port=0, Input/output error
PANIC in pktgen_config_ports():
rte_eth_dev_start: port=0, Input/output error6: [./app/x86_64-native-linuxapp-gcc/pktgen(_start+0x2a) [0x56038a3d29ba]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7fe0b33a3b97]]
4: [./app/x86_64-native-linuxapp-gcc/pktgen(main+0xe52) [0x56038a3ca782]]
3: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_config_ports+0x1ef1) [0x56038a403761]]
2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc5) [0x56038a3bb544]]
1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2e) [0x56038a4f5f4e]]
Could you share some ideas? Thank you for your time.
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
hello from core 0
No free 1GB hugepages is not an error; it is just informational.
You got the hello from core 0 output, so your hello world application works just fine, congratulations!
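If you want to confirm which hugepage sizes actually have free pages (an illustrative check, not part of the original answer), the per-size counters are in sysfs:
grep . /sys/kernel/mm/hugepages/hugepages-*/free_hugepages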
I ran testpmd from usertools/dpdk-setup.sh and things went badly. I get the following error:
Launching app
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory
It seems like the app cannot allocate memory from hugepages, I guess.
Thank you for your time.
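As a rough sanity check (illustrative arithmetic, not from the original post): the failing pool asks for roughly n × size = 203456 × 2176 B ≈ 423 MiB of object memory, while 1024 × 2 MiB = 2 GiB of hugepages were reserved, so raw capacity alone does not obviously explain the failure. Note that the default mbuf count testpmd requests grows with the number of lcores, which is consistent with the later observation that a 2-lcore VirtualBox VM gets by with 128 hugepages while the 12-lcore desktop does not.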
EDIT
I guess I was too stingy with the hugepage allocation, so I tried allocating 2000 * 2MB hugepages, and then everything works fine.
bitmask: 0x0f
Launching app
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
testpmd: No probed ethernet devices
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd>
I find that in my VirtualBox VM, where I only have two lcores, allocating just 128 hugepages works fine. However, when I use a desktop with 12 lcores, 128 hugepages are not sufficient.
Could you share some principles for allocating hugepages, or is it simply "the more, the better"? Thank you for sharing your ideas.