Packet Generator: Error in run.sh - dpdk

I executed the command below to run the run.sh file in pktgen-dpdk, and it throws the error shown. I understand that the requested memory cannot be satisfied, but I did allocate hugepages with a page size of 2048 kB and still get the same error.
The command I tried to execute is:
sudo -E ./tools/run.sh
EAL: Not enough memory available on socket 1! Requested: 2048MB, available: 0MB
EAL: FATAL: Cannot init memory
EAL: Cannot init memory

If the host you are running Pktgen on has NUMA, i.e. node 0 and node 1, you have to configure hugepages on both NUMA nodes, as described in the DPDK Getting Started Guide:
echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 2048 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
If the host does not have NUMA, you need to fix Pktgen's arguments. Open the run.sh script and change --socket-mem 2048,2048 (i.e. reserve 2048 MB of hugepage memory on NUMA node 0 and 2048 MB on node 1) to --socket-mem 2048 (i.e. reserve 2048 MB on NUMA node 0 only).
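As a sanity check, note that the per-socket figure in --socket-mem is in megabytes, so the minimum number of 2 MB hugepages each node needs can be computed like this (numbers taken from the command above):

```shell
# Minimum 2 MB hugepages needed per NUMA node for --socket-mem 2048,2048
socket_mem_mb=2048                    # per-socket value from --socket-mem
hugepage_kb=2048                      # Hugepagesize from /proc/meminfo
pages=$(( socket_mem_mb * 1024 / hugepage_kb ))
echo "$pages"                         # minimum pages per node
```

Reserving 2048 pages per node, as in the echo commands above, is twice this minimum and leaves headroom for other DPDK allocations.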


Running l2fwd fails in two containers

I am trying to run l2fwd in two containers, both on the same host. Starting l2fwd in container1 succeeds, but as soon as I start l2fwd in container2, both instances get a segmentation fault. Has anyone met this error? Thanks.
Host: 4 sriov-vf enabled, driver: vfio-pci
container1: docker run --privileged --name="vhost_user" -v /dev:/dev -v /tmp:/tmp -itd centos-cu:v3
container2: docker run --privileged --name="virtio_user" -v /dev:/dev -v /tmp:/tmp -itd centos-cu:v3
l2fwd logs:
container1:
# ./l2fwd -l 2-3 -n 2 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:10.5 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: using IOMMU type 1 (Type 1)
EAL: PCI device 0000:04:10.7 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
MAC updating enabled
Lcore 2: RX port 0
Lcore 3: RX port 1
Initializing port 0... done:
Port 0, MAC address: 02:09:C0:11:47:97
Initializing port 1... done:
Port 1, MAC address: 02:09:C0:00:2C:47
Checking link statusdone
Port0 Link Up. Speed 10000 Mbps - full-duplex
Port1 Link Up. Speed 10000 Mbps - full-duplex
L2FWD: entering main loop on lcore 3
L2FWD: -- lcoreid=3 portid=1
L2FWD: entering main loop on lcore 2
L2FWD: -- lcoreid=2 portid=0
Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent: 0
Packets received: 0
Packets dropped: 0
Statistics for port 1 ------------------------------
Packets sent: 0
Packets received: 5
Packets dropped: 0
Aggregate statistics ===============================
Total packets sent: 0
Total packets received: 5
Total packets dropped: 0
====================================================
Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent: 23
Packets received: 16
Packets dropped: 0
Statistics for port 1 ------------------------------
Packets sent: 16
Packets received: 26
Packets dropped: 0
Aggregate statistics ===============================
Total packets sent: 39
Total packets received: 42
Total packets dropped: 0
====================================================
(start to run l2fwd in container2)
./run_l2fwd.sh: line 3: 116 Segmentation fault (core dumped) ./l2fwd -l 2-3 -n 2 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3
container2:
# ./l2fwd -l 0-1 -n 2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:10.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: using IOMMU type 1 (Type 1)
EAL: PCI device 0000:04:10.3 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
MAC updating enabled
Lcore 0: RX port 0
Lcore 1: RX port 1
Initializing port 0... ./run_l2fwd.sh: line 3: 90 Segmentation fault (core dumped) ./l2fwd -l 0-1 -n 2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3
Mapping hugepages from files in hugetlbfs is essential for multi-process support, because secondary processes need to map the same hugepages. EAL creates files such as rtemap_0 in the directory specified with the --huge-dir option (or in the mount point for a specific hugepage size). The rte prefix can be changed using --file-prefix; this may be needed when running multiple primary processes that share a hugetlbfs mount point. By default, each backing file corresponds to one hugepage, and it is opened and locked for the entire time the hugepage is in use. This may exhaust the open-files limit (NOFILE).
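A hedged sketch of how the two container commands above could coexist as independent primary processes sharing one hugetlbfs mount (the prefixes cont1/cont2 are illustrative names, not from the original logs):

```shell
# container1: give this primary process its own hugepage file prefix,
# so its rtemap_* backing files do not collide with container2's
./l2fwd -l 2-3 -n 2 --file-prefix cont1 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3

# container2: a different prefix keeps its EAL memory files separate
./l2fwd -l 0-1 -n 2 --file-prefix cont2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3
```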

dpdk l2fwd-nv: Match error CPU and GPU pointers

I run l2fwd-nv with next command:
./l2fwdnv -l 0-3 -n 8 -a 07:00.0,txq_inline_max=0 -- m 1 -w 2 -b 64 -p 1 -v 0 z 0
Program output:
************ L2FWD-NV ************
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING! Base virtual address hint (0x100a96000 != 0x7f3b1fe00000) not respected!
EAL: This may cause issues with mapping memory into secondary processes
EAL: WARNING! Base virtual address hint (0x1016f7000 != 0x7f371fc00000) not respected!
EAL: This may cause issues with mapping memory into secondary processes
EAL: WARNING! Base virtual address hint (0x102358000 != 0x7f331fa00000) not respected!
EAL: This may cause issues with mapping memory into secondary processes
EAL: WARNING! Base virtual address hint (0x102fb9000 != 0x7f2f1f800000) not respected!
EAL: This may cause issues with mapping memory into secondary processes
EAL: Invalid NUMA socket, default to 0
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:07:00.0 (socket 0)
common_mlx5: RTE_MEM is selected.
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Device driver name in use: mlx5_pci...
EAL: Error - exiting with code: 1
Cause: GPU pointer does not match CPU pointer pDev=0x2030c0000 pBuf=0x7f3b20400000
What is the error?
P.S.: https://github.com/NVIDIA/l2fwd-nv
Are you actually running with -m 1 on the latest version of l2fwd-nv? The device memory path, enabled with -m 1, should work in your case. In your question the - of -m is missing; is that just a typo?
Your error comes from the host pinned memory path, which is according to the documentation enabled with -m 0. This path allocates memory on the CPU, and allows the GPU to access it using cudaHostGetDevicePointer.
I don't know why, but l2fwd-nv requires the device pointer returned by cudaHostGetDevicePointer to be equal to the initial host pointer. That is the case when the GPU has the attribute cudaDevAttrCanUseHostPointerForRegisteredMem. In your case, it looks like your GPU does not have this attribute.

DPDK suddenly stopped working with error 'EAL: No available 1048576 kB hugepages reported'

I installed DPDK on my Ubuntu Server 18.04 LTS with kernel 5.4.82, and everything worked fine with dpdk-testpmd until a round-trip upgrade to and downgrade from kernel 5.9. All of a sudden it stopped working with the error 'EAL: No available 1048576 kB hugepages reported', even after using hugeadm to create hugepages. From /proc/meminfo, HugePages_Free * Hugepagesize is 1724416 kB, which is more than 1048576 kB.
Is there any reason DPDK is not able to see those pages?
/usr/bin/hugeadm --pool-pages-min DEFAULT:2G -vvv
hugeadm:DEBUG: HUGETLB_VERBOSE='5'
hugeadm:INFO: page_size<DEFAULT> adjust<2G> counter<0>
hugeadm:DEBUG: Working with page_size of 2097152
hugeadm:DEBUG: Returning page count of 1024
hugeadm:INFO: 1024, 1024 -> 1024, 1024
hugeadm:INFO: setting HUGEPAGES_TOTAL to 1024
cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 1024
HugePages_Free: 842
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 2097152 kB
./dpdk-testpmd -- --total-num-mbufs=131072
EAL: Detected 40 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 1048576 kB hugepages reported
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: DPDK is running on a NUMA system, but is compiled without NUMA support.
EAL: This will have adverse consequences for performance and usability.
EAL: Please use --legacy-mem option, or recompile with NUMA support.
..... Removed text about PCIe device probe here .....
EAL: No legacy callbacks, legacy socket not created
testpmd: create a new mbuf pool <mb_pool_0>: n=131072, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=131072, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
Cause: Creation of mbuf pool for socket 1 failed: Cannot allocate memory
I figured out the problem. The system was missing libnuma-dev, which was probably removed by mistake during my previous kernel update. After re-installing libnuma-dev and recompiling DPDK with meson and ninja, everything is working fine again.
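For reference, the repair boils down to something like the following (the package name is for Ubuntu 18.04; the build directory name is illustrative):

```shell
# Reinstall the NUMA development headers so the build can find libnuma
sudo apt-get install -y libnuma-dev
# Re-run meson so it re-detects libnuma, then rebuild DPDK
meson setup build
ninja -C build
# Re-run the previously failing command
./build/app/dpdk-testpmd -- --total-num-mbufs=131072
```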

DPDK sample application aborts after EAL: Couldn't get fd on hugepage file

After cloning the dpdk git repository and building the helloworld application, I get the following error:
$ ./examples/helloworld/build/helloworld
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Couldn't get fd on hugepage file
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
PANIC in main():
Cannot init EAL
5: [./examples/helloworld/build/helloworld(+0x11de) [0x56175faac1de]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f31f60fe0b3]]
3: [./examples/helloworld/build/helloworld(+0x111c) [0x56175faac11c]]
2: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(__rte_panic+0xc5) [0x7f31f62d537e]]
1: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(rte_dump_stack+0x32) [0x7f31f62ecc52]]
Aborted (core dumped)
Checked hugepage support and it seems fine:
$ cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 256
HugePages_Free: 255
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 524288 kB
$ mount | grep huge
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
$ cat /proc/filesystems | grep huge
nodev hugetlbfs
$ cat /proc/sys/vm/nr_hugepages
256
I saw a workaround in a related question, which is to run with the --no-huge option, and it works:
$ ./examples/helloworld/build/helloworld --no-huge
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
hello from core 1
hello from core 2
hello from core 3
hello from core 0
But this is a limiting solution.
TL;DR Use sudo
Running with --log-level=eal,8 as suggested by @VipinVarghese revealed that this was a permissions issue:
$ ./examples/helloworld/build/helloworld --log-level=eal,8
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: open shared lib /usr/lib/x86_64-linux-gnu/dpdk/pmds-20.0/librte_pmd_qede.so
EAL: Registered [vdev] bus.
EAL: Registered [pci] bus.
EAL: Registered [eth] device class.
EAL: open shared lib /usr/lib/x86_64-linux-gnu/dpdk/pmds-20.0/librte_pmd_aesni_mb.so
...
EAL: Ask a virtual area of 0x61000 bytes
EAL: Virtual area found at 0xd00600000 (size = 0x61000)
EAL: Memseg list allocated: 0x800kB at socket 0
EAL: Ask a virtual area of 0x400000000 bytes
EAL: Virtual area found at 0xd00800000 (size = 0x400000000)
EAL: TSC frequency is ~2590000 KHz
EAL: Master lcore 0 is ready (tid=7fc11ed01d00;cpuset=[0])
EAL: lcore 2 is ready (tid=7fc116ffd700;cpuset=[2])
EAL: lcore 3 is ready (tid=7fc1167fc700;cpuset=[3])
EAL: lcore 1 is ready (tid=7fc1177fe700;cpuset=[1])
EAL: Trying to obtain current memory policy.
EAL: Setting policy MPOL_PREFERRED for socket 0
EAL: get_seg_fd(): open failed: Permission denied
EAL: Couldn't get fd on hugepage file
EAL: attempted to allocate 1 segments, but only 0 were allocated
EAL: Restoring previous memory policy: 0
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
PANIC in main():
Cannot init EAL
5: [./examples/helloworld/build/helloworld(+0x11de) [0x56459e5391de]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7fc11eed00b3]]
3: [./examples/helloworld/build/helloworld(+0x111c) [0x56459e53911c]]
2: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(__rte_panic+0xc5) [0x7fc11f0a737e]]
1: [/lib/x86_64-linux-gnu/librte_eal.so.20.0(rte_dump_stack+0x32) [0x7fc11f0bec52]]
Aborted (core dumped)
Tried solving the permissions problem (EAL: get_seg_fd(): open failed: Permission denied), but it only worked when I ran it as root:
$ sudo ./examples/helloworld/build/helloworld
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
hello from core 1
hello from core 2
hello from core 3
hello from core 0
As it turns out, this is the correct approach, even though the documentation appears to assume it is obvious. There is no mention of root privileges being required in section "6.2. Running a Sample Application", excerpt below:
Copy the DPDK application binary to your target, then run the
application as follows (assuming the platform has four memory channels
per processor socket, and that cores 0-3 are present and are to be
used for running the application):
./dpdk-helloworld -l 0-3 -n 4
However, this point is mentioned later in the documentation, see "8.2. Running DPDK Applications Without Root Privileges" where there's a clear note:
The instructions below will allow running DPDK as non-root with older
Linux kernel versions. However, since version 4.0, the kernel does not
allow unprivileged processes to read the physical address information
from the pagemaps file, making it impossible for those processes to
use HW devices which require physical addresses
There is also a hint in the FAQ:
What does “EAL: map_all_hugepages(): open failed: Permission denied Cannot init memory” mean? This is most likely due to the test
application not being run with sudo to promote the user to a
superuser. Alternatively, applications can also be run as regular
user. For more information, please refer to DPDK Getting Started
Guide.
And an email touching on this topic:
2016-07-07 16:47, Jez Higgins:
> Is it possible to get DPDK up and running as non-root - if so, can
> anyone guide me to what I'm missing? Or should I be giving this up as a
> bad job?
You can try the --no-huge option.
But most of drivers won't work without hugepage currently.
A rework of the memory allocation is needed to make it work better.
That was four years ago. Perhaps there is already a solution which does not require sudo or --no-huge? If so, other answers are most welcome. For now, I'm going with this.
@Nagev, I request you to check the "DPDK as non-root" Stack Overflow question from Nov 2020.
[EDIT-1] Noticed the above question has been removed, so access to the details is limited; updating with the answer for the "how to run without sudo or root privilege" section.
Note: I have been running DPDK applications as non-root with 18.11.5 LTS and 19.11.3 LTS.
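A hedged sketch of the kind of setup that allows running without sudo (the mount point and the reliance on vfio-pci with IOVA-as-VA are assumptions, not the exact steps from the removed question):

```shell
# Create a hugetlbfs mount owned by the unprivileged user (path is illustrative)
sudo mkdir -p /dev/hugepages-user
sudo mount -t hugetlbfs -o pagesize=2M,uid=$(id -u),gid=$(id -g) nodev /dev/hugepages-user

# With devices bound to vfio-pci, IOVA-as-VA avoids reading physical addresses
# from /proc/self/pagemap, which unprivileged processes cannot do on kernels >= 4.0
./dpdk-helloworld --huge-dir /dev/hugepages-user --iova-mode va
```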

DPDK application cannot work for no free hugepage

I am building the helloworld application in DPDK. I get an error that says "No free hugepages reported in hugepages-1048576kB".
(1) I build DPDK-18.11 using RTE_TARGET=x86_64-native-linuxapp-gcc.
(2) I run usertools/dpdk-setup.sh, run [15] (build DPDK).
(3) run [22], allocate hugepages. I set 1024 hugepages.
(4) run [18], insert igb_uio module.
(5) run [24], bind my NIC (e1000e) to igb_uio module.
Then, I go to examples/helloworld/, run make to build the app. When I run
./build/app/helloworld -l 0-1 -n 4, I get the following notification (No free hugepages):
xiarui@wcf-OptiPlex-7060:~/dpdk/dpdk-18.11/examples/helloworld/build/app$ sudo ./helloworld -l 0 -n 4
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
hello from core 0
I have already allocated hugepages in the setup script, and get the following output:
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Furthermore, I find e1000e cannot bind to VFIO, so I only use igb_uio driver.
Network devices using DPDK-compatible driver
============================================
0000:00:1f.6 'Ethernet Connection (7) I219-LM 15bb' drv=igb_uio unused=e1000e
My host profile is :
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Stepping: 10
CPU MHz: 800.493
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 6384.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-11
Memory:
xiarui@wcf-OptiPlex-7060:~/dpdk/dpdk-18.11/examples/helloworld/build/app$ free -h
total used free shared buff/cache available
Mem: 7.6G 2.4G 4.4G 159M 809M 4.8G
Swap: 2.0G 0B 2.0G
Things get worse when I run pktgen-3.6.0. I get the following error:
>>> sdk '/home/xiarui/dpdk/dpdk-18.11', target 'x86_64-native-linuxapp-gcc'
Trying ./app/x86_64-native-linuxapp-gcc/pktgen
sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-1 -n 4 --proc-type auto --log-level 7 --file-prefix pg -- -T -P --crc-strip -m 1.0 -f themes/black-yellow.theme
Copyright (c) <2010-2019>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/pg/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
Lua 5.3.3 Copyright (C) 1994-2016 Lua.org, PUC-Rio
*** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
*** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 8c:ec:4b:a5:17:4f
eth_em_start(): Unable to initialize the hardware
!PANIC!: rte_eth_dev_start: port=0, Input/output error
PANIC in pktgen_config_ports():
rte_eth_dev_start: port=0, Input/output error6: [./app/x86_64-native-linuxapp-gcc/pktgen(_start+0x2a) [0x56038a3d29ba]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7fe0b33a3b97]]
4: [./app/x86_64-native-linuxapp-gcc/pktgen(main+0xe52) [0x56038a3ca782]]
3: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_config_ports+0x1ef1) [0x56038a403761]]
2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc5) [0x56038a3bb544]]
1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2e) [0x56038a4f5f4e]]
Could you share some ideas? Thank you for your time.
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
hello from core 0
"No free 1GB hugepages" is not an error; it is just informational.
You got the hello from core 0 output, so your hello world application works just fine, congratulations!
When I run testpmd from usertools/dpdk-setup.sh, things go wrong. I get the following error:
Launching app
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory
It seems the app cannot allocate memory from the hugepages.
Thank you for your time.
EDIT
I guess I was too stingy with the hugepage allocation. So I tried allocating 2000*2MB hugepages, and then everything works fine.
bitmask: 0x0f
Launching app
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15bb net_e1000_em
testpmd: No probed ethernet devices
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd>
I find that in my VirtualBox VM, which has only two lcores, allocating 128 hugepages is enough and everything works fine. However, when I use a desktop with 12 lcores, 128 hugepages are not sufficient.
Could you share some principles for allocating hugepages, or is it simply "the more, the better"? Thank you for sharing your ideas.
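One back-of-the-envelope principle is to size the reservation from the mbuf pool testpmd actually requests. Using the n=171456, size=2176 figures from the log above (EAL metadata and per-lcore caches add overhead on top of this estimate):

```shell
# Rough hugepage sizing for the testpmd mbuf pool shown above
n_mbufs=171456
mbuf_size=2176                                       # bytes per mbuf, from the log
page_bytes=$(( 2048 * 1024 ))                        # one 2 MB hugepage
bytes=$(( n_mbufs * mbuf_size ))
pages=$(( (bytes + page_bytes - 1) / page_bytes ))   # round up to whole pages
echo "$pages"                                        # pages for the pool alone
```

This comes to roughly 178 pages for the single pool, so 128 hugepages cannot hold it; adding a margin for EAL overhead explains why 2000 pages works comfortably.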