Host: Windows 7
Guest VM: Fedora 20
VMware Player 7
Hello, I have been working in a Fedora 20 virtual machine for a while, and the connection randomly dropped. The Wi-Fi on the host machine is fine, but the VM cannot connect. Here is the output of ifconfig -a:
eno16777736: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 00:50:56:3b:a0:42 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 19 base 0x2000
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 480 bytes 75417 (73.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 480 bytes 75417 (73.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I don't have much experience with troubleshooting on a Linux OS. Some guidance would be much appreciated.
I solved this issue by adding NM_CONTROLLED=yes to the network card's config file located in /etc/sysconfig/network-scripts. After that, I ran the command
sudo ifup <name of vm NIC>
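For reference, a minimal sketch of what the config file might look like, assuming the NIC name from the ifconfig output above (eno16777736) and a DHCP setup; only the NM_CONTROLLED=yes line is the actual fix, the other lines are just typical defaults:
# /etc/sysconfig/network-scripts/ifcfg-eno16777736
DEVICE=eno16777736
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=yes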
I have upgraded DPDK from 17.02 to 21.11. The RPM package was built and installed successfully. While running the custom application I saw the following error:
Cannot allocate memory
ms_dpdk::port::port: Failed to create packet memory pool (rte_pktmbuf_pool_create failed) - for port_id
Function call parameters : rte_pktmbuf_pool_create(port-0,267008,32,0,2176,0)
I added std::string msg = rte_strerror(rte_errno); to the error logging, and it reports:
Cannot allocate memory
The ldd output shows that the libraries are linked properly and there are no "not found" entries.
ldd /opt/NETAwss/proxies/proxy | grep "buf"
librte_mbuf.so.22 => /lib64/librte_mbuf.so.22 (0x00007f795873f000)
ldd /opt/NETAwss/proxies/proxy | grep "pool"
librte_mempool_ring.so.22 => /lib64/librte_mempool_ring.so.22 (0x00007f7a1da3f000)
librte_mempool.so.22 => /lib64/librte_mempool.so.22 (0x00007f7a1da09000)
igb_uio is also loaded successfully.
lsmod | grep uio
igb_uio 4190 1
uio 8202 3 igb_uio
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
512
grep Huge /proc/meminfo
AnonHugePages: 983040 kB
ShmemHugePages: 0 kB
HugePages_Total: 512
HugePages_Free: 511
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
When I run dpdk-testpmd, it seems to work fine. Below is the output of the test application.
./dpdk-testpmd
EAL: Detected CPU lcores: 2
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probe PCI driver: net_vmxnet3 (15ad:7b0) device: 0000:13:00.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
Port 0: 00:50:56:88:9A:43
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
Press enter to exit
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 2 RX-dropped: 0 RX-total: 2
TX-packets: 2 TX-dropped: 0 TX-total: 2
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 2 RX-dropped: 0 RX-total: 2
TX-packets: 2 TX-dropped: 0 TX-total: 2
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
Stopping port 0...
Stopping ports...
Done
Shutting down port 0...
Closing ports...
Port 0 is closed
Done
Bye...
I am not able to figure out the root cause of this error. Any help is appreciated. Thanks
The memory allocation failure after moving from DPDK 17.02 to 21.11 is expected, given the fixed reservation of 512 * 2 MB hugepages and the memory requirements of the custom application.
DPDK 21.11 introduces new features such as telemetry, fb_array, multi-process communication sockets, and service cores, which require more internal memory allocation (not all of it comes from the heap region; much of it comes from hugepages).
rte_pktmbuf_pool_create tries to create a pool of 267008 mbufs of 2176 bytes each, plus per-object and pool overhead, which is about 0.8 GB.
Hence, with the new memory model and services on top, the total hugepage-backed (mmapped) area can shoot over 1 GB, while the hugepages currently allocated in the system are only 512 * 2 MB.
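A quick back-of-the-envelope check of those numbers (the per-mbuf overhead is approximate, since it depends on the build and the mempool driver):
mbuf pool:  267008 x (2176 B data room + mbuf header and mempool per-object overhead) ≈ 0.6-0.8 GB
hugepages:  512 x 2 MB = 1024 MB total, 511 free ≈ 1022 MB
So the pool alone consumes most of the reserved hugepage memory, and the additional 21.11 allocations push the total over the limit.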
Solutions:
Reduce the number of mbufs from 267008 to a lower value such as 200000 so the pool fits within the available memory.
Increase the number of available hugepages from 512 to 600 or more.
Use the EAL options for legacy memory, no telemetry, no multi-process, and no service cores to reduce the memory footprint.
Use --socket-mem or -m to fix the memory allocation explicitly (see the sketch below).
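As a rough sketch of the last few options (the hugepage count and memory size are only examples, and my_app is a placeholder for the custom application; the EAL flags are as named in DPDK 21.11):
# reserve more 2 MB hugepages, e.g. 600 instead of 512
echo 600 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# run with legacy memory, telemetry disabled, and an explicit memory budget
./my_app --legacy-mem --no-telemetry --socket-mem=1024 -- <application args>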
Note: the RPM package did not initially include libdpdk.pc, which is required for obtaining the platform-specific CFLAGS and LDFLAGS via pkg-config.
I've been playing around with DPDK and trying to create the following scenario.
I have 2 identical servers with 2 different NICs each. The goal is to send packets from server1 at different rates (up to the link maximum, using DPDK) and capture them on the other side, where an app will be running.
On server1 one NIC (a Netronome) is bound to DPDK, but on server2 it is not. The NICs are directly connected with fiber.
On server1 I run
./dpdk-devbind.py --bind=vfio-pci 0000:05:00.0
and then pktgen. It appears to be working (pktgen reports packets as sent). However, on the other side (server2), the interface goes down the moment I run devbind. From:
Settings for enp6s0np1:
Supported ports: [ FIBRE ]
Supported link modes: Not reported
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: None
Advertised link modes: Not reported
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: None
Speed: 40000Mb/s
Duplex: Full
Auto-negotiation: off
Port: Direct Attach Copper
PHYAD: 0
Transceiver: internal
Link detected: yes
it goes to:
ethtool enp6s0np1
Settings for enp6s0np1:
Supported ports: [ FIBRE ]
Supported link modes: Not reported
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: None
Advertised link modes: Not reported
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: None
Speed: Unknown!
Duplex: Unknown! (255)
Auto-negotiation: off
Port: Other
PHYAD: 0
Transceiver: internal
Link detected: no
It reports that there is no physical connection between the two NICs, which is obviously not the case. The interface now shows:
enp6s0np1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:15:4d:13:30:5c brd ff:ff:ff:ff:ff:ff
The pktgen output:
\ Ports 0-1 of 2 <Main Page> Copyright(c) <2010-2021>, Intel Corporation
Flags:Port : P------Sngl :0
Link State : <--Down--> ---Total Rate---
Pkts/s Rx : 0 0
Tx : 14,976 14,976
MBits/s Rx/Tx : 0/10 0/10
Pkts/s Rx Max : 0 0
Tx Max : 15,104 15,104
Broadcast : 0
Multicast : 0
Sizes 64 : 0
65-127 : 0
128-255 : 0
256-511 : 0
512-1023 : 0
1024-1518 : 0
Runts/Jumbos : 0/0
ARP/ICMP Pkts : 0/0
Errors Rx/Tx : 0/0
Total Rx Pkts : 0
Tx Pkts : 2,579,072
Rx/Tx MBs : 0/1,733
TCP Flags : .A....
TCP Seq/Ack : 305419896/305419920
Pattern Type : abcd...
Tx Count/% Rate : Forever /0.1%
Pkt Size/Tx Burst : 64 / 128
TTL/Port Src/Dest : 10/ 1234/ 8000
Pkt Type:VLAN ID : IPv4 / UDP:0001
802.1p CoS/DSCP/IPP : 0/ 0/ 0
VxLAN Flg/Grp/vid : 0000/ 0/ 0
IP Destination : 192.168.0.2
Source : 192.168.0.1
MAC Destination : 00:15:4d:13:30:5c
Source : 00:15:4d:13:30:81
PCI Vendor/Addr : 19ee:4000/05:00.0
When I try to capture with tcpdump -i enp6s0np1, it doesn't record anything. Are these issues related, and if so, is there a workaround? Shouldn't some packets be captured by tcpdump on server2?
Never mind, this got resolved. Apparently the NIC has two ports and only one was connected, which is exactly the Link State Down shown below:
Ports 0-1 of 2 <Main Page> Copyright(c) <2010-2021>, Intel Corporation
Flags:Port : P------Sngl :0
Link State : <--Down-->
To resolve it, I had to use port 1 instead of port 0.
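For anyone hitting the same thing, a sketch of how port 1 can be selected in pktgen (core numbers and EAL options here are illustrative, not the ones from my run):
./pktgen -l 0-2 -n 4 -- -P -m "[1:2].1"   # map RX/TX cores to port 1 instead of port 0
Pktgen:/> start 1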
I am using a Netronome 4000 SmartNIC. After binding a VF to the DPDK driver, I try to run dpdk-testpmd, but it shows 'Link status: down'. I also tried to bring the port up in testpmd, but that fails as well.
Do I need to explicitly bring up the interface/port in some way?
lspci -kd 19ee:
05:00.0 Ethernet controller: Netronome Systems, Inc. Device 4000
Subsystem: Netronome Systems, Inc. Device 4000
Kernel driver in use: nfp
Kernel modules: nfp
05:08.0 Ethernet controller: Netronome Systems, Inc. Device 6003
Subsystem: Netronome Systems, Inc. Device 4000
Kernel driver in use: igb_uio
Kernel modules: nfp
testpmd> show port info all
********************* Infos for port 0 *********************
MAC address: 00:15:4D:00:00:00
Device name: 0000:05:08.0
Driver name: net_nfp_vf
Connect to socket: 0
memory allocation on the socket: 0
Link status: down
Link speed: 0 Mbps
Link duplex: half-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 128
Supported RSS offload flow types:
ipv4
ipv4-tcp
ipv4-udp
ipv6
ipv6-tcp
ipv6-udp
Minimum size of RX buffer: 68
Maximum configurable length of RX packet: 9216
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 1
Max possible number of RXDs per queue: 32768
Min possible number of RXDs per queue: 64
RXDs number alignment: 128
Current number of TX queues: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 32768
Min possible number of TXDs per queue: 64
TXDs number alignment: 128
Max segment number per packet: 255
Max segment number per MTU/TSO: 8
testpmd> set link-up port 0
nfp_net_set_link_up(): Set link up
Set link up fail.
I am getting a DPDK mlx5 probing issue.
I have installed the mlx5/OFED driver.
I have loaded the kernel modules.
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: Probe PCI driver: mlx5_pci (15b3:1013) device: 0000:5e:00.0 (socket 0)
mlx5_pci: unable to recognize master/representors on the multiple IB devices
common_mlx5: Failed to load driver = mlx5_pci.
EAL: Requested device 0000:5e:00.0 cannot be used
EAL: Bus (pci) probe failed.
As for the failure to load the mlx5_pci driver, I can see that the mlx5_core driver is loaded:
dpdk-devbind.py -s
Network devices using kernel driver
===================================
0000:5e:00.0 'MT27700 Family [ConnectX-4]' if=enp94s0 drv=mlx5_core unused=
I am assuming the two are the same?
What does 'unable to recognize master/representors on the multiple IB devices' mean?
My configuration is: CentOS 7.9, Linux Kernel 5.12, OFED 4.9 (LTS), DPDK 21.02
lsmod | egrep 'mlx|ib'
libceph 413696 1 ceph
ib_isert 49152 0
iscsi_target_mod 315392 1 ib_isert
ib_srpt 61440 0
target_core_mod 372736 3 iscsi_target_mod,ib_srpt,ib_isert
ib_srp 61440 0
scsi_transport_srp 28672 1 ib_srp
ib_iser 45056 0
ib_umad 36864 0
rdma_cm 114688 6 rpcrdma,ib_srpt,ib_srp,ib_iser,ib_isert,rdma_ucm
ib_ipoib 114688 0
libiscsi 65536 1 ib_iser
scsi_transport_iscsi 126976 2 ib_iser,libiscsi
ib_cm 122880 4 rdma_cm,ib_ipoib,ib_srpt,ib_srp
mlx5_ib 331776 0
mlx4_ib 196608 0
ib_uverbs 147456 3 mlx4_ib,rdma_ucm,mlx5_ib
ib_core 356352 14 rdma_cm,ib_ipoib,rpcrdma,mlx4_ib,ib_srpt,ib_srp,iw_cm,ib_iser,ib_umad,ib_isert,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,xfs,libceph
mlx4_en 118784 0
mlx4_core 319488 2 mlx4_ib,mlx4_en
mlx5_core 700416 1 mlx5_ib
mlxfw 32768 1 mlx5_core
pci_hyperv_intf 16384 1 mlx5_core
ptp 28672 3 igb,mlx4_en,mlx5_core
libahci 36864 1 ahci
libata 253952 2 libahci,ahci
I have a quad-port Intel 1G network card. I am using DPDK to send data on one physical port and receive it on another.
I saw a few examples in the DPDK code but could not make them work. If anybody knows how to do this, please send me simple instructions that I can follow and understand. I set up my PC properly for hugepages, loaded the driver, assigned the network ports to the DPDK driver, etc. I can run helloworld from DPDK, so the system setup looks OK to me.
Thanks in advance.
After building DPDK:
cd to the DPDK directory.
Run sudo build/app/testpmd -- --interactive
You should see output like this:
$ sudo build/app/testpmd -- --interactive
EAL: Detected 8 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0002:00:02.0 on NUMA socket 0
EAL: probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: PCI information matches, using device "mlx4_0" (VF: true)
PMD: net_mlx4: 1 port(s) detected
PMD: net_mlx4: port 1 MAC address is 00:0d:3a:f4:6e:17
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.
Configuring Port 0 (socket 0)
Port 0: 00:0D:3A:F4:6E:17
Checking link statuses...
Done
testpmd>
Don't worry about the "No free hugepages" message. It means it couldn't find any 1 GB hugepages, but since it continued OK, it must have found some 2 MB hugepages. It would be nicer if it said "EAL: Using 2 MB huge pages" instead.
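If you want to double-check which hugepage sizes are actually reserved on your system, you can look at /proc/meminfo and sysfs:
grep Huge /proc/meminfo
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages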
At the prompt, type start tx_first, then quit. You should see something like:
testpmd> start tx_first
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0:
CRC stripping enabled
RX queues=1 - RX desc=1024 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX queues=1 - TX desc=1024 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ offloads=0x0
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 32 TX-dropped: 0 TX-total: 32
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 32 TX-dropped: 0 TX-total: 32
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In my system there is only one DPDK port, so I sent 32 packets but did not receive any. If I had a multi-port card with a cable directly between the ports, then I'd see the RX count also increase.
You can use testpmd to test DPDK.
testpmd can work as a packet generator (txonly mode), a receiver (rxonly mode), or a forwarder (io mode).
You will need generator nodes connected to your box if you want to use testpmd as a forwarder only.
I propose that you start with the following example:
generator (pktgen) ------> testpmd (io mode) ------> receiver (testpmd rxonly mode)
On the pktgen generator, set the destination MAC address to the MAC address of the receiver's receiving port, as sketched below.
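A rough sketch of that chain (lcore lists, port numbers, and the MAC address are placeholders, and the exact pktgen syntax can differ slightly between versions):
# receiver: testpmd in rxonly mode
./dpdk-testpmd -l 0-1 -n 4 -- --forward-mode=rxonly --auto-start
# forwarder: testpmd in io mode
./dpdk-testpmd -l 0-1 -n 4 -- --forward-mode=io --auto-start
# generator: point pktgen at the receiver's RX-port MAC and start sending
Pktgen:/> set 0 dst mac 00:11:22:33:44:55
Pktgen:/> start 0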
pktgen and how it works is explained in more detail here:
http://pktgen.readthedocs.io/en/latest/getting_started.html
testpmd and how it works is explained here:
http://www.intel.com/content/dam/www/public/us/en/documents/guides/dpdk-testpmd-application-user-guide.pdf
I hope this helps.