I have DPDK-20.11.3 installed.
Given that a DPDK application is an ordinary process from the Linux point of view, I would assume there should be ways (possibly with constraints) for a DPDK application and a native Linux application (one capable of using the Linux sockets interface) to communicate.
What are the possible options I could explore? Pipes, shared memory? I don't have a TCP/IP stack ported onto DPDK, so I likely can't use sockets at all?
I'd appreciate links and documents that could shed some light on this. Thanks!
You can use the KNI interface. Here is the sample app for it:
https://doc.dpdk.org/guides-20.11/sample_app_ug/kernel_nic_interface.html
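For a rough idea of how it is run (the paths, core numbers and port mask below are illustrative and depend on your build; see the guide above for the exact steps):

# load the KNI kernel module built alongside DPDK (path depends on your build)
sudo insmod <build_dir>/kernel/linux/kni/rte_kni.ko
# run the sample: port 0, lcore 1 for RX, lcore 2 for TX
sudo ./<build_dir>/examples/dpdk-kni -l 0-2 -n 4 -- -P -p 0x1 --config="(0,1,2)"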
As clarified in the comments, the real intention is to send and receive full or partial packets to and from the kernel network subsystem. The easiest way is to make use of the DPDK PCAP PMD or TAP PMD.
How to use:
TAP:
Ensure the DPDK application is running in a Linux environment.
Using DPDK testpmd, l2fwd or skeleton, add --vdev=net_tap0 to the DPDK EAL arguments.
Starting the DPDK application will create a TAP interface named dtap0.
Bring the interface up with sudo ip link set dtap0 up.
One can assign an IP address or use it as a raw promiscuous device.
Pinning both the kernel thread and the DPDK TAP PMD thread, up to 4 Gbps of throughput can be achieved for small packets.
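For example, with testpmd (dtap0 is the TAP PMD default name; the address is illustrative):

sudo ./<build_dir>/app/dpdk-testpmd -l 0-1 -n 4 --vdev=net_tap0 -- -i
# in a second terminal, once testpmd is up:
sudo ip link set dtap0 up
sudo ip addr add 192.0.2.1/24 dev dtap0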
PCAP:
Create a veth interface pair in Linux with sudo ip link add dev v1 type veth peer name v2.
Use v1 in the Linux network subsystem.
Use v2 in the DPDK application via --vdev=net_pcap0,iface=v2.
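Putting it together (a sketch; the v1/v2 names follow the commands above):

sudo ip link add dev v1 type veth peer name v2
sudo ip link set v1 up
sudo ip link set v2 up
sudo ./<build_dir>/app/dpdk-testpmd --vdev=net_pcap0,iface=v2 -- -i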
Note:
My recommendation is to use the TAP interface, since it is a dedicated PMD whose probe and removal are handled with the DPDK application. Assigning an IP address from Linux also allows it to take part in local termination, firewall and netfilter processing. All the kernel network knobs for IPv4 forwarding, TCP, UDP and SCTP can be exercised too.
I do not recommend the use of the KNI PMD: it is deprecated and will be removed, it needs an additional kernel thread to handle buffer management and Netlink, and it is an external dependency that has to be built (not done for most distros' package distributions).
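For instance, once the TAP interface has an address, the usual kernel knobs apply (values are illustrative):

# enable IPv4 forwarding
sudo sysctl -w net.ipv4.ip_forward=1
# let netfilter accept TCP port 80 arriving on the TAP interface
sudo iptables -A INPUT -i dtap0 -p tcp --dport 80 -j ACCEPT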
I have a C++ application that records data from an external camera sensor. The sensor sends out UDP packets and has its own API. The API automatically scans the subnet and connects to the camera sensor. I know the IP address of the camera, but there is no way to specify it manually using this API.
Unfortunately, I must use a Windows host machine, while the C++ libraries I use are only available for Linux. Therefore I am constrained to running Linux in a Docker container on the Windows host. The camera API does not automatically find the device because it is not on the same subnet. Docker's host network feature would have solved this issue, but it is not available on Windows.
Has anyone been in a similar situation? Can someone explain to me how to get around this problem? I do not have much knowledge of networking.
I need to create a veth device as a slow path for control packets.
What I have tried so far:
I created veth interfaces using the command below:
sudo ip link add veth1-2 type veth peer name veth2-1
When I use the command sudo dpdk-devbind.py --bind=igb_uio veth1-2 to bind the veth to DPDK, it gives me the error "Unknown device: veth2-1".
Is there any way to add veth interfaces to DPDK?
If you want a "dpdk-solution", then what you'll want to look at is KNI: https://doc.dpdk.org/guides/prog_guide/kernel_nic_interface.html
From their docs:
The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.
The benefits of using the DPDK KNI are:
Faster than existing Linux TUN/TAP interfaces (by eliminating system calls and copy_to_user()/copy_from_user() operations).
Allows management of DPDK ports using standard Linux net tools such as ethtool, ifconfig and tcpdump.
Allows an interface with the kernel network stack.
If you're fine with a non-DPDK solution, then a TUN/TAP device is a typical way to interface with the networking stack. Your application would receive packets on the DPDK-controlled NIC, and if one is a control packet you would simply forward it on to the TUN/TAP device (or KNI if using DPDK's version of TUN/TAP). Similarly for the other direction: if the TUN/TAP/KNI device receives a packet from the networking stack, you would simply send it out the DPDK physical NIC.
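Creating the TAP device itself is simple (the slowpath0 name and the address are placeholders); your application then opens /dev/net/tun, attaches to the device, and reads and writes the control packets through it:

sudo ip tuntap add dev slowpath0 mode tap
sudo ip link set slowpath0 up
sudo ip addr add 198.51.100.1/24 dev slowpath0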
I've been hitting the performance limits of the kernel TUN driver and I'm looking to the DPDK KNI driver as an alternative since it cites itself as a replacement for Linux TUN/TAP and provides a more efficient interface to get packets to the network stack.
I've been experimenting with it as a replacement, but it operates at L2 and I'm unsure how to configure the interface, or what scaffolding I need to build, to make it behave as an L3 point-to-point TUN driver. I can see packets (with Ethernet headers) being presented to the interface, including a DHCP packet from the OS, but I'm not sure how to respond to these messages, and a short while after adding an IP to the interface the kernel drops it.
Is it possible to use the KNI driver as a replacement for TUN and, if so, is there any existing tooling / open-source projects to use the KNI interface as a L3 TUN device?
I am using an Ubuntu 16.04 virtual machine with kernel 4.4 and DPDK version 17.11. I managed to configure the igb_uio driver using the setup utility.
Then I compiled the basicfwd sample application from DPDK. I also bound two ports to the igb_uio driver and verified that they are associated with DPDK and no longer shown in the Linux kernel.
The basicfwd application is listening on the two ports, whose MAC addresses are displayed.
I am not sure how to send packets to a given MAC address. Can anyone suggest how to create packets for a given MAC address using a command or utility? Windows is the host OS on my laptop.
I also see that the testpmd and pktgen applications are used for testing purposes. I am not sure whether they can be used to test the basicfwd application.
I would also like to know how to assign an IP address to a DPDK port so that it can receive packets in a live environment. I need to study DPDK more on these aspects.
Would appreciate any help on this.
DPDK is an alternative to kernel stack processing, so any port bound to DPDK via uio_pci_generic/vfio-pci/igb_uio will not support an IPv4/IPv6 address the way a kernel netdev does. Hence the expectation of "how to assign an IP address to a DPDK port" is incorrect.
With respect to sending packets into a virtual machine, there are a couple of combinations:
Complete NIC pass-through to the VM (PF/VF): in this scenario, one needs to send packets through the physical interface itself.
A port representation such as a TAP/veth pair passed in as a virtio interface: in this scenario there will be a representation port on the host machine, so you can use tools like ping, arping, packeth, pktgen, scapy or ostinato to generate packets.
Note: if you are using the testpmd DPDK application, you can make it run in promiscuous mode. For examples like l2fwd/skeleton the ports are set into promiscuous mode by default.
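For example, with a host-side representation port v1 (addresses are illustrative):

sudo ip addr add 203.0.113.1/24 dev v1
ping -I v1 203.0.113.2        # IP traffic toward the DPDK side
sudo arping -I v1 203.0.113.2 # ARP-level traffic

and inside the testpmd interactive prompt:

testpmd> set promisc all on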
I am a newbie to Intel DPDK.
I am planning to write an HTTP web server.
Can it be implemented with DPDK using the following logic?
Get the packets and send them to worker logical cores.
A worker logical core reconstructs the 'HTTP request' sent by the client from the incoming packets.
Process the 'HTTP request' in the worker logical core and produce an 'HTTP response'.
Create packets for the 'HTTP response' and dispatch them to output software rings.
I am not sure whether the above is feasible or not.
Is it possible to write a web server using Intel DPDK?
It is a lot of work, since you'll need a TCP/IP stack on top of DPDK. Even once you have ported a TCP/IP stack onto DPDK (or reused a port from an OS), you won't get the performance: it is easy to write C code that runs, but writing a TCP/IP stack that sustains good performance is a very difficult development.
You can try http://www.6wind.com/6windgate-performance/tcp-termination/ : they do not provide an HTTP server, but they provide L7-like TCP socket support to build the fastest HTTP servers.
Yes, it's possible to build a web server using DPDK. You could use a KNI interface provided by DPDK. All packets received on a KNI interface are still routed through the kernel network stack; however, and here's the catch, this is still faster than receiving packets directly from the kernel (which requires multiple copies). With DPDK you can also pin some lcores to RX and different lcores to TX, and instruct your OS not to use those lcores for anything else, so you really have dedicated lcores for packet TX and RX. Ensure that the TX and RX lcores lie on different CPU sockets.
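A sketch of that core dedication (core IDs are illustrative, and the webserver binary name is hypothetical): isolate the cores from the Linux scheduler at boot, then hand exactly those cores to DPDK via the EAL -l option.

# kernel command line (e.g. in GRUB): keep cores 2 and 3 away from the scheduler
isolcpus=2,3
# give those cores to the DPDK app: one lcore for RX, one for TX
sudo ./build/webserver -l 2,3 -n 4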
More information at:
http://dpdk.org/doc/guides/sample_app_ug/kernel_nic_interface.html