Does DPDK provide an interface to forward packets from one process to another?

Does DPDK provide an interface to forward packets from one process to another? Similar to the loopback mode of the kernel, but faster than the loopback mode.

Related

Enable DPDK application to talk to Linux process

I have DPDK-20.11.3 installed.
Given that a DPDK application is a process from the Linux point of view, I would assume there should be ways (possibly with constraints) for a DPDK application and a native Linux application (one capable of using the Linux sockets interface) to communicate.
What are the possible options I could explore? Pipes, shared memory? I don't have any TCP/IP stack ported to DPDK, so I likely can't use sockets at all?
I'd appreciate links and documents that could shed some light on this. Thanks!
You can use the KNI interface. Here is the sample app for it:
https://doc.dpdk.org/guides-20.11/sample_app_ug/kernel_nic_interface.html
As clarified in the comments, the real intention is to send and receive full or partial packets to and from the kernel network subsystem. The easiest way is to make use of the DPDK PCAP PMD or TAP PMD.
How to use:
TAP:
Ensure the DPDK application is running in a Linux environment.
Using DPDK testpmd, l2fwd, or skeleton, add --vdev=net_tap0 to the DPDK EAL arguments.
Starting the DPDK application will create a TAP interface named dtap0.
Bring the interface up with sudo ip link set dtap0 up.
One can assign an IP address or use it as a raw promiscuous device.
Pinging between the kernel side and the DPDK TAP PMD side, up to 4 Gbps of throughput can be achieved for small packets (see the sketch below).
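A minimal sketch of the same setup done from code rather than the command line, assuming DPDK 20.11 (the core list and iface name are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>

int main(void)
{
    /* Equivalent of running "app -l 0-1 --vdev=net_tap0,iface=dtap0":
     * the TAP PMD creates a kernel-visible dtap0 next to DPDK port 0. */
    char *eal_args[] = { "app", "-l", "0-1", "--vdev=net_tap0,iface=dtap0" };

    if (rte_eal_init(4, eal_args) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    printf("ports available: %u\n", rte_eth_dev_count_avail());
    /* ... configure and start port 0, then rx/tx burst as usual ... */
    return 0;
}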
PCAP:
Create a veth interface pair in Linux with ip link add dev v1 type veth peer name v2.
Use v1 in the Linux network subsystem.
Use v2 in the DPDK application via --vdev=net_pcap0,iface=v2 (see the echo-loop sketch below).
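A condensed sketch of such an echo loop on the pcap vdev, again assuming DPDK 20.11 (pool sizes and queue depths are arbitrary; most error checks omitted for brevity):

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

int main(void)
{
    /* Equivalent of "app -l 0 --no-pci --vdev=net_pcap0,iface=v2" */
    char *eal_args[] = { "app", "-l", "0", "--no-pci",
                         "--vdev=net_pcap0,iface=v2" };
    if (rte_eal_init(5, eal_args) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    struct rte_mempool *mp = rte_pktmbuf_pool_create("mbufs", 8191, 256,
            0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    struct rte_eth_conf conf = { 0 };

    /* One rx and one tx queue on port 0 (the pcap vdev) */
    rte_eth_dev_configure(0, 1, 1, &conf);
    rte_eth_rx_queue_setup(0, 0, 512, rte_socket_id(), NULL, mp);
    rte_eth_tx_queue_setup(0, 0, 512, rte_socket_id(), NULL);
    rte_eth_dev_start(0);

    for (;;) {
        struct rte_mbuf *bufs[32];
        uint16_t n = rte_eth_rx_burst(0, 0, bufs, 32);
        uint16_t sent = rte_eth_tx_burst(0, 0, bufs, n);
        while (sent < n)            /* free any frames tx did not take */
            rte_pktmbuf_free(bufs[sent++]);
    }
    return 0;
}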
Note:
My recommendation is to use the TAP interface, since it is a dedicated PMD whose probe and removal are handled along with the DPDK application. Assigning an IP address from Linux also lets it take part in local termination, firewall, and netfilter processing. All kernel network knobs for IPv4 forwarding, TCP, UDP, and SCTP can be exercised too.
I do not recommend the KNI PMD: it is deprecated and will be removed, it requires an additional kernel thread to handle buffer management and Netlink, and it is an external dependency that has to be built (not done for most distros' package distributions).

Is it possible to use the DPDK KNI as a TUN Device

I've been hitting the performance limits of the kernel TUN driver and I'm looking to the DPDK KNI driver as an alternative since it cites itself as a replacement for Linux TUN/TAP and provides a more efficient interface to get packets to the network stack.
I've been experimenting with it as a replacement, but it operates at L2 and I'm unsure how to configure the interface or what scaffolding I need to build to make it behave as an L3 point-to-point TUN driver. I can see packets (with Ethernet headers) being presented to the interface, including a DHCP packet from the OS, but I'm not sure how to respond to these messages, and a short while after adding an IP to the interface, the kernel drops it.
Is it possible to use the KNI driver as a replacement for TUN and, if so, is there any existing tooling / open-source project to use the KNI interface as an L3 TUN device?

Is it possible to implement a Modbus Master based on libmodbuspp which uses RTU over TCP to talk to RTU slaves behind a TCP/RTU gateway?

I'm developing a C++ Modbus application which already uses the libmodbuspp library to implement a Modbus Master device to query Modbus Slaves either in TCP or RTU modes (respectively for devices connected over an Ethernet network or RS-232/485 serial links). It is already working fine, but recently a new requirement was set that this application should also implement RTU over TCP, so it would be able to communicate over TCP with a gateway which has many RTU Modbus slave devices connected to its many serial ports (the gateway just forwards the RTU packets to the corresponding slave IDs set in them). Basically this means our application should send Modbus RTU PDUs over a TCP/IP connection instead of a serial port.
For a quick and dirty solution (might help somebody else), I used socat to create a virtual serial port with an outbound TCP connection to the target TCP gateway, making my application just work in its regular RTU mode (the virtual serial port being /dev/ttyS4 and the TCP gateway endpoint 192.168.0.10:8000):
socat -d -d pty,link=/dev/ttyS4,raw,echo=0 tcp-connect:192.168.0.10:8000
but for the end application I wanted something cleaner, without depending on external apps.
I was wondering if I could use libmodbuspp's virtual-RTU layer to achieve this. Although the libmodbuspp documentation is very good, this seems to be a new feature, so it is not yet clear to me. The docs and examples only show it working for receiving RTU over TCP through a Server or a Router (a specialized Server), which in turn can reach RTU slaves associated with these objects, but what I need is exactly the opposite: the ability to connect to a TCP endpoint as a client, but send it Modbus RTU instead of Modbus TCP datagrams.
Since the libmodbuspp library is actually a C++ wrapper around the well-known Modbus C library libmodbus, there is also the possibility of using this fork of libmodbus, which added support for RTU over TCP, but I would rather not have to reimplement the support in the C++ wrapper library libmodbuspp if it somehow already supports it (but maybe in an incomplete way - not in the client connection). Maybe somebody more familiar with the libmodbuspp codebase could give me some pointers here? Thanks!
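To make the requirement concrete, here is a minimal sketch that bypasses libmodbus entirely: build a raw Modbus RTU ADU (with its CRC-16) and write it to a plain TCP socket. The gateway endpoint matches the socat example above; the slave ID and register addresses are made up:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* CRC-16/Modbus: poly 0xA001 (reflected), init 0xFFFF */
static uint16_t modbus_crc16(const uint8_t *p, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}

int main(void)
{
    /* Read 2 holding registers at address 0 from slave 1 (function 0x03) */
    uint8_t adu[8] = { 0x01, 0x03, 0x00, 0x00, 0x00, 0x02 };
    uint16_t crc = modbus_crc16(adu, 6);
    adu[6] = crc & 0xFF;   /* CRC is appended low byte first */
    adu[7] = crc >> 8;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in gw = { .sin_family = AF_INET,
                              .sin_port = htons(8000) };
    inet_pton(AF_INET, "192.168.0.10", &gw.sin_addr);
    if (connect(fd, (struct sockaddr *)&gw, sizeof(gw)) < 0)
        return 1;

    write(fd, adu, sizeof(adu));                /* RTU frame over TCP */

    uint8_t resp[256];
    ssize_t n = read(fd, resp, sizeof(resp));   /* RTU reply, CRC included */
    printf("got %zd bytes\n", n);
    close(fd);
    return 0;
}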

Receiving multicast on linux host with multiple interfaces

I have a host running Ubuntu 16.04 connected to one network via the primary wired network interface, and to another network via a USB-to-Ethernet adapter. Using tcpdump, I am able to verify incoming multicast packets on both network interfaces. However, my application does not receive any multicast data from the secondary interface. If I disconnect the cable to the primary interface, and then restart my application, then I do receive the data from the secondary interface. It is only with both interfaces connected that the application does not receive from the secondary interface.
I found a similar problem (Raspberry Pi Zero with a USB-to-Ethernet adaptor failing to respond to mDNS queries). To work out if your problem is the same: does your app correctly receive multicast traffic while tcpdump is running at the same time? And does tcpdump with --no-promiscuous-mode fail to see the multicast traffic?
If your answer is yes to both, then the workaround I've found is simply ip link set eth0 promisc on. I don't know if it is a hardware bug (I'm using a Kontron DM9601 adaptor, ID 0FE6:9700) or a driver bug, but either way, enabling promiscuous mode seems to fix multicast reception for me. Alternatively, you could try a better USB-to-Ethernet adaptor.
The ip_mreq structure is passed as the option value for the IP_ADD_MEMBERSHIP socket option to join a multicast group. From the Multicast programming HOWTO from The Linux Documentation Project:
The first member, imr_multiaddr, holds the group address you want to join. Remember that memberships are also associated with interfaces, not just groups. This is the reason you have to provide a value for the second member: imr_interface. This way, if you are in a multihomed host, you can join the same group in several interfaces. You can always fill this last member with the wildcard address (INADDR_ANY) and then the kernel will deal with the task of choosing the interface.
The IP_MULTICAST_IF socket option is also relevant on a multihomed host: it sets the outbound interface for multicast data sent via the socket. More information on these socket options, the ip_mreq structure, and the newer ip_mreqn structure is found here.
For those using Boost on a multihomed host, you will need to use the native handle to join the group on specific interfaces. As of Boost 1.58 running on Ubuntu 16.04, the socket option abstraction ip::multicast::join_group() joins the group on an interface of the kernel's choosing and does not allow the developer to specify an interface. The socket option abstraction ip::multicast::outbound_interface() controls the outbound interface but does not affect which interface the socket receives on.
Here is a code sample to join a group on a specific interface based on the local interface IP address:
struct ip_mreq mreq;
/* Multicast group address to join */
mreq.imr_multiaddr.s_addr = inet_addr(discovery_ip);
/* Local address of the interface to join on */
mreq.imr_interface.s_addr = inet_addr(local_interface_ip);
if (setsockopt(socket_.native_handle(), IPPROTO_IP, IP_ADD_MEMBERSHIP,
               &mreq, sizeof(mreq))) {
    ... handle error ...
}
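The outbound side follows the same pattern; a short sketch, assuming the same socket_ and local_interface_ip as above:

struct in_addr out_if;
/* Local address of the interface multicast sends should leave from */
out_if.s_addr = inet_addr(local_interface_ip);
if (setsockopt(socket_.native_handle(), IPPROTO_IP, IP_MULTICAST_IF,
               &out_if, sizeof(out_if))) {
    ... handle error ...
}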

p2p open source library tcp/udp multicast support

I have a certain application running on my computer. The same application can run on many computers on a LAN or in different places in the world. I want to communicate between them, so I basically want a P2P system, but I will always know which computers (specific IP addresses) will be peers. I just want peers to have join and leave functionality. The single most important aim is communication speed and the time required. I assume simple UDP multicast (if anything like that exists) between peers would be the fastest possible solution. I don't want to retransmit messages even if they are lost. Should I use an existing P2P library, e.g. libjingle, etc., or just create a basic framework from scratch, as my needs are pretty basic?
I think you're missing the point of UDP. It doesn't get a message to the destination faster; it's just that you post the message and don't care whether it arrives safely at the other side. On a WAN it will probably not arrive at the other side. UDP across networks is problematic, as it can be thrown out by any router on the way that is tight on bandwidth - there's no guarantee of delivery.
I wouldn't suggest using UDP outside the topology under your control.
As for P2P vs. directed sockets - the question is what it is that you need to move around. Do you need bi/multidirectional communication between all the peers, or are you talking to a single server from all the nodes?
You mentioned multicast - that would mean you have some centralized source of data that transmits information while all the rest listen. In this case there's no benefit to P2P, and multicast, as a UDP protocol, may not work well across multiple networks. But you can use TCP connections to each of the nodes and "multicast" on your own, not through IGMP (see the sketch below). You can (and should) use threading and non-blocking sockets if you're concerned about sends blocking you, and of course you can use QoS settings to "ask" routers to rush your traffic through.
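A sketch of that application-level "multicast", assuming you already hold one connected TCP socket per peer (peer_fds and the error policy are placeholders):

#include <stddef.h>
#include <sys/socket.h>

/* Write the same payload to every connected peer. */
static void app_multicast(const int *peer_fds, size_t n_peers,
                          const void *buf, size_t len)
{
    for (size_t i = 0; i < n_peers; i++) {
        /* MSG_NOSIGNAL (Linux) avoids SIGPIPE if a peer has gone away. */
        if (send(peer_fds[i], buf, len, MSG_NOSIGNAL) < 0) {
            /* drop or reconnect this peer per your join/leave logic */
        }
    }
}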
You can use zeromq for all your network communication:
zeromq is a simple library that encapsulates TCP and UDP for high-level communication.
For P2P you can use the different modes of 0mq (a minimal publisher sketch follows below):
the PGM/EPGM mode to discover P2P members on your LAN (it uses multicast)
the REQ/REP mode to ask a question of one member
the PULL/PUSH mode to duplicate a resource across the net
the Publish/Subscribe mode to transmit a file to all requesters
Warning: zeromq is hard to install on Windows...
And for the HMI, use green-shoes?
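For illustration, a minimal publisher sketch using the plain libzmq C API (the endpoint, payload, and message count are made up):

#include <unistd.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);

    /* Peers receive with a ZMQ_SUB socket: zmq_connect() to this
     * endpoint, then zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0).
     * The PGM/EPGM transports use a different endpoint format,
     * e.g. "epgm://eth0;239.192.1.1:5556". */
    zmq_bind(pub, "tcp://*:5556");

    for (int i = 0; i < 10; i++) {
        zmq_send(pub, "status|alive", 12, 0);
        sleep(1);
    }

    zmq_close(pub);
    zmq_ctx_destroy(ctx);
    return 0;
}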
I think you should succeed using multicast. Unfortunately I do not know of any library, but in case you have to do it from scratch, take a look at this:
http://www.tldp.org/HOWTO/Multicast-HOWTO.html
Good luck :-)