My question is how to send a packet to another physical server from my computer using DPDK.
I have already looked at the example code rxtx_callbacks and I want to use this code.
But there is no place to enter a specific IP and port for another server.
How can I send packets to places on a server using DPDK with a specified IP and port?
And how can I receive packets using DPDK?
Is l3fwd the right example, or is that a different concept?
help me
DPDK is an open-source library that allows one to bypass the kernel and its ETH-IP-TCP stack to send packets from user space directly to a NIC or other custom hardware. There are multiple examples and projects, like pktgen and TRex, which use it to generate user-defined packets (with the desired MAC address, VLAN, IP and TCP/UDP payload).
For the queries:
How can I send packets to places on a server using DPDK with a specified IP and port?
[Answer] Make use of DPDK pktgen as an easy way to generate traffic. Other options are pcap-based burst replay and TRex.
But the easiest way to generate and send traffic is to use scapy with the DPDK sample application skeleton. The following steps achieve this:
Install DPDK on the desired platform (preferably Linux).
Build the DPDK example skeleton found at [dpdk root folder]/examples/skeleton.
Bind a physical NIC (if traffic needs to be sent out of the server) to a userspace driver such as igb_uio, uio_pci_generic or vfio-pci.
Start the application with the options '-l 1 --vdev=net_tap0,iface=scapyEth'. This will create a TAP interface named scapyEth.
Using scapy, now create your custom packet with the desired MAC, VLAN, IP and port numbers (a sketch follows below).
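For illustration, here is a minimal scapy sketch of that last step. The interface name scapyEth comes from the --vdev option above; every address and port below is a placeholder you would replace with your receiver's details:

from scapy.all import Ether, Dot1Q, IP, UDP, Raw, sendp

# Frame with the desired MAC, VLAN, IP and UDP port values
# (all addresses here are example placeholders).
pkt = (Ether(dst="aa:bb:cc:dd:ee:ff") /
       Dot1Q(vlan=100) /
       IP(src="192.168.1.10", dst="192.168.1.20") /
       UDP(sport=4000, dport=5000) /
       Raw(load="hello from scapy"))

# Send 32 copies out of the TAP interface created by the skeleton app.
sendp(pkt, iface="scapyEth", count=32)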
And how can I receive packets using DPDK?
[Answer] On the receiver side, run a DPDK application like testpmd, l2fwd or skeleton if the packets need to be received by a userspace DPDK application; otherwise an ordinary Linux socket can receive the UDP packets (see the sketch below).
Note: the easiest way to check whether packets are received is to run tcpdump, for example tcpdump -eni eth1 -Q in (where eth1 is the physical interface on the receiver server).
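A minimal sketch of the plain Linux-socket option, assuming the sender targets UDP port 5000 (the placeholder value used in the scapy sketch above):

import socket

# Ordinary Linux UDP socket; no DPDK involved on this side.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5000))

while True:
    data, src = sock.recvfrom(2048)
    print(f"received {len(data)} bytes from {src}")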
Note: since the request "how can I send packets to places on a server" is not clear:
Using DPDK, one can send packets out through a physical interface using a dedicated NIC, FPGA or wireless device.
DPDK can send packets between applications using the memif interface.
DPDK can send packets between VMs using virtio and vhost.
DPDK can send and receive packets to and from the kernel, where the kernel routing stack and ARP table determine which kernel interface will forward the packets.
I need to create a veth device as a slow path for control packets.
What I have tried so far:
I created veth interfaces using the command below:
sudo ip link add veth1-2 type veth peer name veth2-1
When I use the command sudo dpdk-devbind.py --bind=igb_uio veth1-2 to add the veth into DPDK,
it gives me the error "Unknown device: veth2-1".
Is there any way we can add veth interfaces to DPDK?
If you want a "dpdk-solution", then what you'll want to look at is KNI: https://doc.dpdk.org/guides/prog_guide/kernel_nic_interface.html
From their docs:
The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.
The benefits of using the DPDK KNI are:
Faster than existing Linux TUN/TAP interfaces (by eliminating system calls and copy_to_user()/copy_from_user() operations).
Allows management of DPDK ports using standard Linux net tools such as ethtool, ifconfig and tcpdump.
Allows an interface with the kernel network stack.
If you're fine using a non-DPDK solution, then a TUN/TAP device is a typical way to interface with the networking stack. Your application would receive packets on the DPDK-controlled NIC, and if it is a control packet you would simply forward it on to the TUN/TAP device (or KNI if using DPDK's version of TUN/TAP). Similarly for the other direction: if the TUN/TAP/KNI device receives a packet from the networking stack, you would simply send it out the DPDK physical NIC (a sketch of the TUN/TAP side follows below).
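To make the TUN/TAP half concrete, here is a minimal Python sketch that creates a TAP device and moves frames to and from the kernel stack. The interface name slowpath0 and the buffer size are arbitrary assumptions, and the DPDK side is only indicated by comments:

import fcntl
import os
import struct

TUNSETIFF = 0x400454ca   # Linux ioctl to configure a TUN/TAP device
IFF_TAP = 0x0002         # TAP mode: operate on whole Ethernet frames
IFF_NO_PI = 0x1000       # no extra packet-information header

# Create/attach a TAP interface named "slowpath0" (hypothetical name).
tap = os.open("/dev/net/tun", os.O_RDWR)
fcntl.ioctl(tap, TUNSETIFF, struct.pack("16sH", b"slowpath0", IFF_TAP | IFF_NO_PI))

while True:
    # Control packet coming from the kernel networking stack:
    frame = os.read(tap, 2048)
    # ... hand "frame" to the DPDK TX path here ...
    # Conversely, a control packet received on the DPDK NIC would be
    # injected into the kernel stack with: os.write(tap, frame)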
I am using an Ubuntu 16.04 virtual machine with kernel 4.4 and DPDK version 17.11. I managed to configure the igb_uio drivers using the setup utility.
Then I compiled the basicfwd application from DPDK. I also configured two ports with the igb_uio driver and verified that they are associated with DPDK and not shown in the Linux kernel.
The basicfwd application is listening on two ports, whose MAC addresses are displayed.
I am not sure how to send a packet to a MAC address. Can anyone suggest how to create packets for a given MAC address using a command or utility? Windows is the host OS on my laptop.
I also see that the PMD and pktgen applications are used for testing purposes. I am not sure whether they can be used to test the basicfwd application.
Also, I would like to know how to assign an IP address to a DPDK port so that it can receive packets in a live environment. I need to study DPDK more on these aspects.
I would appreciate any help on this.
DPDK is an alternative to kernel stack processing, so any port bound to DPDK via uio_pci_generic/vfio-pci/igb_uio will not support an IPv4/IPv6 address the way a kernel netdev does. Hence the expectation "how to assign an IP address to a DPDK port" is incorrect.
With respect to sending packets into a virtual machine, there are a couple of combinations:
Complete NIC pass-through to the VM (PF/VF): in this scenario, one needs to send the packet through the physical interface itself.
A port representation like a TAP/veth pair passed in as a virtio interface: in this scenario there will be a representor port on the host machine, so you can use tools like ping/arping/packeth/pktgen/scapy/ostinato to generate packets for you (see the scapy sketch after the note below).
Note: if you are using the testpmd DPDK application, you can make it run in promiscuous mode. For examples like l2fwd/skeleton, the ports are set to promiscuous mode by default.
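To address "how to create packets for a given MAC address", here is a minimal scapy sketch. The destination MAC should be the one basicfwd displays; the interface name tap0 and all addresses are placeholders:

from scapy.all import Ether, IP, UDP, sendp

# Destination MAC is the one printed by basicfwd; values are placeholders.
pkt = (Ether(dst="52:54:00:12:34:56") /
       IP(src="10.0.0.1", dst="10.0.0.2") /
       UDP(sport=1024, dport=1024))

# Send out of the host-side representor port (e.g. a TAP/veth interface).
sendp(pkt, iface="tap0", count=10)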
I'm developing an application for WAN data optimisation, including SQUID (using TPROXY redirect) for web caching. The software modifies the TCP options to negotiate parameters with another remote instance of the software (used in the optimisation algorithm). Since SQUID establishes the TCP connection with the requesting browser, and the WAN packets may be sent over an IPsec tunnel, the software MUST run between these two components.
I've been able to configure the system so that SQUID correctly handles the LAN-side request and, on a cache miss, sends packets into my software (using a TUN/TAP interface), which modifies the TCP header (and corrects the checksum) and sends it back into the kernel through a second TUN/TAP interface.
For packets being sent into the WAN after a cache miss:
For IPv4, if I set rp_filter=2 on the first TAP (and manually add the ARP entries), the packets are correctly routed.
For IPv6 the kernel seems to black hole the TCP SYN sent from SQUID. This is a packet associated with a socket created locally, received back into the (same) kernel to be routed out to the WAN. If I modify the source or destination ports (i.e. make it look like a different socket) of the packet it is correctly routed out the tunnel/interface.
Are there any sysctl parameters or iptables cleverness that could explain why these packets are dropped, and how do I fix it?
I am a newbie to Intel DPDK.
I am planning to write an http web server.
Can it be implemented using the following logic with DPDK?
Get the packets and send them to worker logical cores.
A worker logical core rebuilds the 'HTTP request' sent by the client from the incoming packets.
Process the 'HTTP request' on the worker logical core and produce an 'HTTP response'.
Create packets for the 'HTTP response' and dispatch them to the output software rings.
I am not sure whether the above is feasible or not.
Is it possible to write a web server using Intel DPDK?
It is a lot of work, since you'll need a TCP/IP stack on top of DPDK. Even once you have ported a TCP/IP stack onto DPDK (or reused a port from an OS), you won't get the performance: it is easy to write C code that runs, but writing a TCP/IP stack that sustains good performance is a very difficult development effort.
You can try http://www.6wind.com/6windgate-performance/tcp-termination/ : they do not provide an HTTP server, but they provide L7-like TCP socket support to build very fast HTTP servers.
Yes, it's possible to build a web server using DPDK. You could use a KNI interface provided by DPDK. All packets received on a KNI interface are still routed through the kernel network stack -- however, and here's the catch, this is still faster than receiving packets directly from the kernel (which requires multiple copies). With DPDK you could also pin some lcores to RX and different lcores to TX, and then instruct your OS not to use those lcores for anything else, so you really have dedicated lcores for packet RX and TX. Ensure that the TX and RX lcores lie on different CPU sockets.
More information at:
http://dpdk.org/doc/guides/sample_app_ug/kernel_nic_interface.html
I'm developing a peer-to-peer communications network for use over a LAN in an industrial environment. Some messages are just asynchronous and don't require a response. Others are request-response. The request messages (and the async messages) are sent to a multicast group, and the replies to requests are sent unicast. Each endpoint, therefore, receives UDP packets that are sent to the multicast group, and also receives messages that are sent to it using plain unicast.
So far it's working fine, but there doesn't seem to be any way in boost::asio to find out the destination address of a received UDP packet (using socket.async_receive_from) - whether it was sent to the multicast group or the actual interface. I can use the contents of the message to infer whether it was sent multicast or unicast, but it would be nice to be able to also check the destination address.
We are currently using Windows 7, but will be transitioning to Linux in the future.
Is there a way to find the destination address of a UDP packet received using boost::asio?
Unfortunately this is not possible with boost::asio, and it usually is not "the way to do it", as you are trying to access network-layer information at the application layer.
So you basically have two options:
a) Write non-portable system code using, for example, IP_PKTINFO or SO_BINDTODEVICE on Linux. Example code can be found on the boost asio mailing list here. (A minimal Python sketch of the same mechanism follows option b below.)
b) Use two distinct sockets, one for multicast and one for unicast. You then need to specify a listen address other than "0.0.0.0" on each socket, e.g.:
udp::endpoint(address_v4::from_string("239.192.152.143"), 6771)
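As an illustration of the IP_PKTINFO mechanism from option (a), here is a minimal Python sketch; the same ancillary-data approach applies from native code. The port 6771 matches the endpoint above, and the fallback constant 8 is the Linux value of IP_PKTINFO:

import socket
import struct

IP_PKTINFO = getattr(socket, "IP_PKTINFO", 8)  # 8 on Linux

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, IP_PKTINFO, 1)
sock.bind(("0.0.0.0", 6771))

data, ancdata, flags, src = sock.recvmsg(2048, socket.CMSG_SPACE(12))
for level, ctype, cdata in ancdata:
    if level == socket.IPPROTO_IP and ctype == IP_PKTINFO:
        # struct in_pktinfo: ifindex, local (spec_dst) addr, header dst addr
        ifindex, spec_dst, dst = struct.unpack("I4s4s", cdata[:12])
        print("destination address was", socket.inet_ntoa(dst), "from", src)

This tells you whether the packet was addressed to the multicast group or to the interface's unicast address.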
This question on SO might also be helpful: Using a specific network interface for a socket in Windows