Is there a way to disable Receive Side Scaling but still use multiple RX queues in a hardware round-robin fashion in DPDK? I set mq_mode to ETH_MQ_RX_NONE rather than ETH_MQ_RX_RSS, but it seems like only one queue is available when receiving packets.
When RSS is disabled with ETH_MQ_RX_NONE, the default queue on which packets are received is queue 0. This is the correct behaviour; if you want round-robin or queue-specific distribution, you should use rte_flow. From the current question I cannot tell more. Please update the question with the DPDK version, the NIC PMD, and sample code.
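For illustration only, here is a minimal sketch of steering one match to a specific RX queue with rte_flow (not the asker's code; a DPDK 20.11-era API is assumed, and the UDP port and queue index are arbitrary example values). Distributing load over several queues this way means installing one such rule per match:

#include <string.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Steer UDP dst-port 5000 to RX queue 1 (illustrative values only). */
static struct rte_flow *steer_to_queue(uint16_t port_id)
{
    struct rte_flow_attr attr;
    struct rte_flow_item pattern[4];
    struct rte_flow_action action[2];
    struct rte_flow_action_queue queue;
    struct rte_flow_item_udp udp_spec, udp_mask;
    struct rte_flow_error err;

    memset(&attr, 0, sizeof(attr));
    attr.ingress = 1;

    memset(pattern, 0, sizeof(pattern));
    memset(&udp_spec, 0, sizeof(udp_spec));
    memset(&udp_mask, 0, sizeof(udp_mask));
    udp_spec.hdr.dst_port = rte_cpu_to_be_16(5000);
    udp_mask.hdr.dst_port = 0xffff;               /* match the full port field */
    pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
    pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
    pattern[2].type = RTE_FLOW_ITEM_TYPE_UDP;
    pattern[2].spec = &udp_spec;
    pattern[2].mask = &udp_mask;
    pattern[3].type = RTE_FLOW_ITEM_TYPE_END;

    memset(action, 0, sizeof(action));
    queue.index = 1;                              /* target RX queue */
    action[0].type = RTE_FLOW_ACTION_TYPE_QUEUE;
    action[0].conf = &queue;
    action[1].type = RTE_FLOW_ACTION_TYPE_END;

    /* Validate first (the PMD may not support the rule), then create. */
    if (rte_flow_validate(port_id, &attr, pattern, action, &err) != 0)
        return NULL;
    return rte_flow_create(port_id, &attr, pattern, action, &err);
}

Whether a given NIC/PMD accepts such rules (and which patterns/actions it supports) varies, which is why the DPDK version and PMD matter here.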
My system is CentOS 8 with kernel 4.18.0-240.22.1.el8_3.x86_64 and I am using DPDK 20.11.1.
I want to calculate the round trip time in an optimized manner, such that a packet sent from Machine A to Machine B is looped back from Machine B to A and the time is measured. While this is being done, Machine B has a DPDK forwarding application running (like testpmd or l2fwd/l3fwd).
One approach could be the DPDK pktgen application (https://pktgen-dpdk.readthedocs.io/en/latest/), but I could not find it calculating the round trip time in this way. Ping is another option, but when Machine B receives a ping packet from Machine A it has to process the packet and then respond to Machine A, which adds some cycles (undesired in my case).
I am open to suggestions and approaches to calculate this time. A benchmark comparing the RTT (Round Trip Time) of a DPDK-based application versus a non-DPDK setup would also be useful.
Edit: There is a way to enable latency measurement in DPDK pktgen. Can anyone share some information on how this latency is calculated and what it signifies? (I could not find solid information regarding latency in the documentation.)
It really depends on the kind of round trip you want to measure. Consider the following timestamps:
host_A                                                        host_B
 -> t1  -> send() -> NIC_A -> t2  --link--> t3  -> NIC_B -> recv() -> t4
 <- t1' <- recv() <- NIC_A <- t2' <--link-- t3' <- NIC_B <- send() <- t4'
Do you want to measure t1' - t1? Then it's just a matter of writing a small DPDK program that stores the TSC value right before/after each transmit/receive function call on host A. (On host B, a forwarding application runs.) See also rte_rdtsc_precise() and rte_get_tsc_hz() for converting the TSC deltas to nanoseconds.
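A rough sketch of that loop on host A could look like the following (port 0, queue 0, and an already-prepared request mbuf are assumptions, and the busy-wait polling is deliberately simplistic; the result includes the polling overhead, which is exactly the t1' - t1 definition above):

#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* One round trip measured on host A: timestamp just before the transmit call,
 * timestamp right after the echoed packet has been received. */
static double measure_rtt_ns(struct rte_mbuf *pkt)
{
    struct rte_mbuf *rx[1];
    uint64_t t_start = rte_rdtsc_precise();

    while (rte_eth_tx_burst(0, 0, &pkt, 1) == 0)
        ;                                   /* retry until the packet is queued */
    while (rte_eth_rx_burst(0, 0, rx, 1) == 0)
        ;                                   /* poll until the echo comes back */

    uint64_t t_end = rte_rdtsc_precise();
    rte_pktmbuf_free(rx[0]);
    return (double)(t_end - t_start) * 1e9 / rte_get_tsc_hz();
}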
For non-DPDK programs you can read out the TSC values/frequency by other means. Depending on your resolution needs you could also just call clock_gettime(CLOCK_REALTIME) which has an overhead of 18 ns or so.
This works for single-packet transmits via rte_eth_tx_burst() and single-packet receives - which aren't necessarily realistic for your target application. For larger bursts you would have to get a timestamp before the first transmit and after the last transmit and then compute the average delta.
Timestamps t2, t3, t2', t3' are hardware transmit/receive timestamps provided by (more serious) NICs.
If you want to compute the round trip t2' - t2 then you first need to discipline the NIC's clock (e.g. with phc2sys), enable timestamping, and get those timestamps. However, AFAICS DPDK doesn't support obtaining the TX timestamps, in general.
Thus, when using SFP transceivers, an alternative is to install passive optical TAPs on the RX/TX end of NIC_A and connect the monitor ports to a packet capture NIC that supports receive hardware timestamping. With such a setup, computing the t2' - t2 round trip is just a matter of writing a script that reads the timestamps of the matching packets from your pcap and computes the deltas between them.
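For that last step, a small libpcap-based sketch might look like this (assumptions: both TAP'd directions land in one capture file, packets carry a 4-byte sequence number at a fixed offset of Ethernet + IPv4 + UDP = 42 bytes, and the timestamp resolution is whatever the capture NIC provides):

#include <pcap/pcap.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <unordered_map>

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_offline(argc > 1 ? argv[1] : "tap.pcap", errbuf);
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

    std::unordered_map<uint32_t, double> first_seen;   // seq -> first timestamp [s]
    struct pcap_pkthdr *hdr;
    const u_char *data;
    const size_t seq_off = 14 + 20 + 8;                // Eth + IPv4 + UDP headers

    while (pcap_next_ex(p, &hdr, &data) == 1) {
        if (hdr->caplen < seq_off + 4)
            continue;
        uint32_t seq;
        memcpy(&seq, data + seq_off, 4);               // sequence number in the payload
        double ts = hdr->ts.tv_sec + hdr->ts.tv_usec * 1e-6;
        auto it = first_seen.find(seq);
        if (it == first_seen.end())
            first_seen[seq] = ts;                      // outgoing copy of this packet
        else
            printf("%u %.3f us\n", seq, (ts - it->second) * 1e6);  // returning copy: delta
    }
    pcap_close(p);
}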
The ideal way to measure the latency of sending and receiving packets through an interface is to set up an external loopback device on the Machine-A NIC port. This ensures the packet sent is received back on the same NIC without any processing.
The next best alternative is to enable internal loopback. This ensures the desired packet is converted to a PCIe payload and DMAed to the hardware packet buffer; based on the PCIe config, the packet buffer is fed back to the RX descriptors, leading to RX of the sent packet. But for this one needs a NIC that supports internal loopback and can suppress loopback error handlers.
Another way is to use a PCIe port-to-port cross connect. In DPDK, we can run rx_burst for port 1 on core A and rx_burst for port 2 on core B. This gives an almost accurate Round Trip Time.
Note: Newer hardware supports a doorbell mechanism, so on both TX and RX we can have the HW send a callback to the driver/PMD, which can then fetch HW-assisted PTP timestamps for nanosecond accuracy.
But in my recommendation, using an external machine (Machine-B) is not desirable because:
Depending upon the quality of the transfer medium, the latency varies.
Machine-B has to be configured to the ideal settings (for almost 0 latency).
Machine-A and Machine-B, even if their physical configurations are the same, need to be maintained and run at the same thermal settings to allow the right clocking.
Both Machine-A and Machine-B have to run with the same PTP grandmaster to synchronize the clocks.
If DPDK is used, either modify the PMD or use rte_eth_tx_buffer_flush to ensure the packet is sent out to the NIC.
With these changes, a dummy UDP packet can be created, where:
the first 8 bytes carry the actual TX time, taken just before tx_burst on Machine-A (T1);
the second 8 bytes are added by Machine-B when it actually receives the packet in SW via rx_burst (T2);
the third 8 bytes are added by Machine-B when its tx_burst is completed (T3);
the fourth 8 bytes are filled in on Machine-A when the packet is actually received via rx_burst (T4).
With these, Round Trip Time = (T4 - T1) - (T3 - T2), where T4 and T1 give the receive and transmit times on Machine-A, and T3 - T2 gives the processing overhead on Machine-B.
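A sketch of the Machine-A side of that scheme could look like the following (header sizes, port/queue IDs, and the payload layout are assumptions; Machine-B would fill in T2/T3 the same way before forwarding, and mixing tick counts from two machines assumes comparable TSC rates, which is exactly why the clocking/PTP caveats above matter):

#include <stdio.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_udp.h>

/* Payload layout right after the UDP header: T1 | T2 | T3, each 8 bytes of TSC ticks. */
struct rtt_stamp { uint64_t t1, t2, t3; };

#define STAMP_OFF (sizeof(struct rte_ether_hdr) + sizeof(struct rte_ipv4_hdr) + \
                   sizeof(struct rte_udp_hdr))

static void stamp_and_send(struct rte_mbuf *m, uint16_t port)
{
    struct rtt_stamp *s = rte_pktmbuf_mtod_offset(m, struct rtt_stamp *, STAMP_OFF);
    s->t1 = rte_rdtsc_precise();            /* T1: right before tx_burst on Machine-A */
    while (rte_eth_tx_burst(port, 0, &m, 1) == 0)
        ;
}

static void on_receive(struct rte_mbuf *m)
{
    uint64_t t4 = rte_rdtsc_precise();      /* T4: on rx_burst at Machine-A */
    const struct rtt_stamp *s =
        rte_pktmbuf_mtod_offset(m, const struct rtt_stamp *, STAMP_OFF);
    uint64_t rtt = (t4 - s->t1) - (s->t3 - s->t2);   /* (T4 - T1) - (T3 - T2) */
    printf("RTT: %.1f ns\n", (double)rtt * 1e9 / rte_get_tsc_hz());
    rte_pktmbuf_free(m);
}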
Note: depending upon the processor and generation, an invariant TSC is available; this ensures the ticks from rte_get_tsc_cycles() do not vary with frequency and power states.
[Edit-1] As mentioned in the comments:
@AmmerUsman, I highly recommend editing your question to reflect the real intention, namely how the round trip time is to be measured, rather than the TX-RX latency of the DUT. This is because you are referring to the DPDK latency stats/metrics, but those measure min/max/avg latency between RX and TX on the same DUT.
@AmmerUsman, the latency library in DPDK produces stats representing the difference between the TX callback and the RX callback, which is not the use case you describe. As Keith's explanation pointed out, the packet sent out by the traffic generator should carry a timestamp in the payload, the receiver application should forward it back on the same port, and the generator can then measure the difference between the receive timestamp and the timestamp embedded in the packet. For this you need to send it back on the same port, which does not match your setup diagram.
IMPORTANT NOTE: I'm aware that UDP is an unreliable protocol. But, as I'm not the manufacturer of the device that delivers the data, I can only try to minimize the impact. Hence, please don't post any more statements about UDP being unreliable. I need suggestions to reduce the loss to a minimum instead.
I've implemented a C++ application which needs to receive a large number of UDP packets in a short time and needs to work under Windows (Winsock). The program works, but seems to drop packets if the data rate (or packet rate) per UDP stream reaches a certain level... Note that I cannot change the camera interface to use TCP.
Details: It's a client for Gigabit-Ethernet cameras, which send their images to the computer using UDP packets. The data rate per camera is often close to the capacity of the network interface (~120 Megabytes per second), which means even with 8KB-Jumbo Frames the packet rate is at 10'000 to 15'000 per camera. Currently we have connected 4 cameras to one computer... and this means up to 60'000 packets per second.
The software handles all cameras at the same time; the stream receiver for each camera is implemented as a separate thread and has its own receiving UDP socket.
At a certain frame rate the software seems to miss a few UDP frames every few minutes (even though only ~60-70% of the network capacity is used).
Hardware Details
Cameras are from foreign manufacturers! They send UDP streams to a configurable UDP endpoint via ethernet. No TCP-support...
Cameras are connected via their own dedicated network interface (1GBit/s)
Direct connection, no switch used (!)
Cables are CAT6e or CAT7
Implementation Details
So far I set the SO_RCVBUF to a large value:
int32_t rbufsize = 4100 * 3100 * 2; // two 12 MP images
if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, (char*)&rbufsize, sizeof(rbufsize)) == -1) {
    perror("SO_RCVBUF");
    throw runtime_error("Could not set socket option SO_RCVBUF.");
}
The error is not thrown. Hence, I assume the value was accepted.
I also set the priority of the main process to HIGH-PRIORITY_CLASS by using the following code:
SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
However, I didn't find any possibility to change the thread priorities. The threads are created after the process priority is set...
The receiver threads use blocking IO to receive one packet at a time (with a 1000 ms timeout to allow the thread to react to a global shutdown signal). If a packet is received, it's stored in a buffer and the loop immediately continues to receive any further packets.
Questions
Is there any other way I can reduce the probability of packet loss? Is there any possibility to receive all packets that are stored in the socket's buffer with one call? (I don't need any information about the sender side; just the contained payload.)
Maybe you can also suggest some registry/network-card settings to check...
To increase the UDP RX performance for GigE cameras on Windows you may want to look into writing a custom filter driver (NDIS). This allows you to intercept the messages in the kernel, stop them from reaching userspace, pack them into some buffer, and then send them to userspace via a custom ioctl to your application. I have done this; it took about a week of work to get done. There is a sample available from Microsoft which I used as a base for it.
It is also possible to use an existing generic driver, such as pcap, which I also tried; that took about half a week. This is not as good because pcap cannot determine when the frames end, so packet grouping will be suboptimal.
I would suggest first digging deep in the network stack settings and making sure that the PC is not starved for resources. Look at guides for tuning e.g. Intel network cards for this type of load, that could potentially have a larger impact than a custom driver.
(I know this is an older thread and you have probably solved your problem. But things like this are good to document for future adventurers...)
Use IOCP and WSARecv in overlapped mode; you can keep around ~60k WSARecv calls outstanding.
On the thread that handles GetQueuedCompletionStatus, process the data and also post another WSARecv from that thread, to compensate for the one that was consumed when receiving the data.
Please note that your UDP packet size should stay below the MTU; above it, packets will be dropped depending on the network hardware between the camera and the software.
Also write some UDP testers that mimic the camera, to test the network and make sure the hardware will support the load.
https://www.winsocketdotnetworkprogramming.com/winsock2programming/winsock2advancediomethod5e.html
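A rough sketch of that IOCP/overlapped approach (the port, buffer size, and number of outstanding receives are arbitrary assumptions, and error handling is minimal):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>
#include <vector>
#pragma comment(lib, "ws2_32.lib")

struct RecvCtx {
    OVERLAPPED ov{};            // must stay the first member (we cast back from OVERLAPPED*)
    WSABUF wsabuf{};
    char buf[9000]{};           // jumbo-frame-sized receive buffer
    sockaddr_in from{};
    int fromLen = sizeof(sockaddr_in);
};

static void postRecv(SOCKET s, RecvCtx* c) {
    ZeroMemory(&c->ov, sizeof(c->ov));       // OVERLAPPED must be reset before reuse
    c->wsabuf.buf = c->buf;
    c->wsabuf.len = sizeof(c->buf);
    c->fromLen = sizeof(c->from);
    DWORD flags = 0;
    // Overlapped receive: the completion is delivered to the IOCP, not here.
    if (WSARecvFrom(s, &c->wsabuf, 1, nullptr, &flags,
                    (sockaddr*)&c->from, &c->fromLen, &c->ov, nullptr) == SOCKET_ERROR &&
        WSAGetLastError() != WSA_IO_PENDING) {
        fprintf(stderr, "WSARecvFrom failed: %d\n", WSAGetLastError());
    }
}

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = WSASocketW(AF_INET, SOCK_DGRAM, IPPROTO_UDP,
                          nullptr, 0, WSA_FLAG_OVERLAPPED);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(50000);            // example port
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(s, (sockaddr*)&addr, sizeof(addr));

    HANDLE iocp = CreateIoCompletionPort((HANDLE)s, nullptr, 0, 0);

    // Keep many receives outstanding so bursts land directly in our buffers
    // instead of piling up in the (limited) socket buffer.
    std::vector<RecvCtx> ctxs(1024);
    for (auto& c : ctxs) postRecv(s, &c);

    for (;;) {
        DWORD bytes = 0; ULONG_PTR key = 0; OVERLAPPED* ov = nullptr;
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
        if (!ov) break;                               // wait failed, nothing dequeued
        RecvCtx* c = reinterpret_cast<RecvCtx*>(ov);  // valid: ov is the first member
        if (ok && bytes > 0) {
            // ... process c->buf[0 .. bytes) here, or hand it off to a worker queue ...
        }
        postRecv(s, c);                               // re-arm this slot
    }
    closesocket(s);
    WSACleanup();
}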
I want to measure UDP latency and drop rate between two machines on Linux. Preferably (but not crucially) I would like to perform the measurement between multiple machines at the same time.
As a result I want to get a histogram, i.e. the RTT of each individual packet over the course of the measurement. The expected frequency is about 10 packets per second.
Do you know of any tool that I can use for this purpose?
What I tried so far is:
ping - uses ICMP instead of UDP
iperf - measures only jitter but not latency
D-ITG - measures per-flow statistics, no histograms
tshark - uses TCP for pings instead of UDP
I have also created a simple C++ socket program where I have Client and Server on each side, and I send UDP packets with counter and timestamp. My program seems to work ok, although since I am not a network programmer I am not 100% sure that I handled buffers correctly (specifically in the case of partial packets etc). So I would prefer to use some proven software for this task.
Can you recommend something?
Thanks
It depends. If all you want is a trace with timestamps, Wireshark is your friend: https://www.wireshark.org/
I would like to remind you that UDP is a message based protocol and packets have definite boundaries. There cannot be reception of partial packets. That is, you will either get the complete message or you will not get it. So, you need not worry about partial packets in UDP.
The method of calculating packet drops using a counter and calculating latency using a time delta is fine for UDP. However, the important point to take into consideration is ensuring that the system clocks of the client and the server are synchronized.
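If it helps as a starting point, here is a minimal sketch of that counter-plus-timestamp idea, done as an echo test so only the client's clock is used and clock synchronization is not needed (the reflector address, port, rate, and packet count are arbitrary; the server side simply sends each datagram back unchanged):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <ctime>

struct Probe { uint32_t seq; struct timespec sent; };

static double elapsed_ms(const timespec& t) {
    timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - t.tv_sec) * 1e3 + (now.tv_nsec - t.tv_nsec) * 1e-6;
}

int main() {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in srv{};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(9000);                       // example port
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);  // example reflector address

    timeval to{1, 0};                                 // 1 s timeout: no echo counts as a drop
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &to, sizeof(to));

    uint32_t lost = 0;
    for (uint32_t seq = 0; seq < 1000; ++seq) {
        Probe p{};
        p.seq = seq;
        clock_gettime(CLOCK_MONOTONIC, &p.sent);
        sendto(s, &p, sizeof(p), 0, (sockaddr*)&srv, sizeof(srv));

        Probe echo{};
        if (recv(s, &echo, sizeof(echo), 0) == (ssize_t)sizeof(echo) && echo.seq == seq)
            printf("%u %.3f ms\n", seq, elapsed_ms(echo.sent));   // per-packet RTT
        else
            ++lost;                                   // timeout, late, or out-of-order echo

        usleep(100000);                               // ~10 packets per second
    }
    printf("lost %u / 1000\n", lost);
    close(s);
}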
I am working on connecting an embedded circuit board to a PC via TCP.
The board contains a chip which, sadly, doesn't generate any interrupt on receiving data. But it does generate an interrupt on receiving a "Keep-Alive" signal.
Currently I have to poll for data.
Instead, I am thinking that I will send data from the PC and then a Keep-Alive signal. Whenever a Keep-Alive is received, I will read the data too.
I understand that this might generate false alarms, but it's better than continuous polling.
I observed a Keep-Alive packet in Wireshark; it has one byte of data, and it is "00".
And then I tried to send a TCP packet with the data set to "00":
I can see that only the flags section is different.
I have two questions:
(Broadly) How to manually send a Keep-Alive Signal?
How to change that flag setting? (Flags in send and sendto are different)
Update:
I have tried raw sockets, but that didn't help me, or I missed something. I just changed the flag to ACK in the raw socket header.
RFC 1122 section 4.2.3.6 might be worth reading.
It states that keepalive is an optional feature of the TCP implementation. It also states that keepalive signals should be limited to at most one every two hours. So manually emitting one from your application isn't a desired feature in general.
Furthermore, it describes details about the implementation, in particular pointing out the sequence number involved. This is one difference visible in your screen shots which you apparently failed to notice: the real keepalive packet has a very high relative sequence number, which is simply the unsigned representation of -1. To reproduce this with raw sockets, I think you'd have to somehow get your hands on the current TCP sequence number of the existing connection. Haven't worked enough with RawSockets to know details on how to do this.
The supported means to have the system send keepalives periodically is using the SO_KEEPALIVE option. But that won't be of much use to emit such a signal at a specific moment in time, I think.
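For completeness, here is a small sketch of that SO_KEEPALIVE route (Linux socket options shown; on Windows the per-connection equivalent is WSAIoctl with SIO_KEEPALIVE_VALS; the idle/interval/count values are arbitrary examples and far more aggressive than the RFC 1122 two-hour default):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void enable_keepalive(int fd) {
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));   // let the stack send keepalives

    // Per-socket overrides of the system defaults (example values only).
    int idle = 60, interval = 10, count = 5;
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));      // seconds idle before the first probe
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));  // seconds between probes
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));     // unanswered probes before the connection is dropped
}

Even with these knobs, the probes are sent by the kernel on its own schedule; as noted above, there is no supported way to emit a single keepalive segment at an exact moment from the application.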
I am developing an application that distributes rendering across several devices (a university project).
Each frame consists of several blocks (16x16 pixels), and each device is "assigned" a number of blocks to be rendered. These blocks, when rendered, are compressed and serialized into a buffer until the buffer's maximum size is reached, at which point it is sent.
My problem is on the receiving end, which needs to receive several datagrams per frame from several devices. Currently I call recv for each packet, but this requires a context switch for each packet. It would be better to receive many packets with one call. A packet identifies which client it is from, so the addresses of these are irrelevant.
I have looked at WSARecv and WSARecvFrom but neither seems to be able to receive several packets from several hosts.
Thanks in advance :)
EDIT
Using a separate thread to fetch packets from the OS layer does not seem to improve the situation. In fact the solution proposed by Amardeep performs worse than the previous scheme.
Fetching several packets with one call would be nice to try too; does anyone know how to do that?
That is probably not the cause of any perceived performance issue you are trying to address. Without seeing your code, I'll take a guess that what you're doing is:
1. recv a packet
2. Process a packet
3. repeat
What you should be doing is:
1. use one thread to recv packets with a pool of buffers and shove the received buffer pointers into a queue.
2. use another slightly lower priority thread to pull items from the queue and process them.
This would greatly improve your system's ability to digest the data and miss fewer datagrams.
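As an illustration of that split (not the poster's code; the socket setup is omitted, and a real version would draw from a preallocated buffer pool rather than allocating per datagram, as the answer suggests):

#include <winsock2.h>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>
#pragma comment(lib, "ws2_32.lib")

struct Datagram { std::vector<char> data; };

static std::queue<Datagram> g_queue;
static std::mutex g_mutex;
static std::condition_variable g_cv;

// Thread 1: drain the socket as fast as possible; only copy and enqueue.
void receiverThread(SOCKET s) {
    char buf[65536];
    for (;;) {
        int n = recv(s, buf, sizeof(buf), 0);
        if (n <= 0) break;                             // error or shutdown
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            g_queue.push(Datagram{std::vector<char>(buf, buf + n)});
        }
        g_cv.notify_one();
    }
}

// Thread 2 (slightly lower priority): decompress/assemble blocks without ever blocking the receiver.
void processorThread() {
    for (;;) {
        std::unique_lock<std::mutex> lock(g_mutex);
        g_cv.wait(lock, [] { return !g_queue.empty(); });
        Datagram d = std::move(g_queue.front());
        g_queue.pop();
        lock.unlock();
        // ... identify the sending client from d.data and process the block ...
    }
}

The two threads would be started with something like std::thread t1(receiverThread, sock); std::thread t2(processorThread); the point is simply that the receive loop never waits on the processing work.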