How to make RakNet more reliable? - c++

Here's the summary: I send a packet from a server to a client running on the same computer. For some reason the packet received is not the same as the packet sent.
Here's the details:
The packet is sent using RakNet with the following call:
rakPeer->Send(&bitStream, MEDIUM_PRIORITY, RELIABLE_ORDERED, 0, UNASSIGNED_RAKNET_GUID, true);
Here are the first 10 bytes of the packet sent by the server:
27,50,39,133,202,135,0,0,0,99 ... 1180 more bytes
Here are the first 10 bytes of the packet as seen by the receiving client (note: 50% of the time it is right; the other half it is this):
27,50,43,40,247,134,255,255,255,99 ... 1180 more bytes
The first byte is ID_TIMESTAMP. Bytes 2-5 contain the timestamp, which I presume RakNet adjusts somehow. Byte 6 is the packet ID, which is clearly changed, as are the following 3 bytes.
My suspicion is that the error is somehow caused by the length of the packet, since smaller packets seem to arrive without any detectable errors. However, I understand RakNet automatically handles packet corruption and internally splits packets that are too large.
Any help is appreciated.

Well, for anyone who has the same issue, here is the solution.
RakNet timestamps are 32-bit or 64-bit depending on your build configuration. In this case I was sending 32-bit timestamps in a 64-bit build. That is a no-no, since RakNet rewrites the bits it thinks are the timestamp to account for the relative time between computers.

Related

Connectx-6 LX scheduled sending only sending 25 packets

We are trying to use send scheduling on a Connectx-6 LX. If we set no timestamps on the packet buffers and manually send each packet at approximately the right time, everything works. However, if we set timestamps in the buffers, then the first 25 packets are sent and received at the expected times, but all subsequent calls to rte_eth_tx_burst return 0. If it's relevant, we are sending a single packet in each burst, with timestamps 125 us apart.
We've tried setting the timestamps to low values and the packets are transmitted correctly and as expected the tx_pp_timestamp_past_errors value is incremented. We also set high values and this worked too with tx_pp_timestamp_future_errors incrementing.
Any ideas where to start debugging this? I couldn't see any API which would give an error code for rte_eth_tx_burst failing.
We're using DPDK 21.08 with the 5.4.3.1 Mellanox driver on Ubuntu 20.04.
It looks like this was caused by not having enough TX descriptors: we were only specifying 64, and increasing it to 1024 fixes the problem.
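The descriptor count is the third argument to the queue setup call. This fragment cannot run outside a DPDK environment and is only a sketch of where the value lives (function names are from the DPDK 21.08 ethdev API):

```cpp
#include <rte_ethdev.h>

// With send scheduling, every queued-but-not-yet-due packet holds a TX
// descriptor until its transmit time arrives, so the ring must cover the
// whole scheduling horizon. 64 was too few; 1024 fixed it for us.
static const uint16_t TX_DESC_COUNT = 1024;   // was 64

int setup_tx_queue(uint16_t port_id, uint16_t queue_id,
                   const struct rte_eth_txconf* txconf) {
    // rte_eth_tx_queue_setup() sizes the TX descriptor ring.
    return rte_eth_tx_queue_setup(port_id, queue_id, TX_DESC_COUNT,
                                  rte_eth_dev_socket_id(port_id), txconf);
}
```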

Winsock udp recv not receiving all data even within a loop

I have some UDP recv code using Winsock, specifically AF_INET, SOCK_DGRAM, and IPPROTO_UDP. The recv call sits in a loop that reads until all data has been received. The problem is that I only ever get partial data. Furthermore, I shouldn't even need the loop, since I can see that all the data exists in a single packet: in Wireshark the real data length is 299 bytes, and Wireshark sees the entire packet, so I know the sender is working correctly.
With my Windows UDP code I only get 8 bytes before my receive method returns. The code appears to stop when it hits a 0x00 byte. Wireshark is able to read past those 0x00 bytes, but Windows recv is not; it seems Windows treats the 0x00 as the end and concludes the packet is only 8 bytes.
I have looked at the header in Wireshark and the length specified is correct, so I'm very confused as to why it's only reading 8 bytes. Thank you in advance for your help.
This is what my receive call looks like for reference:
int ret = recv(socket, buffer, 1500, 0);
ret becomes 8, and buffer, which was allocated as 1500 bytes, is only filled with 8 bytes.
UDP is message-oriented, not stream-oriented like TCP is.
In UDP, the only way that recv() would return 8 bytes is if either:
you set the len parameter to > 8, and a datagram containing exactly 8 bytes is received.
you set the len parameter to == 8, and a datagram containing more than 8 bytes is received.
Given your claim that you are setting len to 1500, the only possibility is that the received datagram contains only 8 bytes, not 299 bytes as you claim.
Unlike in TCP, in UDP you cannot read a datagram in pieces; a read is an all-or-nothing operation (unless you peek the data, which you are not doing). recv()/recvfrom() reads one whole datagram at a time. If the provided buffer is too small to receive the data (which it is not in this case), a WSAEMSGSIZE error is reported and the data is truncated, discarding whatever does not fit in the buffer.
The 0x00 null bytes you describe have no effect whatsoever on recv()/recvfrom()'s ability to read a whole datagram. A datagram carries a payload length, and recv()/recvfrom() will read up to that payload length or the specified buffer size, whichever is smaller, regardless of the payload's content.
So, chances are, in your code, recv() is simply not receiving the datagram you are expecting.
For instance, in order to use recv() in UDP at all, you must connect() the UDP socket first to associate it with a specific peer IP/port, thus allowing recv() to ignore inbound datagrams from other peers (and for send() to send datagrams to a specific peer). Maybe you are connect'ing your UDP socket to the wrong peer, and thus reading datagrams that are meant for something else. Or, maybe you are connect'ing to the correct peer and it really is sending an 8-byte datagram that you are not expecting.
Since you did not provide any context info about your setup, the peer involved, the protocol involved, the Wireshark capture, nothing, there is really no way for anyone here to diagnose your situation with any certainty. But, what I described above would account for the symptom you have described. If you feel that is not the case, then you need to edit your question to provide more details.

Socket not sending entire contents on Linux (ubuntu)

I've encountered an issue when sending large segments of data through a TCP socket. Having spent about 3 days trying to pick the issue apart, and failing, I decided it was best to turn here for help/advice.
My Project
I've written a basic HTTP server which (slightly irrelevant) can run lua scripts to output pages. This all works perfectly fine under Windows (32 bit).
The Problem
When sending medium/large files (anything from roughly 8000 bytes and above appears to have issues) over the TCP socket on Ubuntu Linux (64-bit), they appear to cut out at different lengths (the result displayed in the browser is a value between 8000 and 10200 bytes). When I check the return value of the send function, it is exactly 9926 bytes every time the send ends. No error.
Smaller files send absolutely fine, and there are no issues under Windows. Going on this information I thought it could be a buffer size issue, so I did
cat /proc/sys/net/ipv4/tcp_mem
which outputted 188416 192512 196608
those numbers are far above 9926 so I assume that isn't the problem.
I'm using CSimpleSockets as a socket library, I haven't had any issues before. In case the issue is inside of this library the code I dug around for what the send function used under unix is:
#define SEND(a,b,c,d) send(a, (const int8 *)b, c, d)
send(socket, buffer, bytestosend, 0);
buffer gets cast from a const char * to a const unsigned char * to a const int8 * before being passed to the OS to be sent.
OK, I think that covers everything I checked. If you need any more information or I've missed anything glaringly obvious I'll do my best to provide. Thanks for your help!
Your problem is that send() does not guarantee to send all of the data passed to it.
The kernel has internal buffers that can fill, socket parameters that affect buffer sizes, and so on. You need to note how many bytes were actually sent (the return value) and then call send() again for the remaining data; a blocking send() will wait until buffer space frees up, and with non-blocking sockets you can use select()/poll() before retrying. There is no automatic way to do this, so you'll need to write a bit of logic that advances your buffer by the number of bytes actually sent.
Are you using blocking or non-blocking sockets? If you're using non-blocking sockets, you must (and with blocking sockets, you should) check for a short send (one where the return value is fewer than the number of bytes you meant to send).

Using TCP, why do large blocks of data get transmitted with a lower bandwidth than small blocks of data?

Using 2 PCs with Windows XP, 64 kB TCP window size, connected with a crossover cable
Using Qt 4.5.3, QTcpServer and QTcpSocket
Sending 2000 messages of 40kB takes 2 seconds (40MB/s)
Sending 1 message of 80MB takes 80 seconds (1MB/s)
Does anyone have an explanation for this? I would expect the larger message to go faster, since the lower layers can then fill the TCP packets more efficiently.
This is hard to comment on without seeing your code.
How are you timing this on the sending side? When do you know you're done?
How does the client read the data, does it read into fixed sized buffers and throw the data away or does it somehow know (from the framing) that the "message" is 80MB and try and build up the "message" into a single data buffer to pass up to the application layer?
It's unlikely to be the underlying Windows sockets code that's making this work poorly.
TCP, from the application side, is stream-based which means there are no packets, just a sequence of bytes. The kernel may collect multiple writes to the connection before sending it out and the receiving side may make any amount of the received data available to each "read" call.
TCP, on the IP side, is packets. Since standard Ethernet has an MTU (maximum transmission unit) of 1500 bytes and both TCP and IP have 20-byte headers, each packet transferred over Ethernet will pass 1460 bytes (or less) of the TCP stream to the other side. 40KB or 80MB writes from the application will make no difference here.
How long it appears to take data to transfer will depend on how and where you measure it. Writing 40KB will likely return immediately since that amount of data will simply get dropped in TCP's "send window" inside the kernel. An 80MB write will block waiting for it all to get transferred (well, all but the last 64KB which will fit, pending, in the window).
TCP transfer speed is also affected by the receiver. It has a "receive window" that contains everything received from the peer but not fetched by the application. The amount of space available in this window is passed to the sender with every return ACK so if it's not being emptied quickly enough by the receiving application, the sender will eventually pause. WireShark may provide some insight here.
In the end, both methods should transfer in the same amount of time since an application can easily fill the outgoing window faster than TCP can transfer it no matter how that data is chunked.
I can't speak for the operation of QT, however.
Bug in Qt 4.5.3

UDP packets are dropped when their size is less than 12 bytes on a certain PC. How do I figure out the reason?

I'm stuck on a problem I've never heard of before.
I'm making an online game which uses UDP packets for a certain character action. After I developed the UDP module, it seemed to work fine. Most of our team members have no problem, but one man, who is my boss, told me something is wrong with that module.
I investigated the problem, and finally found that, on his PC, if the UDP packet size is less than 12 bytes, the packet is never delivered to the other host.
the following is some additional information:
UDP packets of 1-11 bytes are dropped; packets of 12 bytes and over are OK.
O/S: Microsoft Windows Vista Business
NIC: Attansic L1 Gigabit Ethernet 10/100/1000Base-T Controller
WSASendTo reports success.
loopback udp packet works fine.
What do you think of this problem, and what do you think causes it?
What should I do next to find the cause?
PS: I don't want to pad all packets up to a length of 12 bytes.
Just to get one of the non-obvious answers in: maybe UDP checksum offload is broken on that card, i.e. the packets are sent, but dropped by the receiver?
You can check for this by looking at the received packets using Wireshark.
If you have already checked firewalls, antivirus software, network firewalls, and network intrusion detection, read this:
For a minimal UDP packet: Ethernet header (14 bytes) + IPv4 header (20 bytes minimum) + UDP header (8 bytes) = 42 bytes.
Since that is below the minimum Ethernet frame size (64 bytes on the wire, i.e. 60 bytes excluding the 4-byte FCS), the network driver pads the frame with 60 - 42 = 18 zero bytes to bring it up to 60 bytes before sending the packet out.
That padding happens below the IP layer, so small UDP payloads are perfectly legal.
Theoretically you can even send a packet with 0 data bytes, though I haven't tried it.
As for your issue, it must be an OS or driver problem. Check your network driver's manual or check with the manufacturer, because this isn't supposed to happen.
REF: http://www.freesoft.org/CIE/Course/Section4/8.htm
REF: http://en.wikipedia.org/wiki/User_Datagram_Protocol
Run Wireshark on his PC AND on the destination PC.
Does the log show the UDP packet leaving his machine? Does it show it arriving at the destination PC?
What kind of router hardware or switches are between his PC and the destination? Can you remove them and link the two with a crossover cable (or replace the destination with a laptop and link that to his PC with a crossover cable)?
Have you removed, or at least listed, all antivirus and firewall products on his machine, and anything that installs a Winsock LSP?
Do ALL packets of 12 bytes or less get dropped, or just some? Can you generate packets with random content, to see whether it's something in the content, rather than just the size, that's causing the issue?
Assuming your problem is with sending from his PC: First, run a packet sniffer on the problematic PC to see if it arrives at the NIC. If it makes it there, there may be a problem in the NIC or NIC driver.
Next, check for any running firewall software. Try disabling it and see what happens.
If that doesn't work, clear out any Winsock Layered Service Providers with netsh winsock reset catalog.
If that doesn't work, I'm stumped :)
Finally, you're probably going to find other customers with the same problem, so you might want to consider the padding workaround anyway. Try sending a few small UDP packets on connect; if they consistently fail to get through, enable a padding workaround for that host. For hosts where the probe packets make it through, you don't need to pad.
Pure conjecture: RTP, which is a very common packet to send on UDP, defines a 12 byte header. I wonder if some layer of network software is assuming that anything smaller is a malformed RTP packet and throwing it away?