I have a Qt GUI application that uses QTcpSocket to send and receive TCP packets to and from a server. So far I've had success making the TCP socket connections (there are two separate socket connections because there are two different message sets: same IP address for both, but two different port numbers) and sending and receiving packets. Most of the messages that my application sends are kicked off via push-button on the GUI's main window (one message is sent periodically using a QTimer that expires every 1667ms).
The server has a FIFO (128 messages deep) and sends a specific message to my application that communicates when the FIFO is 1/2 full, 3/4 full, and full. It's tedious to test this functionality by just mashing the send button on the GUI, so I had the idea of loading a .csv file that could be pre-filled (the message has several different configurable parameters) with what I want to send. Each line gets read, turned into a message, and sent on the TCP socket.
From my main window I open up a QFileDialog when a push-button on the GUI is clicked. Then, when a .csv file is navigated to and selected, the function reads the .csv file one line at a time, pulls out all the individual parameters, fills the message with the parameters, and then sends it out on the socket. Each message is 28 bytes. It repeats this until there are no lines left in the .csv file.
What I am noticing on Wireshark is that instead of sending a bunch of individual TCP packets they are all being put together and sent as one large TCP packet.
When I first tested this out I did not know about the LowDelayOption, so when I found the information about it in the documentation for QAbstractSocket I thought "Aha! That must be it! The solution to my problem!" But when I added it to my code it did not seem to have any effect at all; it's still being sent as one large TCP packet. For each socket, I am calling setSocketOption to set the LowDelayOption to 1 in the slot function that receives the connected() signal from the socket. I thought maybe the setSocketOption call wasn't working, so I checked by calling socketOption to get the value of the LowDelayOption, and it's 1.
Is there something else I need to be doing? Am I doing something wrong?
Thanks for your time and your help. If it matters I am developing this on Windows and I am using Qt 5.9.1
... send and receive TCP packets to and from a server.
From this I am getting the vibe that your application relies on a certain amount of data - 'a packet' - being received in a single receive call.
You can't really rely on that. Data you send over TCP can also be fragmented on the way. And at your receiving end, the TCP implementation may put multiple packets received from the network into the receiving socket's buffer before you have read the first one, with no way for you to tell what kind of fragments they were originally sent in.
So you should just treat TCP as a pipe through which bytes of data flow with some unknown and potentially variable delay. That variable delay causes data to be received in bigger or smaller chunks at random.
If you want to have a packet structure, you should add a packet header containing at least the packet length to the data you transmit.
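For example, a minimal sketch of such framing on the sending side (Qt assumed, as in the question; `payload` and `socket` are hypothetical names for one application message and a connected QTcpSocket*):

    // Prepend a 2-byte big-endian length so the receiver can find message
    // boundaries no matter how TCP coalesces or splits the writes.
    QByteArray frame;
    const quint16 len = quint16(payload.size());
    frame.append(char(len >> 8));    // high byte of the length header
    frame.append(char(len & 0xFF));  // low byte
    frame.append(payload);           // the message body itself
    socket->write(frame);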
I hope this helps.
From QTcpSocket documentation:
TCP (Transmission Control Protocol) is a reliable, stream-oriented, connection-oriented transport protocol. It is especially well suited for continuous transmission of data.
Stream-oriented means there is nothing like the datagrams of UDP sockets.
There is only stream of data, and you never know in what parts it will be sent.
The TCP protocol gives you only reliability; you have to provide message extraction on your own, e.g. send the message length before each message, or use QDataStream (check the Fortune server and Fortune client examples).
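A minimal sketch of the receiving side in the style of the Fortune examples (this uses the QDataStream transaction API from Qt 5.7+; the class, slot, and member names are hypothetical):

    void Client::onReadyRead()
    {
        QDataStream in(m_socket);            // m_socket: connected QTcpSocket*
        in.setVersion(QDataStream::Qt_5_9);

        in.startTransaction();
        QString message;
        in >> message;                       // may be incomplete this time
        if (!in.commitTransaction())
            return;                          // not enough bytes yet; the read
                                             // rolls back until the next readyRead
        handleMessage(message);              // a whole message was extracted
    }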
LowDelayOption from QAbstractSocket::SocketOption
Try to optimize the socket for low latency. For a QTcpSocket this would set the TCP_NODELAY option and disable Nagle's algorithm. Set this to 1 to enable.
It is equivalent to calling setsockopt with the TCP_NODELAY option.
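In other words (a sketch; `socket` is assumed to be an existing QTcpSocket* and `s` a plain Winsock SOCKET):

    // The Qt call...
    socket->setSocketOption(QAbstractSocket::LowDelayOption, 1);

    // ...is roughly the same as:
    int flag = 1;
    setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
               reinterpret_cast<const char*>(&flag), sizeof(flag));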
First thing is:
The TCP_NODELAY option is specific to TCP/IP service providers.
And it doesn't work for me either :)
MSDN says that disabling Nagle's algorithm is not recommended:
It is highly recommended that TCP/IP service providers enable the Nagle Algorithm by default, and for the vast majority of application protocols the Nagle Algorithm can deliver significant performance enhancements. However, for some applications this algorithm can impede performance, and TCP_NODELAY can be used to turn it off. These are applications where many small messages are sent, and the time delays between the messages are maintained. Application writers should not set TCP_NODELAY unless the impact of doing so is well-understood and desired because setting TCP_NODELAY can have a significant negative impact on network and application performance.
The question is: Do you really need to send your messages as fast as possible?
If yes, consider using QUdpSocket. Maybe tell us more about the messages you are sending.
Related
I've been using Boost.Asio sockets (UDP and TCP) to handle a custom protocol between my client-server program. It's been working great until I discovered that with TCP async_send/async_receive calls, data can arrive in combined chunks.
For example, if I make two send calls, each with its own packet, they can arrive combined at a single receive call. I wrongly assumed that every send corresponds to a receive, but I'm obviously wrong. It has, however, worked well for the longest time, until I found the issue running the client on a different OS.
So my question is: are there any guarantees to the completeness of the data on arrival for every receive call? (e.g. does an async_send of 128 bytes arrive in multiples of 128 bytes, or must the arrival always be treated as random, e.g. 1 byte arrives and then 127 bytes?)
More specifically, does this mean that:
Data can arrive concatenated or partial for every send call, and I have to always handle the concatenated/partial data manually
Is this true for both UDP and TCP asio sockets?
I searched around and couldn't find any documentation on this so I was wondering if anyone have any idea.
First, it's important to understand that the Boost.Asio socket receive and send methods just mean that you ordered the underlying network stack to receive or send data. The network stack could be, for example, the Windows socket API.
If you are sending data to the very same computer, via so-called loopback addresses, the operating system (if there is any) can just "give" it to the listening, i.e. receiving, program. That's the scenario where you would be most lucky to get things in order and always complete.
However, if you are addressing another computer, or if the operating system is in the mood, you will see different behaviour:
TCP was designed so that you get your data in the order you sent it, but the chunk or packet size in which it is sent differs, even on the same connection; that is a key feature of TCP. Your OS or hardware network adapter might also do some send or receive buffering before informing you. However, things won't get lost.
So, in short, for TCP: you can make sure the data is complete by waiting for a certain point in your data; async_read_until is there for exactly this case. Data from multiple send calls might be in one receive, or in many.
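A minimal sketch of that (assuming Boost.Asio, a connected tcp::socket, and newline-delimited messages; the function name is hypothetical):

    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>

    void read_one_message(boost::asio::ip::tcp::socket& sock,
                          boost::asio::streambuf& buf)
    {
        // Completes only once the delimiter is in `buf`, regardless of how
        // the bytes were split into TCP segments on the wire.
        boost::asio::async_read_until(sock, buf, '\n',
            [&buf](const boost::system::error_code& ec, std::size_t /*n*/)
            {
                if (ec) return;           // connection error or EOF
                std::istream is(&buf);    // `buf` may also hold bytes of the
                std::string line;         // next message; those stay buffered
                std::getline(is, line);
                std::cout << "message: " << line << '\n';
            });
    }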
UDP was designed to have low latency, in contrast to TCP, but without its ordering and completeness guarantees. So when you send a UDP datagram, i.e. packet, the OS and network adapter will usually try to send it out ASAP. However, on the way to the other computer, the internet might lose it, or hold one packet back while letting a later one through, so data you sent later can be received earlier, and data you sent first can arrive later or not at all. But when you receive a datagram, it's complete in itself.
So, in short, for UDP: data arrives in whole datagram chunks, but some datagrams might be missing or arrive in a different order than sent. The data from one send will be in one receive, arrive later than expected, or never arrive at all.
So after some more testing, here's what I concluded: the answer is no. Boost.Asio sockets do not have magic that can enforce data completeness beyond what the TCP/UDP protocols enforce.
Edit:
So here's more of my research:
For TCP, it acts like a data stream: packets may arrive partial or combined, but the byte stream itself arrives complete and in order. So the user application needs to handle deserialization of combined or partial data.
For UDP, because it is a datagram protocol, if a packet arrives it is guaranteed to be independent and complete. So there is no need to handle partial or combined packets.
I am working on connecting an embedded circuit board to a PC via TCP.
The board contains a chip which, sadly, doesn't generate any interrupt on receiving data. But it does generate an interrupt on receiving a "Keep-Alive" signal.
Currently I have to poll for data.
Instead, I am thinking that I will send data from the PC and then a keep-alive signal. Whenever a keep-alive is received, I will read the data too.
I do understand that this might generate false alarms but it's better than continuous polling.
I observed a keep-alive packet in Wireshark; it has one byte of data, and it is "00".
And then I tried to send a TCP packet with its data set to "00":
I can see that only the flags section is different.
I got Two questions:
(Broadly) How to manually send a Keep-Alive Signal?
How to change that flag setting? (Flags in send and sendto are different)
Update:
I have tried raw sockets, but that didn't help me, or I missed something. I just changed the flag to ACK in the raw socket's TCP header.
RFC 1122 section 4.2.3.6 might be worth reading.
It states that keepalive is an optional feature of the TCP implementation. It also states that keepalive signals should be limited to at most one every two hours. So manually emitting one from your application isn't a desired feature in general.
Furthermore, it describes details about the implementation, in particular pointing out the sequence number involved. This is one difference visible in your screenshots which you apparently failed to notice: the real keepalive packet has a very high relative sequence number, which is simply the unsigned representation of -1. To reproduce this with raw sockets, I think you'd have to somehow get your hands on the current TCP sequence number of the existing connection. I haven't worked enough with raw sockets to know the details of how to do this.
The supported means to have the system send keepalives periodically is using the SO_KEEPALIVE option. But that won't be of much use to emit such a signal at a specific moment in time, I think.
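For completeness, a minimal sketch of enabling it (Winsock assumed, matching the question; `s` is an existing connected SOCKET):

    // Ask the stack to send keep-alive probes on this connection. On Windows
    // the probe timing can also be tuned per socket via WSAIoctl with
    // SIO_KEEPALIVE_VALS; by default the first probe goes out after two hours.
    BOOL enable = TRUE;
    if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE,
                   reinterpret_cast<const char*>(&enable), sizeof(enable)) != 0)
    {
        // handle WSAGetLastError()
    }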
Using Winsock, C++, I send and receive the data with send()/recv(), TCP connection. I want to be sure that the data has been delivered to the other party, and wonder if it is recommended to send back some acknowledgment message after (if) receiving data with recv.
Here are two possibilities, and please advise which way to go:
If send returns the size of passed buffer, assume that the data has been delivered at least to recv function on the other side of wire. When I say "at least", I mean even if the recv fails there (e.g. due to insufficient buffer, etc.), I don't care, I just want to be sure I've done my server part of work properly - I've sent the data completely (i.e. the data reached the other machine).
Use additional acknowledgment: after receiving the data with recv, send back some ID of the received packet (part of the header of each data packet sent) signaling the successful receive operation of that packet. If I don't receive such an "acknowledgment message" after some interval, return a failure code from the sender function.
The second way looks safer, but I don't want to complicate the transfer protocol if it is redundant. Also please note that I'm talking about a TCP connection (which is safer by itself than UDP).
Are there any other mechanisms (maybe some other APIs? maybe WSARecv()/WSASend() work differently?) for ensuring that the data was delivered to the recv function on the other side?
If you recommend the second way, could you please give me some code snippet that allows me to use recv with a timeout to receive the acknowledgment? recv is a blocking operation, so it will hang forever if the previous send attempt failed (the other party was not notified). Is there any simple way of using recv with a timeout (without creating a separate thread every time, which would probably be overkill for each and every send operation)?
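(For reference, a minimal thread-free sketch of such a timeout, polling with select() before recv; Winsock assumed, `s` is the connected socket, and the 5-second value is an arbitrary placeholder:)

    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(s, &readSet);

    timeval tv;
    tv.tv_sec  = 5;     // how long to wait for the acknowledgment
    tv.tv_usec = 0;

    // The first argument to select() is ignored on Windows.
    int r = select(0, &readSet, nullptr, nullptr, &tv);
    if (r == 0)      { /* timed out: no acknowledgment arrived */ }
    else if (r < 0)  { /* socket error: check WSAGetLastError() */ }
    else             { /* data is available: recv() won't block now */ }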
Also, the amount of data I pass to the send function might be quite big (several megabytes), so how do I choose the timeout for the "acknowledgment message"? Maybe I should "split" large buffers and use several send calls? I think it will get quite complicated, please advise!
EDIT: OK, you people are suggesting that the TCP/IP stack will handle it (i.e. no manual acknowledgment required), but this is what I found on the MSDN page: "The successful completion of a send function does not indicate that the data was successfully delivered and received to the recipient. This function only indicates the data was successfully sent." So even if the TCP mechanism has the ability to ensure data delivery, I can't get that status (success or not) via the send() function, or any other Winsock function I know. Do you know any way of getting the status from the TCP layer? Again - the return value of the send() function seems to be not enough!
EDIT 2: OK, I think we agree that even though the TCP protocol provides error handling when something goes wrong, the send() function of Winsock is not capable of reporting the errors (simply because it returns before the actual transmission of data is started by the network driver). So here is a million dollar question: does the send() function of Winsock at least ensure that no other packets will be delivered to the other party until the current packet has been? In other words, if the sending fails because of some network failure (not reported by the send() call), and the network failure is then fixed before the next call of send() with the next chunk of data, is it ensured that the previous packet (which failed, but was not reported by send()) will be delivered before the next packet? In other words, is there a chance that one particular send() call will fail "silently", so that subsequent send() calls succeed but the first packet is lost? AGAIN - I'm not talking at the TCP level, I'm talking at the Winsock API level!
Why don't you trust your TCP/IP stack to guarantee delivery? After all, that is the whole point of using TCP instead of UDP.
The existing answers here are mostly correct: if you use TCP you really don't need to worry about reliable delivery of your packets to your peer.
But this is a dangerous view for some systems where data integrity must be taken to the next level: the Common Criteria auditing requirement FAU_STG.4.1 requires the ability to prevent auditable events if the audit log might suffer a loss of audit entries. (For example, the Linux auditd(8) audit logging daemon can be configured to place the computer in single-user mode or halt the system completely when there is no more space left for audit logs.) Audit logs from remote systems should probably be maintained until it is known that they have been successfully written to centralized log servers.
Financial transactions would probably be best handled with a more reliable protocol than simple TCP as well -- crediting or debiting accounts would be best handled with a multi-staged protocol to ensure availability of funds, perform the transaction, then report the result of the transaction to the origination point.
TCP allows nearly a gigabyte of in-flight data between two peers (under extreme conditions); depending upon the requirements of your application, you might need to maintain that data at the sending side until you receive positive confirmation from your peer that the data has been properly handled.
Thankfully, most applications aren't this critical; losing a megabyte of data here or there down a socket that reports a closed connection at some point "in the future" really isn't horrible -- we just re-try our HTTP request, or re-attempt the SFTP connection.
Update
A socket will only accept enough data to fill its available window. The window size is negotiated between the two peers during the session handshake. So your calls to send() will begin blocking when the socket's window fills. (The OS might keep letting you add data to its internal buffers too, but at some point the writes will block.) If the peer breaks the connection with a RST or ICMP Unreachable message, a future call to send() will return an error value for Connection Reset or Broken Pipe.
Update 2
I'm not talking at the TCP level, I'm talking at the Winsock API level
This might be the source of confusion. send() has no choice but to adhere to the TCP behavior when used with TCP.
TCP guarantees in-order reliable delivery of a stream of bytes, to the extent that packets can be delivered. (See @Hans's comment about a pony and careless people kicking power cords.) The peer program will see bytes in the correct order they were sent. (Well, okay, TCP also has out-of-band urgent packet delivery, but I haven't actually seen any applications that use it. Using OOB packets, you can get some data out-of-line. Forget I mentioned it.)
If the remote program receives a byte sent on a TCP stream, it reliably received all preceding bytes as well. (Well, there are entire classes of replay attacks that splice together legitimate and fake packets for the remote peer, but those are increasingly difficult on systems with randomized initial sequence numbers. If this is within your threat model, you should be using TLS on top of TCP to provide cryptographically strong tamper evident information. But TLS can't provide better per-packet delivery notification.)
If you use UDP and you care about the data actually being received by the other side, you NEED to use ACKs; but if you don't need the speed of UDP, you should use TCP, as it does the ACKing for you.
I think you are over complicating this, trust your TCP/IP software stack and the reliable delivery it offers. TCP sockets operate on streams of data, not packets. Also one call to send does not guarantee one call to recv.
I am using SDL and Net2 lib for a client-server application.
The problem I am facing is that I am not receiving all of my TCP packets from my client unless I place a delay before sending each packet from the client.
Removing the delay I get only one packet.
A TCP connection is a stream of bytes. Your client could send 20 packets of 5 bytes each, and the server read it as one 100-byte sequence. You'll need to split the data up yourself.
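A minimal sketch of that splitting, assuming fixed-size messages (all names here are hypothetical):

    #include <cstddef>
    #include <cstring>
    #include <vector>

    constexpr std::size_t MSG_SIZE = 5;   // assumed application message size
    std::vector<char> pending;            // bytes carried over between reads

    // Feed this whatever each receive call returns; it extracts as many whole
    // messages as are currently buffered and keeps the remainder for later.
    void on_bytes_received(const char* data, std::size_t len)
    {
        pending.insert(pending.end(), data, data + len);
        while (pending.size() >= MSG_SIZE)
        {
            char msg[MSG_SIZE];
            std::memcpy(msg, pending.data(), MSG_SIZE);
            pending.erase(pending.begin(), pending.begin() + MSG_SIZE);
            // handle_message(msg, MSG_SIZE);   // application-specific
        }
    }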
Well, you're not guaranteed (with regular sockets) to receive all packets at one time; you may have to call your receive function more than once to receive all the data. This of course depends on your definition of a "packet" - are you receiving all of your data?
+1 erik
Although it is not guaranteed to be reliable, you most likely want to use UDP, not TCP. Net2 handles UDP very well. UDP is actually very reliable. UDP is message oriented. UDP messages tend to get sent quickly and get special treatment by routers (not always a good thing :-). UDP is often used in games.
BTW, if you had asked this question on the SDL mailing list, or sent it to me directly, you would have gotten this advice many months ago.
I wrote Net2 and I hang out on the SDL list. I do not hang out here because this place is an infinite time sink.
Bob Pendleton
I don't mean how to connect to a socket. What should I know about UDP programming?
Do I need to worry about bad data in my socket?
Should I assume that if I send 200 bytes I may get 120 and 60 bytes separately?
Should I worry about another connection sending me bad data on the same port?
If data doesn't arrive, how long may I typically not see data for (250ms? 1 second? 1.75sec?)
What do I really need to know?
"i should assume if i send 200bytes i
may get 120 and 60bytes separately?"
When you're sending UDP datagrams your read size will equal your write size. This is because UDP is a datagram protocol, vs TCP's stream protocol. However, you can only write data up to the size of the MTU before the packet could be fragmented or dropped by a router. For general internet use, the safe MTU is 576 bytes including headers.
"i should worry about another
connection sending me bad data on the
same port?"
You don't have a connection, you have a port. You will receive any data sent to that port, regardless of where it's from. It's up to you to determine if it's from the right address.
If data doesn't arrive, how long may I typically not see data for (250ms? 1 second? 1.75sec?)
Data can be lost forever, data can be delayed, and data can arrive out of order. If any of those things bother you, use TCP. Writing a reliable protocol on top of UDP is a very non-trivial task, and there is no reason to do so for almost all applications.
Should I worry about another connection sending me bad data on the same port?
Yes, you should worry about it. Any application can send data to your open UDP port at any time. One of the big uses of UDP is many-to-one style communications, where you multiplex communications with several peers on a single port, using the address passed back by recvfrom to differentiate between peers.
However, if you want to avoid this and only accept packets from a single peer, you can actually call connect on your UDP socket (see the sketch below). This causes the IP stack to reject packets coming from any host:port combination (socket) other than the one you want to talk to.
A second advantage of calling connect on your UDP socket is that in many OS's it gives a significant speed / latency improvement. When you call sendto on an unconnected UDP socket the OS actually temporarily connects the socket, sends your data and then disconnects the socket adding significant overhead.
A third advantage of using connected UDP sockets is it allows you to receive ICMP error messages back to your application, such as routing or host unknown due to a crash. If the UDP socket isn't connected the OS won't know where to deliver ICMP error messages from the network to and will silently discard them, potentially leading to your app hanging while waiting for a response from a crashed host ( or waiting for your select to time out ).
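A minimal sketch of that (POSIX-style calls; on Winsock the connect() call works the same way, and the helper name is hypothetical):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Returns a UDP socket bound to one peer: the stack drops datagrams from
    // any other host:port and can route ICMP errors back to this socket.
    int make_connected_udp(const char* peer_ip, unsigned short peer_port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) return -1;

        sockaddr_in peer = {};
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(peer_port);
        inet_pton(AF_INET, peer_ip, &peer.sin_addr);

        if (connect(fd, reinterpret_cast<const sockaddr*>(&peer), sizeof(peer)) != 0)
        {
            close(fd);
            return -1;
        }
        return fd;   // plain send()/recv() now work instead of sendto/recvfrom
    }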
Your packet may not get there.
Your packet may get there twice or even more often.
Your packets may not be in order.
You have a size limitation on your packets imposed by the underlying network layers. The packet size may be quite small (possibly 576 bytes).
None of this says "don't use UDP". However you should be aware of all the above and think about what recovery options you may want to take.
Fragmentation and reassembly happen at the IP level, so you need not worry about that (Wikipedia). (This means that you won't receive split or truncated packets.)
UDP packets have a checksum for the data and the header, so receiving bogus data is unlikely, but possible. Lost or duplicate packets are also possible. You should check your data anyway.
There's no congestion control, so you may wish to consider that, if you plan on clogging the tubes with a lot of UDP packets.
UDP is a connectionless protocol. Sending data over UDP can get to the receiver, but can also get lost during transmission. UDP is ideal for things like broadcasting and streaming audio or video (i.e. a dropped packet is never a problem in those situations.) So if you need to ensure your data gets to the other side, stick with TCP.
UDP has less overhead than TCP and is therefore faster. (TCP needs to build a connection first and also checks data packets for data corruption which takes time.)
Fragmented UDP packets (i.e. packets bigger than about half a kB) will probably be dropped by routers, so split your data into small chunks before sending it over. (In some cases, the OS can take care of that.) Note that it is always a whole packet that makes it, or not; half packets aren't processed.
Latency over long distances can be quite big. If you want to do retransmission of data, I would go with something like 5 to 10 times the average latency over the current connection. (You can measure the latency by sending and receiving a few packets.)
Hope this helps.
I won't follow suit with the other people who answered this; they all seem to push you toward TCP, and that's not for gaming at all, except maybe for login/chat info. Let's go in order:
Do I need to worry about bad data in my socket?
Yes. Even though UDP contains an extremely simple checksum for routers and such, it does not catch every error. You can add your own checksum scheme, but most of the time UDP is used when reliability is already not an issue, so data that doesn't conform should just be dropped.
Should I assume that if I send 200 bytes I may get 120 and 60 bytes separately?
No, UDP is direct data write and read. However, if the data is too large, some routers will truncate it and you lose part of the data permanently. Some have said roughly 576 bytes with headers; I personally wouldn't use more than 256 bytes (a nice round log2 number).
Should I worry about another connection sending me bad data on the same port?
UDP listens for data from any computer on a port, so in that sense, yes. Also note that UDP is primitive, and a raw format can be used to fake the sender, so you should use some sort of "key" so that the listener can verify the sender against their IP.
If data doesn't arrive, how long may I typically not see data for (250ms? 1 second? 1.75sec?)
Data sent over UDP is usually disposable, so if you don't receive data, it can easily be ignored... however, sometimes you want "semi-reliable" without the 'ordered reliable' that TCP gives you; 1 second is a good estimate for declaring a drop. You can number your packets on a rotation and write your own ACK scheme: when a packet is received, the receiver records the number and sends back a bitfield letting the sender know which packets it received. You can read this unfinished document for more information (although unfinished, it still yields valuable info):
http://gafferongames.com/networking-for-game-programmers/
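A minimal sketch of that numbered-packet, ack-bitfield idea (everything here is a hypothetical illustration of the scheme, not code from that article):

    #include <cstdint>

    // Each datagram carries its own sequence number plus acks for the peer:
    // `ack` is the newest sequence number seen from the peer, and bit n of
    // `ack_bits` says whether packet (ack - 1 - n) was also seen, so one
    // header acknowledges up to 33 packets and a lost reply costs little.
    struct PacketHeader
    {
        std::uint16_t seq;       // this packet's sequence number
        std::uint16_t ack;       // newest sequence number received from peer
        std::uint32_t ack_bits;  // redundant acks for the 32 packets before `ack`
    };

    struct AckState
    {
        bool          any_seen  = false;
        std::uint16_t latest    = 0;   // newest seq received so far
        std::uint32_t seen_bits = 0;   // bit n => (latest - 1 - n) received
    };

    void note_received(AckState& st, std::uint16_t seq)
    {
        if (!st.any_seen) { st.any_seen = true; st.latest = seq; return; }
        std::uint16_t ahead = std::uint16_t(seq - st.latest);
        if (ahead != 0 && ahead < 0x8000)      // newer packet (wrap-safe test)
        {
            st.seen_bits = (ahead < 32)        // slide the window forward; the
                ? (st.seen_bits << ahead) | (1u << (ahead - 1))
                : 0;                           // big jump: edge case simplified
            st.latest = seq;
        }
        else if (ahead != 0)                   // an older packet arrived late
        {
            std::uint16_t back = std::uint16_t(st.latest - seq);
            if (back <= 32) st.seen_bits |= (1u << (back - 1));
        }
    }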
The big thing to know when attempting to use UDP is:
Your packets might not all make it over the line, which means there is going to be possible data loss.
If you're working on an application where 100% of the data needs to arrive reliably to provide functionality, use TCP. If you're working on an application where some loss is allowable (streaming media, etc.) then go for UDP but don't expect everything to get from one of the pipe to the other intact.
One way to look at the difference between applications appropriate for UDP vs. TCP is that TCP is good when data delivery is "better late than never", UDP is good when data delivery is "better never than late".
Another aspect is that the stateless, best-effort nature of most UDP-based applications can make scalability a bit easier to achieve. Also note that UDP can be multicast while TCP can't.
In addition to don.neufeld's recommendation to use TCP:
For most applications TCP is easier to implement. If you need to maintain packet boundaries in a TCP stream, a good way is to transmit a two byte header before the data to delimit the messages. The header should contain the message length. At the receiving end just read two bytes and evaluate the value. Then just wait until you have received that many bytes. You then have a complete message and are ready to receive the next 2-byte header.
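A minimal sketch of that receive loop (plain Berkeley-style recv; the helper name is hypothetical). The loop matters because recv() may return fewer bytes than requested:

    #include <cstddef>
    #include <sys/socket.h>   // winsock2.h on Windows
    #include <sys/types.h>

    // Keep calling recv() until exactly `len` bytes have arrived.
    bool recv_exact(int fd, char* buf, std::size_t len)
    {
        std::size_t got = 0;
        while (got < len)
        {
            ssize_t n = recv(fd, buf + got, len - got, 0);
            if (n <= 0) return false;   // error or connection closed
            got += std::size_t(n);
        }
        return true;
    }

    // Usage: recv_exact() the 2-byte header, decode the length, then
    // recv_exact() that many bytes for the message body.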
This gives you some of the benefit of UDP without the hassle of lost data, out-of-order packet arrival etc.
And don't assume that if you send a packet it got there.
If there is a packet size limitation imposed by some router along the way, your UDP packets could be silently truncated to that size.
Two things:
1) You may or may not receive what was sent
2) Whatever you receive may not be in the same order it was sent.