I am trying to send a message over a TCP socket at a regular interval (every second). Sometimes the full message will not be sent, or two to four messages will be stacked and sent at once. I have if statements for the cases where the return value is 0 or < 0, but those are never true. I tried the obvious approach of checking the exact return value of send() to see whether fewer or more bytes were sent. It just returns the number that I specify in the parameter to send (which makes sense if send blocks until it sends that much), even if fewer bytes are actually sent. So is there an accurate way to say "was the right size packet sent? no? - do something"?
TCP provides a reliable stream of bytes; there are no message boundaries. If you need to know the length of a message, you have to build that into the protocol, e.g., prefix every message with a 2-byte header that specifies the message length.
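For example, a minimal sketch of the sending side under that scheme (POSIX sockets; send_framed and the 65535-byte limit are illustrative, not a definitive implementation):

#include <arpa/inet.h>   /* htons */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* Illustrative helper: prefix each message with its length in a
   2-byte network-order header, then send header + payload together. */
int send_framed(int sock, const char *msg, uint16_t len)
{
    char buf[2 + 65535];              /* header + max payload */
    uint16_t nlen = htons(len);       /* length in network byte order */
    memcpy(buf, &nlen, 2);
    memcpy(buf + 2, msg, len);
    /* NOTE: send() may still accept fewer bytes than requested;
       a production version would loop until 2 + len bytes are sent. */
    return send(sock, buf, (size_t)(2 + len), 0);
}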
There's no such facility with TCP. It's up to the in-kernel network stack how to slice the TCP stream into packets. Having said that, you can set the TCP_NODELAY option on your socket to disable the Nagle algorithm.
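Something like this, assuming sock is an already-created TCP socket (POSIX shown; on Winsock the option value is a BOOL passed as a const char *):

#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */
#include <sys/socket.h>

void disable_nagle(int sock)
{
    int flag = 1;
    /* Disable the Nagle algorithm so small writes are not coalesced. */
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0)
        perror("setsockopt(TCP_NODELAY)");
}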
If I am understanding you right, sometimes you send two or more packets and they are received as one on the distant end.
This is the nature of TCP/IP. You cannot guarantee the packets will arrive as distinct, just that they will arrive in order and reliably.
Not sure what platform you are using or what syntax you are using (streams, FILE objects, or file descriptors; some code would clarify this), but you may need to do an explicit flush operation after you write each message to force the data out of any user-space buffer. I generally use C-style FILE streams, and it is usually sufficient to call fflush on the stream to make whatever I've queued up go out immediately.
I have a non-blocking winsock socket that is recv'ing data in a loop.
I noticed that when connecting with, say, PuTTY and a raw socket, sending messages works just fine. However, when interfacing with this particular client, the packets seem not to be triggering a successful, non-MSG_PEEK call to recv. I recall having a similar issue a few years back; it turned out the client had to terminate its packets with \r or something similar, which isn't possible in this case since I cannot modify the client.
Wireshark shows the packets coming through just fine; my server program, however, isn't working quite right.
How would I fix this?
EDIT: Turning the buffer size down to, say, 8 resulted in a few successful calls to recv without MSG_PEEK.
Recv call:
iLen = recv(group->clpClients[cell]->_sock,   // I normally call without MSG_PEEK
            group->clpClients[cell]->_cBuff,
            CAPS_CLIENT_BUFFER_SIZE, MSG_PEEK);
if (iLen != SOCKET_ERROR)
{
    ...
Socket is AF_INET, SOCK_STREAM and IPPROTO_TCP.
Use setsockopt to set TCP_NODELAY to TRUE.
Microsoft's documentation states in several places that MSG_PEEK should be avoided altogether because it is inefficient and inaccurate. Use select(), WSAAsyncSelect(), or WSAEventSelect() instead to detect when a socket has data available for reading, then call recv() or WSARecv() to actually read it.
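A rough sketch of that pattern (Winsock; the buffer size and function name are made up for illustration):

#include <winsock2.h>

void read_when_ready(SOCKET sock)
{
    fd_set readfds;
    struct timeval tv = { 1, 0 };        /* wait up to 1 second */

    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);

    /* The first parameter is ignored by Winsock's select(). */
    if (select(0, &readfds, NULL, NULL, &tv) > 0 && FD_ISSET(sock, &readfds)) {
        char buf[4096];
        int iLen = recv(sock, buf, sizeof(buf), 0);   /* no MSG_PEEK */
        if (iLen > 0) {
            /* process iLen bytes from buf */
        }
    }
}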
A TCP socket is a stream of bytes; it does not preserve your application's message boundaries. As soon as the kernel has something to give you, it returns from the poll. You have to collect received bytes until you have enough to decode whatever you need to decode.
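Something along these lines, assuming for illustration a fixed message size (MSG_LEN here is a stand-in for whatever your protocol defines):

#include <sys/socket.h>

#define MSG_LEN 128   /* illustrative fixed message size */

/* Sketch: call repeatedly; returns 1 when a full message is in 'acc'.
   'have' tracks how many bytes have been collected so far. */
int collect(int sock, char *acc, int *have)
{
    int n = recv(sock, acc + *have, MSG_LEN - *have, 0);
    if (n <= 0)
        return n;                  /* connection closed or error */
    *have += n;
    if (*have == MSG_LEN) {
        *have = 0;                 /* reset for the next message */
        return 1;                  /* a complete message is ready */
    }
    return 0;                      /* still waiting for more bytes */
}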
The solution ended up being implementation-specific: I knew the lengths of all packets coming from the client were divisible by a certain number of bytes. So I just read that number of bytes at a time until the buffer was empty.
The chunk size you read at a time in this situation must evenly divide every possible message length (i.e., be a common divisor, at most the GCD, of those lengths), so that no read ever straddles a message boundary.
This is far from a permanent solution, but it works for now.
I have a client application using Winsock's sendto() to send data over UDP to a server application. In my client application I make, say, 5 quick sendto() calls. In my server application I wait, say, 10 seconds, and then do a select() and recvfrom(). Will recvfrom() give me the first packet sent by the client, or will it be an arbitrary one (whichever arrived first)? Will I still be able to get the other 4 data packets, or does Winsock's UDP stack only buffer one?
Will the recvfrom() give me the first packet sent by the client or will it be an arbitrary one
Since UDP doesn't handle reordering, you can get the messages in any order. You could also get fewer than 5 messages, or even more if some are duplicated (though that's rare today).
UDP gives no guarantee about the ordering of received packets, so the first packet you recvfrom() might be the first packet you sent, but it doesn't have to be; that's what TCP is for (it guarantees ordering of received data). You might also not receive some of the packets (or any, for that matter) at all if they were lost in transit.
For the second part: generally, the operating system will buffer a certain number of packets for you; this depends on the socket buffer set up for UDP sockets. The buffer is specific to each socket and not shared between them. On Windows, I'm not sure how to get at the size of the buffer; on Linux, check out "/proc/sys/net/ipv4/udp_mem". Generally, you'll easily fit five UDP packets in there.
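As an aside, the per-socket receive buffer size can be queried on either platform with getsockopt(SO_RCVBUF); a sketch (POSIX types shown; on Windows the value pointer is a char * and the length an int):

#include <stdio.h>
#include <sys/socket.h>

void print_rcvbuf(int sock)
{
    int rcvbuf = 0;
    socklen_t optlen = sizeof(rcvbuf);
    /* Ask the kernel how many bytes it will buffer for this socket. */
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &optlen) == 0)
        printf("receive buffer: %d bytes\n", rcvbuf);
}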
With 5 packets of reasonable size, you will probably get all of the packets and you will probably get the first one sent first. But they might be out of order, might not arrive, and might not contain the original data if they do arrive. You have to handle all of that yourself with UDP. (But depending on your application and requirements and the stability of your network, it might not be a real problem; it is certainly possible for certain situations to exist where receiving 99% of the data is perfectly fine).
If I write a server, how can I implement the receive function to get all the data sent by a specific client if I don't know how that client sends the data?
I am using a TCP/IP protocol.
If you really have no protocol defined, then all you can do is accept groups of bytes from the client as they arrive. Without a defined protocol, there is no way to know that you have received "all the bytes" that the client sent, since there is always the possibility that a network failure occurred somewhere between the client and your server during transmission, causing the last part of the stream not to arrive at the server. In that case, you would get the usual end-of-stream indication from the TCP socket (e.g. recv() returning 0, or -1 with errno set to something like ECONNRESET), so you would know that you aren't going to receive any more data from the client (because the TCP connection is now disconnected)... but that isn't quite the same thing as knowing you have received all of the data the client meant for you to receive.
Depending on your application, that might be good enough. If not, then you'll have to work out a protocol and trust that your clients will abide by its rules. Having the client first send a header saying how many bytes it plans to send is a good approach; having it send a special "okay, that's all I meant to send" indicator is also possible (although if you do it that way, you have to watch out for false positives, since the special indicator could appear by chance inside the data itself).
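With the header-first approach, the receiver typically loops until the promised byte count has arrived; a minimal sketch, assuming a blocking POSIX socket (recv_all is an illustrative name):

#include <sys/socket.h>

/* Keep calling recv() until exactly 'len' bytes have arrived,
   or the peer disconnects / an error occurs. */
int recv_all(int sock, char *buf, int len)
{
    int got = 0;
    while (got < len) {
        int n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)
            return n;   /* 0 = connection closed, <0 = error */
        got += n;
    }
    return got;
}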
One call to send does not equal one call to recv. Either send a header so the receiver knows how much data to expect, or send some sort of sentinel value so the receiver knows when to stop reading.
It depends on how you want to design your protocol.
ASCII protocols usually use a special character to delimit the end of the data, while binary protocols usually send the length of the data first as a fixed-size integer (both sides know this size) and then the variable-length data follows.
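For the delimiter style, a sketch that reads until an assumed '\n' terminator (byte-at-a-time for clarity, not efficiency; read_line is an illustrative name):

#include <sys/socket.h>

/* Read one delimiter-terminated message from a TCP stream.
   A real implementation would buffer its reads for efficiency. */
int read_line(int sock, char *line, int max)
{
    int i = 0;
    while (i < max - 1) {
        char c;
        int n = recv(sock, &c, 1, 0);
        if (n <= 0)
            return n;          /* connection closed or error */
        if (c == '\n')
            break;             /* end-of-message delimiter */
        line[i++] = c;
    }
    line[i] = '\0';
    return i;
}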
You can combine the size and your data in one buffer and call send once. People usually use the first 2 bytes for the size of the data in a packet, like this:
| size N (2 bytes) | data (N bytes) |
In this case, your custom data can be up to 65535 bytes long.
Since TCP does not preserve message boundaries, it doesn't matter how many times you call send. You have to call recv until you have the 2-byte size N, then keep calling recv until you have all N bytes of the data that was sent.
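A rough sketch of that receive loop (POSIX, blocking socket assumed; read_n and recv_packet are illustrative names):

#include <arpa/inet.h>   /* ntohs */
#include <stdint.h>
#include <sys/socket.h>

/* Loop until exactly 'len' bytes have been received. */
static int read_n(int sock, char *buf, int len)
{
    int got = 0;
    while (got < len) {
        int r = recv(sock, buf + got, len - got, 0);
        if (r <= 0)
            return r;            /* peer closed or error */
        got += r;
    }
    return got;
}

/* Read the 2-byte size header, then exactly N bytes of payload. */
int recv_packet(int sock, char *data, int max)
{
    uint16_t hdr;
    if (read_n(sock, (char *)&hdr, 2) <= 0)
        return -1;
    int n = ntohs(hdr);          /* header is in network byte order */
    if (n > max)
        return -1;               /* payload larger than caller's buffer */
    return read_n(sock, data, n);
}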
UPDATE: This is just a sample to show how to handle message boundaries in TCP. Security/encryption is a whole different story and deserves its own thread. That said, do not simply copy this design. :)
TCP is stream-based, so there is no built-in concept of a "complete message": it's provided by a higher-level protocol (e.g. HTTP) or you'd have to invent one yourself. If you were free to use UDP (datagram-based), message boundaries would be preserved: one send() corresponds to one recv().
The newer SCTP protocol also supports the concept of a message natively.
With TCP, to implement messages, you have to tell the receiver the size of the message. It can be the first few bytes (commonly 2, since that allows messages up to 64K -- but you have to be careful of byte order if you may be communicating between different systems), or it can be something more complicated. HTTP, for example, has a whole set of rules by which the receiver determines the length of the message. One of them is the Content-Length HTTP header, which contains a string representing the number of bytes in the body of the message. Header-only HTTP messages are simply delimited by a blank line. As you can see, there are no easy (or standard) answers.
TCP is a stream based protocol. As such there is no concept of length of data built into TCP in the same way as there is no concept of data length for keyboard input.
It is therefore up to the higher level protocol to specify the end of the message. This can be done by including the packet length in the protocol or specifying a special end-of-message byte sequence.
For example, HTTP headers are terminated by a double \r\n sequence, and the length of the message body can be obtained from the Content-Length header.
I have a problem: when I'm trying to send huge amounts of data through POSIX sockets (it doesn't matter whether it's files or some other data), at some point I don't receive what I expect. I used Wireshark to determine what's causing the errors, and found out that exactly at the point where my app breaks there are packets marked red, saying "zero window" or "window full", sent in both directions.
The result is, that the application layer does not get a piece of data sent by send() function. It gets the next part though...
Am I doing something wrong?
EDIT:
Let's say I want to send 19232 pieces of data, 1024 bytes each. At some random point (or not at all), instead of the 9344th packet I get the 9345th. I didn't implement any retransmission protocol because I thought TCP does that for me.
Zero Window / Window Full is an indication that one end of the TCP connection cannot receive any more data until its client application reads some of the data it has already received. In other words, it is one side of the connection telling the other side "do not send any more data until I tell you otherwise".
TCP does handle retransmissions. Your problem is likely that:
The application on the receiving side is not reading data fast enough.
This causes the receiving TCP to report Window Full to the sending TCP.
This in turn causes send() on the sending TCP side to return either 0 (no bytes written), or -1 with errno set to EWOULDBLOCK.
Your sending application is NOT detecting this case, and is assuming that send() sent all the data you asked to send.
This causes the data to get lost. You need to fix the sending side so that it handles send() failing, including returning a value smaller than the number of bytes you asked it to send. If the socket is non-blocking, this means waiting until select() tells you that the socket is writeable before trying again.
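A sketch of such a sending loop for a non-blocking POSIX socket (send_all is an illustrative name):

#include <errno.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Keep sending until the whole buffer is out; when the socket's send
   buffer is full (EWOULDBLOCK), wait for writability with select(). */
int send_all(int sock, const char *buf, int len)
{
    int sent = 0;
    while (sent < len) {
        int n = send(sock, buf + sent, len - sent, 0);
        if (n > 0) {
            sent += n;
        } else if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(sock, &wfds);
            if (select(sock + 1, NULL, &wfds, NULL, NULL) < 0)
                return -1;       /* select() itself failed */
        } else {
            return -1;           /* real send error */
        }
    }
    return sent;
}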
First of all, TCP is a byte-stream protocol, not a packet-based protocol. Just because you sent a 1024-byte chunk doesn't mean it will be received that way. If you're filling the pipe fast enough to get a zero-window condition (i.e., there is no more room in either the receive buffer or the send buffer), then it's very likely that the receiver code will at some point be able to read far more at one time than the size of your "packet".
If you haven't specifically requested non-blocking sockets, then both send and recv will block with a zero window/window full condition rather than return an error.
If you want to paste in the receiver-side code we can take a look, but from what you've described it sounds very likely that your 9344th read is actually getting more bytes than your packet size. Do you check the value returned from recv?
Does iperf also fail, on your network, to send this number of packets of this size? If not, check how iperf sends that amount of data.
Hm, from what I read on Wikipedia this may be some kind of buffer overflow (receiver reports zero receive window). Just a guess though.
I'm working with UDP sockets in C++ for the first time, and I'm not sure I understand how they work. I know that sendto/recvfrom and send/recv normally return the number of bytes actually sent or received. I've heard this value can be arbitrarily small (but at least 1), and depends on how much data is in the socket's buffer (when reading) or how much free space is left in the buffer (when writing).
If sendto and recvfrom only guarantee that 1 byte will be sent or received at a time, and datagrams can be received out of order, how can any UDP protocol remain coherent? Doesn't this imply that the bytes in a message can be arbitrarily shuffled when I receive them? Is there a way to guarantee that a message gets sent or received all at once?
It's a little stronger than that. UDP delivers a full datagram per receive: the datagram can be arbitrarily small, but a single recvfrom() gives you all the data that was sent in that packet. There's also a size limit: if you want to send a lot of data, you have to break it into datagrams and reassemble them yourself. Delivery is also not guaranteed, so you have to check that everything came through.
But since you can implement all of TCP with UDP, it has to be possible.
Usually, what you do with UDP is send small packets that are discrete.
Metaphorically, think of UDP like sending postcards and TCP like making a phone call. When you send a postcard, you have no guarantee of delivery, so you need to do something like have an acknowledgement come back. With a phone call, you know the connection exists, and you hear the answers right away.
Actually you can send a UDP datagram of 0 bytes length. All that gets sent is the IP and UDP headers. The UDP recvfrom() on the other side will return with a length of 0. Unlike TCP this does not mean that the peer closed the connection because with UDP there is no "connection".
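For instance (a sketch; sock and dest are assumed to be an open UDP socket and a filled-in destination address):

/* Send a datagram with no payload: only the IP and UDP headers go out. */
sendto(sock, NULL, 0, 0, (struct sockaddr *)&dest, sizeof(dest));

/* recvfrom() on the peer returns 0 for this datagram; with UDP that
   does NOT mean the connection was closed (there is none). */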
No. With sendto you send out packets, which can contain as little as a single byte.
If you send 10 bytes in a single sendto call, those 10 bytes get sent in a single packet, which will be received whole, just as you would expect.
Of course, if you decide to send those 10 bytes one by one, each of them with a sendto call, then indeed you send and receive 10 different packets (each one containing 1 byte), and they could be in arbitrary order.
It's similar to sending a book via postal service. You can package the book as a whole into a single box, or tear down every page and send each one as an individual letter. In the first case, the package is bulkier but you receive the book as a single, ordered entity. In the latter, each package is very light, but good luck reading that ;)
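To make the contrast concrete, a sketch (error handling omitted; sock and dest are assumed to be set up already):

#include <string.h>
#include <sys/socket.h>

char msg[] = "0123456789";

/* One call: all 10 bytes travel in one datagram and arrive together
   (or not at all). */
sendto(sock, msg, sizeof(msg) - 1, 0, (struct sockaddr *)&dest, sizeof(dest));

/* Ten calls: ten one-byte datagrams that may arrive in any order,
   be duplicated, or be dropped independently of each other. */
for (int i = 0; i < (int)sizeof(msg) - 1; i++)
    sendto(sock, &msg[i], 1, 0, (struct sockaddr *)&dest, sizeof(dest));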
I have a client program that uses a blocking select (NULL timeout parameter) in a thread dedicated to waiting for incoming data on a UDP socket. Even though it is blocking, the select would sometimes return with an indication that the single read descriptor was "ready". A subsequent recvfrom returned 0.
After some experimentation, I have found that on Windows, at least, sending a UDP packet to a port on a host that's not expecting it can result in a subsequent recvfrom getting 0 bytes. I suspect some kind of rejection notice (likely an ICMP "port unreachable" message) is coming back from the other end. I now use this as a reminder that I've forgotten to start the server process that looks for the client's incoming traffic.
BTW, if I instead "sendto" a valid but unused IP address, then the select does not return a ready status and blocks as expected. I've also found that blocking vs. non-blocking sockets makes no difference.