Read from socket less than is available to read - c++

I cannot find the answer to this one: what will happen if I read 4 bytes from a socket (I set the limit to 4 bytes) when there are actually 256 bytes waiting to be read? Will the remaining bytes be lost, or will they wait until the next call of the read function?

If it's a TCP socket, then no data will get lost; it'll get queued up.
Bear in mind that you have to be prepared to deal with partial reads, i.e. where you get fewer bytes than requested and have to call read() again to get more.
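For example, here is a minimal sketch of a blocking-read helper that loops until it has exactly the requested number of bytes (readFully is a hypothetical name, assuming a blocking POSIX stream socket):

#include <unistd.h>   // read()
#include <cerrno>
#include <cstddef>

// Read exactly 'len' bytes into 'buf', looping over partial reads.
// Returns true on success, false on end-of-stream or error.
bool readFully(int fd, char* buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, buf + got, len - got);
        if (n > 0)
            got += static_cast<size_t>(n);  // partial read: keep going
        else if (n == 0)
            return false;                   // peer closed the connection
        else if (errno != EINTR)
            return false;                   // real error (EINTR: just retry)
    }
    return true;
}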

It depends on what kind of socket you use. If it is a stream socket (created with SOCK_STREAM), it carries a stream of data, and you can read it even one byte at a time (though that would be inefficient); conversely, you may request 1024 bytes but get only 1. It is almost irrelevant in what portions the sender wrote the data into the stream (there is some correlation, but you should not rely on it). So with a stream you need to mark the end of the data in a higher-level protocol: you might send strings ending in \n, use zero-terminated strings, or send a few bytes giving the size of the coming data before the data itself.
On the other hand, if you use a datagram socket (created with SOCK_DGRAM), you get the data packet by packet, in whatever sizes the sender sent them. If you provide a buffer smaller than the data available, the data will be truncated and the remainder discarded.
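To illustrate the datagram case, a short sketch (sock and the 256-byte datagram are assumptions): recv() on a SOCK_DGRAM socket returns at most one datagram per call, and whatever does not fit in your buffer is gone:

#include <sys/socket.h>

// Assume 'sock' is a bound SOCK_DGRAM socket and the peer just sent
// a single 256-byte datagram.
char small[64];
ssize_t n = recv(sock, small, sizeof small, 0);
// n == 64: only the first 64 bytes arrive here; the remaining 192
// bytes are discarded, not queued for the next recv(). (On Linux,
// passing MSG_TRUNC makes recv() return the datagram's real length,
// so the truncation can at least be detected.)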

Related

Use socket I/O in C++ for a dynamic number of bytes

I'm trying to implement a simple client-server application where a client or the server can send a dynamic number of bytes in a single write() call.
For example, let's assume that the client sends a byte stream of 1500 bytes, and the server reads 1000 bytes at a time.
const int BUFFER_SIZE = 1000;
...
ssize_t nRead = read( iSockFD, cBuffer, BUFFER_SIZE );
I can use a loop and call read until its return value is 0. But the client may have multiple write() calls in a loop (i.e. sending multiple messages).
My question is, will it affect the read() on the server side? Meaning, will two consecutive write() of 1000 bytes in the client side, be read by a single read() with 2000 bytes buffer size at the server side?
If that's the case, what are the recommended ways of implementing such a scenario? Should I use a separator for messages (using an encoding algorithm)?
I understand this relates more to sockets themselves than to C++, but your help and guidance are highly appreciated.
UPDATE:
The intention is to implement a simple middleware system to send different types of messages, where the messages will be encoded in binary before sending.
Nobody can guarantee you that a write(x) will trigger a read(x) at the receiver side. If x is larger than your socket receive buffer, or if you call read() before the entire message has been received in the socket receive buffer, then read() will only return a fraction of the data and require you to issue a subsequent read() to get the rest.
The recommended way of doing this would be to define a message buffer of sufficient size. Every call to read() will return 1 or more bytes, which you keep appending to the buffer. Once the buffer holds at least 4 bytes plus the length stored in its first 4 bytes (decoded with be32toh()), you consume that length field plus the following x bytes from the beginning of the buffer (and process them further). This lets you cleanly handle the case where a read() contains the end of a previously unfinished packet and at the same time the beginning of the next (incomplete) packet.
Just make sure that every payload you transmit is prepended by a htobe32(length).
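A rough sketch of that receive-side buffering, assuming the 4-byte big-endian length prefix described above (pending, parseMessages, and handleMessage are hypothetical names; endian.h is the glibc header providing be32toh):

#include <endian.h>    // be32toh (glibc)
#include <cstdint>
#include <cstring>
#include <vector>

void handleMessage(const char* data, uint32_t len);  // hypothetical callback

std::vector<char> pending;  // bytes received but not yet consumed

// Append each recv() result to 'pending', then call this to extract
// every complete length-prefixed message currently in the buffer.
void parseMessages()
{
    for (;;) {
        if (pending.size() < 4)
            break;                               // length prefix incomplete
        uint32_t len;
        std::memcpy(&len, pending.data(), 4);
        len = be32toh(len);
        if (pending.size() < 4 + len)
            break;                               // payload incomplete
        handleMessage(pending.data() + 4, len);
        pending.erase(pending.begin(), pending.begin() + 4 + len);
    }
}

The loop naturally handles a read() that ends one packet and begins the next: the leftover bytes simply stay in the buffer until the rest arrives.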

Does WSASend send all WSABUF buffers as a single packet?

The title probably explains itself best.
Anyway, I have a data buffer received from another source, and I want to send it in a single UDP packet that contains a sequence number (as the first byte) -> I want to add the sequence number to the given buffer!
Instead of allocating a new buffer, setting its size to size+4, setting the sequence number in the first bytes and copying the data into the buffer, I would like to just use the scatter/gather mechanism of WSA.
Sadly, though, no WSA document specifies explicitly that WSASend guarantees all buffers will be sent as a single packet (the packet size will be kept under 1500 bytes).
Can I be certain that it will work that way? Or should I rebuild the packet?
Best,
Daniel
It is documented in a round-about way:
For message-oriented sockets, do not exceed the maximum message size of the underlying provider, which can be obtained by getting the value of socket option SO_MAX_MSG_SIZE. If the data is too long to pass atomically through the underlying protocol the error WSAEMSGSIZE is returned, and no data is transmitted.
So clearly it combines the data from the buffers to make a single UDP packet. If it didn't, there would be no point in returning the WSAEMSGSIZE error.
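For example, a sketch of the scatter/gather call (sock, seq, data, and dataLen are assumed from the surrounding code; the 4-byte header matches the size+4 layout from the question):

#include <winsock2.h>
#include <cstring>

// Two buffers sent as one UDP datagram: 4-byte sequence number + payload.
char seqHeader[4];
std::memcpy(seqHeader, &seq, sizeof seqHeader);  // byte order is your protocol's choice

WSABUF bufs[2];
bufs[0].buf = seqHeader;
bufs[0].len = sizeof seqHeader;
bufs[1].buf = data;       // the buffer received from the other source
bufs[1].len = dataLen;

DWORD sent = 0;
int rc = WSASend(sock, bufs, 2, &sent, 0, NULL, NULL);
// Either both buffers go out in a single datagram, or WSASend fails
// with WSAEMSGSIZE if the combined size exceeds SO_MAX_MSG_SIZE.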

How can I use boost::asio for a TCP protocol without a header which tells me the message's size?

The Boost chat server example demonstrates handling a simple TCP message protocol in which each message is preceded by a fixed-size header which tells you the size of the message which follows. This means you always know exactly how many bytes to read in your next call to async_read(); you alternate between reading a header whose size is always the same, and a message whose size is given in the header. This works well with the Boost i/o service model, which promises to call a handler when exactly the expected number of bytes have been received from the socket.
How can I use Boost to run a TCP protocol which doesn't use a header like this? My client has a protocol which uses special byte sequences to represent the start and end of each message, so I won't know how many bytes to read in each call to async_read(); I have to just get bytes from the socket as they arrive and watch for the special byte sequences. If I pick a sensible buffer size like 256 bytes, and if my handler will only be called when that many bytes have been read, I believe the i/o service will generally end up receiving the last few bytes of the most recent message from the network, but not passing them to my handler until the next message comes along and brings the byte total up to the number I'm expecting. The next message may not arrive for some time, and I want to handle the current message as soon as it arrives.
Reading one byte at a time isn't a good idea for performance reasons, correct?
http://www.boost.org/doc/libs/1_45_0/doc/html/boost_asio/examples.html
There are a few options:

1. You can use async_read_until to read until your "ending sequence" (i.e., until the end of the message); see the sketch after this list.
2. If your "ending sequence" depends on the "starting sequence", you can first read a fixed-size buffer (equal to the starting sequence's length), work out the ending sequence, and then set up async_read_until.
3. Alternatively, call async_read_some to read however many bytes have already arrived in the socket buffer, then check the buffer with your own function to see whether it contains a complete packet or you need to read further.
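Here is a sketch of the first option, assuming boost::asio with the string "END" as the delimiter (Session, socket_, buffer_, and handleMessage are hypothetical names):

#include <boost/asio.hpp>
#include <istream>
#include <string>
#include <utility>

struct Session {
    boost::asio::ip::tcp::socket socket_;
    boost::asio::streambuf buffer_;

    explicit Session(boost::asio::ip::tcp::socket s) : socket_(std::move(s)) {}

    void readMessage()
    {
        boost::asio::async_read_until(socket_, buffer_, "END",
            [this](const boost::system::error_code& ec, std::size_t n)
            {
                if (!ec) {
                    // 'n' counts the bytes up to and including "END";
                    // buffer_ may already hold part of the next message.
                    std::istream is(&buffer_);
                    std::string msg(n, '\0');
                    is.read(&msg[0], n);
                    handleMessage(msg);  // hypothetical handler
                    readMessage();       // wait for the next message
                }
            });
    }

    void handleMessage(const std::string& msg);  // hypothetical
};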

TCP sockets: Where does incoming data go after ACK (leaves TCP read buffer) but before read()/recv()?

If I have a TCP connection that transfers data at 200 KB/sec but I only read()/recv() from the socket once a second, where are those 200 KB of data stored in the meanwhile?
As far as I know, data leaves the TCP socket's read buffer after an ACK gets sent to the sender, and that buffer is too small anyway to hold 200 KB of data. So where does the data wait until it can be read()/recv()'d by my client?
Thanks!!
The following answer claims data leaves the TCP read buffer as soon as it is ACK'ed, before being read()/recv()d:
https://stackoverflow.com/a/12934115/2378033
"The size of the receiver's socket receive buffer determines how much data can be in flight without acknowledgement"
Could it be that my assumption is wrong and the data gets ACK'd only after it is read()/recv()d by the userspace program?
data leaves the TCP socket's read buffer after an ack gets sent to the sender
No. It leaves the receive buffer when you read it, via recv(), recvfrom(), read(), etc.
The following answer claims data leaves the TCP read buffer as soon as it is ACK'ed
Fiddlesticks. I wrote it, and it positively and absolutely doesn't 'claim' any such thing.
You are thinking of the send buffer. Data is removed from the sender's send buffer when it is ACKed by the receiver. That's because the sender now knows it has arrived and doesn't need it for any more resends.
Could it be that my assumption is wrong and the data gets ACK'd only after it is read()/recv()d by the userspace program?
Yes, your assumption is wrong, and so is this alternative speculation. The data gets ACK'd on arrival, and removed by read()/recv().
When data is correctly received it enters the TCP read buffer and is subject to acknowledgement immediately. That doesn't mean that the acknowledgement is sent immediately, as it will be more efficient to combine the acknowledgement with a window size update, or with data being sent over the connection in the other direction, or acknowledgement of more data.
For example, suppose you are sending one byte at a time, corresponding to a user's typing, and the other side has a receive buffer of 50000 bytes. It tells you that the window size is 50000 bytes, meaning that you can send that many bytes of data without receiving anything further. Every byte of data you send closes the window by one byte.

Now the receiver could send a packet acknowledging the single byte as soon as it was correctly received and entered the TCP receive buffer, with a window size of 49999 bytes, because that is how much space is left in the receive buffer. The acknowledgement would allow you to remove the byte from your send buffer, since you now know that the byte was received correctly and will not need to be resent. Then, when the application read it from the TCP receive buffer using read() or recv(), that would make space in the buffer for one additional byte of data to be received, so the receiver could then send another packet updating the TCP window size by one byte, to allow you to once again send 50000 bytes rather than 49999. Then the application might echo the character or send some other response to the data, causing a third packet to be sent.

Fortunately, a well-designed TCP implementation will not do that, as it would create a lot of overhead. It will ideally send a single packet containing any data going in the other direction as well as any acknowledgement and window size update as part of the same packet. It might appear that the acknowledgement is sent when the application reads the data and it leaves the receive buffer, but that may simply be the event that triggered the sending of the packet. However, it will not always delay an acknowledgement, and will not delay it indefinitely; after a short timeout with no other activity it will send any delayed acknowledgement.
As for the size of the receive buffer, which contains the received data not yet read by the application, that can be controlled using setsockopt() with the SO_RCVBUF option. The default may vary by OS, memory size, and other parameters. For example a fast connection with high latency (e.g. satellite) may warrant larger buffers, although that will increase memory use. There is also a send buffer (SO_SNDBUF) which includes data that has either not yet been transmitted, or has been transmitted but not yet acknowledged.
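A minimal sketch of that call (the 256 KB figure is purely illustrative, and 'sock' is assumed):

#include <sys/socket.h>

// Ask for a 256 KB receive buffer; the kernel may round or cap the
// value (Linux, for instance, doubles it to cover bookkeeping overhead,
// and system-wide limits apply). Setting it before connect()/listen()
// matters on some systems because the TCP window scale is negotiated
// when the connection is established.
int rcvbuf = 256 * 1024;
setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf);

// Read back what was actually granted.
socklen_t optlen = sizeof rcvbuf;
getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &optlen);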
Your OS will buffer a certain amount of incoming TCP data. For example on Solaris this defaults to 56K but can be reasonably configured for up to several MB if heavy bursts are expected. Linux appears to default to much smaller values, but you can see instructions on this web page for increasing those defaults: http://www.cyberciti.biz/faq/linux-tcp-tuning/

Receiving all data sent with C sockets

If I write a server, how can I implement the receive function to get all the data sent by a specific client if I don't know how that client sends the data?
I am using a TCP/IP protocol.
If you really have no protocol defined, then all you can do is accept groups of bytes from the client as they arrive. Without a defined protocol, there is no way to know that you have received "all the bytes" that the client sent, since there is always the possibility that a network failure occurred somewhere between the client and your server during transmission, causing the last part of the stream not to arrive at the server. In that case, you would get the usual end-of-stream indication from the TCP socket (e.g. recv() returning 0 after a graceful close, or -1 with an error such as ECONNRESET after a failure), so you would know that you aren't going to receive any more data from the client (because the TCP connection is now disconnected)... but that isn't quite the same thing as knowing you have received all of the data the client meant for you to receive.
Depending on your application, that might be good enough. If not, then you'll have to work out a protocol and trust that your clients will abide by its rules. Having the client first send a header saying how many bytes it plans to send is a good approach; having it send some special "okay, that's all I meant to send" indicator is also possible (although if you do it that way, you have to watch out for false positives, where the special indicator appears by chance inside the data itself).
One call to send() does not equal one call to recv(). Either send a header so the receiver knows how much data to expect, or send some sort of sentinel value so that the receiver knows when to stop reading.
It depends on how you want to design your protocol.
ASCII protocols usually use a special character to delimit the end of the data, while binary protocols usually send the length of the data first as a fixed-size integer (both sides know this size) and then the variable-length data follows.
You can combine the size with your data in one buffer and call send() once. People usually use the first 2 bytes for the size of the data in a packet, like this:
|size N (2 bytes) | data (N bytes) |
In this case the packet can carry up to 65535 bytes of custom data.
Since TCP does not preserve message boundaries, it doesn't matter how many times you call send(). On the receiving side, keep calling receive until you have the 2-byte size N, then keep calling receive until you have the N bytes of data that were sent.
UPDATE: This is just a sample to show how to check message boundary in TCP. Security/Encryption is a whole different story and it deserves a new thread. That said, do not simply copy this design. :)
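A send-side sketch of that layout, assuming both peers agree the 2-byte size is in network byte order and the payload fits in 65535 bytes (sendMessage is a hypothetical name):

#include <arpa/inet.h>  // htons
#include <sys/socket.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Build |size N (2 bytes)|data (N bytes)| in one buffer and send once.
void sendMessage(int sock, const char* data, uint16_t n)
{
    std::vector<char> packet(2 + n);
    uint16_t be = htons(n);                  // network byte order
    std::memcpy(packet.data(), &be, 2);
    std::memcpy(packet.data() + 2, data, n);
    // NB: TCP may still deliver this in pieces; the receiver loops
    // as described above until it has the full 2 + N bytes.
    send(sock, packet.data(), packet.size(), 0);
}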
TCP is stream-based, so there is no concept of a "complete message": it is given by a higher-level protocol (e.g. HTTP) or you'd have to invent it yourself. If you were free to use UDP (datagram-based), there would be no need for multiple send() or receive() calls per message.
The newer SCTP protocol also supports the concept of a message natively.
With TCP, to implement messages, you have to tell the receiver the size of the message. It can be the first few bytes (commonly 2, since that allows messages up to 64K -- but you have to be careful of byte order if you may be communicating between different systems), or it can be something more complicated. HTTP, for example, has a whole set of rules by which the receiver determines the length of the message. One of them is the Content-Length HTTP header, which contains a string representing the number of bytes in the body of the message. Header-only HTTP messages are simply delimited by a blank line. As you can see, there are no easy (or standard) answers.
TCP is a stream based protocol. As such there is no concept of length of data built into TCP in the same way as there is no concept of data length for keyboard input.
It is therefore up to the higher level protocol to specify the end of the message. This can be done by including the packet length in the protocol or specifying a special end-of-message byte sequence.
For example, HTTP headers are terminated by a double \r\n sequence, and the length of the message body can be obtained from the Content-Length header.