I am using Berkeley sockets and TCP (SOCK_STREAM sockets).
The process is:
I connect to a remote address.
I send a message to it.
I receive a message from it.
Imagine I am using the following buffer:
char recv_buffer[3000];
recv(socket, recv_buffer, 3000, 0);
Questions are:
How can I know whether the read buffer is empty or not after calling recv the first time? If it's not empty I would have to call recv again, but if I do that when it's empty, recv would block for a long time.
How can I know how many bytes I have read into recv_buffer? I can't use strlen because the message I receive can contain null bytes.
Thanks.
How can I know whether the read buffer is empty or not after calling recv the first time? If it's not empty I would have to call recv again, but if I do that when it's empty, recv would block for a long time.
You can use the select or poll system calls along with your socket descriptor to tell if there is data waiting to be read from the socket.
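For example, here is a minimal sketch using poll() to ask whether the socket is readable before calling recv; the 1000 ms timeout is an arbitrary value for illustration, and socket / recv_buffer are the names from the question:

#include <poll.h>
#include <sys/socket.h>

struct pollfd pfd;
pfd.fd = socket;        // the connected socket descriptor
pfd.events = POLLIN;    // we only care about readability

int ready = poll(&pfd, 1, 1000);   // wait up to 1000 ms
if (ready > 0 && (pfd.revents & POLLIN))
{
    // data (or EOF) is waiting, so this recv will not block
    ssize_t n = recv(socket, recv_buffer, sizeof recv_buffer, 0);
}
else if (ready == 0)
{
    // timed out: nothing to read right now
}
else
{
    // ready < 0: poll itself failed, check errno
}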
However, usually there should be an agreed-upon protocol that both sender and receiver follow, so that both parties know how much data is to be transferred. For example, perhaps the sender first sends a 2-byte integer indicating the number of bytes it will send. The receiver then first reads this 2-byte integer, so that it knows how many more bytes to read from the socket.
Regardless, as Tony points out below, a robust application should combine length information in the header with polling the socket for additional data before each call to recv (or using a non-blocking socket). This prevents your application from blocking when, for example, you know (from the header) that there should still be 100 bytes remaining to read, but the peer fails to send the data for whatever reason (perhaps the peer machine was unexpectedly shut off), causing your recv call to block.
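As a sketch of that kind of framing (assuming the hypothetical 2-byte length prefix mentioned above is sent in network byte order; recv_exact is a helper name invented here for illustration):

#include <arpa/inet.h>    // ntohs
#include <sys/socket.h>
#include <cstdint>
#include <cstddef>

// Hypothetical helper: keep calling recv until exactly len bytes
// have arrived, or return false on error / orderly shutdown.
bool recv_exact(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len)
    {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0)
            return false;   // error (-1) or peer closed (0)
        got += (size_t)n;
    }
    return true;
}

// Usage: first the 2-byte length prefix, then the body itself.
uint16_t len_net;
if (recv_exact(socket, (char *)&len_net, sizeof len_net))
{
    uint16_t len = ntohs(len_net);
    if (len <= sizeof recv_buffer && recv_exact(socket, recv_buffer, len))
    {
        // recv_buffer[0 .. len-1] now holds one complete message
    }
}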
How can I know how many bytes I have read into recv_buffer? I can't use strlen because the message I receive can contain null bytes.
The recv system call will return the number of bytes read, or -1 if an error occurred.
From the man page for recv(2):
[recv] returns the number of bytes received, or -1 if an error occurred. The return value will be 0 when the peer has performed an orderly shutdown.
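So a minimal check of the return value (no framing, just the three cases), using the socket and recv_buffer names from the question, might look like:

ssize_t n = recv(socket, recv_buffer, sizeof recv_buffer, 0);
if (n > 0)
{
    // n bytes are now in recv_buffer[0 .. n-1]; null bytes are fine,
    // because we use n rather than strlen to know how much arrived
}
else if (n == 0)
{
    // the peer performed an orderly shutdown
}
else
{
    // n == -1: an error occurred, inspect errno
}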
How can I know whether the read buffer is empty or not after calling recv the first time?
Even the first time (after accepting a client), recv can block, and it can fail if the client connection has been lost. You must do one of the following:
use select or poll (BSD sockets) or some OS-specific equivalent, which can tell you whether there is data available on specific socket descriptors (as well as exception conditions, and buffer space you can write more output to)
set the socket to be non-blocking, so that recv only returns whatever is immediately available (possibly nothing); see the sketch after this list
create a thread that you can afford to have block in recv, knowing other threads will be doing the other work you need to continue with
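A sketch of the non-blocking option mentioned above, using fcntl to set O_NONBLOCK (error handling trimmed; socket and recv_buffer are the names from the question):

#include <fcntl.h>
#include <sys/socket.h>
#include <cerrno>

// switch the socket into non-blocking mode
int flags = fcntl(socket, F_GETFL, 0);
fcntl(socket, F_SETFL, flags | O_NONBLOCK);

ssize_t n = recv(socket, recv_buffer, sizeof recv_buffer, 0);
if (n > 0)
{
    // got n bytes
}
else if (n == 0)
{
    // peer closed the connection
}
else if (errno == EAGAIN || errno == EWOULDBLOCK)
{
    // nothing available right now; recv returned immediately
}
else
{
    // some other error
}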
How can I know how many bytes I have read into recv_buffer? I can't use strlen because the message I receive can contain null bytes.
recv() returns the number of bytes read, or -1 on error.
Note that TCP is a byte stream protocol, which means that you're only guaranteed to be able to read and write bytes from it in the correct order, but the message boundaries are not guaranteed to be preserved. So, even if the sender has made a large single write to their socket, it can be fragmented en route and arrive in several smaller blocks, or several smaller send()/write()s can be consolidated and retrieved by one recv()/read().
For that reason, make sure you loop calling recv until you either get all the data you need (i.e. a complete logical message you can process) or an error. You should be prepared and able to handle getting part or all of subsequent sends from your client (if you don't have a protocol where each side only sends after getting a complete message from the other, and are not using headers with message lengths). Note that doing separate recvs for the message header (with the length) and then the body can result in a lot more calls to recv(), with a potential adverse effect on performance.
These reliability issues are often ignored. They manifest less often on a single host, or on a reliable and fast LAN with fewer routers and switches involved and fewer or non-concurrent messages, then break under load and over more complex networks.
If recv() returns fewer than 3000 bytes, then you can assume that the read buffer has been emptied. If it returns 3000 bytes in your 3000-byte buffer, then you'd better know whether to continue. Most protocols include some variation on TLV - type, length, value. Each message contains an indicator of the type of message, some length (possibly implied by the type if the length is fixed), and the value. If, on reading through the data you did receive, you find that the last unit is incomplete, you can assume there is more to be read. You can also make the socket into a non-blocking socket; then recv() will fail with EAGAIN or EWOULDBLOCK if there is no data ready for reading.
The recv() function returns the number of bytes read.
ioctl() with the FIONREAD option tells you how much data can currently be read without blocking.
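For example, on a POSIX system (error checking omitted, using the socket descriptor from the question):

#include <sys/ioctl.h>

int pending = 0;
if (ioctl(socket, FIONREAD, &pending) == 0)
{
    // pending is the number of bytes recv can return right now without
    // blocking; it says nothing about how much data is still in flight
}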
Related
According to https://www.boost.org/doc/libs/1_73_0/doc/html/boost_asio/reference/basic_stream_socket/write_some/overload1.html,
The function call (write_some) will block until one or more bytes of the data has been written successfully, or until an error occurs.
Here's the function:
template <typename ConstBufferSequence>
std::size_t write_some(
    const ConstBufferSequence & buffers);
As we see, a reference to the buffer is passed, which means that the implementation of write_some must consume the buffer immediately and entirely. It cannot borrow the buffer for itself to write (to a file descriptor) later.
However, the explanation on the page suggests that it does exactly that: once the first byte is written it can return and continue to write the remaining bytes. How is that possible? The buffer the reference points to may be destroyed after the call.
basic_stream_socket::write_some is the equivalent of the Berkeley socket send() (or write()) function.
Normally send() blocks until all bytes have been sent. But in rare cases it can be interrupted by a signal handler, or by the SO_SNDTIMEO timeout, at a moment when only part of the data has been transmitted. In that case send returns the number of bytes sent (one or more), and one should advance in the buffer and send the remaining bytes later.
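A sketch of that "advance in the buffer" pattern with plain send(); send_all is a helper name invented here, not part of boost.asio:

#include <sys/socket.h>
#include <cerrno>
#include <cstddef>

// Hypothetical helper: keep calling send() until the whole buffer
// has been handed to the kernel, advancing past any partial send.
bool send_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len)
    {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n < 0)
        {
            if (errno == EINTR)
                continue;    // interrupted by a signal, just retry
            return false;    // real error
        }
        sent += (size_t)n;   // advance past what was actually sent
    }
    return true;
}

The boost.asio counterpart of this loop is the boost::asio::write() free function, which keeps calling write_some until the whole buffer has been written.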
Calls such as send() and sendto() in the Winsock API take a primitive int to dictate the size of their buffer parameters. This obviously places a 32-bit limit on the maximum size buffer that may be sent.
Why is this? Is there a 64-bit Winsock2 API available that might use a more appropriate size type (e.g. size_t)?
On Linux similar calls use the size_t type for defining sizes.
There is no need for a function with a larger size type; actually the type could be short and it still wouldn't be a problem.
Sockets don't send messages, they just transfer bytes. When you call send() the data may not get received in one chunk at the recv() call. You have to implement logic when receiving bytes to know whether you got it all or not, and call recv() again if not. So if you want to send something larger than can be placed in an int? Just make multiple calls to send(). If your recv() code can't handle that, it's a bug, because it should.
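As a sketch of that chunking (Winsock flavour; send_large is a name invented here), capping each individual send() at INT_MAX so a buffer measured with size_t can still be sent in full:

#include <winsock2.h>
#include <climits>
#include <cstddef>

// Hypothetical helper: send a buffer whose size exceeds what a single
// int-sized send() call can describe, one chunk at a time.
bool send_large(SOCKET s, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len)
    {
        size_t remaining = len - sent;
        int chunk = remaining > (size_t)INT_MAX ? INT_MAX : (int)remaining;
        int n = send(s, buf + sent, chunk, 0);
        if (n == SOCKET_ERROR)
            return false;          // inspect WSAGetLastError()
        sent += (size_t)n;         // a partial send is fine, just advance
    }
    return true;
}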
I found this answer on how to set a timeout for a POSIX socket. The Linux part of that answer:
// LINUX
struct timeval tv;
tv.tv_sec = timeout_in_seconds;
tv.tv_usec = 0;
setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, (const char*)&tv, sizeof tv);
and the quote from the POSIX documentation:
SO_RCVTIMEO
Sets the timeout value that specifies the maximum amount of time an input function waits until it completes. It accepts a timeval structure with the number of seconds and microseconds specifying the limit on how long to wait for an input operation to complete. If a receive operation has blocked for this much time without receiving additional data, it shall return with a partial count or errno set to [EAGAIN] or [EWOULDBLOCK] if no data is received. The default for this option is zero, which indicates that a receive operation shall not time out. This option takes a timeval structure. Note that not all implementations allow this option to be set.
What I don't understand is: can this cause losing UDP packets?
What if the timeout is reached while a UDP packet is being received?
Also related: setting timeout for recv fcn of a UDP socket
PS: I am aware that UDP is inherently unreliable, so my question is really mainly about the case where the timeout occurs while a UDP message is being processed.
No; it doesn't make you more likely to drop packets.
Looking at how network transport happens at a lower level: you have a network card. As this card receives data, irrespective of what your program is doing, it stores the data into its own memory area. When you call recv, you're asking the OS to move data from the network card's memory to your program's memory. This means that if a packet comes in while your thread is doing something else, it isn't just dropped, but is processed the next time your thread comes to get data.
If your thread doesn't call recv often enough, then the memory for the network card will become full. When this happens no new packets can be stored; if it's TCP, the peer will be told (via TCP's flow control) that the receiver can't accept more for now; if it's UDP, the packet will simply be dropped. It is this part that makes UDP inherently unreliable, as it can happen at any point during the transport of the packet.
The timeout affects how long the thread will wait for data to appear in that memory area; unless you never call recv again, it has no impact on whether packets are dropped or not.
The answer is no, losing UDP data would be in violation of POSIX:
The recv() function shall return the length of the message written to the buffer pointed to by the buffer argument. For message-based sockets, such as SOCK_DGRAM and SOCK_SEQPACKET, the entire message shall be read in a single operation.
Presumably, the "partial count" only happens with a connection-oriented socket when the MSG_WAITALL option is used.
That being said, the use of SO_RCVTIMEO is generally frowned upon, and the "proper" way to implement timeouts on sockets is by using nonblocking sockets and select(). This is for historical reasons, and not because setting a timeout is somehow inherently bad design or something. If you insist on using SO_RCVTIMEO, be aware of potential portability problems:
POSIX mentions SO_RCVTIMEO, but does not require it
On Windows, a timeout in recv() will put the socket in a bad state and you should close it immediately afterwards. On POSIX you can (in my experience) still use the socket after a timeout caused by SO_RCVTIMEO, but one could argue this is not 100% guaranteed by the spec
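A sketch of that select()-based alternative; the 5-second timeout and the buffer size are arbitrary values for illustration, and sockfd is the descriptor from the snippet above:

#include <sys/select.h>
#include <sys/socket.h>

char buf[1500];

fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sockfd, &readfds);

struct timeval tv;
tv.tv_sec = 5;             // arbitrary 5-second timeout
tv.tv_usec = 0;

int ready = select(sockfd + 1, &readfds, NULL, NULL, &tv);
if (ready > 0)
{
    // the socket is readable: this recv will not block
    ssize_t n = recv(sockfd, buf, sizeof buf, 0);
}
else if (ready == 0)
{
    // timed out without any data arriving; the socket is still usable
}
else
{
    // select failed, check errno
}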
I'm trying to make a simple TCP server according to this lesson. Everything works fine, and now I'm trying to encapsulate functions in the Socket class. I'm trying to make a method that checks the amount of available bytes to read, and I can't find the necessary function. It could be some kind of ftell() or another method.
You should be respecting the communication protocol you are implementing on top of TCP. It dictates how you should be reading data and how you should be determining the sizes of the data to be read. If the protocol expects an integer, read an integer. If the protocol expects a length-preceded string, read the length value and then read how many bytes it specifies. If the protocol expects a delimited string, read until you encounter the delimiter. And so on.
That being said, to answer the actual question you asked - "[how to] check the amount of available bytes to read":
on Windows, you can use the ioctlsocket() function, where the cmd parameter is set to FIONREAD and the argp parameter is set to a pointer to a u_long variable.
u_long bytesAvailable;
ioctlsocket(socket, FIONREAD, &bytesAvailable);
on *Nix, you can use the ioctl() function, where the request parameter is set to FIONREAD and the third parameter is set to a pointer to an int variable.
int bytesAvailable;
ioctl(socket, FIONREAD, &bytesAvailable);
In either case, the output variable receives the number of unread bytes currently waiting on the socket. It is guaranteed that you can read at most this many bytes from the socket using recv() without blocking the calling thread (if the socket is running in a blocking mode). The socket may receive more bytes between the time you query FIONREAD and the time you call recv(), so if you try to read more than FIONREAD indicates, recv() may or may not have to block the calling thread waiting for more bytes.
Can you elaborate on why you would need to know that?
The preferred way to read data from a socket is to first make sure it has data you can read, via a call to select, and then read into a predefined array whose size is the maximum size that you and the client respect. read will then report how many bytes it has read.
I might add that you should also select a socket before writing data to it.
Finally, you might want to look up circular buffers, as they offer a way to append data to a buffer without risking filling up your RAM. You would then read the data from the circular buffer once you have enough data.
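A minimal fixed-capacity circular buffer sketch along those lines (the class name and interface are illustrative, not from any particular library):

#include <cstddef>
#include <vector>

// Very small ring buffer: append() stores bytes while there is room,
// read() consumes from the front. Capacity is fixed up front, so RAM
// use is bounded no matter how much the peer sends.
class RingBuffer
{
public:
    explicit RingBuffer(size_t capacity) : data_(capacity) {}

    // returns how many bytes were actually stored (may be < len if full)
    size_t append(const char *src, size_t len)
    {
        size_t stored = 0;
        while (stored < len && size_ < data_.size())
        {
            data_[(head_ + size_) % data_.size()] = src[stored++];
            ++size_;
        }
        return stored;
    }

    // returns how many bytes were copied out (may be < len if empty)
    size_t read(char *dst, size_t len)
    {
        size_t copied = 0;
        while (copied < len && size_ > 0)
        {
            dst[copied++] = data_[head_];
            head_ = (head_ + 1) % data_.size();
            --size_;
        }
        return copied;
    }

    size_t size() const { return size_; }

private:
    std::vector<char> data_;
    size_t head_ = 0;   // index of the oldest stored byte
    size_t size_ = 0;   // number of bytes currently stored
};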
As far as I know, it is not recommended to check how many bytes are available to be read on a socket.
I am developing a client server application (TCP) in Linux using C++. I want to send more than 65,000 bytes at the same time. In TCP, the maximum packet size is 65,535 bytes only.
How can I send the entire bytes without loss?
Following is my code at server side.
//Receive the message from client socket
if((iByteCount = recv(GetSocketId(), buffer, MAXRECV, MSG_WAITALL)) > 0)
{
    printf("\n Received bytes %d\n", iByteCount);
    SetReceivedMessage(buffer);
    return LS_RESULT_OK;
}
If I use MSG_WAITALL it takes a long time to receive the bytes, so how can I set the flag to receive more than 1 million bytes at a time?
Edit: The MTU size is 1500 bytes, but the absolute limitation on TCP packet size is 65,535.
Judging from the comments above, it seems you don't understand how recv works, or how it is supposed to be used.
You really want to call recv in a loop, until either you know that the expected amount of data has been received or until you get a "zero bytes read" result, which means the other end has closed the connection. Always, no exceptions.
If you need to do other things concurrently (likely, with a server process!) then you will probably want to check descriptor readiness with poll or epoll first. That lets you multiplex sockets as they become ready.
The reason why you want to do it that way, and never any differently, is that you don't know how the data will be split into packets and how (or when) those packets will arrive. Plus, recv gives no guarantee about the amount of data read at a time. It will offer what it has in its buffers at the time you call it, no more and no less (it may block if there's nothing, but then you still have no guarantee that any particular amount of data will be returned when it resumes; it may still return e.g. 50 bytes!).
Even if you only send, say, 5,000 bytes total, it is perfectly valid behaviour for TCP to break this into 5 (or 10, or 20) packets, and for recv to return 500 (or 100, or 20, or 1) bytes at a time, every time you call it. That's just how it works.
TCP guarantees that anything you send will eventually arrive at the other end or produce an error. And, it guarantees that whatever you send arrives in order. It does not guarantee much else. Above all, it does not guarantee that any particular amount of data is ready at any given time.
You must be prepared for that, and the only way to do it is calling recv repeatedly. Otherwise you will always lose data under some circumstances.
MSG_WAITALL should in principle make it work the way you expect, but that is bad behaviour, and it is not guaranteed to work. If the socket (or some other structure in the network stack) runs against a soft or hard limit, it may not, and probably will not fulfill your request. Some limits are obscure, too. For example, the number for SO_RCVBUF must be twice as large as what you expect to receive under Linux, because of implementation details.
Correct behaviour of a server application should never depend on assumptions such as "it fits into the receive buffer". Your application needs to be prepared, in principle, to receive terabytes of data using a 1 kilobyte receive buffer, and in chunks of 1 byte at a time, if need be. A larger receive buffer will make it more efficient, but that's it... it still has to work either way.
The fact that you only see failures upwards of some "huge" limit is just luck (or rather, bad luck). The fact that it apparently "works fine" up to that limit suggests what you do is correct, but it isn't. It's an unlucky coincidence that it works.
EDIT:
As requested in below comment, here is what this could look like (Code is obviously untested, caveat emptor.)
std::vector<char> result;
ssize_t size;                 // recv() returns ssize_t, not int
char recv_buf[250];

for(;;)
{
    if((size = recv(fd, recv_buf, sizeof(recv_buf), 0)) > 0)
    {
        // append whatever arrived this time to the accumulated result
        result.insert(result.end(), recv_buf, recv_buf + size);
    }
    else if(size == 0)
    {
        // orderly shutdown by the peer
        // (expected_size is a size_t your protocol told you to expect)
        if(result.size() < expected_size)
        {
            printf("premature close, expected %zu, only got %zu\n",
                   expected_size, result.size());
        }
        else
        {
            do_something_with(result);
        }
        break;
    }
    else
    {
        perror("recv");
        exit(1);
    }
}
That will receive any amount of data you want (or until operator new throws bad_alloc after allocating a vector several hundred MiB in size, but that's a different story...).
If you want to handle several connections, you need to add poll or epoll or kqueue or similar functionality (or... fork); I'll leave this as an exercise for the reader.
It is possible that your problem is related to kernel socket buffer sizes. Try adding the following to your code:
int buffsize = 1024*1024;
setsockopt(s, SOL_SOCKET, SO_RCVBUF, &buffsize, sizeof(buffsize));
You might need to increase some sysctl variables too:
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
Note however, that relying on TCP to fill your whole buffer is generally a bad idea. You should rather call recv() multiple times. The only good reason why you would want to receive more than 64K is for improved performance. However, Linux should already have auto-tuning that will progressively increase the buffer sizes as required.
in TCP max packet size is 65,535 bytes
No it isn't. TCP is a byte-stream protocol over segments over IP packets, and the protocol has unlimited transmission sizes over any one connection. Look at all those 100MB downloads: how do you think they work?
Just send and receive the data. You'll get it.
I would suggest exploring kqueue or something similar. With event notification there is no need to loop on recv. Just call a simple read function upon an EV_READ event and use a single call to recv on the socket that triggered the event. Your function can have a buffer size of 10 bytes or however much you want, it doesn't matter, because if you did not read the entire message the first time around you'll just get another EV_READ event on the socket and you re-call your read function. When the peer has closed the connection you'll get an EOF event. No need to hassle with loops that may or may not block other incoming connections.
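A rough sketch of that pattern with the raw kqueue API (BSD/macOS only; the read filter is called EVFILT_READ there). Error handling is trimmed and sockfd is assumed to be an already-connected socket:

#include <sys/event.h>
#include <sys/socket.h>
#include <unistd.h>

int kq = kqueue();

// register interest in "this socket became readable"
struct kevent change;
EV_SET(&change, sockfd, EVFILT_READ, EV_ADD, 0, 0, NULL);
kevent(kq, &change, 1, NULL, 0, NULL);

char buf[10];   // deliberately tiny, per the answer above
for (;;)
{
    struct kevent event;
    int n = kevent(kq, NULL, 0, &event, 1, NULL);   // block until an event fires
    if (n <= 0)
        break;

    // readable: one recv per event, no inner loop; if more data remains,
    // kqueue simply reports another EVFILT_READ event next time around
    ssize_t got = recv(sockfd, buf, sizeof buf, 0);
    if (got > 0)
    {
        // process got bytes from buf ...
    }
    else
    {
        // 0 = peer closed (EV_EOF will also be set in event.flags), -1 = error
        break;
    }
}
close(kq);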