Does a blocking send() return immediately? - c++

I thought that calling send() on a blocking socket does not return until all the data has been sent (that is, until the last chunk of data has been handed to the send buffer), but the following test showed otherwise:
// buffer = "AAAAAAAA...B" (10 MB)
char *buffer = new char[10485760];
memset(buffer, 0x41, 10485760);
buffer[10485758] = 0x42;
buffer[10485759] = '\0';
// Send buffer
send(s, buffer, 10485760, 0);
printf("send() has returned\n");
So basically I connected to Netcat and sent the buffer, and even after send() had returned, AAAAAAAAAAAAAA... was still being displayed on the console at the other end. You can close the sender at any moment and the sending stops (so it is not that the buffer has already arrived at the other end and merely takes a long time to display on the console).
This can only make sense if the send buffer is 10+ MB.
Edit: the return value of send() is 10485760 (i.e. buffer size).

send sends the data to the kernel, where it is placed in a socket buffer. If the kernel runs out of socket buffers, the send will block (or fail, if it is non-blocking).
That has very little to do with the kernel sending data to the network.
However, if you kill a program, all of its sockets are forcibly closed, which will discard any unsent data sitting in kernel buffers.
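If delivery matters, the usual pattern is to shut down the sending side and wait for the peer to close before exiting, so the kernel gets a chance to drain its buffer. A minimal sketch, assuming a connected POSIX socket s (on Windows the equivalents are SD_SEND and closesocket()):
#include <sys/socket.h>   // shutdown(), recv()
#include <unistd.h>       // close()

// Sketch: graceful close so buffered data is not discarded on exit.
// shutdown() says "no more data from me"; the peer's orderly close
// (recv() returning 0) only happens after it has read everything queued.
void close_gracefully(int s)
{
    char tmp[4096];
    shutdown(s, SHUT_WR);                     // stop sending, keep receiving
    while (recv(s, tmp, sizeof(tmp), 0) > 0)  // drain until the peer closes
        ;                                     // recv() returns 0 on orderly close
    close(s);
}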

Related

QTcpSocket data transfer stops when read buffer is full and does not resume when it frees up

I have a server-client Qt application, where the client sends data packets to the server and the server reads them at set time intervals. The client can send data faster than the server reads it, which eventually fills all the memory on the server side. I am using QAbstractSocket::setReadBufferSize(size) to set a maximum read buffer size on the server side, and when it fills up, the data transfer stops and data is buffered on the client side, which is what I want. The problem is that when the server's QTcpSocket internal read buffer frees up (is not full anymore), the data transfer between client and server does not resume.
I've tried using QAbstractSocket::resume(), which seems to work, but the Qt 5.10 documentation says:
Continues data transfer on the socket. This method should only be used
after the socket has been set to pause upon notifications and a
notification has been received. The only notification currently
supported is QSslSocket::sslErrors(). Calling this method if the
socket is not paused results in undefined behavior.
I feel like I should not use that function in this situation, but is there any other solution? How do I know if the socket is paused? Why does the data transfer not continue automatically when QTcpSocket's internal read buffer is no longer full?
EDIT 1 :
I have downloaded the Qt (5.10.0) sources and PDBs to debug this situation, and I can see that the internal function QAbstractSocket::readData() has the line "d->socketEngine->setReadNotificationEnabled(true)", which re-enables data transfer. However, QAbstractSocket::readData() gets called only when the QTcpSocket internal read buffer is empty (qiodevice.cpp; QIODevicePrivate::read(); line 1176), and in my situation it is never empty, because I read from it only when it has enough data for a complete packet.
Shouldn't QAbstractSocket::readData() be called when the read buffer is no longer full, rather than only when it is completely empty? Or am I doing something wrong?
Found a Workaround!
In the Qt 5.10 sources I can clearly see that QTcpSocket's internal read notifications are disabled when the read buffer is full (qabstractsocket.cpp; bool QAbstractSocketPrivate::canReadNotification(); line 697). To re-enable read notifications you either need to read the whole buffer so that it becomes empty, OR use QAbstractSocket::setReadBufferSize(newSize), which internally enables read notifications WHEN newSize is not 0 (unlimited) and not equal to the old size (qabstractsocket.cpp; void QAbstractSocket::setReadBufferSize(qint64 size); line 2824).
Here's a short function for that:
QTcpSocket socket;
qint64 readBufferSize;                  // current max read buffer size
bool flag = false;                      // toggles the +1/-1 size change
bool isReadBufferLimitReached = false;

void App::CheckReadBufferLimitReached()
{
    if (readBufferSize <= socket.bytesAvailable())
    {
        isReadBufferLimitReached = true;
    }
    else if (isReadBufferLimitReached)
    {
        // Nudge the limit by +/-1 so that setReadBufferSize() sees a new,
        // non-zero value and re-enables the internal read notifications.
        if (flag)
            readBufferSize++;
        else
            readBufferSize--;
        flag = !flag;

        socket.setReadBufferSize(readBufferSize);
        isReadBufferLimitReached = false;
    }
}
In the function which reads data from the QTcpSocket at the set intervals, BEFORE reading data I call this function, which checks whether the read buffer is full and sets isReadBufferLimitReached if it is. Then I read the needed amount of data from the QTcpSocket, and AT THE END I call the function again, which, if the buffer was full before, calls QAbstractSocket::setReadBufferSize(size) to set a new buffer size and enable the internal read notifications. Changing the read buffer size by +/-1 should be safe, because you read at least 1 byte from the socket. A sketch of that read function is shown below.
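For context, a rough sketch of the periodic read function described above (OnReadTimer, packetSize and processPacket() are hypothetical names, not part of the Qt API):
// Hypothetical sketch of the timer-driven read slot described above.
void App::OnReadTimer()
{
    CheckReadBufferLimitReached();                  // was the read buffer full before reading?

    while (socket.bytesAvailable() >= packetSize) { // packetSize: assumed fixed packet length
        QByteArray packet = socket.read(packetSize);
        processPacket(packet);                      // application-specific handling
    }

    CheckReadBufferLimitReached();                  // if it was full, nudge setReadBufferSize()
}                                                   // to re-enable read notifications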

What does blocking mean for boost::asio::write?

I'm using boost::asio::write() to write data from a buffer to a COM port. It's a serial port with a baud rate of 115200, which means (as far as my understanding goes) that I can effectively write 11520 bytes/s, or 11.52 KB/s, to the port.
Now I have a fairly big chunk of data (10015 bytes) that I want to write. I think this should take a little less than a second to really write to the port. But boost::asio::write() returns just 300 microseconds after the call, reporting 10015 transferred bytes. I think this is impossible with that baud rate?
So my question is: what is it actually doing? Is it really writing the data to the port, or just to some other kind of buffer, which later writes it to the port?
I'd like the write() to only return after all the bytes have really been written to the port.
EDIT with code example:
The problem is that I always run into the timeout for the future/promise, because sending the message alone takes more than 100 ms, but I think the timer should only start after the last byte has been sent. Because write() is supposed to block?
void serial::write(std::vector<uint8_t> message) {
    // Create a new promise for the request
    promise = new boost::promise<deque<uint8_t>>;
    boost::unique_future<deque<uint8_t>> future = promise->get_future();

    // --- Write message to serial port --- //
    boost::asio::write(serial_, boost::asio::buffer(message));

    // Wait for data or timeout
    if (future.wait_for(boost::chrono::milliseconds(100)) == boost::future_status::timeout) {
        cout << "ACK timeout!" << endl;
        // Delete pointer and set it to 0
        delete promise;
        promise = nullptr;
    }

    // Delete pointer and set it to 0 after getting a message
    delete promise;
    promise = nullptr;
}
How can I achieve this?
Thanks!
In short, boost::asio::write() blocks until all data has been written to the stream; it does not block until all data has been transmitted. To wait until data has been transmitted, consider using tcdrain().
Each serial port has both a receive and transmit buffer within kernel space. This allows the kernel to buffer received data if a process cannot immediately read it from the serial port, and allows data written to a serial port to be buffered if the device cannot immediately transmit it. To block until the data has been transmitted, one could use tcdrain(serial_.native_handle()).
These kernel buffers allow for the write and read rates to exceed that of the transmit and receive rates. However, while the application may write data at a faster rate than the serial port can transmit, the kernel will transmit at the appropriate rates.
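For example, a minimal sketch (POSIX only, and assuming serial_ is the boost::asio::serial_port from the question; write_and_wait is just a hypothetical name):
#include <termios.h>   // tcdrain()

// Sketch: write the whole message, then block until the kernel reports
// that the serial transmit buffer has actually been drained to the wire.
void serial::write_and_wait(const std::vector<uint8_t>& message)
{
    boost::asio::write(serial_, boost::asio::buffer(message));  // queued in the kernel buffer
    ::tcdrain(serial_.native_handle());                         // wait for physical transmission
}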

What is the size of a socket send buffer in Windows?

Based on my understanding, each socket is associated with two buffers, a send buffer and a receive buffer. So when I call the send() function, the data to send is placed into the send buffer, and it is then the responsibility of Windows to send the content of this send buffer to the other end.
In a blocking socket, the send() function does not return until the entire data supplied to it has been placed into the send buffer.
So what is the size of the send buffer?
I performed the following test (sending 1 GB worth of data):
#include <stdio.h>
#include <WinSock2.h>
#pragma comment(lib, "ws2_32.lib")
#include <Windows.h>

int main()
{
    // Initialize Winsock
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    // Create socket
    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);

    //----------------------
    // Connect to 192.168.1.7:12345
    sockaddr_in address;
    address.sin_family = AF_INET;
    address.sin_addr.s_addr = inet_addr("192.168.1.7");
    address.sin_port = htons(12345);
    connect(s, (sockaddr*)&address, sizeof(address));

    //----------------------
    // Create 1 GB buffer ("AAAAAA...A")
    char *buffer = new char[1073741824];
    memset(buffer, 0x41, 1073741824);

    // Send buffer
    int i = send(s, buffer, 1073741824, 0);
    printf("send() has returned\nReturn value: %d\nWSAGetLastError(): %d\n", i, WSAGetLastError());

    //----------------------
    getchar();
    return 0;
}
Output:
send() has returned
Return value: 1073741824
WSAGetLastError(): 0
send() has returned immediately, does this means that the send buffer has a size of at least 1 GB?
This is some information about the test:
I am using a TCP blocking socket.
I have connected to a LAN machine.
Client Windows version: Windows 7 Ultimate 64-bit.
Server Windows version: Windows XP SP2 32-bit (installed on Virtual Box).
Edit: I have also attempted to connect to Google (173.194.116.18:80) and I got the same results.
Edit 2: I have discovered something strange: setting the send buffer to a value between 64 KB and 130 KB will make send() work as expected!
int send_buffer = 64 * 1024; // 64 KB
int send_buffer_sizeof = sizeof(int);
setsockopt(s, SOL_SOCKET, SO_SNDBUF, (char*)send_buffer, send_buffer_sizeof);
Edit 3: It turned out (thanks to Harry Johnston) that I had used setsockopt() incorrectly; this is how it should be used:
setsockopt(s, SOL_SOCKET, SO_SNDBUF, (char*)&send_buffer, send_buffer_sizeof);
Setting the send buffer to a value between 64 KB and 130 KB does not make send() work as expected, but rather setting the send buffer to 0 makes it block (this is what I noticed anyway, I don't have any documentation for this behavior).
So my question now is: where can I find a documentation on how send() (and maybe other socket operations) work under Windows?
After investigating this subject, this is what I believe to be the correct answer:
When calling send(), there are two things that could happen:
If the amount of pending data is below SO_SNDBUF, then send() returns immediately (and it does not matter whether you are sending 5 KB or 500 MB).
If the amount of pending data is at or above SO_SNDBUF, then send() blocks until enough data has been sent to bring the pending data back below SO_SNDBUF.
Note that this behavior is only applicable to Windows sockets, not to POSIX sockets. I think POSIX sockets only use one fixed-size send buffer (correct me if I'm wrong).
Now back to your main question, "What is the size of a socket send buffer in Windows?". I guess that if you have enough memory it could grow beyond 1 GB if necessary (I'm not sure what the maximum limit is, though).
I can reproduce this behaviour, and using Resource Monitor it is easy to see that Windows does indeed allocate 1GB of buffer space when the send() occurs.
An interesting feature is that if you do a second send immediately after the first one, that call does not return until both sends have completed. The buffer space from the first send is released once that send has completed, but the second send() continues to block until all the data has been transferred.
I suspect the difference in behaviour is because the second call to send() was already blocking when the first send completed. The third call to send() returns immediately (and 1GB of buffer space is allocated) just as the first one did, and so on, alternating.
So I conclude that the answer to the question ("how large are the send buffers?") is "as large as Windows sees fit". The upshot is that, in order to avoid exhausting the system memory, you should probably restrict blocking sends to no more than a few hundred megabytes.
Your call to setsockopt() is incorrect; the fourth argument is supposed to be a pointer to an integer, not an integer converted to a pointer. Once this is corrected, it turns out that setting the buffer size to zero causes send() to always block.
To summarize, the observed behaviour is that send() will return immediately provided:
there is enough memory to buffer all the provided data
there is not a send already in progress
the buffer size is not set to zero
Otherwise, it will return once the data has been sent.
KB214397 describes some of this - thanks Hans! In particular it describes that setting the buffer size to zero disables Winsock buffering, and comments that "If necessary, Winsock can buffer significantly more than the SO_SNDBUF buffer size."
(The completion notification described does not quite match up to the observed behaviour, depending I guess on how you interpret "previously buffered send". But it's close.)
Note that apart from the risk of inadvertently exhausting the system memory, none of this should matter. If you really need to know whether the code at the other end has received all your data yet, the only reliable way to do that is to get it to tell you.
In a blocking socket, the send() function does not return until the entire data supplied to it has been placed into the send buffer.
That is not guaranteed. If there is available buffer space, but not enough space for the entire data, the socket can (and usually will) accept whatever data it can and ignore the rest. The return value of send() tells you how many bytes were actually accepted. You have to call send() again to send the remaining data.
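A minimal sketch of that loop (Winsock flavour; send_all is a hypothetical helper, not a Winsock API):
// Sketch: keep calling send() until every byte has been accepted or an error occurs.
bool send_all(SOCKET s, const char *data, int len)
{
    int total = 0;
    while (total < len) {
        int n = send(s, data + total, len - total, 0);
        if (n == SOCKET_ERROR)      // inspect WSAGetLastError() for the cause
            return false;
        total += n;
    }
    return true;
}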
So what is the size of the send buffer?
Use getsockopt() with the SO_SNDBUF option to find out.
Use setsockopt() with the SO_SNDBUF option to specify your own buffer size. However, the socket may impose a max cap on the value you specify. Use getsockopt() to find out what size was actually assigned.
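A short sketch of both calls, assuming s is the connected Winsock socket from the question (the 256 KB figure is just an arbitrary example):
// Sketch: query the current send buffer size, request a new one, then re-check
// what the stack actually assigned.
int sndbuf = 0;
int optlen = sizeof(sndbuf);
getsockopt(s, SOL_SOCKET, SO_SNDBUF, (char*)&sndbuf, &optlen);
printf("SO_SNDBUF before: %d bytes\n", sndbuf);

int requested = 256 * 1024;   // arbitrary example value; the stack may cap it
setsockopt(s, SOL_SOCKET, SO_SNDBUF, (char*)&requested, sizeof(requested));

getsockopt(s, SOL_SOCKET, SO_SNDBUF, (char*)&sndbuf, &optlen);
printf("SO_SNDBUF after:  %d bytes\n", sndbuf);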

Socket sends data but the recv program doesn't work well

I created two programs, one to send and the other to receive the data.
So, the portion that receives the data is:
while ((recvMsgSize = sock->recv(echoBuffer, RCVBUFSIZE)) > 0) {
    write(fileno(stdout), echoBuffer, recvMsgSize);
}
If I use it to receive a large amount of file data it works well, but with a small amount of data it doesn't work.
I know the problem is with the recv portion, because if I use netcat to receive the data it works well; it receives the entire data.
Is there any other way to receive the data?
Thanks
I would guess your socket is blocking and recv is waiting for RCVBUFSIZE bytes to be sent. You should send the size of the file that is going to be sent first, then count how much data you've received, and only request the remaining portion when what you're missing is less than RCVBUFSIZE bytes.
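A rough sketch of that idea, assuming the sender first writes a 4-byte length prefix with htonl() and that sock->recv() is the same wrapper used in the question:
// Sketch: read a 4-byte length prefix, then loop until exactly that many
// bytes have been received. For brevity this assumes the prefix arrives
// in a single recv() call.
uint32_t netLen = 0;
sock->recv(reinterpret_cast<char*>(&netLen), sizeof(netLen));
uint32_t remaining = ntohl(netLen);

while (remaining > 0) {
    int chunk = remaining < (uint32_t)RCVBUFSIZE ? (int)remaining : RCVBUFSIZE;
    int n = sock->recv(echoBuffer, chunk);
    if (n <= 0)
        break;                                  // connection closed or error
    write(fileno(stdout), echoBuffer, n);
    remaining -= n;
}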

ioctlsocket or recv takes more time to execute in Windows socket programming?

In socket programming, some data is sent to the server, and as soon as the server receives it, it sends an acknowledgement response message. The response is more than 1 byte, so I check for more than one byte while receiving, and here I am losing around 120-200 ms, which is a very big issue, as the client needs to send an ack back for this acknowledgement. I have sniffed the traffic and the data arrives at my IP at the same time the server sends it, but recv or ioctlsocket (used to check that more than 1 byte is ready to be read) takes time to read more than one byte. How can I resolve this? The code is as follows.
DWORD RecvCount = 0;
char szBuff1[2048];
bool stop = false;

// Busy-wait until more than one byte is available on the socket
while (!stop)
{
    ioctlsocket(*socket, FIONREAD, &RecvCount);
    if (RecvCount > 1)
        stop = true;
}

int Res = recv(*socket, szBuff1, RecvCount, 0);
You should disable the Nagle algorithm on Windows, as otherwise the socket will sit on your data until the buffer is full (or at least wait a couple of hundred milliseconds before sending it anyway).
You do this by setting the TCP_NODELAY socket option:
int flag = 1;
int result = setsockopt(m_Socket, IPPROTO_TCP, TCP_NODELAY, (char*)&flag, sizeof(int));