Is this function doing something wrong with the sockets? - c++

I have been using the following function to receive XML files for a while, but it has been going wrong for some time now, and I suspect the problem is on the customer's network. I'm not sure; it's just a guess.
It happens sometimes when they try to send me XML files bigger than 13 KB - the received buffer contains trash like this:
...
<Identifiers>
<Identifier>
<PID>E3744</PID>
</Identifier>
<Identifier IDType="SHC">
<PID>10021020</PID>
</Identifier>
<Identifier><*X| Å Å Ÿòc PV“R¢ E ·Â÷# #€ˆ
þõ
øæ=Ì×KåÅôdËÞ¦P s÷j
<PID>1002102-0</PID>
</Identifier>
<Identifier>
<PID>1002102</PID>
</Identifier>
</Identifiers>
...
Here is the function:
bool ReceiveBuffer(HWND hDlg, const SOCKET& socket, string& sBuffer)
{
    WSAAsyncSelect(socket, hDlg, WM_WINSOCK, FD_CLOSE);
    int iBufSize = 10000000; //10MB
    int iBufVarSize = sizeof(iBufSize);
    if (setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, iBufVarSize) == SOCKET_ERROR)
        if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
            WriteLog("Unable to GET buffer receiving size");
    char* buf = (char*)MALLOCZ(iBufSize);
    if (!buf)
    {
        WriteLog("Unable to allocate memory");
        return false;
    }
    int iCharsRead = 0;
    do
    {
        memset(buf, 0, iBufSize);
        iCharsRead = recv(socket, buf, iBufSize, 0);
        if (iCharsRead > 0)
            sBuffer.append(buf, iCharsRead);
    }
    while (iCharsRead > 0);
    FREE(buf);
    buf = NULL;
    return true;
}

ReceiveBuffer() should not be calling WSAAsyncSelect() or setting SO_RCVBUF. That is the responsibility of whatever code initially creates the SOCKET.
But more importantly, WSAAsyncSelect() puts the socket into non-blocking mode, per the documentation:
The WSAAsyncSelect function automatically sets socket s to nonblocking mode, regardless of the value of lEvent.
However, your reading loop does not account for possible WSAEWOULDBLOCK errors from recv(), which it would need to handle so that it can call recv() again to keep reading.
ReceiveBuffer() is also assuming that if setsockopt() succeeds then the actual buffer size is really the requested size, which is not guaranteed. So you need to call getsockopt() regardless of whether setsockopt() succeeds or fails, per the documentation:
SO_RCVBUF and SO_SNDBUF
When a Windows Sockets implementation supports the SO_RCVBUF and SO_SNDBUF options, an application can request different buffer sizes (larger or smaller). The call to setsockopt can succeed even when the implementation did not provide the whole amount requested. An application must call getsockopt with the same option to check the buffer size actually provided.
But really, setting SO_RCVBUF on every call to ReceiveBuffer() is not necessary in the first place. recv() returns whatever data is currently available at that moment, up to the requested buffer size. It is very unlikely that it will return anywhere close to 10MB of data on any given read. So you are just wasting a lot of memory for no real benefit. It is one thing to set the socket's internal buffer to 10MB if you are on a fast network. It is another thing to allocate a memory buffer of 10MB to receive data from each recv() call. You should use a much smaller memory buffer. 1K is a common size to use.
But beyond that, regardless of the buffer size you use, ReceiveBuffer() is reading arbitrary bytes in an endless loop until the socket is disconnected or errors (and not accounting for non-blocking errors). When the socket does eventually disconnect/error, ReceiveBuffer() is returning true instead of false, so the caller has no idea that something went wrong, or that sBuffer may be incomplete.
Also, in case the caller calls ReceiveBuffer() multiple times with the same variable for the sBuffer parameter, you should call sBuffer.clear() before starting the reading loop to make sure you are not appending new data to the end of stale data.
Now, all of the above are just technical issues with your code logic. But there is a semantic element as well. XML has a finite length to it, but your current code has no way of knowing what that length actually is. It is the sender's responsibility to tell the receiver when the XML has stopped being sent. That could be by sending the XML's length before sending the XML itself, so the receiver knows how many bytes to expect. Or that could be by sending a unique delimiter, like a null terminator, at the end of the XML, so the receiver can stop reading when it sees the delimiter. Or that could be by gracefully closing the connection at the end of the XML (which is a bad idea, because then the receiver can't differentiate between end-of-data and data loss). But it has to do something.
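For illustration only, here is a minimal sketch of the length-prefix idea on a blocking socket (SendXml()/ReceiveXml() are made-up names, and the 4-byte network-order prefix is an assumption, not something your current protocol already defines):
bool SendXml(SOCKET s, const string& xml)
{
    // hypothetical framing: 4-byte length prefix in network byte order, then the payload
    uint32_t len = htonl((uint32_t)xml.size());
    const char* p = (const char*)&len;
    int remaining = sizeof(len);
    while (remaining > 0)
    {
        int n = send(s, p, remaining, 0);
        if (n <= 0) return false;
        p += n;
        remaining -= n;
    }
    size_t sent = 0;
    while (sent < xml.size())
    {
        int n = send(s, xml.data() + sent, (int)(xml.size() - sent), 0);
        if (n <= 0) return false;
        sent += (size_t)n;
    }
    return true;
}

bool ReceiveXml(SOCKET s, string& sXml)
{
    // read exactly 4 bytes of length, then exactly that many payload bytes
    uint32_t len = 0;
    char* p = (char*)&len;
    int remaining = sizeof(len);
    while (remaining > 0)
    {
        int n = recv(s, p, remaining, 0);
        if (n <= 0) return false; // disconnect or error before the prefix arrived
        p += n;
        remaining -= n;
    }
    len = ntohl(len);
    sXml.clear();
    sXml.reserve(len);
    char buf[1024];
    while (sXml.size() < len)
    {
        size_t want = len - sXml.size();
        if (want > sizeof(buf)) want = sizeof(buf);
        int n = recv(s, buf, (int)want, 0);
        if (n <= 0) return false; // disconnect or error before the whole XML arrived
        sXml.append(buf, n);
    }
    return true;
}
With framing like this, a disconnect is unambiguously an error rather than the only way to detect end-of-data.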
Now, with all of that said, try something more like this instead (I'm assuming a graceful disconnect is the end-of-data indicator, since that is what your original code is doing - you need to seriously consider a different protocol design!):
bool ReceiveBuffer(SOCKET socket, string& sBuffer)
{
    sBuffer.clear();
    /*
    int iBufSize = 1024 * 1024 * 10; //10MB
    int iBufVarSize = sizeof(iBufSize);
    setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, iBufVarSize);
    if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
        WriteLog("Unable to GET buffer receiving size");
    */
    char* buf = (char*) malloc(1024);
    if (!buf)
    {
        WriteLog("Unable to allocate memory");
        return false;
    }
    int iCharsRead;
    bool bRet = true;
    do
    {
        iCharsRead = recv(socket, buf, 1024, 0);
        if (iCharsRead > 0)
        {
            sBuffer.append(buf, iCharsRead);
        }
        else if (iCharsRead == 0)
        {
            // socket disconnected gracefully
            break;
        }
        else
        {
            if (WSAGetLastError() != WSAEWOULDBLOCK)
            {
                // socket error!
                WriteLog("Unable to read from socket");
                bRet = false;
                break;
            }
            // socket is non-blocking and there is no data available
            // at this moment. Call recv() again...
            // optional: call select() to wait for new data to arrive
            // before calling recv() again. For instance, this will
            // allow you to fail the function if no new data arrived
            // within a timeout period...
            //
            /*
            fd_set fd;
            FD_ZERO(&fd);
            FD_SET(socket, &fd);
            timeval tv;
            tv.tv_sec = 30;
            tv.tv_usec = 0;
            int ret = select(0, &fd, NULL, NULL, &tv);
            if (ret <= 0)
            {
                if (ret == 0)
                {
                    // timeout!
                    WriteLog("Timeout waiting for data from socket");
                }
                else
                {
                    // socket error!
                    WriteLog("Unable to wait for data from socket");
                }
                bRet = false;
                break;
            }
            */
        }
    }
    while (true);
    free(buf);
    return bRet;
}

Related

TCP C send data when not receiving data

I'm trying to send data to the connected client, even when the client did not send me a message first.
This is my current code:
while (true) {
    // open a new socket to transmit data per connection
    int sock;
    if ((sock = accept(listen_sock, (sockaddr *) &client_address, &client_address_len)) < 0) {
        logger.log(TYPE::ERROR, "server::could not open a socket to accept data");
        exit(0);
    }
    int n = 0, total_received_bytes = 0, max_len = 4096;
    std::vector<char> buffer(max_len);
    logger.log(TYPE::SUCCESS,
               "server::client connected with ip address: " + std::string(inet_ntoa(client_address.sin_addr)));
    // keep running as long as the client keeps the connection open
    while (true) {
        n = recv(sock, &buffer[0], buffer.size(), 0);
        if (n > 0) {
            total_received_bytes += n;
            std::string str(buffer.begin(), buffer.end());
            KV key_value = kv_from(vector_from(str));
            messaging.set_command(key_value);
        }
        std::string message = "hmc::" + messaging.get_value("hmc") + "---" + "sonar::" + messaging.get_value("sonar") + "\n";
        send(sock, message.c_str(), message.length(), 0);
    }
    logger.log(TYPE::INFO, "server::connection closed");
    close(sock);
}
I thought by moving the n = recv(sock, &buffer[0], buffer.size(), 0); outside the while condition that it would send the data indefinitely, but that is not what happened.
Thanks in advance.
Solution
Adding MSG_DONTWAIT to the recv call enabled the non-blocking operation I was looking for.
First I will explain why it does not work, then I will make a proposal for solutions. Basically you will find the answer in the man7.org > Linux > man-pages, and for recv specifically here.
When recv is called, it will not return until data is available and can be read. This behavior is called "blocking": the current execution thread is blocked until data has been read.
So, calling the function
n = recv(sock, &buffer[0], buffer.size(), 0);
as you did, causes the trouble. You also need to check the return code: 0 means the connection was closed, -1 means an error occurred and you must check errno for further information.
You can modify the socket to work in non-blocking mode with the function fcntl and the O_NONBLOCK flag, for the lifetime of the socket. You can also use the flag MSG_DONTWAIT as the 4th parameter (flags), to unblock the function on a per-call basis.
In both cases, if no data is available, the function returns -1 and you need to check errno for EAGAIN or EWOULDBLOCK.
A return value of 0 indicates that the connection has been closed.
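As a rough sketch (assuming a connected POSIX TCP socket; the helper names are made up for illustration), the two variants could look like this:
#include <fcntl.h>
#include <sys/socket.h>
#include <cerrno>

// Variant 1: put the socket into non-blocking mode for its whole lifetime.
bool make_non_blocking(int sock)
{
    int flags = fcntl(sock, F_GETFL, 0);
    return flags != -1 && fcntl(sock, F_SETFL, flags | O_NONBLOCK) != -1;
}

// Variant 2: unblock a single call with MSG_DONTWAIT and classify the result.
// Returns the number of bytes read, 0 if the peer closed the connection,
// -1 on a real error, and -2 (a made-up marker for this sketch) if no data
// is available right now.
int try_receive(int sock, char* buffer, size_t size)
{
    ssize_t n = recv(sock, buffer, size, MSG_DONTWAIT);
    if (n >= 0)
        return (int)n;
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        return -2;
    return -1;
}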
But from an architectural point of view, I would not recommend this approach. You could use multiple threads for receiving and sending data or, on Linux, one of select, poll, or similar functions. There is even a common design pattern for this, called "Reactor". There are also related patterns like "Acceptor/Connector" and "Proactor"/"ACT". If you plan to write a more robust application, then you may want to consider those.
You will find an implementation of Acceptor, Connector, Reactor, Proactor, ACT here
Hope this helps

hololens socket can't receive full packet

A simple summary:
boost asio server, sending a video frame 720x768x3 with simple compression
packet size is 186476 bytes, not really too much
nothing too complicated. Anyway, I test it in the HoloLens emulator and on the physical device:
// uint32_t data_length == size of frame 'data_ptr'
enum max_length = sizeof(uint32_t);
memcpy(data_, &data_length, max_length);
auto length = boost::asio::write(*socket_, boost::asio::buffer(data_, max_length), e);
length = boost::asio::write(*socket_, boost::asio::buffer(data_ptr, data_length), e);

// receive
char data_[max_length] = { 0 };
fd_set readSet;
FD_ZERO(&readSet);
FD_SET(_socket, &readSet);
timeval timeout;
timeout.tv_sec = 0; // Zero timeout (poll)
timeout.tv_usec = 0;
auto result = select(_socket, &readSet, nullptr, nullptr, &timeout);
if (result == 0)
    continue;

result = recv(_socket, data_, max_length, 0);
if (result == SOCKET_ERROR) {
    closesocket(_socket);
    _socket = INVALID_SOCKET;
    break;
}

uint32_t msg_size(0);
memcpy(&msg_size, data_, max_length);
std::vector<char> vec(msg_size);
result = recv(_socket, &vec[0], msg_size, 0);
while (result < msg_size) {
    result += recv(_socket, &vec[result], msg_size - result, 0);
}
but the HoloLens can't receive the full packet. I also tried it with the .NET StreamSockets, same result. It tries a few times, and then recv blocks in the while loop and doesn't receive anymore.
Anyone have any idea? Is it a UWP app problem that I can't receive 'bigger' packets, or does it get killed because it takes too long?
You have two main problems:
First, you need to check the return value of recv for errors. If it returns 0 or -1, you need to handle that.
Second, you ignore all the data you received from your first call to recv. You set msg_size to zero when it should be result minus however many bytes the length took.
I would suggest writing a function that reads exactly the specified number of bytes, checking for errors. Call it first to receive four bytes and check if it returned an error. Then call it to receive the number of bytes indicated by the length data you received.
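As a rough sketch (recv_exact is a made-up name, and error handling is reduced to a bool for brevity), such a helper might look like this:
// Hypothetical helper: read exactly 'size' bytes, looping over short reads.
// Returns false on error or if the peer disconnects before 'size' bytes arrive.
bool recv_exact(SOCKET s, char* buffer, int size)
{
    int total = 0;
    while (total < size)
    {
        int n = recv(s, buffer + total, size - total, 0);
        if (n <= 0)
            return false; // 0 = connection closed, SOCKET_ERROR = failure
        total += n;
    }
    return true;
}

// Usage: first the 4-byte length, then the payload.
// uint32_t msg_size = 0;
// if (!recv_exact(_socket, (char*)&msg_size, sizeof(msg_size))) { /* handle error */ }
// // only convert if the sender actually wrote the length in network byte order:
// // msg_size = ntohl(msg_size);
// std::vector<char> vec(msg_size);
// if (!recv_exact(_socket, &vec[0], (int)msg_size)) { /* handle error */ }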
Smaller problems include:
What if the first recv only returns one byte?
What if the way your platform stores 32-bit integers isn't the same as the way the emulator sends it?

C/C++: Write and Read Sockets

I'm sending and receiving info with a unix socket, but I do not completely understand how it works. Basically, I send a message like this:
int wr_bytes = write(sock, msg.c_str(), msg.length());
And receive message like this:
int rd_bytes = read(msgsock, buf, SOCKET_BUFFER_SIZE);
This code works perfectly with thousands of bytes. What I don't understand is, how does the read function know when the other part is done sending the message? I tried to read the read documentation and, from my understanding, read will return once it reaches EOF or reads SOCKET_BUFFER_SIZE bytes; is that correct?
So I'm guessing that when I give my string to the write function, it adds an EOF at the end of my content so the read function knows when to stop.
I'm asking this question because I did not add any code that checks whether the other part finished sending the message; however, I'm receiving big messages (thousands of bytes) without any problem. Why is that happening? Why am I not getting only parts of the message?
Here is the full function I'm using to send a message to a unix socket server:
string sendSocketMessage(string msg) {
    int sock;
    struct sockaddr_un server;
    char buf[1024];

    sock = socket(AF_UNIX, SOCK_STREAM, 0);
    if (sock < 0) {
        throw runtime_error("opening stream socket");
    }
    server.sun_family = AF_UNIX;
    strcpy(server.sun_path, "socket");

    if (connect(sock, (struct sockaddr *) &server, sizeof(struct sockaddr_un)) < 0) {
        close(sock);
        throw runtime_error("connecting stream socket");
    }

    if (write(sock, msg.c_str(), msg.length()) < 0){
        throw runtime_error("writing on stream socket");
        close(sock);
    }

    bzero(buf, sizeof(buf));
    int rval = read(sock, buf, 1024);
    return string( reinterpret_cast< char const* >(buf), rval );
}
And here is my server function (a little bit more complicated, the type vSocketHandler represents a function that I call to handle requests):
void UnixSocketServer::listenRequests(vSocketHandler requestHandler){
    int sock, msgsock, rval;
    struct sockaddr_un server;
    char buf[SOCKET_BUFFER_SIZE];

    sock = socket(AF_UNIX, SOCK_STREAM, 0);
    if (sock < 0) {
        throw runtime_error("opening stream socket");
    }
    server.sun_family = AF_UNIX;
    strcpy(server.sun_path, SOCKET_FILE_PATH);

    if (bind(sock, (struct sockaddr *) &server, sizeof(struct sockaddr_un))) {
        throw runtime_error("binding stream socket");
    }

    listen(sock, SOCKET_MAX_CONNECTIONS);

    while(true) {
        msgsock = accept(sock, 0, 0);
        if (msgsock == -1){
            throw runtime_error("accept socket");
        } else {
            bzero(buf, sizeof(buf));
            if((rval = read(msgsock, buf, SOCKET_BUFFER_SIZE)) < 0)
                throw runtime_error("reading stream message");
            else if (rval == 0){
                //do nothing, client closed socket
                break;
            } else {
                string msg = requestHandler(string( reinterpret_cast< char const* >(buf), rval ));
                if(write(msgsock, msg.c_str(), msg.length()) < 0)
                    throw runtime_error("sending stream message");
            }
            close(msgsock);
        }
    }

    close(sock);
    unlink(SOCKET_FILE_PATH);
}
What I don't understand is, how does the read function know when the other part is done sending the message?
For a stream-type socket, such as you're using, it doesn't. For a datagram-type socket, communication is broken into distinct chunks, but if a message spans multiple datagrams then the answer is again "it doesn't". This is indeed one of the key things to understand about the read() and write() (and send() and recv()) functions in general, and about sockets more specifically.
For the rest of this answer I'll focus on stream oriented sockets, since that's what you're using. I'll also suppose that the socket is not in non-blocking mode. If you intend for your data transmitted over such a socket to be broken into distinct messages, then it is up to you to implement an application-level protocol by which the other end can recognize message boundaries.
I tried to read the read documentation and, from my understanding, read will return once it reaches EOF or reads SOCKET_BUFFER_SIZE bytes; is that correct?
Not exactly. read() will return if it reaches the end of the file, which happens when the peer closes its socket (or at least shuts down the write side of it) so that it is certain that no more data will be sent. read() will also return in the event of any of a variety of error conditions. And read() may return under other unspecified circumstances, provided that it has transferred at least one byte. In practice, this last case is generally invoked if the socket buffer fills, but it may also be invoked under other circumstances, such as when the buffer empties.
So I'm guessing that when I give my string to the write function, it adds an EOF at the end of my content so the read function knows when to stop.
No, it does no such thing. On success, the write() function sends some or all of the bytes you asked it to send, and nothing else. Note that it is not guaranteed even to send all the requested bytes; its return value tells you how many of them it actually did send. If that's fewer than "all", then ordinarily you should simply perform another write() to transfer the rest. You may need to do this multiple times to send the whole message. In any event, only the bytes you specify are sent.
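For example, a minimal sketch of such a loop (write_all is just an illustrative name) could look like this:
#include <unistd.h>
#include <cerrno>

// Keep calling write() until the whole buffer has been sent or an error occurs.
// Returns false on error; check errno for details.
bool write_all(int fd, const char* data, size_t length)
{
    size_t sent = 0;
    while (sent < length)
    {
        ssize_t n = write(fd, data + sent, length - sent);
        if (n < 0)
        {
            if (errno == EINTR)
                continue;   // interrupted by a signal, just retry
            return false;   // real error
        }
        sent += (size_t)n;
    }
    return true;
}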
I'm asking this question because I did not add any code that checks whether the other part finished sending the message; however, I'm receiving big messages (thousands of bytes) without any problem. Why is that happening? Why am I not getting only parts of the message?
More or less because you're getting lucky, but the fact that you're using UNIX-domain sockets (as opposed to network sockets) helps. Your data are transferred very efficiently from sending process to receiving process through the kernel, and it is not particularly surprising that large writes() are received by single read()s. You cannot safely rely on that always to happen, however.

Why does select() sometimes time out when the client is busy receiving data

I have written simple C/S applications to test the characteristics of non-blocking sockets, here is some brief information about the server and client:
//On linux The server thread will send
//a file to the client using non-blocking socket
void *SendFileThread(void *param){
    CFile* theFile = (CFile*) param;
    int sockfd = theFile->GetSocket();
    set_non_blocking(sockfd);
    set_sock_sndbuf(sockfd, 1024 * 64); //set the send buffer to 64K

    //get the total packets count of target file
    int PacketCOunt = theFile->GetFilePacketsCount();
    int CurrPacket = 0;
    while (CurrPacket < PacketCount){
        char buffer[512];
        int len = 0;
        //get packet data by packet no.
        GetPacketData(currPacket, buffer, len);
        //send_non_blocking_sock_data will loop and send
        //data into buffer of sockfd until there is error
        int ret = send_non_blocking_sock_data(sockfd, buffer, len);
        if (ret < 0 && errno == EAGAIN){
            continue;
        } else if (ret < 0 || ret == 0 ){
            break;
        } else {
            currPacket++;
        }
        ......
    }
}
//On windows, the client thread will do something like below
//to receive the file data sent by the server via block socket
void *RecvFileThread(void *param){
    int sockfd = (int) param; //blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); //set the send buffer to 256

    while (1){
        struct timeval timeout;
        timeout.tv_sec = 1;
        timeout.tv_usec = 0;
        fd_set rds;
        FD_ZERO(&rds);
        FD_SET(sockfd, &rds)'
        //actually, the first parameter of select() is
        //ignored on windows, though on linux this parameter
        //should be (maximum socket value + 1)
        int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout );
        if (ret == 0){
            // log that timer expires
            CLogger::log("RecvFileThread---Calling select() timeouts\n");
        } else if (ret) {
            //log the number of data it received
            int ret = 0;
            char buffer[1024 * 256];
            int len = recv(sockfd, buffer, sizeof(buffer), 0);
            // handle error
            process_tcp_data(buffer, len);
        } else {
            //handle and break;
            break;
        }
    }
}
What surprised me is that the server thread fails frequently because the socket buffer is full; e.g., to send a file of 14 MB it reports 50,000 failures with errno = EAGAIN. However, via logging I observed there are tens of timeouts during the transfer; the flow is like below:
on the Nth loop, select() succeeds and 256 KB of data is read successfully.
on the (N+1)th loop, select() fails with a timeout.
on the (N+2)th loop, select() succeeds and 256 KB of data is read successfully.
Why would there be timeouts interleaved during the receiving? Can anyone explain this phenomenon?
[UPDATE]
1. Uploading a file of 14M to the server only takes 8 seconds
2. Using the same file with 1), the server takes nearly 30 seconds to send all data to the client.
3. All sockets used by the client are blocking. All sockets used by the server are non-blocking.
Regarding #2, I think the timeouts are the reason why #2 takes much more time than #1, and I wonder why there would be so many timeouts when the client is busy receiving data.
[UPDATE2]
Thanks for the comments from @Duck, @ebrob, @EJP, and @ja_mesa; I will do more investigation today and then update this post.
Regarding why I send 512 bytes per loop in the server thread: it is because I found the server thread sends data much faster than the client thread can receive it. I am very confused about why timeouts happen in the client thread.
Consider this more of a long comment than an answer, but as several people have noted, the network is orders of magnitude slower than your processor. The point of non-blocking I/O is that the difference is so great that you can actually use it to do real work rather than blocking. Here you are just pounding on the elevator button hoping that makes a difference.
I'm not sure how much of your code is real and how much is chopped up for posting but in the server you don't account for (ret == 0) i.e. normal shutdown by the peer.
The select in the client is wrong. Again, not sure if that was sloppy editing or not, but if not, the number of parameters is wrong and, more concerning, the first parameter - which should be the highest file descriptor for select to look at plus one - is zero. Depending on the implementation of select, I wonder if that is in fact just turning select into a fancy sleep statement.
You should be calling recv() first and then call select() only if recv() tells you to do so. Don't call select() first; that is a waste of processing. recv() knows if data is immediately available or if it has to wait for data to arrive:
void *RecvFileThread(void *param){
    int sockfd = (int) param; //blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); //set the receive buffer to 256K

    char buffer[1024 * 256];

    while (1){
        int len = recv(sockfd, buffer, sizeof(buffer), 0);
        if (len == -1) {
            if (WSAGetLastError() != WSAEWOULDBLOCK) {
                //handle error
                break;
            }

            struct timeval timeout;
            timeout.tv_sec = 1;
            timeout.tv_usec = 0;
            fd_set rds;
            FD_ZERO(&rds);
            FD_SET(sockfd, &rds);
            //actually, the first parameter of select() is
            //ignored on windows, though on linux this parameter
            //should be (maximum socket value + 1)
            int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
            if (ret == -1) {
                // handle error
                break;
            }
            if (ret == 0) {
                // log that timer expires
                break;
            }
            // socket is readable so try read again
            continue;
        }
        if (len == 0) {
            // handle graceful disconnect
            break;
        }
        //log the number of data it received
        process_tcp_data(buffer, len);
    }
}
Do something similar on the sending side as well. Call send() first, and then call select() waiting for writability only if send() tells you to do so.
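A rough sketch of that send-side pattern (assuming a non-blocking POSIX socket like your server uses; the function name is made up) might be:
// Try send() first; if the send buffer is full (EAGAIN/EWOULDBLOCK),
// wait for writability with select() and then retry.
bool send_all_non_blocking(int sockfd, const char* data, int len)
{
    int sent = 0;
    while (sent < len)
    {
        int n = (int)send(sockfd, data + sent, (size_t)(len - sent), 0);
        if (n > 0)
        {
            sent += n;
            continue;
        }
        if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
            return false; // real socket error
        fd_set wds;
        FD_ZERO(&wds);
        FD_SET(sockfd, &wds);
        struct timeval timeout;
        timeout.tv_sec = 1;
        timeout.tv_usec = 0;
        int ret = select(sockfd + 1, NULL, &wds, NULL, &timeout);
        if (ret <= 0)
            return false; // select() error, or timeout waiting for buffer space
        // socket is writable again, retry send()
    }
    return true;
}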

Clean Windows socket internal buffer

I am wondering if there is a way to clean up the Windows socket's internal buffer, because what I want to achieve is this:
while(1){
    for(i=0;i<10;i++){
        sendto(...) //send 10 UDP datagrams
    }
    for(i=0;i<10;i++){
        recvfrom (Socket, RecBuf, MAX_PKT_SIZE, 0,
                  (SOCKADDR*) NULL, NULL);
        int Status = ProcessBuffer(RecBuf);
        if (Status == SomeCondition)
            MagicalSocketCleanUP(Socket); //clean up the rest of the stuff in the socket, so that it doesn't affect the reading in the next iteration of the outer while loop
        break; //occasionally the receive loop needs to terminate before finishing off all 10 iterations
    }
}
so what I am asking is: is there a function to clean up whatever is remaining in the socket so that it won't affect my next read? Thank you
The way to clean up data from the internal receive socket buffer is to read data until there is no more data to read. If you do this in a non-blocking way, you do not need to wait for more data in select(), because the EWOULDBLOCK error value means the internal receive socket buffer is empty.
int MagicalSocketCleanUP(SOCKET Socket) {
    int r;
    std::vector<char> buf(128*1024);
    do {
        r = recv(Socket, &buf[0], buf.size(), MSG_DONTWAIT);
        if (r < 0 && errno == EINTR) continue;
    } while (r > 0);
    if (r < 0 && errno != EWOULDBLOCK) {
        perror(__func__);
        //... code to handle unexpected error
    }
    return r;
}
But this is not exactly safe. The other end of the socket may have sent good data into the socket buffer too, so this routine may discard more than what you want to discard.
Instead, the data on the socket should be framed in such a way that you know when the data of interest arrives. So instead of a cleanup API, you could extend ProcessBuffer() to discard input until it finds data of interest.
A simpler mechanism would be a message exchange between the two sides of the socket. When the error state is entered, the sender sends a "DISCARDING UNTIL <TOKEN>" message. The receiver sends back "<TOKEN>" and knows that only the data after the "<TOKEN>" message will be processed. The "<TOKEN>" can be a random sequence.
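As a rough illustration of that idea (the prefix string and helper name are invented for this sketch, not an existing protocol):
// Receiver side: discard everything until the agreed token shows up, then
// continue processing from the byte right after the token.
// 'pending' accumulates raw bytes read from the socket elsewhere.
bool resync_on_token(std::string& pending, const std::string& token)
{
    std::string::size_type pos = pending.find(token);
    if (pos == std::string::npos)
    {
        // token not seen yet: keep only a tail that could still contain the
        // start of the token, drop the rest
        if (pending.size() > token.size())
            pending.erase(0, pending.size() - token.size());
        return false;
    }
    pending.erase(0, pos + token.size()); // drop everything up to and including the token
    return true;
}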