I am wondering if there is a way to clean up the Windows socket internal buffer, because what I want to achieve is this:
while (1) {
    for (i = 0; i < 10; i++) {
        sendto(...); // send 10 UDP datagrams
    }
    for (i = 0; i < 10; i++) {
        recvfrom(Socket, RecBuf, MAX_PKT_SIZE, 0,
                 (SOCKADDR*) NULL, NULL);
        int Status = ProcessBuffer(RecBuf);
        if (Status == SomeCondition) {
            // clean up whatever remains in the socket, so that it doesn't
            // affect the reading in the next iteration of the outer while loop
            MagicalSocketCleanUP(Socket);
            // occasionally the receive loop needs to terminate before
            // finishing all 10 iterations
            break;
        }
    }
}
So what I am asking is: is there a function to clean up whatever remains in the socket so that it won't affect my next reading? Thank you.
The way to clean up data from the internal receive socket buffer is to read data until there is no more data to read. If you do this in a non-blocking way, you do not need to wait for more data in select(), because the EWOULDBLOCK error value means the internal receive socket buffer is empty.
int MagicalSocketCleanUP(SOCKET Socket) {
    int r;
    std::vector<char> buf(128 * 1024);
    do {
        r = recv(Socket, &buf[0], buf.size(), MSG_DONTWAIT);
        if (r < 0 && errno == EINTR)
            continue; // interrupted by a signal, try again
    } while (r > 0);
    if (r < 0 && errno != EWOULDBLOCK) {
        perror(__func__);
        //... code to handle unexpected error
    }
    return r;
}
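Note that MSG_DONTWAIT and errno are POSIX-specific, while the question is about Winsock. A hedged sketch of the same drain loop for Windows (my own variant, not part of the original answer) would enable non-blocking mode with ioctlsocket() and test for WSAEWOULDBLOCK via WSAGetLastError():

int MagicalSocketCleanUP_Win(SOCKET Socket) {
    u_long nonBlocking = 1;
    ioctlsocket(Socket, FIONBIO, &nonBlocking); // put the socket into non-blocking mode
    std::vector<char> buf(128 * 1024);
    int r;
    do {
        r = recv(Socket, &buf[0], (int)buf.size(), 0); // drain until empty
    } while (r > 0);
    if (r == SOCKET_ERROR && WSAGetLastError() != WSAEWOULDBLOCK) {
        //... code to handle unexpected error
    }
    return r;
}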
But this is not exactly safe. The other end of the socket may have sent good data into the socket buffer too, so this routine may discard more than what you want to discard.
Instead, the data on the socket should be framed in such a way that you know when the data of interest arrives. So instead of a cleanup API, you could extend ProcessBuffer() to discard input until it finds data of interest.
A simpler mechanism would be a message exchange between the two sides of the socket. When the error state is entered, the sender sends a "DISCARDING UNTIL <TOKEN>" message. The receiver sends back "<TOKEN>" and knows that only the data after the "<TOKEN>" message will be processed. The "<TOKEN>" can be a random sequence.
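A minimal sketch of that resync idea (the framing and helper name here are hypothetical, purely for illustration): the receiver accumulates incoming bytes and discards everything up to and including the "<TOKEN>" line:

// Hypothetical helper: discard everything up to and including token + "\n".
// Returns true once the token has been seen; 'stream' then starts with valid data.
bool ResyncOnToken(std::string& stream, const std::string& token) {
    std::string marker = token + "\n";
    std::string::size_type pos = stream.find(marker);
    if (pos == std::string::npos) {
        // token not seen yet: discard all but a possible partial token at the tail
        if (stream.size() >= marker.size())
            stream.erase(0, stream.size() - (marker.size() - 1));
        return false;
    }
    stream.erase(0, pos + marker.size()); // keep only the data after the token
    return true;
}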
Related
I'm trying to send data to the connected client, even when the client has not sent me a message first.
This is my current code:
while (true) {
    // open a new socket to transmit data per connection
    int sock;
    if ((sock = accept(listen_sock, (sockaddr *) &client_address, &client_address_len)) < 0) {
        logger.log(TYPE::ERROR, "server::could not open a socket to accept data");
        exit(0);
    }

    int n = 0, total_received_bytes = 0, max_len = 4096;
    std::vector<char> buffer(max_len);

    logger.log(TYPE::SUCCESS,
               "server::client connected with ip address: " + std::string(inet_ntoa(client_address.sin_addr)));

    // keep running as long as the client keeps the connection open
    while (true) {
        n = recv(sock, &buffer[0], buffer.size(), 0);
        if (n > 0) {
            total_received_bytes += n;
            std::string str(buffer.begin(), buffer.end());
            KV key_value = kv_from(vector_from(str));
            messaging.set_command(key_value);
        }
        std::string message = "hmc::" + messaging.get_value("hmc") + "---" + "sonar::" + messaging.get_value("sonar") + "\n";
        send(sock, message.c_str(), message.length(), 0);
    }

    logger.log(TYPE::INFO, "server::connection closed");
    close(sock);
}
I thought that by moving the n = recv(sock, &buffer[0], buffer.size(), 0); outside the while condition it would send the data indefinitely, but that is not what happened.
Thanks in advance.
Solution
Adding MSG_DONTWAIT to the recv() call enabled the non-blocking operation I was looking for.
First I will explain why it does not work, then I will propose solutions. Basically you will find the answer in the man7.org > Linux > man-pages, and for recv specifically here.
When the function recv() is called, it will not return until data is available and can be read. This behavior is called "blocking": the current execution thread is blocked until data has been read.
So, calling the function
n = recv(sock, &buffer[0], buffer.size(), 0);
as you did causes the trouble. You also need to check the return code: 0 means the connection was closed, -1 means an error occurred and you must check errno for further information.
You can modify the socket to work in non-blocking mode with the function fcntl and the O_NONBLOCK flag, for the lifetime of the socket. You can also use the flag MSG_DONTWAIT as the 4th parameter (flags) to unblock the function on a per-call basis.
In both cases, if no data is available, the function returns -1 and you need to check errno for EAGAIN or EWOULDBLOCK.
A return value of 0 indicates that the connection has been closed.
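As a rough sketch of both variants (POSIX; error handling trimmed):

#include <fcntl.h>
#include <errno.h>
#include <sys/socket.h>

// Variant 1: non-blocking for the lifetime of the socket
int make_socket_non_blocking(int sock) {
    int flags = fcntl(sock, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(sock, F_SETFL, flags | O_NONBLOCK);
}

// Variant 2: non-blocking for a single call via MSG_DONTWAIT.
// Returns >0 on data, 0 on a closed connection, and -1 with
// errno == EAGAIN/EWOULDBLOCK when nothing is available right now.
ssize_t recv_nowait(int sock, char* buf, size_t len) {
    return recv(sock, buf, len, MSG_DONTWAIT);
}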
But from an architecture point of view, I would not recommend this approach. You could use multiple threads for receiving and sending data, or, on Linux, one of select, poll or similar functions. There is even a common design pattern for this, called "reactor". There are also related patterns like "Acceptor/Connector" and "Proactor"/"ACT". If you plan to write a more robust application, you may consider those.
You will find an implementation of Acceptor, Connector, Reactor, Proactor, ACT here
Hope this helps
I have been using the following function to receive XML files for a while, but it has been going wrong for some time now. I think the problem is on the customer's network, but I'm not sure; it's just a guess.
It happens sometimes when they try to send me XML files bigger than 13KB - the received buffer contains trash like this:
...
<Identifiers>
<Identifier>
<PID>E3744</PID>
</Identifier>
<Identifier IDType="SHC">
<PID>10021020</PID>
</Identifier>
<Identifier><*X| Å Å Ÿòc PV“R¢ E ·Â÷# #€ˆ
þõ
øæ=Ì×KåÅôdËÞ¦P s÷j
<PID>1002102-0</PID>
</Identifier>
<Identifier>
<PID>1002102</PID>
</Identifier>
</Identifiers>
...
Here is the function:
bool ReceiveBuffer(HWND hDlg, const SOCKET& socket, string& sBuffer)
{
    WSAAsyncSelect(socket, hDlg, WM_WINSOCK, FD_CLOSE);

    int iBufSize = 10000000; // 10MB
    int iBufVarSize = sizeof(iBufSize);
    if (setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, iBufVarSize) == SOCKET_ERROR)
        if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
            WriteLog("Unable to GET buffer receiving size");

    char* buf = (char*)MALLOCZ(iBufSize);
    if (!buf)
    {
        WriteLog("Unable to allocate memory");
        return false;
    }

    int iCharsRead = 0;
    do
    {
        memset(buf, 0, iBufSize);
        iCharsRead = recv(socket, buf, iBufSize, 0);
        if (iCharsRead > 0)
            sBuffer.append(buf, iCharsRead);
    }
    while (iCharsRead > 0);

    FREE(buf);
    buf = NULL;

    return true;
}
ReceiveBuffer() should not be calling WSAAsyncSelect() or setting SO_RCVBUF. That is the responsibility of whatever code initially creates the SOCKET.
But more importantly, WSAAsyncSelect() puts the socket into non-blocking mode, per the documentation:
The WSAAsyncSelect function automatically sets socket s to nonblocking mode, regardless of the value of lEvent.
However, your reading loop is not accounting for possible WSAEWOULDBLOCK errors from recv(), which it needs to handle so it can call recv() again to keep reading.
ReceiveBuffer() is also assuming that if setsockopt() succeeds then the actual buffer size is really the requested size, which is not guaranteed. So you need to call getsockopt() regardless of whether setsockopt() succeeds or fails, per the documentation:
SO_RCVBUF and SO_SNDBUF
When a Windows Sockets implementation supports the SO_RCVBUF and SO_SNDBUF options, an application can request different buffer sizes (larger or smaller). The call to setsockopt can succeed even when the implementation did not provide the whole amount requested. An application must call getsockopt with the same option to check the buffer size actually provided.
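For illustration, the documented request-then-verify pattern might look like this (a sketch; 'sock' stands in for whatever SOCKET your setup code creates):

int iRequested = 1024 * 1024 * 10; // ask for 10MB
setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char*)&iRequested, sizeof(iRequested));

int iGranted = 0;
int iLen = sizeof(iGranted);
if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char*)&iGranted, &iLen) == 0)
    printf("SO_RCVBUF actually provided: %d bytes\n", iGranted); // may differ from the request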
But really, setting SO_RCVBUF on every call to ReceiveBuffer() is not necessary in the first place. recv() returns whatever data is currently available at that moment, up to the requested buffer size. It is very unlikely that it will return anywhere close to 10MB of data on any given read. So you are just wasting a lot of memory for no real benefit. It is one thing to set the socket's internal buffer to 10MB if you are on a fast network. It is another thing to allocate a memory buffer of 10MB to receive data from each recv() call. You should use a much smaller memory buffer. 1K is a common size to use.
But beyond that, regardless of the buffer size you use, ReceiveBuffer() is reading arbitrary bytes in an endless loop until the socket is disconnected or errors (and not accounting for non-blocking errors). When the socket does eventually disconnect/error, ReceiveBuffer() is returning true instead of false, so the caller has no idea that something went wrong, or that sBuffer may be incomplete.
Also, in case the caller calls ReceiveBuffer() multiple times with the same variable for the sBuffer parameter, you should call sBuffer.clear() before starting the reading loop to make sure you are not appending new data to the end of stale data.
Now, all of the above is just technical issues with your code logic. But there is also a semantic element as well. XML has a finite length to it, but your current code has no way of knowing what that length actually is. It is the sender's responsibility to tell the receiver when the XML has stopped being sent. That could be by sending the XML's length before sending the XML itself, so the receiver knows how many bytes to expect (a sketch of this length-prefix approach follows the code below). Or that could be by sending a unique delimiter, like a null terminator, at the end of the XML, so the receiver can stop reading when it sees the delimiter. Or that could be by gracefully closing the connection at the end of the XML (which is a bad idea, because then the receiver can't differentiate between end-of-data and data loss). But it has to do something.
Now, with all of that said, try something more like this instead (I'm assuming a graceful disconnect is the end-of-data indicator, since that is what your original code is doing - you need to seriously consider a different protocol design!):
bool ReceiveBuffer(SOCKET socket, string& sBuffer)
{
    sBuffer.clear();

    /*
    int iBufSize = 1024 * 1024 * 10; // 10MB
    setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, sizeof(iBufSize));
    int iBufVarSize = sizeof(iBufSize);
    if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
        WriteLog("Unable to GET buffer receiving size");
    */

    char* buf = (char*) malloc(1024);
    if (!buf)
    {
        WriteLog("Unable to allocate memory");
        return false;
    }

    int iCharsRead;
    bool bRet = true;

    do
    {
        iCharsRead = recv(socket, buf, 1024, 0);
        if (iCharsRead > 0)
        {
            sBuffer.append(buf, iCharsRead);
        }
        else if (iCharsRead == 0)
        {
            // socket disconnected gracefully
            break;
        }
        else
        {
            if (WSAGetLastError() != WSAEWOULDBLOCK)
            {
                // socket error!
                WriteLog("Unable to read from socket");
                bRet = false;
                break;
            }

            // socket is non-blocking and there is no data available
            // at this moment. Call recv() again...

            // optional: call select() to wait for new data to arrive
            // before calling recv() again. For instance, this will
            // allow you to fail the function if no new data arrived
            // within a timeout period...
            /*
            fd_set fd;
            FD_ZERO(&fd);
            FD_SET(socket, &fd);

            timeval tv;
            tv.tv_sec = 30;
            tv.tv_usec = 0;

            int ret = select(0, &fd, NULL, NULL, &tv);
            if (ret <= 0)
            {
                if (ret == 0)
                {
                    // timeout!
                    WriteLog("Timeout waiting for data from socket");
                }
                else
                {
                    // socket error!
                    WriteLog("Unable to wait for data from socket");
                }
                bRet = false;
                break;
            }
            */
        }
    }
    while (true);

    free(buf);

    return bRet;
}
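For comparison, here is a hedged sketch of the length-prefix framing mentioned earlier (recvExact() and ReceiveXml() are hypothetical names, and a blocking socket is assumed; the sender would transmit a 4-byte length written with htonl() before the XML itself):

// Hypothetical helper: loop until exactly 'len' bytes arrive or recv() fails.
bool recvExact(SOCKET socket, char* out, int len)
{
    while (len > 0)
    {
        int n = recv(socket, out, len, 0);
        if (n <= 0)
            return false; // disconnect or error
        out += n;
        len -= n;
    }
    return true;
}

bool ReceiveXml(SOCKET socket, string& sBuffer)
{
    u_long netLen;
    if (!recvExact(socket, (char*)&netLen, sizeof(netLen)))
        return false;
    u_long len = ntohl(netLen); // the sender wrote the length with htonl()
    sBuffer.resize(len);
    return (len == 0) || recvExact(socket, &sBuffer[0], (int)len);
}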
I'm currently trying to get a client/daemon communication via an AF_UNIX socket up and running.
At the moment the client successfully sends a message, the daemon receives and processes it and then should send the message back.
Well, that's where the problem is. As soon as the daemon tries to send the message back, nothing happens; the client hangs trying to read a message, and if I kill the client the daemon dies with it.
Following is the daemon code:
//successful call to accept, I have a file descriptor now...
int c = 0;
while ((c = recv(fd, (char*)&buf[0], bufferSize, 0)))
{
    if (c == -1 || c == 0)
        break;
    tmp.append(buf.begin(), buf.begin() + c);
}
writeLog(tmp);
tmp = evaluateMsg(tmp);
writeLog(tmp);

//I assume this send call is hanging
if (send(fd, tmp.c_str(), tmp.size(), 0) < 0)
    writeLog("Could not write message back!");
close(fd);
And this is the client code:
//connect(); is successful
//send(); as well - the recv(); call is hanging forever
while ((c = recv(sockfd, (char*)&buf[0], 1024, 0)))
{
    if (c == -1)
    {
        cout << "Error";
        break;
    }
    else if (c == 0)
        break;
    tmp.append(buf.begin(), buf.begin() + c);
}
Please note that the code is heavily cut down for the sake of simplicity and readability (especially the code to daemonize and create the actual AF_UNIX socket (which are both successful)).
UPDATE:
I could verify that the client-side recv() call is never returning, which means that the daemon-side send() call is hanging. Why?
I don't see any reason why the daemon-side recv() loop would ever end. Why would recv() return 0 or -1 if the socket is still open?
You need to understand, at the application level, when the client has finished sending data - the content itself should make that clear - and then finish the recv() loop and continue to the send() part of the server.
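One concrete way to achieve that for a one-request exchange like this (my suggestion, not part of the answer above) is for the client to half-close its write side after the last send(); the daemon's recv() then returns 0 and its loop ends, while the client can still read the reply:

//client side, after the last send():
shutdown(sockfd, SHUT_WR); // signals EOF to the daemon, so its recv() returns 0
//the read side stays open, so the client can still recv() the daemon's reply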
Alright, the solution was pretty simple.
#selalerer was right about the return value of recv(), which leads to this working code snippet:
while ((c = recv(fd, (char*)&buf[0], bufferSize, 0)))
{
    if (c == -1)
    {
        /* handle error */
        break;
    }
    tmp.append(buf.begin(), buf.begin() + c);
    if (c < bufferSize)
    {
        //no more to read, therefore stop reading
        break;
    }
}
I have written simple client/server applications to test the characteristics of non-blocking sockets. Here is some brief information about the server and client:
//On Linux, the server thread will send
//a file to the client using a non-blocking socket
void *SendFileThread(void *param){
    CFile* theFile = (CFile*) param;
    int sockfd = theFile->GetSocket();
    set_non_blocking(sockfd);
    set_sock_sndbuf(sockfd, 1024 * 64); //set the send buffer to 64K

    //get the total packet count of the target file
    int PacketCount = theFile->GetFilePacketsCount();
    int currPacket = 0;
    while (currPacket < PacketCount){
        char buffer[512];
        int len = 0;
        //get packet data by packet no.
        GetPacketData(currPacket, buffer, len);
        //send_non_blocking_sock_data will loop and send
        //data into the buffer of sockfd until there is an error
        int ret = send_non_blocking_sock_data(sockfd, buffer, len);
        if (ret < 0 && errno == EAGAIN){
            continue;
        } else if (ret < 0 || ret == 0){
            break;
        } else {
            currPacket++;
        }
        ......
    }
}
//On Windows, the client thread will do something like below
//to receive the file data sent by the server via a blocking socket
void *RecvFileThread(void *param){
    int sockfd = (int) param; //blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); //set the receive buffer to 256K

    while (1){
        struct timeval timeout;
        timeout.tv_sec = 1;
        timeout.tv_usec = 0;

        fd_set rds;
        FD_ZERO(&rds);
        FD_SET(sockfd, &rds);

        //actually, the first parameter of select() is
        //ignored on Windows, though on Linux this parameter
        //should be (maximum socket value + 1)
        int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
        if (ret == 0){
            // log that the timer expired
            CLogger::log("RecvFileThread---Calling select() timeouts\n");
        } else if (ret) {
            //log the amount of data received
            char buffer[1024 * 256];
            int len = recv(sockfd, buffer, sizeof(buffer), 0);
            // handle error
            process_tcp_data(buffer, len);
        } else {
            //handle the error and break
            break;
        }
    }
}
What surprised me is that the server thread fails frequently because the socket buffer is full: e.g., to send a file of 14MB it reports 50,000 failures with errno = EAGAIN. However, via logging I observed there are tens of timeouts during the transfer; the flow is like below:
on the Nth loop, select() succeeds and 256K of data is read successfully.
on the (N+1)th loop, select() fails with a timeout.
on the (N+2)th loop, select() succeeds and 256K of data is read successfully.
Why would there be timeouts interleaved during the receiving? Can anyone explain this phenomenon?
[UPDATE]
1. Uploading a file of 14MB to the server only takes 8 seconds.
2. Using the same file as in 1), the server takes nearly 30 seconds to send all the data to the client.
3. All sockets used by the client are blocking. All sockets used by the server are non-blocking.
Regarding #2, I think the timeouts are the reason why #2 takes much more time than #1, and I wonder why there would be so many timeouts when the client is busy receiving data.
[UPDATE2]
Thanks for the comments from #Duck, #ebrob, #EJP and #ja_mesa. I will do more investigation today and then update this post.
Regarding why I send 512 bytes per loop in the server thread: it is because I found the server thread sends data much faster than the client thread can receive it. I am still very confused about why timeouts happen in the client thread.
Consider this more of a long comment than an answer, but as several people have noted, the network is orders of magnitude slower than your processor. The point of non-blocking I/O is that the difference is so great that you can actually use it to do real work rather than blocking. Here you are just pounding on the elevator button hoping that makes a difference.
I'm not sure how much of your code is real and how much is chopped up for posting, but in the server you don't account for (ret == 0), i.e. normal shutdown by the peer.
The select() in the client is wrong. Again, I'm not sure whether that was sloppy editing, but if not, the number of parameters is wrong and, more concerning, the first parameter - which should be the highest file descriptor for select() to look at, plus one - is zero. Depending on the implementation of select(), I wonder if that is in fact just turning select() into a fancy sleep statement.
You should be calling recv() first, and then call select() only if recv() tells you to do so. Don't call select() first; that is a waste of processing. recv() knows if data is immediately available or if it has to wait for data to arrive:
void *RecvFileThread(void *param){
    int sockfd = (int) param; //blocking socket
    set_sock_rcvbuf(sockfd, 1024 * 256); //set the receive buffer to 256K

    char buffer[1024 * 256];

    while (1){
        int len = recv(sockfd, buffer, sizeof(buffer), 0);
        if (len == -1) {
            if (WSAGetLastError() != WSAEWOULDBLOCK) {
                //handle error
                break;
            }

            struct timeval timeout;
            timeout.tv_sec = 1;
            timeout.tv_usec = 0;

            fd_set rds;
            FD_ZERO(&rds);
            FD_SET(sockfd, &rds);

            //actually, the first parameter of select() is
            //ignored on Windows, though on Linux this parameter
            //should be (maximum socket value + 1)
            int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
            if (ret == -1) {
                // handle error
                break;
            }
            if (ret == 0) {
                // log that the timer expired
                break;
            }

            // socket is readable so try to read again
            continue;
        }

        if (len == 0) {
            // handle graceful disconnect
            break;
        }

        //log the amount of data received
        process_tcp_data(buffer, len);
    }
}
Do something similar on the sending side as well. Call send() first, and then call select() waiting for writability only if send() tells you to do so.
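A sketch of what that sending pattern could look like on the non-blocking (Linux) server side (send_with_wait() is a hypothetical helper name):

int send_with_wait(int sockfd, const char *data, int len){
    while (len > 0){
        int n = send(sockfd, data, len, 0);
        if (n >= 0){
            data += n; //a partial send is fine, just advance
            len -= n;
            continue;
        }
        if (errno != EAGAIN && errno != EWOULDBLOCK){
            //handle real error
            return -1;
        }
        //the send buffer is full: only now is it worth waiting for writability
        fd_set wds;
        FD_ZERO(&wds);
        FD_SET(sockfd, &wds);
        struct timeval timeout;
        timeout.tv_sec = 1;
        timeout.tv_usec = 0;
        int ret = select(sockfd + 1, NULL, &wds, NULL, &timeout);
        if (ret == -1){
            //handle error
            return -1;
        }
        //ret == 0 is a timeout: loop and decide whether to keep waiting
    }
    return 0; //everything was handed to the socket
}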
I'm using the WSAEventSelect I/O model in Windows Sockets, and now I want to know: how may I know that my send and receive operations have sent and received all of the data?
After I know that, how should I design a way to send the data fully? Any examples would be really appreciated.
Here is the code (sample code from the book I'm learning from):
SOCKET SocketArray[WSA_MAXIMUM_WAIT_EVENTS];
WSAEVENT EventArray[WSA_MAXIMUM_WAIT_EVENTS], NewEvent;
SOCKADDR_IN InternetAddr;
SOCKET Accept, Listen;
DWORD EventTotal = 0;
DWORD Index, i;
WSANETWORKEVENTS NetworkEvents;

// Set up socket for listening etc...
// ....

NewEvent = WSACreateEvent();
WSAEventSelect(Listen, NewEvent, FD_ACCEPT | FD_CLOSE);

listen(Listen, 5);

SocketArray[EventTotal] = Listen;
EventArray[EventTotal] = NewEvent;
EventTotal++;

while (TRUE)
{
    // Wait for network events on all sockets
    Index = WSAWaitForMultipleEvents(EventTotal,
        EventArray, FALSE, WSA_INFINITE, FALSE);
    Index = Index - WSA_WAIT_EVENT_0;

    // Iterate through all events to see if more than one is signaled
    for (i = Index; i < EventTotal; i++)
    {
        Index = WSAWaitForMultipleEvents(1, &EventArray[i], TRUE, 1000, FALSE);
        if ((Index == WSA_WAIT_FAILED) || (Index == WSA_WAIT_TIMEOUT))
            continue;
        else
        {
            Index = i;
            WSAEnumNetworkEvents(
                SocketArray[Index],
                EventArray[Index],
                &NetworkEvents);

            // Check for FD_ACCEPT messages
            if (NetworkEvents.lNetworkEvents & FD_ACCEPT)
            {
                if (NetworkEvents.iErrorCode[FD_ACCEPT_BIT] != 0)
                {
                    printf("FD_ACCEPT failed with error %d\n",
                           NetworkEvents.iErrorCode[FD_ACCEPT_BIT]);
                    break;
                }

                // Accept a new connection, and add it to the
                // socket and event lists
                Accept = accept(SocketArray[Index], NULL, NULL);
                NewEvent = WSACreateEvent();
                WSAEventSelect(Accept, NewEvent, FD_READ | FD_CLOSE);
                EventArray[EventTotal] = NewEvent;
                SocketArray[EventTotal] = Accept;
                EventTotal++;
                printf("Socket %d connected\n", Accept);
            }

            // Process FD_READ notification
            if (NetworkEvents.lNetworkEvents & FD_READ)
            {
                if (NetworkEvents.iErrorCode[FD_READ_BIT] != 0)
                {
                    printf("FD_READ failed with error %d\n",
                           NetworkEvents.iErrorCode[FD_READ_BIT]);
                    break;
                }

                // Read data from the socket
                recv(SocketArray[Index - WSA_WAIT_EVENT_0],
                     buffer, sizeof(buffer), 0);

                // here I do some processing on the data received
                DoSomething(buffer);

                // now I want to send data
                send(SocketArray[Index - WSA_WAIT_EVENT_0],
                     buffer, sizeof(buffer), 0);
                // how can I be assured that the data is sent completely?
            }

            // FD_CLOSE handling here
            // ......
            // ......
        }
    }
}
What I thought was that I would set a boolean flag to determine when the receive has completed (the message will have its length prefixed) and then start processing that data. But what about send()? Can you please tell me the possibilities?
**EDIT:** See the FD_READ event part.
Unless the protocol (application layer) you are handling gives you information about how much data you're about to receive, the only way to determine that there is nothing more to receive is when the peer disconnects. If the server simply stops sending, you can't determine whether it's the end or it's just busy. It ends when it ends. You also can't determine whether the server disconnected because it's the end or because the connection was broken.
That's why most protocols inform the peer of how many bytes are going to be sent before sending them, or place a boundary at the end of the data.
About sending, you must be aware of the buffer you're using. When you send(), the data goes into a buffer (64KB by default). send() returns the number of bytes placed in the buffer; if it's less than the number of bytes you were trying to send, you have to manage it and try again the next time you receive an FD_WRITE event.
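For illustration, managing that retry could look roughly like this (a sketch with made-up names; 'pending' holds whatever send() did not accept and is flushed when FD_WRITE is signaled):

std::string pending; // bytes our code accepted but the socket buffer did not

bool SendOrQueue(SOCKET s, const char* data, int len)
{
    int n = send(s, data, len, 0);
    if (n == SOCKET_ERROR)
    {
        if (WSAGetLastError() != WSAEWOULDBLOCK)
            return false; // real error
        n = 0; // buffer full, nothing was taken
    }
    if (n < len)
        pending.append(data + n, len - n); // retry this tail on the next FD_WRITE
    return true;
}

// In the event loop, on FD_WRITE: send pending.data() the same way and
// erase from 'pending' however many bytes send() reports it accepted.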
You can't be sure how much data has already been received by the peer unless it keeps you informed (mIRC DCC does that).
Not sure it clarified your doubts, hope it helped :)
When you are doing the recv, you need to save the return status to determine if the data was received. recv returns the number of bytes received, and I would use the flag MSG_WAITALL instead of zero for the fourth parameter to receive all of the message (based on the buffer size). If the status recv returns is negative, there was an error of some nature, such as the connection being closed from the other end, or some other issue.
As for the send, you should save the return value as well, since it is also a status, but in this case there is no flag that makes all the data be sent before returning. You will have to determine the amount sent and adjust the buffer and send size based on that value. As with recv, a negative value indicates an error occurred.
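A short sketch of both calls as described (the buffer and message variables are assumptions):

char buffer[1024];
// MSG_WAITALL blocks until the buffer is full, the connection closes, or an error occurs
int received = recv(sock, buffer, sizeof(buffer), MSG_WAITALL);
if (received <= 0)
{
    // 0 = connection closed, negative = error (check WSAGetLastError())
}

int total = 0;
while (total < msgLen)
{
    int sent = send(sock, msg + total, msgLen - total, 0);
    if (sent < 0)
        break; // error
    total += sent; // adjust for a partial send and continue
}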
I would read the function descriptions on the Microsoft website for recv and send for more information on the return values and flags.