Winsock Send and Receive - C++

I'm using the WSAEventSelect I/O model in Windows Sockets. How can I tell whether my send and receive operations have sent and received all of the data?
And once I know that, how should I design the code so that the data is always sent fully? Any examples would be really appreciated.
Here is the code (sample code from the book I'm learning from):
SOCKET SocketArray[WSA_MAXIMUM_WAIT_EVENTS];
WSAEVENT EventArray[WSA_MAXIMUM_WAIT_EVENTS], NewEvent;
SOCKADDR_IN InternetAddr;
SOCKET Accept, Listen;
DWORD EventTotal = 0;
DWORD Index, i;
WSANETWORKEVENTS NetworkEvents;
char buffer[4096]; // receive buffer (declaration missing from the book's sample)

// Set up socket for listening etc...
// ....

NewEvent = WSACreateEvent();
WSAEventSelect(Listen, NewEvent, FD_ACCEPT | FD_CLOSE);
listen(Listen, 5);

SocketArray[EventTotal] = Listen;
EventArray[EventTotal] = NewEvent;
EventTotal++;

while (TRUE)
{
    // Wait for network events on all sockets
    Index = WSAWaitForMultipleEvents(EventTotal,
        EventArray, FALSE, WSA_INFINITE, FALSE);
    Index = Index - WSA_WAIT_EVENT_0;

    // Iterate through all events to see if more than one is signaled
    for (i = Index; i < EventTotal; i++)
    {
        Index = WSAWaitForMultipleEvents(1, &EventArray[i], TRUE, 1000, FALSE);
        if ((Index == WSA_WAIT_FAILED) || (Index == WSA_WAIT_TIMEOUT))
            continue;
        else
        {
            Index = i;
            WSAEnumNetworkEvents(
                SocketArray[Index],
                EventArray[Index],
                &NetworkEvents);

            // Check for FD_ACCEPT messages
            if (NetworkEvents.lNetworkEvents & FD_ACCEPT)
            {
                if (NetworkEvents.iErrorCode[FD_ACCEPT_BIT] != 0)
                {
                    printf("FD_ACCEPT failed with error %d\n",
                        NetworkEvents.iErrorCode[FD_ACCEPT_BIT]);
                    break;
                }

                // Accept a new connection, and add it to the
                // socket and event lists
                Accept = accept(SocketArray[Index], NULL, NULL);
                NewEvent = WSACreateEvent();
                WSAEventSelect(Accept, NewEvent, FD_READ | FD_CLOSE);
                EventArray[EventTotal] = NewEvent;
                SocketArray[EventTotal] = Accept;
                EventTotal++;
                printf("Socket %d connected\n", (int)Accept);
            }

            // Process FD_READ notification
            if (NetworkEvents.lNetworkEvents & FD_READ)
            {
                if (NetworkEvents.iErrorCode[FD_READ_BIT] != 0)
                {
                    printf("FD_READ failed with error %d\n",
                        NetworkEvents.iErrorCode[FD_READ_BIT]);
                    break;
                }

                // Read data from the socket (Index was already normalized
                // above, so no further WSA_WAIT_EVENT_0 adjustment is needed)
                recv(SocketArray[Index], buffer, sizeof(buffer), 0);

                // here I do some processing on the data received
                DoSomething(buffer);

                // now I want to send data
                send(SocketArray[Index], buffer, sizeof(buffer), 0);
                // how can I be assured that the data is sent completely
            }

            // FD_CLOSE handling here
            // ......
            // ......
        }
    }
}
My thought was to set a boolean flag to mark when the receive has completed (the message will have its length prefixed) and then start processing that data. But what about send()? Can you please tell me the possibilities?
**EDIT:** See the FD_READ event part.

Unless the protocol (application layer) you are handling gives you information about how much data you are about to receive, the only way to determine that there is nothing more to receive is when the peer disconnects. If the peer simply stops sending, you can't determine whether that is the end or it is just busy. It ends when it ends. You also can't determine whether the peer disconnected because the data ended or because the connection was broken.
That's why most protocols inform the peer how many bytes are going to be sent before sending them, or place a boundary marker at the end of the data.
About sending, you must be aware of the buffer you're using. When you call send(), the data goes into a socket buffer (64 KB by default). send() returns the number of bytes placed in that buffer; if it is less than the number of bytes you were trying to send, you have to manage the remainder and try again when you next receive an FD_WRITE event.
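For illustration, a minimal sketch of such a retry loop on a blocking socket (the sendAll name is invented; under WSAEventSelect you would instead buffer the unsent tail and resume it on FD_WRITE):
#include <winsock2.h>

// Keep calling send() until every byte has been handed to the socket buffer.
// On a non-blocking socket you would stop at WSAEWOULDBLOCK instead and
// resume when FD_WRITE arrives.
bool sendAll(SOCKET s, const char* data, int len)
{
    int total = 0;
    while (total < len)
    {
        int n = send(s, data + total, len - total, 0);
        if (n == SOCKET_ERROR)
            return false;   // inspect WSAGetLastError() for the cause
        total += n;
    }
    return true;
}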
You can't be sure how much data has actually been received by the peer unless the peer keeps you informed (mIRC DCC does that, for example).
Not sure this clears up your doubts; hope it helped :)

When you call recv, you need to save the return status to determine whether the data was received. recv returns the number of bytes received, and I would use the flag MSG_WAITALL instead of zero for the fourth parameter to receive the whole message (based on the buffer size). If the status recv returns is negative, there was an error of some nature, such as the connection being closed from the other end or some other issue.
As for send, you should save the return value as well, since it also carries the status, but in this case there is no flag that makes it send all the data before returning. You will have to determine the amount sent and adjust the buffer pointer and send size based on that value. As with recv, a negative value indicates an error occurred.
I would read the function descriptions on the Microsoft website for recv and send for more information on the return values and flags.
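As a minimal sketch of the receive side under these rules, assuming a 4-byte length prefix in network byte order announces the message size (that framing is my assumption, not something from the question):
#include <winsock2.h>
#include <cstdint>
#include <vector>

// Read one length-prefixed message on a *blocking* socket.
// MSG_WAITALL makes recv() wait until the requested buffer is full,
// instead of returning whatever happens to be available.
bool readLengthPrefixed(SOCKET sock, std::vector<char>& payload)
{
    uint32_t netLen;
    int status = recv(sock, reinterpret_cast<char*>(&netLen), sizeof(netLen), MSG_WAITALL);
    if (status != sizeof(netLen))
        return false;                 // closed, errored, or interrupted

    uint32_t len = ntohl(netLen);     // prefix assumed in network byte order
    payload.resize(len);
    if (len == 0)
        return true;
    status = recv(sock, payload.data(), (int)len, MSG_WAITALL);
    return status == (int)len;
}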

Related

Not all data is transferred via Socket

I can't send large data packets over my setup (currently sending to 127.0.0.1); at about 30 kB this functionality starts to fail. For testing I have an application that just starts a Receiver and a Sender, starts two threads, one for sending and one for receiving, and when both have finished, compares whether the sent string is the same as the received string.
void SenderThread(int count)
{
    messageOut = "";
    messageOut.append(count, 'A');
    sender->sendData(messageOut);
}

void ReceivingThread()
{
    receiver->ReceiveData(message);
}

int main()
{
    receiver = new utility::Receiver();
    sender = new utility::Sender();
    receiver->startSocket(9000);
    sender->connectToSocket("127.0.0.1", 9000);
    receiver->accept();

    for (int count = 100; count < 1024 * 1024; count += 100)
    {
        std::thread sendThread(SenderThread, count);
        std::thread recvThread(ReceivingThread);
        sendThread.join();
        recvThread.join();

        printf("Sent data of length %zu ", messageOut.length());
        if (message == messageOut)
            printf("successfully.\n");
        else
        {
            printf("not successfully.\n");
            printf("Length of original message: %zu, Length of received message: %zu.\n",
                   messageOut.length(), message.length());
            break;
        }
    }

    delete receiver;
    delete sender;
}
I have the following code for my sending socket:
bool utility::Sender::sendData(const std::string & message)
{
    int numBytes = 0;
    int totalSent = 0;

    // Break condition: send() fails, or the whole message was transferred
    while (totalSent < (int)message.length()
        && send(message.substr(totalSent).c_str(), message.length() - totalSent, numBytes))
    {
        totalSent += numBytes;
    }
    return false;
}
bool utility::Sender::send(const char* pBuffer, int32_t lengthOfBuffer, int32_t &numBytes)
{
    numBytes = ::send(connectSocket, pBuffer, lengthOfBuffer, 0);
    if (numBytes == SOCKET_ERROR)
        return false;
    return true;
}
The receiving side:
bool utility::Receiver::ReceiveData(std::string& message)
{
    int32_t numBytes = 0;
    char data[defaultBufferLength];

    // Set to blocking for the first data package
    u_long iMode = 0;
    ioctlsocket(tcpSocket, FIONBIO, &iMode);

    bool success = receive(data, defaultBufferLength, numBytes);
    message = std::string(data, numBytes);

    // Set to non-blocking for the rest of the journey
    iMode = 1;
    ioctlsocket(tcpSocket, FIONBIO, &iMode);

    while (numBytes == defaultBufferLength && receive(data, defaultBufferLength, numBytes))
    {
        message.append(data, numBytes);
    }
    return success;
}

bool utility::Receiver::receive(char* pBuffer, int32_t lengthOfBuffer, int32_t& numBytes)
{
    int32_t flags = 0;
    numBytes = recv(tcpSocket, pBuffer, lengthOfBuffer, flags);
    if (numBytes == -1)
    {
        numBytes = 0;
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return false;
        else
            close();
    }
    return true;
}
The output I am getting is
Sent data of length 39200 successfully.
Sent data of length 39300 successfully.
Sent data of length 39400 successfully.
Sent data of length 39500 successfully.
Sent data of length 39600 successfully.
Sent data of length 39700 successfully.
Sent data of length 39800 successfully.
Sent data of length 39900 successfully.
Sent data of length 40000 successfully.
Sent data of length 40100 successfully.
Sent data of length 40200 successfully.
Sent data of length 40300 successfully.
Sent data of length 40400 successfully.
Sent data of length 40500 not successfully.
Length of original message: 40500, Length of received message: 29200.
The most irritating thing, and probably the cause of this, is ::send(...). I can give it 2 MB of char*, and it will just send it in one swoop (but the receiver fails miserably). What can I do about that?
TCP is a byte-oriented protocol, not message oriented.
send does not create a message. recv does not receive a message. They work on blocks of bytes, and multiple send calls can be combined at the network layer (for efficiency) or broken into multiple TCP packets. In practice, even if you turn off Nagle's algorithm, if a frame is lost at the physical layer and TCP has to retry the transmission, the retransmit will include as much data added to the buffer afterward as it can fit in an outgoing datagram.
So you can't rely on any particular mapping between send calls and recv calls. The only guarantee is that the bytes are delivered to your socket in the same order they were sent. If boundaries are important, you have to create them yourself. Length prefixes are popular in combination with TCP, special framing sequences less so.
You do already have a loop for reassembling messages... but you break out of the loop when you see EAGAIN / EWOULDBLOCK or a partly filled buffer, and continue processing. That's a problem, because you only have a partial message at that point. You need a way to delay processing until you have a complete message.
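As an illustration of length prefixes, here is a minimal sketch of a framed read on a blocking socket (recvExact and the 4-byte prefix are assumptions for the example, not part of your code):
#include <winsock2.h>
#include <cstdint>
#include <string>

// Loop over partial recv() results until exactly `len` bytes have arrived.
static bool recvExact(SOCKET s, char* buffer, int len)
{
    int total = 0;
    while (total < len)
    {
        int n = recv(s, buffer + total, len - total, 0);
        if (n <= 0)                // 0 = graceful close, SOCKET_ERROR = failure
            return false;
        total += n;
    }
    return true;
}

// One message = 4-byte length in network byte order, then the payload.
static bool recvMessage(SOCKET s, std::string& message)
{
    uint32_t netLen;
    if (!recvExact(s, reinterpret_cast<char*>(&netLen), sizeof(netLen)))
        return false;
    uint32_t len = ntohl(netLen);
    message.resize(len);
    return len == 0 || recvExact(s, &message[0], (int)len);
}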
Adding to Ben Voigt's answer: you need to build a higher-level message system on top of your socket, so that the sender first tells the receiver the message size, and your receive method keeps a session or buffer storage to which you append the received data until the total received matches the announced message size. Once that requirement is met, you can process the data; a rough sketch follows below.
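A sketch of that session-buffer idea, again assuming a 4-byte length prefix (the Connection struct and all names are invented for illustration):
#include <winsock2.h>
#include <cstdint>
#include <cstring>
#include <string>

struct Connection
{
    SOCKET sock;
    std::string pending;   // bytes received so far, possibly a partial message
};

// Call on every FD_READ (or whenever the non-blocking socket is readable).
void onReadable(Connection& c, void (*processMessage)(const std::string&))
{
    char chunk[4096];
    int n;
    while ((n = recv(c.sock, chunk, sizeof(chunk), 0)) > 0)
    {
        c.pending.append(chunk, n);

        // Hand over every complete [length][payload] message in the buffer.
        while (c.pending.size() >= sizeof(uint32_t))
        {
            uint32_t netLen;
            std::memcpy(&netLen, c.pending.data(), sizeof(netLen));
            uint32_t len = ntohl(netLen);
            if (c.pending.size() < sizeof(uint32_t) + len)
                break;   // message incomplete; wait for more data
            processMessage(c.pending.substr(sizeof(uint32_t), len));
            c.pending.erase(0, sizeof(uint32_t) + len);
        }
    }
    // n == 0: peer closed; n < 0 with WSAEWOULDBLOCK: drained for now.
}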

Is this function doing something wrong with the sockets?

I have been using the following function to receive XML files for a while, but it has been going wrong for some time now and I think the problem is on the customer's network. I'm not sure; it's just a guess.
It sometimes happens when they try to send me XML files bigger than 13 KB - the received buffer contains trash like this:
...
<Identifiers>
<Identifier>
<PID>E3744</PID>
</Identifier>
<Identifier IDType="SHC">
<PID>10021020</PID>
</Identifier>
<Identifier><*X| Å Å Ÿòc PV“R¢ E ·Â÷# #€ˆ
þõ
øæ=Ì×KåÅôdËÞ¦P s÷j
<PID>1002102-0</PID>
</Identifier>
<Identifier>
<PID>1002102</PID>
</Identifier>
</Identifiers>
...
Here is the function:
bool ReceiveBuffer(HWND hDlg, const SOCKET& socket, string& sBuffer)
{
    WSAAsyncSelect(socket, hDlg, WM_WINSOCK, FD_CLOSE);

    int iBufSize = 10000000; //10MB
    int iBufVarSize = sizeof(iBufSize);
    if (setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, iBufVarSize) == SOCKET_ERROR)
        if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
            WriteLog("Unable to GET buffer receiving size");

    char* buf = (char*)MALLOCZ(iBufSize);
    if (!buf)
    {
        WriteLog("Unable to allocate memory");
        return false;
    }

    int iCharsRead = 0;
    do
    {
        memset(buf, 0, iBufSize);
        iCharsRead = recv(socket, buf, iBufSize, 0);
        if (iCharsRead > 0)
            sBuffer.append(buf, iCharsRead);
    }
    while (iCharsRead > 0);

    FREE(buf);
    buf = NULL;
    return true;
}
ReceiveBuffer() should not be calling WSAAsyncSelect() or setting SO_RCVBUF. That is the responsibility of whatever code initially creates the SOCKET.
But more importantly, WSAAsyncSelect() puts the socket into non-blocking mode, per the documentation:
The WSAAsyncSelect function automatically sets socket s to nonblocking mode, regardless of the value of lEvent.
However, your reading loop is not accounting for possible WSAEWOULDBLOCK errors from recv() so it can call recv() again to keep reading.
ReceiveBuffer() is also assuming that if setsockopt() succeeds then the actual buffer size is really the requested size, which is not guaranteed. So you need to call getsockopt() regardless of whether setsockopt() succeeds or fails, per the documentation:
SO_RCVBUF and SO_SNDBUF
When a Windows Sockets implementation supports the SO_RCVBUF and SO_SNDBUF options, an application can request different buffer sizes (larger or smaller). The call to setsockopt can succeed even when the implementation did not provide the whole amount requested. An application must call getsockopt with the same option to check the buffer size actually provided.
But really, setting SO_RCVBUF on every call to ReceiveBuffer() is not necessary in the first place. recv() returns whatever data is currently available at that moment, up to the requested buffer size. It is very unlikely that it will return anywhere close to 10MB of data on any given read. So you are just wasting a lot of memory for no real benefit. It is one thing to set the socket's internal buffer to 10MB if you are on a fast network. It is another thing to allocate a memory buffer of 10MB to receive data from each recv() call. You should use a much smaller memory buffer. 1K is a common size to use.
But beyond that, regardless of the buffer size you use, ReceiveBuffer() is reading arbitrary bytes in an endless loop until the socket is disconnected or errors (and not accounting for non-blocking errors). When the socket does eventually disconnect/error, ReceiveBuffer() is returning true instead of false, so the caller has no idea that something went wrong, or that sBuffer may be incomplete.
Also, in case the caller calls ReceiveBuffer() multiple times with the same variable for the sBuffer parameter, you should call sBuffer.clear() before starting the reading loop to make sure you are not appending new data to the end of stale data.
Now, all of the above is just technical issues with your code logic. But there is also a semantic element as well. XML has a finite length to it, but your current code has no way of knowing what that length actually is. It is the sender's responsibility to tell the receiver when the XML has stopped being sent. That could be by sending the XML's length before sending the XML itself, so the receiver knows how many bytes to expect. Or that could be by sending a unique delimiter, like a null terminator, at the end of the XML, so the receiver can stop reading when it sees the delimiter. Or that could be by gracefully closing the connection at the end of the XML (which is a bad idea, because then the receiver can't differentiate between end-of-data and data loss). But it has to do something.
Now, with all of that said, try something more like this instead (I'm assuming a graceful disconnect is the end-of-data indicator, since that is what your original code is doing - you need to seriously consider a different protocol design!):
bool ReceiveBuffer(SOCKET socket, string& sBuffer)
{
    sBuffer.clear();

    /*
    int iBufSize = 1024 * 1024 * 10; //10MB
    int iBufVarSize = sizeof(iBufSize);
    setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, iBufVarSize);
    if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
        WriteLog("Unable to GET buffer receiving size");
    */

    char* buf = (char*) malloc(1024);
    if (!buf)
    {
        WriteLog("Unable to allocate memory");
        return false;
    }

    int iCharsRead;
    bool bRet = true;

    do
    {
        iCharsRead = recv(socket, buf, 1024, 0);
        if (iCharsRead > 0)
        {
            sBuffer.append(buf, iCharsRead);
        }
        else if (iCharsRead == 0)
        {
            // socket disconnected gracefully
            break;
        }
        else
        {
            if (WSAGetLastError() != WSAEWOULDBLOCK)
            {
                // socket error!
                WriteLog("Unable to read from socket");
                bRet = false;
                break;
            }

            // socket is non-blocking and there is no data available
            // at this moment. Call recv() again...

            // optional: call select() to wait for new data to arrive
            // before calling recv() again. For instance, this will
            // allow you to fail the function if no new data arrived
            // within a timeout period...
            /*
            fd_set fd;
            FD_ZERO(&fd);
            FD_SET(socket, &fd);

            timeval tv;
            tv.tv_sec = 30;
            tv.tv_usec = 0;

            int ret = select(0, &fd, NULL, NULL, &tv);
            if (ret <= 0)
            {
                if (ret == 0)
                {
                    // timeout!
                    WriteLog("Timeout waiting for data from socket");
                }
                else
                {
                    // socket error!
                    WriteLog("Unable to wait for data from socket");
                }
                bRet = false;
                break;
            }
            */
        }
    }
    while (true);

    free(buf);
    return bRet;
}

Can someone explain the function of writeable and readable fd_sets with WinSock?

I'm writing a network game for a university project and while I have messages being sent and received between a client and a server, I'm unsure how to implement a writeable fd_set (my lecturer's example code only included a readable fd_set) and what the function of both fd_sets is with select(). Any insight you could give would be great in helping me understand this.
My server code is as such:
bool ServerSocket::Update() {
    // Update the connections with the server
    fd_set readable;
    FD_ZERO(&readable);

    // Add server socket, which will be readable if there's a new connection
    FD_SET(m_socket, &readable);

    // Add connected clients' sockets
    if(!AddConnectedClients(&readable)) {
        Error("Couldn't add connected clients to fd_set.");
        return false;
    }

    // Set timeout to wait for something to happen (0.5 seconds)
    timeval timeout;
    timeout.tv_sec = 0;
    timeout.tv_usec = 500000;

    // Wait for the socket to become readable
    int count = select(0, &readable, NULL, NULL, &timeout);
    if(count == SOCKET_ERROR) {
        Error("Select failed, socket error.");
        return false;
    }

    // Accept new connection to the server socket if readable
    if(FD_ISSET(m_socket, &readable)) {
        if(!AddNewClient()) {
            return false;
        }
    }

    // Check all clients to see if there are messages to be read
    if(!CheckClients(&readable)) {
        return false;
    }

    return true;
}
A socket becomes:
readable if there is either data in the socket receive buffer or a pending FIN (recv() is about to return zero)
writable if there is room in the socket send buffer. Note that this is true nearly all the time, so you should use it only when you've encountered a prior EWOULDBLOCK/EAGAIN on the socket, and stop using it when you don't.
You'd create an fd_set variable called writeable, initialize it the same way (with the same sockets), and pass it as select's third argument:
select(0, &readable, &writeable, NULL, &timeout);
Then after select returns you'd check whether each socket is still in the set writeable. If so, then it's writeable.
Basically, exactly the same way readable works, except that it tells you a different thing about the socket.
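For instance, a minimal sketch of passing a write set alongside the read set (assuming m_socket has pending unsent data after a prior WSAEWOULDBLOCK):
fd_set readable, writeable;
FD_ZERO(&readable);
FD_ZERO(&writeable);
FD_SET(m_socket, &readable);
FD_SET(m_socket, &writeable);   // only add while you have pending data to send

timeval timeout;
timeout.tv_sec = 0;
timeout.tv_usec = 500000;

int count = select(0, &readable, &writeable, NULL, &timeout);
if (count > 0)
{
    if (FD_ISSET(m_socket, &readable))
    {
        // data (or a pending FIN) is available: call recv()
    }
    if (FD_ISSET(m_socket, &writeable))
    {
        // room in the send buffer again: retry the queued send()
    }
}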
select() is terribly outdated and its interface is arcane. poll() (or its Windows counterpart WSAPoll) is a modern replacement for it and should always be preferred.
It would be used in the following manner:
WSAPOLLFD pollfd = {m_socket, POLLWRNORM, 0};
int rc = WSAPoll(&pollfd, 1, 100);
if (rc == 1) {
    // Socket is ready for writing!
}

c++ Socket receive takes a long time

I am writing the client side of a socket. When there is something to read my code works fine, but when there is nothing to read, recv never returns. Help please.
Code:
m_socket = socket(AF_INET, SOCK_STREAM, 0);
struct sockaddr_in dest;
if (m_socket)
{
    memset(&dest, 0, sizeof(dest));              /* zero the struct */
    dest.sin_family = AF_INET;
    dest.sin_addr.s_addr = inet_addr(address);   /* set destination IP number */
    dest.sin_port = htons(port);

    if (connect(m_socket, (struct sockaddr *)&dest, sizeof(struct sockaddr)) == SOCKET_ERROR)
    {
        return false;
    }
    else
    {
        std::vector<char> inStartup1(2);
        int recvReturn = recv(m_socket, &inStartup1.at(0), inStartup1.size(), 0);
    }
}
recv is a blocking call by default. This should help you:
The recv() call is normally used only on a connected socket.It returns the length of the message on successful completion. If a message is too long to fit in the supplied buffer, excess bytes may be discarded DEPENDING on the type of socket the message is received from.
If no messages are available at the socket, the receive calls wait for a message to arrive, unless the socket is nonblocking, in which case the value -1 is returned and the external variable errno is set to EAGAIN or EWOULDBLOCK. The receive calls normally return any data available, up to the requested amount, rather than waiting for receipt of the full amount requested.
Taking this one step further, on a server this is how you would correctly handle a connection (socket or serial port does not matter):
make the socket/port non-blocking: this is the first important step; it means that recv() will read what is available (if anything) and return the number of read bytes or -1 in case of an error.
use select(), with a timeout, to find out when data becomes available. So now you wait a certain amount of time for data to become available and then read it.
The next problem to handle is making sure you read the full message. Since there is no guarantee that the whole message will be available when you call recv(), you need to save whatever is available and go back to select() and wait for the next data to become available.
Put everything in a while(cond) construct to make sure you read all the data.
The condition in the while is the only thing left to figure out - you either know the length of the expected message or you use some delimiter to mark the end of the message. A rough sketch of these steps is shown below.
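Here is a minimal sketch of those steps, assuming the message size is known in advance (for instance from a length prefix; the helper names are invented for illustration):
#include <winsock2.h>
#include <cstdint>
#include <string>

// Wait (up to `seconds`) for the socket to become readable.
static bool waitReadable(SOCKET s, long seconds)
{
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(s, &fds);
    timeval tv = { seconds, 0 };
    return select(0, &fds, NULL, NULL, &tv) == 1;
}

// Read one message of a known size from a non-blocking socket,
// saving whatever arrives until the full message is assembled.
static bool readMessage(SOCKET s, uint32_t expected, std::string& out)
{
    out.clear();
    char chunk[4096];
    while (out.size() < expected)
    {
        if (!waitReadable(s, 30))
            return false;                          // timeout or select() error
        size_t remaining = expected - out.size();
        int want = remaining < sizeof(chunk) ? (int)remaining : (int)sizeof(chunk);
        int n = recv(s, chunk, want, 0);
        if (n > 0)
            out.append(chunk, n);
        else if (n == 0)
            return false;                          // peer closed mid-message
        else if (WSAGetLastError() != WSAEWOULDBLOCK)
            return false;                          // real socket error
    }
    return true;
}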
Hope this helps!

clean Windows socket internal buffer

I am wondering if there is a way to clean up the Windows socket internal buffer, because what I want to achieve is this:
while (1) {
    for (i = 0; i < 10; i++) {
        sendto(...); // send 10 UDP datagrams
    }
    for (i = 0; i < 10; i++) {
        recvfrom(Socket, RecBuf, MAX_PKT_SIZE, 0,
                 (SOCKADDR*) NULL, NULL);
        int Status = ProcessBuffer(RecBuf);
        if (Status == SomeCondition) {
            // clean up the rest of the stuff in the socket, so that it doesn't
            // affect the reading in the next iteration of the outer while loop
            MagicalSocketCleanUP(Socket);
            break; // occasionally the receive loop needs to terminate
                   // before finishing all 10 iterations
        }
    }
}
So what I am asking is: is there a function to clean up whatever remains in the socket so that it won't affect my next read? Thank you
The way to clean up data from the internal receive socket buffer is to read data until there is no more data to read. If you do this in a non-blocking way, you do not need to wait for more data in select(), because the WSAEWOULDBLOCK error value means the internal receive socket buffer is empty.
int MagicalSocketCleanUP(SOCKET Socket)
{
    int r;
    std::vector<char> buf(128 * 1024);

    // Windows has no MSG_DONTWAIT flag, so put the whole socket into
    // non-blocking mode; recv() then fails with WSAEWOULDBLOCK once drained.
    u_long nonBlocking = 1;
    ioctlsocket(Socket, FIONBIO, &nonBlocking);

    do {
        r = recv(Socket, &buf[0], (int)buf.size(), 0);
        if (r < 0 && WSAGetLastError() == WSAEINTR)
            continue;
    } while (r > 0);

    if (r < 0 && WSAGetLastError() != WSAEWOULDBLOCK) {
        fprintf(stderr, "%s: socket error %d\n", __func__, WSAGetLastError());
        //... code to handle unexpected error
    }
    return r;
}
But this is not exactly safe. The other end of the socket may have sent good data into the socket buffer too, so this routine may discard more than what you want to discard.
Instead, the data on the socket should be framed in such a way that you know when the data of interest arrives. So instead of a cleanup API, you could extend ProcessBuffer() to discard input until it finds data of interest.
A simpler mechanism would be a message exchange between the two sides of the socket. When the error state is entered, the sender sends a "DISCARDING UNTIL <TOKEN>" message. The receiver sends back "<TOKEN>" and knows that only the data after the "<TOKEN>" message will be processed. The "<TOKEN>" can be a random sequence.
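A rough sketch of that exchange over the question's UDP socket (the token format and all names here are invented for illustration):
// Side that hit the error: announce a random token to the peer.
char token[33];
for (int i = 0; i < 32; i++)
    token[i] = 'A' + std::rand() % 26;
token[32] = '\0';

char announce[64];
std::sprintf(announce, "DISCARDING UNTIL %s", token);
sendto(Socket, announce, (int)std::strlen(announce), 0,
       (SOCKADDR*)&PeerAddr, sizeof(PeerAddr));

// Then discard every datagram until the peer echoes the token back.
char RecBuf[MAX_PKT_SIZE];
for (;;)
{
    int n = recvfrom(Socket, RecBuf, sizeof(RecBuf), 0, NULL, NULL);
    if (n == (int)std::strlen(token) && std::memcmp(RecBuf, token, n) == 0)
        break; // resynchronized: datagrams after this one are processed normally
}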