Let me give a general overview first. I'm receiving data through three ports; for each I have a socket, a completion port, and a worker thread. I call WSARecv, and the worker thread procedure calls GetQueuedCompletionStatus followed by my parsing routine, ReadMsgs. It sometimes happens that the buffer is unchanged when ReadMsgs is called and is then updated while ReadMsgs is processing it. The number of bytes transferred, as returned by GetQueuedCompletionStatus, is correct for the update when it occurs.
Does anyone know why this might occur and what I am doing wrong? Let me show you the code that seems most relevant; if you need to see more, please be specific. My base socket class looks like this (I have omitted details that seem irrelevant to me, as well as all error checking):
class Socket_Base : public OVERLAPPED
{
public:
Socket_Base()
{
// Initialize base OVERLAPPED object
Internal = 0;
InternalHigh = 0;
Offset = 0;
OffsetHigh = 0;
hEvent = WSACreateEvent();
// Initialize addr structure
ZeroMemory( &addr, sizeof(struct sockaddr_in));
// Create the completion port
hCP = CreateIoCompletionPort( INVALID_HANDLE_VALUE, NULL, 0, 1);
// Create the worker thread and bind it to the callback function and the completion port
hThread = (HANDLE)_beginthreadex( NULL, 0, Callback_Socket, hCP, 0, NULL);
// Create the socket
Sock = WSASocket( AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED);
// Bind the socket to the completion port
CreateIoCompletionPort( (HANDLE)Sock, hCP, 0, 0);
}
void Connect() { WSAConnect( Sock, (SOCKADDR*)(&addr), sizeof(addr), NULL, NULL, NULL, NULL);}
void StartRecv()
{
DWORD Flags = 0;
DWORD numBytes = 0;
if (WSARecv( Sock, &wsaBuf, 1, &numBytes, &Flags, (OVERLAPPED*)this, NULL) == 0) ReadMsgs( numBytes);
}
int ReadMsgs( int NumBytes);
protected:
virtual ~Socket_Base() {}
virtual void ProcessMsg() = 0;
struct sockaddr_in addr;
SOCKET Sock;
HANDLE hCP;
WSABUF wsaBuf;
HANDLE hThread;
char *readBuf;
int bufsize;
};
Each port has its own derived socket class, distinguished by port number and by the virtual ProcessMsg function (which is called by ReadMsgs as each message is parsed). Here is one such class:
class Socket_Admin : public Socket_Base
{
public:
static const int bufcap = 1024;
Socket_Admin() : Socket_Base()
{
// Buffer
readBuf = new char[ bufcap];
// The socket
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = inet_addr("127.0.0.1");
addr.sin_port = htons(9300);
wsaBuf.buf = readBuf;
wsaBuf.len = bufcap;
}
~Socket_Admin();
void ProcessMsg();
};
The worker thread procedure is:
unsigned int __stdcall Callback_Socket( void *lpParameter)
{
HANDLE hCP = (HANDLE)lpParameter;
DWORD NumBytes = 0;
ULONG_PTR CompletionKey;
OVERLAPPED *pOverlapped;
while (GetQueuedCompletionStatus( hCP, &NumBytes, &CompletionKey, &pOverlapped, INFINITE) && CompletionKey == 0)
{
Socket_Base *pTCP = (Socket_Base*)pOverlapped;
if (NumBytes > 0) pTCP->ReadMsgs( NumBytes);
NumBytes = 0;
}
return 0;
}
There is one other thing I should explain. ReadMsgs does its parsing in place. The server delimits messages with a final line feed character and it delimits fields within a message with commas. ReadMsgs replaces commas and line feeds with null characters as it finds them and notes where each field begins in a separate array of pointers to locations in the buffer. Now when ReadMsgs gets to the end of the region of the buffer that was last filled it sometimes finds an incomplete message. This is copied to the beginning of the buffer, expecting the next read to complete the message, and wsaBuf is modified accordingly. Thus the end of ReadMsgs looks like this:
wsaBuf.buf = pchar;
wsaBuf.len = remsize;
StartRecv();
where pchar points to the character just beyond the partial message and remsize is the size of the remaining buffer.
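Spelled out, the carry-over step might look roughly like this (a sketch; msgStart and dataEnd are hypothetical names for the start of the incomplete message and the end of the received data):
// Copy the partial message to the front of the buffer and aim the
// next read just past it.
int partial = (int)(dataEnd - msgStart);
memmove( readBuf, msgStart, partial);
char *pchar = readBuf + partial;
int remsize = bufsize - partial;
wsaBuf.buf = pchar;
wsaBuf.len = remsize;
StartRecv();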
I know what messages have been sent to my application from detailed server logs. The replacement of delimiters with null characters also makes it easy to see what part of the buffer has been processed. By saving the buffer to a file and examining it, I can tell that it was updated after ReadMsgs was called. Also, by log messages not shown in the code above, I know that in these cases ReadMsgs was called by the worker thread. It doesn't happen every time, but it does happen.
If anyone can tell me what my mistake is, I would be grateful.
I may have an answer. I have calls to ReadMsgs in both StartRecv and in Callback_Socket. In StartRecv, I call ReadMsgs when WSARecv returns 0, indicating that the transfer was completed by WSARecv itself. I was thinking that GetQueuedCompletionStatus would not get involved if the transfer was completed in WSARecv. However, if GetQueuedCompletionStatus did return in response to the completed read, then I would have a duplicate call to ReadMsgs, which would explain the logged data I collected just as well as my previous hypothesis.
I've removed the call to ReadMsgs in StartRecv. The code is working properly now.
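For reference, the corrected StartRecv just posts the read and leaves all parsing to the worker thread. (By default, Windows queues a completion packet to the port even when WSARecv completes immediately with a return value of 0, which is why the synchronous-success path must not call ReadMsgs itself.) A sketch:
void StartRecv()
{
    DWORD Flags = 0;
    DWORD numBytes = 0;
    // A completion packet is queued even on immediate success, so ReadMsgs
    // is called exactly once, from the worker thread.
    WSARecv( Sock, &wsaBuf, 1, &numBytes, &Flags, (OVERLAPPED*)this, NULL);
}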
Thanks to the gentleman who gave me the negative vote. That suggested to me that the behavior I described had not been observed before and so was extremely unlikely, which sent me thinking in a new direction. Sometimes just a grunt from someone more experienced can speak volumes.
Related
I'm trying to send data to the connected client, even when the client did not send me a message first.
This is my current code:
while (true) {
// open a new socket to transmit data per connection
int sock;
if ((sock = accept(listen_sock, (sockaddr *) &client_address, &client_address_len)) < 0) {
logger.log(TYPE::ERROR, "server::could not open a socket to accept data");
exit(0);
}
int n = 0, total_received_bytes = 0, max_len = 4096;
std::vector<char> buffer(max_len);
logger.log(TYPE::SUCCESS,
"server::client connected with ip address: " + std::string(inet_ntoa(client_address.sin_addr)));
// keep running as long as the client keeps the connection open
while (true) {
n = recv(sock, &buffer[0], buffer.size(), 0);
if (n > 0) {
total_received_bytes += n;
std::string str(buffer.begin(), buffer.end());
KV key_value = kv_from(vector_from(str));
messaging.set_command(key_value);
}
std::string message = "hmc::" + messaging.get_value("hmc") + "---" + "sonar::" + messaging.get_value("sonar") + "\n";
send(sock, message.c_str(), message.length(), 0);
}
logger.log(TYPE::INFO, "server::connection closed");
close(sock);
}
I thought that by moving n = recv(sock, &buffer[0], buffer.size(), 0); out of the while condition it would send the data indefinitely, but that is not what happened.
Thanks in advance.
Solution
Adding MSG_DONTWAIT to the recv function enabled non-blocking operations which I was looking for.
First I will explain why it does not work; then I will propose solutions. Basically you will find the answer in the man7.org > Linux > man-pages, and for recv specifically here.
When the function recv is called, it will not return until data is available and can be read. This behavior is called "blocking": the current execution thread is blocked until data has been read.
So, calling the function
n = recv(sock, &buffer[0], buffer.size(), 0);
as you did, causes the trouble. You also need to check the return code: 0 means the connection was closed, -1 means an error, and you must check errno for further information.
You can modify the socket to work in non-blocking mode with the function fcntl and the O_NONBLOCK flag, for the lifetime of the socket. You can also use the flag MSG_DONTWAIT as the 4th parameter (flags) to unblock the function on a per-call basis.
In both cases, if no data is available, the function returns -1 and you need to check errno for EAGAIN or EWOULDBLOCK. A return value of 0 still indicates that the connection has been closed.
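A minimal sketch of both variants (using sock and buffer from your code; error handling trimmed):
#include <sys/socket.h>
#include <fcntl.h>
#include <errno.h>
// Option 1: non-blocking for the lifetime of the socket.
int flags = fcntl(sock, F_GETFL, 0);
fcntl(sock, F_SETFL, flags | O_NONBLOCK);
// Option 2: non-blocking for this call only.
ssize_t n = recv(sock, &buffer[0], buffer.size(), MSG_DONTWAIT);
if (n > 0) {
    // process n received bytes
} else if (n == 0) {
    // connection closed by the peer
} else if (errno == EAGAIN || errno == EWOULDBLOCK) {
    // no data right now; fall through and send() anyway
} else {
    // real error; inspect errno
}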
But from an architectural point of view, I would not recommend this approach. You could use multiple threads for receiving and sending data or, on Linux, one of select, poll, or similar functions. There is even a common design pattern for this, called "Reactor"; related patterns such as "Acceptor/Connector" and "Proactor"/"ACT" also exist. If you plan to write a more robust application, you may want to consider those.
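For illustration, a select()-based version of the receive step might look like this (a sketch; a full reactor generalizes this to many sockets):
#include <sys/select.h>
// Wait up to one second for data; send afterwards either way.
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sock, &readfds);
struct timeval tv;
tv.tv_sec = 1;
tv.tv_usec = 0;
if (select(sock + 1, &readfds, NULL, NULL, &tv) > 0 && FD_ISSET(sock, &readfds)) {
    n = recv(sock, &buffer[0], buffer.size(), 0); // will not block now
}
// fall through to send() whether or not data arrived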
You will find an implementation of Acceptor, Connector, Reactor, Proactor, ACT here
Hope this helps
I have been using the following function to receive XML files for a while, but it has been going wrong for some time now, and I think the problem is on the customer's network. I'm not sure; it's just a guess.
It sometimes happens when they try to send me XML files bigger than 13KB: the received buffer contains trash like this:
...
<Identifiers>
<Identifier>
<PID>E3744</PID>
</Identifier>
<Identifier IDType="SHC">
<PID>10021020</PID>
</Identifier>
<Identifier><*X| Å Å Ÿòc PV“R¢ E ·Â÷# #€ˆ
þõ
øæ=Ì×KåÅôdËÞ¦P s÷j
<PID>1002102-0</PID>
</Identifier>
<Identifier>
<PID>1002102</PID>
</Identifier>
</Identifiers>
...
Here is the function:
bool ReceiveBuffer(HWND hDlg, const SOCKET& socket, string& sBuffer)
{
WSAAsyncSelect(socket, hDlg, WM_WINSOCK, FD_CLOSE);
int iBufSize = 10000000; //10MB
int iBufVarSize = sizeof(iBufSize);
if (setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, iBufVarSize) == SOCKET_ERROR)
if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
WriteLog("Unable to GET buffer receiving size");
char* buf = (char*)MALLOCZ(iBufSize);
if (!buf)
{
WriteLog("Unable to allocate memory");
return false;
}
int iCharsRead = 0;
do
{
memset(buf, 0, iBufSize);
iCharsRead = recv(socket, buf, iBufSize, 0);
if (iCharsRead > 0)
sBuffer.append(buf, iCharsRead);
}
while (iCharsRead > 0);
FREE(buf);
buf = NULL;
return true;
}
ReceiveBuffer() should not be calling WSAAsyncSelect() or setting SO_RCVBUF. That is the responsibility of whatever code initially creates the SOCKET.
But more importantly, WSAAsyncSelect() puts the socket into non-blocking mode, per the documentation:
The WSAAsyncSelect function automatically sets socket s to nonblocking mode, regardless of the value of lEvent.
However, your reading loop is not accounting for possible WSAEWOULDBLOCK errors from recv(), which it must handle so that it can call recv() again to keep reading.
ReceiveBuffer() is also assuming that if setsockopt() succeeds then the actual buffer size is really the requested size, which is not guaranteed. So you need to call getsockopt() regardless of whether setsockopt() succeeds or fails, per the documentation:
SO_RCVBUF and SO_SNDBUF
When a Windows Sockets implementation supports the SO_RCVBUF and SO_SNDBUF options, an application can request different buffer sizes (larger or smaller). The call to setsockopt can succeed even when the implementation did not provide the whole amount requested. An application must call getsockopt with the same option to check the buffer size actually provided.
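In other words, if the buffer size is going to be set at all, it should be a set-then-verify sequence, along these lines (a sketch):
int iBufSize = 10000000; // request 10MB
setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, sizeof(iBufSize));
// Query the size actually granted, regardless of setsockopt()'s result.
int iActual = 0;
int iLen = sizeof(iActual);
if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iActual, &iLen) == 0)
{
    // iActual now holds the buffer size the implementation really provided
}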
But really, setting SO_RCVBUF on every call to ReceiveBuffer() is not necessary in the first place. recv() returns whatever data is currently available at that moment, up to the requested buffer size. It is very unlikely that it will return anywhere close to 10MB of data on any given read. So you are just wasting a lot of memory for no real benefit. It is one thing to set the socket's internal buffer to 10MB if you are on a fast network. It is another thing to allocate a memory buffer of 10MB to receive data from each recv() call. You should use a much smaller memory buffer. 1K is a common size to use.
But beyond that, regardless of the buffer size you use, ReceiveBuffer() is reading arbitrary bytes in an endless loop until the socket is disconnected or errors (and not accounting for non-blocking errors). When the socket does eventually disconnect/error, ReceiveBuffer() is returning true instead of false, so the caller has no idea that something went wrong, or that sBuffer may be incomplete.
Also, in case the caller calls ReceiveBuffer() multiple times with the same variable for the sBuffer parameter, you should call sBuffer.clear() before starting the reading loop to make sure you are not appending new data to the end of stale data.
Now, all of the above is just technical issues with your code logic. But there is also a semantic element as well. XML has a finite length to it, but your current code has no way of knowing what that length actually is. It is the sender's responsibility to tell the receiver when the XML has stopped being sent. That could be by sending the XML's length before sending the XML itself, so the receiver knows how many bytes to expect. Or that could be by sending a unique delimiter, like a null terminator, at the end of the XML, so the receiver can stop reading when it sees the delimiter. Or that could be by gracefully closing the connection at the end of the XML (which is a bad idea, because then the receiver can't differentiate between end-of-data and data loss). But it has to do something.
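For example, with a length prefix the receiving side might look like this (a sketch, assuming the sender transmits a 4-byte length in network byte order before the XML; recv_all is a hypothetical helper that loops until exactly len bytes have arrived):
// Hypothetical helper: read exactly len bytes, looping over short reads.
bool recv_all(SOCKET s, char* dest, int len)
{
    while (len > 0)
    {
        int n = recv(s, dest, len, 0);
        if (n <= 0) return false; // disconnect or error
        dest += n;
        len -= n;
    }
    return true;
}
bool ReceiveXml(SOCKET s, string& sXml)
{
    u_long netLen;
    if (!recv_all(s, (char*)&netLen, sizeof(netLen))) return false;
    u_long len = ntohl(netLen);
    sXml.resize(len);
    return len == 0 || recv_all(s, &sXml[0], (int)len);
}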
Now, with all of that said, try something more like this instead (I'm assuming a graceful disconnect is the end-of-data indicator, since that is what your original code is doing - you need to seriously consider a different protocol design!):
bool ReceiveBuffer(SOCKET socket, string& sBuffer)
{
sBuffer.clear();
/*
int iBufSize = 1024 * 1024 * 10; //10MB
int iBufVarSize = sizeof(iBufSize);
setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, iBufVarSize);
if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&iBufSize, &iBufVarSize) == SOCKET_ERROR)
WriteLog("Unable to GET buffer receiving size");
*/
char* buf = (char*) malloc(1024);
if (!buf)
{
WriteLog("Unable to allocate memory");
return false;
}
int iCharsRead;
bool bRet = true;
do
{
iCharsRead = recv(socket, buf, 1024, 0);
if (iCharsRead > 0)
{
sBuffer.append(buf, iCharsRead);
}
else if (iCharsRead == 0)
{
// socket disconnected gracefully
break;
}
else
{
if (WSAGetLastError() != WSAEWOULDBLOCK)
{
// socket error!
WriteLog("Unable to read from socket");
bRet = false;
break;
}
// socket is non-blocking and there is no data available
// at this moment. Call recv() again...
// optional: call select() to wait for new data to arrive
// before calling recv() again. For instance, this will
// allow you to fail the function if no new data arrived
// within a timeout period...
//
/*
fd_set fd;
FD_ZERO(&fd);
FD_SET(socket, &fd);
timeval tv;
tv.tv_sec = 30;
tv.tv_usec = 0;
int ret = select(0, &fd, NULL, NULL, &tv);
if (ret <= 0)
{
if (ret == 0)
{
// timeout!
WriteLog("Timeout waiting for data from socket");
}
else
{
// socket error!
WriteLog("Unable to wait for data from socket");
}
bRet = false;
break;
}
*/
}
}
while (true);
free(buf);
return bRet;
}
I'm trying to write a web program using poll(). I create a UDP socket in a struct pollfd array and then poll it. However, poll() returns 0 every time, no matter how many times I send it a message. When I just call recvfrom, it works fine. Here's my code:
Creating and binding the socket:
struct pollfd fds[2];
// ...
fds[0].fd = socket(AF_INET, SOCK_DGRAM, 0);
if (fds[0].fd < 0)
syserr("socket");
listen_address = { 0 };
listen_address.sin_family = AF_INET;
listen_address.sin_addr.s_addr = htonl(INADDR_ANY);
listen_address.sin_port = htons(m_port);
if (bind(
fds[0].fd,
(struct sockaddr*) &listen_address,
(socklen_t) sizeof(listen_address)
) < 0)
syserr("bind");
Now this works:
for (;;) {
memset(buffer, 0, BUF_SIZE + 1);
rval = recvfrom(
fds[0].fd,
buffer,
BUF_SIZE,
0,
(struct sockaddr *) &respond_address,
&rcva_len
);
std::cout << buffer << std::endl;
}
But this doesn't:
finished = false;
do {
fds[0].revents = fds[1].revents = 0;
ret = poll(fds, 2, 0);
if (ret < 0) {
perror("poll");
} else if (ret == 0) {
// the loop always enters here
} else {
// the loop never enters here,
// even though I send messages to the socket
}
} while (!finished);
For testing, I use a command like this
echo -n "foo" | nc -4u -w1 <host> <udp port>
I would appreciate some insight.
There are two critical problems with the way that you use poll():
You must set the events field, for each file descriptor, to indicate which events you are interested in, such as POLLIN and/or POLLOUT. Your code clears revents but never sets events to anything.
The third parameter to poll() is the timeout setting, which you are setting to 0. This means "check the file descriptors for whether any of the requested events have occurred, but always return immediately in any case, and if no file descriptors have the requested events then return 0".
Which is the behavior you are seeing.
Instead of telling you what you need to set the third parameter to in order to wait until any of the file descriptors' events have occurred, I'll just refer you to poll()'s manual page. Reading manual pages is good. They explain everything.
In conclusion:
Initialize each file descriptor's events field properly.
Pass the correct parameters to poll(), as described in its manual page. If you want poll() to wait indefinitely until any of the passed file descriptors' requested events have occurred, there's a specific value to do this. Look it up.
After poll() returns, check each file descriptor's revents field to determine its status. A corrected sketch follows.
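Putting the three points together (a sketch; per the manual page, a negative timeout means wait indefinitely):
fds[0].events = POLLIN; // (1) say what you want to wait for, in events
fds[1].events = POLLIN; // assuming the second descriptor is also read-polled
do {
    int ret = poll(fds, 2, -1); // (2) block until an event occurs
    if (ret < 0) {
        perror("poll");
    } else if (fds[0].revents & POLLIN) { // (3) results come back in revents
        // recvfrom() on fds[0].fd will not block now
    }
} while (!finished);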
I am trying to check whether a client has sent some new data. This function actually tells me that I always have new data:
bool ClientHandle::hasData()
{
fd_set temp;
FD_ZERO(&temp);
FD_SET(m_sock, &temp);
//setup the timeout to 1000ms
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 1000;
//temp.fd_count possible?
if (select(m_sock+1, &temp, nullptr, nullptr, &tv) == -1)
{
return false;
}
if (FD_ISSET(m_sock, &temp))
return true;
return false;
}
I am connecting with a Java client and send a "connection" message, which I read inside the ctor:
ClientHandle::ClientHandle(SOCKET s) : m_sock(s)
{
while (!hasData())
{
}
char buffer[5];
recv(m_sock, buffer, 4, NULL);
auto i = atoi(buffer);
LOG_INFO << "Byte to receive: " << i;
auto dataBuffer = new char[i + 1]{'\0'};
recv(m_sock, dataBuffer, i, NULL);
LOG_INFO << dataBuffer;
//clean up
delete[] dataBuffer;
}
This seems to work right. After that I keep checking whether there is new data, and it is always reported as true even if the Java client does not send any new data.
Here is the Java client. Don't judge me, it's just for checking the connection. It won't stay like this, sending the size information as char[].
public static void main(String[] args) throws UnknownHostException,
IOException {
Socket soc = null;
soc = new Socket("localhost", 6060);
PrintWriter out = new PrintWriter(soc.getOutputStream(), true);
BufferedReader in = new BufferedReader(new InputStreamReader(
soc.getInputStream()));
if (soc != null)
System.out.println("Connected");
out.write("10\0");
out.flush();
out.write("newCon\0");
out.flush();
out.close();
in.close();
soc.close();
}
So what is wrong with the hasData FD_ISSET method?
So what is wrong with the hasData FD_ISSET method?
Actually, nothing. The problem is with your use of recv().
recv() will return 0 if the client has disconnected, and it will keep returning 0 until you close the socket (server-side). You can find this information in the manual.
Even when recv() would return 0, the socket still "triggers" select().
Knowing that, it's easy to find the problem: you never check the return value of recv(), so you're unable to say whether the client is still connected or not. Yet you still add the socket with FD_SET!
#include <sys/types.h> // for ssize_t
#include <stdio.h> // for perror()
ClientHandle::ClientHandle(SOCKET s) : m_sock(s)
{
while (!hasData())
{
}
char buffer[5];
ssize_t ret = recv(m_sock, buffer, 4, NULL);
if (ret == -1) // error
{
perror("recv");
return ;
}
else if (ret == 0) // m_sock disconnects
{
close(m_sock);
// DO NOT FD_SET m_sock since the socket is now closed
}
else
{
auto i = atoi(buffer);
LOG_INFO << "Byte to receive: " << i;
auto dataBuffer = new char[i + 1]{'\0'};
recv(m_sock, dataBuffer, i, NULL);
LOG_INFO << dataBuffer;
//clean up
delete[] dataBuffer;
}
}
From Stevens' book UNIX Network Programming:
A socket is ready for reading if any of the following four conditions is true:
The number of bytes of data in the socket receive buffer is greater than or equal to the current size of the low-water mark for the socket receive buffer. A read operation on the socket will not block and will return a value greater than 0 (i.e., the data that is ready to be read). We can set this low-water mark using the SO_RCVLOWAT socket option. It defaults to 1 for TCP and UDP sockets.
The read half of the connection is closed (i.e., a TCP connection that has received a FIN). A read operation on the socket will not block and will return 0 (i.e., EOF).
The socket is a listening socket and the number of completed connections is nonzero. An accept on the listening socket will normally not block, although we will describe a timing condition in Section 16.6 under which the accept can block.
A socket error is pending. A read operation on the socket will not block and will return an error (–1) with errno set to the specific error condition. These pending errors can also be fetched and cleared by calling getsockopt and specifying the SO_ERROR socket option.
FD_ISSET is going to return true in all of the cases above. After your Java client closes the connection, the socket will be ready for reading on the server.
In ClientHandle::ClientHandle you are not checking the return value of recv, nor whether any data was actually returned.
Is it blocking in the second call to recv?
You don't check the return value of recv and you don't handle receiving fewer bytes than you asked for. So what do you expect to happen when the connection is closed?
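A sketch of what handling short reads might look like here, reusing the names from the question (m_sock, dataBuffer, i):
// Loop until exactly i bytes have arrived, instead of assuming a single
// recv() call returns the whole message.
int got = 0;
while (got < i)
{
    int n = recv(m_sock, dataBuffer + got, i - got, 0);
    if (n == 0) break; // peer closed the connection
    if (n < 0) break; // error: check errno / WSAGetLastError()
    got += n;
}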
I am implementing a Windows-based web server handling multiple specific HTTP requests from clients using WinSock2. I have a class to start and stop my server. It looks something like this:
class CMyServer
{
// Not related to this question methods and variables here
// ...
public:
SOCKET m_serverSocket;
TLM_ERROR Start();
TLM_ERROR Stop();
static DWORD WINAPI ProcessRequest(LPVOID pInstance);
static DWORD WINAPI Run(LPVOID pInstance);
};
where TLM_ERROR is a type definition for my server's errors enumeration.
The TLM_ERROR CMyServer::Start() method starts the server, creating a socket that listens on the configured port and a separate thread, DWORD CMyServer::Run(LPVOID), to accept incoming connections, as described here:
// Creating a socket
m_serverSocket = ::socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (m_serverSocket == INVALID_SOCKET)
return TLM_ERROR_CANNOT_CREATE_SOCKET;
// Socket address
sockaddr_in serverSocketAddr;
serverSocketAddr.sin_family = AF_INET; // address format is host and port number
serverSocketAddr.sin_addr.S_un.S_addr = inet_addr(m_strHost.c_str()); // specifying host
serverSocketAddr.sin_port = htons(m_nPort); // specifying port number
// Binding the socket
if (::bind(m_serverSocket, (SOCKADDR*)&serverSocketAddr, sizeof(serverSocketAddr)) == SOCKET_ERROR)
{
// Error during binding the socket
::closesocket(m_serverSocket);
m_serverSocket = NULL;
return TLM_ERROR_CANNOT_BIND_SOCKET;
}
// Starting to listen to requests
int nBacklog = 20;
if (::listen(m_serverSocket, nBacklog) == SOCKET_ERROR)
{
// Error listening on socket
::closesocket(m_serverSocket);
m_serverSocket = NULL;
return TLM_ERROR_CANNOT_LISTEN;
}
// Further initialization here...
// ...
// Creating server's main thread
m_hManagerThread = ::CreateThread(NULL, 0, CMyServer::Run, (LPVOID)this, NULL, NULL);
I use ::accept(...) to wait for incoming client connections in CMyServer::Run(LPVOID), and after a new connection has been accepted I create a separate thread, CMyServer::ProcessRequest(LPVOID), to receive data from the client and send a response, passing the socket returned by ::accept(...) as part of the thread function's argument:
DWORD CMyServer::Run(LPVOID pInstance)
{
CMyServer* pTLM = (CMyServer*)pInstance;
// Initialization here...
// ...
bool bContinueRun = true;
while (bContinueRun)
{
// Waiting for a client to connect
SOCKADDR clientSocketAddr; // structure to store socket's address
int nClientSocketSize = sizeof(clientSocketAddr); // defining structure's length
ZeroMemory(&clientSocketAddr, nClientSocketSize); // cleaning the structure
SOCKET connectionSocket = ::accept(pTLM->m_serverSocket, &clientSocketAddr, &nClientSocketSize); // waiting for client's request
if (connectionSocket != INVALID_SOCKET)
{
if (bContinueRun)
{
// Running a separate thread to handle this request
REQUEST_CONTEXT rc;
rc.pTLM = pTLM;
rc.connectionSocket = connectionSocket;
HANDLE hRequestThread = ::CreateThread(NULL, 0, CMyServer::ProcessRequest, (LPVOID)&rc, CREATE_SUSPENDED, NULL);
// Storing created thread's handle to be able to close it later
// ...
// Starting suspended thread
::ResumeThread(hRequestThread);
}
}
// Checking whether thread is signaled to stop...
// ...
}
// Waiting for all child threads to finish...
// ...
}
Testing this implementation manually gives me the desired results. But when I send multiple requests generated by JMeter, I can see that some of them are not handled properly by DWORD CMyServer::ProcessRequest(LPVOID). Looking at the log file created by ProcessRequest, I see WinSock error code 10038 (an operation was attempted on something that is not a socket), error code 10053 (software caused connection abort), or even error code 10058 (cannot send after socket shutdown). The 10038 error occurs more often than the others.
It looks like a socket was closed somehow, but I close it only after ::recv and ::send have been called in ProcessRequest. I also thought it could be an issue related to using ::CreateThread instead of ::_beginthreadex, but as far as I can tell that could only lead to memory leaks, and I don't have any memory leaks detected by the method described here, so I doubt that is the reason. Moreover, ::CreateThread returns a handle that can be used in ::WaitForMultipleObjects to wait for threads to finish, and I need that to stop my server properly.
Could these errors occur because a client doesn't want to wait for the response anymore? I am out of ideas, and I will be grateful if you can tell me what I am missing or doing/understanding wrong. By the way, both my server and JMeter run on localhost.
Finally, here is my implementation of the ProcessRequest method:
DWORD CMyServer::ProcessRequest(LPVOID pInstance)
{
REQUEST_CONTEXT* pRC = (REQUEST_CONTEXT*)pInstance;
CMyServer* pTLM = pRC->pTLM;
SOCKET connectionSocket = pRC->connectionSocket;
// Retrieving client's request
const DWORD dwBuffLen = 1 << 15;
char buffer[dwBuffLen];
ZeroMemory(buffer, sizeof(buffer));
if (::recv(connectionSocket, buffer, sizeof(buffer), NULL) == SOCKET_ERROR)
{
stringstream ss;
ss << "Unable to receive client's request with the following error code " << ::WSAGetLastError() << ".";
pTLM->Log(ss.str(), TLM_LOG_TYPE_ERROR);
::SetEvent(pTLM->m_hRequestCompleteEvent);
return 0;
}
string str = "HTTP/1.1 200 OK\nContent-Type: text/plain\n\nHello World!";
if (::send(connectionSocket, str.c_str(), str.length(), 0) == SOCKET_ERROR)
{
stringstream ss;
ss << "Unable to send response to client with the following error code " << ::WSAGetLastError() << ".";
pTLM->Log(ss.str(), TLM_LOG_TYPE_ERROR);
::SetEvent(pTLM->m_hRequestCompleteEvent);
return 0;
}
::closesocket(connectionSocket);
connectionSocket = NULL;
pTLM->Log(string("Request has been successfully handled."));
::SetEvent(pTLM->m_hRequestCompleteEvent);
return 0;
}
You pass a pointer to the REQUEST_CONTEXT to every newly created thread. However, this is an automatic variable, allocated on the stack, so its lifetime is limited to its scope, and that scope ends right after you call ResumeThread.
Practically, what happens is that the same memory for the REQUEST_CONTEXT is reused in every loop iteration. Now imagine you accept 2 connections in a short time interval. It's likely that by the time the first thread starts executing, its REQUEST_CONTEXT will already have been overwritten, so you actually have 2 threads serving the same socket.
The easiest fix is to allocate the REQUEST_CONTEXT dynamically: allocate it upon each new accept and pass its pointer to the new thread. Then, during thread termination, don't forget to delete it.
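A sketch of that fix, using the names from the question:
// In CMyServer::Run, after accept(): allocate the context on the heap so
// it survives past this loop iteration.
REQUEST_CONTEXT* pRC = new REQUEST_CONTEXT;
pRC->pTLM = pTLM;
pRC->connectionSocket = connectionSocket;
HANDLE hRequestThread = ::CreateThread(NULL, 0, CMyServer::ProcessRequest, (LPVOID)pRC, CREATE_SUSPENDED, NULL);
::ResumeThread(hRequestThread);
// At the top of CMyServer::ProcessRequest: copy the fields out, then free.
REQUEST_CONTEXT* pRC = (REQUEST_CONTEXT*)pInstance;
CMyServer* pTLM = pRC->pTLM;
SOCKET connectionSocket = pRC->connectionSocket;
delete pRC; // the thread owns the context now; release it once copied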
When creating the thread to handle requests, you give the address of a local variable as the argument to the thread. The data behind this pointer is no longer valid as soon as the local variable goes out of scope. Create it dynamically with new and delete it in the thread.