Passing data to another thread in a C++ winsock app - c++

So I have this winsock application (a server, able to accept multiple clients), where in the main thread I setup the socket and create another thread where I listen for clients (listen_for_clients function).
I also constantly receive data from a device in the main thread, which I then concatenate onto char arrays (buffers) held in Client objects (the BroadcastSample function). Currently I create a thread for each connected client (the ProcessClient function), where I initialize a Client object and push it into a global vector of clients; I then send data to this client through the socket whenever the buffer in the corresponding Client object exceeds 4000 characters.
Is there a way I can send data from the main thread to the separate client threads so I don't have to use structs/classes (and also to send a green light when I want the already accumulated data to be sent)? And if I'm going to keep a global container of objects, what is a good way to remove a disconnected client's object from it without crashing the program because another thread is using the same container?
struct Client {
    int buffer_len;
    char current_buffer[5000];
    SOCKET s;
};

std::vector<Client*> clientBuffers;
DWORD WINAPI listen_for_clients(LPVOID Param)
{
    SOCKET client;
    sockaddr_in from;
    int fromlen = sizeof(from);
    char buf[100];
    while(true)
    {
        client = accept(ListenSocket, (struct sockaddr*)&from, &fromlen);
        if(client != INVALID_SOCKET)
        {
            printf("Client connected\n");
            unsigned dwThreadId;
            HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, &ProcessClient, (void*)client, 0, &dwThreadId);
        }
    }
    closesocket(ListenSocket);
    WSACleanup();
    ExitThread(0);
}
unsigned __stdcall ProcessClient(void *data)
{
    SOCKET ClientSocket = (SOCKET)data;
    Client * a = new Client();
    a->current_buffer[0] = '\0';
    a->buffer_len = 0;
    a->s = ClientSocket;
    clientBuffers.push_back(a);
    char szBuffer[255];
    while(true)
    {
        if(a->buffer_len > 4000)
        {
            send(ClientSocket, a->current_buffer, sizeof(a->current_buffer), 0);
            memset(a->current_buffer, 0, 5000);
            a->buffer_len = 0;
            a->current_buffer[0] = '\0';
        }
    }
    exit(1);
}
//function below is called only in main thread, about every 100ms
void BroadcastSample(Sample s)
{
    for(std::vector<Client*>::iterator it = clientBuffers.begin(); it != clientBuffers.end(); it++)
    {
        strcat((*it)->current_buffer, s.to_string);
        (*it)->buffer_len += strlen(s.to_string);
    }
}

This link has some Microsoft documentation on MS-style mutexes (muticies?).
This other link has some general info on mutexes.
Mutexes are the general mechanism for protecting data which is accessed by multiple threads. There are data structures with built-in thread safety, but in my experience, they usually have caveats that you'll eventually miss. That's just my two cents.
Also, for the record, you shouldn't use strcat, but rather strncat. Also, if one of your client servicing threads accesses one of those buffers after strncat overwrites the old '\0' but before it appends the new one, you'll have a buffer overread (read past end of allocated buffer).
Mutexes will also solve your current busy-waiting problem. I'm not currently near a windows compiler, or I'd try to help more.
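Roughly, the idea would look something like this (an untested sketch, assuming a C++11 compiler plus the Client struct, clientBuffers vector and Winsock headers from the question; the names clientsMutex and bufferReady are made up, and here each client thread is handed its Client* instead of the raw SOCKET):
#include <condition_variable>
#include <cstring>
#include <mutex>

std::mutex clientsMutex;               // guards clientBuffers and every Client's buffer
std::condition_variable bufferReady;   // signalled when a buffer passes the 4000-char mark

// Main thread: append the sample under the lock, then wake the client threads.
void BroadcastSample(Sample s)
{
    std::lock_guard<std::mutex> lock(clientsMutex);
    for (Client* c : clientBuffers)
    {
        strncat(c->current_buffer, s.to_string,
                sizeof(c->current_buffer) - strlen(c->current_buffer) - 1);
        c->buffer_len = (int)strlen(c->current_buffer);
    }
    bufferReady.notify_all();
}

// Per-client thread: waits on the condition variable instead of busy-looping.
unsigned __stdcall ProcessClient(void* data)
{
    Client* a = static_cast<Client*>(data);
    while (true)
    {
        std::unique_lock<std::mutex> lock(clientsMutex);
        bufferReady.wait(lock, [&]{ return a->buffer_len > 4000; });
        send(a->s, a->current_buffer, a->buffer_len, 0);
        a->buffer_len = 0;
        a->current_buffer[0] = '\0';
    }
    return 0;
}
Removing a disconnected client from clientBuffers would then also be done while holding clientsMutex, so no other thread can be iterating the vector at that moment.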


Threads and sockets, threads and objects more generally

Thanks for your time.
What am I trying to accomplish?
I'm trying to utilise threads to speed up my program. After some profiling I found that a large portion of my program time (a graphics application) is utilised checking on the status of my socket. Obviously not ideal when trying to trim the fat and get down to <16ms per cycle. I'm currently using the select function to check for new data and read if data is available.
What's the problem?
I can't get my head around threads and objects. I had a play with some textbook examples, running and joining threads on local functions, which worked fine, but trying to move this into my own code has proved beyond me.
What have I tried?
I've tried looking to smart pointers to allocate my UDPSocket objects on the heap, with the hope that heap memory is accessible by all threads. I've tried good old new & delete for the same reason. I've tried wrapping my UDPSockets inside another object and getting the whole lot to launch on another thread.
In summary, it's absolutely certain that I have a big hole in my understanding of threads. I would be grateful for a solution to this specific problem, but also for links to any good articles, tutorials, videos etc. that might help to further my understanding. Perhaps I simply need to re-examine my whole UDPSocket class? Your advice is most welcome.
I'll post my example below, please note I've stripped out all error checking etc for readability.
#pragma once
#define WIN32_MEAN_AND_LEAN
#include <WS2tcpip.h>
#include <iostream>
#include <memory>
#include <thread>
#pragma comment(lib, "ws2_32.lib")

class UDPServer
{
public:
    UDPServer(unsigned short port_in)
        :
        port(port_in)
    {
        // Startup Winsock
        WSADATA data;
        WORD version = MAKEWORD(2, 2);
        int wsOk = WSAStartup(version, &data);
        //Bind socket to port, Any Address
        s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        //Hint structure
        sockaddr_in serverHint;
        serverHint.sin_addr.S_un.S_addr = ADDR_ANY;
        serverHint.sin_family = AF_INET;
        serverHint.sin_port = htons(port);
        bind(s, (sockaddr*)&serverHint, sizeof(serverHint));
    }
    ~UDPServer()
    {
        closesocket(s);
        WSACleanup();
    }
    bool Recieve()
    {
        ZeroMemory(&client, clientLength);
        if (dataAvailable(s))
        {
            ZeroMemory(messageBuffer, bufferSize);
            int bytesIn = recvfrom(s, messageBuffer, bufferSize, 0, (sockaddr*)&client, &clientLength);
            char clientIP[bufferSize];
            ZeroMemory(clientIP, bufferSize);
            inet_ntop(AF_INET, &client.sin_addr, clientIP, 256);
            return true;
        }
        return false;
    }
    std::string GetNetworkMessage()
    {
        std::string message = messageBuffer;
        return message;
    }
private:
    bool dataAvailable(int sock, int interval = 6000)
    {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(sock, &fds);
        timeval tv;
        tv.tv_sec = 0;
        tv.tv_usec = interval;
        return (select(sock + 1, &fds, 0, 0, &tv) == 1);
    }
private:
    SOCKET s;
    sockaddr_in client;
    int clientLength = sizeof(client);
    static constexpr int bufferSize = 512;
    unsigned short port;
    char messageBuffer[bufferSize] = {};
};
int main()
{
    //Create server object on the heap.
    std::unique_ptr<UDPServer> udp = std::make_unique<UDPServer>(6000);
    //Get some new threads mate.
    std::thread theThread;
    std::string oldString = "";
    while (true)
    {
        //Problems...
        theThread = std::thread{udp->Recieve()};
        if (udp->GetNetworkMessage() != oldString)
        {
            //print out any changed data we find.
            oldString = udp->GetNetworkMessage();
            std::cout << oldString << std::endl;
        }
    }
}
One of the items you weren't clear on is memory accessibility in threads. In Windows, and likely most other operating systems, any memory accessible in the main thread is also accessible by every other thread in the same process.
There are two issues with regard to threads and that memory. The first is how more than one thread can know where a given variable or object is in memory. This is generally solved by passing a pointer to the new thread when it is created; most thread-creation mechanisms provide a parameter for this. So this is the easier issue to solve.
The harder issue to solve is making sure that one thread doesn't change a variable or class while another thread is using it. Generally this is solved by using a mutual exclusion synchronization object, generally referred to as a mutex or a lock. I suggest learning about the concept of a mutex. But bottom line it only allows one thread at a time to access whatever is locked by that mutex. So if one thread is busy changing or using that object the other thread will wait until the other thread has unlocked the object before continuing.
But when you get into multiple locks, there is something called a deadlock. A simple case: thread A holds lock 1 and is waiting to get access to lock 2, while thread B is holding lock 2 and waiting for access to lock 1. Both threads are stuck waiting on the other. The solution is that any time you have to hold two locks, always take them in the same order. So in this case, if both threads always took lock 1 and then lock 2, they could not deadlock.
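As a minimal illustration (using standard C++ threads rather than the Win32 API; the names Shared and Worker are made up): the shared object is passed to each thread by pointer at creation time, and both locks are always taken together, which rules out the deadlock described above.
#include <mutex>
#include <thread>

struct Shared {                 // created in the main thread, visible to all threads
    std::mutex m1, m2;
    int counterA = 0, counterB = 0;
};

void Worker(Shared* shared)     // receives a pointer to the shared object at creation
{
    // Taking both locks in one call (or always in the same order) avoids deadlock.
    std::scoped_lock lock(shared->m1, shared->m2);
    ++shared->counterA;
    ++shared->counterB;
}

int main()
{
    Shared shared;
    std::thread a(Worker, &shared);   // both threads get the same pointer
    std::thread b(Worker, &shared);
    a.join();
    b.join();
}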
The subject matter you want to learn about is threads and thread synchronization.

Multiple threads writing to same socket causing issues

I have written a client/server application where the server spawns multiple threads depending upon the request from the client.
These threads are expected to send some data (strings) to the client.
The problem is that data gets overwritten on the client side. How do I tackle this issue?
I have already read some other threads on a similar issue but was unable to find the exact solution.
Here is my client code to receive data.
while(1)
{
    char buff[MAX_BUFF];
    int bytes_read = read(sd, buff, MAX_BUFF);
    if(bytes_read == 0)
    {
        break;
    }
    else if(bytes_read > 0)
    {
        if(buff[bytes_read-1] == '$')
        {
            buff[bytes_read-1] = '\0';
            cout << buff;
        }
        else
        {
            cout << buff;
        }
    }
}
Server Thread code :
void send_data(int sd, char *data)
{
    write(sd, data, strlen(data));
    cout << data;
}

void *calcWordCount(void *arg)
{
    tdata *tmp = (tdata *)arg;
    string line = tmp->line;
    string s = tmp->arg;
    int sd = tmp->sd_c;
    int line_no = tmp->line_no;
    int startpos = 0;
    int finds = 0;
    while ((startpos = line.find(s, startpos)) != std::string::npos)
    {
        ++finds;
        startpos += 1;
        pthread_mutex_lock(&myMux);
        tcount++;
        pthread_mutex_unlock(&myMux);
    }
    pthread_mutex_lock(&mapMux);
    int t = wcount[s];
    wcount[s] = t + finds;
    pthread_mutex_unlock(&mapMux);
    char buff[MAX_BUFF];
    sprintf(buff, "%s", s.c_str());
    sprintf(buff+strlen(buff), "%s", " occured ");
    sprintf(buff+strlen(buff), "%d", finds);
    sprintf(buff+strlen(buff), "%s", " times on line ");
    sprintf(buff+strlen(buff), "%d", line_no);
    sprintf(buff+strlen(buff), "\n", strlen("\n"));
    send_data(sd, buff);
    delete (tdata*)arg;
}
On the server side, make sure the shared resource (the socket, along with its associated internal buffer) is protected against concurrent access.
Define and implement an application-level protocol used by the server so that the client can distinguish what the different threads sent.
As an additional note: one cannot rely on read()/write() reading/writing as many bytes as those two functions were told to read/write. It is essential to check their return value to learn how many bytes they actually read/wrote, and to loop around them until all the data that was intended to be read/written has been read/written.
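A rough sketch of both points, reusing the '$' terminator the client code already checks for (send_all, send_message and sockMux are made-up names for illustration):
#include <pthread.h>
#include <string.h>
#include <unistd.h>

pthread_mutex_t sockMux = PTHREAD_MUTEX_INITIALIZER;

// Loop until every byte has been written (write() may write less than asked).
ssize_t send_all(int sd, const char *data, size_t len)
{
    size_t sent = 0;
    while (sent < len)
    {
        ssize_t n = write(sd, data + sent, len - sent);
        if (n <= 0)
            return -1;          // error (or connection closed)
        sent += (size_t)n;
    }
    return (ssize_t)sent;
}

// One complete, delimited message per call, sent under a mutex so that
// messages from different threads cannot interleave on the socket.
void send_message(int sd, const char *data)
{
    pthread_mutex_lock(&sockMux);
    send_all(sd, data, strlen(data));
    send_all(sd, "$", 1);       // terminator the client looks for
    pthread_mutex_unlock(&sockMux);
}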
You should protect your socket with a mutex.
When a thread uses the socket, it should lock the mutex first so no other thread can write at the same time (a mutex example is sketched above).
I can't help you more without the server code, because the problem is probably in the server.

_beginthreadex and socket

I have a question about the _beginthreadex function, regarding its third and fourth parameters.
If I have this line to create the thread:
hThread=(HANDLE)_beginthreadex(0,0, &RunThread, &m_socket,CREATE_SUSPENDED,&threadID );
m_socket is the socket that I want inside the thread (the fourth parameter),
and I have the RunThread function (the third parameter) written this way:
static unsigned __stdcall RunThread (void* ptr) {
    return 0;
}
Is it sufficient to create the thread this way, regardless of whether m_socket has something in it or not?
Thanks in advance
Thank you for the response; Ciaran Keating helped me understand threads better.
I'll explain the situation a little more.
I'm creating the thread in this function inside a class:
public: void getClientsConnection()
{
    numberOfClients = 1;
    SOCKET temporalSocket = NULL;
    firstClient = NULL;
    secondClient = NULL;
    while (numberOfClients < 2)
    {
        temporalSocket = SOCKET_ERROR;
        while (temporalSocket == SOCKET_ERROR)
        {
            temporalSocket = accept(m_socket, NULL, NULL);
            //-----------------------------------------------
            HANDLE hThread;
            unsigned threadID;
            hThread = (HANDLE)_beginthreadex(0, 0, &RunThread, &m_socket, CREATE_SUSPENDED, &threadID);
            WaitForSingleObject(hThread, INFINITE);
            if(!hThread)
                printf("ERROR CREATING THE THREAD: %ld\n", WSAGetLastError());
            //-----------------------------------------------
        }
        if(firstClient == NULL)
        {
            firstClient = temporalSocket;
            muebleC1 = temporalSocket;
            actionC1 = temporalSocket;
            ++numberOfClients;
            printf("CLIENT 1 CONNECTED\n");
        }
        else
        {
            secondClient = temporalSocket;
            muebleC2 = temporalSocket;
            actionC2 = temporalSocket;
            ++numberOfClients;
            printf("CLIENT 2 CONNECTED\n");
        }
    }
}
What I'm trying to do is to have the socket inside the thread while it waits for a client connection.
Is this feasible with the thread code as I have it?
I can change the state of the thread; that is not a problem.
Thanks again
Yes, that will create the thread and pass it your socket handle. But by returning immediately from RunThread your new thread will terminate immediately after you resume it (you've created it suspended.) You'll need to put your socket handling code (read/write loop etc.) inside RunThread.
Some more tips:
You'll have to make sure that m_socket remains valid for the life of the thread, because you passed it by reference. You might prefer to pass it by value instead, and let ownership pass to the new thread, but of course in that case it probably wouldn't belong in your object instance (I infer from the m_ prefix.) Or you might prefer to leave the socket handle in the object instance, and pass a reference to the object to beginthread instead:
beginthread(...,&RunThread,this,...);
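For completeness, a minimal sketch of the pass-by-value variant (the receive loop inside RunThread is made up for illustration; error handling omitted, and it reuses the temporalSocket variable from your code):
// The accepted socket is passed by value, cast through the void* parameter,
// so its lifetime no longer depends on a member variable.
static unsigned __stdcall RunThread(void* ptr)
{
    SOCKET clientSocket = (SOCKET)ptr;
    char buf[512];
    int n;
    // The thread owns the socket: service it until the peer disconnects.
    while ((n = recv(clientSocket, buf, sizeof(buf), 0)) > 0)
    {
        // ... handle n bytes of data ...
    }
    closesocket(clientSocket);
    return 0;
}

// Creation site: pass the value, not the address, and don't start the thread
// suspended unless you really need to set something up before it runs.
HANDLE hThread = (HANDLE)_beginthreadex(0, 0, &RunThread,
                                        (void*)temporalSocket, 0, &threadID);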
(With your new info, I can see that my other answer isn't what you need.)
If I understand you right, you just want to wait on the accept() call until a client connects. You don't need threads for that - there are native socket mechanisms to do it. One option is to make m_socket a blocking socket, so accept() doesn't return until a client connects. An easier way is to use the select() function to wait until the socket is ready to read, which in the case of a listening socket means that a client has connected.
fd_set fds;
FD_ZERO(&fds);
FD_SET(m_socket, &fds);
int ret = select(0, &fds, NULL, NULL, NULL); // will block
if(FD_ISSET(m_socket, &fds))
    temporalSocket = accept(...);

How to correctly read data when using epoll_wait

I am trying to port to Linux an existing Windows C++ code that uses IOCP. Having decided to use epoll_wait to achieve high concurrency, I am already faced with a theoretical issue of when we try to process received data.
Imagine two threads calling epoll_wait, and two consecutive messages being received, such that Linux unblocks the first thread and, soon after, the second.
Example :
Thread 1 blocks on epoll_wait
Thread 2 blocks on epoll_wait
Client sends a chunk of data 1
Thread 1 unblocks from epoll_wait, performs recv and tries to process the data
Client sends a chunk of data 2
Thread 2 unblocks, performs recv and tries to process the data.
Is this scenario conceivable? I.e., can it occur?
Is there a way to prevent it, so as to avoid implementing synchronization in the recv/processing code?
If you have multiple threads reading from the same set of epoll handles, I would recommend you put your epoll handles in one-shot level-triggered mode with EPOLLONESHOT. This will ensure that, after one thread observes the triggered handle, no other thread will observe it until you use epoll_ctl to re-arm the handle.
If you need to handle read and write paths independently, you may want to completely split up the read and write thread pools; have one epoll handle for read events, and one for write events, and assign threads to one or the other exclusively. Further, have a separate lock for read and for write paths. You must be careful about interactions between the read and write threads as far as modifying any per-socket state, of course.
If you do go with that split approach, you need to put some thought into how to handle socket closures. Most likely you will want an additional shared-data lock, and 'acknowledge closure' flags, set under the shared data lock, for both read and write paths. Read and write threads can then race to acknowledge, and the last one to acknowledge gets to clean up the shared data structures. That is, something like this:
void OnSocketClosed(shareddatastructure *pShared, int writer)
{
    epoll_ctl(myepollhandle, EPOLL_CTL_DEL, pShared->fd, NULL);
    LOCK(pShared->common_lock);
    if (writer)
        pShared->close_ack_w = true;
    else
        pShared->close_ack_r = true;
    bool acked = pShared->close_ack_w && pShared->close_ack_r;
    UNLOCK(pShared->common_lock);
    if (acked)
        free(pShared);
}
I'm assuming here that the situation you're trying to process is something like this:
You have multiple (maybe very many) sockets that you want to receive data from at once;
You want to start processing data from the first connection on Thread A when it is first received and then be sure that data from this connection is not processed on any other thread until you have finished with it in Thread A.
While you are doing that, if some data is now received on a different connection you want Thread B to pick that data and process it while still being sure that no one else can process this connection until Thread B is done with it etc.
Under these circumstances it turns out that using epoll_wait() with the same epoll fd in multiple threads is a reasonably efficient approach (I'm not claiming that it is necessarily the most efficient).
The trick here is to add the individual connections fds to the epoll fd with the EPOLLONESHOT flag. This ensures that once an fd has been returned from an epoll_wait() it is unmonitored until you specifically tell epoll to monitor it again. This ensures that the thread processing this connection suffers no interference as no other thread can be processing the same connection until this thread marks the connection to be monitored again.
You can set up the fd to monitor EPOLLIN or EPOLLOUT again using epoll_ctl() and EPOLL_CTL_MOD.
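For example (a minimal sketch; epfd, fd and the event data are placeholders): the connection is registered with EPOLLONESHOT, and once a thread has finished processing it, it is re-armed with EPOLL_CTL_MOD.
#include <sys/epoll.h>

// Initial registration: one-shot, so only one thread ever sees this fd at a time.
struct epoll_event ev;
ev.events = EPOLLIN | EPOLLONESHOT;
ev.data.fd = fd;
epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

/* ... a worker thread returns from epoll_wait(), recv()s and processes
   the data for fd without interference from the other threads ... */

// Re-arm: make the fd visible to the epoll set again.
ev.events = EPOLLIN | EPOLLONESHOT;
ev.data.fd = fd;
epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);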
A significant benefit of using epoll like this in multiple threads is that when one thread is finished with a connection and adds it back to the epoll monitored set, any other threads still in epoll_wait() are immediately monitoring it even before the previous processing thread returns to epoll_wait(). Incidentally that could also be a disadvantage because of lack of cache data locality if a different thread now picks up that connection immediately (thus needing to fetch the data structures for this connection and flush the previous thread's cache). What works best will sensitively depend on your exact usage pattern.
If you are trying to process messages received subsequently on the same connection in different threads then this scheme to use epoll is not going to be appropriate for you, and an approach using a listening thread feeding an efficient queue feeding worker threads might be better.
Previous answers that point out that calling epoll_wait() from multiple threads is a bad idea are almost certainly right, but I was intrigued enough by the question to try and work out what does happen when it is called from multiple threads on the same handle, waiting for the same socket. I wrote the following test code:
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

struct thread_info {
    int number;
    int socket;
    int epoll;
};

void * thread(struct thread_info * arg)
{
    struct epoll_event events[10];
    int s;
    char buf[512];
    sleep(5 * arg->number);
    printf("Thread %d start\n", arg->number);
    do {
        s = epoll_wait(arg->epoll, events, 10, -1);
        if (s < 0) {
            perror("wait");
            exit(1);
        } else if (s == 0) {
            printf("Thread %d No data\n", arg->number);
            exit(1);
        }
        if (recv(arg->socket, buf, 512, 0) <= 0) {
            perror("recv");
            exit(1);
        }
        printf("Thread %d got data\n", arg->number);
    } while (s == 1);
    printf("Thread %d end\n", arg->number);
    return 0;
}

int main()
{
    pthread_attr_t attr;
    pthread_t threads[2];
    struct thread_info thread_data[2];
    int s;
    int listener, client, epollfd;
    struct sockaddr_in listen_address;
    struct sockaddr_storage client_address;
    socklen_t client_address_len;
    struct epoll_event ev;

    listener = socket(AF_INET, SOCK_STREAM, 0);
    if (listener < 0) {
        perror("socket");
        exit(1);
    }
    memset(&listen_address, 0, sizeof(struct sockaddr_in));
    listen_address.sin_family = AF_INET;
    listen_address.sin_addr.s_addr = INADDR_ANY;
    listen_address.sin_port = htons(6799);
    s = bind(listener,
             (struct sockaddr*)&listen_address,
             sizeof(listen_address));
    if (s != 0) {
        perror("bind");
        exit(1);
    }
    s = listen(listener, 1);
    if (s != 0) {
        perror("listen");
        exit(1);
    }
    client_address_len = sizeof(client_address);
    client = accept(listener,
                    (struct sockaddr*)&client_address,
                    &client_address_len);
    epollfd = epoll_create(10);
    if (epollfd == -1) {
        perror("epoll_create");
        exit(1);
    }
    ev.events = EPOLLIN;
    ev.data.fd = client;
    if (epoll_ctl(epollfd, EPOLL_CTL_ADD, client, &ev) == -1) {
        perror("epoll_ctl: listen_sock");
        exit(1);
    }
    thread_data[0].number = 0;
    thread_data[1].number = 1;
    thread_data[0].socket = client;
    thread_data[1].socket = client;
    thread_data[0].epoll = epollfd;
    thread_data[1].epoll = epollfd;
    s = pthread_attr_init(&attr);
    if (s != 0) {
        perror("pthread_attr_init");
        exit(1);
    }
    s = pthread_create(&threads[0],
                       &attr,
                       (void*(*)(void*))&thread,
                       &thread_data[0]);
    if (s != 0) {
        perror("pthread_create");
        exit(1);
    }
    s = pthread_create(&threads[1],
                       &attr,
                       (void*(*)(void*))&thread,
                       &thread_data[1]);
    if (s != 0) {
        perror("pthread_create");
        exit(1);
    }
    pthread_join(threads[0], 0);
    pthread_join(threads[1], 0);
    return 0;
}
When data arrives and both threads are waiting on epoll_wait(), only one will return; but as subsequent data arrives, the thread that wakes up to handle the data is effectively random between the two threads. I wasn't able to find a way to affect which thread was woken.
It seems likely that a single thread calling epoll_wait makes most sense, with events passed to worker threads to pump the IO.
I believe that the high performance software that uses epoll and a thread per core creates multiple epoll handles that each handle a subset of all the connections. In this way the work is divided but the problem you describe is avoided.
Generally, epoll is used when you have a single thread listening for data on a single asynchronous source. To avoid busy-waiting (manually polling), you use epoll to let you know when data is ready (much like select does).
It is not standard practice to have multiple threads reading from a single data source, and I, at least, would consider it bad practice.
If you want to use multiple threads, but you only have one input source, then designate one of the threads to listen and queue the data so the other threads can read individual pieces from the queue.
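A minimal sketch of that listener-plus-queue pattern in standard C++ (the names DataQueue, producer and worker are made up; in a real server the producer would be the thread doing epoll_wait()/recv()):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// One thread pushes data it read from the single source; workers pop items.
class DataQueue {
public:
    void push(std::string item) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(item));
        }
        cv_.notify_one();
    }
    std::string pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return !q_.empty(); });
        std::string item = std::move(q_.front());
        q_.pop();
        return item;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
};

int main() {
    DataQueue queue;
    // Designated listener thread: the only one reading the input source.
    std::thread producer([&] {
        for (int i = 0; i < 10; ++i)
            queue.push("message " + std::to_string(i));
    });
    // Worker threads read individual pieces from the queue.
    std::vector<std::thread> workers;
    for (int w = 0; w < 2; ++w)
        workers.emplace_back([&] {
            for (int i = 0; i < 5; ++i)
                std::cout << queue.pop() << '\n';
        });
    producer.join();
    for (auto& t : workers) t.join();
}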

How to pass user-defined data to a worker thread using IOCP?

Hey... I created a small test server using I/O completion ports and winsock.
I can successfully connect and associate a socket handle with the completion port.
But I don't know how to pass user-defined data structures into the worker thread...
What I've tried so far was passing a user structure as (ULONG_PTR)&structure as the completion key in the association call of CreateIoCompletionPort().
But that did not work.
Now I tried defining my own OVERLAPPED-structure and using CONTAINING_RECORD() as described here http://msdn.microsoft.com/en-us/magazine/cc302334.aspx and http://msdn.microsoft.com/en-us/magazine/bb985148.aspx.
But that does not work, too. (I get freaky values for the contents of pHelper)
So my question is: how can I pass data to the worker thread using WSARecv(), GetQueuedCompletionStatus() and the completion packet or the OVERLAPPED structure?
EDIT: How can I successfully transmit "per-connection data"?... It seems like I got the art of doing it (as explained in the two links above) wrong.
Here goes my code: (Yes, its ugly and its only TEST-code)
struct helper
{
    SOCKET m_sock;
    unsigned int m_key;
    OVERLAPPED over;
};

///////

SOCKET newSock = INVALID_SOCKET;
WSABUF wsabuffer;
char cbuf[250];
wsabuffer.buf = cbuf;
wsabuffer.len = 250;
DWORD flags, bytesrecvd;
while(true)
{
    newSock = accept(AcceptorSock, NULL, NULL);
    if(newSock == INVALID_SOCKET)
        ErrorAbort("could not accept a connection");

    //associate socket with the CP
    if(CreateIoCompletionPort((HANDLE)newSock, hCompletionPort, 3, 0) != hCompletionPort)
        ErrorAbort("Wrong port associated with the connection");
    else
        cout << "New Connection made and associated\n";

    helper* pHelper = new helper;
    pHelper->m_key = 3;
    pHelper->m_sock = newSock;
    memset(&(pHelper->over), 0, sizeof(OVERLAPPED));

    flags = 0;
    bytesrecvd = 0;
    if(WSARecv(newSock, &wsabuffer, 1, NULL, &flags, (OVERLAPPED*)pHelper, NULL) != 0)
    {
        if(WSAGetLastError() != WSA_IO_PENDING)
            ErrorAbort("WSARecv didnt work");
    }
}
//Cleanup
CloseHandle(hCompletionPort);
cin.get();
return 0;
}
DWORD WINAPI ThreadProc(HANDLE h)
{
    DWORD dwNumberOfBytes = 0;
    OVERLAPPED* pOver = nullptr;
    helper* pHelper = nullptr;
    WSABUF RecvBuf;
    char cBuffer[250];
    RecvBuf.buf = cBuffer;
    RecvBuf.len = 250;
    DWORD dwRecvBytes = 0;
    DWORD dwFlags = 0;
    ULONG_PTR Key = 0;

    GetQueuedCompletionStatus(h, &dwNumberOfBytes, &Key, &pOver, INFINITE);

    //Extract helper
    pHelper = (helper*)CONTAINING_RECORD(pOver, helper, over);
    cout << "Received Overlapped item" << endl;

    if(WSARecv(pHelper->m_sock, &RecvBuf, 1, &dwRecvBytes, &dwFlags, pOver, NULL) != 0)
        cout << "Could not receive data\n";
    else
        cout << "Data Received: " << RecvBuf.buf << endl;
    ExitThread(0);
}
If you pass your struct like this it should work just fine:
helper* pHelper = new helper;
CreateIoCompletionPort((HANDLE)newSock, hCompletionPort, (ULONG_PTR)pHelper,0);
...
helper* pHelper=NULL;
GetQueuedCompletionStatus(h, &dwNumberOfBytes, (PULONG_PTR)&pHelper, &pOver, INFINITE);
Edit to add per IO data:
One of the frequently abused features of the asynchronous APIs is that they don't copy the OVERLAPPED struct; they simply use the one provided. Hence the overlapped struct returned from GetQueuedCompletionStatus points to the originally provided struct. So:
struct helper {
    OVERLAPPED m_over;
    SOCKET m_socket;
    UINT m_key;
};

if(WSARecv(newSock, &wsabuffer, 1, NULL, &flags, &pHelper->m_over, NULL) != 0)
Notice that, again, in your original sample, you were getting your casting wrong. (OVERLAPPED*)pHelper was passing a pointer to the START of the helper struct, but the OVERLAPPED part was declared last. I changed it to pass the address of the actual overlapped part, which means that the code compiles without a cast, which lets us know we are doing the correct thing. I also moved the overlapped struct to be the first member of the struct.
To catch the data on the other side:
OVERLAPPED* pOver;
ULONG_PTR key;
if(GetQueuedCompletionStatus(h, &dw, &key, &pOver, INFINITE))
{
    // c cast
    helper* pConnData = (helper*)pOver;
You can send special-purpose data of your own to the completion port via PostQueuedCompletionStatus.
The I/O completion packet will satisfy an outstanding call to the GetQueuedCompletionStatus function. This function returns with the three values passed as the second, third, and fourth parameters of the call to PostQueuedCompletionStatus. The system does not use or validate these values. In particular, the lpOverlapped parameter need not point to an OVERLAPPED structure.
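For instance (a minimal sketch reusing the hCompletionPort handle from the code above; the shutdown key value is an arbitrary choice made up here):
// Wake one worker and hand it an application-defined key and pointer.
// None of the three values is interpreted by the system.
const ULONG_PTR kShutdownKey = 0xDEAD;   // arbitrary, application-defined
PostQueuedCompletionStatus(hCompletionPort, 0, kShutdownKey, NULL);

// In the worker:
DWORD bytes;
ULONG_PTR key;
OVERLAPPED* pOver;
if (GetQueuedCompletionStatus(hCompletionPort, &bytes, &key, &pOver, INFINITE))
{
    if (key == kShutdownKey)
    {
        // special-purpose packet posted by us, not by a real I/O operation
    }
}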
I use the standard socket routines (socket, closesocket, bind, accept, connect ...) for creating/destroying and ReadFile/WriteFile for I/O as they allow use of the OVERLAPPED structure.
After your socket has accepted or connected you should associate it with the session context that it services. Then you associate your socket to an IOCP and (in the third parameter) provide it with a reference to the session context. The IOCP does not know what this reference is and doesn't care either for that matter. The reference is for YOUR use so that when you get an IOC through GetQueuedCompletionStatus the variable pointed to by parameter 3 will be filled in with the reference so that you immediately find the context associated with the socket event and can begin servicing the event. I usually use an indexed structure containing (among other things) the socket declaration, the overlapped structure as well as other session-specific data. The reference I pass to CreateIoCompletionPort in parameter 3 will be the index to the structure member containing the socket.
You need to check if GetQueuedCompletionStatus returned a completion or a timeout. With a timeout you can run through your indexed structure and see (for example) if one of them has timed out or something else and take appropriate house-keeping actions.
The overlapped structure also needs to be checked to see that the I/O completed correctly.
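A sketch of that distinction (the timeout value and names are illustrative; hCompletionPort is assumed from the earlier code):
DWORD bytes;
ULONG_PTR key;
OVERLAPPED* pOver = NULL;
BOOL ok = GetQueuedCompletionStatus(hCompletionPort, &bytes, &key, &pOver, 1000 /* ms */);
if (!ok)
{
    if (pOver == NULL)
    {
        // No packet was dequeued: a timeout (GetLastError() == WAIT_TIMEOUT)
        // or a problem with the port itself. Do house-keeping here.
    }
    else
    {
        // A packet for a *failed* I/O operation was dequeued; GetLastError()
        // tells you why that I/O failed. Clean up the associated context.
    }
}
else
{
    // Successful completion: 'key' is the per-socket context reference passed
    // to CreateIoCompletionPort, and 'bytes' is the transfer size.
}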
The function servicing the IOCP should be a separate, multi-threaded entity. Use the same number of threads that you have cores in your system, or at least no more than that as it wastes system resources (you don't have more resources for servicing the event than the number of cores in your system, right?).
IOCPs really are the best of all worlds (too good to be true) and anyone who says "one thread per socket" or "wait on multiple-socket list in one function" don't know what they are talking about. The former stresses your scheduler and the latter is polling and polling is ALWAYS extremely wasteful.