Should I pass a unique OVERLAPPED structure for each WSASend call, in this case? - c++

I have a list of sockets (open connections).
I have n worker threads.
Thread loop:
while (1)
{
_this.result = GetQueuedCompletionStatus(a_server.server_iocp, &_this.numberOfBytesTransfered,
&_this.completionKey, (OVERLAPPED**)&_this.iocp_task, INFINITE);
...
}
I have this simple struct:
struct iocp_send_global : public iocp_task<IOCP_SEND_GLOBAL> {
OVERLAPPED ov; // OVERLAPPED struct at the top
std::atomic_uint32_t ref;
// returns true when this was the last outstanding reference
bool decr_ref() { return ref.fetch_sub(1, std::memory_order_acq_rel) == 1; }
// packet data here
};
...
This is the 'Broadcast' function:
iocp_send_global* packet = new iocp_send_global;
[set packet data here]
packet->ref = connectionsCount; // one reference per pending send
for (int i = 0; i < connectionsCount; ++i) {
WSASend(connections[i], ..., &packet->ov, ...); // posting the same packet to all connections
}
I want to do this in the worker loop after the GetQueuedCompletionStatus call returns with the overlapped result:
if (_this.iocp_task->type == IOCP_SEND_GLOBAL) {
auto* task = (iocp_send_global*)_this.iocp_task;
if (!task->decr_ref()) {
_this.iocp_task = nullptr;
//don't delete the task yet;
//all send posts must finish first
//[all posts share the same buffer]
}
else {
//delete the task containing the send data once all send posts have finished
delete task;
_this.iocp_task = nullptr;
}
}
From what I read in Microsoft's WSASend documentation, each overlapped WSASend call should have its own OVERLAPPED structure, but is that still valid when I WSASend the same buffer?
Thank you!

You must pass a different OVERLAPPED structure for each call, since you will have multiple calls pending at the same time. This is clearly spelled out in the documentation for the OVERLAPPED structure.
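A minimal sketch of one way to lay this out, assuming the shared data buffer may be reused across the sends as long as it stays alive until every send completes (the names broadcast_packet, per_send_ov, broadcast and on_send_complete are illustrative, not from the question): keep the payload and reference count in one heap object, and give every WSASend its own small OVERLAPPED wrapper.

// Sketch only: one shared payload, one OVERLAPPED per pending WSASend.
#include <winsock2.h>
#include <atomic>

struct broadcast_packet {
    std::atomic_uint32_t ref{0};
    WSABUF buf{};              // caller points this at data before broadcasting
    char data[1024];           // shared payload, written once, then only read
};

struct per_send_ov {
    OVERLAPPED ov{};           // must be unique per outstanding WSASend
    broadcast_packet* packet;  // back-pointer to the shared payload
};

void broadcast(SOCKET* connections, int connectionCount, broadcast_packet* packet)
{
    packet->ref.store(connectionCount, std::memory_order_relaxed);
    for (int i = 0; i < connectionCount; ++i) {
        auto* send_ctx = new per_send_ov;
        send_ctx->packet = packet;
        if (WSASend(connections[i], &packet->buf, 1, nullptr, 0,
                    &send_ctx->ov, nullptr) == SOCKET_ERROR
            && WSAGetLastError() != WSA_IO_PENDING) {
            // the send never became pending: release this reference right away
            delete send_ctx;
            if (packet->ref.fetch_sub(1, std::memory_order_acq_rel) == 1)
                delete packet;
        }
    }
}

// In the worker loop: the OVERLAPPED* from GetQueuedCompletionStatus is the
// per_send_ov (ov is its first member); free it, then drop one reference.
void on_send_complete(OVERLAPPED* ov)
{
    auto* send_ctx = reinterpret_cast<per_send_ov*>(ov);
    broadcast_packet* packet = send_ctx->packet;
    delete send_ctx;
    if (packet->ref.fetch_sub(1, std::memory_order_acq_rel) == 1)
        delete packet;
}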

Related

Task Synchronization for Command/Response server (C++/ESP32/FreeRTOS)

I want to synchronize two tasks in a command/response communication server. One task sends data to a serial port and another task receives data on the serial port. Received data should either be returned to the sender task or handled in some other way.
I unsuccessfully tried using volatile bool flags but have now found that won't work with C++ (see When to use volatile with multi threading?).
So I am trying to use semaphores instead, but can't quite figure out how. Some (bad) pseudo-code using volatile bool is below. How/where should it be modified for semaphore give/take?
The actual code/platform is C++11 running on an ESP32 (ESP-IDF). Resources are very limited, so no C++ std:: libraries.
volatile bool responsePending = false;
volatile bool cmdAccepted = false;
char sharedBuffer[100];
// SENDER //
void Task1()
{
char localBuffer[100];
while (1)
{
responsePending = true;
cmdAccepted = false;
sendMessage();
while (responsePending)
sleep();
strcpy(localBuffer, sharedBuffer);
cmdAccepted = true; // signal Task2
}
}
// RECEIVER //
void Task2()
{
char localBuf[100];
int fd = open();
while (1)
{
if (select())
{
read(fd, localBuf);
if (responsePending)
{
strcpy(sharedBuffer, localBuf);
responsePending = false; // signal Task1
while (!cmdAccepted)
sleep();
}
else
{
// Do something else with the received data
}
}
}
}
Create a queue which holds a struct. One task waits for the serial port; when it gets data, it puts the message into the struct and the struct onto the queue.
The other task waits on the queue; when there are items in it, it processes the struct.
Example:
struct queueData{
char messageBuffer[100];
};
QueueHandle_t queueHandle;
void taskOne(){
while(1){
// Task one checks if it got serial data.
if( gotSerialMsg() ){
// create a struct
queueData data;
// copy the data to the struct
strcpy( data.messageBuffer, getSerialMSG() );
// send struct to queue ( waits indefinitely )
xQueueSend(queueHandle, &data, portMAX_DELAY);
}
vTaskDelay(1); // Must feed other tasks
}
}
void taskTwo(){
while(1){
// Check if a structs has an item
if( uxQueueMessagesWaiting(queueHandle) > 0 ){
// create a holding struct
queueData data;
// Receive the whole struct
if (xQueueReceive(queueHandle, &data, 0) == pdTRUE) {
// Struct holds message like: data.messageBuffer
}
}
vTaskDelay(1); // Must feed other tasks
}
}
The good thing about passing structs through queues is that you can always put more data into them: booleans, ints, or anything else.
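For completeness, a rough sketch of the setup the example above assumes, plus a second queue to carry responses back to the sender (queue lengths and the names commandQueue/responseQueue/sendAndWait are illustrative). The queues must be created with xQueueCreate before either task starts.

#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

QueueHandle_t commandQueue;   // receiver -> command processor
QueueHandle_t responseQueue;  // processor -> original sender

void setupQueues()
{
    // Each slot holds one queueData struct, copied by value into the queue
    commandQueue  = xQueueCreate(5, sizeof(queueData));
    responseQueue = xQueueCreate(5, sizeof(queueData));
}

// Sender side: post the command, then block until a response arrives.
// This replaces the volatile-flag busy wait entirely.
bool sendAndWait(const queueData &cmd, queueData &response, TickType_t timeout)
{
    if (xQueueSend(commandQueue, &cmd, portMAX_DELAY) != pdPASS)
        return false;
    return xQueueReceive(responseQueue, &response, timeout) == pdTRUE;
}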

Creating a dispatch queue / thread handler in C++ with pipes: FIFOs overfilling

Threads are resource-heavy to create and use, so often a pool of threads will be reused for asynchronous tasks. A task is packaged up, and then "posted" to a broker that will enqueue the task on the next available thread.
This is the idea behind dispatch queues (i.e. Apple's Grand Central Dispatch), and thread handlers (Android's Looper mechanism).
Right now, I'm trying to roll my own. In fact, I'm plugging a gap in Android whereby there is an API for posting tasks in Java, but not in the native NDK. However, I'm keeping this question platform independent where I can.
Pipes are the ideal choice for my scenario. I can easily poll the file descriptor of the read-end of a pipe(2) on my worker thread, and enqueue tasks from any other thread by writing to the write-end. Here's what that looks like:
int taskRead, taskWrite;
void setup() {
// Create the pipe
int taskPipe[2];
::pipe(taskPipe);
taskRead = taskPipe[0];
taskWrite = taskPipe[1];
// Set up a routine that is called when task_r reports new data
function_that_polls_file_descriptor(taskRead, []() {
// Read the callback data
std::function<void(void)>* taskPtr;
::read(taskRead, &taskPtr, sizeof(taskPtr));
// Run the task - this is unsafe! See below.
(*taskPtr)();
// Clean up
delete taskPtr;
});
}
void post(const std::function<void(void)>& task) {
// Copy the function onto the heap
auto* taskPtr = new std::function<void(void)>(task);
// Write the pointer to the pipe - this may block if the FIFO is full!
::write(taskWrite, &taskPtr, sizeof(taskPtr));
}
This code puts a std::function on the heap, and passes the pointer to the pipe. The function_that_polls_file_descriptor then calls the provided expression to read the pipe and execute the function. Note that there are no safety checks in this example.
This works great 99% of the time, but there is one major drawback. Pipes have a limited size, and if the pipe is filled, then calls to post() will hang. This in itself is not unsafe, until a call to post() is made within a task.
auto evil = []() {
// Post a new task back onto the queue
post({});
// Not enough new tasks, let's make more!
for (int i = 0; i < 3; i++) {
post({});
}
// Now for each time this task is posted, 4 more tasks will be added to the queue.
};
post(evil);
post(evil);
...
If this happens, then the worker thread will be blocked, waiting to write to the pipe. But the pipe's FIFO is full, and the worker thread is not reading anything from it, so the entire system is in deadlock.
What can be done to ensure that calls to post() emanating from the worker thread always succeed, allowing the worker to continue processing the queue in the event it is full?
Thanks to all the comments and other answers in this post, I now have a working solution to this problem.
The trick I've employed is to prioritise worker threads by checking which thread is calling post(). Here is the rough algorithm:
pipe ← NON-BLOCKING-PIPE()
overflow ← Ø
POST(task)
success ← WRITE(task, pipe)
IF NOT success THEN
IF THREAD-IS-WORKER() THEN
overflow ← overflow ∪ {task}
ELSE
WAIT(pipe)
POST(task)
Then on the worker thread:
LOOP FOREVER
task ← READ(pipe)
RUN(task)
FOR EACH overtask ∈ overflow
RUN(overtask)
overflow ← Ø
The wait is performed with pselect(2), adapted from the answer by #Sigismondo.
Here's the algorithm implemented in my original code example that will work for a single worker thread (although I haven't tested it after copy-paste). It can be extended to work for a thread pool by having a separate overflow queue for each thread.
int taskRead, taskWrite;
// These variables are only allowed to be modified by the worker thread
std::thread::id workerId;
std::queue<std::function<void(void)>*> overflow;
bool overflowInUse;
void setup() {
int taskPipe[2];
::pipe(taskPipe);
taskRead = taskPipe[0];
taskWrite = taskPipe[1];
// Make the pipe non-blocking to check pipe overflows manually
::fcntl(taskWrite, F_SETFL, ::fcntl(taskWrite, F_GETFL, 0) | O_NONBLOCK);
// Save the ID of this worker thread to compare later
workerId = std::this_thread::get_id();
overflowInUse = false;
function_that_polls_file_descriptor(taskRead, []() {
// Read the callback data
std::function<void(void)>* taskPtr;
::read(taskRead, &taskPtr, sizeof(taskPtr));
// Run the task
(*taskPtr)();
delete taskPtr;
// Run any tasks that were posted to the overflow
while (!overflow.empty()) {
taskPtr = overflow.front();
overflow.pop();
(*taskPtr)();
delete taskPtr;
}
// Release the overflow mechanism if applicable
overflowInUse = false;
});
}
bool write(std::function<void(void)>* taskPtr, bool blocking = true) {
ssize_t rc = ::write(taskWrite, &taskPtr, sizeof(taskPtr));
// Failure handling
if (rc < 0) {
// If blocking is allowed, wait for pipe to become available
int err = errno;
if ((errno == EAGAIN || errno == EWOULDBLOCK) && blocking) {
fd_set fds;
FD_ZERO(&fds);
FD_SET(taskWrite, &fds);
::pselect(taskWrite + 1, nullptr, &fds, nullptr, nullptr, nullptr);
// Try again
return write(taskPtr);
}
// Otherwise return false
return false;
}
return true;
}
void post(const std::function<void(void)>& task) {
auto* taskPtr = new std::function<void(void)>(task);
if (std::this_thread::get_id() == workerId) {
// The worker thread gets 1st-class treatment.
// It won't be blocked if the pipe is full, instead
// using an overflow queue until the overflow has been cleared.
if (!overflowInUse) {
bool success = write(taskPtr, false);
if (!success) {
overflow.push(taskPtr);
overflowInUse = true;
}
} else {
overflow.push(taskPtr);
}
} else {
write(taskPtr);
}
}
Make the pipe write file descriptor non-blocking, so that write fails with EAGAIN when the pipe is full.
One improvement is to increase the pipe buffer size.
Another is to use a UNIX socket/socketpair and increase the socket buffer size.
Yet another solution is to use a UNIX datagram socket which many worker threads can read from, but only one gets the next datagram. In other words, you can use a datagram socket as a thread dispatcher.
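A rough, untested sketch of that last (Linux-specific) idea, with illustrative names: because each write is one datagram, it is delivered whole or not at all, and whichever worker reads next gets exactly one task.

#include <sys/socket.h>
#include <unistd.h>
#include <functional>

// socketpair(AF_UNIX, SOCK_DGRAM) as a task dispatcher: several workers can
// read taskFds[0]; each datagram wakes exactly one of them.
int taskFds[2];

void setupDispatcher() {
    ::socketpair(AF_UNIX, SOCK_DGRAM, 0, taskFds);
    // Optionally enlarge the send buffer so more tasks can be queued at once
    int size = 1 << 20;
    ::setsockopt(taskFds[1], SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
}

void post(const std::function<void(void)>& task) {
    auto* taskPtr = new std::function<void(void)>(task);
    // One pointer per datagram; blocks only if the socket buffer is full
    ::send(taskFds[1], &taskPtr, sizeof(taskPtr), 0);
}

void workerLoop() {
    for (;;) {
        std::function<void(void)>* taskPtr = nullptr;
        if (::recv(taskFds[0], &taskPtr, sizeof(taskPtr), 0) == sizeof(taskPtr)) {
            (*taskPtr)();
            delete taskPtr;
        }
    }
}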
You can use good old select to determine whether the file descriptors are ready to be used for writing:
The file descriptors in writefds will be watched to see if
space is available for write (though a large write may still block).
Since you are writing a pointer, your write() cannot be classified as large at all.
Clearly you must be ready to handle the fact that a post may fail, and be ready to retry it later... otherwise you will be facing an indefinitely growing backlog, until your system breaks again.
More or less (not tested):
bool post(const std::function<void(void)>& task) {
bool post_res = false;
// Copy the function onto the heap
auto* taskPtr = new std::function<void(void)>(task);
fd_set wfds;
struct timeval tv;
int retval;
FD_ZERO(&wfds);
FD_SET(taskWrite, &wfds);
// Don't wait at all
tv.tv_sec = 0;
tv.tv_usec = 0;
retval = select(taskWrite + 1, NULL, &wfds, NULL, &tv);
// select() returns 0 when no FD's are ready
if (retval == -1) {
// handle error condition
} else if (retval > 0) {
// Write the pointer to the pipe. This write will succeed
::write(taskWrite, &taskPtr, sizeof(taskPtr));
post_res = true;
}
if (!post_res) {
delete taskPtr; // avoid leaking the task when it could not be posted
}
return post_res;
}
If you only care about Android/Linux, using a pipe is not state of the art; using an event file descriptor (eventfd) together with epoll is the way to go.
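To illustrate (a rough, Linux-only sketch; the names and the choice of a mutex-protected std::queue are just one option): the eventfd acts as a counter that wakes a worker through epoll, while the tasks themselves stay in an in-memory queue, so post() never blocks on a kernel FIFO.

#include <sys/eventfd.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <cstdint>
#include <functional>
#include <mutex>
#include <queue>

int efd;
std::mutex qMutex;
std::queue<std::function<void(void)>> tasks;

int setup() {
    efd = ::eventfd(0, EFD_SEMAPHORE);       // each read consumes one "ticket"
    int epollFd = ::epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = efd;
    ::epoll_ctl(epollFd, EPOLL_CTL_ADD, efd, &ev);
    return epollFd;
}

void post(std::function<void(void)> task) {
    {
        std::lock_guard<std::mutex> lock(qMutex);
        tasks.push(std::move(task));
    }
    uint64_t one = 1;
    ::write(efd, &one, sizeof(one));          // bump the counter, wake one worker
}

void workerLoop(int epollFd) {
    epoll_event ev{};
    for (;;) {
        if (::epoll_wait(epollFd, &ev, 1, -1) == 1) {
            uint64_t ticket;
            ::read(efd, &ticket, sizeof(ticket));   // consume exactly one ticket
            std::function<void(void)> task;
            {
                std::lock_guard<std::mutex> lock(qMutex);
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();
        }
    }
}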

Why would an Overlapped call to recv return ERROR_NO_MORE_ITEMS(259)?

I did a few tests with an I/O completion port and Winsock sockets.
I noticed that sometimes, after I receive data from a connection and then immediately call WSARecv again on that socket, it returns right away with error 259 (ERROR_NO_MORE_ITEMS).
I am wondering why the system flags the overlapped transaction with this error instead of keeping the recv call pending/waiting for incoming data.
Do you know what the point of this is?
I would be glad to hear about your thoughts.
Edit: Code
do
{
OVERLAPPED* pOverlapped = nullptr;
DWORD dwBytes = 0; ULONG_PTR ulKey = 0;
//Dequeue a completion packet
if(!m_pIOCP->GetCompletionStatus(&dwBytes, &ulKey, &pOverlapped, INFINITE))
DebugBreak();
//Evaluate
switch(((MYOVERLAPPED*)pOverlapped)->WorkType)
{
case ACCEPT_OVERLAPPED_TYPE:
{
//cast
ACCEPT_OVERLAPPED* pAccept = (ACCEPT_OVERLAPPED*)pOverlapped;
//Associate the newly accepted connection with the IOCP
if(!m_pIOCP->AssociateHandle((HANDLE)(pAccept->pSockClient)->operator SOCKET(), 1))
{
//Association failed: close the socket and delete the overlapped structure
}
//Call recv
RECV_OVERLAPPED* pRecvAction = new RECV_OVERLAPPED;
pRecvAction->pSockClient = pAccept->pSockClient;
short s = (pRecvAction->pSockClient)->Recv(pRecvAction->strBuf, pRecvAction->pWSABuf, 10, pRecvAction);
if(s == Inc::REMOTECONNECTION_CLOSED)
{
//Error stuff
}
//Call accept again (create a new ACCEPT_OVERLAPPED to ensure overlapped being zeroed out)
ACCEPT_OVERLAPPED *pNewAccept = new ACCEPT_OVERLAPPED;
pNewAccept->pSockListen = pAccept->pSockListen;
pNewAccept->pSockClient = new Inc::CSocket((pNewAccept->pSockListen)->Accept(nullptr, nullptr, pNewAccept));
//delete the old overlapped struct
delete pAccept;
}
break;
case RECV_OVERLAPPED_TYPE:
{
RECV_OVERLAPPED* pOldRecvAction = (RECV_OVERLAPPED*)pOverlapped;
if(!pOldRecvAction->InternalHigh)
{
//Connection has been closed: delete the socket(implicitly closes the socket)
Inc::CSocket::freewsabuf(pOldRecvAction->pWSABuf); //free the wsabuf
delete pOldRecvAction->pSockClient;
}
else
{
//Call recv again (create a new RECV_OVERLAPPED)
RECV_OVERLAPPED* pNewRecvAction = new RECV_OVERLAPPED;
pNewRecvAction->pSockClient = pOldRecvAction->pSockClient;
short sRet2 = (pNewRecvAction->pSockClient)->Recv(pNewRecvAction->strBuf, pNewRecvAction->pWSABuf, 10, pNewRecvAction);
//Free the old wsabuf
Inc::CSocket::freewsabuf(pOldRecvAction->pWSABuf);
delete pOldRecvAction;
}
Error checking omitted for brevity...
The Recv member function is a simple wrapper around the WSARecv call; it creates the WSABUF and the receive buffer itself (which needs to be cleaned up by the caller via freewsabuf, just to mention)...
It looks like I was sending less data than the receiving side had requested.
But since it's an overlapped operation, receiving only a small chunk of the requested amount over the TCP connection triggers the completion indication with the error ERROR_NO_MORE_ITEMS, meaning there was nothing more to receive beyond what it already had.

How to pass user-defined data to a worker thread using IOCP?

Hey... I created a small test server using I/O completion ports and winsock.
I can successfully connect and associate a socket handle with the completion port.
But I don't know how to pass user-defined data structures into the worker thread...
What I've tried so far was passing a user structure as (ULONG_PTR)&structure as the completion key in the association call of CreateIoCompletionPort().
But that did not work.
Now I tried defining my own OVERLAPPED structure and using CONTAINING_RECORD() as described here http://msdn.microsoft.com/en-us/magazine/cc302334.aspx and http://msdn.microsoft.com/en-us/magazine/bb985148.aspx.
But that does not work either (I get garbage values for the contents of pHelper).
So my question is: how can I pass data to the worker thread using WSARecv(), GetQueuedCompletionStatus(), and the completion packet or the OVERLAPPED structure?
EDIT: How can I successfully transmit per-connection data? It seems I got the way of doing it (as explained in the two links above) wrong.
Here is my code (yes, it's ugly and it's only test code):
struct helper
{
SOCKET m_sock;
unsigned int m_key;
OVERLAPPED over;
};
///////
SOCKET newSock = INVALID_SOCKET;
WSABUF wsabuffer;
char cbuf[250];
wsabuffer.buf = cbuf;
wsabuffer.len = 250;
DWORD flags, bytesrecvd;
while(true)
{
newSock = accept(AcceptorSock, NULL, NULL);
if(newSock == INVALID_SOCKET)
ErrorAbort("could not accept a connection");
//associate socket with the CP
if(CreateIoCompletionPort((HANDLE)newSock, hCompletionPort, 3,0) != hCompletionPort)
ErrorAbort("Wrong port associated with the connection");
else
cout << "New Connection made and associated\n";
helper* pHelper = new helper;
pHelper->m_key = 3;
pHelper->m_sock = newSock;
memset(&(pHelper->over), 0, sizeof(OVERLAPPED));
flags = 0;
bytesrecvd = 0;
if(WSARecv(newSock, &wsabuffer, 1, NULL, &flags, (OVERLAPPED*)pHelper, NULL) != 0)
{
if(WSAGetLastError() != WSA_IO_PENDING)
ErrorAbort("WSARecv didnt work");
}
}
//Cleanup
CloseHandle(hCompletionPort);
cin.get();
return 0;
}
DWORD WINAPI ThreadProc(HANDLE h)
{
DWORD dwNumberOfBytes = 0;
OVERLAPPED* pOver = nullptr;
helper* pHelper = nullptr;
WSABUF RecvBuf;
char cBuffer[250];
RecvBuf.buf = cBuffer;
RecvBuf.len = 250;
DWORD dwRecvBytes = 0;
DWORD dwFlags = 0;
ULONG_PTR Key = 0;
GetQueuedCompletionStatus(h, &dwNumberOfBytes, &Key, &pOver, INFINITE);
//Extract helper
pHelper = (helper*)CONTAINING_RECORD(pOver, helper, over);
cout << "Received Overlapped item" << endl;
if(WSARecv(pHelper->m_sock, &RecvBuf, 1, &dwRecvBytes, &dwFlags, pOver, NULL) != 0)
cout << "Could not receive data\n";
else
cout << "Data Received: " << RecvBuf.buf << endl;
ExitThread(0);
}
If you pass your struct like this it should work just fine:
helper* pHelper = new helper;
CreateIoCompletionPort((HANDLE)newSock, hCompletionPort, (ULONG_PTR)pHelper,0);
...
helper* pHelper=NULL;
GetQueuedCompletionStatus(h, &dwNumberOfBytes, (PULONG_PTR)&pHelper, &pOver, INFINITE);
Edit to add per-IO data:
One of the frequently abused features of the asynchronous APIs is that they don't copy the OVERLAPPED struct; they simply use the one provided. Hence the overlapped struct returned from GetQueuedCompletionStatus points to the originally provided struct. So:
struct helper {
OVERLAPPED m_over;
SOCKET m_socket;
UINT m_key;
};
if(WSARecv(newSock, &wsabuffer, 1, NULL, &flags, &pHelper->m_over, NULL) != 0)
Notice that, again, in your original sample, you were getting your casting wrong. (OVERLAPPED*)pHelper was passing a pointer to the START of the helper struct, but the OVERLAPPED part was declared last. I changed it to pass the address of the actual overlapped part, which means that the code compiles without a cast, which lets us know we are doing the correct thing. I also moved the overlapped struct to be the first member of the struct.
To catch the data on the other side:
OVERLAPPED* pOver;
ULONG_PTR key;
if(GetQueuedCompletionStatus(h,&dw,&key,&pOver,INFINITE))
{
// c cast
helper* pConnData = (helper*)pOver;
On this side it is particularly important that the overlapped struct is the first member of the helper struct, as that makes it easy to cast back from the OVERLAPPED* the API gives us to the helper* we actually want.
You can send special-purpose data of your own to the completion port via PostQueuedCompletionStatus.
The I/O completion packet will satisfy an outstanding call to the GetQueuedCompletionStatus function. This function returns with the three values passed as the second, third, and fourth parameters of the call to PostQueuedCompletionStatus. The system does not use or validate these values. In particular, the lpOverlapped parameter need not point to an OVERLAPPED structure.
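A rough sketch of how that can look in practice (the WORK_ITEM struct, the postWork helper, and the zero completion key are made up for illustration):

#include <windows.h>

// Sketch only: posting an application-defined work item to the completion port.
struct WORK_ITEM {
    OVERLAPPED ov;     // first member, so the worker can cast OVERLAPPED* back
    int        command;
    void*      payload;
};

void postWork(HANDLE iocp, int command, void* payload) {
    auto* item = new WORK_ITEM{};
    item->command = command;
    item->payload = payload;
    // Bytes-transferred and key are handed to the worker exactly as given here;
    // the worker can use the key (or a tag inside WORK_ITEM) to tell posted
    // items apart from genuine socket completions.
    PostQueuedCompletionStatus(iocp, 0, /*completion key*/ 0, &item->ov);
}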
I use the standard socket routines (socket, closesocket, bind, accept, connect ...) for creating/destroying sockets, and ReadFile/WriteFile for I/O, as they allow use of the OVERLAPPED structure.
After your socket has accepted or connected, you should associate it with the session context that it services. Then you associate your socket with an IOCP and (in the third parameter) provide it with a reference to the session context. The IOCP does not know what this reference is and doesn't care, for that matter. The reference is for YOUR use, so that when you get an I/O completion through GetQueuedCompletionStatus, the variable pointed to by parameter 3 is filled in with the reference and you can immediately find the context associated with the socket event and begin servicing it. I usually use an indexed structure containing (among other things) the socket declaration, the overlapped structure, and other session-specific data. The reference I pass to CreateIoCompletionPort in parameter 3 is the index of the structure member containing the socket.
You need to check whether GetQueuedCompletionStatus returned a completion or a timeout. On a timeout you can run through your indexed structure and see (for example) whether one of the sessions has timed out or something else, and take appropriate housekeeping actions.
The overlapped structure also needs to be checked to see that the I/O completed correctly.
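A small sketch of those two checks, where hIOCP stands in for your completion port handle and the 5-second timeout is just an example:

DWORD bytes = 0;
ULONG_PTR key = 0;
OVERLAPPED* pOver = nullptr;
BOOL ok = GetQueuedCompletionStatus(hIOCP, &bytes, &key, &pOver, 5000 /*ms*/);
if (!ok && pOver == nullptr) {
    // Nothing was dequeued: a timeout (GetLastError() == WAIT_TIMEOUT) or a
    // bad port handle. Do the periodic housekeeping here.
} else if (!ok) {
    // A completion was dequeued but the underlying I/O failed;
    // GetLastError() holds the error, key/pOver identify the operation.
} else {
    // Successful completion; on a TCP recv, bytes == 0 means the peer closed.
}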
The function servicing the IOCP should be a separate, multi-threaded entity. Use the same number of threads as you have cores in your system, or at least no more than that, since anything more wastes system resources (you don't have more resources for servicing events than the number of cores in your system, right?).
IOCPs really are the best of all worlds (too good to be true), and anyone who says "one thread per socket" or "wait on a multiple-socket list in one function" doesn't know what they are talking about. The former stresses your scheduler, and the latter is polling, and polling is ALWAYS extremely wasteful.

Chat server design of the "main" loop

I am writing a small TCP chat server, but I am encountering some problems I can't figure out how to solve "elegantly".
Below is the code for my main loop. It does the following:
1. Initializes a vector with the basic event, which is flagged when a new TCP connection is made.
2. Gets this connection and pushes it into a vector, too. Then, with the socket, it creates a CSingleConnection object and passes the socket into it.
2.1. Gets the event from the CSingleConnection, which is flagged when the connection receives data...
3. When data is received, the wait is fulfilled and returns the index of the handle in the array... with all those other vectors it seems I can determine which connection is sending now...
But as everybody can see, this approach is really poor... I can't figure out how to do all this better: getting the connection socket, creating a single connection, and so on...
Any suggestions, improvements, etc?...
void CServer::MainLoop()
{
DWORD dwResult = 0;
bool bMainLoop = true;
std::vector<std::string> vecData;
std::vector<HANDLE> vecEvents; //Contains the handles to wait on
std::vector<SOCKET> vecSocks; //contains the sockets
enum
{
ACCEPTOR = 0, //First element: sequence is mandatory
EVENTSIZE //Keep as the last element!
};
//initiate the vector with the basic handles
vecEvents.clear();
GetBasicEvents(vecEvents);
while(bMainLoop)
{
//wait for event handle(s)
dwResult = WaitForMultipleObjects((DWORD)vecEvents.size(), &vecEvents[0], FALSE, INFINITE); // FALSE: wake on any single handle
//New connection(s) made
if(dwResult == (int)ACCEPTOR)
{
//Get the sockets for the new connections
m_pAcceptor->GetOutData(vecSocks);
//Create new connections
for(unsigned int i = 0; i < vecSocks.size(); i++)
{
//Add a new connection
CClientConnection Conn(vecSocks[i]);
m_vecConnections.push_back(Conn);
//Add event
vecEvents.push_back(Conn.GetOutEvent());
}
}
//Data from one of the connections
if(dwResult >= (int)EVENTSIZE)
{
Inc::MSG Msg;
//get received string data
m_vecConnections[dwResult].GetOutData(vecData);
//handle the data
for(unsigned int i = 0; i < vecData.size(); i++)
{
//convert data into message
if(Inc::StringToMessage(vecData[i], Msg) != Inc::SOK)
continue;
//Add the socket to the sender information
Msg.Sender.sock = vecSocks[dwResult];
//Evaluate and delegate data and task
EvaluateMessage(Msg);
}
}
}
}
Do not reinvent the wheel; use Boost.Asio. It is well optimized, taking advantage of kernel-specific features on different operating systems, and it is designed in a way that keeps the client code architecture simple. There is plenty of documentation and there are many examples, so it is hard to get it wrong.
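To give a feel for what that looks like, here is a heavily trimmed, untested sketch of an Asio-style accept/read/write loop (class names and the port are made up; a real chat server would also keep a set of sessions to broadcast to, as the official chat example does):

#include <boost/asio.hpp>
#include <memory>
using boost::asio::ip::tcp;

// Sketch only: one session object per connection; all callbacks run on the
// io_context, so no manual event arrays or socket vectors are needed.
class Session : public std::enable_shared_from_this<Session> {
public:
    explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { doRead(); }
private:
    void doRead() {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(data_),
            [this, self](boost::system::error_code ec, std::size_t n) {
                if (ec) return;            // connection closed or failed
                // A chat server would hand the message to all sessions here
                doWrite(n);
            });
    }
    void doWrite(std::size_t n) {
        auto self = shared_from_this();
        boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
            [this, self](boost::system::error_code ec, std::size_t) {
                if (!ec) doRead();
            });
    }
    tcp::socket socket_;
    char data_[1024];
};

class Server {
public:
    Server(boost::asio::io_context& io, unsigned short port)
        : acceptor_(io, tcp::endpoint(tcp::v4(), port)) { doAccept(); }
private:
    void doAccept() {
        acceptor_.async_accept(
            [this](boost::system::error_code ec, tcp::socket socket) {
                if (!ec)
                    std::make_shared<Session>(std::move(socket))->start();
                doAccept();                // keep accepting new connections
            });
    }
    tcp::acceptor acceptor_;
};

int main() {
    boost::asio::io_context io;
    Server server(io, 12345);
    io.run();                              // single-threaded event loop
}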