Winsock2 select() returns WSAEINVAL (error 10022) - c++

I have the given code:
#include <winsock2.h>
#include <sys/time.h>
#include <iostream>

int main()
{
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)
    {
        std::cout << "WSA Initialization failed!" << std::endl;
        WSACleanup();
    }

    timeval time;
    time.tv_sec = 1;
    time.tv_usec = 0;

    int retval = select(0, NULL, NULL, NULL, &time);
    if (retval == SOCKET_ERROR)
    {
        std::cout << WSAGetLastError() << std::endl;
    }
    return 0;
}
It prints 10022, which means error WSAEINVAL. According to this page, I can get this error only if:
WSAEINVAL: The time-out value is not valid, or all three descriptor parameters were null.
However, I have seen a few examples calling select() without any FD_SETs. Is it possible somehow? I need to do it in client-side code to let the program sleep for short periods while it is not connected to the server.

However, I have seen a few examples calling select() without any
FD_SETs.
It will work in most OS's (that aren't Windows).
Is it possible somehow [under Windows]?
Not directly, but it's easy enough to roll your own wrapper around select() that gives you the behavior you want even under Windows:
int proper_select(int largestFileDescriptorValuePlusOne, fd_set * readFS, fd_set * writeFS, fd_set * exceptFS, struct timeval * timeout)
{
#ifdef _WIN32
   // Note that you *do* need to pass in the correct value
   // for (largestFileDescriptorValuePlusOne) for this wrapper
   // to work; Windows programmers sometimes just pass in a dummy value,
   // because the current Windows implementation of select() ignores the
   // parameter, but that's a portability-killing hack and wrong,
   // so don't do it!
   if ((largestFileDescriptorValuePlusOne <= 0)&&(timeout != NULL))
   {
      // Windows select() will error out on a timeout-only call, so call Sleep() instead.
      Sleep(((timeout->tv_sec*1000000)+timeout->tv_usec)/1000);
      return 0;
   }
#endif

   // in all other cases we just pass through to the normal select() call
   return select(largestFileDescriptorValuePlusOne, readFS, writeFS, exceptFS, timeout);
}
... then just call proper_select() instead of select() and you're golden.
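For instance, the timeout-only "sleep" the question asks about would then look like this (a small sketch; the one-second timeout is just an illustration):

struct timeval tv;
tv.tv_sec  = 1;   // sleep for one second
tv.tv_usec = 0;

// No sockets to watch yet, so all three fd_set pointers are NULL;
// on Windows the wrapper falls back to Sleep(), elsewhere it calls select().
proper_select(0, NULL, NULL, NULL, &tv);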

From the notorious and offensive Winsock 'lame list':
Calling select() with three empty FD_SETs and a valid TIMEOUT structure as a sleezy delay function.
Inexcusably lame.
Note the mis-spelling. The document is worth reading, if you can stand it, just to see the incredible depths hubris can attain. In case they've recanted, or discovered that they didn't invent the Sockets API, you could try it with empty FD sets instead of null parameters, but I don't hold out much hope.
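If you do want to try that experiment, it would look roughly like this (a sketch only; MSDN still documents that every non-null set must contain at least one socket, so expect WSAEINVAL again):

fd_set readfds, writefds, exceptfds;
FD_ZERO(&readfds);
FD_ZERO(&writefds);
FD_ZERO(&exceptfds);

timeval tv;
tv.tv_sec = 1;
tv.tv_usec = 0;

// Non-null but empty sets; Winsock will most likely still return
// SOCKET_ERROR with WSAEINVAL, in which case fall back to Sleep().
int ret = select(0, &readfds, &writefds, &exceptfds, &tv);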

Related

ZeroMQ IPC across several instances of a program

I am having some problems with inter-process communication in ZMQ between several instances of a program.
I am using Linux.
I am using zeromq/cppzmq, the header-only C++ binding for libzmq.
If I run two instances of this application (say on a terminal), I provide one with an argument to be a listener, then providing the other with an argument to be a sender. The listener never receives a message. I have tried TCP and IPC to no avail.
#include <zmq.hpp>
#include <string>
#include <iostream>

int ListenMessage();
int SendMessage(std::string str);

zmq::context_t global_zmq_context(1);

int main(int argc, char* argv[] ) {
    std::string str = "Hello World";
    if (atoi(argv[1]) == 0) ListenMessage();
    else SendMessage(str);

    zmq_ctx_destroy(& global_zmq_context);
    return 0;
}

int SendMessage(std::string str) {
    assert(global_zmq_context);
    std::cout << "Sending \n";
    zmq::socket_t publisher(global_zmq_context, ZMQ_PUB);
    assert(publisher);

    int linger = 0;
    int rc = zmq_setsockopt(publisher, ZMQ_LINGER, &linger, sizeof(linger));
    assert(rc==0);

    rc = zmq_connect(publisher, "tcp://127.0.0.1:4506");
    if (rc == -1) {
        printf ("E: connect failed: %s\n", strerror (errno));
        return -1;
    }

    zmq::message_t message(static_cast<const void*> (str.data()), str.size());
    rc = publisher.send(message);
    if (rc == -1) {
        printf ("E: send failed: %s\n", strerror (errno));
        return -1;
    }
    return 0;
}

int ListenMessage() {
    assert(global_zmq_context);
    std::cout << "Listening \n";
    zmq::socket_t subscriber(global_zmq_context, ZMQ_SUB);
    assert(subscriber);

    int rc = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, "", 0);
    assert(rc==0);
    int linger = 0;
    rc = zmq_setsockopt(subscriber, ZMQ_LINGER, &linger, sizeof(linger));
    assert(rc==0);

    rc = zmq_bind(subscriber, "tcp://127.0.0.1:4506");
    if (rc == -1) {
        printf ("E: bind failed: %s\n", strerror (errno));
        return -1;
    }

    std::vector<zmq::pollitem_t> p = {{subscriber, 0, ZMQ_POLLIN, 0}};
    while (true) {
        zmq::message_t rx_msg;
        // when timeout (the third argument here) is -1,
        // then block until ready to receive
        std::cout << "Still Listening before poll \n";
        zmq::poll(p.data(), 1, -1);
        std::cout << "Found an item \n"; // not reaching
        if (p[0].revents & ZMQ_POLLIN) {
            // received something on the first (only) socket
            subscriber.recv(&rx_msg);
            std::string rx_str;
            rx_str.assign(static_cast<char *>(rx_msg.data()), rx_msg.size());
            std::cout << "Received: " << rx_str << std::endl;
        }
    }
    return 0;
}
This code will work if I run one instance of the program with two threads:
std::thread t_sub(ListenMessage);
sleep(1); // Slow joiner in ZMQ PUB/SUB pattern
std::thread t_pub(SendMessage, str);
t_pub.join();
t_sub.join();
But I am wondering why when running two instances of the program the code above won't work?
Thanks for your help!
In case one has never worked with ZeroMQ, one may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
Q : wondering why when running two instances of the program the code above won't work?
This code will never fly - and it has nothing to do with thread-based or process-based [CONCURRENT] processing.
It is caused by a wrong design of the Inter-Process Communication.
ZeroMQ can provide for this either one of the supported transport-classes: { ipc:// | tipc:// | tcp:// | norm:// | pgm:// | epgm:// | vmci:// } plus an even smarter one for in-process comms, the inproc:// transport-class, ready for inter-thread comms, where a stack-less communication may enjoy the lowest-ever latency, being just a memory-mapped policy.
The selection of an L3/L2-based networking stack for an Inter-Process Communication is possible, yet sort of the most "expensive" option.
The Core Mistake :
Given that choice, any single process ( not speaking about a pair of processes ) will collide on an attempt to .bind() its AccessPoint onto the very same TCP/IP address:port#.
The Other Mistake :
Even for the sake of a solo programme launched, both of the spawned threads attempt to .bind() their private AccessPoints, yet neither attempts to .connect() to a matching "opposite" AccessPoint.
At least one has to successfully .bind(), and
at least one has to successfully .connect(), so as to get a "channel", here of the PUB/SUB Archetype.
ToDo:
decide about a proper, right-enough Transport-Class ( best avoid an overkill to operate the full L3/L2-stack for localhost/in-process IPC )
refactor the Address:port# management ( for 2+ processes not to fail on .bind()-(s) to the same ( hard-wired ) address:port# ), as sketched below
always detect and handle appropriately the returned {PASS|FAIL}-s from API calls
always set LINGER to zero explicitly ( you never know )
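A minimal sketch of that checklist (my own illustration, not the asker's code; the ipc:// endpoint name is an assumption, and the "listener" and "sender" halves would of course live in separate processes):

// Listener process: owns the well-known address and .bind()-s it exactly once.
zmq::context_t ctx(1);
zmq::socket_t  sub(ctx, ZMQ_SUB);

int linger = 0;
int rc = zmq_setsockopt(sub, ZMQ_LINGER, &linger, sizeof(linger));  // always set LINGER explicitly
assert(rc == 0);
rc = zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);
assert(rc == 0);

rc = zmq_bind(sub, "ipc:///tmp/demo_pubsub");   // ipc:// avoids the full TCP stack for localhost IPC
assert(rc == 0);                                // always check the returned {PASS|FAIL}

// Sender process(es): any number of them may .connect() to that single bound address.
zmq::context_t ctx2(1);
zmq::socket_t  pub(ctx2, ZMQ_PUB);
rc = zmq_setsockopt(pub, ZMQ_LINGER, &linger, sizeof(linger));
assert(rc == 0);
rc = zmq_connect(pub, "ipc:///tmp/demo_pubsub");
assert(rc == 0);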

how to wakeup select() within timeout from another thread

According to the "man select" information:
"On success, select() and pselect() return the number of file descrip‐
tors contained in the three returned descriptor sets which may be zero
if the timeout expires before anything interesting happens. On error,
-1 is returned, and errno is set appropriately; the sets and timeout become
undefined, so do not rely on their contents after an error."
select() will wake up because of:
1) read/write availability
2) a select error
3) the descriptor is closed
However, how can we wake up the select() from another thread if there is no data available and the select is still within timeout?
[update]
Pseudo Code
// Thread blocks on select
void *SocketReadThread(void *param){
    ...
    while(!((ReadThread*)param)->ExitThread()) {
        struct timeval timeout;
        timeout.tv_sec = 60; // one minute
        timeout.tv_usec = 0;

        fd_set rds;
        FD_ZERO(&rds);
        FD_SET(sockfd, &rds);

        //actually, the first parameter of select() is
        //ignored on windows, though on linux this parameter
        //should be (maximum socket value + 1)
        int ret = select(sockfd + 1, &rds, NULL, NULL, &timeout);
        //handle the result
        //might break from here
    }
    return NULL;
}

//main thread
int main(){
    //create the SocketReadThread
    ReaderThread* rthread = new ReaderThread;
    pthread_create(&pthreadid, NULL, SocketReadThread, (void*)rthread);

    // do lots of things here
    ............................

    //now main thread wants to exit SocketReadThread
    //it sets the internal state of ReadThread as true
    rthread->SetExitFlag(true);

    //but how to wake up select ??????????????????
    //if SocketReadThread currently blocks on select
}
[UPDATE]
1) @trojanfoe provides a method to achieve this; his method writes socket data (maybe dirty data or an exit message) to wake up select. I am going to run a test and update the result here.
2) Another thing to mention: closing a socket doesn't guarantee to wake up a select call, please see this post.
[UPDATE2]
After doing many tests, here are some facts about waking up select:
1) If the socket watched by select is closed by another application, then the select() call wakes up immediately. Thereafter, reading from or writing to the socket returns 0 with errno = 0.
2) If the socket watched by select is closed by another thread of the same application, then select() won't wake up until the timeout if there is no data to read or write. After select times out, a read/write operation results in an error with errno = EBADF (because the socket has been closed by another thread during the timeout period).
I use an event object based on pipe():
IoEvent.h:
#pragma once
class IoEvent {
protected:
    int m_pipe[2];
    bool m_ownsFDs;

public:
    IoEvent();           // Creates a user event
    IoEvent(int fd);     // Creates a file event
    IoEvent(const IoEvent &other);
    virtual ~IoEvent();

    /**
     * Set the event to signalled state.
     */
    void set();

    /**
     * Reset the event from signalled state.
     */
    void reset();

    inline int fd() const {
        return m_pipe[0];
    }
};
IoEvent.cpp:
#include "IoEvent.h"
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
using namespace std;
IoEvent::IoEvent() :
    m_ownsFDs(true) {
    if (pipe(m_pipe) < 0)
        throw MyException("Failed to create pipe: %s (%d)", strerror(errno), errno);
    if (fcntl(m_pipe[0], F_SETFL, O_NONBLOCK) < 0)
        throw MyException("Failed to set pipe non-blocking mode: %s (%d)", strerror(errno), errno);
}

IoEvent::IoEvent(int fd) :
    m_ownsFDs(false) {
    m_pipe[0] = fd;
    m_pipe[1] = -1;
}

IoEvent::IoEvent(const IoEvent &other) {
    m_pipe[0] = other.m_pipe[0];
    m_pipe[1] = other.m_pipe[1];
    m_ownsFDs = false;
}

IoEvent::~IoEvent() {
    if (m_pipe[0] >= 0) {
        if (m_ownsFDs)
            close(m_pipe[0]);
        m_pipe[0] = -1;
    }
    if (m_pipe[1] >= 0) {
        if (m_ownsFDs)
            close(m_pipe[1]);
        m_pipe[1] = -1;
    }
}

void IoEvent::set() {
    if (m_ownsFDs)
        write(m_pipe[1], "x", 1);
}

void IoEvent::reset() {
    if (m_ownsFDs) {
        uint8_t buf;
        while (read(m_pipe[0], &buf, 1) == 1)
            ;
    }
}
You could ditch the m_ownsFDs member; I'm not even sure I use that any more.
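To tie it back to the question: the reader thread adds the event's fd to its read set next to the socket, and the main thread calls set() to wake the select() immediately (a minimal sketch; sockfd, timeout and exitEvent are illustrative names, not part of the class above):

// Reader thread: watch both the socket and the wake-up pipe.
IoEvent exitEvent;   // created by / shared with the main thread

fd_set rds;
FD_ZERO(&rds);
FD_SET(sockfd, &rds);
FD_SET(exitEvent.fd(), &rds);
int maxfd = (sockfd > exitEvent.fd() ? sockfd : exitEvent.fd()) + 1;

int ret = select(maxfd, &rds, NULL, NULL, &timeout);
if (ret > 0 && FD_ISSET(exitEvent.fd(), &rds)) {
    exitEvent.reset();   // drain the pipe
    // main thread asked us to exit; break out of the loop here
}

// Main thread: wake the reader immediately instead of waiting for the timeout.
exitEvent.set();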

select() behaviour for writeability?

I have an fd_set "write_set" which contains sockets that I want to use in a send(...) call. When I call select(maxsockfd+1, NULL, &write_set, NULL, &tv) it always returns 0 (timeout), although I haven't sent anything over the sockets in the write_set yet and it should be possible to send data.
Why is this? Shouldn't select return instantly when it's possible to send data over the sockets in write_set?
Thanks!
Edit: My code..
// _read_set and _write_set are the master sets
fd_set read_set = _read_set;
fd_set write_set = _write_set;

// added this for testing, the socket is a member of RemoteChannelConnector.
std::list<RemoteChannelConnector*>::iterator iter;
for (iter = _acceptingConnectorList->begin(); iter != _acceptingConnectorList->end(); iter++) {
    if(FD_ISSET((*iter)->getSocket(), &write_set)) {
        char* buf = "a";
        int ret;
        if ((ret = send((*iter)->getSocket(), buf, 1, NULL)) == -1) {
            std::cout << "error." << std::endl;
        } else {
            std::cout << "success." << std::endl;
        }
    }
}

struct timeval tv;
tv.tv_sec = 10;
tv.tv_usec = 0;

int status;
if ((status = select(_maxsockfd, &read_set, &write_set, NULL, &tv)) == -1) {
    // Terminate process on error.
    exit(1);
} else if (status == 0) {
    // Terminate process on timeout.
    exit(1);
} else {
    // call send/receive
}
When I run it with the code for testing if my socket is actually in the write_set and if it is possible to send data over the socket, I get a "success"...
I don't believe that you're allowed to copy-construct fd_set objects. The only guaranteed way is to completely rebuild the set using FD_SET before each call to select. Also, you're writing to the list of sockets to be selected on, before ever calling select. That doesn't make sense.
Can you use poll instead? It's a much friendlier API.
Your code is very confused. First, you don't seem to be setting any of the bits in the fd_set. Secondly, you test the bits before you even call select.
Here is how the flow generally works...
Use FD_ZERO to zero out your set.
Go through, and for each file descriptor you're interested in the writeable state of, use FD_SET to set it.
Call select, passing it the address of the fd_set you've been calling the FD_SET function on for the write set and observe the return value.
If the return value is > 0, then go through the write set and use FD_ISSET to figure out which ones are still set. Those are the ones that are writeable.
Your code does not at all appear to be following this pattern. Also, the important task of setting up the master set isn't being shown.
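For reference, the pattern above might look roughly like this (a minimal sketch; the sockets container and the ten-second timeout are assumptions, and error handling is trimmed):

fd_set write_set;
FD_ZERO(&write_set);

int maxfd = -1;
for (int s : sockets) {            // sockets: the descriptors you care about (assumed)
    FD_SET(s, &write_set);
    if (s > maxfd) maxfd = s;
}

struct timeval tv;
tv.tv_sec = 10;
tv.tv_usec = 0;

int status = select(maxfd + 1, NULL, &write_set, NULL, &tv);
if (status > 0) {
    for (int s : sockets) {
        if (FD_ISSET(s, &write_set)) {
            // this socket is writeable now; it is safe to call send() on it
        }
    }
}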

How to pass user-defined data to a worker thread using IOCP?

Hey... I created a small test server using I/O completion ports and winsock.
I can successfully connect and associate a socket handle with the completion port.
But I don't know how to pass user-defined data structures to the worker thread...
What I've tried so far was passing a user structure as (ULONG_PTR)&structure as the Completion Key in the association call of CreateIoCompletionPort().
But that did not work.
Now I tried defining my own OVERLAPPED structure and using CONTAINING_RECORD() as described here http://msdn.microsoft.com/en-us/magazine/cc302334.aspx and http://msdn.microsoft.com/en-us/magazine/bb985148.aspx.
But that does not work either. (I get freaky values for the contents of pHelper)
So my question is: How can I pass data to the worker thread using WSARecv(), GetQueuedCompletionStatus(), the completion packet or the OVERLAPPED structure?
EDIT: How can I successfully transmit "per-connection data"? It seems like I got the art of doing it (as explained in the two links above) wrong.
Here goes my code: (Yes, it's ugly and it's only TEST code)
struct helper
{
    SOCKET m_sock;
    unsigned int m_key;
    OVERLAPPED over;
};

///////
SOCKET newSock = INVALID_SOCKET;
WSABUF wsabuffer;
char cbuf[250];
wsabuffer.buf = cbuf;
wsabuffer.len = 250;
DWORD flags, bytesrecvd;

while(true)
{
    newSock = accept(AcceptorSock, NULL, NULL);
    if(newSock == INVALID_SOCKET)
        ErrorAbort("could not accept a connection");

    //associate socket with the CP
    if(CreateIoCompletionPort((HANDLE)newSock, hCompletionPort, 3, 0) != hCompletionPort)
        ErrorAbort("Wrong port associated with the connection");
    else
        cout << "New Connection made and associated\n";

    helper* pHelper = new helper;
    pHelper->m_key = 3;
    pHelper->m_sock = newSock;
    memset(&(pHelper->over), 0, sizeof(OVERLAPPED));

    flags = 0;
    bytesrecvd = 0;
    if(WSARecv(newSock, &wsabuffer, 1, NULL, &flags, (OVERLAPPED*)pHelper, NULL) != 0)
    {
        if(WSAGetLastError() != WSA_IO_PENDING)
            ErrorAbort("WSARecv didnt work");
    }
}

//Cleanup
CloseHandle(hCompletionPort);
cin.get();
return 0;
}

DWORD WINAPI ThreadProc(HANDLE h)
{
    DWORD dwNumberOfBytes = 0;
    OVERLAPPED* pOver = nullptr;
    helper* pHelper = nullptr;
    WSABUF RecvBuf;
    char cBuffer[250];
    RecvBuf.buf = cBuffer;
    RecvBuf.len = 250;
    DWORD dwRecvBytes = 0;
    DWORD dwFlags = 0;
    ULONG_PTR Key = 0;

    GetQueuedCompletionStatus(h, &dwNumberOfBytes, &Key, &pOver, INFINITE);

    //Extract helper
    pHelper = (helper*)CONTAINING_RECORD(pOver, helper, over);
    cout << "Received Overlapped item" << endl;

    if(WSARecv(pHelper->m_sock, &RecvBuf, 1, &dwRecvBytes, &dwFlags, pOver, NULL) != 0)
        cout << "Could not receive data\n";
    else
        cout << "Data Received: " << RecvBuf.buf << endl;

    ExitThread(0);
}
If you pass your struct like this it should work just fine:
helper* pHelper = new helper;
CreateIoCompletionPort((HANDLE)newSock, hCompletionPort, (ULONG_PTR)pHelper,0);
...
helper* pHelper=NULL;
GetQueuedCompletionStatus(h, &dwNumberOfBytes, (PULONG_PTR)&pHelper, &pOver, INFINITE);
Edit to add per IO data:
One of the frequently abused features of the asynchronous APIs is that they don't copy the OVERLAPPED struct; they simply use the one you provide - hence the OVERLAPPED struct returned from GetQueuedCompletionStatus points to the originally provided struct. So:
struct helper {
    OVERLAPPED m_over;
    SOCKET m_socket;
    UINT m_key;
};

if(WSARecv(newSock, &wsabuffer, 1, NULL, &flags, &pHelper->m_over, NULL) != 0)
Notice that, again, in your original sample, you were getting your casting wrong. (OVERLAPPED*)pHelper was passing a pointer to the START of the helper struct, but the OVERLAPPED part was declared last. I changed it to pass the address of the actual overlapped part, which means that the code compiles without a cast, which lets us know we are doing the correct thing. I also moved the overlapped struct to be the first member of the struct.
To catch the data on the other side:
OVERLAPPED* pOver;
ULONG_PTR key;
if(GetQueuedCompletionStatus(h, &dw, &key, &pOver, INFINITE))
{
    // c cast
    helper* pConnData = (helper*)pOver;
On this side it is particularly important that the overlapped struct is the first member of the helper struct, as that makes it easy to cast back from the OVERLAPPED* the api gives us, and the helper* we actually want.
You can send special-purpose data of your own to the completion port via PostQueuedCompletionStatus.
The I/O completion packet will satisfy an outstanding call to the GetQueuedCompletionStatus function. This function returns with the three values passed as the second, third, and fourth parameters of the call to PostQueuedCompletionStatus. The system does not use or validate these values. In particular, the lpOverlapped parameter need not point to an OVERLAPPED structure.
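For instance, a custom wake-up or shutdown message could be posted like this (a minimal sketch; the zero completion key and null OVERLAPPED are just one possible convention for the worker to check, not part of the original answer):

// Producer side: post a packet carrying our own values.
// dwNumberOfBytesTransferred, CompletionKey and lpOverlapped are
// delivered verbatim to whichever thread dequeues the packet.
PostQueuedCompletionStatus(hCompletionPort, 0, (ULONG_PTR)0, NULL);

// Worker side: treat that convention as a "stop" signal.
DWORD bytes = 0;
ULONG_PTR key = 0;
OVERLAPPED* pOver = NULL;
if (GetQueuedCompletionStatus(hCompletionPort, &bytes, &key, &pOver, INFINITE))
{
    if (key == 0 && pOver == NULL)
    {
        // our special-purpose packet, not a real socket completion
    }
}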
I use the standard socket routines (socket, closesocket, bind, accept, connect ...) for creating/destroying and ReadFile/WriteFile for I/O as they allow use of the OVERLAPPED structure.
After your socket has accepted or connected you should associate it with the session context that it services. Then you associate your socket to an IOCP and (in the third parameter) provide it with a reference to the session context. The IOCP does not know what this reference is and doesn't care either for that matter. The reference is for YOUR use so that when you get an IOC through GetQueuedCompletionStatus the variable pointed to by parameter 3 will be filled in with the reference so that you immediately find the context associated with the socket event and can begin servicing the event. I usually use an indexed structure containing (among other things) the socket declaration, the overlapped structure as well as other session-specific data. The reference I pass to CreateIoCompletionPort in parameter 3 will be the index to the structure member containing the socket.
You need to check if GetQueuedCompletionStatus returned a completion or a timeout. With a timeout you can run through your indexed structure and see (for example) if one of them has timed out or something else and take appropriate house-keeping actions.
The overlapped structure also needs to be checked to see that the I/O completed correctly.
The function servicing the IOCP should be a separate, multi-threaded entity. Use the same number of threads that you have cores in your system, or at least no more than that as it wastes system resources (you don't have more resources for servicing the event than the number of cores in your system, right?).
IOCPs really are the best of all worlds (too good to be true) and anyone who says "one thread per socket" or "wait on a multiple-socket list in one function" doesn't know what they are talking about. The former stresses your scheduler and the latter is polling, and polling is ALWAYS extremely wasteful.

Multithreading won't work as expected

I have a problem with my program. I wanted it to have two threads, one of them listening for connections, and the other one receiving data from them... Unfortunately, it acts strangely. It will ignore my cout and cin usage everywhere in the code, so I can't even debug it. May I ask that someone sheds some light on it? Thank you in advance.
#include <windows.h>
#include <iostream.h>
#include <string.h>
#include <cstdlib>

int ConnectionNum, Port=4673;
WSADATA wsaData;
SOCKET Connections[256];

DWORD WINAPI ReceiveThread(LPVOID iValue)
{
    //this is going to be receiving TCP/IP packets, as soon as the connection works
}

DWORD WINAPI ListenThread(LPVOID iValue) //this thread is supposed to listen for new connections and store them in an array
{
    SOCKET ListeningSocket;
    SOCKET NewConnection;
    SOCKADDR_IN ServerAddr;
    SOCKADDR_IN ClientAddr;
    int ClientAddrLen;

    WSAStartup(MAKEWORD(2,2), &wsaData);
    ListeningSocket=socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    ServerAddr.sin_family=AF_INET;
    ServerAddr.sin_port=htons(Port);
    ServerAddr.sin_addr.s_addr=htonl(INADDR_ANY);
    bind(ListeningSocket, (SOCKADDR*)&ServerAddr, sizeof(ServerAddr));

    if(listen(ListeningSocket, 5)!=0)
    {
        cout << "Could not begin listening for connections.";
        return 0;
    }

    ConnectionNum=0;
    while(ConnectionNum<256)
    {
        Connections[ConnectionNum]=accept(ListeningSocket, (SOCKADDR*)&ClientAddr, &ClientAddrLen);
        ConnectionNum++;
        cout << "New connection.";
    }
}

int main()
{
    HANDLE hThread1,hThread2;
    DWORD dwGenericThread;
    char lszThreadParam[3];

    hThread1=CreateThread(NULL, 0, ListenThread, &lszThreadParam, 0, &dwGenericThread);
    if(hThread1==NULL)
    {
        DWORD dwError=GetLastError();
        cout<<"SCM:Error in Creating thread"<<dwError<<endl;
        return 0;
    }

    hThread2=CreateThread(NULL, 0, ReceiveThread, &lszThreadParam, 0, &dwGenericThread);
    if(hThread2==NULL)
    {
        DWORD dwError=GetLastError();
        cout<<"SCM:Error in Creating thread"<<dwError<<endl;
        return 0;
    }

    system("pause"); //to keep the entire program from ending
}
I don't see any cin calls here. As for the calls to cout, you may have to flush the output, as it is being called in a separate thread. You can do this by simply calling std::endl:
cout << "New connection." << std::endl;
The reason that your cout calls aren't showing up is possibly because you're supplying the wrong parameters to the linker. Are you specifying /SUBSYSTEM:CONSOLE? (System tab of the Linker properties.) If not, you're not telling the operating system to create a console for the program; you may be telling it it's a Windows program, and if your program doesn't have a console then you won't see your program's cout output...
Once you can see your debug...
I assume you are connecting to your test program from a client of some sort? Nothing will happen until you connect to your program which will cause the call to Accept() to return.
By the way, system("pause"); is probably the worst way to achieve what you want, but I assume you're only doing that because you can't get cin to work...
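If you do want to drop system("pause"), one alternative (a sketch of my own, not part of the original answer) is to wait on the thread handles you already have:

// Wait for both worker threads instead of pausing the console.
HANDLE threads[2] = { hThread1, hThread2 };
WaitForMultipleObjects(2, threads, TRUE, INFINITE);

CloseHandle(hThread1);
CloseHandle(hThread2);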