I'm trying to get the FD_CLOSE event (C++) via WSAWaitForMultipleEvents. In the WSAEventSelect call I've registered only FD_CLOSE. The wait returns, and WSAEnumNetworkEvents also returns 0 (success), but NetworkEvents.lNetworkEvents comes back as 0, so I can't see FD_CLOSE in it. (I'm testing by pulling the network cable.)
Any help?
Thanks.
void EventThread(void* obj)
{
    WSANETWORKEVENTS NetworkEvents;
    WSAEVENT EventArray[WSA_MAXIMUM_WAIT_EVENTS];
    DWORD EventTotal = 0;

    EventArray[EventTotal] = WSACreateEvent();
    EventTotal++;

    if (WSAEventSelect(_socket, EventArray[EventTotal - 1], FD_CLOSE) == SOCKET_ERROR)
        Logger::GetInstance() << "WSAEventSelect failed with error " << WSAGetLastError() << endl;

    int index;
    while (true)
    {
        if ((index = WSAWaitForMultipleEvents(EventTotal, EventArray, FALSE, WSA_INFINITE, FALSE)) == WSA_WAIT_FAILED)
        {
            Logger::GetInstance() << "WSAWaitForMultipleEvents failed with error " << WSAGetLastError() << endl;
        }
        if ((index != WSA_WAIT_FAILED) && (index != WSA_WAIT_TIMEOUT))
        {
            // WSAEnumNetworkEvents also resets the event object, so the next wait starts clean.
            if (WSAEnumNetworkEvents(_socket, EventArray[index - WSA_WAIT_EVENT_0], &NetworkEvents) == SOCKET_ERROR)
            {
                Logger::GetInstance() << "WSAEnumNetworkEvents failed with error " << WSAGetLastError() << endl;
                continue;
            }
            if (NetworkEvents.lNetworkEvents & FD_CLOSE)
            {
                if (NetworkEvents.iErrorCode[FD_CLOSE_BIT] != 0)
                {
                    Logger::GetInstance() << "FD_CLOSE failed with error " << NetworkEvents.iErrorCode[FD_CLOSE_BIT] << endl;
                }
                else
                {
                    Logger::GetInstance() << "FD_CLOSE is OK!!! " << NetworkEvents.iErrorCode[FD_CLOSE_BIT] << endl;
                }
            }
        }
    }
}
The WinSock documentation says the following:
The FD_CLOSE message is posted when a close indication is received
for the virtual circuit corresponding to the socket. In TCP terms,
this means that the FD_CLOSE is posted when the connection goes into
the TIME_WAIT or CLOSE_WAIT states. This results from the remote
end performing a shutdown() on the send side or a closesocket().
FD_CLOSE should only be posted after all data is read from a socket,
but an application should check for remaining data upon receipt of
FD_CLOSE to avoid any possibility of losing data.
Be aware that the application will only receive an FD_CLOSE message
to indicate closure of a virtual circuit, and only when all the
received data has been read if this is a graceful close. It will not
receive an FD_READ message to indicate this condition.
...
Here is a summary of events and conditions for each asynchronous
notification message.
...
FD_CLOSE: Only valid on connection-oriented sockets (for example, SOCK_STREAM)
- When WSAAsyncSelect() is called, if the socket connection has been closed.
- After the remote system initiated graceful close, when no data is currently available to receive. (Be aware that, if data has been received and is waiting to be read when the remote system initiates a graceful close, the FD_CLOSE is not delivered until all pending data has been read.)
- After the local system initiates graceful close with shutdown() and the remote system has responded with "End of Data" notification (for example, TCP FIN), when no data is currently available to receive.
- When the remote system terminates the connection (for example, sent TCP RST), and lParam will contain the WSAECONNRESET error value.
Note FD_CLOSE is not posted after closesocket() is called.
Pulling out the network cable does not satisfy any of those conditions. That is by design: networks are built to ride out short outages, so the OS tries to keep existing connections alive through them. Wait a few minutes until the OS times the connection out and see what happens. Also, when you plug the cable back in, the OS will validate pre-existing connections and may or may not reset them at that point.
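If waiting minutes for the OS timeout is too slow for your application, one common mitigation (not from the original answer; a sketch assuming the Winsock SIO_KEEPALIVE_VALS ioctl from <mstcpip.h>) is to shorten the TCP keepalive timers on the socket, so a dead peer is aborted sooner and FD_CLOSE should then be delivered with an error:

#include <winsock2.h>
#include <mstcpip.h> // struct tcp_keepalive, SIO_KEEPALIVE_VALS

// Sketch: probe an idle connection after 10 s of silence, then every 1 s.
// With the cable pulled, the probes go unanswered and the OS aborts the
// connection, which surfaces through the event mechanism as FD_CLOSE
// with an error code.
tcp_keepalive ka;
ka.onoff = 1;                 // enable keepalive on this socket
ka.keepalivetime = 10 * 1000; // ms of idle time before the first probe
ka.keepaliveinterval = 1000;  // ms between unanswered probes
DWORD bytesReturned = 0;
if (WSAIoctl(_socket, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
             NULL, 0, &bytesReturned, NULL, NULL) == SOCKET_ERROR)
    Logger::GetInstance() << "WSAIoctl(SIO_KEEPALIVE_VALS) failed with error " << WSAGetLastError() << endl;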
I'm working on a vision application which has two modes:
1) parameter setting
2) automatic
The problem is in 2), when my app waits for a signal via TCP/IP. The program freezes while accept() is called. I want the GUI to offer the possibility to change the mode; a mode change is announced by another signal (a message_queue). So I want to interrupt the blocking accept().
Is there a simple way to interrupt the accept?
std::cout << "TCPIP " << std::endl;
client = accept(slisten, (struct sockaddr*)&clientinfo, &clientinfolen);
if (client != SOCKET_ERROR)
cout << "client accepted: " << inet_ntoa(clientinfo.sin_addr) << ":"
<< ntohs(clientinfo.sin_port) << endl;
//receive the message from client
//recv returns the number of bytes received!!
//buf contains the data received
int rec = recv(client, buf, sizeof(buf), 0);
cout << "Message: " << rec << " bytes and the message " << buf << endl;
I read about select(), but I have no clue how to use it. Could anybody give me a hint how to implement, for example, select() in my code?
Thanks.
Best regards,
T
The solution is to call accept() only when there is an incoming connection request. You do that by polling the listening socket, where you can also add other file descriptors, use a timeout, etc.
You did not mention your platform. On Linux, see epoll(); on other UNIX systems, poll()/select(); for Windows I don't know.
A general way would be to use a local TCP connection by which the UI thread can interrupt the select call. The general architecture would use:
- a dedicated thread waiting with select on both slisten and the local TCP connection
- a TCP connection (a Unix domain socket on a Unix or Unix-like system, or 127.0.0.1 on Windows) between the UI thread and the waiting one
- various synchronizations/messages between both threads as required
Just tell select to watch both slisten and the local socket, as in the sketch below. It will return as soon as one is ready, and you will be able to tell which one.
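A minimal sketch of that idea, assuming wakeRecv/wakeSend are the two ends of a connected local socket pair created at startup (the names are illustrative, not from the original code):

// Worker thread: wait on both the listening socket and the wake-up socket.
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(slisten, &readfds);
FD_SET(wakeRecv, &readfds);
int maxfd = (int)((slisten > wakeRecv) ? slisten : wakeRecv); // ignored by Winsock
if (select(maxfd + 1, &readfds, NULL, NULL, NULL) > 0) {
    if (FD_ISSET(wakeRecv, &readfds)) {
        char b;
        recv(wakeRecv, &b, 1, 0); // drain the wake-up byte
        // ...handle the mode change requested by the UI thread
    } else if (FD_ISSET(slisten, &readfds)) {
        // a connection request is pending, so this accept() will not block
        client = accept(slisten, (struct sockaddr*)&clientinfo, &clientinfolen);
    }
}

// UI thread: wake the worker by writing a single byte.
char wake = 'x';
send(wakeSend, &wake, 1, 0);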
As you haven't specified your platform, and networking, especially async, is platform-specific, I suppose you need a cross-platform solution. Boost.Asio fits perfectly here: http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/reference/basic_socket_acceptor/async_accept/overload1.html
Example from the link:
void accept_handler(const boost::system::error_code& error)
{
    if (!error)
    {
        // Accept succeeded.
    }
}
...
boost::asio::ip::tcp::acceptor acceptor(io_service);
...
boost::asio::ip::tcp::socket socket(io_service);
acceptor.async_accept(socket, accept_handler);
If Boost is a problem, Asio is also available as a standalone, header-only library that can be used without Boost: http://think-async.com/Asio/AsioAndBoostAsio.
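Note that the completion handler only runs while io_service.run() is executing, so after setting up the async_accept you still need to call io_service.run() (on one or more threads) to pump the events.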
One way would be to run select in a loop with a timeout.
Put slisten into non-blocking mode (this isn't strictly necessary, but sometimes accept blocks even when select says the socket is readable) and then:
fd_set read_fds;
struct timeval timeout;
int select_status;

while (true) {
    // select() modifies both the fd_set and (on Linux) the timeout,
    // so re-initialize them on every iteration.
    FD_ZERO(&read_fds);
    FD_SET(slisten, &read_fds);
    timeout.tv_sec = 1; // 1 s timeout
    timeout.tv_usec = 0;

    select_status = select(slisten + 1, &read_fds, NULL, NULL, &timeout);
    if (select_status == -1) {
        // ERROR: do something
    } else if (select_status > 0) {
        break; // we have a pending connection, we can accept now
    }
    // otherwise (select_status == 0) we timed out; loop and check again
}
client = accept(slisten, ...);
This will allow you to check for signals (such as your mode-change message) once per second. More info here:
http://man7.org/linux/man-pages/man2/select.2.html
and Windows version (pretty much the same):
https://msdn.microsoft.com/pl-pl/library/windows/desktop/ms740141(v=vs.85).aspx
I have a Winsock server accepting packets from a local IP, which currently works without using IOCP. I want it to be non-blocking, though, working through IOCP. Yes, I know about the alternatives (select, WSAAsyncSelect, etc.), but this won't do for developing an MMO server.
So here's the question - how do I do this using std::thread and IOCP?
I already know that GetQueuedCompletionStatus() dequeues completion packets, while PostQueuedCompletionStatus() posts them to the IOCP.
Is this the proper way to do it async though?
How can I treat all clients equally on about 10 threads? I thought about receiving UDP packets and processing them while the IOCP has something in its queue, but packets would be processed at most 10 at a time, and I also have an infinite loop in each thread.
The target is creating a game server, capable of holding thousands of clients at the same time.
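For reference, a minimal sketch of one such worker thread (illustrative names, assuming the usual <windows.h>/<process.h> setup; not from the code below): every thread blocks in GetQueuedCompletionStatus, and the kernel hands each completion packet to exactly one waiting thread, which is what spreads the load across the pool.

unsigned __stdcall workerLoop(void* arg)
{
    HANDLE iocp = static_cast<HANDLE>(arg);
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        LPOVERLAPPED pOverl = NULL; // receives a pointer to the finished operation
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &pOverl, INFINITE);
        if (!ok && pOverl == NULL)
            break; // the port itself failed or was closed: exit the thread
        // Process the completed I/O identified by key/pOverl here, then post
        // the next WSARecvFrom so there is always at least one receive pending.
    }
    return 0;
}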
About the code: netListener() is a class that holds packets received from the listening network interface in a vector. All it does in Receive() is:
WSARecvFrom(sockfd, &buffer, 1, &bytesRecv, &flags, (SOCKADDR*)&senderAddr, &size, &overl, 0);
std::cout << "\n\nReceived " << bytesRecv << " bytes.\n" << "Packet [" << std::string(buffer.buf, bytesRecv) << "]\n";
The code works and the buffer shows what I've sent to myself, but I'm not sure whether having only ONE Receive() will suffice.
About blocking: yes, I realized that putting listener.Receive() into a separate thread doesn't block the main thread. But imagine this: lots of clients try to send packets; can one receive call process them all? Not to mention I was planning to queue an IOCP packet on each receive, but I'm still not sure how to do this properly.
And another question: is it possible to establish a direct connection between one client and another client, for example if you host a server on a local machine behind NAT and you want it to be accessible from the internet?
Threads:
void Host::threadFunc(int i) {
    // No mutex around the loop: GetQueuedCompletionStatus is thread-safe, and
    // serializing the workers with a lock would defeat the thread pool.
    for (;;) {
        if (m_Init) {
            DWORD bytesReceived = 0;     // per-thread locals, not shared members,
            ULONG_PTR completionKey = 0; // to avoid data races between workers
            LPOVERLAPPED pOverl = NULL;  // receives a pointer to the finished operation
            if (GetQueuedCompletionStatus(iocp, &bytesReceived, &completionKey, &pOverl, INFINITE)) {
                std::cout << "1 completion packet dequeued, bytes: " << bytesReceived << std::endl;
            }
        }
    }
}
void Host::createThreads() {
    // Create one worker thread per processor
    for (unsigned int i = 0; i < SystemInfo.dwNumberOfProcessors; ++i) {
        threads.push_back(std::thread(&Host::threadFunc, this, i));
        if (threads[i].joinable()) threads[i].detach();
    }
    std::cout << "Threads created: " << threads.size() << std::endl;
}
Host
Host::Host() {
    using namespace std;
    InitWSA();
    SecureZeroMemory((PVOID)&overl, sizeof(WSAOVERLAPPED));
    overl.hEvent = WSACreateEvent();
    // Create the completion port and associate the socket BEFORE starting the
    // worker threads; pass NULL (not iocp itself) to create a new port, and 0
    // to allow one concurrently running thread per CPU.
    iocp = CreateIoCompletionPort((HANDLE)sockfd, NULL, 0, 0);
    createThreads();
    m_Init = true;
    listener = netListener(sockfd, overl, 12); // 12-byte buffer size
    for (int i = 0; i < 4; ++i) { // IOCP queue test
        if (PostQueuedCompletionStatus(iocp, 150, completionKey, &overl)) {
            std::cout << "1 completion packet queued\n";
        }
    }
    std::cin.get();
    listener.Receive(); // packet receive test - adds a completion packet n bytes long if a client sent one
    std::cin.get();
}
I am having trouble using std::async to have tasks execute in parallel when the task involves a socket.
My program is a simple TCP socket server written in standard C++ for Linux. When a client connects, a dedicated port is opened and a separate thread is started, so each client is serviced in its own thread.
The client objects are contained in a map.
I have a function to broadcast a message to all clients. I originally wrote it like below:
// ConnectedClient is an object representing a single client
// ConnectedClient::SendMessageToClient opens a socket, connects, writes, reads response and then closes socket
// broadcastMessage is the std::string to go out to all clients
// iterate through the map of clients
map<string, ConnectedClient*>::iterator nextClient;
for ( nextClient = mConnectedClients.begin(); nextClient != mConnectedClients.end(); ++nextClient )
{
    printf("%s\n", nextClient->second->SendMessageToClient(broadcastMessage).c_str());
}
I have tested this and it works with 3 clients at a time. The message gets to all three clients (one at a time), and the response string is printed out three times in this loop. However, it is slow, because the message only goes out to one client at a time.
In order to make it more efficient, I was hoping to take advantage of std::async to call the SendMessageToClient function for every client asynchronously. I rewrote the code above like this:
vector<future<string>> futures;
// iterate through the map of clients
map<string, ConnectedClient*>::iterator nextClient;
for ( nextClient = mConnectedClients.begin(); nextClient != mConnectedClients.end(); ++nextClient )
{
    printf("start send\n");
    futures.push_back(async(launch::async, &ConnectedClient::SendMessageToClient, nextClient->second, broadcastMessage, wait));
    printf("end send\n");
}

vector<future<string>>::iterator nextFuture;
for( nextFuture = futures.begin(); nextFuture != futures.end(); ++nextFuture )
{
    printf("start wait\n");
    nextFuture->wait();
    printf("end wait\n");
    printf("%s\n", nextFuture->get().c_str());
}
The code above functions as expected when there is only one client in the map: you see "start send" quickly followed by "end send", quickly followed by "start wait", and then three seconds later (I have a three-second sleep on the client response side to test this) you see the trace from the socket read function as the response comes in, and then you see "end wait".
The problem comes when there is more than one client in the map. In the part of the SendMessageToClient function that opens and connects the socket, it fails in the code identified below:
// connected client object has a pipe open back to the client for sending messages
int clientSocketFileDescriptor;
clientSocketFileDescriptor = socket(AF_INET, SOCK_STREAM, 0);

// set the socket timeouts
// this part using setsockopt is omitted for brevity

// host name lookup
// (note: gethostbyname() is not thread-safe, which matters once this runs
// from several async tasks at once; getaddrinfo() is the reentrant alternative)
struct hostent *server;
server = gethostbyname(mIpAddressOfClient.c_str());
if (server == 0)
{
    close(clientSocketFileDescriptor);
    return "";
}

struct sockaddr_in clientsListeningServerAddress;
memset(&clientsListeningServerAddress, 0, sizeof(struct sockaddr_in));
clientsListeningServerAddress.sin_family = AF_INET;
bcopy((char*)server->h_addr, (char*)&clientsListeningServerAddress.sin_addr.s_addr, server->h_length);
clientsListeningServerAddress.sin_port = htons(mPortNumberClientIsListeningOn);

// The connect function fails !!!
if ( connect(clientSocketFileDescriptor, (struct sockaddr *)&clientsListeningServerAddress, sizeof(clientsListeningServerAddress)) < 0 )
{
    // print out error code
    printf("Connected client thread: fail to connect %d \n", errno);
    close(clientSocketFileDescriptor);
    return response;
}
The output reads: "Connected client thread: fail to connect 4".
I looked this error code up; it is explained thus:
#define EINTR 4 /* Interrupted system call */
I searched around on the internet; all I found were some references to system calls being interrupted by signals.
Does anyone know why this works when I call my send message function one at a time, but it fails when the send message function is called using async? Does anyone have a different suggestion for how I should send a message to multiple clients?
First, I would try to deal with the EINTR issue. connect() has been interrupted (this is the meaning of EINTR) and does not try again because you are using an asynchronous descriptor.
What I usually do in such a circumstance is retry: I wrap the function (connect in this case) in a while loop. If connect succeeds, I break out of the loop. If it fails, I check the value of errno; if it is EINTR, I try again.
Mind that there are other values of errno that deserve a retry (EWOULDBLOCK is one of them). A sketch of such a retry wrapper:
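(This is illustrative code, not from the original answer. One caveat: after EINTR the kernel may go on completing the connection in the background, so a retried connect() can fail with EISCONN, which actually means the connection succeeded.)

#include <sys/socket.h>
#include <errno.h>

int connect_retry(int fd, const struct sockaddr *addr, socklen_t len)
{
    for (;;) {
        if (connect(fd, addr, len) == 0)
            return 0;  // connected
        if (errno == EISCONN)
            return 0;  // an earlier interrupted attempt already finished
        if (errno != EINTR)
            return -1; // a real error: let the caller inspect errno
        // EINTR: the call was interrupted by a signal, so just try again
    }
}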
I have used C++ & Winsock2 to create both server and client applications. It currently handles multiple client connections by creating separate threads.
Two clients connect to the server. After both have connected, I need to send a message ONLY to the first client which connected, then wait until a response has been received, send a separate message to the second client.
The trouble is, I don't know how I can target the first client which connected.
The code I have at the moment accepts two connections but the message is sent to client 2.
Can someone please give me some ideas on how I can use send() to reach a specific client? Thanks
Code which accepts the connections and starts the new threads
SOCKET TempSock = SOCKET_ERROR; // create a socket called TempSock and assign it the value of SOCKET_ERROR
while (TempSock == SOCKET_ERROR && numCC != 2) // until two clients have connected, wait for client connections
{
    cout << "Waiting for clients to connect...\n\n";
    while ((ClientSocket = accept(Socket, NULL, NULL)) != INVALID_SOCKET) // accept() returns INVALID_SOCKET on failure
    {
        // Create a new thread for the accepted client (also pass the accepted client socket).
        unsigned threadID;
        HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, &ClientSession, (void*)ClientSocket, 0, &threadID);
    }
}
ClientSession()
unsigned __stdcall ClientSession(void *data)
{
    SOCKET ClientSocket = (SOCKET)data;
    numCC++; // increment the number of connected clients
    cout << "Clients Connected: " << numCC << endl << endl; // output number of clients currently connected to the server
    if (numCC < 2)
    {
        cout << "Waiting for additional clients to connect...\n\n";
    }
    if (numCC == 2)
    {
        SendRender(); // ONLY TO CLIENT 1???????????
        // wait for client render to complete and receive Done message back
        memset(bufferReply, 0, 999); // zero the buffer
        int inDataLength = recv(ClientSocket, bufferReply, 999, 0); // receive data from the client, leaving room for the terminating NUL
        response = bufferReply; // assign contents of buffer to string var 'response'
        cout << response << ". " << "Client 1 Render Cycle complete.\n\n";
        SendRender(); // ONLY TO CLIENT 2????????????
    }
    return 0;
}
SendRender() function (sends the Render command to the client)
int SendRender()
{
    // Create message to send to client which will initialise rendering
    const char *szMessage = "Render";
    // Send the Render message to the first client
    iSendResult = send(ClientSocket, szMessage, strlen(szMessage), 0); // HOW TO SEND ONLY TO CLIENT 1???
    if (iSendResult == SOCKET_ERROR)
    {
        // Display error if unable to send message
        cout << "Failed to send message to Client " << numCC << ": " << WSAGetLastError() << endl;
        closesocket(Socket);
        WSACleanup();
        return 1;
    }
    // notify user that Render command has been sent
    cout << "Render command sent to Client " << numCC << endl << endl;
    return 0;
}
You can provide both a wait function and a control function to the thread by adding a WaitForSingleObject (or WaitForMultipleObjects) call. Those API calls suspend the thread until some other thread sets an event handle. The API return value tells you which event handle was set, which you can use to determine which action to take.
Use a different event handle for each thread. To pass it to a thread you will need a struct that contains both the event handle and the socket handle you are passing now. Passing a pointer to this struct into the thread is a way to, in effect, pass two parameters.
Your main thread will need to use CreateEvent to initialize the event handles. Then, after both sockets are connected, it would set one event (SetEvent), triggering the first thread, along the lines of the sketch below.
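(The names below are illustrative, not from the question's code.)

// Passed to each client thread instead of the bare socket handle.
struct ClientContext
{
    SOCKET socket;  // the accepted client socket
    HANDLE goEvent; // set by the main thread when this client should render
};

// Main thread, for each accepted client:
ClientContext* ctx = new ClientContext;
ctx->socket = ClientSocket;
ctx->goEvent = CreateEvent(NULL, FALSE, FALSE, NULL); // auto-reset, initially unsignaled
HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, &ClientSession, ctx, 0, &threadID);

// Inside ClientSession(void* data):
ClientContext* ctx = static_cast<ClientContext*>(data);
WaitForSingleObject(ctx->goEvent, INFINITE); // sleep until the main thread says go
send(ctx->socket, "Render", 6, 0);           // this thread knows exactly which client it owns

// Main thread, once both clients are connected:
SetEvent(firstClientCtx->goEvent);  // let client 1 render
// ... wait for client 1's "Done" reply, then:
SetEvent(secondClientCtx->goEvent); // let client 2 render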
In my C++ application, I am using ::bind() for a UDP socket, but on rare occasions, after reconnecting due to a lost connection, I get errno EADDRINUSE, even after many retries. The other side of the UDP connection, which will receive the data, reconnected fine and is waiting for select() to indicate there is something to read.
I presume this means the local port is in use. If so, how might I be leaking the local port while the other side connects to it fine? The real issue here is that the other side connected fine and is waiting, but this side is stuck with EADDRINUSE.
--Edit--
Here is a code snippet showing that I am already using SO_REUSEADDR, but on my TCP socket, not on the UDP socket that has the issue:
// According to "Linux Socket Programming by Example" p. 319, we must call
// setsockopt w/ SO_REUSEADDR option BEFORE calling bind.
// Make the address reusable so we don't get the nasty message.
int so_reuseaddr = 1; // Enabled.
int reuseAddrResult
    = ::setsockopt(getTCPSocket(), SOL_SOCKET, SO_REUSEADDR, &so_reuseaddr,
                   sizeof(so_reuseaddr));
Here is my code to close the UDP socket when done:
void
disconnectUDP()
{
    if (::shutdown(getUDPSocket(), 2) < 0) {
        clog << "Warning: error during shutdown of data socket("
             << getUDPSocket() << "): " << strerror(errno) << '\n';
    }
    if (::close(getUDPSocket()) < 0 && !seenWarn) {
        clog << "Warning: error while closing data socket("
             << getUDPSocket() << "): " << strerror(errno) << '\n';
    }
}
Yes, that's normal. You need to set SO_REUSEADDR on the socket before you bind, e.g. on *nix:
int sock = socket(...);
int yes = 1;
setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
If you have separate code that reconnects by creating a new socket, set it on that one too. This is just to do with the default behaviour of the OS -- the port on a broken socket is kept defunct for a while.
[EDIT] This shouldn't apply to UDP connections. Maybe you should post the code you use to set up the socket.
In UDP there's no such thing as lost connection, because there's no connection. You can lose sent packets, that's all.
Don't reconnect, simply reuse the existing fd.
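For illustration, a sketch of that approach, assuming a plain BSD-sockets setup (LOCAL_PORT is a placeholder for your fixed local port): create and bind the UDP socket once at startup, then keep using the same descriptor for the life of the program.

#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>

int fd = socket(AF_INET, SOCK_DGRAM, 0);
struct sockaddr_in local;
memset(&local, 0, sizeof(local));
local.sin_family = AF_INET;
local.sin_addr.s_addr = htonl(INADDR_ANY);
local.sin_port = htons(LOCAL_PORT); // placeholder: the fixed local port
bind(fd, (struct sockaddr*)&local, sizeof(local));
// From here on, just sendto()/recvfrom() on fd. A send or receive error does
// not invalidate fd, and there is no connection to re-establish, so never
// close() and re-bind() on error -- that is what risks EADDRINUSE.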