How to interrupt accept() in a TCP/IP server? - c++

I'm working on a vision application, which has two modes:
1) parameter setting
2) automatic
The problem is in 2), when my app waits for a signal via TCP/IP. The program freezes while the accept() method is called. I want to provide the possibility of changing the mode from a GUI. A mode change is signalled by another mechanism (a message_queue), so I want to interrupt the blocking accept() call.
Is there a simple possibility to interrupt the accept?
std::cout << "TCPIP " << std::endl;
client = accept(slisten, (struct sockaddr*)&clientinfo, &clientinfolen);
if (client != SOCKET_ERROR)
cout << "client accepted: " << inet_ntoa(clientinfo.sin_addr) << ":"
<< ntohs(clientinfo.sin_port) << endl;
//receive the message from client
//recv returns the number of bytes received!!
//buf contains the data received
int rec = recv(client, buf, sizeof(buf), 0);
cout << "Message: " << rec << " bytes and the message " << buf << endl;
I have read about select(), but I have no clue how to use it. Could anybody give me a hint on how to use select(), for example, in my code?
Thanks.
Best regards,
T

The solution is to call accept() only when there is an incoming connection request. You do that by polling on the listen socket, where you can also add other file descriptors, use a timeout etc.
You did not mention your platform. On Linux, see epoll(); on UNIX, see poll()/select(); on Windows I don't know.
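For example, a minimal poll()-based sketch (POSIX; slisten is the listening socket from the question, and modeChanged() stands in for whatever check of your message_queue you use — it is not part of the original code):

#include <poll.h>

// Wait for a pending connection, but wake up periodically so the mode
// change can be noticed. Returns true when accept() will not block.
bool waitForConnection(int slisten)
{
    struct pollfd pfd;
    pfd.fd = slisten;
    pfd.events = POLLIN;             // readable == pending connection on a listener

    while (true) {
        int rc = poll(&pfd, 1, 500); // wake up every 500 ms
        if (rc < 0)
            return false;            // handle the error (check errno, retry on EINTR)
        if (rc > 0 && (pfd.revents & POLLIN))
            return true;             // accept() can be called now
        if (modeChanged())           // hypothetical check of your message_queue
            return false;            // caller switches back to parameter-setting mode
    }
}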

A general way would be to use a local TCP connection by which the UI thread could interrupt the select call. The general architecture would use:
a dedicated thread waiting with select on both slisten and the local TCP connection
a TCP connection (Unix domain socket on a Unix or Unix-like system, or 127.0.0.1 on Windows) between the UI thread and the waiting one
various synchronizations/messages between both threads as required
Just tell select to watch both slisten and the local socket for readability. It will return as soon as one of them is ready, and you can check which one it was.
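A minimal sketch of the waiting thread (POSIX; it assumes a socketpair() created at startup, where the UI thread writes a single byte to the other end to interrupt the wait — the names wakefd_read and waitForAcceptOrWakeup are illustrative, not from the question):

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <algorithm>

// Returns true if slisten is ready for accept(), false if the UI thread woke us up.
bool waitForAcceptOrWakeup(int slisten, int wakefd_read)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(slisten, &readfds);
    FD_SET(wakefd_read, &readfds);

    int maxfd = std::max(slisten, wakefd_read);
    if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
        return false;               // handle errors (e.g. retry on EINTR) as needed

    if (FD_ISSET(wakefd_read, &readfds)) {
        char c;
        read(wakefd_read, &c, 1);   // drain the wakeup byte
        return false;               // UI requested a mode change
    }
    return FD_ISSET(slisten, &readfds);  // safe to call accept() now
}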

As you haven't specified your platform, and networking, especially async, is platform-specific, I suppose you need a cross-platform solution. Boost.Asio fits perfectly here: http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/reference/basic_socket_acceptor/async_accept/overload1.html
Example from the link:
void accept_handler(const boost::system::error_code& error)
{
    if (!error)
    {
        // Accept succeeded.
    }
}
...
boost::asio::ip::tcp::acceptor acceptor(io_service);
...
boost::asio::ip::tcp::socket socket(io_service);
acceptor.async_accept(socket, accept_handler);
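Note that the completion handler only runs while the io_service is being run, so somewhere in your program (standard Boost.Asio usage, not something specific to the linked example) you need something like:

boost::asio::io_service io_service;
// ... set up the acceptor, socket and async_accept as above ...
io_service.run();   // dispatches completion handlers such as accept_handler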
If Boost is a problem, Asio can be a header-only lib and used w/o Boost: http://think-async.com/Asio/AsioAndBoostAsio.

One way would be to run select in a loop with a timeout.
Put slisten into nonblocking mode (this isn't strictly necessary but sometimes accept blocks even when select says otherwise) and then:
fd_set read_fds;
struct timeval timeout;
int select_status;

while (true) {
    // select() may modify both the fd_set and the timeout,
    // so re-initialize them on every iteration.
    FD_ZERO(&read_fds);
    FD_SET(slisten, &read_fds);
    timeout.tv_sec = 1;   // 1 s timeout
    timeout.tv_usec = 0;

    select_status = select(slisten + 1, &read_fds, NULL, NULL, &timeout);
    if (select_status == -1) {
        // ERROR: do something
    } else if (select_status > 0) {
        break; // we have a pending connection, we can accept now
    }
    // otherwise (select_status == 0): timeout, continue
}
client = accept(slisten, ...);
This lets you check for other events (such as your mode-change message) once per second. More info here:
http://man7.org/linux/man-pages/man2/select.2.html
and Windows version (pretty much the same):
https://msdn.microsoft.com/pl-pl/library/windows/desktop/ms740141(v=vs.85).aspx
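For the nonblocking mode mentioned above, a sketch (fcntl() on POSIX, ioctlsocket() on Winsock; adjust to your platform):

// POSIX
#include <fcntl.h>
int flags = fcntl(slisten, F_GETFL, 0);
fcntl(slisten, F_SETFL, flags | O_NONBLOCK);

// Windows (Winsock)
// u_long nonblocking = 1;
// ioctlsocket(slisten, FIONBIO, &nonblocking);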

Related

standard C++ TCP socket, connect fails with EINTR when using std::async

I am having trouble using std::async to have tasks execute in parallel when the task involves a socket.
My program is a simple TCP socket server written in standard C++ for Linux. When a client connects, a dedicated port is opened and a separate thread is started, so each client is serviced in its own thread.
The client objects are contained in a map.
I have a function to broadcast a message to all clients. I originally wrote it like below:
// ConnectedClient is an object representing a single client
// ConnectedClient::SendMessageToClient opens a socket, connects, writes, reads the response and then closes the socket
// broadcastMessage is the std::string to go out to all clients

// iterate through the map of clients
map<string, ConnectedClient*>::iterator nextClient;
for ( nextClient = mConnectedClients.begin(); nextClient != mConnectedClients.end(); ++nextClient )
{
    printf("%s\n", nextClient->second->SendMessageToClient(broadcastMessage).c_str());
}
I have tested this and it works with 3 clients at a time. The message gets to all three clients (one at a time), and the response string is printed out three times in this loop. However, it is slow, because the message only goes out to one client at a time.
In order to make it more efficient, I was hoping to take advantage of std::async to call the SendMessageToClient function for every client asynchronously. I rewrote the code above like this:
vector<future<string>> futures;

// iterate through the map of clients
map<string, ConnectedClient*>::iterator nextClient;
for ( nextClient = mConnectedClients.begin(); nextClient != mConnectedClients.end(); ++nextClient )
{
    printf("start send\n");
    futures.push_back(async(launch::async, &ConnectedClient::SendMessageToClient, nextClient->second, broadcastMessage, wait));
    printf("end send\n");
}

vector<future<string>>::iterator nextFuture;
for( nextFuture = futures.begin(); nextFuture != futures.end(); ++nextFuture )
{
    printf("start wait\n");
    nextFuture->wait();
    printf("end wait\n");
    printf("%s\n", nextFuture->get().c_str());
}
The code above functions as expected when there is only one client in the map: you see "start send" quickly followed by "end send", quickly followed by "start wait", and then 3 seconds later (I have a three-second sleep on the client response side to test this) you see the trace from the socket read function showing that the response came in, and then you see "end wait".
The problem occurs when there is more than one client in the map. The SendMessageToClient function fails in the part that opens and connects the socket, in the code identified below:
// connected client object has a pipe open back to the client for sending messages
int clientSocketFileDescriptor;
clientSocketFileDescriptor = socket(AF_INET, SOCK_STREAM, 0);

// set the socket timeouts
// this part using setsockopt is omitted for brevity

// host name
struct hostent *server;
server = gethostbyname(mIpAddressOfClient.c_str());
if (server == 0)
{
    close(clientSocketFileDescriptor);
    return "";
}

//
struct sockaddr_in clientsListeningServerAddress;
memset(&clientsListeningServerAddress, 0, sizeof(struct sockaddr_in));
clientsListeningServerAddress.sin_family = AF_INET;
bcopy((char*)server->h_addr, (char*)&clientsListeningServerAddress.sin_addr.s_addr, server->h_length);
clientsListeningServerAddress.sin_port = htons(mPortNumberClientIsListeningOn);

// The connect function fails !!!
if ( connect(clientSocketFileDescriptor, (struct sockaddr *)&clientsListeningServerAddress, sizeof(clientsListeningServerAddress)) < 0 )
{
    // print out error code
    printf("Connected client thread: fail to connect %d \n", errno);
    close(clientSocketFileDescriptor);
    return response;
}
The output reads: "Connected client thread: fail to connect 4".
I looked this error code up, it is explained thus:
#define EINTR 4 /* Interrupted system call */
I searched around on the internet, all I found were some references to system calls being interrupted by signals.
Does anyone know why this works when I call my send message function one at a time, but it fails when the send message function is called using async? Does anyone have a different suggestion how I should send a message to multiple clients?
First, I would try to deal with the EINTR issue. connect() has been interrupted (this is the meaning of EINTR) and does not try again because you are using an async descriptor.
What I usually do in such a circumstance is to retry: I wrap the function (connect in this case) in a while loop. If connect succeeds, I break out of the loop. If it fails, I check the value of errno; if it is EINTR, I try again.
Mind that there are other values of errno that deserve a retry (EWOULDBLOCK is one of them).
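A minimal sketch of that retry loop (plain POSIX, taking the same arguments as the connect() call in the question; the function name connectRetrying is illustrative):

#include <sys/socket.h>
#include <errno.h>

// Retry connect() while it is interrupted by a signal (EINTR).
// Note: on some systems a connect() interrupted by a signal keeps completing in
// the background, and the retry may report EALREADY/EISCONN rather than success,
// so check for those as well if you see them.
int connectRetrying(int fd, const struct sockaddr *addr, socklen_t addrlen)
{
    int rc;
    do {
        rc = connect(fd, addr, addrlen);
    } while (rc < 0 && errno == EINTR);
    return rc;  // 0 on success, -1 with errno set otherwise
}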

Why might bind() sometimes give EADDRINUSE when other side connects?

In my C++ application, I am using ::bind() for a UDP socket, but on rare occasions, after reconnection due to a lost connection, I get errno EADDRINUSE, even after many retries. The other side of the UDP connection, which will receive the data, reconnected fine and is waiting for select() to indicate there is something to read.
I presume this means the local port is in use. If so, how might I be leaking the local port such that the other side connects to it fine? The real issue here is that the other side connected fine and is waiting, but this side is stuck on EADDRINUSE.
--Edit--
Here is a code snippet showing that I am already doing SO_REUSEADDR on my TCP socket, but not on this UDP socket, which is the one I am having the issue with:
// According to "Linux Socket Programming by Example" p. 319, we must call
// setsockopt w/ SO_REUSEADDR option BEFORE calling bind.
// Make the address is reuseable so we don't get the nasty message.
int so_reuseaddr = 1; // Enabled.
int reuseAddrResult
= ::setsockopt(getTCPSocket(), SOL_SOCKET, SO_REUSEADDR, &so_reuseaddr,
sizeof(so_reuseaddr));
Here is my code to close the UDP socket when done:
void
disconnectUDP()
{
    if (::shutdown(getUDPSocket(), 2) < 0) {
        clog << "Warning: error during shutdown of data socket("
             << getUDPSocket() << "): " << strerror(errno) << '\n';
    }
    if (::close(getUDPSocket()) < 0 && !seenWarn) {
        clog << "Warning: error while closing data socket("
             << getUDPSocket() << "): " << strerror(errno) << '\n';
    }
}
Yes, that's normal. You need to set SO_REUSEADDR on the socket before you bind, e.g. on *nix:
int sock = socket(...);
int yes = 1;
setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
If you have separate code that reconnects by creating a new socket, set it on that one too. This is just to do with the default behaviour of the OS -- the port on a broken socket is kept defunct for a while.
[EDIT] This shouldn't apply to UDP connections. Maybe you should post the code you use to set up the socket.
In UDP there's no such thing as lost connection, because there's no connection. You can lose sent packets, that's all.
Don't reconnect, simply reuse the existing fd.
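To illustrate that last point, a minimal sketch (plain POSIX; the names are illustrative) of a UDP socket that is bound once and then kept for the lifetime of the application, rather than being closed and re-created on "reconnect":

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>

// Create and bind the UDP socket once at startup, then keep the fd --
// there is no connection to "re-establish".
int openUdpSocket(uint16_t localPort)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(localPort);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;  // reuse this fd for all sendto()/recvfrom() calls
}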

WSAWaitForMultipleEvents and NetworkEvents

I'm trying to get the FD_CLOSE event (C++) via WSAWaitForMultipleEvents. In WSAEventSelect I've registered only FD_CLOSE. However, the wait returns and WSAEnumNetworkEvents also returns 0 (success), but NetworkEvents.lNetworkEvents comes back as 0, so I can't see FD_CLOSE in it.
Any help?
thanks.
void EventThread(void* obj)
{
    WSANETWORKEVENTS NetworkEvents;
    WSAEVENT EventArray[WSA_MAXIMUM_WAIT_EVENTS];
    DWORD EventTotal = 0;

    EventArray[EventTotal] = WSACreateEvent();
    EventTotal++;

    int res;
    int index;

    if(WSAEventSelect(_socket, EventArray[EventTotal - 1], FD_CLOSE) == SOCKET_ERROR)
        Logger::GetInstance() << "WSAEventSelect failed with error " << WSAGetLastError() << endl;

    bool bResult;
    while(true)
    {
        if((index = WSAWaitForMultipleEvents(EventTotal, EventArray, FALSE, WSA_INFINITE, FALSE)) == WSA_WAIT_FAILED)
        {
            Logger::GetInstance() << "WSAWaitForMultipleEvents failed with error " << WSAGetLastError() << endl;
        }
        if ((index != WSA_WAIT_FAILED) && (index != WSA_WAIT_TIMEOUT)) {
            res = WSAEnumNetworkEvents(_socket, EventArray[index - WSA_WAIT_EVENT_0], &NetworkEvents);
            if(NetworkEvents.lNetworkEvents & FD_CLOSE)
            {
                if(NetworkEvents.iErrorCode[FD_CLOSE_BIT] != 0)
                {
                    Logger::GetInstance() << "FD_CLOSE failed with error " << NetworkEvents.iErrorCode[FD_CLOSE_BIT] << endl;
                }
                else
                {
                    Logger::GetInstance() << "FD_CLOSE is OK!!! " << NetworkEvents.iErrorCode[FD_CLOSE_BIT] << endl;
                }
            }
        }
    }
}
The WinSock documentation says the following:
The FD_CLOSE message is posted when a close indication is received for the virtual circuit corresponding to the socket. In TCP terms, this means that the FD_CLOSE is posted when the connection goes into the TIME_WAIT or CLOSE_WAIT states. This results from the remote end performing a shutdown() on the send side or a closesocket(). FD_CLOSE should only be posted after all data is read from a socket, but an application should check for remaining data upon receipt of FD_CLOSE to avoid any possibility of losing data.
Be aware that the application will only receive an FD_CLOSE message to indicate closure of a virtual circuit, and only when all the received data has been read if this is a graceful close. It will not receive an FD_READ message to indicate this condition.
...
Here is a summary of events and conditions for each asynchronous notification message.
...
FD_CLOSE: Only valid on connection-oriented sockets (for example, SOCK_STREAM)
- When WSAAsyncSelect() called, if socket connection has been closed.
- After remote system initiated graceful close, when no data currently available to receive (be aware that, if data has been received and is waiting to be read when the remote system initiates a graceful close, the FD_CLOSE is not delivered until all pending data has been read).
- After local system initiates graceful close with shutdown() and remote system has responded with "End of Data" notification (for example, TCP FIN), when no data currently available to receive.
- When remote system terminates connection (for example, sent TCP RST), and lParam will contain WSAECONNRESET error value.
Note: FD_CLOSE is not posted after closesocket() is called.
Pulling out the network cable does not satisfy any of those conditions. This is actually by design, as networks are designed to handle unexpected outages so they can maintain existing connections as best as they can during short outages. Wait a few minutes until the OS times out and see what happens. Also, when you put the cable back in, the OS will validate pre-existing connections and then may or may not reset them at that time.

sockets question

I have server and client classes, but the problem is: when I run an infinite loop to accept incoming connections, I can't receive the data from already-connected clients while accepting new connections, because accept blocks until a connection is accepted. My code:
for (;;)
{
    boost::thread thread(boost::bind(&Irc::Server::startAccept, &s));
    thread.join();
    for (ClientsMap::const_iterator it = s.begin(); it != s.end(); ++it)
    {
        std::string msg = getData(it->second->recv());
        std::clog << "Msg: " << msg << std::endl;
    }
}
You need either multiple threads or a call to select/poll to find out which connections have unprocessed data. IBM has a nice example here, which will work on any flavor of Unix, Linux, BSD, etc. (you might need different header files depending on the OS).
Right now you're starting a thread and then waiting for it immediately, which results in sequential execution and completely defeats the purpose of threads.
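A minimal select()-based sketch of that idea (plain POSIX, not tied to the Boost classes in the question; clientFds stands in for however you track connected sockets):

#include <sys/select.h>
#include <vector>
#include <algorithm>

// One loop iteration: watch the listener and all client sockets, then handle
// whichever became readable. accept() never blocks because it is only called
// when the listener is reported readable.
void pollOnce(int listenFd, std::vector<int>& clientFds)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(listenFd, &readfds);
    int maxfd = listenFd;
    for (size_t i = 0; i < clientFds.size(); ++i) {
        FD_SET(clientFds[i], &readfds);
        maxfd = std::max(maxfd, clientFds[i]);
    }

    if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
        return;  // handle the error (e.g. EINTR) as appropriate

    if (FD_ISSET(listenFd, &readfds)) {
        // accept() the new connection and add its fd to clientFds
    }
    for (size_t i = 0; i < clientFds.size(); ++i) {
        if (FD_ISSET(clientFds[i], &readfds)) {
            // recv() from clientFds[i] and process the data
        }
    }
}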
Take a look here: http://www.boost.org/doc/libs/1_38_0/doc/html/boost_asio/examples.html
especially the HTTP Server 3 example. That's exactly what you are looking for; all you have to do is change that code a little bit for your needs :) and you're done.
A good approach would be to create one thread that only accepts new connections. That's where you have a listener socket. Then, for every connection that gets accepted, you have a new connected socket, so you can spawn another thread, giving it the connected socket as a parameter. That way, your thread that accepts connections doesn't get blocked, and can connect to many clients very fast. The processing threads deal with the clients and then they exit.
I don't even know why you need to wait for them, but if you do, you can deal with it in some other way, depending on the OS and/or libraries that you use (messages, signals, etc.).
If you don't want to spawn a new thread for each connected client, then as Ben Voigt suggested, you can use select. That is another good approach if you want to make it single threaded. Basically, all your sockets will be in an array of socket descriptors and using select you will know what happened (someone connected, socket is ready for read/write, socket got disconnected etc) and act accordingly.
Here's one example. It's partial, but it works: you just accept connections in acceptConnections(), which then spawns a separate thread for each client; that's where you communicate with the clients. It's from some Windows code I have lying around, but it's very easy to reimplement for any platform.
typedef struct SOCKET_DATA_ {
    SOCKET sd;
    /* other parameters that you may want to pass to the clientProc */
} SOCKET_DATA;

/* In this function you communicate with the clients */
DWORD WINAPI clientProc(void * param)
{
    SOCKET_DATA * pSocketData = (SOCKET_DATA *)param;
    /* Communicate with the new client, and at the end deallocate the memory for
       SOCKET_DATA and return.
    */
    free(pSocketData);  /* allocated with malloc in acceptConnections */
    return 0;
}

int acceptConnections(const char * pcAddress, int nPort)
{
    sockaddr_in sinRemote;
    int nAddrSize;
    SOCKET sd_client;
    SOCKET sd_listener;
    sockaddr_in sinInterface;
    SOCKET_DATA * pSocketData;
    HANDLE hThread;
    DWORD nThreadID;

    sd_listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (INVALID_SOCKET == sd_listener) {
        fprintf(stderr, "Could not get a listener socket!\n");
        return 1;
    }

    sinInterface.sin_family = AF_INET;
    sinInterface.sin_port = htons(nPort);  /* port must be in network byte order */
    sinInterface.sin_addr.S_un.S_addr = INADDR_ANY;
    if (SOCKET_ERROR != bind(sd_listener, (sockaddr*)&sinInterface, sizeof(sockaddr_in))) {
        listen(sd_listener, SOMAXCONN);
    } else {
        fprintf(stderr, "Could not bind the listening socket!\n");
        return 1;
    }

    while (1)
    {
        nAddrSize = sizeof(sinRemote);
        sd_client = accept(sd_listener, (sockaddr*)&sinRemote, &nAddrSize);
        if (INVALID_SOCKET == sd_client) {
            fprintf(stdout, "Accept failed!\n");
            closesocket(sd_listener);
            return 1;
        }
        fprintf(stdout, "Accepted connection from %s:%u.\n", inet_ntoa(sinRemote.sin_addr), ntohs(sinRemote.sin_port));

        pSocketData = (SOCKET_DATA *)malloc(sizeof(SOCKET_DATA));
        if (!pSocketData) {
            fprintf(stderr, "Could not allocate memory for SOCKET_DATA!\n");
            return 1;
        }
        pSocketData->sd = sd_client;

        hThread = CreateThread(0, 0, clientProc, pSocketData, 0, &nThreadID);
        if (hThread == NULL) {  /* CreateThread returns NULL on failure */
            fprintf(stderr, "An error occurred while trying to create a thread!\n");
            free(pSocketData);
            return 1;
        }
        CloseHandle(hThread);   /* the handle is not needed; the thread keeps running */
    }

    closesocket(sd_listener);
    return 0;
}

Socket Timeout in C++ Linux

OK, first of all I'd like to mention that what I'm doing is completely ethical, and yes, I am port scanning.
The program runs fine when the port is open, but when I get to a closed port the program halts for a very long time because there is no timeout. The code is below:
int main(){
    int err, net;
    struct hostent *host;
    struct sockaddr_in sa;

    sa.sin_family = AF_INET;
    sa.sin_port = htons(xxxx);
    sa.sin_addr.s_addr = inet_addr("xxx.xxx.xxx.xxx");

    net = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    err = connect(net, (struct sockaddr *)&sa, sizeof(sa));

    if(err >= 0){ cout << "Port is Open"; }
    else { cout << "Port is Closed"; }
}
I found something on Stack Overflow using a select() call, but it just doesn't make sense to me.
The question is: can we make the connect() function time out so we don't wait forever for it to come back with an error?
The easiest way is to set up an alarm and have connect be interrupted by a signal (see UNP 14.2):
signal( SIGALRM, connect_alarm ); /* connect_alarm is your signal handler */
alarm( secs );                    /* secs is your timeout in seconds */
if ( connect( fs, addr, addrlen ) < 0 )
{
    if ( errno == EINTR )         /* timeout */
        ...
}
alarm( 0 );                       /* cancel alarm */
Though using select is not much harder :)
You might want to learn about raw sockets too.
If you're dead-set on using blocking IO to get this done, you should investigate the setsockopt() call, specifically the SO_SNDTIMEO flag (or other flags, depending on your OS).
Be forewarned these flags are not reliable/portable and may be implemented differently on different platforms or different versions of a given platform.
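For example, a sketch only (net is the socket from the question; whether SO_SNDTIMEO actually bounds connect() is platform-dependent, as noted above):

#include <sys/socket.h>
#include <sys/time.h>

// Ask the OS to limit blocking send/connect operations to 5 seconds.
// On Linux this also bounds connect(); other platforms may ignore it there.
struct timeval tv;
tv.tv_sec = 5;
tv.tv_usec = 0;
setsockopt(net, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));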
The traditional/best way to do this is via the nonblocking approach which uses select(). In the event you're new to sockets, one of the very best books is TCP/IP Illustrated, Volume 1: The Protocols. It's at Amazon at: http://www.amazon.com/TCP-Illustrated-Protocols-Addison-Wesley-Professional/dp/0201633469
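A minimal sketch of that nonblocking approach (POSIX; set the socket nonblocking, start the connect, wait for writability with a timeout, then read back the result with SO_ERROR — the function name connectWithTimeout is illustrative):

#include <sys/socket.h>
#include <sys/select.h>
#include <fcntl.h>
#include <errno.h>

// Returns 0 if the connection succeeded within `seconds`, -1 otherwise.
int connectWithTimeout(int fd, const struct sockaddr *addr, socklen_t addrlen, int seconds)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    if (connect(fd, addr, addrlen) == 0)
        return 0;                       // connected immediately (e.g. localhost)
    if (errno != EINPROGRESS)
        return -1;                      // immediate failure

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    struct timeval tv = { seconds, 0 };

    if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
        return -1;                      // timeout or select error

    int soerr = 0;
    socklen_t len = sizeof(soerr);
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &soerr, &len);
    return soerr == 0 ? 0 : -1;         // SO_ERROR tells us how the connect ended
}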
RudeSocket Solved the Problem
I found a library that is tested on Linux Fedora (not sure about Windows) and gives me a timeout option. Below is a very simple example.
#include <rude/socket.h>
#include <iostream>

using namespace std;
using namespace rude;

Socket soc;
soc.setTimeout(30, 5);

// Try connecting
if (soc.connect("xxx.xxx.xxx.xxx", 80)){
    cout << "Connected to xxx.xxx.xxx.xxx on Port " << 80 << "\n";
}
// connection failed
else{
    cout << "Timeout to xxx.xxx.xxx.xxx on Port " << 80 << "\n";
}
soc.close();
Here is a link to the DevSite