Handling threads in server application after clients disconnect - c++

I'm currently working on a simple HTTP server. I use Winsock and standard threads from C++11. For each connected (accepted) client, a new thread is created.
std::map<SOCKET, std::thread> threads;
bool server_running = true;

while(server_running) {
    SOCKET client_socket;
    client_socket = accept(listen_socket, NULL, NULL);
    if(client_socket == INVALID_SOCKET) {
        // some error handling
    }
    threads[client_socket] = std::thread(clientHandler, client_socket);
}
The clientHandler function looks roughly like this:
while(1) {
    while(!all_data_received) {
        bytes_received = recv(client_socket, recvbuf, recvbuflen, 0);
        if(bytes_received > 0) {
            // do something
        } else {
            goto client_cleanup;
        }
    }
    // do something
}

client_cleanup: // we also get here when Connection: close was received
closesocket(client_socket);
And here we come to my problem: how do I handle all the threads that have finished but haven't been joined with the main thread, and whose entries still exist in the threads map?
The simplest solution would probably be to iterate over threads frequently (e.g. from another thread?) and join and delete those which have returned.
Please share your expertise. :)
PS. Yes, I know about the thread pool pattern. I'm not using it in my app (for better or worse). I'm looking for an answer concerning my current architecture.

Simple solution? Just detach() after you start the thread. This will mean that once the thread terminates the resources will be cleaned up and you don't need to keep the std::map<SOCKET, std::thread> threads.
std::thread(clientHandler, client_socket).detach();
Otherwise, create a thread-safe LIFO queue onto which each client thread pushes its socket during cleanup.
Then in the main loop you alternately check accept and that queue, and when the queue has sockets in it you join the corresponding thread and do threads.erase(socket); for each socket in the queue.
However, if you do that then you may as well put the LIFO in the other direction and use a thread pool.
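For illustration, here is a minimal sketch of that cleanup-queue idea (the names finished, finished_mutex and reapFinishedThreads are made up, and error handling is omitted):

#include <winsock2.h>
#include <map>
#include <mutex>
#include <queue>
#include <thread>

std::map<SOCKET, std::thread> threads;
std::mutex finished_mutex;
std::queue<SOCKET> finished; // sockets whose handler has returned

void clientHandler(SOCKET client_socket) {
    // ... recv loop from the question ...
    closesocket(client_socket);
    std::lock_guard<std::mutex> lock(finished_mutex);
    finished.push(client_socket); // announce that this thread is about to exit
}

// Called from the accept loop (e.g. once per iteration).
void reapFinishedThreads() {
    std::lock_guard<std::mutex> lock(finished_mutex);
    while (!finished.empty()) {
        SOCKET s = finished.front();
        finished.pop();
        threads[s].join();  // the handler has already returned, so this is quick
        threads.erase(s);
    }
}

Note that with a blocking accept the reap step only runs when the next client connects; making the listening socket non-blocking, or selecting on it with a timeout, would let the loop reap on a regular schedule.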

Related

C++ - Sockets and multithreading

Socket A(local_address);

void enviar(sockaddr_in remote_address, std::atomic<bool>& quit){
    std::string message_text;
    Message message;
    while(!quit){
        std::getline(std::cin, message_text);
        if (message_text != "/quit"){
            memset(message.text, 0, 1024);
            message_text.copy(message.text, sizeof(message.text) - 1, 0);
            A.send_to(message, remote_address);
        }
        else {
            quit = true;
        }
    }
}

void recibir(sockaddr_in local_address, std::atomic<bool>& quit){
    Message messager;
    while(!quit){
        A.receive_from(messager, local_address);
    }
}

int main(void){
    std::atomic<bool> quit(false);
    sockaddr_in remote_address = make_ip_address("127.0.0.1",6000);
    std::thread hilorec(&recibir,local_address, std::ref(quit));
    std::thread hiloenv(&enviar,remote_address, std::ref(quit));
    hiloenv.join();
    hilorec.join();
}
Hi! I'm trying to make a simple chat with sockets. I want the program to finish when I write "/quit". I'm trying to do this with an atomic bool variable called quit. The problem is that when I write "/quit", quit becomes true and the hiloenv thread finishes, but hilorec, which receives the messages, stays blocked in recvfrom() until a message arrives. How can I solve this?
Sorry for my English, and thanks!
Shut down the socket for input. That will cause recvfrom() to return zero as though the peer had closed the connection, which will cause that thread to exit.
I would send some special (e.g. empty) message to the A socket from the main thread when quit is detected. In that case your while(!quit) ... loop will finish, and so will the thread.
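A minimal sketch of that wake-up message idea, using plain Berkeley sockets (fd and local_address stand for whatever the poster's Socket wrapper holds internally, and the function name is invented):

#include <atomic>
#include <netinet/in.h>
#include <sys/socket.h>

// Called instead of just setting quit = true when "/quit" is typed.
void wake_receiver(int fd, const sockaddr_in& local_address, std::atomic<bool>& quit) {
    quit = true;  // recibir() will see this once it wakes up
    // An empty datagram sent to our own address makes the blocked
    // recvfrom()/receive_from() return, so the while(!quit) check runs again.
    sendto(fd, "", 0, 0,
           reinterpret_cast<const sockaddr*>(&local_address), sizeof(local_address));
}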
If you want to create a single-threaded app, then use the epoll or select APIs. If you want to stick to your current design, then you can create your socket with a timeout set. Please look at "How to set socket timeout in C when making multiple connections?" for details. So when you quit, the waiting thread will come out of recv or send after the timeout, and then the thread will join and your application can quit gracefully.
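For the timeout variant, a minimal sketch with POSIX sockets (the wrapper class would have to expose its descriptor; set_recv_timeout is an invented helper):

#include <sys/socket.h>
#include <sys/time.h>

// With a receive timeout set, a blocked recvfrom() gives up after one second
// and returns -1 with errno == EAGAIN/EWOULDBLOCK, so the while(!quit) loop
// gets a chance to notice that quit has been set.
void set_recv_timeout(int fd) {
    struct timeval tv;
    tv.tv_sec = 1;
    tv.tv_usec = 0;
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}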
Thanks for the answers. I managed to fix it; in case anyone is interested, here's how:
std::thread hilorec(&recibir,local_address);
std::thread hiloenv(&enviar,remote_address);
while(!quit){}
pthread_cancel(hilorec.native_handle());
pthread_cancel(hiloenv.native_handle());
hilorec.join();
hiloenv.join();

C++ winsockets threading issue

I made a pretty simple C++ socket server. I'm trying to spawn a thread each time a new client connects (so reading can be done in parallel).
void Server::start(void){
    for(;;){
        Logger::Log("Now accepting clients");
        int client;
        struct sockaddr_in client_addr;
        size_t addr_size = sizeof(client_addr);
        client = accept(this->m_socket, (sockaddr*)&client_addr, 0);
        if(client != SOCKET_ERROR){
            Logger::Log("New client connected!");
            StateObject client_object(client, this);
            this->clients.push_back(&client_object);
            std::stringstream stream;
            stream<<this->clients.size()<<" clients online";
            Logger::Log(const_cast<char*>(stream.str().c_str()));
            std::thread c_thread(std::bind(&StateObject::read, std::ref(client_object)));
            //c_thread.join(); //if I join the child, new clients won't be accepted until the previous thread exits
        }
    }
}
Reading method in client class:
void StateObject::read(){
    Logger::Log("Now reading");
    for(;;){
        int bytesReceived = recv(this->socket, buffer, 255, 0);
        if(bytesReceived > 0){
            Logger::Log(const_cast<char*>(std::string("Received: " + std::string(buffer).substr(0, bytesReceived)).c_str()));
        }else if(bytesReceived == 0){
            Logger::Log("Client gracefully disconnected");
            break;
        }else{
            Logger::Log("Could not receive data from remote host");
            break;
        }
    }
    Server * server = reinterpret_cast<Server*>(parent);
    server->removeClient(this);
}
Currently, after a client connects, an exception is thrown:
Why and when has abort been triggered?
Please note that this happens when the child thread hasn't been joined with the main thread. In the other case, the "flow" goes expectedly synchronous (the current client thread has to exit so that the loop can continue to accept the next client).
Notes:
Since I am tied to Windows, I'm unable to fork child tasks; I am also not a fan of Cygwin. Asynchronous Win32 methods seem to complicate things, which is why I avoid them.
C++ std::thread reference
Tests have been done through Telnet
You either need to detach the thread or join it before it goes out of scope. Otherwise std::thread calls std::terminate in its destructor.
http://www.cplusplus.com/reference/thread/thread/~thread/
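In the code above, c_thread is destroyed at the end of the if block while still joinable, and a joinable std::thread whose destructor runs calls std::terminate, which is what triggers the abort. A minimal sketch of the two options (worker is a stand-in for the real handler):

#include <thread>

void worker() { /* client handling goes here */ }

int main() {
    // Option 1: detach, if nothing needs to wait for the thread later.
    std::thread(worker).detach();

    // Option 2: keep the std::thread object somewhere and join it
    // before it is destroyed.
    std::thread t(worker);
    t.join();
}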

C++ Implementing threads with Inter thread communication

I'm currently attempting to create a server and client application that use Winsock; alongside the main program, I need a second thread that is always listening for data.
This communication is non-blocking. I am really having trouble finding a way of communicating between threads. An example of what I'm looking for: the server sends a string to the client, e.g. "viewData", this information is fetched by the main thread, and then a specific function may also be called.
Here is an example of my thread; I am creating it using _beginthread( (void(*)(void*))SocketReceive, 0, (void*)&ohuman );
//thread focused on listening to connection
void SocketReceive( comms* ohuman)
{
    char buffer[1000];
    int inDataLength;
    std::string contents;
    for(;;)
    {
        if(!ohuman->getGameOn())
        {
            // Display message from server
            memset(buffer,0,999);
            inDataLength=recv((INT_PTR)ohuman->getSocket(),buffer,1000,0);
            contents = std::string(buffer); //create a string from the char array for easy access
            //only display if we get some content
            if(inDataLength > 0)
            {
                //???DealWithMessage(
                int nError=WSAGetLastError();
                if(nError!=WSAEWOULDBLOCK&&nError!=0)
                {
                    std::cout<<"Winsock error code: "<<nError<<"\r\n";
                    std::cout<<"Server disconnected!\r\n";
                    // Shutdown our socket
                    shutdown((INT_PTR)ohuman->getSocket(),0x01);
                    // Close our socket entirely
                    closesocket((INT_PTR)ohuman->getSocket());
                    break;
                }
            }
        }
        _endthread();
    }
}
I also saw this site, which is supposed to help with inter-thread communication; any advice on this:
http://derkarl.org/itc/
With a straightforward main loop, I am interested in any approach that might work. I've been trying to figure this out for a couple of days with no luck; any help is greatly appreciated.
You can either have a shared variable (with locks around it) that both threads poll/write, or you can register callback functions between the threads and call into the other thread on some event.
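For instance, a minimal sketch of the shared-variable route (inbox and PollMessage are invented names; DealWithMessage is only a guess at how it could plug into the commented-out call in the thread above):

#include <mutex>
#include <queue>
#include <string>

std::mutex queue_mutex;
std::queue<std::string> inbox;

// Called from the receiving thread with each complete message.
void DealWithMessage(const std::string& msg) {
    std::lock_guard<std::mutex> lock(queue_mutex);
    inbox.push(msg);
}

// Called from the main loop; returns false when nothing is pending.
bool PollMessage(std::string& out) {
    std::lock_guard<std::mutex> lock(queue_mutex);
    if (inbox.empty()) return false;
    out = inbox.front();
    inbox.pop();
    return true;
}

The main loop can then call PollMessage() each iteration and dispatch on the string (e.g. "viewData") to the appropriate function.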

How to correctly read data when using epoll_wait

I am trying to port existing Windows C++ code that uses IOCP to Linux. Having decided to use epoll_wait to achieve high concurrency, I am already faced with a theoretical issue of when we try to process received data.
Imagine two threads calling epoll_wait, and two consecutive messages being received such that Linux unblocks the first thread and, soon after, the second.
Example :
Thread 1 blocks on epoll_wait
Thread 2 blocks on epoll_wait
Client sends a chunk of data 1
Thread 1 returns from epoll_wait, performs recv and tries to process the data
Client sends a chunk of data 2
Thread 2 returns from epoll_wait, performs recv and tries to process the data.
Is this scenario conceivable? I.e., can it occur?
Is there a way to prevent it, so as to avoid implementing synchronization in the recv/processing code?
If you have multiple threads reading from the same set of epoll handles, I would recommend you put your epoll handles in one-shot level-triggered mode with EPOLLONESHOT. This will ensure that, after one thread observes the triggered handle, no other thread will observe it until you use epoll_ctl to re-arm the handle.
If you need to handle read and write paths independently, you may want to completely split up the read and write thread pools; have one epoll handle for read events, and one for write events, and assign threads to one or the other exclusively. Further, have a separate lock for read and for write paths. You must be careful about interactions between the read and write threads as far as modifying any per-socket state, of course.
If you do go with that split approach, you need to put some thought into how to handle socket closures. Most likely you will want an additional shared-data lock, and 'acknowledge closure' flags, set under the shared data lock, for both read and write paths. Read and write threads can then race to acknowledge, and the last one to acknowledge gets to clean up the shared data structures. That is, something like this:
void OnSocketClosed(shareddatastructure *pShared, int writer)
{
    epoll_ctl(myepollhandle, EPOLL_CTL_DEL, pShared->fd, NULL);
    LOCK(pShared->common_lock);
    if (writer)
        pShared->close_ack_w = true;
    else
        pShared->close_ack_r = true;
    bool acked = pShared->close_ack_w && pShared->close_ack_r;
    UNLOCK(pShared->common_lock);
    if (acked)
        free(pShared);
}
I'm assuming here that the situation you're trying to process is something like this:
You have multiple (maybe very many) sockets that you want to receive data from at once;
You want to start processing data from the first connection on Thread A when it is first received and then be sure that data from this connection is not processed on any other thread until you have finished with it in Thread A.
While you are doing that, if some data is now received on a different connection, you want Thread B to pick up that data and process it while still being sure that no one else can process this connection until Thread B is done with it, etc.
Under these circumstances it turns out that using epoll_wait() with the same epoll fd in multiple threads is a reasonably efficient approach (I'm not claiming that it is necessarily the most efficient).
The trick here is to add the individual connections fds to the epoll fd with the EPOLLONESHOT flag. This ensures that once an fd has been returned from an epoll_wait() it is unmonitored until you specifically tell epoll to monitor it again. This ensures that the thread processing this connection suffers no interference as no other thread can be processing the same connection until this thread marks the connection to be monitored again.
You can set up the fd to monitor EPOLLIN or EPOLLOUT again using epoll_ctl() and EPOLL_CTL_MOD.
A significant benefit of using epoll like this in multiple threads is that when one thread is finished with a connection and adds it back to the epoll monitored set, any other threads still in epoll_wait() are immediately monitoring it even before the previous processing thread returns to epoll_wait(). Incidentally that could also be a disadvantage because of lack of cache data locality if a different thread now picks up that connection immediately (thus needing to fetch the data structures for this connection and flush the previous thread's cache). What works best will sensitively depend on your exact usage pattern.
If you are trying to process messages received subsequently on the same connection in different threads then this scheme to use epoll is not going to be appropriate for you, and an approach using a listening thread feeding an efficient queue feeding worker threads might be better.
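As a minimal sketch of that one-shot pattern (epfd and handle_data() are assumed to exist elsewhere; error handling is omitted):

#include <sys/epoll.h>

void handle_data(int fd);   /* defined elsewhere: recv() and process */

void worker_loop(int epfd)
{
    struct epoll_event ev, events[16];
    for (;;) {
        int n = epoll_wait(epfd, events, 16, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            handle_data(fd);            /* no other thread can see fd right now */

            /* re-arm: make the fd visible to epoll_wait() again */
            ev.events = EPOLLIN | EPOLLONESHOT;
            ev.data.fd = fd;
            epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
        }
    }
}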
Previous answers that point out that calling epoll_wait() from multiple threads is a bad idea are almost certainly right, but I was intrigued enough by the question to try and work out what does happen when it is called from multiple threads on the same handle, waiting for the same socket. I wrote the following test code:
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

struct thread_info {
    int number;
    int socket;
    int epoll;
};

void * thread(struct thread_info * arg)
{
    struct epoll_event events[10];
    int s;
    char buf[512];

    sleep(5 * arg->number);
    printf("Thread %d start\n", arg->number);

    do {
        s = epoll_wait(arg->epoll, events, 10, -1);
        if (s < 0) {
            perror("wait");
            exit(1);
        } else if (s == 0) {
            printf("Thread %d No data\n", arg->number);
            exit(1);
        }

        if (recv(arg->socket, buf, 512, 0) <= 0) {
            perror("recv");
            exit(1);
        }
        printf("Thread %d got data\n", arg->number);
    } while (s == 1);

    printf("Thread %d end\n", arg->number);
    return 0;
}

int main()
{
    pthread_attr_t attr;
    pthread_t threads[2];
    struct thread_info thread_data[2];
    int s;
    int listener, client, epollfd;
    struct sockaddr_in listen_address;
    struct sockaddr_storage client_address;
    socklen_t client_address_len;
    struct epoll_event ev;

    listener = socket(AF_INET, SOCK_STREAM, 0);
    if (listener < 0) {
        perror("socket");
        exit(1);
    }

    memset(&listen_address, 0, sizeof(struct sockaddr_in));
    listen_address.sin_family = AF_INET;
    listen_address.sin_addr.s_addr = INADDR_ANY;
    listen_address.sin_port = htons(6799);

    s = bind(listener,
             (struct sockaddr*)&listen_address,
             sizeof(listen_address));
    if (s != 0) {
        perror("bind");
        exit(1);
    }

    s = listen(listener, 1);
    if (s != 0) {
        perror("listen");
        exit(1);
    }

    client_address_len = sizeof(client_address);
    client = accept(listener,
                    (struct sockaddr*)&client_address,
                    &client_address_len);

    epollfd = epoll_create(10);
    if (epollfd == -1) {
        perror("epoll_create");
        exit(1);
    }

    ev.events = EPOLLIN;
    ev.data.fd = client;
    if (epoll_ctl(epollfd, EPOLL_CTL_ADD, client, &ev) == -1) {
        perror("epoll_ctl: listen_sock");
        exit(1);
    }

    thread_data[0].number = 0;
    thread_data[1].number = 1;
    thread_data[0].socket = client;
    thread_data[1].socket = client;
    thread_data[0].epoll = epollfd;
    thread_data[1].epoll = epollfd;

    s = pthread_attr_init(&attr);
    if (s != 0) {
        perror("pthread_attr_init");
        exit(1);
    }

    s = pthread_create(&threads[0],
                       &attr,
                       (void*(*)(void*))&thread,
                       &thread_data[0]);
    if (s != 0) {
        perror("pthread_create");
        exit(1);
    }

    s = pthread_create(&threads[1],
                       &attr,
                       (void*(*)(void*))&thread,
                       &thread_data[1]);
    if (s != 0) {
        perror("pthread_create");
        exit(1);
    }

    pthread_join(threads[0], 0);
    pthread_join(threads[1], 0);
    return 0;
}
When data arrives and both threads are waiting on epoll_wait(), only one will return; but as subsequent data arrives, the thread that wakes up to handle the data is effectively random between the two threads. I wasn't able to find a way to affect which thread was woken.
It seems likely that a single thread calling epoll_wait makes the most sense, with events passed to worker threads to pump the I/O.
I believe that the high performance software that uses epoll and a thread per core creates multiple epoll handles that each handle a subset of all the connections. In this way the work is divided but the problem you describe is avoided.
Generally, epoll is used when you have a single thread listening for data on a single asynchronous source. To avoid busy-waiting (manually polling), you use epoll to let you know when data is ready (much like select does).
It is not standard practice to have multiple threads reading from a single data source, and I, at least, would consider it bad practice.
If you want to use multiple threads, but you only have one input source, then designate one of the threads to listen and queue the data so the other threads can read individual pieces from the queue.
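As a sketch of that layout (Chunk and read_next_chunk() are placeholders for whatever the single listening/epoll thread actually produces):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>

struct Chunk { int fd; std::vector<char> data; };

Chunk read_next_chunk();   // epoll_wait() + recv(), defined elsewhere

std::mutex m;
std::condition_variable cv;
std::queue<Chunk> work;

void listener() {                       // the only thread that calls epoll_wait()
    for (;;) {
        Chunk c = read_next_chunk();
        { std::lock_guard<std::mutex> lock(m); work.push(std::move(c)); }
        cv.notify_one();
    }
}

void worker() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, []{ return !work.empty(); });
        Chunk c = std::move(work.front());
        work.pop();
        lock.unlock();
        // process c ...
    }
}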

Closing a thread with select() system call statement?

I have a thread that monitors a serial port using the select system call; the run function of the thread is as follows:
void <ProtocolClass>::run()
{
    int fd = mPort->GetFileDescriptor();
    fd_set readfs;
    int maxfd=fd+1;
    int res;
    struct timeval Timeout;
    Timeout.tv_usec=0;
    Timeout.tv_sec=3;
    //BYTE ack_message_frame[ACKNOWLEDGE_FRAME_SIZE];
    while(true)
    {
        usleep(10);
        FD_ZERO(&readfs);
        FD_SET(fd,&readfs);
        res=select(maxfd,&readfs,NULL,NULL,NULL);
        if(res<0)
            perror("\nselect failed");
        else if( res==0)
            puts("TIMEOUT");
        else if(FD_ISSET(fd,&readfs))
        {   //IF INPUT RECEIVED
            qDebug("************RECEIVED DATA****************");
            FlushBuf();
            qDebug("\nReading data into a read buffer");
            int bytes_read=mPort->ReadPort(mBuf,1000);
            mFrameReceived=false;
            for(int i=0;i<bytes_read;i++)
            {
                qDebug("%x",mBuf[i]);
            }
            //if complete frame has been received, write the acknowledge message frame to the port.
            if(bytes_read>0)
            {
                qDebug("\nAbout to Process Received bytes");
                ProcessReceivedBytes(mBuf,bytes_read);
                qDebug("\n Processed Received bytes");
                if(mFrameReceived)
                {
                    int no_bytes=mPort->WritePort(mAcknowledgeMessage,ACKNOWLEDGE_FRAME_SIZE);
                }//if frame received
            }//if bytes read > 0
        } //if input received
    }//end while
}
The problem is that when I exit from this thread, using
delete <protocolclass>::instance();
the program crashes with a glibc error of malloc memory corruption. On checking the core with gdb, it was found that the thread was still processing data when exiting, and thus the error. The destructor of the protocol class looks as follows:
<ProtocolClass>::~<ProtocolClass>()
{
    delete [] mpTrackInfo; //delete data
    wait();
    mPort->ClosePort();
    s_instance = NULL; //static instance of singleton
    delete mPort;
}
Is this due to select? Do the semantics for destroying objects change when select is involved? Can someone suggest a clean way to destroy threads that involve a select call?
Thanks
I'm not sure what threading library you use, but you should probably signal the thread in one way or another that it should exit, rather than killing it.
The simplest way would be to keep a boolean that is set to true when the thread should exit, and use a timeout on the select() call so the flag is checked periodically.
ProtocolClass::StopThread ()
{
    kill_me = true;
    // Wait for thread to die
    Join();
}

ProtocolClass::run ()
{
    struct timeval tv;
    ...
    while (!kill_me) {
        ...
        tv.tv_sec = 1;
        tv.tv_usec = 0;
        res = select (maxfd, &readfds, NULL, NULL, &tv);
        if (res < 0) {
            // Handle error
        }
        else if (res != 0) {
            ...
        }
    }
}
You could also set up a pipe and include it in readfds, and then just write something to it from another thread. That would avoid waking up every second and bring down the thread without delay.
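A minimal sketch of that pipe trick (pipe_fds is assumed to be created once with pipe() during setup; fd and readfs are the names from the question's loop):

#include <unistd.h>
#include <sys/select.h>
#include <algorithm>

int pipe_fds[2];   // pipe_fds[0] goes into the read set, pipe_fds[1] is the wake-up end

void RequestStop()                     // called from the thread that wants the loop to end
{
    char c = 'x';
    write(pipe_fds[1], &c, 1);         // wakes up select() immediately
}

// Inside the select loop:
//   FD_SET(fd, &readfs);
//   FD_SET(pipe_fds[0], &readfs);
//   int maxfd = std::max(fd, pipe_fds[0]) + 1;
//   res = select(maxfd, &readfs, NULL, NULL, NULL);
//   if (FD_ISSET(pipe_fds[0], &readfs)) { /* drain the pipe and return from run() */ }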
Also, you should of course never use a boolean variable like that without some kind of lock, ...
Are the threads still looking at mpTrackInfo after you delete it?
Not seeing the code, it is hard to say.
But I would think that the first thing the destructor should do is wait for any threads to die (preferably with some form of join() to make sure they are all accounted for). Once they are dead you can start cleaning up the data.
Your thread is more than just memory with some members, so just deleting it and counting on the destructor is not enough. Since I don't know Qt threads, I think this link can put you on your way:
trolltech message
Two possible problems:
What is mpTrackInfo? You delete it before you wait for the thread to exit. Does the thread use this data somewhere, maybe even after it's been deleted?
How does the thread know it's supposed to exit? The loop in run() seems to run forever, which should cause wait() in the destructor to wait forever.