How to correctly read data when using epoll_wait - c++

I am trying to port to Linux an existing Windows C++ application that uses IOCP. Having decided to use epoll_wait to achieve high concurrency, I am already faced with a theoretical issue concerning when we try to process received data.
Imagine two threads calling epoll_wait, and two consecutive messages being received, such that Linux unblocks the first thread and, soon after, the second.
Example :
Thread 1 blocks on epoll_wait
Thread 2 blocks on epoll_wait
Client sends a chunk of data 1
Thread 1 unblocks from epoll_wait, performs recv and tries to process data
Client sends a chunk of data 2
Thread 2 unblocks, performs recv and tries to process data.
Is this scenario conceivable? I.e. can it occur?
Is there a way to prevent it, so as to avoid implementing synchronization in the recv/processing code?

If you have multiple threads reading from the same set of epoll handles, I would recommend you put your epoll handles in one-shot level-triggered mode with EPOLLONESHOT. This will ensure that, after one thread observes the triggered handle, no other thread will observe it until you use epoll_ctl to re-arm the handle.
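For concreteness, a minimal sketch of arming a socket this way (epfd and connfd are placeholder names; error handling trimmed):

    struct epoll_event ev;
    ev.events = EPOLLIN | EPOLLONESHOT;  /* level-triggered, delivered to one waiter only */
    ev.data.fd = connfd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, connfd, &ev) == -1)
        perror("epoll_ctl: add");

Once the observing thread has finished its recv/processing, it re-arms the fd with EPOLL_CTL_MOD; the one-shot flag must be supplied again, since the fd is disabled after each delivered event.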
If you need to handle read and write paths independently, you may want to completely split up the read and write thread pools; have one epoll handle for read events, and one for write events, and assign threads to one or the other exclusively. Further, have a separate lock for read and for write paths. You must be careful about interactions between the read and write threads as far as modifying any per-socket state, of course.
If you do go with that split approach, you need to put some thought into how to handle socket closures. Most likely you will want an additional shared-data lock, and 'acknowledge closure' flags, set under the shared data lock, for both read and write paths. Read and write threads can then race to acknowledge, and the last one to acknowledge gets to clean up the shared data structures. That is, something like this:
void OnSocketClosed(shareddatastructure *pShared, int writer)
{
    epoll_ctl(myepollhandle, EPOLL_CTL_DEL, pShared->fd, NULL);
    LOCK(pShared->common_lock);
    if (writer)
        pShared->close_ack_w = true;
    else
        pShared->close_ack_r = true;
    bool acked = pShared->close_ack_w && pShared->close_ack_r;
    UNLOCK(pShared->common_lock);
    if (acked)
        free(pShared);
}

I'm assuming here that the situation you're trying to process is something like this:
You have multiple (maybe very many) sockets that you want to receive data from at once;
You want to start processing data from the first connection on Thread A when it is first received and then be sure that data from this connection is not processed on any other thread until you have finished with it in Thread A.
While you are doing that, if some data is now received on a different connection you want Thread B to pick that data and process it while still being sure that no one else can process this connection until Thread B is done with it etc.
Under these circumstances it turns out that using epoll_wait() with the same epoll fd in multiple threads is a reasonably efficient approach (I'm not claiming that it is necessarily the most efficient).
The trick here is to add the individual connections fds to the epoll fd with the EPOLLONESHOT flag. This ensures that once an fd has been returned from an epoll_wait() it is unmonitored until you specifically tell epoll to monitor it again. This ensures that the thread processing this connection suffers no interference as no other thread can be processing the same connection until this thread marks the connection to be monitored again.
You can set up the fd to monitor EPOLLIN or EPOLLOUT again using epoll_ctl() and EPOLL_CTL_MOD.
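For example (a sketch; epollfd and fd stand in for your handles):

    struct epoll_event ev;
    ev.events = EPOLLIN | EPOLLONESHOT;  /* EPOLLONESHOT must be set again on every re-arm */
    ev.data.fd = fd;
    epoll_ctl(epollfd, EPOLL_CTL_MOD, fd, &ev);

EPOLL_CTL_MOD replaces the whole event mask, so omitting EPOLLONESHOT here would silently switch the fd back to normal mode.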
A significant benefit of using epoll like this in multiple threads is that when one thread is finished with a connection and adds it back to the epoll monitored set, any other threads still in epoll_wait() are immediately monitoring it even before the previous processing thread returns to epoll_wait(). Incidentally that could also be a disadvantage because of lack of cache data locality if a different thread now picks up that connection immediately (thus needing to fetch the data structures for this connection and flush the previous thread's cache). What works best will sensitively depend on your exact usage pattern.
If you are trying to process messages received subsequently on the same connection in different threads then this scheme to use epoll is not going to be appropriate for you, and an approach using a listening thread feeding an efficient queue feeding worker threads might be better.

Previous answers that point out that calling epoll_wait() from multiple threads is a bad idea are almost certainly right, but I was intrigued enough by the question to try and work out what actually happens when it is called from multiple threads on the same handle, waiting for the same socket. I wrote the following test code:
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

struct thread_info {
    int number;
    int socket;
    int epoll;
};

void *thread(struct thread_info *arg)
{
    struct epoll_event events[10];
    int s;
    char buf[512];

    sleep(5 * arg->number);
    printf("Thread %d start\n", arg->number);
    do {
        s = epoll_wait(arg->epoll, events, 10, -1);
        if (s < 0) {
            perror("wait");
            exit(1);
        } else if (s == 0) {
            printf("Thread %d No data\n", arg->number);
            exit(1);
        }
        if (recv(arg->socket, buf, 512, 0) <= 0) {
            perror("recv");
            exit(1);
        }
        printf("Thread %d got data\n", arg->number);
    } while (s == 1);
    printf("Thread %d end\n", arg->number);
    return 0;
}

int main()
{
    pthread_attr_t attr;
    pthread_t threads[2];
    struct thread_info thread_data[2];
    int s;
    int listener, client, epollfd;
    struct sockaddr_in listen_address;
    struct sockaddr_storage client_address;
    socklen_t client_address_len;
    struct epoll_event ev;

    listener = socket(AF_INET, SOCK_STREAM, 0);
    if (listener < 0) {
        perror("socket");
        exit(1);
    }
    memset(&listen_address, 0, sizeof(struct sockaddr_in));
    listen_address.sin_family = AF_INET;
    listen_address.sin_addr.s_addr = INADDR_ANY;
    listen_address.sin_port = htons(6799);
    s = bind(listener,
             (struct sockaddr *)&listen_address,
             sizeof(listen_address));
    if (s != 0) {
        perror("bind");
        exit(1);
    }
    s = listen(listener, 1);
    if (s != 0) {
        perror("listen");
        exit(1);
    }
    client_address_len = sizeof(client_address);
    client = accept(listener,
                    (struct sockaddr *)&client_address,
                    &client_address_len);
    epollfd = epoll_create(10);
    if (epollfd == -1) {
        perror("epoll_create");
        exit(1);
    }
    ev.events = EPOLLIN;
    ev.data.fd = client;
    if (epoll_ctl(epollfd, EPOLL_CTL_ADD, client, &ev) == -1) {
        perror("epoll_ctl: listen_sock");
        exit(1);
    }
    thread_data[0].number = 0;
    thread_data[1].number = 1;
    thread_data[0].socket = client;
    thread_data[1].socket = client;
    thread_data[0].epoll = epollfd;
    thread_data[1].epoll = epollfd;
    s = pthread_attr_init(&attr);
    if (s != 0) {
        perror("pthread_attr_init");
        exit(1);
    }
    s = pthread_create(&threads[0],
                       &attr,
                       (void *(*)(void *))&thread,
                       &thread_data[0]);
    if (s != 0) {
        perror("pthread_create");
        exit(1);
    }
    s = pthread_create(&threads[1],
                       &attr,
                       (void *(*)(void *))&thread,
                       &thread_data[1]);
    if (s != 0) {
        perror("pthread_create");
        exit(1);
    }
    pthread_join(threads[0], 0);
    pthread_join(threads[1], 0);
    return 0;
}
When data arrives and both threads are waiting on epoll_wait(), only one will return, but as subsequent data arrives, the thread that wakes up to handle the data is effectively random between the two. I wasn't able to find a way to affect which thread was woken.
It seems likely that a single thread calling epoll_wait makes most sense, with events passed to worker threads to pump the IO.

I believe that high-performance software that uses epoll and a thread per core creates multiple epoll handles, each handling a subset of all the connections. In this way the work is divided, but the problem you describe is avoided.
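A rough sketch of that arrangement (illustrative names; thread start-up and error handling omitted): each worker thread calls epoll_wait on its own epoll fd, and the accept loop deals connections out round-robin.

#include <sys/epoll.h>
#include <sys/socket.h>

#define NUM_WORKERS 4

int worker_epfd[NUM_WORKERS];   /* one epoll instance per worker thread */

void distribute(int listener)
{
    int next = 0;
    for (int i = 0; i < NUM_WORKERS; i++)
        worker_epfd[i] = epoll_create1(0);  /* each worker epoll_waits on its own fd */

    for (;;) {
        int conn = accept(listener, NULL, NULL);
        if (conn < 0)
            continue;
        struct epoll_event ev;
        ev.events = EPOLLIN;
        ev.data.fd = conn;
        /* each connection lives in exactly one worker's set */
        epoll_ctl(worker_epfd[next], EPOLL_CTL_ADD, conn, &ev);
        next = (next + 1) % NUM_WORKERS;
    }
}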

Generally, epoll is used when you have a single thread listening for data on a single asynchronous source. To avoid busy-waiting (manually polling), you use epoll to let you know when data is ready (much like select does).
It is not standard practice to have multiple threads reading from a single data source, and I, at least, would consider it bad practice.
If you want to use multiple threads, but you only have one input source, then designate one of the threads to listen and queue the data so the other threads can read individual pieces from the queue.
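A minimal sketch of that hand-off, assuming C++11 (the names are illustrative):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

std::queue<std::string> pending;      // data chunks read by the listener thread
std::mutex pendingMutex;
std::condition_variable pendingCv;

void listenerPush(std::string chunk)  // called only by the listening thread
{
    {
        std::lock_guard<std::mutex> lock(pendingMutex);
        pending.push(std::move(chunk));
    }
    pendingCv.notify_one();           // wake exactly one waiting worker
}

std::string workerPop()               // called by any worker thread
{
    std::unique_lock<std::mutex> lock(pendingMutex);
    pendingCv.wait(lock, []{ return !pending.empty(); });
    std::string chunk = std::move(pending.front());
    pending.pop();
    return chunk;
}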

Threads and sockets, threads and objects more generally

Thanks for your time.
What am I trying to accomplish?
I'm trying to utilise threads to speed up my program. After some profiling I found that a large portion of my program's time (a graphics application) is spent checking the status of my socket. Obviously not ideal when trying to trim the fat and get down to <16ms per cycle. I'm currently using the select function to check for new data and read if data is available.
What's the problem?
I can't get my head around threads and objects. I had a play with some textbook examples, running and joining local functions with threads, which worked fine. Trying to move this into my own code has proved beyond me.
What have I tried?
I've tried looking at smart pointers to allocate my UDPSocket objects on the heap, with the hope that heap memory is accessible by all threads. I've tried good old new & delete for the same reason. I've tried wrapping my UDPSockets inside another object and getting the whole lot to launch on another thread.
In summary, it's absolutely certain that I have a big hole in my understanding of threads. I would be grateful for a solution to this specific problem, but also for links to any good articles, tutorials, videos, etc. that might help further my understanding. Perhaps I simply need to re-examine my whole UDPSocket class? Your advice is most welcome.
I'll post my example below; please note I've stripped out all error checking etc. for readability.
#pragma once
#define WIN32_LEAN_AND_MEAN
#include <WS2tcpip.h>
#include <iostream>
#include <memory>
#include <string>
#include <thread>
#pragma comment(lib, "ws2_32.lib")

class UDPServer
{
public:
    UDPServer(unsigned short port_in)
        :
        port(port_in)
    {
        // Startup Winsock
        WSADATA data;
        WORD version = MAKEWORD(2, 2);
        int wsOk = WSAStartup(version, &data);

        //Bind socket to port, Any Address
        s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        //Hint structure
        sockaddr_in serverHint;
        serverHint.sin_addr.S_un.S_addr = ADDR_ANY;
        serverHint.sin_family = AF_INET;
        serverHint.sin_port = htons(port);
        bind(s, (sockaddr*)&serverHint, sizeof(serverHint));
    }

    ~UDPServer()
    {
        closesocket(s);
        WSACleanup();
    }

    bool Recieve()
    {
        ZeroMemory(&client, clientLength);
        if (dataAvailable(s))
        {
            ZeroMemory(messageBuffer, bufferSize);
            int bytesIn = recvfrom(s, messageBuffer, bufferSize, 0, (sockaddr*)&client, &clientLength);
            char clientIP[bufferSize];
            ZeroMemory(clientIP, bufferSize);
            inet_ntop(AF_INET, &client.sin_addr, clientIP, 256);
            return true;
        }
        return false;
    }

    std::string GetNetworkMessage()
    {
        std::string message = messageBuffer;
        return message;
    }

private:
    bool dataAvailable(int sock, int interval = 6000)
    {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(sock, &fds);
        timeval tv;
        tv.tv_sec = 0;
        tv.tv_usec = interval;
        return (select(sock + 1, &fds, 0, 0, &tv) == 1);
    }

private:
    SOCKET s;
    sockaddr_in client;
    int clientLength = sizeof(client);
    static constexpr int bufferSize = 512;
    unsigned short port;
    char messageBuffer[bufferSize] = {};
};

int main()
{
    //Create server object on the heap.
    std::unique_ptr<UDPServer> udp = std::make_unique<UDPServer>(6000);

    //Get some new threads mate.
    std::thread theThread;

    std::string oldString = "";
    while (true)
    {
        //Problems...
        theThread = std::thread{udp->Recieve()};
        if (udp->GetNetworkMessage() != oldString)
        {
            //print out any changed data we find.
            oldString = udp->GetNetworkMessage();
            std::cout << oldString << std::endl;
        }
    }
}
One of the items you weren't clear on is memory accessibility in threads. In Windows, and likely most other operating systems, any memory accessible in the main thread is also accessible by every other thread in the same process.
There are two issues with regard to threads and that memory. The first is how more than one thread can know where a given variable or class is in memory. This is generally solved by passing a pointer to the new thread when it is created. Most thread creation mechanisms provide a parameter for this. So this is the easier issue to solve.
The harder issue to solve is making sure that one thread doesn't change a variable or class while another thread is using it. Generally this is solved by using a mutual exclusion synchronization object, generally referred to as a mutex or a lock. I suggest learning about the concept of a mutex. But the bottom line is that it only allows one thread at a time to access whatever is locked by that mutex. So if one thread is busy changing or using that object, the other thread will wait until the first has unlocked the object before continuing.
But when you get into multiple locks, there is something called a deadlock. A simple example: Thread A holds lock 1 and is waiting to get access to lock 2. Thread B meanwhile is holding lock 2 and waiting for access to lock 1. So both threads are stuck waiting on the other. The solution is that any time you have to hold two locks, always take them in the same order. So in this case, if both threads always took lock 1 then lock 2, they couldn't deadlock; see the sketch below.
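In C++11 terms, a minimal sketch of that rule (names here are illustrative, not from the question's code):

#include <mutex>

std::mutex lock1, lock2;

void thread_a_work()
{
    // std::lock acquires both mutexes without deadlocking, whatever
    // order other threads attempt; adopt_lock hands ownership to the
    // guards so both are released on scope exit.
    std::lock(lock1, lock2);
    std::lock_guard<std::mutex> g1(lock1, std::adopt_lock);
    std::lock_guard<std::mutex> g2(lock2, std::adopt_lock);
    // ... use the objects protected by both locks ...
}

void thread_b_work()
{
    // Same acquisition order (or the same std::lock call) as above,
    // so the hold-1-wait-2 / hold-2-wait-1 cycle can never form.
    std::lock(lock1, lock2);
    std::lock_guard<std::mutex> g1(lock1, std::adopt_lock);
    std::lock_guard<std::mutex> g2(lock2, std::adopt_lock);
}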
The subject matter you want to learn about is threads and thread synchronization.

Creating a dispatch queue / thread handler in C++ with pipes: FIFOs overfilling

Threads are resource-heavy to create and use, so often a pool of threads will be reused for asynchronous tasks. A task is packaged up, and then "posted" to a broker that will enqueue the task on the next available thread.
This is the idea behind dispatch queues (i.e. Apple's Grand Central Dispatch), and thread handlers (Android's Looper mechanism).
Right now, I'm trying to roll my own. In fact, I'm plugging a gap in Android whereby there is an API for posting tasks in Java, but not in the native NDK. However, I'm keeping this question platform independent where I can.
Pipes are the ideal choice for my scenario. I can easily poll the file descriptor of the read-end of a pipe(2) on my worker thread, and enqueue tasks from any other thread by writing to the write-end. Here's what that looks like:
int taskRead, taskWrite;

void setup() {
    // Create the pipe
    int taskPipe[2];
    ::pipe(taskPipe);
    taskRead = taskPipe[0];
    taskWrite = taskPipe[1];

    // Set up a routine that is called when task_r reports new data
    function_that_polls_file_descriptor(taskRead, []() {
        // Read the callback data
        std::function<void(void)>* taskPtr;
        ::read(taskRead, &taskPtr, sizeof(taskPtr));

        // Run the task - this is unsafe! See below.
        (*taskPtr)();

        // Clean up
        delete taskPtr;
    });
}

void post(const std::function<void(void)>& task) {
    // Copy the function onto the heap
    auto* taskPtr = new std::function<void(void)>(task);

    // Write the pointer to the pipe - this may block if the FIFO is full!
    ::write(taskWrite, &taskPtr, sizeof(taskPtr));
}
This code puts a std::function on the heap, and passes the pointer to the pipe. The function_that_polls_file_descriptor then calls the provided expression to read the pipe and execute the function. Note that there are no safety checks in this example.
This works great 99% of the time, but there is one major drawback. Pipes have a limited size, and if the pipe is filled, then calls to post() will hang. This in itself is not unsafe, until a call to post() is made within a task.
auto evil = []() {
    // Post a new task back onto the queue
    post({});
    // Not enough new tasks, let's make more!
    for (int i = 0; i < 3; i++) {
        post({});
    }
    // Now for each time this task is posted, 4 more tasks will be added to the queue.
};
post(evil);
post(evil);
...
If this happens, then the worker thread will be blocked, waiting to write to the pipe. But the pipe's FIFO is full, and the worker thread is not reading anything from it, so the entire system is in deadlock.
What can be done to ensure that calls to post() emanating from the worker thread always succeed, allowing the worker to continue processing the queue in the event it is full?
Thanks to all the comments and other answers in this post, I now have a working solution to this problem.
The trick I've employed is to prioritise worker threads by checking which thread is calling post(). Here is the rough algorithm:
pipe ← NON-BLOCKING-PIPE()
overflow ← Ø

POST(task)
    success ← WRITE(task, pipe)
    IF NOT success THEN
        IF THREAD-IS-WORKER() THEN
            overflow ← overflow ∪ {task}
        ELSE
            WAIT(pipe)
            POST(task)
Then on the worker thread:
LOOP FOREVER
    task ← READ(pipe)
    RUN(task)
    FOR EACH overtask ∈ overflow
        RUN(overtask)
    overflow ← Ø
The wait is performed with pselect(2), adapted from the answer by @Sigismondo.
Here's the algorithm implemented in my original code example that will work for a single worker thread (although I haven't tested it after copy-paste). It can be extended to work for a thread pool by having a separate overflow queue for each thread.
#include <fcntl.h>
#include <unistd.h>
#include <sys/select.h>
#include <cerrno>
#include <functional>
#include <queue>
#include <thread>

int taskRead, taskWrite;

// These variables are only allowed to be modified by the worker thread
std::thread::id workerId;
std::queue<std::function<void(void)>*> overflow;
bool overflowInUse;

void setup() {
    int taskPipe[2];
    ::pipe(taskPipe);
    taskRead = taskPipe[0];
    taskWrite = taskPipe[1];

    // Make the pipe non-blocking to check pipe overflows manually
    ::fcntl(taskWrite, F_SETFL, ::fcntl(taskWrite, F_GETFL, 0) | O_NONBLOCK);

    // Save the ID of this worker thread to compare later
    workerId = std::this_thread::get_id();
    overflowInUse = false;

    function_that_polls_file_descriptor(taskRead, []() {
        // Read the callback data
        std::function<void(void)>* taskPtr;
        ::read(taskRead, &taskPtr, sizeof(taskPtr));

        // Run the task
        (*taskPtr)();
        delete taskPtr;

        // Run any tasks that were posted to the overflow
        while (!overflow.empty()) {
            taskPtr = overflow.front();
            overflow.pop();
            (*taskPtr)();
            delete taskPtr;
        }

        // Release the overflow mechanism if applicable
        overflowInUse = false;
    });
}

bool write(std::function<void(void)>* taskPtr, bool blocking = true) {
    ssize_t rc = ::write(taskWrite, &taskPtr, sizeof(taskPtr));

    // Failure handling
    if (rc < 0) {
        // If blocking is allowed, wait for the pipe to become writable
        if ((errno == EAGAIN || errno == EWOULDBLOCK) && blocking) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(taskWrite, &fds);
            ::pselect(taskWrite + 1, nullptr, &fds, nullptr, nullptr, nullptr);

            // Try again
            return write(taskPtr);
        }
        // Otherwise return false
        return false;
    }
    return true;
}

void post(const std::function<void(void)>& task) {
    auto* taskPtr = new std::function<void(void)>(task);
    if (std::this_thread::get_id() == workerId) {
        // The worker thread gets 1st-class treatment.
        // It won't be blocked if the pipe is full, instead
        // using an overflow queue until the overflow has been cleared.
        if (!overflowInUse) {
            bool success = write(taskPtr, false);
            if (!success) {
                overflow.push(taskPtr);
                overflowInUse = true;
            }
        } else {
            overflow.push(taskPtr);
        }
    } else {
        write(taskPtr);
    }
}
Make the pipe write file descriptor non-blocking, so that write fails with EAGAIN when the pipe is full.
One improvement is to increase the pipe buffer size.
Another is to use a UNIX socket/socketpair and increase the socket buffer size.
Yet another solution is to use a UNIX datagram socket which many worker threads can read from, but only one gets the next datagram. In other words, you can use a datagram socket as a thread dispatcher.
You can use good old select to determine whether the file descriptors are ready to be used for writing:
The file descriptors in writefds will be watched to see if space is available for write (though a large write may still block).
Since you are writing a pointer, your write() cannot be classified as large at all.
Clearly you must be ready to handle the fact that a post may fail, and then be ready to retry it later... otherwise you will be facing an indefinitely growing backlog until your system breaks again.
More or less (not tested):
bool post(const std::function<void(void)>& task) {
    bool post_res = false;

    // Copy the function onto the heap
    auto* taskPtr = new std::function<void(void)>(task);

    fd_set wfds;
    struct timeval tv;
    int retval;

    FD_ZERO(&wfds);
    FD_SET(taskWrite, &wfds);

    // Don't wait at all
    tv.tv_sec = 0;
    tv.tv_usec = 0;

    // nfds must be the highest-numbered fd plus one, not 1
    retval = select(taskWrite + 1, NULL, &wfds, NULL, &tv);

    // select() returns 0 when no FDs are ready
    if (retval == -1) {
        // handle error condition
    } else if (retval > 0) {
        // Write the pointer to the pipe. This write will succeed
        ::write(taskWrite, &taskPtr, sizeof(taskPtr));
        post_res = true;
    }
    if (!post_res)
        delete taskPtr;  // don't leak the heap copy when the post fails
    return post_res;
}
If you only look at Android/Linux, using a pipe is not state of the art; using an event file descriptor (eventfd) together with epoll is the way to go.
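For illustration, a minimal sketch of the eventfd variant, assuming a mutex-protected task queue elsewhere; the epoll loop that watches efd is omitted:

#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

int efd = eventfd(0, EFD_NONBLOCK);     // kernel-managed 64-bit counter

void notify()                           // producer: called after queueing a task
{
    uint64_t one = 1;
    ::write(efd, &one, sizeof(one));    // adds 1 to the counter; effectively never fills up like a pipe
}

void drain()                            // worker: called when epoll reports efd readable
{
    uint64_t count;
    ::read(efd, &count, sizeof(count)); // returns the counter and resets it to 0
    // pop up to 'count' tasks from the shared queue and run them
}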

Passing data to another thread in a C++ winsock app

So I have this winsock application (a server, able to accept multiple clients), where in the main thread I set up the socket and create another thread where I listen for clients (listen_for_clients function).
I also constantly receive data from a device in the main thread, which I afterwards append to char arrays (buffers) of Client objects (BroadcastSample function). Currently I create a thread for each connected client (ProcessClient function), where I initialize a Client object and push it to a global vector of clients, after which I send data to this client through the socket whenever the buffer in the corresponding Client object exceeds 4000 characters.
Is there a way I can send data from the main thread to the separate client threads so I don't have to use structs/classes (and also to send a green light when I want to send the already accumulated data)? Also, if I'm going to keep a global container of objects, what is a good way to remove a disconnected client's object from it without crashing the program because another thread is using the same container?
struct Client{
    int buffer_len;
    char current_buffer[5000];
    SOCKET s;
};

std::vector<Client*> clientBuffers;

DWORD WINAPI listen_for_clients(LPVOID Param)
{
    SOCKET client;
    sockaddr_in from;
    int fromlen = sizeof(from);
    char buf[100];
    while(true)
    {
        client = accept(ListenSocket, (struct sockaddr*)&from, &fromlen);
        if(client != INVALID_SOCKET)
        {
            printf("Client connected\n");
            unsigned dwThreadId;
            HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, &ProcessClient, (void*)client, 0, &dwThreadId);
        }
    }
    closesocket(ListenSocket);
    WSACleanup();
    ExitThread(0);
}

unsigned __stdcall ProcessClient(void *data)
{
    SOCKET ClientSocket = (SOCKET)data;
    Client * a = new Client();
    a->current_buffer[0] = '\0';
    a->buffer_len = 0;
    a->s = ClientSocket;
    clientBuffers.push_back(a);
    char szBuffer[255];
    while(true)
    {
        if(a->buffer_len > 4000)
        {
            send(ClientSocket, a->current_buffer, sizeof(a->current_buffer), 0);
            memset(a->current_buffer, 0, 5000);
            a->buffer_len = 0;
            a->current_buffer[0] = '\0';
        }
    }
    exit(1);
}

//function below is called only in main thread, about every 100ms
void BroadcastSample(Sample s)
{
    for(std::vector<Client*>::iterator it = clientBuffers.begin(); it != clientBuffers.end(); it++)
    {
        strcat((*it)->current_buffer, s.to_string);
        (*it)->buffer_len += strlen(s.to_string);
    }
}
This link has some Microsoft documentation on MS-style mutexes (muticies?).
This other link has some general info on mutexes.
Mutexes are the general mechanism for protecting data which is accessed by multiple threads. There are data structures with built-in thread safety, but in my experience, they usually have caveats that you'll eventually miss. That's just my two cents.
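For instance, a sketch of what serialising access to clientBuffers might look like; std::mutex is shown since the code is already C++, but a Win32 CRITICAL_SECTION works the same way (clientsMutex is an invented name):

#include <mutex>

std::mutex clientsMutex;   // guards clientBuffers and every Client buffer

// main thread, about every 100ms
void BroadcastSample(Sample s)
{
    std::lock_guard<std::mutex> guard(clientsMutex);
    for (std::vector<Client*>::iterator it = clientBuffers.begin(); it != clientBuffers.end(); it++)
    {
        // strncat instead of strcat, leaving room for the terminator
        strncat((*it)->current_buffer, s.to_string,
                sizeof((*it)->current_buffer) - strlen((*it)->current_buffer) - 1);
        (*it)->buffer_len = (int)strlen((*it)->current_buffer);
    }
}
// The client threads take the same lock around their buffer_len
// check and the send/reset of current_buffer.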
Also, for the record, you shouldn't use strcat, but rather strncat. Also, if one of your client servicing threads accesses one of those buffers after strncat overwrites the old '\0' but before it appends the new one, you'll have a buffer overread (read past end of allocated buffer).
Mutexes will also solve your current busy-waiting problem. I'm not currently near a windows compiler, or I'd try to help more.

Handling threads in server application after clients disconnect

I'm currently working on a simple HTTP server. I use Winsock and standard threads from C++11. For each connected (accepted) client a new thread is created.
std::map<SOCKET, std::thread> threads;
bool server_running = true;

while(server_running) {
    SOCKET client_socket;
    client_socket = accept(listen_socket, NULL, NULL);
    if(client_socket == INVALID_SOCKET) {
        // some error handling
    }
    threads[client_socket] = std::thread(clientHandler, client_socket);
}
clientHandler function looks generally like this:
while(1) {
    while(!all_data_received) {
        bytes_received = recv(client_socket, recvbuf, recvbuflen, 0);
        if(bytes_received > 0) {
            // do something
        } else {
            goto client_cleanup;
        }
    }
    // do something
}

client_cleanup: // we also get here when Connection: close was received
closesocket(client_socket);
And here we come to my problem: how to handle all the threads which have ended but haven't been joined with the main thread, and to which references still exist in the threads map?
The simplest solution would probably be to iterate over threads frequently (e.g. from another thread?) and join and delete those which have returned.
Please share your expertise. :)
PS. Yes, I know about the thread pool pattern. I'm not using it in my app (for better or worse). I'm looking for an answer concerning my current architecture.
Simple solution? Just detach() after you start the thread. This will mean that once the thread terminates the resources will be cleaned up and you don't need to keep the std::map<SOCKET, std::thread> threads.
std::thread(clientHandler, client_socket).detach();
Otherwise, create a thread-safe LIFO queue and, during cleanup, have each client thread push its socket to it.
Then in the main loop you alternately check accept and that queue; when the queue has sockets in it, you do threads[socket].join(); threads.erase(socket); for each socket in the queue (the join returns at once, since the thread has already finished).
However, if you do that then you may as well put the LIFO in the other direction and use a thread pool.
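A rough sketch of that queue-and-reap arrangement, assuming the Winsock headers from the question (the helper names are invented):

#include <map>
#include <mutex>
#include <thread>
#include <vector>

std::mutex finishedMutex;
std::vector<SOCKET> finishedSockets;      // filled by exiting client threads

void markFinished(SOCKET s)               // last thing clientHandler does
{
    std::lock_guard<std::mutex> lock(finishedMutex);
    finishedSockets.push_back(s);
}

void reapThreads(std::map<SOCKET, std::thread>& threads)  // called from the accept loop
{
    std::vector<SOCKET> done;
    {
        std::lock_guard<std::mutex> lock(finishedMutex);
        done.swap(finishedSockets);
    }
    for (SOCKET s : done) {
        threads[s].join();                // returns immediately, the thread is done
        threads.erase(s);
    }
}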

valgrind/helgrind gets killed on stress test

I'm making a web server on Linux in C++ with pthreads. I tested it with valgrind for leaks and memory problems: all fixed. I tested it with helgrind for thread problems: all fixed. Then I tried a stress test, and I'm getting a problem when the program is run under helgrind:
valgrind --tool=helgrind ./chats
It just dies in random places with the text "Killed", as it would when I kill it with kill -9. The only report I sometimes get from helgrind is that the program exits while still holding some locks, which is normal when it gets killed.
When checking for leaks:
valgrind --leak-check=full ./chats
it's more stable, but I managed to make it die once with a few hundred concurrent connections.
I tried running the program alone and couldn't make it crash at all. I tried up to 250 concurrent connections. Each thread delays 100 ms to make it easier to have multiple connections at the same time. No crash.
In all cases neither threads nor connections get above 10, and I see it crash even with 2 connections, but never with only one connection at a time (counting the main thread and one helper thread, that is 3 threads in total).
Is it possible that the problem only happens when run under helgrind, or does helgrind just make it more likely to show up?
What could be the reason for a program getting killed (by the kernel?): allocating too much memory, too many file descriptors?
I tested a bit more and I found out that it only dies when the client times out and closes the connection. So here is the code which detects that the client closed the socket:
void *TcpClient::run(){
    int ret;
    struct timeval tv;
    char * buff = (char *)malloc(10001);
    int br;

    colorPrintf(TC_GREEN, "new client starting: %d\n", sockFd);
    while(isRunning()){
        tv.tv_sec = 0;
        tv.tv_usec = 500*1000;
        FD_ZERO(&readFds);          // reset the set on every iteration
        FD_SET(sockFd, &readFds);
        ret = select(sockFd+1, &readFds, NULL, NULL, &tv);
        if(ret < 0){
            //select error
            continue;
        }else if(ret == 0){
            // no data to read
            continue;
        }
        br = read(sockFd, buff, 10000);
        if (br <= 0){
            // client disconnected (or read error);
            setRunning(false);
            break;
        }
        buff[br] = 0;               // terminate only after checking br
        if (reader != NULL){
            reader->tcpRead(this, std::string(buff, br));
        }else{
            readBuffer.append(buff, br);
        }
        //printf("received: %s\n", buff);
    }
    free(buff);
    sendFeedback((void *)1);
    colorPrintf(TC_RED, "closing client socket: %d\n", sockFd);
    ::close(sockFd);
    sockFd = -1;
    return NULL;
}
// this method writes to socket
bool TcpClient::write(std::string data){
    int bw;
    int dataLen = data.length();
    bw = ::write(sockFd, data.data(), dataLen);
    if (bw != dataLen){
        return false; // I don't close the socket in this case, maybe I should
    }
    return true;
}
P.S. Threads are:
main thread. connections are accepted here.
one helper thread which listens for signals and sends signals. It blocks signal reception for the app and manually polls the signal queue. The reason is that it's hard to handle signals when using threads. I found this technique here on Stack Overflow and it seems to work pretty well in other projects.
client connection threads
The full code is pretty big, but I can post chunks if someone is interested.
Update:
I managed to trigger the problem with only one connection. It's all happening in the client thread. This is what I do:
I read/parse headers. I put a delay before writing so the client can time out (which causes the problem).
Here the client times out and leaves (probably closing the socket).
I write back headers
I write back the html code.
Here is how I write back:
bw = ::write(sockFd, data.data(), dataLen);
// bw == dataLen == 108 when writing the headers

// then the second write, for the HTML, kills the program; there is a
// message before and after write()
bw = ::write(sockFd, data.data(), dataLen); // doesn't get past this point the second time
Update 2: Got it :)
gdb says:
Program received signal SIGPIPE, Broken pipe.
[Switching to Thread 0x41401940 (LWP 10554)]
0x0000003ac2e0d89b in write () from /lib64/libpthread.so.0
Question 1: What should I do to avoid receiving this signal?
Question 2: How do I know that the remote side disconnected while writing? On read, select returns that there is data but the data read is 0. What about write?
Well, I just had to handle the SIGPIPE signal: when write returned -1, I close the socket and quit the thread gracefully. Works like a charm.
I guess the easiest way is to set the signal handler of SIGPIPE to SIG_IGN:
signal(SIGPIPE, SIG_IGN);
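With the signal ignored, the failed write reports the dead connection through errno instead, so the write path can check for EPIPE; a sketch along the lines of the question's TcpClient::write (on Linux, send() with the MSG_NOSIGNAL flag suppresses the signal per call, without a global handler):

#include <cerrno>
#include <string>
#include <unistd.h>

bool TcpClient::write(std::string data){
    int dataLen = data.length();
    int bw = ::write(sockFd, data.data(), dataLen);
    if (bw < 0 && errno == EPIPE){
        // remote side closed the connection: stop this client gracefully
        setRunning(false);
        return false;
    }
    return bw == dataLen;
}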
Note that the first write was successful and didn't kill the program. If you have a similar problem, check whether you are writing once or multiple times. If you are not familiar with gdb, this is how to do it:
gdb ./your-program
> run
and gdb will tell you all about signals and segfaults.