Log queue in a multithreaded application - C++

I wrote a network logger which works in a separate thread. The idea was to allow the application to push any amount of data and have the logger process it separately, without slowing down the main thread. The pseudocode looks like:
void LogCoroutine::runLogic()
{
    mBackgroundWorker = std::thread(&LogCoroutine::logic, this);
    mBackgroundWorker.detach();
}

void LogCoroutine::logic()
{
    while (true)
    {
        _serverLogic();
        _senderLogic();
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // 10ms
    }
}
void LogCoroutine::_senderLogic()
{
    std::lock_guard<std::mutex> lock(mMutex);
    while (!mMessages.empty() && !mClients.empty())
    {
        std::string nextMessage = mMessages.front();
        mMessages.pop_front();
        _sendMessage(nextMessage);
    }
}
_serverLogic checks the socket for new connections (peers), and _senderLogic drains the message queue, sending each message to all connected peers.
And the last function, which pushes a message:
void LogCoroutine::pushMessage(const std::string& message)
{
    std::lock_guard<std::mutex> lock(mMutex);
    mMessages.push_back(message);
}
Everything works well as long as messages are not sent very often. But there is a loop at application startup which logs a lot of information, and there the application hangs for 5-10 seconds; with logging disabled it doesn't slow down.
So, where is the bottleneck in this architecture? Maybe pushing each message under a mutex is a bad idea?

Your approach is basically polling for log events at some interval (10 ms). This approach (which is in fact busy waiting) is not very performant, since you always consume some CPU even if there are no log messages. On the other hand, when a new message arrives, you don't notify the waiting thread.
I would propose using some kind of blocking queue, which solves both issues. Internally, a blocking queue has a mutex and a condition variable, so the consumer thread waits (not busy-loops!) while the queue is empty. I think your use case is just ideal for a blocking queue. You can quite easily implement your own queue based on a mutex plus a condition variable.
Pushing each message under a mutex is not a bad idea; you have to synchronize it anyway. I would just propose getting rid of the polling.
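A minimal sketch of such a blocking queue (illustrative only; the member names mirror the question's code):

#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>

// The consumer sleeps on the condition variable while the queue is
// empty, instead of polling on a timer.
class BlockingQueue
{
public:
    void push(const std::string& message)
    {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mMessages.push_back(message);
        }
        mCondition.notify_one(); // wake the consumer if it is waiting
    }

    // Blocks until a message is available.
    std::string pop()
    {
        std::unique_lock<std::mutex> lock(mMutex);
        mCondition.wait(lock, [this] { return !mMessages.empty(); });
        std::string message = mMessages.front();
        mMessages.pop_front();
        return message;
    }

private:
    std::mutex mMutex;
    std::condition_variable mCondition;
    std::deque<std::string> mMessages;
};

The logger thread then just loops on pop() and sends each message as it arrives. Since your _serverLogic() also needs to run periodically to accept new peers, you could use a timed wait (wait_for with the same predicate) instead of wait, so the loop still wakes up occasionally even when no messages arrive.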

See this example:
How to use work queues for producer & consumers (1 to many). Very well explained.

Related

How can I inform a thread running libevent that it should take some action?

I am using libevent in a backend thread to run hiredis and subscribe to a remote redis database. The subscription works superbly using the simple examples from another SO question:
Hiredis waiting for message
However, in order to avoid race conditions it is not trivial to add subscriptions from the main thread. To achieve this I've created a std::vector<std::string> object containing any key strings that the backend should subscribe to. Access to this vector is protected by a mutex.
However, how can I inform the backend that I've added some subscriptions? Currently I use a timer set to a very low resolution:
void Client::fireAndRequeueTimer(int fd, short e, void* arg)
{
    Client* client = reinterpret_cast<Client*>(arg); // the client handles the subscription to redis (via hiredis/libevent)

    if (client->mDisconnect)
        return; // the main thread wants us to exit, so we don't recreate the timer

    event* ev = &client->mTimerEvent; // some timer event object we created
    timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 1000; // 1ms
    evtimer_add(ev, &tv);

    // mPendingSubscriptions is a std::vector of strings containing the keys we
    // should subscribe to. Note the lock is taken before the emptiness check,
    // since the main thread may be appending to the vector concurrently.
    std::unique_lock<std::mutex> lock(client->mSubscriptionsMutex);
    while (!client->mPendingSubscriptions.empty())
    {
        redisAsyncCommand(
            client->mContext,
            Client::subCallback,
            (char*)"sub",
            "SUBSCRIBE %s",
            client->mPendingSubscriptions.back().c_str());
        client->mPendingSubscriptions.pop_back();
    }
}
(note that I'm using libevent 1.4.x so features such as EV_PERSIST don't exist and I have to recreate the timer at each event).
While the above works, I am not happy with it for the following reasons:
It places unnecessary strain on the backend to continually poll the vector.
It is difficult for the reader to follow without extensive comments.
It is slow; this timer will add as much as 1ms to the time it takes to subscribe to an event. This might be significant, or it might not, but either way it's a waste of time.
Are there any solutions to this problem that will address these concerns within the confines of libevent 1.4.x?
Personally, I prefer to have the target thread add an eventfd (or similar construct) to its event queue.
The eventfd can be notified from any other thread safely and cause the target thread to call the associated event handler.
This way, you do not need to worry about correctly locking the absolute minimum of the libevent structures, as the OS takes care of that for you.
Note: eventfd is not available on OSX, but can be easily emulated with a pipe as long as you do not require an extremely high rate of events.
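As a rough sketch (reusing the question's Client type and the libevent 1.4 event_set/event_add API; error handling omitted, and on OSX the eventfd would be replaced by a pipe as noted above):

#include <event.h>        // libevent 1.4
#include <sys/eventfd.h>  // Linux-only
#include <stdint.h>
#include <unistd.h>

// Runs in the backend thread whenever another thread writes to the eventfd.
static void onWakeup(int fd, short /*events*/, void* arg)
{
    uint64_t count;
    read(fd, &count, sizeof(count)); // drain the counter so later writes wake us again
    Client* client = reinterpret_cast<Client*>(arg);
    // ... lock client->mSubscriptionsMutex and issue the pending SUBSCRIBEs ...
}

// Backend thread, during setup. If EV_PERSIST is not usable in your build,
// re-add the event at the end of onWakeup instead.
int setupWakeup(Client* client, event* ev)
{
    int efd = eventfd(0, 0);
    event_set(ev, efd, EV_READ | EV_PERSIST, onWakeup, client);
    event_add(ev, NULL);
    return efd;
}

// Main thread: append to mPendingSubscriptions under the mutex, then wake the backend.
void notifyBackend(int efd)
{
    uint64_t one = 1;
    write(efd, &one, sizeof(one)); // thread-safe wakeup, no timer needed
}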

Portable C++11 way to wait on a socket and bool var at the same time

I have a thread with an endless loop that receives and processes data from a socket. I want that thread to block (sleep) until data becomes available on the socket for reading or the "exit" boolean variable becomes true (set by a different thread). Is it possible to do that in a portable way, without polling and without any third-party libraries (except a sockets library, naturally)? If it is not possible portably, what would be the best way to do it under Windows (still NO polling and no third-party libraries)?
Example code:
bool exit = false; // or "std::atomic<bool> exit" or anything else

void fn()
{
    SOCKET s;
    // init socket, establish connection, etc
    for (;;)
    {
        // This thread goes to wait (blocks) until data becomes available
        // on the socket OR the exit var is set to true (by a different
        // thread) - how?
        if (exit) break;
        // receive and process data from socket
    }
}
Set up a queue of messages.
These messages are of the form "PleaseExit" or "DataOnSocket".
Your thread, or task, is activated when anything shows up in the queue, processes the queue, then waits on the queue again. If it gets "PleaseExit" it instead starts cleaning up.
Possibly you will have to have a different thread/task waiting on the condition variable and on the socket to ferry the information over to your unified queue.
I say "thread or task", because having an entire thread dedicated to waiting is overkill. Sadly, C++11 threading doesn't support light weight tasks out of the box perfectly.
Basically, this solution allows a thread to wait on multiple events by delegating the waiting on each event to other threads, which send notifications "up the pipe". You could imagine creating a common infrastructure, where your thread that wants to wait on multiple objects tells the dispatch center what it is waiting for, then waits on its own condition condition variable.
The dispatch center waits on each of the things your thread wanted to wait for, and when they occur proceeds to figure out which threads should be notified, then notifies them.
Far, far from ideal, but it does let you do it in fully standards-compliant C++11 land. And it can give you an interface much like "wait for multiple objects" from Windows. (In fact, on Windows, you could do away with much of the machinery if the native_handle of your C++11 synchronization primitives is amenable.)
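A minimal sketch of the unified-queue idea (all names are illustrative):

#include <condition_variable>
#include <deque>
#include <mutex>

enum class Msg { PleaseExit, DataOnSocket };

std::mutex gMutex;
std::condition_variable gCv;
std::deque<Msg> gQueue;

// Called by the ferry thread (blocked in recv()/select() on the socket)
// and by whichever thread requests shutdown.
void post(Msg m)
{
    {
        std::lock_guard<std::mutex> lock(gMutex);
        gQueue.push_back(m);
    }
    gCv.notify_one();
}

void workerLoop()
{
    for (;;)
    {
        std::unique_lock<std::mutex> lock(gMutex);
        gCv.wait(lock, [] { return !gQueue.empty(); });
        Msg m = gQueue.front();
        gQueue.pop_front();
        lock.unlock();

        if (m == Msg::PleaseExit)
            break; // clean up and leave
        // m == Msg::DataOnSocket: receive and process the data
    }
}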

Which protection method to use (mutex, read-write lock, ...) on a thread inner function

I have a thread that polls data from a web service and then sends it to a different class to handle. Processing the data can take a long time, sometimes longer than the timer interval that invokes the polling function inside the thread.
I would like to protect this polling function so that, while processing of the data is in progress, the function is not entered again.
My flow is like this:
workerThread -> start timer -> timer invokes the polling method -> the polling method gets the data and sends it for processing -> meanwhile the polling method can be called again.
If your polling function can take longer to execute than the polling interval, then in your function implementation you could attempt to lock a mutex and simply skip the round if it is still held:

void pollingFunction() {
    bool isLocked = mutex.tryLock(3000); // timeout if you want
    if (!isLocked)
        return; // previous poll still in progress, skip this round

    // process the data

    mutex.unlock();
}
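The tryLock(ms) call above looks like Qt's QMutex API; with only the standard library, the same idea can be sketched with std::timed_mutex (names illustrative):

#include <chrono>
#include <mutex>

std::timed_mutex gPollMutex;

void pollingFunction()
{
    // Give up if the previous poll is still running after 3 seconds.
    if (!gPollMutex.try_lock_for(std::chrono::seconds(3)))
        return;

    // ... fetch and process the data ...

    gPollMutex.unlock();
}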
I assume you are using at least two threads: one is triggered by the timer, the other one handles the polled data. The Monitor Object pattern will work for this: define a queue for the polled data and two condition variables (not-full and not-empty). If the queue is not full, the poller can run and put the data into the queue; if it is not empty, the handler can retrieve the data and process it.
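A minimal sketch of such a monitor in C++11 (names are illustrative):

#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

// Bounded queue guarded by one mutex and two condition variables.
// The poller blocks while the queue is full; the handler blocks while
// it is empty. T is whatever the polled data looks like.
template <typename T>
class BoundedQueue
{
public:
    explicit BoundedQueue(std::size_t capacity) : mCapacity(capacity) {}

    void put(T item) // called by the polling thread
    {
        std::unique_lock<std::mutex> lock(mMutex);
        mNotFull.wait(lock, [this] { return mItems.size() < mCapacity; });
        mItems.push_back(std::move(item));
        mNotEmpty.notify_one();
    }

    T take() // called by the handling thread
    {
        std::unique_lock<std::mutex> lock(mMutex);
        mNotEmpty.wait(lock, [this] { return !mItems.empty(); });
        T item = std::move(mItems.front());
        mItems.pop_front();
        mNotFull.notify_one();
        return item;
    }

private:
    std::mutex mMutex;
    std::condition_variable mNotFull, mNotEmpty;
    std::deque<T> mItems;
    const std::size_t mCapacity;
};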

I want to wait on both a file descriptor and a mutex, what's the recommended way to do this?

I would like to spawn off threads to perform certain tasks, and use a thread-safe queue to communicate with them. I would also like to be doing IO to a variety of file descriptors while I'm waiting.
What's the recommended way to accomplish this? Do I have to create an inter-thread pipe and write to it when the queue goes from no elements to some elements? Isn't there a better way?
And if I have to create the inter-thread pipe, why don't more libraries that implement shared queues allow you to create the shared queue and inter-thread pipe as a single entity?
Does the fact I want to do this at all imply a fundamental design flaw?
I'm asking this about both C++ and Python. And I'm mildly interested in a cross-platform solution, but primarily interested in Linux.
For a more concrete example...
I have some code which will be searching for stuff in a filesystem tree. I have several communications channels open to the outside world through sockets. Requests that may (or may not) result in a need to search for stuff in the filesystem tree will be arriving.
I'm going to isolate the code that searches for stuff in the filesystem tree in one or more threads. I would like to take requests that result in a need to search the tree and put them in a thread-safe queue of things to be done by the searcher threads. The results will be put into a queue of completed searches.
I would like to be able to service all the non-search requests quickly while the searches are going on. I would like to be able to act on the search results in a timely fashion.
Servicing the incoming requests would generally imply some kind of event-driven architecture that uses epoll. The queue of disk-search requests and the return queue of results would imply a thread-safe queue that uses mutexes or semaphores to implement the thread safety.
The standard way to wait on an empty queue is to use a condition variable. But that won't work if I need to service other requests while I'm waiting. Either I end up polling the results queue all the time (and delaying the results by half the poll interval, on average), or I end up blocking on the condition variable and not servicing requests.
Whenever one uses an event-driven architecture, one is required to have a single mechanism to report event completion. On Linux, if one is using files, one is required to use something from the select or poll family, meaning that one is stuck with using a pipe to initiate all non-file-related events.
Edit: Linux has eventfd and timerfd. These can be added to your epoll list and used to break out of the epoll_wait when either triggered from another thread or on a timer event respectively.
There is another option, and that is signals. One can use fcntl to modify the file descriptor such that a signal is emitted when the file descriptor becomes active. The signal handler may then push a file-ready message onto any type of queue of your choosing. This may be a simple semaphore or mutex/condvar-driven queue. Since one is no longer using select/poll, one no longer needs a pipe to queue non-file-based messages.
Health warning: I have not tried this and although I cannot see why it will not work, I don't really know the performance implications of the signal approach.
Edit: Manipulating a mutex in a signal handler is probably a very bad idea.
I've solved this exact problem using what you mention: pipe() and libevent (which wraps epoll). The worker thread writes a byte to its pipe FD when its output queue goes from empty to non-empty. That wakes up the main IO thread, which can then grab the worker thread's output. This works great and is actually very simple to code.
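In sketch form (error handling omitted; names are illustrative):

#include <fcntl.h>
#include <sys/epoll.h>
#include <unistd.h>

// The worker owns pipefd[1]; the IO thread polls pipefd[0].
int pipefd[2];

void setupWakeupPipe(int epfd)
{
    pipe(pipefd);
    fcntl(pipefd[0], F_SETFL, O_NONBLOCK); // so draining never blocks

    epoll_event ev = {};
    ev.events = EPOLLIN;
    ev.data.fd = pipefd[0];
    epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev);
}

// Worker thread: call after pushing onto its output queue while the
// queue was previously empty.
void wakeIoThread()
{
    char byte = 0;
    write(pipefd[1], &byte, 1);
}

// IO thread: when epoll reports pipefd[0] readable, drain the pipe,
// then lock the worker's queue mutex and collect the results.
void onWakeup()
{
    char buf[64];
    while (read(pipefd[0], buf, sizeof(buf)) > 0) {}
}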
You have the Linux tag so I am going to throw this out: POSIX Message Queues do all this, which should fulfill your "built-in" request if not your less desired cross-platform wish.
The thread-safe synchronization is built-in. You can have your worker threads block on read of the queue. Alternatively MQs can use mq_notify() to spawn a new thread (or signal an existing one) when there is a new item put in the queue. And since it looks like you are going to be using select(), MQ's identifier (mqd_t) can be used as a file descriptor with select.
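A rough sketch of that setup (the queue name and sizes are illustrative; it relies on the Linux behavior that an mqd_t is a plain file descriptor, and links with -lrt):

#include <fcntl.h>
#include <mqueue.h>
#include <sys/select.h>

// Main thread: open (or create) the results queue non-blocking so it
// can be drained fully after select() reports it readable.
mqd_t openResultQueue()
{
    mq_attr attr = {};
    attr.mq_maxmsg = 10;
    attr.mq_msgsize = 256;
    return mq_open("/search_results", O_CREAT | O_RDONLY | O_NONBLOCK, 0600, &attr);
}

// Worker thread: publish a completed search (queue opened with O_WRONLY).
void publishResult(mqd_t mq, const char* msg, size_t len)
{
    mq_send(mq, msg, len, 0); // 0 = default priority
}

// Main thread: wait on a socket and the queue at the same time.
void waitOnce(mqd_t mq, int sockFd)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sockFd, &readfds);
    FD_SET(mq, &readfds); // Linux: mqd_t is a file descriptor
    int maxfd = (sockFd > (int)mq) ? sockFd : (int)mq;
    select(maxfd + 1, &readfds, NULL, NULL, NULL);

    if (FD_ISSET(mq, &readfds))
    {
        char buf[256]; // must be >= mq_msgsize
        unsigned prio;
        while (mq_receive(mq, buf, sizeof(buf), &prio) >= 0)
        {
            // ... act on each completed search ...
        }
    }
    // ... handle sockFd as usual ...
}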
It seems nobody has mentioned this option yet:
Don't run select/poll/etc. in your "main thread". Start a dedicated secondary thread which does the I/O and pushes notifications into your thread-safe queue (the same queue which your other threads use to communicate with the main thread) when I/O operations complete.
Then your main thread just needs to wait on the notification queue.
Duck's and twk's are actually better answers than doron's (the one selected by the OP), in my opinion. doron suggests writing to a message queue from within the context of a signal handler, and states that the message queue can be "any type of queue." I would strongly caution you against this since many C library/system calls cannot safely be called from within a signal handler (see async-signal-safe).
In particular, if you choose a queue protected by a mutex, you should not access it from a signal handler. Consider this scenario: your consumer thread locks the queue to read it. Immediately after, the kernel delivers the signal to notify you that a file descriptor now has data on it. Your signal handler runs (in the consumer thread, necessarily) and tries to put something on your queue. To do this, it first has to take the lock. But the thread already holds the lock, so you are now deadlocked.
select/poll is, in my experience, the only viable solution to an event-driven program in UNIX/Linux. I wish there were a better way inside a multithreaded program, but you need some mechanism to "wake up" your consumer thread. I have yet to find a method that does not involve a system call (since the consumer thread is on a wait queue inside the kernel during any blocking call such as select).
EDIT: I forgot to mention one Linux-specific way to handle signals when using select/poll: signalfd(2). You get a file descriptor you can select/poll on, and your handling code runs normally instead of in a signal handler's context.
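A small sketch of signalfd in use (SIGUSR1 chosen for illustration):

#include <signal.h>
#include <sys/signalfd.h>
#include <unistd.h>

// Turn a signal into a descriptor for select/poll/epoll, so the handling
// code runs as ordinary thread code, not in signal-handler context.
int makeSignalFd()
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGUSR1);

    // Block normal delivery (ideally before spawning threads) so the
    // signal is reported only through the descriptor.
    pthread_sigmask(SIG_BLOCK, &mask, NULL);
    return signalfd(-1, &mask, 0);
}

// Event loop: call when the descriptor is readable.
void onSignal(int sfd)
{
    signalfd_siginfo info;
    read(sfd, &info, sizeof(info)); // info.ssi_signo says which signal fired
}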
This is a very commonly seen problem, especially when you are developing network server-side programs. Most Linux server-side programs' main loop looks like this:
epoll_add(serv_sock);
while (1) {
    ret = epoll_wait();
    foreach (ret as fd) {
        req = fd.read();
        resp = proc(req);
        fd.send(resp);
    }
}
It is a single-threaded (the main thread), epoll-based server framework. The problem is that it is single-threaded, not multi-threaded: it requires that proc() never block or run for a significant time (say, 10 ms for common cases).
If proc() ever runs for a long time, WE NEED MULTIPLE THREADS, and must execute proc() in a separate thread (the worker thread).
We can submit a task to the worker thread without blocking the main thread, using a mutex-based message queue; it is fast enough.
epoll_add(serv_sock);
while (1) {
    ret = epoll_wait();
    foreach (ret as fd) {
        req = fd.read();
        queue.add_job(req); // fast, non-blocking
    }
}
Then we need a way to obtain the task results from the worker threads. How? What if we just check the message queue directly, before or after epoll_wait()?
epoll_add(serv_sock);
while (1) {
    ret = epoll_wait();              // may block for 10 ms
    resp = queue.check_result();     // fast, non-blocking
    foreach (ret as fd) {
        req = fd.read();
        queue.add_job(req);          // fast, non-blocking
    }
}
However, the checking action executes only after epoll_wait() returns, and epoll_wait() usually blocks for the full timeout (say, 10 ms in common cases) if none of the file descriptors it waits on are active.
For a server, 10 ms is quite a long time! Can we signal epoll_wait() to return immediately when a task result is generated?
Yes! I will describe how it is done in one of my open source projects:
Create a pipe shared with all worker threads, and have epoll wait on the read end of that pipe as well. Once a task result is generated, the worker thread writes one byte into the pipe, and epoll_wait() returns at nearly the same moment - a Linux pipe has 5 us to 20 us of latency.
In my project SSDB (a Redis-protocol-compatible on-disk NoSQL database), I created a SelectableQueue for passing messages between the main thread and worker threads. Just as its name suggests, a SelectableQueue has a file descriptor, which can be waited on by epoll.
SelectableQueue: https://github.com/ideawu/ssdb/blob/master/src/util/thread.h#L94
Usage in main thread:
epoll_add(serv_sock);
epoll_add(queue->fd());
while (1) {
    ret = epoll_wait();
    foreach (ret as fd) {
        if (fd is queue) {
            sock, resp = queue->pop_result();
            sock.send(resp);
        }
        if (fd is client_socket) {
            req = fd.read();
            queue->add_task(fd, req);
        }
    }
}
Usage in worker thread:
fd, req = queue->pop_task();
resp = proc(req);
queue->add_result(fd, resp);
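The real implementation is in the file linked above; a minimal sketch of the idea (a mutex-protected queue paired with a pipe whose read end is handed to epoll) looks like this:

#include <deque>
#include <mutex>
#include <unistd.h>

template <typename T>
class SelectableQueue
{
public:
    SelectableQueue() { pipe(mPipe); }
    ~SelectableQueue() { close(mPipe[0]); close(mPipe[1]); }

    int fd() const { return mPipe[0]; } // hand this to epoll

    void push(T item)
    {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mItems.push_back(std::move(item));
        }
        char byte = 0;
        write(mPipe[1], &byte, 1); // makes fd() readable
    }

    T pop() // call when epoll reports fd() readable
    {
        char byte;
        read(mPipe[0], &byte, 1); // consume one notification per item
        std::lock_guard<std::mutex> lock(mMutex);
        T item = std::move(mItems.front());
        mItems.pop_front();
        return item;
    }

private:
    int mPipe[2];
    std::mutex mMutex;
    std::deque<T> mItems;
};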
C++11 has std::mutex and std::condition_variable. The two can be used to have one thread signal another when a certain condition is met. It sounds to me like you will need to build your solution out of these primitives. If your environment does not yet support these C++11 library features, you can find very similar ones in Boost. Sorry, I can't say much about Python.
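For reference, the basic signaling pattern with those two primitives looks like this (a minimal sketch):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ready = false;

void producer() // e.g. a searcher thread finishing a request
{
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;
    }
    cv.notify_one();
}

void consumer() // e.g. the thread awaiting results
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return ready; }); // no busy waiting, no lost wakeups
}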
One way to accomplish what you're looking to do is by implementing the Observer Pattern.
You would register your main thread as an observer with all your spawned threads, and have them notify it when they are done doing what they were supposed to do (or give progress updates with the info you need).
Basically, you want to change your approach to an event-driven model.
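A rough C++11 sketch of that idea (names are illustrative; note the callback runs on the worker's thread, so in practice it would just post a notification into the main thread's queue):

#include <functional>
#include <thread>
#include <vector>

// Each spawned thread is handed an "observer" callback to invoke when
// it finishes (or wants to report progress).
void spawnWorkers(std::vector<std::thread>& workers,
                  std::function<void(int)> onDone)
{
    for (int id = 0; id < 4; ++id)
    {
        workers.emplace_back([id, onDone]
        {
            // ... do the thread's actual work ...
            onDone(id); // notify the observer; runs on the worker's thread
        });
    }
}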

Dedicated thread (one thread per connection) with buffering capability (C/C++)

My process reads, from a single queue, tasks that need to be sent to several destinations.
We need to maintain order between the tasks (i.e. a task that arrived in the queue at 00:00 needs to be sent before the task that arrived at 00:01), therefore we cannot use a thread pool. Order needs to be maintained per destination.
One solution is to create a dedicated thread per destination. The main thread reads a task from the queue and, depending on the destination, hands it to the correct thread.
This solution has a problem: if a worker thread is busy, the master thread would remain blocked, making the system slow. What I need is a new queue per thread: the master thread distributes the tasks to the per-thread queues, and each worker thread reads its own queue for incoming messages...
I would like to share my thoughts with the SO community, and I am searching for a C/C++ solution close to my description. Is there a library that implements such a model?
The design you want is fairly straightforward; I think you can probably write the code you need and get it working in an hour or two. Looking for a 3rd party library to implement this is probably overkill (unless I am misunderstanding the problem).
In particular, for each 'worker' thread, you need a FIFO data structure (e.g. std::queue), a Mutex, and a mechanism that the 'master' thread can use to signal the thread to wake up and check the data structure for new messages (e.g. a condition variable, or a semaphore, or even a socketpair that the worker blocks on reading, and the master can send a byte on to wake the worker up).
Then to send a task to a particular worker thread, the master would do something like this (pseudocode):
struct WorkerThreadData & workerThread = _workerThreads[threadIndexIWantToSendTo];
workerThread.m_mutex.Lock();
workerThread.m_incomingTasks.push_back(theNewTaskObject);
workerThread.m_mutex.Unlock();
workerThread.m_signalMechanism.SignalThreadToWakeUp(); // make sure the worker looks at the task list!
... and each worker thread would have an event loop like this:
struct WorkerThreadData & myData = _workerThreads[myWorkerIndex];
TaskObject * taskObject;
while (1)
{
    myData.m_signalMechanism.WaitForSignal(); // block until the main thread wakes me up

    myData.m_mutex.Lock();
    taskObject = (myData.m_incomingTasks.length() > 0) ? myData.m_incomingTasks.pop_front() : NULL;
    myData.m_mutex.Unlock();

    if (taskObject)
    {
        taskObject->DoTheWork();
        delete taskObject;
    }
}
This will never block the master thread (for any significant amount of time), since the Mutex is only held very briefly by anyone. In particular, the worker threads are not holding the mutex while they are working on a task object.
The "need to maintain order" all-but-directly states that you're going to be executing the tasks serially no matter how many threads you have. That being the case, you're probably best off with just one thread servicing the requests.
You could gain something if the requirement is a bit looser than that -- for example, if all the tasks for one destination need to remain in order, but there's no ordering requirement for tasks with different destinations. If this is the case, then your solution of a master queue sending tasks to an input queue for each individual thread sounds like quite a good one.
Edit:
Specifying the number of threads/mutexes dynamically is pretty easy. For example, to take the number from the command line, you could do something on this order (leaving out error and sanity checking for the moment):
#include <cstdlib>   // atoi
#include <pthread.h>
#include <vector>

std::vector<pthread_t> threads;
int num_threads = atoi(argv[1]);
threads.resize(num_threads);
for (int i = 0; i < num_threads; i++)
    pthread_create(&threads[i], NULL, thread_routine, NULL); // thread_routine defined elsewhere