I am creating a program that will receive messages from a remote machine and needs to write those messages to a file on disk. The difficulty is that the aim of the program is to test the performance of the library that receives the messages, so I need to make sure that writing the messages to disk does not affect the library's performance. The library delivers the messages to the program via a callback function. Another constraint is that the solution must be platform independent.
What options do I have?
I thought of the following:
Using boost::asio to write to the file, but it seems (see this documentation) that asynchronous file writes live in the Windows-specific part of the library, so this cannot be used.
Using boost::interprocess to create a message queue, but this documentation indicates that there are three methods by which messages can be sent, and all of them would require the program to block (implicitly or not) if the message queue is full, which I cannot risk.
Creating a std::deque<MESSAGES>, pushing onto the deque from the callback function and popping the messages off while writing to the file (on a separate thread), but STL containers are not guaranteed to be thread-safe. I could lock the pushes and pops on the deque, but we are talking about 47 microseconds between successive messages, so I would like to avoid locks altogether.
Does anyone have any more ideas on possible solutions?
STL containers may not be thread-safe but I haven't ever hit one that cannot be used at different times on different threads. Passing the ownership to another thread seems safe.
I have used the following a couple of times, so I know it works:
Create a pointer to a std::vector.
Create a mutex lock to protect the vector pointer.
Use new to create a std::vector and then reserve() a large capacity for it.
In the receiver thread:
Lock the mutex whenever adding an item to the queue. This should be a short lock.
Add queue item.
Release the lock.
If you feel like it, signal a condition variable. I sometimes don't; it depends on the design. If the volume is very high and there is no pause on the receive side, just skip the condition variable and poll instead.
On the consumer thread (the disk writer):
Go look for work to do by polling or waiting on a condition variable:
Lock the queue mutex.
Look at the queue length.
If there is work in the queue assign the pointer to a variable in the consumer thread.
Use new and reserve() to create a new queue vector and assign it to the queue pointer.
Unlock the mutex.
Go off and write your items to disk.
delete the used-up queue vector.
Now, depending on your problem you may end up needing a way to block. For example, in one of my programs, if the queue length ever hits 100,000 items, the producing thread just starts doing 1-second sleeps and complaining a lot. It is one of those things that shouldn't happen, yet does, so you should consider it. Without any limit at all it will just use all the memory on the machine and then crash with an exception, get killed by the OOM killer, or grind to a halt in a swap storm.
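A minimal sketch of the swap-under-lock approach described above (untested here; it uses std::unique_ptr rather than raw new/delete, and Message is a stand-in for whatever your message type is):

#include <memory>
#include <mutex>
#include <vector>

struct Message { /* your payload fields */ };

std::mutex queue_mutex;
std::unique_ptr<std::vector<Message>> queue = std::make_unique<std::vector<Message>>();

// Producer (library callback): short lock, append, unlock.
void on_message(const Message& m) {
    std::lock_guard<std::mutex> lock(queue_mutex);
    queue->push_back(m);   // call queue->reserve(...) once at startup to avoid reallocations
}

// Consumer (disk-writer thread): swap the whole vector out under the lock,
// then write the batch to disk with no lock held.
void drain_and_write() {
    std::unique_ptr<std::vector<Message>> batch;
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        if (queue->empty()) return;              // nothing to do; poll again later
        batch = std::move(queue);
        queue = std::make_unique<std::vector<Message>>();
        queue->reserve(batch->capacity());       // keep a large capacity ready for the producer
    }
    // ... write *batch to disk here, outside the lock ...
}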
boost::thread is platform independent, so you should be able to use it to create a thread that does the blocking writes. To avoid locking the container every time a message is produced in the main thread, you can use a variation on the double-buffering technique by nesting containers, such as:
std::deque<std::deque<MESSAGES> >
Then only lock the top level deque when a deque full of messages is ready to be added. The writing thread would in turn only lock the top level deque to pop off a deque full of messages to be written.
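Roughly like this (a sketch only; batch_size and the function names are made up for illustration, and MESSAGES stands in for your message type):

#include <cstddef>
#include <deque>
#include <mutex>

struct MESSAGES { /* ... */ };

std::mutex outer_mutex;
std::deque<std::deque<MESSAGES>> outer;        // each element is a whole batch of messages

// Producer: fill a local batch without any locking, then hand it over in one push.
void deliver_if_full(std::deque<MESSAGES>& local_batch, std::size_t batch_size) {
    if (local_batch.size() < batch_size) return;   // keep filling locally
    std::lock_guard<std::mutex> lock(outer_mutex);
    outer.push_back(std::move(local_batch));
    local_batch.clear();
}

// Writer thread: pop one whole batch under the lock, then write it with the lock released.
bool take_batch(std::deque<MESSAGES>& out_batch) {
    std::lock_guard<std::mutex> lock(outer_mutex);
    if (outer.empty()) return false;
    out_batch = std::move(outer.front());
    outer.pop_front();
    return true;
}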
I use concurrency::task from ppltasks.h heavily in my codebase.
I would like to find an awaitable queue, where I can do "co_await my_queue.pop()". Has anyone implemented one?
Details:
I have one producer thread that pushes elements to a queue, and another receiver thread would be waiting and waking up when elements arrive in the queue. This receiving thread might wait/wake up to handle other tasks in the meantime (using pplpp::when_any).
I don't want a queue with an interface where I have to poll a try_pop method, as that is slow, and I don't want a blocking_pop method, as that means I can't handle other ready tasks in the meantime.
This is basically your standard thread-safe queue implementation, but instead of a condition_variable, you will have to use futures to coordinate the different threads. You can then co_await on the future returned by pop to become ready.
The queue's implementation will need to keep a list of the promises that correspond to the outstanding pop calls. If the queue is not empty when pop is called, you can return a ready future immediately. You can use a plain old std::mutex to synchronize concurrent access to the underlying data structures.
I don't know of any implementation that already does this, but it shouldn't be too hard to pull off. Note though that managing all the futures will introduce some additional overhead, so your queue will probably be slightly less efficient than the classic condition_variable-based approach.
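A rough sketch of what I mean (untested; it uses std::promise/std::future to stay self-contained — in your codebase you could return concurrency::task instead, since a plain std::future is not directly co_await-able without an adapter):

#include <deque>
#include <future>
#include <mutex>

template <typename T>
class AwaitableQueue {
public:
    void push(T value) {
        std::promise<T> waiter;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            if (waiters_.empty()) {                  // nobody waiting: just store the value
                items_.push_back(std::move(value));
                return;
            }
            waiter = std::move(waiters_.front());    // hand the value to the oldest waiter
            waiters_.pop_front();
        }
        waiter.set_value(std::move(value));          // fulfil the promise outside the lock
    }

    std::future<T> pop() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!items_.empty()) {                       // value already queued: return a ready future
            std::promise<T> ready;
            ready.set_value(std::move(items_.front()));
            items_.pop_front();
            return ready.get_future();
        }
        waiters_.emplace_back();                     // otherwise register an outstanding pop
        return waiters_.back().get_future();
    }

private:
    std::mutex mutex_;
    std::deque<T> items_;                            // values with no waiter yet
    std::deque<std::promise<T>> waiters_;            // outstanding pop calls
};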
Posted a comment, but I might as well write this as an answer since it's long and I need formatting.
Basically, your two options are:
Lock-free queues, the most popular of which is this:
https://github.com/cameron314/concurrentqueue
They do have try_pop, because the implementation uses atomic pointers, and atomic operations (e.g. std::atomic_compare_exchange_weak) can and will "fail" and return false at times, so you are forced to spin over them.
You may find queues that hide this inside a "pop" which just calls "try_pop" until it succeeds, but that's the same overhead in the background.
Lock-based queues:
These are easier to write yourself, without a third-party library: just wrap every method you need in locks. If you want to 'peek' very often, look into using shared locks (std::shared_lock); otherwise a plain std::lock_guard should be enough to guard every wrapped method. However, this is what you might call a 'blocking' queue, since during any access, whether it is a read or a write, the whole queue is locked.
There are no other thread-safe alternatives to these two approaches. If you need a really large queue (e.g. hundreds of GBs of memory worth of objects) under heavy usage, you could consider writing a custom hybrid data structure, but for most use cases moodycamel's queue will be more than sufficient.
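For the lock-based option, a minimal wrapper is enough for most cases. A sketch (the class and method names are just for illustration):

#include <mutex>
#include <optional>
#include <queue>

template <typename T>
class LockedQueue {
public:
    void push(T value) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(value));
    }

    // Non-blocking: returns std::nullopt when the queue is empty.
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return std::nullopt;
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::mutex mutex_;
    std::queue<T> queue_;
};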
I have a very time sensitive task in my main thread. However, I would also like to simultaneously print some information about this task.
Problem: cout takes some time to execute and therefore slows down the time sensitive task in the main thread.
Idea: I thought about creating an extra thread which handles the output. To communicate between the main thread and the newly created thread, I thought about a vector which includes the strings that should be printed. In the newly created thread an infinite while loop would print these strings one after another.
Problem with the idea: Vectors are not thread safe. Therefore, I'm worried that locking and unlocking the vector will take nearly as much time as it would take when calling cout directly in the main thread.
Question: Is there an alternative to locking/unlocking the vector? Are my worries regarding the locking of the vector misguided? Would you take a whole different approach to solve the problem?
Depending on how time-sensitive the task is, I'd probably build up a vector of outputs in the producer thread, then pass the whole vector to the consumer thread (and repeat as needed).
The queue between the two will need to be thread safe, but you can keep the overhead minuscule by passing a vector every, say, 50-100 ms or so. This is still short enough to look like real-time to most observers, but is long enough to keep the overhead of locking far too low to care about in most cases.
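For example (a sketch only; the 50 ms flush interval and the names are illustrative):

#include <chrono>
#include <iostream>
#include <iterator>
#include <mutex>
#include <string>
#include <vector>

std::mutex pending_mutex;
std::vector<std::string> pending;          // batch handed from the producer to the printer thread

// Producer side: append to a local vector, hand the whole batch over every ~50 ms.
void log_line(std::vector<std::string>& local, std::string line,
              std::chrono::steady_clock::time_point& last_flush) {
    local.push_back(std::move(line));
    const auto now = std::chrono::steady_clock::now();
    if (now - last_flush >= std::chrono::milliseconds(50)) {
        std::lock_guard<std::mutex> lock(pending_mutex);
        pending.insert(pending.end(),
                       std::make_move_iterator(local.begin()),
                       std::make_move_iterator(local.end()));
        local.clear();
        last_flush = now;
    }
}

// Consumer side: swap the accumulated batch out, then print without holding the lock.
void print_pending() {
    std::vector<std::string> batch;
    {
        std::lock_guard<std::mutex> lock(pending_mutex);
        batch.swap(pending);
    }
    for (const auto& line : batch) std::cout << line << '\n';
}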
You could use the idea one often sees in "interrupt" programming: send data from the thread into a ring buffer, then, in another thread, print from the ring buffer. Actually, in the "good old days", one could write a ring buffer without any "atomics" (and still can on some embedded systems).
But even with atomics, ring buffers are not hard to write. There is one implementation here: c++ threadsafe ringbuffer implementation (untested, but at first look it seems OK).
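As an illustration, here is a small single-producer/single-consumer ring buffer sketch using atomic indices (untested; Capacity is assumed to be a power of two, and each side must be used from only one thread):

#include <array>
#include <atomic>
#include <cstddef>

template <typename T, std::size_t Capacity>
class SpscRing {
public:
    bool push(const T& value) {                    // called only from the producer thread
        const auto head = head_.load(std::memory_order_relaxed);
        const auto tail = tail_.load(std::memory_order_acquire);
        if (head - tail == Capacity) return false; // full
        buffer_[head & (Capacity - 1)] = value;
        head_.store(head + 1, std::memory_order_release);
        return true;
    }

    bool pop(T& out) {                             // called only from the consumer thread
        const auto tail = tail_.load(std::memory_order_relaxed);
        const auto head = head_.load(std::memory_order_acquire);
        if (head == tail) return false;            // empty
        out = buffer_[tail & (Capacity - 1)];
        tail_.store(tail + 1, std::memory_order_release);
        return true;
    }

private:
    std::array<T, Capacity> buffer_{};
    std::atomic<std::size_t> head_{0};             // next slot to write
    std::atomic<std::size_t> tail_{0};             // next slot to read
};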
I recently ran into a requirement for a multithreaded application whose threads run at different rates.
The questions then become (since I am still learning multithreading):
A scenario is given to put things into perspective:
Say 1st thread runs at 100 Hz "real time"
2nd runs at 10 Hz
and say that the 1st thread provides data "myData" to the 2nd thread.
How is myData going to be provided to the 2nd thread? Is the common practice to just read whatever is available from the first thread, or does there need to be some kind of decimation to reduce the rate?
Does myData need to be some kind of Singleton with a locking mechanism? myData isn't really shared so much as updated by the first thread and used by the second thread.
How about the opposite case, when the data produced in one thread needs to be used at a higher rate in a different thread?
How is myData going to be provided to the 2nd thread
One common method is to provide a FIFO queue -- this could be a std::deque or a linked list, or whatever -- and have the producer thread push data items onto one end of the queue while the consumer thread pops the data items off of the other end of the queue. Be sure to serialize all accesses to the FIFO queue (using a mutex or similar locking mechanism) to avoid race conditions.
Alternatively, instead of a queue you could have a single shared data object (essentially a queue of length one) and have your producer thread overwrite the object every time it generates new data. This could be done in cases where it's not important that the consumer thread sees every piece of data that was generated, but rather it's only important that it sees the most recent data. You'd still need to do the locking, though, to avoid the risk of the consumer thread reading from the data object at the same time the producer thread is in the middle of writing to it.
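A minimal sketch of that single-shared-object ("latest value wins") variant, with hypothetical names (MyData stands in for whatever myData actually is):

#include <mutex>
#include <optional>

struct MyData { double value; /* ... */ };

std::mutex data_mutex;
std::optional<MyData> latest;              // holds only the most recent sample, if any

// 100 Hz producer: overwrite the shared slot with the newest data.
void publish(const MyData& d) {
    std::lock_guard<std::mutex> lock(data_mutex);
    latest = d;
}

// 10 Hz consumer: read whatever is newest, if anything has arrived yet.
std::optional<MyData> read_latest() {
    std::lock_guard<std::mutex> lock(data_mutex);
    return latest;
}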
or does there need to be some kind of decimation to reduce the rate.
There doesn't need to be any decimation -- the second thread can just read in as much data as there is available to read, whenever it wakes up.
Does myData need to be some kind of Singleton with a locking mechanism?
Singleton isn't necessary (although it's possible to do it that way). The locking mechanism is necessary, unless you have some kind of lock-free synchronization mechanism (and if you're asking this level of question, you don't have one and you don't want to try to get one either -- keep things simple for now!)
How about the opposite case, when the data produced in one thread needs to be used at a higher rate in a different thread?
It's the same -- if you're using a proper inter-thread communication mechanism, the rates at which the threads wake up don't matter, because the communication mechanism will do the right thing regardless of when or how often the threads wake up.
Any multithreaded program has to cope with the possibility that one of the threads will work faster than another - by any ratio - even if they're executing on the same CPU with the same clock frequency.
Your choices include:
A producer-consumer container that lets the first thread enqueue data and the second thread "pop" it off for processing: you could let the queue grow as large as memory allows, or put some limit on its size, after which either data would be lost or the 1st thread would be forced to slow down and wait before enqueuing further values (a minimal sketch of the bounded case follows below).
There are libraries available (e.g. Boost), or if you want to implement it yourself, Google some tutorials/docs on mutexes and condition variables.
Do something conceptually similar to the above but where the size limit is 1, so there's just the single myData variable rather than a "container" -- all the synchronisation and delay choices remain the same.
The Singleton pattern is orthogonal to your needs here: the two threads do need to know where the data is, but that would normally be done using e.g. a pointer argument to the function(s) run in the threads. Singleton is easily overused and best avoided unless the reasons for it really stack up.
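A sketch of the bounded-queue option mentioned above, using a mutex and two condition variables (the class is illustrative, not from any particular library):

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t limit) : limit_(limit) {}

    void push(T value) {                   // producer blocks while the queue is full
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [&] { return queue_.size() < limit_; });
        queue_.push(std::move(value));
        not_empty_.notify_one();
    }

    T pop() {                              // consumer blocks while the queue is empty
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [&] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        not_full_.notify_one();
        return value;
    }

private:
    std::mutex mutex_;
    std::condition_variable not_full_, not_empty_;
    std::queue<T> queue_;
    std::size_t limit_;
};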
How can I develop a producer/consumer pattern which is thread-safe?
In my case, the producer runs in one thread and the consumer runs in another thread.
Is std::deque safe for this purpose?
Can I push_back to the back of a deque in one thread and push_front in another thread?
Edit 1
In my case, I know the maximum number of items in the std::deque (for example, 10). Is there any way I can reserve enough space for the items beforehand, so that during processing there is no need to change the size of the queue's memory, and hence be sure that when I am pushing data onto the back, nothing can happen to the data at the front?
C++ STL containers are not thread-safe: if you decide to use them, you need to use proper synchronization (basically std::mutex and std::lock) when pushing/popping elements.
Alternatively you can use properly designed containers (single-producer-single-consumer queues should fit your needs), one example of them here: http://www.boost.org/doc/libs/1_58_0/doc/html/lockfree.html
Addendum after your edit:
Yep, a SPSC queue is basically a ring buffer and definitely fits your needs.
How Can I develop a producer/ consumer pattern which is thread safe?
There are several ways, but using locks and monitors is fairly easy to grasp and doesn't have many hidden caveats. The standard library has std::unique_lock, std::lock_guard and std::condition_variable to implement the pattern. Check out the cppreference page for condition_variable for a simple example.
Is std::deque is safe for this purpose?
It's not safe. You need synchronization.
can I push_back to the back of a deque in one thread and push_front in another thread?
Sure, but you need synchronization. There is a race condition when the queue is empty or has only one element. Also when the queue is full or one short of full, in case you want to limit its size.
I think you mean push_back() and pop_front().
std::deque is not thread-safe on its own.
You will need to serialise access using an std::mutex so the consumer isn't trying to pop while the producer is trying to push.
You should also consider how you handle the following:
How does the consumer behave if the deque is empty when it looks for the next item?
If it enters a wait state then you will need a std::condition_variable to be notified by the producer when the deque has been added to.
You may also need to handle program termination while the consumer is waiting on the deque; it could be left 'waiting forever' unless you orchestrate things correctly.
10 items is 'piffle' so I wouldn't bother about reserving space. std::deque grows and shrinks automatically, so don't bother with fine-grained tuning until you've built a working application.
Premature optimization is the root of all evil.
NB: It's not clear how you're limiting the queue size, but if the producer fills up the queue and then waits for it to clear back down, you'll need more waits and condition variables going back the other way to coordinate.
I read an article about multithreaded program design, http://drdobbs.com/architecture-and-design/215900465. It says it is a best practice to "replace shared data with asynchronous messages. As much as possible, prefer to keep each thread's data isolated (unshared), and let threads instead communicate via asynchronous messages that pass copies of data".
What confuses me is that I don't see the difference between using shared data and using message queues. I am currently working on a non-GUI project on Windows, so let's use Windows message queues, and take the traditional producer-consumer problem as an example.
Using shared data, there would be a shared container and a lock guarding the container between the producer thread and the consumer thread. When the producer outputs something, it first waits for the lock, then writes to the container, then releases the lock.
Using a message queue, the producer could simply call PostThreadMessage without blocking, and this is the async message's advantage. But I think there must be some lock guarding the message queue between the two threads, otherwise the data would definitely get corrupted; the PostThreadMessage call just hides the details. I don't know whether my guess is right, but if it is, the advantage seems to no longer exist, since both methods do the same thing and the only difference is that the system hides the details when using message queues.
P.S. Maybe the message queue uses a non-blocking container, but I could use a concurrent container in the former approach too. I want to know how the message queue is implemented, and whether there is any performance difference between the two approaches.
Update:
I still don't get the concept of async messages if the message queue operations can still block somewhere else. Correct me if my guess is wrong: when we use shared containers and locks, we block in our own thread, but when using message queues, my thread returns immediately and the blocking work is left to some system thread.
Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. It is also much easier to implement than shared memory for inter-computer communication. And, as you've already noticed, message passing has the advantage that application developers don't need to worry about the details of protections like shared memory.
Shared memory allows maximum speed and convenience of communication, as it can be done at memory speeds within a computer. Shared memory is usually faster than message passing, as message passing is typically implemented using system calls and thus requires the more time-consuming task of kernel intervention. In contrast, in shared-memory systems, system calls are required only to establish the shared-memory regions. Once established, all accesses are treated as normal memory accesses without extra assistance from the kernel.
Edit: One case where you might want to implement your own queue is when there are lots of messages to be produced and consumed, e.g., a logging system. With the implementation of PostThreadMessage, the queue capacity is fixed. Messages will most likely get lost if that capacity is exceeded.
Imagine you have 1 thread producing data,and 4 threads processing that data (presumably to make use of a multi core machine). If you have a big global pool of data you are likely to have to lock it when any of the threads needs access, potentially blocking 3 other threads. As you add more processing threads you increase the chance of a lock having to wait and increase how many things might have to wait. Eventually adding more threads achieves nothing because all you do is spend more time blocking.
If instead you have one thread sending messages into message queues, one for each consumer thread, then they can't block each other. You still have to lock the queue between the producer and consumer threads, but as you have a separate queue for each thread you have a separate lock, and each thread can't block all the others waiting for data.
If you suddenly get a 32 core machine you can add 20 more processing threads (and queues) and expect that performance will scale fairly linearly unlike the first case where the new threads will just run into each other all the time.
I have used a shared memory model where the pointers to the shared memory are managed in a message queue with careful locking. In a sense, this is a hybrid between a message queue and shared memory. This is very useful when large quantities of data must be passed between threads while retaining the safety of the message queue.
The entire queue can be packaged in a single C++ class with appropriate locking and the like. The key is that the queue owns the shared storage and takes care of the locking. Producers acquire a lock for input to the queue, receive a pointer to the next available storage chunk (usually an object of some sort), populate it and release it. The consumer blocks until the next shared object has been released by the producer. It can then acquire a lock on the storage, process the data and release it back to the pool. A suitably designed queue can perform multiple-producer/multiple-consumer operations with great efficiency. Think Java thread-safe (java.util.concurrent.BlockingQueue) semantics, but for pointers to storage.
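A rough sketch of that hybrid idea (the class, the function names and the Chunk type are purely illustrative; the real thing would be shaped around whatever objects you actually pass between threads):

#include <condition_variable>
#include <cstddef>
#include <memory>
#include <mutex>
#include <queue>
#include <vector>

struct Chunk { std::vector<char> bytes; };   // stand-in for whatever large payload is passed

// A pool-backed queue: the queue owns all Chunk storage; threads only exchange pointers.
class ChunkQueue {
public:
    explicit ChunkQueue(std::size_t pool_size) {
        for (std::size_t i = 0; i < pool_size; ++i)
            free_.push(std::make_unique<Chunk>());
    }

    // Producer: borrow an empty chunk, fill it, then hand it to the consumer.
    std::unique_ptr<Chunk> acquire_free() {
        std::unique_lock<std::mutex> lock(mutex_);
        has_free_.wait(lock, [&] { return !free_.empty(); });
        auto chunk = std::move(free_.front());
        free_.pop();
        return chunk;
    }
    void submit(std::unique_ptr<Chunk> chunk) {
        std::lock_guard<std::mutex> lock(mutex_);
        filled_.push(std::move(chunk));
        has_filled_.notify_one();
    }

    // Consumer: take the next filled chunk, process it, then return it to the pool.
    std::unique_ptr<Chunk> acquire_filled() {
        std::unique_lock<std::mutex> lock(mutex_);
        has_filled_.wait(lock, [&] { return !filled_.empty(); });
        auto chunk = std::move(filled_.front());
        filled_.pop();
        return chunk;
    }
    void release(std::unique_ptr<Chunk> chunk) {
        std::lock_guard<std::mutex> lock(mutex_);
        free_.push(std::move(chunk));
        has_free_.notify_one();
    }

private:
    std::mutex mutex_;
    std::condition_variable has_free_, has_filled_;
    std::queue<std::unique_ptr<Chunk>> free_, filled_;
};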
Of course there is "shared data" when you pass messages. After all, the message itself is some sort of data. However, the important distinction is when you pass a message, the consumer will receive a copy.
the PostThreadMessage call just hide the details
Yes, it does, but being a WINAPI call, you can be reasonably sure that it does it right.
I still don't get the concept of async message if the message queue operations are still blocked somewhere else.
The advantage is more safety. You have a locking mechanism that is systematically enforced when you are passing a message. You don't even need to think about it, you can't forget to lock. Given that multi-thread bugs are some of the nastiest ones (think of race conditions), this is very important. Message passing is a higher level of abstraction built on locks.
The disadvantage is that passing large amounts of data would probably be slow. In that case, you need to use shared memory.
For passing state (i.e. worker thread reporting progress to the GUI) the messages are the way to go.
It's quite simple (I'm amazed others wrote such length responses!):
Using a message queue system instead of 'raw' shared data means that you have to get the synchronization (locking/unlocking of resources) right only once, in a central place.
With a message-based system, you can think in higher terms of "messages" without having to worry about synchronization issues anymore. For what it's worth, it's perfectly possible that a message queue is implemented using shared data internally.
I think this is the key piece of info there: "As much as possible, prefer to keep each thread’s data isolated (unshared), and let threads instead communicate via asynchronous messages that pass copies of data". I.e. use producer-consumer :)
You can do your own message passing or use something provided by the OS. That's an implementation detail (it needs to be done right, of course). The key is to avoid shared data, as in having the same region of memory modified by multiple threads. This can cause hard-to-find bugs, and even if the code is perfect it will eat performance because of all the locking.
I had exactly the same question. After reading the answers, I feel:
In the most typical use case, queue = async, shared memory (locks) = sync. Indeed, you can do an async version of shared memory, but that's more code, similar to reinventing the message-passing wheel.
Less code = fewer bugs and more time to focus on other stuff.
The pros and cons are already mentioned by previous answers so I will not repeat.