I'm switching from xlobjects to boost::signals2 as my signal/slot framework, in the hope that the establishment of connections, their removal, signal emission, etc. are thread-safe. I'm not interested in inter-thread signal emission at all.
So the simple question is: is boost::signals2 thread safe in the way that, for instance, two or more threads can make a connection on the same signal?
Also, does boost::signals2 incur a performance penalty compared to xlobjects? This is not important as the application doesn't rely heavily on signals/slots, but I'd like to know nevertheless.
Boost.Signals2 is thread safe.
But if for some reason you need extra performance and can guarantee single-threaded access, there is a dummy mutex in the Signals2 library that will be a lot faster than a real mutex.
I believe all the answers you need regarding thread safety in Boost.Signals2 are in the documentation (short answer: yes, boost::signals2 is thread safe). Regarding performance, I guess thread safety comes at a cost, but there's only one way to be sure: benchmark!
What would be a smart way to implement something like the following?
// Plain C function for example purposes.
void sleep_async(delay_t delay, void (* callback)(void *), void * data);
That is, a means of asynchronously executing a callback after a delay. POSIX, for example, has a few functions that do something like this, but they are mostly for asynchronous I/O (see this for what I mean). What interests me about those functions is how they are executed "as if" on a new thread, according to that manual page, where an implementation may choose to spawn "a single thread...to receive all notifications". I am aware that some may nonetheless choose to spawn a whole thread for each of them, and that stuff like this may require support from the OS itself, so this is just an example.
I already have a couple of ways I could implement this (e.g. a priority queue of events sorted by wake time on a timer loop, with no need to start a thread at all), but I am wondering whether there already exist smart[er] or [more] complete implementations of what I want to accomplish. For example, maybe implementations of Task.Delay() from C♯ (and coroutines like it in other language environments) do something smart in minimizing the amount of thread spawning for getting asynchronous delays.
Why am I looking for something like this? As implied by the title, I'm looking for something asynchronous. The above signature is just a simple C example to illustrate roughly what POSIX does. I am implementing some C++20 coroutines for use with co_await and friends, with thread pools and whatnot. Scheduling anything that would end up synchronously waiting on something is probably a bad idea, as it would prevent otherwise free threads from doing any work. Spawning [and potentially immediately detaching] a new thread just to add in an asynchronous delay doesn't seem like a very smart idea, either. My timer loop idea could be okay, but that implies needing a predefined timer granularity, and overhead from the priority queue.
Edit
I neglected to mention any real set of target platforms, as a commenter mentioned. I don't expect to target anything outside the "usual" desktop platforms, so the quirks of embedded development are ignored. The way I plan to use asynchronous delays does not necessarily require threading support (everything could just be on a timer loop), but threading will nonetheless be required and used alongside it (namely, thread pools on which coroutines would be scheduled).
The simple but inefficient way would be to spawn a thread, have it sleep for delay, and then call the callback. This can be done in just a few lines using std::async():
// Capture by value: the lambda may outlive the caller's stack frame.
auto delayed_call = std::async(std::launch::async, [=]{
    std::this_thread::sleep_for(delay);
    callback(data);
});
// Note: the returned std::future blocks in its destructor until the
// task completes, so keep it alive for at least `delay`.
As mentioned by Thomas Matthews, this requires support for threads. While it's fine for a one-off call, it's not efficient if you have many such delayed calls. Having a priority queue and an event loop or a dedicated thread to handle events in this queue, as you already mentioned, is probably the most efficient way to do it. If you are looking for a library that implements this, then have a look at boost::asio.
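For what it's worth, the priority-queue idea can be sketched without a fixed timer granularity by having one dedicated thread `wait_until()` the earliest deadline; adding a sooner timer just wakes the loop so it recomputes. This is only an illustration of the approach discussed above, not a known library API, and all names are mine:

```cpp
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class timer_queue {
    using clock = std::chrono::steady_clock;
    struct entry {
        clock::time_point when;
        std::function<void()> cb;
        bool operator>(entry const& o) const { return when > o.when; }
    };
    std::priority_queue<entry, std::vector<entry>, std::greater<entry>> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
    std::thread t_;

    void run() {
        std::unique_lock<std::mutex> lk(m_);
        while (!stop_) {
            if (q_.empty()) { cv_.wait(lk); continue; }
            auto next = q_.top().when;
            // Sleep until the earliest deadline; a push() or stop
            // notifies us early so we re-evaluate the queue.
            if (cv_.wait_until(lk, next) == std::cv_status::timeout &&
                !q_.empty() && q_.top().when <= clock::now()) {
                auto cb = q_.top().cb;
                q_.pop();
                lk.unlock();
                cb();            // run the callback outside the lock
                lk.lock();
            }
        }
    }
public:
    timer_queue() : t_([this] { run(); }) {}
    ~timer_queue() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_one();
        t_.join();
    }
    void sleep_async(std::chrono::milliseconds delay, std::function<void()> cb) {
        { std::lock_guard<std::mutex> lk(m_);
          q_.push({clock::now() + delay, std::move(cb)}); }
        cv_.notify_one();  // wake the loop in case this deadline is sooner
    }
};
```

One thread serves any number of pending delays, which is essentially what libraries like boost::asio do with their deadline timers.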
As for using C++20 coroutines, I do not think that this will make something like your sleep_async() any easier. However, an event loop could be implemented on top of it.
A smart way? You mean really, really smart? That would be my own implementation, of course. You know about POSIX timers, you probably know about Linux timers and the various hacks involving std::thread. But, more seriously, what you require sounds mostly to the tune of something like libeio, or libuv; both of these provide callbacks. It depends on what you can afford in binary size and whether you like the particular abstractions a library offers. The two libraries seem to be evolved versions of libevent and libev, with libevent being the progenitor of them all.
Creating a std::thread instance involves allocating a stack frame, at the very least, which is by no means cheap.
If I'm not wrong, there is no easy way to make a C++0x thread cancellable. I'm wondering if it's legal to use GCancellable, mixing it with a C++0x thread.
If the answer is no, I guess I should use GLib threads instead, or is that not really legal either?
I am not very familiar with GCancellable. After a quick read through, it appears to be a hierarchical notification system.
If that is the case then yes you can easily mix GCancellable with std::thread.
There is no easy way to make a std::thread cancellable.
This is wrong.
There is no non-zero cost way to make all std::threads cancellable.
This is correct.
The problem is providing a general solution. Notification is easy enough. The hard part is making sure the thread sees the notification. The thread may be blocked on a mutex or IO. You cannot just kill the thread. All sorts of bad can occur.
Each individual implementation is free to implement its own cancellation system tailored to your particular needs.
If you need to be interruptible while blocked on a mutex, make sure you only use timed mutexes, and call g_cancellable_is_cancelled frequently enough that your thread will cancel as needed.
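A minimal sketch of that advice in standard C++: never block indefinitely, poll a cancellation flag between bounded waits. Here `cancelled` is a stand-in for `g_cancellable_is_cancelled()`, not the GLib API itself:

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

std::timed_mutex work_mutex;
std::atomic<bool> cancelled{false};

// Tries to take the lock in bounded slices, re-checking the
// cancellation flag between attempts instead of blocking forever.
void worker(bool& got_lock) {
    using namespace std::chrono_literals;
    while (!cancelled.load()) {
        if (work_mutex.try_lock_for(50ms)) {  // bounded, not indefinite
            got_lock = true;
            work_mutex.unlock();
            return;
        }
        // Lock not acquired yet; the loop re-checks the flag.
    }
}
```

The 50 ms slice bounds how long cancellation can be delayed; pick it to match how responsive you need cancellation to be.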
You mean something like boost's interruptible threads?
This aspect didn't make it into the standard but you can derive from std::thread to offer a protected check_interrupted() method which throws if someone called a public interrupt() method.
I wouldn't bother mixing with Gnome's thread constructs. Sounds like more trouble than it's worth.
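As a rough sketch of the interruptible-thread idea above, here is one way it could look; I use composition rather than actual derivation from std::thread, since the flag has to be shared with the running function anyway, and all names here are illustrative:

```cpp
#include <atomic>
#include <memory>
#include <stdexcept>
#include <thread>

struct thread_interrupted : std::runtime_error {
    thread_interrupted() : std::runtime_error("interrupted") {}
};

class interruptible_thread {
    std::shared_ptr<std::atomic<bool>> flag_ =
        std::make_shared<std::atomic<bool>>(false);
    std::thread t_;
public:
    // f receives a "check" callable and must call it at safe points;
    // check() throws thread_interrupted once interrupt() was called.
    template <typename F>
    explicit interruptible_thread(F f)
        : t_([flag = flag_, f]() mutable {
              auto check = [&] {
                  if (flag->load()) throw thread_interrupted{};
              };
              try { f(check); } catch (thread_interrupted const&) {}
          }) {}
    void interrupt() { flag_->store(true); }
    void join() { t_.join(); }
};
```

This is cooperative: the thread only stops at the points where it calls `check()`, which is exactly the limitation discussed above.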
I have two versions of a function available in a C++ library that do the same task. One is synchronous, and the other is asynchronous and allows a callback function to be registered.
Which of the below strategies is preferable for giving a better memory and performance optimization?
1. Call the synchronous function in a worker thread, and use mutex synchronization to wait until I get the result.
2. Do not create a thread, but call the asynchronous version and get the result in the callback.
I am aware that worker-thread creation in option 1 will cause more overhead. I want to know about the overhead caused by thread synchronization objects, and how it compares to the overhead of the asynchronous call. Does the asynchronous version of a function internally spin off a thread and use a synchronization object, or does it use some other technique, like talking directly to the kernel?
"Profile, don't speculate." (DJB)
The answer to this question depends on too many things, and there is no general answer. The role of the developer is to be able to make these decisions. If you don't know, try the options and measure. In many cases, the difference won't matter and non-performance concerns will dominate.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." (DEK)
Update in response to the question edit:
C++ libraries, in general, don't get to use magic to avoid synchronisation primitives. The asynchronous vs. synchronous interfaces are likely to be wrappers around things you would do anyway. Processing must happen in a context, and if completion is to be signalled to another context, a synchronisation primitive will be necessary to do that.
Of course, there might be other considerations. If your C++ library is talking to some piece of hardware that can do processing, things might be different. But you haven't told us about anything like that.
The answer to this question depends on context you haven't given us, including information about the library interface and the structure of your code.
Use the asynchronous function, because it will probably do what you would otherwise do manually with the synchronous one, but in a less error-prone way.
Asynchronous: the library creates a thread, does the work, and calls the callback when done.
Synchronous: you create an event to wait for, create a worker thread, have that thread call the sync version and transfer the result, signal the event, and wait for the event on the calling side.
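The synchronous-on-a-worker-thread recipe above can be sketched with a std::promise/std::future pair standing in for the manual event-plus-mutex; `compute_sync` is a hypothetical placeholder for your library's synchronous function:

```cpp
#include <future>
#include <thread>

// Hypothetical synchronous library function.
int compute_sync(int x) { return x * 2; }

// Run the synchronous version on a worker thread and wait for the
// result; the future replaces a hand-rolled event + mutex pair.
int call_sync_on_worker(int x) {
    std::promise<int> p;
    std::future<int> f = p.get_future();
    std::thread t([&p, x] { p.set_value(compute_sync(x)); });
    int result = f.get();   // blocks until the worker signals the promise
    t.join();
    return result;
}
```

Note how much machinery this needs just to reproduce what the library's asynchronous version presumably already does internally.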
You might consider that threads each have their own environment so they use more memory than a non threaded solution when all other things are equal.
Depending on your threading library there can also be significant overhead to starting and stopping threads.
If you need inter-thread synchronization, there can also be a lot of pain debugging threaded code.
If you're comfortable writing non threaded code (i.e. you won't burn a lot of time writing and debugging it) then that might be the best choice.
I need a fast inter-thread communication mechanism for passing work (void*) from TBB tasks to several workers which are in running/blocking operations.
Currently I'm looking into using pipe()+libevent. Is there a faster and more elegant alternative for use with Intel Threading Building Blocks?
You should be able to just use standard memory with mutex locks since threads share the same memory space. The pipe()+libevent solution seems more fitting for interprocess communication where each process has a different memory space.
Check out Implementing a Thread-Safe Queue using Condition Variables. It uses an STL queue, a mutex, and a condition variable to facilitate inter-thread communication. (I don't know if this is applicable to Intel Threading Building Blocks, but since TBB is not mentioned in the question/title, I assume others will end up here like I did -- looking for an inter-thread communication mechanism that is not IPC. And this article might help them, like it helped me.)
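For reference, the core of the condition-variable approach that article describes looks roughly like this (a minimal sketch; the class and method names are mine, not the article's):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class concurrent_queue {
    std::queue<T> q_;
    mutable std::mutex m_;
    std::condition_variable cv_;
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(value));
        }
        cv_.notify_one();  // wake one waiting consumer
    }
    T wait_and_pop() {
        std::unique_lock<std::mutex> lk(m_);
        // The predicate guards against spurious wakeups.
        cv_.wait(lk, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
};
```

Consumers block in `wait_and_pop()` without spinning, which makes this a good fit for passing work items between threads.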
Take a look at the Boost lock free and thread safe queue. Very easy to use and works really well. I've used it with threads running on separate cores polling the queue for work.
http://www.boost.org/doc/libs/1_55_0/doc/html/lockfree.html
In Cocoa, is NSThread faster than pthreads? Is there any performance gain, or is it negligible enough to ignore?
I have no data to back this up, but I'm going to go out on a limb and say "they're equivalent". NSThread is almost certainly a wrapper around pthreads (is there really any other way to create a system thread?), so any overhead of using NSThread versus pthreads would be that associated with creating a new object and then destroying it. Once the thread itself starts, it should be pretty much identical in terms of performance.
I think the real question here is: "Why do you need to know?" Have you come up against some situation where spawning NSThreads seems to be detrimental to your performance? (I could see this being an issue if you're spawning hundreds of threads, but in that case, the hundreds of threads are most likely your problem, and not the NSThread objects)
Unless you have proof that the creation of an NSThread object is a bottleneck in your application, I would definitely go with the "negligible to ignore" option.
pthreads actually have slightly less overhead, but I can't imagine it will make any difference in practice. NSThread uses pthreads underneath. The actual execution speed of the code in your thread will be the same for both.
Under the iPhone SDK, NSThread uses a pthread as the actual thread. Frankly, they're equivalent.
However, the pthread APIs let us access "deep" settings, for example the scheduling policy, the stack size, and whether the thread is detached. These APIs are hidden by the NSThread capsule.
Therefore, under some conditions, pthreads win.
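To illustrate the "deep" settings mentioned above, here is a sketch of configuring a thread's stack size and detach state through a `pthread_attr_t` before creating it (something NSThread does not expose):

```cpp
#include <pthread.h>

// The worker just writes a result through its argument pointer.
static void* worker(void* arg) {
    *static_cast<int*>(arg) = 42;
    return nullptr;
}

int run_with_custom_stack() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 512 * 1024);             // 512 KiB stack
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

    int result = 0;
    pthread_t tid;
    pthread_create(&tid, &attr, worker, &result);
    pthread_join(tid, nullptr);
    pthread_attr_destroy(&attr);
    return result;
}
```

With NSThread you only get the coarse knobs it chooses to expose (e.g. a stack-size property), while the attribute object gives you the full set.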
I would also guess that any "overhead" or "instantiation difference" you pay as an extra for NSThread would be evened out by the extra cycles and calls you will eventually need to configure your pthread correctly using the pthread APIs.
I believe NSThread is nothing but a convenience wrapper that saves some coding in Cocoa/Cocoa Touch applications that want to be multithreaded.