Is it legal to mix C++0x threads with GIO GCancellable?

If I'm not wrong, there is no easy way to make a C++0x thread cancellable. I'm wondering whether it's legal to use GCancellable by mixing it with a C++0x thread.
If the answer is no, I guess I should use GLib threads instead, or is that not really legal either?

I am not very familiar with GCancellable. After a quick read through, it appears to be a hierarchical notification system.
If that is the case then yes you can easily mix GCancellable with std::thread.
There is no easy way to make a std::thread cancellable.
This is wrong.
There is no zero-cost way to make all std::threads cancellable.
This is correct.
The problem is providing a general solution. Notification is easy enough. The hard part is making sure the thread sees the notification. The thread may be blocked on a mutex or IO. You cannot just kill the thread. All sorts of bad can occur.
Each individual application is free to implement its own cancellation system, tailored to its particular needs.
If you need to be interruptible while blocked on a mutex, make sure you only use timed mutexes, and call g_cancellable_is_cancelled() frequently enough that your thread cancels as promptly as you need.
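For example, here is a minimal sketch (assuming the GIO headers are available; start_worker() and do_one_chunk_of_work() are made-up placeholders) of a std::thread that polls a GCancellable between units of work:

#include <gio/gio.h>   // GCancellable (GLib/GIO)
#include <chrono>
#include <thread>

void start_worker(GCancellable *cancellable)
{
    std::thread worker([cancellable] {
        // Poll the cancellable between units of work so cancellation is
        // noticed promptly; never block here without a timeout.
        while (!g_cancellable_is_cancelled(cancellable)) {
            // do_one_chunk_of_work();   // hypothetical placeholder
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    });

    // Later, from any thread:
    g_cancellable_cancel(cancellable);
    worker.join();
}

Nothing here requires GLib's own threads; the GCancellable is only used as a thread-safe flag that the std::thread checks.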

You mean something like boost's interruptible threads?
This aspect didn't make it into the standard, but you can derive from (or wrap) std::thread to offer a protected check_interrupted() method which throws if someone has called a public interrupt() method.
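A rough sketch of that idea (all names here are invented for illustration; it uses a small wrapper rather than inheritance, but the shape is the same):

#include <atomic>
#include <functional>
#include <memory>
#include <thread>

struct thread_interrupted {};   // thrown at interruption points

class interruptible_thread {
public:
    explicit interruptible_thread(std::function<void(const std::atomic<bool>&)> body)
        : cancel_(std::make_shared<std::atomic<bool>>(false))
    {
        auto cancel = cancel_;
        thread_ = std::thread([body, cancel] {
            try { body(*cancel); }
            catch (const thread_interrupted&) { /* cooperative exit */ }
        });
    }

    void interrupt() { *cancel_ = true; }   // request cancellation
    void join()      { thread_.join(); }

private:
    std::shared_ptr<std::atomic<bool>> cancel_;
    std::thread thread_;
};

// Worker side: check the flag at safe points and unwind when interrupted.
void worker(const std::atomic<bool>& interrupted)
{
    while (true) {
        if (interrupted) throw thread_interrupted{};   // the check_interrupted() point
        // ... one unit of work ...
    }
}

The worker only notices the interruption at the points where it checks the flag, which is exactly the cooperative compromise the standard leaves to individual applications.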
I wouldn't bother mixing with Gnome's thread constructs. Sounds like more trouble than it's worth.

Implementing asynchronous delays

What would be a smart way to implement something like the following?
// Plain C function for example purposes.
void sleep_async(delay_t delay, void (* callback)(void *), void * data);
That is, a means of asynchronously executing a callback after a delay. POSIX, for example, has a few functions that do something like this, but they are mostly for asynchronous I/O (see this for what I mean). What interests me about those functions is how they are executed "as if" on a new thread, according to that manual page, where an implementation may choose to spawn "a single thread...to receive all notifications". I am aware that some implementations may nonetheless spawn a whole thread for each of them, and that this kind of thing may require support from the OS itself, so this is just an example.
I already have a couple of ways I could implement this (e.g. a priority queue of events sorted by wake time on a timer loop, with no need to start a thread at all), but I am wondering whether there already exist smarter or more complete implementations of what I want to accomplish. For example, maybe implementations of Task.Delay() from C# (and coroutines like it in other language environments) do something smart to minimize the amount of thread spawning needed for asynchronous delays.
Why am I looking for something like this? As implied by the title, I'm looking for something asynchronous. The above signature is just a simple C example to illustrate roughly what POSIX does. I am implementing some C++20 coroutines for use with co_await and friends, with thread pools and whatnot. Scheduling anything that would end up synchronously waiting on something is probably a bad idea, as it would prevent otherwise free threads from doing any work. Spawning [and potentially immediately detaching] a new thread just to add in an asynchronous delay doesn't seem like a very smart idea, either. My timer loop idea could be okay, but that implies needing a predefined timer granularity, and overhead from the priority queue.
Edit
I neglected to mention any real set of target platforms, as a commenter pointed out. I don't expect to target anything outside the "usual" desktop platforms, so the quirks of embedded development can be ignored. The way I plan to use asynchronous delays does not in itself require threading support (everything could just run on a timer loop), but threading will nonetheless be required and used alongside it (namely thread pools on which coroutines would be scheduled).
The simple but inefficient way would be to spawn a thread, have it sleep for delay, and then call the callback. This can be done in just a few lines using std::async():
auto delayed_call = std::async(std::launch::async, [&]{
    std::this_thread::sleep_for(delay);
    callback(data);
});
As mentioned by Thomas Matthews, this requires support for threads. While it's fine for a one-off call, it's not efficient if you have many such delayed calls. Having a priority queue and an event loop or a dedicated thread to handle events in this queue, as you already mentioned, is probably the most efficient way to do it. If you are looking for a library that implements this, then have a look at boost::asio.
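For illustration, here is a rough sketch of that approach (a sketch, not production code): one dedicated thread sleeps on a condition variable until the earliest deadline in a priority queue, then runs the corresponding callback.

#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class timer_queue {
    using clock = std::chrono::steady_clock;
    struct entry {
        clock::time_point when;
        std::function<void()> callback;
        bool operator>(const entry& other) const { return when > other.when; }
    };

public:
    timer_queue() : worker_([this] { run(); }) {}

    ~timer_queue() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    void sleep_async(std::chrono::milliseconds delay, std::function<void()> cb) {
        { std::lock_guard<std::mutex> lk(m_);
          q_.push({clock::now() + delay, std::move(cb)}); }
        cv_.notify_one();   // wake the worker in case this deadline is the earliest
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        while (!stop_) {
            if (q_.empty()) { cv_.wait(lk); continue; }
            auto next = q_.top().when;
            if (cv_.wait_until(lk, next) == std::cv_status::timeout) {
                auto cb = q_.top().callback;
                q_.pop();
                lk.unlock(); cb(); lk.lock();   // run the callback without holding the lock
            }
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::priority_queue<entry, std::vector<entry>, std::greater<entry>> q_;
    bool stop_ = false;
    std::thread worker_;
};

With something like this, each call such as tq.sleep_async(std::chrono::milliseconds(100), []{ /* ... */ }); costs one queue entry rather than one thread, and there is no fixed timer granularity because the worker always waits until the next actual deadline.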
As for using C++20 coroutines, I do not think that this will make something like your sleep_async() any easier. However, an event loop could be implemented on top of it.
A smart way? You mean really, really smart? That would be my own implementation, of course. You know about POSIX timers, and you probably know about Linux timers and the various hacks involving std::thread. But, more seriously, what you require sounds mostly to the tune of something like libeio or libuv - both of these provide callbacks. It depends on what you can afford in binary size and whether you like the particular abstractions a library offers. The two libraries seem to be evolved versions of libevent and libev, libevent being the progenitor of them all.
Creating a std::thread instance involves allocating an entire thread stack, at the very least, which is by no means cheap.

How to std::thread sleep

I am new to std::thread. I need to put a thread to sleep from another thread; is that possible? In the examples, all I see is code like:
std::this_thread::sleep_for(std::chrono::seconds(1));
But what I want to do is something like:
std::thread t([]{...});
t.sleep(std::chrono::seconds(1));
or
sleep(t, std::chrono::seconds(1));
Any ideas?
Because sleep_for is synchronous, it only really makes sense for the current thread. What you want is a way to suspend and resume other threads. The standard does not provide a way to do this (as far as I know), but you can use platform-dependent methods via native_handle().
For example on Windows, SuspendThread and ResumeThread.
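For instance (Windows/MSVC only, where native_handle() is the underlying Windows HANDLE; treat this as a sketch of the mechanism, not a recommendation):

#include <atomic>
#include <chrono>
#include <thread>
#include <windows.h>   // SuspendThread / ResumeThread

int main()
{
    std::atomic<bool> done{false};
    std::thread t([&] {
        while (!done) std::this_thread::sleep_for(std::chrono::milliseconds(1));
    });

    SuspendThread(t.native_handle());   // freeze the thread wherever it happens to be
    // ... the thread may be holding a lock while frozen -- this is why it's risky
    ResumeThread(t.native_handle());

    done = true;
    t.join();
}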
But more important is that there is almost never a need to do this. Usually when you encounter basic things you need that the standard doesn't provide, it's a red flag that you're heading down a dangerous design path. Consider accomplishing your bigger goal in a different way.
No. The standard doesn't give you such a facility, and it shouldn't. What does sleep do? It pauses the execution of a given thread for at least the given amount of time. Can other threads possibly know, without synchronizing, that the given thread can be put to sleep in order to achieve better performance?
No. You would have to provide a synchronized interface, which would counter the performance gain from threads. The only thread that has the information needed to decide whether it is OK to sleep is the thread itself. That is why std::thread has no sleep member, while std::this_thread does (sleep_for and sleep_until).
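If what you really need is a worker that can be paused, the cooperative version is for the thread itself to block on a condition variable whenever another thread has asked it to pause. A minimal sketch (not part of the standard API, just the usual pattern):

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool paused = false;

void worker()
{
    for (;;) {
        {   // pause point: the worker decides where it is safe to stop
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !paused; });
        }
        // ... one unit of work ...
    }
}

// Called from any other thread:
void set_paused(bool p)
{
    { std::lock_guard<std::mutex> lk(m); paused = p; }
    cv.notify_all();
}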

Using asynchronous method vs thread wait

I have two versions of a function available in a C++ library which do the same task. One is synchronous, and the other is asynchronous and allows a callback function to be registered.
Which of the strategies below is preferable for better memory use and performance?
Call the synchronous function in a worker thread, and use mutex synchronization to wait until I get the result
Do not create a thread, but call the asynchronous version and get the result in the callback
I am aware that the worker thread creation in option 1 will cause more overhead. What I want to know is the overhead caused by thread synchronization objects, and how it compares to the overhead of the asynchronous call. Does the asynchronous version of a function internally spin off a thread and use a synchronization object, or does it use some other technique, such as talking directly to the kernel?
"Profile, don't speculate." (DJB)
The answer to this question depends on too many things, and there is no general answer. The role of the developer is to be able to make these decisions. If you don't know, try the options and measure. In many cases, the difference won't matter and non-performance concerns will dominate.
"Premature optimisation is the root of all evil, say 97% of the time" (DEK)
Update in response to the question edit:
C++ libraries, in general, don't get to use magic to avoid synchronisation primitives. The asynchronous vs. synchronous interfaces are likely to be wrappers around things you would do anyway. Processing must happen in a context, and if completion is to be signalled to another context, a synchronisation primitive will be necessary to do that.
Of course, there might be other considerations. If your C++ library is talking to some piece of hardware that can do processing, things might be different. But you haven't told us about anything like that.
The answer to this question depends on context you haven't given us, including information about the library interface and the structure of your code.
Use the asynchronous function, because it will probably do what you would otherwise have to do manually with the synchronous one, but in a less error-prone way.
Asynchronous: the library creates a thread (or reuses one), does the work, and when done calls your callback.
Synchronous: you create an event to wait for, create a thread for the work, have that thread call the sync version, transfer the result, signal the event, and wait for the event.
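To make the comparison concrete, here is a sketch with hypothetical library functions do_work_sync() and do_work_async(); std::promise/std::future play the role of the "event" described above:

#include <future>
#include <thread>

int do_work_sync();   // hypothetical: the library's blocking version

// Option 1: run the synchronous version on a worker thread and wait for it.
int call_sync_in_worker()
{
    std::promise<int> result;
    std::future<int> f = result.get_future();

    std::thread worker([&result] {
        result.set_value(do_work_sync());   // blocking library call
    });

    int value = f.get();    // wait until the worker signals completion
    worker.join();
    return value;
}

// Option 2: hand the library a callback and let it do the equivalent
// bookkeeping internally, e.g.
//     do_work_async([](int value) { /* consume the result */ });   // hypothetical

Either way a synchronisation primitive is involved; option 2 just keeps it inside the library.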
You might also consider that threads each have their own environment, so they use more memory than a non-threaded solution when all other things are equal.
Depending on your threading library there can also be significant overhead to starting and stopping threads.
If you need inter-thread synchronization, there can also be a lot of pain in debugging the threaded code.
If you're comfortable writing non threaded code (i.e. you won't burn a lot of time writing and debugging it) then that might be the best choice.

Performance of boost::signals2

I'm switching from xlobjects to boost::signals2 as my signal/slot framework in the hope that the establishment of connections, their removal, signal emission, etc. are thread-safe. I'm not interested in inter-thread signal emission at all.
So the simple question is: is boost::signals2 thread safe in the way that, for instance, two or more threads can make a connection on the same signal?
Also, does boost::signals2 incur a performance penalty compared to xlobjects? This is not important as the application doesn't rely heavily on signals/slots, but I'd like to know nevertheless.
Boost.Signals2 is thread-safe.
But if for some reason you need extra performance and can guarantee single-threaded access, there is a dummy mutex in the signals2 library that will be a lot faster than a real mutex.
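For example, following the signal_type mechanism described in the Signals2 documentation (treat this as a sketch):

#include <boost/signals2.hpp>
#include <boost/signals2/dummy_mutex.hpp>

// A signal specialised with signals2::dummy_mutex, i.e. a no-op lock.
// Only safe if you guarantee single-threaded access yourself.
using fast_signal = boost::signals2::signal_type<
    void (int),
    boost::signals2::keywords::mutex_type<boost::signals2::dummy_mutex>
>::type;

fast_signal sig;   // behaves like signals2::signal<void(int)>, minus the locking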
I believe all the answers you need regarding thread safety in Boost.Signals2 are in the documentation (short answer: yes, boost::signals2 is thread-safe). Regarding performance, I would guess that thread safety comes at a cost, but there is only one way to be sure: benchmark!

Boost: what exactly is not threadsafe in Boost.Signals?

I have read in multiple places that Boost.Signals is not thread-safe, but I haven't found many more details about it. That simple statement doesn't really say much. Most applications nowadays have threads - even if they try to be single-threaded, some of their libraries may use threads (for example libsdl).
I assume the implementation has no problem with other threads that never touch the slot, so it is at least thread-safe in that sense.
But what exactly works and what would not? Would it work to use it from multiple threads as long as I never access it from two threads at the same time, i.e. if I build my own mutexes around the slot?
Or am I forced to use the slot only in the thread where I created it, or where I used it for the first time?
I don't think it's too clear either, and one of the library reviewers said here:
I also don't liked the fact that only three times the word 'thread' was named.
Boost.signals2 wants to be a 'thread safe signals' library. Therefore some more
details and especially more examples concerning on that area should be given to
the user.
One way of figuring it out is to go to the source and see what they're using _mutex / lock() to protect. Then just imagine what would happen if those calls weren't there. :)
From what I can gather, it's ensuring simple things like "if one thread is doing connects or disconnects, that won't cause a different thread which is iterating through the slots attached to those signals to crash". Kind of like how using a thread-safe version of the C runtime library assures that if two threads make valid calls to printf at the same time then there won't be a crash. (Not to say the output you'll get will make any sense—you're still responsible for the higher order semantics.)
It doesn't seem to be like Qt, in which the thread a certain slot's code gets run on is based on the target slot's "thread affinity" (which means emitting a signal can trigger slots on many different threads to run in parallel.) But I guess not supporting that is why the boost::signal "combiners" can do things like this.
One problem I see is that one thread can connect or disconnect while another thread is signalling.
You can easily wrap your signal and connect calls with mutexes. However, it is non-trivial to wrap the connections (connect returns connection objects which you can use to disconnect later).
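A sketch of that wrapping approach (shown here with signals2 types for concreteness, but the shape is the same for the old Boost.Signals):

#include <boost/signals2.hpp>
#include <functional>
#include <mutex>

class guarded_signal {
public:
    boost::signals2::connection connect(const std::function<void(int)>& slot) {
        std::lock_guard<std::mutex> lk(m_);
        return sig_.connect(slot);      // the returned connection escapes the guard
    }

    void emit(int value) {
        std::lock_guard<std::mutex> lk(m_);
        sig_(value);                    // slots run with the mutex held: a slot that
                                        // calls connect() on this wrapper would deadlock
    }

private:
    std::mutex m_;
    boost::signals2::signal<void(int)> sig_;
};

The connection objects returned from connect() are what make this non-trivial: callers can disconnect through them at any time, bypassing the wrapper's mutex.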