We have code that registers a callback, so the flow that registers the callback has no knowledge of when the callback will be called.
The callback will be called by another flow, on a different thread; hence the main flow that registered the callback needs to wait for the callback to complete.
I have no idea how to implement this, since I cannot modify anything in the other thread that will call the callback. How can I make my main thread respond synchronously, i.e. continue only after the callback has been called by the other thread?
You will need to share some state between the two that can be used to communicate this.
As a corollary, if the callback is stateless, this cannot be done (or only within certain restrictions, such as limiting the number of callbacks that can be active at the same time).
Since access to that shared state potentially happens concurrently from different threads, all access needs to be synchronized, i.e. made thread-safe.
Here is a simple example using std::future:
#include <future>
// [...]
std::promise<void> p;
std::future<void> f = p.get_future();
do_work_async([&p]() { p.set_value(); });
f.get(); // this line will block until the callback is executed
Note that this has potential lifetime issues: The promise needs to be kept alive until the callback has executed. Depending on your program, this might make it necessary to put the promise on the heap.
If stateful callbacks are not supported (e.g. the callback parameter must be a plain C function pointer and no injection point for user state is provided), you need to put your shared state into static storage instead, with the usual resulting limitations.
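For illustration, here is a minimal sketch of that fallback. register_callback and on_done are made-up names for a registration API that only accepts a plain void(*)(); because the shared promise lives in static storage, only one such wait can be outstanding at a time:

#include <future>

static std::promise<void> g_done;      // shared state in static storage

void register_callback(void (*cb)());  // hypothetical registration API

void on_done() {                       // plain function pointer, no user state
    g_done.set_value();
}

void wait_for_callback() {
    std::future<void> f = g_done.get_future();
    register_callback(on_done);
    f.get();                           // blocks until on_done() has run
    g_done = std::promise<void>();     // reset so it can be reused
}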
Related
I am doing some work with threading on an embedded platform. This platform provides a Thread class, and it has a start method that takes a function pointer, like this:
void do_in_parallel() {
    // Some stuff to do in a new thread
}

Thread my_thread;
my_thread.start(do_in_parallel);
The problem is there is no way to pass parameters in.[1] I want to solve this by creating an abstract class, call it Thread2, which extends Thread (or it could just have a Thread as instance data).
Thread2 would have a pure virtual function void run(), and the goal was to pass that to Thread::start(), except I soon learned that member function pointers have a different type and can't be used like this. I could make run() static, but then I still can't have more than one instance, defeating the whole purpose (not to mention that you can't have a virtual static function).
Are there any workarounds that wouldn't involve changing the original Thread class (considering it's a library that I'm stuck with as-is)?
[1] Global variables are a usable workaround in many cases, except when instantiating more than one thread from the same function pointer. I can't come up with a way to avoid race conditions in that case.
Write a global thread pool.
It maintains a queue of tasks. These tasks can have state.
When you add a task to the queue, you can choose to also request that it get a thread immediately, or you can wait for the threads in the pool to finish what they are doing.
The threads in the pool are created with the provided Thread class, and they get their marching instructions from the pool. For the most part, they should pop tasks, run them, then wait for another task to be ready.
If waiting isn't permitted, you could still have some global thread manager that stores state for the threads.
The pool/manager returns the equivalent of a future<T> augmented with whatever features you want. Code that provides tasks interacts with the task through that object instead of the embedded Thread type.
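A rough sketch of that idea is below. It assumes std::mutex and std::condition_variable are usable on the platform; TaskPool, submit and pool_entry are made-up names, and shutdown handling is omitted for brevity:

#include <deque>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <condition_variable>

class TaskPool {
public:
    // Enqueue a task (which can carry state) and hand back a future for its result.
    template <typename F>
    auto submit(F f) -> std::future<decltype(f())> {
        using R = decltype(f());
        auto task = std::make_shared<std::packaged_task<R()>>(std::move(f));
        std::future<R> result = task->get_future();
        {
            std::lock_guard<std::mutex> lock(mtx_);
            tasks_.push_back([task] { (*task)(); });
        }
        cv_.notify_one();
        return result;
    }

    // Each pool thread runs this loop: pop a task, run it, wait for the next one.
    void worker_loop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mtx_);
                cv_.wait(lock, [this] { return !tasks_.empty(); });
                task = std::move(tasks_.front());
                tasks_.pop_front();
            }
            task();
        }
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> tasks_;
};

TaskPool g_pool;                              // the "global" pool
void pool_entry() { g_pool.worker_loop(); }   // plain function pointer for Thread::start

During initialization you would point a handful of platform threads at the pool (Thread workers[4]; for (auto& w : workers) w.start(pool_entry);), and producers then call something like auto fut = g_pool.submit([]{ return 42; }); and later fut.get().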
A simple wrapper can be written if locking is permitted
#include <semaphore>

void start(Thread& t, void (*fn)(void*), void* p)
{
    // A binary semaphore is used rather than std::mutex, because it is
    // released on a different thread than the one that acquired it
    // (which std::mutex does not allow).
    static std::binary_semaphore sem{1};
    static void* sp;
    static void (*sfn)(void*);
    sem.acquire();
    sp = p;
    sfn = fn;
    t.start([] {
        // Copy the parameters out of static storage, then release the
        // semaphore so the next start() call can reuse it.
        auto p = sp;
        auto fn = sfn;
        sem.release();
        fn(p);
    });
}
This is obviously not going to scale well, since all thread creation goes through the same lock, but it is likely good enough.
Note that this is exception-unsafe, but I assume that is fine on an embedded system.
With the wrapper in place:
template<typename C>
void start(Thread& t, C& c)
{
    // Forward any callable through the void*-based wrapper above.
    start(t, [](void* p) {
        (*static_cast<C*>(p))();
    }, &c);
}
This allows any callable to be used. This particular implementation places the responsibility of managing the callable's lifetime on the caller.
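For example (a hypothetical usage sketch; Blinker is a made-up callable, and it must outlive the thread since the wrapper only stores a pointer to it):

struct Blinker {
    int pin;
    void operator()() const {
        // toggle 'pin', or whatever the thread is supposed to do
    }
};

Blinker blink{13};   // the caller keeps this alive for the thread's lifetime
Thread t;
start(t, blink);     // runs blink() on the new thread, with its own state

Several threads can now be started from the same function, each with its own callable instance, which was the original goal.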
You can create your own threaded dispatching mechanism (a producer-consumer queue) built around the platform-specific thread.
I assume that you have the equivalent of mutexes and condition variables/a signalling mechanism on the target platform.
Create a thread-safe queue that can accept function objects.
The run method creates a thread and waits on the queue.
The calling thread can call a post()/invoke() method that simply inserts a function object into the queue.
The function object can carry whatever arguments the calling thread needs to pass along.
I'm currently trying to get to grips with boost::asio strands. While doing so, I keep reading about "invoking strand post/dispatch inside or outside a strand". Somehow I can't figure out how inside a strand differs from through a strand, and therefore can't grasp the concept of invoking a strand function outside the strand at all.
Probably there is just a small piece missing in my puzzle. Can somebody please give an example of how calls to a strand can be inside or outside it?
What I think I've understood so far is that posting something through a strand would be
m_strand.post(myfunctor);
or
m_strand.wrap(myfunctor);
io_svc.post(myfunctor);
Is the latter considered a call to dispatch outside the strand (as opposed to the other being a call to post inside it)? Is there some relation between the strand's "inside realm" and the threads the strand operates on?
If being inside a strand simply meant to invoke a strand's function, then the strand class's documentation would be pointless. It states that strand::post can be invoked outside the strand... That's precisely the part I don't understand.
I too had some trouble understanding this concept, but it became clear once I started working with libdispatch. That helped me map things onto asio better.
Now let's see how to make sense of strands. Consider a strand as a serial queue of handlers that need to be executed.
Now, where do these handlers get executed? Within the worker threads.
Where do these worker threads come from? From the io_service object you passed when creating the strand.
Something like:
asio::strand s(io_serv_obj);
Now, as you probably know, io_service::run can be called by a single thread or by multiple threads. The threads calling the run method of io_serv_obj are the worker threads for that strand in our case. So, it could be either single-threaded or multi-threaded.
Coming back to strands: when you post a handler, that handler is always enqueued in the serial queue we talked about. The worker threads pick up handlers from the queue one after the other.
Now, when you do a dispatch, asio does some optimization for you:
It checks whether you are calling it from inside one of the worker threads or from some other thread (perhaps a thread of some other io_service instance). When it is called outside the current execution context of the strand, that is when it is called outside the strand. In the outside case, dispatch will either enqueue the handler just like post (when there are other handlers waiting in the queue), or call it directly (when it can guarantee that it will not run concurrently with any other handler from that queue that may be executing in one of the worker threads at that moment).
UPDATE:
As noted in the comments section, inside means called from within another handler, e.g. I posted a handler A and, inside that handler, I dispatch another handler. Now, as explained in the next paragraph, if there are no other handlers waiting in the strand's serial queue, the dispatched handler will be called synchronously. If this condition is not met, that means the dispatch is called from outside.
Now, if you call dispatch from outside the strand, i.e. not within its current execution context, asio checks its call stack to see whether any other handler from its serial queue is currently running. If not, it will call that handler directly, synchronously. So there is no cost of enqueueing the handler (I think no extra allocation is done either, though I'm not sure).
Let's look at the documentation now:
s.dispatch(a) happens-before s.post(b), where the former is performed
outside the strand
This means that if dispatch was called from outside the strand's current execution, or there are other handlers already enqueued, then it needs to enqueue the handler; it just cannot call it synchronously. Since it is a serial queue, a will get executed before b.
Had there been another call s.dispatch(c) made before a and b were enqueued, then c would get executed before a and b, but in no case can b get executed before a.
Hope this clears your doubt.
For a given strand object s, running outside s implies that s.running_in_this_thread() returns false. This returns true if the calling thread is executing a handler that was submitted to the strand via post(), dispatch(), or wrap(). Otherwise, it returns false:
io_service.post(handler); // handler will run outside of strand
strand.post(handler); // handler will run inside of strand
strand.dispatch(handler); // handler will run inside of strand
io_service.post(strand.wrap(handler)); // handler will run inside of strand
Given:
a strand object s
a function object f1 that is added to strand s via s.post(), or s.dispatch() when s.running_in_this_thread() == false
a function object f2 that is added to strand s via s.post(), or s.dispatch() when s.running_in_this_thread() == false
then the strand provides a guarantee of ordering and non-concurrency, such that f1 and f2 will not be invoked concurrently. Furthermore, if the addition of f1 happens before the addition of f2, then f1 will be invoked before f2.
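To make the guarantee concrete, here is a small sketch (written against the older io_service-based API that the question uses; with the newer io_context API the idea is the same). Because every handler goes through the strand, the counter needs no mutex even with two threads running the io_service:

#include <boost/asio.hpp>
#include <thread>
#include <iostream>

int main() {
    boost::asio::io_service io;
    boost::asio::io_service::strand strand(io);
    int counter = 0;                            // only touched from strand handlers

    for (int i = 0; i < 1000; ++i)
        strand.post([&counter] { ++counter; }); // serialized, FIFO per the guarantee

    std::thread t1([&io] { io.run(); });
    std::thread t2([&io] { io.run(); });
    t1.join();
    t2.join();

    std::cout << counter << '\n';               // always prints 1000
}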
I'm writing a plugin for a piece of software. This software invokes
void Init() {...}
on loading, and it has a multithreading feature: the program can run multiple threads and can call custom functions from my plugin at the same time.
In my plugin I'm using COM objects, which I initialize in the following way:
void Init() { // "Global" initialization
    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    position.CreateInstance(__uuidof(Position));
    order.CreateInstance(__uuidof(Order));
}
Next I implement a plugin function (example):
int SendOrder(....) {
    return order.SendOrder(...); // invoke COM object's method
}
The problem is that this variant does not work as expected, so I moved the COM object instantiation directly into the function's body:
int SendOrder(....) {
    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    order.CreateInstance(__uuidof(Order));
    int ret = order.SendOrder(...);
    CoUninitialize();
    return ret;
}
Now the COM object is instantiated on every function call, and this variant works as expected (every thread now has its own apartment and its own object instance), but I'm afraid this is not the best solution, because instantiation is a costly operation.
Can this be done in a better way?
If you want to be able to invoke COM objects at the same time on multiple threads, you should be initializing the thread to use a multi-threaded apartment instead of a single-threaded apartment.
Currently, you're initializing the thread as a single-threaded apartment, which means that any objects created on that thread will only execute their functions on that thread. If you attempt to use one of these objects from a different thread, the calls will be marshaled to the thread that created them.
If COM needs to marshal a function call to another thread, it does so via Windows's messaging system. If the thread isn't pumping its messages, the function will never be called; this is most likely what's happening to you, and why you're seeing that nothing gets executed.
If you initialize your thread as a multi-threaded apartment by using COINIT_MULTITHREADED instead of COINIT_APARTMENTTHREADED when you call CoInitializeEx, it will allow objects created by this thread (i.e. your order) to be used on any other thread.
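A minimal sketch of what that change looks like in the plugin from the question (Position, Order, position and order are the question's own names; whether this alone is sufficient also depends on the COM object's own threading model):

void Init() { // "Global" initialization, called once on load
    // Join the multi-threaded apartment: objects created here can then be
    // used from any of the host's threads, as described above.
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    position.CreateInstance(__uuidof(Position));
    order.CreateInstance(__uuidof(Order));
}

int SendOrder(....) {
    // No per-call CoInitializeEx/CreateInstance/CoUninitialize needed anymore.
    return order.SendOrder(...);
}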
I have this situation:
void foo::bar()
{
    RequestsManager->SendRequest(someRequest, this, &foo::someCallback);
}
where RequestsManager works in an asynchronous way:
SendRequest puts the request in a queue and returns to the caller
Another thread gets the requests from the queue and processes them
When a request has been processed, the callback is called
Is it possible to have foo::someCallback called in the same thread as SendRequest? If not, how can I avoid the following "callback limitation": callbacks should not perform time-consuming operations, to avoid blocking the requests manager.
No - calls/callbacks cannot change thread context - you have to issue some signal to communicate between threads.
Typically, 'someCallback' would either signal an event that the thread that originated the 'SendRequest' call is waiting on (synchronous), or push the SendRequest (and so, presumably, the results from its processing) onto a queue from which the thread that originated the 'SendRequest' call will eventually pop it (asynchronous). It just depends on how the originator wishes to be signaled.
Async example: the callback might PostMessage/Dispatcher.BeginInvoke the completed SendRequest to a GUI thread for display of the results.
I can see a few ways to achieve this:
A) Implement a strategy similar to signal handling
When request processing is over, RequestManager puts the callback invocation on a waiting list. The next time SendRequest is called, right before returning, it checks whether there are any pending callbacks for the thread and executes them. This is a relatively simple approach with minimal requirements on the client. Choose it if latency is not a concern. RequestManager can also expose an API to check for pending callbacks on demand.
B) Suspend the callback-target thread and execute the callback in a third thread
This will give you a truly asynchronous solution, with all its caveats. It will look as if the target thread's execution got interrupted and execution jumped into an interrupt handler. Before the callback returns, the target thread needs to be resumed. You won't be able to access thread-local storage or the original thread's stack from inside the callback.
It depends on the definition of "time-consuming operations".
The classic way to do this is:
when the request is processed, the RequestManager should execute &foo::someCallback
to avoid blocking the request manager, you may just raise a flag inside this callback
check that flag periodically in the thread that called RequestsManager->SendRequest
This flag can be just a bool inside class foo (preferably std::atomic<bool>; a plain volatile bool is not guaranteed to be safe for sharing data between threads in C++)
If you want to make sure that the calling thread (foo's) learns immediately that the request has been processed, you need additional synchronization.
Implement (or use an already implemented) blocking pipe (or use signals/events) between these threads. The idea is:
foo's thread executes SendRequest
foo starts sleeping on some select (for example)
RequestManager executes the request and:
calls &foo::someCallback
"awakes" the foo's thread (by sending something in that file descriptor, which foo sleeps on (using select))
foo is awaken
checks the volatile bool flag for already processed request
does what it needs to do
clears the flag
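If standard synchronization primitives are available, the same wake-up can be expressed with a condition variable instead of a raw pipe. A minimal sketch (RequestsManager, someRequest and someCallback are the question's own names):

#include <mutex>
#include <condition_variable>

class foo {
public:
    void bar() {
        RequestsManager->SendRequest(someRequest, this, &foo::someCallback);
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return done_; }); // sleep until the callback has run
        done_ = false;                            // reset for the next request
        // ... handle the result here, on the calling thread
    }

private:
    // Runs on the RequestsManager thread: keep it short, just record and wake.
    void someCallback() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_one();
    }

    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};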
Follow-up question to:
This question
As described in the linked question, we have an API that uses an event loop that polls select() to handle user-defined callbacks.
I have a class using this like such:
class example{
public:
example(){
Timer* theTimer1 = Timer::Event::create(timeInterval,&example::FunctionName);
Timer* theTimer2 = Timer::Event::create(timeInterval,&example::FunctionName);
start();
cout<<pthread_self()<<endl;
}
private:
void start(){
while(true){
if(condition)
FunctionName();
sleep(1);
}
}
void FunctionName(){
cout<<pthread_self()<<endl;
//Do stuff
}
};
The idea behind this is that you want FunctionName to be called either when the condition is true or when the timer is up. Not a complex concept. What I am wondering is whether FunctionName can be called both in the start() function and by the callback at the same time. This could cause some memory corruption for me, as they access a non-thread-safe piece of shared memory.
My testing tells me that they do run in different threads (I get corruption only when I use the events), even though cout<<pthread_self()<<endl; says they have the same thread id.
Can someone explain to me how these callbacks get forked off? In what order do they get executed? What thread do they run in? I assume they are running in the thread that does the select(), but then why do they have the same thread id?
The real answer depends on the implementation of Timer, but if your callbacks are being run from the same thread, it's most likely using signals or POSIX timers. Either way, select() isn't involved at all.
With signals and POSIX timers, there is very little you can do safely from the signal handler. Only certain async-signal-safe calls, such as read() and write() (NOT fread() and fwrite(), or even new and cout), are allowed. Typically, what one does is write() to a pipe or eventfd, and then, in another thread or in your main event loop running select(), notice this notification and handle it. This allows you to handle the signal in a safe manner.
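A sketch of that pattern (the self-pipe trick; the SIGALRM timer and the names here are just for illustration): the handler only write()s a byte, and the loop around select() does the real work:

#include <unistd.h>
#include <sys/select.h>
#include <csignal>

static int notify_pipe[2];

void on_timer(int /*signo*/) {
    char byte = 1;
    (void)write(notify_pipe[1], &byte, 1);   // write() is async-signal-safe
}

void event_loop() {
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(notify_pipe[0], &readfds);
        if (select(notify_pipe[0] + 1, &readfds, nullptr, nullptr, nullptr) > 0) {
            char byte;
            (void)read(notify_pipe[0], &byte, 1);
            // Safe to do the real work (the FunctionName() equivalent) here,
            // outside the signal handler.
        }
    }
}

int main() {
    pipe(notify_pipe);
    std::signal(SIGALRM, on_timer);
    alarm(1);                                // request a SIGALRM in one second
    event_loop();
}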
Your code as written won't compile, much less run. example::FunctionName needs to be static, and needs to take an object reference to be used as a callback function.
If the timers run in separate threads, it's possible for this function to be called by three different threads.