If I make a system call, e.g.:
call execute_command_line (slowcall1, wait=.false., exitstat=i)
call execute_command_line (slowcall2, wait=.false., exitstat=j)
call execute_command_line (slowcall3, wait=.false., exitstat=k)
call execute_command_line (slowcall4, wait=.false., exitstat=l)
I want to call these in parallel, then check on them as they progress to see whether they have completed, and finally take some action. However, polling exitstat doesn't give me that information. What's the right idiom for checking whether a system call I am not waiting on has actually completed?
From here:
https://gcc.gnu.org/onlinedocs/gfortran/EXECUTE_005fCOMMAND_005fLINE.html
It does not appear that any of the optional arguments:
EXITSTAT
CMDSTAT
CMDMSG
provide this information.
Is there a recommended modern Fortran procedure used involving writing lock files as part of the call? Or a different asynchronous call?
No, there is no way. The command as you are using it is "fire and forget". If you want any finer control, use threads. Not for parallelism, but for concurrency.
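As a rough illustration of the thread idea, here is a C++ sketch (Fortran would have to reach equivalent threading through C interoperability): each command is run through std::system on its own thread via std::async, and the returned futures can be polled without blocking. The command strings stand in for the question's slowcall variables.

#include <chrono>
#include <cstdlib>
#include <future>

int main()
{
    // Run each slow command on its own thread via std::system.
    auto run = [](const char *cmd) { return std::system(cmd); };
    auto f1 = std::async(std::launch::async, run, "slowcall1");
    auto f2 = std::async(std::launch::async, run, "slowcall2");

    // Non-blocking check: has slowcall1 finished yet?
    bool done1 = f1.wait_for(std::chrono::seconds(0)) == std::future_status::ready;
    (void)done1;   // ...poll periodically and act on completed commands...

    int exitstat1 = f1.get();   // blocks only if the command hasn't finished
    int exitstat2 = f2.get();
    (void)exitstat1; (void)exitstat2;
}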
In VxWorks 6.9 you can create timers, which are really just wrappers for a watchdog. You supply these a function pointer, a delay, and up to one parameter, and after the delay the function is called with the parameter. However, it is called in the interrupt context. This (for some reason) means you cannot call any "blocking" functions or the system literally crashes. You cannot call printf and you cannot call an object's member function, i.e. you cannot do this:
void Foo::WdCallback(Foo *foo){
    foo->DoThing();
}

wdStart(wd, 16, (FUNCPTR)Foo::WdCallback, (_Vx_usr_arg_t)my_foo_ptr);
as it will also crash for reasons I don't understand.
What other way can we create a timer/timeout in VxWorks so that we can actually do something useful with the callback? One method I have seen is using a message queue: the watchdog function calls a message queue send function. However, this means that a task must be created to dequeue that message queue somewhere else. I've also read that the watchdog callback could give a semaphore allowing a task to continue, but that means we have to create a task for every single timer-based function we want.
It looks like no matter what road we take with watchdogs, or timers, in VxWorks, we have to create an entire task just to be able to handle the watchdog callback due to the interrupt context. There has to be a less ridiculous way to do this. Is there a purely C++ way to write a timer? Or a simpler VxWorks implementation?
C++ should not be used for functions executed in an interrupt context. The watchdog here is executed in the context of the system tick interrupt.
If you want to keep C++ code, make sure that no new/delete operation will be performed, and compile the code with additional flags (this should be documented in the VxWorks Programmer's Guide, in the C++ section => -fno-rtti -fno-exceptions).
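For completeness, here is a rough sketch of the message-queue pattern described in the question: the watchdog callback stays plain C and only posts a message, while an ordinary task dequeues it and calls the C++ member function. The msgQLib/wdLib/taskSpawn signatures are quoted from memory of the VxWorks 6.x kernel API, so treat this as an illustration rather than verified code; Foo is the question's class.

#include <vxWorks.h>
#include <msgQLib.h>
#include <wdLib.h>
#include <taskLib.h>

class Foo { public: void DoThing(); };   /* the question's class, sketched */

static MSG_Q_ID timerQ;

/* Runs in the system-tick interrupt context: only posts a message (NO_WAIT). */
extern "C" void WdCallback(_Vx_usr_arg_t arg)
{
    msgQSend(timerQ, (char *)&arg, sizeof(arg), NO_WAIT, MSG_PRI_NORMAL);
}

/* Runs in task context, where blocking calls and C++ are safe. */
extern "C" void TimerTask(void)
{
    _Vx_usr_arg_t arg;
    for (;;)
    {
        msgQReceive(timerQ, (char *)&arg, sizeof(arg), WAIT_FOREVER);
        ((Foo *)arg)->DoThing();
    }
}

void StartTimer(Foo *my_foo_ptr)
{
    timerQ = msgQCreate(8, sizeof(_Vx_usr_arg_t), MSG_Q_FIFO);
    taskSpawn((char *)"tTimer", 100, 0, 8192, (FUNCPTR)TimerTask,
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    WDOG_ID wd = wdCreate();
    wdStart(wd, 16, (FUNCPTR)WdCallback, (_Vx_usr_arg_t)my_foo_ptr);
}

The single dequeuing task can serve any number of watchdogs, so you do not need one task per timer-based function, only one task per queue.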
My requirement is that a single frame of data is to be processed by two methods in parallel (they need to run in parallel because they are computationally demanding).
Based on the result of either thread, the other may need to be stopped:
That is if method 1 returns TRUE first, method 2 should be stopped.
If method 1 returns FALSE first, method 2 should not be stopped.
Similarly, if method 2 returns TRUE first, method 1 should be stopped.
If method 2 returns FALSE first, method 1 should not be stopped.
Please note that method 1 and method 2 are library calls (black box) and I don't have access to their internals. All I know is that they are computationally intense.
How can I implement it in C++/Windows? Any suggestions?
Take a look at the concurrency runtime.
Specifically the task namespace (http://msdn.microsoft.com/en-us/library/dd492427.aspx) and the when_any function (http://msdn.microsoft.com/en-us/library/hh749973.aspx).
concurrency::when_any will create a task that completes when any of the input tasks complete.
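A rough sketch of what that might look like follows; the Frame type and the method1/method2 wrappers are placeholders for the question's black-box calls, not a real API.

#include <ppltasks.h>
#include <vector>

struct Frame { /* the single frame of data */ };

// Placeholders standing in for the two black-box library calls.
bool method1(Frame f);
bool method2(Frame f);

void process(Frame frame)
{
    // Capture the frame by value so the task that "loses" can keep running
    // safely after this function returns.
    std::vector<concurrency::task<bool>> tasks;
    tasks.push_back(concurrency::task<bool>([frame] { return method1(frame); }));
    tasks.push_back(concurrency::task<bool>([frame] { return method2(frame); }));

    // when_any yields a task that completes as soon as either input task
    // finishes; .get() returns the finished task's result and its index.
    auto first = concurrency::when_any(tasks.begin(), tasks.end()).get();

    bool result  = first.first;    // TRUE/FALSE from whichever finished first
    size_t which = first.second;   // 0 = method1, 1 = method2
    (void)which;
    // ...act on the result; the other task keeps running to completion
    // (see the next answer for why it cannot be stopped gracefully).
    (void)result;
}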
No matter if you use plain Windows threads, std::thread, Task Parallelism, or whatever library you prefer, you're still not going to achieve what you want given the details you provided in your question.
While you can certainly figure out when the first thread/task is finished (e.g. #j-w's answer), you cannot really stop the other task gracefully without telling your "blackbox library function" to stop (unless it provides a way for explicit early cancellation). You didn't indicate the blackbox function can be told to cancel midway, so I'm assuming it cannot.
You cannot simply kill the thread/task, since this would create resource leaks and maybe even other nasty stuff such as deadlocks, etc., depending on what your blackbox function does.
So you could go with something like when_any (or other synchronization/signaling primitives) and just let the other thread/task continue to run even though you don't need the result; or "un-blackbox" your library functions and add cancellation support; or forget about it altogether.
I'm trying to explore all the options of the new C++11 standard in depth. While using std::async and reading its definition, I noticed two things, at least under Linux with GCC 4.8.1:
it's called async, but it has a really "sequential behaviour": basically, on the line where you call the future associated with your async function foo, the program blocks until the execution of foo is completed.
it depends on exactly the same external library as other, better, non-blocking solutions, namely pthread: if you want to use std::async you need pthread.
At this point it's natural to ask: why choose std::async over even a simple set of functors? It's a solution that doesn't even scale: the more futures you create, the less responsive your program will be.
Am I missing something? Can you show an example that is guaranteed to be executed in an async, non-blocking way?
it's called async, but it has a really "sequential behaviour",
No, if you use the std::launch::async policy then it runs asynchronously in a new thread. If you don't specify a policy it might run in a new thread.
basically, on the line where you call the future associated with your async function foo, the program blocks until the execution of foo is completed.
It only blocks if foo hasn't completed, but if it was run asynchronously (e.g. because you use the std::launch::async policy) it might have completed before you need it.
it depends on exactly the same external library as other, better, non-blocking solutions, namely pthread: if you want to use std::async you need pthread.
Wrong, it doesn't have to be implemented using Pthreads (and on Windows it isn't, it uses the ConcRT features.)
at this point it's natural to ask: why choose std::async over even a simple set of functors?
Because it guarantees thread-safety and propagates exceptions across threads. Can you do that with a simple set of functors?
It's a solution that doesn't even scale: the more futures you create, the less responsive your program will be.
Not necessarily. If you don't specify the launch policy then a smart implementation can decide whether to start a new thread, or return a deferred function, or return something that decides later, when more resources may be available.
Now, it's true that with GCC's implementation, if you don't provide a launch policy then with current releases it will never run in a new thread (there's a bugzilla report for that) but that's a property of that implementation, not of std::async in general. You should not confuse the specification in the standard with a particular implementation. Reading the implementation of one standard library is a poor way to learn about C++11.
Can you show an example that is guaranteed to be executed in an async, non-blocking way?
This shouldn't block:
auto fut = std::async(std::launch::async, doSomethingThatTakesTenSeconds);
auto result1 = doSomethingThatTakesTwentySeconds();
auto result2 = fut.get();
By specifying the launch policy you force asynchronous execution, and if you do other work while it's executing then the result will be ready when you need it.
If you need the result of an asynchronous operation, then you have to block, no matter what library you use. The idea is that you get to choose when to block, and, hopefully when you do that, you block for a negligible time because all the work has already been done.
Note also that std::async can be launched with policies std::launch::async or std::launch::deferred. If you don't specify it, the implementation is allowed to choose, and it could well choose to use deferred evaluation, which would result in all the work being done when you attempt to get the result from the future, resulting in a longer block. So if you want to make sure that the work is done asynchronously, use std::launch::async.
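For instance (a small sketch, not from the original post), the difference between the two policies can be seen like this:

#include <future>
#include <iostream>

int work() { return 7; }

int main()
{
    // deferred: work() does not run until get() is called, and then it runs
    // synchronously on the calling thread.
    auto lazy  = std::async(std::launch::deferred, work);

    // async: work() starts immediately on a new thread.
    auto eager = std::async(std::launch::async, work);

    std::cout << lazy.get() + eager.get() << '\n';   // prints 14
}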
I think your problem is with std::future saying that it blocks on get. It only blocks if the result isn't already ready.
If you can arrange for the result to be already ready, this isn't a problem.
There are many ways to know that the result is already ready. You can poll the future and ask it (relatively simple), you could use locks or atomic data to relay the fact that it is ready, you could build up a framework to deliver "finished" future items into a queue that consumers can interact with, you could use signals of some kind (which is just blocking on multiple things at once, or polling).
Or, you could finish all the work you can do locally, and then block on the remote work.
As an example, imagine a parallel recursive merge sort. It splits the array into two chunks, then does an async sort on one chunk while sorting the other chunk. Once it is done sorting its half, the originating thread cannot progress until the second task is finished. So it does a .get() and blocks. Once both halves have been sorted, it can then do a merge (in theory, the merge can be done at least partially in parallel as well).
This task behaves like a linear task to those interacting with it on the outside -- when it is done, the array is sorted.
We can then wrap this in a std::async task, and have a future sorted array. If we want, we could add in a signally procedure to let us know that the future is finished, but that only makes sense if we have a thread waiting on the signals.
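A compact sketch of that idea (illustrative only, and deliberately spawning a thread per recursion level, which a real implementation would cap):

#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

// Parallel recursive merge sort as described above: sort one half
// asynchronously while this thread sorts the other, then merge.
void parallel_merge_sort(std::vector<int>& v, std::size_t lo, std::size_t hi)
{
    if (hi - lo < 2)
        return;
    std::size_t mid = lo + (hi - lo) / 2;

    auto left = std::async(std::launch::async,
                           parallel_merge_sort, std::ref(v), lo, mid);
    parallel_merge_sort(v, mid, hi);

    left.get();  // blocks only if the other half is not sorted yet
    std::inplace_merge(v.begin() + lo, v.begin() + mid, v.begin() + hi);
}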
In the reference: http://en.cppreference.com/w/cpp/thread/async
If the async flag is set (i.e. policy & std::launch::async != 0), then async executes the function f on a separate thread of execution as if spawned by std::thread(f, args...), except that if the function f returns a value or throws an exception, it is stored in the shared state accessible through the std::future that async returns to the caller.
It is a nice property that exceptions thrown by f are stored and propagated to the caller.
http://www.cplusplus.com/reference/future/async/
There are three types of policy:
launch::async
launch::deferred
launch::async|launch::deferred
By default, launch::async|launch::deferred is passed to std::async.
I evaluate JavaScript in my Qt application using QScriptEngine::evaluate(QString code). Let's say I evaluate a buggy piece of JavaScript which loops forever (or takes too long to wait for the result). How can I abort such an execution?
I want to control an evaluation via two buttons Run and Abort in a GUI. (But only one execution is allowed at a time.)
I thought of running the script via QtConcurrent::run, keeping the QFuture and calling cancel() when Abort is pressed. But the documentation says that I can't abort such executions. It seems like QFuture only cancels after the current item in the job has been processed, i.e. when reducing or filtering a collection. But for QtConcurrent::run this means that I can't use the future to abort its execution.
The other possibility I came up with was using a QThread and calling quit(), but there I have a similar problem: it only cancels the thread if / as soon as it is waiting in an event loop. But since my execution is a single function call, this is no option either.
QThread also has terminate(), but the documentation makes me worry a bit. Although my code itself doesn't involve mutexes, maybe QScriptEngine::evaluate does behind the scenes?
Warning: This function is dangerous and its use is discouraged. The thread can be terminated at any point in its code path. Threads can be terminated while modifying data. There is no chance for the thread to clean up after itself, unlock any held mutexes, etc. In short, use this function only if absolutely necessary.
Is there another option I am missing, maybe some asynchronous evaluation feature?
http://doc.qt.io/qt-4.8/qscriptengine.html#details
It has a few sections that address your concerns:
http://doc.qt.io/qt-4.8/qscriptengine.html#long-running-scripts
http://doc.qt.io/qt-4.8/qscriptengine.html#script-exceptions
http://doc.qt.io/qt-4.8/qscriptengine.html#abortEvaluation
http://doc.qt.io/qt-4.8/qscriptengine.html#setProcessEventsInterval
Hope that helps.
While the concurrent task itself can't be aborted "from outside", the QScriptEngine can be told (of course from another thread, like your GUI thread) to abort the execution:
QScriptEngine::abortEvaluation(const QScriptValue & result = QScriptValue())
The optional parameter is used as the "pseudo result" which is passed to the caller of evaluate().
You should either set a flag somewhere or use a special result value in abortEvaluation() to make it possible for the caller routine to detect that the execution was aborted.
Note: Using isEvaluating() you can see if an evaluation is currently running.
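Putting the two answers together, a minimal sketch could look like the following (Qt 4.x module layout assumed; the "aborted" sentinel value is an arbitrary choice, not part of the Qt API):

#include <QtScript/QScriptEngine>
#include <QtScript/QScriptValue>
#include <QtCore/QtConcurrentRun>
#include <QtCore/QFuture>
#include <QtCore/QString>

// Worker-thread body: evaluate the script and detect the abort sentinel.
void evaluateScript(QScriptEngine *engine, const QString &code)
{
    QScriptValue result = engine->evaluate(code);
    if (result.toString() == QLatin1String("aborted")) {
        // abortEvaluation() was called from the GUI thread
    }
}

// Run button: start the evaluation in a worker thread.
QFuture<void> startScript(QScriptEngine &engine, const QString &code)
{
    return QtConcurrent::run(evaluateScript, &engine, code);
}

// Abort button (GUI thread).
void abortScript(QScriptEngine &engine)
{
    if (engine.isEvaluating())
        engine.abortEvaluation(QScriptValue(QString("aborted")));
}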
I'm using a third party library which has a blocking function, that is, it won't return until it's done; I can set a timeout for that call.
Problem is, that function puts the library in a certain state. As soon as it enters that state, I need to do something from my own code. My first solution was to do that in a separate thread:
void LibraryWrapper::DoTheMagic(){
    //...
    boost::thread EnteredFooStateNotifier( &LibraryWrapper::EnterFooState, this );
    ::LibraryBlockingFunction( timeout_ );
    //...
}

void LibraryWrapper::EnterFooState(){
    ::Sleep( 50 ); //Ensure ::LibraryBlockingFunction is called first
    //Do the stuff
}
Quite nasty, isn't it? I had to put the Sleep call because ::LibraryBlockingFunction must definitely be called before the stuff I do below, or everything will fail. But waiting 50 milliseconds is quite a poor guarantee, and I can't wait more because this particular task needs to be done as fast as possible.
Isn't there a better way to do this? Consider that I don't have access to the Library's code. Boost solutions are welcome.
UPDATE: Like one of the answers says, the library API is ill-defined. I sent an e-mail to the developers explaining the problem and suggesting a solution (i.e. making the call non-blocking and sending an event to a registered callback notifying the state change). In the meantime, I set a timeout high enough to ensure stuff X is done, and set a delay high enough before doing the post-call work to ensure the library function was called. It's not deterministic, but works most of the time.
Would using boost future clarify this code? To use an example from the boost future documentation:
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>

int calculate_the_answer_to_life_the_universe_and_everything()
{
    return 42;
}

int main()
{
    boost::packaged_task<int> pt(calculate_the_answer_to_life_the_universe_and_everything);
    boost::unique_future<int> fi = pt.get_future();
    boost::thread task(boost::move(pt));
    // In your example, now would be the time to do the post-call work.
    fi.wait(); // wait for it to finish
    task.join();
}
Although you will still presumably need a bit of a delay in order to ensure that your function call has happened (this bit of your problem seems rather ill-defined - is there any way you can establish deterministically when it is safe to execute the post-call state change?).
The problem as I understand it is that you need to do this:
Enter a blocking call
After you have entered the blocking call but before it completes, you need to do something else
You need to have finished #2 before the blocking call returns
From a purely C++ standpoint, there's no way you can accomplish this in a deterministic way, that is, without understanding the details of the library you're using.
But I noticed your timeout value. That might provide a loophole.
What if you:
Enter the blocking call with a timeout of zero, so that it returns immediately
Do your other stuff, either in the same thread or synchronized with the main thread, perhaps using a barrier.
After #2 is verified to be done, enter the blocking call again, with the normal non-zero timeout.
This will only work if the library's state will change if you enter the blocking call with a zero timeout.
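In terms of the question's own code, the idea sketched above might look like this (it only helps if the zero-timeout call really does put the library into the Foo state, which has to be verified against the library's documentation):

void LibraryWrapper::DoTheMagic()
{
    // 1. Enter the blocking call with a zero timeout so it returns at once,
    //    hopefully still switching the library into the Foo state.
    ::LibraryBlockingFunction( 0 );

    // 2. Do the state-dependent work on this same thread; no Sleep guesswork,
    //    because step 1 has already returned.
    EnterFooState();

    // 3. Re-enter the blocking call with the real timeout.
    ::LibraryBlockingFunction( timeout_ );
}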