Can I interrupt function if it is executed for too long? - c++

I have a third-party function which I use in my program. I can't replace it; it's in a dynamic library, so I also can't edit it. The problem is that it sometimes runs for too long.
So, can I do anything to stop this function from running if it runs for more than 10 seconds, for example? (It's OK to close the program in this scenario.)
PS. I'm on Linux, and this program won't have to be ported anywhere else.
What I want is something like this:
#include <stdio.h>
#include <stdlib.h>

void func1 (void) // I cannot change the contents of this.
{
    int i; // uninitialized, so effectively random
    while (i % 2 == 0);
}

int main ()
{
    setTryTime(10000);
    timeTry {
        func1();
    } catchTime {
        puts("function executed too long, aborting..");
    }
    return 0;
}

Sure. You can do it with signals.
Specifically, an "alarm" signal:
http://linux.die.net/man/2/alarm
http://beej.us/guide/bgipc/output/html/multipage/signals.html
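For illustration, a minimal sketch of the alarm() approach (the handler and the 10-second limit are assumptions, and func1() here is a stand-in for the real third-party call). Since it's OK to close the program, the handler simply prints a message and exits:

#include <csignal>
#include <unistd.h>

void func1(void)                       // stand-in for the third-party function
{
    volatile unsigned spin = 0;
    for (;;) ++spin;                   // may never return on its own
}

void on_alarm(int)
{
    // Only async-signal-safe calls belong in a handler.
    const char msg[] = "function executed too long, aborting..\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main()
{
    std::signal(SIGALRM, on_alarm);
    alarm(10);                         // deliver SIGALRM after 10 seconds
    func1();
    alarm(0);                          // cancel the alarm if func1() returned in time
    return 0;
}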

If you really have to do this, you probably want to spawn a process that does nothing but invoke the function and return its result to the caller. If it runs too long, you can kill that process.
By putting it into its own process, you stand a decent (not great, but decent) chance of cleaning up at least most of what it was doing, so that when it dies unexpectedly it probably won't make a complete mess of things that leads to later problems.

The potential problem with forcefully cancelling a running function is that it may "own" resources that it intended to return later. The kind of resources that can be problems include:
heap memory allocations (free store)
shared memory segments
threads
sockets
file handles
locks
Some of these resources are managed on a per-process basis, so letting the function run in a different process (perhaps using fork) makes it easier to kill cleanly. Other resources can outlive a process, and really must be cleaned up explicitly. Depending on your operating system, it's also possible that the function may be part-way through interacting with some hardware driver or device, and killing it unexpectedly may leave that driver or device in a bizarre state such that it won't work until after a restart.
If you happen to know that the function doesn't use any of these kinds of resources, then you can kill it confidently. But that's hard to guarantee: in a large system with many such decisions, none of which the compiler can check, the evolution of code in functions like func1() is likely to introduce dependencies on such resources.
If you must do this, I'd suggest running it in a different process or thread, and terminating it with kill() for processes, with pthread_kill() if func1() has some support for terminating when a flag is set asynchronously, or with the non-portable pthread_cancel() if there's really no other choice.
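As a rough sketch of the process-based variant (the 10-second budget is illustrative, and func1() again stands in for the real call):

#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>
#include <cstdio>

void func1(void)                          // stand-in for the third-party function
{
    volatile unsigned spin = 0;
    for (;;) ++spin;
}

int main()
{
    pid_t pid = fork();
    if (pid == 0) {
        func1();                          // run the risky call in the child
        _exit(0);
    } else if (pid < 0) {
        std::perror("fork");
        return 1;
    }

    for (int i = 0; i < 10; ++i) {        // parent: give the child 10 seconds
        int status;
        if (waitpid(pid, &status, WNOHANG) == pid)
            return 0;                     // child finished in time
        sleep(1);
    }
    std::puts("function executed too long, aborting..");
    kill(pid, SIGKILL);
    waitpid(pid, nullptr, 0);             // reap the killed child
    return 0;
}

Passing results back from the child would need some form of IPC (a pipe, for instance), which is omitted here.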

Related

Executing function for some amount of time

I am sorry if this was asked before, but I didn't find anything related to this. This is just for my understanding; it's not homework.
I want to execute a function only for some amount of time. How do I do that? For example,
void func();   // defined below

int main()
{
    // ...
    func();
    // ...
}

void func()
{
    // ...
}
Here, my main function calls another function. I want that function to execute only for a minute. In that function, I will be getting some data from the user, and if the user doesn't enter the data, I don't want to be stuck in that function forever. So, irrespective of whether the function has completed by that time or not, I want to come back to the main function and execute the next operation.
Is there any way to do it? I am on Windows 7 and I am using VS 2013.
Under Windows, the options are limited.
The simplest option would be for func() to explicitly and periodically check how long it has been executing (e.g. store its start time, then periodically check the amount of time elapsed since that start time) and return if it has gone on longer than you wish.
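A minimal sketch of that first option using std::chrono (the one-minute limit and the placeholder work are assumptions):

#include <chrono>
#include <iostream>
#include <thread>

void func()
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    const auto limit = std::chrono::minutes(1);

    while (clock::now() - start < limit) {
        // ... do one small piece of work, or poll for user input ...
        std::this_thread::sleep_for(std::chrono::milliseconds(50));  // placeholder
    }
    std::cout << "time limit reached, returning to main()\n";
}

int main()
{
    func();
    // the next operation continues here
}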
It is possible (C++11 or later) to execute the function within another thread, and for main() to signal that thread when the required time period has elapsed. That is best done cooperatively. For example, main() sets a flag, the thread function checks that flag and exits when required to. Such a flag is usually best protected by a critical section or mutex.
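A sketch of that cooperative approach, using std::atomic<bool> in place of a mutex-protected flag for brevity (the worker's loop body is a placeholder):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> stop_requested{false};

void func()
{
    while (!stop_requested.load()) {
        // ... do one small piece of work / poll for input ...
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main()
{
    std::thread worker(func);

    std::this_thread::sleep_for(std::chrono::minutes(1));  // the allowed time
    stop_requested.store(true);                            // ask the worker to finish

    worker.join();   // returns quickly because the worker cooperates
    std::cout << "continuing with the next operation\n";
}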
An extremely unsafe way under Windows is for main() to forcibly terminate the thread. That is unsafe, as it can leave the program (and, in the worst cases, the operating system itself) in an unreliable state (e.g. if the terminated thread is in the middle of allocating memory, executing certain kernel functions, or manipulating the global state of a shared DLL).
If you want better/safer options, you will need a real-time operating system with strict memory and timing partitioning. To date, I have yet to encounter any substantiated documentation of any variant of Windows or unix (not even real-time variants) with those characteristics. There are a couple of unix-like systems (e.g. LynxOS) with variants that have such properties.
I think a part of your requirement can be met using multithreading and a loop with a stopwatch.
Create a new thread.
Start a stopwatch.
Start a loop with one minute as the condition for the loop.
During each iteration, check whether the user has entered the input and process it.
When one minute is over, the loop quits.
I'm not sure about the feasibility of this idea; I'm just sharing it. I don't know much about C++, but in Node.js your requirement can be achieved using 'events'. Maybe such things exist in C++ too.

cancelling a search using threads

I am new to multi-threading. I am using C++ on Unix.
In the code below, runSearch() takes a long time and I want to be able to kill the search as soon as "cancel == true". The function cancelSearch is called by another thread.
What is the best way to solve this problem?
Thank you.
------------------This is the existing code-------------------------
struct SearchTask : public Runnable
{
    bool cancel = false;

    void cancelSearch()
    {
        cancel = true;
    }

    void run()
    {
        cancel = false;
        runSearch();
        if (cancel == true)
        {
            return;
        }
        //...more steps.
    }
};
EDIT: To make it clearer, say runSearch() takes 10 minutes to run. If cancel == true after 1 minute, I want to exit out of run() immediately rather than waiting nine more minutes for runSearch() to complete.
You'll need to keep checking the flag throughout the search operation. Something like this:
void run()
{
    cancel = false;
    while (!cancel)
    {
        runSearch();
        //do your thread stuff...
    }
}
You have mentioned that you cannot modify runSearch(). With pthreads there is pthread_cancel() (controlled by pthread_setcancelstate()), however I don't believe this is safe, especially with C++ code that expects RAII semantics.
Safe thread cancellation must be cooperative. The code that gets canceled must be aware of the cancellation and be able to clean up after itself. If the code is not designed to do this and is simply terminated then your program will probably exhibit undefined behavior.
For this reason C++'s std::thread does not offer any method of thread cancellation and instead the code must be written with explicit cancellation checks as other answers have shown.
Create a generic method that accepts an action/delegate. Have each step be something REALLY small and specific, and pass the generic method a delegate/action for what you consider a "step". In the generic method, check whether cancel is true and return if it is. Because the steps are small, if the search is cancelled it shouldn't take long for the thread to die.
That is the best advice I can give without any code of what the steps do.
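As a sketch of that idea (the step contents and names are made up; runSearch() would be broken into these steps):

#include <atomic>
#include <functional>
#include <iostream>
#include <vector>

std::atomic<bool> cancel{false};

void run_steps(const std::vector<std::function<void()>>& steps)
{
    for (const auto& step : steps) {
        if (cancel.load())
            return;              // abandon the search between steps
        step();
    }
}

int main()
{
    std::vector<std::function<void()>> steps = {
        [] { std::cout << "step 1: build index\n"; },
        [] { std::cout << "step 2: scan candidates\n"; },
        [] { std::cout << "step 3: rank results\n"; },
    };
    run_steps(steps);            // another thread could set `cancel` at any time
}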
Also note :
void run()
{
    cancel = false;
    runSearch();
    while (!cancel)
    {
        //do your thread stuff...
    }
}
Won't work, because if what you are doing is not an iteration, it will run the entire runSearch() call before checking !cancel. Like I said, if you can add more details on what the steps do, it would be easier to give you advice. When working with threads that you want to halt or kill, your best bet is to split your code into very small steps.
Basically you have to poll the cancel flag everywhere. There are other tricks you could use, but they are more platform-specific, like thread cancellation, or are not general enough like interrupts.
And cancel needs to be an atomic variable (like std::atomic<bool>, or just protect it with a mutex); otherwise the compiler might just cache the value in a register and never see the update coming from the other thread.
The other responses are right: just because you've called a blocking function in a thread doesn't mean it magically turns into a non-blocking call. The thread may not interrupt the rest of the program, but it still has to wait for the runSearch call to complete.
OK, so there are ways round this, but they're not necessarily safe to use.
You can kill a thread explicitly. On Windows you can use TerminateThread(), which will kill the thread's execution. Sounds good, right? Well, except that it is very dangerous to use: unless you know exactly what resources and calls are going on in the killed thread, you may find yourself with an app that refuses to work correctly next time round. If runSearch opens a DB connection, for example, the TerminateThread call will not close it. The same applies to memory, loaded DLLs, and everything they use. It's designed for killing totally unresponsive threads so you can close a program and restart it.
Given the above, and the very strong recommendation that you not use it, the next step is to run runSearch externally: if you run your blocking call in a separate process, then the process can be killed with a lot more certainty that you won't bugger everything else up. The process dies and clears up its memory, its heap, any loaded DLLs, everything. So inside your thread, call CreateProcess and wait on the handle. You'll need some form of IPC (probably best not to use shared memory, as it can be a nuisance to reset when you kill the process) to transfer the results back to your main app. If you need to end this process, have it call ExitProcess (or exit on Linux).
Note that these exit calls need to be made inside the process, so you'll need to run a thread inside the process for your blocking call. You can terminate a process externally, but again, it's dangerous: not nearly as dangerous as killing a thread, but you can still trip up occasionally. (Use TerminateProcess or kill for this.)
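A rough Windows sketch of that process-based route ("search_worker.exe", the 10-minute timeout, and the missing IPC are all placeholders):

#include <windows.h>
#include <cstdio>

int main()
{
    char cmdline[] = "search_worker.exe";   // must be a writable buffer for CreateProcessA
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    if (!CreateProcessA(nullptr, cmdline, nullptr, nullptr, FALSE,
                        0, nullptr, nullptr, &si, &pi)) {
        std::printf("CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    if (WaitForSingleObject(pi.hProcess, 10 * 60 * 1000) == WAIT_TIMEOUT) {
        // Last resort: kill the whole process. The OS reclaims its memory,
        // handles and DLLs, which TerminateThread would not do for a thread.
        TerminateProcess(pi.hProcess, 1);
    }

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}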

Shutdown Hook c++

Is there some way to run code on termination, no matter what kind of termination (abnormal, normal, uncaught exception, etc.)?
I know it's actually possible in Java, but is it even possible in C++? I'm assuming a Windows environment.
No -- if somebody invokes TerminateProcess, your process will be destroyed without further ado, and (in particular) without any chance to run any more code in the process of shutting down.
For a normally closing application I would suggest
atexit()
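For instance (the handler body is made up):

#include <cstdio>
#include <cstdlib>

void on_exit_handler()
{
    std::puts("cleaning up before normal exit");
}

int main()
{
    std::atexit(on_exit_handler);
    // ... normal program work ...
    return 0;   // the handler runs on return from main() or on exit()
}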
One good way to approach the problem is using the C++ RAII idiom, which here means that cleanup operations can be placed in the destructor of an object, i.e.
class ShutdownHook {
public:
    ~ShutdownHook() {
        // exit handler code
    }
};

int main() {
    ShutdownHook h;
    //...
}   // h's destructor runs here on a normal return
See the Object Lifetime Manager in the ACE library. In the linked document, they also discuss the atexit function.
Not for every kind of termination; there are signals that are designed not to be handled, like SIGKILL on Linux.
These signals are designed to terminate a program that has consumed all memory, or CPU, or some other resource, and has left the computer in a state that makes it difficult to run a handler function.

How to insulate a job/thread from crashes

I'm working on a library where I'm farming various tasks out to some third-party libraries that do some relatively sketchy or dangerous platform-specific work. (Specifically, I'm writing a mathematical function parser that calls JIT compilers, like LLVM or libjit, to build machine code.) In practice, these third-party libraries have a tendency to be crashy (part of this is my fault, of course, but I still want some insurance).
I'd like, then, to be able to very gracefully deal with a job dying horribly -- SIGSEGV, SIGILL, etc. -- without bringing down the rest of my code (or the code of the users calling my library functions). To be clear, I don't care if that particular job can continue (I'm not going to try to repair a crash condition), nor do I really care about the state of the objects after such a crash (I'll discard them immediately if there's a crash). I just want to be able to detect that a crash has occurred, stop the crash from taking out the entire process, stop calling whatever's crashing, and resume execution.
(For a little more context, the code at the moment is a for loop, testing each of the available JIT-compilers. Some of these compilers might crash. If they do, I just want to execute continue; and get on with testing another compiler.)
Currently, I've got a signal()-based implementation that fails pretty horribly; of course, it's undefined behavior to longjmp() out of a signal handler, and signal handlers are pretty much expected to end with exit() or terminate(). Just throwing the code in another thread doesn't help by itself, at least the way I've tested it so far. I also can't hack out a way to make this work using C++ exceptions.
So, what's the best way to insulate a particular set of instructions / thread / job from crashes?
Spawn a new process.
What output do you collect when a job succeeds?
I ask because if the output is low bandwidth I would be tempted to run each job in its own process.
Each of these crashy jobs you fire up has a high chance of corrupting memory used elsewhere in your process.
Processes offer the best protection.
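A sketch of that per-process approach for the JIT-testing loop (try_jit_compiler() and the crash it simulates are made up; a real version would also pass results back over a pipe):

#include <sys/wait.h>
#include <unistd.h>
#include <csignal>
#include <cstdio>

bool try_jit_compiler(int which)              // stand-in for one crash-prone backend
{
    if (which == 1)
        std::raise(SIGSEGV);                  // simulate a crashing compiler
    return true;
}

int main()
{
    for (int i = 0; i < 3; ++i) {
        pid_t pid = fork();
        if (pid == 0)
            _exit(try_jit_compiler(i) ? 0 : 1);   // child: run the risky test

        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status)) {
            std::printf("compiler %d crashed with signal %d, skipping\n",
                        i, WTERMSIG(status));
            continue;                              // the parent survives
        }
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            std::printf("compiler %d works\n", i);
    }
    return 0;
}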
Processes offer the best protection, but it's possible you can't do that.
If your threads' entry points are functions you wrote (for example, ThreadProc in the Windows world), then you can wrap them in try{...}catch(...) blocks. If you want to communicate that an exception has occurred, then you can communicate specific error codes back to the main thread or use some other mechanism. If you want to log not only that an exception has occurred but what that exception was, then you'll need to catch specific exception types and extract diagnostic information from them to communicate back to the main thread. For example:
int my_temperamental_thread()
{
    try
    {
        // ... magic happens ...
        return 0;
    }
    catch( const std::exception& ex )
    {
        // ... or maybe it doesn't ...
        std::string reason = ex.what();
        tell_main_thread_what_went_wrong(reason);
        return 1;
    }
    catch( ... )
    {
        // ... definitely not magical happenings here ...
        tell_main_thread_what_went_wrong("uh, something bad and undefined");
        return 2;
    }
}
Be aware that if you go this way you run the risk of hosing the host process when the exceptions do occur. You say you're not trying to correct the problem, but how do you know the malignant thread didn't eat your stack for example? Catch-and-ignore is a great way to create horribly confounding bugs.
On Windows, you might be able to use VirtualProtect(YourMemory, PAGE_READONLY) when calling the untrusted code. Any attempt to modify this memory would cause a structured exception, which you can safely catch and then continue execution. However, memory allocated by that library will of course leak, as will other resources. The Linux equivalent is mprotect(YourMemory, PROT_READ), which raises SIGSEGV instead.

Side effects of exit() without exiting?

If my application runs out of memory, I would like to re-run it with changed parameters. I have malloc / new in various parts of the application, the sizes of which are not known in advance. I see two options:
Track all memory allocations and write a restarting procedure which deallocates all before re-running with changed parameters. (Of course, I free memory at the appropriate places if no errors occur)
Restarting the application (e.g., with WinExec() on Windows) and exiting
I am not thrilled by either solution. Have I missed an alternative?
Thanks
You could embed all the application functionality in a class, then let it throw an exception when it runs out of memory. This exception would be caught by your application, and then you could simply destroy the object, construct a new one, and try again. All in one application in one run, with no need for restarts. Of course this might not be so easy, depending on what your application does...
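Sketched out, that might look like this (Params and Application are hypothetical names standing in for your real state):

#include <cstddef>
#include <iostream>
#include <memory>
#include <new>

struct Params { std::size_t buffer_size; };

class Application {
public:
    explicit Application(const Params& p)
        : buffer_(new char[p.buffer_size]) {}   // new throws std::bad_alloc on failure
    void run() { /* ... the actual work ... */ }
private:
    std::unique_ptr<char[]> buffer_;
};

int main()
{
    Params params{256 * 1024 * 1024};           // assumed initial working size
    for (int attempt = 0; attempt < 5; ++attempt) {
        try {
            Application app(params);            // all allocations live inside app
            app.run();
            return 0;                           // success
        } catch (const std::bad_alloc&) {
            std::cerr << "out of memory, retrying with smaller parameters\n";
            params.buffer_size /= 2;            // destroy, shrink, try again
        }
    }
    return 1;
}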
There is another option, one I have used in the past, however it requires having planned for it from the beginning, and it's not for the library-dependent programmer:
Create your own heap. It's a lot simpler to destroy a heap than to cleanup after yourself.
Doing so requires that your application is heap-aware. That means that all memory allocations have to go to that heap and not the default one. In C++ you can simply replace the global new/delete operators, which takes care of everything your code allocates, but you have to be VERY aware of how your libraries, even the standard library, use memory. It's not as simple as "never call a library method that allocates memory"; you have to consider each library method on a case-by-case basis.
It sounds like you've already built your app and are looking for a shortcut to memory wiping. If that is the case, this will not help as you could never tack this kind of thing onto an already built application.
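For completeness, a very rough sketch of what such an application-owned heap could look like: a bump allocator that is wiped in one step. The names and the 64 MiB budget are made up, and real code would need thread safety and far more care:

#include <cstddef>
#include <new>

namespace app_heap {
    constexpr std::size_t kSize = 64 * 1024 * 1024;
    alignas(std::max_align_t) char storage[kSize];
    std::size_t used = 0;

    void* allocate(std::size_t n)
    {
        n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
        if (used + n > kSize)
            throw std::bad_alloc();
        void* p = storage + used;
        used += n;
        return p;
    }

    void reset() { used = 0; }   // "destroy" the whole heap in one step
}

// Route every new/delete in the program through the arena.
void* operator new(std::size_t n) { return app_heap::allocate(n); }
void operator delete(void*) noexcept { /* individual frees are no-ops; reset() reclaims */ }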
The wrapper program (as proposed before) does not need to be a separate executable. You could just fork, run your program, and then test the return code of the child. This has the additional benefit that the operating system automatically reclaims the child's memory when it dies (at least I think so).
Anyway, I imagined something like this (this is C, you might have to change the includes for C++):
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define OUT_OF_MEMORY 99999 /* or whatever */

int main(void)
{
    int pid, status;

fork_entry:
    pid = fork();
    if (pid == 0) {
        /* child - call the main function of your program here */
    } else if (pid > 0) {
        /* parent (supervisor) */
        wait(&status); /* waiting for the child to terminate */
        /* see if child exited normally
           (i.e. by calling exit(), _exit() or by returning from main()) */
        if (WIFEXITED(status)) {
            /* if so, we can get the status code */
            if (WEXITSTATUS(status) == OUT_OF_MEMORY) {
                /* change parameters */
                goto fork_entry; /* forking again */
            }
        }
    } else {
        /* fork() error */
        return 1;
    }
    return 0;
}
This might not be the most elegant solution/workaround/hack, but it's easy to do.
A way to accomplish this:
Define an exit status, perhaps like this:
static const int OUT_OF_MEMORY=9999;
Set up a new-handler (via std::set_new_handler) and have it do this:
exit(OUT_OF_MEMORY);
Then just wrap your program with another program that detects this exit status. When it sees it, it can rerun the program.
Granted this is more of a workaround than a solution...
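A minimal sketch of those two steps together (the handler name is made up):

#include <cstdlib>
#include <new>

static const int OUT_OF_MEMORY = 9999;

void out_of_memory_handler()
{
    std::exit(OUT_OF_MEMORY);   // reported to the wrapper via the exit status
}

int main()
{
    std::set_new_handler(out_of_memory_handler);
    // ... the rest of the program; any failed new now exits with 9999 ...
    return 0;
}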
The wrapper program I mentioned above could be something like this:
#include <cstdlib>

static int special_code = 9999;

int main()
{
    const char* command = "whatever";
    int status = system(command);
    while ( status == special_code )
    {
        command = ...;
        status = system(command);
    }
    return 0;
}
That's the basic idea. I would use std::string instead of char* in production. I'd probably also have another condition for breaking out of the while loop, some maximum number of tries perhaps.
Whatever the case, I think the fork/exec route mentioned in another answer is pretty solid, and I'm pretty sure a similar solution could be created for Windows using spawn and its brethren.
simplicity rules: just restart your app with different parameters.
It is very hard either to track down all allocs/deallocs and clean up the memory (forget some minor blocks inside bigger chunks [fragmentation] and you still have problems rerunning the class), or to introduce your own heap management (very clever people have invested years to bring nedmalloc etc. to life; do not fool yourself into the illusion that this is an easy task).
so:
catch "out of memory" somehow (signals, or std::bad_alloc, or whatever)
create a new process of your app:
windows: CreateProcess() (you can just exit() your program after this, which cleans up all allocated resources for you)
unix: exec() (replaces the current process completely, so it "cleans up all the memory" for you)
done.
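A minimal POSIX sketch of the exec() step (re-running argv[0] and the "--smaller" flag are placeholders; whether argv[0] is usable as a path depends on how the program was started):

#include <unistd.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    bool reduced = (argc > 1);              // placeholder for "parameters already reduced"

    if (!reduced) {                         // imagine an out-of-memory condition was caught here
        char* const new_argv[] = { argv[0], const_cast<char*>("--smaller"), nullptr };
        execv(argv[0], new_argv);           // replaces this process image; all memory is reclaimed
        std::perror("execv");               // only reached if exec itself failed
        return 1;
    }

    std::puts("running with reduced parameters");
    return 0;
}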
Be warned that on Linux, by default, your program can request more memory than the system has available. (This is done for a number of reasons, e.g. avoiding memory duplication when fork()ing a program into two with identical data, when most of the data will remain untouched.) Memory pages for this data won't be reserved by the system until you try to write in every page you've allocated.
Since there's no good way to report this (since any memory write can cause your system to run out of memory), your process will be terminated by the out-of-memory (OOM) killer, and you won't have the information or opportunity for your process to restart itself with different parameters.
You can change this by using the setrlimit system call to limit RLIMIT_AS, which caps the total address space your process can request. Only after you have done this will malloc return NULL or new throw a std::bad_alloc exception when you reach the limit that you have set.
Be aware that on a heavily loaded system, other processes can still contribute to a systemwide out of memory condition that could cause your program to be killed without malloc or new raising an error, but if you manage the system well, this can be avoided.
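A small Linux sketch of the setrlimit approach (the 512 MiB cap and the deliberately oversized allocation are illustrative):

#include <sys/resource.h>
#include <cstdio>
#include <new>

int main()
{
    rlimit lim{};
    lim.rlim_cur = lim.rlim_max = 512ull * 1024 * 1024;   // cap address space at 512 MiB
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        std::perror("setrlimit");
        return 1;
    }

    try {
        char* p = new char[1024ull * 1024 * 1024];   // 1 GiB: exceeds the cap
        delete[] p;
    } catch (const std::bad_alloc&) {
        std::puts("allocation failed cleanly: restart with smaller parameters here");
    }
    return 0;
}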