I am using a memory pool to create a lot of objects. All my objects derive from a Base class whose new/delete are overridden to use my memory pool; essentially they call pool.allocate(size).
What I would like to do is, when the pool runs out of memory (there is still enough memory in the system to function), set everything back to the beginning. I am thinking of setting a label right after main and doing a goto to that label when allocation fails, resetting the pool, and starting again.
All non-stack allocations are handled by the memory pool. Is this a sensible way to achieve this? Are there going to be any problems down the line?
EDIT:
This is running on an embedded platform: no OS, no exceptions. I am trying to achieve a controlled restart instead of a crash from running out of memory. The pool is big enough to hold the calculations; I want a controlled restart in case some function goes awry.
There is no state to be saved from run to run. I am trying to replicate, in software, the process of hitting the reset button, so I can reset back to the start of main and notify the app about the restart.
I once did a similar thing using setjmp()/longjmp(). It's not perfect or devoid of problems, but for the most part it works. Like:

#include <csetjmp>

jmp_buf g_env;

int main()
{
    int val = setjmp(g_env);
    if (val) {
        // restarting, do what you need to do
    }
    // initialize your program and go to work
}

/// where you want to restart:
longjmp(g_env, 101); /// 101 or some other "error" code

This is a goto really, so remember to do any cleanup yourself.
The first thing that comes to mind is throwing an exception. This is actually how the default implementation of operator new behaves: it throws std::bad_alloc.
goto will work only if your whole program is contained in a single function. I see a possible implementation of main as follows:

int main(int argc, const char* argv[]) {
    while (true) {
        try {
            do_things(argc, argv);
            break;              // finished normally
        } catch (const MyBadAlloc& ex) {
            do_cleanup();       // reset the pool, then start over
        }
    }
}
I'm trying to check when the console is closed through the close button on Windows. I read about SetConsoleCtrlHandler and I thought I'd use that, but there's some cleanup I want to do in my main function. I'll make a small example describing what I want to do for my larger program.
BOOL WINAPI CtrlHandler( DWORD fdwCtrlType )
{
    switch( fdwCtrlType )
    {
    // Cleanup exit
    case CTRL_CLOSE_EVENT: {
        bool* programIsOn = &???; // How do I pass the address of that variable into this function?
        *programIsOn = false;
        return TRUE;
    }
    default:
        return FALSE;
    }
}
int main(){
    MyObject* obj = new MyObject();
    bool programIsOn = true;

    // How do I pass the address of programIsOn here?
    if(!SetConsoleCtrlHandler( (PHANDLER_ROUTINE) CtrlHandler, TRUE )){
        cout << "Could not set CtrlHandler. Exiting." << endl;
        return 0;
    }

    while(programIsOn){
        //...
    }

    // CLEANUP HERE
    delete obj;
    return 0;
}
I want to perform cleanup when my program closes via the console close event, but if I just close the console, the main function never reaches its cleanup code; the process is forcibly stopped. I thought of passing programIsOn's address to the CtrlHandler callback, but I have no idea how to do this without using a global variable.
TL;DR: Proper handling of this control signal is complicated. Don't bother with any 'clean-up' unless it's absolutely necessary.
The system creates a new thread (see the Remarks) in your application, which is then used to execute the handler function you registered. That immediately causes a few issues and forces you into a particular design direction.
Namely, your program suddenly became multi-threaded, with all the complications that brings. Just setting a 'program should stop' (global) boolean variable to true in the handler is not going to work; this has to be done in a thread-aware manner.
Another complication this handler brings is that the moment it returns, the program is terminated, as per a call to ExitProcess. This means that the handler should wait for the program to finish, again in a thread-aware manner. Cue the next complication: the OS gives you only 10 seconds to respond to the handler before the program is terminated anyway.
The biggest issue here, I think, is that all these issues force your program to be designed in a very particular way that potentially permeates every nook and cranny of your code.
It's not necessary for your program to clean up any handles, objects, locks or memory it uses: these will all be cleaned up by Windows when your program exits.
Therefore, your clean-up code should consist solely of those operations that need to happen and otherwise wouldn't, such as writing the end of a log file, deleting temporary files, etc.
In fact, it is recommended to not perform such clean-up, as it only slows down the closing of the application and can be so hard to get right in 'unexpected termination' cases; The Old New Thing has a wonderful post about it that's also relevant to this situation.
There are two general choices here for the way to handle the remaining clean-up:
The handler routine does all the clean-up, or
the main application does all the clean-up.
Number 1 has the issue that it's very hard to determine what clean-up to perform (as this depends on where the main program is currently executing), and it's doing so 'while the engine is still running'. Number 2 means that every piece of code in the main application needs to be aware of the possibility of termination and have short-circuit code to handle it.
So if you truly must, necessarily, absolutely, perform some additional clean-up, choose method 2. Add a global variable, preferably a std::atomic<bool> if C++11 is available to you, and use that to track whether or not the program should exit. Have the handler set it to true:
// Shared global variable to track forced termination.
std::atomic<bool> programShouldExit{false};

// In the console handler:
BOOL WINAPI CtrlHandler( DWORD fdwCtrlType )
{
    ...
    programShouldExit = true;
    Sleep(10000); // Sleep for 10 seconds; when this returns, the program
                  // will be terminated if it hasn't exited already.
    return TRUE;
}

// In the main application, regular checks should be made:
if (programShouldExit.load())
{
    // Short-circuit execution, e.g. return from the function, throw an exception, etc.
}
Where you can pick your favourite short-circuiting method, for instance throwing an exception and using the RAII pattern to guard resources.
In the console handler, we sleep for as long as we think we can get away with (the exact duration doesn't really matter); the hope is that the main thread will have exited by then, causing the application to exit. If not, either the sleep ends, the handler returns and the application is closed, or the OS becomes impatient and kills the process.
Conclusion: Don't bother with clean-up. Even if there is something you prefer to have done, such as deleting temporary files, I'd recommend you don't. It's truly not worth the hassle (but that's my opinion). If you really must, then use thread-safe means to notify the main thread that it must exit. Modify all longer-running code to handle the exit status and all other code to handle the failure of the longer-running code. Exceptions and RAII can be used to make this more manageable, for instance.
And this is why I feel that it's a very poor design choice, born from legacy code. Just being able to handle an 'exit request' requires you to jump through hoops.
C++ has limited stack space but no way for functions to check whether there's enough space left for them to run. I don't know what to do about this when writing script bindings.
For example:
class Container : public Widget {
public:
    void addChild(WidgetPtr child) { ... }
    void draw(Canvas& canvas) {
        for (auto child : m_children) {
            child->draw(canvas);
        }
    }
private:
    std::vector<WidgetPtr> m_children;
};
A malicious script can do this to crash the program:
var a = new Container()
for (i = 0; i < 10000000; i++) {
var b = new Container()
a.addChild(b)
a = b
}
a.draw() // 10000000 nested calls ---> stack overflow
There's also the callback problem:
void doSomething(std::function<void()> callback) {
callback();
}
If wrapped using something like this:
ScriptValue doSomething_Wrapper(ScriptArgs args) {
doSomething([&]() { args[0].callAsFunction(); });
}
This can be crashed using:
function badCallback() { doSomething(badCallback) }
doSomething(badCallback)
...
doSomething_Wrapper
doSomething
ScriptValue::callAsFunction
...
doSomething_Wrapper
doSomething
ScriptValue::callAsFunction
...
BOOM!
What's the most idiomatic way to defend against this with least inconvenience?
What do browsers written in C++ (Firefox, Chrome) do?
What can I do not to introduce a vulnerability like this by accident?
While a "malicious" script could cause a stack overflow as you describe, it can't harm the program beyond causing crashes that way (at least on modern OSes, where the stack limit is actually checked, so other important data is safe from overwrites).
If it's critical that the program runs all the time, another process has to monitor it (and restart it if necessary, and not only because of stack overflows: there are many more potential problems).
Other than that, there isn't much one can do if the OS stack is used. Dynamically allocating a large memory block held by a single pointer on the stack and doing all the memory management manually within that block is possible, but probably impractical.
About Firefox, for example: at least parts of the program use their own memory management (though I'm not sure whether that is relevant for plugins, scripts, etc.). Additionally, there is a separate process, plugin-container.exe (at least on Windows), and killing it won't kill Firefox (only the plugin parts, such as Flash, stop working, and the user gets a message about the plugin crashing).
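One mitigation commonly applied at the binding layer (rather than relying on the OS stack limit) is an explicit recursion-depth counter, so the script gets an error before the native stack is exhausted. A hedged sketch; the limit of 1000 is an arbitrary assumption, and ScriptError is a hypothetical stand-in for whatever error type your binding layer reports to scripts:

#include <cassert>
#include <stdexcept>

// Hypothetical error type; substitute your binding layer's own.
struct ScriptError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// RAII guard: increments a depth counter on entry, decrements on exit,
// and throws before the native stack can actually overflow.
class DepthGuard {
    static thread_local int depth_;
public:
    DepthGuard() {
        if (depth_ >= 1000)    // arbitrary limit; tune for your stack size
            throw ScriptError("too much recursion");
        ++depth_;
    }
    ~DepthGuard() { --depth_; }
};
thread_local int DepthGuard::depth_ = 0;

// Every native entry point reachable from scripts takes a guard first.
void drawRecursive(int nesting) {
    DepthGuard guard;
    if (nesting > 0)
        drawRecursive(nesting - 1);
}

Each native function that scripts can re-enter constructs a DepthGuard on entry; a runaway script then gets a catchable error instead of crashing the host.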
I need a condition like:
try
{
//bug condition
}
catch()
{
//Remove file
}
That is, I create one very confidential file whose data must not be viewed by any third party, but when any bug occurs in the code, my file should be deleted; I don't know exactly where bugs might occur.
So I want to catch that bug using try and catch and delete the file. How can I catch any exception in C++?
Something that will delete the file if there is a bug.
Like:
try
{
    char TempArray[10];
    char c = TempArray[11];
}
catch (...)
{
    cout << "Array out of boundary";
    deleteFile("Confi.txt");
}
A word about security: if you create a file on disk with confidential information, anyone can shut down the computer while the process is running and the file is still open, take out the hard drive, and read its contents.
If the file is on a server, you can do basically the same thing by pausing the process before it deletes the file.
Even if you remove the file from the filesystem, its contents can most likely still be read, since removing a file does not wipe its contents.
I'd recommend not dealing with confidential information until you have the needed expertise; you won't get it by learning from SO. But if you have to do it, I think the watchdog process suggested here, plus encryption, is the way to go.
FIRST OF ALL:
You don't want to do that.
Exceptions are not meant for handling bugs, but for run-time error conditions that make it impossible for your function to satisfy the pre-conditions of other functions it has to call, or to keep the promise of fulfilling its own post-conditions (given that the caller has satisfied the pre-conditions). See, for instance, this article by Herb Sutter.
Don't ever write anything like this:
try
{
//bug condition <== NO! Exceptions are not meant to handle bugs
}
catch()
{
//Remove file
}
But rather:
assert( /* bug condition... */ );
BACK TO THE QUESTION:
Your program has undefined behavior, and most likely it will not throw any exception at all when you do:
char TempArray[10];
char c = TempArray[11];
So catching all exceptions won't help. This is a bug, i.e. a programming error, and it is arguable whether you should handle bugs in a way that transfers control to another routine; moreover, if you admit the presence of bugs in your program, couldn't you just be transferring control to a buggy handler? That might make things even worse.
Bugs should be dealt with by preventing them, making use of assertions, perhaps adopting methodologies such as test-driven development, and so on.
This said, regarding a way to catch all exceptions, you can do:
try
{
// ...
}
catch (...) // <== THIS WILL CATCH ANY EXCEPTION
{
}
But using catch (...) is discouraged as a design guideline, because it easily leads to swallowing error conditions that were meant to be handled, and forgetting about them. After all, exceptions were invented precisely to prevent programmers from forgetting to check error codes, and catch (...) makes that all too easy.
For a catch-everything purpose, it would be better to let all of your exceptions derive from std::exception, and then do:
try
{
// ...
}
catch (std::exception& e)
{
// Do something with e...
}
What you want to use is RAII. In this case, create a class whose constructor takes the name of the file and whose destructor deletes the file. Before doing anything with the file, instantiate an object of this class with the appropriate name; later, if for whatever reason the function exits (cleanly or by means of an exception), the file is going to be deleted.
Example code:
class FileGuard : public boost::noncopyable {
    std::string filename_;
public:
    FileGuard(const std::string &filename) : filename_(filename)
    {}

    ~FileGuard()
    {
        ::unlink(filename_.c_str());
    }
};
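A self-contained sketch of the same idea, without the boost dependency (a deleted copy constructor replaces boost::noncopyable, and the portable std::remove replaces the POSIX ::unlink):

#include <cassert>
#include <cstdio>    // std::fopen, std::remove
#include <string>

class FileGuard {
    std::string filename_;
public:
    explicit FileGuard(const std::string& filename) : filename_(filename) {}
    FileGuard(const FileGuard&) = delete;            // non-copyable
    FileGuard& operator=(const FileGuard&) = delete;
    ~FileGuard() { std::remove(filename_.c_str()); } // delete on scope exit
};

bool fileExists(const std::string& name) {
    if (std::FILE* f = std::fopen(name.c_str(), "r")) {
        std::fclose(f);
        return true;
    }
    return false;
}

void work() {
    FileGuard guard("Confi.txt");
    std::FILE* f = std::fopen("Confi.txt", "w");
    std::fputs("secret", f);
    std::fclose(f);
    throw 42; // however we leave this scope, ~FileGuard removes the file
}

Whether work() returns normally or throws, the destructor runs during unwinding and the file is removed (though, as noted above, removal does not wipe the data from the disk).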
Your code attempts to access data outside the defined boundaries, which is in essence a completely valid thing to attempt. Depending on the situation and the compiler, your code may or may not crash with an access violation/segfault; most probably it will not. It will most certainly not raise an exception: in C++, exceptions are only thrown explicitly by code, not implicitly by the system; the program just crashes when something goes wrong, unlike higher-level languages such as Java and C#.
The catch(...) syntax will catch every possible exception, but this is not applicable in this situation.
As the other answers alluded to, you cannot correctly solve this problem with native, standard C++. In fact, trying to do anything once undefined behaviour has occurred is normally a really bad idea. If you do not know what state your app is in, how can you possibly and safely run code? Just fail fast.
The proper way to solve your problem is to have another, separate 'watchdog' process perform your clean-up. It's pretty simple: just have the watchdog process constantly monitor for the existence of the file. When it pops into existence, the watchdog process should delete it. The deletion pends until the last reference to the file is gone and then takes effect [on *nix OSes the directory entry is removed immediately but the data lives on until the last reference is closed; on Windows the delete simply waits until the file is unreferenced]. Once the main program is finished with the file, whether through normal means, crashing, or whatever, the OS will delete the file for you correctly.
If you want to make sure your 'private' file is always removed, how about a 'wrapper' program?
Create a new application that runs your protected application and then waits for it to terminate. When it terminates (however that is, crash or clean exit), delete your private file then exit.
Instead of running your application, run the wrapper.
int main(void)
{
    // Pseudocode: exec/WaitForProcessExit stand in for your platform's
    // process-spawning and wait APIs (e.g. CreateProcess/WaitForSingleObject).
    int processId = exec("your protected app");
    WaitForProcessExit(processId);
    _unlink("protectedfile.bin");
    return 0;
}
C++ allows you to work directly with memory and is partly C-compatible. The syntax you are trying to use:
    char some_array[10];
is C-style syntax rather than idiomatic C++, and
    char c = some_array[11];
is raw access to memory. This is undefined behavior. That means that no one can tell you what is going to happen. Maybe you just get a wrong character, maybe your program will be killed by the OS. Or the moon will fall on the Earth. =)
If you want high-level features, use modern C++. Look at the standard library instead of C-style arrays. You can use std::vector with its at(size_type n) method to get an out_of_range exception; just what you need.
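For instance, the bounds-checked access the answer refers to, using the same out-of-range index as the question:

#include <cassert>
#include <stdexcept>
#include <vector>

bool demo() {
    std::vector<char> temp(10);
    try {
        char c = temp.at(11);   // out of bounds: at() checks, operator[] does not
        (void)c;
    } catch (const std::out_of_range&) {
        return true;            // reached: at() threw as specified
    }
    return false;
}

Unlike the raw-array version, this is well-defined: at() is guaranteed to throw std::out_of_range instead of silently reading past the buffer.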
I have a class where I allocate some memory. For Example:
class image {
public:
image(){
pixelvalue = 0;
}
void formatandcopy() {
pixelvalue = new int [10000*50000];
if(pixelvalue)
qDebug()<<"allocation successful";
else
qDebug()<<"allocation failed";
}
private:
int *pixelvalue;
};
When I call formatandcopy() the program throws this:
Qt has caught an exception thrown from an event handler. Throwing
exceptions from an event handler is not supported in Qt. You must
reimplement QApplication::notify() and catch all exceptions there.
Anyone know how I can prevent this and let the user know it's simply out of memory? When I run this it doesn't even show "allocation failed"; the above error is thrown before the qDebug() call. The program runs fine if the amount of memory allocated is decreased. I thought this was strange, since the error is triggered by the new operator and not by a Qt function. Furthermore, my machine has plenty of memory left. I assume this is a result of Qt limiting its program to a certain heap space. Lastly, if I can indeed fix this by reimplementing the notify function, can anyone point me in the right direction?
You should be able to catch std::bad_alloc to handle the exception within that function. The exception is part of standard C++.
try {
// ...
} catch (std::bad_alloc &a) {
// ...
}
If it gets beyond that scope (into the Qt event handling) then you will have to implement QApplication::notify as they specify.
As a word of warning, bad allocations usually are not recoverable. The exception is when you know you are allocating a very large amount (perhaps based on user input) and are otherwise not running a very memory-hungry application.
EDIT:
To clarify, if the design of your application allows you to run out of memory, then it is unlikely that anything will change if you catch bad_allocs and ignore them. The program is dead, you are only going to be able to display an error message about what happened. That is also tricky, because you cannot allocate any memory to create the message box!
The counter example is a scenario like asking the user for a file and reading it all into memory. Eventually they will try and give you a file that they don't have the memory for and you can safely tell them to try another file. These sorts of problems are usually isolated in an application and well worth guarding against.
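A sketch of that isolated, recoverable case: the requested size comes from outside (here an absurdly large hard-coded request stands in for user input), so failure is expected and can be reported rather than treated as fatal. The function name and messages are illustrative, not from any real API:

#include <limits>
#include <new>      // std::bad_alloc
#include <string>

// Try to reserve `bytes` bytes for a file the user picked; report failure
// instead of dying, so the user can try a smaller file.
std::string loadFile(std::size_t bytes) {
    try {
        char* buffer = new char[bytes];
        delete[] buffer;             // real code would read the file here
        return "loaded";
    } catch (const std::bad_alloc&) {
        return "file too large, please try a smaller one";
    }
}

Because the try/catch sits right around the one oversized allocation, the rest of the program never sees the bad_alloc and keeps running normally.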
The error message is telling you exactly what has happened and what to do: Exceptions cannot propagate out of an event handler (or anything inside a signal/slot mechanism), in order for exceptions to be handled properly they have to be caught inside QApplication::notify(), like this:
bool MyApplication::notify( QObject* receiver, QEvent* event )
{
try {
return QApplication::notify( receiver, event );
} catch ( std::exception& e ) {
cout << "Arrrgggghhhh" << endl;
return false;
}
}
Also, it's your OS that decides how much heap space you can have, not Qt.
You don't need to check pixelvalue by doing if(pixelvalue): if new fails, an exception is thrown and the assignment never happens, so there is no pointer to check on that path. A check like that is only useful for detecting a NULL pointer, which plain new (unlike new(std::nothrow)) will never give you.
If my application runs out of memory, I would like to re-run it with changed parameters. I have malloc / new in various parts of the application, the sizes of which are not known in advance. I see two options:
Track all memory allocations and write a restarting procedure which deallocates all before re-running with changed parameters. (Of course, I free memory at the appropriate places if no errors occur)
Restarting the application (e.g., with WinExec() on Windows) and exiting
I am not thrilled by either solution. Did I miss an alternative, maybe?
Thanks
You could embed all the application functionality in a class, then let it throw an exception when it runs out of memory. This exception would be caught by your application, and you could then simply destroy the object, construct a new one, and try again; all in one application, in one run, no need for restarts. Of course, this might not be so easy, depending on what your application does...
There is another option, one I have used in the past, however it requires having planned for it from the beginning, and it's not for the library-dependent programmer:
Create your own heap. It's a lot simpler to destroy a heap than to cleanup after yourself.
Doing so requires that your application is heap-aware. That means that all memory allocations have to go to that heap and not the default one. In C++ you can simply override the static new/delete operators which takes care of everything your code allocates, but you have to be VERY aware of how your libraries, even the standard library, use memory. It's not as simple as "never call a library method that allocates memory". You have to consider each library method on a case-by-case basis.
It sounds like you've already built your app and are looking for a shortcut to memory wiping. If that is the case, this will not help as you could never tack this kind of thing onto an already built application.
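To illustrate the technique (this is a sketch of the idea, not the asker's actual application): class-level operator new/delete can route every allocation of a hierarchy into a bump-allocated arena that is thrown away wholesale with one reset, rather than being cleaned up allocation by allocation. The pool size and class names here are arbitrary assumptions:

#include <cstddef>
#include <new>

// A trivially resettable arena: freeing is a no-op, reset() reclaims everything.
class Pool {
    static constexpr std::size_t kSize = 1 << 20;  // 1 MiB arena (arbitrary)
    alignas(std::max_align_t) unsigned char buf_[kSize];
    std::size_t used_ = 0;
public:
    void* allocate(std::size_t n) {
        // Round up so every returned pointer stays max-aligned.
        n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
        if (used_ + n > kSize) return nullptr;     // pool exhausted
        void* p = buf_ + used_;
        used_ += n;
        return p;
    }
    void reset() { used_ = 0; }                    // destroy the whole heap at once
    std::size_t used() const { return used_; }
};

static Pool pool;

// Everything deriving from Base allocates from the pool.
struct Base {
    static void* operator new(std::size_t size) {
        void* p = pool.allocate(size);
        if (!p) throw std::bad_alloc();            // or trigger the restart path
        return p;
    }
    static void operator delete(void*, std::size_t) {} // reclaimed only by reset()
    virtual ~Base() {}
};

struct Widget : Base { int payload[4]; };

new Widget now draws from the arena; after catching bad_alloc (or, on the embedded target from the first question, signalling a restart), pool.reset() reclaims every allocation in O(1). The caveat in the answer still applies: anything the standard library or third-party code allocates goes to the normal heap, not this pool.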
The wrapper program (as proposed before) does not need to be a separate executable. You could just fork, run your program, and then test the return code of the child. This has the additional benefit that the operating system automatically reclaims the child's memory when it dies (at least I think so).
Anyway, I imagined something like this (this is C; you might have to change the includes for C++):
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define OUT_OF_MEMORY 99 /* or whatever; note POSIX exit codes are 0-255 */

int main(void)
{
    int pid, status;

fork_entry:
    pid = fork();
    if (pid == 0) {
        /* child - call the main function of your program here,
           then exit() with its return value */
    } else if (pid > 0) {
        /* parent (supervisor) */
        wait(&status); /* waiting for the child to terminate */
        /* see if the child exited normally
           (i.e. by calling exit(), _exit() or by returning from main()) */
        if (WIFEXITED(status)) {
            /* if so, we can get the status code */
            if (WEXITSTATUS(status) == OUT_OF_MEMORY) {
                /* change parameters */
                goto fork_entry; /* forking again */
            }
        }
    } else {
        /* fork() error */
        return 1;
    }
    return 0;
}
This might not be the most elegant solution/workaround/hack, but it's easy to do.
A way to accomplish this:
Define an exit status, perhaps like this:
static const int OUT_OF_MEMORY=9999;
Set up a new handler and have it do this:
exit(OUT_OF_MEMORY);
Then just wrap your program with another program that detects this
exit status. When it does then it can rerun the program.
Granted this is more of a workaround than a solution...
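A sketch of wiring up the handler with std::set_new_handler (the 9999 code follows this answer's convention; note that if your wrapper uses POSIX wait(), exit statuses are truncated to 0-255, so pick a smaller value there):

#include <cstdlib>
#include <new>

static const int OUT_OF_MEMORY = 9999;

// Called by operator new when an allocation cannot be satisfied.
void outOfMemoryHandler() {
    // No cleanup is attempted: just die with a recognizable status
    // so the wrapper process can rerun us with different parameters.
    std::exit(OUT_OF_MEMORY);
}

void installHandler() {
    std::set_new_handler(outOfMemoryHandler);
}

After installHandler() runs, any failing `new` invokes outOfMemoryHandler() and the process exits with status 9999, which the wrapper can detect.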
The wrapper program I mentioned above could be something like this:
static const int special_code = 9999;

int main()
{
    const char* command = "whatever";
    int status = system(command);
    /* on POSIX, extract the real exit code with WEXITSTATUS(status) */
    while ( status == special_code )
    {
        command = ...;
        status = system(command);
    }
    return 0;
}
That's the gist of it. I would use std::string instead of char* in production. I'd probably also have another condition for breaking out of the while loop; some maximum number of tries, perhaps.
Whatever the case, I think the fork/exec route mentioned in another answer is pretty solid, and I'm pretty sure a similar solution could be created for Windows using spawn and its brethren.
Simplicity rules: just restart your app with different parameters.
It is very hard either to track down all allocs/deallocs and clean up the memory (just forget some minor blocks inside bigger chunks [fragmentation] and you still have problems rerunning the class), or to introduce your own heap management (very clever people have invested years to bring nedmalloc etc. to life; do not fool yourself into the illusion that this is an easy task).
so:
catch "out of memory" somehow (signals, or std::bad_alloc, or whatever)
create a new process of your app:
windows: CreateProcess() (you can just exit() your program after this, which cleans up all allocated resources for you)
unix: exec() (replaces the current process completely, so it "cleans up all the memory" for you)
done.
Be warned that on Linux, by default, your program can request more memory than the system has available. (This is done for a number of reasons, e.g. to avoid duplicating memory when fork()ing a program into two with identical data, most of which will remain untouched.) The system won't reserve physical pages for an allocation until you actually write to each page.
Since there's no good way to report this (any memory write can cause your system to run out of memory), your process will be terminated by the out-of-memory (OOM) killer, and you won't have the information or the opportunity to restart the process with different parameters.
You can change this behaviour with the setrlimit system call, using RLIMIT_AS to limit the total amount of address space your process can request (note that RLIMIT_RSS is not enforced on modern Linux). Only after you have done this will malloc return NULL, or new throw a std::bad_alloc exception, when you reach the limit you have set.
Be aware that on a heavily loaded system, other processes can still contribute to a system-wide out-of-memory condition that could cause your program to be killed without malloc or new raising an error, but if you manage the system well, this can be avoided.
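A Linux-specific sketch of that setup; the 512 MiB cap is an arbitrary assumption, and any process that already uses more address space than the cap will behave differently:

#include <new>              // std::nothrow
#include <sys/resource.h>   // setrlimit, RLIMIT_AS

// Cap this process's address space so oversized allocations fail cleanly
// (malloc returns NULL, new throws bad_alloc) instead of inviting the
// OOM killer later. Returns false if setrlimit itself fails.
bool capAddressSpace(rlim_t bytes) {
    struct rlimit rl;
    rl.rlim_cur = bytes;    // soft limit: what allocations are checked against
    rl.rlim_max = bytes;    // hard limit: cannot be raised again without privilege
    return setrlimit(RLIMIT_AS, &rl) == 0;
}

After capAddressSpace(512ull << 20), a request for, say, 1 GiB fails immediately and reports the failure through the normal C++ channels, which is exactly what the restart-on-OOM schemes above need.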