How to handle GL_OUT_OF_MEMORY error during glBufferData? - opengl

The OpenGL reference mentions the GL_OUT_OF_MEMORY error:
The state of the GL is undefined, except for the state of the error flags, after this error is recorded.
The function glBufferData can generate this error if it cannot allocate storage for the given data. But on the other hand, the API doesn't seem to provide any way to check whether sending data of a particular size would succeed.
Is this situation really hopeless? If I get this error, is the only thing left for me to recreate the whole OpenGL context and start over?

What do you do if malloc returns NULL or new throws an exception? Do you have a recovery path for that possibility?
Most applications don't. Most applications happily assume that malloc will never return NULL and/or that new will never throw. And if those operations fail, they will happily crash.
The same would generally go for OpenGL. You presumably asked for that particular memory size for a good reason; because you needed it. And if you can't get it, for whatever reason, there usually isn't a solution for that.
While there are cases where you could recover from not being able to allocate memory, OpenGL confounds you in another way.
See, the reason that the entire state of OpenGL is undefined on an OUT_OF_MEMORY error is this: OOM can happen from anywhere. No function's documentation claims that it can issue an OOM error because every function can issue such an error.
Memory is not (necessarily) allocated when you call an allocating function. The driver can (and almost certainly will) defer allocation until later. So you get an OOM error from whatever OpenGL function you call after the driver detects the OOM condition.
So if buffer allocation fails, long after the call to glBufferData that provoked the failure, what can the OpenGL specification say about the current state? From the OOM error alone, there is no way to track down exactly what caused it.
So if you get this error, recovery is not really possible. Your only real recourse is to terminate the application, or tear down and recreate the OpenGL context, which amounts to starting over.
Note that lower-level APIs like Vulkan or D3D12 report out-of-memory immediately, at the allocation call, when the request cannot be satisfied.
Also:
But on the other hand the API doesn't seem to provide any way to check if sending data of particular size would succeed.
That would solve nothing. Why?
Because your application doesn't own the GPU; your OS does. Multiple programs can allocate memory on the GPU at the same time. The OS can poke about with memory too, paging things into and out of memory as it sees fit.
So if you ask if an allocation would succeed, and OpenGL returned yes, by the time you actually perform that allocation, the answer may have changed.
This is also why Vulkan and similar APIs do not have a function to test if an allocation will succeed (nor do they have a function to test how much memory is left unallocated). You just allocate memory; either it works and you get your memory, or it fails and you don't.


How to tell that a v8 isolate instance uses too much memory?

I am using Google's v8 engine for embedding javascript in my application. At certain times, I will be invoking user-supplied code, and I want to ensure that it does not behave badly by allocating too much memory. Currently, when the javascript tries to make or resize an array to be too large, for example, I get an unceremonious message:
#
# Fatal error in CALL_AND_RETRY_LAST
# Allocation failed - process out of memory
#
And then entire process crashes with a SIGILL. Obviously, this is not acceptable. I require the ability to run user-supplied code, however... and it is not feasible to manually vet all code before it is executed by the engine.
What I would ideally like to do in this case is simply terminate the isolate that was consuming too much memory (without affecting any other isolates that may be running). Is there any way to designate a maximum amount of memory that a js program is allowed to use before it fails, and so if it exceeds that limit, instead of crashing the process, the invocation of the Run or Call commands would simply return an error or set some status flag indicating that it was abnormally terminated.
Things I have tried so far:
Setting a custom array_buffer allocator when the isolate is created, which tracks how much memory is being used and terminates the isolate when memory usage gets too high. The Allocate function of my allocator never gets called.
Calling AddMemoryAllocationCallback with a function that tracks memory usage and tries to terminate the isolate via TerminateExecution() when the allocations exceed a certain amount. This function does get called, but I get an out of memory error after it has only reported a couple of megabytes being used, while I know for a fact that the data being created by the badly behaving v8 function is FAR larger than that.
Setting a fatal error handler via SetFatalErrorHandler and trying to invoke TerminateExecution there. This function does get called, but it does not prevent the process from crashing.
Is there anything else I can try?
Edit: authoritative response from the V8 team -- you can't. But they'll accept a patch.
v8::Isolate::SetFatalErrorHandler() should allow you to not crash. But my understanding is that the isolate is still unusable after the fact. There is probably no way of getting around that, since the isolate will be left in an unrecoverable state.
http://v8.paulfryzel.com/docs/master/classv8_1_1_isolate.html#a131f1e2e6a80618ac3c8c266a041851d
(Maybe. There seems to have been a lot of discussion about this in the 2013-2014 timeframe, in which people at Google said the right thing to do was to just let v8 kill the process, which a lot of people thought was dumb. I don't see any resolution.)
edit: Mailing list response was that you cannot do this. They will accept a patch if it doesn't have a performance impact.
edit: There was just another thread on this and someone posted what seems to be a pretty good way to avoid OOM in non-malicious situations:
https://groups.google.com/forum/#!topic/v8-users/vKn1hVs8KNQ
I set the heap limit to 8x the limit I actually want. Then, after each
call into the isolate, I check if the memory usage has gone over the
intended limit. If so, I invoke garbage collection. If it's still
over the limit after that, then I terminate the isolate at that point.
Meanwhile, we also enforce a CPU time limit of 50ms. In practice, a
script that allocates tons of memory tends to run out of CPU time
before it can hit the 8x heap limit (especially as the GC slows things
down when approaching the limit).

What happens when you leak a "device" and "device context" - specifically d3d?

I'm a little unclear exactly how these objects function that form bridges between software and hardware. Are they pretty much just software objects that get destroyed if you leak them onto the heap when you terminate the process? Or is there something more to it?
The reason I ask is that I forgot to have my initialization routine advance its state machine and switch routines, so it created as many "DeviceContexts" and "Devices" as it possibly could (via D3D11CreateDevice), reassigning them to the same pointers, before I caught my memory leaking at about 2GB.
Then it occurred to me that I really have no clue what it means to fail to release these objects. Is there a hardware component to them that I should be concerned about if these objects were to be leaked such that I need to reset my computer? Or does terminating the process pretty much clean up the mess?
I cold reset my computer regardless just to be sure. But it would be nice to know exactly what happens when you are using low level interfaces like this and you fail to properly destroy/release them.
The operating system will clean up all those device contexts when your program terminates. Otherwise a misbehaved program could bring the system to a standstill.
Your other concern (expressed in a comment) about damaging hardware shouldn't be possible either. If it were, a malicious program could wreak all sorts of havoc. It may be possible to damage hardware by accessing it directly, but that sort of access is exactly what the driver (and the device context, which sits between your program and the driver) is there to mediate.

Is it not possible to make a C++ application "Crash Proof"?

Let's say we have an SDK in C++ that accepts some binary data (like a picture) and does something. Is it not possible to make this SDK "crash-proof"? By crash I primarily mean forceful termination by the OS upon memory access violation, due to invalid input passed by the user (like an abnormally short junk data).
I have no experience with C++, but when I googled, I found several means that sounded like a solution (use a vector instead of an array, configure the compiler so that automatic bounds check is performed, etc.).
When I presented this to the developer, he said it is still not possible. Not that I don't believe him, but if so, how does a language like Java handle this? I thought the JVM performs a bounds check every time. If so, why can't one do the same thing in C++ manually?
UPDATE
By "Crash proof" I don't mean that the application does not terminate. I mean it should not abruptly terminate without information of what happened (I mean it will dump core etc., but is it not possible to display a message like "Argument x was not valid" etc.?)
You can check the bounds of an array in C++, std::vector::at does this automatically.
This doesn't make your app crash proof, you are still allowed to deliberately shoot yourself in the foot but nothing in C++ forces you to pull the trigger.
No. Even assuming your code is bug free. For one, I have looked at many a crash report automatically submitted, and I can assure you that the quality of the hardware out there is much below what most developers expect. Bit flips are all too common on commodity machines and cause random access violations. And even if you are prepared to handle access violations, there are certain exceptions after which the OS has no choice but to terminate the process, for example failure to commit a stack guard page.
By crash I primarily mean forceful termination by the OS upon memory access violation, due to invalid input passed by the user (like an abnormally short junk data).
This is what usually happens. If you access some invalid memory usually OS aborts your program.
However, the question is what counts as invalid memory... You may freely fill all the memory in the heap and stack with garbage, and that is valid from the OS's point of view; it would not be valid from your point of view, because you created garbage.
Basically, you need to check the input data carefully and rely on that. No OS will do this for you.
If you check your input data carefully, you will most likely handle the data fine.
I primarily mean forceful termination by the OS upon memory access violation, due to invalid input passed by the user
Not sure who "the user" is.
You can write programs that won't crash due to invalid end-user input. On some systems, you can be forcefully terminated due to using too much memory (or because some other program is using too much memory). And as Remus says, there is no language which can fully protect you against hardware failures. But those things depend on factors other than the bytes of data provided by the user.
What you can't easily do in C++ is prove that your program won't crash due to invalid input, or go wrong in even worse ways, creating serious security flaws. So sometimes[*] you think that your code is safe against any input, but it turns out not to be. Your developer might mean this.
If your code is a function that takes for example a pointer to the image data, then there's nothing to stop the caller passing you some invalid pointer value:
char *image_data = malloc(1);
free(image_data);
image_processing_function(image_data); /* dangling pointer: undefined behavior */
So the function on its own can't be "crash-proof", it requires that the rest of the program doesn't do anything to make it crash. Your developer also might mean this, so perhaps you should ask him to clarify.
Java deals with this specific issue by making it impossible to create an invalid reference - you don't get to manually free memory in Java, so in particular you can't retain a reference to it after doing so. It deals with a lot of other specific issues in other ways, so that the situations which are "undefined behavior" in C++, and might well cause a crash, will do something different in Java (probably throw an exception).
[*] let's face it: in practice, in large software projects, "often".
I think this is a case of C++ code not being managed code.
Java and C# code is managed; that is, it is effectively executed by a runtime which is able to perform bounds checking and detect crash conditions.
In the case of C++, you need to perform bounds and other checking yourself. However, you have the luxury of using exception handling, which lets you recover gracefully from error conditions you anticipate.
The bottom line is that C++ code is not crash proof by itself, but good design and development can make it so.
In general, you can't make a C++ API crash-proof, but there are techniques that can be used to make it more robust. Off the top of my head (and by no means exhaustive) for your particular example:
Sanity check input data where possible
Buffer limit checks in the data processing code
Edge and corner case testing
Fuzz testing
Putting problem inputs in the unit test for regression avoidance
If "crash proof" only means that you want to ensure you have enough information to investigate a crash after it has occurred, the solution can be simple. In most cases where debugging information is lost during a crash, the cause is corruption or loss of stack data due to an illegal memory operation by code running in one of the threads. If there are only a few places where you call a library or SDK that you don't trust, you can simply save the stack trace right before making the call into that library, at a memory location pointed to by a global variable; that location will be included in the partial or full memory dump generated by the system when your application crashes. On Windows such functionality is provided by the CrtDbg API. On Linux you can use the backtrace API; just search the docs for show_stackframe(). If you lose your stack information, you can then instruct your debugger to use that location in memory as the top of the stack after you have loaded your dump file. Well, it is not very simple after all, but if you are haunted by memory dumps without any clue as to what happened, it may help.
Another trick often used in embedded applications is cycled memory buffer for detailed logging. Logging to the buffer is very cheap since it is never saved, but you can get idea on what happen milliseconds before crash by looking at content of the buffer in your memory dump after the crash.
Actually, using bounds checking makes your application more likely to crash!
This is good design because it means that if your program is working, it's that much more likely to be working correctly, rather than working incorrectly.
That said, a given application can't be made "crash proof", strictly speaking, until the Halting Problem has been solved. Good luck!

What's the graceful way of handling out of memory situations in C/C++?

I'm writing a caching app that consumes large amounts of memory. Hopefully, I'll manage my memory well enough, but I'm just thinking about what to do if I do run out of memory.
If a call to allocate even a simple object fails, is it likely that even a syslog call will also fail?
EDIT: Ok perhaps I should clarify the question. If malloc or new returns a NULL or 0L value then it essentially means the call failed and it can't give you the memory for some reason. So, what would be the sensible thing to do in that case?
EDIT2: I've just realised that a call to "new" can throw an exception. This could be caught at a higher level, so I can perhaps exit gracefully further up. At that point, it may even be possible to recover depending on how much memory is freed. At the very least I should hopefully be able to log something by then. So while I have seen code that checks the value of a pointer after new, doing so is unnecessary; in C, however, you should check the return value of malloc.
If an allocation fails, you're going to get a std::bad_alloc exception. The exception causes the stack of your program to be unwound. In all likelihood, the inner loops of your application logic are not going to be handling out of memory conditions; only higher levels of your application should be doing that. Because the stack is getting unwound, a significant chunk of memory is going to be freed -- which in fact should be almost all the memory used by your program.
The one exception to this is when you ask for a very large (several hundred MB, for example) chunk of memory which cannot be satisfied. When this happens though, there's usually enough smaller chunks of memory remaining which will allow you to gracefully handle the failure.
Stack unwinding is your friend ;)
EDIT: Just realized that the question was also tagged with C -- if that is the case, then you should be having your functions free their internal structures manually when out of memory conditions are found; not to do so is a memory leak.
EDIT2: Example:
#include <iostream>
#include <vector>

void DoStuff()
{
    std::vector<int> data;
    // Insert a whole crapload of stuff into data here.
    // Assume std::vector::push_back does the actual throwing,
    // i.e. data.resize(SOME_LARGE_VALUE_HERE);
}

int main()
{
    try
    {
        DoStuff();
        return 0;
    }
    catch (const std::bad_alloc& ex)
    {
        // Observe that the local variable `data` no longer exists here.
        std::cerr << "Oops. Looks like you need to use a 64 bit system (or "
                     "get a bigger hard disk) for that calculation!";
        return -1;
    }
}
EDIT3: Okay, according to commenters there are systems out there which do not follow the standard in this regard. On the other hand, on such systems, you're going to be SOL in any case, so I don't see why they merit discussion. But if you are on such a platform, it is something to keep in mind.
Doesn't this question make assumptions regarding overcommitted memory?
I.e., an out of memory situation might not be recoverable! Even if you have no memory left, calls to malloc and other allocators may still succeed until the program attempts to use the memory. Then, BAM!, some process gets killed by the kernel in order to satisfy memory load.
I don't have any specific experience on Linux, but I spent a lot of time working in video games on games consoles, where running out of memory is verboten, and on Windows-based tools.
On a modern OS, you're most likely to run out of address space. Running out of memory, as such, is basically impossible. So just allocate a large buffer, or buffers, on startup, in order to hold all the data you'll ever need, whilst leaving a small amount for the OS. Writing random junk to these regions would probably be a good idea in order to force the OS to actually assign the memory to your process. If your process survives this attempt to use every byte it's asked for, there's some kind of backing now reserved for all of this stuff, so now you're golden.
Write/steal your own memory manager, and direct it to allocate from these buffers. Then use it, consistently, in your app, or take advantage of gcc's --wrap linker option to forward calls from malloc and friends appropriately. If you use any libraries that can't be directed to call into your memory manager, junk them, because they'll just get in your way. Lack of overridable memory management calls is evidence of deeper-seated issues; you're better off without this particular component. (Note: even if you're using --wrap, trust me, this is still evidence of a problem! Life is too short to use libraries that don't let you overload their memory management!)
Once you run out of memory, OK, you're screwed, but you've still got that space you left free before, so if freeing up some of the memory you've asked for is too difficult you can (with care) call system calls to write a message to the system log and then terminate, or whatever. Just make sure to avoid calls to the C library, because they'll probably try to allocate some memory when you least expect it -- programmers who work with systems that have virtualised address spaces are notorious for this kind of thing -- and that's the very thing that has caused the problem in the first place.
This approach might sound like a pain in the arse. Well... it is. But it's straightforward, and it's worth putting in a bit of effort for that. I think there's a Kernighan and/or Ritchie quote about this.
If your application is likely to allocate large blocks of memory and risks hitting the per-process or VM limits, waiting until an allocation actually fails is a difficult situation from which to recover. By the time malloc returns NULL or new throws std::bad_alloc, things may be too far gone to reliably recover. Depending on your recovery strategy, many operations may still require heap allocations themselves, so you have to be extremely careful on which routines you can rely.
Another strategy you may wish to consider is to query the OS and monitor the available memory, proactively managing your allocations. This way you can avoid allocating a large block if you know it is likely to fail, and will thus have a better chance of recovery.
Also, depending on your memory usage patterns, using a custom allocator may give you better results than the standard built-in malloc. For example, certain allocation patterns can actually lead to memory fragmentation over time, so even though you have free memory, the available blocks in the heap arena may not include a block of the right size. A good example of this is Firefox, which switched to jemalloc and saw a great increase in memory efficiency.
I don't think that catching the failure of malloc or new will gain you much in your situation. Linux allocates large chunks of virtual pages in malloc by means of mmap. Because of this, you may find yourself in a situation where you have allocated much more virtual memory than you actually have (real + swap).
The program then will only fail much later with a segfault (SIGSEGV) when you write to the first page for which there isn't any place in swap. In theory you could test for such situations by writing a signal handler and then dirtying all pages that you allocate.
But usually this will not help much either, since your application will be in a very bad state long before that: constantly swapping, thrashing your hard disk...
It's possible for writes to the syslog to fail in low memory conditions: there's no way to know that for every platform without looking at the source for the relevant functions. They could need dynamic memory to format strings that are passed in, for instance.
Long before you run out of memory, however, you'll start paging stuff to disk. And when that happens, you can forget any performance advantages from caching.
Personally, I'm convinced by the design behind Varnish: the operating system offers services to solve a lot of the relevant problems, and it makes sense to use those services (minor editing):
So what happens with Squid's elaborate memory management is that it gets into fights with the kernel's elaborate memory management ...
Squid creates a HTTP object in RAM and it gets used some times rapidly after creation. Then after some time it gets no more hits and the kernel notices this. Then somebody tries to get memory from the kernel for something and the kernel decides to push those unused pages of memory out to swap space and use the (cache-RAM) more sensibly for some data which is actually used by a program. This however, is done without Squid knowing about it. Squid still thinks that these http objects are in RAM, and they will be, the very second it tries to access them, but until then, the RAM is used for something productive. ...
After some time, Squid will also notice that these objects are unused, and it decides to move them to disk so the RAM can be used for more busy data. So Squid goes out, creates a file and then it writes the http objects to the file.
Here we switch to the high-speed camera: Squid calls write(2), the address it gives is a "virtual address" and the kernel has it marked as "not at home". ...
The kernel tries to find a free page, if there are none, it will take a little used page from somewhere, likely another little used Squid object, write it to the paging ... space on the disk (the "swap area") when that write completes, it will read from another place in the paging pool the data it "paged out" into the now unused RAM page, fix up the paging tables, and retry the instruction which failed. ...
So now Squid has the object in a page in RAM and written to the disk two places: one copy in the operating system's paging space and one copy in the filesystem. ...
Here is how Varnish does it:
Varnish allocates some virtual memory, and tells the operating system to back this memory with space from a disk file. When it needs to send the object to a client, it simply refers to that piece of virtual memory and leaves the rest to the kernel.
If/when the kernel decides it needs to use RAM for something else, the page will get written to the backing file and the RAM page reused elsewhere.
When Varnish next time refers to the virtual memory, the operating system will find a RAM page, possibly freeing one, and read the contents in from the backing file.
And that's it. Varnish doesn't really try to control what is cached in RAM and what is not, the kernel has code and hardware support to do a good job at that, and it does a good job.
You may not need to write caching code at all.
As has been stated, exhausting memory means that all bets are off. IMHO the best method of handling this situation is to fail gracefully (as opposed to simply crashing!). Your cache could allocate a reasonable amount of memory on instantiation. The size of this memory would equate to an amount that, when freed, will allow the program to terminate reasonably. When your cache detects that memory is becoming low then it should release this memory and instigate a graceful shutdown.
I'm writing a caching app that consumes large amounts of memory.
Hopefully, I'll manage my memory well enough, but I'm just thinking about what to do if I do run out of memory.
If you are writing a daemon which should run 24/7/365, then you should not use dynamic memory management: preallocate all the memory in advance and manage it using some slab allocator/memory pool mechanism. That will also protect you against heap fragmentation.
If a call to allocate even a simple object fails, is it likely that even a syslog call will also fail?
It should not. This is partially the reason why syslog exists as a syscall: so that an application can report an error independently of its internal state.
If malloc or new returns a NULL or 0L value then it essentially means the call failed and it can't give you the memory for some reason. So, what would be the sensible thing to do in that case?
In these situations I generally try to handle the error condition properly, applying the general error-handling rules. If the error happens during initialization, terminate with an error; it is probably a configuration problem. If the error happens during request processing, fail the request with an out-of-memory error.
For plain heap memory, malloc() returning 0 generally means:
that you have exhausted the heap, and unless your application frees some memory, further malloc()s won't succeed either; or
a wrong allocation size: it is a quite common coding error to mix signed and unsigned types when calculating the block size. If the size mistakenly ends up negative and is passed to malloc() where a size_t is expected, it becomes a very large number.
So in some sense it is also not wrong to abort() to produce the core file which can be analyzed later to see why the malloc() returned 0. Though I prefer to (1) include the attempted allocation size in the error message and (2) try to proceed further. If application would crash due to other memory problem down the road (*), it would produce core file anyway.
(*) From my experience of making software with dynamic memory management resilient to malloc() errors, I have seen that malloc() often does not return 0 reliably. First attempts returning 0 are followed by a successful malloc() returning a valid pointer, but the first access to the pointed-to memory crashes the application. This is my experience on both Linux and HP-UX, and I have seen a similar pattern on Solaris 10 too; the behavior isn't unique to Linux. To my knowledge, the only way to make an application 100% resilient to memory problems is to preallocate all memory in advance. That is mandatory for mission critical, safety, life support and carrier grade applications; they are not allowed dynamic memory management past the initialization phase.
I don't know why many of the sensible answers are voted down. In most server environments, running out of memory means that you have a leak somewhere, and that it makes little sense to 'free some memory and try to go on'. The nature of C++ and especially the standard library is that it requires allocations all the time. If you are lucky, you might be able to free some memory and execute a clean shutdown, or at least emit a warning.
It is however far more likely that you won't be able to do a thing, unless the allocation that failed was a huge one, and there is still memory available for 'normal' things.
Dan Bernstein is one of the very few guys I know that can implement server software that operates in memory constrained situations.
For most of the rest of us, we should probably design our software so that it leaves things in a useful state when it bails out because of an out of memory error.
Unless you are some kind of brain surgeon, there isn't a lot else to do.
Also, very often you won't even get a std::bad_alloc or anything like that; you'll just get a pointer back from your malloc/new, and only die when you actually try to touch all of that memory. This can be prevented by turning off overcommit in the operating system, but still.
Don't count on being able to deal with the SIGSEGV when you touch memory that the kernel hoped you wouldn't. I'm not quite sure how this works on the Windows side of things, but I bet they overcommit too.
All in all, this is not one of C++'s strong spots.