V8::IdleNotification() never returns true

For some reason, the call to IdleNotification sometimes never returns true. The documentation states:
Optional notification that the embedder is idle. V8 uses the notification to reduce memory footprint. This call can be used repeatedly if the embedder remains idle. Returns true if the embedder should stop calling IdleNotification until real work has been done. This indicates that V8 has done as much cleanup as it will be able to do.
So when IdleNotification never returns true, it means that V8's garbage collector never finishes its work.
Because I cannot really find the reason for this behavior, I would be glad if anybody could give me a hint about what might cause it.

Try calling LowMemoryNotification() on the Isolate instead of or before calling IdleNotification. The former function informs V8 that it should work to free up memory. Without that memory pressure, it doesn't care about a few bytes that you may have allocated.
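A minimal sketch of that approach, assuming you already hold a v8::Isolate* (the helper name RelieveMemoryPressure and the loop cap are made up for illustration, and the IdleNotification() call matches the one used in the question; newer V8 versions expose idle work via the Isolate instead):

#include <v8.h>

// Hedged sketch: 'isolate' is assumed to be the embedder's existing isolate.
void RelieveMemoryPressure(v8::Isolate* isolate) {
    // Tell V8 we are under memory pressure; this triggers a full GC pass
    // instead of the incremental work IdleNotification performs.
    isolate->LowMemoryNotification();

    // If you still want to hand V8 idle time afterwards, bound the loop so
    // it cannot spin forever when IdleNotification() never reports "done".
    for (int i = 0; i < 10 && !v8::V8::IdleNotification(); ++i) {
        // a few rounds of idle cleanup, then stop regardless
    }
}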

Related

Heap out of memory, all retained memory held by noscript_shared_function_infos

My application creates a lot of functions with callbacks, this is done in the following way:
v8::Local<v8::Function> myFunc = v8::Function::New(
    i->GetCurrentContext(),
    FunctionInvokerCallback(),
    this->WrapDelegate(callbackInvoke),
    0,
    v8::ConstructorBehavior::kThrow,
    v8::SideEffectType::kHasSideEffect).ToLocalChecked();

// persistentObject is a weak Global reference with a callback to clean up native resources
this->objectHandle->persistentObject->Get(i)->Set(
    i->GetCurrentContext(), functionName, myFunc).FromJust();
The memory grows and eventually there is an OOM error and a crash. Upon looking at the heap snapshot, I find that most of the retained memory is held by noscript_shared_function_infos in (strong roots).
My guess is that either the sharedFunctionInfos aren't cleaned up (and grow and grow and grow), or, worse, my actual functions aren't cleaned up (when they are no longer in use).
How do I delete the infos / or the actual functions after I'm done?
Have you tried while(!V8::IdleNotification()) {};?
Shouldn't noscript_shared_function_infos be handled by v8's GC?
I have the same issue, but with Script::Run in a new context. It seems that every time it is invoked with a script that contains a function ...() {}, it appends some bytes to the noscript_shared_function_infos WeakRef array and then never cleans them up.
I've found only a few questions on different platforms (like this one) and zero answers from someone who knows V8 well. It's sad.

What may cause EnumProcesses() to fail?

The documentation states:
If the function fails, the return value is zero. To get extended error information, call GetLastError.
But it doesn't give any example of how the function could possibly fail.
For unit testing I need to reliably create a situation that makes EnumProcesses() fail.
Like most functions, it can fail if you pass it invalid parameters. In this case that means passing a PID array smaller than the size you tell it, or a NULL pointer for the returned byte count. It is a bit risky to do this on purpose because you don't know if the function uses SEH to protect against this or if it will just crash.
Internally the function has to allocate some memory before calling into NTDLL to get the process information and this can cause the function to fail if there is not enough memory available.
You should call EnumProcesses in a helper function to abstract away the memory/retry details anyway and that would be a good place to simulate failures when needed.
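A minimal sketch of such a helper, assuming C++; the g_simulate_failure flag is a made-up unit-test hook, not part of any Windows API:

#include <windows.h>
#include <psapi.h>
#include <vector>

// Hypothetical test hook: flip this in unit tests to simulate failure.
static bool g_simulate_failure = false;

bool EnumProcessIds(std::vector<DWORD>& pids)
{
    if (g_simulate_failure) {
        SetLastError(ERROR_NOT_ENOUGH_MEMORY);
        return false;
    }

    pids.resize(256);
    DWORD bytesReturned = 0;
    // Grow the buffer until EnumProcesses returns fewer bytes than we offered,
    // which means the whole process list fit.
    while (EnumProcesses(pids.data(),
                         static_cast<DWORD>(pids.size() * sizeof(DWORD)),
                         &bytesReturned)) {
        if (bytesReturned < pids.size() * sizeof(DWORD)) {
            pids.resize(bytesReturned / sizeof(DWORD));
            return true;
        }
        pids.resize(pids.size() * 2);
    }
    return false; // real failure: the caller can inspect GetLastError()
}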
If you absolutely need the function itself to fail you could hook it with something like Microsoft Detours or IAT hooking...

Freeing a null pointer causes following pop up message

I have some memory intensive code followed by the following statement
free(array1); // array1 is NULL
I essentially called free upon a null pointer. It was my understanding that accidentally calling free on a NULL pointer was safe. However, I got the following pop up when the free statement was executed.
"Windows has triggered a breakpoint in app.exe.
This may be due to a corruption of the heap, which indicates a bug in app.exe or any of the DLLs it has loaded."
Does this mean free is not safe to call on NULL pointers, or is this a sign that there could be memory problems in my program?
When I uncomment the free statement, everything works normally.
It is definitely safe to call free() on a NULL pointer, so you must have corrupted your heap earlier. That said, it may be worth putting a print statement in to guarantee array1 really is NULL just before the crash. If you can't spot the error, you may have to resort to tools to help you - there are lots of questions about such heap usage verification tools.
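For instance, a minimal sketch of such a check just before the free (array1's type and surroundings are unknown here, so this is only an illustration):

#include <cstdio>
#include <cstdlib>

// Confirm the pointer really is NULL right before freeing it.
if (array1 != NULL)
    std::fprintf(stderr, "array1 unexpectedly non-NULL: %p\n",
                 static_cast<void*>(array1));
free(array1);   // free(NULL) is guaranteed to be a no-op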
Supporting docs: you're on Windows, but this behaviour is standardised by the C language: see "If ptr is NULL, no operation is performed." at http://linux.die.net/man/3/free
Of course, you should check new/new[]/malloc-or-realloc are properly paired with delete/delete[]/realloc-or-free everywhere. There are valid reasons to be using malloc/free in C++ though - most obviously for interoperability with C code, and very occasionally for the potential performance benefits of in-place growth with realloc.
"When I uncomment the free statement, everything works normally." - best not to rely on that... if your program misuses the heap, it could crash at any time e.g. some runs and not others, after the tiniest seemingly unrelated change anywhere in your code, after changing a compiler flag, with innocuous input changes, or even the same executable deployed on another PC....

Exit the entire recursion stack

I'm calling a function fooA from main() that calls another function fooB that is recursive.
When I wish to return, I keep using exit(1) to halt execution. What is the right way to exit when the recursion tree is deep?
Returning through the recursion stack may not help, because returning usually clears a partial solution I have built, and I don't want that. I want to execute more code from main().
I read that exceptions can be used; it would be nice if I could get a code snippet.
The goto statement won't work to hop from one function back to another; Nikos C. is correct that it wouldn't account for releasing the stack frames of each of the calls you've made, so when you got to the function you goto'ed to, the stack pointer would be pointing to the stack frame of the function you were just in... no, that just won't work. Similarly, you can't simply call (either directly, or indirectly via a function pointer) the function you want to end up in when your algorithm is done. You'd never get back to the context you were in prior to diving into your recursive algorithm. You could conceivably architect a system this way, but in essence each time you did this you'd "leak" what was currently on the stack (not quite the same as leaking heap memory, but a similar effect). And if you were deep into a highly recursive algorithm, that could be a lot of "leaked" stack space.
No, you need to somehow return back to the calling context. There are only three ways to do so in C++:
1. Exit each function in turn by returning from it to its caller, backing up through the call chain in an orderly fashion.
2. Throw an exception and catch it at the point right after you launched into your recursive algorithm (which automatically destroys any objects created by each function on the stack in an orderly fashion).
3. Use setjmp() & longjmp() to do something similar to throwing & catching an exception, but "throwing" a longjmp() will not destroy objects on the stack; if any such objects own heap allocations, those allocations will be leaked.
To do option 1, you have to write your recursive function such that once a solution is reached, it returns some sort of indication that it's complete to its caller (which may be the same function), and its caller sees that fact & relays that fact on to its caller by returning to it (which may be the same function), so on and so on, until finally all stack frames of the recursive algorithm are released and you return to whatever function called the first function in the recursive algorithm.
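For example, a minimal sketch of option 1, using hypothetical names (fooB, Node, Result, isSolution, extractResult) since the question doesn't show the real recursion:

// Each call reports whether the solution was found; every caller checks the
// flag and unwinds immediately instead of continuing to recurse.
bool fooB(Node* node, Result& result) {
    if (node == nullptr)
        return false;

    if (isSolution(node)) {           // hypothetical predicate
        result = extractResult(node); // hypothetical helper
        return true;                  // tell the caller we are finished
    }

    // Short-circuit: stop as soon as either subtree reports completion.
    return fooB(node->left, result) || fooB(node->right, result);
}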
To do option 2, you wrap the call to your recursive algorithm in a try{...} and immediately after it you catch(){...} the expected thrown object (which could conceivably be the result of the computation, or just some object that lets the caller know "hey, I'm done, you know where to find the result"). Example:
try
{
    callMyRecursiveFunction(someArg);
}
catch( whateverTypeYouWantToThrow& result )
{
    // ...do whatever you want to do with the result,
    // including copying it somewhere else...
}
...and in your recursive function, when you finish the results, you simply:
throw(whateverTypeYouWantToThrow(anyArgsItsConstructorNeeds));
To do option 3...
#include <setjmp.h>

static jmp_buf jmp; // could be allocated other ways; the longjmp() user just needs to have access to it

// ...

if (!setjmp(jmp)) // setjmp() returns zero the 1st time, or whatever int value you send back with longjmp()
{
    callMyRecursiveFunction(someArg);
}
...and in your recursive function, when you finish the results, you simply:
longjmp(jmp, 1); // this passes 1 back to the setjmp(). If your result is an int, you
                 // could pass that back to setjmp(), but you can't pass zero back.
The bad thing about using setjmp()/longjmp() is that if there are any stack-allocated objects still "alive" on the stack when you call longjmp(), execution will jump back to the setjmp() point, skipping the destructors for those objects. If your algorithm uses only POD types, that's not an issue. It's also not an issue if the non-POD types your algorithm uses do NOT contain any heap allocations (e.g. from malloc() or new). If your algorithm uses non-POD types that contain heap allocations, then you're only safe with options 1 & 2 above. But if your algorithm meets the criteria of being OK with setjmp()/longjmp(), and if your algorithm is buried under a ton of recursive calls at the point it finishes, setjmp()/longjmp() may be the fastest way back to the initial calling context. If that won't work, option 1 is probably your best bet in terms of speed. Option 2 may seem convenient (and would possibly eliminate a condition check at the start of each recursion call), but the overhead associated with the system automatically unwinding the callstack is somewhat significant.
It's typically said you should reserve exceptions for "exceptional events" (events expected to be very rare), and the overhead associated with unwinding the callstack is why. Older compilers used something akin to setjmp()/longjmp() to implement exceptions (setjmp() at the location of the try & catch, and longjmp() at the location of a throw), but there was of course extra overhead associated with determining which objects on the stack need to be destroyed, even if there are no such objects. Plus, every time you ran across a try, it would have to save the context just in case there was a throw, and if exceptions are truly exceptional events, the time spent saving that context was simply wasted.

Newer compilers are more likely to use what are known as "zero-cost exceptions" (a.k.a. table-based exceptions), which sounds like it would solve all the world's problems, but it doesn't. It makes normal runtime faster because there is no longer a need to save the context every time you run across a try, but in the event that a throw executes, there is now even more overhead associated with decoding the information stored in massive tables that the runtime has to process in order to figure out how to unwind the stack, based on the location where the throw was encountered and the content of the runtime stack. So exceptions aren't free, even though they're very convenient.

You'll find a lot of stuff on the internet where people make claims about how unreasonably expensive they are and how much they slow down your code, and you'll also find lots of stuff by people refuting those claims, with both sides presenting hard data to bolster their claims. What you should take away from the arguments is that using exceptions is great if you expect them to rarely occur, because they result in cleaner interfaces & logic that's free of a ton of condition checking for "badness" every time you make a function call. But you shouldn't use exceptions as a means of normal communication between a caller and its callees, because that mode of communication is significantly more expensive than simply using return values.
This happened to me while finding the path from the root to a node of a binary tree. I was using a stack to store the nodes in preorder, and the recursion wouldn't stop until the last node returned NULL. I used a global integer variable i = 1, and when I reached the node I was looking for I set that variable to 0 and used while(i==0) return stack; to allow the program to go back up the call stack without popping my nodes off.

Does it take time to deallocate memory?

I have a C++ program which, during execution, will allocate about 3-8Gb of memory to store a hash table (I use tr1/unordered_map) and various other data structures.
However, at the end of execution, there will be a long pause before returning to shell.
For example, at the very end of my main function I have
std::cout << "End of execution" << endl;
But the execution of my program will go something like
$ ./program
do stuff...
End of execution
[long pause of maybe 2 min]
$ -- returns to shell
Is this expected behavior or am I doing something wrong?
I'm guessing that the program is deallocating the memory at the end. But commercial applications which use large amounts of memory (such as Photoshop) do not exhibit this pause when you close them.
Please advise :)
Edit: The biggest data structure is an unordered_map keyed with a string and stores a list of integers.
I am using g++ -O2 on Linux; the computer I am using has 128 GB of memory (with most of it free). There are a few giant objects.
Solution: I ended up getting rid of the hashtable since it was almost full anyways. This solved my problem.
If the data structures are sufficiently complicated when your program finishes, freeing them might actually take a long time.
If your program actually must create such complicated structures (do some memory profiling to make sure), there probably is no clean way around this.
You can short cut that freeing of memory by a dirty hack - at least on those operating systems where all memory allocated by a process is automatically freed when the process terminates.
You would do that by directly calling the libc's exit(3) function or the operating system's _exit(2). However, I would be very careful about verifying this does not short-circuit any other (important) cleanups some C++ destructor code might be doing. And what this does or does not do is highly system dependent (operating system, compiler, libc, the APIs you were using, ...).
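A minimal sketch of that hack on a POSIX system (flushing stdout manually, since _exit() skips stdio cleanup as well as destructors and atexit handlers):

#include <cstdio>
#include <unistd.h>   // _exit(2)

int main() {
    // ... build the huge hash table and do the real work ...

    std::printf("End of execution\n");
    std::fflush(stdout);   // _exit() will not flush stdio buffers for you

    // Dirty hack described above: let the OS reclaim the whole address space
    // at once. No destructors or atexit handlers run past this point.
    _exit(0);
}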
Yes, the deallocation of memory can take some time, and you possibly also have code executing, such as destructors being called. Photoshop does not use 3-8 GB of memory.
Also you should perhaps add profiling to your application to confirm it is the deallocation of memory and not something else.
(I started this as a reply to ndim, but it got too long)
As ndim already posted, termination can take a long time.
Likely reasons are:
you have lots of allocations, and parts of the heap are swapped to disk.
long running destructors
other atexit routines
OS specific cleanup, such as notifying DLLs of thread & process termination on Windows (I don't know what exactly happens on Linux).
exit is not the worst workaround here; however, actual behavior is system dependent. For example, exit on Windows / MSVC CRT will run global destructors / atexit routines, then call ExitProcess, which does close handles (but not necessarily flush them - at least it's not guaranteed).
Downsides: Destructors of heap allocated objects don't run - if you rely on them (e.g. to save state), you are toast. Also, tracking down real memory leaks gets much harder.
Find the cause: you should first analyze what is happening.
For example, by manually freeing the root objects that are still allocated, you can separate the deallocation time from the rest of the process cleanup. Memory is the likely cause according to your description, but it's not the only possible one. Some cleanup code deadlocking until it runs into a timeout is possible, too. Monitoring stats (such as CPU, swap activity, and disk use) can give clues.
Check the release build - debug builds usually use extra data on the heap that can immensely increase cleanup cost.
Different allocators
If deallocation is the problem, you might benefit a lot from using custom allocation mechanisms. Example: if your map only grows (items are never removed), an arena allocator can help a lot. If your lists of integers have many nodes, switch to a vector, or use a rope if you need random insertion.
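A hedged sketch of that last suggestion (the exact key/value types are guesses based on the question's "string key, list of integers" description):

#include <string>
#include <unordered_map>
#include <vector>

// One contiguous buffer of ints per key instead of one heap node per int:
// fewer allocations to make, and far fewer to tear down at exit.
std::unordered_map<std::string, std::vector<int>> table;

// table["some key"].push_back(42);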
Certainly it's possible.
About 7 years ago I had a similar problem on a project; there was much less memory, but computers were slower too, I suppose.
We had to look at the assembly language for free in the end to work out why it was so slow, and it seemed that it was essentially keeping the freed blocks in a linked list so they could be reallocated, and was also scanning that list looking for blocks to combine. Scanning the list was an O(n) operation, but freeing 'n' objects turned it into O(n^2).
Our test data took about 5 seconds to free the memory, but some customers had about 10 times as much data as we ever used, and it was taking 5-10 minutes to shut down the program on their systems.
We fixed it, as has been suggested by just terminating the process instead and letting the operating system clear up the mess (which we knew was safe to do on our application).
Perhaps you have a more sensible free function than we had several years ago, but I just wanted to post that it's entirely possible to see this if you have many objects to free and an O(n) free operation.
I can't imagine how you'd use enough memory for it to matter, but one way I sped up a program was to use boost::object_pool to allocate memory for a binary tree. The major benefit for me was that I could just put the object pool as a member variable of the tree, and when the tree went out of scope or was deleted, the object pool would be deleted all at once (letting me avoid a recursive destructor for the nodes). object_pool does call all of its objects' destructors when it is destroyed, though. I'm not sure if it handles empty destructors in a special way or not.
If you don't need your allocator to call a constructor, you can also use boost::pool, which I think may deallocate faster because it doesn't have to call destructors at all and just releases the chunk of memory in one free().
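A rough sketch of the pool-as-member idea described above (Node and Tree are made-up names):

#include <boost/pool/object_pool.hpp>

struct Node {
    int   value = 0;
    Node* left  = nullptr;
    Node* right = nullptr;
};

class Tree {
public:
    Node* makeNode(int v) {
        Node* n = pool_.construct();   // allocate from the pool
        n->value = v;
        return n;
    }
    // No recursive delete needed: when the Tree (and thus pool_) is
    // destroyed, object_pool releases every node in one go, running each
    // Node's destructor as it does so.
private:
    boost::object_pool<Node> pool_;
};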
Freeing memory may well take time - data structures are being updated. How much time depends on the allocator being used.
Also there might be more than just memory deallocation going on - if destructors are being executed, there may be a lot more than that going on.
2 minutes does sound like a lot of time though - you might want to step through the clean up code in a debugger (or use a profiler if that's more convenient) to see what's actually taking all the time.
The time is probably not spent so much on deallocating the memory as on calling all the destructors. You can provide your own allocator that does not call the destructor (if the objects in the map don't need to be destructed, but only deallocated).
Also take a look at this other question: C++ STL-conforming Allocators
Normally, deallocating memory as a process ends is not taken care of as part of the process, but rather as an operating system cleanup function. You might try something like valgrind to make sure your memory is being dealt with properly. However, the compiler also does certain things to set up and tear down your program, so some sort of performance profiling, or using a debugger to step through what is taking place at teardown time might be useful.
When your program exits, the destructors of all the global objects are called. If one of them takes a long time, you will see this behavior.
Look for global objects and investigate their destructors.
Sorry, but this is a terrible question. You need to show the source code showing the specific algorithms and data structures that you are using.
It could be the de-allocation, but that's just a wild guess. What are your destructors doing? Maybe it is paging like crazy. Just because your application allocates X amount of memory, that doesn't mean it will get it. Most likely it will be paging out to virtual memory. Depending on the specifics of your application and OS, you might be doing a lot of page faults.
In such cases, it might help to run iostat and vmstat in the background to see what the heck is going on. If you see a lot of I/O, that's a sure sign you are page faulting. I/O operations will always be more expensive than memory ops.
I would be very surprised if indeed all that lapsed time at the end is purely due to de-allocation.
Run vmstat and iostat as soon as you get the "ending" message, and look for any indications of I/O going bananas.
The objects in memory are organized in a heap. They are not deleted at once, they are deleted one by one, and the cost of deleting an object is O(log n). Freeing them all takes a long time.
The answer, then, is yes, it can take that much time.
You can avoid free being called on an object by using an explicit destructor call, my_object->~my_class(), instead of delete my_object. You can avoid free on all objects of a class by overriding operator delete with a no-op, operator delete( void * ) {}, inside the class. Derived classes with virtual destructors will inherit that delete; otherwise you can copy-paste it (or perhaps write using base::operator delete;).
This is much cleaner than calling exit. Just be sure you don't need that memory back!
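A small sketch of that trick (BigNode is a made-up name; use with care, since the memory is genuinely never returned until the process exits):

struct BigNode {
    ~BigNode() { /* release non-memory resources here, if any */ }

    // Class-specific no-op delete: 'delete p' still runs ~BigNode(), but the
    // storage is never handed back to the heap; the OS reclaims it at exit.
    static void operator delete(void*) noexcept {}
};

// BigNode* p = new BigNode;
// delete p;   // destructor runs, memory is intentionally "leaked"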
I guess your unordered map is a global variable, whose constructor is called at process startup, and destructor is called at process exit.
How could you know if the map is guilty?
You can test whether your unordered_map is responsible (and I guess it is) by allocating it with new and, well, ahem... forgetting to delete it.
If your process' exit goes faster, then you have your culprit.
Why is this so slow?
Now, just by reading your post, for your unordered map, I see potential allocations for:
the strings' allocated buffers
list items (each one being a string + other things)
unordered map items + the bucket array
If you have 3-8 Gb of data in this unordered map, this means that each item above will need some kind of new and delete. And if you free every item, one by one, it could take time.
Other reasons?
Note that if you add items to your map one by one while your process is executing, the individual allocations are barely perceptible... But the moment you want to clean everything up, all your allocated items must be destroyed at the same time, which could explain the perceived difference between construction/use and destruction...
Now, the destructors could take time for an additional reason.
For example, on Visual C++ 2008 in debug mode, upon destruction of STL iterators, the destructor verifies that the iterators are still valid. This caused quite a slowdown upon my object's destruction (which was basically a tree of nodes, each node having a list of child nodes, with iterators everywhere).
You are working on gcc, so perhaps they have their own debug testing, or perhaps your destructors are doing additional work (e.g. logging?)...
In my experience, the calls to free or delete should not take a significant amount of time. That said, I have seen plenty of cases where it does take non-trivial time to destruct objects because of destructors that did non-trivial things. If you can't tell what's taking time during the destruction, use a debugger and/or a profiler to determine what's going on. If the profiler shows you that it really is calls to free() that take a lot of time, then you should improve your memory allocation scheme, because you must be creating an extremely large number of small objects.
As you noted, plenty of applications allocate large amounts of memory and incur no significant delay during shutdown, so there's no reason your program can't do the same.
I would recommend (as some others have) a simple forced process termination, if you're certain that you've nothing left to do but free memory (for example, no file i/o and such left to do).
The thing is that when you free memory, typically it's not actually returned to the OS - it's held in a list to be reallocated, and this is obviously slow. However, if you terminate the process, the OS will reclaim all your memory at once, which should be substantially faster. However, as others have said, if you have any destructors that need to run, you should ensure that they are run before force-calling exit() or ExitProcess or any such function.
What you should be aware of is that deallocating memory that is spread out (e.g., two nodes in a map) is much slower due to cache effects than deallocating memory in a vector, because the CPU needs to access the memory to free it and run any destructors. If you deallocated a very large amount of memory that's very fragmented, you could be falling afoul of this, and should consider changing to some more contiguous structures.
I actually had a problem where allocating memory was faster than de-allocating it, and after allocating memory and then de-allocating it, I had a memory leak. Eventually, I worked out that this is why.
I am currently facing a similar issue with a CPU- and memory-intensive research program of mine. It runs until a specified time limit, prints a solution and exits. The destructor call of a single object (containing up to 10⁶ relatively small objects) was what unexpectedly took time at the end of execution (about 10 seconds to free 5 GB of data).
I was not satisfied by the answers advising to avoid executing every destructor, so here is the solution I came up with:
Original code:
void process() {
    vector<unordered_map<State, int>> large_obj(100);
    // Processing...
} // Takes a few seconds to exit (destructor calls)
Solution:
void process(bool free_mem = false) {
    auto* large_obj_ = new vector<unordered_map<State, int>>(100);
    auto& large_obj = *large_obj_;   // reference to the heap object, so the body reads as before
    // Processing...
    // (No changes required here, 'large_obj' can be used exactly as before)
    if (free_mem)
        delete large_obj_;
}
It has the advantage of being completely transparent apart from a few lines to insert, and it can even be parametrized to take some time to free the memory if needed. It is explicit which object will intentionally not be freed to avoid leaving things in an "unstable" state. Memory is cleaned up instantly by the OS on exit when free_mem = false.