How to deallocate all allocated memory in Fortran? - fortran

Not sure if this is possible, but nonetheless.
I have a large Fortran 90 code (not written by me) that I wish to run with Python and mpi4py. Rather than spawn multiple instances of the Fortran executable, I would like to spawn it once and loop through the program multiple times. This is something that I have done previously, and it works well. However, this Fortran code is fairly sprawling and quite complicated compared with previous codes I have tackled with this approach.
My question is: is there a simple way of tearing down and deallocating all allocated arrays from inside the program at the end of each loop (without me having to go through the whole code and ensure everything has been deallocated by hand)? At present, if I loop through the main body of the code, it errors on the second pass through the loop because it tries to allocate an already allocated array.

No, there is no "deallocate all" command that would deallocate all allocatable entities in the program. You have to maintain any such list yourself and do the deallocation yourself.
You can make use of the automatic deallocation in Fortran, but that only works for local variables, not for entities in modules or in the main program.

Related

C++ delete [] - how to check if "all is deleted"?

I was wondering: throughout a program I am using a lot of char* pointers to C strings, and other pointers.
I want to make sure that I have deleted all pointers after the program is done, even though Visual Studio and Code::Blocks both do it for me (I think...).
Is there a way to check if all memory is cleared? That nothing is still 'using memory'?
The obvious answer on Linux would be valgrind, but the VS mention makes me think you're on Windows. Here is an SO thread discussing valgrind alternatives for Windows.
I was wondering, throughout a program I am using a lot of char* pointers to cstrings
Why? I write C++ code every day and I very rarely use a raw pointer at all. Really it's only when using third-party APIs, and even then I can usually work around it to some degree. Not that pointers are inherently bad, but if you can avoid them, do so, as it simplifies your program.
I want to make sure that I have delete all pointers after the program is done
This is a bit of a pointless exercise. The OS will do it for you when it cleans up your process.
Visual Studio is an IDE. It isn't even there by the time your code is deployed. It doesn't release memory for you.
For what you want you can look into tools like this:
http://sourceforge.net/apps/mediawiki/cppcheck/index.php?title=Main_Page
You might want to use some sort of a smart pointer (see the Boost library, for example). The idea is that instead of having to manage memory manually (that is call delete explicitly when you don't need the object any more), you enlist the power of RAII to do the work for you.
The problem with manual memory management (and in general resource management) is that it is hard to write a program that properly deallocates all memory -- because you forget, or later when you change your code do not realize there was some other memory that needed to be deallocated.
RAII in C++ takes advantage of the fact that the destructor of a stack-allocated object is called automatically when that object goes out of scope. If the destructor logic is written properly, the last object that references (manages) the dynamically allocated data will be the one (and only one) that deallocates that memory. This could be achieved via reference counting, maintaining a list of references, etc.
The RAII paradigm for memory is in a sense similar to the garbage collection of managed languages, except that it runs exactly when needed, at points dictated by your code, rather than at intervals largely independent of your code.
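As an illustration only (using the standard library's std::unique_ptr rather than Boost, and an entirely made-up type), here is a minimal sketch of RAII replacing a manual delete:

#include <memory>
#include <string>

struct Record {            // hypothetical type
    std::string name;
};

void process()
{
    // The unique_ptr owns the Record; no matching delete is needed anywhere.
    std::unique_ptr<Record> rec(new Record{"example"});
    rec->name += " data";
}   // rec goes out of scope here and the Record is destroyed automatically,
    // even if an exception was thrown inside the function.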
Unless you're writing driver code, there's absolutely nothing you can do with heap/freestore-allocated memory that would cause it to remain leaked once the program terminates. Leaked memory is only an issue during the lifetime of a given process; once a particular process has leaked its entire address space, it can't get any more for that particular process.
And it's not your compiler that does the ultimate cleanup, it's the OS. In all modern operating systems that support segregated process address spaces (i.e. in which one process cannot read/write another process's address space, at least not without OS assistance), when a program terminates, the process's entire address space is reclaimed by the OS, whether or not the program has cleanly free()d or deleted all of its heap/freestore-allocated memory. You can easily test this: write a program that allocates 256 MB of space and then exits or returns normally without freeing/deleting the allocated memory. Then run your program 16 times (or more). Shoot, run it 1000 times. If exiting without releasing that memory made it leak into oblivion until reboot, you'd very soon run out of available memory. You'll find this is not the case.
This same carelessness is not acceptable for certain other types of memory, however: If your program is holding onto any "handles" (a number that identifies some sort of resource granted to the process by the operating system), in some cases if you exit the program abnormally or uncleanly without releasing the handles, they remain lost until the next reboot. But unless your program is calling OS functions that give you such handles, that isn't a concern anyway.
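A minimal sketch of the experiment described above (256 MB is just the suggested figure; any large size works):

#include <cstdlib>
#include <cstring>

int main()
{
    const std::size_t size = 256u * 1024u * 1024u;          // 256 MB
    char *block = static_cast<char *>(std::malloc(size));
    if (block != nullptr)
        std::memset(block, 0, size);   // touch the pages so they are really committed
    // Deliberately no free(): the OS reclaims the whole address space on exit.
    return 0;
}

Run it dozens of times in a row and watch free memory: it does not steadily shrink, because each terminated process's address space is reclaimed in full.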
If this is Windows-specific, you can call this at the start of your program:
_CrtSetDbgFlag(_crtDbgFlag | _CRTDBG_LEAK_CHECK_DF);
After including
#include <crtdbg.h>
And when your program exits the C++ runtime will output to the debugger a list of all memory blocks that are still allocated so you can see if you forgot to free anything.
Note that this only happens in debug builds; in release builds the code does nothing (which is probably what you want anyway).
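Putting the two lines together, a minimal sketch might look like this; it uses the documented _CRTDBG_REPORT_FLAG query to read the current flags instead of the _crtDbgFlag global, and it only has any effect when built against the MSVC debug runtime:

#include <crtdbg.h>

int main()
{
    // Ask the debug CRT to dump every block still allocated when the process exits.
    _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_LEAK_CHECK_DF);

    int *forgotten = new int[10];   // deliberately never deleted
    (void)forgotten;
    return 0;   // the leak report appears in the debugger's Output window
}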

Application killed c++ - memory problem?

I wrote a program in C++ to perform Monte Carlo simulations. The thing is that after five iterations (every iteration runs a Monte Carlo simulation with a different configuration), the process is killed.
At the beginning I thought it was a memory problem, but after reading this nice post on memory management (http://stackoverflow.com/questions/76796/memory-management-in-c), my scoping seems to be correct.
I do not use a lot of memory, since my results are stored in a relatively small array which is rewritten every iteration. In an iteration I am not using more memory than in the previous one.
I cannot find the leak, if there is one. I have a lot of function calls to perform the calculations, but I do not need to destroy the objects once I am out of the function, right?
Any tip?
EDIT: The program takes all the processor power of my computer; when it is running I cannot even move the mouse.
Thanks in advance.
EDIT SOLVED: The problem was that I was not deleting the pointers I used, so every iteration the memory was not deallocated and a whole new set of pointers was created, using more memory. Thanks a lot to those who answered.
Depending on the platform you are on, you can use tools like valgrind or vld to find memory leaks in your program.

Shielding app from library leaks

I have to use a function from a shared library which leaks a small amount of memory (let's assume I can't modify the library). Unfortunately, I have to call this function a huge number of times, which obviously makes this leak catastrophic.
Is there any method to fix this problem? If yes, is there a fast way of doing it? (The function must be called a few hundred thousand times; the leak becomes problematic after about 10k calls.)
I can think of a couple of approaches, but I don't know what will work for you.
Switch to a garbage-collecting memory allocator like Boehm's gc. This can sweep up those leaks, and may even be a performance gain because free() becomes a no-op.
exit(): The Ultimate Deallocator. Fork off a subprocess, run it 10k times, pass the results back to the parent process. Apache's web server does this to contain damage from third-party library leaks.
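A rough POSIX-only sketch of the fork-and-exit idea; leaky_convert() and the single-int result are made up, and all error handling is omitted:

#include <unistd.h>
#include <sys/wait.h>

extern int leaky_convert(int input);   // hypothetical leaky library function

int run_in_child(int input)
{
    int fd[2];
    pipe(fd);
    pid_t pid = fork();
    if (pid == 0) {                          // child: do the leaky work
        int r = leaky_convert(input);
        write(fd[1], &r, sizeof r);
        _exit(0);                            // everything the child leaked dies with it
    }
    int result = 0;
    read(fd[0], &result, sizeof result);     // parent: collect the result
    waitpid(pid, nullptr, 0);
    close(fd[0]);
    close(fd[1]);
    return result;
}

In practice you would batch several thousand calls per child (much as Apache does with worker processes) so the fork overhead stays negligible.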
I'm not sure this is easier than rewriting the function yourself, but you could write your own small memory allocator specific to your task, which would look somewhat like the following (it would have to replace the default memory allocation calls, and this must be done for the functions in your library too):
1) You should have the ability to enter a leak-reverting mode which, on exit, disposes of everything allocated while it was active.
2) Before your function processes something, enter that leak-reverting mode, and exit it when the function finishes.
Basically, if the dependencies in your code aren't too tight, this would help.
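A very rough sketch of what the leak-reverting mode could look like: every allocation made while the mode is active is remembered in an arena, and leaving the mode frees the whole arena at once. Routing the library's malloc/new calls into this arena (e.g. by interposing the allocation functions) is the hard part and is not shown:

#include <cstddef>
#include <cstdlib>
#include <vector>

class Arena {                     // hypothetical leak-reverting allocator
public:
    void *allocate(std::size_t n)
    {
        void *p = std::malloc(n);
        blocks_.push_back(p);     // remember every block handed out in this mode
        return p;
    }
    ~Arena()                      // leaving the mode: dispose of everything at once
    {
        for (void *p : blocks_)
            std::free(p);
    }
private:
    std::vector<void *> blocks_;
};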
Another way would be making another application and pairing it with the main one. When the second one exits, its memory is automatically disposed of. You may want to see how the googletest framework runs its child tests and how the pipes are constructed there.
In short, no. If you have time, you can rewrite the function yourself. "Catastrophic" usually means this is the way to go. One other possibility: can you load and unload the library (like a .so)? It's possible that this will release the leaked memory.

C++ program dies with std::bad_alloc, BUT valgrind reports no memory leaks

My program fails with a 'std::bad_alloc' error message. The program is scalable, so I've tested a smaller version with valgrind and there are no memory leaks.
This is an application of statistical mechanics, so I am basically making hundreds of objects, changing their internal data (in this case STL vectors of doubles), and writing to a data file. The creation of the objects lies inside a loop, so when it ends the memory is freed. Something like:
for (cont = 0; cont < MAX; cont++) {
    classSection seccion;
    seccion.GenerateObjects(...);
    while (somecondition) {
        seccion.evolve();
        seccion.writedatatofile();
    }
}
So there are two variables which set the computing time of the program: the size of the system and the number of runs. It only crashes for big systems with many runs. Any ideas on how to catch this memory problem?
Thanks,
Run the program under a debugger so that it stops once that exception is thrown and you can observe the call stack.
Three most probable problems are:
heap fragmentation
too many objects created on heap (but still pointed to from the program)
a request for an unreasonably large block of memory
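If running under a debugger is not convenient, a cheap first step is to wrap the loop from the question in a try/catch so you at least learn which iteration dies (classSection, GenerateObjects and somecondition are the names from the question's snippet, not real declarations):

#include <cstdio>
#include <new>

void run_all(int MAX)
{
    int cont = 0;
    try {
        for (cont = 0; cont < MAX; cont++) {
            classSection seccion;
            seccion.GenerateObjects(/*...*/);
            while (somecondition) {
                seccion.evolve();
                seccion.writedatatofile();
            }
        }
    } catch (const std::bad_alloc &e) {
        // Report which iteration ran out of memory, then let the program die.
        std::fprintf(stderr, "bad_alloc at iteration %d: %s\n", cont, e.what());
        throw;
    }
}

Under gdb, "catch throw" will also stop the program at the point the exception is raised, which gives you the call stack mentioned above.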
valgrind would not show a memory leak because you may well not have one that valgrind would find.
You can actually have memory leaks in garbage-collected languages like Java. Although the memory is cleaned up there, that does not mean a bad programmer cannot hold on indefinitely to data they no longer need (e.g. building up a hash map indefinitely). The garbage collector cannot determine that the user does not really need that data any more.
You may be doing something like that here but we would need to see more of your code.
By the way, if you have a collection that really does have masses of data you are often better off using std::deque rather than std::vector unless you really really need it all to be contiguous.
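As a concrete illustration of the kind of "leak" valgrind will not flag (all names made up): the data below is still reachable when the program dies, so it is not reported as lost, yet it grows without bound until operator new throws std::bad_alloc:

#include <map>
#include <vector>

std::map<long, std::vector<double>> history;   // hypothetical cache of past states

void record_step(long step, const std::vector<double> &state)
{
    // Every iteration adds an entry that is never erased. The memory stays
    // reachable (so no "definitely lost" report), but it is still exhausted.
    history[step] = state;
}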

Allocating large blocks of memory with new

I have the need to allocate large blocks of memory with new.
I am stuck with using new because I am writing a mock for the producer side of a two-part application. The actual producer code allocates these large blocks, and my code has the responsibility to delete them (after processing them).
Is there a way I can ensure my application is capable of allocating such a large amount of memory from the heap? Can I set the heap to a larger size?
My case is 64 blocks of 288000 bytes each. Sometimes I get through 12 allocations before it fails, other times 27, and then I get a std::bad_alloc exception.
This is: C++, GCC on Linux (32bit).
With respect to new in C++/GCC/Linux(32bit)...
It's been a while, and it's implementation dependent, but I believe new will, behind the scenes, invoke malloc(). malloc(), unless you ask for something exceeding the address space of the process or outside of specified (ulimit/getrusage) limits, won't fail, even when your system doesn't have enough RAM + swap. For example: malloc of 1 GB on a system with 256 MB of RAM and no swap will, I believe, succeed.
However, when you go use that memory, the kernel supplies the pages through a lazy-allocation mechanism. At that point, when you first read or write to that memory, if the kernel cannot allocate memory pages to your process, it kills your process.
This can be a problem on a shared computer, for example when a colleague's process has a slow memory leak, especially once it starts knocking out system processes.
So the fact that you are seeing std::bad_alloc exceptions is "interesting".
Now new will run the constructor on the allocated memory, touching all those memory pages before it returns. Depending on implementation, it might be trapping the out-of-memory signal.
Have you tried this with plain ol' malloc?
Have you tried running the "free" program? Do you have enough memory available?
As others have suggested, have you checked limit/ulimit/getrusage() for hard & soft constraints?
What does your code look like, exactly? I'm guessing new ClassFoo [ N ]. Or perhaps new char [ N ].
What is sizeof(ClassFoo)? What is N?
Allocating 64*288000 (17.58Meg) should be trivial for most modern machines... Are you running on an embedded system or something otherwise special?
Alternatively, are you linking with a custom new allocator? Does your class have its own new allocator?
Does your data structure (class) allocate other objects as part of its constructor?
Has someone tampered with your libraries? Do you have multiple compilers installed? Are you using the wrong include or library paths?
Are you linking against stale object files? Do you simply need to recompile all your source files?
Can you create a trivial test program? Just a couple lines of code that reproduces the bug? Or is your problem elsewhere, and only showing up here?
--
For what it's worth, I've allocated over 2gig data blocks with new in 32bit linux under g++. Your problem lies elsewhere.
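A trivial test of exactly the allocation pattern from the question, worth running in isolation to confirm that new itself is not the problem (the sizes are the ones given above):

#include <cstdio>
#include <new>

int main()
{
    char *blocks[64] = {};
    try {
        for (int i = 0; i < 64; ++i)
            blocks[i] = new char[288000];   // 64 blocks of 288000 bytes, ~17.6 MB total
        std::puts("all 64 blocks allocated");
    } catch (const std::bad_alloc &) {
        std::puts("allocation failed");
    }
    for (char *p : blocks)
        delete[] p;                         // delete[] on a null pointer is a no-op
    return 0;
}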
It's possible that you are being limited by the process's ulimit; run ulimit -a and check the virtual memory and data seg size limits. Other than that, can you post your allocation code so we can see what's actually going on?
Update:
I have since fixed an array indexing bug and it is allocating properly now.
If I had to guess... I was walking all over my heap and messing up malloc's data structures. (??)
I would suggest allocating all your memory at program startup and using placement new to position your buffers. Why this approach? Well, you can manually keep track of fragmentation and such. There is no portable way of determining how much memory can be allocated for your process. I'm positive there's a Linux-specific system call that will get you that info (can't think of what it is). Good luck.
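A minimal sketch of the allocate-everything-up-front idea; the Sample type and the 64-slot pool are made up to match the sizes in the question:

#include <cstddef>
#include <new>

struct Sample { double values[36000]; };   // 36000 * 8 = 288000 bytes, as in the question

// One big, statically reserved pool, obtained once at program startup.
alignas(Sample) static unsigned char pool[64 * sizeof(Sample)];

Sample *make_sample(std::size_t slot)
{
    // Construct the object in place inside the pre-allocated pool (no heap call).
    return new (pool + slot * sizeof(Sample)) Sample();
}

Objects created this way must be destroyed with an explicit destructor call (sample->~Sample()); the pool itself lives until the program exits.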
The fact that you're getting different behavior when you run the program at different times makes me think that the allocation code isn't the real problem. Instead, somebody else is using the memory and you're the canary finding out it's missing.
If that "somebody else" is in your program, you should be able to find it by using Valgrind.
If that somebody else is another program, you should be able to determine that by going to a different runlevel (although you won't necessarily know the culprit).