What does the gfortran -fcheck=mem option check?

What kind of runtime errors would the -fcheck=mem option of gfortran catch?
The manual page explanation is not clear to me:
‘-fcheck=mem’
Enable generation of run-time checks for memory allocation. Note: This option does not affect explicit allocations using the ALLOCATE
statement, which will be always checked.

Most likely these are allocations that happen on assignment (a Fortran 2003 feature) and allocations for heap temporary arguments.
These can fail when there is not enough memory available, for example. I cannot come up with buggy code that would trigger these checks.

Related

How can I verify that the compiler is eliding my coroutine heap allocations?

I'm having trouble understanding the memory allocations made by C++20 coroutines. For my code, I would like to verify that the compiler is eliding heap allocations, and if it isn't, to find out what data is being placed within that allocation. My strategy right now has been to inspect the assembly output, but I'm not sure what to look for.
How can I verify that heap allocations are elided?
Edit
A possible example to refer to would be Lewis Baker's code here: https://www.godbolt.org/z/EoovEEKvW
Respondents should feel free to refer to other code or libraries if they like.
The simplest thing you could do is give your promise type allocator/deallocator functions (i.e., operator new/delete members). These functions are called to allocate storage for the coroutine if such storage is needed. As such, if they are not called for a particular use of a coroutine, storage was not needed.
However, this is not a guarantee. It is entirely possible that implementations will bypass elision if you provide such allocators. After all, your program might rely on having a short stack, so if you use allocators, they may be allocating from static storage instead of the actual program stack.
Then again, maybe not. The most you can do is try it and see what happens.
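For illustration, here is a minimal C++20 sketch (the task type and its promise are placeholders, not taken from the question's code) showing how instrumented operator new/delete members on the promise type reveal whether the coroutine frame was heap-allocated:

#include <coroutine>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <exception>
#include <new>

struct task {
    struct promise_type {
        // Instrumented allocation functions: if the implementation elides the
        // frame allocation (HALO), neither of these is called.
        static void* operator new(std::size_t n) {
            std::printf("coroutine frame allocated: %zu bytes\n", n);
            void* p = std::malloc(n);
            if (!p) throw std::bad_alloc();
            return p;
        }
        static void operator delete(void* p) noexcept {
            std::printf("coroutine frame freed\n");
            std::free(p);
        }
        task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };
};

task trivial() { co_return; }

int main() {
    trivial();  // if nothing is printed, the frame allocation was elided
}

As noted above, adding these members may itself change whether the implementation performs the elision, so treat the output as a hint rather than proof.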

Stack allocation of unknown size complexity

I know that stack allocation takes constant time. From what I understand, this happens because the allocation size can be determined at compile time. In this case the program knows how much memory is needed to run (for example) a function, and the entire chunk of memory that is needed can be allocated at once.
What happens in cases where the allocation size is only known at run time?
Consider this code snippet,
#include <iostream>

void func() {
    int n;
    std::cin >> n;
    // this is a stack allocation and its size is only known at run time
    int arr[n];
}
Edit: I'm using g++ 5.4 on Linux and this code compiles and runs.
What happens in cases where the allocation size is only known at run time?
Then the program is ill-formed, and therefore compilers are not required to compile the program.
If a compiler does compile it, then it is up to the compiler to decide what happens (other than issuing a diagnostic message, as required by the standard). This is usually called a "language extension". What probably happens is that an amount of memory determined by the runtime value is allocated for the array. More details may be available in the documentation of the compiler.
It is impossible (using standard C++ language) to allocate space on the stack without knowing how much space to allocate.
The line int arr[n]; is not valid C++ code. It only compiles because the compiler you are using decided to let you do that, so for more information you should refer to your compiler's documentation.
For GCC, you might take a look at this page: https://gcc.gnu.org/onlinedocs/gcc/Variable-Length.html
I'm using g++ 5.4 on linux and this code compiles and runs.
Yes, and this invalid code compiles under MSVC 2010:
int& u = 0;
The standard says that this code is ill-formed, yet MSVC compiles it! This is because of a compiler extension.
GCC accepts it because it implements it as an extension.
When compiling with -pedantic-errors, GCC will reject the code correctly.
Likewise, MSVC has the /permissive- compiler argument to disable some of its extensions.
The memory allocation procedure differs when the size to be allocated is only determined at runtime. When the size is not known at compile time, memory is typically reserved on the heap rather than on the stack. Heap allocation can continue until the machine's available memory is completely used up. Also, in languages like C and C++ the allocation is persistent, and the user is required to deallocate the memory after use.
In the example given above, a block of n * sizeof(int) bytes is reserved on the heap and is either garbage collected (in Java or Python) or manually deallocated through the pointer it was assigned to (in C/C++).
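For comparison, here is a minimal sketch of the standard-conforming, heap-backed alternatives to the VLA in the question (the names are illustrative):

#include <iostream>
#include <memory>
#include <vector>

void func() {
    int n;
    std::cin >> n;

    std::vector<int> arr(n);                 // heap-backed, freed automatically
    auto smart = std::make_unique<int[]>(n); // heap-backed, freed automatically

    int* raw = new int[n];                   // heap-backed, must be freed manually
    delete[] raw;
}

int main() {
    func();
}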

On what platform is func(shared_ptr(...), shared_ptr(...)) really dangerous?

I remember that Scott Meyers taught me that
func(shared_ptr<P>(new P), shared_ptr<Q>(new Q));
is dangerous, because (if I remember correctly) the order of memory allocation, reference-count construction, and binding to the function parameters allows a leak to appear (theoretically?) in rare circumstances. To prevent this, one should encapsulate the shared_ptr creation in a function call, e.g. make_shared().
func(make_shared<P>(), make_shared<Q>());
Here is some discussion about it, too.
I would like to know if there are (current) compilers in the field, on certain systems that indeed may leave some hole in some error cases? Or are those times gone, or were they only theoretical, anyway?
Most interesting would be know if any of these have that issue:
g++ 4.x or g++ 2.95, on Linux i386, x64, ARM, m68k or any Windows
Visual C++ on i386, x64 or ARM
Clang/LLVM on Linux, or any of its platforms
How about C++ compilers on/from Sun or IBM, HP-UX?
Has anyone observed this behavior on their specific platform?
This isn't a platform problem, it's an exception-safety issue. So the answer to your actual question is: all those platforms might exhibit the issue.
The memory leak problem arises due to 2 things:
Allocating memory with new might throw bad_alloc
The order in which arguments to functions are evaluated is unspecified.
The docs for boost::shared_ptr capture it nicely here
The general problem is treated in more detail here (GOTW).
The reason it might be "rare" comes about because it's really not that common to get bad_alloc, but your code must handle the possibility safely if it is to avoid memory leaks.
(I say "might" exhibit it - I haven't checked that they all throw bad_alloc if new fails...)
func(shared_ptr<P>(new P), shared_ptr<Q>(new Q));
A C++ compiler is free to implement this in the following order:
new Q
new P
construct shared_ptr around allocated P
construct shared_ptr around allocated Q
call func
(A compiler can perform 1, 2, 3, and 4 in any order, as long as 1 is before 4 and 2 before 3).
In the order above, if P's constructor or the call to new throws, Q is a memory leak (memory is allocated but the shared_ptr is not yet constructed around it).
As such, you should call std::make_shared (which handles exceptions on allocation gracefully), and you know that when std::make_shared has returned for one of them, the shared_ptr is fully constructed and will not leak.
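Here is a minimal sketch of the two calls side by side (P, Q and func are placeholders, and the interleaving hazard applies under pre-C++17 evaluation rules):

#include <memory>

struct P {};
struct Q {};

void func(std::shared_ptr<P>, std::shared_ptr<Q>) {}

int main() {
    // Potentially leaky: the compiler may interleave the two allocations and
    // constructions, so "new Q" can succeed and then leak if "new P" throws
    // before the shared_ptr<Q> takes ownership.
    func(std::shared_ptr<P>(new P), std::shared_ptr<Q>(new Q));

    // Safe: each make_shared call either yields a fully owning shared_ptr
    // or throws without leaving an unowned allocation behind.
    func(std::make_shared<P>(), std::make_shared<Q>());
}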
I would like to know if there are (current) compilers in the field, on certain systems that indeed may leave some hole in some error cases?
All standards-compliant compilers allow this behavior: both new expressions may be evaluated first and only then passed to the shared_ptr constructors, but the order in which they are evaluated and the shared_ptrs constructed is unspecified, so one of the newly created objects may be leaked.
This is unsafe on any platform which has a reordering optimizer, if that optimizer performs the following optimization A;B;A;B => A;A;B;B. This optimization improves code cache efficiency, so in general that's a good idea.
Obviously that optimizer can only rearrange B and A if their relative order is unspecified, which happens to be the case here.

Is it possible to get the size of a memory block allocated by 'new'?

Hello, I need to record my heap usage, and for now I am thinking of overloading the new operator with my own function.
I need to sum up the real number of bytes by which memory grows after malloc(), Heap*(), or other Windows mem* functions.
But first I need to analyze the current heap implementation. Is it possible to get block sizes, the way HeapSize() does for blocks allocated with HeapAlloc()?
I can see that you did not search the documentation.
HeapSize() exists.
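For illustration, a minimal Windows-only sketch pairing HeapAlloc() with HeapSize() (the 100-byte request is arbitrary):

#include <windows.h>
#include <cstdio>

int main() {
    HANDLE heap = GetProcessHeap();
    void* p = HeapAlloc(heap, 0, 100);
    if (p) {
        std::printf("block size: %zu bytes\n", (size_t)HeapSize(heap, 0, p));
        HeapFree(heap, 0, p);
    }
}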
Edit: On reflection, perhaps you were asking for an alternative to HeapSize() that you can use when you performed the allocation yourself with new.
The answer is no. The standard allocation routines don't provide any way to query information about a memory block, because:
That's highly implementation-dependent, and
You already know the block size (because you specified it in the first place), so what would be the point of the bloat?
In fact HeapSize() is the implementation-dependent function for Windows that does this, but you can only use it when you performed a HeapAlloc().
There's also the non-standard _msize that can be used with malloc and friends, but new may not use malloc.
Therefore I suggest that you just track the sizes yourself in your allocator.
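A simplified sketch of that suggestion, replacing the global operator new/delete so that each block is prefixed with its requested size (this ignores over-aligned allocations and is not thread-safe):

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::size_t g_total_bytes = 0;

// Reserve a max-aligned header in front of each block and store the
// requested size in it, so the matching delete can subtract it again.
void* operator new(std::size_t size) {
    constexpr std::size_t header = alignof(std::max_align_t);
    void* raw = std::malloc(header + size);
    if (!raw) throw std::bad_alloc();
    *static_cast<std::size_t*>(raw) = size;
    g_total_bytes += size;
    return static_cast<char*>(raw) + header;
}

void operator delete(void* p) noexcept {
    if (!p) return;
    constexpr std::size_t header = alignof(std::max_align_t);
    void* raw = static_cast<char*>(p) - header;
    g_total_bytes -= *static_cast<std::size_t*>(raw);
    std::free(raw);
}

int main() {
    int* x = new int(42);
    std::printf("bytes currently allocated via new: %zu\n", g_total_bytes);
    delete x;
}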
_msize
According to the documentation it works for calloc, malloc, and realloc.
However, at least under Visual Studio, using the default allocator, it also works for new.
It is not a good idea to use it, but it may do the work for your analysis.
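A minimal MSVC-only sketch of _msize used with malloc (the allocation size is arbitrary):

#include <cstdio>
#include <cstdlib>
#include <malloc.h>   // MSVC-specific header that declares _msize

int main() {
    int* p = static_cast<int*>(std::malloc(10 * sizeof(int)));
    if (p) {
        std::printf("usable block size: %zu bytes\n", _msize(p));
        std::free(p);
    }
}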
One more thing:
External tools such as VMMap may help for such analysis.

nothrow or exception?

I am a student and I have limited knowledge of C++, which I am trying to expand. This is more of a philosophical question; I am not trying to implement something.
Since
#include <new>
//...
T* t = new (std::nothrow) T();
if (t)
{
    //...
}
//...
will hide the exception, and since dealing with exceptions is heavier compared to a simple if (t), why isn't the normal new T() considered worse practice, considering we have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)?
What are the benefits (if any) of the normal new allocation compared to using a nothrow new? Is the exception's overhead insignificant in that case?
Also, assume that an allocation fails (e.g. no memory is left in the system). Is there anything the program can do in that situation, or should it just fail gracefully? There is no way to find free memory on the heap when all of it is reserved, is there?
In case an allocation fails and an std::bad_alloc is thrown, how can we assume that, when there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception?
Thanks for your time. I hope the question is in line with the rules.
Since dealing with exceptions is heavier compared to a simple if (t), why isn't the normal new T() considered worse practice, considering we have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)? What are the benefits (if any) of the normal new allocation compared to using a nothrow new? Is the exception's overhead insignificant in that case?
The penalty for using exceptions is indeed very heavy, but (in a decently tuned implementation) it is only paid when an exception is actually thrown, so the mainline case stays very fast, and there is unlikely to be any measurable performance difference between the two in your example.
The advantage of exceptions is that your code is simpler: if you allocate several objects you don't have to write "allocate A; if (A) { allocate B; if (B) ..." and so on. The cleanup and termination, in both the exception and the mainline case, is best handled automatically by RAII (whereas if you check manually you will also have to free manually, which makes it all too easy to leak memory).
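A small sketch contrasting the two styles, with placeholder types A and B; the version using throwing new relies on RAII (here std::unique_ptr) so cleanup happens automatically whether or not a later allocation throws:

#include <memory>
#include <new>

struct A {};
struct B {};

bool with_nothrow() {
    A* a = new (std::nothrow) A();
    if (!a) return false;
    B* b = new (std::nothrow) B();
    if (!b) { delete a; return false; }  // manual cleanup at every step
    // ... use a and b ...
    delete b;
    delete a;
    return true;
}

void with_exceptions() {
    auto a = std::make_unique<A>();  // throws std::bad_alloc on failure
    auto b = std::make_unique<B>();  // a is released automatically if this throws
    // ... use a and b ...
}                                    // both released automatically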
Also, assume that an allocation fails (e.g. no memory is left in the system). Is there anything the program can do in that situation, or should it just fail gracefully? There is no way to find free memory on the heap when all of it is reserved, is there?
There are many things that it can do, and the best thing to do will depend on the program being written. Failing and exiting (gracefully or otherwise) is certainly one option. Another is to reserve sufficient memory in advance, so that the program can carry on with its functions (perhaps with reduced functionality or performance). It may be able to free up some of its own memory (e.g. if it maintains caches that can be rebuilt when needed). Or (in the case of a server process), the server may refuse to process the current request (or refuse to accept new connections), but stay running so that clients don't drop their connections, and things can start working again once memory returns. Or in the case of an interactive/GUI application, it might display an error to the user and carry on (allowing them to fix the memory problem and try again - or at least save their work!).
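One way to sketch the "reserve memory in advance / free it when things get tight" idea is the standard std::set_new_handler hook (the reserve size and names here are illustrative, not taken from the answer above):

#include <new>
#include <vector>

std::vector<char>* emergency_reserve = nullptr;

// Called by operator new when an allocation fails; it must either make more
// memory available, or throw/exit, otherwise new calls it again in a loop.
void release_reserve() {
    if (emergency_reserve) {
        delete emergency_reserve;       // free the reserve so new can retry
        emergency_reserve = nullptr;
    } else {
        std::set_new_handler(nullptr);  // nothing left: let new throw bad_alloc
    }
}

int main() {
    emergency_reserve = new std::vector<char>(10 * 1024 * 1024);
    std::set_new_handler(release_reserve);
    // ... run the program; if new ever fails, release_reserve() is invoked ...
    delete emergency_reserve;
}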
In case an allocation fails and an std::bad_alloc is thrown, how can we assume that, when there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception?
You can't simply assume that, but the standard library will usually ensure, typically by reserving a small amount of memory in advance, that there is enough memory for an exception to be raised even when memory is exhausted.
Nothrow was added to C++ primarily to support embedded systems developers who want to write exception-free code. It is also useful if you actually want to handle memory errors locally, as a better solution than malloc() followed by a placement new. And finally it is essential for those who wished to continue using (what were then current) C++ programming styles based on checking for NULL. [I proposed this solution myself, one of the few things I proposed that didn't get downvoted :]
FYI: throwing an exception on out-of-memory is very design-sensitive and hard to implement, because if you were, for example, to throw a string, you might double fault because the string does heap allocation. Indeed, if you're out of memory because your heap crashed into the stack, you mightn't even be able to create a temporary! This particular case explains why the standard exceptions are fairly restricted. It is also why, if you're catching such an exception fairly locally, you should catch by reference rather than by value (to avoid a possible copy causing a double fault).
Because of all this, nothrow provides a safer solution for critical applications.
I think that the rationale for using the regular new instead of the nothrow new is connected to the reason why exceptions are usually preferred to explicitly checking the return value of each function. Not every function that needs to allocate memory necessarily knows what to do if no memory can be found. For example, a deeply nested function that allocates memory as a subroutine of some algorithm probably has no idea what the proper course of action is if memory can't be found. Using a version of new that throws an exception allows the code that calls the subroutine, not the subroutine itself, to take a more appropriate course of action. This could be as simple as doing nothing and watching the program die (which is perfectly fine if you're writing a small toy program), or signalling some higher-level program construct to start throwing away memory.
In regards to the latter half of your question, there actually could be things you could do if your program ran out of memory that would make memory more available. For example, you might have a part of your program that caches old data, and could tell the cache to evict everything as soon as resources became tight. You could potentially page some less-critical data out to disk, which probably has more space than your memory. There are a whole bunch of tricks like this, and by using exceptions it's possible to put all the emergency logic at the top of the program, and then just have every part of the program that does an allocation not catch the bad_alloc and instead let it propagate up to the top.
Finally, it usually is possible to throw an exception even if memory is scarce. Many C++ implementations reserve some space in the stack (or some other non-heap memory segment) for exceptions, so even if the heap runs out of space it can be possible to find memory for exceptions.
Hope this helps!
Avoiding exceptions because they're "too expensive" is premature optimisation. There is practically no overhead from a try/catch if an exception is not thrown.
Is there anything the program can do in that situation
Not usually. If there's no memory in the system, you probably can't even write anything to a log, or print to stdout, or anything. If you're out of memory, you're pretty much screwed.
Running out of memory is expected to be a rare event, so the overhead of throwing an exception when it happens isn't a problem. Implementations can "pre-allocate" any memory that's needed for throwing a std::bad_alloc, to ensure that it's available even when the program has otherwise run out of memory.
The reason for throwing an exception by default, instead of returning null, is that it avoids the need for null checks after every allocation. Many programmers wouldn't bother doing that, and if the program were to continue with a null pointer after a failed allocation, it would probably just crash later with something like a segmentation fault, which doesn't indicate the real cause of the problem. The use of an exception means that if the OOM condition isn't handled, the program will immediately terminate with an error that actually indicates what went wrong, which makes debugging much easier.
It's also easier to write handling code for out-of-memory situations if they throw exceptions: instead of having to individually check the result of every allocation, you can put a catch block somewhere high in the call stack to catch OOM conditions from many places throughout the program.
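A minimal sketch of that pattern, with deep_work() standing in for arbitrary code below the handler:

#include <iostream>
#include <new>

// Stand-in for code deep in the call stack that allocates with plain
// (throwing) new; an absurdly large request is likely, though not
// guaranteed, to fail.
void deep_work() {
    char* p = new char[1ull << 48];
    delete[] p;
}

int main() {
    try {
        deep_work();
    } catch (const std::bad_alloc& e) {
        // One handler, high in the call stack, covers allocation failures
        // anywhere below it.
        std::cerr << "out of memory: " << e.what() << '\n';
        return 1;
    }
}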
In Symbian C++ it works the other way around. If you want an exception thrown on OOM, you have to do
T* t = new(ELeave) T();
And you're right that the logic of throwing an exception on OOM is strange: a scenario that is manageable suddenly becomes a program termination.