Simple memory issue exercise - c++

I work mostly in high-level programming languages, but yesterday a friend asked me to help him solve a simple C++ exercise, and while working on it I wrote this piece of code:
for (int x = 0; x < 10; x++){
int a, b, c;
a = x;
b = x*2;
c = x+5;
}
My question here is: will this cause a memory leak, with a, b, and c being created at different memory locations on every iteration, or will they be overwritten on every pass through the loop?

A memory leak can only happen if you have dynamically allocated objects (obtained by calling new, new[], malloc, or calloc). There are none in your code, so no.
What you have are local, or automatic, variables. As the name says, automatic variables are implicitly deallocated when the scope {} in which they are created ends.

a, b and c will be allocated on the stack. Automatic variables can never cause a memory leak unless the types themselves cause a leak through their constructor and/or destructor.
Regarding the question of whether they will be overwritten each loop: I strongly suspect that every single compiler on Earth will do it that way but, in principle, this is not guaranteed. If you look at the assembly output you will discover that either (a) all the variables are in registers or (b) they are retrieved at fixed offsets from the stack pointer. Since the same assembly is executed each time through the loop, they will, indeed, be overwritten.

You ask three questions:
this will cause a memory leak
No, there is no memory leak here. A memory leak, as that term is commonly used, requires a new without a delete, a new[] without a delete[], or a malloc() without a free().
a, b, c always be created in different locations of memory
They might be. They might not be. This is an implementation detail of which your program need not be aware. All you need to know is that the objects are created at the line that defines them, and destroyed at the closing brace of that scope.
will they always be overwritten with every loop?
They might be. They might not be. This is an implementation detail of which your program need not be aware. Regardless of whether they are overwritten, they are destroyed each time around the loop.

The same place in memory will be used to store the values in a, b and c in every iteration of the loop.

If you create a variable like this:
int i = 5;
the compiler will put it on the stack; you don't have to deallocate the int.
However, if you create an int on the heap,
int* i = new int; /*C++ style*/
int* j = (int*) malloc(sizeof(int)); /*C style*/
you do have to deallocate the memory like this:
delete i; /*C++ style*/
free(j); /*C style*/
otherwise you'll have a memory leak.
Most importantly, don't mix the C and C++ styles of memory handling.


Is pointer to variable the same as a pointer to an array of one element, or of zero elements?

Array with size 0 has good explanations of zero-length arrays and is certainly worthwhile and pertinent. I am not, however, seeing it compare zero-length arrays with single-element arrays and with pointers-to-variable.
When I asked before (Is c++ delete equivalent to delete[1]?) I did not express myself well. My question seemed to be the same or included in more general answers about new, new[], delete, and delete[]. Some understood that I was asking only about a single element. All answers in comments seemed correct and consistent.
There is a question that looks like the same as this question. But the body is about using C++ and Java together. Here, we are talking only about C++.
Checking my understanding
I will present pairs of proposed equivalent statements. The statements are declarations or instantiations of a pointer to a single variable followed by a pointer to an array of one element. Then I will state why I would think they are equivalent or not.
Are these pairs equivalent?
int one = 1;
// Sample 1. Same in the sense of pointing to an address somewhere
// whose contents equal one. Also same in the sense of not being able to
// change to point to a different address:
int * const p_int = &one;
int a_int[1] = {1};
// Sample 2.
int * p_int1 = new int;
int * a_int1 = new int[1];
// Sample 3.
delete p_int1;
delete[] a_int1;
// Sample 4. If Sample 3 is an equivalent pair, then (given Sample 2)
// we can write
delete[] p_int1;
delete a_int1;
Granted, Sample 4 is bad practice.
I am thinking: "delete" will call the destructor of the object. delete[] will call the destructor for each element of the array, and then call the destructor for the array. new in Sample 2 would malloc (so to speak) the variable. new[] would malloc an array of one element, then malloc the one element. And then that one element would be set equal to 1. So, I'm thinking THAT'S why I need to call delete[] and not delete when I have an array of even one element. Am I understanding?
And if I am understanding correctly, then if I call delete instead of delete[] to free an array of even one element, I will certainly have a memory leak. A memory leak is the specific "bad thing" that will happen.
However, what about this:
int * a_int0 = new int[0];
delete a_int0;
Would THAT result in a memory leak?
I invite corrections of my misuse of terminology and anything else.
Sample 1:
int * const p_int = &one;
int a_int[1] = {1};
NO, these are not equivalent. A pointer is not the same thing as an array. They are not equivalent for the same reason that 1 is not the same as std::vector<int>{1}: a range of one element is not the same thing as one element.
Sample 2:
int * p_int1 = new int;
int * a_int1 = new int[1];
These are sort of equivalent. You have to delete them differently, but otherwise the way you would use p_int1 and a_int1 is the same. You could treat either as a range (ending at p_int1+1 and a_int1+1, respectively).
Sample 3:
delete p_int1;
delete[] a_int1;
These are I suppose equivalent in the sense that both correctly deallocate the respective memory of the two variables.
Sample 4:
delete[] p_int1;
delete a_int1;
These are I suppose equivalent in the sense that both incorrectly deallocate the respective memory of the two variables.
int * const p_int = &one;
int a_int[1] = {1};
They are not equivalent, the first is a pointer to another variable, the other a mere array of size one, initialized straight away with the value one.
Understand this: pointers are entities in themselves, distinct from what they point to. Which is to say, the memory address of your p_int there, is entirely different from the memory address of your variable one. What's now stored in your p_int however, is the memory address of your variable one.
// Sample 2.
int * p_int1 = new int;
int * a_int1 = new int[1];
Here, though, they are effectively the same thing in terms of memory allocation. In both cases you create a single int on the heap (new means heap space), and both pointers are immediately assigned the address of those heap-allocated ints. The latter shouldn't be used in practice, though: even though it's technically not doing anything outright wrong, it's confusing for humans to read, since it conveys the notion that there is an array of objects when in reality there is just one object, i.e. no array is involved at all.
// Sample 3.
delete p_int1;
delete[] a_int1;
Personally, I've never used the "array" delete, but yeah, what you're saying is the right idea: delete[] essentially means "call delete on every element in the array", while regular delete means "delete that one object the pointer points to".
// Sample 4. If Sample 3 is an equivalent pair, then (given Sample 2)
// we can write
delete[] p_int1;
delete a_int1;
Using plain delete on an array is undefined according to the standard, which means we don't really know what will happen. Some compilers may be smart enough to see that you mean a regular delete, and vice versa, or they may not. In any case this is risky behavior and, as you say, bad practice.
Edit: I missed your last point. In short, I don't know.
new int[0];
basically means "allocate space for 0 objects of type int", i.e. 0 * sizeof(int) = 0 bytes. As for memory leaks, what seems intuitive is no: since no usable memory has been allocated, if the pointer goes out of scope there is nothing on the heap anyway. I can't give a more in-depth answer than that. My guess is that this is an example of undefined behavior that happens to have no visible consequences.
However, memory leaks happen when you do this:
void foo()
{
int* ptr = new int;
}
Notice how no delete is called as the function returns. The pointer ptr itself gets deallocated automatically, since it's located on the stack (i.e. automatic memory), while the int it points to is not. Since heap memory isn't automatically deallocated, that tiny int will no longer be addressable, since you no longer have a pointer to it anywhere. Basically, those few bytes of heap memory will be marked as "in use" by the allocator for as long as the process runs, so they won't be handed out a second time.
Edit 2: I need to improve my reading comprehension; I didn't notice that you did delete the variable. It doesn't change much, though: you never have memory leaks when you remember to delete. Memory leaks arise when you forget to call delete on heap-allocated objects (i.e. those created with new) before every pointer to that heap memory goes out of scope. My example is the simplest one I could think of.
This is a syntax-only answer. I checked this in a debugger with the following code:
// Equivalence Test 1.
*p_int = 2;
p_int[0] = 3;
*a_int = 2;
a_int[0] = 3;
Because I can access and manipulate the declarations as an array or as a pointer to variable, I think the declarations are syntactically equivalent, at least approximately.
I have to apologize
(1) I did not think to define my terms clearly. And it is very hard to talk about anything without my defining my terms. I should have realized and stated that I was thinking syntax and what a typical compiler would probably do. (2) I should have thought of checking in a debugger much earlier.
I think the previous answers are correct semantically. And of course good programming practice would declare an array when an array is the meaning, etc. for a pointer to variable.
I appreciate the attention that has been given to my question. And I hope you can be accepting of my slowness to figure out what I am trying to say and ask.
I figure that the other declarations can be checked out similarly in a debugger to see whether they are syntactically equivalent.
A compiler generating the same assembly for both would show syntactic equivalence. But if the generated assembly differs, then the assembly needs to be studied to see whether each does the same thing or not.

Object constructed at a predetermined position in memory - SEGFAULT

I have this piece of code
int main()
{
int *b = new int(8);
cout<<" &b = "<<b<<endl;
delete b;
void *place = (void*)0x3c0fa8; //in my output i am getting this value in &b
int *i = new(place) int(8);
system("PAUSE");
return 0;
}
My doubt is this: I allocated space for b and deleted it. Now, if I allocate another integer, it comes out at the same location as allocated previously. But if I forcefully put an integer value at this location (after the delete), I get a SEGFAULT.
What's wrong with this code?
Thanks.
It is not guaranteed that b will be allocated at the same memory location each time; allocators often do reuse a just-freed block, but you should not rely on this.
And for this case, place actually points to an invalid address, and accessing it causes Segfault.
Even if place points to the same location where b was allocated, after deleting b the memory is deallocated and no longer belongs to your program's live objects. Before int *i = new(place) int(8); gets executed, that memory location may have been handed out again by the allocator, or returned to the OS entirely. Hence accessing it can again cause a segfault.
Using memory that was allocated from the heap, after it has been freed (with delete) is undefined behaviour. For all we know, that cell of the heap may have been completely freed back to the OS [and thus no longer available as memory in your process] (in fact, Windows pretty much calls into the OS directly for all heap allocations, and it is possible that it frees the entire lump of memory that the heap is in).
However, it's more likely that the second new call works out, and you are simply overwriting some piece of heap memory that belongs to the heap, so when the code tries to exit (and free some stuff allocated before main), it falls over.
If you were to do it
int main()
{
int *b = new int(8);
cout<<" &b = "<<b<<endl;
// delete b;
int *i = new(b) int(8);
}
it has a good chance of working, since you are no longer using heap memory AFTER it has been freed. (Of course, you may want to change the second 8 to a 9 or something to see the difference... ;)
It looks like you are using placement new incorrectly. The placement new operator is used for constructing objects at a pre-defined location. It's meant to be used like this (this question might be useful too):
char *buffer = new char[sizeof(string)];
string *str = new (buffer) string("hello, world");//<-- this is your placement new
As you can see, the memory in which you are constructing, should be acquired by you in advance. In your example, however, you are not doing it, and trying to write into memory that you have not acquired before, which leads to segfault.
Also, as the other answer mentioned, you are by no means guaranteed to be getting the same address for b each time. Moreover, even if you get lucky, and place points to the same address as b, you still are trying to write into the memory that doesn't belong to you, since you do a delete b beforehand.

(C++) What happened to an array allocated on the stack when the function is finished?

I come from many years of development in Java, and now that I want switch to C++ I have hard times understanding the memory management system.
Let me explain the situation with a small example:
From my understanding, you can allocate space either on the stack or on the heap. The first is done by declaring a variable like this:
int a[5]
or
int size = 10;
int a[size]
On the contrary, if you want to allocate memory on the heap, then you can do it using the "new" command. For example like:
int *a = new int[10]; (notice that I haven't tried all the code, so the syntax might be wrong)
One difference between the two is that if it is allocated on the stack when the function is finished then the space is automatically deallocated, while on the other case we must explicitly deallocate it with delete().
Now, suppose I have a class like this:
class A {
const int *elements[10];
public:
void method(const int** elements) {
int subarray[10];
//do something
elements[0] = subarray;
}
};
Now, I have few questions:
in this case, subarray is allocated on the stack. Why after the function method has finished, if I look on elements[0] I still see the data of subarray? Has the compiler translated the first allocation in a heap allocation (and in this case, is this a good practice)?
if I declare subarray as "const", then the compiler does not let me assign it to elements. Why not? I thought that the const only concerns the inability to change the pointer, but nothing else.
(this is probably quite stupid) suppose I want to allocate "elements" not with a fixed 10 elements, but with a parameter that comes from the constructor. Is it still possible to allocate it in the stack, or the constructor will always allocate it in the heap?
Sorry for such questions (they might look silly to an expert C programmer), but the memory management system of C++ is VERY different from Java's, and I want to avoid leaks or slow code. Many thanks in advance!
a) in this case, subarray is allocated on the stack. Why after the function method has finished, if I look on elements[0] I still see the data of subarray? Has the compiler translated the first allocation in a heap allocation (and in this case, is this a good practice)?
It's called "undefined behavior" and anything can happen. In this case, the values that subarray held are still there, incidentally, probably because you access that memory immediately after the function returns. But your compiler could also zero-out those values before returning. Your compiler could also send fire-spewing dragons to your home. Anything can happen in "undefined behavior"-land.
b) if I declare subarray as "const", then the compiler does not let me assign it to elements. Why not? I thought that the const only concerns the inability to change the pointer, but nothing else.
This is a rather unfortunate quirk of the language. Consider
const int * p1; // 1
int const * p2; // 2
int * const p3; // 3
int * p4; // 4
int const * const p5; // 5
This is all valid C++ syntax. 1 says that we have a mutable pointer to a const int. 2 says the same as 1 (this is the quirk). 3 says that we have a const pointer to a mutable int. 4 says that we have a plain old mutable pointer to a mutable int. 5 says that we have a const pointer to a const int. The rule is roughly this: Read const from right-to-left, except for the very last const, which can either be on the right or on the left.
c) suppose I want to allocate "elements" not with a fixed 10 elements, but with a parameter that comes from the constructor. Is it still possible to allocate it in the stack, or the constructor will always allocate it in the heap?
If you need dynamic allocation, then this will usually be on the heap, but the notion of stack and heap is implementation-dependent (i.e. whatever your compiler vendor does).
Lastly, if you have a Java background, then you'll need to consider ownership of memory. For example, in your method void A::method(const int**), you point your pointers to locally created memory, while that memory goes away after the method returns. Your pointers now point to memory that nobody owns. It would be better to actually copy that memory into a new area (for example, a data member of the class A), and then let your pointers point to that piece of memory.
Also, while C++ can do pointers, it would be wise to avoid them where you can. For example, strive to use references instead of pointers when possible and appropriate, and use the std::vector class for arbitrarily sized arrays. This class will also take care of the ownership problem, as assigning a vector to another vector will actually copy all the elements from one to the other (except now with rvalue references, but forget that for the moment). Some people regard a "naked" new/delete as bad programming practice.
A) No, the compiler has not translated it and you're not venturing into undefined behavior. To try to find some parallels to a Java developer, think about your function arguments. When you do:
int a = 4;
obj.foo(a);
what happens to a when it's passed to the method foo? A copy is made, it is added to the stack frame, and then when the function returns the frame is now used for other purposes. You can think of local stack variables to be a continuation of the arguments, since they're typically treated similarly, barring calling convention. I think reading more about how the stack (the language-agnostic stack) works can illuminate further on the issue.
B) You can mark the pointer const, or you can mark the stuff it points to const.
int b = 3;
const int * const ptr = &b;
^           ^
|           |- this const marks the pointer itself const
|- this const marks the stuff ptr points to const
C) It is possible to allocate it on the stack with some compilers as a non-standard extension (C99 has variable-length arrays; standard C++ does not).
One of the major differences between Java and C/C++ is explicit Undefined Behavior (UB). The existence of UB is a major source of performance for C/C++. The difference between UB and "Not allowed" is that UB is unchecked, so anything can happen. In practice, when a C/C++ compiler compiles code that triggers UB the compiler will do whatever produces the most performant code.
Most of the time that means "no code", because you can't get any faster than that, but sometimes there are more aggressive optimizations that follow from conclusions drawn from UB, e.g. a pointer that was dereferenced cannot be NULL (because that would be UB), so a later check for NULL can always be assumed false; the compiler will rightfully decide that the check can be left out.
Since it is often also hard for the compiler to identify UB (and not required by the standard), it truly is correct that "anything can happen".
1) According to the standard it is UB to dereference a pointer to an automatic variable after you left the scope. Why does that work? Because the data still is there in the location you left it. Until the next function call overwrites it. Think of it like driving a car after you sold it.
2) There are actually two consts possible in a pointer:
int * a; // Non const pointer to non const data
int const * b; // Non const pointer to const data
int * const c = &someint; // Const pointer to non const data
int const * const d = &someint; // Const pointer to const data
The const before the * refers to the data and the const after the * refers to the pointer itself.
3) Not a stupid question. In C it is legal to allocate an array on the stack with dynamic size, but in C++ it is not. This is because in C there is no need to call constructors and destructors. This is a hard problem in C++ and was discussed for the latest C++11 standard but it was decided that it will stay the way it was: It's not part of the standard.
So why does it work sometimes? Well, it works in GCC. This is a non-standard compiler extension of GCC. I suspect that they simply share code between C and C++ and "left it in there". You can turn this off with GCC's -pedantic-errors switch, which makes it behave in a standards-conforming way.
a) You see it because the stack space for it has not yet been reclaimed. This memory is subject to being overwritten as the stack grows and shrinks. Do not do this, results are undefined!
b) subarray is an integer array, not a pointer. If it is const, you cannot assign to its elements.
c) Not a stupid question at all. You can do it with placement new into a buffer you provide. Some compilers also allow, as an extension, using a variable to dimension an array on the stack.
re a): When the function returns the data is still where you put it, on the stack. But it is undefined behavior to access it there, and that storage will be reused almost immediately. It will certainly be reused upon the next call to any function. That's inherent in the way the stack is used.
The standard does not talk about the stack or the heap; in this case your array has automatic storage, which on most modern systems means it will be on the stack. It is just plain undefined behavior to keep a pointer to an automatic object once you have exited the scope, and then access it. The draft C++ standard, in section 3.7.3 paragraph 1, says (emphasis mine):
Block-scope variables explicitly declared register or not explicitly declared static or extern have automatic storage duration. The storage for these entities lasts until the block in which they are created exits.

int* variable or int variable

I am working in C++ and have been using pointers a lot lately. I found that there are a few ways to initialize the chunks of memory that I need to use.
void functioncall(int* i)
{
*i = *i + 1;
}
int main(){
int* a = (int*)malloc(sizeof(int));
int az = 0;
functioncall(a);
functioncall(&az);
}
Notice that the first variable, int* a, is declared as a pointer and then I malloc the memory for it. But az is not a pointer; when calling the function I take the address of the variable.
So, my question is: is there a preferred way, or are there any penalties of one over the other?
int* a = (int*)malloc(sizeof(int));
This allocates memory on the heap. You have to deallocate it on your own, or you'll run into memory leaks. You deallocate it by calling free(a);. This option is most definitely slower (since the memory has to be requested and some other bookkeeping has to be done behind the scenes), but the memory remains available until you call free.
int az = 0;
This "allocates" memory on the stack, which means it gets automatically destroyed when you leave the function it is declared (unless for some really rare exceptions). You do not have to tidy up the memory. This option is faster, but you do not have control over when the object gets destroyed.
a is put on the heap, az is on the stack. With the heap, you are responsible for freeing the memory. With the stack, when it goes out of scope it is automatically freed. So the answer depends on where you want the data to be placed and whether you need it beyond the end of the scope.
PS: You should use new in C++.
In general you should avoid dynamic memory allocations (malloc, calloc, new) when it's reasonably easy: they are slower than stack allocations, but, more importantly, you must remember to free (free, delete) manually the memory obtained with dynamic allocation, otherwise you have memory leaks (as happens in your code).
I'm not sure what you're trying to do, but there is almost never a reason for allocating a single int (nor an array of int, for that matter). And there are at least two problems in your functioncall: first, it fails to check for a null pointer (if the pointer can't be null, pass by reference), and second, when called with a, it reads *i while the malloc'ed memory is still uninitialized.
Allocating small variables directly on the stack is generally faster since you don't have to do any heap operations. There's also less chance of pointer-related screwups (e.g., double frees). Finally, you're using less space. Heap overheads aside, you're still moving a pointer and an int around.
The first line (int* a = ...) declares what is called a dynamically allocated variable; it is usually used when you don't know until runtime how many variables you need, or whether you need them at all.
The second line (int az = 0) declares an automatic variable; it is used more often.
int az = 0;
functioncall(&az);
This is okay, as far as behavior is concerned.
int* a = (int*)malloc(sizeof(int));
functioncall(a);
This invokes undefined behaviour (UB) inside the function, when you do *i = *i + 1. Because malloc only allocates the memory, it does not initialize it. That means *i is still uninitialized, and reading uninitialized memory invokes UB; that explains why the expression is UB here. And UB, as you may know, is the most dangerous thing in C++, for it means anything can happen.
As for the original question, what should you prefer? The answer is: prefer an automatic variable over a pointer (whether allocated with malloc or new).
Automatic means Fast, Clean and Safe.
func(typename* p)
The pointer itself is passed by value: *p++ means *(p++), i.e. dereference p, then increment the function's local copy of the pointer. If you change this pointer inside the function, the caller's original pointer does not change.

Automatic heap cleanup during stack destruction

int* f()
{
int *p = new int[10];
return p;
}
int main()
{
int *p = f();
//using p;
return 0;
}
Is it true that, during stack destruction when a function returns its value, some compilers (common ones like VS or gcc were implied when I was told this) could try to automatically free memory pointed to by local pointers, such as p in this example? Even if it's not true, would I be able to normally delete[] the allocated memory in main? The problem seems to be that information about the exact array size is lost at that point. Also, would the answer change in the case of malloc and free?
Thank you.
Only local variables themselves are destroyed/released.
In your case p is "destroyed" (released), but what p points to is not (that would require delete[]).
Yes, you can, and should/must, use delete[] in your main. But this does not mean you should be using raw pointers in C++. You might find this e-book interesting: Link-Alf-Book
If you want to delete what a local variable points to when the function goes out of scope, use std::auto_ptr() (it only works for non-array variables, though, not the ones that require delete[]).
Also, would the answer change in case
of malloc and free?
Nope, but you should make sure that you do not mix the malloc()/free() and new/delete families. The same applies to mixing new with delete[] and new[] with delete.
No, they won't free or delete what your pointer points to. They will only release the few bytes that the pointer itself occupies. A compiler that called free or delete would, I believe, violate the language standard.
You will only be able to delete[] the memory in main if you have a pointer to it, i.e., the result from f(). You don't need to keep track of the size of the allocation; new and malloc do that for you, behind the scenes.
If you want memory cleaned up at function return, use a smart pointer such as boost::scoped_ptr or boost::scoped_array (both from the Boost collection of libraries), std::auto_ptr (in the current C++ standard, but about to be deprecated) or std::unique_ptr (in the upcoming standard).
In C, it's impossible to create a smart pointer.
Is it true that during stack destruction when function return it's value some compilers (common ones like VS or gcc were implied when I was told that) could try to automatically free memory pointed by local pointers such as p in this example?
Short Answer: No
Long Answer:
If you are using smart pointers or container (like you should be) then yes.
When the smart pointer goes out of scope the memory is released.
std::auto_ptr<int> f()
{
int *p = new int;
return std::auto_ptr<int>(p); // smart pointer created here and returned.
// p should probably have been a smart pointer to start with,
// but I'm feeling lazy this morning.
}
std::vector<int> f1()
{
// If you want to allocate an array use a std::vector (or std::array from C++0x)
return std::vector<int>(10);
}
int main()
{
std::auto_ptr<int> p = f();
std::vector<int> p1 = f1();
//using p;
return 0; // p destroyed
}
Even if it's not, would I be able to normally delete[] allocated memory in main?
It is normal to make sure all memory is correctly freed as soon as you don't need it.
Note delete [] and delete are different so be careful about using them.
Memory allocated with new must be released with delete.
Memory allocated with new [] must be released with delete [].
Memory allocated with malloc/calloc/realloc must be released with free.
The problem seems to be that information about exact array size is lost at that point.
It is the runtime system's job to remember this information. How it is stored is not specified by the standard, but usually it is close to the object that was allocated.
Also, would the answer change in case of malloc and free?
In C++ you should probably not use malloc/free. But they can be used. When they are used you should use them together to make sure that memory is not leaked.
You were misinformed - local variables are cleaned up, but the memory allocated to local pointers is not. If you weren't returning the pointer, you would have an immediate memory leak.
Don't worry about how the compiler keeps track of how many elements were allocated, it's an implementation detail that isn't addressed by the C++ standard. Just know that it works. (As long as you use the delete[] notation, which you did)
When you use new[] the compiler adds extra bookkeeping information so that it knows how many elements to delete[]. (In a similar way, when you use malloc it knows how many bytes to free. Some compiler libraries provide extensions to find out what that size is.)
I haven't heard of a compiler doing that, but it's certainly possible for a compiler to detect (in many cases) whether the allocated memory from the function isn't referenced by a pointer anymore, and then free that memory.
In your case however, the memory is not lost because you keep a pointer to it which is the return value of the function.
A very common case for memory leaks and a perfect candidate for such a feature would be this code:
int *f()
{
int *p = new int[10];
// do something that doesn't pass p to external
// functions or assign p to global data
return p;
}
int main()
{
while (1) {
f();
}
return 0;
}
As you can notice, the pointer to the allocated memory is lost and that can be detected by the compiler with absolute certainty.