I remember reading in a book on programming for computer games (sorry, I can't remember the title) that an easy way to improve performance is to do something like this near the start:
int main()
{
    {
        char* dummy = new char[10000000]; // ~10 MB
        delete[] dummy;
    }
    ...
}
The idea is that the expensive part of dynamic memory allocation is requesting the memory from the OS, which normally isn't returned until the end of the program. Has anybody used this and seen performance improvements?
Whether this works or not depends on the OS in question. A lot of modern OSes use mmap under the hood for large memory allocation and bypass the process's heap altogether. This means the allocation will be made directly from the OS and then returned directly to the OS when freed.
A much better strategy is generally to do your memory allocation at the start and reuse the allocated space as much as possible before returning it to the heap. This is the logic behind STL allocators.
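A minimal sketch of that "allocate once, reuse" idea (the class and names here are illustrative, not from any particular library): the buffer grows on demand and is reused across iterations, so the steady state performs no heap allocations at all.

```cpp
#include <cstddef>
#include <vector>

class ScratchBuffer {
public:
    // Return a buffer of at least `needed` bytes, growing only when
    // the current capacity is too small.
    char* acquire(std::size_t needed) {
        if (storage_.size() < needed)
            storage_.resize(needed);   // allocates only when we outgrow it
        return storage_.data();
    }
    std::size_t capacity() const { return storage_.size(); }
private:
    std::vector<char> storage_;
};
```

Once the buffer has reached its high-water mark, repeated calls to acquire() just hand back the same memory.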
That doesn't make sense: you allocate a huge block, then free it, and the heap takes ownership of that memory and can legally fragment it before it is used again.
That doesn't necessarily improve performance, as the current Standard says nothing about how memory is dynamically allocated, deallocated and then reallocated. However, an implementation can reuse the same memory region in the rest of the program whenever it needs to allocate memory. It's more like memory pooling.
Anything is possible; it depends entirely on the implementation. The compiler may even remove the code altogether, since it has no observable effect.
Depending on your environment, there may be loader flags that let you specify an initial heap size. This memory will be allocated when your program is loaded into memory, so it becomes part of the start-up time. This should give the same results, with the benefit that the compiler won't optimize it away.
Related
void memory()
{
    int* p = new int[10000];
}
Will this make 10000 ints' worth of memory unusable for other things, like local variables?
I don't know if it does.
The new operator allocates the requested amount of memory (if available) for the object or array you asked for. In your case, the pointer p stores the address of this allocated block of memory.
There's only a finite amount of memory available to you, so your program can only use that much. It varies from system to system, but it is finite. When you request more memory than can be allocated, the program will typically throw an exception such as std::bad_alloc to indicate the failure to allocate storage, which, if unhandled, will crash your program.
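That failure mode can be sketched like this (the function name is illustrative):

```cpp
#include <cstddef>
#include <new>

// Attempt an allocation of n bytes; report success or failure instead
// of letting std::bad_alloc propagate and crash the program.
bool try_allocate(std::size_t n)
{
    try {
        char* p = new char[n];   // throws std::bad_alloc on failure
        delete[] p;
        return true;
    } catch (const std::bad_alloc&) {
        return false;            // request could not be satisfied
    }
}
```

Whether an enormous request actually fails (and when) depends on the OS; some systems overcommit and only fault when the pages are touched.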
Does the memory allocated for the free store affect the whole
program's available memory?
YES.
The memory allocation in your program will be from heap storage (C++ calls it the free store). To answer your question: yes, the allocated memory will reduce the amount of free memory available for other variables to use. There isn't an infinite amount of memory available to your program, so you might eventually run out of space once you have used up everything available to you. However, you can free the memory once its work is done by using the delete operator, making that space available again for use.
How memory space is allocated to you is implementation-defined. C++ itself knows nothing about the heap or how memory is handed to it.
In practice: All storage uses memory (unless optimised away).
If you allocate 40000 bytes and the operating system doesn't have 40000 bytes available (plus the overhead the free store uses for managing the memory), then the operating system has to take action to free some memory, typically by terminating your process or some other process. This action may be delayed until you actually use the memory rather than happening immediately upon allocation.
None of this is specified in the C++ language, and is an example of how an implementation of C++ language might behave. Other implementations may be different.
In (portable) C++ you don't really know how much memory is there and what it can be used for.
new allocates from the free store and local variables are allocated in local storage. Then you have static storage and thread-local storage (and some other more "magic" things like the storage where exception objects are placed).
However, as far as the C++ language is concerned, all these memory areas come "magically" from the corresponding calls or syntax constructs, each with its own specific semantics. For example, you cannot (portably) know how much memory is available for each kind of storage, and you cannot know whether allocating memory of one kind lowers the amount available for another. Maybe it does, maybe it doesn't.
Consider local storage, for example: you cannot even tell whether you are running out of it, because the result of allocating too much local storage is simply undefined. Think about it... you cannot know how much local memory there is, yet you cannot try to use more than that unknown amount, because no check is done by the system and you jump straight into undefined-behaviour hell.
So for local storage the only practical advice is: don't use "too much" of it.
I have read this question and answer
dynamically allocated memory after program termination,
and I want to know if it is okay to NOT delete dynamically allocated memory and let it get freed by the OS after programme termination.
So, if I have allocated some memory for objects that I need throughout the programme, is it ok to skip deleting them at the end of the programme, in order to make the code run faster?
The short answer is yes, you can; the long answer is that you probably shouldn't: if your code ever needs to be refactored and turned into a library, you are handing a considerable amount of technical debt to the person doing that job, and that person could be you.
Furthermore, if you have a real, hard-to-find memory leak (not a memory leak caused by you intentionally not freeing long-living objects) it's going to be quite time consuming to debug it with valgrind due to a considerable amount of noise and false positives.
Have a look at std::shared_ptr and std::unique_ptr. The latter has no overhead.
Most sane OSes release all memory and local resources owned by a process upon termination (the OS may do it lazily, or just decrement a share counter, but that doesn't matter much for this question). So it is safe to skip releasing those resources.
However, it is a very bad habit, and you gain almost nothing. If you find that releasing objects takes a long time (like walking a very long list of objects), you should refine your code and choose a better algorithm.
Also, although the OS will release all local resources, there are exceptions such as shared memory and global semaphores, which you are required to release explicitly.
First of all, the number one rule in C++ is:
Avoid dynamic allocation unless you really need it!
Don't use new lightly; not even if it's safely wrapped in std::make_unique or std::make_shared. The standard way to create an instance of a type in C++ is:
T t;
In C++, you need dynamic allocation only if an object should outlive the scope in which it was originally created.
If and only if you need to dynamically allocate an object, consider using std::shared_ptr or std::unique_ptr. Those will deallocate automatically when the object is no longer needed.
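The guideline might be sketched like this (Widget is a made-up type for illustration):

```cpp
#include <memory>
#include <string>
#include <utility>

struct Widget {
    std::string name;
};

// Automatic storage: preferred when the object's lifetime matches the scope.
std::string scoped_use() {
    Widget w{"local"};          // destroyed automatically at end of scope
    return w.name;
}

// Dynamic allocation only when the object must outlive the creating scope;
// unique_ptr deletes it automatically when the last owner lets go.
std::unique_ptr<Widget> make_widget(std::string n) {
    auto w = std::make_unique<Widget>();
    w->name = std::move(n);
    return w;                   // ownership transfers to the caller
}
```

No delete appears anywhere, yet nothing leaks: the destructor runs when each owner goes away.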
Second,
is it ok to skip deleting them at the end of the programme, in order
to make the code run faster?
Absolutely not, because of the "in order to make the code run faster" part. This would be premature optimisation.
Those are the basic points.
However, you still have to consider what constitutes a "real", or bad memory leak.
Here is a bad memory leak:
#include <iostream>

int main()
{
    int count;
    std::cin >> count;

    for (int i = 0; i < count; ++i)
    {
        int* memory = new int[100]; // leaked: never deleted
    }
}
This is not bad because the memory is "lost forever"; any remotely modern operating system will clean everything up for you once the process has gone (see Kerrek SB's answer in your linked question).
It is bad because memory consumption is not constant when it could be; it will unnecessarily grow with user input.
Here is another bad memory leak:
void OnButtonClicked()
{
    std::string* s = new std::string("my"); // evil!
    label->SetText(*s + " label");
}
This piece of (imaginary and slightly contrived) code will make memory consumption grow with every button click. The longer the program runs, the more memory it will take.
Now compare this with:
int main()
{
    int* memory = new int[100];
}
In this case, memory consumption is constant; it does not depend on user input and will not become bigger the longer the program runs. While stupid for such a tiny test program, there are situations in C++ where deliberately not deallocating makes sense.
Singleton comes to mind. A very good way to implement Singleton in C++ is to create the instance dynamically and never delete it; this avoids all order-of-destruction issues (e.g. SettingsManager writing to Log in its destructor when Log was already destroyed). When the operating system clears the memory, no more code is executed and you are safe.
Chances are that you will never run into a situation where it's a good idea to avoid deallocation. But be wary of "always" and "never" rules in software engineering, especially in C++. Good memory management is much harder than matching every new with a delete.
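For illustration, a leak-on-purpose Singleton along the lines described above might look like this (a sketch; the Log class is made up):

```cpp
#include <string>

class Log {
public:
    static Log& instance() {
        // Constructed on first use, never deleted: the pointer leaks by
        // design, so no destructor runs at exit and there are no
        // order-of-destruction problems. The OS reclaims the memory.
        static Log* inst = new Log;
        return *inst;
    }
    void write(const std::string& msg) { last_ = msg; }
    const std::string& last() const { return last_; }
private:
    Log() = default;             // only instance() can create it
    std::string last_;
};
```

Memory consumption is constant (one instance, ever), so this is a deliberate non-deallocation, not a growing leak.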
This question is as in title:
Is it possible to produce a memory leak without using any kernel specific means like malloc, new, etc?
What if I will make a linked list inside a function with lot of elements in there, and after it I'll exit from this function without cleaning a list. The list will be created without using any malloc calls, i.e.
struct list_head {
struct list_head *next, *prev;
}
Can it be guaranteed that all resources will be freed after exiting from this function? So I can freely execute it a million times and nothing will be leaked?
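The scenario in the question can be sketched like this (every node has automatic storage duration, so nothing can leak):

```cpp
// Linked list built entirely on the stack: no malloc, no new.
struct list_head {
    list_head *next, *prev;
};

int count_nodes()
{
    list_head a{}, b{}, c{};     // automatic storage; pointers start null
    a.next = &b; b.prev = &a;
    b.next = &c; c.prev = &b;

    int n = 0;
    for (list_head* p = &a; p; p = p->next)
        ++n;
    return n;                    // a, b, c are reclaimed on return
}
```

Calling count_nodes() a million times touches only stack space, which is released automatically each time the function returns.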
Subject: If you're not using any particular malloc or new calls, you won't get a heap memory leak. Never. Is that right?
A leak is always connected to a resource. A resource is by definition something that you acquire manually, and that you must release manually. Memory is a prime example, but there are other resources, too (file handles, mutex locks, network connections, etc.).
A leak occurs when you acquire a resource, but subsequently lose the handle to the resource so that nobody can release it. A lesser version of a leak is a "still-reachable" kind of situation where you don't release the resource, but you still have the handle and could release it. That's mostly down to laziness, but a leak by contrast is always a programming error.
Since your code never acquires any resources, it also cannot have any leaks.
Variables you create without malloc or new are located in stack space. When the function returns, that space is automatically reclaimed.
On the other hand, memory you obtain with malloc or new is located in heap space. The system doesn't care whether you release that space or not; in this situation, if you don't use free or delete, a memory leak will happen.
Subject: If you're not using any particular malloc or new calls, you won't get a heap memory leak. Never. Is that right?
That assumption is not entirely correct. The problem is that the operating system itself (or other third party components you have to rely on) can have memory leaks as well. In that case you might not actively call malloc, but call other (operating system) functions which could leak.
So your assumption depends on how strictly you view this. You can argue that the OS/third-party implementation is outside your domain, in which case the assumption is correct. But if you have a well-defined system and your requirements include a guaranteed uptime, something like this may have to be considered as well.
So the answer to this question ...
Is it possible to make memory leak without using malloc?
... is:
Yes, it is possible.
malloc() allocates memory from the heap, while the local struct objects in your example (those list_heads) have automatic storage and live on the stack; string literals (string1, string2) are placed in static storage, laid out at compile time.
In fact, any memory a program uses (heap or stack) is reclaimed by the kernel when the process exits (on *nix systems, at least).
I would define a memory leak as allocating memory on the heap and not freeing it before your program exits. That definition effectively answers your question.
There are standard functions (like strdup) that will allocate memory on heap, beware of them.
Another example of a resource that you can allocate and forget to free:
If you're using OpenGL, and you call glGenBuffers() a million times without the corresponding glDeleteBuffers calls, it's extremely likely that you will run out of VRAM and your graphics driver will start leaking to system memory.
I just had this happen. Fortunately, Visual Studio's memory profiler made it pretty easy to find. It showed up as a large number of allocations made by the external process nvoglv32.dll, which is my NVIDIA OpenGL driver.
I know this will sound weird, but I need my app to run fast and it does a lot of new and delete. Every function calls new and passes the pointer back, except for the ones pushing a pointer onto a list or deque.
At the end of the main loop the program goes through all of that memory and deletes it (unless I forgot to delete some of it). I am not exaggerating. Is there a mode that allows my code to allocate objects with new but, on delete, doesn't actually free them, just marks them as unused so that the next new for that struct reuses the memory instead of doing a full allocation?
I imagine that would boost performance. It isn't fully done so I can't benchmark it, but I am sure I'd see a boost, and if this were automatic then great. Is there such a mode or flag I can use?
I am using gcc (linux, win) and MSVC2010(win).
Try object pooling via Boost - http://www.boost.org/doc/libs/1_44_0/libs/pool/doc/index.html
What do you mean by "end of the main loop" - after the loop finishes, or just before it repeats?
If the former, then you can safely leave memory allocated when your process exits, although it isn't recommended. The OS will recover it, probably faster than you'd do by deleting each object. Destructors won't be called (so if they do anything important other than freeing resources associated with the process, then don't do this). Debugging tools will tell you that you have memory leaks, which isn't very satisfactory, but it works on the OSes you name.
If the latter, then "marking the memory unused so that the next new will use it" is exactly what delete does (well, after destructors). Some special-purpose memory allocators are faster than general-purpose allocators, though. You could try using a memory pool allocator instead of the default new/delete, if you have a lot of objects of the same size.
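A minimal fixed-size pool of the kind mentioned might look like this (a sketch only, not a production allocator; it hands out raw, properly aligned slots, and construction/destruction is left to the caller, e.g. via placement new):

```cpp
#include <cstddef>
#include <vector>

template <typename T>
class Pool {
public:
    explicit Pool(std::size_t n) : slots_(n) {
        for (Slot& s : slots_) free_.push_back(&s);
    }
    // "new": pop a free slot -- no heap traffic in the steady state.
    T* allocate() {
        if (free_.empty()) return nullptr;   // pool exhausted
        Slot* s = free_.back();
        free_.pop_back();
        return reinterpret_cast<T*>(s->bytes);
    }
    // "delete": just push the slot back for reuse.
    void release(T* p) {
        free_.push_back(reinterpret_cast<Slot*>(p));
    }
    std::size_t available() const { return free_.size(); }
private:
    struct Slot { alignas(T) unsigned char bytes[sizeof(T)]; };
    std::vector<Slot> slots_;
    std::vector<Slot*> free_;
};
```

This is essentially what Boost.Pool does, with more care for alignment, growth and thread safety.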
"I imagine that would boost performance"
Unfortunately we can't get performance boosts just by imagining them ;-p Write the code first, measure performance, then worry about changing your allocation once you know what you're up against. "Faster" is pretty much useless if the boring, simple version of your code is already "easily fast enough". You can usually change your allocation mechanism without significant changes to the rest of your code, so you don't have to worry about it in up-front design.
What you are describing is what malloc and co. usually do: keeping memory around and reallocating it for similarly sized allocations.
I believe what you are looking for is placement new.
Use operator new to allocate raw memory only once, and then construct objects into it (requires <new>):
Type* ptr = static_cast<Type*>(operator new(sizeof(Type))); // allocate raw memory once and keep the pointer
Type* next_ptr = new (ptr) Type(); // construct in place, no allocation
Manually call the destructor instead of using delete:
next_ptr->~Type();
Since no memory allocation happens, this should definitely be fast. "How fast" I am not sure.
Using a memory pool is what you are looking to achieve.
You could also use some of the Windows heap allocation functions and, instead of freeing each individual allocation, free the entire heap all at once. Though if you are using a memory-profiling tool (like BoundsChecker) it will think that's a problem.
Do you know if there is a way to reset malloc to its initial state, as if the program were just starting?
Reason: I am developing an embedded application with the Nintendo DS devkitPro toolchain, and I would like to improve debugging support in the case of software faults. I can already catch most errors and e.g. return to the console menu, but this fails to work when catching std::bad_alloc.
I suspect that the code I use for a "soft reboot" involves malloc() itself at some point I cannot control, so I'd like to "forget everything about the running app and get a fresh start".
There is no way of doing this portably, though conceivably an embedded implementation of C++ might supply it as an extension. You should instead look at writing your own allocation system, using memory pools, or use an existing library.
The only time I did something similar, we used our own allocator, which kept a reference to every allocated block. If we wanted to roll back, we freed all the allocated blocks and did a longjmp to restart the programme.
Squirrel away a bit of memory in a global location e.g.
int* not_used = new int[1024];
Then, when you get a std::bad_alloc, delete not_used and move on to your error console. The idea is to give your crash handler just enough space to do what it needs. You'll have to tune how much memory is reserved so that your console doesn't also receive out-of-memory errors.
If you're clever, not_used could actually be used. But you'd have to be careful that whatever was using memory could be deleted without notice.
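A sketch of that reserve pattern (the size and names are illustrative):

```cpp
#include <cstddef>

// "Rainy day fund": reserve a block up front, release it when an
// allocation fails, so the error-reporting path has heap to work with.
static char* emergency_reserve = new char[16 * 1024];

// Call this at the top of the bad_alloc handler before showing the
// error console.
void release_emergency_reserve()
{
    delete[] emergency_reserve;
    emergency_reserve = nullptr;
}

bool reserve_available()
{
    return emergency_reserve != nullptr;
}
```

The reserve must be allocated before memory runs out, typically during start-up, and released exactly once.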
The only way to get a fresh start is to reload the application from storage. The DS loads everything into RAM which means that the data section is modified in place.
I suppose if nothing else is running you could zero-write the whole memory block that the API provides on the Nintendo? But otherwise just keep track of your allocates.
In fact, if you create a CatchAndRelease class to keep a reference to each and every allocated memory block, at the required time you could go back and clear those out.
Otherwise, you may need to write your own memory pool, as mentioned by Neil.
Do you ever need to free memory in anything other than last-in-first-out order? If not, I'd suggest that you define an array to use all available memory (you'll probably have to tweak the linker files to do this) and then initialize a pointer to the start of that array. Then write your own malloc() function:
char *allocation_ptr = big_array;

void *malloc(size_t n)
{
    void *temp = (void*)allocation_ptr;
    if (allocation_ptr > END_OF_ALLOCATION_AREA - n)
        return 0;
    allocation_ptr += n;
    return temp;
}

void free_all_after(void *ptr)
{
    if (ptr)
        allocation_ptr = (char*)ptr;
}
In this implementation, free_all_after() will free the indicated pointer and everything allocated after it. Note that unlike other implementations of malloc(), this one has zero overhead. The LIFO allocation is very limiting, but for many embedded systems it would be entirely adequate.
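A self-contained sketch of the same LIFO scheme (renamed here so it doesn't shadow the real malloc(); the array size is illustrative):

```cpp
#include <cstddef>

static char big_array[1024];
static char* allocation_ptr = big_array;

// Bump allocation: hand out the next n bytes, or null when exhausted.
void* bump_alloc(std::size_t n)
{
    if (n > static_cast<std::size_t>(big_array + sizeof big_array - allocation_ptr))
        return nullptr;              // out of space
    void* temp = allocation_ptr;
    allocation_ptr += n;
    return temp;
}

// Frees ptr and everything allocated after it -- strictly LIFO.
void free_all_after(void* ptr)
{
    if (ptr)
        allocation_ptr = static_cast<char*>(ptr);
}
```

Note this sketch ignores alignment; a real version would round n up to a suitable boundary.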
std::bad_alloc occurs when new fails and cannot allocate the memory requested. This will normally occur when the heap has run out of memory and therefore cannot honour the request. For this reason, you will not be able to allocate any new memory reliably in the cleanup.
This means that you may not allocate new memory for cleanup. Your only hope of cleaning up successfully is to ensure that memory for the cleanup code is pre-allocated well before you actually need it.
Objects can still be constructed in this pre-allocated cleanup memory using placement new (i.e. the form of new where you supply a memory address).