In C (and likewise in C++), it is possible to allocate arrays, structures, and objects that are strictly local to a function (or method) in the stack frame allocated for that function.
In Java, however, all objects are allocated on the heap, even objects that are completely local to a method and never escape it.
In some cases, being able to allocate such objects on the stack rather than the heap would provide great efficiency gains.
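For concreteness, here is a minimal C++ sketch (added purely for illustration) of the kind of function-local, stack-allocated data meant here:

```cpp
// Both the array and the struct below live in f()'s stack frame and are
// released automatically when f() returns; no heap allocation is involved.
struct Point { double x, y; };

double f() {
    double samples[256];        // automatic array, allocated on the stack
    Point p{1.0, 2.0};          // automatic object, allocated on the stack
    samples[0] = p.x + p.y;
    return samples[0];
}

int main() { return f() > 0 ? 0 : 1; }
```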
Does RoboVM code generation support this, or could it support this in the future?
Regards
RoboVM does not support stack allocations. Some VMs do escape analysis to determine whether a particular allocation is local to a method and can therefore be done on the stack instead of the heap. We might add that to RoboVM in the future as an optimization, though it wouldn't be directly user-controllable. I know that IBM is experimenting with something similar to .NET's structs: http://www.slideshare.net/mmitran/ibm-java-packed-objects-mmit-20121120. If that is ever accepted as a standard we will try to implement it in RoboVM.
Related
Hello, I heard that in C++ stack memory is used for "normal" variables. How do I make the stack full? I tried to use a ton of arrays but it didn't help. How big is the stack and where is it located?
The C++ language doesn't specify such a thing as a "stack". It is an implementation detail, and as such it doesn't make sense to deliberate about it unless we are discussing a particular implementation of C++.
But yes, in a typical C++ implementation, automatic variables are stored on the execution stack.
How do I make the stack full?
Step 1: Use a language implementation that has limited stack size. This is quite common.
Step 2: Create an automatic variable that exceeds the limit. Or nest too many non-tail-recursive function calls. If you're lucky, the program may crash.
You wouldn't want the stack to be exhausted in production use.
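For example (a deliberately broken sketch, added for illustration; expect a crash or other undefined behaviour, depending on compiler and settings):

```cpp
#include <cstdio>

// Each call adds a ~16 KiB frame; since the recursion is not a tail call,
// a typical 1-8 MiB stack is exhausted long before it bottoms out.
long deep(long n) {
    volatile char frame[16 * 1024];
    frame[0] = static_cast<char>(n);
    if (n == 0) return frame[0];
    return deep(n - 1) + frame[0];
}

int main() {
    // Alternative: a single automatic variable larger than the whole stack,
    // e.g. char huge[64 * 1024 * 1024]; touching it would overflow at once.
    std::printf("%ld\n", deep(1'000'000));
}
```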
How big is the stack
Depends on the language implementation. It may even be configurable. The default is one to a few megabytes on common desktop/server systems, and less on embedded systems.
and where is it located?
Somewhere in memory where the language implementation has chosen.
The most important thing to take away from this is that the memory available for automatic variables is typically limited. As such:
Don't use large automatic variables.
Don't use recursion when asymptotic growth of depth is linear or worse.
Don't let user input affect the amount or size of automatic variables or depth of recursion without constraint.
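As an example of the last point, data whose size comes from user input belongs on the heap, behind a sanity check, rather than in an automatic variable (a sketch; the cap is arbitrary):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::size_t n = 0;
    if (!(std::cin >> n)) return 1;

    // Reject absurd sizes instead of letting input dictate memory use freely.
    if (n > 100'000'000) return 1;          // arbitrary cap for this sketch

    std::vector<double> data(n);            // heap storage, not a stack array
    std::cout << "allocated " << data.size() << " elements\n";
}
```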
Hello, I heard that in C++ stack memory is used for "normal" variables.
Local (automatic) variables declared in a function or in main are given memory mostly on the stack (or kept in registers) and are deallocated when their scope ends.
How do I make the stack full? I tried to use a ton of arrays but it didn't help.
Using a ton of arrays, making many recursive calls, and passing by value large structs that contain a ton of arrays are all ways to do it. Another way is to reduce the stack size at link time: -Wl,--stack,number (with gcc, where the linker supports the --stack option, e.g. MinGW/Windows targets).
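A sketch of the "large struct passed by value, recursively" combination (illustrative only; expect a crash at some depth):

```cpp
struct Big {
    char data[32 * 1024];       // 32 KiB payload
};

// Passing Big by value copies all 32 KiB into every call level.
int sum(Big b, int depth) {
    b.data[0] = static_cast<char>(depth);
    if (depth == 0) return b.data[0];
    return sum(b, depth - 1) + 1;   // not a tail call, so frames accumulate
}

int main() {
    Big b{};
    return sum(b, 100000);          // thousands of 32 KiB copies: likely overflow
}
```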
How big is the stack and where is it located?
It depends on the platform, the operating system, and so on. The standard does not dictate any stack size. Its location is determined by the OS before the program starts; the OS allocates memory for the stack from virtual memory.
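On a POSIX system (an assumption; the question doesn't name one), both the size limit and the rough location can be inspected at run time:

```cpp
#include <cstdio>
#include <sys/resource.h>

int main() {
    // Ask the OS for this process's current stack size limit.
    rlimit rl{};
    if (getrlimit(RLIMIT_STACK, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
        std::printf("stack soft limit: %llu KiB\n",
                    static_cast<unsigned long long>(rl.rlim_cur) / 1024);

    // The address of an automatic variable shows roughly where the OS placed
    // the stack in this process's virtual address space.
    int local = 0;
    std::printf("an automatic variable lives near %p\n",
                static_cast<void*>(&local));
}
```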
At what struct size should I consider allocating on the heap / free store using the new keyword (or any other method of dynamic allocation) instead of on the stack?
10 bytes? 20 bytes? 200 bytes? 2KB? 2MB? Never?
Even if I wanted to pass it around by pointer, I could still take the address of the stack variable. I understand that a stack variable disappears at the end of its scope while dynamically allocated memory does not. I can deal with that either way, but so far I've not found any guidance on when to allocate dynamically. Sure, avoid stack overflow by not putting too much on the stack... but how much is too much?
Any guidance would be appreciated.
To actually answer the question, you'll need to know:
How big the stack is. This is often configurable at compile time, but may be capped by the target platform.
What is on the stack already. This can be determined either statically, from a deterministic call graph, or actively at run time, based on the current value of the stack pointer.
Without all of the above, any passive decision is a gamble. That also means it is a gamble by default: in most cases we have to trust the compiler developers to understand how much stack space a "typical" program needs, and to hope that our idea of a "typical" program aligns with theirs.
In the long term, just like with any optimization problem, put your bets on measuring overall performance and on testing the edge cases that might cause a stack overflow.
(Note: I probably should have searched before answering; this question is essentially a duplicate of "How much stack usage is too much?". Nevertheless, here is my opinion on it.)
If you intend to keep a large buffer around for an extended period of time, then you should allocate it on the heap.
If you are in a recursive function, then allocating large buffers on the stack can quickly lead to problems.
Personally, I would keep buffers below ~4KiB on the stack and allocate larger buffers on the heap, unless you have a good overview of your program, and more specifically, how and where your functions are called.
That being said, if you constantly create and destroy buffers, consider putting them on the stack.
(If you are working on an embedded system, then that's a very different story.)
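A sketch of that rule of thumb (the 4 KiB cut-off is just the number suggested above, not a universal constant):

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

long checksum(const unsigned char* src, std::size_t n) {
    constexpr std::size_t kStackLimit = 4 * 1024;     // ~4 KiB guideline

    if (n <= kStackLimit) {
        unsigned char buf[kStackLimit];               // small: automatic storage
        std::copy(src, src + n, buf);
        return std::accumulate(buf, buf + n, 0L);
    }
    std::vector<unsigned char> buf(src, src + n);     // large: heap storage
    return std::accumulate(buf.begin(), buf.end(), 0L);
}

int main() {
    unsigned char sample[100] = {1, 2, 3};
    return static_cast<int>(checksum(sample, sizeof sample));
}
```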
If and when does C++ trigger the allocation of a secondary heap, and are there any reasons someone would want to allocate more than one heap? Do any standard actions in C++, like creating a new namespace, trigger this? And how are multiple objects with the same name handled in memory?
According to the Dartmouth_edu article in my comment above, there are quite a few times a program may make use of multiple heaps.
"IBM C and C++ Compilers lets you create and use your own pools of memory, called heaps. "
A good example: if you think an object may corrupt the heap, isolate it in its own heap.
If you allocate a whole heap for a multipart object, you can just destroy the heap instead of having to free the memory of every component (see the sketch below).
If you want to do fancy stuff like multithreading, you can also speed things up by allowing one thread to free memory from its heap while another is using its own separate heap.
Normal user actions do not create new heaps; a new heap only appears when you explicitly create one.
Namespaces are handled through scoping and pointers, not through separate heaps. #parktomatomi thanks for the help.
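For what "explicitly creating a new heap" can look like in portable C++, here is a C++17 sketch using the polymorphic memory resources library; the arena releases everything in one shot, which is the "destroy the heap instead of freeing every component" idea mentioned above:

```cpp
#include <memory_resource>
#include <string>
#include <vector>

int main() {
    // A programmer-managed "heap": an arena that hands out memory from large
    // chunks and never frees individual blocks.
    std::pmr::monotonic_buffer_resource arena;

    {
        // Every part of a multipart object draws from the same arena.
        std::pmr::vector<std::pmr::string> parts(&arena);
        parts.emplace_back("header");
        parts.emplace_back("a payload long enough to force a real allocation");
    }   // element destructors run here; per-element deallocation is a no-op

    // When the arena itself is destroyed at the end of main, all of its
    // memory is returned to the upstream resource at once.
}
```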
C++ does not dictate the use of a heap, much less the use of a secondary heap. That is an implementation detail that is left up to each compiler to determine. As far as the language is concerned, variables can have dynamic storage duration, but the standard does not say how this is achieved.
In practice, all the compilers I know of do use heap memory for dynamic allocations. In theory, each allocation method (new vs. malloc) could have its own heap, but there is little reason to complicate memory management by introducing more heaps than necessary. Plus, you shouldn't mix allocation methods. The benefits of multiple heaps tend to depend on manual fine-tuning that is currently beyond the ken of compilers. (A programmer can implement multiple heaps, but that is not the same as "triggering" multiple heaps.)
Namespaces and object names are an unrelated subject, as those do not exist in an executable (unless retained as notes for a debugger).
I'm writing special software for Windows that needs to use two simultaneous heaps for a number of reasons.
I have read this article https://msdn.microsoft.com/en-us/library/ms810603 and it describes how new heaps can be created and used.
There is one heap created for each process, called the default heap, which is the one we all use in C/C++ when we call new or malloc.
The question I have is: is it possible to set a new heap as the default heap, so that all function calls that allocate memory use this heap instead of the original one, and then switch back to the original when I need to?
I know it looks tricky but I need to deal with this to avoid some heap corruption during hardware interrupts.
Thanks in advance,
Martin
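One possible shape for this, offered only as an untested sketch (the names set_current_heap and g_current_heap are made up here): replace the global operator new/operator delete and dispatch to whichever Win32 heap is currently selected, remembering the owning heap in a small header so every block is freed back to the heap it came from.

```cpp
#include <windows.h>
#include <cstddef>
#include <new>

static HANDLE g_default_heap = GetProcessHeap();
static thread_local HANDLE g_current_heap = nullptr;   // per-thread selection

// Switch the heap used by subsequent operator new calls on this thread;
// pass nullptr to go back to the process default heap.
void set_current_heap(HANDLE heap) { g_current_heap = heap; }

namespace {
    // Room for the owning HANDLE while preserving HeapAlloc's alignment.
    constexpr std::size_t kHeader = 16;
}

void* operator new(std::size_t size)
{
    HANDLE heap = g_current_heap ? g_current_heap : g_default_heap;
    void* raw = HeapAlloc(heap, 0, size + kHeader);
    if (!raw) throw std::bad_alloc();
    *static_cast<HANDLE*>(raw) = heap;                  // remember the owner
    return static_cast<char*>(raw) + kHeader;
}

void operator delete(void* p) noexcept
{
    if (!p) return;
    void* raw = static_cast<char*>(p) - kHeader;
    HeapFree(*static_cast<HANDLE*>(raw), 0, raw);       // free to the owner
}

int main()
{
    HANDLE scratch = HeapCreate(0, 0, 0);    // a second, growable heap
    set_current_heap(scratch);
    int* p = new int(42);                    // allocated from the scratch heap
    delete p;                                // returned to the scratch heap
    set_current_heap(nullptr);               // back to the default heap
    HeapDestroy(scratch);
}
```

Note that this only redirects C++ new/delete: allocations made through malloc or inside other DLLs still use whatever heap they always used, and whether any of this actually helps with corruption during hardware interrupts is a separate question.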
My environment is gcc, C++, Linux.
When my application does some data calculation, it may need a "large" (maybe a few MB) amount of memory to store data, calculation results, and other things. I have some code that uses new and delete for this. Since there is no ownership outside a certain function scope, I think all of this memory could be allocated on the stack.
The problem is that the default stack size (8192 KB on my system) may not be enough. I may need to change the stack size for these stack allocations. Moreover, if the calculation needs more data in the future, I may need to extend the stack size again.
So is extending the stack size an option? Since it cannot be set for specific functions, how will it impact the whole app? Is it REALLY an improvement to allocate data on the stack instead of on the heap?
You bring up a controversial question that does not have a direct answer. There are pros and cons on each side. In particular:
Memory on the heap is easier to control: you can check the return value of the allocation or let it throw an exception. When the stack overflows, your thread is simply killed, with a good chance that the debugger will not show anything meaningful.
On the other hand, stack allocations happen automatically, and you do not need to do anything special. I always favor this simplicity.
There is nothing fundamentally wrong with allocating large amounts of data on the stack. At the end of the day, memory is memory. This means that the total amount of required memory is what really matters; where this memory is allocated is less important. As long as there is enough memory for your application to work, it makes little difference where that memory lives. It could even be static, for example.
Different systems have different allocation rules, so the final decision may depend on the actual system.
While it's true that stack allocations are more efficient (which is more apparent in multi-threaded programs), if the usage pattern in your case is "allocate a big chunk of memory, process the data, deallocate it", then there won't be much of an improvement.
Instead, rewrite the code to use RAII, e.g. std::vector or std::unique_ptr, so there are no explicit, bug-prone deletes.
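A sketch of that suggestion for the scenario in the question (the function name and shapes are made up): the multi-MB intermediates live on the heap, are sized at run time, and are released automatically when they go out of scope.

```cpp
#include <cstddef>
#include <vector>

std::vector<double> calculate(const std::vector<double>& input) {
    // A few MB of intermediate results: heap-allocated via std::vector,
    // no explicit new/delete, freed automatically on every exit path.
    std::vector<double> results(input.size());
    for (std::size_t i = 0; i < input.size(); ++i)
        results[i] = input[i] * input[i];
    return results;
}

int main() {
    std::vector<double> input(1'000'000, 1.5);   // ~8 MB of input data
    return calculate(input).size() > 0 ? 0 : 1;
}
```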
If you use Linux, you can change the stack size with the ulimit command (ulimit -s). However, I think memory allocated from the heap would also work well for you.
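Another option that is sometimes used instead of raising the process-wide limit (my addition, not part of the answer above) is to run just the calculation on a thread whose stack size you pick explicitly:

```cpp
#include <pthread.h>      // compile with -pthread
#include <cstdio>

void* calculate(void*) {
    // 32 MiB of automatic storage: fits in the 64 MiB stack chosen below,
    // but would overflow a default 8 MiB main-thread stack.
    char workspace[32 * 1024 * 1024];
    workspace[0] = 1;
    std::printf("calculation ran: %d\n", workspace[0]);
    return nullptr;
}

int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024 * 1024);   // 64 MiB stack

    pthread_t tid;
    if (pthread_create(&tid, &attr, calculate, nullptr) != 0) return 1;
    pthread_join(tid, nullptr);
    pthread_attr_destroy(&attr);
}
```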