Does memory allocation in multiple threads in modern C++ compilers cause a global lock access? How much does that vary between compilers and operating systems? How much benefit is there to putting small amounts of data in a pre-allocated global array (less clean, less convenient) instead of dynamically allocating it when needed by individual threads?
All threads share a common virtual address space, so any memory allocation from the heap (malloc or new) results in an update to that shared virtual address space, visible to all threads. How this is implemented depends on the operating system as well as the compiler's runtime library.
If the allocated memory only needs function scope and isn't too large, it could be allocated with alloca() (or _alloca()), which allocates from the stack; that gives each thread, and each call of the function, its own local instance of the allocated memory, with no shared state to lock.
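A minimal sketch, assuming a POSIX-style alloca() from <alloca.h> (with MSVC the equivalent is _alloca() from <malloc.h>); the function name is made up for illustration:

#include <alloca.h>  // POSIX header; with MSVC, use _alloca() from <malloc.h>
#include <cstddef>
#include <cstdio>

void print_greeting(const char* name, std::size_t len)
{
    // buf lives on this thread's stack and disappears when the function
    // returns, so no heap bookkeeping (and no heap lock) is involved.
    char* buf = static_cast<char*>(alloca(len + 8));
    std::snprintf(buf, len + 8, "Hello, %s", name);
    std::puts(buf);
}

int main()
{
    print_greeting("world", 5);
    return 0;
}

The usual caveats apply: alloca() is not standard C++, and a large or unbounded length can overflow the stack.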
In the multi-threaded programs I've written, I've used message and/or buffer "free" pools that are allocated once at startup; the threads then "allocate" and "free" messages or buffers from those pools rather than hitting the global heap.
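A minimal sketch of that pattern (the BufferPool name and the sizes are mine, not from any library; a real pool would also need a policy for what to do when the free list runs dry):

#include <cstddef>
#include <mutex>
#include <vector>

struct Buffer { char data[4096]; };  // fixed-size message/buffer slot

class BufferPool {
public:
    explicit BufferPool(std::size_t count) : storage_(count) {
        // storage_ is never resized after this, so the pointers stay valid.
        for (Buffer& b : storage_)
            free_.push_back(&b);
    }
    Buffer* acquire() {                     // "allocate" from the pool
        std::lock_guard<std::mutex> lock(m_);
        if (free_.empty()) return nullptr;  // caller must handle exhaustion
        Buffer* b = free_.back();
        free_.pop_back();
        return b;
    }
    void release(Buffer* b) {               // "free" back to the pool
        std::lock_guard<std::mutex> lock(m_);
        free_.push_back(b);
    }
private:
    std::vector<Buffer>  storage_;  // the one real allocation, made at startup
    std::vector<Buffer*> free_;     // currently unused buffers
    std::mutex           m_;
};

int main()
{
    BufferPool pool(1024);     // one allocation at program startup
    Buffer* b = pool.acquire();
    // ... fill and process b from a worker thread ...
    pool.release(b);
    return 0;
}

The pool still has a lock, but the critical section is a single pointer push or pop, far cheaper than a general-purpose heap walk; lock-free stacks or per-thread pools can remove even that.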
I am trying to code a relatively simple program to understand in which scenarios it would be more efficient and useful to use the heap.
I first read that you should store large objects on the heap.
So I created a std::vector on the heap and filled it with an insane amount of bytes, something like 18 GB. At some point it threw a std::bad_alloc exception, and then my OS (Linux Mint) killed the process once the swap was full. But the result was the same with the stack, so I don't understand how the heap is better.
Maybe I lack creativity, but I cannot think of a design where I would absolutely need to use the heap; I can always pass a reference to my stack-allocated object and achieve the same memory-usage efficiency. For every program I have written, the stack was always faster, and memory usage was the same.
So, in what scenarios is the heap actually useful, for either memory-usage efficiency or speed?
You use the heap whenever you need more flexibility than the stack provides. If you need to allocate memory to return something from a function, it can't be on the stack, because it would be freed when the function returned. And it can't be global, because a second call to the same function would overwrite the first result.
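A minimal sketch of that situation (the Widget type and make_widget function are hypothetical names, not from any library):

#include <memory>
#include <string>
#include <utility>

struct Widget { std::string name; };  // hypothetical type for illustration

// A stack-local Widget would be destroyed when make_widget returns, and a
// global one would be overwritten by the next call, so the heap is the
// remaining option; unique_ptr just automates the eventual delete.
std::unique_ptr<Widget> make_widget(std::string name)
{
    return std::unique_ptr<Widget>(new Widget{std::move(name)});
}

int main()
{
    auto w = make_widget("gizmo");  // w owns the heap memory until it dies
    return w->name.empty() ? 1 : 0;
}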
In C++, you might not always realize that you are using the heap, because classes like std::vector and std::string use the heap internally. You don't always have to write new yourself to use the heap.
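For instance, nothing in this snippet says new, yet almost all of its memory comes from the heap:

#include <string>
#include <vector>

int main()
{
    std::vector<int> v(1000000);  // the million ints live on the heap;
                                  // only v's small handle sits on the stack
    std::string s(100000, 'x');   // the characters are heap-allocated too
    return 0;
}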
Does memory fragmentation lead to an "out of memory" exception, or can the program and system handle this issue at runtime?
Yes, it's theoretically possible for fragmentation to cause out-of-memory exceptions. Suppose you do lots of allocations of small objects that mostly fill your memory, then you delete every other object. That leaves a large total amount of free memory, but it consists entirely of very small blocks -- this is extreme fragmentation. If you then try to allocate an object bigger than any of those blocks, the allocation will fail.
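A sketch of that worst case (illustrative only: the 64-byte block size and the counts are arbitrary, and most modern allocators will still satisfy the final request by getting fresh memory from the OS, e.g. via mmap):

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main()
{
    std::vector<void*> blocks;
    // Carve out lots of small blocks...
    for (int i = 0; i < 1000000; ++i)
        blocks.push_back(std::malloc(64));
    // ...then free every other one. Roughly 32 MB is free again, but
    // only as isolated 64-byte holes between live blocks.
    for (std::size_t i = 0; i < blocks.size(); i += 2) {
        std::free(blocks[i]);
        blocks[i] = nullptr;
    }
    // On a naive allocator this 1 MB request fails: the total free space
    // is about 32 MB, but no single hole is bigger than 64 bytes.
    void* big = std::malloc(1u << 20);
    std::printf("large allocation %s\n", big ? "succeeded" : "failed");
    std::free(big);
    for (void* p : blocks)
        std::free(p);  // freeing nullptr is a harmless no-op
    return 0;
}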
The runtime system generally can't fix this up, because in most implementations the addresses stored in pointers can't be changed automatically, so allocations can't be moved around to consolidate the free space.
Good heap management implementations are designed to make this unlikely. One common technique is to use different areas of memory for different allocation sizes. Small allocations come from one area, medium allocations from another area, and large allocations from their own area. So if you get lots of fragmentation in the small area, it won't cause a problem for large allocations.
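A toy sketch of that idea, assuming made-up size thresholds (real allocators such as jemalloc use many finer-grained size classes and proper free lists; the bump-pointer arenas here never free and never check bounds):

#include <cstddef>
#include <cstdlib>

// Each size class draws from its own region, so churn among small blocks
// can never fragment the region that serves large blocks.
struct Arena {
    explicit Arena(std::size_t bytes)
        : next(static_cast<char*>(std::malloc(bytes))) {}
    char* next;
    void* take(std::size_t n) { void* p = next; next += n; return p; }
};

Arena small_arena(1 << 20);   // serves requests up to 64 bytes
Arena medium_arena(8 << 20);  // serves requests up to 4 KB
Arena large_arena(64 << 20);  // serves everything bigger

void* classified_alloc(std::size_t n)
{
    if (n <= 64)   return small_arena.take(n);
    if (n <= 4096) return medium_arena.take(n);
    return large_arena.take(n);
}

int main()
{
    void* a = classified_alloc(16);      // lands in the small arena
    void* b = classified_alloc(100000);  // lands in the large arena
    return (a && b) ? 0 : 1;
}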
Preempt: if anyone can link helpful articles explaining the stack and heap at a deep level, down to registers, that would be much appreciated.
I am new to C++ and trying to really grasp how memory management works. At this point, I understand any declaration of the form ObjClass obj; to have automatic duration within the scope it's declared in. Yet ObjClass* obj = new ObjClass(); stores the obj pointer on the stack but assigns it the address of memory on the heap. What I'm wondering is: in more complex programs, what design pattern is used to prevent stack overflow? I could see the storage on the stack quickly exceeding 1 MB. Is this achieved by making multiple smaller functions which run, use the stack, and then automatically deallocate?
Related question: as for global variables, I know they are held in "static" storage, yet I am unsure how that works in the context of the stack and heap. How is their memory allocated, and is there a small limit like the stack's? Is the heap close to the size of system RAM minus the OS-reserved memory?
I was hoping someone had some schooling they could lay down about the whole heap and stack ordeal. I am trying to make a program that would attempt to create about 20,000 instances of just one union, and some day I may want to implement a much larger program. Other than my current project consisting of a maximum of just 20,000 unions, stored wherever C++ will allocate them, do you think I could up the ante into the millions (approximately 1,360,000 or so) while retaining a reasonable return speed on function calls? And how do you think it will handle 20,000?
The heap is an area used for dynamic memory allocation.
It's usually used to allocate space for collections of variable size, and/or to allocate large amounts of memory. It is definitely not a CPU register.
Beyond that, I think there is no guarantee about what the heap physically is.
It may be RAM, it may be processor cache, it may even be HDD storage (through swap). Let the OS and hardware decide what it will be in each particular case.
Generally, when you allocate memory in C++ you have new/delete and VirtualAlloc, amongst a few other API calls; these are generally for dynamic allocation. But then we have vector and such, so what are the common uses for allocating memory?
If you don't know, at compile time, how many items you will need, your best option is dynamic allocation.
That way you can (hopefully) deal with all the input without wasting memory by reserving a humongous space with a big array:
// ...
int humongous[10000]; // I only expect 10 items, so this should be enough for creative users
// ...
If you want to deal with large amounts of memory (i.e. memory that can't be allocated on the stack), then you also need dynamic allocation.
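With std::vector, for example, the guessing goes away, because the dynamic allocation happens inside the container:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> items;  // grows on the heap as needed; no guessed limit
    int x;
    while (std::cin >> x)
        items.push_back(x);
    std::cout << "read " << items.size() << " items\n";
    return 0;
}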
As a general answer: "there may be cases where the memory needs of a program can only be determined during runtime. For example, when the memory needed depends on user input. On these cases, programs need to dynamically allocate memory, for which the C++ language integrates the operators new and delete."
source: http://www.cplusplus.com/doc/tutorial/dynamic/