What's the best way to allocate HUGE amounts of memory? [closed]

I'm allocating 10 GB of RAM for tons of objects that I will need. I want to be able to squeeze out every last byte of RAM I can before hitting a failure such as a null pointer or std::bad_alloc.
I know the allocator returns contiguous memory, so if memory is fragmented by other programs, the maximum contiguous block will be quite small (I assume), or at least smaller than the actual amount of remaining free memory.
Is it better to allocate the entire size of contiguous memory I need in one go (10 GB), or is it better to allocate smaller non-contiguous chunks and link them together?
Which one is more likely to always return all the memory I need?
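A minimal sketch of the second option (non-contiguous chunks; the 256 MB chunk size here is an arbitrary choice), which degrades gracefully by keeping whatever it obtained before the first failed allocation:

#include <cstddef>
#include <memory>
#include <new>
#include <vector>

// Grab memory in fixed-size chunks until the allocator refuses,
// keeping every chunk obtained so far.
std::vector<std::unique_ptr<std::byte[]>> grabChunks(std::size_t chunkBytes,
                                                     std::size_t maxChunks) {
    std::vector<std::unique_ptr<std::byte[]>> chunks;
    chunks.reserve(maxChunks);
    for (std::size_t i = 0; i < maxChunks; ++i) {
        try {
            chunks.emplace_back(new std::byte[chunkBytes]);
        } catch (const std::bad_alloc&) {
            break;  // out of memory: stop and keep what we have
        }
    }
    return chunks;
}

int main() {
    // Try for 10 GB as forty 256 MB pieces instead of one contiguous block.
    auto chunks = grabChunks(256ull * 1024 * 1024, 40);
}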

Related

When should you increase stack size ( Visual Studio C++ ) [closed]

When should we increase the stack size for a C++ program?
Why isn't it unlimited and are there any reasons for not increasing the stack size?
And does a program crash if the stack is full?
And also can we increase the stack size for a particular thread?
When should we increase the stack size for a C++ program?
When you have a specific program or use case that overflows the stack. Note that the ideal solution is to modify the program or algorithm to stay within a reasonable stack size, but that isn't always possible in practice (e.g. you have a program you cannot modify).
Why isn't it unlimited and are there any reasons for not increasing the stack size?
Because it is not possible within the current architecture. In the virtual memory space of a program there are multiple stacks, one per thread, so a specific, limited region must be reserved for each stack. Keep in mind that a stack cannot be fragmented and cannot move (relative to the virtual memory space).
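As for increasing the stack size for a particular thread: standard C++ offers no portable knob for this, but POSIX threads do. A minimal sketch (the 16 MB figure is an arbitrary choice):

#include <pthread.h>
#include <cstdio>

void* worker(void*) {
    // Thread body: now has room for deeper recursion or larger locals.
    return nullptr;
}

int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    // Reserve 16 MB for this thread's stack instead of the platform default.
    pthread_attr_setstacksize(&attr, 16 * 1024 * 1024);

    pthread_t tid;
    if (pthread_create(&tid, &attr, worker, nullptr) != 0) {
        std::perror("pthread_create");
        return 1;
    }
    pthread_join(tid, nullptr);
    pthread_attr_destroy(&attr);
    return 0;
}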
And does a program crash if the stack is full?
Please forgive my pedantic note: if the stack is full but you don't exceed it, there is no problem. The problem is when the program overflows the stack.
I am not sure exactly. I think there is OS-level protection against stack overflow, in which case the program crashes with a stack overflow exception. If I am wrong and there is no protection (or if there is a setting to disable it), it depends on what is in the memory you overflow into. In any case, nothing good happens.
Why is the stack size set so small by default?
OK, it's not your question, but I feel it ties in here neatly.
It's not. The OSes need to find a balance between too big of a stack and too small. Too big and you cut into the heap memory, too small and you make programs overflow it.
What can reside on the stack: call frames and local variables allocated on the stack. Call frames are very small (typically a return address plus a few saved registers) and local variables are usually pretty small too. Big objects go on the heap.
What can overflow a stack? The most likely culprit is recursion: a recursive algorithm with a large maximum recursion depth can easily overflow the stack. But every recursive algorithm can be rewritten; either there is an equivalent iterative algorithm, or you can use an explicit stack on the heap instead. That is why, in stack-allocating languages like C and C++, real-world code avoids recursive algorithms with unbounded recursion depth.
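A sketch of that heap-stack rewrite, using a made-up tree type: both functions compute the same sum, but the iterative one's depth is bounded by available heap memory rather than by stack size.

#include <stack>
#include <vector>

struct Node {
    int value;
    std::vector<Node*> children;
};

// Recursive version: depth limited by the call stack.
int sumRecursive(const Node* n) {
    if (!n) return 0;
    int total = n->value;
    for (const Node* c : n->children)
        total += sumRecursive(c);
    return total;
}

// Iterative version: the "stack" lives on the heap, so depth is
// limited only by available heap memory, not stack size.
int sumIterative(const Node* root) {
    int total = 0;
    std::stack<const Node*> pending;
    if (root) pending.push(root);
    while (!pending.empty()) {
        const Node* n = pending.top();
        pending.pop();
        total += n->value;
        for (const Node* c : n->children)
            pending.push(c);
    }
    return total;
}

int main() {
    Node leaf{1, {}};
    Node root{2, {&leaf}};
    return sumIterative(&root) - sumRecursive(&root);  // both yield 3
}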

Does memory fragmentation lead to an out-of-memory exception? [closed]

Does memory fragmentation lead to an "out of memory" exception, or can the program and system handle this issue at runtime?
Yes, it's theoretically possible for fragmentation to cause out-of-memory exceptions. Suppose you do lots of allocations of small objects that mostly fill your memory, then you delete every other object. This leaves a large total amount of free memory, but all of it in very small blocks -- extreme fragmentation. If you try to allocate an object bigger than any of these blocks, the allocation will fail.
The runtime system generally can't fix this up, because in most implementations addresses in pointers can't be changed automatically. So allocations can't be rearranged to consolidate all the free space.
Good heap management implementations are designed to make this unlikely. One common technique is to use different areas of memory for different allocation sizes. Small allocations come from one area, medium allocations from another area, and large allocations from their own area. So if you get lots of fragmentation in the small area, it won't cause a problem for large allocations.
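A toy sketch of that size-segregation idea (nothing like a production allocator, just the principle): within a pool where every block has the same size, freeing and reallocating can never fragment the region.

#include <cstddef>
#include <vector>

// A toy fixed-size pool: every block is the same size, so freeing and
// reallocating can never fragment this region.
class FixedPool {
    std::vector<std::byte> storage_;
    std::vector<void*> freeList_;
    std::size_t blockSize_;
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : storage_(blockSize * blockCount), blockSize_(blockSize) {
        freeList_.reserve(blockCount);
        for (std::size_t i = 0; i < blockCount; ++i)
            freeList_.push_back(storage_.data() + i * blockSize);
    }
    void* allocate() {
        if (freeList_.empty()) return nullptr;  // pool exhausted
        void* p = freeList_.back();
        freeList_.pop_back();
        return p;
    }
    void deallocate(void* p) { freeList_.push_back(p); }
};

int main() {
    FixedPool small(32, 1000);  // one arena for objects of up to 32 bytes
    void* a = small.allocate();
    small.deallocate(a);
}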

Is the HEAP a term for RAM, processor memory, or BOTH? And how many unions can I allocate at once? [closed]

I was hoping someone had some schooling they could lay down about the whole HEAP and stack ordeal. I am trying to make a program that would attempt to create about 20,000 instances of just one union, and some day I may want to implement a much larger program. Other than my current project consisting of a maximum of just 20,000 unions stored wherever C++ will allocate them, do you think I could up the ante into the millions while retaining a reasonable return speed on function calls, approximately 1,360,000 or so? And how do you think it will handle 20,000?
Heap is an area used for dynamic memory allocation.
It's usually used to allocate space for collections of variable size, and/or to allocate a large amount of memory. It's definitely not the CPU registers.
Beyond that, I think there is no guarantee about what the heap physically is.
It may be RAM, processor cache, or even HDD storage (via swap). Let the OS and hardware decide what it will be in a particular case.
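As for the counts: 20,000 instances of a small union is a trivial heap allocation, and around 1,360,000 is still only a few megabytes. A minimal sketch (the union's members here are made up):

#include <vector>
#include <cstdint>

// Hypothetical union, standing in for whatever the question allocates.
union Value {
    std::int64_t asInt;
    double asDouble;
};

int main() {
    // One contiguous heap block holding 20,000 unions (~160 KB here).
    std::vector<Value> values(20000);
    values[0].asInt = 42;

    // Scaling to the question's 1,360,000 is still only about 11 MB
    // for a union this size.
    std::vector<Value> many(1'360'000);
    return 0;
}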

Memory and performance in C++ Game [closed]

I'm still new to C++ but I catch on quick and also have experience in C#. Something I want to know is: what are some performance-safe actions I can take to ensure that my game runs efficiently? Also, which scenario am I more likely to run out of memory in: 2k to 3k bullet objects on the stack, or on the heap? I think the stack is generally faster, but I heard that too much causes a stack overflow. That being said, how much is too much exactly?
Sorry for the plethora of questions, I just want to make sure I don't design a game engine that relies on good PCs in order to run well.
Firstly, program your game safely and only worry about optimizations like memory layout after profiling & debugging.
That being said, I have to dispel the myth that the stack is faster than the heap. What matters is cache performance.
The stack generally is faster for small, quick accesses, because the stack is usually already in the cache. But when you are iterating over thousands of bullet objects on the heap, as long as you store them contiguously (e.g. std::vector, not std::list), everything will be streamed through the cache, and there should be no performance difference.
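A sketch of that contiguous layout, with a made-up Bullet type: 3,000 bullets like this are only about 48 KB, far from any stack or heap limit.

#include <vector>

// Hypothetical bullet type: small and trivially copyable.
struct Bullet {
    float x, y;
    float vx, vy;
};

void update(std::vector<Bullet>& bullets, float dt) {
    // Contiguous storage: the CPU prefetcher streams these through the
    // cache, so a few thousand bullets update very quickly.
    for (Bullet& b : bullets) {
        b.x += b.vx * dt;
        b.y += b.vy * dt;
    }
}

int main() {
    std::vector<Bullet> bullets(3000, Bullet{0, 0, 1, 1});
    update(bullets, 0.016f);
}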

When to allocate memory in C++? [closed]

Generally, when do you allocate memory in C++? You have new/delete and VirtualAlloc, amongst a few other API calls, which are generally for dynamic allocation; but then we have vector and such, so what are the common uses for allocating memory?
If you don't know, at compilation time, how many items you will need, your best option is to use dynamic allocation.
That way you can (hopefully) deal with all the input without wasting memory by reserving a humongous space with a big array.
// ...
int humongous[10000]; // I only expect 10 items, so this should be enough for creative users
// ...
If you want to deal with large memory (i.e. memory that can't be allocated on the stack), then you can use dynamic allocation.
As a general answer: "there may be cases where the memory needs of a program can only be determined during runtime. For example, when the memory needed depends on user input. On these cases, programs need to dynamically allocate memory, for which the C++ language integrates the operators new and delete."
source: http://www.cplusplus.com/doc/tutorial/dynamic/
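A minimal sketch of that runtime-sized case, using std::vector so the new[]/delete[] is handled automatically:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::size_t n = 0;
    std::cout << "How many items? ";
    std::cin >> n;

    // The size is only known at runtime, so the storage must be
    // allocated dynamically; std::vector manages the heap block for us.
    std::vector<int> items(n);

    for (std::size_t i = 0; i < n; ++i)
        items[i] = static_cast<int>(i);

    std::cout << "Allocated " << items.size() << " items\n";
}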