Memory and performance in a C++ game [closed]

I'm still new to C++, but I catch on quick and I also have experience in C#. What are some performance-safe practices I can follow to make sure my game runs efficiently? Also, in which scenario am I more likely to run out of memory: 2,000 to 3,000 bullet objects on the stack, or on the heap? I think the stack is generally faster, but I've heard that putting too much on it causes a stack overflow. That being said, how much is too much, exactly?
Sorry for the plethora of questions; I just want to make sure I don't design a game engine that only runs well on powerful PCs.

Firstly, program your game safely and only worry about optimizations like memory layout after profiling & debugging.
That being said, I have to dispel the myth that the stack is faster than the heap. What matters is cache performance.
The stack is generally faster for small, quick accesses because it is usually already in the cache. But when you iterate over thousands of bullet objects on the heap, as long as you store them contiguously (e.g. in a std::vector, not a std::list), everything gets loaded into the cache as you go, and there should be no performance difference.
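To make that concrete, here is a minimal sketch (the Bullet type and its fields are made up for illustration, not taken from the question) of storing a few thousand bullets contiguously in a std::vector and updating them in one linear pass:

    #include <vector>

    // Hypothetical Bullet type; the exact fields don't matter, the storage layout does.
    struct Bullet {
        float x = 0.0f, y = 0.0f;
        float vx = 0.0f, vy = 0.0f;
        bool  alive = true;
    };

    int main() {
        // ~3000 bullets in one contiguous heap allocation (std::vector's backing
        // array). Iterating touches memory in order, so the prefetcher and cache
        // work in your favour; a std::list would scatter nodes across the heap
        // and pay a cache miss per bullet.
        std::vector<Bullet> bullets(3000);

        const float dt = 1.0f / 60.0f;
        for (Bullet& b : bullets) {
            b.x += b.vx * dt;
            b.y += b.vy * dt;
        }
    }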


C++: Use case / real-world application of using heap memory [closed]

I am trying to code a relatively simple program to understand in which scenarios it would be more efficient and useful to use the heap.
I first read that you are better off storing large objects on the heap.
So I created a std::vector on the heap and filled it with an insane amount of data, something like 18 GB. At some point it threw a std::bad_alloc exception, and then my OS (Linux Mint) killed the process once the swap was full. But the result was the same with the stack, so I don't understand how the heap is better.
Maybe I lack creativity, but I cannot think of a design where I would absolutely need the heap: I can always pass a reference to my stack-allocated object and achieve the same memory efficiency. For every program I have written, the stack was always faster, and memory usage was the same.
So, in what scenario is it useful to use the heap, in terms of memory efficiency or speed?
You use the heap whenever you need more flexibility than the stack provides. If you need to allocate memory to return something from a function, then it can't be on the stack, because it would be freed when the function returned. And it can't be global, because you might call the same function more than once.
In C++, you might not always realize that you are using the heap, because classes like std::vector and std::string use the heap internally. You don't always have to write new yourself to use the heap.
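As a small illustration (my own sketch, not from the question): a function that builds data whose size is only known at run time and returns it to its caller has to put that data on the heap, which std::vector and std::string do for you:

    #include <string>
    #include <vector>

    // The vector and string objects themselves are small and live on the stack,
    // but the element array and the character data are heap allocations, which
    // is exactly why they can outlive this function's stack frame.
    std::vector<std::string> make_lines(int n) {
        std::vector<std::string> lines;
        lines.reserve(n);
        for (int i = 0; i < n; ++i)
            lines.push_back("line " + std::to_string(i));
        return lines;  // the heap buffers are moved out to the caller
    }

    int main() {
        std::vector<std::string> lines = make_lines(5);
        return static_cast<int>(lines.size());
    }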

Profiling and std::vector part of Hot Path? [closed]

I'm trying to find the source of performance issues in my application. Using the Visual Studio 2017 profiling tools, the hottest entry in my results is std::vector<bool,std::allocator<bool> >::operator[].
I'm relatively new to C++, so I'm not sure what that operator[] stuff is, or whether it is really the bottleneck in my program. Any help is appreciated.
Here is my code:
https://github.com/k-vekos/GameOfLife/tree/multithread
In a Game of Life, what you do is read state to make decisions, so sure, that accounts for most of the time.
Your access pattern is nearly random in virtual address space because you use a std::vector of std::vector. A single contiguous buffer, with a vector of spans into it, would improve memory locality significantly.
If you keep a 0 or 1 in those cells, doing += instead of a branch might help.
Also, vector<bool> is packed bits, which makes element access slower. A vector of single bytes could be faster with your simple algorithm.
Note that fancy Game of Life implementations do zone-based hashing to skip whole generations in large areas.
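A rough sketch of what that could look like (this is not the asker's code from the linked repo; the names and bounds handling are my own, just to show a single contiguous buffer, byte-sized cells, and the branchless += neighbour sum):

    #include <cstdint>
    #include <vector>

    // One flat buffer indexed as y * width + x instead of vector<vector<bool>>.
    struct Grid {
        int width, height;
        std::vector<std::uint8_t> cells;  // 0 = dead, 1 = alive

        Grid(int w, int h) : width(w), height(h), cells(w * h, 0) {}

        std::uint8_t at(int x, int y) const { return cells[y * width + x]; }

        int neighbours(int x, int y) const {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < width && ny >= 0 && ny < height)
                        sum += at(nx, ny);  // += the cell value, no "is it alive?" branch
                }
            return sum;
        }
    };

    // Writes the next generation of src into dst (same dimensions assumed).
    void step(const Grid& src, Grid& dst) {
        for (int y = 0; y < src.height; ++y)
            for (int x = 0; x < src.width; ++x) {
                int n = src.neighbours(x, y);
                dst.cells[y * src.width + x] =
                    (n == 3) || (src.at(x, y) && n == 2);
            }
    }

    int main() {
        Grid a(256, 256), b(256, 256);
        a.cells[10 * a.width + 10] = 1;  // seed a cell
        step(a, b);                      // one generation into b
    }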

Does memory fragmentation lead to an out-of-memory exception? [closed]

Does memory fragmentation lead to an "out of memory" exception, or can the program and the system handle this issue at runtime?
Yes, it's theoretically possible for fragmentation to cause out-of-memory exceptions. Suppose you do lots of allocations of small objects that mostly fill your memory, and then you delete every other object. This leaves a large total amount of free memory, but it is all in very small blocks -- that is extreme fragmentation. If you then try to allocate an object bigger than any of those blocks, the allocation will fail.
The runtime system generally can't fix this up, because in most implementations addresses in pointers can't be changed automatically. So allocations can't be rearranged to consolidate all the free space.
Good heap management implementations are designed to make this unlikely. One common technique is to use different areas of memory for different allocation sizes. Small allocations come from one area, medium allocations from another area, and large allocations from their own area. So if you get lots of fragmentation in the small area, it won't cause a problem for large allocations.
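A toy sketch of the scenario described above (on a modern OS and allocator this won't actually run out of memory, since the allocator can simply request more address space; it only shows the allocation pattern that produces fragmentation):

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    int main() {
        // Many small allocations that (conceptually) fill the heap.
        std::vector<void*> blocks;
        for (int i = 0; i < 1000; ++i)
            blocks.push_back(std::malloc(64));

        // Free every other block: lots of total free memory, but it is
        // scattered in 64-byte holes -- extreme fragmentation.
        for (std::size_t i = 0; i < blocks.size(); i += 2) {
            std::free(blocks[i]);
            blocks[i] = nullptr;
        }

        // A request larger than any single hole cannot be satisfied from the
        // freed space; a simple allocator would have to grow the heap or fail,
        // even though plenty of memory is "free" in total.
        void* big = std::malloc(1024 * 1024);

        std::free(big);
        for (void* p : blocks)
            std::free(p);  // free(nullptr) is a no-op
    }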

Is the HEAP a term for RAM, processor memory, or BOTH? And how many unions can I allocate at once? [closed]

I was hoping someone could lay down some schooling about the whole heap and stack ordeal. I am trying to write a program that creates about 20,000 instances of a single union, and some day I may want to implement a much larger program. Beyond my current project's maximum of 20,000 unions, stored wherever C++ chooses to allocate them, do you think I could up the ante into the millions (approximately 1,360,000 or so) while retaining a reasonable speed on function calls? And how do you think it will handle 20,000?
The heap is an area used for dynamic memory allocation.
It's usually used to allocate space for collections whose size varies, and/or to allocate a large amount of memory. It is definitely not the CPU registers.
Beyond that, I don't think there is any guarantee about what the heap physically is.
It may be RAM, it may be sitting in the processor cache, or it may even be paged out to HDD storage (swap). Let the OS and hardware decide what it is in each particular case.
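For scale, a rough sketch (my own; the original post doesn't show the actual union, so this one is hypothetical and 8 bytes wide): 20,000 instances is on the order of 160 KB, and even ~1,360,000 instances is only around 11 MB, both trivial for an ordinary heap:

    #include <cstdint>
    #include <vector>

    // Hypothetical 8-byte union; substitute whatever your real union holds.
    union Value {
        std::int64_t i;
        double       d;
    };

    int main() {
        // One contiguous heap allocation: 20,000 * 8 bytes = ~160 KB.
        // Scaling to ~1,360,000 elements would still be only ~11 MB.
        std::vector<Value> values(20000);
        for (std::size_t k = 0; k < values.size(); ++k)
            values[k].i = static_cast<std::int64_t>(k);
        return 0;
    }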

Considering the Chaos Monkey when Designing and Architecting Embedded Systems [closed]

I work on embedded systems with limited memory and throughput. Making a system more robust requires more memory and more processor time. I like the idea of the Chaos Monkey for figuring out whether your system is fault tolerant, but with limited resources I'm not sure how feasible it is to just keep adding code. Are there certain design considerations, whether in the architecture or otherwise, that would improve the fault-handling capabilities without necessarily adding "more code" to a system?
One technique I have seen for preventing an if statement in C (or C++) from accidentally assigning instead of comparing against a constant value is to write the constant on the left-hand side of the comparison. That way, if you accidentally write an assignment (say, to the number 5), the compiler will complain and you're likely to find the issue right away; see the sketch below.
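A minimal sketch of that idiom (read_sensor() is a made-up stand-in for whatever produces the value being tested):

    // Put the constant on the left so an accidental '=' instead of '=='
    // becomes a compile error rather than a silently-true condition.
    static int read_sensor() { return 5; }

    int main() {
        int state = read_sensor();

        // if (state = 5) { }   // typo: assigns 5, condition is always true (at best a warning)
        // if (5 = state) { }   // same typo written constant-first: hard compile error
        if (5 == state) {       // intended comparison
            return 1;
        }
        return 0;
    }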
Are there Architectural or Design decisions that can be made early on that prevent possible redundancy/reliability issues in a similar way?
Yes, many other techniques can be used. You'd do well to purchase and read "Code Complete".
Code Complete on Amazon