C++ - Sorting a Big List (RAM USAGE HIGH!) [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 8 years ago.
I'm writing this because I've noticed that when I need to sort a list of n elements, my RAM usage keeps growing, even though all the elements are already allocated and the only operations required are swapping and moving elements.
The problem is not the speed of my algorithm, but the fact that on every new cycle a lot of RAM gets allocated, and I don't understand why. Could you please help me?
Thanks!

Write a test with 10 elements in the sequence
Run it under valgrind --tool=massif
...
Profit
There are tons of sorting algorithms and container implementations around. Many (if not most) container implementations allocate or deallocate memory on every insert/erase operation, so if dynamic allocation is a problem you really need to go all the way down to the finest details and pick the right combination.
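To make the difference concrete, here is a minimal sketch (the container choices and element count are assumptions for illustration, not taken from the question): sorting a pre-allocated std::vector in place does not allocate per element, while filling a node-based container such as std::multiset allocates on every insert, which is exactly the kind of growth massif will show.

    #include <algorithm>
    #include <cstddef>
    #include <cstdlib>
    #include <set>
    #include <vector>

    int main() {
        const std::size_t n = 1000000;

        // Case 1: one allocation up front, then an in-place sort.
        // std::sort only swaps/moves elements inside the existing buffer,
        // so heap usage stays flat while it runs.
        std::vector<int> v;
        v.reserve(n);
        for (std::size_t i = 0; i < n; ++i)
            v.push_back(std::rand());
        std::sort(v.begin(), v.end());

        // Case 2: a node-based container allocates one node per insert,
        // which shows up as steadily growing heap usage.
        std::multiset<int> s;
        for (std::size_t i = 0; i < n; ++i)
            s.insert(std::rand());

        return 0;
    }

Running both halves under valgrind --tool=massif should show one large plateau for the vector and a steady climb for the multiset.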

Related

TLE while using hashmap but not while using map[256]={} [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 1 year ago.
I am working on this problem, and when I try to solve it using a hashmap like unordered_map<char,int> m I get TLE (Time Limit Exceeded), but whenever I use a plain array like int map[256] = {0}, my code runs fine. Why is this happening? Doesn't unordered_map work the same way as an array, with O(1) access time complexity?
unordered_map is asymptotically O(1), but in real time it is often much slower than an array.
There are several reasons for this. The hash function takes time to compute, which an array does not need, and hash collisions have to be handled, which can be comparatively expensive. An array is contiguous memory, which greatly reduces cache misses.
The main takeaway is that asymptotic O notation is an approximation of performance: it only describes behavior for large inputs and ignores constant factors, so one O(1) operation can be 100 times faster than another O(1) operation.
The other takeaway is that you should prefer arrays or vectors over other data structures unless you have a good reason to do otherwise.
A hash map is the right choice when your key space is too big for an array, for example unordered_map<long long, int>.
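As a rough illustration (the character-counting workload below is invented, not the asker's actual problem), this sketch times both approaches on the same input; on typical hardware the plain array usually ends up several times faster even though both loops are O(n).

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <unordered_map>

    int main() {
        // Build a large test input (workload invented for illustration).
        std::string s(10000000, 'a');
        for (std::size_t i = 0; i < s.size(); ++i)
            s[i] = static_cast<char>('a' + i % 26);

        using clock = std::chrono::steady_clock;

        // Plain array: one index computation and one increment per character.
        auto t0 = clock::now();
        int count[256] = {0};
        for (unsigned char c : s)
            ++count[c];
        auto t1 = clock::now();

        // unordered_map: hash computation plus a bucket lookup per character.
        std::unordered_map<char, int> m;
        for (char c : s)
            ++m[c];
        auto t2 = clock::now();

        auto ms = [](auto d) {
            return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
        };
        std::printf("array: %lld ms, unordered_map: %lld ms\n",
                    static_cast<long long>(ms(t1 - t0)),
                    static_cast<long long>(ms(t2 - t1)));
        return 0;
    }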

Could not understand working of vector.resize() and vector.shrink_to_fit() [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 2 years ago.
I came across two pieces of code demonstrating the use of std::vector::resize() and std::vector::shrink_to_fit() and could not understand which of them destroys the elements of the vector.
Example 1
Example 2
In example 1 the vector still has all of its elements even after resize(5) is used, while in example 2 resize(4) eliminates the 5th element of the vector. Have a look and tell me if I'm getting something wrong.
The first example is undefined behaviour: after resize(5) the vector only has 5 elements, yet elements up to index 9 are accessed. It is still likely to appear to work in a release build, because the memory for the removed elements has not been freed yet; a debug build will probably catch the error, though.
shrink_to_fit() does not change the contents of the vector. It may, however, move the elements to a smaller block of memory and free the old one, which would make the previous bug visible.
The second example uses the vector correctly.
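Since the original examples are not reproduced above, here is a small self-contained sketch of the behaviour described: resize() destroys the trailing elements (and accessing them afterwards is undefined behaviour), while shrink_to_fit() leaves the remaining elements untouched and only asks the implementation to release unused capacity.

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};

        // resize(5) destroys elements 5..9 and sets size() to 5.
        // Capacity typically stays at 10, but the trailing elements are gone:
        // reading v[9] from here on would be undefined behaviour.
        v.resize(5);
        std::cout << "size: " << v.size()
                  << ", capacity: " << v.capacity() << '\n';

        // shrink_to_fit() does not touch the remaining elements; it only asks
        // the implementation to release the unused capacity (the request is
        // non-binding, but mainstream implementations honour it).
        v.shrink_to_fit();
        std::cout << "size: " << v.size()
                  << ", capacity: " << v.capacity() << '\n';

        for (int x : v)          // still prints 1 2 3 4 5
            std::cout << x << ' ';
        std::cout << '\n';
        return 0;
    }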

Profiling and std::vector part of Hot Path? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 4 years ago.
I'm trying to find the source of performance issues with my application. Using Visual Studio 2017 profiling tools I got this result:
I'm relatively new to C++, so I'm not sure what this std::vector<bool,std::allocator<bool> >::operator[] entry means, or whether it is really the bottleneck in my program. Any help is appreciated.
Here is my code:
https://github.com/k-vekos/GameOfLife/tree/multithread
In a Game of Life, what you do is read state to make decisions, so it is no surprise that this is where most of the time goes.
Your access pattern is close to random in virtual address space because of the std::vector of std::vector layout. A single flat buffer, with a vector of spans into it, would improve memory locality significantly.
If you keep a 0 or 1 in those cells, doing += instead of a branch might help.
Also, vector<bool> is packed bits, which makes access slower; a vector of single bytes could be faster with your simple algorithm.
Note that fancy Game of Life implementations use zone-based hashing (e.g. Hashlife) to skip work in large areas.
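A minimal sketch of that layout (the types and names here are invented, not taken from the linked repository): a single contiguous byte buffer indexed as y * width + x, using index arithmetic instead of the vector-of-spans variant the answer mentions, with neighbour counting done by summing bytes rather than branching on a packed vector<bool>.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One contiguous std::vector<uint8_t> instead of std::vector<std::vector<bool>>.
    struct Grid {
        int width = 0, height = 0;
        std::vector<std::uint8_t> cells;  // 0 = dead, 1 = alive, one byte per cell

        Grid(int w, int h)
            : width(w), height(h),
              cells(static_cast<std::size_t>(w) * h, 0) {}

        std::uint8_t at(int x, int y) const {
            return cells[static_cast<std::size_t>(y) * width + x];
        }

        // Count live neighbours by summing bytes (+=) rather than branching
        // on each cell's value.
        int neighbours(int x, int y) const {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < width && ny >= 0 && ny < height)
                        sum += at(nx, ny);
                }
            return sum;
        }
    };

A grid like this can be updated row by row in a tight loop, and the profiler should then spend its time in your own update code rather than in vector<bool>::operator[].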

Memory and performance in C++ Game [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I'm still new to C++ but I catch on quickly, and I also have experience in C#. What I want to know is: what are some performance-conscious steps I can take to make sure my game runs efficiently? Also, in which scenario am I more likely to run out of memory: 2k to 3k bullet objects on the stack, or on the heap? I think the stack is generally faster, but I have heard that using too much of it causes a stack overflow. That being said, how much is too much exactly?
Sorry for the plethora of questions; I just want to make sure I don't design a game engine that relies on good PCs in order to run well.
Firstly, program your game safely, and only worry about optimizations like memory layout after profiling and debugging.
That said, I have to dispel the myth that the stack is faster than the heap. What actually matters is cache behaviour.
The stack is generally faster for small, quick accesses, because the stack is usually already in the cache. But when you iterate over thousands of bullet objects on the heap, as long as you store them contiguously (e.g. in a std::vector, not a std::list), they will be streamed into the cache and there should be no meaningful performance difference.
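As a concrete illustration of "store them contiguously" (the Bullet type and update function here are hypothetical, not from the asker's engine): a few thousand small structs in one std::vector occupy a single heap block and iterate in a cache-friendly way.

    #include <vector>

    // Hypothetical bullet type, invented for illustration.
    struct Bullet {
        float x = 0.f, y = 0.f;
        float vx = 0.f, vy = 0.f;
        bool alive = true;
    };

    // Iterating over a contiguous buffer keeps the update loop cache-friendly,
    // regardless of whether that buffer lives on the heap.
    void update(std::vector<Bullet>& bullets, float dt) {
        for (Bullet& b : bullets) {
            b.x += b.vx * dt;
            b.y += b.vy * dt;
        }
    }

    int main() {
        std::vector<Bullet> bullets(3000);  // ~3k bullets in one contiguous block
        update(bullets, 1.0f / 60.0f);
        return 0;
    }

Whether that single block lives on the stack or the heap matters far less than the fact that the bullets are laid out back to back.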

What's the best way to allocate HUGE amounts of memory? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I'm allocating 10 GB of RAM for tons of objects that I will need. I want to be able to squeeze out every last byte of RAM I can before hitting a problem like a null pointer or a failed allocation.
I know the allocator returns contiguous memory, so if memory is fragmented by other programs, the maximum contiguous size will be quite small (I assume), or at least smaller than the actual amount of remaining free memory.
Is it better to allocate the entire contiguous block I need in one go (10 GB), or is it better to allocate smaller non-contiguous chunks and link them together?
Which approach is more likely to always give me all the memory I need?
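No answer is included in this excerpt, but here is a sketch of the "smaller non-contiguous chunks" strategy the question describes (the chunk size and the target total are arbitrary assumptions): keep allocating fixed-size blocks until the allocator refuses, and work with whatever was obtained.

    #include <cstddef>
    #include <iostream>
    #include <memory>
    #include <new>
    #include <vector>

    int main() {
        // Illustrative numbers only: 10 GB requested in 256 MB chunks.
        const std::size_t chunk_size = 256ull * 1024 * 1024;
        const std::size_t target     = 10ull * 1024 * 1024 * 1024;

        std::vector<std::unique_ptr<std::byte[]>> chunks;
        std::size_t obtained = 0;

        while (obtained < target) {
            try {
                chunks.emplace_back(std::make_unique<std::byte[]>(chunk_size));
                obtained += chunk_size;
            } catch (const std::bad_alloc&) {
                // The allocator could not provide another contiguous chunk of
                // this size; stop and use what we already have.
                break;
            }
        }

        std::cout << "Obtained " << (obtained >> 20) << " MiB in "
                  << chunks.size() << " chunks\n";
        return 0;
    }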