Arrays vs. Vectors vs. Boost::arrays - C++

I will be allocating and deallocating MANY dynamic, multidimensional arrays that represent matrices, every frame.
Priorities, even if it means giving up error checking and managing memory manually:
- Speed
- Small memory footprint
Are C-style arrays the best choice, given these priorities? I know this is an oft-asked question, but I haven't been able to find a definitive answer for my circumstance.

If you can characterize the maximum amount of memory needed for the set of arrays used in any particular 'frame' (whatever that is), and if you will be dealing with only a single frame at a time (in other words, you perform work on one set of arrays, then dump all of those arrays before starting work on another set), then you will likely get the best performance by allocating your arrays from a block of static memory that you have sized appropriately for your largest possible working set.
Then your array allocation can be a simple pool allocator that carves out memory for an array from the front of the block and adjusts the block pointer to just past that allocation to be ready for the next array allocation. When you're done with the work on that set of arrays, everything can be freed by 'cleaning the pool' - simply resetting the block pointer to the start of the static memory pool.
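Here is a minimal sketch of the pool allocator described above, assuming a fixed upper bound on per-frame memory. The block is grabbed once up front (with new[] here, rather than a true static block), and the names FrameArena and kPoolBytes are made up for illustration.

    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    // Bump-the-pointer arena: every per-frame array is carved off the front of one
    // pre-sized block, and "freeing" everything is a single offset reset.
    constexpr std::size_t kPoolBytes = 64 * 1024 * 1024;   // big enough for the largest frame (assumption)

    class FrameArena {
    public:
        FrameArena() : pool_(new std::uint8_t[kPoolBytes]), offset_(0) {}
        ~FrameArena() { delete[] pool_; }

        // Carve `bytes` off the block, aligned; no per-allocation bookkeeping.
        void* allocate(std::size_t bytes, std::size_t align = alignof(double)) {
            std::size_t p = (offset_ + align - 1) & ~(align - 1);
            assert(p + bytes <= kPoolBytes && "frame pool exhausted");
            offset_ = p + bytes;
            return pool_ + p;
        }

        // "Clean the pool" at the end of the frame: everything is freed at once.
        void reset() { offset_ = 0; }

    private:
        std::uint8_t* pool_;
        std::size_t   offset_;
    };

    // Usage per frame (illustrative):
    //   FrameArena arena;   // allocate the block once, up front
    //   double* m = static_cast<double*>(arena.allocate(rows * cols * sizeof(double)));
    //   ... work on the frame's matrices ...
    //   arena.reset();      // drop every array in one step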
Of course, since you haven't given much in the way of details for how your work must be done, this technique might not fit at all (that's probably why you haven't found a definitive answer yet - such an answer depends on the specific characteristics of the work you're performing).

std::vector is typically as good as a C array, but if you want ultimate bare-bones speed and you know what you are doing, nothing can match managing C arrays yourself.
There are trade-offs you have to consider:
- How much time are you willing to spend debugging custom code?
- How much custom code are you willing to write?
Also, array/vector libraries are very well tested and optimized for speed and memory consumption; you might want to benchmark them with various compiler settings before deciding (if you do, please share the results).

You give us too little information about your problem to give you a good answer. Is your program supposed to run on only one platform, or should it be platform-independent? Is time-efficiency critical for your project? If so, perhaps using 'new' and 'delete' will be too slow for you and you will need to resort to some platform-specific or third-party allocator. Beyond that, the choice between a dynamically allocated array and a std::vector shouldn't make a difference.
Or do you want to allocate the arrays on the stack? There is a limit to the size of array you can create on the stack. What are the sizes of your matrices?

If you want to represent a matrix, a single one-dimensional array is already better than a multidimensional array. An array is the simplest structure at your disposal, so it is the most appropriate.
The purpose of std::vector is to implement a dynamic array, and you may not need that feature, since a matrix has a fixed size.
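For concreteness, here is a minimal sketch of the flattened, row-major layout this answer is describing; the class name Matrix and its members are made up for illustration, not something from the answer.

    #include <cstddef>
    #include <vector>

    // A row-major matrix stored in a single contiguous 1-D buffer
    // instead of an array of row pointers.
    struct Matrix {
        std::size_t rows_, cols_;
        std::vector<double> data_;

        Matrix(std::size_t rows, std::size_t cols)
            : rows_(rows), cols_(cols), data_(rows * cols, 0.0) {}

        // Element (r, c) lives at offset r * cols_ + c.
        double&       operator()(std::size_t r, std::size_t c)       { return data_[r * cols_ + c]; }
        const double& operator()(std::size_t r, std::size_t c) const { return data_[r * cols_ + c]; }
    };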

Related

How to allocate a large dynamic array in C++?

So I am currently trying to allocate dynamically a large array of elements in C++ (using "new"). Obviously, when "large" becomes too large (>4GB), my program crashes with a "bad_alloc" exception because it can't find such a large chunk of memory available.
I could allocate each element of my array separately and then store the pointers to these elements in a separate array. However, time is critical in my application so I would like to avoid as much cache misses as I can. I could also group some of these elements into blocks but what would be the best size for such a block?
My question is then: what is the best way (timewise) to dynamically allocate a large array of elements such that the elements do not have to be stored contiguously but must be accessible by index (using [])? This array will never be resized, and no elements will be inserted into or deleted from it.
I thought I could use std::deque for this purpose, knowing that the elements of an std::deque might or might not be stored contiguously in memory, but I read there are concerns about the extra memory this container takes.
Thank you for your help on this!
If your problem is such that you actually run out of memory, allocating fairly small blocks (as deque does) is not going to help; the overhead of tracking the allocations will only make the situation worse. You need to re-think your implementation so that you can deal with the data in blocks that will still fit in memory. For such problems, on x86 or x64 based hardware, I would suggest blocks of at least 2 megabytes (the large page size).
Obviously, when "large" becomes too large (>4GB), my program crashes with a "bad_alloc" exception because it can't find such a large chunk of memory available.
You should be using a 64-bit CPU and OS at this point; allocating a huge contiguous chunk of memory should not be a problem unless you are actually running out of memory. It is possible that you are building a 32-bit program, in which case you won't be able to allocate more than 4 GB. You should build a 64-bit application.
If you want something better than plain operator new, then your question is OS-specific. Look at the API provided by your OS: on POSIX systems look for mmap; on Windows, VirtualAlloc.
There are multiple problems with large allocations:
- For security reasons, the OS kernel never gives you pages filled with garbage values; instead, all new memory is zero-initialized. This means you don't have to initialize that memory yourself, as long as zeroes are exactly what you want.
- The OS gives you real memory lazily, on first access. If you are processing a large array, you might waste a lot of time taking page faults. To avoid this you can use MAP_POPULATE on Linux. On Windows you can try PrefetchVirtualMemory (but I am not sure it can do the job). This makes the initial allocation slower but should decrease the total time spent in the kernel.
- Working with large chunks of memory wastes slots in the Translation Lookaside Buffer (TLB). Depending on your memory access pattern, this can cause a noticeable slowdown. To avoid this you can try using large pages (mmap with MAP_HUGETLB, MAP_HUGE_2MB, MAP_HUGE_1GB on Linux; VirtualAlloc with MEM_LARGE_PAGES on Windows). Using large pages is not easy, as they are usually not available by default. They also cannot be swapped out (they are always "locked in memory"), so using them requires privileges.
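As a concrete (Linux-only) sketch of the mmap approach mentioned above: one large anonymous mapping, pre-faulted with MAP_POPULATE so the kernel wires the pages up front instead of on first access. The function name is made up; add MAP_HUGETLB only if huge pages are configured on your system.

    #include <cstddef>
    #include <cstdio>
    #include <sys/mman.h>

    double* map_big_array(std::size_t count) {
        std::size_t bytes = count * sizeof(double);
        void* p = mmap(nullptr, bytes,
                       PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE,  // add MAP_HUGETLB for huge pages
                       -1, 0);
        if (p == MAP_FAILED) {
            std::perror("mmap");
            return nullptr;
        }
        return static_cast<double*>(p);   // already zero-filled by the kernel
    }

    // Release with: munmap(ptr, count * sizeof(double));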
If you don't want to use OS-specific functions, the best you can find in C++ is std::calloc. Unlike std::malloc or operator new, it returns zero-initialized memory, so you can probably avoid wasting time initializing that memory. Other than that, there is nothing special about the function, but it is the closest you can get while staying within standard C++.
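A tiny sketch of that, with a made-up helper name; the element count is just an example and needs a 64-bit build.

    #include <cstdlib>

    // std::calloc(count, size) hands back zero-initialized memory,
    // so there is no separate memset pass over the array.
    double* allocate_zeroed(std::size_t count) {
        return static_cast<double*>(std::calloc(count, sizeof(double)));
    }

    // Usage: double* data = allocate_zeroed(600'000'000);  // ~4.8 GB
    //        ... use data ...; std::free(data);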
There are no standard containers designed to handle large allocations; moreover, all the standard containers are quite bad at handling these situations.
Some OSes (like Linux) overcommit memory, others (like Windows) do not. Windows might refuse to give you memory if it knows it won't be able to satisfy your request later. To avoid this you might want to increase your page file. Windows needs to reserve that space on disk beforehand, but that does not mean it will use it (start swapping). Since actual memory is given to programs lazily, a lot of memory may be reserved for applications that will never actually be given to them.
If increasing page file is too inconvenient, you can try creating large file and map it into memory. That file will serve as a "page file" for your memory. See CreateFileMapping and MapViewOfFile.
The answer to this question is extremely application- and platform-dependent. These days, if you just need a small integer factor greater than 4 GB, you use a 64-bit machine, if possible. Sometimes reducing the size of the element in the array is possible as well (e.g. using 16-bit fixed-point or half-float instead of 32-bit float).
Beyond this, you are either looking at sparse arrays or out-of-core techniques. Sparse arrays are used when you are not actually storing elements at all locations in the array. There are many possible implementations and which is best depends on both the distribution of the data and the access pattern of the algorithm. See Eigen for example.
Out-of-core involves explicitly reading and writing parts of the array to/from disk. This used to be fairly common, but people work pretty hard to avoid doing this now. Applications that really require such are often built on top of a database or similar to handle the data management. In scientific computing, one ends up needing to distribute the compute as well as the data storage so there's a lot of complexity around that as well. For important problems the entire design may be driven by having good locality of reference.
Any sparse data structure will have overhead in how much space it takes. This can be fairly low, but it means you have to be careful if you actually have a dense array and are simply looking to avoid memory fragmentation.
If your problem can be broken into smaller pieces that only access part of the array at a time and the main issue is memory fragmentation making it hard to allocate one large block, then breaking the array in to pieces, effectively adding an outer vector of pointers, is a good bet. If you have random access to an array larger than 4 gigabytes and no way to localize the accesses, 64-bit is the way to go.
Depending on what you need the memory for and your speed concerns, and if you're using Linux, you can always try using mmap and simulate a sort of swap. It might be slower, but you can map very large sizes. See Mmap() an entire large file

Most efficient way to grow array C++

Apologies if this has been asked before, I can't find a question that fully answers what I want to know. They mention ways to do this, but don't compare approaches.
I am writing a program in C++ to solve a PDE to steady state. I don't know how many time steps this will take. Therefore I don't know how long my time arrays will be. This will have a maximum time of 100,000s, but the time step could be as small as .001, so it could be as many as 1e8 doubles in length in the worst case (not necessarily a rare case either).
What is the most efficient way to implement this in terms of memory allocated and running time?
Options I've looked at:
- Dynamically allocating an array with 1e8 elements, most of which won't ever be used.
- Allocating a smaller array initially, then creating a larger array when needed and copying elements over.
- Using std::vector and its size-increasing functionality.
Are there any other options?
I'm primarily concerned with speed, but I want to know what memory considerations come into it as well
If you are concerned about speed just allocate 1e8 doubles and be done with it.
In most cases vector should work just fine. Remember that appending is amortized O(1).
Unless you are running on something very unusual, the OS memory allocator should take care of most fragmentation issues and of the fact that it is hard to find an 800 MB free memory block.
As noted in the comments, if you are careful using vector, you can actually reserve the capacity to store the maximum input size in advance (1e8 doubles) without paging in any memory.
For this you want to avoid the fill constructor and methods like resize (which would end up accessing all the memory) and use reserve and push_back to fill it and only touch memory as needed. That will allow most operating systems to simply page in chunks of your accessed vector at a time instead of the entire contents all at once.
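A short sketch of the reserve + push_back approach described above: capacity for the 1e8 worst case is reserved up front, but pages are only touched as results are appended. kMaxSteps and the helper names are made up for illustration.

    #include <cstddef>
    #include <vector>

    constexpr std::size_t kMaxSteps = 100'000'000;   // the worst case from the question

    std::vector<double> make_time_log() {
        std::vector<double> times;
        times.reserve(kMaxSteps);   // one allocation, no elements constructed yet
        return times;
    }

    void record_time(std::vector<double>& times, double t) {
        times.push_back(t);         // touches only the pages actually used
    }

    // By contrast, std::vector<double> times(kMaxSteps) or times.resize(kMaxSteps)
    // would write every element and fault in the full ~800 MB immediately.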
Yet I tend to avoid this solution for the most part at these kinds of input scales, but for simple reasons:
- A possibly-paranoid portability fear that I may encounter an operating system which doesn't have this kind of page-on-demand behavior.
- A possibly-paranoid fear that the allocation may fail to find a contiguous set of unused pages and face out-of-memory errors (this is a grey zone -- I tend to worry about this for arrays which span gigabytes; hundreds of megabytes is borderline).
- A totally subjective and possibly dumb/old bias towards not leaning too heavily on the operating system's behavior for paging in allocated memory, and preferring to have a data structure which simply allocates on demand.
- Debugging.
Among the four, the first two could simply be paranoia. The third might just be plain dumb. Yet at least on operating systems like Windows, when using a debug build, the memory is initialized in its entirety early, and we end up mapping the allocated pages to DRAM immediately on reserving capacity for such a vector. That can lead to a slight startup delay and a task manager showing 800 megabytes of memory usage for a debug build before we've even done anything.
While generally the efficiency of a debug build should be a minor concern, when the potential discrepancy between release and debug is enormous, it can start to render production code almost incapable of being effectively debugged. So when the differences are potentially vast like this, my preference is to "chunk it up".
The strategy I like to apply here is to allocate smaller chunks -- smaller arrays of N elements, where N might be, say, 512 doubles (just snug enough to fit a common denominator page size of 4 kilobytes -- possibly minus a couple of doubles for chunk metadata). We fill them up with elements, and when they get full, create another chunk.
With these chunks, we can aggregate them together by either linking them (forming an unrolled list) or storing a vector of pointers to them in a separate aggregate depending on whether random-access is needed or merely sequential access will suffice. For the random-access case, this incurs a slight overhead, yet one I've tended to find relatively small at these input scales which often have times dominated by the upper levels of the memory hierarchy rather than register and instruction level.
This might be overkill for your case and a careful use of vector may be the best bet. Yet if that doesn't suffice and you have similar concerns/needs as I do, this kind of chunky solution might help.
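Here is a bare-bones sketch of the chunked structure described above, assuming random access is needed: fixed-size chunks, a vector of pointers to them, and indexing via a divide/modulo. ChunkedArray and kChunkElems are made-up names; 512 doubles per chunk mirrors the figure in the text.

    #include <cstddef>
    #include <memory>
    #include <vector>

    class ChunkedArray {
        static constexpr std::size_t kChunkElems = 512;   // ~4 KB of doubles per chunk

    public:
        void push_back(double v) {
            if (size_ == chunks_.size() * kChunkElems)
                chunks_.push_back(std::make_unique<double[]>(kChunkElems));  // grow by one chunk
            chunks_[size_ / kChunkElems][size_ % kChunkElems] = v;
            ++size_;
        }

        double operator[](std::size_t i) const {
            return chunks_[i / kChunkElems][i % kChunkElems];
        }

        std::size_t size() const { return size_; }

    private:
        std::vector<std::unique_ptr<double[]>> chunks_;  // the aggregate of chunk pointers
        std::size_t size_ = 0;
    };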
The only way to know which option is 'most efficient' on your machine is to try a few different options and profile. I'd probably start with the following:
- std::vector constructed with the maximum possible size.
- std::vector constructed with a conservative ballpark size and push_back.
- std::deque and push_back.
The std::vector vs std::deque debate is ongoing. In my experience, when the number of elements is unknown and not too large, std::deque is almost never faster than std::vector (even if the std::vector needs multiple reallocations) but may end up using less memory. When the number of elements is unknown and very large, std::deque memory consumption seems to explode and std::vector is the clear winner.
If after profiling, none of these options offers satisfactory performance, then you may want to consider writing a custom allocator.

Very fast object allocator for small object of same size

I have to write some code that needs to have the best performance.
Requirements:
I need a very fast object allocator for quick creation of objects. My object has only 3 doubles in it. Allocation and deallocation will occur one object at a time only.
I did a lot of research and came up with:
std::vector<MyClass, boost::fast_pool_allocator<MyClass>>
I wonder (as of July 2014):
- Does the STL have something equivalent to boost::fast_pool_allocator?
- Is there a better solution than what I have found?
There is additional information to answer some comments:
The code will be used to optimize my algorithm for: Code Project article on Convex Hull
I need to convert C# code to C or C++ to improve performance. It should compete against another algorithm written in pure C. I just discovered that the comparison chart in my article has errors, because I tested against code compiled in C for x86-Debug. In x64-Release the C code is a lot faster (a factor of 4 to 5 times faster than in x86-Debug).
According to this Boost documentation and this answer on StackOverflow, boost::fast_pool_allocator seems to be the best allocator to use for small memory chunks of the same size, requested one at a time. But I would like to make sure nothing else exists that is either more standard (part of the STL) or faster.
My code will be developed on Visual Studio 2013 and target any windows platform (no phone or tablet).
My intent is not just to have fast code, it is to have the fastest code. I prefer not to have overly convoluted code if possible, and I also look for code that is maintainable (at least a minimum).
If possible, I would also like to know the impact of using std::vector vs. arrays (i.e. []).
For more info, you could see Wikipedia - Object pool pattern
The closest thing to what I was looking for was Paulo Zemek's Code Project article: O(1) Object Pool in C++.
But in the end I allocated/reserved memory of size equal to the maximum possible number of objects times my object size.
Because none of my objects needed to outlive my algorithm's loop, I cheated by treating locations in the reserved memory space as objects. After my algorithm loop, I flushed the reserved memory space. It appears to me to be the fastest approach. Very inelegant, but extremely fast, and it requires only one allocation and one deallocation.
I was not totally satisfied with the other answers, which is why I answered myself. I also added comments to the question addressing every comment and wrote this answer to make things clear. I know that my decision/implementation was not totally in accordance with the question, but I think it should have led to something similar.
Search for memory-pool heaps. Basically, you create a heap dedicated to objects of a single size (typically powers of 2: 4 bytes, 16 bytes, etc.) and allocate objects from the heap whose block size is the smallest that can fit your object. As each heap contains only fixed-size blocks, it is very easy to manage the blocks allocated in it; a bitmap can show you which blocks are free or in use, so allocation can be very fast (especially if you just allocate at the end and increment a pointer).
As an example, here's one; you may be able to take it and optimise it explicitly for your particular object size and requirements.
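Here is a minimal sketch of such a fixed-size pool for the 3-double object from the question: slots are carved out of one block and a free list is threaded through the unused slots, so both allocate and deallocate are O(1). Point3 and FixedPool are made-up names, not from the question or any linked code.

    #include <cstddef>
    #include <new>
    #include <vector>

    struct Point3 { double x, y, z; };

    class FixedPool {
    public:
        explicit FixedPool(std::size_t capacity)
            : storage_(capacity * sizeof(Slot)) {
            // Thread every slot onto the free list.
            for (std::size_t i = 0; i < capacity; ++i) {
                Slot* s = slot(i);
                s->next = free_;
                free_ = s;
            }
        }

        Point3* allocate() {
            if (!free_) return nullptr;              // pool exhausted
            Slot* s = free_;
            free_ = free_->next;
            return new (s) Point3{};                 // construct in place
        }

        void deallocate(Point3* p) {
            p->~Point3();
            Slot* s = reinterpret_cast<Slot*>(p);    // reuse the slot as a free-list node
            s->next = free_;
            free_ = s;
        }

    private:
        union Slot { Slot* next; Point3 obj; };
        Slot* slot(std::size_t i) { return reinterpret_cast<Slot*>(storage_.data()) + i; }

        std::vector<unsigned char> storage_;
        Slot* free_ = nullptr;
    };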
I found this solution very useful: Fast C++11 allocator for STL containers. It speeds up STL containers noticeably on VS2017 (~5x) as well as on GCC (~7x). Moreover, you can manually specify the grow size, or, if you know the maximum number of elements in your list, you can preallocate them all.

Dynamically allocate or waste memory?

I have a 2d integer array used for a tile map.
The size of the map is unknown and is read from a file at runtime. Currently the biggest file is 2500 items (a 50x50 grid).
I have a working method of dynamic memory allocation from an earlier question, but people keep saying it is a bad idea, so I have been thinking about whether to just use a big array and not fill it all up when using a smaller map.
Do people know of any pros or cons to either solution? Any advice or personal opinions are welcome.
C++ btw.
Edit: all the maps are made by me, so I can pick a max size.
Probably the easiest way is, for example, a std::vector<std::vector<int>>, which is dynamically sized AND lets the library do all the allocations for you. This will prevent accidentally leaking memory.
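A small sketch of what that could look like; the file format here (width, height, then row-major tile IDs) is an assumption for illustration, and load_tile_map is a made-up name.

    #include <fstream>
    #include <vector>

    std::vector<std::vector<int>> load_tile_map(const char* path) {
        std::ifstream in(path);
        int width = 0, height = 0;
        in >> width >> height;

        std::vector<std::vector<int>> map(height, std::vector<int>(width, 0));
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                in >> map[y][x];
        return map;   // sized exactly to the file, freed automatically
    }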
My preference would be to dynamically allocate. That way, should you encounter a surprisingly large map, you (hopefully) won't overflow if you've written it correctly, whereas with the fixed size your only option is to return an error and fail.
Presumably loading tile maps is a pretty infrequent operation. I'd be willing to bet that you can't even measure a meaningful difference in speed between the two. Unless there is a measurable performance reduction, or you're actually hitting something else that is causing you problems, the statically sized array seems like a premature optimisation and is asking for trouble later on.
It depends entirely on requirements that you haven't stated :-)
If you want your app to be as blazingly fast as possible, with no ability to handle larger tile maps, then by all means just use a big array. For small PIC-based embedded systems this could be an ideal approach.
But, if you want your code to be robust, extensible, maintainable and generally suitable for a wider audience, use STL containers.
Or, if you just want to learn stuff, and have no concern about maintainability or performance, try and write your own dynamically allocating containers from scratch.
I believe the issue people refer to with dynamic allocation results from allocating randomly sized blocks of memory and not being able to effectively manage the random sized holes left when deallocated. If you're allocating fixed sized tiles then this may not be an issue.
I see quite a few people suggest allocating a large block of memory and managing it themselves. That might be an alternative solution.
Is allocating the memory dynamically a bottleneck in your program? Is it the cause of a performance issue? If not, then simply keep dynamic allocation; you can handle any map size. If yes, then maybe use a data structure that does not deallocate the memory it has allocated but rather reuses its old buffer, reallocating more only if needed.

How to implement a memory heap

Wasn't exactly sure how to phrase the title, but the question is:
I've heard of programmers allocating a large section of contiguous memory at the start of a program and then dealing it out as necessary. This is in contrast to simply going to the OS every time memory is needed.
I've heard that this would be faster because it would avoid the cost of asking the OS for contiguous blocks of memory constantly.
I believe the JVM does just this, maintaining its own section of memory and then allocating objects from that.
My question is, how would one actually implement this?
Most C and C++ compilers already provide a heap memory-manager as part of the standard library, so you don't need to do anything at all in order to avoid hitting the OS with every request.
If you want to improve performance, there are a number of improved allocators around that you can simply link with and go. e.g. Hoard, which wheaties mentioned in a now-deleted answer (which actually was quite good -- wheaties, why'd you delete it?).
If you want to write your own heap manager as a learning exercise, here are the basic things it needs to do:
- Request a big block of memory from the OS.
- Keep a linked list of the free blocks.
- When an allocation request comes in:
  - search the list for a block that's big enough for the requested size plus some book-keeping variables stored alongside;
  - split off a big enough chunk of the block for the current request, and put the rest back on the free list;
  - if no block is big enough, go back to the OS and ask for another big chunk.
- When a deallocation request comes in:
  - read the header to find out the size;
  - add the newly freed block onto the free list;
  - optionally, see if the memory immediately following it is also on the free list, and combine both adjacent blocks into one bigger one (called coalescing the heap).
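Here is a toy sketch of those steps over a single block obtained once up front (malloc stands in for the OS call). ToyHeap and its members are made-up names; the split threshold is arbitrary, and alignment, coalescing, growing the heap, and thread safety are all left out -- it is a learning sketch, not a real allocator.

    #include <cstddef>
    #include <cstdlib>

    class ToyHeap {
        struct Header {
            std::size_t size;   // payload size in bytes
            Header*     next;   // next free block (meaningful only while free)
        };

    public:
        explicit ToyHeap(std::size_t bytes) {
            base_ = static_cast<Header*>(std::malloc(bytes));   // "the big block from the OS"
            base_->size = bytes - sizeof(Header);
            base_->next = nullptr;
            free_list_ = base_;
        }
        ~ToyHeap() { std::free(base_); }

        void* allocate(std::size_t bytes) {
            Header** link = &free_list_;
            for (Header* h = free_list_; h != nullptr; link = &h->next, h = h->next) {
                if (h->size < bytes) continue;                   // first fit: keep searching
                if (h->size >= bytes + sizeof(Header) + 16) {    // big enough to split
                    Header* rest = reinterpret_cast<Header*>(
                        reinterpret_cast<char*>(h + 1) + bytes);
                    rest->size = h->size - bytes - sizeof(Header);
                    rest->next = h->next;
                    h->size = bytes;
                    *link = rest;                                // the remainder stays free
                } else {
                    *link = h->next;                             // hand out the whole block
                }
                return h + 1;                                    // payload follows the header
            }
            return nullptr;   // a real heap would go back to the OS for another big chunk here
        }

        void deallocate(void* p) {
            Header* h = static_cast<Header*>(p) - 1;             // header sits just before the payload
            h->next = free_list_;                                // push onto the free list
            free_list_ = h;                                      // (coalescing neighbours is omitted)
        }

    private:
        Header* base_ = nullptr;
        Header* free_list_ = nullptr;
    };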
You allocate a chunk of memory at the beginning of the program, large enough to sustain its needs. Then you override new and/or malloc, and delete and/or free, to return memory from/to this buffer.
When implementing this kind of solution, you need to write your own allocator (to source memory from the chunk), and you may end up using more than one allocator, which is often why you allocate a memory pool in the first place.
The default memory allocator is a good all-around allocator, but it is not the best for all allocation needs. For example, if you know you'll be allocating a lot of objects of a particular size, you may define an allocator that allocates fixed-size buffers and pre-allocates more than one to gain some efficiency.
Here is the classic allocator, and one of the best for non-multithreaded use:
http://gee.cs.oswego.edu/dl/html/malloc.html
You can learn a lot from reading the explanation of its design. The link to malloc.c in the article is rotted; it can now be found at http://gee.cs.oswego.edu/pub/misc/malloc.c.
With that said, unless your program has really unusual allocation patterns, it's probably a very bad idea to write your own allocator or use a custom one. Especially if you're trying to replace the system malloc, you risk all kinds of bugs and compatibility issues from different libraries (or standard library functions) getting linked to the "wrong version of malloc".
If you find yourself needing specialized allocation for just a few specific tasks, that can be done without replacing malloc. I would recommend looking up GNU obstack and object pools for fixed-sized objects. These cover a majority of the cases where specialized allocation might have real practical usefulness.
Yes, both the stdlib heap and the OS heap / virtual memory are pretty troublesome. OS calls are really slow, and the stdlib is faster but still has some "unnecessary" locks and checks, and adds significant overhead to allocated blocks (i.e. some memory is used for management, in addition to what you allocate).
In many cases it's possible to avoid dynamic allocation completely by using static structures instead. For example, sometimes it's better (safer, etc.) to define a 64k static buffer for a Unicode filename than to define a pointer/std::string and dynamically allocate it.
When a program has to allocate a lot of instances of the same structure, it's much faster to allocate large memory blocks and then just store the instances there (sequentially, or using a linked list of free nodes) -- C++ has "placement new" for that.
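A short sketch of what placement new looks like in that scheme: N instances cost one block allocation instead of N heap calls. Node and the surrounding names are made up for illustration.

    #include <new>
    #include <vector>

    struct Node { double a, b, c; };

    void example() {
        std::vector<unsigned char> block(1000 * sizeof(Node));   // one big allocation
        void* slot0 = block.data();

        Node* n = new (slot0) Node{1.0, 2.0, 3.0};   // construct in pre-allocated storage
        // ... use *n ...
        n->~Node();                                  // destroy explicitly; no delete
    }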
In many cases, when working with variable-size objects, the set of possible sizes is actually very limited (e.g. something like 4+2*(1..256)), so it's possible to use a few pools like [3] without having to collect garbage, fill the gaps, etc.
It's common for a custom allocator for a specific task to be much faster than the one(s) from the standard library, and even faster than speed-optimized but overly universal implementations.
Modern CPUs/OSes support "large pages", which can significantly improve memory access speed when you explicitly work with large blocks; see http://7-max.com/
IBM developerWorks has a nice article about memory management, with an extensive resources section for further reading: Inside memory management.
Wikipedia has some good information as well: C dynamic memory allocation, Memory management.