I hope that this question hasn’t been asked before, but I couldn’t find any answer after googling for an hour.
I have the following problem: I do numerical mathematics, and I have about 40 MB of doubles in the form of certain matrices that are compile-time constants. I use these matrices very frequently throughout my program. The creation of these matrices takes a week of computation, so I do not want to compute them on the fly but use precomputed values.
What is the best way of doing this in a portable manner? Right now I have some very large .cpp files in which I create dynamically allocated matrices with the constants as initialiser lists. Something along the lines of:
data.cpp:
namespace // Anonymous
{
// math::matrix uses dynamic memory internally
const math::matrix K { 3.1337e2, 3.1415926e00, /* a couple of hundred-thousand numbers */ };
}
const math::matrix& get_K() { return K; }
data.h:
const math::matrix& get_K();
I am not sure whether this can cause problems with too much data on the stack due to the initialiser list, whether it can crash some compilers, or whether this is the right way to go about things at all. It does seem to be working, though.
I also thought about loading the matrices at program startup from an external file, but that also seems messy to me.
Thank you very much in advance!
I am not sure if this can cause problems with too much data on the stack, due to the initialiser list.
There should not be a problem, assuming it has static storage with non-dynamic initialisation, which should be the case if math::matrix is an aggregate.
Given that the values will be compile time constant, you might consider defining them in a header, so that all translation units can take advantage of them at compile time. How beneficial that would be depends on how the values are used.
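For example, a minimal sketch of a header-only variant (names are illustrative; it assumes the raw values can live in a plain aggregate such as std::array, from which the dynamically allocating math::matrix could then be constructed):

// data.h -- hypothetical header-only variant
#pragma once
#include <array>

// inline (C++17) gives one shared definition across all translation units;
// constexpr lets other code use the values in constant expressions.
inline constexpr std::array<double, 4> K_values {
    3.1337e2, 3.1415926e00, /* ... the rest of the numbers ... */ 1.0, 2.0
};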
I also thought about loading the matrices at program startup from an external file
The benefit of this approach is the added flexibility that you gain because you would not need to recompile the program when you change the data. This is particularly useful if the program is expensive to compile.
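If you do go down that route, a minimal sketch of such a loader (assuming the file is a raw dump of doubles written on the same platform/toolchain; the path handling and error reporting are illustrative only):

#include <cstddef>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

std::vector<double> load_doubles(const std::string& path)
{
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in)
        throw std::runtime_error("cannot open " + path);

    const std::streamsize bytes = in.tellg();   // opened at the end, so this is the file size
    std::vector<double> values(static_cast<std::size_t>(bytes) / sizeof(double));

    in.seekg(0);
    in.read(reinterpret_cast<char*>(values.data()), bytes);
    return values;
}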
A slightly cleaner approach:
// math::matrix uses dynamic memory internally
const math::matrix K {
#include "matrix_initial_values.h"
};
And, in the included header,
3.1337e2, 3.1415926e00, 1,2,3,4,5e6, 7.0,...
Considering your comment of "a few hundred thousand" double values: 1M double values take 8,000,000 bytes, or about 7.6 MB. That's not going to blow the stack. Win64 has a max stack size of 1 GB, so you'd have to go really, really nuts, and that's assuming that these values are actually placed on the stack, which they should not be, given that it's const.
This is probably implementation-specific, but a large block of literals is typically stored as a big chunk of code-space data that's loaded directly into the process's memory space. The identifier (K) is really just a label for that data. It doesn't exist on the stack or the heap any more than code does.
Our library has a lot of chained functions that are called thousands of times every time step when solving an engineering problem on a mesh during a simulation. In these functions, we must create arrays whose sizes are only known at runtime, depending on the application. There are three choices we have tried so far, as shown below:
void compute_something( const int& n )
{
    double fields1[n];               // Option 1.
    auto *fields2 = new double[n];   // Option 2.
    std::vector<double> fields3(n);  // Option 3.
    // .... a lot more operations on the field variables ....
}
Of these choices, Option 1 has worked with our current compiler, but we know it's not safe because we may overflow the stack (plus, it's non-standard). Options 2 and 3 are, on the other hand, safer, but using them as frequently as we do is impacting the performance of our applications, to the point that the code runs ~6 times slower than with Option 1.
What are other options to handle memory allocation efficiently for dynamic-sized arrays in C++? We have considered constraining the parameter n, so that we can provide the compiler with an upper bound on the array size (and optimization would follow); however, in some functions, n can be pretty much arbitrary and it's hard to come up with a precise upper bound. Is there a way to circumvent the overhead in dynamic memory allocation? Any advice would be greatly appreciated.
Create a cache at startup and pre-allocate it with a reasonable size.
Pass the cache to your compute function, or make it part of your class if compute() is a method.
Resize the cache inside the function:
std::vector<double> fields;
fields.reserve( reasonable_size );
...

void compute( int n, std::vector<double>& fields ) {
    fields.resize(n);
    // .... a lot more operations on the field variables ....
}
This has a few benefits.
First, most of the time the size of the vector will be changed but no allocation will take place due to the exponential nature of std::vector's memory management.
Second, you will be reusing the same memory, so it is likely to stay in cache.
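As a minimal usage sketch (the sizes and the loop body are illustrative stand-ins for the real simulation):

#include <vector>

// As in the answer: resize (usually without reallocating) and work on the buffer.
void compute(int n, std::vector<double>& fields)
{
    fields.resize(n);
    for (int i = 0; i < n; ++i)
        fields[i] = i * 0.5;               // stand-in for the real field operations
}

int main()
{
    std::vector<double> fields;
    fields.reserve(100000);                // the "reasonable_size" for the application

    for (int step = 0; step < 1000; ++step)
        compute(1 + step % 100, fields);   // every call reuses the same storage
}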
I'm trying to make my code a bit faster and I'm trying to find out if I can gain some performance by better managing arrays stored in objects and stuff.
So the basic idea behind that is that I tend to keep separate arrays for temporary and permanent states. This means that they have to be indexed separately all the time, and I have to explicitly write the proper member name every time I want to use them.
This is what a particular class with such arrays looks like:
class solution
{
public:
    // Costs
    float *cost_array;
    float *temp_cost_array;
    // Cost trend
    float *d_cost_array;
    float *temp_d_cost_array;
    ...
};
Now, because of the fact that I have functions/methods that work on the temp or the permanent status depending on input arguments, these look like this:
void do_stuff(bool temp) {
    if (temp)
        work_on(this->temp_cost_array);
    else
        work_on(this->cost_array);
}
These are very simplistic examples of such branches. These arrays may be indexed separately here and there within the code. So, precisely because such branching is all over the place, I thought this was yet another reason to combine everything, so that I could get rid of those code branches as well.
So I converted my class to:
class solution
{
public:
    // Costs
    float **cost_array;
    // Cost trend
    float **d_cost_array;
    ...
};
Those pointer-to-pointer members point to arrays of size 2, with each element being a float* array. They are dynamically allocated just once during object creation at the start of the program and deleted at the end of the program.
So after that I also converted all the temp branches of my code like this:
void do_stuff(bool temp) {
    work_on(this->cost_array[temp]);
}
It looks WAY more elegant than before, but for some reason performance got much worse (almost 2 times slower), and I seriously can't understand why that happened.
So, as a first insight, I'd really love to hear from more experienced people, if my rationale behind that code optimization was valid or not.
Could the extra indexing required to access each array introduce such a major performance hit that it outweighs removing all the if branching? For sure it depends on how the whole thing works, but the code is a beast and I have no idea how to properly profile the whole thing.
Thanks
EDIT:
Environment settings:
Running on Windows 10, VS 2017, Full Optimization enabled (/Ox)
The reason for such a huge performance degradation might be that the change introduced another level of indirection, and following that extra pointer can slow the program down quite significantly.
The object prior to the change:
*array -> data[]
*temp_array -> data[]
Assuming the object (i.e. this) is in the CPU cache, prior to the change you had one cache miss: take either of the pointers from the cache (cache hit) and access the cold data (cache miss).
The object after the change:
**array -> * -> data[]
* -> data[]
Now we have to access the pointer to the pointer array (cache hit), then load the cold pointer (cache miss), then access the cold data (another cache miss).
Sure, that is the worst-case scenario described above, but it might well be what is happening.
The fix is quite easy: declare those pointer arrays inside the object as float *cost_array[2] instead of allocating them dynamically, i.e.:
*array[2] -> data[]
-> data[]
So in terms of storage and levels of indirection this corresponds exactly to the original data structure prior to the change and should behave much the same.
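A minimal sketch of that layout (member names follow the question; the single backing allocation and the stand-in work_on are illustrative):

#include <cstddef>
#include <vector>

void work_on(float *data) { data[0] += 1.0f; }   // stand-in for the real work

class solution
{
public:
    explicit solution(std::size_t n)
        : storage(4 * n)
    {
        // Index 0: permanent state, index 1: temporary state.
        cost_array[0]   = storage.data();
        cost_array[1]   = storage.data() + n;
        d_cost_array[0] = storage.data() + 2 * n;
        d_cost_array[1] = storage.data() + 3 * n;
    }

    // The two-element pointer arrays live inside the object itself,
    // so cost_array[temp] adds no extra heap indirection.
    float *cost_array[2];
    float *d_cost_array[2];

    void do_stuff(bool temp) { work_on(cost_array[temp]); }

private:
    std::vector<float> storage;   // one backing allocation for all four arrays
};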
Well, after a full year of programming and only knowing of arrays, I was made aware of the existence of vectors (by some members of StackOverflow on a previous post of mine). I did a load of researching and studying them on my own and rewrote an entire application I had written with arrays and linked lists using vectors. At this point, I'm not sure if I'll still use arrays, because vectors seem to be more flexible and efficient. With their ability to grow and shrink in size automatically, I don't know if I'll be using arrays as much. At this point, the only advantage I personally see is that arrays are much easier to write and understand. The learning curve for arrays is nothing, whereas there is a small learning curve for vectors. Anyway, I'm sure there's probably a good reason for using arrays in some situations and vectors in others; I was just curious what the community thinks. I'm entirely a novice, so I assume that I'm just not well-informed enough on the strict usages of either.
And in case anyone is even remotely curious, this is the application I'm practicing using vectors with. It's really rough and needs a lot of work: https://github.com/JosephTLyons/Joseph-Lyons-Contact-Book-Application
A std::vector manages a dynamic array. If your program needs an array that changes its size dynamically at run-time, then you would end up writing code to do all the things a std::vector does, but probably much less efficiently.
What the std::vector does is wrap all that code up in a single class so that you don't need to keep writing the same code to do the same stuff over and over.
Accessing the data in a std::vector is no less efficient than accessing the data in a dynamic array because the std::vector functions are all trivial inline functions that the compiler optimizes away.
If, however, you need a fixed size, then you can get slightly more efficient than a std::vector with a raw array. However, you won't lose anything by using a std::array in those cases.
The places I still use raw arrays are like when I need a temporary fixed-size buffer that isn't going to be passed around to other functions:
// some code
{ // new scope for temporary buffer
    char buffer[1024];                 // buffer
    file.read(buffer, sizeof(buffer)); // use buffer
} // buffer is destroyed here
But I find it hard to justify ever using a raw dynamic array over a std::vector.
This is not a full answer, but one thing I can think of is that the "ability to grow and shrink" is not such a good thing if you know what you want. For example: assume you want to store 1000 objects, but the memory will be filled at a rate that causes the vector to grow repeatedly. The overhead you get from growing is costly, when you could simply define a fixed-size array.
Generally speaking: if you use an array over a vector, you have more power in your hands, meaning no "background" function calls you don't actually need (resizing), and no extra memory reserved for things you don't use (the vector's size bookkeeping, spare capacity...).
Additionally, using memory on the stack (array) is faster than heap (vector*) as shown here
*As shown here, it's not entirely precise to say vectors reside on the heap, but they certainly hold more memory on the heap than the array (which holds none on the heap).
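A small sketch of the difference the growth makes (sizes are illustrative):

#include <array>
#include <cstddef>
#include <vector>

int main()
{
    // Fixed-size array: one block of automatic storage, never reallocated.
    std::array<int, 1000> fixed{};
    for (std::size_t i = 0; i < fixed.size(); ++i)
        fixed[i] = static_cast<int>(i);

    // Vector filled element by element: it grows geometrically, so these
    // 1000 push_backs trigger several reallocations plus copies of the
    // existing elements, unless reserve() is called first.
    std::vector<int> grown;
    // grown.reserve(1000);              // would avoid the reallocations
    for (int i = 0; i < 1000; ++i)
        grown.push_back(i);
}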
One reason is that if you have a lot of really small structures, small fixed length arrays can be memory efficient.
compare
struct point
{
    float coords[4];
};
with
struct point
{
    std::vector<float> coords;
};
Alternatives include std::array for cases like this. Also, std::vector implementations will over-allocate, meaning that if you resize to 4 slots, you might have memory allocated for 16 slots.
Furthermore, the memory locations will be scattered and hard to predict, killing performance - using an exceptionally large number of std::vectors may also lead to memory fragmentation issues, where new starts failing.
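For a rough feel of the per-object overhead, a small sketch (the exact numbers are implementation-dependent):

#include <array>
#include <cstdio>
#include <vector>

struct point_raw { float coords[4]; };               // 16 bytes of floats, nothing else
struct point_std { std::array<float, 4> coords; };   // typically the same 16 bytes
struct point_vec { std::vector<float> coords; };     // typically 24 bytes of bookkeeping
                                                     // plus a separate heap allocation

int main()
{
    std::printf("point_raw: %zu bytes\n", sizeof(point_raw));
    std::printf("point_std: %zu bytes\n", sizeof(point_std));
    std::printf("point_vec: %zu bytes (heap data not included)\n", sizeof(point_vec));
}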
I think this question is best answered flipped around:
What advantages does std::vector have over raw arrays?
I think this list is more easily enumerable (not to say this list is comprehensive):
Automatic dynamic memory allocation
Proper stack, queue, and sort implementations attached
Integration with C++11-related syntactic features, such as iterators
If you aren't using such features there's not any particular benefit to std::vector over a "raw array" (though, similarly, in most cases the downsides are negligible).
Despite me saying this, for typical user applications (i.e. running on windows/unix desktop platforms) std::vector or std::array is (probably) typically the preferred data structure because even if you don't need all these features everywhere, if you're already using std::vector anywhere else you may as well keep your data types consistent so your code is easier to maintain.
However, since at its core std::vector simply adds functionality on top of "raw arrays", I think it's important to understand how arrays work in order to fully take advantage of std::vector or std::array (knowing when to use std::array being one example), so you can reduce the "carbon footprint" of std::vector.
Additionally, be aware that you are going to see raw arrays when working with
Embedded code
Kernel code
Signal processing code
Cache efficient matrix implementations
Code dealing with very large data sets
Any other code where performance really matters
The lesson shouldn't be to freak out and say "must std::vector all the things!" when you encounter this in the real world.
One of the powerful features of C++ is that often you can write a class (or struct) that exactly models the memory layout required by a specific protocol, then aim a class-pointer at the memory you need to work with to conveniently interpret or assign values. For better or worse, many such protocols often embed small fixed sized arrays.
There's a decades-old hack for putting an array of 1 element (or even 0 if your compiler allows it as an extension) at the end of a struct/class, aiming a pointer to the struct type at some larger data area, and accessing array elements off the end of the struct based on prior knowledge of the memory availability and content (if reading before writing) - see What's the need of array with zero elements?
Embedding arrays can localise memory access requirements, improving cache hits and therefore performance.
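A minimal sketch of that trailing-array trick, in its 1-element form (strictly speaking, indexing past the declared bound is undefined behaviour in standard C++, but this is exactly the decades-old hack described above; the message layout is illustrative):

#include <cstdio>
#include <cstdlib>

struct message
{
    int  length;      // number of payload bytes that actually follow
    char payload[1];  // more bytes deliberately live past the end of the struct
};

int main()
{
    const int n = 16;
    // One allocation holds the header plus n payload bytes.
    auto *msg = static_cast<message*>(std::malloc(sizeof(message) + n - 1));
    msg->length = n;
    for (int i = 0; i < n; ++i)
        msg->payload[i] = static_cast<char>('a' + i);   // indexes past [0] on purpose

    std::printf("%.*s\n", msg->length, msg->payload);
    std::free(msg);
}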
I have a program that uses dynamic programming to calculate some information. The problem is that, theoretically, the memory used grows exponentially. Some filters that I use limit this space, but for a big input they still can't prevent my program from running out of RAM.
The program is running on 4 threads. When I run it with a really big input, I noticed that at some point the program starts to use swap memory, because my RAM is not big enough. The consequence of this is that my CPU usage decreases from about 380% to 15% or lower.
There is only one variable that uses the memory, which is the following data structure:
Edit (added type) with CLN library:
#include <utility>
#include <tbb/concurrent_hash_map.h>
#include <cln/integer.h>

class My_Map {
    typedef std::pair<double, short> key;
    typedef cln::cl_I value;
public:
    tbb::concurrent_hash_map<key, value>* map;

    My_Map()  { map = new tbb::concurrent_hash_map<key, value>(); }
    ~My_Map() { delete map; }

    // some functions for operations on the map
};
In my main program I am using this data structure as a global variable:
My_Map* container = new My_Map();
Question:
Is there a way to avoid the shifting of memory between swap and RAM? I thought pushing all the memory onto the heap would help, but it seems not to. So I don't know whether it is possible to, say, make full use of the swap memory, or something else. This shifting of memory alone costs a lot of time, and the CPU usage decreases dramatically.
If you have 1 GB of RAM and you have a program that uses up 2 GB of RAM, then you're going to have to find somewhere else to store the excess data, obviously. The default OS way is to swap, but the alternative is to manage your own 'swapping' by using a memory-mapped file.
You open a file and allocate a virtual memory block in it, then you bring pages of the file into RAM to work on. The OS manages this for you for the most part, but you should think about your memory usage so that, as far as possible, you finish working with the blocks that are currently in memory before touching new ones.
On Windows you use CreateFileMapping(); on Linux and Mac you use mmap().
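A minimal POSIX sketch of that approach (the file name, size, and error handling are illustrative; on Windows the equivalent calls are CreateFileMapping()/MapViewOfFile()):

#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    const std::size_t bytes = std::size_t(1) << 30;     // 1 GiB of backing store
    int fd = open("backing.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, bytes) != 0)
        return 1;

    // Map the file; the OS pages blocks in and out on demand.
    void *base = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED)
        return 1;

    double *data = static_cast<double*>(base);
    data[0] = 3.14;                                     // touching a page faults it in

    munmap(base, bytes);
    close(fd);
}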
The OS is working properly - it doesn't distinguish between stack and heap when swapping - it pages out whatever you seem not to be using and loads whatever you ask for.
There are a few things you could try:
consider whether myType can be made smaller - e.g. using int8_t or even width-appropriate bitfields instead of int, using pointers to pooled strings instead of worst-case-length character arrays, using offsets into arrays where they're smaller than pointers, etc. If you show us the type, maybe we can suggest things.
think about your paging - if you have many objects on one memory page (likely 4k), they will all need to stay in memory if any one of them is being used, so try to get objects that will be used around the same time onto the same memory page - this may involve hashing to small arrays of related myType objects, or even moving all your data into a packed array if possible (binary searching can be pretty quick anyway). Naively used hash tables tend to scatter accesses across memory, because similar objects are put in completely unrelated buckets.
serialisation/deserialisation with compression is a possibility: instead of letting the OS swap out full myType memory, you may be able to proactively serialise the objects into a more compact form, then deserialise them only when needed
consider whether you need to process all the data simultaneously... if you can batch up the work in such a way that you get all of "group A" out of the way using less memory, then you can move on to "group B"
UPDATE now you've posted your actual data types...
Sadly, using short might not help much, because sizeof(key) will be 16 anyway due to the alignment of the double; if you don't need the precision, you could consider float? Another option would be to create an array of separate maps...
tbb::concurrent_hash_map<double,value> map[65536];
You can then select the map with map[my_short] and look up my_double in it (concurrent_hash_map uses accessors rather than operator[]). It could be better or worse, but it is easy to try, so you might as well benchmark it....
For cl_I a 2-minute dig suggests the data's stored in a union - presumably word is used for small values and one of the pointers when necessary... that looks like a pretty good design - hard to improve on.
If numbers tend to repeat a lot (a big if), you could experiment with e.g. keeping a registry of big cl_Is with a bi-directional mapping to packed integer ids, which you'd store in My_Map::map - fussy though. To explain: say you get 987123498723489 - you push_back it onto a vector<cl_I>, then in a hash_map<cl_I, int> map 987123498723489 to that index (i.e. vector.size() - 1). Keep going as new numbers are encountered. You can always map from an int id back to a cl_I using direct indexing in the vector, and the other way is an O(1) amortised hash table lookup.
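A minimal sketch of such a registry (std::map is used for the reverse lookup so that no custom hash for cl_I is needed; it relies on cl_I's comparison operators, and all names are illustrative):

#include <map>
#include <vector>
#include <cln/integer.h>

class big_int_registry
{
public:
    // Returns the packed id for value, registering it on first sight.
    int id_for(const cln::cl_I& value)
    {
        auto it = ids.find(value);
        if (it != ids.end())
            return it->second;
        values.push_back(value);
        const int id = static_cast<int>(values.size()) - 1;
        ids[value] = id;
        return id;
    }

    // Direct indexing handles the id -> value direction.
    const cln::cl_I& value_for(int id) const { return values[id]; }

private:
    std::vector<cln::cl_I> values;    // id -> cl_I
    std::map<cln::cl_I, int> ids;     // cl_I -> id
};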
I have a little problem here. I write C++ code to create an array, but when I want to set the array size to 100,000,000 or more I get an error.
This is my code:
int i=0;
double *a = new double[n*n];
This part is very important for my project.
When you think you need an array of 100,000,000 elements, what you actually need is a different data structure that you probably have never heard of before. Maybe a hash map, or maybe a sparse matrix.
If you tell us more about the actual problem you are trying to solve, we can provide better help.
In general, the only reason that would fail would be due to lack of memory/memory fragmentation/available address space. That is, trying to allocate 800MB of memory. Granted, I have no idea why your system's virtual memory can't handle that, but maybe you allocated a bunch of other stuff. It doesn't matter.
Your alternatives are tricks like memory-mapped files, sparse arrays, and so forth instead of an explicit C-style array.
If you do not have sufficient memory, you may need to use a file to store your data and process it in smaller chunks.
I don't know whether IMSL provides what you are looking for; however, if you want to work on smaller chunks, you might devise an algorithm that calls IMSL functions on these small chunks and later merges the results. For example, you can do matrix multiplication by combining multiplications of sub-matrices, as sketched below.
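A minimal sketch of the blocked multiplication (plain loops stand in for whatever routine multiplies each block; sizes are illustrative):

#include <algorithm>
#include <cstddef>
#include <vector>

// C += A * B for n x n row-major matrices, processed in bs x bs blocks, so
// only a few blocks need to be resident in fast memory at any one time.
void blocked_multiply(const std::vector<double>& A,
                      const std::vector<double>& B,
                      std::vector<double>& C,
                      std::size_t n, std::size_t bs)
{
    for (std::size_t i0 = 0; i0 < n; i0 += bs)
        for (std::size_t k0 = 0; k0 < n; k0 += bs)
            for (std::size_t j0 = 0; j0 < n; j0 += bs)
                // Multiply the (i0,k0) block of A by the (k0,j0) block of B
                // and accumulate into the (i0,j0) block of C.
                for (std::size_t i = i0; i < std::min(i0 + bs, n); ++i)
                    for (std::size_t k = k0; k < std::min(k0 + bs, n); ++k)
                        for (std::size_t j = j0; j < std::min(j0 + bs, n); ++j)
                            C[i * n + j] += A[i * n + k] * B[k * n + j];
}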