Finding the size of a large boost::unordered_map - c++

I would like to find the size of a boost::unordered_map I have that maps a std::string to a pointer to a class. I am doing sizeof(unordered_map_var). Is that right? Would it give me the space the map occupies, including the housekeeping it takes up? I want to measure it to compare it against a std::map that will hold the same data, which I would also measure with sizeof(std_map_var). Knowing both sizes would tell me how much storage each occupies and, together with speed, which is the better alternative to go with.
Please let me know if my way of calculating the sizes is right, whether it will give me the actual/correct sizes, and whether it will help me make the right decision.
Edit 1:
If my way of getting the size is wrong, please let me know how to get the correct size (inclusive of housekeeping).
TIA
-R

The sizeof() operator returns only the size of an object, but not the space it occupies on the heap (dynamically allocated memory). Since maps and strings may very well allocate memory on the heap, this will not help you.
There is no simple way to measure the total memory footprint of certain parts of your program. However, it is not impossible. One option is to use a custom allocator, which records its allocations and which you use for everything related to the entities you want to measure (the map itself and its contents, including the strings).
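For illustration, here is a minimal sketch of such a counting allocator (C++11 or later; the names g_allocated and counting_allocator are invented for this example, and error handling is omitted). The same allocator type can be passed to boost::unordered_map, which also takes an allocator as its last template parameter:

#include <cstddef>
#include <iostream>
#include <map>
#include <new>
#include <string>

static std::size_t g_allocated = 0;   // running total of live heap bytes

template <typename T>
struct counting_allocator {
    using value_type = T;
    counting_allocator() = default;
    template <typename U> counting_allocator(const counting_allocator<U>&) {}

    T* allocate(std::size_t n) {
        g_allocated += n * sizeof(T);  // record every allocation
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n) {
        g_allocated -= n * sizeof(T);
        ::operator delete(p);
    }
};

template <typename T, typename U>
bool operator==(const counting_allocator<T>&, const counting_allocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const counting_allocator<T>&, const counting_allocator<U>&) { return false; }

int main() {
    // Use the allocator for the map nodes and for the strings it holds,
    // so both kinds of heap usage are counted.
    using tracked_string =
        std::basic_string<char, std::char_traits<char>, counting_allocator<char>>;
    std::map<tracked_string, int, std::less<tracked_string>,
             counting_allocator<std::pair<const tracked_string, int>>> m;
    m[tracked_string("a key long enough to defeat the small-string optimisation")] = 1;
    std::cout << "heap bytes: " << g_allocated << ", sizeof(m): " << sizeof(m) << "\n";
}

The total footprint is then sizeof(the map object) plus whatever the allocator has recorded.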

You're simply not going to be able to reliably calculate the amount of space used up by your map. There are types and space you have no access to.
What you should do is ask a totally different question having to do with the problem you're trying to solve where you think this is necessary.

Related

C++: alternative implementation to avoid shifting between RAM and swap memory

I have a program that uses dynamic programming to calculate some information. The problem is that, in theory, the memory it uses grows exponentially. Some filters that I use limit this space, but for a big input they still can't prevent my program from running out of RAM.
The program runs on 4 threads. When I run it with a really big input, I noticed that at some point the program starts to use swap space, because my RAM is not big enough. The consequence is that my CPU usage drops from about 380% to 15% or lower.
There is only one variable that uses this memory, the following data structure:
Edit (added the type; it uses the CLN library):
#include <utility>                      // std::pair
#include <cln/integer.h>                // cln::cl_I
#include <tbb/concurrent_hash_map.h>

class My_Map {
    typedef std::pair<double, short> key;
    typedef cln::cl_I value;
public:
    tbb::concurrent_hash_map<key, value>* map;
    My_Map() { map = new tbb::concurrent_hash_map<key, value>(); }
    ~My_Map() { delete map; }
    // some functions for operations on the map
};
In my main program I use this data structure as a global variable:
My_Map* container = new My_Map();
Question:
Is there a way to avoid this shifting of memory between swap and RAM? I thought pushing all the memory onto the heap would help, but it seems not to. So I don't know whether it is possible to fully use the swap memory, or to do something else. This shifting of memory alone costs a lot of time, and the CPU usage drops dramatically.
If you have 1 GB of RAM and a program that uses 2 GB, then you're going to have to find somewhere else to store the excess data, obviously. The default OS way is to swap, but the alternative is to manage your own "swapping" by using a memory-mapped file.
You open a file and allocate a virtual memory block in it, then you bring pages of the file into RAM to work on. The OS manages this for you for the most part, but you should think about your memory usage: if you can, keep your accesses to the same blocks together while those blocks are in memory.
On Windows you use CreateFileMapping(); on Linux and Mac you use mmap().
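A minimal POSIX sketch of the idea (Linux/Mac; the file name is arbitrary and error handling is kept to a bare minimum):

#include <cstddef>
#include <cstdio>
#include <fcntl.h>      // open
#include <sys/mman.h>   // mmap, munmap
#include <unistd.h>     // ftruncate, close

int main() {
    const std::size_t size = std::size_t(1) << 30;  // 1 GB of file-backed storage

    // Create a backing file and grow it to the desired size.
    int fd = open("swapfile.bin", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, size) != 0) { perror("setup"); return 1; }

    // Map the file into the address space; the OS pages blocks of it
    // in and out on demand, much like its own swapping would.
    void* base = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    // Use the mapping as ordinary memory.
    int* data = static_cast<int*>(base);
    data[0] = 42;

    munmap(base, size);
    close(fd);
}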
The OS is working properly: it doesn't distinguish between stack and heap when swapping; it pages out whatever you seem not to be using and loads whatever you ask for.
There are a few things you could try:
consider whether myType can be made smaller - e.g. using int8_t or even width-appropriate bitfields instead of int, using pointers to pooled strings instead of worst-case-length character arrays, or using offsets into arrays where they're smaller than pointers, etc. If you show us the type maybe we can suggest things; there's a small before/after sketch of this kind of packing after the list.
think about your paging - if you have many objects on one memory page (likely 4k) they will need to stay in memory if any one of them is being used, so try to get objects that will be used around the same time onto the same memory page - this may involve hashing to small arrays of related myType objects, or even moving all your data into a packed array if possible (binary searching can be pretty quick anyway). Naively used hash tables tend to flay memory because similar objects are put in completely unrelated buckets.
serialisation/deserialisation with compression is a possibility: instead of letting the OS swap out full myType memory, you may be able to proactively serialise the objects into a more compact form, then deserialise them only when needed
consider whether you need to process all the data simultaneously... if you can batch up the work in such a way that you get all of "group A" out of the way using less memory, then you can move on to "group B"
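To make the first point concrete, here is a hypothetical before/after of shrinking a record type (every field name here is invented for the sketch; the real myType may look nothing like this):

#include <cstdint>

struct RecordBefore {
    int  kind;        // only ever holds 0-15
    int  flags;       // only 3 flags are used
    char name[64];    // worst-case-length buffer
};                    // typically 72 bytes

struct RecordAfter {
    std::uint8_t  kind  : 4;    // width-appropriate bitfields
    std::uint8_t  flags : 3;
    std::uint32_t name_offset;  // offset into a shared string pool
};                              // typically 8 bytes

static_assert(sizeof(RecordAfter) < sizeof(RecordBefore), "packing pays off");

Roughly nine times less data per record means correspondingly fewer pages touched.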
UPDATE: now you've posted your actual data types...
Sadly, using short might not help much, because sizeof(key) needs to be 16 anyway for alignment of the double; if you don't need the precision, you could consider float. Another option would be to create an array of separate maps...
tbb::concurrent_hash_map<double,value> map[65536];
You can then index to map[my_short] and look the double up in that map. It could be better or worse, but it is easy to try, so you might as well benchmark...
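Since concurrent_hash_map has no operator[], the per-map lookup would go through an accessor; a hedged sketch of what that indexing could look like (store() is an invented helper):

#include <tbb/concurrent_hash_map.h>
#include <cln/integer.h>

typedef tbb::concurrent_hash_map<double, cln::cl_I> inner_map;
inner_map maps[65536];   // one map per possible short value

void store(short s, double d, const cln::cl_I& v) {
    inner_map::accessor acc;  // holds a write lock on the element
    maps[static_cast<unsigned short>(s)].insert(acc, d);  // cast handles negative shorts
    acc->second = v;
}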
For cl_I, a 2-minute dig suggests the data is stored in a union; presumably word is used for small values and one of the pointers when necessary. That looks like a pretty good design, hard to improve on.
If numbers tend to repeat a lot (a big if), you could experiment with e.g. keeping a registry of big cl_Is with a bi-directional mapping to packed integer ids, which you'd store in My_Map::map; fussy though. To explain: say you get 987123498723489. You push_back it onto a vector<cl_I>, then in a hash_map<cl_I, int> map 987123498723489 to that index (i.e. vector.size() - 1). Keep going as new numbers are encountered. You can always map from an int id back to a cl_I by direct indexing into the vector, and the other direction is an O(1) amortised hash-table lookup.
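A sketch of that registry (hypothetical names throughout; std::map stands in for the hash map here because cl_I has comparison operators but no standard hasher, making the value-to-id lookup O(log n) instead of amortised O(1)):

#include <cln/integer.h>
#include <map>
#include <vector>

struct BigIntRegistry {
    std::vector<cln::cl_I> by_id;    // id -> value: direct indexing
    std::map<cln::cl_I, int> to_id;  // value -> id

    // Return the id for v, registering it first if it's new.
    int intern(const cln::cl_I& v) {
        std::map<cln::cl_I, int>::iterator it = to_id.find(v);
        if (it != to_id.end()) return it->second;
        by_id.push_back(v);
        int id = (int)by_id.size() - 1;
        to_id[v] = id;
        return id;
    }
    const cln::cl_I& value(int id) const { return by_id[id]; }
};

My_Map::map would then store the small int ids instead of full cl_I values.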

Dynamic allocation of memory at a given memory address

OK, this might seem odd but please bear with me; I'm just a beginner. Over the past few days I have been trying to develop a general-purpose hash function for maintaining an associative array with a hash table, using all the best parts of hash functions like RS, JS, ELF, etc. to reduce hash collisions. But now the problem is that, even to avoid an appreciable number of collisions, I have to use unsigned long values with at least 6 digits.
Let's just assume I need to map names of students to their marks, so I maintain an integer array for the marks.
Now back to my question.
The idea I thought of was to use these values as a few lower-order bits of an actual memory address, and then dynamically allocate memory large enough to store an integer for the marks obtained. This process is repeated for each new value added.
Now, assuming I somehow managed to avoid all memory locations that would be reserved by the OS: is there any viable way in C++ to dynamically allocate memory at an address we choose, rather than letting the new operator pick the location and return a pointer to it? (I'm using gcc.)
It is platform-dependent. On Unix systems you might try mmap(). The Windows equivalent is VirtualAlloc(). But there is no guarantee, since the address might already be in use.
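For example, on 64-bit Linux you can pass mmap() a hint address (a minimal sketch; the address used here is arbitrary, and without MAP_FIXED the kernel treats it only as a hint, so you must check the result):

#include <cstdio>
#include <sys/mman.h>

int main() {
    // Request an anonymous page at (or near) a chosen address.
    void* wanted = reinterpret_cast<void*>(0x600000000000);
    void* got = mmap(wanted, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (got == MAP_FAILED) { perror("mmap"); return 1; }
    if (got != wanted)
        std::printf("kernel placed the mapping elsewhere: %p\n", got);

    int* marks = static_cast<int*>(got);  // e.g. store a student's marks here
    marks[0] = 95;
    munmap(got, 4096);
}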

What does "STL allocate memory internally" means?

I was reading this answer and, maybe because I have never encountered these words before, I don't understand what the user meant in the first point of that answer. Can someone use simpler words, or an example, to show what that statement means?
When you use something like vector or map, it belongs to the STL (Standard Template Library), and you don't need to allocate memory yourself as you do with plain arrays. In real programs, fixed-size arrays are often not sufficient, because you cannot determine the size in advance.
STL containers allocate memory internally as you add elements to them, so the memory management is handled well. [If users allocate manually, the memory might not be enough if they allocate too little, or it gets wasted if they allocate too much.]
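For example, with a std::vector you never call new or delete yourself:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;    // no size given up front
    for (int i = 0; i < 100; ++i)
        v.push_back(i);    // the vector allocates more memory internally
                           // whenever it runs out of room
    std::cout << "size: " << v.size()              // elements stored: 100
              << ", capacity: " << v.capacity()    // room already allocated
              << "\n";
}   // the memory is freed automatically when v goes out of scope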

Stack overflow with large array but not with equally large vector?

I ran into a funny issue today working with large data structures. I was initially using a vector to store upwards of 1,000,000 ints, but later decided I didn't actually need the dynamic functionality of the vector (I was reserving 1,000,000 spots as soon as it was declared anyway) and that it would be beneficial, instead, to be able to add values any place in the data structure. So I switched it to an array and BAM: stack overflow. I'm guessing this is because declaring the size of the array at compile time puts it on the stack, while using a dynamic vector placed it on the heap (which I'm guessing is larger?).
So what's the right answer here? Move back to a dynamic memory system just so it gets put on the heap? Increase the size of the stack? Or am I way off base on the whole thing here...?
Thanks!
I initially was using a vector to store upwards of 1000000 ints
Good idea.
but later decided I didn't actually need the dynamic functionality of the vector (I was reserving 1000000 spots as soon as it was declared anyway)
Not such a good idea. You did need it.
and it would be beneficial to, instead, be able to add values any place in the data structure.
I don't follow.
I'm guessing this is because declaring the size of the array at compile time puts it on the stack, while using a dynamic vector placed it on the heap (which I'm guessing is larger?).
Much. The call stack is typically on the order of 1 MB-2 MB in size by default. Your "heap" (free store) is only really bounded by your available RAM.
So what's the right answer here? Move back to a dynamic memory system just so it gets put on the heap?
Yes.
[edit: Joachim's right — static is another possible answer.]
Increase the size of the stack?
You could, but even if you could stretch 4 MB out of it, you've left yourself no wiggle room for other local variables. Best to use dynamic memory; that's the appropriate thing to do.
Or am I way off base on the whole thing here...?
No.
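To make the alternatives concrete (a sketch; 1,000,000 ints is about 4 MB, comfortably over a typical default stack limit):

#include <vector>

int main() {
    // int a[1000000];            // ~4 MB on the call stack: likely overflow
    static int b[1000000];        // static storage: fine, but fixed size
    std::vector<int> v(1000000);  // free store ("heap"): fine, and resizable
    b[0] = v[0] = 42;
}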

What is the best way to treat large arrays of numbers in C++?

I need to deal with large arrays of floats (more than 200,000 numbers) and perform some maths with these arrays.
What do you suggest for handling these arrays so that I do not get any stack overflow problems?
UPDATE: I want to do simple and complex maths operations (sums, products, sin, cos, arctan).
Plain numerical data you need to sequentially operate on?
std::valarray<double>
If profiling shows this is slowing you down, look for ways to make it faster, for example with std::valarray<double>::resize() (yes, there's no reserve(), unfortunately).
Why std::valarray<double> for numerical data? If you want to perform an operation on every element, just call
std::valarray<double>::apply(somefunction)
For more information, see a C++ reference.
If you want to be able to reserve(), you'll need std::vector, which is also fine but doesn't have overloads for the math functions you may want to use.
EDIT: This is of course assuming you have enough memory to fit all your arrays into std::valarrays. If not, split the 200,000 numbers up so that only so many floats are in memory at the same time.
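A short sketch (C++11) of what the valarray approach can look like, assuming the data fits in memory:

#include <cstddef>
#include <iostream>
#include <valarray>

int main() {
    std::valarray<double> a(200000);   // 200,000 doubles, allocated on the heap
    for (std::size_t i = 0; i < a.size(); ++i)
        a[i] = 0.001 * i;

    // <valarray> overloads the math functions element-wise...
    std::valarray<double> b = std::sin(a) * std::cos(a);

    // ...and apply() runs an arbitrary function on every element.
    std::valarray<double> c = a.apply([](double x) { return x * x; });

    std::cout << "sum: " << b.sum() << ", c[10]: " << c[10] << "\n";
}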
If your data are sparse, you could probably use Boost's sparse matrix (http://www.boost.org/doc/libs/1_41_0/libs/numeric/ublas/doc/matrix_sparse.htm) to represent your data and significantly reduce memory requirements.
Otherwise I would suggest looking into ways that you can split the data into chunks and work on one chunk in memory, then store that state to file and repeat.
I suggest you process them in chunks of 10,000 at a time and then combine the results. It depends on what operations you are doing.
Depends on what you want to do with them.
Also, as chris said in the comments, dynamically allocate memory for your array (to get memory from the heap) and avoid making it a local variable (which would be allocated on the stack).