Create array up to 10^12 - C++

I tried to create an array with up to 10^12 elements in C++, but I can only make an array of size up to 1,000,001, i.e.
long long int dp[1000001];
But I want to store up to 10^12 values in the array. Any idea how I can implement this in C++?

First, you must realize that the size of that array would be nearly 8 TB (10^12 elements of 8 bytes each). Does your computer have that much memory? Probably not. In that case, you cannot store that much data in memory, and practically cannot have such a large array.
Any Idea how can I implement this
Instead of an array in memory, you could store the data in the file system... assuming you have 8 TB of free storage. You can use a paging mechanism to read and write small pieces of the file at a time.
The simplest way to implement that in C++ is to use operating system functionality to map the file into memory; that way the operating system takes care of the paging. There is no standard way to map files into memory in C++, so the first step is to figure out which operating system you're using. The POSIX standard specifies the mmap function for this purpose.
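For illustration, here is a minimal POSIX sketch of that idea: a file on disk is mapped and then used like an ordinary array, with the OS paging pieces in and out on demand. The file name and element count are made up for the example, and error handling is kept to a minimum; the full 10^12-element case would need 8 TB of free disk and a 64-bit OS.

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    const size_t count = 1000000000ULL;            // 10^9 elements for the sketch (10^12 would need 8 TB)
    const size_t bytes = count * sizeof(int64_t);

    int fd = open("dp.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, bytes) != 0) { perror("ftruncate"); return 1; }   // reserve the file space

    int64_t* dp = static_cast<int64_t*>(
        mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (dp == MAP_FAILED) { perror("mmap"); return 1; }

    dp[0] = 42;                                    // use it like an ordinary array;
    dp[count - 1] = 7;                             // only pages you touch ever occupy RAM

    munmap(dp, bytes);
    close(fd);
    return 0;
}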
Before doing that however, I recommend considering whether you actually need to store that much data. Perhaps you need a smarter algorithm instead.

Related

C++: alternative implementation to avoid shifting between RAM and swap memory

I have a program that uses dynamic programming to calculate some information. The problem is that, in theory, the memory used grows exponentially. Some filters that I use limit this space, but for a big input even they can't prevent my program from running out of RAM.
The program runs on 4 threads. When I run it with a really big input, I noticed that at some point the program starts to use swap memory, because my RAM is not big enough. The consequence of this is that my CPU usage decreases from about 380% to 15% or lower.
There is only one variable that uses the memory, which is the following data structure:
Edit (added type) with CLN library:
#include <tbb/concurrent_hash_map.h>
#include <cln/integer.h>
#include <utility>

class My_Map {
    typedef std::pair<double, short> key;
    typedef cln::cl_I value;
public:
    tbb::concurrent_hash_map<key, value>* map;
    My_Map() { map = new tbb::concurrent_hash_map<key, value>(); }  // was <myType>, a placeholder that doesn't compile
    ~My_Map() { delete map; }
    // some functions for operations on the map
};
In my main program I am using this data structure as a global variable:
My_Map* container = new My_Map();
Question:
Is there a way to avoid this shifting of memory between swap and RAM? I thought pushing all the memory onto the heap would help, but it seems not to. So I don't know if it is possible to somehow make full use of the swap memory or something else. This shifting of memory alone costs a lot of time, and the CPU usage decreases dramatically.
If you have 1 GB of RAM and you have a program that uses up 2 GB, then you're going to have to find somewhere else to store the excess data, obviously. The default OS way is to swap, but the alternative is to manage your own 'swapping' by using a memory-mapped file.
You open a file and allocate a virtual memory block in it, then you bring pages of the file into RAM to work on. The OS manages this for you for the most part, but you should think about your memory usage so not to try to keep access to the same blocks while they're in memory if you can.
On Windows you use CreateFileMapping(); on Linux and macOS you use mmap().
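For the Windows side, a rough sketch using the Win32 file-mapping calls (the file name and size are illustrative, error handling is trimmed, and a 1 GiB view assumes a 64-bit build; the POSIX mmap version appears earlier on this page):

#include <windows.h>
#include <cstdint>

int main() {
    const uint64_t bytes = 1ULL << 30;   // 1 GiB for the sketch

    HANDLE file = CreateFileA("data.bin", GENERIC_READ | GENERIC_WRITE,
                              0, nullptr, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    // PAGE_READWRITE with a maximum size grows the file to that size.
    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READWRITE,
                                        static_cast<DWORD>(bytes >> 32),
                                        static_cast<DWORD>(bytes), nullptr);
    if (!mapping) return 1;

    int64_t* data = static_cast<int64_t*>(
        MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, bytes));
    if (!data) return 1;

    data[0] = 42;                        // the OS pages the file in and out for you

    UnmapViewOfFile(data);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}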
The OS is working properly - it doesn't distinguish between stack and heap when swapping - it pages you whatever you seem not to be using and loads whatever you ask for.
There are a few things you could try:
consider whether myType can be made smaller - e.g. using int8_t or even width-appropriate bitfields instead of int, using pointers to pooled strings instead of worst-case-length character arrays, using offsets into arrays where they're smaller than pointers, etc. If you show us the type, maybe we can suggest things; there is a packing sketch after this list.
think about your paging - if you have many objects on one memory page (likely 4 KB), they will all need to stay in memory if any one of them is being used, so try to get objects that will be used around the same time onto the same memory page. This may involve hashing to small arrays of related myType objects, or even moving all your data into a packed array if possible (binary searching can be pretty quick anyway). Naively used hash tables tend to thrash memory because similar objects are put in completely unrelated buckets.
serialisation/deserialisation with compression is a possibility: instead of letting the OS swap out full myType memory, you may be able to proactively serialise the objects into a more compact form, then deserialise them only when needed.
consider whether you need to process all the data simultaneously... if you can batch up the work such that you get all of "group A" out of the way using less memory, you can then move on to "group B".
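To make the first point concrete, here is a small sketch with invented field names (not the asker's actual type) showing how narrower types and bitfields can halve a record:

#include <cstdint>
#include <iostream>

struct Wide {                 // naive layout
    int    state;             // only ever holds 0..5
    int    flag;              // only ever holds 0 or 1
    double score;
};                            // typically 16 bytes

struct Packed {
    uint8_t state : 3;        // 3 bits cover 0..7
    uint8_t flag  : 1;
    float   score;            // if float precision suffices
};                            // typically 8 bytes

int main() {
    std::cout << sizeof(Wide) << " vs " << sizeof(Packed) << '\n';
}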
UPDATE: now that you've posted your actual data types...
Sadly, using short might not help much, because sizeof(key) needs to be 16 anyway for alignment of the double; if you don't need the precision, you could consider float. Another option would be to create an array of separate maps...
tbb::concurrent_hash_map<double,value> map[65536];
You can then index into map[my_short] and look up my_double in that shard (in practice through an accessor, since concurrent_hash_map has no operator[]; see the sketch below). It could be better or worse, but it's easy to try, so you might as well benchmark it...
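Roughly like this; the function and variable names are illustrative:

#include <tbb/concurrent_hash_map.h>
#include <cln/integer.h>

typedef tbb::concurrent_hash_map<double, cln::cl_I> Shard;
Shard shards[65536];                       // one map per possible short value

void store(short s, double d, const cln::cl_I& v) {
    Shard::accessor a;
    shards[static_cast<unsigned short>(s)].insert(a, d);   // find-or-insert the key
    a->second = v;                                         // write while holding the accessor's lock
}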
For cl_I, a 2-minute dig suggests the data is stored in a union - presumably word is used for small values and one of the pointers when necessary... that looks like a pretty good design - hard to improve on.
If numbers tend to repeat a lot (a big if), you could experiment with e.g. keeping a registry of big cl_Is with a bidirectional mapping to packed integer ids, which you'd store in My_Map::map - fussy though. To explain: say you get 987123498723489 - you push_back it onto a vector<cl_I>, then in a hash_map<cl_I, int> map 987123498723489 to that index (i.e. vector.size() - 1). Keep going as new numbers are encountered. You can always map from an int id back to a cl_I using direct indexing in the vector, and the other way is an O(1) amortised hash-table lookup.
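A rough sketch of that registry, assuming cl_I's comparison operators (a std::map stands in for the hash map, since hashing a cl_I would need a custom hasher; thread safety is omitted):

#include <cln/integer.h>
#include <map>
#include <vector>

std::vector<cln::cl_I> big_values;          // id -> value
std::map<cln::cl_I, int> ids;               // value -> id

int intern(const cln::cl_I& v) {
    std::map<cln::cl_I, int>::iterator it = ids.find(v);
    if (it != ids.end()) return it->second;           // already registered
    big_values.push_back(v);                          // first sighting: assign next id
    int id = static_cast<int>(big_values.size()) - 1;
    ids.insert(std::make_pair(v, id));
    return id;
}

const cln::cl_I& lookup(int id) { return big_values[id]; }  // direct indexing back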

C++ vector out of memory

I have a very large vector (millions of entries, 1024 bytes each). I am exceeding the maximum size of the vector (getting a std::bad_alloc exception). I am doing a recursive operation over the vector of items which requires accessing other elements in the vector. The operations need to be done quickly. I am trying to avoid writing to disk for speed reasons. Is there any other way to store this data that would not require writing to disk? If I have to write the data to disk, what would be the most ideal way to do it?
Edit, for a few more details:
The operation I am performing on the data set is generating a string recursively based on other data points in the vector. The data is sorted when it is read in. Data sets range from 50,000 to 50,000,000 entries.
The easiest way to solve this problem is to use STXXL. It's a reimplementation of the STL for large structures that transparently writes to disk when the data won't fit in memory.
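Something along these lines, based on STXXL's documented VECTOR_GENERATOR idiom; treat the exact typedef as an assumption and check the documentation for your version:

#include <stxxl/vector>

int main() {
    stxxl::VECTOR_GENERATOR<int>::result v;   // vector backed by disk
    for (int i = 0; i < 1000000; ++i)
        v.push_back(i);                       // pages spill to disk when RAM runs short
    return 0;
}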
Your problem cannot be solved as stated and clarified in the comments.
You have requested a way to have a contiguous in-memory buffer of 50,000,000 entries of size 1024 on a 32-bit system.
A 32-bit system has only 4,294,967,296 bytes of addressable memory. You are asking for 51,200,000,000 bytes, or 11.9 times the address space of your system.
If you don't require that your data be contiguous and memory-addressable, if you don't require that your data all be in memory at once, or if you relax other requirements, there may be an answer to your problem. E.g. some OSs expose access to a non-memory space of values that corresponds to RAM (there were ways on 8 GB 32-bit Windows systems to use more than 4 GB of total RAM) through some hacky interface or other.
But as stated, the answer is "no, you cannot do that".
Because your data must be contiguous, and you know how many elements you need to store, just create a std::vector and use the reserve() function to attempt to gain a contiguous block of memory of the required size.
There is very little overhead in storing a vector (just a few pointers to manage the beginning and end). This is as good as you'll be able to do.
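A minimal sketch of that reserve-first approach, with a stand-in 1024-byte element; reserve() either obtains the whole contiguous block or throws std::bad_alloc, so you find out immediately:

#include <iostream>
#include <new>
#include <vector>

struct Entry { char data[1024]; };   // stand-in for the 1024-byte element

int main() {
    std::vector<Entry> v;
    try {
        v.reserve(50000000);         // ~51.2 GB contiguous: needs a 64-bit build
    } catch (const std::bad_alloc&) {
        std::cerr << "cannot get a contiguous block that large\n";
        return 1;
    }
    return 0;
}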
If that fails:
add more memory to your machine (may not actually help, if you've run up against addressing or implementation constraints)
switch to a raw array
find a way to reduce the size of your elements
try to find a solution that can tackle the problem in small blocks
That is about 1 GB of memory (10^6 entries x 1024 bytes = approximately 10^9 bytes, or 1 GB). Ideally, a 32-bit machine can address up to 4 GB, so memory operations of that size should be possible.
To answer your question, first try a plain malloc() call to allocate 1 GB of memory. That should succeed without any error.
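That check is only a couple of lines:

#include <cstdio>
#include <cstdlib>

int main() {
    void* p = std::malloc(1024ULL * 1024 * 1024);   // 1 GiB in one block
    std::printf(p ? "got 1 GiB\n" : "allocation failed\n");
    std::free(p);
    return 0;
}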
Also, please paste the exact error message that you get while using the vector.

Saving and loading a large array in C++

I need to save a large 3D array of integers into a file, and load it again in C++.
It is 256*256*256 = 16777216 integers.
What is the best way to save this and load it again? I am mostly interested in a quick load time.
If the array is allocated in contiguous memory (i.e.: you don't allocate each dimension separately) - you can just dump the whole memory block to file. It takes as much as it takes, but that would be the least overhead (i.e.: call binary write on the whole chunk of data).
If you're saving on one system and loading on another, you might have issues with data representation; in that case you'd probably want to serialize the array and save each value in a controlled manner.
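A sketch of the raw-dump approach (error checks trimmed; as noted above, this is only safe when writer and reader agree on endianness and sizeof(int)):

#include <cstddef>
#include <cstdio>
#include <vector>

const std::size_t N = 256ULL * 256 * 256;   // 16,777,216 ints

void save(const std::vector<int>& a) {
    std::FILE* f = std::fopen("cube.bin", "wb");
    std::fwrite(a.data(), sizeof(int), N, f);   // one write for the whole block
    std::fclose(f);
}

std::vector<int> load() {
    std::vector<int> a(N);
    std::FILE* f = std::fopen("cube.bin", "rb");
    std::fread(a.data(), sizeof(int), N, f);    // one read straight into the buffer
    std::fclose(f);
    return a;
}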
You may be interested in Boost.Serialization, particularly if you (1) want the ability to easily store such data on disk, (2) want a coherent way to save more complex objects, and (3) want a solution that's portable.

How to create an array with size greater than C++ limits

I have a little problem here: I wrote C++ code to create an array, but when I set the array size to 100,000,000 or more I get an error.
This is my code:
int n = 10000;                    // assuming n = 10,000, so n*n = 100,000,000 (n was not shown in the snippet)
double *a = new double[n * n];    // 100,000,000 doubles, about 800 MB
This part is very important for my project.
When you think you need an array of 100,000,000 elements, what you actually need is a different data structure that you probably have never heard of before. Maybe a hash map, or maybe a sparse matrix.
If you tell us more about the actual problem you are trying to solve, we can provide better help.
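For example, if most of the 100,000,000 slots would hold a default value anyway, a hash map keyed by index stores only the entries actually used; a minimal sketch:

#include <cstdint>
#include <unordered_map>

std::unordered_map<uint64_t, double> a;        // a "sparse array"

double get(uint64_t i) {
    std::unordered_map<uint64_t, double>::const_iterator it = a.find(i);
    return it == a.end() ? 0.0 : it->second;   // default value for untouched slots
}

void set(uint64_t i, double v) { a[i] = v; }   // only touched slots consume memory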
In general, the only reason that would fail is lack of memory, memory fragmentation, or exhausted address space. That is, trying to allocate 800 MB of memory (100,000,000 doubles at 8 bytes each). Granted, I have no idea why your system's virtual memory can't handle that, but maybe you allocated a bunch of other stuff. It doesn't matter.
Your alternatives are tricks like memory-mapped files, sparse arrays, and so forth, instead of an explicit C-style array.
If you do not have sufficient memory, you may need to use a file to store your data and process it in smaller chunks.
I don't know if IMSL provides what you are looking for; however, if you want to work on smaller chunks you might devise an algorithm that calls IMSL functions with these small chunks and later merges the results. For example, you can do matrix multiplication by combining multiplications of sub-matrices, as in the sketch below.
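As an illustration of the chunking idea, here is tiled matrix multiplication: only three small tiles are hot at any moment, and in an out-of-core version each tile would be read from and written back to disk (the tiling structure is the point; the disk I/O is left out). C must be zero-initialized by the caller.

#include <algorithm>
#include <cstddef>
#include <vector>

// C += A*B computed tile by tile (all matrices n x n, row-major).
void blocked_multiply(const std::vector<double>& A,
                      const std::vector<double>& B,
                      std::vector<double>& C, std::size_t n, std::size_t tile) {
    for (std::size_t ii = 0; ii < n; ii += tile)
        for (std::size_t kk = 0; kk < n; kk += tile)
            for (std::size_t jj = 0; jj < n; jj += tile)
                // one tile's worth of work: fits in RAM even if n is huge
                for (std::size_t i = ii; i < std::min(ii + tile, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + tile, n); ++k)
                        for (std::size_t j = jj; j < std::min(jj + tile, n); ++j)
                            C[i * n + j] += A[i * n + k] * B[k * n + j];
}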

How many characters can an STL string hold?

I need to work with a series of characters. The number of characters can be up to 10^11.
In a usual array, that's not possible. What should I use?
I wanted to use the gets() function to read the string. But is this possible with STL containers?
If not, then what's the way?
Example:
input:
AMIRAHID
output: A.M.I.R.A.H.I.D
How can this be made possible if the number of characters is reduced to 10^10 on a 32-bit machine?
Thank you in advance.
Well, that's roughly 100 GB of data. No usual string class will be able to hold more than fits into your main memory. You might want to look at STXXL, an implementation of the STL that allows storing part of the data on disk.
If your machine has 10^11 bytes (about 93 GB) of memory, then it's probably a 64-bit machine, so std::string will work. Otherwise nothing will help you.
Edited answer for the edited question: in that case you don't really need to store the whole string in memory; you can store only a small part of it that fits into memory.
Just read every character from the input and write it to the output, emitting a dot between consecutive characters. Repeat until you get an EOF on the input. To increase performance you can read and write large chunks of the data, as long as each chunk still fits into memory.
Such algorithms are called online algorithms.
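The whole task then fits in a few lines that never hold more than one character at a time (reading from standard input for the sketch):

#include <cstdio>

int main() {
    int c = std::getchar();
    if (c != EOF && c != '\n') std::putchar(c);          // first character, no dot before it
    while ((c = std::getchar()) != EOF && c != '\n') {
        std::putchar('.');                               // dot between consecutive characters
        std::putchar(c);
    }
    std::putchar('\n');                                  // AMIRAHID -> A.M.I.R.A.H.I.D
    return 0;
}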
It is possible for an array that large to be created, but not on a 32-bit machine. Switching to the STL will likely not help, and is unnecessary.
You need to contemplate how much memory that is, and whether you have any chance of doing it at all.
10^11 bytes is roughly 100 gigabytes, which means you will need a 64-bit system (and compiler) to even be able to address it.
STL's strings support a max of max_size() characters, so the answer can change with the implementation.
A string suffers from the same problem as an array: it has to fit in memory.
10^11 characters would take up far more than 4 GB, so there is no hope of fitting them into memory on a 32-bit machine with its 4 GB address space. You either need to split up your data into smaller chunks and only load a bit of it at a time, or switch to 64-bit, in which case both arrays and strings should be able to hold the data (although it may still be preferable to split it into multiple smaller strings/arrays).
The SGI version of the STL has a rope class (a rope is a big string, get it?).
I am not sure it is designed to handle that much data, but you can have a look.
http://www.sgi.com/tech/stl/Rope.html
If all you're trying to do is read in some massive file and write the same data to another file with periods interspersed between the characters, why bother reading the whole thing into memory at once? Pick some reasonable buffer size and do it in chunks.