I use the next_permutation function to build a list of all permutations of a vector, but memory overflows when the program runs, due to the size of the result.
#include <vector>
#include <algorithm>
using namespace std;

int main()
{
    vector<int> uuid_list_index;
    vector<vector<int>> permutation_uuid_lists;

    for(size_t i = 0; i < 12; i++)
    {
        uuid_list_index.push_back(i);
    }

    do
        permutation_uuid_lists.push_back(uuid_list_index);
    while(next_permutation(uuid_list_index.begin(), uuid_list_index.end()));
}
When I run the program, the binary crashes with an overflow. How can I implement a permutation function for the list {0,1,2,3,4,5,6,7,8,9,10,11,12}?
This is not too surprising at all. uuid_list_index has 12 different entries.
The number of permutations of a sequence of length N is N!; and
12! = 479001600.
permutation_uuid_lists would contain more than 479 million std::vector<int>s; since each vector has at least a 12-byte header (armv8) and points to at least 12 × 4-byte integers, plus implies a 16-byte entry in a memory allocation table, you're trying to use around 30 GB of RAM. That's more than your phone has.
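If you only need to visit every permutation rather than keep them all, don't store them: handle each one inside the loop. A minimal sketch of that approach (doSomethingWith is a hypothetical placeholder for whatever per-permutation work you actually need):

#include <vector>
#include <algorithm>
#include <numeric>

// Hypothetical placeholder for the per-permutation work.
void doSomethingWith(const std::vector<int>& perm)
{
    // use perm here; nothing is kept between iterations
}

int main()
{
    std::vector<int> uuid_list_index(12);
    std::iota(uuid_list_index.begin(), uuid_list_index.end(), 0); // 0,1,...,11

    do
        doSomethingWith(uuid_list_index);   // one permutation at a time, constant memory
    while(std::next_permutation(uuid_list_index.begin(), uuid_list_index.end()));
}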
Why can't this code handle n=100000000 (8 zeros)?
The code will be terminated by this error:
terminate called after throwing an instance of 'std::bad_alloc'
So, the word "DONE" won't be printed out.
#include <iostream>
#include <vector>
using namespace std;
int main()
{
vector <long long> v1;
long long n; cin>>n;
for(long long i=0;i<n;i++){
v1.push_back(i+1);
}
cout<<"DONE"<<endl;
}
This happens even though the maximum size of v1 is 536870911.
What is the maximum size of a vector?
It depends on several factors. Here are some upper limits:
The size of the address space
The maximum representable value of std::vector::size_type
The theoretical upper limit given by std::vector::max_size
Available memory - unless system overcommits and you don't access the entire vector
Maximum available memory for a single process may be limited by the operating system.
Available contiguous address space which can be less than all of the free memory due to fragmentation.
The address space isn't an issue in the 64-bit world.
Note that the size of the element type affects the number of elements that fit in a given range of memory.
Most likely, the strictest limit in your case was the available memory: 536870911 long longs is one element short of 4 gigabytes.
From the cppreference max_size() docs:
This value typically reflects the theoretical limit on the size of the container ...
At runtime, the size of the container may be limited to a value smaller than max_size() by the amount of RAM available.
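As a quick illustration (not from the question), you can print these numbers yourself; max_size() reports an enormous theoretical limit, while any real allocation is bounded by the memory actually available:

#include <iostream>
#include <vector>

int main()
{
    std::vector<long long> v;
    std::cout << "max_size(): " << v.max_size() << '\n'; // theoretical limit only
    std::cout << "capacity(): " << v.capacity() << '\n'; // what is allocated right now
    // Requesting anything close to max_size() would throw std::bad_alloc (or std::length_error)
    // long before that limit is reached, because available RAM is the practical limit.
}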
std::vector contains a pointer to a single dynamically allocated contiguous array. As you call push_back() in your loop, that array has to grow over time whenever the vector's size() exceeds its capacity(). As the array grows large, it becomes harder to allocate a new, larger contiguous array to copy the old elements into. Eventually the array is so large that a new copy simply can't be allocated anymore, and std::bad_alloc gets thrown.
To avoid all of those reallocations while looping, call the vector's reserve() method before entering the loop:
#include <iostream>
#include <vector>
using namespace std;

int main() {
vector <long long> v1;
long long n; cin>>n;
v1.reserve(n); // <-- ADD THIS!
for(long long i = 0; i < n; ++i){
v1.push_back(i+1);
}
cout<<"DONE"<<endl;
}
That way, the array is allocated only once.
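To make the difference visible, here is a small sketch (not part of the original question) that prints how capacity() grows through repeated push_back() calls, and how a single reserve() avoids the reallocations:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<long long> grown;
    std::size_t last = 0;
    for (long long i = 0; i < 1000; ++i) {
        grown.push_back(i);
        if (grown.capacity() != last) {          // a reallocation happened here
            last = grown.capacity();
            std::cout << "capacity grew to " << last << '\n';
        }
    }

    std::vector<long long> reserved;
    reserved.reserve(1000);                      // one allocation up front
    for (long long i = 0; i < 1000; ++i)
        reserved.push_back(i);
    std::cout << "reserved capacity: " << reserved.capacity() << '\n'; // typically stays at 1000
}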
I wrote the following code to accept test cases on a competitive programming website. It uses a vector input of the structure Case to store the inputs for all the test cases at once and then process them one at a time. (I have left out the loops that read the input and compute the output because they are irrelevant to the question.)
#include<iostream>
#include<vector>
using namespace std;
struct Case{   // "case" is a reserved keyword in C++, so the struct needs a different name
    int n, m;
    vector<int> jobsDone;
};
int main(){
int testCase;
cin>>testCase;
vector<Case> input;
input.reserve(testCase);
//The rest of the code is supposed to be here
return 0;
}
As I was writing this code, I realised that the behaviour of input.reserve(testCase) is hard to picture here, where the element size seems variable (since each instance of the structure Case also holds a vector of variable size). In fact, even if I had not explicitly written the reserve() statement, the vector would still have reserved some minimum number of elements.
For this particular situation, I have the following questions regarding the vector input:
Wouldn't random access in O(1) time be impossible in this case, since the beginning position of every element is not known?
How would the vector input manage element access at all when the beginning location of every element cannot be calculated? Will it pad all the entries to the size of the maximum entry?
Should I rather implement this with a vector of pointers, each pointing to an instance of Case? I am asking because if the vector pads every element to some fixed size, it wastes space; and if instead it has to track the location of each element, random access is no longer constant time, so there would be no point in using a vector at all.
Every object type has a fixed size. This is what sizeof returns. A vector itself typically holds a pointer to the array of objects, the number of objects for which space has been allocated, and the number of objects actually contained. The size of these three things is independent of the number of elements in the vector.
For example, a vector<int> might contain:
1) An int * holding the address of the data.
2) A size_t holding the number of objects we've allocated space for
3) A size_t holding the number of objects contained in the vector.
This will probably be somewhere around 24 bytes, regardless of how many objects are in the vector. And this is what sizeof(vector<int>) will return.
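A small sketch (reusing the Case struct from the question) showing that the element size stays fixed no matter how many ints the nested vector holds, which is why input[i] remains O(1):

#include <iostream>
#include <vector>
using namespace std;

struct Case {
    int n, m;
    vector<int> jobsDone;   // the int data lives in a separate heap allocation
};

int main() {
    cout << sizeof(Case) << '\n';          // fixed, e.g. 32 on a typical 64-bit platform
    Case c;
    c.jobsDone.resize(1000000);            // grows the heap block, not the Case object
    cout << sizeof(c) << '\n';             // still the same value
    vector<Case> input(10);
    cout << &input[7] - &input[0] << '\n'; // elements sit in fixed-size slots: prints 7
}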
I have been trying to figure out the memory use of a two-dimensional vector and cannot quite make sense of it.
The test program that I have written is as below.
#include <iostream>
#include <vector>
using namespace std;
int main()
{
vector<int> one(1);
vector < vector<int> > two(1, vector <int>(1));
return 0;
}
The memory allocation I see with valgrind confuses me. After executing the first statement in the main block, I get the output below.
==19882== still reachable: 4 (+4) bytes in 1 (+1) blocks
So far so good. But after running the next statement I get the below log.
==19882== still reachable: 32 (+28) bytes in 3 (+2) blocks
Now this is confusing; I don't know how to account for the 28 bytes allocated.
If I change the second line as below
vector < vector<int> > two(1, vector <int>(0));
I get the below log
==19882== still reachable: 32 (+24) bytes in 3 (+1) blocks
Kindly help in understanding how the memory is allocated.
tl;dr
The first case just shows the allocation for the (int) storage managed by the vector. The second shows both the inner vector's int storage, and the storage for the inner vector object itself.
So it's telling you this
vector<int> one(1);
allocates one block of 4 bytes.
It doesn't tell you about the automatic storage for the vector object itself, only the dynamic storage for the single integer: assuming sizeof(int)==4, this seems pretty reasonable.
Next it tells you this:
vector < vector<int> > two(1, vector <int>(1));
allocates two more blocks of 28 bytes in total.
Now, one of those blocks will contain the dynamic storage for the vector<int> - remember the previous instance was an automatic local - and the other block will contain the dynamic storage for the nested vector's single integer.
We can assume the second (single integer) allocation is a single block of 4 bytes, as it was last time. So, the dynamically-allocated vector<int> itself is taking 24 bytes in another single block.
Is 24 bytes a reasonable size for a std::vector instance? That could easily be
template <typename T> class vector {
    T* begin;          // pointer to the dynamically allocated element array
    size_t used;       // number of elements in use: what size() returns
    size_t allocated;  // number of elements there is room for: what capacity() returns
};
on a platform with 64-bit pointers and size_t. This assumes a stateless allocator, which is probably right.
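If you want to see those blocks without valgrind, one rough sketch is to log every call to the global operator new. This is illustrative only; the exact sizes, and the extra allocation for the temporary vector<int>(1), are implementation details of a typical 64-bit libstdc++ build:

#include <cstdio>
#include <cstdlib>
#include <new>
#include <vector>

void* operator new(std::size_t size) {
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    std::printf("allocated %zu bytes\n", size);  // printf, so the logging itself doesn't use operator new
    return p;
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main() {
    std::vector<int> one(1);
    // expected so far: one 4-byte block (the single int)

    std::vector<std::vector<int>> two(1, std::vector<int>(1));
    // expected: a 4-byte block for the temporary vector<int>(1), a 24-byte block for the
    // inner vector object owned by `two`, and a 4-byte block for that inner vector's int
}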
#include <iostream>
#include <vector>
#include "mixmax.h"//<-here, there is random number generator called mixmax
#include <algorithm>
#include <cmath>
using namespace std;
int main()
{
const unsigned long long int n=10000000;
vector < float > f(n);
vector < float > distance_1(n);
vector < float > distance_2(n);
rng_state_t s;
rng_state_t *x=&s;
seed_spbox(x,12345);//<-here we just declare our random number generator
for(int i=0;i<n;i++)
f[i]=int(n*get_next_float(x));//<-here we just get random numbers, like rand()
sort(f.begin(),f.end());
for(int i=0;i<n;i++)
{
distance_1[i]=f[i]-i;
distance_2[i]=(i+1)-f[i];
}
float discrep=max(*max_element(distance_1.begin(),distance_1.end()),*max_element(distance_2.begin(),distance_2.end()));
cout<<"discrep= "<<discrep<<endl;
cout<<"sqrt(n)*discrep= "<<discrep/sqrt(n)<<endl;
}
When I print f.max_size() (for the vector declared above in the code), it gives me this huge number, 4611686018427387903; but when I take n=10000000000, it does not work and gives this error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped).
(I tried it in Visual Studio under windows.)
What's the problem? If vectors do not work for big sizes, can anyone tell me how I can use vectors or arrays with very big sizes?
Quoting cplusplus.com,
std::vector::max_size
Returns the maximum number of elements that the
vector can hold.
This is the maximum potential size the container can reach due to
known system or library implementation limitations, but the container
is by no means guaranteed to be able to reach that size: it can still
fail to allocate storage at any point before that size is reached.
Hence, vector doesn't guarantee that it can hold max_size elements; max_size is just an implementation limit.
Also, chris mentioned:
Well 10 GB * sizeof(float) * 3 is a ton of memory to allocate. I'm
going to guess your OS isn't letting you allocate it all for a good
reason.
The OP asks,
If vectors do not work for big sizes, can anyone tell me how I can use vectors or arrays with very big sizes?
Yes you can. Try Roomy or STXXL.
max_size() is different from size(), which is different from capacity().
The current capacity is n=10000000, so the last element is distance_1[9999999].
What's the problem?
Presumably, the allocation fails because your computer doesn't have 120GB of memory available. max_size tells you how many elements the implementation of vector can theoretically manage, given enough memory; it doesn't know how much memory will actually be available when you run the program.
If vectors do not work for big sizes, can anyone tell me how I can use vectors or arrays with very big sizes?
Increase the amount of RAM or swap space on your computer (and make sure the OS is 64-bit, though from the value of max_size() I guess it is). Or use something like STXXL to use files to back up huge data structures.
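As a rough back-of-the-envelope check (not part of the original program), you can compute what the three vectors would need and catch the failure explicitly; the numbers below assume 4-byte floats:

#include <iostream>
#include <new>
#include <vector>

int main() {
    const unsigned long long n = 10000000000ULL;             // 1e10 elements per vector
    const unsigned long long bytes = 3ULL * n * sizeof(float);
    std::cout << "need roughly " << bytes / (1024.0 * 1024.0 * 1024.0) << " GiB\n"; // ~112 GiB

    try {
        std::vector<float> f(n);  // on most machines this throws (or the OS refuses the memory)
        std::cout << "allocated " << f.size() << " floats\n";
    } catch (const std::bad_alloc&) {
        std::cout << "allocation failed: not enough memory or address space\n";
    }
}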
I'm trying to understand which way I should set up a vector so that I reduce my program's run time and memory usage, or whether it doesn't matter at all (depending solely on the computations my program does with those elements).
Let's say I define a vector without knowing how many elements I'll actually use, but I do know the maximum number of elements I'll be working with:
#define MAX 10000
vector<int> eg(MAX);
In the other case, I first read how many elements there are and then size the vector accordingly:
vector<int> eg;
int n;
cin >> n;
eg.resize(n);
If you know the maximum number of elements that the vector will store, then it is better to use the member function reserve. For example:
const std::vector<int>::size_type MAX = 10000;
vector<int> eg;
eg.reserve( MAX );
Both. When you later resize to the final number of elements, you will only be resizing down to a smaller count, and that takes fewer CPU cycles than resizing up to a larger one (which is what would happen if you hadn't set MAX), because no elements have to be copied to a different location when there is already room in the current contiguous allocation.
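A small sketch contrasting the two approaches (the printed values are typical, not guaranteed by the standard): the sized constructor and resize() create value-initialized elements, while reserve() only allocates room, so growing later inside the reserved block costs no reallocation:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const std::size_t MAX = 10000;

    std::vector<int> a(MAX);      // size 10000, capacity >= 10000, all elements zeroed
    std::vector<int> b;
    b.reserve(MAX);               // size 0, capacity >= 10000, no elements constructed yet

    std::cout << a.size() << ' ' << a.capacity() << '\n';
    std::cout << b.size() << ' ' << b.capacity() << '\n';

    b.resize(5000);               // stays inside the reserved block: no reallocation, no copying
    std::cout << b.size() << ' ' << b.capacity() << '\n';
}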