I wrote some C++ code to test the running time of vector push_back. I have a vector of vectors; I called my main vector mainVec and the embedded vector subVec. I pushed back 2^20 elements into subVec and then pushed back subVec 2^20 times into mainVec. However, the loop that push_backs subVec contains a cout statement that never gets executed. I was hoping you could point out my mistake.
Here is the code (it compiles without errors, though):
vector<int> subVec;
vector< vector<int> > mainVec;

// Fill subVec with 2^20 elements
for( size_t i = 0; i < (pow(2,20)+1); ++i ) subVec.push_back(i);

// Fill mainVec with 2^j copies of subVec, for j = 10..20
for( size_t j = 10; j < 21; ++j ) {
    cout << pow(2,j) << endl;

    clock_t t1 = clock();
    // subVec is push_backed 2^j times, for j < 21
    for( size_t k = 0; k < pow(2,j); ++k ) mainVec.push_back( subVec );
    t1 = clock() - t1;

    // Output the timing (file version commented out below)
    cout << "\t" << (float(t1) / CLOCKS_PER_SEC) << endl;
    //ofs << pow(2,j) << "\t\t" << (float(t1) / CLOCKS_PER_SEC) << endl;
}
There are several issues with your code.
First, you don't need the +1 in the first loop, i.e., pow(2,20)+1. Since you start at 0 and want 2^20 iterations, the condition should be i < 2^20.
Second, it's better to compute the pow values before the loops; otherwise they are recomputed on every iteration, which adds up.
Third, you can use 1 << j instead of pow(2,j). Just FYI.
Fourth, and most important, we are talking about a tremendous amount of memory here. Even your smallest run pushes 2^10 copies of a 2^20-element vector, i.e. 2^30 ints, which is 4 GB of memory. My guess is that your program is simply bringing your machine to its knees, and the reason it never prints the second cout is that it never gets there (it is busy swapping). Try smaller numbers, say 2^10 elements in the first loop, and see if you get the outputs.
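Putting those points together, here is a sketch of how the measurement might look. This is my rewrite, not your original: subVec is shrunk to 2^10 as suggested, 1 << j replaces pow, and mainVec is cleared between runs so each measurement starts from an empty outer vector (your original also accumulates pushes across j, which inflates memory further):

#include <ctime>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Keep subVec small (2^10 ints) until the timings behave.
    const size_t subSize = size_t(1) << 10;
    vector<int> subVec;
    for (size_t i = 0; i < subSize; ++i) subVec.push_back(i);

    vector< vector<int> > mainVec;
    for (size_t j = 10; j < 21; ++j) {
        const size_t count = size_t(1) << j;  // 2^j, computed once, no pow()
        mainVec.clear();                      // start each measurement empty

        clock_t t1 = clock();
        for (size_t k = 0; k < count; ++k) mainVec.push_back(subVec);
        t1 = clock() - t1;

        // Note: at j = 20 this still holds 2^30 ints (~4 GB);
        // lower the upper bound on j as well if memory is tight.
        cout << count << "\t" << (float(t1) / CLOCKS_PER_SEC) << endl;
    }
}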
So the user inputs values within the for loop and the vector pushes them back, creating its own index. The problem arises in the second for loop; I think it has something to do with sizeof(v)/sizeof(vector).
vector<int> v;
for (int i; cin >> i;)
{
v.push_back(i);
cout << v.size() << endl;
}
for (int i =0; i < sizeof(v)/sizeof(vector); i++)
{
cout << v[i] << endl;
}
How will I determine the size of the vector after entering values?
(I'm quite new to C++, so if I have made a stupid mistake, I apologize.)
Use the vector::size() method: i < v.size().
The sizeof operator yields the size in bytes of the object or type at compile time; for a std::vector that is a constant, independent of how many elements it holds.
How will I determine the size of the vector after entering values?
v.size() is the number of elements in v. Thus, another easy-to-understand style for the second loop is:
for (int i = 0; i < v.size(); ++i)
A different aspect of the size question you might find interesting:
On Ubuntu 15.10 with g++ 5.2.1, using a 32-byte class UI224, sizeof(UI224) reports 32 (as expected).
Note that
sizeof(std::vector<UI224>) with 0 elements reports 24
sizeof(std::vector<UI224>) with 10 elements reports 24
sizeof(std::vector<UI224>) with 100 elements reports 24
sizeof(std::vector<UI224>) with 1000 elements reports 24
Note also that
sizeof(std::vector<uint8_t>) with 0 elements reports 24
Thus, in your line
for (int i =0; i < sizeof(v) / sizeof(vector); i++)
^^^^^^^^^ ^^^^^^^^^^^^^^
the 2 values being divided are probably not what you are expecting.
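A minimal sketch illustrating the difference (the value 24 is what a typical 64-bit libstdc++ reports; your implementation may differ):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 1000; ++i) v.push_back(i);

    // sizeof is a compile-time constant: the size of the vector object
    // itself (typically three pointers), not of the elements it manages.
    std::cout << "sizeof(v) = " << sizeof(v) << "\n";                   // e.g. 24
    std::cout << "v.size()  = " << v.size() << "\n";                    // 1000
    std::cout << "element bytes = " << v.size() * sizeof(int) << "\n";  // 4000
}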
http://cppreference.com is a great site for looking up the member functions of STL containers.
That said, what you are looking for is the vector::size() member function.
for (int i = 0; i < v.size(); i++)
{
cout << v[i] << endl;
}
If you have a compiler that supports C++11 or later at your disposal, you can use the range-based for loop:
for(auto i : v)
{
cout << i << endl;
}
A std::vector is a class. It is not the actual data, but an object that manages it.
Use its size() member function to get the number of elements it holds.
Coliru example:
http://coliru.stacked-crooked.com/a/de0bffb1f4d8c836
I'm trying to simulate the mean behaviour of an ensemble of neurons. This means I need to do calculations with a matrix of a couple of billion elements (steps ~ 10^6, neurons ~ 10^4).
To avoid eating my whole RAM (and dying trying), I decided to delete rows from the matrix as soon as I'm done doing calculations with them. I don't have much experience with C++, but my understanding is that v.erase( v.begin()-i+1); should allow me to do so.
// Membrane potential matrix using std::vector
vector<vector<double>> v;
v.resize(steps + 1, vector<double>(neurons));

// Initialise v
for (size_t n = 0; n < neurons; n++) {
    v[0][n] = v0;
}

double v_avg[steps + 1] = {v0};

// Loop
for (size_t i = 1; i < steps + 1; i++) {
    for (size_t n = 0; n < neurons; n++) {
        if (v[i-1][n] >= vp) {
            v[i][n] = -vp;
        }
        else {
            v[i][n] = v[i-1][n] + h * ( pow(v[i-1][n], 2) + I[i] + eta[n] );
        }
        v_avg[i] += v[i][n]; // Sum of membrane potentials
    }
    cout << "step " << i << "/" << steps << " done\n";
    v.erase( v.begin()-i+1 ); // Erase row v[i-1]
    v_avg[i] = v_avg[i]/neurons; // Mean membrane potential
}
v.erase( v.begin()+steps+1 ); // Erase last row
I'm not sure why I'm getting a segmentation fault after step steps/2 (I'm doing tests with a small steps value):
...
step 10/20 done
[1] 1791 segmentation fault (core dumped) ./qif_solve_vect
Update:
Thanks to @1201ProgramAlarm I see what my problem is. My questions would be:
How can I work with the matrix in a way that it isn't all allocated from the very beginning?
How can I deallocate/free rows while keeping the indices (unlike v.erase( v.begin() ))? This is essential, as I will later implement different refractory times for each neuron when they produce a spike (v[i][n] = -vp;).
In your erase statement you are subtracting from v.begin(), which results in an invalid iterator since it points before the start of the vector. You probably meant v.erase( v.begin() + i - 1 );.
However, erasing like this isn't saving you any space, since you have already allocated the full matrix. Each erase moves all the remaining elements down one position, and your indexing on the next iteration will be wrong (you would want to use v[0] all the time).
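One way to address both of your follow-up questions, assuming the update for step i only ever reads row i-1, is to keep just two rows and swap them each step, so the full steps x neurons matrix is never allocated. A sketch (parameter values here are placeholders, not from your model):

#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

int main() {
    // Placeholder sizes and parameters -- substitute your real values.
    const std::size_t steps = 20, neurons = 4;
    const double v0 = -0.1, vp = 1.0, h = 1e-3;
    std::vector<double> I(steps + 1, 0.0), eta(neurons, 0.0);

    // Only two rows live at any time: the previous and the current step.
    std::vector<double> prev(neurons, v0), curr(neurons);
    std::vector<double> v_avg(steps + 1, v0);

    for (std::size_t i = 1; i <= steps; ++i) {
        double sum = 0.0;
        for (std::size_t n = 0; n < neurons; ++n) {
            curr[n] = (prev[n] >= vp)
                          ? -vp
                          : prev[n] + h * (prev[n] * prev[n] + I[i] + eta[n]);
            sum += curr[n];
        }
        v_avg[i] = sum / neurons; // Mean membrane potential for this step
        std::swap(prev, curr);    // O(1): swaps internal buffers, no copying
    }
    std::cout << "v_avg[steps] = " << v_avg[steps] << "\n";
}

For the refractory behaviour you mention, a per-neuron counter (steps remaining before the neuron may update again) can travel alongside prev/curr, so no old rows are ever needed.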
I am currently trying to learn some C++ and got stuck on an exercise with vectors. The task is to read ints from a text file and store them in a vector, which should be dynamic.
I guess there is something wrong with the while loop?
If I run this, the program fails, and if I set the vector size to 6, I get
6 0 0 0 0 0 as output.
Thanks for any hints.
int main()
{
    const string filename = "test.txt";
    int s = 0;
    fstream f;
    f.open(filename, ios::in);
    vector<int> v;

    if (f) {
        while (f >> s) {
            int i = 0;
            v[i] = s;
            i = i + 1;
        }
        f.close();
    }

    for (int i = 0; i < 6; i++) {
        cout << v[i] << "\n";
    }
}
You don't grow the vector. It is empty and cannot hold any ints. You need to either resize it every time you want to add another int, or use push_back, which automatically enlarges the vector.
You also set i = 0 on every iteration, so you would overwrite the first slot of the vector each time instead of moving on to the next one.
Go for:
v.push_back(s);
in your loop and
for(int i = 0; i < v.size(); i++) { // ...
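Putting both fixes together, a minimal corrected version of the program might look like this (assuming C++11 and your test.txt):

#include <fstream>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    const string filename = "test.txt";
    ifstream f(filename);    // opens for reading; closes itself on destruction
    vector<int> v;

    int s = 0;
    while (f >> s) {
        v.push_back(s);      // grows the vector one element at a time
    }

    for (size_t i = 0; i < v.size(); i++) {
        cout << v[i] << "\n";
    }
}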
Remark:
You normally don't hardcode vector sizes/bounds. One major point about using std::vector is its ability to behave dynamically with respect to its size. Thus, the code dealing with vectors should not impose any restrictions about the size of the vector onto the respective object.
Example:
for(int i = 0; i < 6; i++){ cout << v[i] << "\n"; }
requires the vector to have at least 6 elements; otherwise (fewer than 6 ints) you access values out of bounds, and you potentially miss elements if v contains more than 6 values.
Use either
for(int i = 0; i < v.size(); i++){ cout << v[i] << "\n"; }
or
for(std::vector<int>::const_iterator i = v.begin(); i != v.end(); ++i)
{
cout << *i << "\n";
}
or
for(auto i = v.begin(); i != v.end(); ++i)
{
cout << *i << "\n";
}
or
for(int x : v){ cout << x << "\n"; }
or
for(auto && x : v){ cout << x << "\n"; }
or
std::for_each(v.begin(), v.end(), [](int x){ std::cout << x << "\n"; });
or variants of the above which possibly pre-store v.size() or v.end()
or whatever you like as long as you don't impose any restriction on the dynamic size of your vector.
The first issue is in the line int i = 0;. Fixing that will expose an issue in the line v[i] = s.
You initialise i to 0 on every pass through the while loop, and that is responsible for the current output. You should move the declaration out of the while loop.
After fixing that: you have not allocated memory for the vector's elements, so v[i] doesn't make sense; it accesses memory beyond the vector's bounds and will give a segmentation fault. Instead, use v.push_back(s), which adds elements to the end of the vector and allocates memory as needed.
If you are using std::vector, you can use v.push_back(s) to fill the vector.
The error is in the line int i = 0;, because you declare i = 0 anew on every pass through the while loop.
To correct this, move that line outside the loop.
Note: that alone would only work if you declared v like a normal array, for example int v[101].
When you use std::vector, you can simply push an element onto the end of the vector with v.push_back(element);
v[i] = s; // error: no room has been allocated in the vector
Change it into: v.push_back(s);
I performed a small test to compare the behaviour of accessing a vector of pointers versus a vector of values. It turns out that for small memory blocks both perform equally well; for large memory blocks, however, there is a significant difference.
What is the explanation for this behaviour?
For the code below, run on my PC, the difference for D=0 is about 35%, and for D=10 it is unnoticeable.
int D = 0;
int K = 1 << (22 - D);
int J = 100 * (1 << D);
int sum = 0;

std::vector<int> a(K);
std::iota(a.begin(), a.end(), 0);

long start = clock();
for (int j = 0; j < J; ++j)
    for (int i = 0; i < a.size(); ++i)
        sum += a[i];
std::cout << double(clock() - start) / CLOCKS_PER_SEC << " " << sum << std::endl;

sum = 0;
std::vector<int*> b(a.size());
for (int i = 0; i < a.size(); ++i) b[i] = &a[i];

start = clock();
for (int j = 0; j < J; ++j)
    for (int i = 0; i < b.size(); ++i)
        sum += *b[i];
std::cout << double(clock() - start) / CLOCKS_PER_SEC << " " << sum << std::endl;
Getting data from main memory is slow, so the CPU has a small amount of really fast memory, the cache, to help memory accesses keep up with the processor. When handling a memory request, the computer tries to speed up future requests by fetching a whole chunk of memory around the requested location and storing it in the cache. Once that fast memory is full, it has to evict its least favourite bits whenever something new is requested.
Your small problems may fit entirely, or substantially, in the cache, so memory access is super fast. Large problems can't fit in this fast memory, so you have a problem. The vector is stored as K consecutive memory locations. When you access a vector of int, each load brings in the int plus a handful of nearby values, which can be used right away. However, when you load an int*, each load brings in a pointer to the actual value along with several neighbouring pointers. That takes up cache. Then, when you dereference with *, it loads the actual value and possibly some nearby values. That takes up more cache. Not only do you perform more work, you also fill up the cache faster. The actual increase in time will vary, as it is highly dependent on the architecture, the operation (here +), and memory speeds. Your compiler will also work quite hard to minimise the delays.
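If you want to see the indirection cost even more clearly, you can shuffle the pointer vector before timing it, so the pointed-to ints are no longer visited in memory order. This is a suggested experiment of mine, not part of the original test:

#include <algorithm>
#include <ctime>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const int K = 1 << 22;
    std::vector<int> a(K);
    std::iota(a.begin(), a.end(), 0);

    std::vector<int*> b(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) b[i] = &a[i];

    // Randomising the pointer order defeats the hardware prefetcher,
    // so each dereference is far more likely to miss the cache.
    std::mt19937 rng(12345);
    std::shuffle(b.begin(), b.end(), rng);

    long long sum = 0;
    clock_t start = clock();
    for (std::size_t i = 0; i < b.size(); ++i) sum += *b[i];
    std::cout << double(clock() - start) / CLOCKS_PER_SEC << " " << sum << "\n";
}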
I'm having an issue in which a function that in theory should remove all duplicate values from an array doesn't work. Here's how it works:
I have two arrays, which I populate with random numbers between 0 and 50 inclusive.
I sort the array values in order using a sort function.
I then run my dedupe function.
I sort the array values in order again.
I then output the values in both arrays.
The problem is that the loop in the dedupe function runs 19 times regardless of how many duplicate entries it finds, which is extremely strange. Also, it still leaves duplicates.
Any ideas? Thanks!
int* dedupe(int array[ARRAY_SIZE]) // remove duplicate array values and replace with new values
{
    bool dupe = false;
    while (dupe != true)
    {
        for (int j = 0; j < ARRAY_SIZE; j++)
        {
            if (array[j] == array[j+1])
            {
                array[j] = rand();
                array[j] = array[j] % 51;
                dupe = false;
            }
            else
            {
                dupe = true; // the cout part is for debugging
                cout << dupe << endl;
            }
        }
    }
    return array;
}

int main()
{
    int a[9], b[9];
    srand(time(0));
    populate(b);
    populate(a);
    sort(a, ARRAY_SIZE);
    sort(b, ARRAY_SIZE);
    dedupe(a);
    dedupe(b);
    sort(a, ARRAY_SIZE);
    sort(b, ARRAY_SIZE);
    for (int i = 0; i < 10; i++)
    {
        cout << "a[" << i << "] = " << a[i] << "\t\t" << "b[" << i << "] = " << b[i] << endl;
    }
    return 0;
}
Nothing suggested so far has solved the problem. Does anyone know of a solution?
You're not returning from inside the for loop... so it should run exactly ARRAY_SIZE times each time.
The problem you want to solve and the algorithm you provided do not really match. You do not actually want to remove the duplicates, but rather guarantee that all the elements in the array are different; the difference is that removing duplicates would leave the array with fewer elements than its size, whereas you want a full array.
I don't know what the perfect solution would be (algorithmically), but one simple answer is to create an array of all the values in the valid range (since the range is small), shuffle it, and then pick the first N elements. Think of it as drawing cards to pick the values.
#include <algorithm>  // std::random_shuffle (removed in C++17; prefer std::shuffle there) and std::copy_n

const int array_size = 9;

void create_array( int (&array)[array_size] ) {
    const int max_value = 51;
    int range[max_value];
    // The "deck": every value in [0, 50] exactly once
    for ( int i = 0; i < max_value; ++i ) {
        range[i] = i;
    }
    std::random_shuffle( range, range + max_value );
    std::copy_n( range, array_size, array );   // deal the first 9 cards
}
This is not the most efficient approach, but it is simple, and with such a small number of elements there should not be any performance issue. A more complex approach would be to initialise the array with random elements in the range, sort it and actually remove the duplicates (so the array will not be full at the end), and then keep generating numbers, checking each one against the previously generated values before adding it. The simplest such check is just comparing with every other value, which is linear time, but on an array of 9 elements linear time is small enough not to matter.
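A sketch of that generate-and-check idea with a plain linear scan (function and variable names here are mine, not from the question):

#include <cstdlib>

const int ARRAY_SIZE = 9;

// Fill the array with distinct random values in [0, 50]: re-roll any
// candidate that already appears among the values accepted so far.
void fill_unique(int (&array)[ARRAY_SIZE])
{
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        for (;;) {
            int candidate = std::rand() % 51;
            bool fresh = true;
            for (int j = 0; j < i; ++j) {
                if (array[j] == candidate) { fresh = false; break; }
            }
            if (fresh) { array[i] = candidate; break; }
        }
    }
}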
You are doing it wrong at
array[j] = rand();
array[j] = array[j] % 51;
The fresh value is again anywhere in 0 to 50, and nothing stops it from colliding with another element, so this can reintroduce exactly the duplicates you are trying to remove!