Iterating 2D and 3D vectors column-wise in C++11? - c++

I have a 2D and a 3D vector:
using namespace std;
vector< vector<int> > vec_2d;
vector<vector<vector<int>>> vec_3d;
I know how to iterate over a 2D vector row-wise using two iterators: the first iterates over the "rows" and the second over the "columns" within that "row". Now I need to iterate over the 2D vector such that the first iterator goes over the "columns" and the second over the "rows" within that "column", i.e. column-wise.

Using iterators, this will be very difficult. I'd say you would probably need to implement your own iterator classes inheriting from std::iterator<std::random_access_iterator_tag, Type>.
If you don't actually need iterators, and you really have a good reason for traversing vectors of vectors in such an odd way (and are aware that it hurts cache locality and therefore memory access speed), then it can easily be done using indexes.
Here's an example using indexes which handles the tricky case where the inner vectors are not all of the same length.
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector< vector<int> > vec_2d = { {1, 2, 3}, {4, 5, 6, 7}, {8, 9, 10} };
    bool is_col_out_of_bounds = false;
    for (size_t col = 0; !is_col_out_of_bounds; col++)
    {
        is_col_out_of_bounds = true;
        for (size_t row = 0; row < vec_2d.size(); row++)
        {
            if (col < vec_2d[row].size())
            {
                is_col_out_of_bounds = false;
                cout << vec_2d[row][col] << endl;
            }
        }
    }
    return 0;
}
Output:
1
4
8
2
5
9
3
6
10
7
If you want to guarantee that all rows are of the same length, then vector<array<T, N>> may be a better choice.
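A minimal sketch of that alternative, assuming rows fixed at 3 columns (the values are just for illustration):
#include <array>
#include <iostream>
#include <vector>

int main()
{
    // Every row is guaranteed to have exactly 3 columns.
    std::array<int, 3> r0 = {1, 2, 3}, r1 = {4, 5, 6}, r2 = {7, 8, 9};
    std::vector<std::array<int, 3>> vec_2d = {r0, r1, r2};

    // Column-wise traversal no longer needs a per-row bounds check.
    for (std::size_t col = 0; col < 3; ++col)
        for (const auto& row : vec_2d)
            std::cout << row[col] << '\n';
    return 0;
}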

A simple answer: Who said your layout should be interpreted row-wise? A std::vector<std::vector<Foo>> has no knowledge about rows and columns, so let the outer-most vector represent columns instead of rows.
This is a pain when printing to the terminal, which is likely to do it row-wise, but if column layout is preferred internally, do it that way.
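A minimal sketch of that layout (hypothetical values; the outer vector holds columns):
#include <iostream>
#include <vector>

int main()
{
    // The outer vector holds columns; each inner vector holds the rows of that column.
    std::vector<std::vector<int>> cols = { {1, 4, 8}, {2, 5, 9}, {3, 6, 10} };

    // "Column-wise" traversal is now the natural nested loop.
    for (const auto& col : cols)
        for (int v : col)
            std::cout << v << '\n';

    // Printing row-wise means swapping the loop order and indexing.
    for (std::size_t r = 0; r < cols[0].size(); ++r) {
        for (std::size_t c = 0; c < cols.size(); ++c)
            std::cout << cols[c][r] << ' ';
        std::cout << '\n';
    }
    return 0;
}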

Related

c++ Algorithm to Compare various length vectors and isolate "unique", sort of

I have a complex problem and have been trying to identify what needs to be a very, very efficient algorithm. I'm hoping I can get some ideas from you helpful folks. Here is the situation.
I have a vector of vectors. These nested vectors are of various length, all storing integers in a random order, such as (pseudocode):
vector_list = {
{ 1, 4, 2, 3 },
{ 5, 9, 2, 1, 3, 3 },
{ 2, 4, 2 },
...,
100 more,
{ 8, 2, 2, 4 }
}
and so on, up to over 100 different vectors at a time inside vector_list. Note that the same integer can appear in each vector more than once. I need to remove from this vector_list any vectors that are duplicates of another vector. A vector is a duplicate of another vector if:
It has the same integers as the other vector (regardless of order). So if we have
vec1 = { 1, 2, 3 }
vec2 = { 2, 3, 1 }
These are duplicates and I need to remove one of them; it doesn't matter which one.
Another vector contains all of the integers of this vector. So if we have
vec1 = { 3, 2, 2 }
vec2 = { 4, 2, 3, 2, 5 }
Vec2 has all of the ints of vec1 and is bigger, so I need to delete vec1 in favor of vec2.
The problem is, as I mentioned, the list of vectors can be very big (over 100), and the algorithm may need to run as many as 1000 times on a button click, each time with a different group of 100+ vectors. Hence the need for efficiency. I have considered the following:
Sorting the vectors may make life easier, but as I said, this has to be efficient, and I'd rather not sort if I didn't have to.
It's more complicated by the fact that the vectors aren't in any order with respect to their size. For example, if the vectors in the list were ordered by size:
vector_list = {
{ },
{ },
{ },
{ },
{ },
...
{ },
{ }
}
It might make life easier, but that seems like it would take a lot of effort and I'm not sure about the gain.
The best effort I've had so far to try and solve this problem is:
// list of vectors, just 4 for illustration, but in reality more like 100, with lengths from 5 to 15 integers long
std::vector<std::vector<int>> vector_list;
vector_list.push_back({9});
vector_list.push_back({3, 4, 2, 8, 1});
vector_list.push_back({4, 2});
vector_list.push_back({1, 3, 2, 4});
std::vector<int>::iterator it;
int i;
int j;
int k;
// To test if a smaller vector is a duplicate of a larger vector, I copy the smaller vector, then
// loop through the ints in the larger vector, checking whether I can find them in the copy of the smaller one. If I can,
// I remove the item from the smaller copy, and if the size of the smaller copy reaches 0, then the smaller vector
// was a duplicate of the larger vector and can be removed.
std::vector<int> copy;
// flag for breaking a for loop below
bool erased_i;
// loop through vector list
for ( i = 0; i < vector_list.size(); i++ )
{
// loop again, so we can compare every vector to every other vector
for ( j = 0; j < vector_list.size(); j++ )
{
// don't want to compare a vector to itself
if ( i != j )
{
// if the vector in i loop is at least as big as the vector in j loop
if ( vector_list[i].size() >= vector_list[j].size() )
{
// copy the smaller j vector
copy = vector_list[j];
// loop through each item in the larger i vector
for ( k = 0; k < vector_list[i].size(); k++ ) {
// if the item in the larger i vector is in the smaller vector,
// remove it from the smaller vector
it = std::find(copy.begin(), copy.end(), vector_list[i][k]);
if (it != copy.end())
{
// erase
copy.erase(it);
// if the smaller vector has reached size 0, then it must have been a smaller duplicate that
// we can delete
if ( copy.size() == 0 ) {
vector_list.erase(vector_list.begin() + j);
j--;
}
}
}
}
else
{
// otherwise vector j must be bigger than vector i, so we do the same thing
// in reverse, trying to erase vector i
copy = vector_list[i];
erased_i = false;
for ( k = 0; k < vector_list[j].size(); k++ ) {
it = std::find(copy.begin(), copy.end(), vector_list[j][k]);
if (it != copy.end()) {
copy.erase(it);
if ( copy.size() == 0 ) {
vector_list.erase(vector_list.begin() + i);
// put an extra flag so we break out of the j loop as well as the k loop
erased_i = true;
break;
}
}
}
if ( erased_i ) {
// break the j loop because we have to start over with whatever
// vector is now in position i
break;
}
}
}
}
}
std::cout << "ENDING VECTORS\n";
// TERMINAL OUTPUT:
vector_list[0]
[9]
vector_list[1]
[3, 4, 2, 8, 1]
So this function gives me the right results, as these are the 2 unique vectors. It also gives me the correct results if I push the initial 4 vectors in reverse order, so the smallest one comes last, for example. But it feels so inefficient comparing every vector to every other vector. Plus I have to create these "copies" and try to reduce them to size 0 with every comparison I make. Very inefficient.
Anyways, any ideas on how I could make this speedier would be much appreciated. Maybe some kind of organization by vector length, I dunno... It seems wasteful to compare them all to each other.
Thanks!
Loop through the vectors, and for each vector, map each unique value occurring in it to its count. An unordered_map<int, int> would suffice for this; let's call it M.
Also maintain a set<unordered_map<int, int>>, say S, ordered by the size of unordered_map<int, int> in decreasing order.
Now we will have to compare contents of M with the contents of unordered_maps in S. Let's call M', the current unordered_map in S being compared with M. M will be a subset of M' only when the count of all the elements in M is less than or equal to the count of their respective elements in M'. If that's the case then it's a duplicate and we'll not insert. For any other case, we'll insert. Also notice that if the size of M is greater than the size of M', M can't be a subset of M'. That means we can insert M in S. This can be used as a pre-condition to speed things up. Maintain the indices of vectors which weren't inserted in S, these are the duplicates and have to be deleted from vector_list in the end.
Time Complexity: O(N*M) + O(N^2*D) + O(N*log(N)) = O(N^2*D) where N is the number of vectors in vector_list, M is the average size of the vectors in vector_list and D is the average size of unordered_map's in S. This is for the worst case when there aren't any duplicates. For average case, when there are duplicates, the second complexity will come down.
Edit: The above procedure will create a problem. To fix that, we'll need to make unordered_maps of all vectors, store them in a vector V, and sort that vector in decreasing order of the size of unordered_map. Then, we'll start from the biggest in this vector and apply the above procedure on it. This is necessary because, a subset, say M1 of a set M2, can be inserted into S before M2 if the respective vector of M1 comes before the respective vector of M2 in vector_list. So now we don't really need S, we can compare them within V itself. Complexity won't change.
Edit 2: The same problem will occur again if the sizes of two unordered_maps in V are equal when sorting V. To fix that, we need to keep the contents of the maps in some order too. So just replace unordered_map with map, and in the comparator function, if the sizes of the two maps are equal, compare them element by element: at the first position where either the keys differ, or the keys match but the counts M[key] differ, put the map with the bigger element before the other in V.
Edit 3: New Time Complexity: O(N*M*log(D)) + O(N*D*log(N)) + O(N^2*D*log(D)) = O(N^2*D*log(D)). Also you might want to pair the maps with the index of the respective vectors in vector_list so as to know which vector you must delete from vector_list when you find a duplicate in V.
IMPORTANT: In sorted V, we must start checking from the end just to be safe (in case we choose to delete a duplicate from vector_list as well as V whenever we encounter it). So for the last map in V compare it with the rest of the maps before it to check if it is a duplicate.
Example:
vector_list = {
{1, 2, 3},
{2, 3, 1},
{3, 2, 2},
{4, 2, 3, 2, 5},
{1, 2, 3, 4, 6, 2},
{2, 3, 4, 5, 6},
{1, 5}
}
Creating maps of respective vectors:
V = {
{1->1, 2->1, 3->1},
{1->1, 2->1, 3->1},
{2->2, 3->1},
{2->2, 3->1, 4->1, 5->1},
{1->1, 2->2, 3->1, 4->1, 6->1},
{2->1, 3->1, 4->1, 5->1, 6->1},
{1->1, 5->1}
}
After sorting:
V = {
{1->1, 2->2, 3->1, 4->1, 6->1},
{2->1, 3->1, 4->1, 5->1, 6->1},
{2->2, 3->1, 4->1, 5->1},
{1->1, 2->1, 3->1},
{1->1, 2->1, 3->1},
{1->1, 5->1},
{2->2, 3->1}
}
After deleting duplicates:
V = {
{1->1, 2->2, 3->1, 4->1, 6->1},
{2->1, 3->1, 4->1, 5->1, 6->1},
{2->2, 3->1, 4->1, 5->1},
{1->1, 5->1}
}
Edit 4: I tried coding it up. Running it 1000 times on a list of 100 vectors, with the size of each vector in the range [1-250] and the elements in the range [0-50], and assuming the input is available for all 1000 runs, it takes around 2 minutes on my machine. It goes without saying that there is room for improvement in my code (and my machine).
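For illustration, a minimal sketch of the counted-multiset subset test the procedure relies on (makeCounts and isSubsetOf are hypothetical helper names, not the benchmarked code):
#include <map>
#include <vector>

// Build a value -> count map for one vector (an ordered map, so V can later be
// sorted with the element-by-element tie-breaking described above).
std::map<int, int> makeCounts(const std::vector<int>& v) {
    std::map<int, int> counts;
    for (int x : v) ++counts[x];
    return counts;
}

// True when every count in sub is <= the matching count in super,
// i.e. the vector behind sub is a duplicate of the vector behind super.
bool isSubsetOf(const std::map<int, int>& sub, const std::map<int, int>& super) {
    for (const auto& kv : sub) {
        auto it = super.find(kv.first);
        if (it == super.end() || it->second < kv.second)
            return false;
    }
    return true;
}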
My approach is to copy the vectors that pass the test to an empty vector.
May be inefficient.
May have bugs.
HTH :)
C++ Fiddle
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>
int main(int, char **) {
    using namespace std;
    using vector_of_integers = vector<int>;
    using vector_of_vectors = vector<vector_of_integers>;

    vector_of_vectors in = {
        { 1, 4, 2, 3 },       // unique
        { 5, 9, 2, 1, 3, 3 }, // unique
        { 3, 2, 1 },          // exists
        { 2, 4, 2 },          // exists
        { 8, 2, 2, 4 },       // unique
        { 1, 1, 1 },          // exists
        { 1, 2, 2 },          // exists
        { 5, 8, 2 },          // unique
    };
    vector_of_vectors out;

    // doesnt_contain_vector returns true when there is no entry in out that is a superset of the passed vector
    auto doesnt_contain_vector = [&out](const vector_of_integers &in_vector) {
        // is_subset returns true when a vector contains all the integers of the passed vector
        auto is_subset = [&in_vector](const vector_of_integers &out_vector) {
            // contained returns true when the vector contains the passed integer
            auto contained = [&out_vector](int i) {
                return find(out_vector.cbegin(), out_vector.cend(), i) != out_vector.cend();
            };
            return all_of(in_vector.cbegin(), in_vector.cend(), contained);
        };
        return find_if(out.cbegin(), out.cend(), is_subset) == out.cend();
    };

    copy_if(in.cbegin(), in.cend(), back_insert_iterator<vector_of_vectors>(out), doesnt_contain_vector);

    // show results
    for (auto &vi: out) {
        copy(vi.cbegin(), vi.cend(), std::ostream_iterator<int>(std::cout, ", "));
        cout << "\n";
    }
}
You could try something like this. I use std::sort and std::includes. Perhaps this is not the most efficient solution.
// sort all nested vectors
std::for_each(vlist.begin(), vlist.end(), [](auto& v)
{
    std::sort(v.begin(), v.end());
});

// sort vector of vectors by length of items
std::sort(vlist.begin(), vlist.end(), [](const vector<int>& a, const vector<int>& b)
{
    return a.size() < b.size();
});

// exclude all duplicates
auto i = std::begin(vlist);
while (i != std::end(vlist)) {
    if (std::any_of(i + 1, std::end(vlist), [&](const vector<int>& a) {
            return std::includes(std::begin(a), std::end(a), std::begin(*i), std::end(*i));
        }))
        i = vlist.erase(i);
    else
        ++i;
}

Fast union building of multiple vectors in C++

I’m searching for a fast way to build a union of multiple vectors in C++.
More specifically: I have a collection of vectors (usually 15-20 vectors with several thousand unsigned integers; always sorted and unique, so they could also be std::sets). For each stage, I choose some of them (usually 5-10) and build a union vector. Then I save the length of the union vector and choose some other vectors. This is done several thousand times. In the end, I'm only interested in the length of the shortest union vector.
Small example:
V1: {0, 4, 19, 40}
V2: {2, 4, 8, 9, 19}
V3: {0, 1, 2, 4, 40}
V4: {9, 10}
// The Input Vectors V1, V2 … are always sorted and unique (could also be an std::set)
Choose V1 , V3;
Union Vector = {0, 1, 2, 4, 19, 40} -> Size = 6;
Choose V1, V4;
Union Vector = {0,4, 9, 10, 19 ,40} -> Size = 6;
… and so on …
At the moment I use std::set_union but I’m sure there must be a faster way.
vector< vector<uint64_t> > collection;
vector<uint64_t> chosen;
vector<uint64_t> unionVector, unionVectorTmp;

for (unsigned int i = 0; i < chosen.size(); i++) {
    set_union(collection.at(chosen.at(i)).begin(),
              collection.at(chosen.at(i)).end(),
              unionVector.begin(),
              unionVector.end(),
              back_inserter(unionVectorTmp));
    unionVector.swap(unionVectorTmp);
    unionVectorTmp.clear();
}
I'm grateful for every reference.
EDIT 27.04.2017
A new Idea:
unordered_set<unsigned int> unionSet;
unsigned int counter = 0;
for (const auto &sel : selection) {
    for (const auto &val : sel) {
        auto r = unionSet.insert(val);
        if (r.second) {
            counter++;
        }
    }
}
If they're sorted, you can roll your own that's O(N+M) in runtime. Otherwise you can use a hash table with similar runtime.
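A minimal sketch of that merge-style count for two sorted, duplicate-free vectors (unionSize is a hypothetical name; extending it to more than two inputs is left out):
#include <cstddef>
#include <cstdint>
#include <vector>

// Count the size of the union of two sorted, duplicate-free vectors in one O(N+M) pass.
std::size_t unionSize(const std::vector<uint64_t>& a, const std::vector<uint64_t>& b) {
    std::size_t i = 0, j = 0, count = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i] < b[j])      ++i;
        else if (b[j] < a[i]) ++j;
        else { ++i; ++j; }    // common element, counted once
        ++count;
    }
    // whatever remains in either vector is unique to it
    count += (a.size() - i) + (b.size() - j);
    return count;
}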
The de facto way in C++98 is set_intersection (which needs sorted input), but with C++11 (or TR1) you can go for unordered_set and get a nice O(N) algorithm.
Construct an unordered_set out of your first vector
Check if the elements of your 2nd vector are in the set
Something like that will do:
std::unordered_set<int> us(std::begin(v1), std::end(v1));
auto res = std::count_if(std::begin(v2), std::end(v2), [&](int n) { return us.find(n) != std::end(us); });
There's no need to create the entire union vector. You can count the number of unique elements among the selected vectors by keeping a list of iterators and comparing/incrementing them appropriately.
Here's the code:
#include <limits>
#include <vector>

int countUnique(const std::vector<std::vector<unsigned int>>& selection)
{
    std::vector<std::vector<unsigned int>::const_iterator> iters;
    for (const auto& sel : selection) {
        iters.push_back(sel.begin());
    }
    // true when every iterator has reached the end of its vector
    auto atEnd = [&]() -> bool {
        for (size_t i = 0; i < iters.size(); ++i) {
            if (iters[i] != selection[i].end()) return false;
        }
        return true;
    };
    int count = 0;
    while (!atEnd()) {
        // find the minimum value among the iterators that are not at the end
        unsigned int min = std::numeric_limits<unsigned int>::max();
        for (size_t i = 0; i < iters.size(); ++i) {
            if (iters[i] != selection[i].end() && *iters[i] < min) {
                min = *iters[i];
            }
        }
        // advance every iterator whose value equals that minimum
        for (size_t i = 0; i < iters.size(); ++i) {
            if (iters[i] != selection[i].end() && *iters[i] == min) {
                ++iters[i];
            }
        }
        ++count;
    }
    return count;
}
This uses the fact that your input vectors are sorted and only contain unique elements.
The idea is to keep an iterator into each selected vector. The minimum value among those iterators is our next unique value in the union vector. Then we increment all iterators whose value is equal to that minimum. We repeat this until all iterators are at the end of the selected vectors.
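For example (hypothetical usage of the sketch above), calling it with V1 and V3 from the question should return 6:
int n = countUnique({ {0, 4, 19, 40}, {0, 1, 2, 4, 40} });  // n == 6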

Sort multidimensional array and keep index C++

Is it possible to sort a multidimensional array (row by row) using sort in C++ such that I can keep the index?
For example,
13, 14, 5, 16
0, 4, 3, 2
7, 3, 7, 6
9, 1, 11, 12
Becomes:
{ 5,13,14,16}
{ 0,2,3,4 }
{ 3,6,7,7}
{ 1,9,11,12 }
And the array with the index would be:
{2,0,1,3}
{0,3,2,1}
{1,3,0,2}
{ 1,0,2,3}
First create the array of integer indices; here it is for a 1D array:
std::vector<int> ind(arr.size());
for (size_t i = 0; i < arr.size(); ++i)
    ind[i] = i;
Then create the comparison object. Here is a ballpark of that in C++98 lingo; for C++11 you can shortcut it by using a lambda:
struct compare
{
    bool operator()( int left, int right ) const {
        return arr[left] < arr[right];
    }
};
Then sort the index array using that functor:
std::sort( ind.begin(), ind.end(), compare() );
Finally, use the sorted index array to order the values array.
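For the row-by-row case in the question, a sketch that uses the C++11 lambda shortcut mentioned above (std::stable_sort keeps tied values in their original order):
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    std::vector<std::vector<int>> matrix = {
        {13, 14, 5, 16}, {0, 4, 3, 2}, {7, 3, 7, 6}, {9, 1, 11, 12}};
    std::vector<std::vector<int>> indices;

    for (auto& row : matrix) {
        std::vector<int> ind(row.size());
        std::iota(ind.begin(), ind.end(), 0);           // 0, 1, 2, ...
        std::stable_sort(ind.begin(), ind.end(),
                         [&row](int a, int b) { return row[a] < row[b]; });
        indices.push_back(ind);
        std::sort(row.begin(), row.end());              // sort the values themselves
    }

    for (const auto& ind : indices) {                   // prints 2 0 1 3 / 0 3 2 1 / 1 3 0 2 / 1 0 2 3
        for (int i : ind) std::cout << i << ' ';
        std::cout << '\n';
    }
    return 0;
}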
Yes. To sort row by row, you have to set the appropriate starting and ending points in the sort function. To keep the index part, you can first create pairs of the array elements and their indices using make_pair; after sorting those pairs, you can reconstruct the index array.
You will need to do something like this (I haven't tried it out though):
for (size_t i = 0; i < matrix.size(); i++)
{
    sort(matrix[i].begin(), matrix[i].end());
}
Remember to add the index as the second element in the pair, because the default comparison operator for pairs checks the first element, followed by the second element.
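A sketch of how that pair-based approach could look for a single row (sortRowKeepIndex is a hypothetical helper name):
#include <algorithm>
#include <utility>
#include <vector>

// Sorts one row in place and returns the original positions of the sorted values.
std::vector<int> sortRowKeepIndex(std::vector<int>& row)
{
    std::vector<std::pair<int, int>> paired;
    for (std::size_t i = 0; i < row.size(); ++i)
        paired.push_back(std::make_pair(row[i], static_cast<int>(i)));  // value first, index second
    std::sort(paired.begin(), paired.end());   // pairs compare by value, then by index
    std::vector<int> index;
    for (std::size_t i = 0; i < paired.size(); ++i) {
        row[i] = paired[i].first;
        index.push_back(paired[i].second);
    }
    return index;
}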

How to get intersection of two Arrays

I have two integer arrays
int A[] = {2, 4, 3, 5, 6, 7};
int B[] = {9, 2, 7, 6};
And I have to get the intersection of these arrays,
i.e. the output will be: 2, 6, 7.
I am thinking to solve it by saving array A in a data structure, and then I want to compare all the elements till size A or B and then I will get the intersection.
Now I have a problem: I need to first store the elements of array A in a container.
Shall I follow like this:
int size = sizeof(A)/sizeof(int);
to get the size? But by doing this I will only get the size; after that I want to access all the elements too and store them in a container.
Here is the code which I am using to find the intersection:
#include"iostream"
using namespace std;
int A[] = {2, 4, 3, 5, 6, 7};
int B[] = {9, 2, 7, 6};
int main()
{
int sizeA = sizeof(A)/sizeof(int);
int sizeB = sizeof(B)/sizeof(int);
int big = (sizeA > sizeB) ? sizeA : sizeB;
int small = (sizeA > sizeB) ? sizeB : sizeA;
for (int i = 0; i <big ;++i)
{
for (int j = 0; j <small ; ++j)
{
if(A[i] == B[j])
{
cout<<"Element is -->"<<A[i]<<endl;
}
}
}
return 0;
}
Just use a hash table:
#include <unordered_set> // needs C++11 or TR1
// ...
unordered_set<int> setOfA(A, A + sizeA);
Then you can just check for every element in B, whether it's also in A:
for (int i = 0; i < sizeB; ++i) {
    if (setOfA.find(B[i]) != setOfA.end()) {
        cout << B[i] << endl;
    }
}
Runtime is expected O(sizeA + sizeB).
You can sort the two arrays
sort(A, A+sizeA);
sort(B, B+sizeB);
and use a merge-like algorithm to find their intersection:
#include <vector>
...
std::vector<int> intersection;
int idA = 0, idB = 0;
while (idA < sizeA && idB < sizeB) {
    if (A[idA] < B[idB]) idA++;
    else if (B[idB] < A[idA]) idB++;
    else { // A[idA] == B[idB], we have a common element
        intersection.push_back(A[idA]);
        idA++;
        idB++;
    }
}
The time complexity of this part of the code is linear. However, due to the sorting of the arrays, the overall complexity becomes O(n * log n), where n = max(sizeA, sizeB).
The additional memory required for this algorithm is optimal (equal to the size of the intersection).
saving array A in a data structure
Arrays are data structures; there's no need to save A into one.
I want to compare all the elements till size A or B and then I will get the intersection
This is extremely vague but isn't likely to yield the intersection; notice that you must examine every element in both A and B but "till size A or B" will ignore elements.
What approach should I follow to get the size of an unknown-size array and store it in a container?
It isn't possible to deal with arrays of unknown size in C unless they have some end-of-array sentinel that allows counting the number of elements (as is the case with NUL-terminated character arrays, commonly referred to in C as "strings"). However, the sizes of your arrays are known because their compile-time sizes are known. You can calculate the number of elements in such arrays with a macro:
#define ARRAY_ELEMENT_COUNT(a) (sizeof(a)/sizeof *(a))
...
int *ptr = new sizeof(A);
[Your question was originally tagged [C], and my comments below refer to that]
This isn't valid C -- new is a C++ keyword.
If you wanted to make copies of your arrays, you could simply do it with, e.g.,
int Acopy[ARRAY_ELEMENT_COUNT(A)];
memcpy(Acopy, A, sizeof A);
or, if for some reason you want to put the copy on the heap,
int* pa = malloc(sizeof A);
if (!pa) /* handle out-of-memory */
memcpy(pa, A, sizeof A);
/* After you're done using pa: */
free(pa);
[In C++ you would use new and delete]
However, there's no need to make copies of your arrays in order to find the intersection, unless you need to sort them (see below) but also need to preserve the original order.
There are a few ways to find the intersection of two arrays. If the values fall within the range of 0-63, you can use two unsigned longs and set the bits corresponding to the values in each array, then use & (bitwise "and") to find the intersection. If the values aren't in that range but the difference between the largest and smallest is < 64, you can use the same method but subtract the smallest value from each value to get the bit number. If the range is not that small but the number of distinct values is <= 64, you can maintain a lookup table (array, binary tree, hash table, etc.) that maps the values to bit numbers and a 64-element array that maps bit numbers back to values.
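A minimal sketch of the 0-63 bit-mask idea, written here in C++ with a hypothetical toMask helper:
#include <cstdint>
#include <iostream>

// Set one bit per value present in the array; values must be in the range 0..63.
uint64_t toMask(const int* a, int n)
{
    uint64_t m = 0;
    for (int i = 0; i < n; ++i)
        m |= uint64_t(1) << a[i];
    return m;
}

int main()
{
    int A[] = {2, 4, 3, 5, 6, 7};
    int B[] = {9, 2, 7, 6};
    uint64_t common = toMask(A, 6) & toMask(B, 4);   // bitwise AND gives the intersection
    for (int v = 0; v < 64; ++v)
        if (common & (uint64_t(1) << v))
            std::cout << v << '\n';                  // prints 2, 6, 7
    return 0;
}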
If your arrays may contain more than 64 distinct values, there are two effective approaches:
1) Sort each array and then compare them element by element to find the common values -- this algorithm resembles a merge sort.
2) Insert the elements of one array into a fast lookup table (hash table, balanced binary tree, etc.), and then look up each element of the other array in the lookup table.
Sort both arrays (e.g., qsort()) and then walk through both arrays one element at a time.
Where there is a match, add it to a third array, which need only be as large as the smaller of the two input arrays (the intersection can be no larger than the smaller array). Use a negative or other "dummy" value as your terminator.
When walking through input arrays, where one value in the first array is larger than the other, move the index of the second array, and vice versa.
When you're done walking through both arrays, your third array has your answer, up to the terminator value.

Multi-dimensional vector

How can I create a 2D vector? I know that in a 2D array, I can express it like:
a[0][1]=98;
a[0][2]=95;
a[0][3]=99;
a[0][4]=910;
a[1][0]=98;
a[1][1]=989;
a[1][2]=981;
a[1][3]=987;
How can one do this using the C++ STL Vector?
vector<vector<int> > a;
If you want to define the rows and columns,
vector<vector<int> > a{{11, 2, 4}, {4, 5, 6}, {10, 8, -12}};
std::vector< std::vector< int > > a; // as Ari pointed
Using this for a growing matrix can become complex, as the system will not guarantee that all internal vectors are of the same size. Whenever you grow on the second dimension you will have to explicitly grow all vectors.
// grow twice in the first dimension
a.push_back( vector<int>() );
a.push_back( vector<int>() );
a[0].push_back( 5 ); // a[0].size() == 1, a[1].size()==0
If that is fine with you (it is not really a matrix but a vector of vectors), you should be fine. Else you will need to put extra care to keep the second dimension stable across all the vectors.
If you are planning on a fixed-size matrix, then you should consider encapsulating it in a class and overriding operator() instead of providing the double array syntax. Read the C++ FAQ regarding this here
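A minimal sketch of such a wrapper (a hypothetical Matrix class over one flat vector, so every row is forced to the same length):
#include <cstddef>
#include <vector>

class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols) {}

    // a(r, c) instead of a[r][c]; the row-major layout stays an internal detail.
    int& operator()(std::size_t r, std::size_t c)       { return data_[r * cols_ + c]; }
    int  operator()(std::size_t r, std::size_t c) const { return data_[r * cols_ + c]; }

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

private:
    std::size_t rows_, cols_;
    std::vector<int> data_;
};

// Usage: Matrix a(2, 4); a(1, 0) = 98;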
std::vector< std::vector<int> > a;

// m * n is the size of the matrix
int m = 2, n = 4;

// Grow rows by m
a.resize(m);
for (int i = 0; i < m; ++i)
{
    // Grow columns by n
    a[i].resize(n);
}

// Now you have an m*n matrix with default values;
// you can use the matrix now
a[1][0] = 98;
a[1][1] = 989;
a[1][2] = 981;
a[1][3] = 987;

// OR
for (int i = 0; i < m; ++i)
{
    for (int j = 0; j < n; ++j)
    {
        // modify matrix
        int x = a[i][j];
    }
}
If you don't have to use vectors, you may want to try Boost.Multi_array. Here is a link to a short example.
Declaration of a matrix, for example, with 5 rows and 3 columns:
vector<vector<int> > new_matrix(5, vector<int>(3));
Another way of declaring it to get the same result as above:
typedef vector<int> Row;        // one row of the matrix
typedef vector<Row> My_matrix;  // My_matrix is a vector of rows
My_matrix new_matrix(5, Row(3));
dribeas' suggestion is really the way to go.
Just to give a reason why you might want to go the operator() route: if your data is sparse, for instance, you can lay it out differently to save space internally, and operator() hides that internal implementation detail from your end user. This gives you better encapsulation and lets you make space- or speed-improving changes to the internal layout later on without breaking your interface.
Just use the following method to create a 2D vector:
int rows, columns;
// . . .
vector< vector<int> > Matrix(rows, vector<int>(columns, 0));
Or
vector< vector<int> > Matrix;
Matrix.assign(rows, vector<int>(columns, 0));
// Do your stuff here...
This will create a Matrix of size rows * columns and initialize it with zeros, because we pass a zero (0) as the second argument to the constructor, i.e. vector<int>(columns, 0).
As Ari pointed out, vector< vector<int> > is the right way to do it.
In addition to that, in such cases I always consider wrapping the inner vector (actually, whatever it represents) in a class, because complex STL structures tend to become clumsy and confusing.