Finding smallest values of given vectors - c++

How can I find the smallest value of each column in a given set of vectors efficiently?
For example, consider the following program:
#include <iostream>
#include <vector>
#include <iterator>
#include <algorithm>
#include <cstdlib>
using namespace std;
typedef vector<double> v_t;
int main(){
v_t v1,v2,v3;
for (int i = 1; i<10; i++){
v1.push_back(rand()%10);
v2.push_back(rand()%10);
v3.push_back(rand()%10);
}
copy(v1.begin(), v1.end(), ostream_iterator<double>(cout, " "));
cout << endl;
copy(v2.begin(), v2.end(), ostream_iterator<double>(cout, " "));
cout << endl;
copy(v3.begin(), v3.end(), ostream_iterator<double>(cout, " "));
cout << endl;
}
Let the output be
3 5 6 1 0 6 2 8 2
6 3 2 2 9 0 6 7 0
7 5 9 7 3 6 1 9 2
In this program I want to find the smallest value in every column (across the 3 given vectors) and put it into a vector, i.e. define a vector v_t vfinal that will hold the values:
3 3 2 1 0 0 1 7 0
Is there an efficient way to do this? I mention efficient because my program may have to find the smallest values among a very large number of vectors. Thank you.
Update:
I'm trying to use something like this which I used in one of my previous programs
int count = std::inner_product(A, A+5, B, 0, std::plus<int>(), std::less<int>());
This counts the positions at which an element of A is smaller than the corresponding element of B. Wouldn't it be efficient enough if I could loop through and use a similar kind of function to find the minimal values? I'm not claiming it can be done or not. It's just an idea that may be improved upon, but I don't know how.

You can use std::transform for this. The loops are still there, they're just hidden inside the algorithm. Each additional vector to process is a call to std::transform.
This does your example problem in two linear passes.
#include <algorithm>  // std::transform, std::min
#include <cstdlib>    // rand
#include <vector>
typedef std::vector<double> v_t;
int main()
{
v_t v1,v2,v3,vfinal(9); // note: vfinal sized to accept results
for (int i = 1; i < 10; ++i) {
v1.push_back(rand() % 10);
v2.push_back(rand() % 10);
v3.push_back(rand() % 10);
}
std::transform(v1.begin(), v1.end(), v2.begin(), vfinal.begin(), std::min<double>);
std::transform(v3.begin(), v3.end(), vfinal.begin(), vfinal.begin(), std::min<double>);
}
Note: this works in MSVC++ 2010. I had to provide a min functor for gcc 4.3.
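If you run into that, a tiny functor can be passed instead of std::min<double>; this is my workaround sketch, not part of the answer above:
struct min_double
{
    double operator()(double a, double b) const { return a < b ? a : b; }
};
// used as, for example:
// std::transform(v1.begin(), v1.end(), v2.begin(), vfinal.begin(), min_double());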

I think that the lower bound of your problem is O(n*m), where n is the number of vectors and m is the number of elements in each vector.
The trivial algorithm (comparing the elements at the same index of the different vectors) is as efficient as it can be, I think.
The easiest way to implement it would be to put all your vectors in some data structure (a simple C-like array, or maybe a vector of vectors).

The best way to do this would be to use a vector of vectors and simple looping.
void find_mins(const std::vector<std::vector<int> >& inputs, std::vector<int>& outputs)
{
// Assuming that each inner vector has the same size, resize the output vector
// so it can hold one minimum per column.
outputs.resize(inputs[0].size());
for (std::size_t col = 0; col < inputs[0].size(); ++col)
{
int min = inputs[0][col];
for (std::size_t row = 1; row < inputs.size(); ++row)
if (inputs[row][col] < min) min = inputs[row][col];
outputs[col] = min;
}
}

To find the smallest number in a vector, you simply have to examine each element in turn; there's no quicker way, at least from an algorithmic point-of-view.
In terms of practical performance, cache issues may affect you here. As has been mentioned in a comment, it will probably be more cache-efficient if you could store your vectors column-wise rather than row-wise. Alternatively, you may want to do all min searches in parallel, so as to minimise cache misses. i.e. rather than this:
foreach (col)
{
foreach (row)
{
x_min[col] = std::min(x_min[col], x[col][row]);
}
}
you should probably do this:
foreach (row)
{
foreach (col)
{
x_min[col] = std::min(x_min[col], x[col][row]);
}
}
Note that the STL already provides a nice function for finding the smallest element of a single range: min_element().
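For concreteness, here is a small C++11 rendering of the second loop ordering (my sketch, not part of the answer above, with the input vectors stored row by row as in the question): each input vector is streamed through once and x_min is updated as you go.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<std::vector<double> > x = {
        {3, 5, 6, 1, 0, 6, 2, 8, 2},
        {6, 3, 2, 2, 9, 0, 6, 7, 0},
        {7, 5, 9, 7, 3, 6, 1, 9, 2}
    };
    std::vector<double> x_min(x[0]);                            // seed with the first row
    for (std::size_t row = 1; row < x.size(); ++row)            // foreach (row)
        for (std::size_t col = 0; col < x[row].size(); ++col)   // foreach (col)
            x_min[col] = std::min(x_min[col], x[row][col]);

    for (std::size_t col = 0; col < x_min.size(); ++col)
        std::cout << x_min[col] << ' ';                         // 3 3 2 1 0 0 1 7 0
    std::cout << '\n';
}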

Related

Find uncommon elements using hashing

I think this is a fairly common question but I didn't find any answer for this using hashing in C++.
I have two arrays, both of the same lengths, which contain some elements, for example:
A={5,3,5,4,2}
B={3,4,1,2,1}
Here, the uncommon elements are: {5,5,1,1}
I have tried this approach - iterating with a while loop over both arrays after sorting:
while(i<n && j<n) {
if(a[i]<b[j])
uncommon[k++]=a[i++];
else if (a[i] > b[j])
uncommon[k++]=b[j++];
else {
i++;
j++;
}
}
while(i<n && a[i]!=b[j-1])
uncommon[k++]=a[i++];
while(j < n && b[j]!=a[i-1])
uncommon[k++]=b[j++];
and I am getting the correct answer with this. However, I want a better approach in terms of time complexity since sorting both arrays every time might be computationally expensive.
I tried to do hashing but couldn't figure it out entirely.
To insert elements from arr1[]:
set<int> uncommon;
for (int i=0;i<n1;i++)
uncommon.insert(arr1[i]);
To compare arr2[] elements:
for (int i = 0; i < n2; i++)
if (uncommon.find(arr2[i]) != uncommon.end())
Now, what I am unable to do is to put into the uncommon array[] only those elements which are uncommon between the two arrays.
Thank you!
First of all, std::set does not have anything to do with hashing. Sets and maps are ordered containers; implementations may differ, but most likely it is a binary search tree. Whatever you do, you won't get faster than n log n with them - the same complexity as sorting.
If you're fine with n log n and sorting, I'd strongly advise just using the set_symmetric_difference algorithm https://en.cppreference.com/w/cpp/algorithm/set_symmetric_difference ; it requires two sorted ranges.
But if you insist on an implementation relying on hashing, you should use std::unordered_set or std::unordered_map. That way you can be faster than n log n: you can get your answer in O(n + m) average time, where n = a.size() and m = b.size(). You should create two unordered_sets, hashed_a and hashed_b, and in two loops check which elements of hashed_a are not in hashed_b, and which elements of hashed_b are not in hashed_a. Here is some pseudocode:
create hashed_a and hashed_b
create set_result // for the result
for (a_v : hashed_a)
if (a_v not in hashed_b)
set_result.insert(a_v)
for (b_v : hashed_b)
if (b_v not in hashed_a)
set_result.insert(b_v)
return set_result // it holds the symmetric diference, which you need
UPDATE: as noted in the comments, my answer doesn't account for duplicates. The easiest way to handle duplicates would be to use an unordered_map<int, int> with the elements as keys and the number of occurrences as values.
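For illustration, here is a hedged C++11 sketch (my code, not the answerer's) of that counting variant with std::unordered_map: occurrences in A are counted up, occurrences in B are counted down, and whatever is left over on either side is uncommon. Average complexity O(n + m).
#include <cstdlib>        // std::abs
#include <iostream>
#include <unordered_map>
#include <vector>

std::vector<int> uncommon(const std::vector<int>& a, const std::vector<int>& b)
{
    std::unordered_map<int, int> count;
    for (int x : a) ++count[x];                  // +1 for every occurrence in a
    for (int x : b) --count[x];                  // -1 for every occurrence in b
    std::vector<int> out;
    for (const auto& kv : count)                 // non-zero leftovers are uncommon
        for (int i = 0; i < std::abs(kv.second); ++i)
            out.push_back(kv.first);
    return out;
}

int main()
{
    std::vector<int> a{5, 3, 5, 4, 2}, b{3, 4, 1, 2, 1};
    for (int x : uncommon(a, b)) std::cout << x << ' ';   // 5 5 1 1, in unspecified order
    std::cout << '\n';
}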
First, you need a way to distinguish between equal values contained in the same array (for example the 5 and 5 in the first array, and the 1 and 1 in the second array). This is the key to reducing the overall complexity; otherwise you can't do better than O(n log n). A good possible approach is to create a wrapper object to hold your actual values, and put into your arrays pointers to those wrapper objects, so the pointer addresses serve as unique identifiers for the objects. This wrapping costs you just O(n1+n2) operations, but also an additional O(n1+n2) space.
Now your problem is that both arrays contain only elements unique within each array, and you want to find the uncommon elements. That is the (union of both arrays' elements) minus the (intersection of both arrays' elements). Therefore, all you need to do is push all the elements of the first array into a hash map (complexity O(n1)), and then push all the elements of the second array into the same hash map (complexity O(n2)), detecting the collisions (equality of an element from the first array with an element from the second array). This comparison step requires O(n2) comparisons in the worst case. For maximum performance you could check the sizes of the arrays before pushing the elements into the hash map and swap the arrays so that the first pass happens with the longer array. Your overall algorithm complexity would be O(n1+n2) pushes (hashings) and O(n2) comparisons.
The implementation is the most boring stuff, so I'll leave it to you ;)
A solution without sorting (and without hashing, but you seem to care more about complexity than about hashing itself) is to notice the following: an uncommon element e is an element that is in exactly one multiset.
This means that the multiset of all uncommon elements is the union between 2 multisets:
S1 = the elements of A that are not in B
S2 = the elements of B that are not in A
Using the std::set_difference, you get:
#include <set>
#include <vector>
#include <iostream>
#include <algorithm>
int main() {
std::multiset<int> ms1{5,3,5,4,2};
std::multiset<int> ms2{3,4,1,2,1};
std::vector<int> v;
std::set_difference( ms1.begin(), ms1.end(), ms2.begin(), ms2.end(), std::back_inserter(v));
std::set_difference( ms2.begin(), ms2.end(), ms1.begin(), ms1.end(), std::back_inserter(v));
for(int e : v)
std::cout << e << ' ';
return 0;
}
Output:
5 5 1 1
The complexity of this code is at most 2*(2*(N1+N2)-1) comparisons (two set_difference calls), roughly 4*(N1+N2), where N1 and N2 are the sizes of the multisets.
Links:
set_difference: https://en.cppreference.com/w/cpp/algorithm/set_difference
compiler explorer: https://godbolt.org/z/o3KGbf
The question can be solved with O(n log n) time complexity.
ALGORITHM
Sort both arrays with merge sort in O(n log n) complexity. You can also use the sort function, for example sort(array1.begin(), array1.end()).
Now use the two-pointer method to skip all elements common to both arrays.
Program for the above method:
int i = 0, j = 0;
while (i < array1.size() && j < array2.size()) {
// If not common, print smaller
if (array1[i] < array2[j]) {
cout << array1[i] << " ";
i++;
}
else if (array2[j] < array1[i]) {
cout << array2[j] << " ";
j++;
}
// Skip common element
else {
i++;
j++;
}
}
The complexity of the above program is O(array1.size() + array2.size()); in the worst case that is about 2n comparisons, i.e. O(n).
The above program prints the uncommon elements as output. If you want to store them, just create a vector and push them into it.

Find sum of 5 highest numbers in an array of 100 numbers

There are 100 numbers in an array and I need to find the average of the 5 highest numbers among them.
In the same way, I also need the average of the 5 lowest numbers. How could I go about doing it?
Use Hoare's select algorithm (or the median of medians, if you need to be absolutely certain of the computational complexity), then add the top partition (and divide by its size to get the average).
This is somewhat faster than the obvious method of sorting instead of partitioning -- partitioning is O(N), whereas sorting is O(N log N).
Edit: In C++, for real code (i.e., anything except homework where part of the requirement is to do the task entirely on your own) you can use std::nth_element to partition the input into the top 5 and everything else.
Edit2: Here's another quick demo to complement @Nils', but this one in full C++11 regalia (so to speak):
#include <numeric>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>
int main(){
std::vector<int> x {1, 101, 2, 102, 3, 103, 4, 104, 5, 105, 6};
auto pos = x.end() - 5;
std::nth_element(x.begin(), pos, x.end());
auto sum = std::accumulate(pos, x.end(), 0);
auto mean = sum / std::distance(pos, x.end());
std::cout << "sum = " << sum << '\n' << "mean = " << mean << "\n";
return 0;
}
Jerry already explained how it works. I just want to add a practical code example in C++:
#include <algorithm>
int averageTop5 (int list[100])
{
// move top 5 elements to end of list:
std::nth_element (list, list+95, list+100);
// get average (with overflow handling)
int avg = 0;
int rem = 0;
for (int i=95; i<100; i++)
{
avg += list[i]/5;
rem += list[i]%5;
}
return avg + (rem /5);
}
With Jerry's std::accumulate this becomes a two-liner, but it may fail with integer overflow:
#include <algorithm>
#include <numeric>
int averageTop5 (int list[100])
{
std::nth_element (list, list+95, list+100);
return std::accumulate (list+95, list+100, 0)/5;
}
Sort them in ascending order and add the last five numbers.
Copy the first 5 numbers into an array. Determine the position of the smallest element in that array. For each of the 95 numbers in the remainder of the list, compare it with that smallest number. If the new number is larger, then replace it and redetermine the position of the new smallest number in your short list.
At the end, sum your array and divide by 5.
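A sketch of that approach (my illustration, not the answerer's code), using std::min_element to find the entry to replace:
#include <algorithm>

int averageTop5(const int list[100])
{
    int top[5];
    std::copy(list, list + 5, top);                      // seed with the first 5 values
    for (int i = 5; i < 100; ++i)
    {
        int* smallest = std::min_element(top, top + 5);  // weakest of the current top 5
        if (list[i] > *smallest)
            *smallest = list[i];                         // promote the new value
    }
    long long sum = 0;
    for (int i = 0; i < 5; ++i)
        sum += top[i];
    return static_cast<int>(sum / 5);
}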

Which STL container has facilities for slicing into two or more containers?

Say I have a vector/list (whatever) of ints populated with 2300 values.
I want to be able to easily slice this into 4 vectors/lists (not necessarily of equal size).
e.g.
vec1 ( elements 0 - 500 )
vec2 ( elements 501 - 999)
vec3 ( elements 1001 - 1499)
etc.
A common way to do it would be to use the one container, and just define separate iterator ranges over it.
std::vector<int> vec(2300);
it0 = vec.begin();
it1 = vec.begin() + 500;
it2 = vec.begin() + 1000;
it3 = vec.begin() + 1500;
it4 = vec.begin() + 2000;
it5 = vec.end();
Now, the first range is simply defined by the iterators it0 and it1. The second by it1 and it2, and so on.
So, if you want to apply a function to every element in the third range, you'd simply do this:
std::for_each(it2, it3, somefunc);
Actually copying the elements into separate containers may be unnecessary, and would carry a performance cost.
std::list would be the best choice, as you just build lists by joining pointers. Finding the exact place to slice would be the problem, though, because you have to walk an iterator to that point in the list to make the cut.
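A rough sketch of what that looks like (my code, not the answerer's): std::list::splice moves nodes instead of copying them, but reaching the cut point is still a linear walk.
#include <iostream>
#include <iterator>
#include <list>

int main()
{
    std::list<int> all;
    for (int i = 0; i < 2300; ++i) all.push_back(i);

    // walk to the cut point: O(n) for std::list, which is the drawback mentioned above
    std::list<int>::iterator cut = all.begin();
    std::advance(cut, 500);

    // move (not copy) the first 500 nodes into a new list
    std::list<int> first_slice;
    first_slice.splice(first_slice.begin(), all, all.begin(), cut);

    std::cout << first_slice.size() << " " << all.size() << '\n';   // 500 1800
}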
EDIT:
As per comments (thanks for the insights), maybe using std::vector<int> and iterators is a good idea. However, with plain iterators you lose the length of the vector, so I propose using, for instance, a boost::iterator_range:
std::vector<int> vec(2300);
it0 = vec.begin();
it1 = vec.begin() + 500;
it2 = vec.begin() + 1000;
it3 = vec.begin() + 1500;
it4 = vec.begin() + 2000;
it5 = vec.end();
typedef boost::iterator_range< std::vector<int>::iterator > my_slice_t;
my_slice_t slice1 = boost::make_iterator_range(it0, it1);
...
Then you can iterate over slice1 just as you would over the underlying std::vector<int>:
std::for_each(slice1.begin(), slice1.end(), /* stuff */);
See the fourth std::vector<> constructor documented here.
// given std::vector<T> vec with 2300 elements
std::vector<T> vec1(vec.begin(), vec.begin() + 500);
std::vector<T> vec2(vec.begin() + 500, vec.begin() + 1000);
std::vector<T> vec3(vec.begin() + 1000, vec.begin() + 1500);
std::vector<T> vec4(vec.begin() + 1500, vec.begin() + 2000);
std::vector<T> vec5(vec.begin() + 2000, vec.end());
Actually it is doable with the vector container
#include <vector>
#include <iostream>
using namespace std ;
int main()
{
vector<int> ints ;
vector<int> ints_sliced;
int i ;
// populate
for( i = 0 ; i < 100 ; i++ )
ints.push_back(i) ;
// slice from 10-19
ints_sliced.insert(ints_sliced.begin(), ints.begin()+10, ints.begin()+20) ;
// inspect
vector<int>::iterator it ;
for( it = ints_sliced.begin() ; it != ints_sliced.end() ; it++ )
cout << *it << endl ;
}
Oh, if you happen to be using g++ and GNU libstdc++, you can use roughly
g++ -march=native -O3 -ftree-vectorize ...
If you also throw in GNU OpenMP support (libgomp) you can benefit (evaluate, profile!) from automatic parallelization of standard algorithms,
g++ -D_GLIBCXX_PARALLEL -fopenmp -march=native -O3 .... -lgomp
YMMV - I wanted to just throw this out there, because e.g. the parallel for_each seems to be close to what you want (but, automagic and self-adapting to container size, iterator type and number of processors)
In addition to @jalf's correct observation that actually copying the subvectors into fresh vectors might be a waste of time and space, let me point at valarray.
Intro: valarray
Valarray may be more complicated, but especially in the face of parallel processing, it might lead to better ways to split the work into subvectors for different threads. Things to look for:
algorithmic pre-science (if locations in a certain pattern have a certain property (e.g. are known to be zero), you can hand it to an optimized worker for those values)
subvector alignment (the alignment can make or break the availability of SIMD/SSE4-optimized versions; have a look at gcc -ftree-vectorize for more background)
Now valarrays have quite a number of 'obscure' operations and tricks to them (gslices; basically revectored array dimensions to address the original array) that I won't go into here, but suffice it to say, if you want to do number crunching across subsets of contiguous arrays of (mainly) floating points[1], it will pay to read up on those.
Mandatory (braindead) teaser
// mask_array example
#include <iostream>
#include <valarray>
using namespace std;
int main ()
{
valarray<int> myarray (10);
for (int i=0; i<10; ++i) myarray[i]=i; // 0 1 2 3 4 5 6 7 8 9
valarray<bool> mymask (10);
for (int i=0; i<10; ++i)
mymask[i]= ((i%2)==1); // f t f t f t f t f t
myarray[mymask] *= valarray<int>(10,5); // 0 10 2 30 4 50 6 70 8 90
myarray[!mymask] = 0; // 0 10 0 30 0 50 0 70 0 90
cout << "myarray:\n";
for (size_t i=0; i<myarray.size(); ++i)
cout << myarray[i] << ' ';
cout << endl;
return 0;
}
This was copied verbatim from the above link; you will want to adapt it to your specific need. There was probably a good reason why you kept the end goal a bit vague, so I'll happily leave the rest of the work to you!
Wrapup
If you really want to go all the way, however, you should start looking at the big guns (Blitz++, et al.).
[1] these have historically been the focus of vectorized CPU instruction sets. However, as @jalf notes, SSE2 and higher include SIMD integer instructions as well.

Efficiently computing vector combinations

I'm working on a research problem out of curiosity, and I don't know how to program the logic that I have in mind. Let me explain it to you:
I've four vectors, say for example,
v1 = 1 1 1 1
v2 = 2 2 2 2
v3 = 3 3 3 3
v4 = 4 4 4 4
Now what I want to do is to add them combination-wise, that is,
v12 = v1+v2
v13 = v1+v3
v14 = v1+v4
v23 = v2+v3
v24 = v2+v4
v34 = v3+v4
Up to this step it is just fine. The problem is that now I want to add to each of these vectors one vector from v1, v2, v3, v4 that hasn't been added to it before. For example:
v3 and v4 haven't been added to v12, so I want to create v123 and v124. Similarly for all the vectors:
v12 should become:
v123 = v12+v3
v124 = v12+v4
v13 should become:
v132 // This should not occur because I already have v123
v134
v14 should become:
v142 // Cannot occur because I've v124 already
v143 // Cannot occur
v23 should become:
v231 // Cannot occur
v234 ... and so on.
It is important that I do not do it all in one step at the start. For example, I could just enumerate (4 choose 3), i.e. 4C3, and finish it off, but I want to do it step by step at each iteration.
How do I program this?
P.S.: I'm trying to work on a modified version of the Apriori algorithm in data mining.
In C++, given the following routine:
#include <algorithm>  // std::iter_swap, std::rotate (and std::sort, used below)
#include <cstddef>    // std::size_t
#include <iostream>
template <typename Iterator>
inline bool next_combination(const Iterator first,
Iterator k,
const Iterator last)
{
/* Credits: Thomas Draper */
if ((first == last) || (first == k) || (last == k))
return false;
Iterator itr1 = first;
Iterator itr2 = last;
++itr1;
if (last == itr1)
return false;
itr1 = last;
--itr1;
itr1 = k;
--itr2;
while (first != itr1)
{
if (*--itr1 < *itr2)
{
Iterator j = k;
while (!(*itr1 < *j)) ++j;
std::iter_swap(itr1,j);
++itr1;
++j;
itr2 = k;
std::rotate(itr1,j,last);
while (last != j)
{
++j;
++itr2;
}
std::rotate(k,itr2,last);
return true;
}
}
std::rotate(first,k,last);
return false;
}
You can then proceed to do the following:
int main()
{
unsigned int vec_idx[] = {0,1,2,3,4};
const std::size_t vec_idx_size = sizeof(vec_idx) / sizeof(unsigned int);
{
// All unique combinations of two vectors, for example, 5C2
std::size_t k = 2;
do
{
std::cout << "Vector Indicies: ";
for (std::size_t i = 0; i < k; ++i)
{
std::cout << vec_idx[i] << " ";
}
}
while (next_combination(vec_idx,
vec_idx + k,
vec_idx + vec_idx_size));
}
std::sort(vec_idx,vec_idx + vec_idx_size);
{
// All unique combinations of three vectors, for example, 5C3
std::size_t k = 3;
do
{
std::cout << "Vector Indicies: ";
for (std::size_t i = 0; i < k; ++i)
{
std::cout << vec_idx[i] << " ";
}
}
while (next_combination(vec_idx,
vec_idx + k,
vec_idx + vec_idx_size));
}
return 0;
}
Note 1: Because of the iterator-oriented interface of the next_combination routine, any STL container that supports forward iteration via iterators can also be used, such as std::vector, std::deque and std::list, to name a few.
Note 2: This problem is well suited for the application of memoization techniques. You can create a map and fill it with the vector sums of given combinations. Prior to computing the sum of a given set of vectors, you can look up whether the sum of a subset has already been calculated and reuse that result. Though you're performing summation, which is quite cheap and fast, if the calculation you were performing were far more complex and time consuming, this technique would definitely help bring about some major performance improvements.
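A rough illustration of that memoization (my sketch, not part of the answer): combinations are keyed by their sorted index list, and the sum for a k-combination reuses the cached sum of its (k-1)-element prefix when available.
#include <cstddef>
#include <map>
#include <vector>

typedef std::vector<std::size_t> Key;   // sorted, non-empty list of vector indices
typedef std::vector<double> Vec;

Vec sum_of(const std::vector<Vec>& vecs, const Key& key, std::map<Key, Vec>& cache)
{
    std::map<Key, Vec>::iterator hit = cache.find(key);
    if (hit != cache.end())
        return hit->second;                              // computed earlier, reuse it

    Vec result;
    if (key.size() == 1)
    {
        result = vecs[key[0]];
    }
    else
    {
        Key prefix(key.begin(), key.end() - 1);          // e.g. {0,1} for {0,1,3}
        result = sum_of(vecs, prefix, cache);
        const Vec& last = vecs[key.back()];
        for (std::size_t i = 0; i < result.size(); ++i)
            result[i] += last[i];
    }
    return cache[key] = result;
}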
I think this problem can be solved by marking which combinations have occurred.
My first thought was that you could use a 3-dimensional array to mark which combinations have happened, but that is not very good.
How about a bit-array (such as an integer) for flagging? Such as:
Num 1 = 2^0 for vector 1
Num 2 = 2^1 for vector 2
Num 4 = 2^2 for vector 3
Num 8 = 2^3 for vector 4
When you build a combination, just add all the representative numbers. For example, v124 will have the value 1 + 2 + 8 = 11. This value is unique for every combination.
This is just my thought. Hope it helps you someway.
EDIT: Maybe I wasn't clear enough about my idea. I'll try to explain it a bit more clearly:
1) Assign to each vector a representative number. This number is the id of the vector, and it's unique. Moreover, the sum of every subset of those numbers is unique, meaning that if the sum of k representative numbers is M, we can easily tell which vectors take part in the sum.
We do that by assigning 2^0 to vector 1, 2^1 to vector 2, 2^2 to vector 3, and so on...
For every M = 2^x + 2^y + 2^z + ... = (2^x OR 2^y OR 2^z OR ...), we know that vectors (x + 1), (y + 1), (z + 1), ... take part in the sum. This can easily be checked by expressing the number in binary.
For example, we know that:
2^0 = 1 (binary)
2^1 = 10 (binary)
2^2 = 100 (binary)
...
So if the sum is 10010 (binary), we know that the vectors with representative numbers 10 (binary) and 10000 (binary), i.e. vectors 2 and 5, take part in the sum.
And best of all, the sum here can be calculated with the OR operator, which is also easy to see if you write the numbers in binary.
2) Utilizing the above facts, every time before you compute the sum of a combination of vectors, you can add/OR their representative numbers first and keep track of them in something like a lookup table. If the resulting number already exists in the lookup table, you can skip that combination. That solves the problem; a sketch follows below.
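A hedged C++11 sketch of that bookkeeping (my code, not the answerer's): each base vector gets one bit, a combination is the OR of its bits, and an unordered_set of masks tells us whether a combination has already been generated.
#include <cstddef>
#include <iostream>
#include <unordered_set>
#include <vector>

int main()
{
    std::vector<std::vector<int> > base = {
        {1, 1, 1, 1}, {2, 2, 2, 2}, {3, 3, 3, 3}, {4, 4, 4, 4}
    };
    std::unordered_set<unsigned> seen;            // masks of combinations already built

    // step 1: all pairs (v12, v13, ...)
    struct Combo { unsigned mask; std::vector<int> sum; };
    std::vector<Combo> pairs;
    for (unsigned i = 0; i < base.size(); ++i)
        for (unsigned j = i + 1; j < base.size(); ++j)
        {
            unsigned mask = (1u << i) | (1u << j);
            std::vector<int> s(base[i]);
            for (std::size_t k = 0; k < s.size(); ++k) s[k] += base[j][k];
            seen.insert(mask);
            pairs.push_back({mask, s});
        }

    // step 2: extend each pair by one vector it does not contain yet, skipping
    // masks that were already produced (so v132 is skipped once v123 exists)
    for (const Combo& c : pairs)
        for (unsigned i = 0; i < base.size(); ++i)
        {
            unsigned mask = c.mask | (1u << i);
            if (mask == c.mask || !seen.insert(mask).second)
                continue;                         // i is already in the combo, or a duplicate
            std::vector<int> s(c.sum);
            for (std::size_t k = 0; k < s.size(); ++k) s[k] += base[i][k];
            std::cout << "new combination, mask = " << mask << '\n';   // 7 11 13 14
        }
}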
Maybe I am misunderstanding, but isn't this equivalent to generating all subsets (power set) of 1, 2, 3, 4 and then for each element of the power set, summing the vector? For instance:
//This is pseudo C++ since I'm too lazy to type everything
//push back the vectors or pointers to vectors, etc.
vector< vector< int > > v = v1..v4;
//Populate a vector with 1 to 4
vector< int > n = 1..4
//Function that generates the power set {nil, 1, (1,2), (1,3), (1,4), (1,2,3), etc.
vector< vector < int > > power_vec = generate_power_set(n);
//One might want to make a string key by doing a Perl-style join of the subset together by a comma or something...
map< vector < int >,vector< int > > results;
//For each subset, we sum the original vectors together
for subset_iter over power_vec{
vector<int> result;
//Assumes all the vectors are the same length; can be modified carefully if not.
result.reserve(length(v1));
for ii=0 to length(v1){
for iter over subset from subset_iter{
result[ii]+=v[iter][ii];
}
}
results[*subset_iter] = result;
}
If that is the idea you had in mind, you still need a power set function, but that code is easy to find if you search for power set. For example,
Obtaining a powerset of a set in Java.
1. Maintain a list of all the 4C2 choices of two vectors.
2. Create a vector of sets, where each set consists of the elements of the original vectors chosen in step 1. Iterate over the original vectors and, for each one, create a set combining it with a set from step 1; only if that set is not already present, add the result to the vector of sets.
3. Sum up the vector of sets you obtained in step 2.
But as you indicated, the easiest is 4C3.
Here is something written in Python. You can adapt it to C++:
import itertools
l1 = ['v1','v2','v3','v4']
res = []
for e in itertools.combinations(l1,2):
res.append(e)
fin = []
for e in res:
for l in l1:
aset = set((e[0],e[1],l))
if aset not in fin and len(aset) == 3:
fin.append(aset)
print(fin)
This would output:
[set(['v1', 'v2', 'v3']), set(['v1', 'v2', 'v4']), set(['v1', 'v3', 'v4']), set(['v2', 'v3', 'v4'])]
This is the same result as 4C3.

C++ : iterating the vector

I'm very new to C++ and I'm trying to learn about vectors in C++.
I wrote the small program below. I'd like to do foreach(var sal in salaries) as in C#, but C++ doesn't allow me to do that, so I googled it and found that I have to use an iterator. I'm able to compile and run this program, but I don't get the expected output. I'm getting "0 0 0 0 0 0 1 2 3 4 5 6 7 8 9" instead of "0 1 2 3 4 5 6 7 8 9".
Could anyone please explain why? Thanks.
#include <iostream>
#include <iomanip>
#include <vector>
#include <algorithm>  // std::for_each
using namespace std;
void show(int i)
{
cout << i << " ";
}
int main(){
vector<int> salaries(5);
for(int i=0; i < 10; i++){
salaries.push_back(i);
}
for_each(salaries.begin(), salaries.end(), show);
}
You created a vector with 5 elements, then you push 10 more onto the end. That gives you a total of 15 elements, and the results you're seeing. Try changing your definition of the vector (in particular the constructor call), and you'll be set. How about:
vector<int> salaries;
This code creates a vector with a size of 5, and with each of those 5 elements initialized to their default value (0):
vector<int> salaries(5);
push_back inserts a new element, so here, you insert 10 new elements, ending up with a vector with 15 elements:
for(int i=0; i < 10; i++){
salaries.push_back(i);
}
You can create your vector like this instead:
vector<int> salaries;
and you'll get a vector with size 0.
Alternatively, you could initialize it with size 10, and then overwrite each element, instead of inserting new ones:
vector<int> salaries(10);
for(int i=0; i < 10; i++){
salaries[i] = i;
}
In some cases, it may be more efficient to write something like this:
vector<int> salaries; // create a vector with size 0
// allocate space for 10 entries, but while keeping a size of 0
salaries.reserve(10);
for(int i=0; i < 10; i++){
// because we reserved space earlier, these new insertions happen without
// having to copy the vector contents to a larger array.
salaries.push_back(i);
}
When you declare salaries(5), it's adding 5 entries into the vector with values of 0. Then your loop adds 0..9. Therefore you have 15 elements in your vector instead of just 10. Try declaring the vector without the 5.
vector<int> salaries;
vector<int> salaries(5); means that you are creating a vector which contains 5 int objects from the start, each of them value-initialized, which for int means zero. That's why you have 5 zero integers at the beginning of the vector container.
@Michael: Which book is that? I'd say it's wrong. Using resize() is a good practice if you know in advance how big you need the vector to be, but don't set the size at creation unless you need the vector to contain default-initialized values.
You can also reserve some capacity in the vector in advance, which is subtly different from resizing. Reserving simply reserves "at least" that much capacity for the vector (but does not change its size), while resize adds/removes elements to/from the vector to make it the requested size. A small illustration follows below.
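A tiny example of the difference (mine): resize changes size(), reserve only guarantees capacity().
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> a, b;
    a.resize(10);     // a now holds 10 value-initialized (zero) ints
    b.reserve(10);    // b still holds 0 ints, but won't reallocate until it grows past 10
    std::cout << a.size() << " " << a.capacity() << '\n';   // 10 and at least 10
    std::cout << b.size() << " " << b.capacity() << '\n';   // 0 and at least 10
}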
vector<int> salaries(5);
This creates a vector of 5 zeros for its elements. [0, 0, 0, 0, 0]
for(int i=0; i < 10; i++){
salaries.push_back(i);
}
This adds 10 more elements at the end ranging from 0 to 9 [0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Finally, I don't recommend using foreach so much. Functional programming has the downside of decentralizing code. It is extremely useful in some cases, but for these cases, and especially considering how you're starting out, I'd recommend:
for (vector<int>::const_iterator it = salaries.begin(), end = salaries.end(); it != end; ++it){
cout << *it << " ";
}
Using this technique, you'll be able to iterate through any collection in the standard library without having to write separate functions or function objects for the loop body.
With C++0x you'll get a lot of goodies to make this easier:
for (int salary: salaries)
cout << salary << endl;
There's also BOOST_FOREACH already, which is almost as easy if you can use Boost.
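For completeness, a quick sketch of what that looks like (mine, assuming Boost is available):
#include <boost/foreach.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> salaries;
    for (int i = 0; i < 10; i++) salaries.push_back(i);
    BOOST_FOREACH(int salary, salaries)
        std::cout << salary << " ";
}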