How to remove a certain element from a random range of values? - C++

I'm trying to randomly generate a double round-robin schedule for several teams, with teams as rows and rounds as columns. Each team is scheduled against every other team twice: once with the opponent stored as a positive value for a home match, and once as a negative value for an away match.
The code will be something like the example below, except that it should work on a 2D array instead of a 1D one, where a certain "K" and "-K" are removed from the possible randomized values of element(team, round) for each row of the 2D array, instead of one fixed value.
K is the row number, which ensures a team is not matched against itself (e.g. set the weights of +2 and -2 to 0 in "dist{}" while filling row #2, so that neither "2" nor "-2" occurs as team 2's assigned opponent, home or away). This removal should NOT be permanent, since in the other rows of the schedule array team #K is a valid opponent.
Is there a facility that allows taking a different k off the randomization for each row (team)? Or could I do it with srand()?
#include <functional>
#include <iostream>
#include <ostream>
#include <random>

int main()
{
    std::random_device rd;
    unsigned long seed = rd();
    std::cout << "seed " << seed << std::endl;
    std::mt19937 engine(seed);
    // Distribution over {0, 1, 2, 4, 5}
    std::discrete_distribution<> dist {{1, 1, 1, 0, 1, 1}}; // 3 given 0 weight (chance).
    auto rng = std::bind(dist, std::ref(engine));
    const int n = 10;
    for (int i = 0; i != n; ++i)
    {
        int x = rng();
        std::cout << x << std::endl;
    }
    return 0;
}

I think you want this:
std::discrete_distribution<> make_dist(int k, int N)
{
    std::vector<int> dist(N, 1);
    dist.at(k) = 0;
    return std::discrete_distribution<>(dist.begin(), dist.end());
}
replace your
std::discrete_distribution<> dist {{1, 1, 1, 0, 1, 1}}; // 3 given 0 weight(chance).
with
auto dist = make_dist(3,6);
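For the 2D schedule, here is a minimal sketch of how the same idea could be applied per row, rebuilding the distribution once for each team so that team k is excluded only while its own row is filled. The names numTeams, numRounds and schedule are illustrative assumptions, not part of the original code, and the remaining round-robin constraints (each opponent exactly twice, home and away) are not handled here.

#include <iostream>
#include <random>
#include <vector>

std::discrete_distribution<> make_dist(int k, int N)
{
    std::vector<int> weights(N, 1);
    weights.at(k) = 0; // team k cannot be drawn as its own opponent
    return std::discrete_distribution<>(weights.begin(), weights.end());
}

int main()
{
    std::mt19937 engine(std::random_device{}());
    const int numTeams = 6;   // illustrative
    const int numRounds = 10; // illustrative
    std::vector<std::vector<int>> schedule(numTeams, std::vector<int>(numRounds));
    for (int team = 0; team < numTeams; ++team)
    {
        auto dist = make_dist(team, numTeams); // excludes only this team's own index
        for (int round = 0; round < numRounds; ++round)
            schedule[team][round] = dist(engine); // opponent index; negate for away games as needed
    }
    for (int team = 0; team < numTeams; ++team)
    {
        for (int round = 0; round < numRounds; ++round)
            std::cout << schedule[team][round] << ' ';
        std::cout << '\n';
    }
}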

How to randomly select the min element from a vector of integers in C++?

I have a vector of integers as follows: vector<int> vec {3, 4, 2, 1, 1, 3, 1}. The code below always returns the minimum element 1 at index 3. How can I make it randomly choose the minimum value 1 from the three locations [3, 4, 6] when the same code is run multiple times?
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> vec {3, 4, 2, 1, 1, 3, 1};
    auto it = min_element(vec.begin(), vec.end());
    cout << *it << endl;
    cout << "It is at a distance of: " << distance(vec.begin(), it) << endl;
    return 0;
}
There are probably many ways to do this depending on your needs. Here's one:
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

int main() {
    // a seeded random number generator
    std::mt19937 prng(std::random_device{}());

    std::vector<int> vec {3, 4, 2, 1, 1, 3, 1};

    // get an iterator to the first minimum element:
    auto mit = std::min_element(vec.begin(), vec.end());

    // collect the indices of all elements equal to the minimum element:
    std::vector<std::size_t> ids;
    for(auto fit = mit; fit != vec.end(); fit = std::find(fit + 1, vec.end(), *mit))
    {
        ids.push_back(std::distance(vec.begin(), fit));
    }

    // a distribution to select one of the indices in `ids`:
    std::uniform_int_distribution<std::size_t> dist(0, ids.size() - 1);

    // print 10 randomly selected indices
    for(int i = 0; i < 10; ++i) {
        std::cout << ids[dist(prng)] << '\n';
    }
}
Here's a single-pass variation based on selection sampling (though it could probably be made nicer), essentially a case of reservoir sampling with a sample size of 1.
#include <iostream>
#include <random>
#include <vector>
#include <iterator>

template <typename T, typename URBG>
T rmin(T first, T last, URBG &g) {
    if (first == last) return first;
    T min = first;
    using ud = std::uniform_int_distribution<std::size_t>;
    using param_type = ud::param_type;
    ud d;
    std::size_t mincnt = 1;
    ++first;
    while (first != last) {
        if (*first < *min) {
            /* Found a new minimum. */
            min = first;
            mincnt = 1;
        } else if (*first == *min) {
            /* If equal to the minimum, select this one with probability
             * 1/(mincnt + 1): the second tie has a 1/2 chance to be selected,
             * the third a 1/3, and so on. */
            auto k = d(g, param_type{0, mincnt++});
            if (!k) {
                min = first;
            }
        }
        ++first;
    }
    return min;
}

int main() {
    // a seeded random number generator
    std::mt19937 prng(std::random_device{}());

    std::vector<int> vec{3, 4, 2, 1, 1, 3, 1};
    for (int i = 0; i < 10; i++) {
        auto it = rmin(vec.begin(), vec.end(), prng);
        std::cout << *it
                  << " is at a distance of: " << std::distance(vec.begin(), it)
                  << std::endl;
    }
}
This solution is random but likely does not give equal probability to all entries. However, it avoids creating new vectors and is still O(N).
It works by randomly splitting the sequence (in a logical sense) into two parts, taking the minimum of each, and then returning the smaller of the two.
As said, the choice is likely not uniformly distributed, but it is still random.
#include <vector>
#include <algorithm>
#include <iostream>
#include <random>

template< typename T >
T minimum( T begin, T end ) {
    std::size_t size = std::distance( begin, end );
    if ( size <= 1 ) return begin;

    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_int_distribution<size_t> ds(1, size - 1);
    auto sep = begin + ds(gen);

    auto it1 = std::min_element(begin, sep);
    auto it2 = std::min_element(sep, end);
    if ( *it1 < *it2 ) return it1;
    return it2;
}

int main() {
    std::vector<int> vec {3, 4, 2, 1, 1, 3, 1};
    for ( int j = 0; j < 10; ++j ) {
        auto it = minimum( vec.begin(), vec.end() );
        std::cout << *it << " is at a distance of: " << std::distance(vec.begin(), it) << std::endl;
    }
    return 0;
}
Produces
Program returned: 0
1 is at a distance of: 4
1 is at a distance of: 3
1 is at a distance of: 6
1 is at a distance of: 3
1 is at a distance of: 6
1 is at a distance of: 3
1 is at a distance of: 4
1 is at a distance of: 3
1 is at a distance of: 3
1 is at a distance of: 6
Godbolt: https://godbolt.org/z/3EhzdGndz

Time complexity of the travelling salesman problem (Recursive formulation)

According to this recursion formula for dynamic programming (the Held–Karp algorithm), the minimum cost can be found. I implemented it in C++ as follows (the neighbor vector represents the set S, and v is the cost matrix):
Recursion formula:
C(i, S) = min over j in S of { d(i, j) + C(j, S - {j}) }
My code:
#include <iostream>
#include <vector>
#define INF 99999
using namespace std;

vector<vector<int>> v{ { 0, 4, 1, 3 }, { 4, 0, 2, 1 }, { 1, 2, 0, 5 }, { 3, 1, 5, 0 } };

vector<int> erase(vector<int> v, int j)
{
    v.erase(v.begin() + j);
    vector<int> vv = v;
    return vv;
}

int TSP(vector<int> neighbor, int index)
{
    if (neighbor.size() == 0)
        return v[index][0];
    int min = INF;
    for (int j = 0; j < neighbor.size(); j++)
    {
        int cost = v[index][neighbor[j]] + TSP(erase(neighbor, j), neighbor[j]);
        if (cost < min)
            min = cost;
    }
    return min;
}

int main()
{
    vector<int> neighbor{ 1, 2, 3 };
    cout << TSP(neighbor, 0) << endl;
    return 0;
}
In fact, the erase function removes element j from the set (the neighbor vector).
I know that dynamic programming prevents duplicate calculations (as with the Fibonacci function), but this code does not seem to have duplicate calculations: if we draw the call tree of this function, the arguments (i.e. S and i in the formula, as in the picture below) are never the same, so there appears to be no duplicate calculation.
My question is: is this time complexity O(n!)?
(picture of the recursion tree omitted)
If yes, why? This function follows the formula exactly and does exactly the same thing. Where is the problem? Is it doing duplicate calculations?
Your algorithm's time complexity is O(n!). Your code is essentially guessing the next node of the path, and there are exactly n! different paths. It also computes the same value several times: for example, if you run TSP({1, 2, 3, 4}, 0) and it tries the orders {1, 2, 3} and {2, 1, 3}, the code will evaluate TSP({4}, 3) twice. To get rid of this, store the already-calculated answers keyed by the remaining set (as a bitmask) and the start node.
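A minimal sketch of that memoization (my own illustration, not part of the original answer), reusing the cost matrix v from the question and keying the cache on the bitmask of still-unvisited cities plus the current city:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

const int INF = 99999;
vector<vector<int>> v{ { 0, 4, 1, 3 }, { 4, 0, 2, 1 }, { 1, 2, 0, 5 }, { 3, 1, 5, 0 } };

// memo[mask][index]: cheapest way to visit every city in `mask` starting from
// `index` and then return to city 0; -1 means "not computed yet".
vector<vector<int>> memo;

int TSP(int mask, int index) // mask = set of cities still to visit
{
    if (mask == 0)
        return v[index][0]; // nothing left: close the tour
    int &res = memo[mask][index];
    if (res != -1)
        return res; // reuse an already-computed subproblem
    res = INF;
    for (int j = 0; j < (int)v.size(); j++)
        if (mask & (1 << j))
            res = min(res, v[index][j] + TSP(mask & ~(1 << j), j));
    return res;
}

int main()
{
    int n = v.size();
    memo.assign(1 << n, vector<int>(n, -1));
    int start_mask = (1 << n) - 2; // cities 1..n-1 unvisited, starting from city 0
    cout << TSP(start_mask, 0) << endl; // same optimum as before, now in O(2^n * n^2)
    return 0;
}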

Divide elements of a sorted array into least number of groups such that difference between the elements of the new array is less than or equal to 1

How can I divide the elements of an array into a minimum number of arrays such that, within each resulting array, consecutive elements differ by at most 1?
Let's say that we have an array: [4, 6, 8, 9, 10, 11, 14, 16, 17].
The array elements are sorted.
I want to divide the elements of the array into a minimum number of arrays such that consecutive elements within each resulting array do not differ by more than 1.
In this case, the groupings would be: [4], [6], [8, 9, 10, 11], [14], [16, 17]. So there would be a total of 5 groups.
How can I write a program for this? Suggestions for algorithms are welcome as well.
I tried the naive approach:
Obtain the difference between consecutive elements of the array and, if the difference is less than or equal to 1, add those elements to a new vector. However, this method is unoptimized and fails to produce results for large inputs.
Actual code implementation:
#include <cstdio>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    int num = 0, buff = 0, min_groups = 1; // min_groups starts at 1 to account for the group containing the starting array element(s)
    cout << "Enter the number of elements in the array: " << endl;
    cin >> num;
    vector<int> ungrouped;
    cout << "Please enter the elements of the array: " << endl;
    for (int i = 0; i < num; i++)
    {
        cin >> buff;
        ungrouped.push_back(buff);
    }
    for (int i = 1; i < ungrouped.size(); i++)
    {
        if ((ungrouped[i] - ungrouped[i - 1]) > 1)
        {
            min_groups++;
        }
    }
    cout << "The elements of the entered vector can be split into " << min_groups << " groups." << endl;
    return 0;
}
Inspired by Faruk's answer, if the values are constrained to be distinct integers, there is a possibly sublinear method.
Indeed, if the difference between two values equals the difference between their indexes, they are guaranteed to belong to the same group and there is no need to look at the intermediate values.
You have to organize a recursive traversal of the array, in preorder. Before subdividing a subarray, compare the difference of the indexes of its first and last elements to the difference of their values, and only subdivide in case of a mismatch. Working in preorder lets you emit the pieces of the groups in consecutive order, as well as detect the gaps. Some care has to be taken to merge the pieces of the groups.
The worst case remains linear, because the recursive traversal can degenerate to a linear traversal (but not worse than that). The best case can be better. In particular, if the array holds a single group, it is found in O(1) time. If I am right, for every group of length between 2^n and 2^(n+1), you spare at least 2^(n-1) tests. (In fact, it should be possible to state an output-sensitive complexity, roughly the array length minus a fraction of the lengths of all groups, or similar.)
Alternatively, you can work in a non-recursive way by means of exponential search: from the beginning of a group, start with a unit step and double the step every time until you detect a gap (a difference in values that is too large); then restart with a unit step. Here again, for large groups you skip a significant number of elements. In any case, the best case can only be O(log N). A sketch of this idea follows.
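A minimal sketch of this exponential-search idea, assuming distinct sorted integers as stated above. The binary search used to pin down the exact group boundary is my own addition (the answer only outlines the doubling), and the names group_starts and arr are illustrative:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// For distinct sorted integers, positions i < j belong to the same group
// exactly when arr[j] - arr[i] == j - i.
std::vector<std::size_t> group_starts(const std::vector<int>& arr)
{
    std::vector<std::size_t> starts;
    const std::size_t n = arr.size();
    std::size_t s = 0;
    while (s < n) {
        starts.push_back(s);
        // Double the step while the whole jump stays inside the current group.
        std::size_t step = 1;
        while (s + step < n && arr[s + step] - arr[s] == (int)step)
            step *= 2;
        // The boundary lies in (s + step/2, s + step]; binary-search for it.
        std::size_t lo = s + step / 2;              // known to be inside the group
        std::size_t hi = std::min(s + step, n - 1); // may be outside
        while (lo < hi) {
            std::size_t mid = (lo + hi + 1) / 2;
            if (arr[mid] - arr[s] == (int)(mid - s)) lo = mid;
            else hi = mid - 1;
        }
        s = lo + 1; // next group starts right after the boundary
    }
    return starts;
}

int main()
{
    std::vector<int> arr{4, 6, 8, 9, 10, 11, 14, 16, 17};
    for (std::size_t s : group_starts(arr))
        std::cout << s << ' '; // prints the start index of each group: 0 1 2 6 7
    std::cout << '\n';
}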
I would suggest encoding subsets into an offset array defined as follows:
Elements for set #i are defined for indices j such that offset[i] <= j < offset[i+1]
The number of subsets is offset.size() - 1
This only requires one memory allocation.
Here is a complete implementation:
#include <cassert>
#include <iostream>
#include <vector>

std::vector<std::size_t> split(const std::vector<int>& to_split, const int max_dist = 1)
{
    const std::size_t to_split_size = to_split.size();
    std::vector<std::size_t> offset(to_split_size + 1);
    offset[0] = 0;
    std::size_t offset_idx = 1;
    for (std::size_t i = 1; i < to_split_size; i++)
    {
        const int dist = to_split[i] - to_split[i - 1];
        assert(dist >= 0); // we assumed sorted input
        if (dist > max_dist)
        {
            offset[offset_idx] = i;
            ++offset_idx;
        }
    }
    offset[offset_idx] = to_split_size;
    offset.resize(offset_idx + 1);
    return offset;
}

void print_partition(const std::vector<int>& to_split, const std::vector<std::size_t>& offset)
{
    const std::size_t offset_size = offset.size();
    std::cout << "\nwe found " << offset_size - 1 << " sets";
    for (std::size_t i = 0; i + 1 < offset_size; i++)
    {
        std::cout << "\n";
        for (std::size_t j = offset[i]; j < offset[i + 1]; j++)
        {
            std::cout << to_split[j] << " ";
        }
    }
}

int main()
{
    std::vector<int> to_split{4, 6, 8, 9, 10, 11, 14, 16, 17};
    std::vector<std::size_t> offset = split(to_split);
    print_partition(to_split, offset);
}
which prints:
we found 5 sets
4
6
8 9 10 11
14
16 17
Iterate through the array. Whenever the difference between two consecutive elements is greater than 1, add 1 to your answer variable.
int getPartitionNumber(const int arr[], int n) { // n is the size of the array
    int result = 1;
    for (int i = 1; i < n; i++) {
        if (arr[i] - arr[i - 1] > 1) result++;
    }
    return result;
}
And because it is always nice to see more ideas and select the one that suits you best, here is a straightforward six-line solution. Yes, it is also O(n), but I am not sure whether the overhead of the other methods makes them faster in practice.
Please see:
#include <iostream>
#include <string>
#include <algorithm>
#include <vector>
#include <iterator>

using Data = std::vector<int>;
using Partition = std::vector<Data>;

Data testData{ 4, 6, 8, 9, 10, 11, 14, 16, 17 };

int main(void)
{
    // This is the resulting vector of vectors with the partitions
    std::vector<std::vector<int>> partition{};

    // Iterating over source values
    for (Data::iterator i = testData.begin(); i != testData.end(); ++i) {
        // Check if we need to add a new partition:
        // either at the beginning, or if diff > 1.
        // No underflow, because of boolean short-circuit evaluation.
        if ((i == testData.begin()) || ((*i) - (*(i - 1)) > 1)) {
            // Create a new partition
            partition.emplace_back(Data());
        }
        // And store the value in the current partition
        partition.back().push_back(*i);
    }

    // Debug output: copy all data to std::cout
    std::for_each(partition.begin(), partition.end(), [](const Data& d) { std::copy(d.begin(), d.end(), std::ostream_iterator<int>(std::cout, " ")); std::cout << '\n'; });

    return 0;
}
Maybe this could be a solution . . .
Why do you say your approach is not optimized? If your approach is correct, it already takes O(n) time.
But you can use binary search here, which can help in the average case, although in the worst case it can take more than O(n) time.
Here's a tip:
As the array is sorted, you pick the furthest position whose difference (from the start of the current partition) is at most 1.
Binary search can do this in a simple way.
int arr[] = {4, 6, 8, 9, 10, 11, 14, 16, 17};
int n = sizeof(arr) / sizeof(arr[0]); // n = size of the array
int st = 0, ed = n - 1;
int partitions = 0;
while (st <= ed) {
    int low = st, high = n - 1;
    int pos = low;
    while (low <= high) {
        int mid = (low + high) / 2;
        if ((arr[mid] - arr[st]) <= 1) {
            pos = mid;
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    partitions++;
    st = pos + 1;
}
cout << partitions << endl;
In the average case it is better than O(n). But in the worst case (where the answer equals n) it takes O(n log n) time.

C++, certain combos never appear using rand()

I'm using rand() for two ints, each between 0 and 2. It appears that the first int is never 0 when the second int is 2. Here is my test code:
#include <cstdio>  // printf
#include <cstdlib> // srand, rand
#include <iostream>
#include <time.h>

int main()
{
    srand(time(NULL));
    int number1, number2;
    number1 = rand() % 3;
    number2 = rand() % 3;
    printf("%i, %i", number1, number2);
    return 0;
}
Output (25 tries):
2, 2
2, 2
2, 1
1, 2
0, 1
2, 2
1, 2
1, 0
2, 1
1, 0
0, 0
1, 2
2, 2
0, 0
2, 1
1, 0
2, 2
1, 0
2, 1
1, 0
0, 1
1, 2
1, 0
0, 0
2, 2
As you can see, out of 25 tries, the combo 0, 2 never appeared. Is this a sign that I should move over to <random>? In addition, 2, 0 never appears either.
No. The expected number of missing pairs after 25 tries is 9 * (1 - 1/9)^25 ≈ 0.4736, so roughly half the times you run your program, some two-digit pair with digits in {0, 1, 2} will be missing from your first 25 results.
Run it again and see what happens.
You should definitely use <random>. The sooner you forget about rand's existence, the happier you will be.
rand is sometimes implemented with a linear congruential generator (LCG). LCGs suffer from a number of defects, the most relevant being that the low-order bits of sequentially generated numbers are highly correlated. For this reason, you should never use rand() % k to generate numbers in the range [0, k). There are other reasons too. In fact, generating unbiased random integers from a restricted range involves some subtleties, which <random> handles for you.
srand(time(NULL)) seeds the random number generator with the current time in seconds since the epoch, which means that if you run the program several times in succession, the seeds will be the same or similar. If the seeds are the same, the random number sequences will also be the same; if the seeds are similar, the sequences may also be similar. So don't do this except in long-running programs. Finding a good seed for a pseudo-random number generator can be tricky. <random> implementations have better default behaviour, so you won't normally need to worry about this.
Taking % 3 does not depend only on the low-order bits.
I ran the program below using VC++, simulating running the OP's program ten million times with one second between invocations. It shows no bias.
start = 1413167398
(0, 0) 1110545
(0, 1) 1111285
(0, 2) 1111611
(1, 0) 1111317
(1, 1) 1111666
(1, 2) 1110451
(2, 0) 1111580
(2, 1) 1110491
(2, 2) 1111054
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <map>
#include <utility>

int main()
{
    std::map<std::pair<int, int>, int> counter;
    unsigned int start = static_cast<unsigned int>(std::time(nullptr));
    std::cout << "start = " << start << std::endl;
    unsigned int finish = start + 10000000;
    for (unsigned int seed = start; seed != finish; ++seed)
    {
        std::srand(seed);
        int x = rand() % 3;
        int y = rand() % 3;
        ++counter[std::make_pair(x, y)];
    }
    for (auto iter = counter.cbegin(); iter != counter.cend(); ++iter)
    {
        std::cout << "(" << iter->first.first << ", " << iter->first.second << ") ";
        std::cout << iter->second << std::endl;
    }
    return 0;
}
It is. This code gave me the 0, 2 pair on the very first run:
int number1, number2;
for (int i = 0; i < 20; ++i) {
    number1 = rand() % 3;
    number2 = rand() % 3;
    printf("%i, %i\n", number1, number2);
}
Generating truly random numbers from a uniform distribution doesn't guarantee that a given (possible) value will appear within a limited number of trials. The K2 criterion, one of the four BSI criteria for a good PRNG, is:
K2 — A sequence of numbers which is indistinguishable from 'true random' numbers according to specified statistical tests.
Thus a pseudo-random generator of course tends to behave the same way as sampling from a truly random distribution, although, because of its limitations, every (possible) value will appear at some point (within a span no longer than its period).
http://ideone.com/c5oRQL
Use std::uniform_int_distribution
Apart from the above, rand() is not the best generator. It also introduces bias whenever the divisor of the modulo operation doesn't evenly divide the generator's range: the distribution produced by operator % is then skewed, because RAND_MAX, the maximum value rand() can return, need not be equal to k * 3 + 2, and the bias increases with the divisor. You can read more on this here. Summing it up: in C++ you should use the <random> library (a small numeric illustration of the skew follows after this example):
#include <iostream>
#include <random>

int main()
{
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_int_distribution<> dis(0, 2);
    for (int n = 0; n < 25; ++n)
        std::cout << dis(gen) << ' ';
    std::cout << '\n';
}
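To put a number on the skew described above, here is a tiny self-contained count over all outputs of a hypothetical generator with RAND_MAX = 32767 (an assumed value for illustration, not necessarily your platform's):

#include <iostream>

int main()
{
    const long kHypotheticalRandMax = 32767; // assumed value, purely for illustration
    long counts[3] = {0, 0, 0};
    for (long r = 0; r <= kHypotheticalRandMax; ++r)
        ++counts[r % 3];
    // 32768 values split as 10923, 10923, 10922: residue 2 is drawn slightly
    // less often than 0 or 1 even with a perfect underlying generator.
    std::cout << counts[0] << ' ' << counts[1] << ' ' << counts[2] << '\n';
}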

Given a list of primes and a factorization pattern, how to construct all numbers whose prime factorization matches the given pattern?

Though I've tried to summarize the question in the title, I think it will be clearer if I start off with an instance of the problem:
List of Primes = {2 3 5 7 11 13}
Factorization pattern = {1 1 2 1}
For the above input, the program should be generating the following list of numbers:
2.3.5^2.7
2.3.5^2.11
2.3.5^2.13
2.3.7^2.11
2.3.7^2.13
2.3.11^2.13
2.5.7^2.11
2.5.7^2.13
2.7.11^2.13
3.5.7^2.11
3.5.7^2.13
3.5.11^2.13
3.7.11^2.13
5.7.11^2.13
So far, I understand that since the length of the pattern is arbitrary (as is the list of primes), I need to use a recursive function to get all the combinations. What I'm really stuck on is how to formulate the function's arguments, when to make the recursive call, etc. This is what I've developed so far:
#include <iostream>
#include <algorithm>
#include <vector>
#include <cmath>
using namespace std;

static const int factors[] = {2, 3, 5, 7, 11, 13};
vector<int> vFactors(factors, factors + sizeof(factors) / sizeof(factors[0]));

static const int powers[] = {1, 1, 2, 1};
vector<int> vPowers(powers, powers + sizeof(powers) / sizeof(powers[0]));

// currPIdx [in] Denotes the index of the Power array from which to start generating numbers
// currFIdx [in] Denotes the index of the Factor array from which to start generating numbers
vector<int> getNumList(vector<int>& vPowers, vector<int>& vFactors, int currPIdx, int currFIdx)
{
    vector<int> vResult;
    if (currPIdx != vPowers.size() - 1)
    {
        for (int i = currPIdx + 1; i < vPowers.size(); ++i)
        {
            vector<int> vTempResult = getNumList(vPowers, vFactors, i, currFIdx + i);
            vResult.insert(vResult.end(), vTempResult.begin(), vTempResult.end());
        }
        int multFactor = pow((float) vFactors[currFIdx], vPowers[currPIdx]);
        for (int i = 0; i < vResult.size(); ++i)
            vResult[i] *= multFactor;
    }
    else
    {   // Terminating the recursive call
        for (int i = currFIdx; i < vFactors.size(); ++i)
        {
            int element = pow((float) vFactors[i], vPowers[currPIdx]);
            vResult.push_back(element);
        }
    }
    return vResult;
}

int main()
{
    vector<int> vNumList = getNumList(vPowers, vFactors, 0, 0);
    cout << "List of numbers: " << endl;
    for (int i = 0; i < vNumList.size(); ++i)
        cout << vNumList[i] << endl;
}
When I run the above, I get an incorrect list:
List of numbers:
66
78
650
14
22
26
I've somehow run into a mental block, as I can't seem to figure out how to appropriately change the last parameter in the recursive call (which I believe is the reason my program isn't working)!
It would be really great if anyone could tweak my code with the missing logic (or even just point me to it - I'm not looking for a complete solution!). I would be really grateful if you could restrict your answer to standard C++!
(In case someone notices that I'm missing out the permutations of the given pattern, which would lead to other numbers such as 2.3.5.7^2 - don't worry, I intend to repeat this algorithm on all possible permutations of the given pattern by using std::next_permutation!)
PS: Not a homework/interview problem, just a part of an algorithm for a very interesting Project Euler problem (I think you can even guess which one :)).
EDIT: I've solved the problem on my own - which I've posted as an answer. If you like it, do upvote it (I can't accept it as the answer till it gets more votes than the other answer!)...
Forget about factorization for a moment. The problem you want to solve is: given two lists P and F, find all possible pairings (p, f) for p in P and f in F. This means you'll have |P| * (|P| - 1) * ... * (|P| - |F| + 1) possible pairings (assigning one element from P to the first element of F leaves |P| - 1 possibilities for the second element, etc.). You might want to separate that part of the problem in your code. If you recurse that way, the last step is choosing a remaining element from P for the last element of F. Does that help? I must admit I don't understand your code well enough to provide an answer tailored to your current state, but that's how I'd approach it in general.
Well, I figured out this one on my own! Here's the code for it (which I hope is self-explanatory, but I can clarify in case anyone needs more details):
#include <iostream>
#include <algorithm>
#include <vector>
#include <cmath>
using namespace std;

static const int factors[] = {2, 3, 5, 7, 11, 13};
vector<int> vFactors(factors, factors + sizeof(factors) / sizeof(factors[0]));

static const int powers[] = {1, 1, 2, 1};
vector<int> vPowers(powers, powers + sizeof(powers) / sizeof(powers[0]));

// idx     - The index from which the rest of the factors are to be considered.
//           0 <= idx < Factors.size() - Powers.size()
// lvl     - The level of the depth-first tree.
//           0 <= lvl < Powers.size()
// lvlProd - The product accumulated up to the previous level for that index.
void generateNumList
(
    vector<int>& vPowers,
    vector<int>& vFactors,
    vector<int>& vNumList,
    int idx,
    int lvl,
    long lvlProd
)
{
    // Terminating case
    if (lvl == vPowers.size() - 1)
    {
        long prod = pow((float) vFactors[idx], vPowers[lvl]) * lvlProd;
        vNumList.push_back(prod);
    }
    else
    {
        // Recursive case
        long tempLvlProd = lvlProd * pow((float) vFactors[idx], vPowers[lvl]);
        for (int i = idx + 1; i < vFactors.size(); ++i)
            generateNumList(vPowers, vFactors, vNumList, i, lvl + 1, tempLvlProd);
    }
}

vector<int> getNumList(vector<int>& vPowers, vector<int>& vFactors)
{
    vector<int> vNumList;
    for (int i = 0; i < vFactors.size(); ++i)
        generateNumList(vPowers, vFactors, vNumList, i, 0, 1);
    return vNumList;
}

int main()
{
    vector<int> vNumList = getNumList(vPowers, vFactors);
    cout << endl << "List of numbers (" << vNumList.size() << ") : " << endl;
    for (int i = 0; i < vNumList.size(); ++i)
        cout << vNumList[i] << endl;
}
The output of the above code (I had to work quite a while to get rid of duplicate entries algorithmically!):
List of numbers (15) :
1050
1650
1950
3234
3822
9438
5390
6370
15730
22022
8085
9555
23595
33033
55055
real 0m0.002s
user 0m0.001s
sys 0m0.001s
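As mentioned in the question, the remaining exponent arrangements (such as 2.3.5.7^2) can be covered by permuting the pattern. Here is a minimal sketch of that step, my own addition rather than part of the answer: it reuses the definitions from the listing above (vPowers, vFactors, getNumList) and simply replaces its main().

#include <algorithm>
// ... vPowers, vFactors and getNumList as defined in the listing above ...

int main()
{
    std::sort(vPowers.begin(), vPowers.end()); // start from the lowest permutation
    std::vector<int> all;
    do {
        std::vector<int> part = getNumList(vPowers, vFactors);
        all.insert(all.end(), part.begin(), part.end());
    } while (std::next_permutation(vPowers.begin(), vPowers.end()));
    // Belt and braces: sort and drop any duplicates across patterns.
    std::sort(all.begin(), all.end());
    all.erase(std::unique(all.begin(), all.end()), all.end());
    std::cout << "Total numbers over all patterns: " << all.size() << std::endl;
    return 0;
}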