The minimum tour cost can be found with the following recursion formula (the Held–Karp dynamic-programming algorithm). I wrote this code in C++ to implement it (the neighbor vector plays the role of the set S, and v is the cost matrix):
Recursion formula:
C(i, S) = min over j in S of { d(i, j) + C(j, S - {j}) }
My code:
#include <iostream>
#include <vector>
#define INF 99999
using namespace std;

vector<vector<int>> v{ { 0, 4, 1, 3 }, { 4, 0, 2, 1 }, { 1, 2, 0, 5 }, { 3, 1, 5, 0 } };

vector<int> erase(vector<int> v, int j)
{
    v.erase(v.begin() + j);
    return v;
}

int TSP(vector<int> neighbor, int index)
{
    if (neighbor.size() == 0)
        return v[index][0];
    int min = INF;
    for (int j = 0; j < neighbor.size(); j++)
    {
        int cost = v[index][neighbor[j]] + TSP(erase(neighbor, j), neighbor[j]);
        if (cost < min)
            min = cost;
    }
    return min;
}

int main()
{
    vector<int> neighbor{ 1, 2, 3 };
    cout << TSP(neighbor, 0) << endl;
    return 0;
}
In fact, the erase function removes element j from the set (the neighbor vector).
I know that dynamic programming prevents duplicate calculations (as with the Fibonacci function), but I believe this function has none: if we draw its call tree, the arguments (S and i in the formula, as in the picture below) are never the same, so there is no duplicate calculation.
My question is: is the running time O(n!)?
picture :
If yes, why? This function matches the formula exactly and does exactly the same thing. Where is the problem? Is it doing duplicate calculations after all?
Your algorithm's time complexity is O(n!). It's easy to see that your code is guessing the next node of the path, and there are exactly n! different paths. Your code also computes the same value several times: if you run TSP({1, 2, 3, 4}, 0) and it tries the orders {1, 2, 3} and {2, 1, 3}, it is clear that the code will run TSP({4}, 3) twice. To get rid of this, store the already-computed answers keyed by the set (as a bitmask) and the start node.
Problem:
Array has positive, negative, and 0 integers, and values are not necessarily unique.
The positive values are to the right of zeroes, and the negative values are to the left of zeroes. The positive and the negative values do not have to be sorted. However, the zeros must be in between the positive and negative integers.
The algorithm must run in linear time.
Some acceptable answers I can think of are the following:
[-3, -1, -9, 0, 5, 2, 9]
[-5, -2, 0, 0, 5, 2, 9]
[-3, 0, 0, 0, 4, 1]
My thought process and attempts:
The process is very similar to the partitioning step of the quicksort algorithm, where we can choose 0 as our pivot. Ideally this shouldn't be an issue, because we could simply search for the index of a 0 and use that index as our pivot at the end. However, from what I understand, finding that index cheaply is the problem: binary search runs in O(log n) but requires sorted input, so it cannot be used in this case.
One extreme solution I can think of that runs in linear time is the linear-time selection algorithm, where we split the array into subgroups of 5 and find the median of the medians of the subgroups. However, I doubt this will work, since the median isn't always going to be 0, so the pivot isn't guaranteed to be 0.
What's really throwing me off is that the negative and positive integers do not have to be sorted but must still be partitioned by the zeroes.
What I have so far:
I was able to modify the partitioning step of the quicksort algorithm. The following runs in linear time, but the problem is that it does not place the zeroes in the appropriate positions.
Here is my current code, followed by the output I get with it:
template <typename T>
void partition(T a[], int lo, int n) {
    int i = lo, j;
    for (j = i; j < n; j++) {
        if (a[j] < 0) {
            std::swap(a[i], a[j]);
            i++;
        }
    }
}
Using the array [9, -3, -6, 1, 3, 4, -22, 0],
the output of the code is [-3 -6 -22 1 3 4 9 0].
Thank you very much in advance for any feedback that you may have.
A simple linear algorithm uses two passes: the first pushes all negative numbers to the left, and the second then pushes the zeroes up right behind them, leaving the positives on the right.
template <typename T>
void rearrange(T a[], int n) {
    int next_to_place = 0;
    // First pass: move all negatives to the front.
    for (int i = 0; i < n; i++) {
        if (a[i] < 0) {
            std::swap(a[i], a[next_to_place++]);
        }
    }
    // Second pass: move the zeroes up right behind the negatives.
    for (int i = next_to_place; i < n; i++) {
        if (a[i] == 0) {
            std::swap(a[i], a[next_to_place++]);
        }
    }
}
This is suboptimal, as you can probably solve it with a single pass (which would be more cache friendly), but it is linear and simple to understand.
You can achieve this by maintaining a count of the zeroes and storing the positive and negative numbers in separate vectors.
Please see the following implementation (self-explanatory):
#include <iostream>
#include <vector>

int main()
{
    int arr[7] = {-3, -1, -9, 0, 5, 2, 9};
    int n = sizeof(arr) / sizeof(int);
    int count_zeros = 0;
    std::vector<int> negative_numbers;
    std::vector<int> positive_numbers;
    for (int i = 0; i < n; i++) {
        if (arr[i] == 0) {
            count_zeros++;
        }
        else if (arr[i] > 0) {
            positive_numbers.push_back(arr[i]);
        }
        else {
            negative_numbers.push_back(arr[i]);
        }
    }
    // Print all negative numbers:
    for (std::size_t i = 0; i < negative_numbers.size(); i++) {
        std::cout << negative_numbers[i] << " ";
    }
    // Print all zeroes:
    for (int i = 0; i < count_zeros; i++) {
        std::cout << 0 << " ";
    }
    // Print all positive numbers:
    for (std::size_t i = 0; i < positive_numbers.size(); i++) {
        std::cout << positive_numbers[i] << " ";
    }
}
Problem statement: Given an integer N, determine the number of arrays of length N that satisfy the following conditions:
Each value of the array is between 0 and 5, inclusive.
The XOR of all values of the array is zero.
Constraints:
1 <= T <= 5
1 <= N <= 10^5
Example answers are 1, 6, and 28 for N equal to 1, 2, and 3, respectively.
I know that this is a dynamic programming problem and that it requires a DP state of [index][XOR], where index is the number of elements already set, XOR is the total XOR of all these elements, and dp[index][XOR] stores the number of arrays fulfilling this condition; the final answer will be found at dp[N][0].
The problem is that I am not able to write a recursive solution, by which I mean a piecewise formula giving a generalized solution for any value of N. Writing the recursive formula would be sufficient; I would then be able to write its DP version. Can you help with that? Here is my current code:
#include <bits/stdc++.h>
using namespace std;

int main(){
    int n;
    cin >> n;
    int dp[n+1][8];               // values are 0..5, so every XOR fits in 0..7
    memset(dp, 0, sizeof(dp));
    dp[0][0] = 1;                 // zero elements placed, XOR 0: one (empty) array
    for(int i = 1; i <= n; i++){
        for(int j = 0; j < 8; j++) {
            // dp[i][j] = ?
            // Having trouble transmitting values to higher values of i and j.
            // How to update the value of the dp array?
        }
    }
    cout << dp[n][0];
    return 0;
}
You don't necessarily need duplicates in the array to make the XOR zero, for example A = [1, 2, 3]. This seems to be a simple exercise in dynamic programming, where you have a dp state of [index][XOR], where index is the number of elements already set, XOR is the total XOR of all these elements, and at dp[index][XOR] you store the number of arrays fulfilling this condition; the final answer will be found at dp[N][0].
With dynamic programming, you can compute the number of ways to reach any XOR value.
XOR-ing numbers from 0 to 5 (0b101), the maximum reachable value is 7 (0b111).
#include <array>

// Compute the number of ways to reach `target` with one extra number 0-5,
// knowing the number of ways to obtain each value 0-7 so far
std::size_t sum(const std::array<std::size_t, 8>& last, std::size_t target)
{
    std::size_t res = 0;
    for (int i = 0; i <= 5; ++i) {
        res += last[i ^ target]; // (i ^ target) ^ i == target
    }
    return res;
}
The initial values for 0 numbers are {1, 0, 0, 0, 0, 0, 0, 0} (the degenerate case).
The initial values for 1 number are {1, 1, 1, 1, 1, 1, 0, 0}, which can be obtained from the above.
And so:
constexpr std::size_t n = 10; // the length we compute
// XOR of numbers <= 5 (0b101) can go up to 7 (0b111)
std::array<std::size_t, 8> dp{{1, 0, 0, 0, 0, 0, 0, 0}};
// dp[i] represents the number of ways to reach a total XOR of i.
// We might use a vector to keep the history,
// but for the computation only the last result is needed.
for (std::size_t i = 0; i != n; ++i) {
    dp = {
        sum(dp, 0),
        sum(dp, 1),
        sum(dp, 2),
        sum(dp, 3),
        sum(dp, 4),
        sum(dp, 5),
        sum(dp, 6),
        sum(dp, 7)};
}
std::cout << dp[0] << std::endl;
I finally solved the DP construction using the elementary method.
I created a little program that can calculate the determinant of a matrix in C++. I used Laplace expansion, although I know there are more efficient ways to do it:
double getDeterminantLaplace(const std::vector<std::vector<double>>& vect) {
    int dimension = vect.size();
    if (dimension == 0) {
        return 1;
    }
    if (dimension == 1) {
        return vect[0][0];
    }
    // Formula for a 2x2 matrix
    if (dimension == 2) {
        return vect[0][0] * vect[1][1] - vect[0][1] * vect[1][0];
    }
    double result = 0;
    int sign = 1;
    for (int i = 0; i < dimension; i++) {
        // Build the submatrix (first row and column i removed)
        std::vector<std::vector<double>> subVect(dimension - 1, std::vector<double>(dimension - 1));
        for (int m = 1; m < dimension; m++) {
            int z = 0;
            for (int n = 0; n < dimension; n++) {
                if (n != i) {
                    subVect[m - 1][z] = vect[m][n];
                    z++;
                }
            }
        }
        // Recursive call
        result = result + sign * vect[0][i] * getDeterminantLaplace(subVect);
        sign = -sign;
    }
    return result;
}
My question now is: How can this algorithm be made more efficient?
One of my ideas is to not create the "submatrices" and just work with the original matrix, but I don't really know how to do it. What do you think about this idea? How can I do this in C++?
Do you have any more ideas?
A first, trivial optimization is not to recurse when the current element is zero. This will give you an instant speed-up on sparse matrices.
The next optimization is what you already suggested: do not create all the submatrices. You can do that by creating an index vector. For example, if your original matrix has 4×4 elements, you recurse with the following index vectors:
0: {1, 2, 3}
1: {0, 2, 3}
2: {0, 1, 3}
3: {0, 1, 2}
You don't need to create the index vector from scratch each time: start with the subvector that is the current vector without its front element, then overwrite the i-th place with the i-th entry of the current vector.
When you access element s[r][c] of the submatrix, access element a[r + top][col[c]] of the original matrix instead. You can determine the index of the top row from the sizes of the current column-index vector and the original matrix.
This way you never create submatrices, only vectors of column indices. Split your function in two: a public front-end function, which calls the recursive worker function.
This will speed up the calculation somewhat, but unfortunately, this improvement will not buy you much when your matrices grow. Let's look at the 4×4 matrix again. In the first recursion step, you will consider these 3×3 submatrices:
{1, 2, 3}  {0, 2, 3}  {0, 1, 3}  {0, 1, 2}
From there, you will calculate these 2×2 submatrices:
{2, 3}  {2, 3}  {1, 3}  {1, 2}
{1, 3}  {0, 3}  {0, 3}  {0, 2}
{1, 2}  {0, 2}  {0, 1}  {0, 1}
Notice that these 12 index vectors are really just 6 different pairs; you'll calculate each of them twice, and this gets worse the bigger your original matrix is. A solution to this is memoization: once you have calculated the determinant of a certain submatrix, store the value in an associative array. Before calculating a submatrix, check whether you have already done so, and if yes, just return the value you calculated earlier.
This will speed up your function, but it comes at a price: it will create many entries in the associative array.
Anyway, here's the code that implements all optimizations I've described:
#include <vector>
#include <map>
#include <iostream>

double subdet(const std::vector<std::vector<double> > &a,
              const std::vector<int> &col,
              std::map<std::vector<int>, double> &memo)
{
    int dim = col.size();
    int top = a.size() - dim;
    if (memo.find(col) != memo.end()) {
        return memo[col];
    }
    if (dim == 2) return a[top + 0][col[0]] * a[top + 1][col[1]]
                       - a[top + 0][col[1]] * a[top + 1][col[0]];
    double result = 0.0;
    int sign = 1;
    std::vector<int> ncol(col.begin() + 1, col.end());
    for (int i = 0; i < dim; i++) {
        if (a[top][col[i]]) {
            double d = subdet(a, ncol, memo);
            result = result + sign * a[top][col[i]] * d;
        }
        sign = -sign;
        if (i + 1 < dim) ncol[i] = col[i];
    }
    memo[col] = result;
    return result;
}

double det(const std::vector<std::vector<double> > &a)
{
    int dim = a.size();
    if (dim == 0) return 1.0;
    if (dim == 1) return a[0][0];
    std::vector<int> col(dim);
    std::map<std::vector<int>, double> memo;
    for (unsigned i = 0; i < a.size(); i++) col[i] = i;
    return subdet(a, col, memo);
}
Notes: The map (a binary tree with O(log n) lookup) should really be an unordered map (a hash table with O(1) lookup), but I couldn't get it to work, because I'm bad at C++. Sorry about that.
There's probably room for optimization of the lookup key, too: one could enumerate the possible index vectors or use a bit mask, thereby saving memory in the hash map. It's no good storing references to the column-index vector, because it is short-lived and we're swapping around in it a lot, so it's not constant.
Of course, other algorithms are better suited for finding the determinant of large matrices. My answer focuses on improving the existing method.
How do I divide the elements of an array into a minimum number of arrays such that the values within each formed array differ by no more than 1?
Let's say that we have an array: [4, 6, 8, 9, 10, 11, 14, 16, 17].
The array elements are sorted.
I want to divide the elements of the array into a minimum number of array(s) such that each of the elements in the resulting arrays do not differ by more than 1.
In this case, the groupings would be: [4], [6], [8, 9, 10, 11], [14], [16, 17]. So there would be a total of 5 groups.
How can I write a program for the same? Or you can suggest algorithms as well.
I tried the naive approach:
I obtain the difference between consecutive elements of the array and, if the difference is at most 1, add those elements to a new vector. However, this method is very unoptimized and simply fails to produce results for a large number of inputs.
Actual code implementation:
#include <cstdio>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    int num = 0, buff = 0, min_groups = 1; // min_groups starts at 1 to account for the group containing the first element(s)
    cout << "Enter the number of elements in the array: " << endl;
    cin >> num;
    vector<int> ungrouped;
    cout << "Please enter the elements of the array: " << endl;
    for (int i = 0; i < num; i++)
    {
        cin >> buff;
        ungrouped.push_back(buff);
    }
    for (int i = 1; i < ungrouped.size(); i++)
    {
        if ((ungrouped[i] - ungrouped[i - 1]) > 1)
        {
            min_groups++;
        }
    }
    cout << "The elements of the entered vector can be split into " << min_groups << " groups." << endl;
    return 0;
}
Inspired by Faruk's answer, if the values are constrained to be distinct integers, there is a possibly sublinear method.
Indeed, if the difference between two values equals the difference between their indexes, they are guaranteed to belong to the same group and there is no need to look at the intermediate values.
You have to organize a recursive traversal of the array, in preorder. Before subdividing a subarray, compare the difference between the indexes of its first and last elements to the difference between their values, and only subdivide in case of a mismatch. As you work in preorder, this lets you emit the pieces of the groups in consecutive order, as well as detect the gaps. Some care has to be taken to merge the pieces of the groups.
The worst case will remain linear, because the recursive traversal can degenerate to a linear traversal (but not worse than that). The best case can be better. In particular, if the array holds a single group, it will be found in time O(1). If I am right, for every group of length between 2^n and 2^(n+1), you will spare at least 2^(n-1) tests. (In fact, it should be possible to estimate an output-sensitive complexity, equal to the array length minus a fraction of the lengths of all groups, or similar.)
Alternatively, you can work in a non-recursive way by means of exponential search: from the beginning of a group, start with a unit step and double the step each time, until you detect a gap (difference in values too large); then restart with a unit step. Here again, for large groups you will skip a significant number of elements. Anyway, the best case can only be O(log N).
I would suggest encoding subsets into an offset array defined as follows:
Elements for set #i are defined for indices j such that offset[i] <= j < offset[i+1]
The number of subsets is offset.size() - 1
This only requires one memory allocation.
Here is a complete implementation:
#include <cassert>
#include <iostream>
#include <vector>
std::vector<std::size_t> split(const std::vector<int>& to_split, const int max_dist = 1)
{
    const std::size_t to_split_size = to_split.size();
    std::vector<std::size_t> offset(to_split_size + 1);
    offset[0] = 0;
    std::size_t offset_idx = 1;
    for (std::size_t i = 1; i < to_split_size; i++)
    {
        const int dist = to_split[i] - to_split[i - 1];
        assert(dist >= 0); // we assumed sorted input
        if (dist > max_dist)
        {
            offset[offset_idx] = i;
            ++offset_idx;
        }
    }
    offset[offset_idx] = to_split_size;
    offset.resize(offset_idx + 1);
    return offset;
}

void print_partition(const std::vector<int>& to_split, const std::vector<std::size_t>& offset)
{
    const std::size_t offset_size = offset.size();
    std::cout << "\nwe found " << offset_size - 1 << " sets";
    for (std::size_t i = 0; i + 1 < offset_size; i++)
    {
        std::cout << "\n";
        for (std::size_t j = offset[i]; j < offset[i + 1]; j++)
        {
            std::cout << to_split[j] << " ";
        }
    }
}

int main()
{
    std::vector<int> to_split{4, 6, 8, 9, 10, 11, 14, 16, 17};
    std::vector<std::size_t> offset = split(to_split);
    print_partition(to_split, offset);
}
which prints:
we found 5 sets
4
6
8 9 10 11
14
16 17
Iterate through the array. Whenever the difference between two consecutive elements is greater than 1, add 1 to your answer variable.
int getPartitionNumber(const int arr[], int n) // n = size of the array
{
    int result = 1; // the first element always opens a group
    for (int i = 1; i < n; i++) {
        if (arr[i] - arr[i - 1] > 1) result++;
    }
    return result;
}
And because it is always nice to see more ideas and select the one that suits you best, here is a straightforward six-line solution. Yes, it is also O(n), but I am not sure whether the overhead of the other methods makes them slower.
Please see:
#include <iostream>
#include <string>
#include <algorithm>
#include <vector>
#include <iterator>

using Data = std::vector<int>;
using Partition = std::vector<Data>;

Data testData{ 4, 6, 8, 9, 10, 11, 14, 16, 17 };

int main(void)
{
    // This is the resulting vector of vectors with the partitions
    std::vector<std::vector<int>> partition{};

    // Iterating over source values
    for (Data::iterator i = testData.begin(); i != testData.end(); ++i) {
        // Check if we need to open a new partition:
        // either at the beginning, or if diff > 1.
        // No underflow, because of boolean short-circuit evaluation
        if ((i == testData.begin()) || ((*i) - (*(i - 1)) > 1)) {
            // Create a new partition
            partition.emplace_back(Data());
        }
        // And store the value in the current partition
        partition.back().push_back(*i);
    }

    // Debug output: copy all data to std::cout
    std::for_each(partition.begin(), partition.end(), [](const Data& d) { std::copy(d.begin(), d.end(), std::ostream_iterator<int>(std::cout, " ")); std::cout << '\n'; });
    return 0;
}
Maybe this could be a solution . . .
Why do you say your approach is not optimized? If your approach is correct, then it takes O(n) time.
But you can use binary search here, which can be faster in the average case. In the worst case, though, this binary search can take more than O(n) time.
Here's a tip:
As the array is sorted, from each group's start you want to find the farthest position whose element still belongs to the same group.
Binary search can do this in a simple way.
int arr[] = {4, 6, 8, 9, 10, 11, 14, 16, 17};
int st = 0, ed = n - 1; // n = size of the array
int partitions = 0;
while (st <= ed) {
    int low = st, high = n - 1;
    int pos = low;
    while (low <= high) {
        int mid = (low + high) / 2;
        // Assuming distinct sorted values, st..mid is one group exactly
        // when the value span equals the index span.
        if (arr[mid] - arr[st] == mid - st) {
            pos = mid;
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    partitions++;
    st = pos + 1;
}
cout << partitions << endl;
In the average case it is better than O(n), but in the worst case (where the answer equals n) it takes O(n log n) time.
Though I've tried to summarize the question in the title, I think it'll be better if I start with an instance of the problem:
List of Primes = {2 3 5 7 11 13}
Factorization pattern = {1 1 2 1}
For the above input, the program should be generating the following list of numbers:
2.3.5^2.7
2.3.5^2.11
2.3.5^2.13
2.3.7^2.11
2.3.7^2.13
2.3.11^2.13
2.5.7^2.11
2.5.7^2.13
2.7.11^2.13
3.5.7^2.11
3.5.7^2.13
3.5.11^2.13
3.7.11^2.13
5.7.11^2.13
So far, I understand that since the length of the pattern is arbitrary (as is the list of primes), I need a recursive function to get all the combinations. What I'm really, really stuck on is how to formulate the function's arguments and when to call it. This is what I've developed so far:
#include <iostream>
#include <algorithm>
#include <vector>
#include <cmath>
using namespace std;

static const int factors[] = {2, 3, 5, 7, 11, 13};
vector<int> vFactors(factors, factors + sizeof(factors) / sizeof(factors[0]));
static const int powers[] = {1, 1, 2, 1};
vector<int> vPowers(powers, powers + sizeof(powers) / sizeof(powers[0]));

// currPIdx [in] Index of the Power array from which to start generating numbers
// currFIdx [in] Index of the Factor array from which to start generating numbers
vector<int> getNumList(vector<int>& vPowers, vector<int>& vFactors, int currPIdx, int currFIdx)
{
    vector<int> vResult;
    if (currPIdx != vPowers.size() - 1)
    {
        for (int i = currPIdx + 1; i < vPowers.size(); ++i)
        {
            vector<int> vTempResult = getNumList(vPowers, vFactors, i, currFIdx + i);
            vResult.insert(vResult.end(), vTempResult.begin(), vTempResult.end());
        }
        int multFactor = pow((float) vFactors[currFIdx], vPowers[currPIdx]);
        for (int i = 0; i < vResult.size(); ++i)
            vResult[i] *= multFactor;
    }
    else
    {   // Terminating the recursive call
        for (int i = currFIdx; i < vFactors.size(); ++i)
        {
            int element = pow((float) vFactors[i], vPowers[currPIdx]);
            vResult.push_back(element);
        }
    }
    return vResult;
}

int main()
{
    vector<int> vNumList = getNumList(vPowers, vFactors, 0, 0);
    cout << "List of numbers: " << endl;
    for (int i = 0; i < vNumList.size(); ++i)
        cout << vNumList[i] << endl;
}
When I run the above, I get an incorrect list:
List of numbers:
66
78
650
14
22
26
I've somehow run into a mental block: I can't seem to figure out how to appropriately change the last parameter in the recursive call (which I believe is the reason my program isn't working)!
It would be really great if anyone could tweak my code with the missing logic (or even just point me to it; I'm not looking for a complete solution!). I would be really grateful if you could restrict your answer to standard C++!
(In case someone notices that I'm missing permutations of the given pattern, which would lead to other numbers such as 2.3.5.7^2: don't worry, I intend to repeat this algorithm on all possible permutations of the pattern using std::next_permutation!)
PS: Not a homework/interview problem, just a part of an algorithm for a very interesting Project Euler problem (I think you can even guess which one :)).
EDIT: I've solved the problem on my own and posted the solution as an answer. If you like it, do upvote it (I can't accept it as the answer until it gets more votes than the other answer!)...
Forget about factorization for a moment. The problem you want to solve is: given two lists P and F, find all possible pairings (p, f) for p in P and f in F. This means there are |P| * (|P|-1) * ... * (|P|-(|F|-1)) possible pairings: assigning an element of P to the first element of F leaves |P|-1 possibilities for the second element, and so on. You might want to separate that part of the problem in your code. If you recurse this way, the last step is choosing a remaining element of P for the last element of F. Does that help? I must admit I don't understand your code well enough to provide an answer tailored to your current state, but that's how I'd approach it in general.
Well, I figured this one out on my own! Here's the code (which I hope is self-explanatory, but I can clarify if anyone needs more details):
#include <iostream>
#include <algorithm>
#include <vector>
#include <cmath>
using namespace std;

static const int factors[] = {2, 3, 5, 7, 11, 13};
vector<int> vFactors(factors, factors + sizeof(factors) / sizeof(factors[0]));
static const int powers[] = {1, 1, 2, 1};
vector<int> vPowers(powers, powers + sizeof(powers) / sizeof(powers[0]));

// idx     - The index from which the rest of the factors are to be considered.
//           0 <= idx < Factors.size() - Powers.size()
// lvl     - The level of the depth-first tree.
//           0 <= lvl < Powers.size()
// lvlProd - The product accumulated up to the previous level for that index.
void generateNumList
(
    vector<int>& vPowers,
    vector<int>& vFactors,
    vector<int>& vNumList,
    int idx,
    int lvl,
    long lvlProd
)
{
    // Terminating case
    if (lvl == vPowers.size() - 1)
    {
        long prod = pow((float) vFactors[idx], vPowers[lvl]) * lvlProd;
        vNumList.push_back(prod);
    }
    else
    {
        // Recursive case
        long tempLvlProd = lvlProd * pow((float) vFactors[idx], vPowers[lvl]);
        for (int i = idx + 1; i < vFactors.size(); ++i)
            generateNumList(vPowers, vFactors, vNumList, i, lvl + 1,
                            tempLvlProd);
    }
}

vector<int> getNumList(vector<int>& vPowers, vector<int>& vFactors)
{
    vector<int> vNumList;
    for (int i = 0; i < vFactors.size(); ++i)
        generateNumList(vPowers, vFactors, vNumList, i, 0, 1);
    return vNumList;
}

int main()
{
    vector<int> vNumList = getNumList(vPowers, vFactors);
    cout << endl << "List of numbers (" << vNumList.size() << ") : " << endl;
    for (int i = 0; i < vNumList.size(); ++i)
        cout << vNumList[i] << endl;
}
The output of the above code (I had to work really hard to get rid of duplicate entries algorithmically!):
List of numbers (15) :
1050
1650
1950
3234
3822
9438
5390
6370
15730
22022
8085
9555
23595
33033
55055
real 0m0.002s
user 0m0.001s
sys 0m0.001s