Optimize C++ function to generate combinations - c++

I'm trying to write a function that generates all possible combinations of numbers, but my problem is that the execution time is far too long, so I think I have to optimize it.
Problem: generate all sets of size r drawn from the elements 1 to n, without repeating a set in reverse order (1,2 is considered equal to 2,1).
Example:
n = 3 //elements: 1,2,3
r = 2 //size of set
Output:
2 3
1 3
1 2
The code I'm using is the following:
void func(int n, int r){
    vector<vector<int>> reas;
    vector<bool> v(n);
    fill(v.end() - r, v.end(), true);
    int a = 0;
    do {
        reas.emplace_back();
        for (int i = 0; i < n; ++i) {
            if (v[i]) {
                reas[a].push_back(i+1);
            }
        }
        a++;
    } while (next_permutation(v.begin(), v.end()));
}
If n = 3 and r = 2, the output is the same as the example above.
My problem is that if I set n = 50 and r = 5, the execution time is far too high, and I need to work with a range of n = 50...100 and r = 1...5.
Is there a way to optimize this function?
Thanks a lot.

Yes, there are several things you can improve significantly. However, you should keep in mind that the number of combinations you are calculating is so large that it has to be slow if it is to enumerate all subsets. On my machine, and with my personal patience budget, (100,5) is out of reach.
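For a rough sense of scale, these are plain binomial coefficients: C(50,5) = 2,118,760 subsets and C(100,5) = 75,287,520, so at r = 5 the raw output already runs to hundreds of millions of integers before you do anything with them.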
Given that, here are the things you can improve without completely rewriting your entire algorithm.
First: Cache locality
A vector<vector<T>> will not be contiguous. The nested vectors are rather small, so even with preallocation this will always be bad, and iterating over it will be slow because each new sub-vector (and there are a lot of them) will likely cause a cache miss.
Hence, use a single vector<T>. Your kth subset will then not sit at index k but at index k*r. This alone is a significant speedup on my machine.
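As a small sketch of the indexing this implies (subset_at and its parameters are just illustrative names, not part of the answer's code):
```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: copy out the k-th subset from the flat buffer,
// assuming every subset holds exactly r values stored back to back.
std::vector<std::size_t> subset_at(const std::vector<std::size_t>& reas,
                                   std::size_t k, std::size_t r) {
    // subset k occupies the half-open range [k*r, (k+1)*r)
    return std::vector<std::size_t>(reas.begin() + k * r,
                                    reas.begin() + (k + 1) * r);
}
```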
Second: Use a cpu-friendly permutation vector
Your idea to use next_permutation is not bad. But the fact that you use a vector<bool> makes this extremely slow. Paradoxically, using a vector<size_t> is much faster, because it is easier to load a size_t and check it than it is to do the same with a bool.
So, if you take these together the code looks something like this:
auto func2(std::size_t n, std::size_t r){
    std::vector<std::size_t> reas;
    reas.reserve((1<<r)*n);
    std::vector<std::size_t> v(n);
    std::fill(v.end() - r, v.end(), 1);
    do {
        for (std::size_t i = 0; i < n; ++i) {
            if (v[i]) {
                reas.push_back(i+1);
            }
        }
    } while (std::next_permutation(v.begin(), v.end()));
    return reas;
}
Third: Don't press the entire result into one huge buffer
Use a callback to process each sub-set. Thereby you avoid having to return one huge vector. Instead you call a function for each individual sub-set that you found. If you really really need to have one huge set, this callback can still push the sub-sets into a vector, but it can also operate on them in-place.
std::size_t func3(std::size_t n, std::size_t r,
                  std::function<void(std::vector<std::size_t> const&)> fun){
    std::vector<std::size_t> reas;
    reas.reserve(r);
    std::vector<std::size_t> v(n);
    std::fill(v.end() - r, v.end(), 1);
    std::size_t num = 0;
    do {
        reas.clear(); // does not shrink capacity to 0
        for (std::size_t i = 0; i < n; ++i) {
            if (v[i]) {
                reas.push_back(i+1);
            }
        }
        ++num;
        fun(reas);
    } while (std::next_permutation(v.begin(), v.end()));
    return num;
}
This yields a speedup of well over 2x in my experiments. But the speedup goes up the more you crank up n and r.
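For illustration, a minimal usage sketch, assuming func3 as defined above is visible; the lambda here simply prints each subset:
```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <vector>

// func3 from the answer above is assumed to be declared before this point.

int main() {
    std::size_t count = func3(5, 3, [](std::vector<std::size_t> const& subset) {
        for (std::size_t x : subset) std::cout << x << ' ';
        std::cout << '\n';
    });
    std::cout << count << " subsets\n"; // C(5,3) == 10
}
```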
Also: Use compiler optimisation
Use your compiler's optimisation options to speed up the generated code as much as possible. On my system the jump from -O0 to -O1 is a speedup of well over 10x. The jump from -O1 to -O3 is much smaller but still there (about 1.1x).
Unrelated to performance, but still relevant: Why is "using namespace std;" considered bad practice?

Related

Rotate elements in a vector and how to return a vector

C++ newbie here. For an assignment I have to rotate all the elements in a vector to the left by one. So, for instance, the elements {1,2,3} should rotate to {2,3,1}.
I'm researching how to do it, and I saw the rotate() function, but I don't think that will work given my code. And then I saw a for loop that could do it, but I'm not sure how to translate that into a return statement (I tried to adjust it and failed).
This is what I have so far, but it is very wrong (I haven't gotten a single result that hasn't ended in an error yet).
Edit: The vector size I have to deal with is just three, so it doesn't need to account for vectors of any size.
#include <vector>
using namespace std;

vector<int> rotate(const vector<int>& v)
{
    // PUT CODE BELOW THIS LINE. DON'T CHANGE ANYTHING ABOVE.
    vector<int> result;
    int size = 3;
    for (auto i = 0; i < size - 1; ++i)
    {
        v.at(i) = v.at(i + 1);
        result.at(i) = v.at(i);
    }
    return result;
    // PUT CODE ABOVE THIS LINE. DON'T CHANGE ANYTHING BELOW.
}
All my teacher does is upload textbook pages that explain what certain parts of code are supposed to do, but the textbook pages offer NO help in trying to figure out how to actually apply this stuff.
So could someone please give me a few pointers?
Since you know exactly how many elements you have, and it's the smallest number that makes sense to rotate, you don't need to do anything fancy - just place the items in the order that you need, and return the result:
vector<int> rotate3(const vector<int>& x) {
    return vector<int> { x[1], x[2], x[0] };
}
Note that if your collection always has three elements, you could use std::array instead:
std::array<int,3>
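A minimal sketch of what that could look like, assuming the same hand-placed ordering as the vector version above:
```cpp
#include <array>

std::array<int, 3> rotate3(const std::array<int, 3>& x) {
    return { x[1], x[2], x[0] };
}
```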
First, note that you have passed v as a const reference (const vector<int>&), so you are forbidden to modify the state of v in v.at(i) = v.at(i + 1);.
Although Sergey has already given a straightforward solution, you could correct your code like this:
#include <vector>
using namespace std;

vector<int> left_rotate(const vector<int>& v)
{
    vector<int> result;
    int size = v.size(); // this way you are able to rotate vectors of any size
    for (auto i = 1; i < size; ++i)
        result.push_back(v.at(i));
    // adding first element of v at end of result
    result.push_back(v.front());
    return result;
}
Use Sergey's answer. This answer deals with why what the asker attempted did not work. They're damn close, so it's worth going through it, explaining the problems, and showing how to fix it.
In
v.at(i) = v.at(i + 1);
v is constant. You can't write to it. The naïve solution (which won't work) is to cut out the middle-man and write directly to the result vector because it is NOT const
result.at(i) = v.at(i + 1);
This doesn't work because
vector<int> result;
defines an empty vector. There is no at(i) to write to, so at throws an exception that terminates the program.
As an aside, the [] operator does not check bounds like at does and will not throw an exception. This can lead you to thinking the program worked when instead it was writing to memory the vector did not own. This would probably crash the program, but it doesn't have to1.
The quick fix here is to ensure usable storage with
vector<int> result(v.size());
The resulting code
vector<int> rotate(const vector<int>& v)
{
    // PUT CODE BELOW THIS LINE. DON'T CHANGE ANYTHING ABOVE.
    vector<int> result(v.size()); // change here to size the vector
    int size = 3;
    for (auto i = 0; i < size - 1; ++i)
    {
        result.at(i) = v.at(i + 1); // change here to directly assign to result
    }
    return result;
    // PUT CODE ABOVE THIS LINE. DON'T CHANGE ANYTHING BELOW.
}
almost works. But when we run it on {1, 2, 3} result holds {2, 3, 0} at the end. We lost the 1. That's because v.at(i + 1) never touches the first element of v. We could increase the number of for loop iterations and use the modulo operator
vector<int> rotate(const vector<int>& v)
{
    // PUT CODE BELOW THIS LINE. DON'T CHANGE ANYTHING ABOVE.
    vector<int> result(v.size());
    int size = 3;
    for (auto i = 0; i < size; ++i) // change here to iterate size times
    {
        result.at(i) = v.at((i + 1) % size); // change here to make i + 1 wrap
    }
    return result;
    // PUT CODE ABOVE THIS LINE. DON'T CHANGE ANYTHING BELOW.
}
and now the output is {2, 3, 1}. But it's just as easy, and probably a bit faster, to just do what we were doing and tack on the missing element after the loop.
vector<int> rotate(const vector<int>& v)
{
    // PUT CODE BELOW THIS LINE. DON'T CHANGE ANYTHING ABOVE.
    vector<int> result(v.size());
    int size = 3;
    for (auto i = 0; i < size - 1; ++i)
    {
        result.at(i) = v.at(i + 1);
    }
    result.at(size - 1) = v.at(0); // change here to store first element
    return result;
    // PUT CODE ABOVE THIS LINE. DON'T CHANGE ANYTHING BELOW.
}
Taking this a step further, the size of three is an unnecessary limitation for this function that I would get rid of, and since we guarantee that we never go out of bounds in our for loop, we don't need the extra bounds checking in at:
vector<int> rotate(const vector<int>& v)
{
    // PUT CODE BELOW THIS LINE. DON'T CHANGE ANYTHING ABOVE.
    if (v.empty()) // nothing to rotate.
    {
        return vector<int>{}; // return empty result
    }
    vector<int> result(v.size());
    // Explicitly using size_t because 0 is an int, and v.size() is an unsigned
    // integer of implementation-defined size, but it cannot be larger than size_t.
    // Note that v.size() - 1 is only safe because we made sure v is not empty
    // above; otherwise 0 - 1 in unsigned math becomes a very, very large
    // positive number.
    for (size_t i = 0; i < v.size() - 1; ++i)
    {
        result[i] = v[i + 1];
    }
    // Using direct calls to front and back because it's a little easier on my
    // mind than math and [].
    result.back() = v.front();
    return result;
    // PUT CODE ABOVE THIS LINE. DON'T CHANGE ANYTHING BELOW.
}
We can go further still and use iterators and range-based for loops, but I think this is enough for now. Besides, at the end of the day, you can throw the function out completely and use std::rotate from the <algorithm> library.
1This is called Undefined Behaviour (UB), and one of the most fearsome things about UB is that anything could happen, including giving you the expected result. We put up with UB because it makes for very fast, versatile programs. Validity checks are not made where you don't need them (along with where you did), unless the compiler and library writers decide to make those checks and give guaranteed behaviour like an error message and a crash. Microsoft, for example, does exactly this in the vector implementation used when you make a debug build. The release version of Microsoft's vector makes no checks; it assumes you wrote the code correctly and would prefer the executable to be as fast as possible.
I saw the rotate() function, but I don't think that will work given my code.
Yes it will work.
When learning there is gain in "reinventing the wheel" (e.g. implementing rotate yourself) and there is also gain in learning how to use the existing pieces (e.g. use standard library algorithm functions).
Here is how you would use std::rotate from the standard library:
#include <algorithm>
#include <vector>

std::vector<int> rotate_1(const std::vector<int>& v)
{
    std::vector<int> result = v;
    std::rotate(result.begin(), result.begin() + 1, result.end());
    return result;
}

STL algorithms for pairwise comparison and tracking max/longest sequence

Consider this fairly easy algorithmic problem:
Given an array of (unsorted) numbers, find the length of the longest sequence of adjacent numbers that are increasing. For example, if we have {1,4,2,3,5}, we expect the result to be 3 since {2,3,5} gives the longest increasing sequence of adjacent/contiguous elements. Note that for non-empty arrays, such as {4,3,2,1}, the minimum result will be 1.
This works:
#include <algorithm>
#include <iostream>
#include <vector>

template <typename T, typename S>
T max_adjacent_length(const std::vector<S> &nums) {
    if (nums.size() == 0) {
        return 0;
    }
    T maxLength = 1;
    T currLength = 1;
    for (size_t i = 0; i < nums.size() - 1; i++) {
        if (nums[i + 1] > nums[i]) {
            currLength++;
        } else {
            currLength = 1;
        }
        maxLength = std::max(maxLength, currLength);
    }
    return maxLength;
}

int main() {
    std::vector<double> nums = {1.2, 4.5, 3.1, 2.7, 5.3};
    std::vector<int> ints = {4, 3, 2, 1};
    std::cout << max_adjacent_length<int, double>(nums) << "\n"; // 2
    std::cout << max_adjacent_length<int, int>(ints) << "\n";    // 1
    return 0;
}
As an exercise for myself, I was wondering if there is/are STL algorithm(s) that achieve the same effect, thereby (ideally) avoiding the raw for-loop I have. The motivation behind doing this is to learn more about STL algorithms, and practice using abstracted algorithms to make my code more general and reusable.
Here are my ideas, but they don't quite achieve what I'd like.
std::adjacent_find achieves the pairwise comparisons and can be used to find the index of a non-increasing pair, but doesn't easily facilitate the ability to keep a current and maximum length and compare the two. It could be possible to have those state variables as part of my predicate function, but that seems a bit wrong since ideally you'd like your predicate function to not have any side effects, right?
std::adjacent_difference is interesting. One could use it to construct a vector of the differences between adjacent numbers. Then, starting from the second element, depending on if the difference is positive or negative, we could again track the maximum number of consecutive positive differences seen. This is actually quite close to achieving what we'd like. See the example code below:
#include <algorithm>
#include <numeric>
#include <vector>

template <typename T, typename S>
T max_adjacent_length(std::vector<S> &nums) {
    if (nums.size() == 0) {
        return 0;
    }
    std::adjacent_difference(nums.begin(), nums.end(), nums.begin());
    nums.erase(std::begin(nums)); // keep only differences
    T maxLength = 1, currLength = 1;
    for (auto n : nums) {
        currLength = n > 0 ? (currLength + 1) : 1;
        maxLength = std::max(maxLength, currLength);
    }
    return maxLength;
}
The problem here is that we lose the const-ness of nums if we want to compute the differences in place, or we have to sacrifice space and create a copy of nums, which is a no-no given that the original solution is already O(1) in space.
Is there an idea/solution that I have overlooked that achieves what I want in a succinct and readable manner?
In both your code snippets, you are iterating through a range (in the first version, with an index-based-loop, and in the second with a range-for loop). This is not really the kind of code you should be writing if you want to use the standard algorithms, which work with iterators into the range. Instead of thinking of a range as a collection of elements, if you start thinking in terms of pairs of iterators, choosing the right algorithms becomes easier.
For this problem, here's a reasonable way to write this code:
auto max_adjacent_length = [](auto const & v)
{
    long max = 0;
    auto begin = v.begin();
    while (begin != v.end()) {
        auto next = std::is_sorted_until(begin, v.end());
        max = std::max(std::distance(begin, next), max);
        begin = next;
    }
    return max;
};
Note that you were already on the right track in terms of picking a reasonable algorithm. This could be solved with adjacent_find as well, with just a little more work.
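For completeness, here is one way such an adjacent_find variant might look; this is a sketch of the idea, not tested code from the answer above. The predicate flags the first non-increasing pair, and the run's length is the distance covered so far:
```cpp
#include <algorithm>
#include <iterator>
#include <vector>

long max_adjacent_length_af(const std::vector<int>& v) {
    long max = 0;
    auto begin = v.begin();
    while (begin != v.end()) {
        // Find the first position where the next element is not greater.
        auto it = std::adjacent_find(begin, v.end(),
                                     [](int a, int b) { return !(a < b); });
        // The increasing run ends just past that position (or at end()).
        auto next = (it == v.end()) ? v.end() : std::next(it);
        max = std::max<long>(std::distance(begin, next), max);
        begin = next;
    }
    return max;
}
```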

How can I use all the cores in the loop?

There is a loop.
for (int i = 0; i < n; ++i) {
    //...
    v[i] = o.f(i);
    //...
}
Each v[i] = o.f(i) is independent of all the other v[i] = o.f(i).
n can be any value and it may not be a multiple of the number of cores. What is the simplest way to use all the cores to do this?
The ExecutionPolicy overloads of the algorithms in <algorithm> exist for this purpose. std::transform applies a function to each element of a source range to assign to a destination range.
v.begin() is an acceptable destination, so long as v is of appropriate size. Your snippet assumes this when it uses v[i], so I will too.
We then need an iterator that gives the values [0, n) as our source, so boost::counting_iterator<int>.
Finally we need a Callable that will apply o.f to our values, so let's capture o in a lambda.
#include <algorithm>
#include <execution>
#include <boost/iterator/counting_iterator.hpp>

// assert(v.size() >= n)
std::transform(std::execution::par,
               boost::counting_iterator<int>(0),
               boost::counting_iterator<int>(n),
               v.begin(),
               [&o](int i){ return o.f(i); });
If o.f does not perform any "vectorization-unsafe operations", you can use std::execution::par_unseq, which may interleave calls on the same thread (i.e. unroll the loop and use SIMD instructions).
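Under that assumption, the only change to the snippet above would be the execution policy argument:
```cpp
// Same o, n and v as in the previous snippet; only the policy differs.
std::transform(std::execution::par_unseq,
               boost::counting_iterator<int>(0),
               boost::counting_iterator<int>(n),
               v.begin(),
               [&o](int i){ return o.f(i); });
```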
In the land of existing compilers, and remembering that MS can't even get this stuff right for C++11, never mind C++17/20, the C++11 answer goes something like this:
// R is the element type of v (the original "typedef v.value_type R;" is not
// valid C++, so spell it out with decltype):
using R = typename std::decay<decltype(v)>::type::value_type;

std::vector<std::future<R>> fut(n);
for (int i = 0; i < n; i++)
    fut[i] = std::async(std::launch::async, &O::f, &o, i);
for (auto& f : fut)
    v.push_back(f.get());
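A self-contained sketch of that pattern, where the struct O and its member f are made-up stand-ins for the real per-element work:
```cpp
#include <future>
#include <iostream>
#include <vector>

struct O {
    int f(int i) const { return i * i; } // placeholder for the real work
};

int main() {
    O o;
    const int n = 8;
    std::vector<int> v;
    std::vector<std::future<int>> fut(n);
    for (int i = 0; i < n; ++i)
        fut[i] = std::async(std::launch::async, &O::f, &o, i); // one task per element
    for (auto& f : fut)
        v.push_back(f.get()); // collect results in order
    for (int x : v)
        std::cout << x << ' ';
    std::cout << '\n';
}
```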
#arne suggests we can do better by throttling the number of tasks by considering the number of processors (P), which is true, though the above code will give you a clear indication of whether you will really benefit from multi-threading the method f. Given we only want to launch X jobs simultaneously, where X is somewhere between P and 3*P depending on the variation in job complexity (note I am relying on a signed index):
using R = typename std::decay<decltype(v)>::type::value_type;

std::vector<std::future<R>> fut(n);
for (ssize_t i = 0, j = -X; j < n; i++, j++)
{
    if (i < n)  fut[i] = std::async(std::launch::async, &O::f, &o, i);
    if (j >= 0) v.push_back(fut[j].get());
}
I'm not claiming the above code is "great", but if the jobs are complex enough for us to need multithreading, the cost of looping a few extra times isn't going to be noticed. You will notice that if X > n the loop will spin a few times in the middle, but it will still produce the correct result :-)

Vector performance suffering

I've been working on state space exploration and was originally using a map to store the assignment of the world states like map<Variable *, int>, where variables are objects in the world with a domain from 0 to n where n is finite. The implementation was extremely quick for performance, but I noticed that it does not scale well with the size of the state space. I changed the states to use vector<int> instead, where I use the id of a variable to find its index in the vector. Memory usage improved greatly, but the efficiency of the solver has tanked (gone from <30 seconds to 400+). The only code that I modified was generating the states and validating if the state is the goal. I can't figure out why using a vector has degraded performance, especially since the vector operations should only take linear time at worst.
Originally this was how I generated nodes:
State * SuccessorGen::generate_successor(const Operator &op, map<Variable *, int> &var_assignment){
    map<Variable *, int> values;
    values.insert(var_assignment.begin(), var_assignment.end());
    vector<Operator::Effect> effect = op.get_effect();
    vector<Operator::Effect>::const_iterator eff_it = effect.begin();
    for (; eff_it != effect.end(); eff_it++){
        values[eff_it->var] = eff_it->after;
    }
    return new State(values);
}
And in my new implementation:
State* SuccessorGen::generate_successor(const Operator &op, const vector<int> &assignment){
    vector<int> child;
    child = assignment;
    vector<Operator::Effect> effect = op.get_effect();
    vector<Operator::Effect>::const_iterator eff_it = effect.begin();
    for (; eff_it != effect.end(); eff_it++){
        Variable *v = eff_it->var;
        int id = v->get_id();
        child[id] = eff_it->after;
    }
    return new State(child);
}
(The goal checking is similar, just looping over the goal assignment instead of operator effects.)
Are these vector operations really that much slower than using a map? Is there an equally efficient STL container I can use that has a lower overhead? The number of variables is relatively small (<50) and the vector never needs to be resized or modified after the for loop.
Edit:
I tried timing one loop through all the operators to compare: with the effect list and assignment, the vector version runs one loop in 0.3 seconds, while the map version takes a little over 0.4 seconds. When I comment that section out, the map version stays about the same, yet the vector version jumps up to closer to 0.5 seconds. I added child.reserve(assignment.size()) but that did not make any change.
Edit 2:
From user63710's answer, I've also been digging through the rest of the code and noticed something really strange going on in the heuristic calculation. The vector version works fine, but for the map version I use this line: Node *n = new Node(i, transition.value, label_cost); open_list.push(n);. Once the loop finishes filling the queue, the nodes get totally screwed up. Nodes are a simple struct:
struct Node{
    // Source Value, Destination Value
    int from;
    int to;
    int distance;
    Node(int &f, int &t, int &d) : from(f), to(t), distance(d){}
};
Instead of having from, to, distance, it replaces from and to with an id holding some random number, and that search does not do what it should and returns much faster than it should. When I tweak the map version to convert the map to a vector and run this:
Node n(i, transition.value, label_cost); open_list.push(n);
the performance is about equal to that of the vector version. So that fixes my main issue, but it leaves me wondering why using Node *n gets this behaviour as opposed to Node n(...)?
If as you say, the sizes of these structures are fairly small (~50 elements), I have to think that the issue is somewhere else. At least, I don't think it involves the memory accesses or allocation of the vector/map.
Some example code I made to test: Map version:
unique_ptr<map<int, int>> make_successor_map(const vector<int> &ids,
                                             const map<int, int> &input)
{
    auto new_map = make_unique<map<int, int>>(input.begin(), input.end());
    for (size_t i = 0; i < ids.size(); ++i)
        swap((*new_map)[ids[i]], (*new_map)[i]);
    return new_map;
}

int main()
{
    auto a_map = make_unique<map<int, int>>();
    // ids to access
    vector<int> ids;
    const int n = 100;
    for (int i = 0; i < n; ++i)
    {
        a_map->insert({i, rand()});
        ids.push_back(i);
    }
    random_shuffle(ids.begin(), ids.end());
    for (int i = 0; i < 1e6; ++i)
    {
        auto temp_map = make_successor_map(ids, *a_map);
        swap(temp_map, a_map);
    }
    cout << a_map->begin()->second << endl;
}
Vector version:
unique_ptr<vector<int>> make_successor_vec(const vector<int> &ids,
                                           const vector<int> &input)
{
    auto new_vec = make_unique<vector<int>>(input);
    for (size_t i = 0; i < ids.size(); ++i)
        swap((*new_vec)[ids[i]], (*new_vec)[i]);
    return new_vec;
}

int main()
{
    auto a_vec = make_unique<vector<int>>();
    // ids to access
    vector<int> ids;
    const int n = 100;
    for (int i = 0; i < n; ++i)
    {
        a_vec->push_back(rand());
        ids.push_back(i);
    }
    random_shuffle(ids.begin(), ids.end());
    for (int i = 0; i < 1e6; ++i)
    {
        auto temp_vec = make_successor_vec(ids, *a_vec);
        swap(temp_vec, a_vec);
    }
    cout << *a_vec->begin() << endl;
}
The map version takes around 15 seconds to run on my old Core 2 Duo T9600, and the vector version takes 0.406 seconds. Both were compiled with G++ 4.9.2 using g++ -O3 --std=c++1y. So if your code takes 0.4s per iteration (note that my example code took 0.4s for 1 million calls), then I really think your problem is somewhere else.
That's not to say you aren't having performance decreases due to switching from map->vector, but that the code you posted doesn't show much reason for that to happen.
The problem is that you create vectors without reserving space. Vectors store elements contiguously; that is what guarantees constant-time access to elements.
So every time you add an item to the vector (for example via your inserter), the vector may have to allocate more space and eventually move all the existing elements to the newly allocated memory. This causes slowdown and considerable heap fragmentation.
The solution is to reserve() elements if you know in advance how many elements you'll have. If you don't, reserve() larger chunks and compare size() and capacity() to check whether it's time to reserve more.
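A minimal sketch of the reserve() pattern, assuming the element count is known up front (the function and its names here are purely illustrative):
```cpp
#include <cstddef>
#include <vector>

std::vector<int> build_known_size(std::size_t n) {
    std::vector<int> out;
    out.reserve(n);                          // single allocation up front
    for (std::size_t i = 0; i < n; ++i)
        out.push_back(static_cast<int>(i));  // no reallocation inside the loop
    return out;
}
```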

How to get random and unique values from a vector? [duplicate]

Possible Duplicate:
Unique random numbers in O(1)?
Unique random numbers in an integer array in the C programming language
I have a std::vector of unique elements of some undetermined size. I want to fetch 20 unique and random elements from this vector. By 'unique' I mean that I do not want to fetch the same index more than once. Currently the way I do this is to call std::random_shuffle. But this requires me to shuffle the entire vector (which may contain over 1000 elements). I don't mind mutating the vector (I prefer not to though, as I won't need to use thread locks), but most important is that I want this to be efficient. I shouldn't be shuffling more than I need to.
Note that I've looked into passing in a partial range to std::random_shuffle but it will only ever shuffle that subset of elements, which would mean that the elements outside of that range never get used!
Help is appreciated. Thank you!
Note: I'm using Visual Studio 2005, so I do not have access to C++11 features and libraries.
You can use the Fisher–Yates shuffle: http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle
The Fisher–Yates shuffle (named after Ronald Fisher and Frank Yates), also known as the Knuth shuffle (after Donald Knuth), is an algorithm for generating a random permutation of a finite set—in plain terms, for randomly shuffling the set. A variant of the Fisher–Yates shuffle, known as Sattolo's algorithm, may be used to generate random cycles of length n instead. Properly implemented, the Fisher–Yates shuffle is unbiased, so that every permutation is equally likely. The modern version of the algorithm is also rather efficient, requiring only time proportional to the number of items being shuffled and no additional storage space.
The basic process of Fisher–Yates shuffling is similar to randomly picking numbered tickets out of a hat, or cards from a deck, one after another until there are no more left. What the specific algorithm provides is a way of doing this numerically in an efficient and rigorous manner that, properly done, guarantees an unbiased result.
I think this pseudocode should work (there is a chance of an off-by-one mistake or something so double check it!):
std::list<int> chosen; // you don't have to use this since the chosen ones will be in the back of the vector
for (int i = 0; i < num; ++i) {
    int index = rand_between(0, vec.size() - i - 1);
    chosen.push_back(vec[index]);
    swap(vec[index], vec[vec.size() - i - 1]);
}
You want a random sample of size m from an n-vector:
Let rand(a) return 0..a-1 uniform
for (int i = 0; i < m; i++)
    swap(X[i], X[i + rand(n - i)]);
X[0..m-1] is now a random sample.
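As a concrete (untested here) C++ rendering of that pseudocode, sticking to pre-C++11 facilities since the question mentions Visual Studio 2005; rand() is used only for illustration and is not a great generator:
```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Partial Fisher-Yates: shuffle only the first m positions, each time
// swapping in a uniformly chosen element from the untouched tail.
template <typename T>
std::vector<T> sample_m(std::vector<T> x, std::size_t m) {
    const std::size_t n = x.size();
    for (std::size_t i = 0; i < m && i < n; ++i) {
        std::size_t j = i + std::rand() % (n - i); // pick from positions [i, n)
        std::swap(x[i], x[j]);
    }
    x.resize(std::min(m, n)); // the first m elements are the sample
    return x;
}
```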
Use a loop to put random index numbers into a std::set and stop when the size() reaches 20.
std::set<int> indexes;
std::vector<my_vector::value_type> choices;
int max_index = my_vector.size();
while (indexes.size() < min(20, max_index))
{
    int random_index = rand() % max_index;
    if (indexes.find(random_index) == indexes.end())
    {
        choices.push_back(my_vector[random_index]);
        indexes.insert(random_index);
    }
}
The random number generation is the first thing that popped into my head, feel free to use something better.
#include <iostream>
#include <vector>
#include <algorithm>

template<int N>
struct NIntegers {
    int values[N];
};

template<int N, int Max, typename RandomGenerator>
NIntegers<N> MakeNRandomIntegers( RandomGenerator func ) {
    NIntegers<N> result;
    for(int i = 0; i < N; ++i)
    {
        result.values[i] = func( Max-i );
    }
    std::sort(&result.values[0], &result.values[0]+N);
    for(int i = 0; i < N; ++i)
    {
        result.values[i] += i;
    }
    return result;
}
Use example:
// use a better one:
int BadRandomNumberGenerator(int Max) {
    return Max>4?4:Max/2;
}

int main() {
    NIntegers<100> result = MakeNRandomIntegers<100, 500>( BadRandomNumberGenerator );
    for (int i = 0; i < 100; ++i) {
        std::cout << i << ":" << result.values[i] << "\n";
    }
}
Make the maximum for each successive number one smaller than the last. Sort the results, then bump each value up by the number of integers before it.
The template stuff is just trade dress.