I want to know why std::accumulate (aka reduce) needs a 3rd parameter. For those who do not know what accumulate is, it's used like so:
vector<int> V{1,2,3};
int sum = accumulate(V.begin(), V.end(), 0);
// sum == 6
The call to accumulate is equivalent to:
sum = 0; // 0 - value of 3rd param
for (auto x : V) sum += x;
There is also an optional 4th parameter, which allows replacing addition with any other operation.
The rationale I've heard is that if you need, say, to multiply the elements of a vector rather than add them up, you need a different (non-zero) initial value:
vector<int> V{1,2,3};
int product = accumulate(V.begin(), V.end(), 1, multiplies<int>());
But why not do it like Python - set the initial value to *V.begin(), and use the range starting from V.begin()+1? Something like this:
int sum = accumulate(V.begin()+1, V.end(), *V.begin());
This will work for any op. Why is 3rd parameter needed at all?
You're making a mistaken assumption: that the type T is the same as the InputIterator's value type.
But std::accumulate is generic, and allows all different kinds of creative accumulations and reductions.
Example #1: Accumulate salary across Employees
Here's a simple example: an Employee class, with many data fields.
class Employee {
/** All kinds of data: name, ID number, phone, email address... */
public:
int monthlyPay() const;
};
You can't meaningfully "accumulate" a set of employees. That makes no sense; it's undefined. But, you can define an accumulation regarding the employees. Let's say we want to sum up all the monthly pay of all employees. std::accumulate can do that:
/** Simple class defining how to add a single Employee's
* monthly pay to our existing tally */
auto accumulate_func = [](int accumulator, const Employee& emp) {
return accumulator + emp.monthlyPay();
};
// And here's how you call the actual calculation:
int TotalMonthlyPayrollCost(const vector<Employee>& V)
{
return std::accumulate(V.begin(), V.end(), 0, accumulate_func);
}
So in this example, we're accumulating an int value over a collection of Employee objects. Here, the accumulation sum isn't the same type of variable that we're actually summing over.
Example #2: Accumulating an average
You can use accumulate for more complex types of accumulations as well - maybe you want to append values to a vector; maybe you have some arcane statistic you're tracking across the input; etc. What you accumulate doesn't have to be just a number; it can be something more complex.
For example, here's a simple use of accumulate to calculate the average of a vector of ints:
// This time our accumulator isn't an int -- it's a structure that lets us
// accumulate an average.
struct average_accumulate_t
{
int sum;
size_t n;
double GetAverage() const { return ((double)sum)/n; }
};
// Here's HOW we add a value to the average:
auto func_accumulate_average =
[](average_accumulate_t accAverage, int value) {
return average_accumulate_t(
{accAverage.sum+value, // value is added to the total sum
accAverage.n+1}); // increment number of values seen
};
double CalculateAverage(const vector<int>& V)
{
    average_accumulate_t res =
        std::accumulate(V.begin(), V.end(), average_accumulate_t({0,0}), func_accumulate_average);
    return res.GetAverage();
}
Example #3: Accumulate a running average
Another reason you need the initial value is because that value isn't always the default/neutral value for the calculation you're making.
Let's build on the average example we've already seen. But now, we want a class that can hold a running average -- that is, we can keep feeding in new values, and check the average so far, across multiple calls.
class RunningAverage
{
average_accumulate_t _avg;
public:
RunningAverage():_avg({0,0}){} // initialize to empty average
double AverageSoFar() const { return _avg.GetAverage(); }
void AddValues(const vector<int>& v)
{
_avg = std::accumulate(v.begin(), v.end(),
_avg, // NOT the default initial {0,0}!
func_accumulate_average);
}
};
int main()
{
RunningAverage r;
r.AddValues(vector<int>({1,1,1}));
std::cout << "Running Average: " << r.AverageSoFar() << std::endl; // 1.0
r.AddValues(vector<int>({-1,-1,-1}));
std::cout << "Running Average: " << r.AverageSoFar() << std::endl; // 0.0
}
This is a case where we absolutely rely on being able to set that initial value for std::accumulate - we need to be able to initialize the accumulation from different starting points.
In summary, std::accumulate is good for any time you're iterating over an input range, and building up one single result across that range. But the result doesn't need to be the same type as the range, and you can't make any assumptions about what initial value to use -- which is why you must have an initial instance to use as the accumulating result.
The way things are, it is annoying for code that knows for sure a range isn't empty and wants to start accumulating from the first element of the range onward. Depending on the operation used to accumulate with, it's not always obvious what the 'zero' value to use is.
If on the other hand you only provide a version that requires non-empty ranges, it's annoying for callers that don't know for sure that their ranges aren't empty. An additional burden is put on them.
One perspective is that the best of both worlds is of course to provide both pieces of functionality. As an example, Haskell provides both foldl1 and foldr1 (which require non-empty lists) alongside foldl and foldr (which mirror std::accumulate).
Another perspective is that since the one can be implemented in terms of the other with a trivial transformation (as you've demonstrated: std::accumulate(std::next(b), e, *b, f) -- std::next is C++11 but the point still stands), it is preferable to make the interface as minimal as it can be with no real loss of expressive power.
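For what it's worth, that trivial transformation wraps up into a small helper for non-empty ranges; a rough sketch (the name and signature are mine, not from the standard library):
#include <iterator>
#include <numeric>

// Precondition: [first, last) must not be empty.
template <typename InputIt, typename BinaryOp>
typename std::iterator_traits<InputIt>::value_type
accumulate_nonempty(InputIt first, InputIt last, BinaryOp op)
{
    auto init = *first;                                        // first element seeds the accumulation
    return std::accumulate(std::next(first), last, init, op);  // fold the rest onto it
}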
Because standard library algorithms are supposed to work for arbitrary ranges of (compatible) iterators. So the first argument to accumulate doesn't have to be begin(); it could be any iterator between begin() and one before end(). It could also use reverse iterators.
The whole idea is to decouple algorithms from data. Your suggestion, if I understand it correctly, requires a certain structure in the data.
If you wanted accumulate(V.begin()+1, V.end(), *V.begin()) you could just write that. But what if you thought v.begin() might be v.end() (i.e. v is empty)? What if v.begin() + 1 is not implemented (because v only implements ++, not generalized addition)? What if the type of the accumulator is not the type of the elements? E.g.
std::accumulate(v.begin(), v.end(), 0L, [](long count, char c){
    return isalpha(c) ? count + 1 : count;
});
It's indeed not needed. Our codebase has 2 and 3-argument overloads which use a T{} value.
However, std::accumulate is pretty old; it comes from the original STL. Our codebase has fancy std::enable_if logic to distinguish between "2 iterators and initial value" and "2 iterators and reduction operator". That requires C++11. Our code also uses a trailing return type (auto accumulate(...) -> ...) to calculate the return type, another C++11 feature.
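For illustration, here is a rough sketch of what such an overload set might look like. This is not the answer's actual code, and for brevity the dispatch uses C++17's std::is_invocable rather than hand-rolled C++11 enable_if machinery:
#include <iterator>
#include <numeric>
#include <type_traits>

// Two iterators only: accumulate from a value-initialized T{}.
template <typename It,
          typename T = typename std::iterator_traits<It>::value_type>
T accumulate2(It first, It last)
{
    return std::accumulate(first, last, T{});
}

// Two iterators plus a reduction operator, detected because the third
// argument is callable with two values; still starts from T{}.
template <typename It, typename Op,
          typename T = typename std::iterator_traits<It>::value_type,
          std::enable_if_t<std::is_invocable_r_v<T, Op, T, T>, int> = 0>
T accumulate2(It first, It last, Op op)
{
    return std::accumulate(first, last, T{}, op);
}

// Two iterators plus an initial value: the classic three-argument form.
template <typename It, typename T,
          std::enable_if_t<
              !std::is_invocable_v<T,
                  typename std::iterator_traits<It>::value_type,
                  typename std::iterator_traits<It>::value_type>,
              int> = 0>
T accumulate2(It first, It last, T init)
{
    return std::accumulate(first, last, init);
}
With these, accumulate2(v.begin(), v.end()) sums from int{} (i.e. 0), while the operator and initial-value forms still resolve without ambiguity.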
Related
I'm a beginner user of Google OR-Tools, especially the CP-SAT. I'm using version 9.3, and I'm interested in the C++ version.
I'm modeling a problem where I need to count how many pairs of decision variables have the same (assigned) value. So, let's suppose I have a set of integer variables like this:
std::vector<IntVar> my_vars;
I also have a set of pairs like this:
std::vector<std::pair<size_t, size_t>> my_pairs;
Assume that all bounds, sizes, etc., are valid. Now, I want to compute how many of these pairs have the same value. Using IBM Ilog Concert, I can do it very straightforwardly:
// Using Ilog Concert technology.
IloIntVar count(env, 0, MY_UPPER_BOUND);
IloIntExpr expr_count(env);
for(const auto& [u, v] : my_pairs) {
expr_count += (my_vars[u] == my_vars[v]);
}
model.add(count == expr_count);
Here, count is a decision variable that holds how many pairs have the same value in a given solution. The expression is a sum of boolean values comparing the actual decision variables' values, not the variable objects themselves (i.e., not whether the object representing variable u is the same object as the one representing variable v).
Using OR-Tools, the equality operator == compares whether the variable objects (or representations of them) are equal, not the decision variables' values. So the following fails by generating an empty expression:
// Using Google Or-Tools CP-SAT.
IntVar count = cp_model
.NewIntVar(Domain(0, my_pairs.size()))
.WithName("count");
LinearExpr expr_count;
for(const auto& [u, v] : my_pairs) {
expr_count += (my_vars[u] == my_vars[v]);
}
cp_model.AddEquality(count, expr_count);
Note that, according to Google OR-Tools code (here), we have that:
class IntVar {
//...
bool operator==(const IntVar& other) const {
return other.builder_ == builder_ && other.index_ == index_;
}
//...
};
i.e., comparing whether the variables are the same, but not the values assigned to them. Therefore, we cannot compare decision variables directly using CP-SAT, and we need to resort to another method.
Obviously, I can change the model using some big-M notation and linearize such expressions. However, can I do the count without resorting to "remodeling"? I.e., is there a construct I can use "more or less" easily so that I can address such cases?
I must mention that while I only depict one case here, I have quite a few counting variables over several sets like that. So, remodeling using big-M will be a big headache. I would prefer a simpler, more straightforward approach like Ilog Concert's.
(Update) Little extension
Now, I want to do the same but comparing decision variables with scalars. For example:
std::vector<int> my_scalars;
for(size_t i = 0; i < my_scalars.size(); ++i) {
expr_count += (my_vars[i] == my_scalars[i]);
}
While this can be done using Ilog, it does not even compile with OR-Tools.
Thanks,
Carlos
Here is some tentative code:
IntVar count = model.NewIntVar(Domain(0, MY_UPPER_BOUND));
LinearExpr expr_count;
for(const auto& [u, v] : my_pairs) {
BoolVar is_equal = model.NewBoolVar();
model.AddEquality(my_vars[u], my_vars[v]).OnlyEnforceIf(is_equal);
model.AddNotEqual(my_vars[u], my_vars[v]).OnlyEnforceIf(is_equal.Not());
expr_count += is_equal;
}
model.AddEquality(expr_count, count);
With the help of #sascha and #Laurent, my solution is this one:
std::vector<BoolVar> is_equal;
is_equal.reserve(my_pairs.size());
for(const auto& [u, v] : my_pairs) {
is_equal.push_back(cp_model.NewBoolVar());
cp_model
.AddEquality(my_vars[u], my_vars[v])
.OnlyEnforceIf(is_equal.back());
cp_model
.AddNotEqual(my_vars[u], my_vars[v])
.OnlyEnforceIf(Not(is_equal.back()));
}
cp_model.AddEquality(LinearExpr::Sum(is_equal), count);
It is essentially the same as #Laurent's in the end, but I save the boolean vars for later use.
For scalars, it looks like I don't need to make a constant; I can just compare directly with the expression.
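For reference, a sketch of the scalar case along the same lines; scalar_count is a hypothetical IntVar, and my_vars, my_scalars, and cp_model are assumed from the question:
std::vector<BoolVar> is_scalar_equal;
is_scalar_equal.reserve(my_scalars.size());
for(size_t i = 0; i < my_scalars.size(); ++i) {
    is_scalar_equal.push_back(cp_model.NewBoolVar());
    // reify my_vars[i] == my_scalars[i] into the boolean, as in the pair version
    cp_model
        .AddEquality(my_vars[i], my_scalars[i])
        .OnlyEnforceIf(is_scalar_equal.back());
    cp_model
        .AddNotEqual(my_vars[i], my_scalars[i])
        .OnlyEnforceIf(Not(is_scalar_equal.back()));
}
cp_model.AddEquality(LinearExpr::Sum(is_scalar_equal), scalar_count);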
Thanks, #Laurent and #sascha. You guys were very helpful.
I'm using the ranges library to help filter data in my classes, like this:
class MyClass
{
public:
MyClass(std::vector<int> v) : vec(v) {}
std::vector<int> getEvens() const
{
auto evens = vec | ranges::views::filter([](int i) { return ! (i % 2); });
return std::vector<int>(evens.begin(), evens.end());
}
private:
std::vector<int> vec;
};
In this case, a new vector is constructed in the getEvens() function. To save on this overhead, I'm wondering if it is possible / advisable to return the range directly from the function?
class MyClass
{
public:
using RangeReturnType = ???;
MyClass(std::vector<int> v) : vec(v) {}
RangeReturnType getEvens() const
{
auto evens = vec | ranges::views::filter([](int i) { return ! (i % 2); });
// ...
return evens;
}
private:
std::vector<int> vec;
};
If it is possible, are there any lifetime considerations that I need to take into account?
I am also interested to know if it is possible / advisable to pass a range in as an argument, or to store it as a member variable. Or is the ranges library more intended for use within the scope of a single function?
This was asked in the OP's comment section, but I think I will respond to it in the answer section:
The Ranges library seems promising, but I'm a little apprehensive about this returning auto.
Remember that even with the addition of auto, C++ is a strongly typed language. In your case, since you are returning evens, the return type will be the same as the type of evens (technically the decayed type of evens, but evens was a value type anyway).
In fact, you probably really don't want to type out the return type manually: std::ranges::filter_view<std::ranges::ref_view<const std::vector<int>>, MyClass::getEvens() const::<decltype([](int i) {return ! (i % 2);})>> (141 characters)
As mentioned by #Caleth in the comments, this wouldn't work anyway, as the type of evens involves a lambda defined inside the function, and the types of two different lambdas are distinct even if they are textually identical, so there is literally no way of spelling out the full return type here.
While there might be debates on whether to use auto in other cases, I believe most people would just use auto here. Plus, your evens was declared with auto too; typing the type out would just make it less readable.
So what are my options if I want to access a subset (for instance even numbers)? Are there any other approaches I should be considering, with or without the Ranges library?
Depending on how you would access the returned data and on the type of the data, you might consider returning std::vector<T*>.
views are really supposed to be viewed from start to end. While you could use views::drop and views::take to limit to a single element, it doesn't provide a subscript operator (yet).
There will also be computational differences. A vector needs to be computed beforehand, whereas views are computed while iterating. So when you do:
for(auto i : myObject.getEvens())
{
std::cout << i;
}
Under the hood, it is basically doing:
for(auto i : myObject.vec)
{
if(!(i % 2)) std::cout << i;
}
Depending on the amount of data and the complexity of the computations, views might be a lot faster or about the same as the vector method. Plus you can easily apply multiple filters on the same range without iterating through the data multiple times.
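For example, a quick sketch of chaining two filters lazily on the same vec:
auto big_evens = vec
    | ranges::views::filter([](int i) { return !(i % 2); })  // keep the evens
    | ranges::views::filter([](int i) { return i > 10; });   // ...that are also > 10
// nothing is computed until big_evens is iterated, and each element is visited only once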
In the end, you can always store the view in a vector:
std::vector<int> vec2(evens.begin(), evens.end());
So my suggestion is: if you have the ranges library, then you should use it.
If not, then vector<T>, vector<T*>, or vector<index>, depending on the size and copyability of T.
There are no restrictions in the standard on how STL components may be used. Of course, there are best practices (e.g., string_view instead of string const&).
In this case, I can foresee no problems with handling the view return type directly. That said, the best practices are yet to be decided on since the standard is so new and no compiler has a complete implementation yet.
You're fine to go with the following, in my opinion:
class MyClass
{
public:
MyClass(std::vector<int> v) : vec(std::move(v)) {}
auto getEvens() const
{
return vec | ranges::views::filter([](int i) { return ! (i % 2); });
}
private:
std::vector<int> vec;
};
As you can see here, a range is just something on which you can call begin and end. Nothing more than that.
For instance, you can use the result of begin(range), which is an iterator, to traverse the range, using the ++ operator to advance it.
In general, looking back at the concept I linked above, you can use a range whenever the context code only requires being able to call begin and end on it.
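As a small sketch (my own example, not from the question), any function template that only iterates will happily accept the filtered view returned by getEvens():
template <typename Range>
int sum_range(Range&& r)              // accepts anything with begin/end
{
    int total = 0;
    for (auto&& v : r) total += v;
    return total;
}
// usage: int s = sum_range(myObject.getEvens());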
Whether this is advisable or enough depends on what you need to do with it. If your intention is to pass evens to a function which expects a std::vector (for instance, it's a function you cannot change, and it calls .push_back on the entity we are talking about), you clearly have to make a std::vector out of filter's output, which I'd do via
auto evens = vec | ranges::views::filter(whatever) | ranges::to_vector;
but if all the function which you pass evens to does is to loop on it, then
return vec | ranges::views::filter(whatever);
is just fine.
As regards lifetime considerations, a view is to a range of values what a pointer is to the pointed-to entity: if the latter is destroyed, the former will be dangling, and making improper use of it will be undefined behavior. This is an erroneous program:
#include <iostream>
#include <range/v3/view/filter.hpp>
#include <string>
using namespace ranges;
using namespace ranges::views;
auto f() {
// a local vector here
std::vector<std::string> vec{"zero","one","two","three","four","five"};
// return a view on the local vector
return vec | filter([](auto){ return true; });
} // vec is gone ---> the view returned is dangling
int main()
{
// the following throws std::bad_alloc for me
for (auto i : f()) {
std::cout << i << std::endl;
}
}
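One way to repair f() is to materialize the view before the local vector goes away, e.g. with range-v3's ranges::to_vector mentioned earlier in this answer; a sketch (the header path may differ between range-v3 versions):
#include <range/v3/range/conversion.hpp> // ranges::to_vector

auto f_fixed() {
    std::vector<std::string> vec{"zero","one","two","three","four","five"};
    // copy the filtered elements into a vector that owns them before vec dies
    return vec | filter([](auto const&){ return true; }) | ranges::to_vector;
} // returns std::vector<std::string>; nothing dangles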
You can use ranges::any_view as a type erasure mechanism for any range or combination of ranges.
ranges::any_view<int> getEvens() const
{
return vec | ranges::views::filter([](int i) { return ! (i % 2); });
}
I cannot see any equivalent of this in the STL ranges library; please edit the answer if you can.
EDIT: The problem with ranges::any_view is that it is very slow and inefficient. See https://github.com/ericniebler/range-v3/issues/714.
It is desirable to declare a function returning a range in a header and define it in a cpp file:
for compilation firewalls (compilation speed)
to stop the language server from going crazy
for better factoring of the code
However, there are complications that make it not advisable:
How do you get the type of a view?
If defining it in a header is fine, use auto
If performance is not an issue, I would recommend ranges::any_view
Otherwise I'd say it is not advisable.
I want to create an MxN array (M particles in N dimensional space) filled with random numbers within an upper and lower boundary. I have a working python code that looks something like this:
# upper_bound/lower_bound are arrays of shape (dim,)
positions = np.random.rand(num_particle,dim)*(upper_bound-lower_bound)+lower_bound
Each row represents a particle, and each column represents a dimension in the problem space. So the upper_bound and lower_bound apply to each column. Now I want to translate the above code to C++, and I have something like this:
#include <iostream>
#include <vector>
#include <random>
#include <algorithm>
#include <ctime>
typedef std::vector<double> vect1d;
std::vector<vect1d> positions;
for (int i=0; i<num_particle; i++){
std::mt19937_64 generator(static_cast<std::mt19937::result_type>(time(0)));
std::uniform_real_distribution<double> distribution(0,1);
vect1d pos(dimension);
std::generate(pos.begin(),pos.end(),distribution(generator));
positions[i] = pos;
}
My problems:
It gives an error regarding the generator, so I'm not sure if I set it up properly. I'm also not sure how to use std::generate. I'm trying it because I've looked at other similar posts and it seems to let me generate more than one random number at a time, so I don't have to run it MxN times for each element. Is this true, and how do I use it correctly?
In Python I can just use vectorization and broadcasting to manipulate the numpy array. What's the most 'vectorized' way to do it in C++?
The above (incorrect) code only creates random numbers between 0 and 1, but how do I incorporate the lower_bound and upper_bound as in the Python version? I understand that I can change the values inside distribution(0,1), but the problem is the limits can be different for each dimension (so each column can have a different valid range), so what's the most efficient way to generate the random numbers, taking into account the range for each dimension?
Thanks
First of all, you're doing more work than you need to with your Python version, just use:
np.random.uniform(lower_bound, upper_bound, size=(num_particle, dim))
In your C++ attempt, the line
std::generate(pos.begin(),pos.end(),distribution(generator));
is incorrect, as the third argument must be a callable, not a value. A reasonable C++ equivalent would be:
#include <algorithm>
#include <cstddef>
#include <ctime>
#include <iostream>
#include <random>
#include <vector>

using RandomVector = std::vector<double>;
using RandomMatrix = std::vector<RandomVector>;
template <typename Generator=std::mt19937_64>
RandomMatrix&
fill_uniform(const double low, const double high, RandomMatrix& result)
{
Generator gen {static_cast<typename Generator::result_type>(time(0))};
std::uniform_real_distribution<double> dist {low, high};
for (auto& col : result) {
std::generate(std::begin(col), std::end(col), [&] () { return dist(gen); });
}
return result;
}
template <typename Generator=std::mt19937_64>
RandomMatrix
generate_uniform(const double low, const double high,
const std::size_t ncols, const std::size_t nrows)
{
RandomMatrix result(ncols, RandomVector(nrows));
return fill_uniform<Generator>(low, high, result);
}
int main()
{
auto m = generate_uniform(2, 11, 2, 3);
for (const auto& col : m) {
for (const auto& v : col) {
std::cout << v << " ";
}
std::cout << '\n';
}
}
You could generalise this to generate arbitrary dimension tensors (like the NumPy version) without too much work.
I'll address them in random order:
3. You have several options - using one distribution per row, created like distribution(row_lower_limit, row_upper_limit). That should be cheap enough not to cause issues. If you want to reuse the same distribution, just do something like row_lower_limit + distribution(generator) * (row_upper_limit - row_lower_limit). The distribution is in both cases U[row_lower_limit, row_upper_limit]. See the sketch after this list.
2. The vectorization comes from the numpy library, not from Python itself; it mostly provides some nice UX. C++ doesn't have a single equivalent of numpy (though there are plenty of libraries in that space - just nothing so universal). You wouldn't be wrong to use two nested for loops. You'd perhaps be better served by declaring an NxM array rather than a vector of vectors, like here.
1. Not sure how to help with this one since we don't know the error. The cplusplus.com reference has an example of how to initialize the generator with a random_device.
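To make point 3 concrete, here is a minimal sketch (function and parameter names are mine) that reuses one generator and one U(0,1) distribution and scales per dimension:
#include <cstddef>
#include <random>
#include <vector>

std::vector<std::vector<double>> random_positions(
    std::size_t num_particle,
    const std::vector<double>& lower_bound,
    const std::vector<double>& upper_bound)
{
    std::mt19937_64 gen{std::random_device{}()};            // seed once, reuse everywhere
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    std::vector<std::vector<double>> positions(
        num_particle, std::vector<double>(lower_bound.size()));
    for (auto& particle : positions)
        for (std::size_t d = 0; d < particle.size(); ++d)
            particle[d] = lower_bound[d]
                          + u01(gen) * (upper_bound[d] - lower_bound[d]);
    return positions;
}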
Let's say I would like to create an unordered set of unordered multisets of unsigned int. For this, I need to create a hash function to calculate a hash of the unordered multiset. In fact, it has to be good for CRC as well.
One obvious solution is to put the items in a vector, sort them, and return a hash of the result. This seems to work, but it is expensive.
Another approach is to xor the values, but obviously if I have one item twice or none the result will be the same - which is not good.
Any ideas how I can implement this more cheaply? I have an application that will be doing this thousands of times for thousands of sets, and relatively big ones.
Since it is a multiset, you would like for the hash value to be the same for identical multisets, whose representation might have the same elements presented, added, or deleted in a different order. You would then like for the hash value to be commutative, easy to update, and change for each change in elements. You would also like for two changes to not readily cancel their effect on the hash.
One operation that meets all but the last criterion is addition. Just sum the elements. To keep the sum bounded, do the sum modulo the size of your hash value. (E.g. modulo 2^64 for a 64-bit hash.) To make sure that inserting or deleting zero values changes the hash, add one to each value first.
A drawback of the sum is that two changes can readily cancel. E.g. replacing 1 3 with 2 2. To address that, you can use the same approach and sum a polynomial of the entries, still retaining commutativity. E.g. instead of summing x+1, you can sum x^2+x+1. Now it is more difficult to contrive sets of changes with the same sum.
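As a minimal sketch of that idea (names are illustrative), sum x^2+x+1 over the elements and let unsigned 64-bit wraparound provide the mod 2^64:
#include <cstdint>
#include <unordered_set>

std::uint64_t multiset_hash(const std::unordered_multiset<unsigned int>& s)
{
    std::uint64_t h = 0;
    for (unsigned int v : s) {
        std::uint64_t x = v;
        h += x * x + x + 1;   // commutative sum, so iteration order is irrelevant
    }
    return h;                 // unsigned wraparound is the "mod 2^64"
}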
Here's a reasonable hash function for std::unordered_multiset<int>. It would be better if the computations were taken mod a large prime, but the idea stands.
#include <iostream>
#include <unordered_set>
namespace std {
template<>
struct hash<unordered_multiset<int>> {
typedef unordered_multiset<int> argument_type;
typedef std::size_t result_type;
const result_type BASE = static_cast<result_type>(0xA67);
result_type log_pow(result_type ex) const {
result_type res = 1;
result_type base = BASE;
while (ex > 0) {
if (ex % 2) {
res = res * base;
}
base *= base;
ex /= 2;
}
return res;
}
result_type operator()(argument_type const & val) const {
result_type h = 0;
for (const int& el : val) {
h += log_pow(el);
}
return h;
}
};
} // namespace std
int main() {
std::unordered_set<std::unordered_multiset<int>> mySet;
std::unordered_multiset<int> set1{1,2,3,4};
std::unordered_multiset<int> set2{1,1,2,2,3,3,4,4};
std::cout << "Hash 1: " << std::hash<std::unordered_multiset<int>>()(set1)
<< std::endl;
std::cout << "Hash 2: " << std::hash<std::unordered_multiset<int>>()(set2)
<< std::endl;
return 0;
}
Output:
Hash 1: 2290886192
Hash 2: 286805088
When the modulus is a prime p, the number of collisions is proportional to 1/p. I'm not sure what the analysis is for powers of two. You can make updates to the hash efficient by adding/subtracting BASE^x when you insert/remove the integer x.
Implement the inner multiset as a value->count hash map.
This lets you avoid the problem of duplicate elements cancelling out via xor: instead of xor-ing each element directly, you construct a new number from the count and the value (e.g. by combining them), and then build the full hash by xor-ing those per-entry numbers.
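A rough sketch of that scheme (the particular mixing step is my own choice, not prescribed by the answer):
#include <cstddef>
#include <functional>
#include <unordered_map>

std::size_t hash_counted(const std::unordered_map<unsigned int, std::size_t>& counts)
{
    std::size_t h = 0;
    for (const auto& kv : counts) {
        // Fold value and count into one number first (31 is an arbitrary mixing
        // constant), then xor it in; xor keeps the result order-independent,
        // while the count makes duplicates visible instead of cancelling.
        std::size_t per_entry = std::hash<unsigned int>{}(kv.first) * 31 + kv.second;
        h ^= per_entry;
    }
    return h;
}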
I have a function, f(a,b), that accepts two inputs. I do not know ahead of time which values of a and b will be used. I'm okay with being a little wasteful on memory (I care about speed). I want to be able to check if the output of f(a,b) has already been delivered, and if so, deliver that output again without re-running through the f(a,b) process.
Trivially easy to do in Python with decorators, but C++ is way over my head here.
I would use a std::map (or maybe an std::unordered_map) whose key is a std::pair, or perhaps use a map of maps.
C++11 improvements are probably helpful in that case. Or maybe some Boost thing.
The poster asks:
I want to be able to check if the output of f(a,b) has already been delivered, and if so, deliver that output again without re-running through the f(a,b) process.
It's pretty easy in C++ using a std::map. The fact that the function has exactly two parameters means that we can use std::pair to describe them.
#include <cstdint>
#include <iostream>
#include <map>
uint64_t real_f(int a, int b) {
std::cout << "*";
// Do something tough:
return (uint64_t)a*b;
}
uint64_t memo_f(int a, int b) {
typedef std::pair<int, int> key;
typedef std::map<key, uint64_t> map;
static map m;
key k(a,b);
map::iterator it = m.find(k);
if(it == m.end()) {
return m[k] = real_f(a, b);
}
return it->second;
}
int main () {
std::cout << memo_f(1, 2) << "\n";
std::cout << memo_f(3, 4) << "\n";
std::cout << memo_f(1, 2) << "\n";
std::cout << memo_f(3, 4) << "\n";
std::cout << memo_f(5, 6) << "\n";
}
The output of the above program is:
*2
*12
2
12
*30
The lines without asterisks represent cached results.
With C++11, you could use tasks and futures. Let f be your function:
int f(int a, int b)
{
// Do hard work.
}
Then you would schedule the function execution, which returns you a handle to the return value. This handle is called a future:
template <typename F>
std::future<typename std::result_of<F()>::type>
schedule(F f)
{
typedef typename std::result_of<F()>::type result_type;
std::packaged_task<result_type()> task(f);
auto future = task.get_future();
tasks_.push_back(std::move(task)); // Queue the task, execute later.
return std::move(future);
}
Then, you could use this mechanism as follows:
auto future = schedule(std::bind(&f, 42, 43)); // Via std::bind.
auto future = schedule([&] { return f(42, 43); }); // Lambda alternative.
if (future.valid())
{
auto x = future.get(); // Blocks if the result of f(a,b) is not yet available.
g(x);
}
Disclaimer: my compiler does not support tasks/futures, so the code may have some rough edges.
The main point about this question is the relative expense in CPU and RAM of calculating f(a,b) versus keeping some sort of lookup table to cache results.
Since an exhaustive table with a 128-bit index is not (yet) feasible, we need to reduce the lookup space to a manageable size - this can't be done without some considerations inside your app:
How big is the really used space of function inputs? Is there a pattern in it?
What about the temporal component? Do you expect repeated calculations to be close to one another or distributed along the timeline?
What about the distribution? Do you assume a tiny part of the index space to consume the majority of function calls?
I would simply start with a fixed-size array of (a, b, f(a,b)) tuples and a linear search (see the sketch after this list). Depending on your pattern as asked above, you might want to:
window-slide it (drop the oldest entry on a cache miss): this is good for localized reoccurrences
have (a,b,f(a,b),count) tuples with the tuple with the smallest count being expelled - this is good for non-localized occurrences
have some key-function determine a position in the cache (this is good for tiny index space usage)
whatever else Knuth or Google might have thought of
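Here is a minimal sketch of that starting point: a fixed-size cache of (a, b, f(a,b)) entries with linear search and the window-slide (drop-oldest) policy from the list above. All names are illustrative, and f stands in for the expensive function:
#include <array>
#include <cstddef>

struct Entry { int a; int b; long result; bool used = false; };

template <std::size_t N>
class SmallCache {
    std::array<Entry, N> entries_{};
    std::size_t next_ = 0;                     // slot to overwrite on a miss
public:
    template <typename F>
    long get(int a, int b, F f) {
        for (const auto& e : entries_)         // linear search over the window
            if (e.used && e.a == a && e.b == b) return e.result;
        long r = f(a, b);                      // miss: compute and remember
        entries_[next_] = Entry{a, b, r, true};
        next_ = (next_ + 1) % N;               // slide the window, dropping the oldest
        return r;
    }
};
// usage: SmallCache<64> cache; long v = cache.get(a, b, expensive_f);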
You might also want to benchmark repeated calculation against the lookup mechanism, if the latter becomes more and more complex: std::map and friends don't come for free, even if they are high-quality implementations.
The easy way is to use std::map. std::unordered_map does not work out of the box, because the standard library provides no std::hash for std::pair, so you would need to supply your own hash (see the sketch at the end). With std::map you can do the following:
#include <map>
#include <utility>

std::map<std::pair<int, int>, int> mp;

int func(int a, int b)
{
    auto it = mp.find({a, b});
    if (it != mp.end()) return it->second;
    int result = 0; // ... compute f(a, b) here ...
    mp[{a, b}] = result;
    return result;
}
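For completeness, a sketch of the std::unordered_map variant mentioned above; the pair hash is an illustrative choice, not part of the original answer:
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <utility>

struct PairHash {
    std::size_t operator()(const std::pair<int, int>& p) const {
        std::size_t h1 = std::hash<int>{}(p.first);
        std::size_t h2 = std::hash<int>{}(p.second);
        return h1 ^ (h2 + 0x9e3779b9u + (h1 << 6) + (h1 >> 2)); // boost-style combine
    }
};

std::unordered_map<std::pair<int, int>, int, PairHash> memo;

int func2(int a, int b)
{
    auto it = memo.find({a, b});
    if (it != memo.end()) return it->second;
    int result = 0; // ... compute f(a, b) here ...
    memo[{a, b}] = result;
    return result;
}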