How to explicitly cast argument to match an expected function parameter? - c++

I am trying to write a generic function to compute an average over a certain range.
template <typename Range, typename Ret, typename Func>
Ret average(Range range, Ret zero, Func extract) {
    Ret sum = zero;
    int numElements = 0;
    for (const auto& elem : range) {
        sum += extract(elem);
        ++numElements;
    }
    if (numElements > 0)
        sum /= numElements;
    return sum;
}
The problem I am having is with the usage of the /= operator, but first let me clarify the arguments of this function:
Range range is any object that defines a range through begin() and end() member functions. I may need to add const& to avoid unnecessary copying.
Ret zero defines the neutral element of the addition used when computing the average. It could be just a scalar, but will work with vectors or matrices too for example.
Func extract is a function (usually given as a lambda function) that converts the elements of the range into Ret values that I average over. In practice I use it as a getter of a specific field in big objects that I iterate over.
I could probably define it as std::function<Ret(decltype(*range.begin()))> or something similar, if C++ didn't have problems deducing types this way.
I assume that Ret provides some /= operator that the above function can work with, but I do not want to require it to take an int specifically.
In my use case, for example, Ret is float, and this gives me an annoying warning:
warning: 'argument': conversion from 'int' to 'float', possible loss of data
So, what are my options to make the above function clean and work with any suitable operator/=?
I tried, for example, to deduce the type of the right argument of the operator and explicitly cast to it:
template <typename Range, typename Ret, typename Func>
Ret average(Range range, Ret zero, Func extract) {
    Ret sum = zero;
    int numElements = 0;
    for (const auto& elem : range) {
        sum += extract(elem);
        ++numElements;
    }
    using F = std::remove_pointer<decltype(&Ret::operator/=)>;
    if (numElements > 0)
        sum /= static_cast<typename boost::function_traits<F>::arg1_type>(numElements);
    return sum;
}
But I get a lot of compile errors, suggesting that I don't know what I am doing. Starts with:
error: 'boost::detail::function_traits_helper<std::remove_pointer<SpecificTypeUsedAsRet &(__cdecl SpecificTypeUsedAsRet::* )(float)> *>': base class undefined
That's probably because boost::function_traits does not work with member functions, just regular ones?
I am also concerned that this solution may not work when:
The operator/= is not given as a member function, but as a regular function with two arguments.
The operator/= is overloaded with respect to its right operand. An int may match only one of the overloads - so there is no ambiguity, but decltype won't know which overload to take.
I would prefer not to use Boost, but to stick to the facilities provided by the newest C++ standards.

You could simply declare Ret numElements = 0; instead of making it an int. If Ret has a /= operator, it probably has a ++ operator too; or you could use numElements += 1 instead.
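A minimal sketch of that change, assuming Ret (float in the use case above) can be constructed from 0, bumped with += 1, and compared against 0:
template <typename Range, typename Ret, typename Func>
Ret average(Range range, Ret zero, Func extract) {
    Ret sum = zero;
    Ret numElements = 0;              // same type as sum, so /= sees no int-to-float conversion
    for (const auto& elem : range) {
        sum += extract(elem);
        numElements += 1;             // or ++numElements, if Ret provides operator++
    }
    if (numElements > 0)
        sum /= numElements;
    return sum;
}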

Related

Multiply variable number of arguments

Say I have a variable amount of arguments which I want to multiply together. The first way I think of is a recursive algorithm:
template<typename Head>
u64 Multiply(Head head) const
{
    return head;
}
template<typename Head, typename... Tail>
u64 Multiply(Head head, Tail... tail) const
{
    return head * Multiply(tail...);
}
But then I saw this trick:
// u32 and u64 are 32 and 64 bit unsigned integers.
template<typename... T>
u64 Multiply(T... args)
{
    u64 res = 1;
    for (const u32& arg : {args...})
        res *= arg;
    return res;
}
The second one appears way nicer to me. Easier to read. However, how does this affect performance? Is anything being copied? What does {args...} do in the first place? Is there a better method?
I have access to C++14.
Edit to be clear: It is about run time multiplication, not compile time.
More to be clear: I do not want to compute integers necessarily (although that is my current application), but the algorithm that I found was specialized for integers.
More: Arguments are of the same type. Algorithms without this restriction would be very interesting but maybe for a different question.
There are multiple questions asked here:
What's the impact on performance? Dunno. You'll need to measure. Depending on the type of the arguments I can imagine that the compiler entirely optimizes things either way, though: it does know the number of arguments and the types.
What is { args... }? Well, it creates an std::initializer_list<T> for the common type of the arguments (assuming there is one). You may want to use the value with std::common_type_t<T...> instead of a fixed type, though.
Is there a better method? There are a couple of approaches although I could imagine that the compiler actually does rather well with this expansion. The alternative which immediately comes to mind is return (args * ... * 1); which, however, requires C++17. If there is at least one argument the * 1 can be omitted: it is there to avoid a compile-time error if there is an empty list of variadic parameters.
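A sketch of that C++17 alternative; the u64 alias is an assumption carried over from the question's comment ("u32 and u64 are 32 and 64 bit unsigned integers"):
#include <cstdint>
using u64 = std::uint64_t;

template<typename... T>
u64 Multiply(T... args)
{
    return (args * ... * 1);  // binary right fold; the "* 1" covers an empty pack
}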
The code
template<typename... T>
u64 Multiply(T... args)
{
    u64 res = 1;
    for (const u32& size : {args...})
        res *= size;
    return res;
}
is a bit mysterious to me :-) Why do we have template parameters of type T while inside the function we use fixed (unsigned integer) types? And the variable name size looks very obscure, because this variable has nothing to do with any kind of size. And using integer types inside is also not a valid assumption if you pass floating point data into the template.
OK, but to answer your question:
The first one can be used with all types you put into the template function. The second one uses fixed (unsigned integer) types, which is not what I expect when I see the declaration of the template itself.
Both versions can be made constexpr, as I learned now :-), and work pretty well for compile time calculation (see the sketch at the end of this answer).
To answer the question from your comment:
{args...}
expands to:
{ 1, 2, 3, 4 }
which is simply an "array" (std::initializer_list) and only works if all elements have the same type.
So having
for (const u32& size : {args...})
simply iterates over the array.
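As a rough illustration of the constexpr remark above, the recursive version can be marked constexpr and then used in a constant expression (u64 is again an assumed alias, and the functions are shown as free functions rather than const member functions):
#include <cstdint>
using u64 = std::uint64_t;

template<typename Head>
constexpr u64 Multiply(Head head)
{
    return head;
}

template<typename Head, typename... Tail>
constexpr u64 Multiply(Head head, Tail... tail)
{
    return head * Multiply(tail...);
}

static_assert(Multiply(2, 3, 4) == 24, "evaluated at compile time");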

C++ Auto Keyword - Float vs Int Trouble

I'm relatively new to C++. I just read about the auto keyword in regards to type deduction. I've tried implementing this in a couple of functions, only to find that it was causing all kinds of issues when working with math operators. I believe what was happening was that my functions started performing integer division when I actually needed float division (variables 'i' and 'avg'). I posted the code using the auto keyword below.
Now when I explicitly declared the variables as floats, the function worked fine.
So is this an example in which using auto would not be preferred? However, I can definitely see that it would help when declaring the iterators.
namespace Probability
{
    /* ExpectedValueDataSet - Calculates the expected value of a data set */
    template <typename T, std::size_t N>
    double ExpectedValueDataSet(const std::array<T, N>& data)
    {
        auto i = 0;
        auto avg = 0;
        for(auto it = data.begin(); it != data.end(); it++)
        {
            i = it - data.begin() + 1;
            avg = ((i-1)/i)*avg + (*it)/i;
        }
        std::cout << avg << " \n";
        return avg;
    }
};
The literal 0 is of type int.
A variable auto avg = 0; therefore has type int.
The literal 0.0 (or e.g. 3.14) has type double, which is what you want.
As a general rule, use auto for a variable declaration where
the type is explicitly specified in the initializer, or
the type is awfully verbose, like some iterator type.
But don't use it without reason. :)
If for e.g. aesthetic reasons you want to keep i as an integer, then rewrite the computation
((i-1)/i)*avg + (*it)/i
to e.g.
((i-1)*avg + *it)/i
to avoid pure integer arithmetic for (i-1)/i.
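Putting the two suggestions together, here is a sketch of the fixed function from the question, with avg seeded as a double and i kept as an integer via the rearranged formula:
#include <array>
#include <cstddef>
#include <iostream>

namespace Probability
{
    /* ExpectedValueDataSet - Calculates the expected value of a data set */
    template <typename T, std::size_t N>
    double ExpectedValueDataSet(const std::array<T, N>& data)
    {
        double avg = 0.0;   // or auto avg = 0.0; either way it is a double
        int i = 0;
        for (auto it = data.begin(); it != data.end(); ++it)
        {
            i = it - data.begin() + 1;
            avg = ((i - 1) * avg + *it) / i;  // no pure integer division anywhere
        }
        std::cout << avg << " \n";
        return avg;
    }
}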

General iterable type with specific element type

I'm trying to write a function for enumerating through a number in a specific base, where the number is stored in some kind of list. Here is an example, taking a std::vector:
void next_value(std::vector<unsigned int> &num, unsigned int base) {
    unsigned int carry = 1;
    for (unsigned int &n : num) {
        n += carry;
        if (n >= base) {
            carry = 1;
            n = 0;
        } else {
            carry = 0;
        }
    }
}
The num argument doesn't necessarily need to be a vector; it can be an array, or actually any type that has std::begin() and std::end() defined for it. Is there a way to express that num can be anything with begin() and end(), but that it must have unsigned int type for its elements?
If you really want to check this, try:
template <class Sequence>
void next_value(Sequence &num, unsigned int base) {
    static_assert(boost::is_same<typename Sequence::value_type, unsigned>::value, "foo");
    // ...
If you're not using C++11 yet, use BOOST_STATIC_ASSERT instead.
If you need to support plain C-style arrays, a bit more work is needed.
On the other hand, @IgorTandetnik correctly points out that you probably do not need to explicitly check at all. The compiler will give you an (ugly) error if you pass a type which is truly unusable.
Writing a generic function with a static_assert is a good idea, because you can give the user a helpful error message rather than "foo".
However there is another approach using C++11:
template <typename Container, typename ValueType>
typename std::enable_if<std::is_same<typename Container::value_type, ValueType>::value, void>::type
next_value(Container& num, ValueType base)
{
    // ...
}
This is a rather cryptic approach if you've never seen this before. It uses "Substitution failure is not an error" (SFINAE for short). If the ValueType doesn't match the Container::value_type, this template does not form a valid function definition and is therefore ignored. The compiler behaves as if there were no such function. That is, the user can't use the function with an invalid combination of Container and ValueType.
Note that I do recommend using the static_assert! If you put a reasonable error message there, the user will thank you a thousand times.
I would not in your case.
Change carry to a bool, use ++ instead of +=, make base a type T, and n an auto&.
Finally, return carry.
Your code now duck-types exactly the requirements.
If you want diagnostics, static assert that the operations make sense with custom error messages.
This lets your code handle unsigned ints, polynomials, bigints, whatever.
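A rough sketch of what that might look like; the early break and the T{} reset are choices of mine, not part of the original answer:
template <class Sequence, class T>
bool next_value(Sequence &num, const T &base) {
    bool carry = true;
    for (auto &n : num) {
        if (!carry)
            break;            // nothing left to propagate
        ++n;                  // ++ instead of += carry
        if (!(n < base)) {    // this digit wrapped past the base
            n = T{};          // reset to the additive identity
        } else {
            carry = false;
        }
    }
    return carry;             // true if the whole number rolled over
}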

changing/adding behavior of stl bitset<>::reference::operator=(int)

I need a bitset with slightly different behavior when assigning variables of integer type to a specific bit. The bit should be set to zero if the assigned integer is smaller than one, and to one otherwise.
As a simple solution I copied the STL bitset, replaced the class name with altbitset, adjusted the namespaces and include guard, and added the following function below reference& operator=(bool __x) in the nested reference class:
template <typename T>
reference& operator=(T i) {
    if (i < 1) return operator=(false);
    return operator=(true);
}
It works as expected.
Question is if there is a better way doing this.
You shouldn't copy a library just to add a new function. Not only that, the new function is wildly unintuitive and could possibly be the source of errors for even just reading the code, let alone writing it.
Before:
bv[n] = -1; // I know a Boolean conversion on -1 will take place
assert(bv[n]); // of course, since -1 as a Boolean is true
After:
bv[n] = -1; // I guess an integer < 1 means false?
assert(bv[n]); // Who changed my bitvector semantics?!
Just write it out so it makes sense in your domain:
bv[n] = (i >= 1);
Remember: simplest doesn't always mean fewest characters, it means clearest to read.
If you do want to extend the functionality of existing types, you should do so with free functions:
template <typename BitSet, typename Integer>
auto assign_bit_integer(BitSet& bits, const std::size_t bit, const Integer integer) ->
    typename std::enable_if<std::is_integral<Integer>::value,
                            typename BitSet::reference>::type
{
    return bits[bit] = (integer >= 1);
}
Giving:
std::bitset<8> bits;
assign_bit_integer(bits, 0, 5);
// ERROR: assign_bit_integer(bits, 0, 5.5);
But for such a small function with no clear "obvious" name that describes what it does concisely (assign_bit_false_if_less_than_one_otherwise_true is verbose, to say the least), just write out the code; it says the same thing anyway.

Why does sizeof operator fail to work inside function template?

I am trying to learn C++ function templates. I am passing an array as a pointer to my function template. In it, I am trying to find the size of the array. Here is the function template that I use.
template<typename T>
T* average(T *arr)
{
    T *ansPtr, ans, sum = 0.0;
    size_t sz = sizeof(arr)/sizeof(arr[0]);
    cout << "\nSz is " << sz << endl;
    for(int i = 0; i < sz; i++)
    {
        sum = sum + arr[i];
    }
    ans = (sum/sz);
    ansPtr = &ans;
    return ansPtr;
}
The cout statement displays the size of arr as 1 even when I am passing the pointer to an array of 5 integers. Now I know this might be a possible duplicate of questions to which I referred earlier but I need a better explanation on this.
The only thing I could come up with is that since templates are invoked at runtime, and sizeof is a compile time operator, the compiler just ignores the line
int sz = sizeof(arr)/sizeof(arr[0]);
since it does not know the exact type of arr until it actually invokes the function.
Is this correct, or am I missing something here? Also, is it reliable to send a pointer to an array to function templates?
T *arr
This is C++ for "arr is a pointer to T". sizeof(arr) obviously means "size of the pointer arr", not "size of the array arr", for obvious reasons. That's the crucial flaw in that plan.
To get the size of an array, the function needs to operate on arrays, obviously not on pointers. As everyone knows (right?) arrays are not pointers.
Furthermore, an average function should return an average value. But T* is a "pointer to T". An average function should not return a pointer to a value. That is not a value.
Having a pointer return type is not the last offense: returning a pointer to a local variable is the worst of all. Why would you want to steal hotel room keys?
template<typename T, std::size_t sz>
T average(T (&arr)[sz])
{
    T ans, sum = 0.0;
    cout << "\nSz is " << sz << endl;
    for(int i = 0; i < sz; i++)
    {
        sum = sum + arr[i];
    }
    ans = (sum/sz);
    return ans;
}
If you want to be able to access the size of a passed parameter, you'd have to make that a template parameter, too:
template<typename T, size_t Len>
T average(const T (&arr)[Len])
{
    T sum = T();
    cout << "\nSz is " << Len << endl;
    for(int i = 0; i < Len; i++)
    {
        sum = sum + arr[i];
    }
    return (sum/Len);
}
You can then omit the sizeof, obviously. And you cannot accidentally pass a dynamically allocated array, which is a good thing. On the downside, the template will get instantiated not only once for every type, but once for every size. If you want to avoid duplicating the bulk of the code, you could use a second templated function which accepts pointer and length and returns the average. That could get called from an inline function.
template<typename T>
T average(const T* arr, size_t len)
{
    T sum = T();
    cout << "\nSz is " << len << endl;
    for(int i = 0; i < len; i++)
    {
        sum = sum + arr[i];
    }
    return (sum/len);
}
template<typename T, size_t Len>
inline T average(const T (&arr)[Len])
{
    return average(arr, Len);
}
Also note that returning the address of a variable which is local to the function is a very bad idea, as it will not outlive the function. So better to return a value and let the compiler take care of optimizing away unnecessary copying.
Arrays decay to pointers when passed as a parameter, so you're effectively getting the size of the pointer. It has nothing to do with templates, it's how the language is designed.
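A minimal illustration of that decay; the sizes in the comments assume a typical 64-bit platform:
#include <iostream>

void f(int *arr)                       // "int arr[]" would mean exactly the same here
{
    std::cout << sizeof(arr) << '\n';  // size of the pointer, e.g. 8
}

int main()
{
    int a[5] = {1, 2, 3, 4, 5};
    std::cout << sizeof(a) << '\n';    // size of the whole array, e.g. 20
    f(a);                              // a decays to int* at the call
}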
Others have pointed out the immediate errors, but IMHO, there are two
important points that they haven't addressed. Both of which I would
consider errors if they occurred in production code:
First, why aren't you using std::vector? For historical reasons, C
style arrays are broken, and generally should be avoided. There are
exceptions, but they mostly involve static initialization of static
variables. You should never pass C style arrays as a function
argument, because they create the sort of problems you have encountered.
(It's possible to write functions which can deal with both C style
arrays and std::vector efficiently. The function should be a
function template, however, which takes two iterators of the template
type.)
The second is why aren't you using the functions in the standard
library? Your function can be written in basically one line:
template <typename ForwardIterator>
typename ForwardIterator::value_type
average( ForwardIterator begin, ForwardIterator end )
{
    return std::accumulate( begin, end,
                            typename ForwardIterator::value_type() )
           / std::distance( begin, end );
}
(This function, of course, isn't reliable for floating point types,
where rounding errors can make the results worthless. Floating point
raises a whole set of additional issues. And it probably isn't really
reliable for the integral types either, because of the risk of overflow.
But these are more advanced issues.)
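A usage sketch of the accumulate-based version above. Note that, as written, it needs the iterator type to expose a nested value_type, so plain pointers obtained from a C style array would require std::iterator_traits instead, and integral value types give integer division:
#include <iostream>
#include <iterator>
#include <numeric>
#include <vector>

template <typename ForwardIterator>
typename ForwardIterator::value_type
average( ForwardIterator begin, ForwardIterator end )
{
    return std::accumulate( begin, end,
                            typename ForwardIterator::value_type() )
           / std::distance( begin, end );
}

int main()
{
    std::vector<int> v{1, 2, 3, 4, 5};
    std::cout << average(v.begin(), v.end()) << '\n';  // prints 3 (integer division)
}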