I wanted to do a couple of sanity tests for a pair of convenience functions that split a 64-bit integer into two 32-bit integers, or do the reverse. The intent is that you don't write the bit shifts and logic ops all over again, with the potential for a typo somewhere. The sanity tests were supposed to make 100% sure that the pair of functions, although pretty trivial, indeed works as intended.
Nothing fancy, really... so as the first thing I added this:
static constexpr auto joinsplit(uint64_t h) noexcept { auto [a,b] = split(h); return join(a,b); }
static_assert(joinsplit(0x1234) == 0x1234);
... which works perfectly well, but is less "exhaustive" than I'd like. Of course I can follow up with another 5 or 6 tests with different patterns, copy-paste to the rescue. But seriously... wouldn't it be nice to have the compiler check a dozen or so values, within a pretty little function? No copy-paste? Now that would be cool.
With a recursive variadic template, this can be done (and it's what I'm using in lack of something better), but it's in my opinion needlessly ugly.
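For reference, the recursive variadic version I'm currently using looks roughly like this (a sketch rather than the exact code, reusing the joinsplit helper from above):

template <uint64_t V>
static constexpr bool test_values() noexcept { return joinsplit(V) == V; }

template <uint64_t V1, uint64_t V2, uint64_t... Rest>
static constexpr bool test_values() noexcept { return joinsplit(V1) == V1 && test_values<V2, Rest...>(); }

static_assert(test_values<1, 2, 3, 0x1234, 0xFFFFFFFF00000000>());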
Given the power of constexpr functions and range-based for, wouldn't it be cool to have something nice and readable like:
constexpr bool test()
{
    for(constexpr auto value : {1,2,3}) // other numbers of course
    {
        constexpr auto [a,b] = split(value);
        static_assert(value == join(a,b));
    }
    return true; // never used
}
static_assert(test()); // invoke test
A big plus of this solution would be that in addition to being much more readable, it would be obvious from the failing static_assert not just that the test failed in general, but also the exact value for which it failed.
This, however, doesn't work for two reasons:
You cannot declare value as constexpr because, as stated by the compiler: "The value of __for_begin is not usable in a constant expression". The reason for that is also explained by the compiler: "note: __for_begin was not declared constexpr". Fair enough, that is a reason, silly as it may be.
A decomposition declaration cannot be declared constexpr (which is promptly followed by a "non-constexpr condition for static_assert" error).
In both cases, I wonder if there is truly a hindrance to allowing these to be constexpr. I understand why it doesn't work (see above!), but the interesting question is: why is it like that?
I acknowledge that declaring value as constexpr is a lie to begin with, since its value obviously is not constant (it's different in each iteration). On the other hand, every value it ever takes is from a compile-time-constant set of values, yet without the constexpr keyword the compiler refuses to treat it as such, i.e. the result of split is non-constexpr and not usable with static_assert, although by all means it really is.
OK, well... I'm probably really asking too much if I want to declare something that has a changing value as constant. Even though, from some point of view, it is constant within each iteration's scope. Somehow... is the language missing a concept here?
I acknowledge that range-based for is, like lambdas, really just a hack that mostly works, and mostly works invisibly, rather than a true language feature -- the mention of __for_begin is a dead giveaway of its implementation. I also acknowledge that it's generally tricky (prohibitively so) to allow the counter in a normal for loop to be constexpr, not only because it's not constant, but because you can in principle have any kind of expression in there, and it truly cannot easily be told in advance what values will be generated in general (not with reasonable effort at compile time, anyway).
On the other hand, given an exact finite sequence of literals (which is as compile-time constant as it gets), the compiler should be able to do a number of iterations, each iteration of the loop with a different, compile-time-constant value (unroll the loop, if you will). Somehow, in a readable (non-recursive-template) manner, such a thing should be possible?
Am I asking too much there?
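As an aside, such per-value "unrolling" can at least be emulated in C++17 by folding over a generic lambda and passing each value as a std::integral_constant, so that it stays usable in constant expressions inside the body. The for_each_value helper below is made up, not something the language provides:

#include <cstdint>
#include <type_traits>

// Made-up helper: calls f once per value, wrapping each value in an
// integral_constant so the lambda body can still form constant expressions.
template <std::uint64_t... Vs, class F>
constexpr void for_each_value(F f)
{
    (f(std::integral_constant<std::uint64_t, Vs>{}), ...); // C++17 comma fold
}

constexpr bool test_all()
{
    for_each_value<1, 2, 3, 0x1234>([](auto v) {
        constexpr auto value = decltype(v)::value;  // constexpr again inside each "iteration"
        static_assert(joinsplit(value) == value);   // the failing value is visible in the instantiation
    });
    return true; // never used
}
static_assert(test_all()); // invoke test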
I acknowledge that a decomposition declaration is not an altogether "trivial" thing. It might for example require calling get on a tuple, which is a class template (that could in principle be anything). But, whatever, get happens to be constexpr (so that's no excuse), and also in my concrete example, an anonymous temporary of an anonymous struct with two members is returned, so public direct member binding (to a constexpr struct) is used.
Ironically, the compiler even does exactly the right thing in the first example (and with recursive templates as well). So apparently it's quite possible. Only, for some reason, not in the second example.
Again, am I asking too much here?
The likely correct answer will be "The standard doesn't provide that".
Apart from that, are there any true, technical reasons why this cannot, could not, or should not work? Is that an oversight, an implementation deficiency, or intentionally forbidden?
I can't answer your theoretical questions ("is the language missing a concept here?", "such a thing should be possible? Am I asking too much there?", "are there any true, technical reasons why this cannot, could not, or should not work? Is that an oversight, an implementation deficiency, or intentionally forbidden?") but, from the practical point of view...
With a recursive variadic template, this can be done (and it's what I'm using in lack of something better), but it's in my opinion needlessly ugly.
I think that variadic templates are the right way and, since you tagged C++17, there is no reason to make it recursive: use folding.
For example:
template <uint64_t ... Is>
static constexpr void test () noexcept
{ static_assert( ((joinsplit(Is) == Is) && ...) ); }
The following is a full compiling example
#include <utility>
#include <cstdint>
static constexpr std::pair<uint32_t, uint32_t> split (uint64_t h) noexcept
{ return { h >> 32 , h }; }
static constexpr uint64_t join (uint32_t h1, uint32_t h2) noexcept
{ return (uint64_t{h1} << 32) | h2; }
static constexpr auto joinsplit (uint64_t h) noexcept
{ auto [a,b] = split(h); return join(a, b); }
template <uint64_t ... Is>
static constexpr void test () noexcept
{ static_assert( ((joinsplit(Is) == Is) && ...) ); }
int main()
{
test<1, 2, 3>();
}
-- EDIT -- Bonus answer
Folding (C++17) is great, but never underestimate the power of the comma operator.
You can obtain the same result (well... almost the same) in C++14 with a helper function and the initialization of an unused array:
template <uint64_t I>
static constexpr void test_helper () noexcept
{ static_assert( joinsplit(I) == I, "!" ); }
template <uint64_t ... Is>
static constexpr void test () noexcept
{
using unused = int[];
(void)unused { 0, (test_helper<Is>(), 0)... };
}
Obviously after a little change in joinsplit() to make it C++14 compliant
static constexpr auto joinsplit (uint64_t h) noexcept
{ auto p = split(h); return join(p.first, p.second); }
Related
It seems that auto was a fairly significant feature added in C++11, following the lead of a lot of newer languages. As with a language like Python, I have not seen any explicit variable declaration (I am not sure if it is possible using Python standards).
Is there a drawback to using auto to declare variables instead of explicitly declaring them?
The question is about drawbacks of auto, so this answer highlights some of those. A drawback of using a programming language feature (in this case, a facility associated with a language keyword) does not mean that feature is unacceptable, nor does it mean that feature should be avoided entirely. It means there are disadvantages along with advantages, so a decision to use auto type deduction over alternatives must consider engineering trade-offs.
When used well, auto has several advantages as well - which is not the subject of the question. The drawbacks result from ease of abuse, and from increased potential for code to behave in unintended or unexpected ways.
The main drawback is that, by using auto, you don't necessarily know the type of object being created. There are also occasions where the programmer might expect the compiler to deduce one type, but the compiler adamantly deduces another.
Given a declaration like
auto result = CallSomeFunction(x,y,z);
you don't necessarily have knowledge of what type result is. It might be an int. It might be a pointer. It might be something else. All of those support different operations. You can also dramatically change the code by a minor change like
auto result = CallSomeFunction(a,y,z);
because, depending on what overloads exist for CallSomeFunction(), the type of result might be completely different - and subsequent code may therefore behave completely differently than intended. You might suddenly trigger error messages in later code (e.g. subsequently trying to dereference an int, or trying to change something which is now const). The more sinister change is where your change sails past the compiler, but subsequent code behaves in different and unknown - possibly buggy - ways. For example (as noted by sashoalm in comments), if the deduced type of a variable changes from an integral type to a floating-point type, subsequent code may be unexpectedly and silently affected by loss of precision.
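To make that concrete, here is a made-up pair of overloads (purely illustrative) where a one-character change at the call site silently changes the deduced type and the behaviour of the arithmetic that follows:

int    CallSomeFunction(int x,  int y, int z) { return x + y + z; }
double CallSomeFunction(long a, int y, int z) { return (a + y + z) / 3.0; }

void demo(int x, long a, int y, int z)
{
    auto r1 = CallSomeFunction(x, y, z);  // deduced as int
    auto r2 = CallSomeFunction(a, y, z);  // deduced as double

    auto h1 = r1 / 2;                     // integer division, truncates
    auto h2 = r2 / 2;                     // floating-point division: silently different result type and value
}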
Not having explicit knowledge of the type of some variables therefore makes it harder to rigorously justify a claim that the code works as intended. This means more effort to justify claims of "fit for purpose" in high-criticality (e.g. safety-critical or mission-critical) domains.
The other, more common drawback, is the temptation for a programmer to use auto as a blunt instrument to force code to compile, rather than thinking about what the code is doing, and working to get it right.
This isn't a drawback of auto in a principled way exactly, but in practical terms it seems to be an issue for some. Basically, some people either: a) treat auto as a savior for types and shut their brain off when using it, or b) forget that auto always deduces to value types. This causes people to do things like this:
auto x = my_obj.method_that_returns_reference();
Oops, we just deep copied some object. It's often either a bug or a performance fail. Then, you can swing the other way too:
const auto& stuff = *func_that_returns_unique_ptr();
Now you get a dangling reference. These problems aren't caused by auto at all, so I don't consider them legitimate arguments against it. But it does seem like auto makes these issues more common (from my personal experience), for the reasons I listed at the beginning.
I think given time people will adjust, and understand the division of labor: auto deduces the underlying type, but you still want to think about reference-ness and const-ness. But it's taking a bit of time.
Other answers are mentioning drawbacks like "you don't really know what the type of a variable is." I'd say that this is largely related to sloppy naming convention in code. If your interfaces are clearly-named, you shouldn't need to care what the exact type is. Sure, auto result = callSomeFunction(a, b); doesn't tell you much. But auto valid = isValid(xmlFile, schema); tells you enough to use valid without having to care what its exact type is. After all, with just if (callSomeFunction(a, b)), you wouldn't know the type either. The same with any other subexpression temporary objects. So I don't consider this a real drawback of auto.
I'd say its primary drawback is that sometimes, the exact return type is not what you want to work with. In effect, sometimes the actual return type differs from the "logical" return type as an implementation/optimisation detail. Expression templates are a prime example. Let's say we have this:
SomeType operator* (const Matrix &lhs, const Vector &rhs);
Logically, we would expect SomeType to be Vector, and we definitely want to treat it as such in our code. However, it is possible that for optimisation purposes, the algebra library we're using implements expression templates, and the actual return type is this:
MultExpression<Matrix, Vector> operator* (const Matrix &lhs, const Vector &rhs);
Now, the problem is that MultExpression<Matrix, Vector> will in all likelihood store a const Matrix& and const Vector& internally; it expects that it will convert to a Vector before the end of its full-expression. If we have this code, all is well:
extern Matrix a, b, c;
extern Vector v;
void compute()
{
    Vector res = a * (b * (c * v));
    // do something with res
}
However, if we had used auto here, we could get in trouble:
void compute()
{
    auto res = a * (b * (c * v));
    // Oops! Now `res` is referring to temporaries (such as (c * v)) which no longer exist
}
It makes your code a little harder, or more tedious, to read.
Imagine something like this:
auto output = doSomethingWithData(variables);
Now, to figure out the type of output, you'd have to track down the signature of the doSomethingWithData function.
One of the drawbacks is that sometimes you can't declare a const_iterator with auto. You will get an ordinary (non-const) iterator in this example of code, taken from this question:
map<string,int> usa;
//...init usa
auto city_it = usa.find("New York");
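A common workaround, shown here only as a sketch: do the lookup through a const reference (or std::as_const in C++17), so the deduced type is a const_iterator.

#include <map>
#include <string>

void lookup(std::map<std::string, int>& usa)
{
    const auto& cusa = usa;                 // view the map through a const reference
    auto city_it = cusa.find("New York");   // deduced as std::map<std::string, int>::const_iterator
    if (city_it != cusa.end()) { /* read-only access via city_it->second */ }
}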
Like this developer, I hate auto. Or rather, I hate how people misuse auto.
I'm of the (strong) opinion that auto is for helping you write generic code, not for reducing typing.
C++ is a language whose goal is to let you write robust code, not to minimize development time.
This is fairly obvious from many features of C++, but unfortunately a few of the newer ones, like auto, reduce typing and mislead people into thinking they should start being lazy with typing.
In pre-auto days, people used typedefs, which was great because typedef allowed the designer of the library to help you figure out what the return type should be, so that their library works as expected. When you use auto, you take away that control from the class's designer and instead ask the compiler to figure out what the type should be, which removes one of the most powerful C++ tools from the toolbox and risks breaking their code.
Generally, if you use auto, it should be because your code works for any reasonable type, not because you're just too lazy to write down the type that it should work with.
If you use auto as a tool to help laziness, then what happens is that you eventually start introducing subtle bugs in your program, usually caused by implicit conversions that did not happen because you used auto.
Unfortunately, these bugs are difficult to illustrate in a short example here, because their brevity makes them less convincing than the actual examples that come up in a user project -- however, they occur easily in template-heavy code that expects certain implicit conversions to take place.
If you want an example, there is one here. A little note, though: before being tempted to jump and criticize the code: keep in mind that many well-known and mature libraries have been developed around such implicit conversions, and they are there because they solve problems that can be difficult if not impossible to solve otherwise. Try to figure out a better solution before criticizing them.
auto does not have drawbacks per se, and I advocate (hand-wavily) using it everywhere in new code. It allows your code to consistently type-check, and consistently avoid silent slicing. (If B derives from A and a function returning A suddenly returns B, then auto behaves as expected and stores its return value.)
However, pre-C++11 legacy code may rely on implicit conversions induced by the use of explicitly-typed variables. Changing an explicitly-typed variable to auto might change code behaviour, so you'd better be cautious.
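A made-up illustration of the kind of legacy pattern meant here:

long long GetOffset();   // hypothetical legacy API

void legacy_code()
{
    int  offset_old = GetOffset();  // pre-C++11 style: implicit narrowing conversion that old code relied on
    auto offset_new = GetOffset();  // after a mechanical switch to auto: now long long, downstream code may behave differently
}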
The auto keyword simply deduces the type from the initializer. Therefore, it is not equivalent to a Python variable, e.g.
# Python
a
a = 10 # OK
a = "10" # OK
a = ClassA() # OK
// C++
auto a; // Unable to deduce variable a
auto a = 10; // OK
a = "10"; // Value of const char* can't be assigned to int
a = ClassA{}; // Value of ClassA can't be assigned to int
a = 10.0; // OK, implicit casting warning
Since auto is deduced during compilation, it won't have any drawback at runtime whatsoever.
What no one has mentioned here so far, but which in itself is worth an answer, if you ask me.
Since (even if everyone should be aware that C != C++) code written in C can easily be designed to provide a base for C++ code, and can therefore without too much effort be designed to be C++ compatible, this could be a design requirement.
I know about some rules where well-defined constructs in C are invalid in C++, and vice versa. But those would simply result in broken executables, and the known UB clause applies, which most of the time is noticed through strange behaviour resulting in crashes or whatever (or it may even stay undetected, but that doesn't matter here).
But auto is the first time¹ this changes!
Imagine you used auto as a storage-class specifier before and transferred the code. It would not even necessarily (depending on the way it was used) "break"; it actually could silently change the behaviour of the program.
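One concrete example of such a silent change (my own illustration, reading the same line once as C89 and once as C++11):

auto x = 3.5;
// C89:   'auto' is a storage-class specifier and the type defaults to int,
//        so x is an int with value 3 (the initializer is truncated).
// C++11: 'auto' is a type placeholder, so x is a double with value 3.5.
// Same line of source, silently different program behaviour.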
That's something one should keep in mind.
¹ At least the first time I'm aware of.
As I described in this answer, auto can sometimes result in funky situations you didn't intend.
You have to explicitly say auto& to get a reference type, while plain auto can give you a pointer type. This can result in confusion from omitting the specifier altogether, resulting in a copy of the referenced object instead of an actual reference.
One reason that I can think of is that you lose the opportunity to coerce the type that is returned. If your function or method returned a 64-bit long, and you only wanted a 32-bit unsigned int, then you lose the opportunity to control that.
I think auto is good when used in a localized context, where the reader can easily and obviously deduce its type, or when it is well documented with a comment stating its type or a name that implies the actual type. Those who don't understand how it works might take it the wrong way, like using it instead of a template or similar. Here are some good and bad use cases in my opinion.
void test (const int & a)
{
    // b is not const
    // b is not a reference
    auto b = a;
    // b's type is decided by the compiler based on the type of a
    // a is int
}
Good Uses
Iterators
std::vector<boost::tuple<ClassWithLongName1,std::vector<ClassWithLongName2>,int>> v;
..
std::vector<boost::tuple<ClassWithLongName1,std::vector<ClassWithLongName2>,int>>::iterator it = v.begin();
// VS
auto vi = v.begin();
Function Pointers
int test (ClassWithLongName1 a, ClassWithLongName2 b, int c)
{
..
}
..
int (*fp)(ClassWithLongName1, ClassWithLongName2, int) = test;
// VS
auto *f = test;
Bad Uses
Data Flow
auto input = "";
..
auto output = test(input);
Function Signature
auto test (auto a, auto b, auto c)
{
..
}
Trivial Cases
for(auto i = 0; i < 100; i++)
{
..
}
Another irritating example:
for (auto i = 0; i < s.size(); ++i)
generates a warning (comparison between signed and unsigned integer expressions [-Wsign-compare]), because i is a signed int. To avoid this you need to write e.g.
for (auto i = 0U; i < s.size(); ++i)
or perhaps better:
for (auto i = 0ULL; i < s.size(); ++i)
I'm surprised nobody has mentioned this, but suppose you are calculating the factorial of something:
#include <iostream>
using namespace std;
int main() {
    auto n = 40;
    auto factorial = 1;
    for (int i = 1; i <= n; ++i)
    {
        factorial *= i;
    }
    cout << "Factorial of " << n << " = " << factorial << endl;
    cout << "Size of factorial: " << sizeof(factorial) << endl;
    return 0;
}
This code will output this:
Factorial of 40 = 0
Size of factorial: 4
That was definitely not the expected result. That happened because auto deduced the type of the variable factorial as int, since it was initialized with 1.
I've searched this question here (on SO), and as far as I can tell, all the existing questions assume you already know what a compile-time function is, but it is almost impossible for a beginner to know what that means, because resources explaining it are quite rare.
I have found a short Wikipedia article which shows how to write incomprehensible code using a never-before-seen use of enums in C++, and a video which is about the future of it, but which explains very little about what it actually is.
It seems to me that there are two ways to write a compile-time function in C++:
constexpr
template<>
I've been through a short introduction of both of them, but I have no idea how they pop up here.
Can anyone explain compile-time functions with a sufficiently good example, one that encompasses most of their relevant features?
In C++, as you mentioned, there are two ways of evaluating code at compile time: constexpr functions and template metaprogramming.
There are a few differences between those solutions. The template option is older and therefore supported by a wider range of compilers. Additionally, templates are guaranteed to be evaluated at compile time, while constexpr is somewhat like inline: it only suggests to the compiler that it is possible to do the work while compiling. For templates, the arguments are usually passed via the template parameter list, while constexpr functions take arguments as regular functions (which they actually are). Constexpr functions have the advantage that they can also be called as regular functions at runtime.
Now the similarities: it must be possible for their arguments to be evaluated at compile time, so they must be either literals or the results of other compile-time computations.
Having said all that, let's look at a compile-time max function:
template<int a, int b>
struct max_template {
    static constexpr int value = a > b ? a : b;
};

constexpr int max_fun(int a, int b) {
    return a > b ? a : b;
}

int main() {
    int x = 2;
    int y = 3;

    int foo = max_fun(3, 2);             // can be evaluated at compile time
    int bar = max_template<3, 2>::value; // is surely evaluated at compile time

    // won't compile: x and y are not compile-time constants
    // int bar2 = max_template<x, y>::value;

    int baz = max_fun(x, y);             // will be evaluated at runtime
    return 0;
}
A "compile time function" as you have seen the term used is not a C++ construct, it's just the idea of computing stuff (hence, function) at compile-time (as opposed to computing at runtime or via a separate build tool outside the compiler). C++ makes this possible in several ways, of which you have found two:
Templates can indeed be used to compute arbitrary stuff, a set of techniques called "template metaprogramming". That's mostly by accident as they weren't designed for this purpose at all, hence the crazy syntax and struggles with old compilers. But in C++03 and before, that's all we had.
constexpr has been added in C++11 after seeing the need for compile-time calculations, and brings them back into somewhat saner territory. Its toolbelt has been expanding ever since, allowing more and more normal-looking code to be run at compile-time by just tacking a constexpr in the right place.
One could also mention macro metaprogramming, of which Boost.Preprocessor is a good example. But it's even more wonky and abhorrently arcane than old-school template metaprogramming, so you probably don't want to use it if you have a choice.
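To make the constexpr point above concrete, here is a minimal sketch (C++14 rules assumed, since loops inside constexpr bodies need them):

// Ordinary-looking code, usable at compile time simply because it is constexpr.
constexpr int factorial(int n)
{
    int result = 1;
    for (int i = 2; i <= n; ++i)
        result *= i;
    return result;
}

static_assert(factorial(5) == 120, "evaluated entirely at compile time");

int runtime_value(int n) { return factorial(n); }  // the very same function also works with run-time arguments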
Suppose, for the sake of argument, that I have the following private constexpr function in a class:
static constexpr uint16_t square_it(uint16_t x)
{
return std::pow(x, 2);
}
Then I want to construct a static constant array of these values for the integers up to 255, in the same section of the same class, using the above constexpr function:
static const uint16_t array_of_squares[256] =
{
//something
};
I'd like the array to be constructed at compile time, not at runtime, if possible. I think the first problem is that using expressions like std::pow in a constexpr function is not valid ISO C++ (though maybe allowed by arm-gcc?), as it can return a domain error. The actual expression I want to use is a somewhat more complicated function involving std::exp.
Note that I don't have much of the std library available as I'm compiling for a small microprocessor, the Cortex M4.
Is there a more appropriate way to do this, say using preprocessor macros? I'd very much like to avoid using something like an external Python script to calculate the table each time it needs to be modified during development, and then pasting it in.
The problem, as you say, is that C standard library functions are in general not marked constexpr.
The best workaround here, if you need to use std::exp, is to write your own implementation that can be run at compile-time. If it's meant to be done at compile-time, then optimizing it probably isn't necessary, it only needs to be accurate and moderately efficient.
Someone asked a question about how to do that here a long time ago. You could reuse the idea from there and rewrite it as a constexpr function in C++11, although you'd have to refactor it to avoid the for loop. In C++14, less refactoring would be required.
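For instance, a minimal C++14 sketch of such a hand-rolled version (a truncated Taylor series; the name cexp is made up, and more terms or range reduction would be needed for accuracy over a wider range of arguments):

constexpr double cexp(double x)
{
    double term = 1.0;  // holds x^k / k!
    double sum  = 1.0;
    for (int k = 1; k < 20; ++k)
    {
        term *= x / k;
        sum  += term;
    }
    return sum;
}

static_assert(cexp(0.0) == 1.0, "sanity check, evaluated at compile time");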
You could also try doing it strictly via templates, but it would be more painful, and double cannot be a template parameter so it would be more complicated.
How about something like this?
#include <cstddef>
#include <cstdint>
#include <utility>   // std::index_sequence, std::make_index_sequence

constexpr uint16_t square_it(uint16_t v) { return v*v; }

template <std::size_t N, class = std::make_index_sequence<N>>
struct gen_table;

template <std::size_t N, std::size_t... Is>
struct gen_table<N, std::index_sequence<Is...>> {
    static constexpr uint16_t values[N] = {square_it(Is)...};  // needs constexpr for the in-class initializer
};

// (before C++17, an out-of-line definition of values may also be needed for odr-use)
constexpr auto&& array_of_squares = gen_table<256>::values;
I have no idea whether that microprocessor supports this sort of operation. It may not have make_index_sequence in your standard library (though you can find implementations on SO), and maybe that template instantiation will take too much memory. But at least it's something that works somewhere.
I'm quite confused by the new constexpr keyword of C++11. I would like to know where to use constexpr and where to use template metaprogramming when I write compile-time functions (especially maths functions). For example, if we take an integer pow function:
// 1:
template <int N> inline double tpow(double x)
{
    return x * tpow<N-1>(x);
}
template <> inline double tpow<0>(double x)
{
    return 1.0;
}

// 2:
constexpr double cpow(double x, int N)
{
    return (N > 0) ? (x * cpow(x, N-1)) : (1.0);
}

// 3:
template <int N> constexpr double tcpow(double x)
{
    return x * tcpow<N-1>(x);
}
template <> constexpr double tcpow<0>(double x)
{
    return 1.0;
}
Are the 2nd and 3rd functions equivalent?
What is the best solution? Do they produce the same result:
if x is known at compile-time
if x is not known at compile-time
When should I use constexpr and when should I use template metaprogramming?
EDIT 1: code modified to include specializations for the templates
I probably shouldn't be answering a template metaprogramming question this late. But, here I go.
Firstly, constexpr isn't implemented in Visual Studio 2012. If you want to develop for Windows, forget about it. I know, it sucks; I hate Microsoft for not including it.
With that out of the way, there are lots of things you can declare as constant, but they aren't really "constant" in the sense of "you can work with them at compile time." For instance:
const int foo[5] = { 2, 5, 1, 9, 4 };
const int bar = foo[foo[2]]; // Fail!
You'd think you could read from that at compile time, right? Nope. But you can if you make it a constexpr.
constexpr int foo[5] = { 2, 5, 1, 9, 4 };
constexpr int bar = foo[foo[2]]; // Woohoo!
constexpr values are really good for "constant propagation" optimization. What that means is that if you have a variable X that is declared at compile time based on some condition (perhaps metaprogramming), and it is a constexpr, then the compiler knows it can "safely" use it when optimizing to, say, remove instructions like a = (X * y); and replace them with a = 0; if X evaluated to 0 (and other conditions are met).
Obviously this is great because for many mathematical functions, constant propagation can give you an easy (to use) premature optimization.
Their main use, other than rather esoteric things (such as enabling me to write a compile-time byte-code interpreter a lot more easily), is being able to make "functions" or classes that can be called and used both at compile time and at runtime.
Basically they just sort of fill a hole in C++03 and help with optimization by the compiler.
So which of your 3 is "best"?
Number 2 can be called at run-time, whereas the others are compile-time only. That's pretty sweet.
There's a bit more to it. Wikipedia gives you a very basic summary of what constexpr allows, but template metaprogramming can be complicated, and constexpr makes parts of it a lot easier. I wish I had a clear example for you other than, say, reading from an array.
A good mathematical example, I suppose, would be if you wanted to implement a user-defined complex number class. It would be an order of magnitude more complex to code that with only template metaprogramming and no constexpr.
So when should you not use constexpr? Honestly, constexpr is basically "const, except MORE CONST." You can generally use it anywhere you'd use const, with a few caveats, such as the fact that a constexpr function called at runtime with non-constant input behaves like an ordinary (non-constant) function.
Um. OK, that's all for now. I'm too overtired to say more. I hope I was helpful; feel free to downvote me if I wasn't, and I'll delete this.
The 1st and 3rd were incorrect as originally posted. The compiler will try to instantiate tpow<N-1> before it evaluates (N>0) ?, and you will get infinite template recursion. You need a specialisation for N==1 (or N==0) to make it work. The 2nd will work for x known at compile time and at run time.
Added after your specialization-for-N==0 edit: now all the functions will work for compile-time or run-time x. The 1st will always return a non-constexpr value. The 2nd and 3rd will return a constexpr value if x and N are constexpr. The 2nd even works if N is not constexpr; the others need a constexpr N (so the 2nd and 3rd are not equivalent).
constexpr is used in two cases. When you write int N = 10;, the value of N is known at compile time, but it is not constexpr and cannot be used, for example, as a template argument. The constexpr keyword explicitly tells the compiler that N is safe to use as a compile-time value.
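A minimal illustration of that first use (my own example):

int           n1 = 10;   // value is visible in the source, but n1 is not a constant expression
constexpr int n2 = 10;   // explicitly usable as a compile-time value

template <int I> struct Array { int data[I]; };

// Array<n1> a1;         // error: n1 cannot be used as a template argument
Array<n2> a2;            // fine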
The second use is constexpr functions. They use a subset of C++ to conditionally produce constexpr values and can dramatically simplify equivalent template functions. One drawback of constexpr functions is that you have no guaranteed compile-time evaluation -- the compiler can choose to do the evaluation at run time. With a templated implementation you are guaranteed compile-time evaluation.