Related
It seems that auto was a fairly significant feature added in C++11, following the lead of many newer languages. In a language like Python, for instance, I have not seen any explicit variable declaration (I am not sure whether it is even possible in standard Python).
Is there a drawback to using auto to declare variables instead of explicitly declaring them?
The question is about drawbacks of auto, so this answer highlights some of those. A drawback of using a programming language feature (in this case, a facility associated with a language keyword) does not mean that feature is unacceptable, nor does it mean that feature should be avoided entirely. It means there are disadvantages along with advantages, so a decision to use auto type deduction over alternatives must consider engineering trade-offs.
When used well, auto has several advantages as well - which is not the subject of the question. The drawbacks result from ease of abuse, and from increased potential for code to behave in unintended or unexpected ways.
The main drawback is that, by using auto, you don't necessarily know the type of object being created. There are also occasions where the programmer might expect the compiler to deduce one type, but the compiler adamantly deduces another.
Given a declaration like
auto result = CallSomeFunction(x,y,z);
you don't necessarily have knowledge of what type result is. It might be an int. It might be a pointer. It might be something else. All of those support different operations. You can also dramatically change the code by a minor change like
auto result = CallSomeFunction(a,y,z);
because, depending on what overloads exist for CallSomeFunction(), the type of result might be completely different - and subsequent code may therefore behave completely differently than intended. You might suddenly trigger error messages in later code (e.g. subsequently trying to dereference an int, or trying to change something which is now const). The more sinister change is one that sails past the compiler, but where subsequent code behaves in different and unknown - possibly buggy - ways. For example (as noted by sashoalm in comments), if the deduced type of a variable changes from an integral type to a floating point type, subsequent code can be unexpectedly and silently affected by loss of precision.
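As a minimal sketch of that failure mode (CallSomeFunction and its overloads are hypothetical, not from the question), note how a one-character change to an argument silently switches the deduced type:

#include <cstdint>

// Hypothetical overload set: which overload is picked decides what auto deduces.
std::int64_t CallSomeFunction(int x, int y, int z)    { return std::int64_t{x} * y * z; }
double       CallSomeFunction(double a, int y, int z) { return a * y * z; }

void demo(int x, double a, int y, int z)
{
    auto r1 = CallSomeFunction(x, y, z); // deduced as std::int64_t
    auto r2 = CallSomeFunction(a, y, z); // deduced as double: later arithmetic,
                                         // comparisons and overload choices may now
                                         // silently lose integer precision
    (void)r1; (void)r2;
}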
Not having explicit knowledge of the type of some variables therefore makes it harder to rigorously justify a claim that the code works as intended. This means more effort to justify claims of "fit for purpose" in high-criticality (e.g. safety-critical or mission-critical) domains.
The other, more common drawback, is the temptation for a programmer to use auto as a blunt instrument to force code to compile, rather than thinking about what the code is doing, and working to get it right.
This isn't a drawback of auto in a principled way exactly, but in practical terms it seems to be an issue for some. Basically, some people either: a) treat auto as a savior for types and switch their brain off when using it, or b) forget that plain auto always deduces a value (non-reference) type. This causes people to do things like this:
auto x = my_obj.method_that_returns_reference();
Oops, we just deep copied some object. It's often either a bug or a performance fail. Then, you can swing the other way too:
const auto& stuff = *func_that_returns_unique_ptr();
Now you get a dangling reference. These problems aren't caused by auto at all, so I don't consider them legitimate arguments against it. But it does seem like auto makes these issues more common (from my personal experience), for the reasons I listed at the beginning.
I think given time people will adjust, and understand the division of labor: auto deduces the underlying type, but you still want to think about reference-ness and const-ness. But it's taking a bit of time.
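A minimal sketch of that division of labour, using a hypothetical Widget type and make_widget factory (not from the question; C++14 for std::make_unique):

#include <memory>
#include <string>

struct Widget {
    std::string name_;
    const std::string& name() const { return name_; }
};

std::unique_ptr<Widget> make_widget() { return std::make_unique<Widget>(); }

void demo(Widget& w)
{
    auto        a = w.name();        // std::string: a deep copy of the name
    const auto& b = w.name();        // const std::string&: no copy, and w outlives b

    auto        c = *make_widget();  // safe: copies the Widget before its owner is destroyed
    const auto& d = *make_widget();  // dangling: the temporary unique_ptr (and the Widget
                                     // it owns) is destroyed at the end of this statement
    (void)a; (void)b; (void)c; (void)d;
}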
Other answers mention drawbacks like "you don't really know what the type of a variable is." I'd say that this is largely related to sloppy naming conventions in code. If your interfaces are clearly named, you shouldn't need to care what the exact type is. Sure, auto result = callSomeFunction(a, b); doesn't tell you much. But auto valid = isValid(xmlFile, schema); tells you enough to use valid without having to care what its exact type is. After all, with just if (callSomeFunction(a, b)), you wouldn't know the type either. The same goes for any other subexpression temporaries. So I don't consider this a real drawback of auto.
I'd say its primary drawback is that sometimes, the exact return type is not what you want to work with. In effect, sometimes the actual return type differs from the "logical" return type as an implementation/optimisation detail. Expression templates are a prime example. Let's say we have this:
SomeType operator* (const Matrix &lhs, const Vector &rhs);
Logically, we would expect SomeType to be Vector, and we definitely want to treat it as such in our code. However, it is possible that for optimisation purposes, the algebra library we're using implements expression templates, and the actual return type is this:
MultExpression<Matrix, Vector> operator* (const Matrix &lhs, const Vector &rhs);
Now, the problem is that MultExpression<Matrix, Vector> will in all likelihood store a const Matrix& and const Vector& internally; it expects that it will convert to a Vector before the end of its full-expression. If we have this code, all is well:
extern Matrix a, b, c;
extern Vector v;
void compute()
{
Vector res = a * (b * (c * v));
// do something with res
}
However, if we had used auto here, we could get in trouble:
void compute()
{
auto res = a * (b * (c * v));
// Oops! Now `res` is referring to temporaries (such as (c * v)) which no longer exist
}
It can make your code a little harder, or more tedious, to read.
Imagine something like that:
auto output = doSomethingWithData(variables);
Now, to figure out the type of output, you'd have to track down the signature of the doSomethingWithData function.
One of the drawbacks is that sometimes you can't declare a const_iterator with auto. You will get an ordinary (non-const) iterator in this example of code taken from this question:
map<string,int> usa;
//...init usa
auto city_it = usa.find("New York");
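One possible workaround, assuming C++17 for std::as_const: calling find through a const view of the map yields a const_iterator.

#include <map>
#include <string>
#include <utility>   // std::as_const (C++17)

int main()
{
    std::map<std::string, int> usa;
    // ...init usa

    auto city_it  = usa.find("New York");                 // std::map<...>::iterator
    auto ccity_it = std::as_const(usa).find("New York");  // std::map<...>::const_iterator
    (void)city_it; (void)ccity_it;
}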
Like this developer, I hate auto. Or rather, I hate how people misuse auto.
I'm of the (strong) opinion that auto is for helping you write generic code, not for reducing typing.
C++ is a language whose goal is to let you write robust code, not to minimize development time.
This is fairly obvious from many features of C++, but unfortunately a few of the newer ones like auto that reduce typing mislead people into thinking they should start being lazy with typing.
In pre-auto days, people used typedefs, which was great because a typedef allowed the designer of a library to help you figure out what the return type should be, so that their library works as expected. When you use auto, you take that control away from the class's designer and instead ask the compiler to figure out what the type should be, which removes one of the most powerful C++ tools from the toolbox and risks breaking their code.
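As a small sketch of that point (the Connection library here is hypothetical, not from the answer):

#include <memory>

// The library designer decides what callers should hold on to:
class Connection {
public:
    using handle = std::shared_ptr<Connection>;        // the designer's intended caller-facing type
    static handle open() { return std::make_shared<Connection>(); }
};

Connection::handle c1 = Connection::open();  // the intent stays with the library designer
auto               c2 = Connection::open();  // the intent now tracks whatever open() returns today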
Generally, if you use auto, it should be because your code works for any reasonable type, not because you're just too lazy to write down the type that it should work with.
If you use auto as a tool to help laziness, then what happens is that you eventually start introducing subtle bugs in your program, usually caused by implicit conversions that did not happen because you used auto.
Unfortunately, these bugs are difficult to illustrate in a short example here, because their brevity makes them less convincing than the actual examples that come up in a user project -- however, they occur easily in template-heavy code that expects certain implicit conversions to take place.
If you want an example, there is one here. A little note, though, before being tempted to jump in and criticize the code: keep in mind that many well-known and mature libraries have been developed around such implicit conversions, and they are there because they solve problems that can be difficult, if not impossible, to solve otherwise. Try to figure out a better solution before criticizing them.
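A commonly cited illustration of this class of bug (not the example linked above) is a proxy type that relies on an implicit conversion which auto suppresses:

#include <vector>

std::vector<bool> read_flags() { return {true, false, true}; }  // hypothetical data source

void demo()
{
    bool b = read_flags()[0];  // the proxy converts implicitly to bool; b is a plain value
    auto p = read_flags()[0];  // p is std::vector<bool>::reference, a proxy tied to a
                               // temporary vector destroyed at the end of this statement:
                               // any later use of p is undefined behaviour
    (void)b; (void)p;
}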
auto does not have drawbacks per se, and I advocate (hand-wavily) using it everywhere in new code. It allows your code to consistently type-check, and consistently avoids silent slicing. (If B derives from A and a function returning A suddenly returns B, then auto behaves as expected when storing its return value.)
However, pre-C++11 legacy code may rely on implicit conversions induced by the use of explicitly typed variables. Changing an explicitly typed variable to auto might change code behaviour, so you'd better be cautious.
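A minimal sketch of the kind of behavioural difference meant here (measure() is a hypothetical legacy API):

#include <type_traits>

double measure() { return 3.14159265358979; }   // hypothetical legacy function

void demo()
{
    float f = measure();   // the old code relied on an implicit double -> float conversion
    auto  g = measure();   // g is double: storage size, precision, comparisons and
                           // overload resolution can all differ from the float version
    static_assert(std::is_same<decltype(g), double>::value,
                  "auto keeps the exact return type");
    (void)f; (void)g;
}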
The auto keyword simply deduces the type from the initializing expression. Therefore it is not equivalent to a Python variable, e.g.
# Python
a
a = 10 # OK
a = "10" # OK
a = ClassA() # OK
// C++
auto a; // Error: unable to deduce the type of a
auto a = 10; // OK
a = "10"; // Value of const char* can't be assigned to int
a = ClassA{}; // Value of ClassA can't be assigned to int
a = 10.0; // OK, implicit casting warning
Since auto is deduced during compilation, it won't have any drawback at runtime whatsoever.
Here is something no one has mentioned so far, but which in itself is worth an answer if you ask me.
Since code written in C can easily be designed to provide a base for C++ code (even though everyone should be aware that C != C++), and can therefore be designed without too much effort to be C++ compatible, C++ compatibility can be a design requirement.
I know there are rules under which certain well-defined constructs in C are invalid in C++ and vice versa. But those would simply result in broken executables, and the well-known UB clause applies, which is usually noticed as strange behaviour ending in crashes or the like (or it may even stay undetected, but that doesn't matter here).
But with auto, for the first time1, this changes!
Imagine you used auto as a storage-class specifier before and then ported the code. It would not even necessarily "break" (depending on the way it was used); it could actually silently change the behaviour of the program.
That's something one should keep in mind.
1At least the first time I'm aware of.
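A minimal sketch of that silent change (the C89 reading is given in the comments; the snippet itself compiles as C++11):

void legacy()
{
    auto x = 3.5;  // C89:   storage-class 'auto' + implicit int  ->  int x = 3
                   // C++11: type deduction                       ->  double x = 3.5
    (void)x;       // any arithmetic involving x now silently means something different
}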
As I described in this answer auto can sometimes result in funky situations you didn't intend.
You have to explicitly say auto& to get a reference type, while plain auto can create a pointer type. This can result in confusion when the specifier is omitted altogether, producing a copy of the referenced object instead of an actual reference.
One reason that I can think of is that you lose the opportunity to coerce the type that is returned. If your function or method returned a 64-bit long, and you only wanted a 32-bit unsigned int, then you lose the opportunity to control that.
I think auto is good when used in a localized context, where the reader can easily and obviously deduce its type, or when it is well documented with a comment stating its type or a name that implies the actual type. Those who don't understand how it works might use it in the wrong way, like using it instead of a template or similar. Here are some good and bad use cases in my opinion.
void test (const int & a)
{
    // b is not const
    // b is not a reference
    auto b = a;
    // b's type is deduced by the compiler from the type of a,
    // with const and reference stripped: b is an int
}
Good Uses
Iterators
std::vector<boost::tuple<ClassWithLongName1, std::vector<ClassWithLongName2>, int>> v;
..
std::vector<boost::tuple<ClassWithLongName1, std::vector<ClassWithLongName2>, int>>::iterator it = v.begin();
// VS
auto vi = v.begin();
Function Pointers
int test (ClassWithLongName1 a, ClassWithLongName2 b, int c)
{
..
}
..
int (*fp)(ClassWithLongName1, ClassWithLongName2, int) = test;
// VS
auto *f = test;
Bad Uses
Data Flow
auto input = "";
..
auto output = test(input);
Function Signature
auto test (auto a, auto b, auto c)
{
..
}
Trivial Cases
for(auto i = 0; i < 100; i++)
{
..
}
Another irritating example:
for (auto i = 0; i < s.size(); ++i)
generates a warning (comparison between signed and unsigned integer expressions [-Wsign-compare]), because i is a signed int. To avoid this you need to write e.g.
for (auto i = 0U; i < s.size(); ++i)
or perhaps better:
for (auto i = 0ULL; i < s.size(); ++i)
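Two other commonly used alternatives (a sketch; s is assumed to be a std::string or similar container):

#include <cstddef>
#include <string>

void iterate(const std::string& s)
{
    // Spell out the unsigned type that size() actually returns...
    for (std::size_t i = 0; i < s.size(); ++i) { /* ... */ }

    // ...or deduce it from the call itself
    for (decltype(s.size()) i = 0; i < s.size(); ++i) { /* ... */ }
}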
I'm surprised nobody has mentioned this, but suppose you are calculating the factorial of something:
#include <iostream>
using namespace std;

int main() {
    auto n = 40;
    auto factorial = 1;
    for (int i = 1; i <= n; ++i)
    {
        factorial *= i;
    }
    cout << "Factorial of " << n << " = " << factorial << endl;
    cout << "Size of factorial: " << sizeof(factorial) << endl;
    return 0;
}
This code will output this:
Factorial of 40 = 0
Size of factorial: 4
That was definitely not the expected result. It happened because auto deduced the type of the variable factorial as int, since it was initialized with 1.
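The deduced type simply follows the initializer, so one way to steer it is through the literal's suffix. A minimal sketch (note that 40! overflows even a 64-bit integer, so this only changes the deduced type; the factorial example itself would need a smaller n, a floating-point type, or big-integer arithmetic to produce the right value):

#include <type_traits>

int main()
{
    auto a = 1;     // int
    auto b = 1L;    // long
    auto c = 1ULL;  // unsigned long long
    auto d = 1.0;   // double
    static_assert(std::is_same<decltype(c), unsigned long long>::value,
                  "the deduced type follows the literal's suffix");
    (void)a; (void)b; (void)c; (void)d;
}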
As we know, when a constexpr function's return value cannot be known at compile time, its computation is delayed to run time (IOW, it decays to a non-constexpr call). This allows us to attach constexpr to a function freely, without worrying about any overhead.
I think the same could apply to if statements. Since C++17 we have if constexpr, so we can write a compile-time if statement easily (without true_type/false_type). Unlike a constexpr function, however, it fails if its condition cannot be known at compile time:
constexpr int factorial(int n)
{
    if constexpr (n == 0) return 1;
    else return n * factorial(n - 1);
}
So the code above cannot pass compilation, because n is not a constant expression. Certainly, though, the function could be computed at compile time when its input is known at compile time.
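A sketch of two variants that do work (assuming C++11 for the first and C++17 for the second): a plain conditional is allowed inside a constexpr function, and moving n into a template parameter makes the if constexpr condition a constant expression.

// C++11/14: an ordinary conditional -- usable at compile time *and* at run time
constexpr int factorial_rt(int n)
{
    return n == 0 ? 1 : n * factorial_rt(n - 1);
}

// C++17: n as a template parameter, so `if constexpr` sees a constant expression
template <int N>
constexpr int factorial_ct()
{
    if constexpr (N == 0) return 1;
    else return N * factorial_ct<N - 1>();
}

static_assert(factorial_rt(5) == 120, "evaluated at compile time");
static_assert(factorial_ct<5>() == 120, "evaluated at compile time");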
For the same reason that swallowing errors/exceptions and just plowing through is bad: it can potentially put your program into some sort of unspecified state, making it almost impossible to reason about.
If a constraint in a program isn't met, the person who wrote it and relied on it needs to be notified promptly. Making such a thing a hard error for a language construct makes sense, especially if the language construct drives the actual generation of code.
In this case the constraint is that the condition must be a valid constant expression.
I'm quite confused by the new keyword constexpr in C++11. I would like to know where to use constexpr and where to use template metaprogramming when I write compile-time functions (especially maths functions). For example, take an integer pow function:
// 1 :
template <int N> inline double tpow(double x)
{
    return x * tpow<N - 1>(x);
}
template <> inline double tpow<0>(double x)
{
    return 1.0;
}

// 2 :
constexpr double cpow(double x, int N)
{
    return (N > 0) ? (x * cpow(x, N - 1)) : (1.0);
}

// 3 :
template <int N> constexpr double tcpow(double x)
{
    return x * tcpow<N - 1>(x);
}
template <> constexpr double tcpow<0>(double x)
{
    return 1.0;
}
Are the 2nd and 3rd functions equivalent ?
What is the best solution? Does it produce the same result:
if x is known at compile-time
if x is not known at compile-time
When should I use constexpr and when should I use template metaprogramming?
EDIT 1: code modified to include the specializations for the templates
I probably shouldn't be answering a template metaprogramming question this late. But, here I go.
Firstly, constexpr isn't implemented in Visual Studio 2012. If you want to develop for Windows, forget about it. I know, it sucks; I hate Microsoft for not including it.
With that out of the way, there's lots of things you can declare as constant, but they aren't really "constant" in terms of "you can work with them at compile time." For instance:
const int foo[5] = { 2, 5, 1, 9, 4 };
const int bar = foo[foo[2]]; // Fail!
You'd think you could read from that at compile time, right? Nope. But you can if you make it a constexpr.
constexpr int foo[5] = { 2, 5, 1, 9, 4 };
constexpr int bar = foo[foo[2]]; // Woohoo!
constexpr is really good for "constant propagation" optimization. What that means is: if you have a variable X that is determined at compile time based on some condition (perhaps metaprogramming), and it is a constexpr, then the compiler knows it can "safely" use it when optimizing to, say, remove instructions like a = (X * y); and replace them with a = 0; if X evaluated to 0 (and other conditions are met).
Obviously this is great because for many mathematical functions, constant propagation can give you an easy (to use) premature optimization.
Their main use, other than rather esoteric things (such as making it a lot easier for me to write a compile-time byte-code interpreter), is being able to make "functions" or classes that can be called and used both at compile time and at runtime.
Basically they just sort of fill a hole in C++03 and help with optimization by the compiler.
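A small sketch of that dual use (square is a hypothetical example function): assigning the result to a constexpr variable forces compile-time evaluation, while the same function stays callable with run-time arguments.

constexpr int square(int x) { return x * x; }

constexpr int s = square(7);                 // must be evaluated at compile time
static_assert(s == 49, "known to the compiler");

int at_runtime(int y) { return square(y); }  // the very same function, called at run time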
So which of your 3 is "best"?
2 can be called at run-time, whereas the others are compile-time only. That's pretty sweet.
There's a bit more to it. Wikipedia gives you a very basic summary of "constexpr allows this," but template metaprogramming can be complicated. Constexpr makes parts of it a lot easier. I wish I had a clear example for you other than say, reading from an array.
A good mathematical example, I suppose, would be if you wanted to implement a user-defined complex number class. It would be an order of magnitude more complex to code that with only template metaprogramming and no constexpr.
So when should you not use constexpr? Honestly, constexpr is basically "const, except MORE CONST." You can generally use it anywhere you'd use const, with a few caveats, such as the fact that a constexpr function called at runtime with non-constant input behaves like an ordinary (non-constexpr) function.
Um. OK, that's all for now. I'm too overtired to say more. I hope I was helpful; feel free to downvote me if I wasn't and I'll delete this.
The 1st and 3rd (as originally posted, without the specialisations) are incorrect: the compiler will try to instantiate tpow<N-1> before it evaluates (N>0) ?, and you will get infinite template recursion. You need a specialisation for N == 1 (or == 0) to make it work. The 2nd will work for x known at compile time and at run time.
Added after your specialisation-for-==0 edit: now all functions work for compile-time or run-time x. The 1st will always return a non-constexpr value. The 2nd and 3rd will return a constexpr if x and N are constexpr. The 2nd even works if N is not constexpr; the others need a constexpr N (so the 2nd and 3rd are not equivalent).
constexpr is used in two cases. When you write int N = 10;, the value of N is known at compile time but it is not constexpr and cannot be used, for example, as a template argument. The constexpr keyword explicitly tells the compiler that N is safe to use as a compile-time value.
The second use is constexpr functions. They use a subset of C++ to conditionally produce constexpr values and can dramatically simplify equivalent template functions. One detriment of constexpr functions is that you have no guaranteed compile-time evaluation -- the compiler can choose to do the evaluation at run time. With a templated implementation you are guaranteed compile-time evaluation.
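A sketch of how to pin that down, using the question's own cpow: assigning the result to a constexpr variable forces compile-time evaluation, while a run-time argument simply makes it an ordinary call.

constexpr double cpow(double x, int N)
{
    return (N > 0) ? (x * cpow(x, N - 1)) : 1.0;
}

constexpr double a = cpow(2.0, 10);   // guaranteed to be evaluated at compile time
static_assert(a == 1024.0, "2^10 is known to the compiler");

double b(double runtime_x)            // the same function as an ordinary run-time call
{
    return cpow(runtime_x, 10);
}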
Are floating point calculations, which use compile-time constant integers, performed during compile-time or during run-time? For example, when is the division operation calculated in:
template <int A, int B>
inline float fraction()
{
    return static_cast<float>(A) / B;
}
For something this simple, the compiler will probably do it at compile time. In fact, the compiler will probably do it at compile time even without templates, as long as all the values are known at compile time: i.e. if we have inline float fraction(int A, int B), it will probably do the division at compile time if we call fraction(1, 2).
If you want to force the compiler to do stuff at compile-time, you will have to use some template metaprogramming tricks, and I'm not sure you can get it to work with floating-point arithmetic at all. But here is a basic example of the technique:
// Something similarly simple that doesn't use floating-point ;)
template <int A, int B>
struct Product {
enum { value = A * B };
};
// Later:
... Product<3, 4>::value ...
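Assuming a compiler with C++11 constexpr support (see the later answer mentioning gcc 4.6), the floating-point version can be forced to compile time as well. A minimal sketch:

template <int A, int B>
constexpr float fraction()
{
    return static_cast<float>(A) / B;
}

static_assert(fraction<1, 2>() == 0.5f, "division performed at compile time");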
I believe it is implementation-defined, but most compilers will evaluate constant expressions at compile time. However, even if yours does not, the following modification:
template <int A, int B>
inline float fraction()
{
    static const float f = static_cast<float>(A) / B;
    return f;
}
will at least ensure that the expression is evaluated just once if it is evaluated at runtime.
You should wait for gcc 4.6, with its implementation of the C++0x constexpr keyword.
Your best bet is to look at the generated code - there is no guarantee that floating-point operations will be performed at compile time, but at higher optimisation levels they potentially could be, particularly for something simple like this.
(Some compilers might avoid doing this because for some architectures the floating point behaviour is configurable at runtime. The results for the operation performed at compile time could then potentially differ from those from the same operation performed at runtime.)
Neither the C nor the C++ standard requires that constant expressions of any stripe be evaluated at compile time, but they do allow it. Most compilers released in the last 20 years will evaluate constant arithmetic expressions, so if not incurring a function call or inlining the code is important, keep it as simple as possible.
If the scope of these expressions is limited to a single file, you can always take advantage of the preprocessor and #define FRACTION(a,b) (float(a)/float(b)) for convenience. I don't recommend doing this in a header unless you have a good scheme to prevent polluting any file that #includes it.