I just noticed that D0202R2 proposes that none of the <cstring> functions should be constexpr. I would like to understand why this solution was settled on at the Jacksonville meeting.
Take a function like std::strchr: I really do not see any reason for it not to be constexpr. Indeed, a compiler can easily optimize some dummy code like this at compile time (even with builtins disabled, as you can see from the compiler parameters). At the same time, however, it is not possible to rely on these functions in constexpr contexts or in static assertions.
We could obviously re-implement some of the <cstring> functions as constexpr (as I did in this other dummy code; a sketch of the idea is below), but I do not understand why they must not be constexpr in the standard library.
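For illustration, a minimal constexpr stand-in for std::strlen might look like this (a sketch only, not the exact code from the linked example; the helper name strlen_cx is made up here):

#include <cstddef>

// A hand-rolled constexpr replacement for std::strlen.
constexpr std::size_t strlen_cx(const char* s)
{
    std::size_t n = 0;
    while (s[n] != '\0')
        ++n;
    return n;
}

static_assert(strlen_cx("hello") == 5, "usable in static assertions");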
What am I missing?
PS: Builtins!
At first I was confused because constexpr functions using some <cstring> facilities just worked; then I understood it was only thanks to GCC builtins. Indeed, if you drop the -fno-builtin flag, you can just use std::strlen instead of the custom version of the function.
Upon reviewing this more, and thinking more about the implications of the C++14 relaxation of rules surrounding constexpr, I have a different answer.
The <cstring> header is a wrapper around a bunch of C functions. C has no constexpr concept, and while it might be useful for it to have one, it's not likely to grow one anytime soon. So marking up those functions in that way would be cumbersome and require a lot of #ifdefs.
Also (and I think this is the most important reason), when those functions are not compiler intrinsics, they are implemented in C and shipped as object code in a library file. Object code in a library is not in a form the C++ compiler can evaluate at compile time; it is not 'inline' the way template code is.
Lastly, most of the really useful things they do can easily be expressed in terms of the C++ <algorithm> library: strlen(s) becomes ::std::string_view(s).length(), memcpy(a, b, len) becomes ::std::copy(b, b + len, a), and so on. D0202R2 proposes to make those algorithms constexpr, and, as you pointed out, it also proposes to make the functions of ::std::string_view constexpr, which gives equivalent functionality. So, given the previously mentioned headaches, making the <cstring> functions constexpr seems to be of dubious benefit.
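To make those equivalences concrete, here is a sketch of how they read once the relevant pieces are constexpr (string_view is constexpr in C++17; the constexpr std::copy shown here assumes a C++20 library):

#include <algorithm>
#include <cstddef>
#include <string_view>

// strlen(s) expressed through string_view:
static_assert(std::string_view("hello").length() == 5);

// memcpy(dst, src, n) expressed through std::copy, usable at compile time:
constexpr int copy_demo()
{
    int src[3] = {1, 2, 3};
    int dst[3] = {};
    std::copy(src, src + 3, dst);
    return dst[2];
}
static_assert(copy_demo() == 3);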
As a side note, there are ::std::copy, ::std::move, ::std::copy_backward, and ::std::move_backward, and it's up to you to figure out which one you need to call. It would be nice if there were a function that, like memmove, could figure out whether the plain or the _backward variant is needed in a particular case. But because of the way iterators are defined, comparing one iterator with another iterator that may not be iterating over the same object at all just isn't possible in C++, even if they are random access iterators.
I know the difference in requirements; I am mostly interested in what code-quality benefits consteval brings over constexpr.
A few things I can think of:
the reader can just read the function signature and know that the function is evaluated at compile time
the compiler may emit less code, since consteval functions are never called at runtime (this is speculative; I have no real data on this)
no need for helper variables to force compile-time evaluation; see the example at the end
Note: if "code quality" is too vague, I understand some people might want to close this question; for me code quality is not really that vague a term, but...
Example where a constexpr failure is delayed to runtime:

#include <cassert>
#include <iostream>

constexpr int div_cx(int a, int b)
{
    assert(b != 0);
    return a / b;
}

int main()
{
    static constexpr int result = div_cx(5, 0); // compile-time error, division by 0
    std::cout << result;
    std::cout << div_cx(5, 0); // runtime error :(
}
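For contrast, here is a sketch of the same function written as consteval (the name div_ce is made up): every call must be a constant expression, so the failing call is rejected at compile time instead of slipping through to runtime.

#include <cassert>
#include <iostream>

consteval int div_ce(int a, int b)
{
    assert(b != 0);
    return a / b;
}

int main()
{
    std::cout << div_ce(5, 1);   // OK: evaluated at compile time
    // std::cout << div_ce(5, 0); // error: the call is not a constant expression
}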
In order to have meaningful, significant static reflection (reflection at compile time), you need a way to execute code at compile time. The initial static reflection TS proposal used traditional template metaprogramming techniques, because those were the only effective tools for executing code at compile-time at all.
However, as constexpr code gained more features, it became increasingly more feasible to do compile-time static reflection through constexpr functions. One problem with such ideas is that static reflection values cannot be allowed to leak out into non-compile-time code.
We need to be able to write code that must only be executed at compile-time. It's easy enough to do that for small bits of code in the middle of a function; the runtime version of that code simply won't contain the reflection parts, only the results of them.
But what if you want to write a function that takes a reflection value and returns a reflection value? Or a list of reflection values?
That function cannot be constexpr, because a constexpr function must be able to be executed at runtime. You are allowed to do things like get pointers to constexpr functions and call them in ways that the compiler can't trace, thus forcing it to execute at runtime.
A function which takes a reflection value can't do that. It must execute only at compile-time. So constexpr is inappropriate for such functions.
Enter consteval: a function which is "required" to execute only at compile time. There are specific rules in place that make it impossible for pointers to such functions to leak out into runtime code and so forth.
As such, consteval doesn't have much purpose at the moment. It gets used in a few places like source_location::current(), which fundamentally makes no sense to execute at runtime. But ultimately, the feature is a necessary building-block for further compile-time programming tools that don't exist yet.
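A small sketch of the difference described above (the function names are made up): a pointer to a constexpr function can escape into runtime code, while taking the address of a consteval function outside an immediate context is ill-formed.

#include <iostream>

constexpr int square_cx(int n) { return n * n; }  // may run at compile time or at runtime
consteval int square_ce(int n) { return n * n; }  // must run at compile time

int main()
{
    int (*fp)(int) = square_cx;      // fine: the compiler can no longer trace this call
    std::cout << fp(7) << '\n';      // executed at runtime

    constexpr int c = square_ce(7);  // fine: immediate invocation at compile time
    // int (*fp2)(int) = square_ce;  // error: cannot take the address of an immediate function here
    return c;
}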
This was laid down in the paper that originally proposed this feature:
The impetus for the present paper, however, is the work being done by SG7 in the realm of compile-time reflection. There is now general agreement that future language support for reflection should use constexpr functions, but since "reflection functions" typically have to be evaluated at compile time, they will in fact likely be immediate functions.
This is more of a philosophical question than a practical code snippet, but perhaps the C++ gurus can enlighten me (and apologies if it has been asked already).
I have been reading Item 15 in Meyers's "Effective Modern C++" book, as well as this thread: implicit constexpr? (plus a reasonable amount of googling). The item goes over the usage of constexpr for expressions, namely that it marks functions that can return compile-time values given compile-time inputs.
Moreover, the StackOverflow thread I referred to shows that some compilers are perfectly capable of figuring out for themselves which function invocation results are known at compile time.
Hence the question: why was constexpr added to the standard at all, rather than defining when compilers should deduce and allow static/compile-time values on their own?
I realise it makes various compile-time-only definitions (e.g. std::array<T, constexpr>) less predictable, but on the other hand, as per Meyers's book, constexpr "is a part of the interface, ..., if you remove it, you may cause arbitrarily large amounts of client code to stop compiling".
So not only does explicit constexpr require people to remember to add it, it also adds permanent semantics to the interface.
Clarification: This question is not about why constexpr should be used. I appreciate that having the ability to programmatically derive compile-time values is very useful, and I have employed it myself on a number of occasions. It is a question about why it is mandatory in situations where the compiler could deduce compile-time behaviour on its own.
Clarification no. 2: Here is a code snippet showing that compilers do not deduce this automatically; I have used g++ in this case.
#include <array>
#include <cstddef>

std::size_t test()
{
    return 42;
}

int main()
{
    auto i = test();
    std::array<int, i> arrayTst; // error: i is not a constant expression
    arrayTst[1] = 20;
    return arrayTst[1];
}
The std::array declaration fails to compile because I have not declared test() as constexpr, which is of course what the standard requires. If the standard were different, nothing would have prevented gcc from figuring out on its own that test() always returns a constant expression.
This question is not asking "what does the standard define?", but rather "why is the standard the way it is?"
Before constexpr, compilers could sometimes figure out a compile-time constant and use it. However, the programmer could never know when this would happen.
With constexpr, the programmer is immediately informed if an expression is not a compile-time constant, and he or she realizes the need to fix it.
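As an illustration of that, the snippet from the question compiles as soon as the intent is stated explicitly (a sketch):

#include <array>
#include <cstddef>

constexpr std::size_t test()
{
    return 42;
}

int main()
{
    constexpr auto i = test();      // i is now a constant expression
    std::array<int, i> arrayTst{};  // OK: the size is known at compile time
    arrayTst[1] = 20;
    return arrayTst[1];
}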
If I have a member function declared like so:
double* restrict data() {
    return m_data; // array member variable
}
can the restrict keyword do anything?
Apparently, with g++ (x86 architecture) it cannot, but are there other compilers/architectures where this type of construction makes sense, and would allow for optimized machine code generation?
I'm asking because the Blitz library (Blitz++) has a whole slew of functions declared in this manner, and it doesn't make sense that someone would go in and add the restrict keyword unless it actually does something. So before I go in and remove the restricts (to get rid of compiler warnings), I'd like to know how I might be abusing the code.
WHAT restrict ARE WE TALKING ABOUT?
restrict is, as it currently stands, non-standard, which means that it is a compiler extension; it is non-portable in the sense that the C++ Standard doesn't mandate its existence, nor is there any formal text that tells us what it is supposed to do.
restrict is currently compiler-specific in C++, and one has to resort to the documentation of the compiler in question to see exactly what it does.
SOME THOUGHTS
There are many papers about the usage of restrict, among them:
Restricted Pointers - Using the GNU Compiler Collection
restrict - wikipedia.org
Demystifying The Restrict Keyword - CellPerformance
It's hinted at in several places that the purpose of restrict is to qualify pointers so that the compiler knows that two pointers in the same scope don't refer to the same memory location.
With this in mind we can easily see that a return type has no potential collision with other pointers, so using the qualifier in such a context will generally not gain any optimization opportunities. However, one must refer to the documented behaviour of the implementation in use to know for sure; as stated, restrict is not standard yet.
I also found the following thread, where the developers of Blitz++ discuss removing restrict applied to the return type of a function, since it doesn't do anything:
Re: [Blitz-devel] type qualifiers ignored on function return type
A LITTLE NOTE
As a further note, here's what the LLVM Documentation says about noalias vs restrict:
For function return values, C99’s restrict is not meaningful, while LLVM’s noalias is.
Generally, the restrict qualifier can only help the compiler optimize code better. By removing restrict you don't break anything, but adding it carelessly can introduce errors. A good analogy is the difference between memcpy and memmove: you can always use the slower memmove, but you may use the faster memcpy only if you know that src and dst don't overlap.
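For contrast with the return-type case, here is a sketch of where the qualifier can actually help, using the GCC/Clang spelling __restrict__ on parameters (the function name is made up):

#include <cstddef>

// With __restrict__ the compiler may assume dst and src never alias,
// so it is free to vectorize and reorder the loads and stores.
void add_arrays(double* __restrict__ dst,
                const double* __restrict__ src,
                std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        dst[i] += src[i];
}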
As far as I understand it, constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible.
I know that it also imposes some restrictions on the function or initialization declared constexpr, but the final goal is compile-time evaluation, isn't it?
So my question is, why can't we leave that to the compiler? It is obviously capable of checking the preconditions, so why doesn't it do so for every expression and evaluate at compile time where possible?
I have two ideas on why this might be the case, but I am not yet convinced that they hit the mark:
a) It might take too long at compile time.
b) Since my code can use constexpr functions in places where normal functions would not be allowed, the specifier is also kind of part of the declaration. If the compiler did everything by itself, one could use a function in a C-array definition with one version of the function, but the next version might produce a compiler error, because the preconditions for compile-time evaluation are no longer satisfied.
constexpr is not a "hint" to the compiler about anything; constexpr is a requirement. It doesn't require that an expression actually be executed at compile time; it requires that it could.
What constexpr does (for functions) is restrict what you're allowed to put into the function definition, so that the compiler can easily execute that code at compile time where possible. It's a contract between you, the programmer, and the compiler. If your function violates the contract, the compiler will error out immediately.
Once the contract is established, you are now able to use these constexpr functions in places where the language requires a compile time constant expression. The compiler can then check the elements of a constant expression to see that all function calls in the expression call constexpr functions; if they don't, again a compiler error results.
Your attempt to make this implicit would result in two problems. First, without an explicit contract as defined by the language, how would I know what I can and cannot do in a constexpr function? How do I know what will make a function not constexpr?
And second, without the contract being in the compiler, via a declaration of my intent to make the function constexpr, how would the compiler be able to verify that my function conforms to that contract? It couldn't; it would have to wait until I use it in a constant expression before I find that it isn't actually a proper constexpr function.
Contracts are best stated explicitly and up-front.
constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible
No, see below
the final goal is compile-time evaluation
No, see below.
so why doesn't it do for each expression and evaluate at compile-time where possible?
Optimizers do things like that, as allowed under the as-if rule.
constexpr is not used to make things faster, it is used to allow usage of the result in context where a runtime-variable expression is illegal.
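A minimal sketch of that last point (the function names are made up):

#include <array>

constexpr int half_cx(int n) { return n / 2; }
int half_rt(int n) { return n / 2; }

std::array<int, half_cx(10)> a;    // OK: half_cx(10) is a constant expression
// std::array<int, half_rt(10)> b; // error: half_rt is not constexpr, so this is not a constant expression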
This is only my evaluation, but I believe your (b) reason is correct (that it forms part of the interface that the compiler can enforce). The interface requirement serves both for the writer of the code and the client of the code.
The writer may intend something to be usable in a compile-time context but not actually use it that way. If the writer violates the rules for constexpr, they might not find out until after publication, when clients who try to use it in a constexpr context fail. Or, more realistically, the library might use the code in a constexpr sense in version 1, refactor this usage out in version 2, and break constexpr compatibility in version 3 without realizing it. By checking constexpr compliance, the breakage in version 3 will be caught before deployment.
The interface benefit for the client is more obvious: an inline function won't silently become constexpr-required just because it happened to work and someone used it that way.
I don't believe your (a) reason (that it could take too long for the compiler) is applicable because (1) the compiler has to check much of the constexpr constraints anyway when the code is marked, (2) without the annotation, the compiler would only have to do the checking when used in a constexpr way (so most functions wouldn't have to be checked), and (3) IIUC the D programming language actually does allow functions to be compile-time evaluated if they meet requirements without any declaration assistance, so apparently it can be done.
I think I remember watching an early talk by Bjarne Stroustrup where he mentioned that programmers wanted fine-grained control over this "dangerous" feature, from which I understand that they don't want things "accidentally" executed at compile time without them knowing. (Even if that sounds like a good thing.)
There can be many reasons for that, but I think the only valid one is ultimately compilation speed ((a) in your list).
It would be too much burden on the compiler to determine for every function if it could be computed at compile time.
This argument is weaker as compilation times in general go down.
Like with many other features of C++, what ends up happening is that we end up with the "wrong defaults".
So you have to say when you want constexpr instead of saying when you don't want it (a hypothetical "runtimeexpr"); you have to say when you want const instead of when you want mutable, etc.
Admittedly, you can imagine functions that take an absurd amount of time to run at compile time and that cannot be amortized (with other kinds of machine resources) at runtime.
(I am not aware that "time-out" can be a criterion in a compiler for constexpr, but it could be so.)
Or it could be that one is compiling in a system that is always expected to finish compilation in a finite time but an unbounded runtime is admissible (or debuggable).
I know that this question is old, but time has shown that it actually makes sense to have constexpr as the default:
In C++17, for example, you can declare a lambda constexpr, but more importantly lambdas are constexpr by default whenever they can be.
https://learn.microsoft.com/en-us/cpp/cpp/lambda-expressions-constexpr
Note that lambdas have all the "right" (opposite) defaults: captures are const by default, parameters can be made generic simply with auto, and now their call operators are constexpr by default.
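A quick sketch of the C++17 behaviour (in the spirit of the example on the linked Microsoft page, not copied from it):

int main()
{
    // Not marked constexpr, yet implicitly constexpr in C++17
    // because the body satisfies the constexpr requirements.
    auto answer = [](int n) { return 32 + n; };

    constexpr int response = answer(10);  // evaluated at compile time
    static_assert(response == 42);
    return response;
}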
So I noticed from this page that none of the math functions in C++11 seems to be constexpr, whereas I believe all of them could be. That leaves me with two questions. One: why did they choose not to make these functions constexpr? And two: for a function like sqrt I could probably write my own constexpr version, but something like sin or cos would be trickier, so is there a way around it?
Actually, because of old and annoying legacy, almost none of the math functions can be constexpr, since they all have the side-effect of setting errno on various error conditions, usually domain errors.
From "The C++ Programming Language (4th Edition)", by B. Stroustrup, describing C++11:
"To be evaluated at compile time, a function must be suitably simple: a constexpr function must consist of a single return-statement; no loops, and no local variables are allowed. Also, a constexpr function may not have side effects."
This means that such a function is implicitly inline and must consist of a single return statement, with no for, while, or if statements and no local variables. Side effects are also forbidden (e.g. changing errno). Another problem is that most of the math functions map to FPU instructions that are not expressed in pure C/C++ (they are implemented in assembly). That's why none of the <cmath> functions is declared constexpr.
So I noticed from this page that none of the math functions in C++11 seems to be constexpr, whereas I believe all of them could be. That leaves me with two questions. One: why did they choose not to make these functions constexpr?
This part is very well answered by Sebastian Redl and Adam Szaj, so I won't be adding anything to it.
And two: for a function like sqrt I could probably write my own constexpr version, but something like sin or cos would be trickier, so is there a way around it?
Yes, you can write your own constexpr versions of sin and cos by using the Taylor series expansions of these functions. Have a look at this super cool GitHub repo, which implements several mathematical functions as constexpr functions: Morwenn/static_math
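For illustration, a hand-rolled constexpr sine based on a truncated Taylor series might look roughly like this (a sketch only, with a made-up name, using the relaxed C++14 constexpr rules; under C++11 it would need a single-return recursive form, and a real implementation such as static_math handles range reduction and accuracy far more carefully):

// sin(x) ~ x - x^3/3! + x^5/5! - ...  (no range reduction; fine for small |x|)
constexpr double sin_cx(double x, int terms = 10)
{
    double term = x;  // current term: (-1)^k * x^(2k+1) / (2k+1)!
    double sum = x;
    for (int k = 1; k < terms; ++k)
    {
        term *= -x * x / ((2 * k) * (2 * k + 1));
        sum += term;
    }
    return sum;
}

static_assert(sin_cx(0.0) == 0.0, "evaluated at compile time");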