So I noticed from this page that none of the math functions in C++11 seem to make use of constexpr, whereas I believe all of them could. So that leaves me with two questions. One is why did they choose not to make the functions constexpr? And two: for a function like sqrt I could probably write my own constexpr version, but something like sin or cos would be trickier, so is there a way around it?
Actually, because of old and annoying legacy, almost none of the math functions can be constexpr, since they all have the side-effect of setting errno on various error conditions, usually domain errors.
From "The C++ Programming Language (4th Edition)", by B. Stroustrup, describing C++11:
"To be evaluated at compile time, a function must be suitably simple: a constexpr function must consist of a single return-statement; no loops, and no local variables are allowed. Also, a constexpr function may not have side effects."
Which means that it must be inline, without for, while and if statements, and without local variables. Side effects are also forbidden (e.g. changing errno). Another problem is that most math functions map to FPU instructions which are not represented in pure C/C++ (they are written in assembler code). That's why none of the <cmath> functions is declared constexpr.
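As a hedged illustration of those C++11 rules (my own minimal example, not from the book):

    #include <cerrno>
    #include <cmath>

    // Fine as a C++11 constexpr function: a single return statement,
    // recursion instead of loops, and no side effects.
    constexpr double power(double base, int exp)
    {
        return exp == 0 ? 1.0 : base * power(base, exp - 1);
    }
    static_assert(power(2.0, 10) == 1024.0, "evaluated at compile time");

    // Not allowed as constexpr: writing errno is a side effect.
    double checked_sqrt(double x)
    {
        if (x < 0.0) { errno = EDOM; return NAN; }
        return std::sqrt(x);
    }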
So I noticed from this page that none of the math functions in C++11 seem to make use of constexpr, whereas I believe all of them could. So that leaves me with two questions. One is why did they choose not to make the functions constexpr?
This part is very well answered by Sebastian Redl and Adam Szaj, so I won't be adding anything to it.
And two: for a function like sqrt I could probably write my own constexpr version, but something like sin or cos would be trickier, so is there a way around it?
Yes, you can write your own constexpr versions of sin and cos by using the Taylor series expansions of these functions. Have a look at this super cool GitHub repo, Morwenn/static_math, which implements several mathematical functions as constexpr functions.
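For example, a hedged sketch of the idea (my own, not taken from that repo), written under the relaxed C++14 constexpr rules:

    // constexpr sin via its Taylor series; accuracy is only reasonable
    // for arguments not too far from 0.
    constexpr double taylor_sin(double x, int terms = 10)
    {
        double term = x;   // current term: (-1)^k * x^(2k+1) / (2k+1)!
        double sum  = x;
        for (int k = 1; k < terms; ++k) {
            term *= -x * x / ((2 * k) * (2 * k + 1));
            sum  += term;
        }
        return sum;
    }

    static_assert(taylor_sin(0.0) == 0.0, "usable in constant expressions");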
Related
How can I force the inlining of a function, but define it in a C++ file?
This is a question that's been asked in the past, for example here: Moving inline methods from a header file to a .cpp files
The answers there, in short, go as follows: "inline used to mean [remove function call overhead at the expense of .text size], now it means [relax ODR], so don't use inline for anything that's not ODR related, the compiler knows better".
I'm aware of that, however in my somewhat exotic case, I don't care about performance.
I'm programming an embedded device and, should someone break through the other layers of security, I want to make it as obnoxious as possible to reverse engineer this part of the code, and one thing this implies is that I don't want function calls (that aren't called numerous times anyway) to expose the function boundaries, which are natural delimitations of pieces of code that achieve something on their own.
However, I would also like to keep my code orderly and not have code in my header files.
I see that I can use __attribute__((always_inline)) to force inlining, but then I get warnings if those functions don't have an inline attribute too: warning: always_inline function might not be inlinable [-Wattributes]
Suppressing the attributes warning is an option, but I'd rather only take it once I'm sure there is no clean way to do this.
Hence the question: how can I have a forcibly inlined function whose declaration is in a header but whose definition is in a source file, without suppressing all attributes warnings? Is that impossible?
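For concreteness, a minimal sketch of the kind of setup that triggers this warning (assuming GCC; the file and function names are made up):

    // secret.h -- declaration only, the definition lives in a .cpp file
    __attribute__((always_inline)) int secret_step(int x);

    // secret.cpp
    #include "secret.h"
    int secret_step(int x) { return x * 31 + 7; }

Note that with the definition in a separate translation unit, the compiler cannot inline the call at all unless link-time optimization is enabled, warning or not.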
Inlining can only be requested. Sometimes a bit forcefully. But you can never guarantee that the function WILL be inlined in the end - for reasons that are sometimes quite obscure.
Here is what MSVC's documentation says (I've highlighted the important parts):
The compiler treats the inline expansion options and keywords as suggestions. There's no guarantee that functions will be inlined. You can't force the compiler to inline a particular function, even with the __forceinline keyword. When compiling with /clr, the compiler won't inline a function if there are security attributes applied to the function.
C++ standard says:
No matter how you designate a function as inline, it is a request that the compiler is allowed to ignore: the compiler might inline-expand some, all, or none of the places where you call a function designated as inline.
GCC's documentation is a bit less crystal-clear about non-inlinable functions, but such cases exist anyway.
The only "real" way to force inlining is quite ugly, since it rely on inlining it before compilation... Yeah, old-style preprocessor macros. The Evil Itself. Or by using a dirty hack with a #include replacing the function call (and inserting C++ code instead)... It may be a bit safer than a macro, regarding double evaluations, but other side-effects can be even worse since it must rely on "global" variables to work.
Is it worth the pain? Probably not. In particular for "obfuscation", because it won't be as "secure" as you think it will be. Yes, an explicit function call is easier to trace. But it won't change anything: reverse engineering doesn't rely on that. In fact, obfuscation is almost never a good (or even working...) solution. I used to think it was... a long, very long time ago. I proved to myself that it was nearly useless. On my own "secured" code. Breaking the code took me much less time than it took to "protect" it...
This might be a stupid question, but I am confused. I had a feeling that an immediate (consteval) function has to be executed during compile time and we simply cannot see its body in the binary.
This article clearly supports my feeling:
This has the implication that the [immediate] function is only seen at compile time. Symbols are not emitted for the function, you cannot take the address of such a function, and tools such as debuggers will not be able to show them. In this matter, immediate functions are similar to macros.
A similarly strong claim can be found in Herb Sutter's publication:
Note that draft C++20 already contains part of the first round of reflection-related work to land in the standard: consteval functions that are guaranteed to run at compile time, which came from the reflection work and are designed specifically to be used to manipulate reflection information.
However, there are a number of pieces of evidence that are not so clear on this point.
From cppreference:
consteval - specifies that a function is an immediate function, that is, every call to the function must produce a compile-time constant.
It does not mean it has to be called during compile time only.
From the P1073R3 proposal:
There is now general agreement that future language support for reflection should use constexpr functions, but since "reflection functions" typically have to be evaluated at compile time, they will in fact likely be immediate functions.
This seems to mean what I think, but still it is not said clearly. From the same proposal:
Sometimes, however, we want to express that a function should always produce a constant when called (directly or indirectly), and a non-constant result should produce an error.
Again, this does not mean the function has to be evaluated during compile time only.
From this answer:
your code must produce a compile time constant expression. But a compile time constant expression is not an observable property in the context where you used it, and there are no side effects to doing it at link or even run time! And under as-if there is nothing preventing that
Finally, there is a live demo where a consteval function is clearly called at runtime. However, I hope this is due to the fact that consteval is not yet properly supported in Clang and the behavior is actually incorrect, just like in Why does a consteval function allow undefined behavior?
To be more precise, I'd like to hear which of the following statements of the cited article are correct:
An immediate function is only seen at compile time (and cannot be evaluated at run time)
Symbols are not emitted for an immediate function
Tools such as debuggers will not be able to show an immediate function
To be more precise, I'd like to hear which of the following statements of the cited article are correct:
An immediate function is only seen at compile time (and cannot be evaluated at run time)
Symbols are not emitted for an immediate function
Tools such as debuggers will not be able to show an immediate function
Almost none of these are answers which the C++ standard can give. The standard doesn't define "symbols" or what tools can show. Almost all of these are dealer's choice as far as the standard is concerned.
Indeed, even the question of "compile time" vs. "run time" is something the standard doesn't deal with. The only question that concerns the standard is whether something is a constant expression. Invoking a constexpr function may produce a constant expression, depending on its parameters. Invoking a consteval function in a way which does not produce a constant expression is ill-formed.
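A minimal illustration of that distinction (function names are mine):

    constexpr int square(int n) { return n * n; }       // may run at compile time
    consteval int cube(int n)   { return n * n * n; }   // every call must be a constant

    constexpr int a = square(5);   // OK
    constexpr int b = cube(5);     // OK: constant expression

    int x = 5;                     // not a constant
    int c = square(x);             // OK: falls back to a runtime call
    // int d = cube(x);            // ill-formed: not a constant expression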
The one thing the standard does define is what gets "seen". Though it's not really about "compile time". There are a number of statements in C++20 that forbid most functions from dealing in pointers/references to immediate functions. For example, C++20 states in [expr.prim.id]/3:
An id-expression that denotes an immediate function shall appear only
as a subexpression of an immediate invocation, or
in an immediate function context.
So if you're not in an immediate function, or you're not using the name of an immediate function to call another immediate function (passing a pointer/reference to the function), then you cannot name an immediate function. And you can't get a pointer/reference to a function without naming it.
This and other statements in the spec (like pointers to immediate function not being valid results of constant expressions) essentially make it impossible for a pointer/reference to an immediate function to leak outside of constant expressions.
So statements about the visibility of immediate functions are correct, to some degree. Symbols can be emitted for immediate functions, but you cannot use immediate functions in a way that would prevent an implementation from discarding said symbols.
And that's basically the thing with consteval. It doesn't use standard language to enforce what must happen. It uses standard language to make it impossible to use the function in a way that will prevent these things from happening. So it's more reasonable to say:
You cannot use an immediate function in a way that would prevent the compiler from executing it at compile time.
You cannot use an immediate function in a way that would prevent the compiler from discarding symbols for it.
You cannot use an immediate function in a way that would force debuggers to be able to see them.
Quality of implementation is expected to take things from there.
It should also be noted that debugging builds are for... debugging. It would be entirely reasonable for advanced compiler tools to be able to debug code that generates constant expressions. So a debugger which could see immediate functions execute is an entirely desirable technology. This becomes moreso as compile-time code grows more complex.
The proposal mentions:
One consequence of this specification is that an immediate function never needs to be seen by a back end.
So it is definitely the intention of the proposal that calls are replaced by the constant. In other words, that the constant expression is evaluated during translation.
However, it does not say it is required that it is not seen by the backend. In fact, in another sentence of the proposal, it just says it is unlikely:
It also means that, unlike plain constexpr functions, consteval functions are unlikely to show up in symbolic debuggers.
More generally, we can re-state the question as:
Are compilers forced to evaluate constant expressions (everywhere; not just when they definitely need it)?
For instance, a compiler needs to evaluate a constant expression if it is the number of elements of an array, because it needs to statically determine the total size of the array.
However, a compiler may not need to evaluate other uses, and while any decent optimizing compiler will try to do so anyway, it does not mean it needs to.
Another interesting case to think about is an interpreter: while an interpreter still needs to evaluate some constant expressions, it may just do it lazily all the time, without performing any constant folding.
So, as far as I know, they aren't required, but I don't know the exact quotes we need from the standard to prove it (or otherwise). Perhaps it is a good follow-up question on its own, which would answer this one too.
For instance, in [expr.const]p1 there is a note that says they can be, not that they must be:
[Note: Constant expressions can be evaluated during translation. — end note]
I was recently reading about operator overloading in C++. So, I was wondering whether the built-in operators are replaced by function calls behind the scenes.
For example, is a + b (where a and b are of type int) replaced by a.operator+(b)? Or does the compiler do something different?
There is no int::operator+. Whether the compiler chooses to compile a + b directly to assembly (likely) or replace it with some internal function like int __add_ints(int, int) (unlikely) is an implementation detail.
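A brief sketch of the difference (the type and names are illustrative):

    struct Money { long cents; };

    Money operator+(Money a, Money b) { return Money{a.cents + b.cents}; }

    int   i = 1 + 2;                    // built-in int addition, no function involved
    Money m = Money{100} + Money{25};   // overload resolution finds operator+(Money, Money)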
The internals of the compiler are complex. On a conceptual level, the answer is YES. Whenever a compiler sees a + b, it does have to check for known functions with the name operator+ and replace it with a call to the right function.
In practice, there are two important nuances:
The compiler knows about the fundamental types (whose operators you can't override); it doesn't need to insert a function call, it can immediately emit the right instructions.
Inlining is an important optimization, which removes the function call whenever that is worthwhile.
Maybe. Many arithmetic operations map directly onto CPU instructions, and the compiler will just generate the appropriate code in place. If that's not possible, the compiler will generate a call to an appropriate function, and the runtime library will have a definition of that function. Back in the olden days floating-point math was usually done with function calls. These days, CPUs for desktop systems have floating-point hardware, and floating-point math is generated as direct CPU instructions. But embedded systems often don't have hardware for that, so the compiler generates function calls instead.
Back in the really early days, even integer math was sometimes done with function calls. Because of this, the IBM 1620 was sometimes referred to as the CADET: Can't Add, Doesn't Even Try.
I just noticed that D0202R2 proposes that none of the <cstring> functions should be constexpr. I would like to understand why it was decided, during the Jacksonville meeting, to go with a solution like this.
Take a function like std::strchr. I really do not see any reason for it not to be constexpr. Indeed, a compiler can easily optimize some dummy code like this at compile time (even if I remove builtins, as you can see from the parameters). However, at the same time, it is not possible to rely on these functions within constexpr contexts or in static assertions.
We could obviously re-implement some of the <cstring> functions as constexpr (as I did in this other dummy code), but I do not understand why they must not be constexpr in the standard library.
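For instance, a hedged sketch of such a re-implementation (my own, written under the C++14 relaxed rules, not the code from the linked example):

    #include <cstddef>

    constexpr std::size_t const_strlen(const char* s)
    {
        std::size_t n = 0;
        while (s[n] != '\0')
            ++n;
        return n;
    }

    static_assert(const_strlen("hello") == 5, "usable in a static assertion");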
What am I missing?
PS: Builtins!
At the beginning I was confused because constexpr functions using some <cstring> capabilities just worked; then I understood it was only thanks to GCC builtins. Indeed, if you add the -fno-builtin parameter, you can no longer just use std::strlen and have to fall back to a custom version of the function.
Upon reviewing this more, and thinking more about the implications of the C++14 relaxation of rules surrounding constexpr, I have a different answer.
The <cstring> header is a wrapper around a bunch of C functions. C has no constexpr concept, and while it might be useful for it to have one, it's not likely to grow one anytime soon. So marking up those functions in that way would be cumbersome and require a lot of #ifdefs.
Also (and I think this is the most important reason), when those functions are not compiler intrinsics they are implemented in C and stored in a library file as object code. Object code in a library is not in a form the C++ compiler can evaluate at compile time; it is not 'inline' the way template code is.
Lastly, most of the really useful things they do can easily be implemented in terms of the C++ <algorithm> library. strlen(s) = (::std::string_view(s)).length(), memcpy(a, b, len) = ::std::copy(b, b + len, a) and so on. And D0202R2 proposes to make those algorithms constexpr. And, as you pointed out, it also proposes to make functions in ::std::string_view constexpr and these also give equivalent functionality. So, given the previously mentioned headaches, it seems that implementing constexpr for the <cstring> functions would be of dubious benefit.
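For what it's worth, a small sketch of that equivalence in a constant-expression context, assuming a C++17 constexpr std::string_view:

    #include <string_view>

    // strlen-like functionality, but usable in a constant expression:
    static_assert(std::string_view("hello").length() == 5, "");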
As a side note, there's ::std::copy, ::std::move, ::std::copy_backward, and ::std::move_backward, and it's up to you to figure out which you need to call. It would be nice if there was a function that could figure out whether the plain or the _backward variant was needed in a particular case, the way memmove does. But, because of the way iterators are defined, taking one iterator and comparing it to another iterator that may not be iterating over the same object at all just isn't possible in C++, even if they're random access iterators.
Why do we need to mark functions as constexpr?
As far as I understand it, constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible.
I know that it also imposes some restriction on the function or initialization declared as constexpr but the final goal is compile-time evaluation, isn't it?
So my question is, why can't we leave that to the compiler? It is obviously capable of checking the pre-conditions, so why doesn't it do so for each expression and evaluate at compile-time where possible?
I have two ideas on why this might be the case but I am not yet convinced that they hit the point:
a) It might take too long during compile-time.
b) Since my code can use constexpr functions in locations where normal functions would not be allowed, the specifier is also kind of part of the declaration. If the compiler did everything by itself, one could use a function in a C-array definition with one version of the function, but with the next version there might be a compiler error, because the pre-conditions for compile-time evaluation are no longer satisfied.
constexpr is not a "hint" to the compiler about anything; constexpr is a requirement. It doesn't require that an expression actually be executed at compile time; it requires that it could.
What constexpr does (for functions) is restrict what you're allowed to put into a function definition, so that the compiler can easily execute that code at compile time where possible. It's a contract between you the programmer and the compiler. If your function violates the contract, the compiler will error immediately.
Once the contract is established, you are now able to use these constexpr functions in places where the language requires a compile time constant expression. The compiler can then check the elements of a constant expression to see that all function calls in the expression call constexpr functions; if they don't, again a compiler error results.
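A small sketch of that second point (my own example, using the C++14 relaxed rules):

    // The constexpr marker is the contract; the compiler checks the body up front.
    constexpr int next_power_of_two(int n)
    {
        int p = 1;
        while (p < n) p *= 2;
        return p;
    }

    int buffer[next_power_of_two(100)];                // context that requires a constant
    static_assert(next_power_of_two(100) == 128, "");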
Your attempt to make this implicit would result in two problems. First, without an explicit contract as defined by the language, how would I know what I can and cannot do in a constexpr function? How do I know what will make a function not constexpr?
And second, without the contract being in the compiler, via a declaration of my intent to make the function constexpr, how would the compiler be able to verify that my function conforms to that contract? It couldn't; it would have to wait until I use it in a constant expression before I find that it isn't actually a proper constexpr function.
Contracts are best stated explicitly and up-front.
constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible
No, see below
the final goal is compile-time evaluation
No, see below.
so why doesn't it do so for each expression and evaluate at compile-time where possible?
Optimizers do things like that, as allowed under the as-if rule.
constexpr is not used to make things faster, it is used to allow usage of the result in contexts where a runtime-variable expression is illegal.
This is only my evaluation, but I believe your (b) reason is correct (that it forms part of the interface that the compiler can enforce). The interface requirement serves both for the writer of the code and the client of the code.
The writer may intend something to be usable in a compile-time context, but not actually use it in this way. If the writer violates the rules for constexpr, they might not find out until after publication, when clients who try to use it in a constexpr context find that it fails. Or, more realistically, the library might use the code in a constexpr sense in version 1, refactor this usage out in version 2, and break constexpr compatibility in version 3 without realizing it. By checking constexpr-compliance, the breakage in version 3 will be caught before deployment.
The interface for the client is more obvious --- an inline function won't silently become constexpr-required just because it happened to work and someone used it that way.
I don't believe your (a) reason (that it could take too long for the compiler) is applicable because (1) the compiler has to check much of the constexpr constraints anyway when the code is marked, (2) without the annotation, the compiler would only have to do the checking when used in a constexpr way (so most functions wouldn't have to be checked), and (3) IIUC the D programming language actually does allow functions to be compile-time evaluated if they meet requirements without any declaration assistance, so apparently it can be done.
I think I remember watching an early talk by Bjarne Stroustrup where he mentioned that programmers wanted fine grained control on this "dangerous" feature, from which I understand that they don't want things "accidentally" executed at compile time without them knowing. (Even if that sound like a good thing.)
There can be many reasons for that, but the only valid one is ultimately compilation speed, I think ((a) in your list).
It would be too much burden on the compiler to determine for every function if it could be computed at compile time.
This argument is weaker as compilation times in general go down.
Like many other features of C++, what ends up happening is that we end up with the "wrong defaults".
So you have to say when you want constexpr instead of when you don't want constexpr (runtimeexpr); you have to say when you want const instead of where you want mutable, etc.
Admittedly, you can imagine functions that take an absurd amount of time to run at compile time and that cannot be amortized (with other kinds of machine resources) at runtime.
(I am not aware that "time-out" can be a criterion in a compiler for constexpr, but it could be so.)
Or it could be that one is compiling in a system that is always expected to finish compilation in a finite time but an unbounded runtime is admissible (or debuggable).
I know that this question is old, but time has shown that it actually makes sense to have constexpr as the default:
In C++17, for example, you can declare a lambda constexpr, but more importantly, lambdas are constexpr by default if they can be.
https://learn.microsoft.com/en-us/cpp/cpp/lambda-expressions-constexpr
Note that lambdas have all the "right" (opposite) defaults: members (captures) are const by default, arguments are templates by default (auto), and now these functions are constexpr by default.
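A small illustration of that default (my own example, valid in C++17):

    // No constexpr keyword anywhere, yet the lambda's call operator is
    // implicitly constexpr because its body qualifies.
    auto square = [](int n) { return n * n; };

    constexpr int nine = square(3);   // OK: usable in a constant expression
    static_assert(nine == 9, "");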