Is it beneficial to make library functions templated to avoid compiler instructions? - c++

Let's say I'm creating my own library in namespace l. Would it be beneficial to make as many members of the namespace as possible templates? This would encourage the compiler to generate instructions only for members that are actually called by the user of the library. To make my point clear, I'll demonstrate it here:
1)
namespace l{
    template <typename = void>
    int f() { return 3; }
}
vs.
2)
namespace l{
    int f() { return 3; }
}
To show the difference, f is not called in main:
int main() { return EXIT_SUCCESS; }
Function 1) does not require additional instructions for l::f():
main:
push rbp
mov rbp, rsp
mov dword ptr [rbp - 4], 0
mov eax, 0
pop rbp
ret
Function 2) does require additional instructions for l::f() (even though, again, l::f() is not called):
l::f()
push rbp
mov rbp, rsp
mov eax, 3
pop rbp
ret
main:
push rbp
mov rbp, rsp
mov dword ptr [rbp - 4], 0
mov eax, 0
pop rbp
ret

tl;dr
Is it beneficial to make library functions templated to avoid compiler instructions?
No. Emitting dead code isn't the expensive part of compiling. File access, parsing and optimization (not necessarily in that order) take time, and this idea forces library clients to read & parse more code than in the regular header + library model.
Templates are usually blamed for slowing builds, not speeding them up.
It also means you can't build your library ahead of time, so each user needs to compile whichever parts they use from scratch, in every translation unit where they're used.
The total time spent compiling will probably be greater with the templated version. You'd have to profile to be sure (and I suspect this f is so small as to be immeasurable either way) but I have a hard time seeing this as a useful improvement.
Your comparison isn't representative anyway - a good compiler will discard dead code at link time. Some will also be able to inline code from static libraries, so there's no reliable effect on either compile-time or runtime performance.
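For contrast, a minimal sketch of the regular header + library model referred to above (file names and layout are illustrative): clients only parse the declaration, and the definition is compiled once, when the library is built.
// l.h -- what clients include: a declaration only
namespace l {
    int f();
}

// l.cpp -- compiled once, ahead of time, into the library
#include "l.h"
namespace l {
    int f() { return 3; }
}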

Related

Compiler optimization of static constexpr

Given the following C++ code:
#include <stdio.h>
static constexpr int x = 1;
void testfn() {
    if (x == 2)
        printf("This is test.\n");
}
int main() {
    for (int a = 0; a < 10; a++)
        testfn();
    return 0;
}
Visual Studio 2019 produces the following Debug build assembly (viewed using Approach 1 of the accepted answer at: How to view the assembly behind the code using Visual C++?)
int main() {
00EC1870 push ebp
00EC1871 mov ebp,esp
00EC1873 sub esp,0CCh
00EC1879 push ebx
00EC187A push esi
00EC187B push edi
00EC187C lea edi,[ebp-0CCh]
00EC1882 mov ecx,33h
00EC1887 mov eax,0CCCCCCCCh
00EC188C rep stos dword ptr es:[edi]
00EC188E mov ecx,offset _6D4A0457_how_compiler_treats_staticconstexpr#cpp (0ECC003h)
00EC1893 call #__CheckForDebuggerJustMyCode#4 (0EC120Dh)
for (int a = 0; a < 10; a++)
00EC1898 mov dword ptr [ebp-8],0
00EC189F jmp main+3Ah (0EC18AAh)
00EC18A1 mov eax,dword ptr [ebp-8]
00EC18A4 add eax,1
00EC18A7 mov dword ptr [ebp-8],eax
00EC18AA cmp dword ptr [ebp-8],0Ah
00EC18AE jge main+47h (0EC18B7h)
testfn();
00EC18B0 call testfn (0EC135Ch)
00EC18B5 jmp main+31h (0EC18A1h)
return 0;
00EC18B7 xor eax,eax
}
As can be seen in the assembly, possibly because this is a Debug build, there are pointless references to the for loop and testfn in main. I would have hoped that they would not appear in the assembly code at all, given that the printf in testfn can never be hit since static constexpr int x = 1.
I have 2 questions:
(1) Perhaps in the Release build, the for loop is optimized away. How can I check this? Viewing the release build assembly code does not work for me even when using Approach 2 specified at: How to view the assembly behind the code using Visual C++?. The file with the assembly code is not produced at all.
(2) When using static constexpr int/double/char as opposed to #defines, under what circumstances is one guaranteed that the former does not involve any unnecessary overhead (runtime computations/evaluations)? #defines, though much maligned, seem to offer a much greater guarantee than static constexpr in this regard.
The issue here is that you are compiling the code as a debug build. If you want sanity in the asm, compile as release instead. The point of a debug build is to help you confirm the logic of the underlying code. The logic of your code is that it should call testfn() 10 times; as a result, you should be able to place a breakpoint on that method and hit it at the correct point in the execution. In a release build, that breakpoint would never be hit (because the call would have been optimised away).
In your case however, it's entirely incorrect to say that the constexpr is being ignored. You may notice that there are no calls to printf() in the generated asm, so the compiler has correctly identified that if (x == 2) can never be true, and has removed it. However, if the compiler removed the call to testfn() completely, your breakpoint would never be hit, and the debugger would basically be useless.
Don't look at the output of a debug build and imagine it tells you anything useful about the code or compiler. You should expect the code to be deliberately de-optimised.
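As an aside on question (2): if the guarantee you want is that the dead branch disappears even in unoptimised builds, C++17's if constexpr is worth knowing about. A minimal sketch (not from the original answer; it assumes a C++17 compiler):
#include <cstdio>

static constexpr int x = 1;

void testfn() {
    // The condition is evaluated at compile time and the false branch
    // is discarded, so in practice no call to printf is emitted even
    // in a debug build.
    if constexpr (x == 2)
        std::printf("This is test.\n");
}

int main() {
    for (int a = 0; a < 10; a++)
        testfn();
    return 0;
}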

C++ constexpr function in return statement

Why is a constexpr function not evaluated at compile time, but at runtime, in the return statement of the main function?
I tried
template<int x>
constexpr int fac() {
    return fac<x - 1>() * x;
}
template<>
constexpr int fac<1>() {
    return 1;
}
int main() {
    const int x = fac<3>();
    return x;
}
and the result is
main:
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-4], 6
mov eax, 6
pop rbp
ret
with gcc 8.2. But when I call the function in the return statement
template<int x>
constexpr int fac() {
    return fac<x - 1>() * x;
}
template<>
constexpr int fac<1>() {
    return 1;
}
int main() {
    return fac<3>();
}
I get
int fac<1>():
push rbp
mov rbp, rsp
mov eax, 1
pop rbp
ret
main:
push rbp
mov rbp, rsp
call int fac<3>()
nop
pop rbp
ret
int fac<2>():
push rbp
mov rbp, rsp
call int fac<1>()
add eax, eax
pop rbp
ret
int fac<3>():
push rbp
mov rbp, rsp
call int fac<2>()
mov edx, eax
mov eax, edx
add eax, eax
add eax, edx
pop rbp
ret
Why is the first code evaluated at compile time and the second at runtime?
Also, I tried both snippets with clang 7.0.0 and both are evaluated at runtime. Why is this not valid constexpr for clang?
All evaluation was done in godbolt compiler explorer.
A common misconception with regard to constexpr is that it means "this will be evaluated at compile time"[1].
It is not. constexpr was introduced to let us write natural code that may produce constant expressions in contexts that need them. It means "this must be evaluatable at compile time", which is what the compiler will check.
So if you wrote a constexpr function returning an int, you can use it to calculate a template argument, an initializer for a constexpr variable (also const if it's an integral type) or an array size. You can use the function to obtain natural, declarative, readable code instead of the old meta-programming tricks one needed to resort to in the past.
But a constexpr function is still a regular function. The constexpr specifier doesn't mean a compiler has[2] to optimize it to heck and do constant folding at compile time. It's best not to confuse it for such a hint.
[1] Thanks user463035818 for the phrasing.
[2] C++20 and consteval is a different story, however :)
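To see the distinction in code: put the result into a context that requires a constant expression and the compiler must evaluate it at compile time. A minimal sketch, using a plain constexpr function instead of the question's template:
constexpr int fac(int x) {
    return x <= 1 ? 1 : fac(x - 1) * x;
}

int main() {
    constexpr int a = fac(3); // constant expression required: compile-time
    int b = fac(3);           // no such requirement: may run at run time
    return a + b;
}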
StoryTeller's answer is good, but I think there's a slightly different take possible.
With constexpr, there are three situations to distinguish:
The result is needed in a compile-time context, such as array sizes. In this case, the arguments too must be known at compile time. Evaluation is probably at compile time, and at least all diagnosable errors will be found at compile time.
The arguments are only known at run time, and the result is not needed at compile time. In this case, evaluation necessarily has to happen at run time.
The arguments may be available at compile time, but the result is needed only at run time.
The fourth combination (arguments available only at runtime, result needed at compile time) is an error; the compiler will reject such code.
Now in cases 1 and 3 the calculation could happen at compile time, as all inputs are available. But to facilitate case 2, the compiler must be able to create a run-time version, and it may decide to use this variant in the other cases as well - if it can.
E.g. some compilers internally support variable-sized arrays, so even though the language requires compile-time array bounds, the implementation may decide not to evaluate them at compile time.
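A sketch of the three cases, assuming a trivial constexpr helper:
constexpr int square(int n) { return n * n; }

int arr[square(3)];         // case 1: result needed at compile time

int at_runtime(int n) {
    return square(n);       // case 2: argument known only at run time
}

int maybe_folded() {
    return square(3);       // case 3: could be folded, but need not be
}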

Macro replaces itself using undesired braces

I have problems with macros, since they are being expanded with braces included.
Since I will need to compile for different operating systems [WINDOWS, OSX, ANDROID, iOS], I'm trying to use typedefs for the basic C++ types, to replace them easily and test performance.
Since I'm doing lots of static_casts, I thought I could use a macro to do the cast only when it's needed (CPU usage is critical in my software). In this way, the static_cast will only be performed when the types are different, instead of doing weird things like this:
const int tv = 8;
const int tvc = static_cast<int>(8);
So, depending on whether FORCE_USE32 is enabled, it would choose the best version:
But Visual Studio 2017, using the default compiler, gives me an error when I do something like this:
#ifndef FORCE_USE32
#define FORCE_USE32 0
#endif
#if FORCE_USE32
typedef int s08;
#define Cs08(v) {v}
#else
typedef char s08;
#define Cs08(v) {static_cast<s08>(v)}
#endif
// this line gives me an error because Cs08 is replaced by {static_cast<s08>(1)} instead of just static_cast<s08>(1)
std::array<s08, 3> myArray{Cs08(1), 0, 0};
I know I could solve this easily by creating a variable before defining the array, something like this:
const s08 tempVar = Cs08(1);
std::array<s08, 3> myArray{tempVar, 0, 0};
But I do not understand the reason, and I want to keep my code as clean as possible. Is there any way to include the macro inside the array definition?
You are trying to solve a non-problem:
const int tvc = static_cast<int>(8);
will not use any CPU cycles here. How dumb do you think compilers are nowadays? Even with no optimizations, the above cast is a no-op (no operation); there won't be any additional instructions generated for the cast.
auto test(int a) -> int
{
    return a;
}
auto test_cast(int a) -> int
{
    return static_cast<int>(a);
}
With no optimization enabled the two functions generate identical code:
test(int): # #test(int)
push rbp
mov rbp, rsp
mov dword ptr [rbp - 4], edi
mov eax, dword ptr [rbp - 4]
pop rbp
ret
test_cast(int): # #test_cast(int)
push rbp
mov rbp, rsp
mov dword ptr [rbp - 4], edi
mov eax, dword ptr [rbp - 4]
pop rbp
ret
With -O3 they become:
test(int): # #test(int)
mov eax, edi
ret
test_cast(int): # #test_cast(int)
mov eax, edi
ret
Coming back to how smart the compilers (actually the optimization algorithms) are: with optimizations enabled a compiler can do crazy, crazy things, like loop unrolling, converting a recursive function to an iterative one, removing entirely redundant code, and on and on. What you are doing is premature optimization. If your code is performance critical, then you need a decent understanding of assembly, compiler optimizations and system architecture. And then you don't just blindly optimize what you think is slow. You write code for readability first, and then you profile.
Answering your macro problem: just remove the {} from the macro definitions:
#define Cs08(v) v
#define Cs08(v) static_cast<s08>(v)
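With the braces gone, the macro expands to a plain expression and the original line compiles. A quick sketch of the non-FORCE_USE32 branch:
#include <array>

typedef char s08;
#define Cs08(v) static_cast<s08>(v)

// expands to: std::array<s08, 3> myArray{static_cast<s08>(1), 0, 0};
std::array<s08, 3> myArray{Cs08(1), 0, 0};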

Unnecessary pop instructions in functions with early if statement

While playing around with godbolt.org, I noticed that gcc (6.2, 7.0 snapshot), clang (3.9) and icc (17), when compiling something close to
int a(int* a, int* b) {
    if (b - a < 2) return *a = ~*a;
    // register intensive code here e.g. sorting network
}
compile this (-O2/-O3) into something like this:
push r15
mov rax, rcx
push r14
sub rax, rdx
push r13
push r12
push rbp
push rbx
sub rsp, 184
mov QWORD PTR [rsp], rdx
cmp rax, 7
jg .L95
not DWORD PTR [rdx]
.L162:
add rsp, 184
pop rbx
pop rbp
pop r12
pop r13
pop r14
pop r15
ret
which obviously has a huge overhead in the case of b - a < 2. With -Os, gcc compiles to:
mov rax, rcx
sub rax, rdx
cmp rax, 7
jg .L74
not DWORD PTR [rdx]
ret
.L74:
Which leads me to believe that there is nothing keeping the compiler from emitting this shorter code.
Is there a reason why compilers do this? Is there a way to get them to compile to the shorter version without compiling for size?
Here's an example on Godbolt that reproduces this. It seems to have something to do with the complex part being recursive.
This is a known compiler limitation, see my comments on the question. IDK why it exists; maybe it's hard for compilers to decide what they can do without spilling when they haven't finished saving regs yet.
Pulling the early-out check into a wrapper is often useful when it's small enough to inline.
Looks like modern gcc can actually sidestep this compiler limitation sometimes.
Using your example on the Godbolt compiler explorer, adding a second caller is enough to get even gcc6.1 -O2 to split the function for you, so it can inline the early-out into the second caller and into the externally visible square() (which ends with jmp square(int*, int*) [clone .part.3] if the early-out return path isn't taken).
Code on Godbolt; note I added -std=gnu++14, which is required for clang to compile your code.
void square_inlinewrapper(int* a, int* b) {
    //if (b - a < 16) return; // gcc inlines this part for us, and calls a private clone of the function!
    return square(a, b);
}
# gcc6.1 -O2 (default / generic -march= and -mtune=)
mov rax, rsi
sub rax, rdi
cmp rax, 63
jg .L9
rep ret
.L9:
jmp square(int*, int*) [clone .part.3]
square() itself compiles to the same thing, calling the private clone which has the bulk of the code. The recursive calls from inside the clone call the wrapper function, so they don't do the extra push/pop work when it's not needed.
Even gcc7 doesn't do this when there's no other caller, even at -O3. It does still transform one of the recursive calls into a loop, but the other one just calls the big function again.
Clang 3.9 and icc17 don't clone the function, either, so you should write the inlineable wrapper manually (and change the main body of the function to use it for recursive calls, if the check is needed there).
You might want to name the wrapper square, and rename just the main body to a private name (like static void square_impl).
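A sketch of that manual arrangement in a single file, under those naming assumptions (the early-out threshold and the recursion are illustrative only):
static void square_impl(int* a, int* b);  // private body: the register-heavy part

// Public wrapper: just the cheap early-out check, easy to inline at call sites.
void square(int* a, int* b) {
    if (b - a < 16) return;
    square_impl(a, b);
}

static void square_impl(int* a, int* b) {
    // register-intensive code lives here; recursive calls go through
    // square() so each level re-checks the early-out before paying
    // this function's big prologue/epilogue.
    square(a + 1, b);  // illustrative recursion only
}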

Register keyword in C++

What is the difference between
int x=7;
and
register int x=7;
?
I am using C++.
register is a hint to the compiler, advising it to store that variable in a processor register instead of memory (for example, instead of the stack).
The compiler may or may not follow that hint.
According to Herb Sutter in "Keywords That Aren't (or, Comments by Another Name)":
A register specifier has the same semantics as an auto specifier...
According to Herb Sutter, register is "exactly as meaningful as whitespace" and has no effect on the semantics of a C++ program.
In C++ as it existed in 2010, any valid program that uses the keywords "auto" or "register" will be semantically identical to one with those keywords removed (unless they appear in stringized macros or other similar contexts). In that sense the keywords are useless for properly-compiling programs. On the other hand, the keywords might be useful in certain macro contexts to ensure that improper usage of a macro will cause a compile-time error rather than producing bogus code.
In C++11 and later versions of the language, the auto keyword was re-purposed to act as a pseudo-type for objects which are initialized, which a compiler will automatically replace with the type of the initializing expression. Thus, in C++03, the declaration auto int i=(unsigned char)5; was equivalent to int i=5; when used within a block context, and auto i=(unsigned char)5; was a constraint violation. In C++11, auto int i=(unsigned char)5; became a constraint violation, while auto i=(unsigned char)5; became equivalent to unsigned char i=5;.
With today's compilers, probably nothing. It was originally a hint to place a variable in a register for faster access, but most compilers today ignore that hint and decide for themselves.
register is deprecated in C++11. It is unused and reserved in C++17.
Source: http://en.cppreference.com/w/cpp/keyword/register
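A one-liner to see the status change for yourself (a sketch; exact diagnostics vary by compiler):
int main() {
    register int x = 7; // deprecated in C++11/14; ill-formed in C++17 mode
    return x;
}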
Almost certainly nothing.
register is a hint to the compiler that you plan on using x a lot, and that you think it should be placed in a register.
However, compilers are now far better at determining what values should be placed in registers than the average (or even expert) programmer is, so compilers just ignore the keyword and do what they want.
The register keyword was useful for:
Inline assembly.
Expert C/C++ programming.
Cacheable variables declaration.
An example of a production system where the register keyword was required:
typedef unsigned long long Out;
volatile Out out, tmp;
register Out rax asm("rax");        // pin the variable to the rax register (GCC extension)
asm volatile("rdtsc" : "=A"(rax));  // read the time-stamp counter into the variable
out = out * tmp + rax;              // use the value so the read isn't optimised away
It has been deprecated since C++11 and is unused and reserved in C++17.
As of gcc 9.3, compiling with -std=c++2a, register produces a compiler warning but still has the desired effect, behaving identically to C's register (in the respects covered by this answer) when compiling without -O1 through -Ofast optimisation flags. Using clang++-7 causes a compiler error, however. So yes, register optimisations only make a difference in a standard compilation with no -O optimisation flags, and they're basic optimisations that the compiler would figure out anyway even at -O1.
The only difference from C is that in C++ you are allowed to take the address of a register variable. That means the optimisation only occurs if you don't take the address of the variable or its aliases (to create a pointer) and don't take a reference to it. A reference only matters at -O0, because a reference also has an address: it is a const pointer on the stack, which, like a pointer, can be optimised off the stack when compiling with -Ofast (in fact references never appear on the stack with -Ofast, because unlike pointers they cannot be made volatile and their addresses cannot be taken). If you do take the address, register behaves as if you hadn't used it, and the value is stored on the stack.
At -O0 there is another difference: const register does not behave the same in gcc C and gcc C++. On gcc C, the register optimisation applies but block-scope const is not optimised, so const register behaves like plain register. On clang C, register does nothing and only the block-scope const optimisation applies. On gcc C++, the register and block-scope const optimisations combine.
#include <stdio.h> // yes, it's C code compiled as C++
int main(void) {
    const register int i = 3;
    printf("%d", i);
    return 0;
}
int i = 3;:
.LC0:
.string "%d"
main:
push rbp
mov rbp, rsp
sub rsp, 16
mov DWORD PTR [rbp-4], 3
mov eax, DWORD PTR [rbp-4]
mov esi, eax
mov edi, OFFSET FLAT:.LC0
mov eax, 0
call printf
mov eax, 0
leave
ret
register int i = 3;:
.LC0:
.string "%d"
main:
push rbp
mov rbp, rsp
push rbx
sub rsp, 8
mov ebx, 3
mov esi, ebx
mov edi, OFFSET FLAT:.LC0
mov eax, 0
call printf
mov eax, 0
mov rbx, QWORD PTR [rbp-8] //callee restoration
leave
ret
const int i = 3;:
.LC0:
.string "%d"
main:
push rbp
mov rbp, rsp
sub rsp, 16
mov DWORD PTR [rbp-4], 3 //still saves to stack
mov esi, 3 //immediate substitution
mov edi, OFFSET FLAT:.LC0
mov eax, 0
call printf
mov eax, 0
leave
ret
const register int i = 3;:
.LC0:
.string "%d"
main:
push rbp
mov rbp, rsp
mov esi, 3 //loads straight into esi saving rbx push/pop and extra indirection (because C++ block-scope const is always substituted immediately into the instruction)
mov edi, OFFSET FLAT:.LC0 // can't optimise away because printf only takes const char*
mov eax, 0 //zeroed: https://stackoverflow.com/a/6212755/7194773
call printf
mov eax, 0 //default return value of main is 0
pop rbp //nothing else pushed to stack -- more efficient than leave (rsp == rbp already)
ret
register tells the compiler to (1) store a local variable in a callee-saved register (rbx in this case), and (2) optimise out the stack writes if the address of the variable is never taken. const tells the compiler to substitute the value immediately (instead of assigning it a register or loading it from memory), while still writing the local variable to the stack as default behaviour. const register is the combination of these two optimisations. This is as slimline as it gets.
Also, on gcc C and C++, register on its own seems to create a random 16-byte gap on the stack for the first local on the stack, which doesn't happen with const register.
Compiling with -Ofast, however, register has zero optimisation effect: if a value can be put in a register or made immediate, it always will be, and if it can't, it won't be. const still optimises out the load in C and C++, but only at file scope; volatile still forces values to be stored to and loaded from the stack.
.LC0:
.string "%d"
main:
//optimises out push and change of rbp
sub rsp, 8 //https://stackoverflow.com/a/40344912/7194773
mov esi, 3
mov edi, OFFSET FLAT:.LC0
xor eax, eax //xor 2 bytes vs 5 for mov eax, 0
call printf
xor eax, eax
add rsp, 8
ret
Consider a case where the compiler's optimizer has two variables and is forced to spill one onto the stack, and both variables happen to have the same weight to the compiler. Given there is no difference, the compiler will arbitrarily spill one of them. On the other hand, the register keyword gives the compiler a hint as to which variable will be accessed more frequently. It is similar to the x86 prefetch instruction, but for the compiler's optimizer.
Obviously register hints are similar to user-provided branch-probability hints, and can even be inferred from them: if the compiler knows that some branch is taken often, it will keep the branch-related variables in registers. So I suggest caring more about branch hints and forgetting about register. Ideally your profiler should communicate somehow with the compiler and spare you from even thinking about such nuances.
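For completeness, here is what such a branch-probability hint can look like; a sketch using C++20's [[likely]]/[[unlikely]] attributes (GCC's older spelling is __builtin_expect):
int process(const int* p, bool rare_error) {
    if (rare_error) [[unlikely]] {
        return -1;          // cold path: the compiler may move it out of line
    }
    return *p + 1;          // hot path: its variables tend to stay in registers
}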