I was searching the web for a proper solution, without much success.
So I hope one of you knows something about it: is there any way to detect the "Intel Bit Manipulation Instruction Set 2" (BMI2) at compile time? I want to make some things conditional on its availability.
With GCC you can check for the __BMI2__ macro. This macro is defined if the target supports BMI2 (e.g. -mbmi2, -march=haswell). It is the same macro the intrinsic headers (x86intrin.h, bmi2intrin.h) use to check for BMI2 at compile time.
For runtime checks, __builtin_cpu_init() and __builtin_cpu_supports("bmi2") can be used in modern GCC (tested with GCC 5.1; 4.9 and earlier don't have it).
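For illustration, here is a minimal sketch of the compile-time path. deposit_bits() is a hypothetical helper, not a standard API; with -mbmi2 it compiles to a single PDEP instruction, otherwise a portable bit-by-bit fallback runs.

#include <cstdint>
#ifdef __BMI2__
#include <immintrin.h>
#endif

uint64_t deposit_bits(uint64_t value, uint64_t mask) {
#ifdef __BMI2__
    return _pdep_u64(value, mask);       // single PDEP instruction
#else
    // Scatter the low bits of value into the positions selected by mask.
    uint64_t result = 0;
    for (uint64_t bit = 1; mask != 0; bit <<= 1) {
        uint64_t lowest = mask & -mask;  // isolate lowest set bit of mask
        if (value & bit)
            result |= lowest;
        mask &= mask - 1;                // clear that bit
    }
    return result;
#endif
}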
Run the CPUID instruction with EAX=7, ECX=0, then check bit 8 of the returned EBX register (the BMI2 flag; bit 3 is the BMI1 flag). Consult your compiler's documentation for how to invoke CPUID and read the results back.
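As a sketch, that query might look like this with GCC/Clang's <cpuid.h> or MSVC's <intrin.h> (cpu_has_bmi2 is a made-up helper name; __get_cpuid_count needs a reasonably recent GCC/Clang):

#include <cstdint>
#if defined(_MSC_VER)
#include <intrin.h>
#else
#include <cpuid.h>
#endif

bool cpu_has_bmi2() {
#if defined(_MSC_VER)
    int regs[4];                // EAX, EBX, ECX, EDX
    __cpuidex(regs, 7, 0);      // leaf 7, subleaf 0
    return (regs[1] >> 8) & 1;  // EBX bit 8 = BMI2
#else
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return false;           // CPU doesn't support leaf 7
    return (ebx >> 8) & 1;      // EBX bit 8 = BMI2
#endif
}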
Related
I have a program that makes heavy use of the intrinsics _BitScanForward / _BitScanForward64 (aka count trailing zeros, TZCNT, CTZ).
I would like to not merely use the intrinsic, but ensure the corresponding CPU instruction (available on Haswell and later) is actually emitted.
When using gcc or clang (where the intrinsic is called __builtin_ctz), I can achieve this by specifying either -march=haswell or -mbmi2 as compiler flags.
The documentation of _BitScanForward only specifies on which architectures the intrinsic is available ("x86, ARM, x64, ARM64" or "x64, ARM64"), but I don't just want it to be available; I want to ensure it is compiled to the CPU instruction rather than a function call. I also checked /Oi, but its documentation doesn't explain this either.
I also searched the web, but there are curiously few matches for my question; most results just explain how to use intrinsics, e.g. this question and this question.
Am I overthinking this, and will MSVC create code that magically uses the CPU instruction if the CPU supports it? Are any flags required? How can I ensure that the CPU instruction is used when available?
UPDATE
Here is what it looks like with Godbolt.
Please be nice, my assembly reading skills are pretty basic.
GCC uses tzcnt with haswell/bmi2, and otherwise falls back to rep bsf.
MSVC uses bsf without rep.
I also found this useful answer, which states that:
"Using a redundant rep prefix for bsr was generally defined to be ignored [...]". I wonder whether the same is true for bsf?
It explains (as I knew) that bsf is not the same as tzcnt, however MSVC doesn't appear to check for input == 0
This adds the questions: Why does bsf work for MSVC?
UPDATE
Okay, this was easy: I actually call _BitScanForward for MSVC. D'oh!
UPDATE
So I added a bit of unnecessary confusion here. Ideally I would use a __tzcnt intrinsic, but that doesn't exist in MSVC, so I resorted to _BitScanForward plus an extra check to account for zero input.
However, MSVC does support LZCNT (via the __lzcnt intrinsic), where I have a similar issue (though it is used less in my code).
A slightly updated question would be: how does MSVC deal with LZCNT (instead of TZCNT)?
Answer: see here. Specifically: "On Intel processors that don't support the lzcnt instruction, the instruction byte encoding is executed as bsr (bit scan reverse). If code portability is a concern, consider use of the _BitScanReverse intrinsic instead."
The article suggests falling back to bsr if older CPUs are a concern. To me, this implies that there is no compiler flag to control this; instead they suggest identifying the CPU via __cpuid and then calling either bsr or lzcnt depending on the result.
In short, MSVC has no support for different CPU architectures (beyond x86/64/ARM).
UPDATE
As #dewaffled pointed out, there are indeed _tzcnt_u32 / _tzcnt_u64 in the x64 intrinsics list.
I got misled by looking at the alphabetical listing of intrinsic functions on the left side of the pane. I wonder whether there is a distinction between "intrinsics" and "intrinsic functions", i.e. _tzcnt_u64 is an intrinsic but not an intrinsic function.
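For reference, a zero-safe wrapper along the lines discussed above might look like the following sketch. trailing_zeros is a hypothetical helper; the __AVX2__ guard is my assumption, since MSVC's /arch:AVX2 targets Haswell-class CPUs where TZCNT exists.

#include <cstdint>
#include <intrin.h>

unsigned trailing_zeros(uint64_t x) {
#ifdef __AVX2__
    // /arch:AVX2 implies a BMI1-capable CPU, so tzcnt is safe to use.
    return static_cast<unsigned>(_tzcnt_u64(x));
#else
    unsigned long index;
    if (!_BitScanForward64(&index, x))
        return 64;  // define the result for x == 0, matching TZCNT
    return static_cast<unsigned>(index);
#endif
}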
I'm just curious how this works in games and other software.
More precisely, I'm asking for a solution in C++.
Something like:
if AMX available -> Use AMX version of the math library
else if AVX-512 available -> Use AVX-512 version of the math library
else if AVX-256 available -> Use AVX-256 version of the math library
etc.
The basic idea I have is to compile the library into different DLLs and swap them at runtime, but that doesn't seem like the best solution to me.
For the detection part
See Are the xgetbv and CPUID checks sufficient to guarantee AVX2 support? which shows how to detect CPU and OS support for new extensions: cpuid and xgetbv, respectively.
ISA extensions that add new/wider registers that need to be saved/restored on context switch also need to be supported and enabled by the OS, not just the CPU. New instructions like AVX-512 will still fault on a CPU that supports them if the OS hasn't set a control-register bit. (Effectively promising that it knows about them and will save/restore them.) Intel designed things so the failure mode is faulting, not silent corruption of registers on CPU migration, or context switch between two programs using the extension.
Extensions that added new or wider registers are AVX, AVX-512F, and AMX. OSes need to know about them. (AMX is very new, and adds a large amount of state: 8 tile registers T0-T7 of 1KiB each. Apparently OSes need to know about AMX for power-management to work properly.)
OSes don't need to know about AVX2/FMA3 (still YMM0-15), or any of the various AVX-512 extensions which still use k0-k7 and ZMM0-31.
There's no OS-independent way to detect OS support of SSE, but fortunately it's old enough that these days you don't have to. It and SSE2 are baseline for x86-64. Everything up to SSE4.2 uses the same register state (XMM0-15) so OS support for SSE1 is sufficient for user-space to use SSE4.2. SSE1 was new in 1999, with Pentium 3.
Different compilers have different ways of doing CPUID and xgetbv detection. See does gcc's __builtin_cpu_supports check for OS support? - unfortunately no, only CPUID, at least when that was asked. I'd consider that a GCC bug, but I don't know if it ever got reported or fixed.
For the optional-use part
Typically this means setting function pointers to the selected versions of some important functions. Inlining through function pointers isn't generally possible, so choose the boundaries appropriately, e.g. an AVX-512 version of a function that contains a whole loop, not just a single vector.
GCC's function multi-versioning can automate that for you, transparently compiling multiple versions and hooking some function-pointer setup.
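A minimal sketch of the manual function-pointer approach, under stated assumptions: all names here are hypothetical, and the cpu_supports_* declarations stand in for the CPUID + xgetbv logic described above.

#include <cstddef>

// Variants compiled in separate translation units with -mavx512f,
// -mavx2, and baseline flags respectively (hypothetical functions).
float sum_avx512(const float* p, size_t n);
float sum_avx2(const float* p, size_t n);
float sum_scalar(const float* p, size_t n);

// Stand-ins for the CPU + OS support checks discussed above.
bool cpu_supports_avx512();
bool cpu_supports_avx2();

using sum_fn = float (*)(const float*, size_t);

static sum_fn resolve_sum() {
    if (cpu_supports_avx512()) return sum_avx512;
    if (cpu_supports_avx2())   return sum_avx2;
    return sum_scalar;
}

// Resolved once at startup; every later call is one indirect call.
static const sum_fn sum = resolve_sum();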
There have been some previous Q&As about this with different compilers, search for "CPU dispatch avx" or something like that, along with other search terms.
See The Effect of Architecture When Using SSE / AVX Intrinsics to understand the difference between GCC/clang's model for intrinsics, where you have to enable -march=skylake or whatever, or manually -mavx2, before you can use an intrinsic, vs. MSVC and classic ICC, where you can use any intrinsic anywhere, even to emit instructions the compiler wouldn't be able to auto-vectorize with. (Those compilers can't or don't optimize intrinsics much at all, perhaps because that could lead to them getting hoisted out of if(cpu) checks.)
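With GCC/clang you can also scope the extension to a single function via the target attribute instead of flagging the whole translation unit; a minimal sketch (add8 is a hypothetical function and must only be called after a runtime check):

#include <immintrin.h>

__attribute__((target("avx2")))
void add8(float* dst, const float* a, const float* b) {
    // AVX/AVX2 intrinsics are allowed here even without -mavx2 globally.
    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    _mm256_storeu_ps(dst, _mm256_add_ps(va, vb));
}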
Windows provides IsProcessorFeaturePresent but AVX support is not on the list.
For more detailed detection you need to ask the CPU directly; on x86 this means the CPUID instruction. Visual C++ provides the __cpuidex intrinsic for this. In your case, use function/leaf 1 and check bit 28 in ECX. Wikipedia has a decent article, but you really should download the Intel instruction set manual to use as a reference.
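A sketch of that check with MSVC intrinsics, including the OSXSAVE/xgetbv step from the Q&A linked above (os_and_cpu_support_avx is a made-up name):

#include <intrin.h>
#include <immintrin.h>

bool os_and_cpu_support_avx() {
    int regs[4];                         // EAX, EBX, ECX, EDX
    __cpuidex(regs, 1, 0);               // leaf 1
    bool osxsave = (regs[2] >> 27) & 1;  // OS uses XSAVE/XRSTOR
    bool avx     = (regs[2] >> 28) & 1;  // CPU implements AVX
    if (!osxsave || !avx)
        return false;
    // XCR0 bits 1 (SSE state) and 2 (AVX state) must both be enabled.
    unsigned long long xcr0 = _xgetbv(_XCR_XFEATURE_ENABLED_MASK);
    return (xcr0 & 0x6) == 0x6;
}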
When cross-compiling with clang's -target option, targeting the same architecture and hardware as the native system, I've noticed that clang generates worse code than the natively built counterpart when the <sys> part of the triple is none.
Consider this simple code example:
int square(int num) {
    return num * num;
}
When optimized at -O3 with -target x86_64-linux-elf, the native x86_64 target code generation yields:
square(int):
mov eax, edi
imul eax, edi
ret
The code generated with -target x86_64-none-elf yields:
square(int):
push rbp
mov rbp, rsp
mov eax, edi
imul eax, edi
pop rbp
ret
Live Example
Despite the same hardware and optimization flags, clearly an optimization is being missed. The problem goes away if none is replaced with linux in the target triple, despite no system-specific features being used.
At first glance it may look like it simply isn't optimizing at all, but different code segments show that it is performing some optimizations, just not all. For example, loop-unrolling is still occurring.
Although the above example uses x86_64, in practice this issue is generating code bloat for an armv7-based constrained embedded system, and I've noticed several missed optimizations in certain circumstances, such as:
Not removing unnecessary setup/cleanup instructions (same as in x86_64)
Not coalescing certain sequential inlined increments into a single add instruction (at -Os, when inlining vector-like push_back calls; this is optimized when built natively on an armv7 system)
Not coalescing adjacent small integer values into a single mov (such as merging a 32-bit int with a bool in an optional implementation; this is optimized when built natively on an armv7 system)
etc
I would like to know what, if anything, I can do to achieve the same optimizations when cross-compiling as when compiling natively. Are there any relevant tuning flags that are somehow implied by the <sys> part of the triple?
If possible, I'd love some insight into why the cross-compilation target fails to optimize certain things simply because the system is different, despite the architecture and ABI being the same. My understanding is that LLVM's optimizer operates on the IR, which should produce effectively the same optimizations as long as nothing relies on the target system itself.
TL;DR: for x86 targets, frame pointers are enabled by default when the OS is unknown. You can manually disable them with -fomit-frame-pointer. For ARM platforms, you certainly need to provide more information so the backend can deduce the target ABI. Use -emit-llvm to check which part of Clang/LLVM generates the inefficient code.
The Application Binary Interface (ABI) can change from one target to another. There is no standard ABI in C; the chosen one depends on several parameters, including the architecture, its version, the vendor, the OS, the environment, etc.
The -target parameter helps the compiler select an ABI. The problem is that x86_64-none-elf is not specific enough for the backend to generate fast code. In fact, I think it is not actually a valid target, since Clang prints a warning in this case and the same warning appears for bogus random targets. Surprisingly, the compiler still manages to generate a generic binary from the provided information. Targets like x86_64-windows and x86_64-linux work, as does x86_64-unknown-windows-cygnus (for Cygwin on Windows). You can get the list of platforms, OSes, environments, etc. that Clang supports from the code.
One particular aspect of the ABI is the calling convention, and calling conventions differ between operating systems. For example, x86-64 Linux platforms use the calling convention of the System V AMD64 ABI, while x86-64 Windows platforms use the Microsoft x64 calling convention (with __vectorcall as an opt-in variant). The situation is more complex for old x86 architectures. For more information, please read this and this.
In your case, without information about the OS, the compiler selects a generic configuration, resulting in the frame-pointer push/pop instructions being emitted. That said, the compiler still assumes that edi contains the argument passed to the function, meaning the chosen convention is the System V AMD64 one (or a derivative). The environment can play an important role in stack optimizations, such as whether the stack may be accessed from outside the function (e.g. by callee functions).
My initial guess was that the assembly backend disabled some optimizations due to the lack of information about the specified target, but this is not the case for x86-64 ELFs, since the same backend is selected. Note that the target is parsed by an architecture-specific backend (see here for example).
In fact, the issue comes from Clang emitting IR with the frame-pointer attribute set to "all". You can inspect the IR with -emit-llvm, and you can solve the problem with -fomit-frame-pointer.
On ARM, the problem can be different and may come from the assembly backend. You should certainly specify the target OS, or at least more information such as the sub-architecture (mainly for ARM), the vendor, and the environment.
Overall, it is reasonable to expect a more generic target to produce less efficient code due to the lack of information. If you want to dig further, please file an issue on the Clang or LLVM bug tracker to track down why this happens and/or let the developers fix it.
Related posts/links:
clang: how to list supported target architectures?
https://clang.llvm.org/docs/CrossCompilation.html
I want to know the inner workings of __builtin_popcount.
As far as I understand, it works differently on different CPUs.
Like many other built-ins, it translates into a specific CPU instruction if one is available on the target CPU, considerably speeding up the application.
For example, on x86_64 it translates to the popcnt instruction (popcntl in AT&T syntax).
Additional information can be found on GCC page: https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
It is also worth noting that the actual speedup is only seen if gcc is run with a -march flag targeting an architecture that supports the instruction, or with the flag that specifically enables it, -mpopcnt. Without either of those, gcc falls back to generic bit counting via bit operations.
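To illustrate the two paths, here is a sketch: the builtin compiles to a single popcnt with -mpopcnt, while the fallback does something along the lines of the classic SWAR bit trick below (an illustration of the technique, not necessarily GCC's exact library routine):

#include <cstdint>

// Portable SWAR popcount: sum bits in progressively wider fields.
unsigned popcount_generic(uint32_t x) {
    x = x - ((x >> 1) & 0x55555555u);                 // 2-bit sums
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u); // 4-bit sums
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;                 // 8-bit sums
    return (x * 0x01010101u) >> 24;                   // total in top byte
}

unsigned popcount(uint32_t x) {
    return __builtin_popcount(x); // one popcnt instruction with -mpopcnt
}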
We'd like to deploy our product on two different HW platforms: an i5 (typically an i5-7500, but older CPUs back to the 4100 must be supported) and an Atom (E3845).
Supporting the Atom is new. Running the current binaries on the E3845 doesn't work: "Illegal instruction". Disassembling in gdb doesn't show me exactly which instruction; it only says "(bad)".
Since both are x86, I'd like to deploy a single set of binaries, but other than exhaustive trial and error I don't know how to find which combination of gcc flags will generate code compatible with both CPUs.
P Brady's gcccpuopt.sh script looked promising but it doesn't support my CPUs
Looking at /proc/cpuinfo, here's the difference (the flags below are reported for the i3 but missing on the Atom):

Feature         Atom E3845   i3-4160    Description
Family          6            6
Model           55           60
3dnowprefetch   -            yes        3DNow! prefetch instructions
epb             -            yes        IA32_ENERGY_PERF_BIAS support
abm             -            yes        Advanced Bit Manipulation
avx             -            yes        Advanced Vector Extensions
avx2            -            yes        Advanced Vector Extensions 2
bmi1            -            yes        Bit Manipulation Instructions, set 1
bmi2            -            yes        Bit Manipulation Instructions, set 2
eagerfpu        -            yes        Eager FPU state restore (Linux kernel feature)
f16c            -            yes        16-bit floating-point conversions
fma             -            yes        Fused multiply-add (FMA3)
fsgsbase        -            yes        User-mode access to FS/GS base (RDFSBASE/WRFSBASE etc.)
invpcid         -            yes        Invalidate Processor Context ID
pcid            -            yes        Process Context Identifiers
pdpe1gb         -            yes        One GB pages (allows hugepagesz=1G)
pln             -            yes        Intel Power Limit Notification
pts             -            yes        Intel Package Thermal Status
xsave           -            yes        Save Processor Extended States; also provides XGETBV, XRSTOR, XSETBV
xsaveopt        -            yes        Optimized XSAVE
I don't really know what all of those mean... Would I just disable (if possible) generation of everything in the i3 column? Or is there a better procedure for finding the settings?
The target environment is 32-bit CentOS 6 with a 3.10 kernel, GCC 4.9. The code is mostly C++ with some C.
To make this answer applicable to more use cases, I'll keep it generic and use the Atom and i5 as examples.
On each platform, run gcc -march=native -Q --help=target, as noted here.
Gather the options that are common to all platforms and either add them to your CFLAGS or make a wrapper that always adds them to your compiler command line (it could just be a shell script running /path/to/real-gcc $myflags "$@", where $myflags is your list of common flags). I have often had to resort to the wrapper method for some stubborn build systems that ignore $CFLAGS.
Compile as normal, ensuring that your CFLAGS get used.
If performance is acceptable, stop here; otherwise do a profile-guided optimization build.
If performance is still not acceptable, use your profile data to identify functions that may benefit from gcc's target_clones function attribute, or from a combination of the ifunc and target attributes (supported by clang), to generate sub-architecture-specific versions of each function that are resolved at run time; see the sketch after this list. (Note that in this specific case there may be no functions where this helps, since the i5 outperforms the Atom in most cases.)
If performance is still unacceptable, fix the code.
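As a sketch of the target_clones route (dot is a hypothetical function; GCC compiles one clone per listed target and installs an ifunc resolver that picks the best one at load time):

// Requires GCC 6+ (clang also supports target_clones on ELF targets).
__attribute__((target_clones("default", "sse4.2", "avx2")))
float dot(const float* a, const float* b, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i)
        s += a[i] * b[i];  // each clone is auto-vectorized differently
    return s;
}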