I want to know the inner workings of "__builtin_popcount".
As far as I understand, it works differently on different CPUs.
Like many other built-ins, it translates into a specific CPU instruction if one is available on the target CPU, considerably speeding up the application.
For example, on x86_64 it translates to the popcnt assembly instruction.
Additional information can be found on GCC page: https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
It is also worth noting that the actual speedup is only seen if gcc is run with a -march flag targeting an architecture that supports this instruction, or with the argument that specifically enables it, -mpopcnt. Without either of those, gcc falls back to generic bit counting via bit operations.
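As a quick illustration (the file and function names here are mine, not part of GCC):

    #include <cstdint>

    // With -mpopcnt (or a -march that implies it, e.g. -march=haswell),
    // GCC compiles this to a single popcnt instruction; without it, GCC
    // emits a generic bit-manipulation sequence or a libgcc call instead.
    int count_bits(std::uint32_t x) {
        return __builtin_popcount(x);
    }

You can check which form you get with g++ -O2 -mpopcnt -S popcount.cpp and a look at the generated assembly.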
Related
I have a program that makes heavy use of the intrinsics _BitScanForward / _BitScanForward64 (a.k.a. count trailing zeros, TZCNT, CTZ).
I would like to not merely use the intrinsic, but to ensure the corresponding CPU instruction (available on Haswell and later) is used.
When using gcc or clang (where the intrinsic is called __builtin_ctz), I can achieve this by specifying either -march=haswell or -mbmi2 as compiler flags (strictly speaking, TZCNT belongs to the BMI1 extension, which -mbmi enables; -march=haswell implies both).
The documentation of _BitScanForward only specifies on which architectures the intrinsic is available ("x86, ARM, x64, ARM64" or "x64, ARM64"), but I don't just want it to be available; I want to ensure it is compiled down to the CPU instruction rather than a call to an intrinsic function. I also checked /Oi, but that doesn't explain it either.
I also searched the web but there are curiously few matches for my question, most just explain how to use intrinsics, e.g. this question and this question.
Am I overthinking this, and will MSVC create code that magically uses the CPU instruction if the CPU supports it? Are any flags required? How can I ensure that the CPU instructions are used when available?
UPDATE
Here is what it looks like with Godbolt.
Please be nice, my assembly reading skills are pretty basic.
GCC uses tzcnt with haswell/bmi2, otherwise resorts to rep bsf.
MSVC uses bsf without rep.
I also found this useful answer, which states that:
"Using a redundant rep prefix for bsr was generally defined to be ignored [...]". I wonder whether the same is true for bsf?
It explains (as I knew) that bsf is not the same as tzcnt; however, MSVC doesn't appear to check for input == 0.
This raises a further question: why does bsf work for MSVC?
UPDATE
Okay, this one was easy: I actually call _BitScanForward for MSVC. Doh!
UPDATE
So I added a bit of unnecessary confusion here. Ideally I would like to use a __tzcnt intrinsic, but that doesn't exist in MSVC, so I resorted to _BitScanForward plus an extra check to account for an input of 0.
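For reference, a minimal sketch of that workaround (the helper name is mine; _BitScanForward returns 0 and leaves the index undefined when the input is 0):

    #include <intrin.h>

    // TZCNT-like semantics built on _BitScanForward: returns 32 for
    // x == 0, otherwise the index of the lowest set bit.
    unsigned long tz32(unsigned long x) {
        unsigned long index;
        return _BitScanForward(&index, x) ? index : 32;
    }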
However, MSVC supports LZCNT, where I have a similar issue (but it is used less in my code).
A slightly updated question would be: how does MSVC deal with LZCNT (instead of TZCNT)?
Answer: see here. Specifically: "On Intel processors that don't support the lzcnt instruction, the instruction byte encoding is executed as bsr (bit scan reverse). If code portability is a concern, consider use of the _BitScanReverse intrinsic instead."
The article suggests resorting to bsr if older CPUs are a concern. To me, this implies that there is no compiler flag to control this; instead, they suggest manually identifying the CPU via __cpuid and then calling either bsr or lzcnt depending on the result.
In short, MSVC has no support for different CPU architectures (beyond x86/64/ARM).
UPDATE
As @dewaffled pointed out, there are indeed _tzcnt_u32 / _tzcnt_u64 in the x64 intrinsics list.
I was misled by looking at the "Alphabetical listing of intrinsic functions" on the left side of the pane. I wonder whether there is a distinction between "intrinsics" and "intrinsic functions", i.e. _tzcnt_u64 is an intrinsic but not an intrinsic function.
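With those, the extra zero check becomes unnecessary; a minimal sketch (again, the helper name is mine):

    #include <immintrin.h>

    // Unlike BSF, TZCNT is well-defined for 0: it returns the operand
    // width (64 here). Note that on CPUs without BMI1, the TZCNT byte
    // encoding silently executes as BSF, so zero-input behavior differs.
    unsigned long long tz64(unsigned long long x) {
        return _tzcnt_u64(x);
    }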
I'm just curious how this works in games and other software.
More precisely, I'm asking for a solution in C++.
Something like:
    if AMX available -> use the AMX version of the math library
    else if AVX-512 available -> use the AVX-512 version of the math library
    else if AVX2 available -> use the AVX2 (256-bit) version of the math library
    etc.
The basic idea I have is to compile the library into different DLLs and swap them at runtime, but that doesn't seem like the best solution to me.
For the detection part
See Are the xgetbv and CPUID checks sufficient to guarantee AVX2 support? which shows how to detect CPU and OS support for new extensions: cpuid and xgetbv, respectively.
ISA extensions that add new/wider registers that need to be saved/restored on context switch also need to be supported and enabled by the OS, not just the CPU. New instructions like AVX-512 will still fault on a CPU that supports them if the OS hasn't set a control-register bit. (Effectively promising that it knows about them and will save/restore them.) Intel designed things so the failure mode is faulting, not silent corruption of registers on CPU migration, or context switch between two programs using the extension.
Extensions that added new or wider registers are AVX, AVX-512F, and AMX. OSes need to know about them. (AMX is very new, and adds a large amount of state: 8 tile registers T0-T7 of 1KiB each. Apparently OSes need to know about AMX for power-management to work properly.)
OSes don't need to know about AVX2/FMA3 (still YMM0-15), or any of the various AVX-512 extensions which still use k0-k7 and ZMM0-31.
There's no OS-independent way to detect OS support of SSE, but fortunately it's old enough that these days you don't have to. It and SSE2 are baseline for x86-64. Everything up to SSE4.2 uses the same register state (XMM0-15) so OS support for SSE1 is sufficient for user-space to use SSE4.2. SSE1 was new in 1999, with Pentium 3.
Different compilers have different ways of doing CPUID and xgetbv detection. See does gcc's __builtin_cpu_supports check for OS support? - unfortunately no, only CPUID, at least when that was asked. I'd consider that a GCC bug, but IDK if it ever got reported or fixed.
For the optional-use part
Typically this means setting function pointers to the selected versions of some important functions. Inlining through function pointers isn't generally possible, so choose the boundaries appropriately, e.g. an AVX-512 version of a function that contains a whole loop, not just a single vector operation.
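A sketch of that pattern, assuming one compiled implementation per ISA level (all names here are hypothetical, and the detection helper is left as a stub):

    #include <cstddef>

    // Hypothetical per-ISA implementations, each compiled in its own
    // translation unit with the matching -m flags:
    float dot_avx512(const float*, const float*, std::size_t);
    float dot_avx2  (const float*, const float*, std::size_t);
    float dot_scalar(const float*, const float*, std::size_t);

    // Stub for the CPUID + XGETBV detection described above.
    bool cpu_and_os_support(const char* feature);

    using dot_fn = float (*)(const float*, const float*, std::size_t);

    // Run detection once at startup; all later calls go through the pointer.
    dot_fn choose_dot() {
        if (cpu_and_os_support("avx512f")) return dot_avx512;
        if (cpu_and_os_support("avx2"))    return dot_avx2;
        return dot_scalar;
    }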
GCC's function multi-versioning can automate that for you, transparently compiling multiple versions and hooking up the function-pointer setup.
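For example, a minimal target_clones sketch (GCC 6+; recent clang also supports it):

    // GCC compiles one source function into several ISA-specific clones
    // plus a resolver that picks one at load time; "default" is required.
    __attribute__((target_clones("avx512f", "avx2", "default")))
    void scale(float* x, float s, unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            x[i] *= s;
    }

Keep in mind that the caveat above about __builtin_cpu_supports may apply to the generated resolver as well.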
There have been some previous Q&As about this with different compilers, search for "CPU dispatch avx" or something like that, along with other search terms.
See The Effect of Architecture When Using SSE / AVX Intrinsics to understand the difference between GCC/clang's model for intrinsics, where you have to enable -march=skylake (or whatever), or manually -mavx2, before you can use an intrinsic, and MSVC / classic ICC, where you can use any intrinsic anywhere, even to emit instructions the compiler wouldn't be able to auto-vectorize with. (Those compilers can't or don't optimize intrinsics much at all, perhaps because that could lead to them getting hoisted out of if(cpu) checks.)
Windows provides IsProcessorFeaturePresent but AVX support is not on the list.
For more detailed detection you need to ask the CPU directly. On x86 this means the CPUID instruction; Visual C++ provides the __cpuidex intrinsic for this. In your case, query function/leaf 1 and check bit 28 in ECX. Wikipedia has a decent article, but you really should download the Intel instruction set manual to use as a reference.
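A rough sketch of that check with MSVC (using the __cpuidex and _xgetbv intrinsics; the XGETBV part verifies that the OS actually saves/restores YMM state, per the detection discussion above):

    #include <intrin.h>  // __cpuidex, _xgetbv

    bool cpu_and_os_support_avx() {
        int r[4];                          // EAX, EBX, ECX, EDX
        __cpuidex(r, 1, 0);                // leaf 1, subleaf 0
        bool osxsave = (r[2] >> 27) & 1;   // OS has enabled XGETBV
        bool avx     = (r[2] >> 28) & 1;   // CPU supports AVX
        if (!osxsave || !avx)
            return false;
        // XCR0 bits 1 (XMM) and 2 (YMM) must both be set by the OS.
        unsigned long long xcr0 = _xgetbv(0); // 0 == _XCR_XFEATURE_ENABLED_MASK
        return (xcr0 & 0x6) == 0x6;
    }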
I already know that the SIMD instruction sets include SSE1 through SSE5.
But I haven't found much discussion of instruction sets that support a MIMD architecture.
In C++ we can use intrinsics to write "SIMD-running" code.
Is there any way to write "MIMD-running" code?
If MIMD is more powerful than SIMD, it would be better to write C++ code that supports MIMD.
Is my thinking correct?
The Wikipedia page Flynn's taxonomy describes MIMD as:
Multiple autonomous processors simultaneously executing different instructions on different data. MIMD architectures include multi-core superscalar processors, and distributed systems, using either one shared memory space or a distributed memory space.
Any time you divide an algorithm into parts (into threads using OpenMP, for example), you may be using MIMD. Generally, you don't need a special "MIMD instruction set" - the ISA is the same as for SISD, as each instruction stream operates independently of the others, on its own data. EPIC (explicitly parallel instruction computing) is an alternative approach where the functional units operate in lockstep, but with independent(ish) instructions and data.
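For instance, a minimal OpenMP sketch (compile with -fopenmp; without it the pragma is ignored and the loop simply runs serially):

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> v(1000000, 1.0);
        double sum = 0.0;
        // Each thread executes its own instruction stream on its own
        // chunk of the data: MIMD in Flynn's sense.
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < static_cast<long>(v.size()); ++i)
            sum += v[i] * v[i];
        std::printf("%f\n", sum);
    }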
As to which is "more powerful" (or more energy-efficient, or lowest latency, or whatever matters in your use case), there's no single answer. As with many complex issues, "it depends".
Is my thinking correct?
It is certainly naive and implementation-specific. Remember the following facts:
optimizing compilers generate very clever code (when you enable optimizations). Try, for example, a recent GCC invoked as g++ -march=native -O3 -Wall (and perhaps also -fverbose-asm -S if you want to look at the generated assembler code); see CppCon 2017: Matt Godbolt's talk “What Has My Compiler Done for Me Lately? Unbolting the Compiler's Lid”
there are some extensions (done through standardized pragmas) that improve optimization for MIMD; look into OpenMP and OpenACC.
consider explicit parallelization approaches: multi-threading (read some pthread programming tutorial; see the std::thread sketch after this list), MPI...
look also into dialects for GPGPU computing like OpenCL & CUDA.
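That std::thread sketch of explicit multi-threading, where each thread is an independent instruction stream working on its own half of the data:

    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> data(1000000, 1);
        long long lo = 0, hi = 0;
        auto mid = data.begin() + data.size() / 2;
        // Two independent instruction streams over two halves of the data.
        std::thread t1([&] { lo = std::accumulate(data.begin(), mid, 0LL); });
        std::thread t2([&] { hi = std::accumulate(mid, data.end(), 0LL); });
        t1.join();
        t2.join();
        std::printf("%lld\n", lo + hi);
    }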
See also this answer to a related question.
If MIMD is more powerful than SIMD, it would be better to write C++ code that supports MIMD.
Certainly not always, if you just care about performance. As usual, it depends, and you need to benchmark.
In this highly voted answer to a question on the performance differences between C++ and Java, I learned that the JIT compiler is sometimes able to optimize better because it can determine the exact specifics of the machine (processor, cache sizes, etc.):
Generally, C# and Java can be just as fast or faster because the JIT compiler -- a compiler that compiles your IL the first time it's executed -- can make optimizations that a C++ compiled program cannot because it can query the machine. It can determine if the machine is Intel or AMD; Pentium 4, Core Solo, or Core Duo; or if it supports SSE4, etc.

A C++ program has to be compiled beforehand, usually with mixed optimizations, so that it runs decently well on all machines, but is not optimized as much as it could be for a single configuration (i.e. processor, instruction set, other hardware).
Question: Is there a way to tell the compiler to optimize specifically for my current machine? Is there a compiler which is able to do this?
For GCC, you can use the flag -march=native. Be aware that the generated code may not run on other CPUs because
GCC uses this name to determine what kind of instructions it can emit when generating assembly code.
So CPU-specific assembly can be generated.
If you want your code to run on other CPU types, but tune it for better performance on your CPU, then you should use -mtune=native:
Specify the name of the processor to tune the performance for. The code will be tuned as if the target processor were of the type specified in this option, but still using instructions compatible with the target processor specified by a -mcpu= option.
Certainly a compiler can be instructed to optimize for a specific architecture. This is true of gcc, if you look at the multitude of architecture flags that you can pass in. The same is true to a lesser extent of Visual Studio, with its /MACHINE and /arch options.
However, unlike in Java, this likely means that the generated code is only safe to run on the hardware being targeted. The assertion that Java can be just as fast or faster likely holds only against generically compiled C++ code. Given a target architecture, C++ code compiled for that specific architecture will likely be as fast as or faster than equivalent Java code. Of course, it's much more work to support multiple architectures this way.
Are the following functions executed in a single clock cycle?
__builtin_popcount
__builtin_ctz
__builtin_clz
Also, what is the number of clock cycles for the ll (64-bit) versions of the same?
Are they portable? Why or why not?
Do these functions execute in a single clock-cycle?
Not necessarily. On architectures where they can be implemented with a single instruction, they will typically be the fastest way to compute that function (but still not necessarily a single clock cycle). On architectures where they cannot be implemented as a single instruction, their performance is less certain.
On my processor (a Core 2 Duo), __builtin_ctz and __builtin_clz can be implemented with a single instruction (Bit Scan Forward and Bit Scan Reverse). However, __builtin_popcount cannot be implemented with a single instruction on my processor. For __builtin_popcount, gcc 4.7.2 calls a library function, while clang 3.1 generates an inline instruction sequence (implementing this bit twiddling hack). Clearly, the performance of those two implementations will not be the same.
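For reference, the bit-twiddling popcount mentioned above looks roughly like this (the classic SWAR formulation; not necessarily the exact sequence clang emits):

    #include <cstdint>

    // SWAR population count: sum bits in 2-, 4-, then 8-bit groups,
    // then accumulate the per-byte counts with the final multiply.
    int popcount32(std::uint32_t x) {
        x = x - ((x >> 1) & 0x55555555u);
        x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);
        x = (x + (x >> 4)) & 0x0F0F0F0Fu;
        return static_cast<int>((x * 0x01010101u) >> 24);
    }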
Are they portable?
They are not portable across compilers. They originated with GCC (as far as I know), and are also implemented in some other compilers such as Clang.
Compilers that do support these functions may provide them for multiple architectures, but implementation quality (performance) is likely to vary.
__builtin functions like these are used to access specific machine instructions in a somewhat easier way than inline assembly. It makes sense to use them if you need to achieve the highest performance and are willing either to sacrifice portability or to provide an alternate implementation for compilers or platforms where these functions are not available. If optimal low-level performance is your goal, you should also check the assembly output of the compiler to determine whether it really generates the instruction you expect it to use.
You can get a first idea of what your compiler does by compiling with -O3 -march=native -S and reading the assembler output. There you can check whether the call resolves to just one assembler instruction. Even if it does, that is no guarantee it executes in one cycle; to know the real cost, you'd have to measure.