Why is __int128_t faster than long long on x86-64 GCC?

This is my test code:
#include <chrono>
#include <iostream>
#include <cstdlib>
using namespace std;
using ll = long long;
int main()
{
    __int128_t a, b;
    ll x, y;
    a = rand() + 10000000;
    b = rand() % 50000;
    auto t0 = chrono::steady_clock::now();
    for (int i = 0; i < 100000000; i++)
    {
        a += b;
        a /= b;
        b *= a;
        b -= a;
        a %= b;
    }
    cout << chrono::duration_cast<chrono::milliseconds>(chrono::steady_clock::now() - t0).count() << ' '
         << (ll)a % 100000 << '\n';
    x = rand() + 10000000;
    y = rand() % 50000;
    t0 = chrono::steady_clock::now();
    for (int i = 0; i < 100000000; i++)
    {
        x += y;
        x /= y;
        y *= x;
        y -= x;
        x %= y;
    }
    cout << chrono::duration_cast<chrono::milliseconds>(chrono::steady_clock::now() - t0).count() << ' '
         << (ll)x % 100000 << '\n';
    return 0;
}
This is the test result:
$ g++ main.cpp -o main -O2
$ ./main
2432 1
2627 1
Using GCC 10.1.0 on x64 GNU/Linux, __int128_t is always a little faster than long long, whether compiled with -O2 or without optimization.
int and double are both significantly faster than long long; long long has become the slowest type.
How does this happen?

The performance difference comes from the efficiency of 128-bit divisions/modulus with GCC/Clang in this specific case.
Indeed, on my system as well as on godbolt, sizeof(long long) = 8 and sizeof(__int128_t) = 16. Thus operations on the former are performed by native instructions while operations on the latter are not (since we focus on 64-bit platforms). Additions, multiplications and subtractions are slower with __int128_t. But the built-in functions for division/modulus on 16-byte types (__divti3 and __modti3 on x86-64 GCC/Clang) are surprisingly faster than the native idiv instruction (which is pretty slow, at least on Intel processors).
If we look deeper in the implementation of GCC/Clang built-in functions (used only for __int128_t here), we can see that __modti3 uses conditionals (when calling __udivmodti4). Intel processors can execute the code faster because:
taken branches can be well predicted in this case since they are always the same (and also because the loop is executed millions of times);
the division/modulus is split in faster native instructions that can mostly be executed in parallel by multiple CPU ports (and that benefit from out-of-order execution). A div instruction is still used in most possible paths (especially in this case);
The execution time of the div/idiv instructions covers most of the overall execution time because of their very high latencies. The div/idiv instructions cannot be executed in parallel because of the loop dependencies. However, the latency of a div is lower than that of an idiv, making the former faster.
Please note that the performance of the two implementations can differ greatly from one architecture to another (because of the number of CPU ports, the branch prediction capability and the latency/throughput of the idiv instruction).
Indeed, a 64-bit idiv instruction has a latency of 41-95 cycles on Skylake, while it is 8-41 cycles on AMD Ryzen processors, for example. Respectively, the latency of a div is about 6-89 cycles on Skylake and about the same as idiv on Ryzen. This means that the benchmark results should be significantly different on Ryzen processors (the opposite effect may be seen due to the additional instruction/branch costs in the 128-bit case).

TL:DR: __int128 division helper functions internally end up doing an unsigned div reg64 (after some branching on the values being positive and the upper halves being 0). 64-bit div is faster on Intel CPUs than the signed idiv reg64 that GCC inlines for signed long long. Faster by enough to make up for all the extra overhead of the helper function, and extended-precision for the other operations.
You probably wouldn't see this effect on AMD CPUs: long long would be faster as expected because idiv r64 is similar enough in perf to div r64 there.
And unsigned long long is faster than unsigned __int128 even on Intel CPUs, for example on my i7-6700k (Skylake) at 3.9GHz (run under perf stat to be sure of CPU frequency during the test):
2097 (i128) vs. 2332 (i64) - your original test (run back-to-back for CPU freq warm-up)
2075 (u128) vs. 1900 (u64) - unsigned versions. Slightly less branching in u128 division vs. i128, but major difference for i64 vs. u64 where the only difference is div vs. idiv.
Also, drawing any general conclusions from a very specific micro-benchmark like this would be a bad idea. It is interesting to dig into why exactly the extended-precision __int128 type manages to be faster in this division benchmark with positive numbers small enough to fit in a 32-bit integer, though.
Your benchmark is heavily weighted towards division, which you do twice per iteration (/ and %), even though it's much more expensive than other operations and in most code used much less often. (e.g. sum a whole array then divide once to get the average.)
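For contrast, the more typical shape where division is amortized looks something like this (my own sketch, not code from the question):

#include <cstddef>

// One division amortized over many cheap adds: the usual ratio in real code.
long long average(const long long *vals, std::size_t n) {
    long long sum = 0;
    for (std::size_t i = 0; i < n; i++)
        sum += vals[i];            // many cheap, pipelinable additions
    return sum / (long long)n;     // a single expensive division at the end
}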
Your benchmark also has no instruction-level parallelism: each step has a data dependency on the previous step. This prevents auto-vectorization or anything that would show some of the advantages of narrower types.
(It's also not careful to avoid warm-up effects like the first timed region being slow until the CPU gets up to max turbo; see Idiomatic way of performance evaluation?. But that happens much faster than the couple of seconds of your timed regions, so it's not a problem here.)
128-bit integer division (especially signed) is too complicated for GCC to want to inline, so gcc emits a call to a helper function, __divti3 or __modti3. (TI = tetra-integer, GCC's internal name for an integer that's 4x the size of int.) These functions are documented in the GCC-internals manual.
You can see the compiler-generated asm on the Godbolt compiler-explorer. i.e. 128-bit addition with add/adc, multiplication with one mul full-multiply of the low halves, and 2x non-widening imul of the cross products. Yes these are slower than the single-instruction equivalents for int64_t.
But Godbolt doesn't show you the asm for libgcc helper functions. It doesn't disassemble them even in "compile-to-binary" and disassemble mode (instead of the usual compiler asm text output) because it dynamically links libgcc_s instead of libgcc.a.
Extended-precision signed division is done by negating if necessary and doing unsigned division of 64-bit chunks, then fixing up the sign of the result if necessary.
With both inputs small and positive, no actual negation is needed (just testing and branching). There are also fast-paths for small numbers (high-half divisor = 0, and quotient will fit in 64 bits), which is the case here. The end result is that the path of execution through __divti3 looks like this:
This is from manually single-stepping into the call to __divti3 with gdb, after compiling with g++ -g -O3 int128-bench.cpp -o int128-bench.O3 on my Arch GNU/Linux system, with gcc-libs 10.1.0-2.
# Inputs: dividend = RSI:RDI, divisor = RCX:RDX
# returns signed quotient RDX:RAX
| >0x7ffff7c4fd40 <__divti3> endbr64 # in case caller was using CFE (control-flow enforcement), apparently this instruction has to pollute all library functions now. I assume it's cheap at least in the no-CFE case.
│ 0x7ffff7c4fd44 <__divti3+4> push r12
│ 0x7ffff7c4fd46 <__divti3+6> mov r11,rdi
│ 0x7ffff7c4fd49 <__divti3+9> mov rax,rdx
│ 0x7ffff7c4fd4c <__divti3+12> xor edi,edi
│ 0x7ffff7c4fd4e <__divti3+14> push rbx
│ 0x7ffff7c4fd4f <__divti3+15> mov rdx,rcx
│ 0x7ffff7c4fd52 <__divti3+18> test rsi,rsi # check sign bit of dividend (and jump over a negation)
│ 0x7ffff7c4fd55 <__divti3+21> jns 0x7ffff7c4fd6e <__divti3+46>
... taken branch to
| >0x7ffff7c4fd6e <__divti3+46> mov r10,rdx
│ 0x7ffff7c4fd71 <__divti3+49> test rdx,rdx # check sign bit of divisor (and jump over a negation), note there was a mov rdx,rcx earlier
│ 0x7ffff7c4fd74 <__divti3+52> jns 0x7ffff7c4fd86 <__divti3+70>
... taken branch to
│ >0x7ffff7c4fd86 <__divti3+70> mov r9,rax
│ 0x7ffff7c4fd89 <__divti3+73> mov r8,r11
│ 0x7ffff7c4fd8c <__divti3+76> test r10,r10 # check high half of abs(divisor) for being non-zero
│ 0x7ffff7c4fd8f <__divti3+79> jne 0x7ffff7c4fdb0 <__divti3+112> # falls through: small-number fast path
│ 0x7ffff7c4fd91 <__divti3+81> cmp rax,rsi # check that quotient will fit in 64 bits so 128b/64b single div won't fault: jump if (divisor <= high half of dividend)
│ 0x7ffff7c4fd94 <__divti3+84> jbe 0x7ffff7c4fe00 <__divti3+192> # falls through: small-number fast path
│ 0x7ffff7c4fd96 <__divti3+86> mov rdx,rsi
│ 0x7ffff7c4fd99 <__divti3+89> mov rax,r11
│ 0x7ffff7c4fd9c <__divti3+92> xor esi,esi
│ >0x7ffff7c4fd9e <__divti3+94> div r9 #### Do the actual division ###
│ 0x7ffff7c4fda1 <__divti3+97> mov rcx,rax
│ 0x7ffff7c4fda4 <__divti3+100> jmp 0x7ffff7c4fdb9 <__divti3+121>
...taken branch to
│ >0x7ffff7c4fdb9 <__divti3+121> mov rax,rcx
│ 0x7ffff7c4fdbc <__divti3+124> mov rdx,rsi
│ 0x7ffff7c4fdbf <__divti3+127> test rdi,rdi # check if the result should be negative
│ 0x7ffff7c4fdc2 <__divti3+130> je 0x7ffff7c4fdce <__divti3+142>
... taken branch over a neg rax / adc rax,0 / neg rdx
│ >0x7ffff7c4fdce <__divti3+142> pop rbx
│ 0x7ffff7c4fdcf <__divti3+143> pop r12
│ 0x7ffff7c4fdd1 <__divti3+145> ret
... return back to the loop body that called it
Intel CPUs (since IvyBridge) have zero-latency mov, so all that overhead doesn't worsen the critical path latency (which is your bottleneck) significantly. Or at least not enough to make up the difference between idiv and div.
The branching is handled by branch prediction and speculative execution, only checking the predictions after the fact when the actual input register values are the same. The branching goes the same way every time so it's trivial for branch prediction to learn. Since division is so slow, there's plenty of time for out-of-order exec to catch up.
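In C terms, the path this benchmark takes through __divti3 amounts to roughly the following (my own sketch with made-up names like divti3_sketch / udiv128_sketch, not libgcc's actual source):

#include <cstdint>

// Hypothetical illustration of the sign handling plus the small-number fast path.
static unsigned __int128 udiv128_sketch(unsigned __int128 n, unsigned __int128 d) {
    uint64_t n_hi = (uint64_t)(n >> 64), n_lo = (uint64_t)n;
    uint64_t d_hi = (uint64_t)(d >> 64), d_lo = (uint64_t)d;
    if (d_hi == 0 && n_hi == 0)     // small-number fast path: one 64-bit div instruction
        return n_lo / d_lo;
    // The real helper also has a 128/64-bit path (when the quotient fits in 64 bits)
    // and a full 128/128-bit path; fall back to the compiler's own division here.
    return n / d;
}

__int128 divti3_sketch(__int128 a, __int128 b) {
    bool neg = (a < 0) != (b < 0);  // for this benchmark's positive inputs, only the test-and-branch happens
    unsigned __int128 ua = a < 0 ? -(unsigned __int128)a : (unsigned __int128)a;
    unsigned __int128 ub = b < 0 ? -(unsigned __int128)b : (unsigned __int128)b;
    unsigned __int128 q = udiv128_sketch(ua, ub);
    return neg ? -(__int128)q : (__int128)q;
}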
64-bit operand-size integer division is very slow on Intel CPUs, even when the numbers are actually small and would fit in a 32-bit integer, and the extra microcode for signed integer division is even more expensive.
e.g. on my Skylake (i7-6700k), https://uops.info/ shows that (table search result):
idiv r64 is 56 uops for the front-end, with latency from 41 to 95 cycles (from divisor to quotient, which is the relevant case here I think).
div r64 is 33 uops for the front-end, with latency from 35 to 87 cycles. (for that same latency path).
The latency best-case happens for small quotients or small dividends or something, I can never remember which.
Similar to the branching that GCC does in software for 128-bit division in terms of 64-bit, I think the CPU microcode is internally doing 64-bit division in terms of narrower operations, probably the 32-bit that's only 10 uops for signed or unsigned, with much lower latency. (Ice Lake improves the divider so 64-bit division isn't much slower than 32-bit.)
This is why you found long long so much slower than int for this benchmark. In a lot of cases it's about the same, or half speed if memory bandwidth or SIMD are involved. (Only 2 elements per 128-bit of vector width, not 4).
AMD CPUs handle 64-bit operand size more efficiently, with performance only depending on the actual values, so about the same for div r32 vs. div r64 with the same numbers.
BTW, the actual values tend to be something like a=1814246614 / b=1814246613 = 1, then a=1 % b=1814246612 (with b decreasing by 1 every iteration). Only ever testing division with quotient=1 seems very silly. (The first iteration might be different, but we get into this state for the 2nd and later.)
Performance of integer operations other than division is not data-dependent on modern CPUs. (Unless of course there are compile-time constants that allow different asm to be emitted. Like division by a constant is much cheaper when done with a multiplicative inverse calculated at compile time.)
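For example, a trivial illustration of the constant-divisor case (not from the question's code):

#include <cstdint>

// Division by a compile-time constant: GCC/Clang replace div with a multiply by a
// fixed-point "magic" inverse plus shifts.
uint64_t div_by_3(uint64_t x) {
    return x / 3;   // compiles to mul + shr on x86-64, no div instruction
}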
re: double: see Floating point division vs floating point multiplication for division vs. multiplication. FP division is often harder to avoid, and its performance is relevant in more cases, so it's handled better.
Related:
Trial-division code runs 2x faster as 32-bit on Windows than 64-bit on Linux has a specific example of changing div r64 to div r32 in a program that uses small-enough numbers, and seeing throughput improve ~3x.
Can 128bit/64bit hardware unsigned division be faster in some cases than 64bit/32bit division on x86-64 Intel/AMD CPUs? has some details about div and idiv being microcoded.
How sqrt() of GCC works after compiled? Which method of root is used? Newton-Raphson? has some hardware details of how div/sqrt execution units are designed in general, and in older Intel CPUs. But that doesn't explain why 64-bit is so many more uops than 32-bit; I'm only inferring that the hardware must be narrower before Ice Lake from the fact that it needs so many more uops of microcode.


How to unset N right-most set bits

There is a relatively well-known trick for unsetting a single right-most bit:
y = x & (x - 1) // 0b001011100 & 0b001011011 = 0b001011000 :)
I'm finding myself with a tight loop to clear n right-most bits, but is there a simpler algebraic trick?
Assume relatively large n (n has to be <64 for 64bit integers, but it's often on the order of 20-30).
// x = 0b001011100 n=2
for (auto i=0; i<n; i++) x &= x - 1;
// x = 0b001010000
I've thumbed through my TAOCP Vol 4A a few times, but can't find any inspiration.
Maybe there is some hardware support for it?
For Intel x86 CPUs with BMI2, pext and pdep are fast. AMD before Zen3 has very slow microcoded PEXT/PDEP (https://uops.info/) so be careful with this; other options might be faster on AMD, maybe even blsi in a loop, or better a binary-search on popcount (see below).
Only Intel has dedicated hardware execution units for the mask-controlled pack/unpack that pext/pdep do, making it constant-time: 1 uop, 3 cycle latency, can only run on port 1.
I'm not aware of other ISAs having a similar bit-packing hardware operation.
pdep basics: pdep(-1ULL, a) == a. Taking the low popcnt(a) bits from the first operand, and depositing them at the places where a has set bits, will give you a back again.
But if, instead of all-ones, your source of bits has the low N bits cleared, the first N set bits in a will grab a 0 instead of 1. This is exactly what you want.
#include <immintrin.h>   // _pdep_u64 (requires BMI2, e.g. -mbmi2 or -march=haswell)
#include <cstdint>

uint64_t unset_first_n_bits_bmi2(uint64_t a, int n){
    return _pdep_u64(-1ULL << n, a);
}
-1ULL << n works for n=0..63 in C. x86 asm scalar shift instructions mask their count (effectively &63), so that's probably what will happen for the C undefined-behaviour of a larger n. If you care, use n&63 in the source so the behaviour is well-defined in C, and it can still compile to a shift instruction that uses the count directly.
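For instance, a masked variant of the function above could look like this (hypothetical name; the same two-instruction codegen is expected):

uint64_t unset_first_n_bits_bmi2_masked(uint64_t a, int n){
    return _pdep_u64(-1ULL << (n & 63), a);   // masked count: no UB for out-of-range n
}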
On Godbolt with a simple looping reference implementation, showing that they produce the same result for a sample input a and n.
GCC and clang both compile it the obvious way, as written:
# GCC10.2 -O3 -march=skylake
unset_first_n_bits_bmi2(unsigned long, int):
mov rax, -1
shlx rax, rax, rsi
pdep rax, rax, rdi
ret
(SHLX is single-uop, 1 cycle latency, unlike legacy variable-count shifts that update FLAGS... except if CL=0)
So this has 3 cycle latency from a->output (just pdep)
and 4 cycle latency from n->output (shlx, pdep).
And is only 3 uops for the front-end.
A semi-related BMI2 trick:
pext(a,a) will pack the bits at the bottom, like (1ULL<<popcnt(a)) - 1 but without overflow if all bits are set.
Clearing the low N bits of that with an AND mask, and expanding with pdep would work. But that's an overcomplicated expensive way to create a source of bits with enough ones above N zeros, which is all that actually matters for pdep. Thanks to #harold for spotting this in the first version of this answer.
Without fast PDEP: perhaps binary search for the right popcount
#Nate's suggestion of a binary search for how many low bits to clear is probably a good alternative to pdep.
Stop when popcount(x>>c) == popcount(x) - N to find out how many low bits to clear, preferably with branchless updating of c. (e.g. c = foo ? a : b often compiles to cmov).
Once you're done searching, x & (-1ULL<<c) uses that count, or just tmp << c to shift back the x>>c result you already have. Using right-shift directly is cheaper than generating a new mask and using it every iteration.
High-performance popcount is relatively widely available on modern CPUs. (Although not baseline for x86-64; you still need to compile with -mpopcnt or -march=native).
Tuning this could involve choosing a likely starting-point, and perhaps using a max initial step size instead of pure binary search. Getting some instruction-level parallelism out of trying some initial guesses could perhaps help shorten the latency bottleneck.
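Here's a rough sketch of that search in C++ (my own untested illustration with a made-up function name; assumes __builtin_popcountll compiles to a hardware popcnt):

#include <cstdint>

// Binary-search for the smallest shift c such that x >> c has dropped exactly n set bits,
// then clear everything below bit c.  popcount(x >> c) is non-increasing in c and drops by
// at most 1 per step, so the first c that reaches the target count is exact.
uint64_t clear_n_lowest_set(uint64_t x, int n) {
    int target = __builtin_popcountll(x) - n;   // set bits that must survive
    if (target <= 0) return 0;
    int lo = 0, hi = 64;
    while (lo < hi) {                           // a tuned version would aim for cmov here
        int mid = (lo + hi) / 2;
        if (__builtin_popcountll(x >> mid) <= target)
            hi = mid;
        else
            lo = mid + 1;
    }
    return x & (-1ULL << lo);                   // clears exactly the n lowest set bits
}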

Why is ONE basic arithmetic operation in for loop body executed SLOWER THAN TWO arithmetic operations?

While I experimented with measuring time of execution of arithmetic operations, I came across very strange behavior. A code block containing a for loop with one arithmetic operation in the loop body was always executed slower than an identical code block, but with two arithmetic operations in the for loop body. Here is the code I ended up testing:
#include <iostream>
#include <chrono>
#include <cstdint>   // int64_t
#define NUM_ITERATIONS 100000000
int main()
{
    // Block 1: one operation in loop body
    {
        int64_t x = 0, y = 0;
        auto start = std::chrono::high_resolution_clock::now();
        for (long i = 0; i < NUM_ITERATIONS; i++) {x+=31;}
        auto end = std::chrono::high_resolution_clock::now();
        std::chrono::duration<double> diff = end-start;
        std::cout << diff.count() << " seconds. x,y = " << x << "," << y << std::endl;
    }
    // Block 2: two operations in loop body
    {
        int64_t x = 0, y = 0;
        auto start = std::chrono::high_resolution_clock::now();
        for (long i = 0; i < NUM_ITERATIONS; i++) {x+=17; y-=37;}
        auto end = std::chrono::high_resolution_clock::now();
        std::chrono::duration<double> diff = end-start;
        std::cout << diff.count() << " seconds. x,y = " << x << "," << y << std::endl;
    }
    return 0;
}
I tested this with different levels of code optimization (-O0, -O1, -O2, -O3), with different online compilers (for example onlinegdb.com), on my work machine, on my home PC and laptop, on a Raspberry Pi and on my colleague's computer. I rearranged these two code blocks, repeated them, changed constants, changed operations (+, -, <<, =, etc.) and changed integer types. But I always got a similar result: the block with one line in the loop is SLOWER than the block with two lines:
1.05681 seconds. x,y = 3100000000,0
0.90414 seconds. x,y = 1700000000,-3700000000
I checked the assembly output on https://godbolt.org/ but everything looked like I expected: second block just had one more operation in assembly output.
Three operations always behaved as expected: they are slower than one and faster than four. So why two operations produce such an anomaly?
Edit:
Let me repeat: I see this behaviour on all of my Windows and Unix machines when the code is not optimized. I looked at the assembly I execute (Visual Studio, Windows) and I see the instructions I want to test there. Anyway, if the loop is optimized away, there is nothing left of what I'm asking about. I added the note about optimization to the question to avoid "do not measure not optimized code" answers, because optimization is not what I'm asking about. The question is really why my computers execute two operations faster than one, first of all in code where these operations are not optimized away. The difference in execution time is 5-25% in my tests (quite noticeable).
This effect only happens at -O0 (or with volatile), and is a result of the compiler keeping your variables in memory (not registers). You'd expect that to just introduce a fixed amount of extra latency into the loop-carried dependency chains through i, x, and y, but modern CPUs are not that simple.
On Intel Sandybridge-family CPUs, store-forwarding latency is lower when the load uop runs some time after the store whose data it's reloading, not right away. So an empty loop with the loop counter in memory is the worst case. I don't understand what CPU design choices could lead to that micro-architectural quirk, but it's a real thing.
This is basically a duplicate of Adding a redundant assignment speeds up code when compiled without optimization, at least for Intel Sandybridge-family CPUs.
This is one of the major reasons why you shouldn't benchmark at -O0: the bottlenecks are different than in realistically optimized code. See Why does clang produce inefficient asm with -O0 (for this simple floating point sum)? for more about why compilers make such terrible asm on purpose.
Micro-benchmarking is hard; you can only measure something properly if you can get compilers to emit realistically optimized asm loops for the thing you're trying to measure. (And even then you're only measuring throughput or latency, not both; those are separate things for single operations on out-of-order pipelined CPUs: What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand?)
See #rcgldr's answer for measurement + explanation of what would happen with loops that keep variables in registers.
With clang, benchmark::DoNotOptimize(x1 += 31) also de-optimizes into keeping x in memory, but with GCC it does just stay in a register. Unfortunately #SashaKnorre's answer used clang on QuickBench, not gcc, to get results similar to your -O0 asm. It does show the cost of lots of short NOPs being hidden by the bottleneck through memory, and a slight speedup when those NOPs delay the reload next iteration just long enough for store-forwarding to hit the lower-latency good case. (QuickBench I think runs on Intel Xeon server CPUs, with the same microarchitecture inside each CPU core as desktop versions of the same generation.)
Presumably all the x86 machines you tested on had Intel CPUs from the last 10 years, or else there's a similar effect on AMD. It's plausible there's a similar effect on whichever ARM CPU your RPi uses, if your measurements really were meaningful there. Otherwise, maybe another case of seeing what you expected (confirmation bias), especially if you tested with optimization enabled there.
I tested this with different levels of code optimization (-O0,-O1,-O2,-O3) [...] But I always got similar result
I added that optimizations notice in the question to avoid "do not measure not optimized code" answers because optimizations is not what I ask about.
(later from comments) About optimizations: yes, I reproduced that with different optimization levels, but as the loops were optimized away, the execution time was too fast to say for sure.
So actually you didn't reproduce this effect for -O1 or higher, you just saw what you wanted to see (confirmation bias) and mostly made up the claim that the effect was the same. If you'd accurately reported your data (measurable effect at -O0, empty timed region at -O1 and higher), I could have answered right away.
See Idiomatic way of performance evaluation? - if your times don't increase linearly with increasing repeat count, you aren't measuring what you think you're measuring. Also, startup effects (like cold caches, soft page faults, lazy dynamic linking, and dynamic CPU frequency) can easily lead to the first empty timed region being slower than the second.
I assume you only swapped the loops around when testing at -O0, otherwise you would have ruled out there being any effect at -O1 or higher with that test code.
The loop with optimization enabled:
As you can see on Godbolt, gcc fully removes the loop with optimization enabled. Sometimes GCC leaves empty loops alone, like maybe it thinks the delay was intentional, but here it doesn't even loop at all. Time doesn't scale with anything, and both timed regions look the same like this:
orig_main:
...
call std::chrono::_V2::system_clock::now() # demangled C++ symbol name
mov rbp, rax # save the return value = start
call std::chrono::_V2::system_clock::now()
# end in RAX
So the only instruction in the timed region is saving start to a call-preserved register. You're measuring literally nothing about your source code.
With Google Benchmark, we can get asm that doesn't optimize the work away, but which doesn't store/reload to introduce new bottlenecks:
#include <benchmark/benchmark.h>

static void TargetFunc(benchmark::State& state) {
    uint64_t x2 = 0, y2 = 0;
    // Code inside this loop is measured repeatedly
    for (auto _ : state) {
        benchmark::DoNotOptimize(x2 += 31);
        benchmark::DoNotOptimize(y2 += 31);
    }
}
// Register the function as a benchmark
BENCHMARK(TargetFunc);
# just the main loop, from gcc10.1 -O3
.L7: # do{
add rax, 31 # x2 += 31
add rdx, 31 # y2 += 31
sub rbx, 1
jne .L7 # }while(--count != 0)
I assume benchmark::DoNotOptimize is something like asm volatile("" : "+rm"(x) ) (GNU C inline asm) to make the compiler materialize x in a register or memory, and to assume the lvalue has been modified by that empty asm statement. (i.e. forget anything it knew about the value, blocking constant-propagation, CSE, and whatever.) That would explain why clang stores/reloads to memory while GCC picks a register: this is a longstanding missed-optimization bug with clang's inline asm support. It likes to pick memory when given the choice, which you can sometimes work around with multi-alternative constraints like "+r,m". But not here; I had to just drop the memory alternative; we don't want the compiler to spill/reload to memory anyway.
For GNU C compatible compilers, we can use asm volatile manually with only "+r" register constraints to get clang to make good scalar asm (Godbolt), like GCC. We get an essentially identical inner loop, with 3 add instructions, the last one being an add rbx, -1 / jnz that can macro-fuse.
static void TargetFunc(benchmark::State& state) {
    uint64_t x2 = 0, y2 = 0;
    // Code inside this loop is measured repeatedly
    for (auto _ : state) {
        x2 += 16;
        y2 += 17;
        asm volatile("" : "+r"(x2), "+r"(y2));
    }
}
All of these should run at 1 clock cycle per iteration on modern Intel and AMD CPUs, again see #rcgldr's answer.
Of course this also disables auto-vectorization with SIMD, which compilers would do in many real use cases. Or if you used the result at all outside the loop, it might optimize the repeated increment into a single multiply.
You can't measure the cost of the + operator in C++ - it can compile very differently depending on context / surrounding code. Even without considering loop-invariant stuff that hoists work. e.g. x + (y<<2) + 4 can compile to a single LEA instruction for x86.
The question is actually why my computers execute two operations faster than one, first of all in code where these operations are not optimized away
TL:DR: it's not the operations, it's the loop-carried dependency chain through memory that stops the CPU from running the loop at 1 clock cycle per iteration, doing all 3 adds in parallel on separate execution ports.
Note that the loop counter increment is just as much of an operation as what you're doing with x (and sometimes y).
ETA: This was a guess, and Peter Cordes has made a very good argument about why it's incorrect. Go upvote Peter's answer.
I'm leaving my answer here because some found the information useful. Though this doesn't correctly explain the behavior seen in the OP, it highlights some of the issues that make it infeasible (and meaningless) to try to measure the speed of a particular instruction on a modern processor.
Educated guess:
It's the combined effect of pipelining, powering down portions of a core, and dynamic frequency scaling.
Modern processors pipeline so that multiple instructions can be executing at the same time. This is possible because the processor actually works on micro-ops rather than the assembly-level instructions we usually think of as machine language. Processors "schedule" micro-ops by dispatching them to different portions of the chip while keeping track of the dependencies between the instructions.
Suppose the core running your code has two arithmetic/logic units (ALUs). A single arithmetic instruction repeated over and over requires only one ALU. Using two ALUs doesn't help because the next operation depends on completion of the current one, so the second ALU would just be waiting around.
But in your two-expression test, the expressions are independent. To compute the next value of y, you do not have to wait for the current operation on x to complete. Now, because of power-saving features, that second ALU may be powered down at first. The core might run a few iterations before realizing that it could make use of the second ALU. At that point, it can power up the second ALU and most of the two-expression loop will run as fast as the one-expression loop. So you might expect the two examples to take approximately the same amount of time.
Finally, many modern processors use dynamic frequency scaling. When the processor detects that it's not running hard, it actually slows its clock a little bit to save power. But when it's used heavily (and the current temperature of the chip permits), it might increase the actual clock speed as high as its rated speed.
I assume this is done with heuristics. In the case where the second ALU stays powered down, the heuristic may decide it's not worth boosting the clock. In the case where two ALUs are powered up and running at top speed, it may decide to boost the clock. Thus the two-expression case, which should already be just about as fast as the one-expression case, actually runs at a higher average clock frequency, enabling it to complete twice as much work in slightly less time.
Given your numbers, the difference is about 14%. My Windows machine idles at about 3.75 GHz, and if I push it a little by building a solution in Visual Studio, the clock climbs to about 4.25GHz (eyeballing the Performance tab in Task Manager). That's a 13% difference in clock speed, so we're in the right ballpark.
I split up the code into C++ and assembly. I just wanted to test the loops, so I didn't return the sum(s). I'm running on Windows, the calling convention is rcx, rdx, r8, r9, the loop count is in rcx. The code is adding immediate values to 64 bit integers on the stack.
I'm getting similar times for both loops, less than 1% variation, same or either one up to 1% faster than the other.
There is an apparent dependency factor here: each add to memory has to wait for the prior add to the same memory location to complete, so two adds to different memory locations can be performed essentially in parallel.
Changing test2 to do 3 adds to memory ends up about 6% slower; with 4 adds to memory, 7.5% slower.
My system is Intel 3770K 3.5 GHz CPU, Intel DP67BG motherboard, DDR3 1600 9-9-9-27 memory, Win 7 Pro 64 bit, Visual Studio 2015.
.code
public test1
align 16
test1 proc
sub rsp,16
mov qword ptr[rsp+0],0
mov qword ptr[rsp+8],0
tst10: add qword ptr[rsp+8],17
dec rcx
jnz tst10
add rsp,16
ret
test1 endp
public test2
align 16
test2 proc
sub rsp,16
mov qword ptr[rsp+0],0
mov qword ptr[rsp+8],0
tst20: add qword ptr[rsp+0],17
add qword ptr[rsp+8],-37
dec rcx
jnz tst20
add rsp,16
ret
test2 endp
end
I also tested with add immediate to register, 1 or 2 registers within 1% (either could be faster, but we'd expect them both to execute at 1 iteration / clock on Ivy Bridge, given its 3 integer ALU ports; What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand?).
3 registers took 1.5 times as long, somewhat worse than the ideal 1.333 cycles / iteration from 4 uops (including the loop counter's macro-fused dec/jnz) for 3 back-end ALU ports with perfect scheduling.
4 registers, 2.0 times as long, bottlenecked on the front-end: Is performance reduced when executing loops whose uop count is not a multiple of processor width?. Haswell and later microarchitectures would handle this better.
.code
public test1
align 16
test1 proc
xor rdx,rdx
xor r8,r8
xor r9,r9
xor r10,r10
xor r11,r11
tst10: add rdx,17
dec rcx
jnz tst10
ret
test1 endp
public test2
align 16
test2 proc
xor rdx,rdx
xor r8,r8
xor r9,r9
xor r10,r10
xor r11,r11
tst20: add rdx,17
add r8,-37
dec rcx
jnz tst20
ret
test2 endp
public test3
align 16
test3 proc
xor rdx,rdx
xor r8,r8
xor r9,r9
xor r10,r10
xor r11,r11
tst30: add rdx,17
add r8,-37
add r9,47
dec rcx
jnz tst30
ret
test3 endp
public test4
align 16
test4 proc
xor rdx,rdx
xor r8,r8
xor r9,r9
xor r10,r10
xor r11,r11
tst40: add rdx,17
add r8,-37
add r9,47
add r10,-17
dec rcx
jnz tst40
ret
test4 endp
end
#PeterCordes proved this answer to be wrong in many assumptions, but it could still be useful as some blind research attempt of the problem.
I set up some quick benchmarks, thinking it may somehow be connected to code memory alignment, truly a crazy thought.
But it seems that #Adrian McCarthy got it right with the dynamic frequency scaling.
Anyway, the benchmarks show that inserting some NOPs could help with the issue, with 15 NOPs after the x+=31 in Block 1 leading to nearly the same performance as Block 2. Truly mind-blowing how 15 NOPs in a single-instruction loop body increase performance.
http://quick-bench.com/Q_7HY838oK5LEPFt-tfie0wy4uA
I also tried -Ofast, thinking compilers might be smart enough to throw away such inserted NOPs, but it seems not to be the case.
http://quick-bench.com/so2CnM_kZj2QEWJmNO2mtDP9ZX0
Edit: Thanks to #PeterCordes it was made clear that optimizations were never working quite as expected in the benchmarks above (as a global variable required add instructions to access memory). The new benchmark http://quick-bench.com/HmmwsLmotRiW9xkNWDjlOxOTShE clearly shows that Block 1 and Block 2 performance is equal for stack variables. But NOPs could still help with a single-threaded application with a loop accessing a global variable, which you probably should not use in that case anyway; just assign the global variable to a local variable after the loop.
Edit 2: Actually, optimizations never worked due to the quick-benchmark macros making variable access volatile, preventing important optimizations. It is only logical to load the variable once, as we are only modifying it in the loop, so it is volatile or disabled optimizations being the bottleneck. So this answer is basically wrong, but at least it shows how NOPs could speed up unoptimized code execution, if that makes any sense in the real world (there are better ways, like bucketing counters).
Processors are so complex these days that we can only guess.
The assembly emitted by your compiler is not what is really executed. The microcode/firmware/whatever of your CPU will interpret it and turn it into instructions for its execution engine, much like JIT languages such as C# or java do.
One thing to consider here is that for each loop iteration there are not 1 or 2 instructions, but n + 2, as you also increment and compare i to your number of iterations. In the vast majority of cases it wouldn't matter, but here it does, as the loop body is so simple.
Let's see the assembly:
Some defines:
#define NUM_ITERATIONS 1000000000ll
#define X_INC 17
#define Y_INC -31
C/C++ :
for (long i = 0; i < NUM_ITERATIONS; i++) { x+=X_INC; }
ASM :
mov QWORD PTR [rbp-32], 0
.L13:
cmp QWORD PTR [rbp-32], 999999999
jg .L12
add QWORD PTR [rbp-24], 17
add QWORD PTR [rbp-32], 1
jmp .L13
.L12:
C/C++ :
for (long i = 0; i < NUM_ITERATIONS; i++) {x+=X_INC; y+=Y_INC;}
ASM:
mov QWORD PTR [rbp-80], 0
.L21:
cmp QWORD PTR [rbp-80], 999999999
jg .L20
add QWORD PTR [rbp-64], 17
sub QWORD PTR [rbp-72], 31
add QWORD PTR [rbp-80], 1
jmp .L21
.L20:
So both assemblies look pretty similar. But then let's think twice: modern CPUs have ALUs which operate on values which are wider than their register size. So there is a chance that in the first case, the operations on x and i are done on the same computing unit. But then you have to read i again, as you put a condition on the result of this operation. And reading means waiting.
So, in the first case, to iterate on x, the CPU might have to be in sync with the iteration on i.
In the second case, maybe x and y are treated on a different unit than the one dealing with i. So in fact, your loop body runs in parallel with the condition driving it. And there goes your CPU, computing and computing until someone tells it to stop. It doesn't matter if it goes too far; going back a few loops is still fine compared to the amount of time it just gained.
So, to compare what we want to compare (one operation vs two operations), we should try to get i out of the way.
One solution is to completely get rid of it by using a while loop:
C/C++:
while (x < (X_INC * NUM_ITERATIONS)) { x+=X_INC; }
ASM:
.L15:
movabs rax, 16999999999
cmp QWORD PTR [rbp-40], rax
jg .L14
add QWORD PTR [rbp-40], 17
jmp .L15
.L14:
Another one is to use the antiquated "register" C keyword:
C/C++:
register long i;
for (i = 0; i < NUM_ITERATIONS; i++) { x+=X_INC; }
ASM:
mov ebx, 0
.L17:
cmp rbx, 999999999
jg .L16
add QWORD PTR [rbp-48], 17
add rbx, 1
jmp .L17
.L16:
Here are my results:
x1 for: 10.2985 seconds. x,y = 17000000000,0
x1 while: 8.00049 seconds. x,y = 17000000000,0
x1 register-for: 7.31426 seconds. x,y = 17000000000,0
x2 for: 9.30073 seconds. x,y = 17000000000,-31000000000
x2 while: 8.88801 seconds. x,y = 17000000000,-31000000000
x2 register-for: 8.70302 seconds. x,y = 17000000000,-31000000000
Code is here: https://onlinegdb.com/S1lAANEhI

fastest way to convert two-bit number to low-memory representation

I have a 56-bit number with potentially two set bits, e.g., 00000000 00000000 00000000 00000000 00000000 00000000 00000011. In other words, two bits are distributed among 56 bits, so that we have bin(56,2)=1540 possible permutations.
I now look for a lossless mapping of such a 56-bit number to an 11-bit number that can encode 2048 values and therefore also 1540. Knowing the structure, this 11-bit number is enough to store the value of my low-density (of ones) 56-bit number.
I want to maximize performance (this function should run millions or even billions of times per second if possible). So far, I only came up with some loop:
uint64_t inputNumber = 24; // 0b11000
uint64_t bitMask = 1;
int bit1 = -1, bit2 = -1;  // positions of the two set bits, -1 = not found yet
for (int n = 0; n < 56; ++n, bitMask *= 2)
{
    if ((inputNumber & bitMask) != 0)
    {
        if (bit1 == -1)
            bit1 = n;
        else
        {
            bit2 = n;
            break;
        }
    }
}
and using these two bit positions, I can easily generate a number that is at most 1540.
But is there no faster version than using such a loop?
Most ISAs have hardware support for a bit-scan instruction that finds the position of a set bit. Use that instead of a naive loop or bithack for any architecture where you care about this running fast. https://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious has some tricks that are better than nothing, but those are all still much worse than a single efficient asm instruction.
But ISO C++ doesn't portably expose clz/ctz operations; they're only available via intrinsics / builtins for various implementations. (And the x86 intrinsics have quirks for all-zero input, corresponding to the asm instruction behaviour.)
For some ISAs, it's a count-leading-zeros giving you 31 - highbit_index. For others, it's a CTZ count trailing zeros operation, giving you the index of the low bit. x86 has both. (And its high-bit finder actually directly finds the high-bit index, not a leading-zero count, unless you use BMI1 lzcnt instead of traditional bsr) https://en.wikipedia.org/wiki/Find_first_set has a table of what different ISAs have.
GCC portably provides __builtin_clz and __builtin_ctz; on ISAs without hardware support, they compile to a call to a helper function. See What is the fastest/most efficient way to find the highest set bit (msb) in an integer in C? and Implementation of __builtin_clz
(For 64-bit integers, you want the long long versions: like __builtin_ctzll GCC manual.)
If we only have a CLZ, use high=63-CLZ(n) and low= 63-CLZ((-n) & n) to isolate the low bit. Note that x86's bsr instruction actually produces 63-CLZ(), i.e. the bit-index instead of the leading-zero count. So 63-__builtin_clzll(n) can compile to a single instruction on x86; IIRC gcc does notice this. Or 2 instructions if GCC uses an extra xor-zeroing to avoid the inconvenient false dependency.
If we only have CTZ, do low = CTZ(n) and high = CTZ(n & (n - 1)) to clear the lowest set bit. (Leaving the high bit, assuming the number has exactly 2 set bits).
If we have both, low = CTZ(n) and high = 63-CLZ(n). I'm not sure what GCC does on non-x86 ISAs where they aren't both available natively. The GCC builtins are always available even when targeting HW that doesn't have it. But the internal implementation can't use the above tricks because it doesn't know there are always exactly 2 bits set.
(I wrote out the full formulas; an earlier version of this answer had CLZ and CTZ reversed in this part. I find that happens to me easily, especially when I also have to keep track of x86's bsr and bsf (bit scan reverse and forward) and remember that those are leading and trailing, respectively.)
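As a concrete sketch of the "both available" case with the GCC builtins (my illustration with a made-up function name; assumes a nonzero input with exactly two set bits, and doesn't handle the zero special case discussed below):

#include <cstdint>

unsigned encode_two_bits(uint64_t n) {
    unsigned low  = __builtin_ctzll(n);        // index of the lower set bit
    unsigned high = 63 - __builtin_clzll(n);   // index of the higher set bit
    return (low << 6) | high;                  // 12-bit packing: two 6-bit indices
}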
So if you just use both CTZ and CLZ, you might end up with slow emulation for one of them. Or fast emulation on ARM with rbit to bit-reverse for clz, which is 100% fine.
AVX512CD has SIMD VPLZCNTQ for 64-bit integers, so you could encode 2, 4, or 8x 64-bit integers in parallel with that on recent Intel CPUs. For SSSE3 or AVX2, you can build a SIMD lzcnt by using pshufb _mm_shuffle_epi8 byte-shuffle as a 4-bit LUT and combining with _mm_max_epu8. There was a recent Q&A about this but I can't find it. (It might have been for 16-bit integers only; wider requires more work.)
With this, a Skylake-X or Cascade Lake CPU could maybe compress 8x 64-bit integers per 2 or 3 clock cycles once you factor in the throughput cost of packing the results. SIMD is certainly useful for packing 12-bit or 11-bit results into a contiguous bitstream, e.g. with variable-shift instructions, if that's what you want to do with the results. At ~3 or 4GHz clock speed, that could maybe get you over 10 billion per second with a single thread. But only if the inputs come from contiguous memory. Depending what you want to do with the results, it might cost a few more cycles to do more than just pack them down to 16-bit integers, e.g. to pack into a bitstream. But SIMD should be good for that with variable-shift instructions that can line up the 11 or 12 bits from each register into the right position to OR together after shuffling.
There's a tradeoff between coding efficiency and encode performance. Using 12 bits for two 6-bit indices (of bit positions) is very simple both to compress and decompress, at least on hardware that has bit-scan instructions.
Or instead of bit-indices, one or both could be leading zero counts, so decoding would be (1ULL << 63) >> a. 1ULL>>63 is a fixed constant that you can actually right-shift, or the compiler could turn it into a left-shift of 1ULL << (63-a) which IIRC optimizes to 1 << (-a) in assembly for ISAs like x86 where shift instructions mask the shift count (look only at the low 6 bits).
Also, 2x 12 bits is a whole number of bytes, but 11 bits only gives you a whole number of bytes every 8 outputs, if you're packing them. So indexing a bit-packed array is simpler.
0 is still a special case: maybe handle that by using all-ones bit-indices (i.e. index = bit 63, which is outside the low 56 bits). On decode/decompress, you set the 2 bit positions (1ULL<<a) | (1ULL<<b) and then & mask to clear high bits. Or bias your bit indices and have decode right shift by 1.
If we didn't have to handle zero then a modern x86 CPU could do 1 or 2 billion encodes per second if it didn't have to do anything else. e.g. Skylake has 1 per clock throughput for bit-scan instructions and should be able to encode at 1 number per 2 clocks, just bottlenecked on that. (Or maybe better with SIMD). With just 4 scalar instructions, we can get the low and high indices (64-bit tzcnt + bsr), shift by 6 bits, and OR together (footnote 1). Or on AMD, avoid bsr / bsf and manually do 63-lzcnt.
A branchy or branchless check for input == 0 to set the final result to whatever hard-coded constant (like 63, 63) should be cheap, though.
Compression on other ISAs like AArch64 is also cheap. It has clz but not ctz. Probably your best bet there is to use an intrinsic for rbit to bit-reverse a number (so clz on the bit-reversed number directly gives you the bit-index of the low bit, which is now the high bit of the reversed version). Assuming rbit is as fast as add / sub, this is cheaper than using multiple instructions to clear the low bit.
If you really want 11 bits then you need to avoid the redundancy of 2x 6-bit being able to have either index larger than the other. Like maybe have 6-bit a and 5-bit b, and have a<=b mean something special like b+=32. I haven't thought this through fully. You need to be able to encode 2 adjacent bits either near the top or bottom of the registers, or the 2 set bits could be as far apart as 28 bits, if we consider wrapping at the boundaries like a 56-bit rotate.
Melpomene's suggestion to isolate the low and high set bits might be useful as part of something else, but is only useful for encoding on targets where you only have one direction of bit-scan available, not both. Even so, you wouldn't actually use both expressions. Leading-zero count doesn't require you to isolate the low bit, you just need to clear it to get at the high bit.
Footnote 1: decoding on x86 is also cheap: x |= (1<<a) is 1 instruction: bts. But many compilers have missed optimizations and don't notice this, instead actually shifting a 1. bts reg, reg is 1 uop / 1 cycle latency on Intel since PPro, or sometimes 2 uops on AMD. (Only the memory destination version is slow.) https://agner.org/optimize/
Best encoding performance on AMD CPUs requires BMI1 tzcnt / lzcnt because bsr and bsf are slower (6 uops instead of 1 https://agner.org/optimize/). On Ryzen, lzcnt is 1 uop, 1c latency, 4 per clock throughput. But tzcnt is 2 uops.
With BMI1, the compiler could use blsr to clear the lowest set bit of a register (and copy it), i.e. modern x86 has an instruction for dst = (src - 1) & src; it's a single uop on Intel but 2 uops on AMD.
But with lzcnt being more efficient than tzcnt on AMD Ryzen, probably the best asm for AMD doesn't use it.
Or maybe something like this (assuming exactly 2 bits, which apparently we can do).
(This asm is what you'd like to get your compiler to emit. Don't actually use inline asm!)
Ryzen_encode_scalar: ; input in RDI, output in EAX
lzcnt rcx, rdi ; 63-high bit index
tzcnt rdx, rdi ; low bit
mov eax, 63
sub eax, ecx
shl edx, 6
or eax, edx ; (low_bit << 6) | high_bit
ret ; goes away with inlining.
Shifting the low bit-index balances the lengths of the critical path, allowing better instruction-level parallelism, if we need 63-CLZ for the high bit.
Throughput: 7 uops total, and no execution-unit bottlenecks. So at 5 uops per clock pipeline width, that's better than 1 per 2 clocks.
Skylake_encode_scalar: ; input in RDI, output in EAX
tzcnt rax, rdi ; low bit. No false dependency on Skylake. GCC will probably xor-zero RAX because there is on Broadwell and earlier.
bsr rdi, rdi ; high bit index. same,same reg avoids false dep
shl eax, 6
or eax, edi
ret ; goes away with inlining.
This has 5 cycle latency from input to output: bitscan instructions are 3 cycles on Intel vs. 1 on AMD. SHL + OR each add 1 cycle.
For throughput, we only bottleneck on one bit-scan per cycle (execution port 1), so we can do one encode per 2 cycles with 4 uops of front-end bandwidth left over for load, store, and loop overhead (or something else), assuming we have multiple independent encodes to do.
(But for the multiple independent encode case, SIMD may still be better for both AMD and Intel, if a cheap emulation of vplzcntq exists and the data is coming from memory.)
Scalar decode can be something like this:
decode: ;; input in EDI, output in RAX
xor eax, eax ; RAX=0
bts rax, rdi ; RAX |= 1ULL << (high_bit_idx & 63)
shr edi, 6 ; extract low_bit_idx
bts rax, rdi ; RAX |= 1ULL << low_bit_idx
ret
This has 3 shifts (including the bts) which on Skylake can only run on port0 or port6. So on Intel it only costs 4 uops for the front-end (so 1 per clock as part of doing something else). But if doing only this, it bottlenecks on shift throughput at 1 decode per 1.5 clock cycles.
On a 4GHz CPU, that's 2.666 billion decodes per second, so yeah we're doing pretty well hitting your targets :)
On Ryzen, bts reg,reg is 2 uops, with 0.5c throughput, but shr can run on any port. So it doesn't steal throughput from bts, and the whole thing is 6 uops (vs. Ryzen's pipeline being 5-wide at the narrowest point). So 1 decode per 1.2 clock cycles, just bottlenecked on front-end cost.
With BMI2 available, starting with a 1 in a register and using shlx rax, rbx, rdi can replace the xor-zeroing + first BTS with a single uop, assuming the 1 in a register can be reused in a loop.
(This optimization is totally dependent on your compiler to find; flag-less shifts are just more efficient ways to copy-and-shift that become available with -march=haswell or -march=znver1, or other targets that have BMI2.)
Either way you're just going to write retval = 1ULL << (packed & 63) for decoding the first bit. But if you're wondering which compilers make nice code here, this is what you're looking for.
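For completeness, a matching C++ decode sketch for the (low << 6) | high packing used above (again my illustration with a made-up name, ignoring the zero special case):

#include <cstdint>

uint64_t decode_two_bits(unsigned packed) {
    uint64_t r = 1ULL << (packed & 63);       // high-bit index lives in the low 6 bits
    r |= 1ULL << ((packed >> 6) & 63);        // low-bit index lives in the next 6 bits
    return r;                                 // each 1ULL << idx can become a bts, as above
}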

Mysteries of C++ optimization

Take the two following snippets:
int main()
{
    unsigned long int start = utime();
    __int128_t n = 128;
    for(__int128_t i=1; i<1000000000; i++)
        n = (n * i);
    unsigned long int end = utime();
    cout<<(unsigned long int) n<<endl;
    cout<<end - start<<endl;
}
and
int main()
{
    unsigned long int start = utime();
    __int128_t n = 128;
    for(__int128_t i=1; i<1000000000; i++)
        n = (n * i) >> 2;
    unsigned long int end = utime();
    cout<<(unsigned long int) n<<endl;
    cout<<end - start<<endl;
}
I am benchmarking 128-bit integers in C++. When executing the first one (just multiplication) everything runs in approx. 0.95 seconds. When I also add the bit shift operation (second snippet) the execution time rises to an astounding 2.49 seconds.
How is this possible? I thought that bit shifting was one of the lightest operations for a processor. How come there is so much overhead from such a simple operation? I am compiling with the -O3 flag activated.
Any idea?
This question has been bugging me for the past few days, so I decided to do some more investigation. My initial answer focused on the difference in data values between the two tests. My assertion was that the integer multiplication unit in the processor finishes an operation in fewer clock cycles if one of the operands is zero.
While there are instructions that are clearly documented to work that way (integer division, for example), there are very strong indications that integer multiplication is done in a constant number of cycles in modern processors, regardless of input. The note in Intel's documentation that initially made me think that the number of cycles for integer multiplication can depend on input data doesn't seem to apply to these instructions. Also, I did some more rigorous performance tests with the same sequence of instructions on both zero and non-zero operands and the results didn't yield significant differences. As far as I can tell, harold's comment on this subject is correct. My mistake; sorry.
While contemplating the possibility of deleting this answer altogether, so that it doesn't lead people astray in the future, I realized there were still quite a few more things worth saying on this subject. I also think there's at least one other way in which data values can influence performance in such calculations (included in the last section). So, I decided to restructure and enhance the rest of the information in my initial answer, started writing, and... didn't quite stop for a while. It's up to you to decide whether it was worth it.
The information is structured into the following sections:
What the code does
What the compiler does
What the processor does
What you can do about it
Unanswered questions
What the code does
It overflows, mostly.
In the first version, n starts overflowing on the 33rd iteration. In the second version, with the shift, n starts overflowing on the 52nd iteration.
In the version without the shift, starting with the 128th iteration, n is zero (it overflows "cleanly", leaving only zeros in the least significant 128 bits of the result).
In the second version, the right shift (dividing by 4) takes out more factors of two from the value of n on each iteration than the new operands bring in, so the shift results in rounding on some iterations. Quick calculation: the total number of factors of two in all numbers from 1 to 128 is equal to
128 / 2 + 128 / 4 + ... + 2 + 1 = 2^6 + 2^5 + ... + 2 + 1 = 2^7 - 1
while the number of factors of two taken out by the right shift (if it had enough to take from) is 128 * 2, more than double.
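A quick sanity check of that count (my snippet; it's just Legendre's formula for p = 2, n = 128):

int count_factors_of_two_up_to_128() {
    int twos = 0;
    for (int d = 2; d <= 128; d *= 2)
        twos += 128 / d;    // 64 + 32 + 16 + 8 + 4 + 2 + 1 = 127 = 2^7 - 1
    return twos;
}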
Armed with this knowledge, we can give a first answer: from the point of view of the C++ standard, this code spends most of its time in undefined behaviour land, so all bets are off. Problem solved; stop reading now.
What the compiler does
If you're still reading, from this point forward we'll ignore the overflows and look at some implementation details. "The compiler", in this case, means GCC 4.9.2 or Clang 3.5.1. I've only done performance measurements on code generated by GCC. For Clang, I've looked at the generated code for a few test cases and noted some differences that I'll mention below, but I haven't actually run the code; I might have missed some things.
Both multiplication and shift operations are available for 64-bit operands, so 128-bit operations need to be implemented in terms of those. First, multiplication: n can be written as 2^64 nh + nl, where nh and nl are the high and low 64-bit halves, respectively. The same goes for i. So, the multiplication can be written:
(2^64 nh + nl)(2^64 ih + il) = 2^128 nh ih + 2^64 (nh il + nl ih) + nl il
The first term doesn't have any non-zero bits in the lower 128-bit part; it's either all overflow or all zero. Since ignoring integer overflows is valid and common for C++ implementations, that's what the compiler does: the first term is ignored completely.
The parenthesis only contributes bits to the upper 64-bit half of the 128-bit result; any overflow resulting from the two multiplications or the addition is also ignored (the result is truncated to 64 bits).
The last term determines the bits in the low 64-bit half of the result and, if the result of that multiplication has more than 64 bits, the extra bits need to be added to the high 64-bit half obtained from the parenthesis discussed before. There's a very useful multiplication instruction in x86-64 assembly that does just what's needed: takes two 64-bit operands and places the result in two 64-bit registers, so the high half is ready to be added to the result of the operations in the parenthesis.
That is how 128-bit integer multiplication is implemented: three 64-bit multiplications and two 64-bit additions.
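In C, that lowering looks roughly like this (my sketch; __uint128_t is used here only to model the widening 64x64 -> 128 multiply that the hardware MUL instruction provides):

#include <cstdint>

static void mul128(uint64_t nh, uint64_t nl, uint64_t ih, uint64_t il,
                   uint64_t &rh, uint64_t &rl) {
    __uint128_t low = (__uint128_t)nl * il;     // the one widening multiply
    uint64_t cross  = nh * il + nl * ih;        // two truncating multiplies + one addition
    rl = (uint64_t)low;                         // low 64 bits of the result
    rh = cross + (uint64_t)(low >> 64);         // second addition: fold in the high half
}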
Now, the shift: using the same notations as above, the two least significant bits of nh need to become the two most significant bits of nl, after the contents of the latter is shifted right by two bits. Using C++ syntax, it would look like this:
nl = nh << 62 | nl >> 2 //Doesn't change nh, only uses its bits.
Besides that, nh also needs to be shifted, using something like
nh >>= 2;
That is how the compiler implements a 128-bit shift. For the first part, there's an x86-64 instruction that has the exact semantics of that expression; it's called SHRD. Using it can be good or bad, as we'll see below, and the two compilers make slightly different choices in this respect.
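And the corresponding sketch for the shift part (the first line is exactly what SHRD computes in one instruction):

#include <cstdint>

static void shr128_by_2(uint64_t &nh, uint64_t &nl) {
    nl = (nh << 62) | (nl >> 2);   // low half: two bits shifted in from nh
    nh >>= 2;                      // high half: plain shift
}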
What the processor does
... is highly processor-dependent. (No... really?!)
Detailed information about what happens for Haswell processors can be found in harold's excellent answer. Here, I'll try to cover more ground at a higher level. For more detailed data, here are some sources:
Intel® 64 and IA-32 Architectures Optimization Reference Manual
Software Optimization Guide for AMD Family 15h Processors
Agner Fog's microarchitecture manual and instruction tables.
I'll refer to the following architectures:
Intel Sandy Bridge / Ivy Bridge - abbreviated "IntelSB" going forward;
Intel Haswell / Broadwell - "IntelH" going forward;
I'll just use "Intel" for things that are the same between SB and H.
AMD Bulldozer / Piledriver / Steamroller - "AMD" going forward.
I have measurement data taken on an IntelSB system; I think it's precise enough, as long as the compiler doesn't act up. Unfortunately, when working with such tight loops, this can happen very easily. At various points during testing, I had to use all kinds of stupid tricks to avoid GCC's idiosyncrasies, usually related to register use. For example, it seems to have a tendency to shuffle registers around unnecessarily, when compiling simpler code than for other cases when it generates optimal assembly. Ironically, on my test setup, it tended to generate optimal code for the second sample, using the shift, and worse code for the first one, making the impact of the shift less visible. Clang/LLVM seems to have fewer of those bad habits, but then again, I looked at fewer examples using it and I didn't measure any of them, so this doesn't mean much. In the interest of comparing apples with apples, all measurement data below refers to the best code generated for each case.
First, let's rearrange the expression for 128-bit multiplication from the previous section into a (horrible) diagram:
nh * il
\
+ -> tmp
/ \
nl * ih + -> next nh
/
high 64 bits
/
nl * il --------
\
low 64 bits
\
-> next nl
(sorry, I hope it gets the point across)
Some important points:
The two additions can't execute until their respective inputs are ready; the final addition can't execute until everything else is ready.
The three multiplications can, theoretically, execute in parallel (no input depends on another multiplication's output).
In the ideal scenario above, the total number of cycles to complete the entire calculation for one iteration is the sum of the number of cycles for one multiplication and two additions.
The next nl can be ready early. This, together with the fact that the next il and ih are very cheap to calculate, means the nl * il and nl * ih calculations for the next iteration can start early, possibly before the next nh has been computed.
Multiplications can't really execute entirely in parallel on these processors, as there's only one integer multiplication unit for each core, but they can execute concurrently through pipelining. One multiplication can begin executing on each cycle on Intel, and every 4 cycles on AMD, even if previous multiplications haven't finished executing yet.
All of the above mean that, if the loop's body doesn't contain anything else that gets in the way, the processor can reorder those multiplications to achieve something as close as possible to the ideal scenario above. This applies to the first code snippet. On IntelH, as measured by harold, it's exactly the ideal scenario: 5 cycles per iteration are made up of 3 cycles for one multiplication and one cycle each for the two additions (impressive, to be honest). On IntelSB, I measured 6 cycles per iteration (closer to 5.5, actually).
The problem is that in the second code snippet something does get in the way:
nh * il
\ normal shift -> next nh
+ -> tmp /
/ \ /
nl * ih + ----> temp nh
/ \
high 64 bits \
/ "composite" shift -> next nl
nl * il -------- /
\ /
low 64 bits /
\ /
-> temp nl ---------
The next nl is no longer ready early. temp nl has to wait for temp nh to be ready, so that both can be fed into the composite shift, and only then will we have the next nl. Even if both shifts are very fast and execute in parallel, they don't just add the execution cost of one shift to an iteration; they also change the dynamics of the loop's "pipeline", acting like a sort of synchronizing barrier.
If the two shifts finish at the same time, then all three multiplications for the next iteration will be ready to execute at the same time, and they can't all start in parallel, as explained above; they'll have to wait for one another, wasting cycles. This is the case on IntelSB, where the two shifts are equally fast (see below); I measured 8 cycles per iteration for this case.
If the two shifts don't finish at the same time, it will typically be the normal shift that finishes first (the composite shift is slower on most architectures). This means that the next nh will be ready early, so the top multiplication can start early for the next iteration. However, the other two multiplications still have to wait more (wasted) cycles for the composite shift to finish, and then they'll be ready at the same time and one will have to wait for the other to start, wasting some more time. This is the case on IntelH, measured by harold at 9 cycles per iteration.
I expect AMD to fall under this last category as well. While there's an even bigger difference in performance between the composite shift and normal shift on this platform, integer multiplications are also slower on AMD than on Intel (more than twice as slow), making the first sample slower to begin with. As a very rough estimate, I think the first version could take about 12 cycles on AMD, with the second one at around 16. It would be nice to have some concrete measurements, though.
Some more data on the troublesome composite shift, SHRD:
On IntelSB, it's exactly as cheap as a simple shift (great!); simple shifts are about as cheap as they come: they execute in one cycle, and two shifts can start executing each cycle.
On IntelH, SHRD takes 3 cycles to execute (yes, it got worse in the newer generation), and two shifts of any kind (simple or composite) can start executing each cycle;
On AMD, it's even worse. If I'm reading the data correctly, executing an SHRD keeps both shift execution units busy until execution finishes - no parallelism and no pipelining possible; it takes 3 cycles, during which no other shift can start executing.
What you can do about it
I can think of three possible improvements:
replace SHRD with something faster on platforms where it makes sense;
optimize the multiplication to take advantage of the data types involved here;
restructure the loop.
1. SHRD can be replaced with two shifts and a bitwise OR, as described in the compiler section. A C++ implementation of a 128-bit shift right by two bits could look like this:
#include <cstdint>

__int128_t shr2(__int128_t n)
{
    using std::int64_t;
    using std::uint64_t;
    // Unpack the two halves.
    int64_t nh = n >> 64;
    uint64_t nl = static_cast<uint64_t>(n);
    // Do the actual work.
    uint64_t rl = nl >> 2 | nh << 62;
    int64_t rh = nh >> 2;
    // Pack the result.
    return static_cast<__int128_t>(rh) << 64 | rl;
}
Although it looks like a lot of code, only the middle section doing the actual work generates shifts and ORs. The other parts merely indicate to the compiler which 64-bit parts we want to work with; since the 64-bit parts are already in separate registers, those are effectively no-ops in the generated assembly code.
However, keep in mind that this amounts to "trying to write assembly using C++ syntax", and it's generally not a very bright idea. I'm only using it because I verified that it works for GCC and I'm trying to keep the amount of assembly code in this answer to a minimum. Even so, there's one surprise: the LLVM optimizer detects what we're trying to do with those two shifts and one OR and... replaces them with an SHRD in some cases (more about this below).
Functions of the same form can be used for shifts by other numbers of bits, less than 64. From 64 to 127, it gets easier, but the form changes. One thing to keep in mind is that it would be a mistake to pass the number of bits for shifting as a runtime parameter to a shr function. Shift instructions by a variable number of bits are slower than the ones using a constant number on most architectures. You could use a non-type template parameter to generate different functions at compile time - this is C++, after all...
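For illustration, here's a sketch of that non-type template parameter idea, following the same pattern as shr2 above (my own example, not code from the answer; it only covers shift counts below 64):
#include <cstdint>

template <unsigned Bits>
__int128_t shr(__int128_t n)
{
    static_assert(Bits > 0 && Bits < 64, "this form only handles counts of 1..63");
    using std::int64_t;
    using std::uint64_t;
    int64_t nh = n >> 64;
    uint64_t nl = static_cast<uint64_t>(n);
    // The shift count is a compile-time constant, so the generated code can use
    // constant-count shift instructions.
    uint64_t rl = nl >> Bits | static_cast<uint64_t>(nh) << (64 - Bits);
    int64_t rh = nh >> Bits;
    return static_cast<__int128_t>(rh) << 64 | rl;
}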
I think using such a function makes sense on all architectures except IntelSB, where SHRD is already as fast as it can get. On AMD, it will definitely be an improvement. Less so on IntelH: for our case, I don't think it will make a difference, but generally it could shave one cycle off some calculations; there could theoretically be cases where it makes things slightly worse, but I think those are very uncommon (as usual, there's no substitute for measuring). I don't think it will make a difference for our loop because it will change things from [nh being ready after one cycle and nl after three] to [both being ready after two]; this means all three multiplications for the next iteration will be ready at the same time and they'll have to wait for one another, essentially wasting the cycle that was gained by the shift.
GCC seems to use SHRD on all architectures, and the "assembly in C++" code above can be used as an optimization where it makes sense. The LLVM optimizer uses a more nuanced approach: it does the optimization (replaces SHRD) automatically for AMD, but not for Intel, where it even reverses it, as mentioned above. This may change in future releases, as indicated by the discussion on the patch for LLVM that implemented this optimization. For now, if you want to use the alternative with LLVM on Intel, you'll have to resort to assembly code.
2. Optimizing the multiplication: The test code uses a 128-bit integer for i, but that's not needed in this case, as its value fits easily in 64 bits (32, actually, but that doesn't help us here). What this means is that ih will always be zero; this reduces the diagram for 128-bit multiplication to the following:
nh * il
\
\
\
+ -> next nh
/
high 64 bits
/
nl * il
\
low 64 bits
\
-> next nl
Normally, I'd just say "declare i as long long and let the compiler optimize things" but unfortunately this doesn't work here; both compilers go for the standard behaviour of converting the two operands to their common type before doing the calculation, so i ends up being 128 bits wide even if it starts out as 64. We'll have to do things the hard way:
#include <cstdint>

__int128_t mul(__int128_t n, long long i)
{
    using std::int64_t;
    using std::uint64_t;
    // Unpack the two halves.
    int64_t nh = n >> 64;
    uint64_t nl = static_cast<uint64_t>(n);
    // Do the actual work.
    __asm__(R"(
        movq %0, %%r10
        imulq %2, %%r10
        mulq %2
        addq %%r10, %0
    )" : "+d"(nh), "+a"(nl) : "r"(i) : "%r10");
    // Pack the result.
    return static_cast<__int128_t>(nh) << 64 | nl;
}
I said I tried to avoid assembly code in this answer, but it's not always possible. I managed to coax GCC into generating the right code with "assembly in C++" for the function above, but once the function is inlined everything falls apart - the optimizer sees what's going on in the complete loop body and converts everything to 128 bits. LLVM seems to behave in this case, but, since I was testing on GCC, I had to use a reliable way to get the right code in there.
Declaring i as long long and using this function instead of the normal multiplication operator, I measured 5 cycles per iteration for the first sample and 7 cycles for the second one on IntelSB, a gain of one cycle in each case. I expect it to shave one cycle off the iterations for both examples on IntelH as well.
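For concreteness, the substitution in the second sample's loop looks roughly like this (a sketch only; the iteration count and the initial value of n are illustrative, not taken from the original code):
constexpr long long ITERATIONS = 1000000000; // illustrative bound, not from the original

__int128_t n = 1;
for (long long i = 1; i < ITERATIONS; ++i)   // i is now a 64-bit type
{
    n = mul(n, i); // instead of n *= i;
    n >>= 2;
}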
3. The loop can sometimes be restructured to encourage pipelined execution, when (at least some) iterations don't really depend on previous results, even though it may look like they do. For example, we could replace the for loop for the second sample with something like this:
__int128_t n2 = 1;
long long j = 1000000000 / 2;
for (long long i = 1; i < 1000000000 / 2; ++i, ++j)
{
    n *= i;
    n2 *= j;
    n >>= 2;
    n2 >>= 2;
}
n *= (n2 * j) >> 2;
This takes advantage of the fact that some partial results can be calculated independently and only aggregated at the end. We're also hinting to the compiler that we want to pipeline the multiplications and shifts (not always necessary, but it does make a small difference for GCC for this code).
The code above is nothing more than a naive proof of concept. Real code would need to handle the total number of iterations in a more reliable way. The bigger problem is that this code won't generate the same results as the original, because of different behaviour in the presence of overflow and rounding. Even if we stop the loop on the 51st iteration, to avoid overflow, the result will still be different by about 10%, because of rounding happening in different ways when shifting right. In real code, this would most likely be a problem; but then again, you wouldn't have real code like this, would you?
Assuming this technique is applied to a case where the problems above don't occur, I measured the performance of such code in a few cases, again on IntelSB. The results are given in "cycles per iteration", as before, where "iteration" means one iteration of the original code (I divided the total number of cycles for executing the whole loop by the number of iterations executed by the original code, not by that of the restructured one, to have a meaningful comparison):
The code above executes in 7 cycles per iteration, a gain of one cycle over the original.
The code above with the multiplication operator replaced with our mul() function needs 6 cycles per iteration.
The restructured code does suffer from more register shuffling, which can't be avoided unfortunately (more variables). More recent processors like IntelH have architecture improvements that make register moves essentially free in many cases; this could make the code yield even larger gains. Using new instructions like MULX for IntelH could avoid some register moves altogether; GCC does use such instructions when compiling with -march=haswell.
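As a side note, here's a minimal sketch of what a widening 64x64 multiply looks like through the BMI2 intrinsic (my own illustration, not code from the answer; it needs -mbmi2 or -march=haswell, and the compiler is still free to choose whether MULX is actually emitted):
#include <immintrin.h>
#include <cstdint>

// MULX writes the two halves of the product to registers of the compiler's
// choosing (only one source operand is implicit) and leaves the flags alone,
// which is what can save register moves in tight loops like these.
std::uint64_t widening_mul(std::uint64_t a, std::uint64_t b, std::uint64_t* hi)
{
    unsigned long long high;
    std::uint64_t lo = _mulx_u64(a, b, &high);
    *hi = high;
    return lo;
}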
Unanswered questions
None of the measurements that we have so far explain the large differences in performance reported by the OP, and observed by me on a different system.
My initial timings were taken on a remote system (Westmere family processor) where, of course, a lot of things could happen; yet, the results were strangely stable.
On that system, I also experimented with executing the second sample with a right shift and a left shift; the code using a right shift was consistently 50% slower than the other variant. I couldn't replicate that on my controlled test system on IntelSB, and I don't have an explanation for those results either.
We can discard all of the above as unpredictable side effects of compiler / processor / overall system behaviour, but I can't shake the feeling that not everything has been explained here.
It's true that it doesn't really make much sense to benchmark such tight loops without a controlled system, precise tools (counting cycles) and looking at the generated assembly code for each case. Compiler idiosyncrasies can easily result in code that artificially introduces differences of 50% or more in performance.
Another factor that could explain large differences is the presence of Intel Hyper-Threading. Different parts of the core behave differently when this is enabled, and the behaviour has also changed between processor families. This could have strange effects on tight loops.
To top everything off, here's a crazy hypothesis: flipping bits consumes more power than keeping them constant. In our case, the first sample, working with zero values most of the time, will be flipping far fewer bits than the second one, so the latter will consume more power. Many modern processors have features that dynamically adjust the core frequency depending on electrical and thermal limits (Intel Turbo Boost / AMD Turbo Core). This means that, theoretically, under the right (or wrong?) conditions, the second sample could trigger a reduction of the core frequency, making the same number of cycles take more time and making the performance data-dependent.
After benchmarking both (using the assembly generated by GCC 4.7.3 on -O2) on my 4770K, I found that the first one takes 5 cycles per iteration and the second one takes 9 cycles per iteration. Why so much difference?
It turns out to be an interplay between throughput and latency. The main killer is shrd, which takes 3 cycles and is on the critical path. Here's a picture of it (I ignore the chain for i because it is faster and there is plenty of spare throughput for it to just run ahead, it will not interfere):
The edges here are dependencies, not dataflow.
Based solely on latencies in this chain, the expected time would be 8 cycles per iteration. But it is not. The problem here is that for 8 cycles to happen, mul2 and imul3 have to be executed in parallel, and integer multiplication only has a throughput of 1/cycle. So it (either one) has to wait a cycle, and holds up the chain by a cycle. I verified this by changing that imul to an add, which reduced the time to 8 cycles per iteration. Changing the other imul to an add had no effect, as predicted based on this explanation (it doesn't depend on shrd and can thus be scheduled earlier, without interfering with the other multiplications).
These exact details are only for Haswell.
The code I used was this:
section .text
global cmp1
proc_frame cmp1
[endprolog]
mov r8, rsi
mov r9, rdi
mov esi, 1         ; i_lo = 1
xor edi, edi       ; i_hi = 0   (i lives in rdi:rsi)
mov eax, 128       ; n_lo = 128
xor edx, edx       ; n_hi = 0   (n lives in rdx:rax)
.L2:
mov rcx, rdx       ; rcx = n_hi
mov rdx, rdi       ; rdx = i_hi
imul rdx, rax      ; i_hi * n_lo
imul rcx, rsi      ; n_hi * i_lo
add rcx, rdx       ; sum of the cross products
mul rsi            ; rdx:rax = n_lo * i_lo
add rdx, rcx       ; new n_hi = high(n_lo * i_lo) + cross products
add rsi, 1         ; ++i (low half)
mov rcx, rsi
adc rdi, 0         ; propagate the carry into i_hi
xor rcx, 10000000  ; compare i_lo with 10000000
or rcx, rdi        ; ...and check that i_hi is 0
jne .L2            ; loop until i == 10000000
mov rdi, r9
mov rsi, r8
ret
endproc_frame
global cmp2
proc_frame cmp2
[endprolog]
mov r8, rsi
mov r9, rdi
mov esi, 1
xor edi, edi
mov eax, 128
xor edx, edx
.L3:
mov rcx, rdi
imul rcx, rax
imul rdx, rsi
add rcx, rdx
mul rsi
add rdx, rcx
shrd rax, rdx, 2   ; n_lo >>= 2, shifting bits in from n_hi (the composite shift)
sar rdx, 2         ; n_hi >>= 2 (arithmetic shift)
add rsi, 1
mov rcx, rsi
adc rdi, 0
xor rcx, 10000000
or rcx, rdi
jne .L3
mov rdi, r9
mov rsi, r8
ret
endproc_frame
Unless your processor supports native 128-bit operations, the operations have to be implemented in software using the next best option.
Your 128-bit operations use the same scheme that 8-bit processors used for 16-bit operations, and this takes time.
For example, a 128-bit right shift, by one bit, using 64-bit registers requires:
Shift the Most Significant register right into carry. The Carry flag will contain the bit that was shifted out.
Shift the Least Significant register right, with carry. The bits will be shifted right, with the carry flag being shifted into the Most Significant Bit position.
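In C++ terms, working on two 64-bit halves, the same one-bit shift looks something like this (a sketch of the idea, not actual library code; the function name is made up):
#include <cstdint>

// A 128-bit logical right shift by one bit on two 64-bit halves: the bit
// shifted out of the high half is carried into the top of the low half,
// mirroring the shift-through-carry sequence described above.
void shr128_by_1(std::uint64_t& hi, std::uint64_t& lo)
{
    lo = (lo >> 1) | (hi << 63); // bit 0 of hi becomes bit 63 of lo
    hi >>= 1;
}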
Without support for native 128-bit operations, your code will take twice as many operations as the equivalent 64-bit operations, sometimes more (multiplication, for example). This is why you are seeing such poor performance.
I highly recommend using 128-bit types only where they are really necessary.

Is multiplication of two numbers a constant time algorithm?

Suppose I write,
int a = 111;
int b = 509;
int c = a * b;
So what is the time complexity to compute 'a * b' ? How is the multiplication operation executed?
Compiling this function:
int f(int a, int b) {
    return a * b;
}
With gcc -O3 -march=native -m64 -fomit-frame-pointer -S gives me the following assembly:
f:
movl %ecx, %eax
imull %edx, %eax
ret
The first instruction (movl) copies the first argument into the result register, and the second instruction (imull) multiplies it by the second argument; the result is then returned.
The actual multiplication is done with imull, which - depending on your CPU type - will take a certain amount of CPU cycles.
If you look at Agner Fog's instruction timing tables, you can see how much time each instruction takes. On most x86 processors it seems to be a small constant; however, the imul instruction on the AMD K8 with a 64-bit argument and result is listed as 4-5 CPU cycles. I don't know whether that's a measurement issue or genuinely variable time.
Also note that there are more factors involved than just the execution time. The integers have to be moved through the processor and into the right place to get multiplied. All of this, and other factors, contributes to latency, which is also noted in Agner Fog's tables. There are further issues, such as caching, that also make life more difficult - it's not that easy to say how fast something will run without running it.
x86 isn't the only architecture, and it's actually not inconceivable that there are CPUs and architectures out there with non-constant-time multiplication. This is especially important for cryptography, where algorithms using multiplication might be susceptible to timing attacks on those platforms.
Multiplication itself on most common architectures will be constant. Time to load registers may vary depending on the location of the variables (L1, L2, RAM, etc.), but the number of cycles the operation takes will be constant. This is in contrast to operations like sqrt, which may require additional cycles to achieve a certain precision.
You can get instruction costs for AMD, Intel and VIA here: http://www.agner.org/optimize/instruction_tables.pdf
By time complexity, I presume you mean whether it depends on the number of digits in a and b - that is, whether the number of CPU clock cycles would vary depending on whether you multiplied, say, 2*3 or 111*509. I think yes, it could vary, and it would depend on how the architecture implements the multiplication operation and how the intermediate results are stored.
Although there can be many ways to do this, one simple/primitive way is to implement multiplication using the binary adder/subtractor circuit.
Multiplication of a*b is adding a to itself b times using n-digit binary adders. Similarly, division a/b is repeatedly subtracting b from a until it reaches 0, although this will take more space to store the quotient and remainder.
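As a toy illustration of that repeated-addition idea (this is not how real hardware multipliers work; the function name is made up for this sketch):
#include <cstdint>

// Computes a * b by adding a to an accumulator b times: one n-bit addition
// per step, so the cost grows with the value of b - unlike a hardware
// multiply instruction, whose cycle count doesn't depend on the operand
// values on most modern CPUs.
std::uint64_t mul_by_repeated_addition(std::uint64_t a, std::uint64_t b)
{
    std::uint64_t result = 0;
    for (std::uint64_t i = 0; i < b; ++i)
        result += a;
    return result;
}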
void myfun()
{
    int a = 111;
    int b = 509;
    int c = a * b;
}
Disassembled part:
movl $111, -4(%ebp)   # store a
movl $509, -8(%ebp)   # store b
movl -4(%ebp), %eax   # load a into eax
imull -8(%ebp), %eax  # eax = a * b
So as you can see, it all depends on the imull instruction - specifically, on the fetch, decode, and execute cycle of the CPU.
In your example, the compiler would do the multiplication at compile time and your code would look like
int c = 56499;
If you changed your example to look like
int c = a * 509;
then the compiler MIGHT decide to rewrite your code like
int c = a * ( 512 - 2 - 1 );
int c = (a << 9) - (a << 1) - a;
I said might because the compiler will compare the cost of using shifts to the cost of a multiply instruction and pick the best option.
Given a fast multiply instruction, the shift version usually wins only when it takes one or maybe two shifts.
If your numbers are too large to fit in an integer (32 bits), then the arbitrary-precision math routines take between O(n log n) and O(n^2) time, where n is the number of 32-bit parts needed to hold the numbers.