Deriving nth Gray code from the (n-1)th Gray Code - bit-manipulation

Is there a way to derive the 4-bit nth Gray code using the (n-1)th Gray code by using bit operations on the (n-1)th Gray Code?
For example, the 4th Gray code is 0010. Now I want to get the 5th Gray code, 0110, by doing bit operations on 0010.

Perhaps it's "cheating" but you can just pack a lookup table into a 64-bit constant value, like this:
0000 0 -> 1
0001 1 -> 3
0011 3 -> 2
0010 2 -> 6
0110 6 -> 7
0111 7 -> 5
0101 5 -> 4
0100 4 -> C
1100 C -> D
1101 D -> F
1111 F -> E
1110 E -> A
1010 A -> B
1011 B -> 9
1001 9 -> 8
1000 8 -> 0
FEDCBA9876543210    nybble order (current Gray code)
|              |
V              V
EAFD9B80574C2631    next Gray code
Then you can use shifts and masks to perform a lookup (depending on your language):
int next_gray_code(int code)
{
    return (0xEAFD9B80574C2631ULL >> (code << 2)) & 15;
}
Alternatively, you can use the formula for converting from Gray to binary, increment the value, and then convert from binary to Gray, which is just n xor (n / 2):
int next_gray_code(int code)
{
    code = code ^ (code >> 2);
    code = code ^ (code >> 1);
    code = (code + 1) & 15;
    return code ^ (code >> 1);
}

What about the following?
p  := XOR(g0, g1, g2, g3)
b0 := p'
b1 := p & g0
b2 := p & g0' & g1
b3 := p & g0' & g1'
n0 := XOR(b0, g0)
n1 := XOR(b1, g1)
n2 := XOR(b2, g2)
n3 := XOR(b3, g3)
The current Gray code word is g3 g2 g1 g0 and the next code word is n3 n2 n1 n0. p is the parity of the current code word, and b3 b2 b1 b0 are flip signals: exactly one of them is 1, and XORing it into the code word produces the subsequent code word. Only one bit is changed between adjacent code words.


How to interpret n & (n - 1) [duplicate]

I'm looking at some code which should be trivial, but my math is failing me miserably here.
Here's a condition that checks whether a number is a power of 2:
if((num != 1) && (num & (num - 1))) { /* make num pow of 2 */ }
My question is, how does using a bitwise AND between num and num - 1 determine if a number is a power of 2?
Any power of 2 minus 1 is all ones: 2^N - 1 = 111...b
2 = 2^1. 2-1 = 1 (1b)
4 = 2^2. 4-1 = 3 (11b)
8 = 2^3. 8-1 = 7 (111b)
Take 8 for example. 1000 & 0111 = 0000
So that expression tests if a number is NOT a power of 2.
Well, the first case will check for 2^0 == 1.
For the other cases the num & (num - 1) comes into play:
That's saying if you take any number, and mask off the bits from one lower, you'll get one of two cases:
if the number is a power of two already, then one less will result in a binary number that only has the lower-order bits set, so ANDing the two gives zero.
Example with 8: 1000 & (1000 - 1) --> (1000 & 0111) --> 0000
if the number is not a power of two already, then one less will not touch the highest bit, so the result will be at least the largest power of two less than num.
Example with 3: 0011 & (0011 - 1) --> (0011 & 0010) --> 0010
Example with 13: 1101 & (1101 - 1) --> (1101 & 1100) --> 1100
So the actual expression finds everything that isn't a power of two, including 2^0.
Well,
if you have x = 1000 then x - 1 = 0111, and 1000 & 0111 is 0000.
Every number x that is a power of 2 has an x - 1 with ones exactly in the positions where x has zeroes, and a bitwise AND of 0 and 1 is always 0.
If the number x is not a power of two, for example 0110, then x - 1 is 0101 and the AND gives 0100.
For all combinations within 0000 - 1111 this leads to:
X    X-1  X & X-1
0000 1111 0000
0001 0000 0000
0010 0001 0000
0011 0010 0010
0100 0011 0000
0101 0100 0100
0110 0101 0100
0111 0110 0110
1000 0111 0000
1001 1000 1000
1010 1001 1000
1011 1010 1010
1100 1011 1000
1101 1100 1100
1110 1101 1100
1111 1110 1110
And there is no need for a separate check for 1.
I prefer this approach that relies on two's complement:
bool checkPowTwo(int x) {
    return (x & -x) == x;
}
Also, the expression given considers 0 to be a power of 2. To fix that, use
(x && !(x & (x - 1))) instead.
It determines whether the integer is a power of 2 or not. If (x & (x - 1)) is zero, then the number is a power of 2.
For example,
let x be 8 (1000 in binary); then x-1 = 7 (0111).
  1000
& 0111
------
  0000
C program to demonstrate:
#include <stdio.h>

int main(void)
{
    int a = 8;
    if ((a & (a - 1)) == 0)
    {
        printf("the bit is power of 2 \n");
    }
    else
    {
        printf("the bit is not power of 2\n");
    }
    return 0;
}
This outputs the bit is power of 2.
#include <stdio.h>

int main(void)
{
    int a = 7;
    if ((a & (a - 1)) == 0)
    {
        printf("the bit is power of 2 \n");
    }
    else
    {
        printf("the bit is not power of 2\n");
    }
    return 0;
}
This outputs the bit is not power of 2.
Suppose n is the given number:
(n && !(n & (n - 1))) will return 1 if n is a power of 2, and 0 otherwise.
When you decrement a positive integer by 1:
if it is zero, you get -1.
if its least significant bit is 1, this bit is set to 0, the other bits are unchanged.
otherwise, all the low-order 0 bits are set to 1 and the lowest 1 bit is set to 0; the other bits are unchanged.
Case 1 (x = 0): x & (x - 1) is 0, yet x is not a power of 2; a trivial counterexample.
Cases 2 and 3: if x and x-1 have no bits in common, it means the other bits in both of the above cases are all zero, hence the number has a single 1 bit, hence it is a power of 2.
If x is negative, this test does not work for two's complement representation of signed integers as either decrementing overflows or x and x-1 have at least the sign bit in common, which means x & (x-1) is not zero.
To test for a power of 2 the code should be:
int is_power_of_2(unsigned x) {
    return x && !(x & (x - 1));
}
#include <stdio.h>

void powerof2(int a);

int main(void)
{
    int a;
    while (1)
    {
        printf("Enter any number to check whether it is a power of 2\n");
        scanf("%d", &a);
        powerof2(a);
    }
}

void powerof2(int a)
{
    int count = 0;
    int b = 0;
    while (a)
    {
        b = a % 2;
        a = a / 2;
        if (b == 1)
        {
            count++;
        }
    }
    if (count == 1)
        printf("power of 2\n");
    else
        printf("not power of 2\n");
}
The following program in C will find out whether a number is a power of 2, and also which power of 2 it is.
#include <stdio.h>

int main(void)
{
    unsigned int a;
    unsigned int count = 0;
    unsigned int check = 1;
    unsigned int position = 0;
    unsigned int temp;
    unsigned int i;

    scanf("%u", &a); /* get value of a */
    for (i = 0; i < sizeof(int) * 8; i++) /* number of bits depends on the size of int on this machine */
    {
        temp = a & (check << i);
        if (temp)
        {
            position = i;
            count++;
        }
    }
    if (count == 1)
    {
        printf("%u is 2 to the power of %u", a, position);
    }
    else
    {
        printf("Not a power of 2");
    }
    return 0;
}
There are other ways to do this:
if a number is a power of 2, only 1 bit will be set in its binary representation.
For example, 8 is binary 1000; subtracting 1 from this gives 0111.
An AND operation with the original number (1000 & 0111) gives 0.
If that is the case, the number is a power of 2.
void IsPowerof2(int i)
{
    if (i && !(i & (i - 1)))
    {
        printf("%d is a power of 2\n", i);
    }
}
Another way is this:
if we take the two's complement of a number which is a power of 2,
for example 8, i.e. binary 1000: the complement is 0111, and adding 1 to it gives -8.
ANDing that with the original number gives back the same number; if that is the case, the number is a power of 2.
void IsPowerof2(int i)
{
    if (i && ((~i + 1) & i) == i)
    {
        printf("%d is a power of 2\n", i);
    }
}

Unitary number for “&” bitwise operator in c++ [closed]

I have a question; I would appreciate it if you helped me to understand it. Imagine I define the following number
c = 0x3FFFFFFF
and a = an arbitrary integer Q. My question is: why is a &= c always equal to Q, unchanged? For example, if I take a = 10 then the result of a &= c is 10; if a = 256 the result of a &= c is 256. Could you please explain why? Thanks a lot.
Both a and c are integer types, composed of 32 bits on a typical machine. The first bit of a signed integer is the sign bit: it is 0 for a positive number and 1 for a negative one. 0x3FFFFFFF is a special value: its first two bits are 0 and all the other bits are 1. Since 1 & 1 = 1 and 0 & 1 = 0, ANDing with a 1 bit leaves the other operand's bit unchanged. So when the number a is positive and not larger than c, a & 0x3FFFFFFF is still a itself.
a &= c is the same as a = a & c, which calculates the bitwise AND of a and c and then assigns that value to a again, just in case you've mistaken what that operator does.
Now c contains almost only 1's. Then just think about what each bit becomes: 1 & x is always x. Since you only tried such low numbers, none of them changed.
Try with a value of a that has one of the top two bits set, such as 0xC0000000, and you will get a different result.
You have not tested a &= c; with all possible values of a, so you are incorrect to assert that it never changes the value of a.
a &= c; sets a to a value in which each bit is set if the bits in the same position in a and in c are both set. If the two bits are not both set, the bit in the result is clear.
In 0x3FFFFFFF, the 30 least significant bits are set. When this is used in a &= c; with any number in which higher bits are set, such as 0xC0000000, the higher bits will be cleared.
If you know about the bitwise & ("and") operation and how it works, then there should be no question about this. Say you have two numbers a and b, each n bits long. Look,
a => a_(n-1) a_(n-2) a_(n-3) ... a_i ... a_2 a_1 a_0
b => b_(n-1) b_(n-2) b_(n-3) ... b_i ... b_2 b_1 b_0
Where a_0 and b_0 are the least significant bits and a_(n-1) and b_(n-1) are the most significant bits of a and b respectively.
Now, take a look at the & operation on two single binary bits.
1 & 1 = 1
1 & 0 = 0
0 & 1 = 0
0 & 0 = 0
So, the result of the & operation is 1 only when both bits are 1. If at least one bit is 0, then the result is 0.
Now, for n-bits long number,
a & b = (a_i & b_i); where `i` is from 0 to `n-1`
For example, if a and b both are 4 bits long numbers and a = 5, b = 12, then
a = 5 => a = 0101
b = 12 => b = 1100
if c = (a & b), c_i = (a_i & b_i) for i=0..3, here all numbers are 4 bits(0..3)
now, c = c_3 c_2 c_1 c_0
so c_3 = a_3 & b_3
c_2 = a_2 & b_2
c_1 = a_1 & b_1
c_0 = a_0 & b_0
a 0 1 0 1
b 1 1 0 0
-------------
c 0 1 0 0 (that means c = 4)
therefore, c = a & b = 5 & 12 = 4
Now, what would happen, if all of the bits in one number are 1s?
Let's see.
0 & 1 = 0
1 & 1 = 1
so if one bit is fixed at 1, then the result is the same as the other bit.
if a = 5 (0101) and b = 15 (1111), then
a 0 1 0 1 (5)
b 1 1 1 1 (15)
------------------
c 0 1 0 1 (5, which is equal to a=5)
So, if any of the numbers has all bits are 1s, then the & result is the same as the other number. Actually, for a=any value of 4-bits long number, you will get the result as a, since b is 4-bits long and all 4 bits are 1s.
Now another issue happens when a > 15, meaning a exceeds 4 bits.
For the above example, expand the bit size by 1 and change the value of a to 25.
a = 25 (11001) and b = 15 (01111). b is the same as before except for the size, so its Most Significant Bit (MSB) is 0. Now,
a 1 1 0 0 1 (25)
b 0 1 1 1 1 (15)
----------------------
c 0 1 0 0 1 (9, not equal to a=25)
So, it is clear that we have to keep every single bit to 1 if we want to get the other number as the result of the & operation.
Now it is time to analyze the scenario you posted.
Here, a &= c is the same as a = a & c.
We assumed that you are using 32-bit integer variables.
You set c = 0x3FFFFFFF means c = (2^30) - 1 or c = 1073741823
a = 0000 0000 0000 0000 0000 0000 0000 1010 (10)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0000 0000 1010 (10, which is equal to a=10)
and
a = 0000 0000 0000 0000 0000 0001 0000 0000 (256)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0001 0000 0000 (256, which is equal to a=256)
but, if a > c, say a=0x40000000 (1073741824, c+1 in base 10), then
a   = 0100 0000 0000 0000 0000 0000 0000 0000 (1073741824)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a   = 0000 0000 0000 0000 0000 0000 0000 0000 (0, which is not equal to a=1073741824)
So, your assumption (that the value of a after executing the statement a &= c is the same as before) is true only if 0 <= a <= c.

challenging bit manipulation procedure

This question showed up on one of my teacher's old final exams. How does one even think logically about arriving at the answer?
I am familiar with the bit-manipulation operators and conversion between hex and binary.
int whatisthis(int x) {
x = (0x55555555 & x) + (0x55555555 & (x >>> 1));
x = (0x33333333 & x) + (0x33333333 & (x >>> 2));
x = (0x0f0f0f0f & x) + (0x0f0f0f0f & (x >>> 4));
x = (0x00ff00ff & x) + (0x00ff00ff & (x >>> 8));
x = (0x0000ffff & x) + (0x0000ffff & (x >>> 16));
return x;
}
Didn't you forget some left shifts?
x = ((0x55555555 & x) <<< 1) + (0x55555555 & (x >>> 1));
x = ((0x33333333 & x) <<< 2) + (0x33333333 & (x >>> 2));
snip...
This would then be the reversal of bits from left to right.
You can see that bits are moved together rather than one by one, which leads to a cost of O(log2(nbits)):
you invert 2^5 = 32 bits in 5 statements.
It might help you to rewrite the constants in binary to understand better how it works.
If there are no left shifts, then I can't help you, because the additions will generate carries and I can't see any obvious meaning...
EDIT: OK, interesting, so this is for counting the number of bits set to 1 (also known as population count or popcount)... Here is a squeak Smalltalk quick test on 16 bits
| f |
f := [:x |
| y |
y := (x bitAnd: 16r5555) + (x >> 1 bitAnd: 16r5555).
y := (y bitAnd: 16r3333) + (y >> 2 bitAnd: 16r3333).
y := (y bitAnd: 16r0F0F) + (y >> 4 bitAnd: 16r0F0F).
y := (y bitAnd: 16r00FF) + (y >> 8 bitAnd: 16r00FF).
y].
^(0 to: 16rFFFF) detect: [:i | i bitCount ~= (f value: i)] ifNone: [nil]
The first statement handles each pair of bits. If no bit is set in the pair, it produces 00; if a single bit is set, it produces 01; if two bits are set, it produces 10.
00 -> 0+0 -> 00 = 0, no bit set
01 -> 1+0 -> 01 = 1, 1 bit set
10 -> 0+1 -> 01 = 1, 1 bit set
11 -> 1+1 -> 10 = 2, 2 bits set
So it counts the number of bits set in each pair.
The second statement handles groups of 4 adjacent bits:
0000 -> 00+00 -> 0000 0+0=0 bits set
0001 -> 01+00 -> 0001 1+0=1 bits set
0010 -> 10+00 -> 0010 2+0=2 bits set
0100 -> 00+01 -> 0001 0+1=1 bits set
0101 -> 01+01 -> 0010 1+1=2 bits set
0110 -> 10+01 -> 0011 2+1=3 bits set
1000 -> 00+10 -> 0010 0+2=2 bits set
1001 -> 01+10 -> 0011 1+2=3 bits set
1010 -> 10+10 -> 0100 2+2=4 bits set
So, while the first step replaced each pair of bits by the number of bits set in that pair, the second step added these counts in each pair of pairs...
The next step will handle each group of 8 adjacent bits, summing the number of bits set in two groups of 4...

Big integer bit rotation

I want to implement unsigned left rotation in my integer class. However, because it is a template class, it can be of any size from 128 bits onwards; so I cannot use algorithms that require a temporary of the same size, because if the type becomes big, a stack overflow will occur (especially if such a function is in a call chain).
So, to fix this problem, I minimized it to a question: what steps do I have to do to rotate a 32-bit number using only 4 bits? Well, if you think about it, a 32-bit number contains 8 groups of 4 bits each, so if the number of bits to rotate is 4, then a swap will occur between groups 0 and 4, 1 and 5, 2 and 6, 3 and 7, after which the rotation is done.
If the number of bits to rotate is less than 4 and greater than 0, it is simple: just preserve the last N bits and run a shift-OR loop. For example, suppose we have the number 0x9CE2 and want to rotate it left by 3 bits. We do the following:
The number in binary is 1001 1100 1110 0010, with each nibble indexed from 0 to 3 from right to left. We will call this number N, and the number of bits in one group B.
[1] x <- N[3] >> 3
x <- 1001 >> 3
x <- 0100
y <- N[0] >> (B - 3)
y <- N[0] >> (4 - 3)
y <- 0010 >> 1
y <- 0001
N[0] <- (N[0] << 3) | x
N[0] <- (0010 << 3) | 0100
N[0] <- 0000 | 0100
N[0] <- 0100
[2] x <- y
x <- 0001
y <- N[1] >> (B - 3)
y <- N[1] >> (4 - 3)
y <- 1110 >> 1
y <- 0111
N[1] <- (N[1] << 3) | x
N[1] <- (1110 << 3) | 0001
N[1] <- 0000 | 0001
N[1] <- 0001
[3] x <- y
x <- 0111
y <- N[2] >> (B - 3)
y <- N[2] >> (4 - 3)
y <- 1100 >> 1
y <- 0110
N[2] <- (N[2] << 3) | x
N[2] <- (1100 << 3) | 0111
N[2] <- 0000 | 0111
N[2] <- 0111
[4] x <- y
x <- 0110
y <- N[3] >> (B - 3)
y <- N[3] >> (4 - 3)
y <- 1001 >> 1
y <- 0100
N[3] <- (N[3] << 3) | x
N[3] <- (1001 << 3) | 0110
N[3] <- 1000 | 0110
N[3] <- 1110
The result is 1110 0111 0001 0100, 0xE714 in hexadecimal, which is the right answer; and if you apply this to a number of any precision, all you need is one variable whose type is the type of an element of the array forming that bignum type.
Now the real problem is when the number of bits to rotate is bigger than one group, or bigger than half the size of the type (i.e. bigger than 4 bits or 8 bits in this example).
Usually, we shift bits from the last element to the first element and so on; but now, after shifting
the last element into the first element, the result has to be relocated to a new place, because the number of bits to rotate is bigger than one element (i.e. > 4 bits). The start index where the shift will start is the last index (3 in this example), and for the destination index we use the equation dest_index = int(bits_count/half_bits) + 1, where half_bits is the number of bits in half the number (half_bits = 8 in this example); so if bits_count = 7 then dest_index = int(7/8) + 1 = 1 + 1 = 2, which means the result of the first shift must be relocated to destination index 2 -- and that is my problem, for I cannot think of a way to write an algorithm for this situation.
Thanks.
This will just be some hints for one way to accomplish this. You can think about making two passes.
first pass, rotate on 4 bit boundaries only
second pass, rotate on 1 bit boundaries
So, the top level pseudo code might look like:
rotate (unsigned bits) {
    bits %= 32; /* only have 32 bits */
    if (bits == 0) return;
    rotate_nibs(bits / 4);
    rotate_bits(bits % 4);
}
So, to rotate by 13 bits, you first rotate by 3 nibbles, then rotate by 1 bit to get your total of 13 bits of rotation.
You could avoid nibble rotation altogether if you treat your array of nibbles as a circular buffer. Then, a nibble rotation is just a matter of changing the start position in the array for the 0 index.
If you must do rotation, it can be tricky. If you are rotating an 8 item array and only want to use 1 item of storage overhead to do the rotation, then to rotate by 3 items, you might approach it like this:
orig: A B C D E F G H
step 1: A B C A E F G H rem: D
2: A B C A E F D H rem: G
3: A G C A E F D H rem: B
4: A G C A B F D H rem: E
5: A G C A B F D E rem: H
6: A G H A B F D E rem: C
7: A G H A B C D E rem: F
8: F G H A B C D E done
But if you tried the same technique with 2, 4, or 6 item rotations, the cycle does not run through the whole array. So you have to be aware of whether the rotation count and the array size have a common divisor greater than 1, and make the algorithm account for that. If you step through the 6-item rotation, some more clues fall out.
orig: A B C D E F G H
A
G
E
C cycled back to A's position
B
H
F
D done
Notice that the GCD(6,8) is 2, which means we should expect 4 iterations for each pass. Then, the rotation algorithm for an N item array could look like:
rotate (n) {
    G = GCD(n, N)
    for (i = 0; i < G; ++i) {
        p = arr[i];
        for (j = 1; j < N/G; ++j) {
            swap(p, arr[(i + j*n) % N]);
        }
        arr[i] = p;
    }
}
There is an optimization you can do to avoid the swap per iteration, that I'll leave as an exercise.
I suggest calling an assembly language function for the bit rotation.
Many assembly languages have better facilities for rotating bits through carry and rotating carry through bits.
Many times the assembly language is less complex than the C or C++ function.
The drawback is that you will need one instance of each assembly function for each different platform.

What's the reverse function of x XOR (x/2)?

What's the reverse function of x XOR (x/2)?
Is there a system of rules for equation solving, similar to algebra, but with logic operators?
Suppose we have a number x of N bits. You could write this as:
b(N-1) b(N-2) b(N-3) ... b(0)
where b(i) is bit number i in the number (where 0 is the least significant bit).
x / 2 is the same as x shifted right by 1 bit. Let's assume unsigned numbers. So:
x / 2 = 0 b(N-1) b(N-2) ... b(1)
Now we XOR x with x / 2:
x ^ (x / 2) = b(N-1)^0 b(N-2)^b(N-1) b(N-3)^b(N-2) ... b(0)^b(1)
Note that the leftmost bit (the most significant bit) of this is b(N-1)^0, which is b(N-1). In other words, you can read bit b(N-1) off the result immediately. When you have this bit, you can calculate b(N-2), because the second bit of the result is b(N-2)^b(N-1) and you already know b(N-1). And so on: you can compute all bits b(N-1) down to b(0) of the original number x.
I can give you an algorithm in bits:
Assuming you have an array of n bits:
b = [b1 .. bn] // each bi is 0 or 1, with b1 the most significant bit
The original number is then:
x1 = b1
x2 = b2 ^ x1
x3 = b3 ^ x2
or in general
x[i] = b[i] ^ x[i-1]
Assume Y = X ^ (X / 2)
If you want to find X, do this
X = 0
do
    X ^= Y
    Y /= 2
while Y != 0
I hope it helps!
I know it's an old topic, but I stumbled upon the same question and found a little trick. If you have n bits, instead of requiring n operations (as in the answer by Jesper), you can do it with log2(n) operations on numbers:
Suppose that y is equal to x XOR (x/2) at the beginning; then you can use the following C program:
INPUT: y

int i, x;
x = y;
for (i = 1; i < n; i <<= 1)
    x ^= x >> i;

OUTPUT: x
and here you have the solution.
">>" is the right bit shift operation. For example the number 13, 1101 in binary, if shifted by 1 on the right, will become 110 in binary, thus 13 >> 1 = 6. x >> i is equivalent to x / 2^i (division in the integers, of course)
"<<" is the left bit shift operation (i <<= 1 is equivalent to i *= 2)
Why does it work ? Let's take as example n = 5 bits, and start with y = b4 b3 b2 b1 b0 (in binary : in the following x is written in binary also, but i is written in decimal)
Initialisation :
x = b4 b3 b2 b1 b0
First step : i = 1
x >> 1 = b4 b3 b2 b1, so we have
x = b4 b3 b2 b1 b0 XOR b4 b3 b2 b1 = b4 (b3^b4) (b2^b3) (b1^b2) (b0^b1)
Second step : i = 2
x >> 2 = b4 (b3^b4) (b2^b3) so we have
x = b4 (b3^b4) (b2^b3) (b1^b2) (b0^b1) XOR b4 (b3^b4) (b2^b3) = b4 (b3^b4) (b2^b3^b4) (b1^b2^b3^b4) (b0^b1^b2^b3)
Third step : i = 4
x >> 4 = b4 so we have
x = b4 (b3^b4) (b2^b3^b4) (b1^b2^b3^b4) (b0^b1^b2^b3) XOR b4 = b4 (b3^b4) (b2^b3^b4) (b1^b2^b3^b4) (b0^b1^b2^b3^b4)
Then i = 8, which is more than 5, we exit the loop.
And we have the desired output.
The loop has log2(n) iterations because i starts at 1 and is multiplied by 2 at each step, so for i to reach n, we have to do it log2(n) times.