How can I test whether a number is a power of 2? - c++

I need a function like this:
// return true if 'n' is a power of 2, e.g.
// is_power_of_2(16) => true
// is_power_of_2(3) => false
bool is_power_of_2(int n);
Can anyone suggest how I could write this?

(n & (n - 1)) == 0 is best. However, note that it will incorrectly return true for n=0, so if that is possible, you will want to check for it explicitly.
http://www.graphics.stanford.edu/~seander/bithacks.html has a large collection of clever bit-twiddling algorithms, including this one.

A power of two has just one bit set (for unsigned numbers). Something like
bool powerOfTwo = !(x == 0) && !(x & (x - 1));
will work fine; one less than a power of two is all 1s in the lower bits, so the two must AND to 0 bitwise.
As I was assuming unsigned numbers, the == 0 test (which I originally forgot, sorry) is adequate. You may want a > 0 test instead if you're using signed integers.

Powers of two in binary look like this:
1: 0001
2: 0010
4: 0100
8: 1000
Note that there is always exactly one bit set. The only exception is a signed integer: e.g., an 8-bit signed integer with a value of -128 looks like:
10000000
So after checking that the number is greater than zero, we can use a clever little bit hack to test that one and only one bit is set.
bool is_power_of_2(int x) {
    return x > 0 && !(x & (x - 1));
}
For more bit twiddling, see the Bit Twiddling Hacks page linked above.

Approach #1:
Divide the number by 2 repeatedly and check what remains.
Time complexity: O(log2 n).
Approach #2:
Bitwise AND the number with the number just before it; the result should be ZERO.
Example: Number = 8
Binary of 8: 1 0 0 0
Binary of 7: 0 1 1 1, and the bitwise AND of the two numbers is 0 0 0 0 = 0.
Time complexity: O(1).
Approach #3:
Bitwise XOR the number with the number just before it; the result should be the sum of both numbers.
Example: Number = 8
Binary of 8: 1 0 0 0
Binary of 7: 0 1 1 1, and the bitwise XOR of the two numbers is 1 1 1 1 = 15 = 8 + 7.
Time complexity: O(1).
A code sketch of all three approaches follows the link below.
http://javaexplorer03.blogspot.in/2016/01/how-to-check-number-is-power-of-two.html
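A rough C++ sketch of the three approaches (a sketch only; the function names are mine, not from the linked post):

// Approach #1: repeated division, O(log n)
bool is_pow2_div(int n) {
    if (n <= 0) return false;
    while (n % 2 == 0) n /= 2;   // strip factors of 2
    return n == 1;               // only a power of 2 reduces to 1
}

// Approach #2: n & (n - 1) clears the lowest set bit, O(1)
bool is_pow2_and(int n) {
    return n > 0 && (n & (n - 1)) == 0;
}

// Approach #3: for a power of 2, n ^ (n - 1) sets every bit up to and
// including the original single bit, which equals n + (n - 1), O(1)
bool is_pow2_xor(int n) {
    return n > 0 && (n ^ (n - 1)) == n + (n - 1);
}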

In C++20 there is std::has_single_bit which you can use for exactly this purpose if you don't need to implement it yourself:
#include <bit>
static_assert(std::has_single_bit(16u));
static_assert(!std::has_single_bit(15u));
Note that this requires the argument to be an unsigned integer type.

bool is_power_of_2(int i) {
    if (i <= 0) {
        return false;
    }
    return !(i & (i - 1));
}

For any power of 2, the following also holds:
(n & -n) == n
NOTE: the condition is also true for n = 0, though 0 is not a power of 2, so check for it separately.
The reason this works:
-n is the two's complement of n. Compared with n, -n has every bit to the left of n's rightmost set bit flipped. For powers of 2 there is only one set bit, so n & -n leaves exactly that bit, i.e. n itself.
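Wrapped up as a function (a minimal sketch; the n > 0 test screens out both zero and negative inputs):

bool is_power_of_2(int n) {
    // n & -n isolates the lowest set bit; for a power of 2 that bit is n itself
    return n > 0 && (n & -n) == n;
}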

This is probably the fastest, if using GCC. It uses only a POPCNT CPU instruction and one comparison. The binary representation of any power-of-2 number has exactly one bit set; the other bits are always zero. So we count the set bits with POPCNT, and if the count equals 1, the number is a power of 2. I don't think there is any faster method. And it's very simple once you've understood it:
if(1==__builtin_popcount(n))

The following would be faster than the most up-voted answer due to boolean short-circuiting and the fact that comparison is slow.
int isPowerOfTwo(unsigned int x)
{
    return x && !(x & (x - 1));
}
If you know that x cannot be 0, then
int isPowerOfTwo(unsigned int x)
{
    return !(x & (x - 1));
}

return n > 0 && 0 == (1 << 30) % n;
(This works because the only positive divisors of 2^30 are powers of two; note it fails for powers of two greater than 1 << 30.)

This isn't the fastest or shortest way, but I think it is very readable. So I would do something like this:
bool is_power_of_2(int n) {
    if (n <= 0) return false;  // guard: the shift loop below never terminates for negative n
    int bitCounter = 0;
    while (n) {
        if ((n & 1) == 1) {
            ++bitCounter;
        }
        n >>= 1;
    }
    return bitCounter == 1;
}
This works since binary is based on powers of two. Any number with only one bit set must be a power of two.

What's the simplest way to test whether a number is a power of 2 in C++?
If you have a modern Intel processor with the Bit Manipulation Instructions, then you can perform the following. It omits the straight C/C++ code because others have already answered it, but you need it if BMI is not available or enabled.
bool IsPowerOf2_32(uint32_t x)
{
#if __BMI__ || ((_MSC_VER >= 1900) && defined(__AVX2__))
    // _blsr_u32 computes x & (x - 1); a power of 2 is nonzero and BLSR yields 0
    return !!((x > 0) && !_blsr_u32(x));
#endif
    // Fallback to C/C++ code
}
bool IsPowerOf2_64(uint64_t x)
{
#if __BMI__ || ((_MSC_VER >= 1900) && defined(__AVX2__))
    return !!((x > 0) && !_blsr_u64(x));
#endif
    // Fallback to C/C++ code
}
GCC, ICC, and Clang signal BMI support with __BMI__. It's available in Microsoft compilers in Visual Studio 2015 and above when AVX2 is available and enabled. For the headers you need, see Header files for SIMD intrinsics.
I usually guard the _blsr_u64 with __LP64__ when compiling on i686. Clang needs a little workaround because it uses slightly different intrinsic symbol names:
#if defined(__GNUC__) && defined(__BMI__)
# if defined(__clang__)
# ifndef _tzcnt_u32
# define _tzcnt_u32(x) __tzcnt_u32(x)
# endif
# ifndef _blsr_u32
# define _blsr_u32(x) __blsr_u32(x)
# endif
# ifdef __x86_64__
# ifndef _tzcnt_u64
# define _tzcnt_u64(x) __tzcnt_u64(x)
# endif
# ifndef _blsr_u64
# define _blsr_u64(x) __blsr_u64(x)
# endif
# endif // x86_64
# endif // Clang
#endif // GNUC and BMI
Can you tell me a good web site where this sort of algorithm can be found?
This website is often cited: Bit Twiddling Hacks.

Here is another method, in this case using | instead of & :
bool is_power_of_2(int x) {
    return x > 0 && (x << 1 == (x | (x - 1)) + 1);
}

It is also possible using the standard library's log2:
#include <cmath>

int IsPowOf2(int z) {
    if (z <= 0)
        return 0;          // log2 is not defined for non-positive values
    double x = log2(z);
    int y = x;
    if (x == (double)y)    // beware: floating-point log2 may be inexact
        return 1;
    else
        return 0;
}

Another way to go (maybe not the fastest) is to determine whether ln(x) / ln(2) is a whole number, keeping floating-point rounding error in mind.
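A minimal sketch of that idea; the rounding and the final integer comparison are my additions, since the floating-point quotient alone can misclassify near-powers:

#include <cmath>

bool is_power_of_2_log(unsigned x) {
    if (x == 0) return false;
    int k = (int)std::lround(std::log(x) / std::log(2.0)); // candidate exponent
    if (k < 0 || k > 31) return false;                     // keep the shift well defined
    return (1u << k) == x;                                 // verify with exact integer math
}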

This is the bit-shift method in T-SQL (SQL Server):
SELECT CASE WHEN @X > 0 AND (@X) & (@X - 1) = 0 THEN 1 ELSE 0 END AS IsPowerOfTwo
It is a lot faster than taking a logarithm four times (first to get the decimal result, then to get the integer value to compare against).

Related

Comparing the Most Significant Bit of two numbers: ==, <, <=

Is there a quick bit operation to implement msb_equal: a function to check if two numbers have the same most significant bit?
For example, 0b000100 and 0b000111 both have 4 as their most significant bit value, so they are msb_equal. In contrast, 0b001111 has 8 as its MSB value and 0b010000 has 16 as its MSB value, so that pair is not msb_equal.
Similarly, are there fast ways to compute <, and <=?
Examples:
msb_equal(0, 0) => true
msb_equal(2, 3) => true
msb_equal(0, 1) => false
msb_equal(1, 2) => false
msb_equal(3, 4) => false
msb_equal(128, 255) => true
A comment asks why 0 and 1 are not msb_equal. My view on this is that if I write out two numbers in binary, they are msb_equal when the most significant 1 bit in each is the same bit.
Writing out 2 & 3:
2 == b0010
3 == b0011
In this case, the topmost 1 is in the same position in each number.
Writing out 1 & 0:
1 == b0001
0 == b0000
Here, the top most 1s are not the same.
It could be said that as 0 has no top most set bit, msb_equal(0,0) is ambiguous. I'm defining it as true: I feel this is helpful and consistent.
Yes, there are fast bit based operations to compute MSB equality and inequalities.
Note on syntax
I'll provide implementations using c language syntax for bitwise and logical operators:
| – bitwise OR. || – logical OR.
& – bitwise AND. && – logical AND.
^ – bitwise XOR.
==
msb_equal(l, r) -> bool
{
    return (l^r) <= (l&r)
}
<
This is taken from the Wikipedia page on the Z Order Curve (which is awesome):
msb_less_than(l, r) -> bool
{
    return (l < r) && (l < l^r)
}
<=
msb_less_than_equal(l, r) -> bool
{
    return (l < r) || (l^r <= l&r)
}
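Rendered as compilable C++ (a sketch assuming unsigned inputs; the bodies are taken directly from the pseudocode above):

bool msb_equal(unsigned l, unsigned r)           { return (l ^ r) <= (l & r); }
bool msb_less_than(unsigned l, unsigned r)       { return l < r && l < (l ^ r); }
bool msb_less_than_equal(unsigned l, unsigned r) { return l < r || (l ^ r) <= (l & r); }

// e.g. msb_equal(2, 3) and msb_equal(128, 255) are true;
// msb_equal(0, 1), msb_equal(1, 2) and msb_equal(3, 4) are false.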
If you know which number is the smallest/biggest one, there is a very fast way to check whether the MSBs are equal. The following code is written in C:
bool msb_equal(unsigned small, unsigned big) {
assert(small <= big);
return (small ^ big) <= small;
}
This can be useful in cases like when you add numbers to a variable and you want to know when you reached a new power of 2.
Explanation
The trick here is that if the two numbers have the same most significant bit, it will disappear since 1 xor 1 is 0; that makes the xor result smaller than both numbers. If they have different most significant bits, the biggest number's MSB will remain because the smallest number has a 0 in that place and therefore the xor result will be bigger than the smallest number.
When both input numbers are 0, the xor result will be 0 and the function will return true. If you want 0 and 0 to count as having different MSBs then you can replace <= with <.
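For instance, a hypothetical usage sketch (assuming the counter only grows and does not overflow, so the previous value is always the "small" argument):

void on_new_power_of_2(unsigned total);  // hypothetical callback

void accumulate(unsigned& total, unsigned increment) {
    unsigned prev = total;
    total += increment;
    if (!msb_equal(prev, total))         // MSB moved up: a new power of 2 was reached
        on_new_power_of_2(total);
}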

Bitwise NOT operator returning unexpected and negative value? [duplicate]

This question already has answers here:
Why is the output -33 for this code snippet
(3 answers)
Closed 9 years ago.
I'm trying to get the value of an integer using bitwise NOT, but I'm not getting what I expected.
#include <stdio.h>
int main(){
int i = 16;
int j = ~i;
printf("%d", j);
return 0;
}
Isn't 16 supposed to be:
00000000000000000000000000010000
So ~16 is supposed to be:
11111111111111111111111111101111
Why am I not getting what I expected, and why is the result negative?
This is what I'm trying to do:
I have a number, for example 27, which is:
00000000000000000000000000011011
and I want to check each bit to see whether it's 1 or 0.
So I need to get, for example, this value:
11111111111111111111111111110111
and then use the second one to check whether the 3rd bit of the first is set to 1.
Although there are pedantic points which can be made about compiler behaviour, the simple answer is that a signed int with the top bit set is a negative number.
So if you do something which sets the top bit of an int (a signed int, not an unsigned one), then ask the tools/library to show you the value of that int, you'll see a negative number.
This is not a universal truth, but it's a good approximation to it for most modern systems.
Note that it's printf which is making the representation here - because %d formats numbers as signed. %u may give the result you're expecting. Just changing the types of the variables won't be enough, because printf doesn't know anything about the types of its arguments.
I would say that as a general rule of thumb, if you're doing bit-twiddling, then use unsigned ints and display them in hexadecimal. Life will be simpler that way, and it most generally fits with the intent. (Fancy accelerated maths tricks are an obvious exception)
And want to check every bit if it's 1 or 0.
To check an individual bit, you don't NOT the number, you AND it with an appropriate bit mask:
if ((x & 1) != 0) ... // bit 0 is 1
if ((x & 2) != 0) ... // bit 1 is 1
if ((x & 4) != 0) ... // bit 2 is 1
if ((x & 8) != 0) ... // bit 3 is 1
...
if ((x & (1 << n)) != 0) ... // bit n is 1
...
if ((x & 0x80000000) != 0) ... // bit 31 is 1
If you want the ones' complement of a number, put that number into an unsigned variable and display it as such.
In C it would be:
unsigned int x = ~16;
printf("%u\n", x);
and you will get 4294967279.
But if you are just trying to get the negative of a certain number, put the - operator before it.
EDIT: To check whether a bit is 0 or 1, you have to use the bitwise AND.
In two's complement arithmetic, to negate a number (for example, to get -16 from 16) you flip each bit and add 1.
In your example, to get -16 from 16, which is represented as
00000000000000000000000000010000
you need reverse each bit. You will get
11111111111111111111111111101111
Now you must add 1 and you will get
11111111111111111111111111110000
As you can see, if you add these two values you get 0, which confirms the computation was done correctly.

determine power of number

I know that if a number is a power of two, it must satisfy (x & (x-1)) == 0. For example, take x = 16, which is 10000 in binary: x - 1 = 01111, and (x & (x-1)) == 0. For a non-power, say 7 (0111): 7 - 1 = 0110, and 7 & (7-1) = 0110, which is not equal to 0. My question: how can I determine whether a number is some power of another number k? For example, 625 is 5^4. And how can I find the power x for which k^x equals n? I am interested in using bitwise operators; I know I can find it by brute force with a standard algorithm. Thanks a lot.
I doubt you're going to find a bitwise algorithm for determining that a number is a power of 5.
In general, given y = n^x, to find x, you need to use logarithms, i.e. x = log_n(y). Most languages don't offer a log_n function, but you can achieve it with the following identity:
log_n(y) = log(y) / log(n)
If y is an integer power of n, then x will be an integer. Of course, due to the limitations of finite-precision computer arithmetic, you won't necessarily get the exact answer with the method above.
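One way to work around the precision issue is to round to the nearest integer exponent and then verify with exact integer arithmetic (a sketch; it assumes n >= 2 and that n^x does not overflow unsigned long):

#include <cmath>

bool is_power_of(unsigned long y, unsigned long n) {
    if (n < 2 || y == 0) return false;
    long x = std::lround(std::log((double)y) / std::log((double)n)); // candidate exponent
    unsigned long p = 1;
    for (long i = 0; i < x; ++i) p *= n;  // recompute n^x exactly
    return p == y;
}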
I'm afraid you can't do that with just simple bit magic. Bits are typically good for powers of 2. For powers of, say, 5 you'd probably need to operate in a base-5 system, where 1₅ = 1₁₀, 10₅ = 5₁₀, 100₅ = 25₁₀, 1000₅ = 125₁₀, 10000₅ = 625₁₀, etc. In a base-5 system you can recognize powers of 5 just as easily as powers of 2 in binary. But you'd first need to convert your numbers to that base.
For arbitrary k there is only the generic solution:
bool is_pow(unsigned long x, unsigned int base) {
assert(base >= 2);
if (x == 0) {
return false;
}
unsigned long t = x;
while (t % base == 0) {
t /= base;
}
return t == 1;
}
When k is a power of two, you can speed things up by checking whether x is a power of two and whether the number of trailing zero bits of x is divisible by log2(k).
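As a sketch of that shortcut for bases k = 2^m, using C++20's std::countr_zero (on older GCC/Clang, __builtin_ctz plays the same role):

#include <bit>

// x is a power of k = 2^m exactly when x is a power of two and its
// trailing-zero count is a multiple of m (assumes m >= 1).
bool is_power_of_pow2_base(unsigned long x, unsigned m) {
    return x != 0 && (x & (x - 1)) == 0
        && std::countr_zero(x) % m == 0;
}
// e.g. is_power_of_pow2_base(64, 2) is true (64 = 4^3);
// is_power_of_pow2_base(32, 2) is false.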
And if computational speed is important and your k is fixed, you can always use the trivial implementation:
bool is_pow5(unsigned long x) {
if (x == 5 || x == 25 || x == 125 || x == 625)
return true;
if (x < 3125)
return false;
// you get the idea
...
}

Fast divisibility tests (by 2,3,4,5,.., 16)?

What are the fastest divisibility tests? Say, given a little-endian architecture and a 32-bit signed integer: how can I calculate very quickly whether a number is divisible by 2, 3, 4, 5, ... up to 16?
WARNING: the given code is an EXAMPLE only. Every line is independent! The obvious solution using the modulo operation is slow on many processors that lack DIV hardware (such as many ARMs). Some compilers also cannot make such optimizations (say, if the divisor is a function argument or depends on something).
Divisible_by_1 = do();
Divisible_by_2 = if (!(number & 1)) do();
Divisible_by_3 = ?
Divisible_by_4 = ?
Divisible_by_5 = ?
Divisible_by_6 = ?
Divisible_by_7 = ?
Divisible_by_8 = ?
Divisible_by_9 = ?
Divisible_by_10 = ?
Divisible_by_11 = ?
Divisible_by_12 = ?
Divisible_by_13 = ?
Divisible_by_14 = ?
Divisible_by_15 = ?
Divisible_by_16 = if(!(number & 0x0000000F)) do();
and special cases:
Divisible_by_2k = if(!(number & (tk-1))) do(); //tk=2**k=(2*2*2*...) k times
In every case (including divisible by 2):
if (number % n == 0) do();
Anding with a mask of low order bits is just obfuscation, and with a modern compiler will not be any faster than writing the code in a readable fashion.
If you have to test all of the cases, you might improve performance by nesting some of the cases inside the if for another: there's no point in testing for divisibility by 4 if divisibility by 2 has already failed, for example.
It is not a bad idea AT ALL to figure out alternatives to division instructions (which includes modulo on x86/x64) because they are very slow. Slower (or even much slower) than most people realize. Those suggesting "% n" where n is a variable are giving foolish advice because it will invariably lead to the use of the division instruction. On the other hand "% c" (where c is a constant) will allow the compiler to determine the best algorithm available in its repertoire. Sometimes it will be the division instruction but a lot of the time it won't.
In this document Torbjörn Granlund shows that the ratio of clock cycles required for unsigned 32-bit mults vs. divs is 4:26 (6.5x) on Sandy Bridge and 3:45 (15x) on K10. For 64-bit, the respective ratios are 4:92 (23x) and 5:77 (15.4x).
The "L" columns denote latency, the "T" columns throughput. This has to do with the processor's ability to handle multiple instructions in parallel. Sandy Bridge can issue one 32-bit multiplication every other cycle, or one 64-bit one every cycle. For K10 the corresponding throughput is reversed. For divisions, the K10 needs to complete the entire sequence before it may begin another. I suspect it is the same for Sandy Bridge.
Using the K10 as an example it means that during the cycles required for a 32-bit division (45) the same number (45) of multiplications can be issued and the next-to-last and last one of these will complete one and two clock cycles after the division has completed. A LOT of work can be performed in 45 multiplications.
It is also interesting to note that divs have become less efficient with the evolution from K8-K9 to K10: from 39 to 45 and 71 to 77 clock cycles for 32- and 64-bit.
Granlund's page at gmplib.org and at the Royal Institute of Technology in Stockholm contain more goodies, some of which have been incorporated into the gcc compiler.
As James mentioned, let the compiler simplify it for you. If n is a constant, any decent compiler can recognize the pattern and change it to a more efficient equivalent.
For example, the code
#include <stdio.h>
int main() {
size_t x;
scanf("%zu", &x);
__asm__ volatile ("nop;nop;nop;nop;nop;");
const char* volatile foo = (x%3 == 0) ? "yes" : "no";
__asm__ volatile ("nop;nop;nop;nop;nop;");
printf("%s\n", foo);
return 0;
}
compiled with g++-4.5 -O3, the relevant part of x%3 == 0 will become
mov rcx,QWORD PTR [rbp-0x8] # rbp-0x8 = &x
mov rdx,0xaaaaaaaaaaaaaaab
mov rax,rcx
mul rdx
lea rax,"yes"
shr rdx,1
lea rdx,[rdx+rdx*2]
cmp rcx,rdx
lea rdx,"no"
cmovne rax,rdx
mov QWORD PTR [rbp-0x10],rax
which, translated back to C code, means
(hi64bit(x * 0xaaaaaaaaaaaaaaab) / 2) * 3 == x ? "yes" : "no"
// equivalatent to: x % 3 == 0 ? "yes" : "no"
no division involved here. (Note that 0xaaaaaaaaaaaaaaab == 0x20000000000000001L/3)
Edit:
The magic constant 0xaaaaaaaaaaaaaaab can be computed via http://www.hackersdelight.org/magic.htm
For divisors of the form 2^n - 1, check http://graphics.stanford.edu/~seander/bithacks.html#ModulusDivision
A bit tongue in cheek, but assuming you get the rest of the answers:
Divisible_by_6 = Divisible_by_3 && Divisible_by_2;
Divisible_by_10 = Divisible_by_5 && Divisible_by_2;
Divisible_by_12 = Divisible_by_4 && Divisible_by_3;
Divisible_by_14 = Divisible_by_7 && Divisible_by_2;
Divisible_by_15 = Divisible_by_5 && Divisible_by_3;
Assume number is unsigned (32-bits). Then the following are very fast ways to compute divisibility up to 16. (I haven't measured but the assembly code indicates so.)
bool divisible_by_2 = number % 2 == 0;
bool divisible_by_3 = number * 2863311531u <= 1431655765u;
bool divisible_by_4 = number % 4 == 0;
bool divisible_by_5 = number * 3435973837u <= 858993459u;
bool divisible_by_6 = divisible_by_2 && divisible_by_3;
bool divisible_by_7 = number * 3067833783u <= 613566756u;
bool divisible_by_8 = number % 8 == 0;
bool divisible_by_9 = number * 954437177u <= 477218588u;
bool divisible_by_10 = divisible_by_2 && divisible_by_5;
bool divisible_by_11 = number * 3123612579u <= 390451572u;
bool divisible_by_12 = divisible_by_3 && divisible_by_4;
bool divisible_by_13 = number * 3303820997u <= 330382099u;
bool divisible_by_14 = divisible_by_2 && divisible_by_7;
bool divisible_by_15 = number * 4008636143u <= 286331153u;
bool divisible_by_16 = number % 16 == 0;
Regarding divisibility by d the following rules hold:
When d is a power of 2:
As pointed out by James Kanze, you can use is_divisible_by_d = (number % d == 0). Compilers are clever enough to implement this as (number & (d - 1)) == 0 which is very efficient but obfuscated.
However, when d is not a power of 2 it looks like the obfuscations shown above are more efficient than what current compilers do. (More on that later).
When d is odd:
The technique takes the form is_divisible_by_d = number * a <= b where a and b are cleverly obtained constants. Notice that all we need is 1 multiplication and 1 comparison:
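For odd d, the constants can be derived as follows (a sketch, not the article's exact code): a is the multiplicative inverse of d modulo 2^32, and b = floor((2^32 - 1) / d):

#include <cstdint>

uint32_t mod_inverse(uint32_t d) {   // d must be odd
    uint32_t inv = d;                // correct to 3 low bits, since d * d == 1 (mod 8)
    for (int i = 0; i < 4; ++i)
        inv *= 2 - d * inv;          // Newton step: doubles the number of correct bits
    return inv;
}
// For d = 3: mod_inverse(3) == 2863311531u and 0xFFFFFFFFu / 3 == 1431655765u,
// matching the divisible_by_3 constants above.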
When d is even but not a power of 2:
Then, write d = p * q where p is a power of 2 and q is odd and use the "tongue in cheek" suggested by unpythonic, that is, is_divisible_by_d = is_divisible_by_p && is_divisible_by_q. Again, only 1 multiplication (in the calculation of is_divisible_by_q) is performed.
Many compilers (I've tested clang 5.0.0, gcc 7.3, icc 18 and msvc 19 using godbolt) replace number % d == 0 by (number / d) * d == number. They use a clever technique (see references in Olof Forshell's answer) to replace the division by a multiplication and a bit shift. They end up doing 2 multiplications. In contrast the techniques above perform only 1 multiplication.
Update 01-Oct-2018
Looks like the algorithm above is coming to GCC soon (already in trunk):
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82853
The GCC implementation seems even more efficient. Indeed, the implementation above has three parts: 1) divisibility by the divisor's even part; 2) divisibility by the divisor's odd part; 3) && to connect the results of the two previous steps. By using an assembler instruction which is not efficiently available in standard C++ (ror), GCC wraps up the three parts into a single one which is very similar to that of divisibility by the odd part. Great stuff! Having this implementation available, it's better (for both clarity and performance) to fall back to % at all times.
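A sketch of that rotate trick in C++20, for d = 6 = 3 * 2 (std::rotr being the portable spelling of ror; the constants are the inverse of 3 mod 2^32 and 0xFFFFFFFF / 6):

#include <bit>
#include <cstdint>

bool divisible_by_6(uint32_t x) {
    // multiply by the inverse of the odd part (3), rotate the factor-of-2
    // bit down out of the low end, and compare against 0xFFFFFFFF / 6
    return std::rotr(x * 2863311531u, 1) <= 715827882u;
}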
Update 05-May-2020
My articles on the subject have been published:
Quick Modular Calculations (Part 1), Overload Journal 154, December 2019, pages 11-15.
Quick Modular Calculations (Part 2), Overload Journal 155, February 2020, pages 14-17.
Quick Modular Calculations (Part 3), Overload Journal 156, April 2020, pages 10-13.
First of all, I remind you that a number in the form bn...b2b1b0 in binary has value:
number = bn*2^n+...+b2*4+b1*2+b0
Now, when you say number%3, you have:
number%3 =3= bn*(2^n % 3)+...+b2*1+b1*2+b0
(I used =3= to indicate congruence modulo 3). Note also that b1*2 =3= -b1*1
Now I will write all the 16 divisions using + and - and possibly multiplication (note that multiplication could be written as shift or sum of same value shifted to different locations. For example 5*x means x+(x<<2) in which you compute x once only)
Let's call the number n and let's say Divisible_by_i is a boolean value. As an intermediate value, imagine Congruence_by_i is a value congruent to n modulo i.
Also, let's say n0 means bit zero of n, n1 means bit 1, etc.; that is
ni = (n >> i) & 1;
Congruence_by_1 = 0
Congruence_by_2 = n&0x1
Congruence_by_3 = n0-n1+n2-n3+n4-n5+n6-n7+n8-n9+n10-n11+n12-n13+n14-n15+n16-n17+n18-n19+n20-n21+n22-n23+n24-n25+n26-n27+n28-n29+n30-n31
Congruence_by_4 = n&0x3
Congruence_by_5 = n0+2*n1-n2-2*n3+n4+2*n5-n6-2*n7+n8+2*n9-n10-2*n11+n12+2*n13-n14-2*n15+n16+2*n17-n18-2*n19+n20+2*n21-n22-2*n23+n24+2*n25-n26-2*n27+n28+2*n29-n30-2*n31
Congruence_by_7 = n0+2*n1+4*n2+n3+2*n4+4*n5+n6+2*n7+4*n8+n9+2*n10+4*n11+n12+2*n13+4*n14+n15+2*n16+4*n17+n18+2*n19+4*n20+n21+2*n22+4*n23+n24+2*n25+4*n26+n27+2*n28+4*n29+n30+2*n31
Congruence_by_8 = n&0x7
Congruence_by_9 = n0+2*n1+4*n2-n3-2*n4-4*n5+n6+2*n7+4*n8-n9-2*n10-4*n11+n12+2*n13+4*n14-n15-2*n16-4*n17+n18+2*n19+4*n20-n21-2*n22-4*n23+n24+2*n25+4*n26-n27-2*n28-4*n29+n30+2*n31
Congruence_by_11 = n0+2*n1+4*n2+8*n3+5*n4-n5-2*n6-4*n7-8*n8-5*n9+n10+2*n11+4*n12+8*n13+5*n14-n15-2*n16-4*n17-8*n18-5*n19+n20+2*n21+4*n22+8*n23+5*n24-n25-2*n26-4*n27-8*n28-5*n29+n30+2*n31
Congruence_by_13 = n0+2*n1+4*n2+8*n3+3*n4+6*n5-n6-2*n7-4*n8-8*n9-3*n10-6*n11+n12+2*n13+4*n14+8*n15+3*n16+6*n17-n18-2*n19-4*n20-8*n21-3*n22-6*n23+n24+2*n25+4*n26+8*n27+3*n28+6*n29-n30-2*n31
Congruence_by_16 = n&0xF
Or when factorized:
Congruence_by_1 = 0
Congruence_by_2 = n&0x1
Congruence_by_3 = (n0+n2+n4+n6+n8+n10+n12+n14+n16+n18+n20+n22+n24+n26+n28+n30)-(n1+n3+n5+n7+n9+n11+n13+n15+n17+n19+n21+n23+n25+n27+n29+n31)
Congruence_by_4 = n&0x3
Congruence_by_5 = n0+n4+n8+n12+n16+n20+n24+n28-(n2+n6+n10+n14+n18+n22+n26+n30)+2*(n1+n5+n9+n13+n17+n21+n25+n29-(n3+n7+n11+n15+n19+n23+n27+n31))
Congruence_by_7 = n0+n3+n6+n9+n12+n15+n18+n21+n24+n27+n30+2*(n1+n4+n7+n10+n13+n16+n19+n22+n25+n28+n31)+4*(n2+n5+n8+n11+n14+n17+n20+n23+n26+n29)
Congruence_by_8 = n&0x7
Congruence_by_9 = n0+n6+n12+n18+n24+n30-(n3+n9+n15+n21+n27)+2*(n1+n7+n13+n19+n25+n31-(n4+n10+n16+n22+n28))+4*(n2+n8+n14+n20+n26-(n5+n11+n17+n23+n29))
// and so on
If these values end up negative, add i to them repeatedly until they become non-negative.
Now what you should do is recursively feed these values through the same process we just did until Congruence_by_i becomes less than i (and obviously >= 0). This is similar to what we do when we want to find the remainder of a number by 3 or 9, remember? Sum up the digits; if the result has more than one digit, sum up its digits again until you get only one digit.
Now for i = 1, 2, 3, 4, 5, 7, 8, 9, 11, 13, 16:
Divisible_by_i = (Congruence_by_i == 0);
And for the rest:
Divisible_by_6 = Divisible_by_3 && Divisible_by_2;
Divisible_by_10 = Divisible_by_5 && Divisible_by_2;
Divisible_by_12 = Divisible_by_4 && Divisible_by_3;
Divisible_by_14 = Divisible_by_7 && Divisible_by_2;
Divisible_by_15 = Divisible_by_5 && Divisible_by_3;
Edit: Note that some of the additions could be avoided from the very beginning. For example n0+2*n1+4*n2 is the same as n&0x7; similarly n3+2*n4+4*n5 is (n>>3)&0x7, so with each formula you don't have to extract each bit individually. I wrote it like that for the sake of clarity and similarity of operation. To optimize each formula, work on it yourself: group operands and factorize operations.
The LCM of these numbers seems to be 720720. It's quite small, so you can perform a single modulus operation and use the remainder as an index into a precomputed LUT.
You should just use (i % N) == 0 as your test.
My compiler (a fairly old version of gcc) generated good code for all the cases I tried.
Where bit tests were appropriate it did that. Where N was a constant it didn't generate the obvious "divide" for any case, it always used some "trick".
Just let the compiler generate the code for you, it will almost certainly know more about the architecture of the machine than you do :) And these are easy optimisations where you are unlikely to think up something better than the compiler does.
It's an interesting question though. I can't list the tricks used by the compiler for each constant as I have to compile on a different computer.. But I'll update this reply later on if nobody beats me to it :)
This probably won't help you in code, but there's a neat trick which can help do this in your head in some cases:
For divide by 3: For a number represented in decimal, you can sum all the digits, and check if the sum is divisible by 3.
Example: 12345 => 1+2+3+4+5 = 15 => 1+5 = 6, which is divisible by 3 (3 x 4115 = 12345).
More interestingly the same technique works for all factors of X-1, where X is the base in which the number is represented. So for decimal number, you can check divide by 3 or 9. For hex, you can check divide by 3,5 or 15. And for octal numbers, you can check divide by 7.
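The base-16 variant is straightforward to put in code, since folding hex digits preserves the value mod 15 (a sketch; it relies on 16 == 1 (mod 15)):

bool divisible_by_15(unsigned x) {
    while (x > 0xF)
        x = (x & 0xF) + (x >> 4);   // fold one hex digit into the running sum
    return x == 15 || x == 0;       // ends at 0 only when the input was 0
}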
In a previous question, I showed a fast algorithm to check in base N for divisors that are factors of N-1. Base transformations between different powers of 2 are trivial; that's just bit grouping.
Therefore, checking for 3 is easy in base 4; checking for 5 is easy in base 16, and checking for 7 (and 9) is easy in base 64.
Non-prime divisors are trivial, so only 11 and 13 are hard cases. For 11, you could use base 1024, but at that point it's not really efficient for small integers.
A method that can help with modulo reduction of arbitrary integer values uses bit-slicing and popcount. (Note that + binds tighter than << in C, so the shifted terms need parentheses.)
mod3 = pop(x & 0x55555555) + (pop(x & 0xaaaaaaaa) << 1); // <- one term is shared!
mod5 = pop(x & 0x99999999) + (pop(x & 0xaaaaaaaa) << 1) + (pop(x & 0x44444444) << 2);
mod7 = pop(x & 0x49249249) + (pop(x & 0x92492492) << 1) + (pop(x & 0x24924924) << 2);
modB = pop(x & 0x5d1745d1) + (pop(x & 0xba2e8ba2) << 1) +
       (pop(x & 0x294a5294) << 2) + (pop(x & 0x0681a068) << 3);
modD = pop(x & 0x91b91b91) + (pop(x & 0xb2cb2cb2) << 1) +
       (pop(x & 0x64a64a64) << 2) + (pop(x & 0xc85c85c8) << 3);
The maximum values for these variables are 48, 80, 73, 168 and 203, which all fit into 8-bit variables. The second round can be carried out in parallel (or some LUT method can be applied):
mod3 mod3 mod5 mod5 mod5 mod7 mod7 mod7 modB modB modB modB modD modD modD modD
mask 0x55 0xaa 0x99 0xaa 0x44 0x49 0x92 0x24 0xd1 0xa2 0x94 0x68 0x91 0xb2 0x64 0xc8
shift *1 *2 *1 *2 *4 *1 *2 *4 *1 *2 *4 *8 *1 *2 *4 *8
sum <-------> <------------> <-----------> <-----------------> <----------------->
You can replace division by a non-power-of-two constant by a multiplication, essentially multiplying by the reciprocal of your divisor. The details to get the exact result by this method are complicated.
Hacker's Delight discusses this at length in chapter 10 (unfortunately not available online).
From the quotient you can get the modulus by another multiplication and a subtraction.
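A sketch for d = 3 with the same reciprocal constant gcc itself uses (q = floor(x * 0xAAAAAAAB / 2^33) equals x / 3 for every 32-bit x):

#include <cstdint>

uint32_t div3(uint32_t x) {
    return (uint32_t)(((uint64_t)x * 0xAAAAAAABull) >> 33);
}
uint32_t mod3(uint32_t x) {
    return x - 3 * div3(x);   // modulus recovered with one multiply and a subtract
}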
One thing to consider: since you only care about divisibility up to 16, you really only need to check divisibility by the primes up to 16. These are 2, 3, 5, 7, 11, and 13.
Divide your number by each of the primes, keeping track with a boolean (such as div2 = true). The numbers two and three are special cases. If div3 is true, try dividing by 3 again, setting div9. Two and its powers are very simple (note: '&' is one of the fastest things a processor can do):
if n & 1 == 0:
div2 = true
if n & 3 == 0:
div4 = true
if n & 7 == 0:
div8 = true
if n & 15 == 0:
div16 = true
You now have the booleans div2, div3, div4, div5, div7, div8, div9, div11, div13, and div16. All other numbers are combinations; for instance div6 is the same as (div2 && div3).
So, you only need to do either 5 or 6 actual divisions (6 only if your number is divisible by 3).
For myself, I would probably use bits in a single register for my booleans; for instance bit_0 means div2. I can then use masks:
if (flags & (div2+div3)) == (div2 + div3): do_6()
Note that div2+div3 can be a precomputed constant. If div2 is bit0 and div3 is bit1, then div2+div3 == 3. This makes the above 'if' optimize to:
if (flags & 3) == 3: do_6()
So now... mod without a divide:
def mod(n,m):
i = 0
while m <= n:   # <=, not <, so exact multiples of m reduce all the way to 0
m <<= 1
i += 1
while i > 0:
m >>= 1
if m <= n: n -= m
i -= 1
return n
div3 = mod(n,3) == 0
...
btw: the worst case for the above code is 31 times through either loop for a 32-bit number
FYI: Just looked at Msalter's post, above. His technique can be used instead of mod(...) for some of the primes.
Fast tests for divisibility depend heavily on the base in which the number is represented. When the base is 2, I think you can only do "fast tests" for divisibility by powers of 2. A binary number is divisible by 2^n iff the last n binary digits of that number are 0. For other tests I don't think you can generally find anything faster than %.
A bit of evil, obfuscated bit-twiddling can get you divisbility by 15.
For a 32-bit unsigned number:
unsigned int mod_15ish(unsigned int x) {
    // returns a number between 0 and 21 that is either x % 15
    // or 15 + (x % 15), and returns 0 only for x == 0
    x = (x & 0xF0F0F0F) + ((x >> 4) & 0xF0F0F0F);
    x = (x & 0xFF00FF) + ((x >> 8) & 0xFF00FF);
    x = (x & 0xFFFF) + ((x >> 16) & 0xFFFF);
    // *1
    x = (x & 0xF) + ((x >> 4) & 0xF);
    return x;
}
bool Divisible_by_15(unsigned int x) {
    return (x == 0) || (mod_15ish(x) == 15);
}
You can build similar divisibility routines for 3 and 5 based on mod_15ish.
If you have 64-bit unsigned ints to deal with, extend each constant above the *1 line in the obvious way, and add a line above the *1 line to do a right shift by 32 bits with a mask of 0xFFFFFFFF. (The last two lines can stay the same) mod_15ish then obeys the same basic contract, but the return value is now between 0 and 31. (so what's maintained is that x % 15 == mod_15ish(x) % 15)
Here are some tips I haven't see anyone else suggest yet:
One idea is to use a switch statement, or precompute some array. Then, any decent optimizer can simply index each case directly. For example:
// tests for divisibility by 2, 4, and 8 (the divisors of the modulus 8)
switch (n % 8)
{
    case 0: do(2); do(4); do(8); break;
    case 2:
    case 6: do(2); break;
    case 4: do(2); do(4); break;
    default: break; // odd remainder: not divisible by 2, 4, or 8
}
(To catch 3, 5, 7, etc. the same way, switch on n modulo a constant they all divide, e.g. n % 720720, at which point a precomputed table is more sensible than a literal switch.)
Your application is a bit ambiguous, but you may only need to check the primes less than n = 16. This is because every composite number in the range is a product of these primes. So for n = 16, you might be able to get away with only checking 2, 3, 5, 7, 11, and 13 somehow. Just a thought.

Checking whether a number is positive or negative using bitwise operators

I can check whether a number is odd or even using bitwise operators. Can I check whether a number is positive/zero/negative without using any conditional statements/operators like if/ternary etc.?
Can the same be done using bitwise operators and some trick in C or in C++?
Can I check whether a number is positive/zero/negative without using any conditional statements/operators like if/ternary etc.
Of course:
bool is_positive = number > 0;
bool is_negative = number < 0;
bool is_zero = number == 0;
If the high bit is set on a signed integer (byte, long, etc., but not a floating point number), that number is negative.
int x = -2300; // assuming a 32-bit int
if ((x & 0x80000000) != 0)
{
// number is negative
}
ADDED:
You said that you don't want to use any conditionals. I suppose you could do this:
int isNegative = (x & 0x80000000);
And at some later time you can test it with if (isNegative).
Or, you could use signbit() and the work's done for you.
I'm assuming that under the hood, the math.h implementation is an efficient bitwise check (possibly solving your original goal).
Reference: http://en.cppreference.com/w/cpp/numeric/math/signbit
There is a detailed discussion on the Bit Twiddling Hacks page.
int v; // we want to find the sign of v
int sign; // the result goes here
// CHAR_BIT is the number of bits per byte (normally 8).
sign = -(v < 0); // if v < 0 then -1, else 0.
// or, to avoid branching on CPUs with flag registers (IA32):
sign = -(int)((unsigned int)((int)v) >> (sizeof(int) * CHAR_BIT - 1));
// or, for one less instruction (but not portable):
sign = v >> (sizeof(int) * CHAR_BIT - 1);
// The last expression above evaluates to sign = v >> 31 for 32-bit integers.
// This is one operation faster than the obvious way, sign = -(v < 0). This
// trick works because when signed integers are shifted right, the value of the
// far left bit is copied to the other bits. The far left bit is 1 when the value
// is negative and 0 otherwise; all 1 bits gives -1. Unfortunately, this behavior
// is architecture-specific.
// Alternatively, if you prefer the result be either -1 or +1, then use:
sign = +1 | (v >> (sizeof(int) * CHAR_BIT - 1)); // if v < 0 then -1, else +1
// On the other hand, if you prefer the result be either -1, 0, or +1, then use:
sign = (v != 0) | -(int)((unsigned int)((int)v) >> (sizeof(int) * CHAR_BIT - 1));
// Or, for more speed but less portability:
sign = (v != 0) | (v >> (sizeof(int) * CHAR_BIT - 1)); // -1, 0, or +1
// Or, for portability, brevity, and (perhaps) speed:
sign = (v > 0) - (v < 0); // -1, 0, or +1
// If instead you want to know if something is non-negative, resulting in +1
// or else 0, then use:
sign = 1 ^ ((unsigned int)v >> (sizeof(int) * CHAR_BIT - 1)); // if v < 0 then 0, else 1
// Caveat: On March 7, 2003, Angus Duggan pointed out that the 1989 ANSI C
// specification leaves the result of signed right-shift implementation-defined,
// so on some systems this hack might not work. For greater portability, Toby
// Speight suggested on September 28, 2005 that CHAR_BIT be used here and
// throughout rather than assuming bytes were 8 bits long. Angus recommended
// the more portable versions above, involving casting on March 4, 2006.
// Rohit Garg suggested the version for non-negative integers on September 12, 2009.
#include <stdio.h>
int main()
{
    int n; // assuming int to be 32 bits long
    scanf("%d", &n);
    // shift right 31 times so that the MSB comes to the LSB's position,
    // then AND it with 0x1
    if ((n >> 31) & 0x1) {
        printf("negative number\n");
    } else {
        printf("positive number\n");
    }
    return 0;
}
Signed integers and floating-point numbers normally use the most significant bit to store the sign, so if you know the size you can extract the information from the most significant bit.
There is generally little benefit in doing this, since some sort of comparison will need to be made to use the information, and it is just as easy for a processor to test whether something is negative as to test whether it is not zero. In fact, on ARM processors, checking the most significant bit will normally be MORE expensive than checking whether the value is negative up front.
It is quite simple:
return ((!!x) | (x >> 31));
It returns
1 for a positive number,
-1 for a negative number, and
0 for zero.
(This assumes a 32-bit int and an arithmetic right shift of negative values.)
This can not be done in a portable way with bit operations in C. The representations for signed integer types that the standard allows can be much weirder than you might suspect. In particular the value with sign bit on and otherwise zero need not be a permissible value for the signed type nor the unsigned type, but a so-called trap representation for both types.
All computations with bit operators that you can thus do might have a result that leads to undefined behavior.
In any case as some of the other answers suggest, this is not really necessary and comparison with < or > should suffice in any practical context, is more efficient, easier to read... so just do it that way.
// if (x < 0) return -1
// else if (x == 0) return 0
// else return 1
int sign(int x) {
// x_is_not_zero = 0 if x is 0 else x_is_not_zero = 1
int x_is_not_zero = (( x | (~x + 1)) >> 31) & 0x1;
    return ((x & (0x01 << 31)) >> 31) | x_is_not_zero; // for negative x, the | with -1 makes the second operand irrelevant
}
Here's exactly what you want!
Here is an update related to C++11 for this old question. It is also worth considering std::signbit.
On Compiler Explorer using gcc 7.3 64bit with -O3 optimization, this code
bool s1(double d)
{
return d < 0.0;
}
generates
s1(double):
pxor xmm1, xmm1
ucomisd xmm1, xmm0
seta al
ret
And this code
bool s2(double d)
{
return std::signbit(d);
}
generates
s2(double):
movmskpd eax, xmm0
and eax, 1
ret
You would need to profile to ensure that there is any speed difference, but the signbit version does use one less opcode. Note also that the two are not strictly equivalent: std::signbit(-0.0) is true, while -0.0 < 0.0 is false, and they can differ for NaNs as well.
When you're sure about the size of an integer (assuming a 16-bit int):
bool is_negative = (unsigned) signed_int_value >> 15;
When you are unsure of the size of integers:
bool is_negative = (unsigned) signed_int_value >> (sizeof(int) * 8 - 1); // where 8 is CHAR_BIT, the bits per byte
The cast to unsigned keeps the right shift well defined; right-shifting a negative signed value is implementation-defined.
if ((num >> (sizeof(int) * 8 - 1)) == 0)
    // number is positive (or zero)
else
    // number is negative
If the shifted value is 0 the number is non-negative; otherwise it is negative (this assumes the common arithmetic right shift of signed values).
A simpler way to find out whether a number is positive or negative:
Let the number be x.
Check whether (x * (-1)) > x. If true, x is negative; otherwise it is positive (or zero). (Beware that the negation overflows for x == INT_MIN.)
You can differentiate between negative/non-negative by looking at the most significant bit.
In all representations for signed integers, that bit will be set to 1 if the number is negative.
There is no test to differentiate between zero and positive, except for a direct test against 0.
To test for negative, you could use
#define IS_NEGATIVE(x) ((x) & (1U << ((sizeof(x)*CHAR_BIT)-1)))
Suppose your number is a = 10 (positive). Shifting a right by a bits gives zero:
10 >> 10 == 0
So you can check whether the number is positive; in the case a = -10 (negative):
-10 >> -10 == -1
So you can combine those in an if:
if (!(a >> a))
    print number is positive
else
    print no. is negative
(Beware: shifting by a negative count, or by at least the width of the type, is undefined behavior in C and C++, so this only appears to work.)
#include <stdio.h>
int checksign(int n)
{
    // for a 32-bit int the mask test is redundant with n >= 0;
    // 1u avoids the signed overflow of 1 << 31
    return (n >= 0 && (n & (1u << 31)) == 0);
}
int main()
{
    int num = 11;
    if (checksign(num))
    {
        printf("non-negative number");
    }
    else
    {
        printf("negative number");
    }
    return 0;
}
Without if:
string pole[2] = {"+", "-"};
long long x;
while (true) {
    cin >> x;
    cout << pole[x / -((x * (-1)) - 1)] << "\n\n";
}
(not working for 0; note that x = -1 also divides by zero, and x = -2 indexes out of bounds)
if (n & (1u << 31))
{
    printf("Negative number");
}
else {
    printf("positive number");
}
This checks the most significant bit of n: the & isolates it, and if that bit is set the number is negative; otherwise it is positive (or zero).