I was trying to solve a Counter Game problem:
"Louise and Richard play a game. They have a counter set to N. Louise gets the first turn and the turns alternate thereafter. In the game, they perform the following operations.
If N is not a power of 2, they reduce the counter by the largest power of 2 less than N.
If N is a power of 2, they reduce the counter by half of N.
The resultant value is the new N which is again used for subsequent operations.
The game ends when the counter reduces to 1, i.e., N == 1, and the last person to make a valid move wins.
Given N, your task is to find the winner of the game."
To solve the question, I used bit manipulation, and my solution was accepted:
#include <cstdio>

int main() {
    unsigned long long n, tmp;
    int cnt, t;
    scanf("%d", &t);
    while (t--) {
        scanf("%llu", &n);
        tmp = n;
        // Count the set bits of n (Kernighan's trick).
        cnt = 0;
        while (tmp) {
            tmp &= tmp - 1;
            cnt++;
        }
        // A number with k set bits takes k-1 "subtract the largest
        // power of 2" moves to become a power of 2...
        cnt--;
        // ...plus one halving per trailing zero bit to reach 1.
        while ((n & 1) == 0) {
            n >>= 1;
            cnt++;
        }
        // Louise moves first, so she wins iff the total move count is odd.
        if (cnt % 2 == 0)
            printf("Richard\n");
        else
            printf("Louise\n");
    }
    return 0;
}
However, while coding I wrote while(n&1==false) instead of while((n&1)==false), and could not get the desired result. Writing while(!(n&1)) gave the expected result, but some sources I read online (I forget which) called !a bad practice compared with a==false. I know the difference between while(!n&1) and while(!(n&1)), but I did not know the difference between while(n&1==false) and while((n&1)==false). I have learnt that they are not the same; may I ask what the distinction is?
This is considered by many to be a design mistake in C.
While it is natural for logical AND to have lower precedence than equality comparison, the same is much more questionable for bitwise AND, because bitwise operations are conceptually closer to arithmetic operations.
The same design error was inherited by C++ for backward compatibility.
A good rule is to always parenthesize bitwise operations to avoid surprises.
As you can see here, the precedence of == is above the precedence of &.
Therefore n&1==false is interpreted as n&(1==false), not (n&1)==false, so you need the parentheses.
== has higher precedence in C++ than & (source).
Thus while(n&1==false) is treated as while (n & (1 == false)), which is while (n & 0): that condition is always zero, so the loop body never executes.
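A minimal demonstration of the difference (most compilers will even warn about the unparenthesized form, e.g. GCC with -Wparentheses):

#include <iostream>

int main() {
    unsigned n = 6; // binary 110: even, so (n & 1) == 0
    // 1 == false evaluates to 0, so n & 1 == false is really n & 0,
    // which is always 0 regardless of n.
    std::cout << (n & 1 == false) << '\n';   // prints 0
    std::cout << ((n & 1) == false) << '\n'; // prints 1: n is even
}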
#include <iostream>
using namespace std;

int main() {
    int n = 5;
    if (n & 1)
        cout << "odd";
    else
        cout << "even";
}
How are we getting "odd" or "even" using the "&" operator?
I thought it should have been like this:
if (n % 2 != 0) to check whether the number is even or odd.
Can anyone explain what that code is doing?
What "n & 1" does here is that it returns the bitwise AND between n and 1. In other words, this is checking whether the last bit of n is equal to 1.
If it is, then n is odd, because the least significant bit of all binary representations of odd numbers is 1. If it's not, then n is even, because the least significant bit of all binary representations of even numbers is 0.
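For instance, a quick self-contained illustration:

#include <iostream>

int main() {
    int values[] = {5, 6, 12, 13};
    for (int n : values) {
        // n & 1 isolates the least significant bit:
        // 1 for odd numbers, 0 for even numbers.
        std::cout << n << " is " << ((n & 1) ? "odd" : "even") << '\n';
    }
}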
I have an integer constant, let's say:
expr x = ctx.int_const("x");
What I'm trying to do is apply random constraints on the individual bits of x. However, it turns out you cannot use bit-wise operations with integer sorts, only bit-vectors. My initial approach before realizing this was this:
for (int i = 0; i < 32; i++) {
    unsigned mask = 1u << i; // 1u: shifting a signed 1 into bit 31 is not safe
    if (rand() % 2)
        solver.add((x & mask) == 0);
    else
        solver.add((x & mask) != 0);
}
This of course does not work, as Z3 throws an exception.
After a bit of digging through the API, I found the Z3_mk_int2bv function, and figured I'd give that a try:
for (int i = 0; i < 32; i++) {
    if (rand() % 2)
        solver.add(z3::expr(ctx(), Z3_mk_int2bv(ctx(), 32, v())).extract(i, i) == ctx().bv_val(0, 1));
    else
        solver.add(z3::expr(ctx(), Z3_mk_int2bv(ctx(), 32, v())).extract(i, i) != ctx().bv_val(0, 1));
}
While no exception gets thrown on the above solver add calls, the solving time suddenly exploded, so much so that I have yet to see it finish. Adding similar expressions over bit-vectors does not take a major toll on the solver, with the solving time being less than a second as far as I can tell.
I'm wondering what it is about the above expression that could cause the solver performance to degrade so badly, and whether there's a better approach?
int2bv is expensive. There are many reasons for this, but the bottom line is that the solver now has to negotiate between the theory of integers and the theory of bit-vectors, and the heuristics probably don't help much. Notice that to do a proper conversion the solver has to perform repeated divisions, which is quite costly. Furthermore, talking about the bits of a mathematical integer doesn't make much sense to start with: What if it's a negative number? Do you assume some sort of infinite-width 2's-complement representation? Or is it some other mapping? All of this makes it harder to reason with such conversions, and for a long time int2bv was uninterpreted in z3 for this and similar reasons. You can find many posts regarding this on Stack Overflow; for instance, see here: Z3 : Questions About Z3 int2bv?
Your best bet would be to simply use bit-vectors to start with. If you're reasoning about machine arithmetic, why not model everything with bit-vectors in the first place?
If you're stuck with the Int type, my recommendation would be to stick to the mod function, making sure the second argument is a constant. This might avoid some of the complexity, but without looking at the actual problem it's hard to opine any further.
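For reference, here is a minimal sketch of the bit-vector formulation this answer recommends (variable names are mine, not from the question):

#include <z3++.h>
#include <cstdlib>
#include <iostream>

int main() {
    z3::context ctx;
    z3::solver s(ctx);

    // Model x as a 32-bit bit-vector from the start, instead of an Int.
    z3::expr x = ctx.bv_const("x", 32);

    for (unsigned i = 0; i < 32; i++) {
        // extract(i, i) is the 1-bit slice at position i.
        z3::expr bit = x.extract(i, i);
        if (rand() % 2)
            s.add(bit == ctx.bv_val(0, 1));
        else
            s.add(bit != ctx.bv_val(0, 1));
    }

    if (s.check() == z3::sat)
        std::cout << s.get_model() << "\n";
}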
How do I calculate 2 to the power 10000000 without crashing the program? What should the data type be for an extremely big integer in C/C++?
For the very specific value 2 raised to the power of 1000, a double is sufficient.
#include <stdio.h>
#include <math.h>

int main(int argc, const char *argv[]) {
    printf("%f\n", pow(2., 1000));
    return 0;
}
In general, however, you will need to implement an arbitrary-precision multiplication algorithm to compute numbers that big (or use a library that provides one).
C++ has no predefined standard functions for this kind of computation.
If you want to implement your own version as an exercise, my suggestion is to use numbers in base 10000. The digits are small enough that single-digit multiplication won't overflow, and it's very easy and fast to translate the result into decimal at the end, because you can map each base-10000 digit to a group of four decimal digits without having to implement division and modulo too.
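As a rough sketch of that representation (my own illustration, not part of the original answer), here is how multiplying a base-10000 number by a small factor and printing the result in decimal could look:

#include <cstdio>
#include <vector>

// Each element is one base-10000 "digit", least significant first.
void mul_small(std::vector<int>& d, int m) {
    long long carry = 0;
    for (size_t i = 0; i < d.size(); i++) {
        long long t = (long long)d[i] * m + carry;
        d[i] = (int)(t % 10000);
        carry = t / 10000;
    }
    while (carry) {
        d.push_back((int)(carry % 10000));
        carry /= 10000;
    }
}

int main() {
    std::vector<int> d{1};           // the number 1
    for (int i = 0; i < 1000; i++)   // 1000 doublings -> 2^1000
        mul_small(d, 2);
    printf("%d", d.back());          // top group: no zero padding
    for (int i = (int)d.size() - 2; i >= 0; i--)
        printf("%04d", d[i]);        // each group is exactly 4 decimal digits
    printf("\n");
}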
Also, to compute such a big power (10,000,000) you will need to implement exponentiation by squaring, i.e.
BigNum pow(BigNum a, int b) {
    if (b == 0) {
        return 1;
    } else if (b & 1) {
        return a * pow(a, b - 1);
    } else {
        BigNum x = pow(a, b / 2);
        return x * x;
    }
}
This will let you compute pow(a, b) with O(log b) multiplications instead of O(b).
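To make the recursion concrete, here is the same scheme with a built-in type (my own illustration; it is only valid while the result fits in 64 bits, whereas BigNum above stands for your arbitrary-precision type):

#include <cstdint>
#include <cstdio>

uint64_t ipow(uint64_t a, unsigned b) {
    if (b == 0)
        return 1;
    if (b & 1)
        return a * ipow(a, b - 1); // odd exponent: peel off one factor
    uint64_t x = ipow(a, b / 2);   // even exponent: square the half power
    return x * x;
}

int main() {
    printf("%llu\n", (unsigned long long)ipow(2, 20)); // 1048576
}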
Store the digits in an int array where each location of the array holds one decimal digit, then multiply repeatedly. That way you will get the answer without overflowing any built-in type.
You need 302 locations for 2^1000, since the result has 302 decimal digits. The multiplication is simply the long multiplication we do in grade school; you just have to implement it in code.
A little bit of code:

int d[400] = {0}; // d[0] is the least significant decimal digit
d[0] = 1;
int carry = 0;
int temp = 0;
for (int j = 0; j < 1000; j++) { // double 1000 times to get 2^1000
    carry = 0;
    for (int i = 0; i < 400; i++) {
        temp = d[i] * 2 + carry;
        d[i] = temp % 10;
        carry = temp / 10;
    }
}
Then print d[0..399] in reverse order, trimming the leading zeroes.
Unlike Python or Java, C++ does not handle such big numbers by itself, nor does it have a dedicated data type for them. You need to use an array to store the digits. These kinds of questions are common on competitive programming sites. Here is a detailed tutorial:
Large Number in C/C++
You can also learn about bit manipulation; it comes in handy when you multiply by 2.
Please read this before using pow(2., 1000) as mentioned in another answer:
c++ pow(2,1000) is normaly to big for double, but it's working. why?
As #6502 clearly puts it in his answer, it can be used for this specific case of 2^1000. I missed that; be careful in case you are going to use this on a competitive programming site.
Is there any difference in the performance of the following code snippets? Which one performs best and why?
int i = 1000000000;
while(i != 0) { i--; }
or
int i = 1000000000;
while(i) { i--; }
or
int i = 1000000000;
while(i > 0) { i--; }
I see a lot of people use the first example and wonder why. Easier to read?
They are all the same in this context, and any decent compiler will generate equivalent code for all three.
In any case, trying to hand-optimize trivial things like this (integer comparisons) is pointless. Your compiler will figure it out and do a much better job during code generation than you ever could. So stop trying; just write the most readable code you can and trust the compiler. None of this makes any performance difference anyway.
Is there any difference in the performance of the following code snippets?
No.
The first two are equivalent, and all three can be optimized to exactly the same assembly.
I see a lot of people use the first example and wonder why. Easier to read?
It requires the reader to know fewer language rules than the second one. In particular, the second program requires knowing that the conditional expression is converted to bool, and that the conversion from int gives the same result as an inequality comparison with zero.
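A minimal self-contained illustration of that conversion rule:

#include <iostream>

int main() {
    int i = 3;
    // In a condition, an int is implicitly converted to bool:
    // nonzero becomes true, zero becomes false, same as i != 0.
    std::cout << std::boolalpha
              << static_cast<bool>(i) << ' ' << (i != 0) << '\n'; // true true
    i = 0;
    std::cout << static_cast<bool>(i) << ' ' << (i != 0) << '\n'; // false false
}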
Note that if i were replaced with a floating-point number, or if the decrement were made more complex (for example, decrementing by 2), then the third option would be the easiest to prove correct. With integers and a single decrement, there is no difference.
I have encountered many occasions where I have to choose between the division operator (divide by 2) and the right shift operator (>>), but I tend to use the division operator, assuming that using bitwise operators would make my code less readable. Is my assumption true?
Is it good practice to use the left and right shift operators in production code instead of multiplying or dividing by 2?
Using the bitwise operators for multiplication or division by 2 is utter madness.
The behaviour of << is undefined when the left operand is a negative signed value.
<< and >> have lower precedence than addition and subtraction, so they mess up your expressions.
It's unnecessarily obfuscating.
Trust a modern compiler to optimise appropriately.
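Shifting is not even equivalent to division for signed operands, as a quick illustration shows (on a typical two's-complement platform where >> of a negative value is an arithmetic shift):

#include <iostream>

int main() {
    int a = -3;
    // Since C++11, integer division truncates toward zero: -3 / 2 == -1.
    // Right-shifting a negative value is implementation-defined before
    // C++20; on common platforms it floors instead: -3 >> 1 == -2.
    std::cout << a / 2 << '\n';    // -1
    std::cout << (a >> 1) << '\n'; // typically -2
}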
Integer division by a constant is routinely optimized to bit shifts (for powers of two), multiplication by the "integral reciprocal", and all kinds of other tricks, so performance should not be a concern.
What matters is to clearly express intent. If you are operating on integers "as numbers" and you divide by something that just happens to be a power of 2, use the division operator.
int mean(int a, int b) {
    return (a + b) / 2; // yes, a+b may overflow, blah blah
}
If instead you are operating on integers as bit fields (for example, you are unpacking a nibble and you need to right-shift by 4 to move it into the "low" position, or you need to explicitly set some bit), then use the bitwise operators.
void hex_byte(unsigned char byte, char *out) {
    out[0] = byte >> 4;  // high nibble
    out[1] = byte & 0xf; // low nibble
}

unsigned set_bit(unsigned in, unsigned n) {
    return in | (1u << n); // 1u: avoid shifting a signed 1 into the sign bit
}
In general, most often you'll use division on signed integers, bitwise operators on unsigned ones.