What is the use of the & (AND) operator in the C language? [duplicate]

This question already has answers here:
Understanding the bitwise AND Operator
(4 answers)
Closed 8 years ago.
#include <stdio.h>
#include <math.h>
int main()
{
    int n, i, j;
    long long p, sum = 0, count;
    scanf("%d", &n);
    long long a[n];
    for (i = 0; i < n; i++)
        scanf("%lld", &a[i]);
    for (j = 0; j < 64; j++)
    {
        count = 0;
        p = pow(2, j);
        for (i = 0; i < n; i++)
        {
            if (a[i] & p)      /* <-- the line the question is about */
                count++;
        }
        sum += (count * (count - 1) * p / 2);
    }
    printf("%lld", sum);
    return 0;
}
What does the if statement in the second for loop do here?
And why is & used in the program?

The bitwise AND operator is a single ampersand: &. A handy mnemonic is
that the single &, being the small version of the boolean AND &&, works
on smaller pieces (bits instead of bytes, chars, integers, etc.). In
essence, a bitwise AND simply takes the logical AND of the bits in each
position of two numbers in binary form.
For instance, working with a byte (the char type):
01001000 &
10111000 =
--------
00001000
The most significant bit of the first number is 0, so we know the most significant bit of the result must be 0. In the second most significant position, the bit of the second number is 0, so we get the same result there. The only position where both bits are 1, and hence the only position where the result is 1, is the fifth bit from the left. Consequently,
72 & 184 = 8
Another example:
unsigned int a = 60; /* 60 = 0011 1100 */
unsigned int b = 13; /* 13 = 0000 1101 */
int c = 0;
c = a & b; /* 12 = 0000 1100 */
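For reference, a minimal compilable version of these examples (a sketch; the variable names follow the snippet above):
#include <iostream>

int main()
{
    unsigned int a = 60;              // 0011 1100
    unsigned int b = 13;              // 0000 1101
    std::cout << (a & b) << "\n";     // prints 12 (0000 1100)
    std::cout << (72 & 184) << "\n";  // prints 8  (0000 1000)
    return 0;
}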

& is the bitwise AND operator. It does just what it sounds like: it applies AND to every pair of corresponding bits. In your case, if p = 2^k, then a[i] & p checks whether the machine's binary representation of a[i] has the k-th bit set to 1.
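As an illustration, here is a small standalone sketch of that bit test (the names value and k are mine, not from the question; 1LL << k computes 2^k exactly, avoiding the floating-point pow used above):
#include <iostream>

int main()
{
    long long value = 12;            // binary ...1100
    for (int k = 0; k < 4; k++)
    {
        long long p = 1LL << k;      // p = 2^k
        if (value & p)               // nonzero exactly when bit k of value is 1
            std::cout << "bit " << k << " is set\n";
    }
    return 0;
}
// prints "bit 2 is set", then "bit 3 is set"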

The AND operator compares two given input bits and produces 1 if both bits are 1; otherwise it gives 0.

Related

I need to understand the logic behind this cpp code

int x = 25;
unsigned int g = x & 0x80000000;
How did this code read the most significant bit of x? Did the mask 0x80000000 (binary 1000 0000 0000 0000 0000 0000 0000 0000) accomplish that task, or was it something else?
For char the most significant bit is typically the sign bit as per Two's Complement, so this should be:
char x = 25;
unsigned int msb = x & (1 << 6);
Where (1 << 6) means bit 6, counting from 0, or the 7th bit counting from 1. It's the second-to-top bit and equivalent to 0x40.
Since 25 is 0b00011001 you won't get a bit set. You'll need a value >= 64.
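Putting both tests together, a hypothetical sketch (assuming a 32-bit int and an 8-bit char):
#include <iostream>

int main()
{
    int x = 25;
    unsigned int g = x & 0x80000000;   // top bit of a 32-bit int
    std::cout << g << "\n";            // 0: 25 is small and positive

    char c = 100;                      // 0110 0100
    unsigned int msb = c & (1 << 6);   // second-to-top bit, 0x40
    std::cout << msb << "\n";          // 64: that bit is set in 100
    return 0;
}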

Arduino code: shifting bits seems to change data type from int to long

On my Arduino, the following code produces output I don't understand:
void setup() {
    Serial.begin(9600);
    int a = 250;
    Serial.println(a, BIN);
    a = a << 8;
    Serial.println(a, BIN);
    a = a >> 8;
    Serial.println(a, BIN);
}
void loop() {}
The output is:
11111010
11111111111111111111101000000000
11111111111111111111111111111010
I do understand the first line: leading zeros are not printed to the serial terminal. However, after shifting the bits the data type of a seems to have changed from int to long (32 bits are printed). The expected behaviour is that bits are shifted to the left, and that bits which are shifted "out" of the 16 bits an int has are simply dropped. Shifting the bits back does not turn the "32bit" variable to "16bit" again.
Shifting by 7 or less positions does not show this effect.
I probably should say that I am not using the Arduino IDE, but the Makefile from https://github.com/sudar/Arduino-Makefile.
What is going on? I almost expect this to be "normal", but I don't get it. Or is it something in the printing routine which simply adds 16 "1"'s to the output?
Enno
In addition to other answers: integers might be stored in 16 bits or 32 bits depending on which Arduino you have.
The function printing numbers in Arduino is defined in /arduino-1.0.5/hardware/arduino/cores/arduino/Print.cpp
size_t Print::printNumber(unsigned long n, uint8_t base) {
    char buf[8 * sizeof(long) + 1]; // Assumes 8-bit chars plus zero byte.
    char *str = &buf[sizeof(buf) - 1];
    *str = '\0';
    // prevent crash if called with base == 1
    if (base < 2) base = 10;
    do {
        unsigned long m = n;
        n /= base;
        char c = m - base * n;
        *--str = c < 10 ? c + '0' : c + 'A' - 10;
    } while (n);
    return write(str);
}
All other printing functions rely on this one, so yes, your int gets promoted to an unsigned long when you print it, not when you shift it.
However, the library is correct. By shifting left 8 positions, the sign bit of the 16-bit integer becomes 1, so when the value is promoted to unsigned long it is sign-extended: padded with 16 extra 1s instead of 0s.
If you are using such a value not as a number but to contain some flags, use unsigned int instead of int.
ETA: for completeness, I'll add further explanation for the second shifting operation.
Once you touch the sign bit inside the int, shifting towards the right pads the number with 1s in order to preserve its negative value. Shifting to the right by k positions corresponds to dividing the number by 2^k, and since the number is negative to start with, the result must remain negative.
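The same effect can be reproduced off the Arduino with fixed-width types. A sketch, assuming two's complement wraparound on the narrowing assignment and an arithmetic right shift for negative values (implementation-defined in C++, but what virtually all compilers do):
#include <cstdio>
#include <cstdint>

int main()
{
    int16_t a = 250;                  // 0000 0000 1111 1010
    a = a << 8;                       // 1111 1010 0000 0000: the sign bit is now 1
    int32_t wide = a;                 // the promotion sign-extends with 1s
    printf("%08X\n", (unsigned)wide); // FFFFFA00
    a = a >> 8;                       // right shift preserves the sign
    printf("%d\n", a);                // -6, i.e. 1111 1111 1111 1010
    return 0;
}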

what value will be printed out if it is out of range in C++

I am a rookie in C++ and I have got a question here.
I used an int to compute the first 100 powers of 2. I know that the outcome will be out of the range of an int variable. I am just curious, since the result given by the program is 0: how did 0 come out?
Thanks in advance!
My code is as followed:
#include <iostream>
using namespace std;
int main()
{
    int a = 1;
    unsigned int b = 1;
    for (int i = 1; i <= 100; i++)
    {
        a = 2 * a;
        b = 2 * b;
    }
    cout << "the 100th power of 2 (using a signed int): " << a << endl;
    cout << "the 100th power of 2 (using an unsigned int): " << b << endl;
    // The fix: keep the console window open
    cout << "Enter a Char to Exit." << endl;
    char theFix;
    cin >> theFix;
    return 0;
}
Multiplying an unsigned integer, or a positive signed integer, by 2 is like shifting left by 1, with a 0 bit shifted in from the right. After 32 iterations (assuming 32-bit integers), the entire value will be all 0 bits. After that, shifting 0 left will not change the outcome anymore. (Strictly speaking, overflowing the signed int is undefined behaviour, but on common two's complement hardware it wraps the same way.)
Since you're new to C++, you might not know how the computer stores information. On most platforms an int is stored as a 32-bit binary number (a bunch of 1s and 0s).
a = a * 2;  // multiplication
a = a << 1; // left shift
These two instructions are synonymous (for non-negative values) due to the nature of binary numbers.
For instance, 0....000010 in binary notation == 2 in decimal notation.
So,
2 * 2 = 4 = 0....000100
4 * 2 = 8 = 0....001000
8 * 2 = 16 = 0....010000
and so on...
Since the bit count is capped at 32 for integers, you'll eventually reach 2^31 == 1000....000, a 1 followed by 31 zeros. When you multiply by 2 again, that 1 is shifted out on the left and you end up with 000...000000 = 0.
All further multiplications of 0 are zero, so that's where your final result came from.
EDIT: Would just like to point out that this exact result (a clean zero) is special to multiplying by 2: each doubling shifts another 0 bit in from the right. If you were to repeatedly multiply by 3 instead, for example, you would see ordinary integer overflow garbage rather than 0, because powers of 3 are odd.
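A short demonstration of both cases, kept unsigned since repeatedly overflowing a signed int is formally undefined behaviour in C++ (on common hardware it wraps the same way):
#include <iostream>

int main()
{
    unsigned int a = 1, b = 1;
    for (int i = 1; i <= 100; i++)
    {
        a = 2 * a;   // doubling shifts a 0 in from the right each time
        b = 3 * b;   // tripling wraps around modulo 2^32 but stays odd
    }
    std::cout << a << "\n";   // 0: all 32 bits cleared after 32 doublings
    std::cout << b << "\n";   // some nonzero odd value, 3^100 mod 2^32
    return 0;
}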

C++ copying integer to char[] or unsigned char[] error

So I'm using the following code to put an integer into a char[] or an unsigned char[]
(unsigned???) char test[12];
test[0] = (i >> 24) & 0xFF;
test[1] = (i >> 16) & 0xFF;
test[2] = (i >> 8) & 0xFF;
test[3] = (i >> 0) & 0xFF;
int j = test[3] + (test[2] << 8) + (test[1] << 16) + (test[0] << 24);
printf("Its value is...... %d", j);
When I use type unsigned char and value 1000000000 it prints correctly.
When I use type char (same value) I get 983157248 printed?
So, the question really is: can anyone explain what the hell is going on?
Upon examining the binary for the two different numbers, I still can't work out what's going on. I thought signed meant the MSB was set to 1 to indicate a negative value (but a negative char? wth?)
I'm explicitly telling the buffer what to insert into it, and how to interpret the contents, so don't see why this could be happening.
I have included binary/hex below for clarity in what I examined.
11 1010 1001 1001 1100 1010 0000 0000 // Binary for 983157248
11 1011 1001 1010 1100 1010 0000 0000 // Binary for 1000000000
3 A 9 9 C A 0 0 // Hex for 983157248
3 B 9 A C A 0 0 // Hex for 1000000000
In addition to the answer by Kerrek SB, please consider the following:
Computers (almost always) use two's complement notation for negative numbers, with the high bit functioning as a 'negative' indicator. Ask yourself what happens when you perform shifts on a signed type, considering that the computer handles the sign bit specially.
You may want to read Why does left shift operation invoke Undefined Behaviour when the left side operand has negative value? right here on StackOverflow for a hint.
When you say i & 0xFF etc., you're creating values in the range [0, 256). But (your) char has a range of [-128, +128), so you cannot actually store those values sensibly (i.e. the behaviour is implementation-defined and tedious to reason about).
Use unsigned char for unsigned values. The clue is in the name.
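A sketch of the difference (hypothetical values; assumes plain char is signed and int is 32 bits):
#include <cstdio>

int main()
{
    int i = 1000000000;                   // 0x3B9ACA00
    char sc = (i >> 16) & 0xFF;           // 0x9A == 154 does not fit in a signed char
    unsigned char uc = (i >> 16) & 0xFF;  // stays 154
    printf("%d %d\n", sc, uc);            // typically prints: -102 154

    // Rebuilding the int from the signed char sign-extends 0x9A to
    // 0xFFFFFF9A first, corrupting the higher-order bytes of the sum.
    int viaSigned = sc * 65536;           // -102 * 65536 = -6684672
    int viaUnsigned = uc * 65536;         //  154 * 65536 = 10092544
    printf("%d %d\n", viaSigned, viaUnsigned);
    return 0;
}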
This all has to do with internal representation and the way each type interprets that data. In a signed char, the top bit of the byte holds the sign and the pattern is read as two's complement: when the top bit is 1, the number is negative, and its magnitude is the two's complement (invert all the bits, then add 1) of the pattern. For example:
unsigned char c = 0xCB;   // internal representation 1100 1011
// 1100 1011 = 2^7 + 2^6 + 2^3 + 2^1 + 2^0 = 128 + 64 + 8 + 2 + 1
cout << (int)c;           // prints 203

char d = c;               // same bits, but now signed
cout << (int)d;           // prints -53 (where plain char is a signed 8-bit type)
// the top bit is 1, so d is negative; its magnitude is the
// two's complement of the pattern:
// 1100 1011 -> invert -> 0011 0100 -> add 1 -> 0011 0101 = 53

char e = 0x35;            // internal representation 0011 0101
// 0011 0101 = 2^5 + 2^4 + 2^2 + 2^0 = 32 + 16 + 4 + 1
cout << (int)e;           // prints 53

n bit 2s binary to decimal in C++

I am trying to convert a string containing a signed binary number to a decimal value in C++, using stoi as shown below.
stoi( binaryString, nullptr, 2 );
My inputs are binary strings in two's complement format, and stoi treats them as plain unsigned binary; for instance, "1100" results in 12 because stoi perceives it as "00001100".
But in a 4-bit system, 1100 in two's complement format equals -4. Any clues how to do this kind of conversion for arbitrary bit-length two's complement numbers in C++?
Handle signedness for numbers with fewer bits:
1. convert binary -> decimal
2. compute the two's complement if the sign bit is set (wherever your sign bit is, depending on word length)
#define BITSIZE 4
#define SIGNFLAG (1 << (BITSIZE - 1))   // 0b1000
#define DATABITS (SIGNFLAG - 1)         // 0b0111

int x = std::stoi("1100", NULL, 2);     // x = 12
if ((x & SIGNFLAG) != 0) {              // sign flag set
    x = (~x & DATABITS) + 1;            // 2s complement without the sign flag
    x = -x;                             // negative number
}
printf("%d\n", x);                      // prints -4
You can use strtoul, which is the unsigned equivalent. The only difference is that it returns an unsigned long, instead of an int.
You can probably implement

    w = -a[N-1] * 2^(N-1) + sum_{i=0}^{N-2} a[i] * 2^i

in C++, where a is binaryString, N is binaryString.size() and w is the result.
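A direct translation of that formula into a sketch (the function name twosComplementValue is mine; note that a[i] in the formula, the bit of weight 2^i, is binaryString[N-1-i], since the string stores the most significant bit first):
#include <iostream>
#include <string>

int twosComplementValue(const std::string& binaryString)
{
    int N = binaryString.size();
    int w = 0;
    for (int i = 0; i <= N - 2; i++)        // the N-1 low-order bits
        if (binaryString[N - 1 - i] == '1')
            w += 1 << i;                    // + a[i] * 2^i
    if (binaryString[0] == '1')             // the sign bit a[N-1]
        w -= 1 << (N - 1);                  // - a[N-1] * 2^(N-1)
    return w;
}

int main()
{
    std::cout << twosComplementValue("1100") << "\n";   // prints -4
    return 0;
}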
The correct answer probably depends on what you ultimately want to do with the int after you convert it. If you want to do signed math with it, then you need to 'sign extend' your result after the stoi conversion; this is what the compiler does internally on a cast from one size of signed int to another.
You can do this manually for a 4-bit system with something like this:
int myInt;
myInt = std::stoi( "1100", NULL, 2);
myInt |= myInt & 0x08 ? (-16 ) : 0;
Note, I used 0x08 as the test mask and -16 as the OR mask, as this is for a 4-bit result. You can change the masks to whatever is correct for your input bit length. Also, using a negative int like this will correctly sign-extend no matter what your system's integer size is.
Example for an arbitrary bit-width system (I used bitWidth to denote the size):
myInt = std::stoi( "1100", NULL, 2);
int bitWidth = 4;
myInt |= myInt & (1 << (bitWidth-1)) ? ( -(1<<bitWidth) ) : 0;
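Wrapped up as a small helper for any width (the name signExtend is mine, not a standard function; assumes 0 < width < 32):
#include <string>

int signExtend(const std::string& bits)
{
    int width = bits.size();
    int value = std::stoi(bits, nullptr, 2);
    // OR in the high bits if the sign bit of the narrow value is set
    value |= (value & (1 << (width - 1))) ? -(1 << width) : 0;
    return value;
}
// signExtend("1100") == -4, signExtend("01100") == 12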
You can use the bitset header for this:
#include <iostream>
#include <bitset>
using namespace std;
int main()
{
    bitset<4> bs;
    int no;
    cin >> bs;
    no = bs.to_ulong();   // value of the bits read as plain unsigned binary
    if (bs[3])            // sign bit set: subtract 2^4 to get the two's complement value
        no -= 16;
    cout << no;
    return 0;
}
Since to_ulong() returns an unsigned long, you have to handle the sign bit yourself.