The tilde operator in C - c++

I've seen the tilde operator used in the ELF hashing algorithm, and I'm curious what it does. (The code is from Eternally Confused.)
unsigned elf_hash ( void *key, int len )
{
    unsigned char *p = key;
    unsigned h = 0, g;
    int i;

    for ( i = 0; i < len; i++ ) {
        h = ( h << 4 ) + p[i];
        g = h & 0xf0000000L;

        if ( g != 0 )
            h ^= g >> 24;

        h &= ~g;
    }

    return h;
}

The ~ operator is bitwise NOT; it inverts the bits in a binary number:
NOT 011100
  = 100011
In the hash above, h &= ~g uses it to clear from h whichever bits are set in g.

~ is the bitwise NOT operator. It inverts the bits of the operand.
For example, if you have:
unsigned char b = 0xF0; /* Bits are 11110000 */
unsigned char c = ~b;   /* Bits are 00001111 */

This is the bitwise NOT operator.
It flips all the bits in a number: 100110 -> 011001

The tilde character is used as an operator to invert all bits of an integer (bitwise NOT).
For example, on a 16-bit value: ~0x0044 = 0xFFBB.
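A minimal sketch of that example; uint16_t is used here only to make the 16-bit width explicit:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t x = 0x0044;
    uint16_t y = ~x;   /* ~ promotes to int; storing back into 16 bits keeps 0xFFBB */

    printf("~0x%04X = 0x%04X\n", (unsigned) x, (unsigned) y);  /* prints ~0x0044 = 0xFFBB */
    return 0;
}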

It is the bitwise NOT operator. It inverts all bits in an integer value.

The tilde operator (~), also called the bitwise NOT operator, produces the one's complement of its operand: if the operand is written in decimal, think of it as converted to binary first, and then every bit is inverted.
To calculate the one's complement, simply invert every digit [0 --> 1] and [1 --> 0].
Ex: 0101 = 5; ~(0101) = 1010.
Uses of the tilde operator:
1. It is used in masking operations. Masking means setting, clearing, or testing particular bits inside a register (a small set/clear/test sketch follows the example in point 2 below). For example:
char mask;
mask = 1 << 5;
This sets mask to the binary value 100000, and the mask can then be used to examine the corresponding bit of another variable:
int a = 4;
int k = a & mask;  /* k is non-zero if bit 5 of a is 1, otherwise k is 0 */
This is called masking of bits.
2. To find the binary equivalent of any number using this masking property:
#include <stdio.h>

void equi_bits(unsigned char);

int main()
{
    unsigned char num = 10;
    printf("\nDecimal %d is same as binary ", num);
    equi_bits(num);
    return 0;
}

void equi_bits(unsigned char n)
{
    int i;
    unsigned char k, mask;

    for (i = 7; i >= 0; i--)
    {
        mask = 1 << i;
        k = n & mask;                     /* masking */
        k == 0 ? printf("0") : printf("1");
    }
}
Output: Decimal 10 is same as binary 00001010
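As mentioned in point 1, here is a minimal set/clear/test sketch; the register value and the bit position are just illustrative, and ~ is what does the clearing:
#include <stdio.h>

int main(void)
{
    unsigned char reg = 0x55;        /* 0101 0101 */
    unsigned char mask = 1 << 5;     /* 0010 0000 */

    reg |= mask;                     /* set bit 5   -> 0111 0101 */
    printf("after set:   0x%02X\n", reg);

    if (reg & mask)                  /* test bit 5 (non-zero means set) */
        printf("bit 5 is set\n");

    reg &= ~mask;                    /* clear bit 5 -> 0101 0101 */
    printf("after clear: 0x%02X\n", reg);
    return 0;
}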
My observation: for any value within the range of the data type, the one's complement of a value is its negative decreased by 1 (~x == -x - 1):
ex: ~1 --------> -2
~2 --------> -3
and so on... I will show you this observation using a little code snippet:
#include <stdio.h>

int main()
{
    int a, b;
    a = 10;
    b = ~a;                     /* b ----> -11 */
    printf("%d\n", a + b + 1);  /* a + ~a + 1 == 0, since ~a == -a - 1 */
    return 0;
}
Output: 0
Note: this is valid only within the range of the data type; for a 32-bit int the rule applies only to values in the range [-2,147,483,648 to 2,147,483,647].
Thank you, I hope this helps.

Related

Bitwise operator to calculate checksum

I'm trying to come up with a C/C++ function to calculate the checksum of a given array of hex values.
char *hex = "3133455D332015550F23315D";
For example, the above buffer has 12 bytes, and the last byte is the checksum.
What needs to be done is: convert the first 11 individual bytes to decimal and take their sum,
i.e. 31 = 49,
33 = 51, .....
so 49 + 51 + .....................
Then convert this decimal value to hex, take the LSB of that hex value, and convert it to binary.
Now take the 2's complement of this binary value and convert that to hex. At this step the hex value should be equal to the 12th byte.
(The above buffer is just an example, so it may not be correct.)
So there are multiple steps involved, and I'm looking for an easy way to do this using bitwise operators.
I did something like this, but it seems to take only the first 2 bytes and doesn't give me the right answer:
int checksum (char * buffer, int size){
    int value = 0;
    unsigned short tempChecksum = 0;
    int checkSum = 0;

    for (int index = 0; index < size - 1; index++) {
        value = (buffer[index] << 8) | (buffer[index]);
        tempChecksum += (unsigned short) (value & 0xFFFF);
    }
    checkSum = (~(tempChecksum & 0xFFFF) + 1) & 0xFFFF;
}
I couldn't get this logic to work. I don't have enough embedded programming behind me to understand the bitwise operators. Any help is welcome.
ANSWER
I got this working with the changes below:
for (int index = 0; index < size - 1; index++) {
    value = buffer[index];
    tempChecksum += (unsigned short) (value & 0xFFFF);
}
checkSum = (~(tempChecksum & 0xFF) + 1) & 0xFF;
Using addition to obtain a checksum is at least unusual; common checksums use bitwise XOR or a full CRC. But assuming it is really what you need, it can be done easily with unsigned char operations:
#include <stdio.h>

char checksum(const char *hex, int n) {
    unsigned char ck = 0;
    for (int i = 0; i < n; i += 1) {
        unsigned val;
        int cr = sscanf(hex + 2 * i, "%2x", &val); // convert 2 hexa chars to a byte value
        if (cr == 1) ck += val;
    }
    return ck;
}

int main() {
    char hex[] = "3133455D332015550F23315D";
    char ck = checksum(hex, 11);
    printf("%2x", (unsigned) (unsigned char) ck);
    return 0;
}
As the operations are performed on an unsigned char, everything exceeding a byte value is properly discarded, and you obtain your value (hex 26 in your example).
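The answer above stops at the byte-wide sum. The question also wants the checksum byte itself, which it describes as the two's complement of that sum; below is a minimal sketch of that final step (the ~sum + 1 form is the same tilde trick discussed at the top of this page, and the example buffer is not assumed to actually carry a valid checksum):
#include <stdio.h>

int main(void) {
    const char hex[] = "3133455D332015550F23315D";
    unsigned char sum = 0;
    unsigned val;

    /* Sum the first 11 bytes, exactly as in the answer above. */
    for (int i = 0; i < 11; i++) {
        if (sscanf(hex + 2 * i, "%2x", &val) == 1)
            sum += (unsigned char) val;
    }

    /* The checksum byte described in the question is the two's complement
       of that sum, i.e. ~sum + 1 truncated to one byte. */
    unsigned char expected = (unsigned char)(~sum + 1);

    /* Read the 12th byte actually stored in the string, for comparison. */
    sscanf(hex + 2 * 11, "%2x", &val);

    printf("sum = %02X, expected checksum = %02X, stored = %02X\n", sum, expected, val);
    /* The question notes the example buffer may not be correct, so expected
       and stored need not match here. */
    return 0;
}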

Bitwise operations and setting "flags"

So this is an update to my last post, but I'm still having a lot of trouble understanding how this works. I was given the main function:
void set_flag(int* flag_holder, int flag_position);
int check_flag(int flag_holder, int flag_position);

int main(int argc, char* argv[])
{
    int flag_holder = 0;
    int i;

    set_flag(&flag_holder, 3);
    set_flag(&flag_holder, 16);
    set_flag(&flag_holder, 31);

    for (i = 31; i >= 0; i--) {
        printf("%d", check_flag(flag_holder, i));
        if (i % 4 == 0)
            printf(" ");
    }
    printf("\n");

    return 0;
}
And for the assignment we are supposed to write the functions set_flag and check_flag, so that the output is equal to:
1000 0000 0000 0001 0000 0000 0000 1000
So from what I understand, we're supposed to use the "set_flag" function to make sure that the nth bit is 1, and the "check_flag" function returns an integer that is 0 when the nth bit is 0 and 1 when it is 1. I don't understand what "set_flag" is really doing, and how 3, 16, and 31 will be saved as "flags" which then come back as 1's from "check_flag".
When working with binary or hexadecimal values, a common approach is to define a mask that we will apply to a main value.
You can easily set one or more bits to '1' with the inclusive OR operator '|'
eg: we want to set the bit#0 to '1'
main value 01011000 |
mask 00000001 =
result 01011001
To test a particular bit you can use the AND operator '&'
eg: we want to test the bit#3
main value 01011000 &
mask 00001000 =
result 00001000
note: you may need to properly format the result; the & operation returns either zero or non-zero (but not necessarily '1').
So here are the 2 functions set_flag and check_flag:
void set_flag(int* flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    *flag_holder = *flag_holder | mask;
}

int check_flag(int flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    int check = flag_holder & mask;
    return (check != 0 ? 1 : 0);
}
In these scenarios we need a binary mask to set/check only one bit. The code "int mask = 1 << flag_position;" builds this single-bit mask: it basically sets bit #0 to '1' and then shifts it left to the bit we want to set/check.
Function Set_flag and Function Check_flag:
void set_flag(int* flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    *flag_holder = *flag_holder | mask;
}

int check_flag(int flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    int check = flag_holder & mask;
    return (check != 0 ? 1 : 0);
}
Run it along with the main program and you will get the desired output.
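Since this whole page is about the tilde operator, it is worth noting the natural companion to these two functions: clearing a flag, which is exactly where ~ comes in. A sketch of such a helper (clear_flag is hypothetical, not part of the assignment):
/* Hypothetical companion to set_flag/check_flag: turn the nth bit back off. */
void clear_flag(int* flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    *flag_holder = *flag_holder & ~mask;  /* ~mask has every bit set except flag_position */
}
Calling clear_flag(&flag_holder, 16) after the three set_flag calls in main would change the printed pattern to 1000 0000 0000 0000 0000 0000 0000 1000.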

retrieve last 6 bits from an integer

I need to fetch the last 6 bits of an integer or Uint32. For example, if I have a value of 183, I need the last six bits, which are 110111, i.e. 55.
I have written a small piece of code, but it's not behaving as expected. Could you guys please point out where I'm making a mistake?
int compress8bitTolessBit( int value_to_compress, int no_of_bits_to_compress )
{
    int ret = 0;
    while (no_of_bits_to_compress--)
    {
        std::cout << " the value of bits " << no_of_bits_to_compress << std::endl;
        ret >>= 1;
        ret |= ( value_to_compress % 2 );
        value_to_compress /= 2;
    }
    return ret;
}

int _tmain(int argc, _TCHAR* argv[])
{
    int val = compress8bitTolessBit( 183, 5 );
    std::cout << " the value is " << val << std::endl;
    system("pause>nul");
    return 0;
}
You have entered the realm of binary arithmetic. C++ has built-in operators for this kind of thing. The act of "getting certain bits" of an integer is done with an "AND" binary operator.
0101 0101
AND 0000 1111
---------
0000 0101
In C++ this is:
int n = 0x55 & 0xF;
// n = 0x5
So to get the right-most 6 bits,
int n = original_value & 0x3F;
And to get the right-most N bits,
int n = original_value & ((1 << N) - 1);
Here is more information on
Binary arithmetic operators in C++
Binary operators in general
I don't get the problem; can't you just use bitwise operators? E.g.
u32 trimmed = value & 0x3F;
This will keep just the 6 least significant bits by using the bitwise AND operator.
tl;dr:
int val = x & 0x3F;
int value = input & ((1 << no_of_bits_to_compress) - 1);
This calculates the nth power of two, 1 << no_of_bits_to_compress, and subtracts 1 to get a mask with the lowest n bits set.
The last k bits of an integer A:
1. A % (1<<k); // simply A % 2^k
2. A - ((A>>k)<<k);
The first method uses the fact that the last k bits are what is trimmed off after doing k right shifts (dividing by 2^k); note that the % form assumes A is non-negative.
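A quick sketch checking these answers against the 183 -> 55 example from the question (both the mask form and the remainder form agree for a non-negative input):
#include <stdio.h>

int main(void)
{
    int value = 183;                       /* 1011 0111 */
    int n = 6;

    int via_and = value & ((1 << n) - 1);  /* 0x3F mask keeps the low 6 bits */
    int via_mod = value % (1 << n);        /* same thing, as a remainder     */

    printf("%d %d\n", via_and, via_mod);   /* prints 55 55 */
    return 0;
}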

Check value of least significant bit (LSB) and most significant bit (MSB) in C/C++

I need to check the value of the least significant bit (LSB) and most significant bit (MSB) of an integer in C/C++. How would I do this?
//int value;
int LSB = value & 1;
Alternatively (which is not theoretically portable, but practically it is - see Steve's comment)
//int value;
int LSB = value % 2;
Details:
The second formula is simpler. The % operator is the remainder operator. A number's LSB is 1 iff it is odd and 0 otherwise, so we check the remainder of dividing by 2. The logic of the first formula is this: the number 1 in binary is
0000...0001
If you binary-AND this with an arbitrary number, all the bits of the result will be 0 except the last one, because 0 AND anything is 0. The last bit of the result will be 1 iff the last bit of your number was 1, because 1 & 1 == 1 and 1 & 0 == 0.
You can do something like this:
#include <iostream>

int main(int argc, char **argv)
{
    int a = 3;
    std::cout << (a & 1) << std::endl;
    return 0;
}
This way you AND your variable with 1, which isolates the LSB, because
3: 011
1: 001
in 3-bit representation. The AND truth table is:
AND
-----
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
so you will be able to tell whether the LSB is 1 or not.
edit: find MSB.
First of all, read up on endianness to agree on what MSB means; in the following lines we assume big-endian notation.
To find the MSB, the snippet below applies a right shift repeatedly until the MSB has been moved into the LSB position, where it can be ANDed with 1.
Consider the following code:
#include <iostream>
#include <limits.h>

int main(int argc, char **argv)
{
    unsigned int a = 128; // we want to find MSB of this 32-bit unsigned int
    int MSB = 0;          // this variable will represent the MSB we're looking for

    // sizeof(unsigned int) = 4 (in Bytes)
    // 1 Byte = 8 bits
    // So 4 Bytes are 4 * 8 = 32 bits
    // We have to perform a right shift 32 times to have the
    // MSB in the LSB position.
    for (int i = sizeof(unsigned int) * 8; i > 0; i--) {
        MSB = (a & 1); // in the last iteration this contains the MSB value
        a >>= 1;       // perform the 1-bit right shift
    }

    // this prints out '0', because the 32-bit representation of
    // unsigned int 128 is:
    // 00000000000000000000000010000000
    std::cout << "MSB: " << MSB << std::endl;
    return 0;
}
If you print MSB outside of the cycle you will get 0.
If you change the value of a:
unsigned int a = UINT_MAX; // found in <limits.h>
MSB will be 1, because its 32-bit representation is:
UINT_MAX: 11111111111111111111111111111111
However, if you do the same thing with a signed integer things will be different.
#include <iostream>
#include <limits.h>

int main(int argc, char **argv)
{
    int a = -128; // we want to find MSB of this 32-bit signed int
    int MSB = 0;  // this variable will represent the MSB we're looking for

    // sizeof(int) = 4 (in Bytes)
    // 1 Byte = 8 bits
    // So 4 Bytes are 4 * 8 = 32 bits
    // We have to perform a right shift 32 times to have the
    // MSB in the LSB position.
    for (int i = sizeof(int) * 8; i > 0; i--) {
        MSB = (a & 1); // in the last iteration this contains the MSB value
        a >>= 1;       // perform the 1-bit right shift
    }

    // this prints out '1', because the 32-bit two's complement
    // representation of int -128 is:
    // 11111111111111111111111110000000
    std::cout << "MSB: " << MSB << std::endl;
    return 0;
}
As I said in the comment below, the MSB of a positive integer is always 0, while the MSB of a negative integer is always 1.
You can check INT_MAX 32-bit representation:
INT_MAX: 01111111111111111111111111111111
Now, why does the loop use sizeof()? If you simply write the loop like this:
for (; a != 0; a >>= 1)
    MSB = a & 1;
you will always get 1, because the loop stops at the highest set bit (a != 0 is the exit condition) and never looks at the 'zero-pad bits' above it. For example, for 32-bit integers we have:
int 7 : 00000000000000000000000000000111
^ this will be your fake MSB
without considering the full size
of the variable.
int 16: 00000000000000000000000000010000
^ fake MSB
int LSB = value & 1;
int MSB = value >> (sizeof(value)*8 - 1) & 1;
Others have already mentioned:
int LSB = value & 1;
for getting the least significant bit. But there is a cheatier way to get the MSB than has been mentioned. If the value is a signed type already, just do:
int MSB = value < 0;
If it's an unsigned quantity, cast it to the signed type of the same size, e.g. if value was declared as unsigned, do:
int MSB = (int)value < 0;
Yes, officially, not portable, undefined behavior, whatever. But on every two's complement system and every compiler for them that I'm aware of, it happens to work; after all, the high bit is the sign bit, so if the signed form is negative, then the MSB is 1, if it's non-negative, the MSB is 0. So conveniently, a signed test for negative numbers is equivalent to retrieving the MSB.
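A small sketch comparing this sign-test trick with the shift expression given a couple of answers up (it assumes a 32-bit int and two's complement, which is exactly the practical assumption this answer makes):
#include <stdio.h>

int main(void)
{
    unsigned value = 0x80000001u;             /* MSB set, LSB set */

    int LSB       = value & 1;
    int MSB_shift = (value >> (sizeof(value) * 8 - 1)) & 1;
    int MSB_sign  = (int)value < 0;           /* the cast trick from this answer */

    printf("LSB=%d MSB(shift)=%d MSB(sign)=%d\n", LSB, MSB_shift, MSB_sign);
    /* prints LSB=1 MSB(shift)=1 MSB(sign)=1 on a typical two's-complement system */
    return 0;
}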
LSB is easy. Just x & 1.
The MSB is a bit trickier, as bytes may not be 8 bits, sizeof(int) may not be 4, and there might be padding bits.
Also, with a signed integer, do you mean the sign bit or the most significant value bit?
If you mean the sign bit, life is easy. It's just x < 0.
If you mean the most significant value bit, to be completely portable:
int answer = 0;
int rack = 1;
int mask = 1;

while (rack < INT_MAX)
{
    rack <<= 1;
    mask <<= 1;
    rack |= 1;
}

return x & mask;
That's a long-winded way of doing it. In reality
x & (1 << (sizeof(int) * CHAR_BIT - 2));
will be quite portable enough and your ints won't have padding bits.

How to read individual bits from an array?

Let's say I have a dynamically allocated array.
int* array = new int[10];
That is 10*4 = 40 bytes, or 10*32 = 320 bits. I want to read the 2nd bit of the 30th byte, i.e. the 242nd bit. What is the easiest way to do so? I know I can access the 30th byte using array[30], but accessing individual bits is trickier.
bool bitset(void const * data, int bitindex) {
    int byte = bitindex / 8;
    int bit = bitindex % 8;
    unsigned char const * u = (unsigned char const *) data;
    return (u[byte] & (1 << bit)) != 0;
}
This works:
#include <stdio.h>

#define GET_BIT(p, n) ((((unsigned char *)(p))[(n)/8] >> ((n)%8)) & 0x01)

int main()
{
    int myArray[2] = { 0xaaaaaaaa, 0x00ff00ff };
    for (int i = 0; i < 2 * 32; i++)
        printf("%d", GET_BIT(myArray, i));
    return 0;
}
Output:
0101010101010101010101010101010111111111000000001111111100000000
Be careful of the endianness!
First, if you're doing bitwise operations, it's usually preferable to make the elements an unsigned integral type (although in this case, it really doesn't make that much difference). As for accessing the bits: to access bit i in an array of n ints:
static int const bitsPerWord = sizeof(int) * CHAR_BIT;
assert( i >= 0 && i < n * bitsPerWord );
int wordIndex = i / bitsPerWord;
int bitIndex = i % bitsPerWord;
then to read:
return (array[wordIndex] & (1 << bitIndex)) != 0;
to set:
array[wordIndex] |= 1 << bitIndex;
and to reset:
array[wordIndex] &= ~(1 << bitIndex);
Or you can use bitset, if n is constant, or vector<bool> or boost::dynamic_bitset if it's not, and let someone else do the work.
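A short sketch wrapping the three snippets above into standalone helpers (the names readBit, setBit and resetBit are mine, not from the answer):
#include <limits.h>
#include <stdio.h>

static int const bitsPerWord = sizeof(int) * CHAR_BIT;

/* Illustrative wrappers around the read/set/reset expressions above. */
int  readBit(int const *array, int i) { return (array[i / bitsPerWord] & (1 << (i % bitsPerWord))) != 0; }
void setBit(int *array, int i)        { array[i / bitsPerWord] |= 1 << (i % bitsPerWord); }
void resetBit(int *array, int i)      { array[i / bitsPerWord] &= ~(1 << (i % bitsPerWord)); }

int main()
{
    int array[10] = {0};

    setBit(array, 242);                   /* the bit asked about in the question */
    printf("%d\n", readBit(array, 242));  /* prints 1 */
    resetBit(array, 242);
    printf("%d\n", readBit(array, 242));  /* prints 0 */
    return 0;
}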
You can use something like this:
!((array[30] & 2) == 0)
array[30] is the integer.
& 2 is an and operation which masks the second bit (2 = 00000010)
== 0 will check if the mask result is 0
! will negate that result, because we're checking if it's 1 not zero....
You need bit operations here...
if (array[5] & 0x1)
{
    // the first bit in array[5] is 1
}
else
{
    // the first bit is 0
}

if (array[5] & 0x8)
{
    // the 4th bit in array[5] is 1
}
else
{
    // the 4th bit is 0
}
0x8 is 00001000 in binary. ANDing with it masks off all the other bits and lets you see whether that bit is 1 or 0.
int is typically 32 bits, so you would need to do some arithmetic to get a certain bit number in the entire array.
EDITED based on a comment below: the array contains 32-bit ints, not 8-bit unsigned chars.
int pos = 241; // I start at index 0
bool bit242 = (array[pos/32] >> (pos%32)) & 1;
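A runnable sketch of that last snippet, using the dynamically allocated array from the question and assuming 32-bit ints as the answer does; the value written into element 7 is arbitrary, chosen only so that the bit being asked about is set:
#include <stdio.h>

int main()
{
    int* array = new int[10]();    // zero-initialised, as in the question
    array[7] = 1 << 17;            // arbitrary test value: pos 241 falls in word 7, bit 17

    int pos = 241;                 // the "242nd bit", counting from index 0
    bool bit242 = (array[pos / 32] >> (pos % 32)) & 1;
    printf("%d\n", bit242);        // prints 1

    delete[] array;
    return 0;
}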