Is my masking correct - c++

What does the following mean?
if ((readParameter->type(0) & 0xff) == 0xff) {}
I know that when we do '&' with 0xff it returns the LSB (least significant byte). But what does it mean to then compare that result with == 0xff?
I feel it is something like this (for instance):
if ((00000000 00000000 00000000 11011001 & 00000000 00000000 00000000 11111111) == 00000000 00000000 00000000 11111111)
{
// IF THEY ARE EQUAL IT ENTERS THE BLOCK? IN THIS CASE THEY ARE NOT EQUAL
}
Please correct me if I am wrong.

But what does it mean to then compare that result with == 0xff?
This if checks whether the least significant byte is equal to 0xff. The rest of what readParameter->type(0) returns might contain other bits that are set; if they were not removed with & 0xff, then equality with 0xff might never be true.
I know when we do '&' with something then it returns the LSB.
This is not true. When you use a binary bitwise AND, the result depends on the arguments used in the operation. If you & with 0xff you will get the least significant byte, but if you do (ui32value & 0xff000000) >> 24 you will read the most significant byte.

The result of the filtering operation (& 0xff) is compared to the mask itself, to check if the last 8 bits of the parameter were all equal to 1.
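To make the check concrete, here is a minimal sketch; readValue is just a hypothetical stand-in for whatever readParameter->type(0) returns in the asker's code base:
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t readValue = 0x123400FF;          // example value whose low byte is all ones
    if ((readValue & 0xFF) == 0xFF) {              // keep only the low byte, then compare it to 0xFF
        std::cout << "low byte is 0xFF\n";         // taken for this value
    }

    std::uint32_t other = 0xD9;                    // the asker's 11011001 example
    std::cout << ((other & 0xFF) == 0xFF) << '\n'; // prints 0: the low byte is not all ones
}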

Check if IP first octet does not start with 127 / 224 or 255

I have an IP stored in uint32_t type variable:
uint32_t ip = 4289172904;
I need to find out whether the first octet of the IP does not start with 127, 224 or 255.
I am not sure how to achieve this?
It depends on what you call the first octet:
uint8_t octet = ip & 0xff;
or
uint8_t octet = (ip >> 24);
Explanation for the first solution:
uint32_t is 32 bits wide
0xff is 00000000 00000000 00000000 11111111 in binary
so doing ip & 0xff will mask off all bits that aren't in the lowest byte
There is no detailed information about the semantics of your uint32_t ip. Be aware that your host byte order (endianness) may differ from network byte order. If necessary, use htonl(ip) to convert to network byte order, then use bitwise operators to check the highest byte, e.g. ((htonl(ip) >> 24) & 0xff) gets the first octet, which you can then compare against your 127/224/255, etc.
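Putting the two answers together, a sketch of the check might look like this (assuming ip is stored so that the "first" octet is the most significant byte of the host-order value, as in the second solution above; adjust if your convention differs):
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t ip = 4289172904u;          // the value from the question
    std::uint8_t first = (ip >> 24) & 0xFF;  // take the highest byte as the first octet
    bool forbidden = (first == 127 || first == 224 || first == 255);
    std::cout << int(first) << (forbidden ? " is" : " is not")
              << " one of 127/224/255\n";
}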

How to get the bit length of an integer in C++? [duplicate]

This question is not a duplicate of Count the number of set bits in a 32-bit integer. See comment by Daniel S. below.
--
Let's say there is a variable int x;. Its size is 4 bytes, i.e. 32 bits.
Then I assign a value to this variable, x = 4567 (in binary 10001 11010111), so in memory it looks like this:
00000000 00000000 00010001 11010111
Is there a way to get the length of the bits that matter? In my example, the length is 13 bits (from the leading 1 onward).
If I use sizeof(x) it returns 4, i.e. 4 bytes, which is the size of the whole int. How do I get the minimum number of bits required to represent the integer without the leading 0s?
unsigned bits, var = (x < 0) ? -x : x; // work on the magnitude of x
for (bits = 0; var != 0; ++bits) var >>= 1; // shift right until nothing is left, counting the steps
This should do it for you.
Warning: math ahead. If you are squeamish, skip ahead to the TL;DR.
What you are really looking for is the highest bit that is set. Let's write out what the binary number 10001 11010111 actually means:
x = 1 * 2^(12) + 0 * 2^(11) + 0 * 2^(10) + ... + 1 * 2^1 + 1 * 2^0
where * denotes multiplication and ^ is exponentiation.
You can write this as
2^12 * (1 + a)
where 0 < a < 1 (to be precise, a = 0/2 + 0/2^2 + ... + 1/2^11 + 1/2^12).
If you take the logarithm (base 2), let's denote it by log2, of this number you get
log2(2^12 * (1 + a)) = log2(2^12) + log2(1 + a) = 12 + b.
Since a < 1 we can conclude that 1 + a < 2 and therefore b < 1.
In other words, if you take the log2(x) and round it down you will get the most significant power of 2 (in this case, 12). Since the powers start counting at 0, the number of bits is one more than this power, namely 13. So:
TL;DR:
The minimum number of bits needed to represent the number x is given by
numberOfBits = floor(log2(x)) + 1
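A sketch of that formula in code, assuming x is positive (for very large values the floating-point log2 can suffer rounding problems, so the bit-twiddling answers below are safer):
#include <cmath>

unsigned bit_length_via_log(unsigned x) {
    // floor(log2(x)) is the position of the highest set bit; +1 turns it into a bit count
    return static_cast<unsigned>(std::floor(std::log2(x))) + 1;
}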
You're looking for the most significant bit that's set in the number. Let's ignore negative numbers for a second. How can we find it? Well, let's see how many bits we need to set to zero before the whole number is zero.
00000000 00000000 00010001 11010111   (start)
00000000 00000000 00010001 11010110   bit 0 cleared
00000000 00000000 00010001 11010100   bit 1 cleared
00000000 00000000 00010001 11010000   bit 2 cleared
00000000 00000000 00010001 11010000   bit 3 was already 0
00000000 00000000 00010001 11000000   bit 4 cleared
00000000 00000000 00010001 11000000   bit 5 was already 0
00000000 00000000 00010001 10000000   bit 6 cleared
...
00000000 00000000 00010000 00000000   bits 7 and 8 cleared, bits 9-11 already 0
00000000 00000000 00000000 00000000   bit 12 cleared
Done! After 13 bits, we've cleared them all. Now how do we do this? Well, the expression 1 << pos is the 1 bit shifted over pos positions. So we can check if (x & (1 << pos)) is non-zero and, if it is, remove that bit: x -= (1 << pos). We can also do this in one operation: x &= ~(1 << pos). ~ gives us the complement: all ones with the pos bit set to zero instead of the other way around. x &= y copies the zero bits of y into x.
Now how do we deal with signed numbers? The easiest is to just ignore it: unsigned xu = x;
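A sketch of this approach, checking and clearing each bit position as described above (assuming a 32-bit unsigned int):
unsigned bit_length(int x) {
    unsigned xu = x;                     // "just ignore" the sign, as suggested above
    unsigned bits = 0;
    for (unsigned pos = 0; pos < 32; ++pos) {
        if (xu & (1u << pos)) {          // is bit pos set?
            xu &= ~(1u << pos);          // clear it
            bits = pos + 1;              // the highest set bit seen so far gives the length
        }
    }
    return bits;                         // 13 for the example value 4567
}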
Many processors provide an instruction for calculating the number of leading zero bits directly (e.g. x86 has lzcnt / bsr and ARM has clz). Usually C++ compilers provide an intrinsic for accessing one of these instructions. The number of leading zeros can then be used to calculate the bit length.
In GCC, the intrinsic is called __builtin_clz. It counts the number of leading zeros for a 32 bit integer.
However, there is one caveat about __builtin_clz. When the input is 0, the result is undefined. Therefore we need to take care of this special case. This is done in the following function with (x == 0) ? 32 : ..., which gives the result 32 when x is 0:
uint32_t count_of_leading_0_bits(const uint32_t &x) {
return (x == 0) ? 32 : __builtin_clz(x);
}
The bit length can then be calculated from the number of leading zeros:
uint32_t bitlen(const uint32_t &x) {
return 32 - count_of_leading_0_bits(x);
}
Note that other C++ compilers have different intrinsics for counting the number of leading zero bits, but you can find them quickly with a search on the internet. See "How to use MSVC intrinsics to get the equivalent of this GCC code?" for an MSVC equivalent.
The portable, modern way since C++20 is probably to use std::countl_zero, like
#include <bit>
int bit_length(unsigned x)
{
return (8*sizeof x) - std::countl_zero(x);
}
Both gcc and clang emit a single bsr instruction on x86 for this code (with a branch on zero), so it should be pretty much optimal.
Note that std::countl_zero only accepts unsigned arguments though, so deciding how to handle your original int parameter is left as an exercise for the reader.
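For completeness, one possible (purely illustrative) way to accept the original int is to work on its magnitude as an unsigned value, mirroring the loop answer above:
#include <bit>

int bit_length_signed(int x)
{
    // take the magnitude without overflowing on INT_MIN, then reuse the unsigned logic
    unsigned u = (x < 0) ? 0u - static_cast<unsigned>(x) : static_cast<unsigned>(x);
    return (8 * sizeof u) - std::countl_zero(u);
}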

How is memory laid out and in which direction does the address get bigger? Using unions

I was working on making a random number gen. I am using a union to access bytes.
typedef unsigned int uint;
typedef unsigned char uchar;
union
{
    uint intBits;
    uchar charBits[4];
};
// Yes, I know ints are not guaranteed to be 4 bytes, but ignore that.
So if the number 1 was stored in this union it would look like
00000000 00000000 00000000 00000001
right?
Would an int of -1 look like
00000000 00000000 00000000 00000001
or
10000000 00000000 00000000 00000001
so really the address of the uint is the bit that is 1, right? And the address of charBits[0] is the bit that is 1, right? The confusing thing is this: charBits[1] would have to move to the left to be here
00000000 00000000 00000000 00000001
so do memory addresses get bigger right to left or left to right?
EDIT:
I am on a 64-bit Windows 7 system with an Intel i7 CPU.
It depends on the machine architecture. If your CPU is big endian then it will work as you seem to expect:
int(4) => b3 b2 b1 b0
But if your CPU is little endian then the bytes are in the opposite direction:
int(4) => b0 b1 b2 b3
Note that bit orders within bytes are always written from left (most significant) to right (least significant).
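If you want to see which order your own machine uses, here is a quick sketch using the union from the question (strictly speaking, reading the inactive union member is not blessed by the C++ standard, but compilers commonly support it):
#include <iostream>

int main() {
    union {
        unsigned int intBits;
        unsigned char charBits[4];
    } u;
    u.intBits = 1;
    for (int i = 0; i < 4; ++i)
        std::cout << "charBits[" << i << "] = " << int(u.charBits[i]) << '\n';
    // a little-endian machine (like the asker's Intel i7) prints 1 for charBits[0];
    // a big-endian machine prints 1 for charBits[3]
}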
There is absolutely no need to do this at all. You can easily compose a 32-bit integer from 8-bit values like this:
int myInt = byte1 | (byte2 << 8) | (byte3 << 16) | (byte4 << 24);
And you can easily decompose a 32-bit integer into 8-bit values like this:
byte1 = myInt & 0xff;
byte2 = (myInt >> 8) & 0xff;
byte3 = (myInt >> 16) & 0xff;
byte4 = (myInt >> 24);
So there's no reason to write non-portable, hard to understand code that relies on internal representation details of the CPU or platform. Just write code that clearly does what you actually want.
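A quick round-trip sketch of the decompose/compose pattern above (the value is arbitrary):
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t myInt = 0x11223344u;
    std::uint8_t byte1 = myInt & 0xff;          // lowest byte
    std::uint8_t byte2 = (myInt >> 8) & 0xff;
    std::uint8_t byte3 = (myInt >> 16) & 0xff;
    std::uint8_t byte4 = myInt >> 24;           // highest byte
    std::uint32_t again = byte1 | (byte2 << 8) | (byte3 << 16)
                        | (std::uint32_t(byte4) << 24);
    std::cout << std::hex << myInt << " -> " << again << '\n';  // prints the same value twice
}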

Are bitwise operations going to help me to serialize some bools?

I'm not used to binary files, and I'm trying to get the hang of it. I managed to store some integers and unsigned chars, and read them without too much pain. Now, when I'm trying to save some booleans, I see that each of my bools takes exactly 1 octet in my file, which seems logical since a lone bool is stored in a char-sized piece of data (correct me if I'm wrong!).
But since I'm going to have 3 or 4 bools to serialize, I figure it is a waste to store them like this: 00000001 00000001 00000000, for instance, when I could have 00000110. I guess to obtain this I should use bitwise operations, but I'm not very good with them... so could somebody tell me:
How to store up to 8 bools in a single octet using bitwise manipulations?
How to give proper values to (up to 8 bools) from a single octet using bitwise manipulation?
(And, bonus question, can anybody recommend a simple bit-manipulation tutorial for a non-mathematically-oriented mind like mine, if such a thing exists? Everything I found I understood but could not put into practice...)
I'm using C++ but I guess most C-syntaxic languages will use the same kind of operation.
To store bools in a byte:
bool flag; // value to store
unsigned char b = 0; // all false
int position; // ranges from 0..7
b = b | (flag << position);
To read it back:
flag = (b & (1 << position));
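A small end-to-end sketch of that pattern, packing three bools into one byte and reading them back:
#include <iostream>

int main() {
    bool flags[3] = {false, true, true};          // the bools to serialize
    unsigned char b = 0;                          // all false to start
    for (int position = 0; position < 3; ++position)
        b |= flags[position] << position;         // store each flag in its own bit
    std::cout << int(b) << '\n';                  // prints 6, i.e. 00000110

    for (int position = 0; position < 3; ++position) {
        bool flag = (b & (1 << position)) != 0;   // read each flag back
        std::cout << "bit " << position << " = " << flag << '\n';
    }
}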
The easy way is to use std::bitset which allows you to use indexing to access individual bits (bools), then get the resulting value as an integer. It also allows the reverse.
#include <bitset>
#include <iostream>

int main() {
    std::bitset<8> s;
    s[1] = s[2] = true; // 0b_0000_0110
    std::cout << s.to_ulong() << '\n'; // prints 6
}
Without wrapping in fancy template/pre-processor machinery:
Set bit 3 in var: var |= (1 << 3)
Set bit n in var: var |= (1 << n)
Clear bit n in var: var &= ~(1 << n)
Test bit n in var: !!(var & (1 << n)) (the !! ensures the result is 0 or 1)
Try reading this in order.
http://www.cprogramming.com/tutorial/bitwise_operators.html
http://www-graphics.stanford.edu/~seander/bithacks.html#ConditionalSetOrClearBitsWithoutBranching
Some people will think that the 2nd link is way too hardcore, but once you master simple manipulation, it will come in handy.
Basic stuff first:
The only combination of bits that means false is 00000000; all the others mean true, e.g. 00001000, 01010101.
00000000 = 0 (decimal), 00000001 = 2^0, 00000010 = 2^1, 00000100 = 2^2, ..., 10000000 = 2^7
There is a big difference between the operators (&&, ||) and (&, |): the first ones give the result of the logical operation between the two numbers as a whole, for example:
00000000 && 00000000 = false,
01010101 && 10101010 = true
00001100 || 00000000 = true,
00000000 || 00000000 = false
The second pair performs a bitwise operation (the logical operation between each bit of the numbers):
00000000 & 00000000 = 00000000 = false
00001111 & 11110000 = 00000000 = false
01010101 & 10101001 = 00000001 = true
00001111 | 11110000 = 11111111 = true
00001100 | 00000011 = 00001111 = true
To work with this and play with the bits, you only need to know some basic tricks:
To set a bit to 1 you do the operation | with an octet that has a 1 in that position and zeros in the rest.
For example: if we want the first bit of the octet A to be 1, we do: A | 00000001
To set a bit to 0 you do the operation & with an octet that has a 0 in that position and ones in the rest.
For example: if we want the last bit of the octet A to be 0, we do: A & 01111111
To get the Boolean value held by a bit you do the operation & with an octet that has a 1 in that position and zeros in the rest.
For example: if we want to see the value of the third bit of the octet A, we do: A & 00000100. If A was XXXXX1XX we get 00000100 = true, and if A was XXXXX0XX we get 00000000 = false.
You can always serialize bitfields. Something like:
struct bools
{
bool a:1;
bool b:1;
bool c:1;
bool d:1;
};
has a sizeof of 1.
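A quick check of that claim (how bitfields are packed is implementation-defined, but common compilers print 1 here):
#include <iostream>

struct bools
{
    bool a:1;
    bool b:1;
    bool c:1;
    bool d:1;
};

int main() {
    std::cout << sizeof(bools) << '\n';   // typically prints 1
}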

Weird output for bitwise NOT

I am trying to take one's complement of 0 to get 1 but I get 4294967295. Here is what I have done:
unsigned int x = 0;
unsigned int y = ~x;
cout << y;
My output is 4294967295 but I expected 1. Why is this so? By the way, I am doing this in C++.
Why do you expect 1? Bit-wise complement flips all the bits.
00000000000000000000000000000000 = 0
|
bitwise NOT
|
v
11111111111111111111111111111111 = 4294967295
Perhaps you are thinking of a logical NOT. In C++ this is written as !x.
You have to look at this in binary to understand exactly what is happening.
unsigned int x = 0 is 00000000 00000000 00000000 00000000 in memory.
The ~x statement flips all bits, meaning the above turns into:
11111111 11111111 11111111 11111111
which converts to 4294967295 in decimal form.
XOR will allow you flip only certain bits. If you only want to flip the least significant bit, use x ^ 1 instead.
Where did you get the expectation of 1 from?
Your understanding of bitwise operations is clearly lacking; it would be prudent to work through them first before posting here...
You're not confusing it with !, which is a logical NOT, are you?
A ~ bitwise complement, or bitwise NOT, operation flips every bit from 1 to 0 and vice versa, so for example a 1 is
00000000 00000000 00000000 00000001
doing a ~ bitwise NOT on that flips it to
11111111 11111111 11111111 11111110
which gives you the maximum value of the integer datatype less 1 on a 32-bit system.
Here is a worthy link which shows you how to do bit-twiddling.
An integer is more than just 1 bit (it's 4 bytes, or 32 bits). By NOT-ing it, you're flipping everything, so in this case 00000... becomes 11111...
~ flips all of the bits in the input. Your input is an unsigned int, which has 32 bits, all of which are 0. Flipping each of those 0-bits gives you 32 1-bits instead, which is binary for that large number.
If you only want to flip the least significant bit, you can use y = x ^ 1 - that is, use XOR instead.
You can use
unsigned int y = !x;
to get y = 1;
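To sum up the three operators being mixed up in this thread, here is a tiny sketch (assuming a 32-bit unsigned int):
#include <iostream>

int main() {
    unsigned int x = 0;
    std::cout << ~x << '\n';        // bitwise NOT: flips every bit, prints 4294967295
    std::cout << (x ^ 1u) << '\n';  // XOR with 1: flips only the lowest bit, prints 1
    std::cout << !x << '\n';        // logical NOT: prints 1
}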