What does &~ mean? - c++

What does & ~(minOffsetAlignment - 1) mean?
Does it mean address of destructor?
This is the code snippet I got it from.
VkDeviceSize is uint64_t.
VkDeviceSize getAlignment(VkDeviceSize instanceSize, VkDeviceSize minOffsetAlignment)
{
    if (minOffsetAlignment > 0)
    {
        return (instanceSize + minOffsetAlignment - 1) & ~(minOffsetAlignment - 1);
    }
    return instanceSize;
}

It's a common trick to align a number to a given boundary. It assumes that minOffsetAlignment is a power of two.
If minOffsetAlignment is a power of two, its binary form is a one followed by zeros. For example, 8 in binary is b00001000.
If you subtract one, it becomes a mask where every bit that may change is set to one. For the same example, that is b00000111 (covering all values from 0 to 7).
If you take the complement of this number, it becomes a mask for clearing. In the example, b11111000.
If you AND (&) any number with this mask, it zeroes all the bits below the alignment.
For example, take the number 9, b00001001. Doing 9 & ~7 is b00001001 & b11111000, which results in b00001000, or 8.
The resulting value is the input aligned down to the given boundary; adding minOffsetAlignment - 1 first, as the code in the question does, turns this into rounding up.
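To make that concrete, here is a minimal, self-contained sketch (my own restatement, with align_down and align_up as assumed names) showing both the mask-only round-down and the full round-up:

#include <cstdint>
#include <cstdio>

// Assumes alignment is a nonzero power of two.
std::uint64_t align_down(std::uint64_t size, std::uint64_t alignment)
{
    return size & ~(alignment - 1);                   // clear the low bits
}

std::uint64_t align_up(std::uint64_t size, std::uint64_t alignment)
{
    return (size + alignment - 1) & ~(alignment - 1); // add first, then clear
}

int main()
{
    std::printf("%llu\n", (unsigned long long)align_down(9, 8)); // 8
    std::printf("%llu\n", (unsigned long long)align_up(9, 8));   // 16
    std::printf("%llu\n", (unsigned long long)align_up(8, 8));   // 8 (already aligned)
}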

What does &~ mean?
In this case, & is the bitwise AND operator. It is a binary operator: each bit of the result is set if the corresponding bit is set in both operands; otherwise it is unset. Here, the left-hand operand is (instanceSize + minOffsetAlignment - 1) and the right-hand operand is ~(minOffsetAlignment - 1).
~ is the bitwise NOT operator. It is a unary operator: each bit of the result is set if the corresponding bit is unset in the operand; otherwise it is unset. Here, the operand is (minOffsetAlignment - 1).
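As a tiny illustration (values chosen arbitrarily), here is how & and ~ behave bit by bit:

#include <cstdio>

int main()
{
    unsigned a = 0b1100;            // 12
    unsigned b = 0b1010;            // 10
    std::printf("%u\n", a & b);     // 8: only bit 3 is set in both operands
    std::printf("%u\n", ~b & 0xFu); // 5: every bit of b flipped, masked to 4 bits
}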

The idea of this code is to say, "Given a particular (object) instance size, how many bytes of memory are used if the allocator has to round-up object sizes to some power-of-two multiple." Note that if the instance size is already a multiple of the power of two (e.g. 8), then no rounding-up is needed.
It is often the case that memory needs to be allocated in multiples of the hardware's word size in bytes. On a 32-bit platform that would mean a multiple of 4, on 64-bit, a multiple of 8. Note however that the multiple (and thus minOffsetAlignment) must be a power of 2 (the author of this code, for better or worse, is 100% assuming the person calling this function knows this).
Let's assume for the rest of my answer that we're working with a hardware word size of 64-bit, and that our platform must allocate memory in chunks of 8 bytes (64 bits divided by 8 bits per byte == 8 bytes). So minOffsetAlignment will be passed as 8.
Consequently, if the instance size is 1-8 bytes, our function must return 8. If it's 9-16 bytes, it must return 16, and so on.
This answer shows how to round up a value to some nearest integer provided the value being rounded isn't already a multiple of that integer.
In the case of rounding up to the nearest multiple of 8, the procedure is to add 7, then do integer division to divide by 8, then multiply by 8.
So:
(0 + 7) / 8 * 8 == 0 // a zero-byte instance requires zero memory bytes
(1 + 7) / 8 * 8 == 8 // a one-byte instance requires eight memory bytes
(2 + 7) / 8 * 8 == 8 // a two-byte instance requires eight memory bytes
...
(8 + 7) / 8 * 8 == 8 // an eight-byte instance requires eight memory bytes
(9 + 7) / 8 * 8 == 16 // a nine-byte instance requires sixteen memory bytes
...
What the code in your question is doing is exactly the above algorithm, but it uses bitwise operations in place of the division and multiplication, which on some hardware is faster.
Now, how does it do this?
First we have to get the (instanceSize + 7) part. That's here:
(instanceSize + minOffsetAlignment - 1)
Then we have to divide it by 8, truncate the remainder, and multiply the result of the division by 8.
The last part does that all in one step, and this explains why minOffsetAlignment had to be a power of two.
First, we see:
minOffsetAlignment - 1
If minOffsetAlignment is 8, then its binary value is 0b00001000.
Subtracting 1 from 8 gives 7, which in binary is 0b00000111.
Now the complement of 7 is taken:
~(minOffsetAlignment - 1)
This inverts all the bits so we get ...1111000 (for a 64-bit integer such as VkDeviceSize there are 61 leading 1's and 3 trailing 0's).
Now, putting the whole statement together:
(instanceSize + minOffsetAlignment - 1) & ~(minOffsetAlignment - 1)
We see the & operator will clear the last three bits of whatever the result of (instanceSize + minOffsetAlignment - 1) is.
This forces the return value to be a multiple of 8 (any binary integer whose last three bits are 0 is a multiple of 8). Because we already added 7, the value is rounded up to the nearest multiple of 8, unless instanceSize was already a multiple of 8, in which case it is left unchanged.
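If you want to convince yourself the two formulations agree, here is a quick sketch (assuming 64-bit sizes and a power-of-two alignment) that compares them over a range of inputs:

#include <cassert>
#include <cstdint>

int main()
{
    const std::uint64_t align = 8;
    for (std::uint64_t size = 0; size <= 64; ++size)
    {
        std::uint64_t arithmetic = (size + align - 1) / align * align;
        std::uint64_t bitwise    = (size + align - 1) & ~(align - 1);
        assert(arithmetic == bitwise); // identical when align is a power of two
    }
}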

Related

How to calculate bitmap size?

Started working on screen capturing software specifically targeted for Windows. While looking through an example on MSDN for Capturing an Image I found myself a bit confused.
Keep in mind when I refer to the size of the bitmap that does not include headers and so forth associated with an actual file. I'm talking about raw pixel data. I would have thought that the formula should be (width*height)*bits-per-pixel. However, according to the example this is the proper way to calculate the size:
DWORD dwBmpSize = ((bmpScreen.bmWidth * bi.biBitCount + 31) / 32) * 4 * bmpScreen.bmHeight;
or, written generally: ((width * bits-per-pixel + 31) / 32) * 4 * height
I don't understand why there's the extra calculations involving 31, 32 and 4. Perhaps padding? I'm not sure but any explanations would be quite appreciated. I've already tried Googling and didn't find any particularly helpful results.
The bits representing the bitmap pixels are packed in rows. The size of each row is rounded up to a multiple of 4 bytes (a 32-bit DWORD) by padding.
(bits_per_row + 31)/32 * 4 ensures the round up to the next multiple of 32 bits. The answer is in bytes, rather than bits hence *4 rather than *32.
See: https://en.wikipedia.org/wiki/BMP_file_format
Under Bitmap Header Types you'll find the following:
The scan lines are DWORD aligned [...]. They must be padded for scan line widths, in bytes, that are not evenly divisible by four [...]. For example, a 10- by 10-pixel 24-bpp bitmap will have two padding bytes at the end of each scan line.
The formula
((bmpScreen.bmWidth * bi.biBitCount + 31) / 32) * 4
establishes DWORD-alignment (in bytes). The trailing * 4 is really the result of * 32 / 8, where the multiplication with 32 produces a value that's a multiple of 32 (in bits), and the division by 8 translates it back to bytes.
Although this does produce the desired result, I prefer a different implementation. A DWORD is 32 bits, i.e. a power of 2. Rounding up to a power of 2 can be implemented using the following formula:
(value + ((1 << n) - 1)) & ~((1 << n) - 1)
Adding (1 << n) - 1 pushes the initial value past the next multiple of 2^n (unless it already is a multiple of 2^n). (1 << n) - 1 evaluates to a value where the n least significant bits are set; ~((1 << n) - 1) negates that, i.e., all bits but the n least significant bits are set. This serves as a mask to remove the n least significant bits of the adjusted initial value.
Applied to this specific case, where a DWORD is 32 bits, i.e. n is 5, and (1 << n) - 1 evaluates to 31. value is the raw scanline width in bits:
auto raw_scanline_width_in_bits{ bmpScreen.bmWidth * bi.biBitCount };
auto aligned_scanline_width_in_bits{ (raw_scanline_width_in_bits + 31) & ~31 };
auto aligned_scanline_width_in_bytes{ aligned_scanline_width_in_bits / 8 };
This produces the same results, but offers a different perspective that may be more accessible to some.
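For a self-contained illustration (the function and variable names below are mine, not from the MSDN sample), the stride computation looks like this:

#include <cstdio>

// Bytes per scan line, rounded up to a DWORD (4-byte) boundary.
unsigned long stride_bytes(unsigned long width_pixels, unsigned long bits_per_pixel)
{
    unsigned long row_bits = width_pixels * bits_per_pixel;
    return (row_bits + 31) / 32 * 4; // equivalently: ((row_bits + 31) & ~31ul) / 8
}

int main()
{
    // 10 pixels at 24 bpp: 30 bytes of pixel data, padded to 32 (two padding bytes).
    std::printf("%lu\n", stride_bytes(10, 24)); // 32
}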

How numbers are stored? [closed]

What kind of method does the compiler use to store numbers? One char is 0-255. Two chars side by side are 0-255255. In c++ one short is 2 bytes. The size is 0-65535. Now, how does the compiler convert 255255 to 65535 and what happens with the - in the unsigned numbers?
The maximum value you can store in n bits (when the lowest value is 0 and the values represented are a continuous range), is 2ⁿ − 1. For 8 bits, this gives 255. For 16 bits, this gives 65535.
Your mistake is thinking that you can just concatenate 255 with 255 to get the maximum value in two chars - this is definitely wrong. Instead, to get from the range of 8 bits, which is 256, to the range of 16 bits, you would do 256 × 256 = 65536. Since our first value is 0, the maximum value is 65535, once again.
Note that a char is only guaranteed to have at least 8 bits and a short at least 16 bits (and must be at least as large as a char).
You have got the math wrong. Here's how it really is.
Since each bit can only take one of two states (1 and 0), n bits as a whole can represent 2^n different values. A standard short integer of 2 bytes (n = 16) can therefore represent the 2^16 values 0 through 65535, which are mapped to decimal numbers for computational convenience.
When dealing with 2 characters, they are two separate entities (a string is an array of characters). There are many ways to read the two characters as a whole; if you read them as a string, it would be the same as the two separate characters side by side. Let me give you an example:
Remember, I will be using hexadecimal notation for simplicity!
If you have doubts mapping ASCII characters to hex, take a look at this ascii to hex table.
For simplicity, let us assume the characters stored in two adjacent positions are both A.
Now, the hex code for A is 0x41, so the memory would look like:
1st byte ....... 2nd byte
01000001 01000001
if you were to read this from the memory as a string and print it out then the output would be
AA
if you were to read the whole 2 bytes as an integer, this would represent
0 * 2^15 + 1 * 2^14 + 0 * 2^13 + 0 * 2^12 + 0 * 2^11 + 0 * 2^10 + 0 * 2^9 + 1 * 2^8 + 0 * 2^7 + 1 * 2^6 + 0 * 2^5 + 0 * 2^4 + 0 * 2^3 + 0 * 2^2 + 0 * 2^1 + 1 * 2^0
= 16705
If unsigned integers were used, the 2 bytes of data would be mapped to integers between 0 and 65535. But if the same bytes represented a signed value then, though the number of representable values remains the same, the biggest positive number that can be represented would be 32767: the values would lie between -32768 and 32767. This is because not all 16 bits can be used for magnitude; the highest-order bit is left to determine the sign (1 represents negative, 0 represents positive).
You must also note that type conversion (two characters read as a single integer) might not always give you the desired results, especially when you narrow the range (for example, when a double-precision float is converted to an integer).
For more on that see this answer double to int
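If you want to experiment with reading two adjacent characters as one integer, here is a small sketch (the well-defined way to do this is memcpy; in general the printed value depends on byte order, though not here, since both bytes are equal):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    char s[2] = { 'A', 'A' };            // two adjacent bytes, each 0x41
    std::uint16_t n;
    std::memcpy(&n, s, sizeof n);        // reinterpret the two bytes as one 16-bit integer
    std::printf("%u\n", (unsigned)n);    // 16705 (0x4141)
}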
hope this helps.
When using decimal system it's true that range of one digit is 0-9 and range of two digits is 0-99. When using hexadecimal system the same thing applies, but you have to do the math in hexadecimal system. Range of one hexadecimal digit is 0-Fh, and range of two hexadecimal digits (one byte) is 0-FFh. Range of two bytes is 0-FFFFh, and this translates to 0-65535 in decimal system.
Decimal is a base-10 number system. This means that each successive digit from right-to-left represents an incremental power of 10. For example, 123 is 3 + (2*10) + (1*100). You may not think of it in these terms in day-to-day life, but that's how it works.
Now you take the same concept from decimal (base-10) to binary (base-2) and now each successive digit is a power of 2, rather than 10. So 1100 is 0 + (0*2) + (1*4) + (1*8).
Now let's take an 8-bit number (char); there are 8 digits in this number so the maximum value is 255 (2**8 - 1), or another way, 11111111 == 1 + (1*2) + (1*4) + (1*8) + (1*16) + (1*32) + (1*64) + (1*128).
When there are another 8 bits available to make a 16-bit value, you just continue counting powers of 2; you don't just "stick" the two 255s together to make 255255. So the maximum value is 65535, or another way, 1111111111111111 == 1 + (1*2) + (1*4) + (1*8) + (1*16) + (1*32) + (1*64) + (1*128) + (1*256) + (1*512) + (1*1024) + (1*2048) + (1*4096) + (1*8192) + (1*16384) + (1*32768).
It depends on the type: integral types must be stored as binary (or at least, appear so to a C++ program), so you have one bit per binary digit. With very few exceptions, all of the bits are significant (although this is not required, and there is at least one machine where there are extra bits in an int). On a typical machine, char will be 8 bits, and if it isn't signed, can store values in the range [0, 2^8); in other words, between 0 and 255 inclusive. unsigned short will be 16 bits (range [0, 2^16)), unsigned int 32 bits (range [0, 2^32)) and unsigned long either 32 or 64 bits.
For the signed values, you'll have to use at least one of the bits for the sign, reducing the maximum positive value. The exact representation of negative values can vary, but on most machines it will be 2's complement, so the ranges will be signed char: [-2^7, 2^7), etc.
If you're not familiar with base two, I'd suggest you learn it very quickly; it's fundamental to how all modern machines store numbers. You should find out about 2's complement as well: the usual human representation is called sign plus magnitude, and is very rare in computers.
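A quick way to see these ranges on your own machine is std::numeric_limits (a sketch, not part of the original answer):

#include <iostream>
#include <limits>

int main()
{
    std::cout << (int)std::numeric_limits<unsigned char>::max() << '\n'; // 255 with 8-bit chars
    std::cout << std::numeric_limits<unsigned short>::max()     << '\n'; // 65535 with 16-bit shorts
    std::cout << (int)std::numeric_limits<signed char>::min()   << ' '
              << (int)std::numeric_limits<signed char>::max()   << '\n'; // -128 127 with 2's complement
}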

What does this exactly mean?

Could anyone please explain to me what exactly the following line of code produces?
i = 1<<(sizeof(n) * 8 - 1);
You can assume whatever value you want for 'n'. I am trying to implement an 8-bit multiplication program using Booth's algorithm.
Let's break it down:
sizeof(n) delivers the size of the type of variable n. For an int variable n on a 32 bit system, this would e.g. be 4 (bytes). See the sizeof documentation e.g. here: http://en.cppreference.com/w/cpp/keyword/sizeof)
* 8 -> multiplication by the number of bits in one byte -> i.e. sizeof(n) * 8 delivers the number of bits necessary for n.
<< is the shiftleft operator. It will shift the first operand to the left by the amount of bits specified by the second operand (see here: http://en.wikipedia.org/wiki/Logical_shift); it's the logical shift, meaning that bits shifted in from the right are filled up with zeroes.
The full expression therefore delivers an expression with the highest bit representable by the variable n set to 1.
Example (assuming n now to be of type char, and assuming the size of char as the typical 1 byte):
sizeof(char) = 1
=> sizeof(char) * 8 - 1 = 7
=> 1 << 7 = 10000000
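One caveat: if n is a signed 32-bit int, 1 << 31 shifts into the sign bit, which is undefined behavior before C++20. A sketch that sidesteps this by shifting an unsigned one instead:

#include <cstdio>

int main()
{
    unsigned int n = 0;
    unsigned int high_bit = 1u << (sizeof(n) * 8 - 1); // highest bit of n's type
    std::printf("%u\n", high_bit);                     // 2147483648 for a 32-bit unsigned int
}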

How is this size alignment working

I am not able to understand the below code with respect to the comment provided. What does this code do, and what would be the equivalent code for 8-aligned?
/* segment size must be 4-aligned */
attr->options.ssize &= ~3;
Here, ssize is of unsigned int type.
Since 4 in binary is 100, any value aligned to 4-byte boundaries (i.e. a multiple of 4) will have the last two bits set to zero.
3 in binary is 11, and ~3 is the bitwise negation of those bits, i.e., ...1111100. Performing a bitwise AND with that value will keep every bit the same, except the last two, which will be cleared (bit & 1 == bit, and bit & 0 == 0). This gives us the next lower or equal value that is a multiple of 4.
To do the same operation for 8 (1000 in binary), we need to clear out the lowest three bits. We can do that with the bitwise negation of the binary 111, i.e., ~7.
Any power of two (1, 2, 4, 8, 16, 32...) can be used as the alignment with a simple AND operation.
This gives the size rounded down:
size &= ~(alignment - 1);
or if you want to round up:
size = (size + alignment-1) & ~(alignment-1);
The "alignment-1", as long as it's a value that is a power of two, will give you "all ones" up to the bit just under the power of two. ~ inverts all the bits, so you get ones for zeros and zeros for ones.
You can check that something is a power of two by:
bool power_of_two = !(alignment & (alignment-1))
This works because, for example 4:
4 = 00000100
4-1 = 00000011
& --------
0 = 00000000
or for 16:
16 = 00010000
16-1 = 00001111
& --------
0 = 00000000
If we use 5 instead:
5 = 00000101
5-1 = 00000100
& --------
4 = 00000100
So not a power of two!
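Wrapped up as a function (a sketch; note that 0 would pass the bare test, so it is excluded explicitly):

#include <cstdio>

bool is_power_of_two(unsigned v)
{
    return v != 0 && (v & (v - 1)) == 0;
}

int main()
{
    std::printf("%d %d %d\n",
                is_power_of_two(4), is_power_of_two(5), is_power_of_two(16)); // 1 0 1
}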
Perhaps a more understandable comment would be
/* make segment size 4-aligned
by zeroing two least significant bits,
effectively rounding down */
Then, at least for me, an immediate question pops to mind: should it really be rounded down, when it is a size? Wouldn't rounding up be more appropriate:
attr->options.ssize = (attr->options.ssize + 3) & ~3;
As already said in other answers, to make it 8-aligned, 3 bits need to be zeroed, so use 7 instead of 3. So, we might make it into a function:
unsigned size_align(unsigned size, unsigned bit_count_to_zero)
{
    unsigned bits = (1 << bit_count_to_zero) - 1;
    return (size + bits) & ~bits;
}
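For example (calling the size_align function above; zeroing 2 bits aligns to 4, zeroing 3 bits aligns to 8):

size_align(5, 2); // == 8:  rounded up to a multiple of 4
size_align(8, 3); // == 8:  already a multiple of 8
size_align(9, 3); // == 16: rounded up to a multiple of 8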
~3 is the bit pattern ...111100. When you do a bitwise AND with that pattern, it clears the bottom two bits, i.e. rounds down to the nearest multiple of 4.
~7 does the same thing for 8-aligned.
The code ensures the bottom two bits of ssize are cleared, guaranteeing that ssize is a multiple of 4. Equivalent code for 8-aligned would be
attr->options.ssize &= ~7;
number = number & ~3
number is rounded down to the nearest multiple of 4, i.e. the largest multiple of 4 that is less than or equal to number.
Ex:
if number is 0, 1, 2 or 3, number is rounded down to 0
similarly, if number is 4, 5, 6 or 7, number is rounded down to 4
But if this is related to memory alignment, the memory must be aligned upwards and not downwards.

C++: Bitwise AND

I am trying to understand how to use Bitwise AND to extract the values of individual bytes.
What I have is a 4-byte array and am casting the last 2 bytes into a single 2 byte value. Then I am trying to extract the original single byte values from that 2 byte value. See the attachment for a screen shot of my code and values.
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
How would I go about doing this with Bitwise AND?
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
Your 2-byte integer is formed from the values 3 and 4 (since your pointer is to a[1]). As you have already seen in your tests, you can get the 3 by applying the mask 0xFF. Now, to get the 4 you need to remove the lower bits and shift the value down. In your example, using the mask 0xFF00 effectively removes the 3 from the 16-bit number, but it leaves the 4 in the high byte of your 2-byte number; the result is 1024 == 2^10 (the 11th bit set, which is the third bit of the second byte, counting from the least significant).
You can shift that result 8 bits to the right to get your 4, or else you can ignore the mask altogether, since by just shifting to the right the lowest bits will disappear:
4 == ( x>>8 )
More interesting results to test bitwise and can be obtained by working with a single number:
int x = 7; // or char, for that matter:
(x & 0x1) == 1;
(x & (0x1<<1) ) == 2; // (x & 0x2)
(x & ~(0x2)) == 5;
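Coming back to the original question, here is a small sketch (the variable names are mine) that extracts both bytes of a 16-bit value with a mask and a shift:

#include <cstdint>
#include <cstdio>

int main()
{
    std::uint16_t value = (4 << 8) | 3; // high byte 4, low byte 3
    unsigned low  = value & 0xFF;       // mask keeps the low byte: 3
    unsigned high = (value >> 8) & 0xFF; // shift down, then mask: 4
    std::printf("%u %u\n", low, high);  // 3 4
}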
You need to add some bit-shifting to convert the masked value from the upper byte to the lower byte.
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
Not sure where that "watch" table comes from or if there is more code involved, but it looks to me like the result is correct. Remember, one of them is a high byte and so the value is shifted << 8 places. On a little endian machine, the high byte would be the second one.