I have a framework which uses 16-bit floats, and I wanted to separate its components to then use for 32-bit floats. In my first approach I used bit shifts and similar operations, and while that worked, it was wildly chaotic to read.
I then wanted to use a struct with custom-sized bit fields instead, and a union to write to that struct.
The code to reproduce the issue:
#include <cstdio>
#include <cstdint>

union float16_and_int16
{
    struct
    {
        uint16_t Mantissa : 10;
        uint16_t Exponent : 5;
        uint16_t Sign : 1;
    } Components;

    uint16_t bitMask;
};

int main()
{
    uint16_t input = 0x153F;

    float16_and_int16 result;
    result.bitMask = input;

    printf("Mantissa: %#010x\n", result.Components.Mantissa);
    printf("Exponent: %#010x\n", result.Components.Exponent);
    printf("Sign: %#010x\n", result.Components.Sign);

    return 0;
}
In the example I would expect my Mantissa to be 0x00000054, the Exponent to be 0x0000001F, and the Sign 0x00000001.
Instead I get Mantissa: 0x0000013f, Exponent: 0x00000005, Sign: 0x00000000.
Which means that from my bit mask the Sign was taken first (the first bit), the next 5 bits went to the Exponent, and then 10 bits to the Mantissa, so the order is the reverse of what I wanted. Why is that happening?
The worst part is that a different compiler could give the expected order. The standard has never specified the implementation details for bit fields, and specifically not their order. The rationale, as usual, is that this is an implementation detail and that programmers should not rely on it.
The downside is that bit fields cannot be used in cross-language programs, and that programmers cannot use them to process data with well-known bit layouts (for example network protocol headers), because it is too hard to be sure how the implementation will lay them out.
For that reason I have always considered bit fields an unusable feature, and I only use bit masks on unsigned types instead. But that last part is no more than my own opinion...
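For illustration, here is a minimal sketch of that mask-and-shift approach applied to the half-float layout from the question (the 1/5/10 field widths come from the question; the helper names are my own):
#include <cstdio>
#include <cstdint>

// Portable extraction: only shifts and masks on an unsigned type,
// no reliance on how the compiler orders bit-field members.
static uint16_t mantissa(uint16_t h) { return h & 0x3FFu; }         // bits 9:0
static uint16_t exponent(uint16_t h) { return (h >> 10) & 0x1Fu; }  // bits 14:10
static uint16_t sign(uint16_t h)     { return (h >> 15) & 0x1u; }   // bit 15

int main()
{
    uint16_t input = 0x153F;
    printf("Mantissa: %#010x\n", (unsigned)mantissa(input)); // 0x0000013f
    printf("Exponent: %#010x\n", (unsigned)exponent(input)); // 0x00000005
    printf("Sign: %#010x\n", (unsigned)sign(input));          // 0x00000000
    return 0;
}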
I would say your input is incorrect, for this compiler anyway. This is what the float16_and_int16 order looks like.
sign exponent mantissa
[15] [14:10] [9:0]
or
SGN | E X P O N E N T| M A N T I S S A |
15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
if input = 0x153F then bitMask ==
SGN | E X P O N E N T| M A N T I S S A |
15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
 0 |  0 |  0 |  1 |  0 |  1 |  0 |  1 |  0 |  0 |  1 |  1 |  1 |  1 |  1 |  1 |
so
MANTISSA == 0100111111 (0x13F)
EXPONENT == 00101 (0x5)
SIGN == 0 (0x0)
If you want mantissa to be 0x54, exponent 0x1f and sign 0x1 you need
SGN | E X P O N E N T| M A N T I S S A |
15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 04 | 03 | 02 | 01 | 00 |
 1 |  1 |  1 |  1 |  1 |  1 |  0 |  0 |  0 |  1 |  0 |  1 |  0 |  1 |  0 |  0 |
or
input = 0xFC54
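As a quick check, assuming the same compiler and the low-to-high bit-field allocation observed above, feeding 0xFC54 into the union from the question produces the values the asker expected:
#include <cstdio>
#include <cstdint>

union float16_and_int16
{
    struct
    {
        uint16_t Mantissa : 10;
        uint16_t Exponent : 5;
        uint16_t Sign : 1;
    } Components;

    uint16_t bitMask;
};

int main()
{
    float16_and_int16 result;
    result.bitMask = 0xFC54; // sign = 1, exponent = 0x1F, mantissa = 0x54

    printf("Mantissa: %#010x\n", (unsigned)result.Components.Mantissa); // 0x00000054
    printf("Exponent: %#010x\n", (unsigned)result.Components.Exponent); // 0x0000001f
    printf("Sign: %#010x\n", (unsigned)result.Components.Sign);         // 0x00000001
    return 0;
}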
Given an array of n numbers and integers k and m, we have to select a subsequence of length k of the array. Given the function s = the summation from i=1 to i=k of A(i)*(i mod m), where A(i) is the i-th element of the chosen subsequence, we have to maximise s.
Constraints
n<10000
k<1000
|A(i)| < 10000000
m<10000000
Suppose the array is 4 9 8 2 6 7 4, k is 4, and m is 3. For this case the answer is 32 (9*1 + 8*2 + 2*0 + 7*1).
My code:
#include <bits/stdc++.h>
using namespace std;
#define ll long long int
#define g_long long long
#define maxx(a, b) a > b ? a : b

int main()
{
    ll n, k, m, i, j;
    cin >> n >> k >> m;
    ll arr[n + 1] = { 0 };
    for (i = 0; i < n; i++)
    {
        cin >> arr[i];
    }
    ll data[8][8] = { 0 };
    for (i = 1; i <= k; ++i)
    {
        for (j = 1; j <= 7; ++j)
        {
            ll ans = maxx((data[i - 1][j - 1] + (arr[j - 1] * (i % m))),
                          (data[i][j - 1]));
            data[i][j] = ans;
        }
    }
    cout << data[k][n];
}
My approach is to first generate a subsequence of length k and then keep updating the maximum value.
This code passes some of the test cases, but others give a wrong answer.
Can anyone point out what I am doing wrong in my code, or suggest a better approach for this question?
The 2-dimensional DP table is built from the following observation:
We take the maximum of the two values (dp[i-1][j-1] + (arr[j-1]*(i%m)), dp[i][j-1]),
where arr is the array, i.e. [4 9 8 2 6 7 4], and dp is the 2-dimensional DP table.
The DP table is given with rows as values of k (from 0 to k) and columns as elements of the array.
DP ||  0 |  4 | 09 | 08 | 02 | 06 | 07 | 04
00 ||  0 |  0 | 00 | 00 | 00 | 00 | 00 | 00
01 ||  0 |  4 | 09 | 09 | 09 | 09 | 09 | 09
02 ||  0 |  0 | 22 | 25 | 25 | 25 | 25 | 25
03 ||  0 |  0 | 00 | 22 | 25 | 25 | 25 | 25
04 ||  0 |  0 | 00 | 00 | 24 | 31 | 32 | 32
The following Python code passes all the test cases, as discussed in the comments:
n = 7
k = 4
m = 3
arr = [49999, 4999, 4999, 4999, 99999, 99999, 49999]

# Initialising the 2-D DP with 0, of size (k+1)*(n+1)
# Extra rows for handling edge cases
dp = [[0 for i in range(n+1)] for j in range(k+1)]

for i in range(1, k+1):
    for j in range(i, n+1):
        ans = max(dp[i-1][j-1] + (arr[j-1] * (i % m)), dp[i][j-1])
        dp[i][j] = ans

# Maximum element is at the bottom-right-most cell of the 2-D DP
print(dp[k][n])
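For comparison, here is a minimal C++ sketch of the same bottom-up recurrence (my own translation of the Python above, hard-coding the sample data from the question rather than reading input):
#include <cstdio>
#include <vector>
#include <algorithm>

int main()
{
    int n = 7, k = 4, m = 3;
    std::vector<long long> arr = {4, 9, 8, 2, 6, 7, 4};

    // dp[i][j] = best value of a subsequence of length i chosen from the first j elements
    std::vector<std::vector<long long>> dp(k + 1, std::vector<long long>(n + 1, 0));

    for (int i = 1; i <= k; ++i)
        for (int j = i; j <= n; ++j)   // need at least i elements to pick i of them
            dp[i][j] = std::max(dp[i - 1][j - 1] + arr[j - 1] * (i % m),
                                dp[i][j - 1]);

    printf("%lld\n", dp[k][n]);        // prints 32 for the sample in the question
    return 0;
}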
Thanks to @MBo for sharing the top-down approach:
import functools

# Assumes the array l and the modulus m are defined globally.
@functools.lru_cache(maxsize=None)
def mx(i, k):
    if (i < 0 or k == 0):
        return 0
    else:
        return max(mx(i-1, k), (mx(i-1, k-1) + l[i]*(k%m)))
I am required to select only two digits from a variable, i.e. 0X11223344, and I want my pointer to pick the 22 in the middle. How do I go about it?
You can use shift and modulo operations to get the value
int main(){
    return (0X11223344 >> 16) % 256;
}
The program returns 34 == 0x22
A right shift of 4 removes 1 hex digit. A right shift of 16 removes 4 hex digits. A modulo of 16 removes all but one digit. A modulo of 16*16 = 256 removes all but 2 digits.
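Applying the same rule to every byte of the value (a small sketch of my own; the shift counts are multiples of 8 because each byte is two hex digits):
#include <cstdio>

int main()
{
    unsigned int v = 0x11223344;

    // Shift right by 8*k bits to drop k bytes, then % 256 keeps the lowest remaining byte.
    printf("%#x\n", (v >> 0) % 256);  // 0x44
    printf("%#x\n", (v >> 8) % 256);  // 0x33
    printf("%#x\n", (v >> 16) % 256); // 0x22
    printf("%#x\n", (v >> 24) % 256); // 0x11
    return 0;
}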
You can also get the value with pointer operations:
int main() {
    int endianness = 2;
    int a = 0x11223344;
    char *b = ((char *) &a) + endianness;
    return *b;
}
The value of endianness is implementation-defined. On a little-endian system it is 2:
|01 02 03 04| memory address
-------------
|44 33 22 11| 4 byte int with address 01 and value 0x11223344
| | |22| | 1 byte char with address 03 and value 0x22
and on a big-endian system it is 1:
|01 02 03 04| memory address
-------------
|11 22 33 44| 4 byte int with address 01 and value 0x11223344
| |22| | | 1 byte char with address 02 and value 0x22
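If you would rather not hard-code the offset, here is a hedged sketch that finds it at run time by inspecting the object representation (it still assumes a 4-byte int holding the value 0x11223344):
#include <cstdio>
#include <cstring>

int main()
{
    int a = 0x11223344;
    unsigned char bytes[sizeof a];
    std::memcpy(bytes, &a, sizeof a); // copy out the object representation

    // Find which byte holds 0x22 instead of assuming an endianness.
    for (std::size_t i = 0; i < sizeof a; ++i)
        if (bytes[i] == 0x22)
            printf("0x22 is at byte offset %zu\n", i); // 2 on little-endian, 1 on big-endian

    return 0;
}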
Let i be a signed integer type. Consider
i += (i&-i);
i -= (i&-i);
where initially i>0.
What do these do? Is there an equivalent code using arithmetic only?
Is this dependent on a specific bit representation of negative integers?
Source: setter's code of an online coding puzzle (w/o any explanation/comments).
The expression i & -i relies on two's complement being used to represent negative integers. Simply put, it returns a value k in which every bit except the least significant set bit of i is cleared, while that particular bit keeps its value (i.e. 1).
As long as the expression executes on a system where two's complement is used to represent negative integers, it is portable. So the answer to your second question is that the expression does depend on the representation of negative integers.
To answer your first question: since arithmetic expressions depend on the data types and their representations, I do not think there is a purely arithmetic expression equivalent to i & -i. In essence, the code below is equivalent in functionality to that expression (assuming that i is of type int). Notice, though, that I had to use a loop, and not just arithmetic, to produce the same behaviour.
int tmp = 0, k = 0;
while (tmp < 32)
{
    if (i & (1 << tmp))
    {
        k = i & (1 << tmp);
        break;
    }
    tmp++;
}
i += k;
On a two's complement architecture, with 4-bit signed integers:
|  i | bin  | comp | -i | i&-i | dec |
+----+------+------+----+------+-----+
|  0 | 0000 | 0000 | -0 | 0000 |   0 |
|  1 | 0001 | 1111 | -1 | 0001 |   1 |
|  2 | 0010 | 1110 | -2 | 0010 |   2 |
|  3 | 0011 | 1101 | -3 | 0001 |   1 |
|  4 | 0100 | 1100 | -4 | 0100 |   4 |
|  5 | 0101 | 1011 | -5 | 0001 |   1 |
|  6 | 0110 | 1010 | -6 | 0010 |   2 |
|  7 | 0111 | 1001 | -7 | 0001 |   1 |
| -8 | 1000 | 1000 | -8 | 1000 |   8 |
| -7 | 1001 | 0111 |  7 | 0001 |   1 |
| -6 | 1010 | 0110 |  6 | 0010 |   2 |
| -5 | 1011 | 0101 |  5 | 0001 |   1 |
| -4 | 1100 | 0100 |  4 | 0100 |   4 |
| -3 | 1101 | 0011 |  3 | 0001 |   1 |
| -2 | 1110 | 0010 |  2 | 0010 |   2 |
| -1 | 1111 | 0001 |  1 | 0001 |   1 |
Remarks:
You can conjecture that i&-i only has one bit set (it's a power of 2) and it matches the least significant bit set of i.
i + (i&-i) has the interesting property of being one bit closer to the next power of two.
i += (i&-i) clears the least significant set bit of i and carries into the next zero bit above it.
So, doing i += (i&-i); will eventually make you jump to the next power of two:
| i | i&-i | sum |      | i | i&-i | sum |
+---+------+-----+      +---+------+-----+
| 1 |    1 |   2 |      | 5 |    1 |   6 |
| 2 |    2 |   4 |      | 6 |    2 |  -8 |
| 4 |    4 |  -8 |      |-8 |   -8 |  UB |
|-8 |   -8 |  UB |

| i | i&-i | sum |      | i | i&-i | sum |
+---+------+-----+      +---+------+-----+
| 3 |    1 |   4 |      | 7 |    1 |  -8 |
| 4 |    4 |  -8 |      |-8 |   -8 |  UB |
|-8 |   -8 |  UB |
UB: overflow of signed integer exhibits undefined behavior.
If i has unsigned type, the expressions are completely portable and well-defined.
If i has signed type, it's not portable, since & is defined in terms of representations but unary -, +=, and -= are defined in terms of values. If the next version of the C++ standard mandates two's complement, though, it will become portable, and will do the same thing as in the unsigned case.
In the unsigned case (and the two's complement case), it's easy to confirm that i&-i is a power of two (it has only one nonzero bit), and has the same value as the lowest set bit of i (which is also the lowest set bit of -i). Therefore:
i -= i&-i; clears the lowest-set bit of i.
i += i&-i; increments (clearing, but with carry to higher bits) the lowest-set bit of i.
For unsigned types there is never overflow for either expression. For signed types, i -= i&-i overflows taking -i when i initially has the minimum value of the type, and i += i&-i overflows in the += when i initially has the max value of the type.
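A small demonstration of both effects on an unsigned value (a sketch of my own):
#include <cstdio>

int main()
{
    unsigned i = 0x6C; // 0110 1100, lowest set bit is 0000 0100 (4)

    printf("i & -i = %#x\n", i & -i);             // 0x4
    printf("i - (i & -i) = %#x\n", i - (i & -i)); // 0x68: lowest set bit cleared
    printf("i + (i & -i) = %#x\n", i + (i & -i)); // 0x70: cleared, carry propagates upward
    return 0;
}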
Here is what I found, prompted by the other answers. The bit manipulations
i -= (i&-i); // strips off the LSB (least-significant bit)
i += (i&-i); // adds the LSB
are used, predominantly, in traversing a Fenwick tree. In particular, i&-i gives the LSB if signed integers are represented via two's complement. As already pointed out by Peter Fenwick in his original proposal, this is not portable to other signed integer representations. However,
i &= i-1; // strips off the LSB
is (it also works with one's complement and sign-magnitude representations) and needs one fewer operation.
However there appears to be no simple portable alternative for adding the LSB.
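For context, here is a minimal sketch of where both idioms typically appear in a Fenwick tree (my own illustration with 1-based indexing; it assumes two's complement, or an unsigned index type, for i & -i):
#include <cstdio>
#include <vector>

// Minimal Fenwick (binary indexed) tree for prefix sums:
//   i += i & -i  walks to the next node during an update,
//   i -= i & -i  walks toward the root during a query.
struct Fenwick {
    std::vector<long long> t;
    explicit Fenwick(int n) : t(n + 1, 0) {}

    void add(int i, long long delta) {          // point update
        for (; i < (int)t.size(); i += i & -i)
            t[i] += delta;
    }

    long long prefix_sum(int i) const {         // sum of elements 1..i
        long long s = 0;
        for (; i > 0; i -= i & -i)
            s += t[i];
        return s;
    }
};

int main() {
    Fenwick f(8);
    for (int i = 1; i <= 8; ++i)
        f.add(i, i);                            // store the values 1, 2, ..., 8
    printf("%lld\n", f.prefix_sum(5));          // 1+2+3+4+5 = 15
    return 0;
}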
i & -i is the easiest way to get the least significant bit (LSB) for an integer i.
You can read more here.
A1: You can read more about 'Mathematical Equivalents' here.
A2: If the negative integer representation is not the usual standard form (i.e. weird big integers), then i & -i might not be LSB.
The easiest way to think of it is in terms of the mathematical equivalence:
-i == (~i + 1)
So -i inverts the bits of the value and then adds 1. The significance of this is that all the lower 0 bits of i are turned into 1s by the ~i operation, so adding 1 to the value causes all those low 1 bits to flip to 0 whilst carrying the 1 upwards until it lands in a 0 bit, which will just happen to be the same position as the lowest 1 bit in i.
Here's an example for the number 6 (0110 in binary):
i = 0110
~i == 1001
(~i + 1) == 1010
i & (~i + 1) == 0010
You may need to do each operation manually a few times before you realise the patterns in the bits.
Here's two more examples:
i = 1000
~i == 0111
(~i + 1) == 1000
i & (~i + 1) == 1000
i = 1100
~i == 0011
(~i + 1) == 0100
i & (~i + 1) == 0100
See how the + 1 causes a sort of 'bit cascade' carrying the one up to the first open 0 bit?
So if (i & -i) is a means of extracting the lowest 1 bit, then it follows that the use cases of i += (i & -i) and i -= (i & -i) are attempts to add and subtract the lowest 1 bit of a value.
Subtracting the lowest 1 bit of a value from itself serves as a means to zero out that bit.
Adding the lowest 1 bit of a value to itself doesn't appear to have any special purpose; it just does what it says on the tin.
It should be portable on any system using two's complement.
From three pixels I computed their LSBs (least significant bits); for example, these are the three consecutive LSBs for three pixels: 010.
Then, with the first and second LSB I perform the XOR operation: 1.
The same operation, XOR, for the first and third LSB: 0.
These two binary values, 1 and 0, are used to hide a message composed of binary values.
Suppose the three pixels have these three LSBs: 000. Then a table is created to hide/insert the two message bits:
+----------+
| 000 |
+----------+
| 00 | 000 |
+----+-----+
| 01 | 001 |
+----+-----+
| 10 | 010 |
+----+-----+
| 11 | 100 |
+----+-----+
When the two message bits are 00, none of the three pixels' LSBs is changed... but when the message bits are 01, the last LSB is changed, giving 001.
Now, suppose that the three pixels have these three LSBs: 001. Then the table for LSB replacement is:
+----------+
| 001 |
+----------+
| 00 | 000 |
+----+-----+
| 01 | 001 |
+----+-----+
| 10 | 101 |
+----+-----+
| 11 | 011 |
+----+-----+
I need to do the same for the remaining LSB combinations: 010, 011, 100, 101, 110, 111
I have tried different logical operations to create tables such as the two presented above.
Note: Color version
Basically, a triplet of bits, abc, can be reduced to a pair of bits, de, using a set of specific computations, which are
d = a XOR b
e = a XOR c
For each de pair you're looking to derive the abc triplet that is the closest to any triplet of pixels, ijk.
Approach
This is a table of the XOR operation:
result | from
   0   | 00, 11
   1   | 01, 10
The important part here is that you can get the same result from two possible combinations, which are complement to each other.
In your case you have an independent condition, a XOR b, and a dependent one, a XOR c, because a is used in both of them. a (and b) can be any of the two values, but c has only one option based on what a is.
The number of abc triplets that reduce to a specific de combination can be calculated by using 2 for each independent restriction and 1 for each dependent one and multiplying them together. Therefore, 2 x 1 = 2. And half of these are complement to the other.
A more complicated example would have been abcde -> fgh, with
f = a XOR b
g = a XOR c
h = d XOR e
Since the restrictions are independent, dependent, independent, you get 2 x 1 x 2 = 4 combinations of abcde that reduce to the same fgh. Again, with half being complement to the other half.
Anyway, for each de pair compute the two abc triplets that reduce to it and then calculate the Hamming distance (HD) between each of these triplets and your pixel triplet ijk. The result with the lower value is the triplet you'd want to modify your pixels to so that they reduce to that specific de pair.
For example, the triplets 000 and 111 reduce to the pair 00. If the LSBs from your pixels are 000, 001, 010 or 100, you want to modify them to 000. And if they are 110, 101, 011 or 111, modify them to 111.
The HD can obviously be a value between 0 and 3. Since the triplets are complement to each other, if the HD between a triplet and your actual pixels is, for example, 1, the HD with the other triplet will be 2, so that both add up to 3. In a similar vein, the table you build for the pixels 000 will be complement to the one for 111.
| 000 | 111
---+-----+----
00 | 000 | 111
01 | 001 | 110
10 | 010 | 101
11 | 100 | 011
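Here is a sketch of how the full set of tables can be generated programmatically (my own illustration of the Hamming-distance rule described above; the bit names follow the answer):
#include <cstdio>

// Reduce a 3-bit LSB triplet abc to the 2-bit pair de: d = a XOR b, e = a XOR c.
static int reduce(int abc) {
    int a = (abc >> 2) & 1, b = (abc >> 1) & 1, c = abc & 1;
    return ((a ^ b) << 1) | (a ^ c);
}

// Hamming distance between two 3-bit values.
static int hamming(int x, int y) {
    int v = x ^ y, n = 0;
    while (v) { n += v & 1; v >>= 1; }
    return n;
}

int main() {
    // For every original LSB triplet ijk and every 2-bit message de, pick the
    // triplet abc that reduces to de and is closest (in Hamming distance) to ijk.
    for (int ijk = 0; ijk < 8; ++ijk) {
        printf("pixel LSBs %d%d%d:\n", (ijk >> 2) & 1, (ijk >> 1) & 1, ijk & 1);
        for (int de = 0; de < 4; ++de) {
            int best = 0, bestDist = 4;
            for (int abc = 0; abc < 8; ++abc) {
                if (reduce(abc) == de && hamming(abc, ijk) < bestDist) {
                    best = abc;
                    bestDist = hamming(abc, ijk);
                }
            }
            printf("  %d%d -> %d%d%d\n", (de >> 1) & 1, de & 1,
                   (best >> 2) & 1, (best >> 1) & 1, best & 1);
        }
    }
    return 0;
}
For the triplets 000 and 001 this reproduces the two tables from the question.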
Given this for loop:
for(++i; i < MAX_N; i += i & -i)
what is it supposed to mean? What does the statement i += i & -i accomplish?
This loop is often seen in binary indexed tree (BIT, or Fenwick tree) implementations, which can update and query ranges or points in logarithmic time. The loop helps choose the appropriate bucket based on the set bits of the index. For more details, please consider reading about BITs from another source. Below I show how this loop finds the appropriate buckets based on the least significant set bit.
2's complement signed system (when i is signed)
i & -i is a bit hack to quickly find the number that should be added to a given number to clear its least significant set bit (that is why the performance of a BIT is logarithmic). When you negate a number in a 2's complement system, you get a number with the bits inverted, plus 1. When that 1 is added, all the less significant bits keep inverting as long as they are 1 (i.e. were 0 in the original number), and the first 0 bit encountered (1 in the original i) becomes 1.
When you AND i and -i, only that bit (the least significant set bit) remains set: all the less significant (right) bits are zero, and the more significant bits are the inverse of the original number, so the AND clears them.
The AND therefore yields a power of two that, when added to i, clears the least significant set bit (as required by a BIT).
For example:
i = 28
+---+---+---+---+---+---+---+---+
| 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
+---+---+---+---+---+---+---+---+
                      *
-i
+---+---+---+---+---+---+---+---+
| 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 |
+---+---+---+---+---+---+---+---+
  I   I   I   I   I   S   Z   Z
Z = Zero
I = Inverted
S = Same
* = least significant set bit
i & -i
+---+---+---+---+---+---+---+---+
| 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
+---+---+---+---+---+---+---+---+
Adding
+---+---+---+---+---+---+---+---+
| 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
+---+---+---+---+---+---+---+---+
x
x = cleared now
Unsigned (when i is unsigned)
It will work even on a 1's complement system, or any other representation, as long as i is unsigned, for the following reason:
-i will take the value 2^(sizeof(unsigned int) * CHAR_BIT) - i. So all the bits to the right of the least significant set bit remain zero, the least significant set bit itself stays the same, and all the bits above it are inverted because of the borrows/carries.
For example:
i = 28
+---+---+---+---+---+---+---+---+
| 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
+---+---+---+---+---+---+---+---+
                      *
-i
0 1 1 1 1 1 <--- Carry bits
+---+---+---+---+---+---+---+---+
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+---+---+---+---+---+---+---+---+
+---+---+---+---+---+---+---+---+
- | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
+---+---+---+---+---+---+---+---+
----------------------------------------
+---+---+---+---+---+---+---+---+
| 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 |
+---+---+---+---+---+---+---+---+
  I   I   I   I   I   S   Z   Z
Z = Zero
I = Inverted
S = Same
* = least significant set bit
i & -i
+---+---+---+---+---+---+---+---+
| 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
+---+---+---+---+---+---+---+---+
Adding
+---+---+---+---+---+---+---+---+
| 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
+---+---+---+---+---+---+---+---+
x
x = cleared now
Implementation without bithack
You can also use i += (int)(1U << __builtin_ctz((unsigned)i)) on gcc.
A non-obfuscated version of the same would be:
/* Finds the smallest power of 2 that clears the least significant set bit when added */
int convert(int i) /* I purposely kept i signed, as this works for both */
{
    int t = 1, d;
    for (; /* ever */ ;) {
        d = t + i;          /* try this value of t */
        if (d & t) t *= 2;  /* bit at mask t was 0, try the next one */
        else break;         /* found */
    }
    return t;
}
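A quick usage check of the function above, using the worked example i = 28 from earlier (my own addition; it assumes the convert() definition above is in the same file):
#include <cstdio>

int convert(int i); /* defined above */

int main()
{
    int i = 28;             /* 0001 1100 */
    int t = convert(i);
    printf("%d\n", t);      /* 4:  0000 0100, the least significant set bit */
    printf("%d\n", i + t);  /* 32: 0010 0000, that bit is now cleared */
    return 0;
}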
EDIT: Adding from this answer:
Assuming 2's complement (or that i is unsigned), -i is equal to ~i+1.
i & (~i + 1) is a trick to extract the lowest set bit of i.
It works because what the +1 actually does is set the lowest clear bit and clear all bits lower than that. So the only bit that is set in both i and ~i+1 is the lowest set bit of i (that is, the lowest clear bit in ~i). The bits lower than that are clear in ~i+1, and the bits higher than that are non-equal between i and ~i.