I have two processors, A and B. A decodes a 4x4 keypad and sends the characters to processor B, which drives a display. Processor A also has real-time data that it sends to B, so I want a scheme that keeps the data separate from the keypresses. A will also send a special code which tells B that the next 1 or 2 bytes are data for the display; the keypad characters will handle menuing.

I was thinking that anything with the MSB set (0x80 to 0xFF) would be treated by B as either a keypress or a special code. The data would then be preceded by, for instance, 0xFF to send parameter 1 as a byte, or 0xFE to send parameter 2 as a word. The data byte would be shifted right by 1 bit, so B would receive 0-127 and reconstruct the value in even steps. When sending a word, it would have to go as 2 bytes, each shifted right by 1 bit, so the resulting word ends up shifted by 2 bits (0 to 16,383 in multiples of 4), which is accurate enough for the display.

I'm trying to do this using bit shifting for sending a word (uint16_t):
uint16_t decode(UCHAR num1, UCHAR num2)
{
    uint16_t res;

    num1 <<= 1;
    num2 <<= 1;
    num1 |= ((num2 >> 7) & 1);
    res = (uint16_t)(num1 << 8) | ((num2 << 1) & 0x7f);
    return res;
}
void encode(UCHAR *num1, UCHAR *num2, uint16_t res)
{
    UCHAR temp1, temp2;
    uint16_t tres = res >> 2;

    temp1 = (UCHAR)res;
    temp2 = (UCHAR)(res >> 8);
    temp1 >>= 1;
    temp2 >>= 1;
    *num1 = (temp1 & 0x7f);
    *num2 = (temp2 & 0x7f);
}
for (res = 0; res < 0xffff; res++)
{
    encode(&num1, &num2, res);
    res2 = decode(num1, num2);
    printf("%d -> %x %x -> %d\n", res, num1, num2, res2);
}
I know this is not quite right, but I'm wondering if it would be easier to do this using unions or bit-fields.
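For comparison, here is a minimal sketch of the scheme described above: keep the top 14 bits of the word and split them into two 7-bit payload bytes so the MSB of each transmitted byte stays clear. The names pack_word and unpack_word are made up for illustration and are not part of the code above.

#include <stdint.h>
#include <stdio.h>

void pack_word(uint16_t value, uint8_t *hi, uint8_t *lo)
{
    uint16_t v = value >> 2;   /* drop the two least-significant bits */
    *hi = (v >> 7) & 0x7F;     /* upper 7 bits, MSB clear */
    *lo = v & 0x7F;            /* lower 7 bits, MSB clear */
}

uint16_t unpack_word(uint8_t hi, uint8_t lo)
{
    uint16_t v = ((uint16_t)(hi & 0x7F) << 7) | (lo & 0x7F);
    return (uint16_t)(v << 2); /* values come back in multiples of 4 */
}

int main(void)
{
    uint8_t hi, lo;
    pack_word(1200, &hi, &lo);
    printf("%d -> %x %x -> %d\n", 1200, hi, lo, (int)unpack_word(hi, lo));
    return 0;
}

A round trip of 1200 prints 1200 -> 2 2c -> 1200; any value that is not a multiple of 4 loses its two low bits.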
Related
I had already asked a question about how to get 4 int8_t values into a 32-bit int; I was told that I have to cast each int8_t to a uint8_t first to pack it into a 32-bit integer with bit shifting.
int8_t offsetX = -10;
int8_t offsetY = 120;
int8_t offsetZ = -60;
using U = std::uint8_t;
int toShader = (U(offsetX) << 24) | (U(offsetY) << 16) | (U(offsetZ) << 8) | (0 << 0);
std::cout << (int)(toShader >> 24) << " "<< (int)(toShader >> 16) << " " << (int)(toShader >> 8) << std::endl;
My Output is
-10 -2440 -624444
It's not what I expected, of course. Does anyone have a solution?
In the shader I want to unpack the int16 later, and that is only possible with a 32-bit integer because GLSL does not have any other data types.
int offsetX = data[gl_InstanceID * 3 + 2] >> 24;
int offsetY = data[gl_InstanceID * 3 + 2] >> 16;
int offsetZ = data[gl_InstanceID * 3 + 2] >> 8;
What is written inside the square brackets does not matter; the question is about shifting the bits correctly, or casting, after the bracket.
If any of the offsets is negative, then the shift results in undefined behaviour.
Solution: Convert the offsets to an unsigned type first.
However, this brings another potential problem: if you convert to unsigned, then negative numbers become very large values with bits set in the most significant bytes, and ORing those bits in will force them to 1 regardless of offsetX and offsetY. One solution is to convert to a small unsigned type (std::uint8_t); another is to mask the unused bytes. The former is probably simpler:
using U = std::uint8_t;
int third = U(offsetX) << 24u
          | U(offsetY) << 16u
          | U(offsetZ) << 8u
          | 0u << 0u;
I think you're forgetting to mask the bits that you care about before shifting them.
Perhaps this is what you're looking for:
int offsetX = (data[gl_InstanceID * 3 + 2] & 0xFF000000) >> 24;
int offsetY = (data[gl_InstanceID * 3 + 2] & 0x00FF0000) >> 16;
int offsetZ = (data[gl_InstanceID * 3 + 2] & 0x0000FF00) >> 8;
if (offsetX & 0x80) offsetX |= 0xFFFFFF00;
if (offsetY & 0x80) offsetY |= 0xFFFFFF00;
if (offsetZ & 0x80) offsetZ |= 0xFFFFFF00;
Without the bit mask, the X part will end up in offsetY, and the X and Y parts in offsetZ.
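For what it's worth, here is a minimal CPU-side round-trip check of the same packing in C++, assuming the three offsets occupy the top three bytes as above; the sign is restored by converting each extracted byte back through int8_t (which relies on the usual two's-complement conversion):

#include <cstdint>
#include <iostream>

int main()
{
    std::int8_t offsetX = -10, offsetY = 120, offsetZ = -60;
    using U = std::uint8_t;

    std::uint32_t packed = (std::uint32_t(U(offsetX)) << 24)
                         | (std::uint32_t(U(offsetY)) << 16)
                         | (std::uint32_t(U(offsetZ)) << 8);

    // Isolate each byte, then convert back through int8_t so the sign returns.
    int x = std::int8_t((packed >> 24) & 0xFF);
    int y = std::int8_t((packed >> 16) & 0xFF);
    int z = std::int8_t((packed >> 8)  & 0xFF);

    std::cout << x << " " << y << " " << z << std::endl;  // -10 120 -60
}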
On the CPU side you can use a union to avoid bit shifts, bit masking and branches...
int8_t x,y,z,w; // your 8bit ints
int32_t i; // your 32bit int
union my_union    // just a helper union for the casting
{
    int8_t  i8[4];
    int32_t i32;
} a;
// 4x8bit -> 32bit
a.i8[0]=x;
a.i8[1]=y;
a.i8[2]=z;
a.i8[3]=w;
i=a.i32;
// 32bit -> 4x8bit
a.i32=i;
x=a.i8[0];
y=a.i8[1];
z=a.i8[2];
w=a.i8[3];
If you do not like unions, the same can be done with pointers...
Beware: on the GLSL side this is not possible (neither unions nor pointers), and you have to use bit shifts and masks as in the other answer...
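A hedged alternative, not from the answer above: if type-punning through a union or pointers raises strict-aliasing concerns, std::memcpy performs the same byte-level copy, and the byte order in the 32-bit value likewise depends on the CPU's endianness. The helper names pack4/unpack4 are made up for this sketch.

#include <cstdint>
#include <cstring>

// Copy the four bytes in and out with memcpy instead of a union.
std::int32_t pack4(std::int8_t x, std::int8_t y, std::int8_t z, std::int8_t w)
{
    std::int8_t bytes[4] = { x, y, z, w };
    std::int32_t i;
    std::memcpy(&i, bytes, sizeof i);
    return i;
}

void unpack4(std::int32_t i, std::int8_t out[4])
{
    std::memcpy(out, &i, sizeof i);
}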
I am sorry if my question is confusing, but here is an example of what I want to do.
Let's say I have an unsigned long int = 1265985549.
In binary I can write this as 01001011011101010110100000001101.
Now I want to split this 32-bit binary number into groups of 4 bits and work on those groups separately:
0100 1011 0111 0101 0110 1000 0000 1101
Any help would be appreciated.
You can get a 4-bit nibble at position k using bit operations, like this:
uint32_t nibble(uint32_t val, int k) {
    return (val >> (4*k)) & 0x0F;
}
Now you can get the individual nibbles in a loop, like this:
uint32_t val = 1265985549;
for (int k = 0; k != 8 ; k++) {
    uint32_t n = nibble(val, k);
    cout << n << endl;
}
short nibble0 = (i >> 0) & 15;
short nibble1 = (i >> 4) & 15;
short nibble2 = (i >> 8) & 15;
short nibble3 = (i >> 12) & 15;
and so on for the remaining nibbles.
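A quick hedged check that ties the two snippets together: extract all eight nibbles of the value from the question and reassemble them (the rebuild loop is illustrative, not from either answer):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t val = 1265985549u;
    std::uint32_t rebuilt = 0;

    for (int k = 7; k >= 0; --k) {
        std::uint32_t n = (val >> (4 * k)) & 0x0F;  // same as nibble(val, k)
        std::cout << n << " ";
        rebuilt = (rebuilt << 4) | n;               // put the nibble back
    }
    std::cout << "\nrebuilt == val: " << std::boolalpha << (rebuilt == val) << std::endl;
}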
Based on the comment explaining the actual use for this, here's another way to count how many nibbles have odd parity (not tested):
// compute the parity of each nibble
x ^= x >> 2;
x ^= x >> 1;
x &= 0x11111111;
// add the parities
x = (x + (x >> 4)) & 0x0F0F0F0F;
int count = x * 0x01010101 >> 24;
The first part is just a regular "xor all the bits together" parity calculation (where "all the bits" means all the bits in a nibble, not in the entire integer); the second part is based on this bitcount algorithm, skipping some steps that are unnecessary because certain bits are always zero and so don't have to be added.
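A self-contained version of that snippet, with a naive per-nibble check alongside it; the function names are made up for this sketch.

#include <cstdint>
#include <iostream>

// Count how many 4-bit nibbles of x have odd parity, using the bit trick above.
unsigned odd_parity_nibbles(std::uint32_t x)
{
    x ^= x >> 2;                      // fold bits 2..3 onto bits 0..1 of each nibble
    x ^= x >> 1;                      // fold bit 1 onto bit 0: bit 0 now holds the nibble's parity
    x &= 0x11111111u;                 // keep only the parity bit of each nibble
    x = (x + (x >> 4)) & 0x0F0F0F0Fu; // add the two parities within each byte
    return (x * 0x01010101u) >> 24;   // sum the four byte totals
}

// Naive reference: count the set bits of each nibble and test the low bit.
unsigned odd_parity_nibbles_naive(std::uint32_t x)
{
    unsigned count = 0;
    for (int k = 0; k < 8; ++k) {
        std::uint32_t nib = (x >> (4 * k)) & 0x0Fu;
        unsigned bits = 0;
        while (nib) { bits += nib & 1u; nib >>= 1; }
        count += bits & 1u;
    }
    return count;
}

int main()
{
    std::uint32_t val = 1265985549u;
    std::cout << odd_parity_nibbles(val) << " "
              << odd_parity_nibbles_naive(val) << std::endl;  // both print the same count
}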
I want to extract the n most significant bits from an integer in C++ and convert those n bits to an integer.
For example
int a=1200;
// its binary representation within 32 bit word-size is
// 00000000000000000000010010110000
Now I want to extract the 4 most significant bits starting from the highest set bit, i.e. 1001:
00000000000000000000010010110000
                     ^^^^
and convert them again to an integer (binary 1001 = 9 in decimal).
How is this possible with a simple C++ function, without loops?
Some processors have an instruction to count the leading binary zeros of an integer, and some compilers have intrinsics to allow you to use that instruction. For example, using GCC:
uint32_t significant_bits(uint32_t value, unsigned bits) {
    unsigned leading_zeros = __builtin_clz(value);
    unsigned highest_bit = 32 - leading_zeros;
    unsigned lowest_bit = highest_bit - bits;
    return value >> lowest_bit;
}
For simplicity, I left out checks that the requested number of bits are available. For Microsoft's compiler, the intrinsic is called __lzcnt.
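As a quick check with the value from the question (assuming GCC or Clang for __builtin_clz, and a non-zero input, since __builtin_clz(0) is undefined):

#include <cstdint>
#include <iostream>

std::uint32_t significant_bits(std::uint32_t value, unsigned bits) {
    unsigned leading_zeros = __builtin_clz(value);  // 21 for 1200
    unsigned highest_bit = 32 - leading_zeros;      // 11
    unsigned lowest_bit = highest_bit - bits;       // 7
    return value >> lowest_bit;
}

int main() {
    std::cout << significant_bits(1200, 4) << std::endl;  // prints 9
}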
If your compiler doesn't provide that intrinsic, and your processor doesn't have a suitable instruction, then one way to count the zeros quickly is with a binary search:
unsigned leading_zeros(uint32_t value) {
    unsigned count = 0;
    if ((value & 0xffff0000u) == 0) {
        count += 16;
        value <<= 16;
    }
    if ((value & 0xff000000u) == 0) {
        count += 8;
        value <<= 8;
    }
    if ((value & 0xf0000000u) == 0) {
        count += 4;
        value <<= 4;
    }
    if ((value & 0xc0000000u) == 0) {
        count += 2;
        value <<= 2;
    }
    if ((value & 0x80000000u) == 0) {
        count += 1;
    }
    return count;
}
It's not fast, and floating-point rounding makes it unreliable near powers of two, but (int)(log(x)/log(2) + .5) + 1 will tell you the position of the most significant non-zero bit. Finishing the algorithm from there is fairly straightforward.
This seems to work (done in C# with UInt32 then ported so apologies to Bjarne):
unsigned int input = 1200;
unsigned int most_significant_bits_to_get = 4;
// shift + or the msb over all the lower bits
unsigned int m1 = input | input >> 8 | input >> 16 | input >> 24;
unsigned int m2 = m1 | m1 >> 2 | m1 >> 4 | m1 >> 6;
unsigned int m3 = m2 | m2 >> 1;
unsigned int nbitsmask = m3 ^ m3 >> most_significant_bits_to_get;
unsigned int v = nbitsmask;
unsigned int c = 32; // c will be the number of zero bits on the right
v &= (0u - v); // isolate the lowest set bit
if (v>0) c--;
if ((v & 0x0000FFFF) >0) c -= 16;
if ((v & 0x00FF00FF) >0) c -= 8;
if ((v & 0x0F0F0F0F) >0 ) c -= 4;
if ((v & 0x33333333) >0) c -= 2;
if ((v & 0x55555555) >0) c -= 1;
unsigned int result = (input & nbitsmask) >> c;
I assumed you meant using only integer math.
I used some code from @OliCharlesworth's link; you could remove the conditionals too by using the LUT-based trailing-zeroes code there.
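If C++20 is available, std::countl_zero from <bit> is a portable way to get the leading-zero count without compiler-specific intrinsics or the mask-building above. A minimal sketch (top_bits is a made-up name, and the input must be non-zero):

#include <bit>
#include <cstdint>
#include <iostream>

std::uint32_t top_bits(std::uint32_t value, unsigned bits)
{
    // 1-based position of the highest set bit
    unsigned highest = 32u - static_cast<unsigned>(std::countl_zero(value));
    return value >> (highest - bits);
}

int main()
{
    std::cout << top_bits(1200u, 4) << std::endl;  // prints 9
}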
So this sensor I have returns a signed value between -500 and 500 by returning two (high and low) signed bytes. How can I use these to figure out what the actual value is? I know I need to do two's complement, but I'm not sure how. This is what I have now:
real_velocity = temp.values[0];
if (temp.values[1] != -1)
    real_velocity += temp.values[1];

// if high byte > 1, negative number - take two's complement
if (temp.values[1] > 1) {
    real_velocity = ~real_velocity;
    real_velocity += 1;
}
But it just returns the negative value of what would be a positive. For instance, -200 returns the bytes 255 (high) and 56 (low); added together these are 311, but when I run the above code it tells me -311. Thank you for any help.
-200 in hex is 0xFF38, so you're getting the two bytes 0xFF and 0x38. Converting these back to decimal, you might recognise them:
0xFF = 255
0x38 = 56
Your sensor is not returning 2 signed bytes but simply the high and low byte of a signed 16-bit number. So your result is
value = (highbyte << 8) + lowbyte
with value being a 16-bit signed variable.
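A minimal sketch of that reconstruction with the example bytes from the question, assuming 8-bit bytes and a two's-complement int16_t:

#include <cstdint>
#include <iostream>

int main()
{
    std::uint8_t high = 0xFF, low = 0x38;

    // Combine the bytes, then let the conversion to int16_t restore the sign.
    std::int16_t value = static_cast<std::int16_t>((high << 8) | low);

    std::cout << value << std::endl;  // prints -200
}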
Based on the example you gave, it appears that the value is already 2's complement. You just need to shift the high byte left 8 bits and OR the values together.
real_velocity = (short) (temp.values[0] | (temp.values[1] << 8));
You can shift the bits and mask the values.
#include <iostream>

int main()
{
    char data[2];
    data[0] = 0xFF; // high
    data[1] = 56;   // low

    int value = 0;
    if (data[0] & 0x80) // sign bit set: fill the upper bits
        value = 0xFFFF8000;
    value |= ((data[0] & 0x7F) << 8) | data[1];

    std::cout << std::hex << value << std::endl;
    std::cout << std::dec << value << std::endl;
    std::cin.get();
}
Output:
ffffff38
-200
real_velocity = temp.values[0];
real_velocity = real_velocity << 8;
real_velocity |= temp.values[1];
// And, assuming 32-bit integers
real_velocity <<= 16;
real_velocity >>= 16;
For 8-bit bytes, first just convert to unsigned:
typedef unsigned char Byte;
unsigned const u = (Byte( temp.values[1] ) << 8) | Byte( temp.values[0] );
Then if that is greater than the upper range for 16-bit two's complement, subtract 2^16:
int const i = int(u >= (1u << 15)? u - (1u << 16) : u);
You could do tricks at the bit level, but I don't think there's any point in that.
The above assumes that CHAR_BIT == 8, that unsigned is more than 16 bits, and that the machine and desired result use two's complement form.
#include <iostream>
using namespace std;

int main()
{
    typedef unsigned char Byte;

    struct { char values[2]; } const temp = { 56, (char)255 };

    unsigned const u = (Byte( temp.values[1] ) << 8) | Byte( temp.values[0] );
    int const i = int(u >= (1u << 15)? u - (1u << 16) : u);

    cout << i << endl;
}
Does anyone know why, after I apply this algorithm in C++ to reduce the volume of a PCM stream, white noise appears in the background?
for (int i = 0; i < pcm.length(); i += 2) {
    quint16 byte0 = pcm[i];
    quint16 byte1 = pcm[i+1];

    // merge byte0 and byte1
    qint16 n = (byte1 << 8) + byte0;

    n *= volume; // multiplier

    // split n into byte0 and byte1
    byte1 = (n >> 8) & 255;
    byte0 = n & 255;

    // save the new values
    pcm[i] = byte0;
    pcm[i+1] = byte1;
}
After a long time, I came up with the solution. The problem was the way I was merging the two bytes.
for (int i = 0; i < pcm.length(); i += 2) {
    quint16 byte0 = pcm[i];
    quint16 byte1 = pcm[i+1];

    // merge byte0 and byte1
    qint16 n = 0;
    n |= speakersRaw[j][i+1] & 0xFF;
    n <<= 8;
    n |= speakersRaw[j][i] & 0xFF;

    n *= volume; // multiplier

    // split n into byte0 and byte1
    byte1 = (n >> 8) & 255;
    byte0 = n & 255;

    // save the new values
    pcm[i] = byte0;
    pcm[i+1] = byte1;
}
Your n *= 0.5 is effectively doing the same as n >>= 1. You're shifting the least-significant bit from byte1 into the most significant bit of byte0, which is likely the source of your noise.
Why are you combining the two values into one integer rather than doing each one separately?
Perhaps you're packing and unpacking your bytes in the wrong order?
qint16 n = (byte0 << 8) + byte1;
byte0 = (n >> 8) & 255;
byte1 = n & 255;
The byte order mentioned by Mark Ransom is an obvious possible problem. You should check that.
The other possible problem is sign extension.
If you have signed samples and you are manipulating them in an unsigned type, you will lose the sign bit on all the negative samples.
If your byte type is signed then you will get sign extension into the high byte when you load byte0 and byte1, again not what you want.
Does the quint16 type match the actual type of the samples? If not, you should use the same type. You should also make sure you use unsigned char as your byte type.
Update from info in comments:
To test the sign-extension theory, change the line
n *= 0.5;
to:
n = ((short) n) * 0.5;
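Putting those two points (byte order and sign extension) together, here is a minimal sketch of the scaling loop with explicit unsigned bytes and a signed 16-bit sample, assuming little-endian 16-bit PCM as in the question; scale_volume is a made-up name:

#include <cstddef>
#include <cstdint>

void scale_volume(unsigned char *pcm, std::size_t length, float volume)
{
    for (std::size_t i = 0; i + 1 < length; i += 2) {
        // merge low byte (pcm[i]) and high byte (pcm[i+1]) without sign extension
        std::int16_t sample = static_cast<std::int16_t>(pcm[i] | (pcm[i + 1] << 8));

        sample = static_cast<std::int16_t>(sample * volume);

        // split the scaled sample back into bytes
        pcm[i]     = static_cast<unsigned char>(sample & 0xFF);
        pcm[i + 1] = static_cast<unsigned char>((sample >> 8) & 0xFF);
    }
}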