I'm working on an Arduino powered Tetris game. To keep track of the pieces that have fallen and become fixed I have an array of bytes
byte theGrid[] = {
B00000000,
B00000000,
B00000000,
B00000000,
B00000000,
...
This works great when the well is only 8 LEDs wide, but I need it to be 16 wide. Is there a way to perform bitwise operations on a 16 bit number, like a short? I tried just declaring theGrid as a short, but I'm getting this error no matter what I do.
tetris:62: error: 'B0000000000000000' was not declared in this scope
...leading 'B' only works with 8 bit values (0 to 255)...
from http://arduino.cc/en/pmwiki.php?n=Reference/IntegerConstants
Just use hexadecimal notation, i.e. 0x0000 for 2 bytes.
0x signals that it is hex, and every hex digit (0123456789ABCDEF) represents 4 bits.
Instead of bitRead and bitSet, you can use the following code.
The variable is x and the bit number is i (i=0 is the right-most bit, i=1 the next, and so on):
//set bit to 1
x |= 1<<i;
//set bit to 0
x &= ~(1<<i);
//check if bit is set
if(x & (1<<i))
E.g. x &= ~(1<<3); changes a value B11111111 (in binary representation) to B11110111,
that is, 0xff to 0xf7. Btw., x &= ~(1<<3); is equivalent to x &= ~8;
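For the 16-wide well, the grid can then be declared with uint16_t and hex constants. A minimal sketch (the row/col names are just for illustration):

uint16_t theGrid[] = {
  0x0000,
  0x0000,
  0x0000,
  0x0000
};

void setup() {
  int row = 2, col = 5;         // example position
  theGrid[row] |= 1u << col;    // set bit: LED on
  if (theGrid[row] & (1u << col)) {
    // bit is set: the LED at (row, col) is on
  }
  theGrid[row] &= ~(1u << col); // clear bit: LED off
}

void loop() {}

Note the 1u: on an 8-bit Arduino, int is 16 bits, so shifting a plain signed 1 into bit 15 would overflow; an unsigned constant avoids that.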
I came across the following code to convert 16-bit numbers to 10-bit numbers and store it inside an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if(xAccl > 511) {
  xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & applies a mask; in this case, it zeroes the six highest bits of the byte.
Basically, this code does a modulo % 1024 (for unsigned values).
data[1] takes the 2nd byte; & 0x03 masks that byte with binary 11, keeping only its low 2 bits; * 256 is the same as << 8, i.e. it pushes those 2 bits into the 9th and 10th positions; adding data[0] then combines the two bytes (personally I'd have used |, not +).
So xAccl is now the full 10-bit value, with data[0] as the low byte (little-endian byte order).
The > 511 check is a sign check; essentially, it says "if the 10th bit is set, treat the entire thing as a negative integer, as though we'd used 10-bit two's complement rules".
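Putting the whole conversion into a small function makes it easy to test. This is a sketch; the function name and the sample bytes are mine:

#include <iostream>

// Combine a low byte and two high bits into a signed 10-bit value,
// treating bit 9 as the two's complement sign bit.
int convertTo10Bit(unsigned char low, unsigned char high) {
    int value = ((high & 0x03) << 8) | low; // keep 2 bits, append the low byte
    if (value > 511)                        // bit 9 set -> negative
        value -= 1024;
    return value;
}

int main() {
    std::cout << convertTo10Bit(0xFF, 0x03) << "\n"; // -1
    std::cout << convertTo10Bit(0xFF, 0x01) << "\n"; // 511
}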
In C++, I have the following code:
int x = -3;
x &= 0xffff;
cout << x;
This produces
65533
But if I remove the negative, so I have this:
int x = 3;
x &= 0xffff;
cout << x;
I simply get 3 as a result.
Why does the first result not produce a negative number? I would expect that -3 would be sign extended to 16 bits, which should still give a two's complement negative number, considering all those extended bits would be 1. Consequently, the most significant bit would be 1 too.
It looks like your system uses 32-bit ints with two's complement representation of negatives.
The constant 0xFFFF covers the least significant two bytes, while the upper two bytes are zero.
The value of -3 is 0xFFFFFFFD, so masking it with 0x0000FFFF you get 0x0000FFFD, or 65533 in decimal.
Positive 3 is 0x00000003, so masking with 0x0000FFFF gives you 3 back.
You would get the result that you expect if you specify a 16-bit data type, e.g.
int16_t x = -3;
x &= 0xffff;
cout << x;
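As a complete program (assuming 32-bit int and two's complement):

#include <cstdint>
#include <iostream>

int main() {
    int a = -3;
    a &= 0xffff;            // -3 is 0xFFFFFFFD; masking keeps 0xFFFD
    std::cout << a << "\n"; // 65533

    int16_t b = -3;
    b &= 0xffff;            // the result is truncated back to 16 bits
    std::cout << b << "\n"; // -3
}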
In your case, int is more than 2 bytes. You are probably running on a modern CPU, where an integer is usually 4 bytes (32 bits).
If you look at the way the system stores negative numbers, you will see that it uses two's complement. And if you take only the last 2 bytes, as your 0xFFFF mask does, you get only a part of the number.
Your two options:
use short instead of int; it is usually half the size of an int, only 2 bytes
use a bigger mask, like 0xFFFFFFFF, so that it covers all the bits of your integer
NOTE: I say "usually" because the number of bits in your int and short depends on your CPU and compiler.
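You can check the sizes on your own machine with something like this:

#include <iostream>

int main() {
    std::cout << "int:   " << sizeof(int)   << " bytes\n"; // typically 4
    std::cout << "short: " << sizeof(short) << " bytes\n"; // typically 2

    short x = -3;
    x &= 0xffff;            // the mask now covers the whole short
    std::cout << x << "\n"; // -3 on a two's complement machine
}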
I am trying to perform this operation, and I'm getting the wrong output.
signed char temp3[3] = {0x0D, 0xFF, 0xC0};
double temp = ((temp3[0] & 0x03) << 10) | (temp3[1]) | ((temp3[2] & 0xC0) >> 6);
I am trying to form a 12 bit number: take the last 2 bits of 0x0D, all 8 bits of 0xFF, and the first 2 bits of 0xC0 to form the binary number 011111111111 = 2047. However, I am getting -1. When I break out just the first mask and shift by 10, I get 0. I don't know if this is my problem, trying to shift an 8 bit character by 10 bits.
When bit twiddling, always use unsigned numbers.
Change the array to unsigned char.
Add the 'U' suffix to each constant, because each constant is a signed integer by default.
BTW, right shifting is implementation defined for signed integers.
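Applied to the snippet in the question, that would look something like this (a sketch; note that the middle byte also needs a left shift of 2 to land in bits 2 to 9, which the original expression was missing):

#include <iostream>

int main() {
    unsigned char temp3[3] = {0x0DU, 0xFFU, 0xC0U};
    unsigned int temp = ((temp3[0] & 0x03U) << 10) // low 2 bits of byte 0 -> bits 10-11
                      | ((unsigned)temp3[1] << 2)  // all 8 bits of byte 1 -> bits 2-9
                      | ((temp3[2] & 0xC0U) >> 6); // top 2 bits of byte 2 -> bits 0-1
    std::cout << temp << "\n"; // 2047
}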
There are a few things you need to address.
First up, C++ doesn't have 12 bit numbers. The best you can have is 16 bits, where the top bit represents the sign in two's complement form.
You also need to be very careful about the type of the number you are shifting. In your example, you are left shifting a char by more than 8 bits; as a char is only 8 bits, that shifts all of its data away.
The following example gives a correct implementation (for signed 12 bit numbers). There are no doubt more efficient ones.
// shift in top 2 bits
signed short test = static_cast<signed short>(temp3[0] & 0x03) << 10;
// shift in middle 8 bits
test |= (static_cast<signed short>(temp3[1]) << 2) & 0x03FC;
// right shift, mask and append lower 2 bits
test |= (static_cast<signed short>(temp3[2]) >> 6) & 0x0003;
// sign extend top bits from 12 bits to 16 bits
test |= (temp3[0] & 0x02) == 0 ? 0x0000 : 0xF000;
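With the original bytes {0x0D, 0xFF, 0xC0} this yields 2047; a first byte with bit 1 set comes out negative. Wrapped up for testing (the function name is mine):

#include <iostream>

short unpack12(const signed char b[3]) {
    short test = static_cast<short>(b[0] & 0x03) << 10;
    test |= (static_cast<short>(b[1]) << 2) & 0x03FC;
    test |= (static_cast<short>(b[2]) >> 6) & 0x0003;
    test |= (b[0] & 0x02) == 0 ? 0x0000 : 0xF000;
    return test;
}

int main() {
    signed char a[3] = {0x0D, (signed char)0xFF, (signed char)0xC0};
    signed char b[3] = {0x0F, (signed char)0xFF, (signed char)0xC0};
    std::cout << unpack12(a) << "\n"; // 2047
    std::cout << unpack12(b) << "\n"; // -1 (sign bit of the 12-bit value set)
}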
I have a question regarding both masking and bit shifting.
I have the following code:
void WriteLCD(unsigned char word, unsigned commandType, unsigned usDelay)
{
    // Most Significant Bits
    // Need to do bit masking for upper nibble, and shift left by 8.
    LCD_D = (LCD & 0x0FFF) | (word << 8);
    EnableLCD(commandType, usDelay); // Send Data

    // Least Significant Bits
    // Need to do bit masking for lower nibble, and shift left by 12.
    LCD_D = (LCD & 0x0FFF) | (word << 12);
    EnableLCD(commandType, usDelay); // Send Data
}
The "word" is 8 bits, and is being put through a 4 bit LCD interface. Meaning I have to break the most significant bits and least significant bits apart before I send the data.
LCD_D is a 16 bit number, in which only the most significant bits I pass should actually "do" something. I want the previous 12 bits preserved in case they were doing something else.
Is my logic in terms of bit masking and shifting the "word" correct in terms of passing the upper and lower nibbles appropriately to the LCD_D?
Thanks for the help!
Looks OK, apart from needing to cast "word" to an unsigned short (16 bit) before the shift, in both cases, so that the shift is not performed on a char and loses the data. E.g.:
LCD_D = (LCD & 0x0FFF) | ((unsigned short) word << 8);
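So the function body would become something like this (a sketch; the extern declarations are hypothetical stand-ins for the real hardware registers, keeping the LCD and LCD_D names from the question):

// Hypothetical declarations standing in for the real hardware registers
extern volatile unsigned short LCD_D;
extern unsigned short LCD;
void EnableLCD(unsigned commandType, unsigned usDelay);

void WriteLCD(unsigned char word, unsigned commandType, unsigned usDelay)
{
    // Upper nibble: cast before shifting so the data is not lost
    LCD_D = (LCD & 0x0FFF) | ((unsigned short) word << 8);
    EnableLCD(commandType, usDelay); // Send Data

    // Lower nibble
    LCD_D = (LCD & 0x0FFF) | ((unsigned short) word << 12);
    EnableLCD(commandType, usDelay); // Send Data
}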
I am new to working with bits & bytes in C++, and I'm looking at some previously developed code that I need some help in understanding. There is a byte array being populated with some data, and I noticed that the data was being '&'-ed with 0x0F (please see the code snippet below). I don't really understand what is going on there, so if somebody could explain it, that would be greatly appreciated. Thanks!
//Message Definition
/*
  Byte 1: Bit(s) 3:0 = Unused; set to zero
          Bit(s) 7:4 = Message ID; set to 10
*/
/*
  Byte 2: Bit(s) 3:0 = Unused; set to zero
          Bit(s) 7:4 = Acknowledge Message ID; set to 11
*/
//Implementation
BYTE Msg_Arry[2];
int Msg_Id = 10;
int AckMsg_Id = 11;
Msg_Arry[0] = Msg_Id & 0x0F; //MsgID & Unused
Msg_Arry[1] = AckMsg_Id & 0x0F; //AckMsgID & Unused
0x0f is 00001111 in binary. When you perform a bitwise-and (&) with this, it has the effect of masking off the top four bits of the char (because 0 & anything is always 0).
x & 0xF
returns the low four bits of the data.
If you think of the binary representation of x and AND it with 0x0f (00001111 in binary), the top four bits of x will always become zero, and the bottom four bits will stay as they were before the operation.
In the given example, it actually does nothing. Msg_Id and AckMsg_Id are both less than 0x0F, and so masking them has no effect here.
However, the use of the bitwise-and operator (&) on integer types performs a bit-for-bit AND between the given operands.
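To see the mask do real work, take a value whose high nibble is not already zero:

#include <iostream>

int main() {
    int x = 0xAB;                                 // binary 10101011
    std::cout << std::hex << (x & 0x0F) << "\n";  // prints b (10101011 -> 00001011)
}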