I am getting data as hex values in 2-byte short values, but after swapping the value gets lost.
signed short value = 0x0040;
value = (value*0.5) - 40;
convertMSBTOLSB(value); // conversion needed because my device sends the LSB first
//Implementation of convertMSBTOLSB(value)
unsigned short temp = ((char*) &value)[0]; // assign value's LSB
temp = (temp << 8) | ((char*) &value)[1]; // shift LSB to MSB and add value's MSB
value = temp;
After the conversion I got the value -8.
The problem shows up when I send 0x51: the final value should be 0.5, but I get zero because value is a signed short.
convertMSBTOLSB is just byte swapping. How can I change the code so that it can handle both negative and decimal values?
You won't get 0.5 because your value variable is declared short, and therefore can hold only integers.
Your question is unclear. You wrote that convertMSBTOLSB swaps between MSB and LSB, and you also wrote that
convertMSBTOLSB is just byte swapping
Since MSB and LSB refer to bits and not to bytes, I really don't understand what you are trying to swap here.
Your value should change its type. Although (value * 0.5) is computed as a double, assigning the result back to an integer variable truncates the fractional part, which is why anything below 1 becomes zero.
In addition, the whole expression (value * 0.5) - 40 is evaluated as a double, so assigning it back to value truncates the result.
I am polling a 32-bit register in a motor driver for a value.
Only bits 0-9 are required, the rest need to be ignored.
How do I ignore bits 10-31?
Image of register bits
In order to poll the motor driver for a value, I send the location of the register, which sends back the entire 32-bit number. But I only need bits 0-9 to display.
Serial.println(sendData(0x35, 0));
If you want to extract such bits then you must mask the whole integer with a value that keeps just the bits you are interested in.
This can be done with the bitwise AND (&) operator, e.g.:
uint32_t value = reg & 0x3ff;
uint32_t value = reg & 0b1111111111; // binary literals require C++14
Rather than Serial.println() I'd go with Serial.print().
You can then just print out the specific bits that you're interested in with a for loop.
auto data = sendData(0x35, 0);
for (int i=0; i<=9; ++i)
Serial.print((data >> i) & 1); // print bit i, LSB first
Any other method will result in extra bits being printed, since there's no integer type that holds exactly 10 bits.
You do a bitwise AND with a number whose last 10 bits are set to 1. This sets all the other bits to 0. For example:
value = value & ((1<<10) - 1);
Or
value = value & 0x3FF;
I'm currently working on bitwise operations, but I am confused right now... Here's the scoop and why.
I have the byte 0xCD; in bits this is 1100 1101
I am shifting the bits left 7, then I'm saying & 0xFF since 0xFF in bits is 1111 1111
unsigned int bit = (0xCD << 7) & 0xFF<<7;
Now I would make the assumption that both 0xCD and 0xFF get shifted to the left 7 times and the remaining bit would be 1 & 1 = 1, but that's not the output I'm getting. I would also assume that shifting by 6 would give me bits 0 & 1 = 0, but again I'm getting a number above 1, like 205. Is there something incorrect about the way I am trying to process bit shifting in my head? If so, what is it that I am doing wrong?
Code Below:
unsigned char byte_now = 0xCD;
printf("Bits for byte_now: 0x%02x: ", byte_now);
/*
* We want to get the first bit in a byte.
* To do this we will shift the bits over 7 places for the last bit
* we will compare it to 0xFF since it's (1111 1111) if bit&1 then the bit is one
*/
unsigned int bit_flag = 0;
int bit_pos = 7;
bit_flag = (byte_now << bit_pos) & 0xFF;
printf("%d", bit_flag);
Is there something incorrect about the way I am trying to process bit shifting in my head?
There seems to be.
If so what is it that I am doing wrong?
That's unclear, so I offer a reasonably full explanation.
In the first place, it is important to understand that C does not perform any arithmetic directly on integers smaller than int. Consider, then, your expression byte_now << bit_pos. The usual arithmetic conversions are performed on the operands, resulting in the left operand being converted to the int value 0xCD. The result has the same pattern of least-significant value bits as byte_now, but also a bunch of leading zero bits.
Left shifting that by 7 bits produces the bit pattern 110 0110 1000 0000, equivalent to 0x6680. You then perform a bitwise AND on the result, masking off all but the least-significant 8 bits, thus yielding 0x80. What happens when you assign that to bit_flag depends on the type of that variable, but if it is an integer type that is either unsigned or has more than 7 value bits, then the assignment is well-defined and value-preserving. Note that it is bit 7 that is nonzero, not bit 0.
The type of bit_flag matters more when you pass it to printf(). You've paired it with a %d conversion specifier, which is correct if bit_flag has type int and incorrect otherwise. If bit_flag does have type int, then I would expect the program to print 128.
I would like to shift 0xff left by 3 bytes and store it in a uint64_t, which should work as such:
uint64_t temp = 0xff << 24;
This yields a value of 0xffffffffff000000 which is most definitely not the expected 0xff000000.
However, if I shift it by fewer than 3 bytes, it results in the correct answer.
Furthermore, trying to shift 0x01 left by 3 bytes does work.
Here's my output:
0xff shifted by 0 bytes: 0xff
0x01 shifted by 0 bytes: 0x1
0xff shifted by 1 bytes: 0xff00
0x01 shifted by 1 bytes: 0x100
0xff shifted by 2 bytes: 0xff0000
0x01 shifted by 2 bytes: 0x10000
0xff shifted by 3 bytes: 0xffffffffff000000
0x01 shifted by 3 bytes: 0x1000000
With some experimentation, the 3-byte left shift works for every starting value up to 0x7f, which yields 0x7f000000; 0x80 yields 0xffffffff80000000.
Does anyone have an explanation for this bizarre behavior? 0xff000000 certainly falls within the 2^64 - 1 limit of uint64_t.
Does anyone have an explanation for this bizarre behavior?
Yes: the type of an operation always depends on the operand types, never on the result type:
double r = 1.0 / 2.0;
// double divided by double and result double assigned to r
// r == 0.5
double r = 1.0 / 2;
// 2 converted to double, double divided by double and result double assigned to r
// r == 0.5
double r = 1 / 2;
// int divided by int, result int converted to double and assigned to r
// r == 0.0
When you understand and remember this, you won't make this mistake again.
I suspect the behavior is compiler dependent, but I am seeing the same thing.
The fix is simple. Be sure to cast the 0xff to a uint64_t type BEFORE performing the shift. That way the compiler will handle it as the correct type.
uint64_t temp = uint64_t(0xff) << 24;
Shifting left creates a negative (32-bit) number, which then gets sign-extended to 64 bits.
Try
0xffLL << 24;
The suffix has to go on the left operand: the result type of a shift is the promoted type of its left operand, so 0xff << 24LL would still be a 32-bit int shift.
Let's break your problem up into two pieces. The first is the shift operation, and the other is the conversion to uint64_t.
As far as the left shift is concerned, you are invoking undefined behavior on 32-bit (or smaller) architectures. As others have mentioned, the operands are int. A 32-bit int with the given value would be 0x000000ff. Note that this is a signed number, so the left-most bit is the sign. According to the standard, if the shift affects the sign bit, the result is undefined. It is up to the whims of the implementation, it is subject to change at any point, and it can even be completely optimized away if the compiler recognizes it at compile time. The latter is not realistic, but it is actually permitted. While you should never rely on code of this form, this is actually not the root of the behavior that puzzled you.
Now, for the second part. The undefined outcome of the left shift operation has to be converted to a uint64_t. The standard states for signed to unsigned integral conversions:
If the destination type is unsigned, the resulting value is the smallest unsigned value equal to the source value modulo 2n where n is the number of bits used to represent the destination type.
That is, depending on whether the destination type is wider or narrower, signed integers are sign-extended[footnote 1] or truncated and unsigned integers are zero-extended or truncated respectively.
The footnote clarifies that sign-extension is true only for two's-complement representation which is used on every platform with a C++ compiler currently.
Sign-extension means that everything left of the sign bit in the destination variable is filled with the sign bit, which produces all the fs in your result. As you noted, you could left shift 0x7f by 3 bytes without this occurring. That's because 0x7f = 0b01111111: after the shift you get 0x7f000000, which still leaves the sign bit of the 32-bit int at 0. Therefore, in the conversion, zeros were extended.
Converting the left operand to a large enough type solves this:
uint64_t temp = uint64_t(0xff) << 24;
I am trying to create a function to find the average of sensor values on the Arduino in order to calibrate it, however the summation is not working properly, so the average is not correct. The table below shows a sample of the output: the left column should be the rolling sum of the readings displayed in the right column. (How are negatives getting in there?)
-10782 17112
6334 17116
23642 17308
-24802 17092
-7706 17096
9326 17032
26422 17096
-21986 17128
The calibrateSensors() function, which is supposed to execute this is shown below
void calibrateSensors(int16_t * accelOffsets){
int16_t rawAccel[3];
int sampleSize = 2000;
Serial.println("Accelerometer calibration in progress...");
for (int i=0; i<sampleSize; i ++){
readAccelData(rawAccel); // get raw accelerometer data
accelOffsets[0] += rawAccel[0]; // add x accelerometer values
accelOffsets[1] += rawAccel[1]; // add y accelerometer values
accelOffsets[2] += rawAccel[2]; // add z accelerometer values
Serial.print(accelOffsets[2]);
Serial.print("\t\t");
Serial.print(rawAccel[2]);
Serial.print(" \t FIRST I:");
Serial.println(i);
}
for (int i=0; i<3; i++)
{
accelOffsets[i] = accelOffsets[i] / sampleSize;
Serial.print("Second I:");
Serial.println(i);
}
Serial.println("Accelerometer calibration complete");
Serial.println("Accelerometer Offsets:");
Serial.print("Offsets (x,y,z): ");
Serial.print(accelOffsets[0]);
Serial.print(", ");
Serial.print(accelOffsets[1]);
Serial.print(", ");
Serial.println(accelOffsets[2]);
}
and the readAccelData() function is as follows
void readAccelData(int16_t * destination){
// x/y/z accel register data stored here
uint8_t rawData[6];
// Read the six raw data registers into data array
I2Cdev::readBytes(MPU6050_ADDRESS, ACCEL_XOUT_H, 6, &rawData[0]);
// Turn the MSB and LSB into a signed 16-bit value
destination[0] = (int16_t)((rawData[0] << 8) | rawData[1]) ;
destination[1] = (int16_t)((rawData[2] << 8) | rawData[3]) ;
destination[2] = (int16_t)((rawData[4] << 8) | rawData[5]) ;
}
Any idea where I am going wrong?
You have two problems:
You do not initialise your arrays. They start out holding garbage (space is allocated, but not cleared). You can initialise an array to all zeros by doing:
int array[5] = {};
This will result in an array that initially looks like [0,0,0,0,0]
Your second problem is that you are creating an array of 16-bit signed integers.
A 16-bit integer can store 65536 different values, but because you are using a signed type, only 32767 of them are positive. You are overflowing and getting negative numbers when you try to store a sum larger than that.
The Arduino also supports 32-bit integers (long on AVR boards). If you only want positive numbers, then use an unsigned type.
To use an explicit 32-bit integer:
#include <stdint.h>
int32_t my_int = 0;
Some info on standard variable sizes (note that they can be different sizes based on the arduino model the code is built for):
https://www.arduino.cc/en/Reference/Int
On the Arduino Uno (and other ATMega based boards) an int stores a
16-bit (2-byte) value. This yields a range of -32,768 to 32,767
(minimum value of -2^15 and a maximum value of (2^15) - 1). On the
Arduino Due and SAMD based boards (like MKR1000 and Zero), an int
stores a 32-bit (4-byte) value. This yields a range of -2,147,483,648
to 2,147,483,647 (minimum value of -2^31 and a maximum value of (2^31)
- 1).
https://www.arduino.cc/en/Reference/UnsignedInt
On the Uno and other ATMEGA based boards, unsigned ints (unsigned
integers) are the same as ints in that they store a 2 byte value.
Instead of storing negative numbers however they only store positive
values, yielding a useful range of 0 to 65,535 (2^16 - 1).
The Due stores a 4 byte (32-bit) value, ranging from 0 to
4,294,967,295 (2^32 - 1).
https://www.arduino.cc/en/Reference/UnsignedLong
Unsigned long variables are extended size variables for number
storage, and store 32 bits (4 bytes). Unlike standard longs unsigned
longs won't store negative numbers, making their range from 0 to
4,294,967,295 (2^32 - 1).
With this code:
void calibrateSensors(int16_t * accelOffsets){
int16_t rawAccel[3];
// [...]
accelOffsets[0] += rawAccel[0];
There's an obvious problem: you are adding two 16-bit signed integers here. The maximum value of a 16-bit signed integer is 0x7fff (the first bit is used as the sign bit), in decimal 32767.
Given your first two sample numbers, 17112 + 17116 is already 34228, so you're overflowing your integer type.
Overflowing a signed integer is undefined behavior in C, because different implementations could use different representations for negative numbers. For a program with undefined behavior, you can't expect any particular result. A very likely result is that the value will "wrap around" into the negative range.
As you already use types from stdint.h, the solution is simple: Use uint32_t for your sums, this type has enough bits for values up to 4294967295.
As a general rule: If you never need a negative value, just stick to the unsigned type. I don't see a reason why you use int16_t here, just use uint16_t.
I've got to program a function that receives
a binary number like 10001, and
a decimal number that indicates how many shifts I should perform.
The problem is that if I use the C++ operator <<, zeroes are pushed in from the right but the leading bits aren't dropped... For example
shifLeftAddingZeroes(10001,1)
returns 100010 instead of 00010 that is what I want.
I hope I've made myself clear =P
I assume you are storing that information in an int. Take into consideration that this number actually has more leading zeroes than what you see; if your number is 16 bits, 10001 is really 00000000 00010001. Maybe try AND-ing it with a number having as many 1s as the number of bits you want to keep after shifting? (Assuming you want to stick to bitwise operations.)
What you want is to bit shift and then limit the number of output bits which can be active (hold a value of 1). One way to do this is to create a mask for the number of bits you want, then AND the bitshifted value with that mask. Below is a code sample for doing that; just replace int_type with the type of value you're using -- or make it a template type.
int_type shiftLeftLimitingBitSize(int_type value, int numshift, int_type numbits=some_default) {
int_type mask = 0;
for (unsigned int bit=0; bit < numbits; bit++) {
mask += 1 << bit;
}
return (value << numshift) & mask;
}
Your output for 10001,1 would now be shiftLeftLimitingBitSize(0b10001, 1, 5) == 0b00010.
Realize that unless your numbits is exactly the length of your integer type, you will always have excess 0 bits on the 'front' of your number.