Arduino - How to convert uint16_t to Hex - c++

I'm currently working on a sketch that will send data to 3 TLC5971 LED drivers. I need to accept a brightness level in the form uint16_t and then convert that value into two 8-bit hex numbers to pass to the LED drivers.
Example 65535 --> 0xFF and 0xFF
Is there any relatively easy way to do this? I've found several methods that return arrays of characters and whatnot but they don't seem to be easy to implement.
Does anyone have any experience doing anything similar?

Try this:
uint16_t value;
uint8_t msb = (value & 0xFF00U) >> 8U;
uint8_t lsb = (value & 0x00FFU);
Edit 1:
You could complicate matters and use a structure:
struct MSB_LSB
{
    unsigned int LSB : 8;   // declared first so it maps to the low byte on little-endian targets such as AVR
    unsigned int MSB : 8;
};

uint16_t value = 0x55aa;
MSB_LSB parts;
memcpy(&parts, &value, sizeof(parts));  // a plain cast from uint16_t to the struct will not compile
uint8_t msb = parts.MSB;  // 0x55
uint8_t lsb = parts.LSB;  // 0xaa
Keep in mind that bit-field layout is implementation-defined, so this is less portable than the mask-and-shift version above.
BTW, decimal, hexadecimal and octal are just representations. Representations are for humans to understand; the computer internally stores numbers in whatever form is convenient for the processor. So when you write a decimal literal, the compiler converts it into that internal form, and we humans can then view the number in whichever representation is easiest for us to understand.
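To make that concrete on the Arduino side, the same stored bits can be printed in several representations (a small illustration, assuming Serial has already been initialized in setup()):
uint16_t value = 65535;
Serial.println(value);       // 65535  (decimal representation)
Serial.println(value, HEX);  // FFFF   (same stored bits, hexadecimal)
Serial.println(value, BIN);  // 1111111111111111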

uint16_t value = 0x1234; // Use example with different high/low byte values
unsigned char high_byte = value >> 8; // = 0x12
unsigned char low_byte = value & 0xFF; // = 0x34

using a union:
typedef union
{
    unsigned short s;
    unsigned char b[2];
} short2bytes_t;

unsigned short value = 65535;
unsigned char MSB, LSB;
short2bytes_t s2b;
s2b.s = value;
MSB = s2b.b[1];  // b[1] holds the high byte only on little-endian targets such as AVR
LSB = s2b.b[0];
printf("%d -> 0x%02X 0x%02X\n", value, MSB, LSB);

Related

2 int8_t's to uint16_t and back

I want to support some serial device in my application.
This device is used with another program and I want to interact with both the device and the save files this program creates.
Yet for some as-yet-undiscovered reason, weird integer casting is going on.
The device returns uint8's over a serial USB connection, the program saves them as int8's to a file, and when you read the file you need to combine 2 int8's into a single uint16.
So when writing the save-file after reading the device, I need to convert a uint8 to an int8, resulting in any value higher than 127 being written as a negative number.
Then when I read the save file, I need to combine 2 int8's into a single uint16.
(So convert the negative value to positive and then stick them together)
And then when I save to a save file from within my application, I need to split my uint16 into 2 int8's.
I need to come up with the functions "encode", "combine" and "explode":
// When I read the device and write to the save file:
uint8_t val_1 = 255;
int8_t val_2 = encode(val_1);
REQUIRE(-1 == val_2);
// When I read from the save file to use it in my program.
int8_t val_1 = 7;
int8_t val_2 = -125;
uint16_t val_3 = combine(val_1, val_2);
REQUIRE(1923 == val_3);
// When I export data from my program to the device save-file
int8_t val_4;
int8_t val_5;
explode(val_3, &val_4, &val_5);
REQUIRE(7 == val_4);
REQUIRE(-125 == val_5);
Can anyone give me a head start here?
Your encode method can just be an assignment. Converting an out-of-range unsigned value to a signed integer type is implementation-defined before C++20 (and defined as wrapping modulo 2^N from C++20 on), but on any ordinary two's-complement platform the assignment gives exactly the value you want.
uint8_t val_1 = 255;
int8_t val_2 = val_1;
REQUIRE(-1 == val_2);
As for combine - you'll want to cast your first value to a uint16_t to ensure you have enough bits available, and then bitshift it left by 8 bits. This causes the bits from your first value to make up the 8 most significant bits of your new value (the 8 least significant bits are zero). You can then add your second value, which will set the 8 least significant bits.
uint16_t combine(uint8_t a, uint8_t b) {
    return ((uint16_t)a << 8) + b;
}
Explode is just going to be the opposite of this. You need to bitshift right 8 bits to get the first output value, and then just simply assign to get the lowest 8 bits.
void explode(uint16_t from, int8_t &to1, int8_t &to2) {
    // This gets the lowest 8 bits, and implicitly converts
    // from unsigned to signed
    to2 = from;
    // Move the 8 most significant bits to be the 8 least
    // significant bits, and then just assign as we did
    // for the other value
    to1 = (from >> 8);
}
As a full program:
#include <cassert>
#include <cstdint>

int8_t encode(uint8_t from) {
    // implicit conversion from unsigned to signed
    return from;
}

uint16_t combine(uint8_t a, uint8_t b) {
    return ((uint16_t)a << 8) + b;
}

void explode(uint16_t from, int8_t &to1, int8_t &to2) {
    to2 = from;
    to1 = (from >> 8);
}

int main() {
    uint8_t val_1 = 255;
    int8_t val_2 = encode(val_1);
    assert(-1 == val_2);

    // When I read from the save file to use it in my program.
    val_1 = 7;
    val_2 = -125;
    uint16_t val_3 = combine(val_1, val_2);
    assert(1923 == val_3);

    // When I export data from my program to the device save-file
    int8_t val_4;
    int8_t val_5;
    explode(val_3, val_4, val_5);
    assert(7 == val_4);
    assert(-125 == val_5);
}
For further reading on bit-manipulation mechanics, you could take a look here.

Endian-safe conversion from uint16 with value less than 256 to uint8

I am interacting with an API that returns uint16_t values; in this case I know that the value is never going to exceed 255. I need to convert the value to a uint8_t for usage with a separate API. I am currently doing this in the following way:
uint16_t u16_value = 100;
uint8_t u8_value = u16_value << 8;
This solution currently exposes endianness issues if moving from a little-endian (my current system) to a big-endian system.
What is the best way to mitigate against this?
From cppreference
For unsigned and positive a, the value of a << b is the value of a * 2**b, reduced modulo maximum value of the return type plus 1 (that is, bitwise left shift is performed and the bits that get shifted out of the destination type are discarded).
There's nothing about endianness here. You can just do
uint16_t u16_value = 100;
uint8_t u8_value = u16_value;
or
uint16_t u16_value = 100;
uint8_t u8_value = static_cast<uint8_t>(u16_value);
to be explicit.

How to grab specific bits from a 256 bit message?

I'm using winsock to receive udp messages 256 bits long. I use 8 32-bit integers to hold the data.
int32_t dataReceived[8];
recvfrom(client, (char *)&dataReceived, 8 * sizeof(int), 0, &fromAddr, &fromLen);
I need to grab specific bits like, bit #100, #225, #55, etc. So some bits will be in dataReceived[3], some in dataReceived[4], etc.
I was thinking I need to bitshift each array, but things got complicated. Am I approaching this all wrong?
Why are you using int32_t type for buffer elements and not uint32_t?
I usually use something like this:
int bit_needed = 100;
uint32_t the_bit = dataReceived[bit_needed>>5] & (1U << (bit_needed & 0x1F));
Or you can use this one (though note that right-shifting a signed integer is implementation-defined when the sign bit is set):
int bit_needed = 100;
uint32_t the_bit = (dataReceived[bit_needed>>5] >> (bit_needed & 0x1F)) & 1U;
In the other answers you can access only the lowest 8 bits of each int32_t.
When you count bits and bytes from 0:
int bit_needed = 100;
So:
int byte = int(bit_needed / 8);
int bit = bit_needed % 8;
int the_bit = dataReceived[byte] & (1 << bit);
If the required bit contains 0, then the_bit will be zero. If it's 1, then the_bit will hold 2 to the power of that bit's ordinal place within the byte.
You can make a small function to do the job.
uint8_t checkbit(uint32_t *dataReceived, int bitToCheck)
{
    int byte = bitToCheck / 32;
    int bit = bitToCheck - byte * 32;
    if (dataReceived[byte] & (1U << bit))
        return 1;
    else
        return 0;
}
Note that you should use uint32_t rather than int32_t if you are doing bit shifting. Shifts on signed integers can lead to unwanted results, especially if the most significant bit is 1.
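A short illustration of that pitfall (a sketch; the exact result of the signed shift is up to the compiler):
#include <cstdint>
int32_t  s = INT32_MIN;       // sign bit set
uint32_t u = 0x80000000u;
// (s >> 31) is implementation-defined; most compilers replicate the sign bit, giving -1 (all ones)
// (u >> 31) is well-defined and equals 1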
You can use a macro in C or C++ to check for specific bit:
#define bit_is_set(var,bit) ((var) & (1 << (bit)))
and then a simple if:
if (bit_is_set(message, 29)) {
    // bit is set
}

Converting 24 bit integer (2s complement) to 32 bit integer in C++

The dataFile.bin is a binary file with 6-byte records. The first 3
bytes of each record contain the latitude and the last 3 bytes contain
the longitude. Each 24 bit value represents radians multiplied by
0X1FFFFF
This is a task I've been working on. I haven't done C++ in years so it's taking me way longer than I thought it would -_-. After googling around I saw this algorithm, which made sense to me.
int interpret24bitAsInt32(byte[] byteArray) {
    int newInt = (
        ((0xFF & byteArray[0]) << 16) |
        ((0xFF & byteArray[1]) << 8) |
        (0xFF & byteArray[2])
    );
    if ((newInt & 0x00800000) > 0) {
        newInt |= 0xFF000000;
    } else {
        newInt &= 0x00FFFFFF;
    }
    return newInt;
}
The problem is a syntax issue; I am restricted to working within the way the other guy had programmed this. I am not understanding how I can store the CHAR "data" into an INT. Wouldn't it make more sense if "data" was an Array? Since its receiving 24 integers of information stored into a BYTE.
double BinaryFile::from24bitToDouble(char *data) {
int32_t iValue;
// ****************************
// Start code implementation
// Task: Fill iValue with the 24bit integer located at data.
// The first byte is the LSB.
// ****************************
//iValue +=
// ****************************
// End code implementation
// ****************************
return static_cast<double>(iValue) / FACTOR;
}
bool BinaryFile::readNext(DataRecord &record)
{
const size_t RECORD_SIZE = 6;
char buffer[RECORD_SIZE];
m_ifs.read(buffer,RECORD_SIZE);
if (m_ifs) {
record.latitude = toDegrees(from24bitToDouble(&buffer[0]));
record.longitude = toDegrees(from24bitToDouble(&buffer[3]));
return true;
}
return false;
}
double BinaryFile::toDegrees(double radians) const
{
static const double PI = 3.1415926535897932384626433832795;
return radians * 180.0 / PI;
}
I appreciate any help or hints; even if you don't understand, a clue or hint will help me a lot. I just need to talk to someone.
I am not understanding how I can store the CHAR "data" into an INT.
Since char is a numeric type, there is no problem combining them into a single int.
Since its receiving 24 integers of information stored into a BYTE
It's 24 bits, not bytes, so there are only three integer values that need to be combined.
An easier way of producing the same result without using conditionals is as follows:
int interpret24bitAsInt32(byte[] byteArray) {
    return (
        (byteArray[0] << 24)
        | (byteArray[1] << 16)
        | (byteArray[2] << 8)
    ) >> 8;
}
The idea is to store the three bytes supplied as an input into the upper three bytes of the four-byte int, and then shift it down by one byte. This way the program would sign-extend your number automatically, avoiding conditional execution.
Note on portability: This code is not portable, because it assumes 32-bit integer size. To make it portable use <cstdint> types:
int32_t interpret24bitAsInt32(const std::array<uint8_t, 3> byteArray) {
    return (
        (static_cast<int32_t>(byteArray[0]) << 24)
        | (static_cast<int32_t>(byteArray[1]) << 16)
        | (static_cast<int32_t>(byteArray[2]) << 8)
    ) >> 8;
}
It also assumes that the most significant byte of the 24-bit number is stored in the initial element of byteArray, then comes the middle element, and finally the least significant byte.
Note on sign extension: This code automatically takes care of sign extension by constructing the value in the upper three bytes and then shifting it to the right, as opposed to constructing the value in the lower three bytes right away. This additional shift operation ensures that C++ takes care of sign-extending the result for us.
When an unsigned char is cast to an int, the higher-order bits are filled with zeros.
When a signed char is cast to an int, the sign bit is extended.
ie:
int x;
signed char y;
unsigned char z;
y = 0xFF;
z = 0xFF;
x = y;
/* x will be 0xFFFFFFFF */
x = z;
/* x will be 0x000000FF */
So your algorithm uses 0xFF as a mask to remove C's sign extension, i.e.
0xFF == 0x000000FF
0xABCDEF10 & 0x000000FF == 0x00000010
Then it uses bit shifts and bitwise ORs to put the bits in their proper places.
Lastly, it checks the most significant bit of the 24-bit value, (newInt & 0x00800000) > 0, to decide whether to fill the highest byte with ones or zeros.
int32_t upperByte = ((int32_t) dataRx[0] << 24);
int32_t middleByte = ((int32_t) dataRx[1] << 16);
int32_t lowerByte = ((int32_t) dataRx[2] << 8);
int32_t ADCdata32 = (((int32_t) (upperByte | middleByte | lowerByte)) >> 8); // Right-shift of signed data maintains signed bit

How to store double - endian independent

Despite the fact that big-endian computers are not very widely used, I want to store the double datatype in an endianness-independent format.
For int, this is really simple, since bit shifts make that very convenient.
int number;
int size=sizeof(number);
char bytes[size];
for (int i=0; i<size; ++i)
bytes[size-1-i] = (number >> 8*i) & 0xFF;
This code snippet stores the number in big-endian format, regardless of the machine it is being run on. What is the most elegant way to do this for double?
The best way for portability and taking format into account, is serializing/deserializing the mantissa and the exponent separately. For that you can use the frexp()/ldexp() functions.
For example, to serialize:
int exp;
unsigned long long mant;
mant = (unsigned long long)(ULLONG_MAX * frexp(number, &exp));
// then serialize exp and mant (note that the sign has to be handled separately for negative numbers).
And then to deserialize:
// deserialize to exp and mant.
double result = ldexp ((double)mant / ULLONG_MAX, exp);
The elegant thing to do is to limit the endianness problem to as small a scope as possible. That narrow scope is the I/O boundary between your program and the outside world. For example, the functions that send binary data to / receive binary data from some other application need to be aware of the endian problem, as do the functions that write binary data to / read binary data from some data file. Make those interfaces cognizant of the representation problem.
Make everything else blissfully ignorant of the problem. Use the local representation everywhere else. Represent a double precision floating point number as a double rather than an array of 8 bytes, represent a 32 bit integer as an int or int32_t rather than an array of 4 bytes, et cetera. Dealing with the endianness problem throughout your code is going to make your code bloated, error prone, and ugly.
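As a minimal sketch of what such a boundary function could look like (the helper names and the big-endian wire order here are illustrative assumptions, and it presumes IEEE-754 doubles the same size as uint64_t):
#include <cstdint>
#include <cstring>

// Hypothetical I/O-boundary helpers: the rest of the program keeps working with
// plain double; only these two functions know anything about byte order.
void write_double_be(double value, unsigned char out[8])
{
    uint64_t bits;
    std::memcpy(&bits, &value, sizeof bits);         // grab the raw bit pattern
    for (int i = 0; i < 8; ++i)
        out[i] = (bits >> (8 * (7 - i))) & 0xFF;     // most significant byte first
}

double read_double_be(const unsigned char in[8])
{
    uint64_t bits = 0;
    for (int i = 0; i < 8; ++i)
        bits = (bits << 8) | in[i];                  // rebuild MSB-first
    double value;
    std::memcpy(&value, &bits, sizeof value);
    return value;
}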
The same. Any numeric object, including double, is ultimately several bytes which are interpreted in a specific order according to endianness. So if you reverse the order of the bytes you get exactly the same value in the reversed endianness.
char *src_data;   // N doubles' worth of raw bytes
char *dst_data;   // destination buffer of the same size
for (size_t i = 0; i < N * sizeof(double); i++) *dst_data++ = src_data[i ^ mask];
// where mask = 7, if native == little endian
//       mask = 0, if native == big endian
The elegance lies in mask, which also handles short and integer types: it's sizeof(elem) - 1 if the target and source endianness differ.
Not very portable and standards violating, but something like this:
std::array<unsigned char, 8> serialize_double( double const* d )
{
    std::array<unsigned char, 8> retval;
    char const* begin = reinterpret_cast<char const*>(d);
    union
    {
        uint8_t  i8s[8];
        uint16_t i16s[4];
        uint32_t i32s[2];
        uint64_t i64s;
    } u;
    u.i64s = 0x0001020304050607ull; // one byte order
    // u.i64s = 0x0706050403020100ull; // the other byte order
    for (size_t index = 0; index < 8; ++index)
    {
        retval[ u.i8s[index] ] = begin[index];
    }
    return retval;
}
might handle a platform with 8 bit chars, 8 byte doubles, and any crazy-ass byte ordering (ie, big endian in words but little endian between words for 64 bit values, for example).
Now, this doesn't cover the endianness of doubles being different than that of 64 bit ints.
An easier approach might be to cast your double into a 64 bit unsigned value, then output that as you would any other int.
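A minimal sketch of that last idea, assuming IEEE-754 doubles the same size as uint64_t; note that memcpy (not a cast) is needed, because a plain cast would convert the value rather than reinterpret its bits, and the helper name here is just illustrative:
#include <cstdint>
#include <cstring>

uint64_t double_bits(double d)
{
    uint64_t u;
    std::memcpy(&u, &d, sizeof u);  // copy the raw bit pattern
    return u;
}
// The resulting uint64_t can then be written out byte by byte,
// exactly like the int example in the question.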
void reverse_endian(double number, char (&bytes)[sizeof(double)])
{
    const int size = sizeof(number);
    memcpy(bytes, &number, size);
    for (int i = 0; i < size/2; ++i)
        std::swap(bytes[i], bytes[size-i-1]);
}