Converting 4 bytes to signed and unsigned ints - C++

I want to convert a four-byte string to either an int32 or a uint32 in C++.
This answer helped me to write the following code:
#include <string>
#include <iostream>
#include <stdint.h>
int main()
{
    std::string a("\xaa\x00\x00\xaa", 4);
    int u = *(int *) a.c_str();
    int v = *(unsigned int *) a.c_str();
    int x = *(int32_t *) a.c_str();
    int y = *(uint32_t *) a.c_str();
    std::cout << a << std::endl;
    std::cout << "int: " << u << std::endl;
    std::cout << "uint: " << v << std::endl;
    std::cout << "int32_t: " << x << std::endl;
    std::cout << "uint32_t: " << y << std::endl;
    return 0;
}
However, the outputs are all the same: the int (and int32_t) values are correct, but the unsigned ones are wrong. Why don't the unsigned conversions work?
// output
int: -1442840406
uint: -1442840406
int32_t: -1442840406
uint32_t: -1442840406
Python's struct.unpack gives the right conversion:
In [1]: import struct
In [2]: struct.unpack("<i", b"\xaa\x00\x00\xaa")
Out[2]: (-1442840406,)
In [3]: struct.unpack("<I", b"\xaa\x00\x00\xaa")
Out[3]: (2852126890,)
I would also like a similar solution to work for int16 and uint16, but first things first, since I guess an extension would be trivial if I manage to solve this problem.

You need to store the unsigned values in unsigned variables and it will work:
#include <string>
#include <iostream>
#include <stdint.h>
int main()
{
    std::string a("\xaa\x00\x00\xaa", 4);
    int u = *(int *) a.c_str();
    unsigned int v = *(unsigned int *) a.c_str();
    int32_t x = *(int32_t *) a.c_str();
    uint32_t y = *(uint32_t *) a.c_str();
    std::cout << a << std::endl;
    std::cout << "int: " << u << std::endl;
    std::cout << "uint: " << v << std::endl;
    std::cout << "int32_t: " << x << std::endl;
    std::cout << "uint32_t: " << y << std::endl;
    return 0;
}
When you cast the value to unsigned but then store it in a signed variable, the bit pattern is stored unchanged; what matters later is the variable's declared type. When you print a signed variable, operator<< is chosen by that type, so you get signed output no matter how the value got there.
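The 16-bit extension the question asks about is just as easy. A minimal sketch, using memcpy instead of the pointer casts above (memcpy copies the raw bytes without the strict-aliasing concerns that pointer casts can raise):
#include <string>
#include <iostream>
#include <cstring>
#include <stdint.h>
int main()
{
    std::string a("\x00\xaa", 2);  // 0xaa00 on a little-endian machine
    int16_t x;
    uint16_t y;
    std::memcpy(&x, a.c_str(), sizeof x);  // copy the raw bytes into each typed variable
    std::memcpy(&y, a.c_str(), sizeof y);
    std::cout << "int16_t: " << x << std::endl;   // -22016
    std::cout << "uint16_t: " << y << std::endl;  // 43520
    return 0;
}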

Related

How to initialize an unsigned integer uint8_t

How do I use uint8_t and initialize a variable?
#include <iostream>
#include <cstdint>  // for uint8_t
using namespace std;
int main()
{
    uint8_t a = 6;
    cout << a;
    return 1;
}
It prints some symbol instead of the number 6.
C++ treats uint8_t as char - because that's pretty much what it is.
If you pass a char to cout, it'll print as a char, which, with a value of 6, is the ACK symbol (which would probably display strangely, depending on your terminal settings).
If you want it to be printed as a number, casting it to an unsigned in cout should do the trick:
cout << (unsigned)a;
You can cast the variable a in order to print it as a number and not an ASCII symbol:
#include <iostream>
#include <cstdint>
#include <cstdio>  // for getchar
int main()
{
    uint8_t a = 6;
    std::cout << "a: " << a << std::endl;
    std::cout << "a cast to char (it is effectively the same type): " << char(a) << std::endl;
    std::cout << "a cast to int: " << int(a) << std::endl;
    getchar();  // keep the console window open
    return 0;
}
You can use good old type-unsafe printf.
#include <cstdint>
#include <cstdio>
int main()
{
    std::uint8_t a = 6;
    std::printf("%d\n", a);
}
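The %d works because uint8_t undergoes printf's default argument promotions and arrives as an int. If you want an exact-width specifier instead, the standard <cinttypes> header provides the PRIu8 macro:
#include <cinttypes>
#include <cstdio>
int main()
{
    std::uint8_t a = 6;
    // PRIu8 expands to the conversion specifier matching uint8_t
    std::printf("%" PRIu8 "\n", a);
}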

C++ - Convert float to unsigned char array and then back to float

When I try to convert a float to an unsigned char array and then back to a float, I'm not getting the original float value. Even when I look at the bits of the float array, I'm seeing a bunch of different bits set than what were set originally.
Here is an example that I made in a Qt Console application project.
Edit: My original code below contains some mistakes that were pointed out in the comments, but I wanted to make my intent clear so that it doesn't confuse future visitors who visit this question.
I was basically trying to shift the bits and OR them back into a single float, but I forgot the shifting part. Plus, I now don't think you can do bitwise operations on floats, and it is kind of hacky anyway. I also thought the std::bitset constructor took more types in C++11, but that's not true, so my float was being implicitly converted. Finally, I should have been using reinterpret_cast instead when trying to cast to my new float.
#include <QCoreApplication>
#include <iostream>
#include <bitset>
#include <cstring>  // for memcpy
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    const float f = 3.2;
    unsigned char b[sizeof(float)];
    memcpy(b, &f, sizeof(f));
    const float newF = static_cast<float>(b[0] | b[1] | b[2] | b[3]);
    std::cout << "Original float: " << f << std::endl;
    // I expect "newF" to have the same value as "f"
    std::cout << "New float: " << newF << std::endl;
    std::cout << "bitset of original float: " << std::bitset<32>(f) << std::endl;
    std::cout << "bitset of combined new float: " << std::bitset<32>(newF) << std::endl;
    std::cout << "bitset of each float byte: " << std::endl;
    std::cout << " b[0]: " << std::bitset<8>(b[0]) << std::endl;
    std::cout << " b[1]: " << std::bitset<8>(b[1]) << std::endl;
    std::cout << " b[2]: " << std::bitset<8>(b[2]) << std::endl;
    std::cout << " b[3]: " << std::bitset<8>(b[3]) << std::endl;
    return a.exec();
}
Here is the output from the code above:
Original float: 3.2
New float: 205
bitset of original float: 00000000000000000000000000000011
bitset of combined new float: 00000000000000000000000011001101
bitset of each float byte:
b[0]: 11001101
b[1]: 11001100
b[2]: 01001100
b[3]: 01000000
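For the record, the shift-and-OR idea can be made to work, just not on a float directly: OR the shifted bytes into a uint32_t first, then copy that bit pattern into the float. A sketch of that variant, assuming little-endian byte order as in the output above:
#include <cstdint>
#include <cstring>
#include <iostream>
int main()
{
    const float f = 3.2f;
    unsigned char b[sizeof(float)];
    std::memcpy(b, &f, sizeof f);
    // Bitwise operators are defined for integers, not floats,
    // so reassemble the bytes into a uint32_t with shifts.
    std::uint32_t bits = static_cast<std::uint32_t>(b[0])
                       | static_cast<std::uint32_t>(b[1]) << 8
                       | static_cast<std::uint32_t>(b[2]) << 16
                       | static_cast<std::uint32_t>(b[3]) << 24;
    // Copy the bit pattern (not the numeric value) back into a float.
    float newF;
    std::memcpy(&newF, &bits, sizeof newF);
    std::cout << "Round-tripped float: " << newF << std::endl;  // prints 3.2
    return 0;
}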
A previous answer and comment that have been deleted (not sure why) led me to use memcpy:
const float f = 3.2;
unsigned char b[sizeof(float)];
memcpy(b, &f, sizeof(f));

float newF = 0.0;
memcpy(&newF, b, sizeof(float));  // newF now holds the original value

Different behaviour of static_cast

I am trying to convert a uint64_t to a uint8_t. I know this makes no sense normally, but since the JSON library converts all numeric values to uint64_t or int64_t, I have to convert them back. I am always certain the values I receive will fit into a uint8_t.
Now when I compile and run the following code on OS X, everything works as expected. But as soon as I move to a Raspberry Pi 2, the code no longer works: the value is 0.
Can anybody explain why this is happening? And does somebody have a better solution?
#include <iostream>
#include "json.h"
using JSON = nlohmann::json;
typedef struct {
    uint8_t boardId;
    uint8_t commandGroupId;
    uint8_t commandId;
} ExternalMessageType;
int main(int argc, const char * argv[])
{
    JSON x;
    ExternalMessageType y;
    x["board-id"] = 1;
    x["command-group-id"] = 1;
    x["command-id"] = 11;
    y.boardId = static_cast<uint8_t>(x["board-id"]);
    y.commandGroupId = static_cast<uint8_t>(x["command-group-id"]);
    y.commandId = static_cast<uint8_t>(x["command-id"]);
    std::cout << "Board: " << (int)y.boardId << std::endl;
    std::cout << "Group: " << (int)y.commandGroupId << std::endl;
    std::cout << "Command: " << (int)y.commandId << std::endl;
    if (y.commandGroupId == 1) {
        std::cout << "Command Group is ok." << std::endl;
        switch (y.commandId) {
            case 11: {
                std::cout << "Speed Message" << std::endl;
            } break;
        }
    } else {
        std::cout << "Command Group is not ok." << std::endl;
    }
    return 0;
}
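No answer is recorded here, but one way to take the platform-dependent implicit conversion out of the picture is nlohmann::json's explicit get<T>() accessor. A hedged sketch, replacing the three static_cast lines above:
// Ask the library for the target type directly instead of
// static_cast-ing the value that operator[] converts to.
y.boardId = x["board-id"].get<uint8_t>();
y.commandGroupId = x["command-group-id"].get<uint8_t>();
y.commandId = x["command-id"].get<uint8_t>();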

INTtoCHAR function couts wrong value

Here is what my functions look like:
signed char INTtoCHAR(int INT)
{
    signed char CHAR = (signed char)INT;
    return CHAR;
}

int CHARtoINT(signed char CHAR)
{
    int INT = (int)CHAR;
    return INT;
}
Assigning the int value to the char works properly, but when I cout that char it gives me some weird signs. It compiles without errors.
My testing code is:
int main()
{
    int x = 5;
    signed char after;
    char compare = '5';
    after = INTtoCHAR(5);
    if (after == 5)
    {
        std::cout << "after:" << after << "/ compare: " << compare << std::endl;
    }
    return 0;
}
after is indeed 5, but it doesn't print 5. Any ideas?
Adding to the other answer's use of the unary + operator, there is another way as well: a cast.
std::cout << "after:" << (int)after << "/ compare: " << compare << std::endl;
This prints the correct output.
Use +after while printing, instead of after. Unary + promotes after to int, so it prints as a number.
So change this:
std::cout << "after:" << after << ", compare: " << compare << std::endl;
to this:
std::cout << "after:" << +after << ", compare: " << compare << std::endl;
For more, see this answer.
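Putting the two fixes together, a minimal complete sketch:
#include <iostream>
signed char INTtoCHAR(int value)
{
    return (signed char)value;
}
int main()
{
    signed char after = INTtoCHAR(5);
    // Unary + promotes the char to int, so it prints numerically.
    std::cout << "unary plus: " << +after << std::endl;
    // An explicit cast achieves the same thing.
    std::cout << "cast: " << (int)after << std::endl;
    return 0;
}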

C++ cout hex format

I am a C coder, new to C++.
I tried to print the following with cout and got strange output. Any comment on this behaviour is appreciated.
#include <iostream>
using namespace std;
int main()
{
    unsigned char x = 0xff;
    cout << "Value of x " << hex << x << " hexadecimal" << endl;
    printf(" Value of x %x by printf", x);
}
output:
Value of x ΓΏ hexadecimal
Value of x ff by printf
<< handles char as a 'character' that you want to output, and just outputs that byte exactly. The hex only applies to integer-like types, so the following will do what you expect:
cout << "Value of x " << hex << int(x) << " hexadecimal" << endl;
Billy ONeal's suggestion of static_cast would look like this:
cout << "Value of x " << hex << static_cast<int>(x) << " hexadecimal" << endl;
You are doing the hex part correctly, but x is a character, and C++ is trying to print it as a character. You have to cast it to an integer.
#include <iostream>
using namespace std;
int main()
{
    unsigned char x = 0xff;
    cout << "Value of x " << hex << static_cast<int>(x) << " hexadecimal" << endl;
    printf(" Value of x %x by printf", x);
}
If I understand your question correctly, you may also want to convert the hex back to decimal, since you have already assigned unsigned char x = 0xff;
#include <iostream>
int main()
{
    unsigned char x = 0xff;
    std::cout << std::dec << static_cast<int>(x) << std::endl;
}
which gives the value 255 instead.
Further detail on the dec stream manipulator is at http://www.cplusplus.com/reference/ios/dec/.
If you want to know the hexadecimal value from the decimal one, here is a simple example
#include <iostream>
#include <iomanip>
int main()
{
    int x = 255;
    std::cout << std::showbase << std::setw(4) << std::hex << x << std::endl;
}
which prints 0xff.
The <iomanip> header is needed only for std::setw; std::showbase is what puts the 0x ahead of the ff. The original reply about hex number printing is at http://www.cplusplus.com/forum/windows/51591/.
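If you also want leading zeros inside the field (setw pads with spaces by default), std::setfill handles that. A small sketch:
#include <iostream>
#include <iomanip>
int main()
{
    int x = 10;
    // Prints 0x0a: manual "0x" prefix, two hex digits padded with '0'.
    std::cout << "0x" << std::setw(2) << std::setfill('0') << std::hex << x << std::endl;
}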