When I try to convert a float to an unsigned char array and then back to a float, I'm not getting the original float value. Even when I look at the bits of the float array, I'm seeing a bunch of different bits set than what were set originally.
Here is an example that I made in a Qt Console application project.
Edit: My original code below contains some mistakes that were pointed out in the comments, but I wanted to make my intent clear so that it doesn't confuse future visitors to this question.
I was basically trying to shift the bits and OR them back into a single float, but I forgot the shifting part. Plus, I now don't think you can do bitwise operations on floats, and that is kind of hacky anyway. I also thought the std::bitset constructor accepted more types in C++11, but I don't think that's true, so the float was being implicitly converted. Finally, I should have been using reinterpret_cast when trying to cast to my new float.
#include <QCoreApplication>
#include <iostream>
#include <bitset>
#include <cstring>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    const float f = 3.2;
    unsigned char b[sizeof(float)];
    memcpy(b, &f, sizeof(f));
    const float newF = static_cast<float>(b[0] | b[1] | b[2] | b[3]);

    std::cout << "Original float: " << f << std::endl;
    // I expect "newF" to have the same value as "f"
    std::cout << "New float: " << newF << std::endl;
    std::cout << "bitset of original float: " << std::bitset<32>(f) << std::endl;
    std::cout << "bitset of combined new float: " << std::bitset<32>(newF) << std::endl;
    std::cout << "bitset of each float bit: " << std::endl;
    std::cout << "  b[0]: " << std::bitset<8>(b[0]) << std::endl;
    std::cout << "  b[1]: " << std::bitset<8>(b[1]) << std::endl;
    std::cout << "  b[2]: " << std::bitset<8>(b[2]) << std::endl;
    std::cout << "  b[3]: " << std::bitset<8>(b[3]) << std::endl;

    return a.exec();
}
Here is the output from the code above:
Original float: 3.2
New float: 205
bitset of original float: 00000000000000000000000000000011
bitset of combined new float: 00000000000000000000000011001101
bitset of each float bit:
b[0]: 11001101
b[1]: 11001100
b[2]: 01001100
b[3]: 01000000
A previous answer and comment that have been deleted (not sure why) led me to use memcpy:
const float f = 3.2;
unsigned char b[sizeof(float)];
memcpy(b, &f, sizeof(f));
float newF = 0.0;
memcpy(&newF, b, sizeof(float));
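For anyone curious what the shift-and-OR idea would look like if done through an unsigned integer rather than a float, here is a minimal sketch. It assumes a 4-byte float and a little-endian layout, so it is not portable the way the plain memcpy above is:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    const float f = 3.2f;

    // Copy the float's object representation into raw bytes.
    unsigned char b[sizeof(float)];
    std::memcpy(b, &f, sizeof f);

    // Reassemble the bytes into a 32-bit unsigned integer with shift-and-OR.
    // Assumes little-endian: b[0] is the least significant byte.
    std::uint32_t bits = 0;
    bits |= static_cast<std::uint32_t>(b[0]);
    bits |= static_cast<std::uint32_t>(b[1]) << 8;
    bits |= static_cast<std::uint32_t>(b[2]) << 16;
    bits |= static_cast<std::uint32_t>(b[3]) << 24;

    // Bitwise operators don't exist for float, so go back through memcpy.
    float newF = 0.0f;
    std::memcpy(&newF, &bits, sizeof newF);

    std::cout << "Original: " << f << "  reassembled: " << newF << '\n';
}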
Related
Source Code:
#include <iostream>
using namespace std;

int main() {
    unsigned long P;

    P = 0x7F << 24;
    cout << P << endl;

    P = 0x80 << 24;
    cout << P << endl;

    return 0;
}
Output:
2130706432
18446744071562067968
As you can see, the first result is correct, but the second result is extremely wrong. The expected result is 2147483648, which does not match 18446744071562067968. I want to know why.
The type of the expression 0x80 << 24 is not unsigned long, it's int. You then assign the result of that expression to P, and in the process convert it to an unsigned long. But at that point it has already overflowed (incidentally causing undefined behaviour). Use unsigned long literals in your expression:
P = 0x80ul << 24;
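For instance, a minimal standalone sketch checking the value after the fix:

#include <iostream>

int main()
{
    unsigned long P = 0x80ul << 24;  // the shift now happens in unsigned long
    std::cout << P << '\n';          // prints 2147483648
}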
This problem is not entirely portable, since it depends on the number of bits in your representation of unsigned long. In this case, there is an overflow followed by an underflow, and the two effects combine to produce your surprising result.
The basic solution is indicated here: ULL suffix on a numeric literal
I've broken it down in the code below.
#include <cstdint>
#include <iostream>
using namespace std;

int main() {
    cout << "sizeof(unsigned long) = " << sizeof(unsigned long) << "\n";
    cout << "sizeof(0x80) = " << sizeof(0x80) << "\n";

    int32_t a = (0x80 << 24);      // overflow: positive to negative
    uint64_t b = a;                // underflow: negative to positive
    uint64_t c = (0x80 << 24);     // simply broken
    uint64_t d = (0x80UL << 24);   // simply fixed
    uint32_t e = (0x80U << 24);    // what you probably intended

    cout << "a = " << a << "\n";
    cout << "b = " << b << "\n";
    cout << "c = " << c << "\n";
    cout << "d = " << d << "\n";
    cout << "e = " << e << "\n";
}
Output:
$ ./c-unsigned-long-cannot-hold-the-correct-number-over-2-147-483-647.cpp
sizeof(unsigned long) = 8
sizeof(0x80) = 4
a = -2147483648
b = 18446744071562067968
c = 18446744071562067968
d = 2147483648
e = 2147483648
If you're doing bit-shift operations like this, it probably makes sense to be explicit about the integer sizes (as I have shown in the code above).
What's the difference between long long and long
Fixed width integer types (since C++11)
Why am I getting blank output? The pointers are able to modify the bytes, but I can't read them. Why?
#include <iostream>
using namespace std;

int main() {
    int a = 0;
    char *x1, *x2, *x3, *x4;

    x1 = (char *)&a;
    x2 = x1; x2++;
    x3 = x2; x3++;
    x4 = x3; x4++;

    *x1 = 1;
    *x2 = 1;
    *x3 = 1;
    *x4 = 1;

    cout << "#" << *x1 << " " << *x2 << " " << *x3 << " " << *x4 << "#" << endl;
    cout << a << endl;
}
[Desktop]$ g++ test_pointer.cpp
[Desktop]$ ./a.out
# #
16843009
I want to read the value of the integer using pointers of type char, so I can read it byte by byte.
You're streaming chars. These get automatically ASCII-ised for you by IOStreams*, so you're seeing (or rather, not seeing) unprintable characters (in fact, all 0x01 bytes).
You can cast to int to see the numerical value, and perhaps add std::hex for a conventional view.
Example:
#include <iostream>
#include <iomanip>

int main()
{
    int a = 0;

    // Alias the first four bytes of `a` using `char*`
    char* x1 = (char*)&a;
    char* x2 = x1 + 1;
    char* x3 = x1 + 2;
    char* x4 = x1 + 3;

    *x1 = 1;
    *x2 = 1;
    *x3 = 1;
    *x4 = 1;

    std::cout << std::hex << std::setfill('0');
    std::cout << '#' << "0x" << std::setw(2) << (int)*x1
              << ' ' << "0x" << std::setw(2) << (int)*x2
              << ' ' << "0x" << std::setw(2) << (int)*x3
              << ' ' << "0x" << std::setw(2) << (int)*x4
              << '#' << '\n';
    std::cout << "0x" << a << '\n';
}
// Output:
// #0x01 0x01 0x01 0x01#
// 0x1010101
(live demo)
Those saying that your program has undefined behaviour are incorrect (assuming your int has at least four bytes in it); aliasing objects via char* is specifically permitted.
The 16843009 output is correct; that's equal to 0x01010101 which you'd again see if you put your stream into hex mode.
N.B. Some people will recommend reinterpret_cast<char*>(&a) and static_cast<int>(*x1), instead of C-style casts, though personally I find them ugly and unnecessary in this particular case. For the output you can at least write +*x1 to get a "free" promotion to int (via the unary + operator), but that's not terribly self-documenting.
* Technically it's something like the opposite; IOStreams usually automatically converts your numbers and booleans and things into the right ASCII characters to appear correct on screen. For char it skips that step, assuming that you're already providing the ASCII value you want.
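To make the alternatives concrete, here is a small sketch showing the C-style casts, the named casts, and the unary + promotion side by side; all three lines of output print the same value:

#include <iostream>

int main()
{
    int a = 0;

    char* p1 = (char*)&a;                    // C-style cast
    char* p2 = reinterpret_cast<char*>(&a);  // named cast, same effect

    *p1 = 1;

    std::cout << (int)*p1 << ' '               // C-style cast to int
              << static_cast<int>(*p2) << ' '  // named cast to int
              << +*p1 << '\n';                 // unary + promotes to int
}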
Assuming an int is at least 4 bytes long on your system, the program manipulates the 4 bytes of int a.
The result 16843009 is the decimal value of 0x01010101, so this is as you might expect.
You don't see anything in the first line of output because you write 4 characters with binary value 1 (0x01), which are invisible control characters (ASCII SOH).
When you modify your program like this
*x1='1';
*x2='3';
*x3='5';
*x4='7';
you will see output with the expected characters
#1 3 5 7#
926233393
The value 926233393 is the decimal representation of 0x37353331 where 0x37 is the ASCII value of the character '7' etc.
(These results are valid for a little-endian architecture.)
You can use unary + for converting character type (printed as symbol) into integer type (printed as number):
cout <<"#" << +*x1 << " " << +*x2 << " " << +*x3 << " " << +*x4 << "#"<<endl ;
See integral promotion:
Have a look at your declarations of the x's
char *x1,*x2,*x3,*x4;
these are pointers to chars (characters).
In your stream output they are interpreted as printable characters.
A quick look at the ASCII table shows that the low values are not printable.
Since your int a is zero, the bytes the x's point to are also zero.
One possibility for getting readable output is to cast the characters to int, so that the stream prints the numerical representation instead of the ASCII character:
cout <<"#" << int(*x1) << " " << int(*x2) << " " << int(*x3) << " " << int(*x4) << "#"<<endl ;
If I understood your problem correctly, this is the solution
#include <stdio.h>
#include <iostream>
using namespace std;

int main() {
    int a = 0;
    char *x1, *x2, *x3, *x4;

    x1 = (char*)&a;
    x2 = x1; x2++;
    x3 = x2; x3++;
    x4 = x3; x4++;

    *x1 = 1;
    *x2 = 1;
    *x3 = 1;
    *x4 = 1;

    cout << "#" << (int)*x1 << " " << (int)*x2 << " " << (int)*x3 << " " << (int)*x4 << "#" << endl;
    cout << a << endl;
}
I want to convert a four-byte string to either int32 or uint32 in C++.
This answer helped me to write the following code:
#include <string>
#include <iostream>
#include <stdint.h>

int main()
{
    std::string a("\xaa\x00\x00\xaa", 4);

    int u = *(int *) a.c_str();
    int v = *(unsigned int *) a.c_str();
    int x = *(int32_t *) a.c_str();
    int y = *(uint32_t *) a.c_str();

    std::cout << a << std::endl;
    std::cout << "int: " << u << std::endl;
    std::cout << "uint: " << v << std::endl;
    std::cout << "int32_t: " << x << std::endl;
    std::cout << "uint32_t: " << y << std::endl;

    return 0;
}
However, the outputs are all the same: the int (and int32_t) values are correct, but the unsigned ones are wrong. Why don't the unsigned conversions work?
// output
int: -1442840406
uint: -1442840406
int32_t: -1442840406
uint32_t: -1442840406
Python's struct.unpack gives the right conversion:
In [1]: import struct
In [2]: struct.unpack("<i", b"\xaa\x00\x00\xaa")
Out[2]: (-1442840406,)
In [3]: struct.unpack("<I", b"\xaa\x00\x00\xaa")
Out[3]: (2852126890,)
I would also like a similar solution to work for int16 and uint16, but first things first, since I guess an extension would be trivial if I manage to solve this problem.
You need to store the unsigned values in unsigned variables and it will work:
#include <string>
#include <iostream>
#include <stdint.h>

int main()
{
    std::string a("\xaa\x00\x00\xaa", 4);

    int u = *(int *) a.c_str();
    unsigned int v = *(unsigned int *) a.c_str();
    int32_t x = *(int32_t *) a.c_str();
    uint32_t y = *(uint32_t *) a.c_str();

    std::cout << a << std::endl;
    std::cout << "int: " << u << std::endl;
    std::cout << "uint: " << v << std::endl;
    std::cout << "int32_t: " << x << std::endl;
    std::cout << "uint32_t: " << y << std::endl;

    return 0;
}
When you cast the value to unsigned and then store it in a signed variable, the compiler plays along. Later, when you print the signed variable, it is printed as a signed value.
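If you would rather avoid the pointer casts entirely, here is a minimal sketch using memcpy instead (the same idea as the float/byte example earlier); it assumes the string holds at least as many bytes as the target type, and it also extends naturally to the uint16_t case you mention:

#include <cstring>
#include <iostream>
#include <stdint.h>
#include <string>

int main()
{
    std::string a("\xaa\x00\x00\xaa", 4);

    // Copy the raw bytes into the destination type instead of casting pointers.
    uint32_t u32;
    std::memcpy(&u32, a.c_str(), sizeof u32);  // 2852126890 on little-endian, like Python's "<I"

    uint16_t u16;
    std::memcpy(&u16, a.c_str(), sizeof u16);  // uses only the first two bytes: 170 (0x00aa) on little-endian

    std::cout << "uint32_t: " << u32 << '\n';
    std::cout << "uint16_t: " << u16 << '\n';
}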
Here is what my function looks like:
signed char INTtoCHAR(int INT)
{
    signed char CHAR = (signed char)INT;
    return CHAR;
}

int CHARtoINT(signed char CHAR)
{
    int INT = (int)CHAR;
    return INT;
}
It correctly assigns the int value to the char, but when I cout that char it prints some weird symbols. It compiles without errors.
My testing code is:
int main()
{
    int x = 5;
    signed char after;
    char compare = '5';

    after = INTtoCHAR(5);

    if (after == 5)
    {
        std::cout << "after:" << after << "/ compare: " << compare << std::endl;
    }
    return 0;
}
After is indeed 5 but it doesn't print 5. Any ideas?
Adding to the answer that uses the unary operator +, there is another way as well: typecasting.
std::cout << "after:" << (int)after << "/ compare: " << compare << std::endl;
Correct output
Use +after while printing, instead of after. This will promote after to a type printable as a number, regardless of type.
So change this:
std::cout << "after:" << after << ", compare: " << compare << std::endl;
to this:
std::cout << "after:" << +after << ", compare: " << compare << std::endl;
For more, see this answer.
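Putting it together, here is a condensed sketch of the test program with the promotion applied (names trimmed slightly); it prints the numeric 5 as expected:

#include <iostream>

signed char INTtoCHAR(int value)
{
    return static_cast<signed char>(value);
}

int main()
{
    signed char after = INTtoCHAR(5);
    char compare = '5';

    // +after promotes the signed char to int, so it prints as a number.
    std::cout << "after: " << +after << " / compare: " << compare << std::endl;
}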
I am a C coder, new to C++.
I tried to print the following with cout and got strange output. Any comment on this behaviour is appreciated.
#include <cstdio>
#include <iostream>
using namespace std;

int main()
{
    unsigned char x = 0xff;
    cout << "Value of x " << hex << x << " hexadecimal" << endl;
    printf(" Value of x %x by printf", x);
}
output:
Value of x ÿ hexadecimal
Value of x ff by printf
<< handles char as a 'character' that you want to output, and just outputs that byte exactly. The hex only applies to integer-like types, so the following will do what you expect:
cout << "Value of x " << hex << int(x) << " hexadecimal" << endl;
Billy ONeal's suggestion of static_cast would look like this:
cout << "Value of x " << hex << static_cast<int>(x) << " hexadecimal" << endl;
You are doing the hex part correctly, but x is a character, and C++ is trying to print it as a character. You have to cast it to an integer.
#include <cstdio>
#include <iostream>
using namespace std;

int main()
{
    unsigned char x = 0xff;
    cout << "Value of x " << hex << static_cast<int>(x) << " hexadecimal" << endl;
    printf(" Value of x %x by printf", x);
}
If I understand your question correctly, you may also want to know how to get the decimal value back, since you have already assigned unsigned char x = 0xff;
#include <iostream>

int main()
{
    unsigned char x = 0xff;
    std::cout << std::dec << static_cast<int>(x) << std::endl;
}
which gives the value 255 instead.
Further details on the std::dec manipulator are at http://www.cplusplus.com/reference/ios/dec/.
If you want to get the hexadecimal value from the decimal one, here is a simple example:
#include <iostream>
#include <iomanip>

int main()
{
    int x = 255;
    std::cout << std::showbase << std::setw(4) << std::hex << x << std::endl;
}
which prints 0xff.
The <iomanip> header is only needed for std::setw; std::showbase is what puts the 0x in front of ff. The original reply about hex number printing is at http://www.cplusplus.com/forum/windows/51591/.