I'm trying to use std::hex to read hexadecimal integers from a file.
0
a
80000000
...
These integers are both positive and negative.
It seems that std::hex cannot handle negative numbers. I don't understand why, and I don't see a range defined in the docs.
Here is a test bench:
#include <iostream>
#include <sstream>
#include <iomanip>

int main()
{
    int i;
    std::stringstream ss;
    // This is the smallest number that can be
    // stored in 32 bits: -1 * 2^31
    ss << "80000000";
    ss >> std::hex >> i;
    std::cout << std::hex << i << std::endl;
}
Output:
7fffffff
Setting std::hex tells the stream to read integer tokens as though using std::scanf with the %X conversion specifier. %X reads into an unsigned integer, and the resulting value overflows an int even though the bit pattern fits. Because of the overflow, the read fails, and i cannot be trusted to hold the expected value. Side note: since C++11 an out-of-range read stores the nearest representable value, here std::numeric_limits<int>::max(), which is exactly why the program prints 7fffffff; before C++11, i was simply left unchanged, holding whatever unspecified value it started with.
Note that if we check the stream state after the read, something you should ALWAYS do, we can see that the read failed:
#include <iostream>
#include <sstream>
#include <iomanip>
#include <cstdint> // added for fixed-width integers

int main()
{
    int32_t i; // ensure 32-bit int
    std::stringstream ss;
    // This is the smallest number that can be
    // stored in 32 bits: -1 * 2^31
    ss << "80000000";
    if (ss >> std::hex >> i)
    {
        std::cout << std::hex << i << std::endl;
    }
    else
    {
        std::cout << "FAIL!" << std::endl; // will execute this
    }
}
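As a quick sanity check of the C++11 overflow behaviour described above, the failed read should leave i equal to std::numeric_limits<int>::max(). This little demo, a sketch assuming C++11-or-later library semantics, prints true:

#include <iostream>
#include <limits>
#include <sstream>

int main()
{
    int i = -1; // sentinel, so a pre-C++11 library would print false
    std::stringstream ss("80000000");
    ss >> std::hex >> i; // fails: 0x80000000 is out of range for a 32-bit int
    std::cout << std::boolalpha
              << (i == std::numeric_limits<int>::max()) // true under C++11 rules
              << std::endl;
}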
The solution, as the asker surmised in the comments, is to read into an unsigned int (uint32_t, to avoid further surprises if int is not 32 bits). The following is the zero-surprises version of the code, using memcpy to transfer the exact bit pattern read into i.
#include <iostream>
#include <sstream>
#include <iomanip>
#include <cstdint> // added for fixed-width integers
#include <cstring> // for memcpy

int main()
{
    int32_t i; // ensure 32-bit int
    std::stringstream ss;
    // This is the smallest number that can be
    // stored in 32 bits: -1 * 2^31
    ss << "80000000";
    uint32_t temp;
    if (ss >> std::hex >> temp)
    {
        memcpy(&i, &temp, sizeof(i)); // probably compiles down to a simple move
        std::cout << std::hex << i << std::endl;
    }
    else
    {
        std::cout << "FAIL!" << std::endl;
    }
}
That said, diving into old-school C-style coding for a moment
if (ss >> std::hex >> *reinterpret_cast<uint32_t*>(&i))
{
    std::cout << std::hex << i << std::endl;
}
else
{
    std::cout << "FAIL!" << std::endl;
}
looks like it violates the strict aliasing rule, but I'd be stunned to see it fail once a 32-bit int is forced with int32_t i;. In fact, accessing an object through the signed or unsigned type corresponding to its dynamic type is one of the explicit exceptions to the aliasing rule ([basic.lval]), so reading an int32_t object through a uint32_t lvalue should be legal; I'd still prefer the memcpy version for clarity.
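If C++20 is available, std::bit_cast expresses the same bit-pattern transfer with no aliasing questions at all. A minimal sketch, assuming a C++20 compiler and standard library:

#include <bit>     // std::bit_cast (C++20)
#include <cstdint>
#include <iostream>
#include <sstream>

int main()
{
    std::stringstream ss("80000000");
    uint32_t temp;
    if (ss >> std::hex >> temp)
    {
        // Copies the bit pattern into an int32_t, like memcpy, but constexpr-friendly
        int32_t i = std::bit_cast<int32_t>(temp);
        std::cout << std::hex << i << std::endl;
    }
    else
    {
        std::cout << "FAIL!" << std::endl;
    }
}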
FormatFloat
I'm trying to implement one of the conversions of Golang's strconv.FormatFloat() in C++.
#include <sstream>
#include <iostream>
#include <string>
#include <iomanip>

using namespace std;

std::string convert_str(double d)
{
    std::stringstream ss;
    if (d >= 0.0001)
    {
        ss << std::fixed << std::setprecision(4); // I know the precision, so this is fine
        ss << d;
        return ss.str();
    }
    else
    {
        ss << std::scientific;
        ss << d;
        return ss.str();
    }
}

int main()
{
    std::cout << convert_str(0.002) << std::endl;            // 0.0020
    std::cout << convert_str(0.00001234560000) << std::endl; // 1.234560e-05
    std::cout << convert_str(0.000012) << std::endl;         // 1.200000e-05
    return 0;
}
Output:
0.0020
1.234560e-05 // should be 1.23456e-05
1.200000e-05 // should be 1.2e-05
Question> How can I set up the output modifiers so that the trailing zeros don't show up?
strconv.FormatFloat(num, 'e', -1, 64)
The special precision value (-1) is used for the smallest number of
digits necessary such that ParseFloat() will return f exactly.
At the risk of being heavily criticised for posting a C answer to a C++ question ... you can use the %lg format specifier in a call to sprintf.
From cppreference:
Unless alternative representation is requested the trailing zeros are
removed, also the decimal point character is removed if no fractional
part is left.
So, if you only want to remove the trailing zeros when using scientific notation, you can change your convert_str function to something like the following:
std::string convert_str(double d)
{
    if (d >= 0.0001) {
        std::stringstream ss;
        ss << std::fixed << std::setprecision(4); // I know the precision, so this is fine
        ss << d;
        return ss.str();
    }
    else {
        char cb[64];
        sprintf(cb, "%lg", d); // %g strips trailing zeros by default
        return cb;
    }
}
For the three test cases in your code, this will give:
0.0020
1.23456e-05
1.2e-05
From C++20 onwards, std::format may offer a more modern alternative; however, I'm not (yet) fully "up to speed" with it, so I cannot present a solution using it. Others may want to do so.
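For what it's worth, here is a minimal sketch of that idea, assuming a compiler and standard library with working C++20 <format> support. std::format's default floating-point formatting uses the shortest round-trip representation, so trailing zeros never appear:

#include <format> // C++20
#include <iostream>
#include <string>

std::string convert_str(double d)
{
    if (d >= 0.0001)
        return std::format("{:.4f}", d); // fixed, 4 digits: 0.0020
    return std::format("{}", d);         // shortest round-trip: 1.2e-05
}

int main()
{
    std::cout << convert_str(0.002) << '\n';            // 0.0020
    std::cout << convert_str(0.00001234560000) << '\n'; // 1.23456e-05
    std::cout << convert_str(0.000012) << '\n';         // 1.2e-05
}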
Yes, std::scientific doesn't remove trailing zeros from scientific notation. The good news, for your specific case, is that cout already formats values below 0.0001 using scientific notation and removes trailing zeros, because the default formatting behaves like %g. So you can leave your code like this:
#include <sstream>
#include <iostream>
#include <string>
#include <iomanip>

using namespace std;

std::string convert_str(double d)
{
    std::stringstream ss;
    if (d >= 0.0001)
    {
        ss << std::fixed << std::setprecision(4); // I know the precision, so this is fine
        ss << d;
        return ss.str();
    }
    else
    {
        ss << d; // default formatting already strips trailing zeros
        return ss.str();
    }
}

int main()
{
    std::cout << convert_str(0.002) << std::endl;            // 0.0020
    std::cout << convert_str(0.00001234560000) << std::endl; // 1.23456e-05
    std::cout << convert_str(0.000012) << std::endl;         // 1.2e-05
    return 0;
}
The wanted output can be generated with a combination of the std::setprecision and std::defaultfloat manipulators:
std::cout << std::setprecision(16) << std::defaultfloat
          << 0.002 << '\n'
          << 0.00001234560000 << '\n'
          << 0.000012 << '\n';
Live at: https://godbolt.org/z/67fWa1seo
I have a uint8_t and want to convert it to a two-digit hex string in C++ in the same way that the format string %02x would.
To do this, I've enlisted the help of a stringstream and IO manipulators to configure how the stream should format numbers:
#include <iomanip>
#include <iostream>
#include <sstream>

int main()
{
    uint8_t x = 3;
    std::cout << std::hex << std::setw(2) << std::setfill('0')
              << x << std::endl;
    return 0;
}
So this should print 03, right? No, it prints 0.
Your Standard Library's implementation of <cstdint> (btw. ... you didn't include it, and uint8_t lives in namespace std) defines uint8_t as a typedef:
namespace std {
    // ...
    typedef unsigned char uint8_t;
    // ...
}
so std::ostream interprets it as a character, not as an integer type. To make sure it gets interpreted as an integer, just cast it explicitly:
#include <cstdint>
#include <iomanip>
#include <iostream>

int main()
{
    std::uint8_t x{ 3 };
    std::cout << std::hex << std::setw(2) << std::setfill('0')
              << static_cast<int>(x) << '\n';
}
Actually, it prints 0 followed by the unprintable character \x03. That's right: it interprets the variable x as a character, not as a number.
The correct way to do this is to use the unary plus operator:
std::cout << std::hex << std::setw(2) << std::setfill('0')
          << +x << std::endl;
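For completeness, a runnable version of that snippet, with the <cstdint> include the original program was missing:

#include <cstdint>
#include <iomanip>
#include <iostream>

int main()
{
    std::uint8_t x = 3;
    // Unary + promotes x to int, so operator<< formats it as a number
    std::cout << std::hex << std::setw(2) << std::setfill('0')
              << +x << std::endl; // prints 03
}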
What I'm trying to do is converting a string's bytes into hexadecimal format.
Based on this answer (and many others consistent) I've tried the code:
#include <sstream>
#include <iomanip>
#include <iostream>
#include <string> // added: std::string

int main()
{
    std::string inputText = u8"A7°";
    std::stringstream ss;
    // print every char of the string as hex on 2 values
    for (unsigned int i = 0; i < inputText.size(); ++i)
    {
        ss << std::hex << std::setfill('0') << std::setw(2) << (int)inputText[i];
    }
    std::cout << ss.str() << std::endl;
}
but with some characters coded in UTF-8 it doesn't work.
For instance, in strings containing the degree symbol (°) coded in UTF-8, the result is ffffffc2ffffffb0 instead of c2b0.
Now I would expect the algorithm to work on individual bytes regardless of their contents, and furthermore the result seems to ignore the setw(2) parameter.
Why do I get such a result?
As Pete Becker already hinted in a comment, converting a negative value to a larger integer type fills the higher bits with ones (sign extension), and char is evidently signed on your platform. The solution is to first cast the char to unsigned char before casting it to int:
#include <string>
#include <iostream>
#include <iomanip>

int main()
{
    std::string inputText = "-12°C";
    // print every char of the string as hex on 2 values
    for (unsigned int i = 0; i < inputText.size(); ++i)
    {
        std::cout << std::hex << std::setfill('0')
                  << std::setw(2) << (int)(unsigned char)inputText[i];
    }
}
setw sets the minimal width; it does not truncate longer values.
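A quick illustration of that point:

#include <iomanip>
#include <iostream>

int main()
{
    // setw(2) pads short values but never truncates long ones
    std::cout << std::hex << std::setfill('0')
              << std::setw(2) << 0x5 << '\n'    // 05
              << std::setw(2) << 0x123 << '\n'; // 123
}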
I have a question regarding the following code. When I run it, it always prints just "g" instead of a hex code. Why? How can I output the hex code? Fiddle: http://ideone.com/FjYr2M
#include <iostream>

using namespace std;

void prepareAndSend() {
    char Command[50];
    sprintf(Command, "%04XT1000A", "076");
    unsigned char checksum = 0x02;
    char* p = Command;
    while (*p) {
        checksum ^= *p++;
    }
    checksum ^= 0x03;
    std::cout << std::hex << checksum << std::endl;
}

int main() {
    prepareAndSend();
    return 0;
}
sprintf(Command,"%04XT1000A", "076");
Undefined behavior: %04X expects an unsigned integer, but a string literal is passed. Turn your compiler warnings on.
sprintf(Command,"%04XT1000A", 0x76);
You also need to cast checksum to avoid using the unsigned char version of operator<<
std::cout << std::hex << static_cast<int>(checksum) << std::endl;
Cast checksum to int
std::cout << std::hex << static_cast<int>(checksum) << std::endl;
Since checksum is unsigned char, operator<< tries to print it as a character.
How can I convert an integer ranging from 0 to 255 to a string with exactly two chars containing the hexadecimal representation of the number?
Example
input: 180
output: "B4"
My goal is to set the grayscale color in GraphicsMagick. So, taking the same example, I want the following final output:
"#B4B4B4"
so that I can use it for assigning the color: Color("#B4B4B4");
Should be easy, right?
You don't need to. This is an easier way:
ColorRGB(red/255., green/255., blue/255.)
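If it helps, a sketch of how that looks in Magick++ code. This assumes the GraphicsMagick Magick++ bindings, where Magick::ColorRGB takes red/green/blue as fractions in [0, 1]; grayFromByte is a hypothetical helper name, not a library API:

#include <Magick++.h>

// Map an 8-bit gray level (0-255) to a Magick++ color
// without going through a "#B4B4B4" hex string at all.
Magick::ColorRGB grayFromByte(int g)
{
    return Magick::ColorRGB(g / 255., g / 255., g / 255.);
}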
You can use the native formatting features of the IOStreams part of the C++ Standard Library, like this:
#include <string>
#include <sstream>
#include <iostream>
#include <ios>
#include <iomanip>

std::string getHexCode(unsigned char c) {
    // Not necessarily the most efficient approach,
    // creating a new stringstream each time.
    // It'll do, though.
    std::stringstream ss;

    // Set stream modes
    ss << std::uppercase << std::setw(2) << std::setfill('0') << std::hex;

    // Stream in the character's ASCII code
    // (using `+` for promotion to `int`)
    ss << +c;

    // Return resultant string content
    return ss.str();
}

int main() {
    // Output: "B4, 04"
    std::cout << getHexCode(180) << ", " << getHexCode(4);
}
Use printf with the %x format specifier (%02X gives exactly two uppercase digits, as the question asks). Alternatively, strtol, specifying the base as 16, handles the reverse conversion from string to number.
#include <cstdio>

int main()
{
    int a = 180;
    printf("%x\n", a);   // b4
    printf("%02X\n", a); // B4: two digits, uppercase, as in the question
    return 0;
}
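To get all the way to the asker's "#B4B4B4" string, a small sketch; grayColor is a hypothetical helper name, not a GraphicsMagick API:

#include <cstdio>
#include <string>

// Build the "#B4B4B4" gray color string in one call
std::string grayColor(int v)
{
    char buf[8]; // '#' + six hex digits + terminating NUL
    std::snprintf(buf, sizeof buf, "#%02X%02X%02X", v, v, v);
    return buf;
}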