What I'm trying to do is convert a string's bytes into hexadecimal format.
Based on this answer (and many others consistent with it) I've tried the following code:
#include <sstream>
#include <iomanip>
#include <iostream>

int main ()
{
    std::string inputText = u8"A7°";
    std::stringstream ss;
    // print every char of the string as hex on 2 values
    for (unsigned int i = 0; i < inputText.size (); ++i)
    {
        ss << std::hex << std::setfill ('0') << std::setw (2) << (int) inputText[i];
    }
    std::cout << ss.str() << std::endl;
}
but with some characters encoded in UTF-8 it doesn't work.
For instance, in strings containing the degree symbol ( ° ) encoded in UTF-8, the result is ffffffc2ffffffb0 instead of c2b0.
Now I would expect the algorithm to work on individual bytes regardless of their contents, and furthermore the result seems to ignore the setw(2) parameter.
Why do I get such a result?
As Pete Becker already hinted in a comment, converting a negative signed value to a wider integer type sign-extends it, filling the higher bits with 1s. The solution is to first cast the char to unsigned char before casting it to int:
#include <string>
#include <iostream>
#include <iomanip>

int main()
{
    std::string inputText = "-12°C";
    // print every char of the string as hex on 2 values
    for (unsigned int i = 0; i < inputText.size(); ++i)
    {
        std::cout << std::hex << std::setfill('0')
                  << std::setw(2) << (int)(unsigned char)inputText[i];
    }
}
setw sets the minimum width; it does not truncate longer values.
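To illustrate, a minimal sketch of my own (not from the original answer); note that setw applies only to the next insertion, so it must be repeated:

#include <iostream>
#include <iomanip>

int main()
{
    // setw(2) pads 0x7 to "07", but 0xc2b0 is printed in full:
    std::cout << std::hex << std::setfill('0')
              << std::setw(2) << 0x7 << ' '
              << std::setw(2) << 0xc2b0 << std::endl; // prints "07 c2b0"
}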
Related
I'm trying to use std::hex to read hexadecimal integers from a file.
0
a
80000000
...
These integers are both positive and negative.
It seems that std::hex cannot handle negative numbers. I don't understand why, and I don't see a range defined in the docs.
Here is a test bench:
#include <iostream>
#include <sstream>
#include <iomanip>

int main () {
    int i;
    std::stringstream ss;
    // This is the smallest number
    // that can be stored in 32 bits: -1 * 2^31
    ss << "80000000";
    ss >> std::hex >> i;
    std::cout << std::hex << i << std::endl;
}
Output:
7fffffff
Setting std::hex tells the stream to read integer tokens as though using std::scanf with the %X format specifier. %X reads into an unsigned integer, and the resulting value overflows an int even though the bit pattern fits. Because of the overflow, the read fails, and i cannot be trusted to hold the expected value. Side note: since C++11 a failed extraction still writes to i: 0 for a general parse failure, or the nearest representable value for an out-of-range input, which is why the test bench above prints 7fffffff (INT_MAX). Before C++11, i is left unchanged, here still holding an unspecified value.
Note that if we check the stream state after the read, something you should ALWAYS do, we can see that the read failed:
#include <iostream>
#include <sstream>
#include <iomanip>
#include <cstdint> // added for fixed width integers.

int main () {
    int32_t i; // ensure 32 bit int
    std::stringstream ss;
    // This is the smallest number
    // that can be stored in 32 bits: -1 * 2^31
    ss << "80000000";
    if (ss >> std::hex >> i)
    {
        std::cout << std::hex << i << std::endl;
    }
    else
    {
        std::cout << "FAIL! " << std::endl; // will execute this
    }
}
The solution is, as the asker surmised in the comments, to read into an unsigned int (uint32_t, to avoid further surprises if int is not 32 bits). The following is the zero-surprises version of the code, using memcpy to transfer the exact bit pattern read into i.
#include <iostream>
#include <sstream>
#include <iomanip>
#include <cstdint> // added for fixed width integers.
#include <cstring> // for memcpy

int main () {
    int32_t i; // ensure 32 bit int
    std::stringstream ss;
    // This is the smallest number
    // that can be stored in 32 bits: -1 * 2^31
    ss << "80000000";
    uint32_t temp;
    if (ss >> std::hex >> temp)
    {
        memcpy(&i, &temp, sizeof(i)); // probably compiles down to a cast
        std::cout << std::hex << i << std::endl;
    }
    else
    {
        std::cout << "FAIL! " << std::endl;
    }
}
That said, diving into old-school C-style coding for a moment:
if (ss >> std::hex >> *reinterpret_cast<uint32_t*>(&i))
{
    std::cout << std::hex << i << std::endl;
}
else
{
    std::cout << "FAIL! " << std::endl;
}
This looks like it violates the strict aliasing rule, but accessing an object through the signed or unsigned type corresponding to its declared type is one of the exceptions the aliasing rules explicitly allow, so once a 32-bit int is forced with int32_t i; it is actually well-defined. I'd still prefer the memcpy (or, below, std::bit_cast) version, which states the intent more plainly.
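If C++20 is available, std::bit_cast expresses the same bit-pattern transfer as the memcpy version with no aliasing questions at all. A minimal sketch (my addition, assuming a C++20 compiler):

#include <bit>     // std::bit_cast (C++20)
#include <cstdint>
#include <iostream>
#include <sstream>
#include <iomanip>

int main () {
    std::stringstream ss;
    ss << "80000000";
    uint32_t temp;
    if (ss >> std::hex >> temp)
    {
        // Reinterpret the bits of temp as int32_t, like the memcpy above.
        int32_t i = std::bit_cast<int32_t>(temp);
        std::cout << std::hex << i << std::endl;
    }
    else
    {
        std::cout << "FAIL! " << std::endl;
    }
}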
I have a uint8_t and want to convert it to a two-digit hex string in C++ in the same way that the format string %02x would.
To do this, I've enlisted the help of a stringstream and IO manipulators to configure how the stream should format numbers:
#include <iomanip>
#include <iostream>
#include <sstream>

int main()
{
    uint8_t x = 3;
    std::cout << std::hex << std::setw(2) << std::setfill('0')
              << x << std::endl;
    return 0;
}
So this should print 03, right? No, it prints 0.
Your Standard Library's implementation of <cstdint> (which, by the way, you didn't include; also note that uint8_t lives in namespace std) uses a typedef for uint8_t:
namespace std {
    // ...
    typedef unsigned char uint8_t;
    // ...
}
so std::ostream interprets it as a character, not as an integer type. To make sure it gets interpreted as an integer, just cast it explicitly:
#include <cstdint>
#include <iomanip>
#include <iostream>

int main()
{
    std::uint8_t x{ 3 };
    std::cout << std::hex << std::setw(2) << std::setfill('0')
              << static_cast<int>(x) << '\n';
}
Actually, it prints 0\x03. That's right: it interprets the variable x as a character, not as a number.
The correct way to do this is to use the unary plus operator:
std::cout << std::hex << std::setw(2) << std::setfill('0')
          << +x << std::endl;
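For completeness, a self-contained version of that approach (my sketch, not part of the original answer):

#include <cstdint>
#include <iomanip>
#include <iostream>

int main()
{
    std::uint8_t x = 3;
    // Unary + promotes x to int, so it is formatted as a number.
    std::cout << std::hex << std::setw(2) << std::setfill('0')
              << +x << std::endl; // prints "03"
}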
I want to know how I can take the string I converted from a DWORD via an ostringstream and then to an AnsiString.
But that doesn't really matter; the conversion could be from int to string. I just want to know how I can make every converted string ALWAYS show 6 digits, so that if my number is 57, the string will be 000057.
Thanks!
Use the I/O manipulators setfill and setw:
#include <iostream>
#include <string>
#include <sstream>
#include <iomanip>

int main()
{
    std::ostringstream s;
    s << std::setfill('0') << std::setw(6) << 154;
    std::cout << s.str() << "\n";
    return 0;
}
So, the question is about formatted output?
You can use iostream::width and iostream::fill:
// field width
#include <iostream>
using namespace std;

int main () {
    cout << 100 << endl;
    cout.width(6);
    cout.fill('0');
    cout << 100 << endl;
    return 0;
}
How can I convert an integer ranging from 0 to 255 to a string with exactly two chars, containing the hexadecimal representation of the number?
Example
input: 180
output: "B4"
My goal is to set the grayscale color in GraphicsMagick. So, taking the same example, I want the following final output:
"#B4B4B4"
so that I can use it for assigning the color: Color("#B4B4B4");
Should be easy, right?
You don't need to. This is an easier way:
ColorRGB(red/255., green/255., blue/255.)
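As a usage sketch (my addition, assuming GraphicsMagick's Magick++ bindings), the grayscale case from the question might look like:

#include <Magick++.h>

int main()
{
    // Gray level 180 out of 255; no hex string needed.
    double g = 180 / 255.;
    Magick::Image image("100x100", Magick::ColorRGB(g, g, g));
    image.write("gray.png");
}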
You can use the native formatting features of the IOStreams part of the C++ Standard Library, like this:
#include <string>
#include <sstream>
#include <iostream>
#include <ios>
#include <iomanip>

std::string getHexCode(unsigned char c) {
    // Not necessarily the most efficient approach,
    // creating a new stringstream each time.
    // It'll do, though.
    std::stringstream ss;

    // Set stream modes
    ss << std::uppercase << std::setw(2) << std::setfill('0') << std::hex;

    // Stream in the character's ASCII code
    // (using `+` for promotion to `int`)
    ss << +c;

    // Return resultant string content
    return ss.str();
}

int main() {
    // Output: "B4, 04"
    std::cout << getHexCode(180) << ", " << getHexCode(4);
}
Use printf with the %x format specifier. Alternatively, strtol, specifying the base as 16, handles the reverse direction (hex string to number).
#include <cstdio>

int main()
{
    int a = 180;
    printf("%x\n", a); // prints "b4"; use "%02X" for exactly two uppercase digits
    return 0;
}
I have tried to find this topic on the web, but I couldn't find what I need.
I have a string of characters:
const char* tempBuf = "qj";
The result I want is 0x716A, and that value is then going to be converted into a decimal value.
Is there any function in VC++ that can be used for that?
You can use a stringstream to convert each character to a hexadecimal representation.
#include <iostream>
#include <sstream>
#include <cstring>

int main()
{
    const char* tempBuf = "qj";
    std::stringstream ss;

    const char* it = tempBuf;
    const char* end = tempBuf + std::strlen(tempBuf);
    for (; it != end; ++it)
        ss << std::hex << unsigned(*it);

    unsigned result;
    ss >> result; // std::hex is still set on the stream, so this parses hex

    std::cout << "Hex value: " << std::hex << result << std::endl;
    std::cout << "Decimal value: " << std::dec << result << std::endl;
}
So, if I understood the idea correctly...
#include <stdint.h>

// Copies up to four chars into the bytes of a uint32_t.
// Note: the result depends on the host's byte order; on a
// little-endian machine "qj" yields 0x00006A71, not 0x716A.
uint32_t charToUInt32(const char* src) {
    uint32_t ret = 0;
    char* dst = (char*)&ret;
    for (int i = 0; (i < 4) && (*src); ++i, ++src)
        dst[i] = *src;
    return ret;
}
If I understand what you want correctly: just loop over the characters, start to finish; at each character, multiply the sum so far by 256, and add the value of the next character; that gives the decimal value in one shot.
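A minimal sketch of that approach (my code, with a hypothetical helper name packChars):

#include <iostream>
#include <cstring>

// Hypothetical helper implementing the loop described above:
// multiply the running sum by 256, then add the next byte.
unsigned long packChars(const char* s) {
    unsigned long value = 0;
    for (std::size_t i = 0; i < std::strlen(s); ++i)
        value = value * 256 + static_cast<unsigned char>(s[i]);
    return value;
}

int main() {
    std::cout << packChars("qj") << '\n'; // prints 29034, i.e. 0x716A
}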
What you are looking for is called "hex encoding". There are a lot of libraries out there that can do that (unless what you were looking for was how to implement one yourself).
One example is Crypto++.
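For instance, a sketch using Crypto++'s pipelining classes (my addition; assumes the library is installed and linked):

#include <cryptopp/hex.h>
#include <cryptopp/filters.h>
#include <iostream>
#include <string>

int main()
{
    std::string input = "qj", encoded;
    // Pump the input bytes through a HexEncoder into a string.
    CryptoPP::StringSource ss(input, true,
        new CryptoPP::HexEncoder(new CryptoPP::StringSink(encoded)));
    std::cout << encoded << std::endl; // prints "716A"
}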