Hexadecimal representation of an integer to std::string [duplicate] - c++

How do I convert an integer to a hex string in C++?
I can find some ways to do it, but they mostly seem targeted towards C. It doesn't seem there's a native way to do it in C++. It is a pretty simple problem though; I've got an int which I'd like to convert to a hex string for later printing.

Use <iomanip>'s std::hex. If you print, just send it to std::cout; if not, use std::stringstream:
std::stringstream stream;
stream << std::hex << your_int;
std::string result( stream.str() );
You can prepend the first << with << "0x" or whatever you like if you wish.
Other manips of interest are std::oct (octal) and std::dec (back to decimal).
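Since the base manipulators are sticky (they apply until changed again), you can switch bases mid-stream. A small sketch; the helper name is mine:

```cpp
#include <sstream>
#include <string>

// Render one value in hex, octal and decimal using the sticky base manipulators.
std::string in_three_bases(int v) {
    std::ostringstream os;
    os << std::hex << v << ' ' << std::oct << v << ' ' << std::dec << v;
    return os.str();
}
```

For example, `in_three_bases(42)` yields `"2a 52 42"`.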
One problem you may encounter is that this produces exactly the number of digits needed to represent the value. You can use setfill and setw to circumvent the problem:
stream << std::setfill ('0') << std::setw(sizeof(your_type)*2)
<< std::hex << your_int;
So finally, I'd suggest such a function:
template< typename T >
std::string int_to_hex( T i )
{
std::stringstream stream;
stream << "0x"
<< std::setfill ('0') << std::setw(sizeof(T)*2)
<< std::hex << i;
return stream.str();
}

To make it lighter and faster, I suggest filling the string directly.
template <typename I> std::string n2hexstr(I w, size_t hex_len = sizeof(I)<<1) {
static const char* digits = "0123456789ABCDEF";
std::string rc(hex_len,'0');
for (size_t i=0, j=(hex_len-1)*4 ; i<hex_len; ++i,j-=4)
rc[i] = digits[(w>>j) & 0x0f];
return rc;
}

You can do it with C++20 std::format:
std::string s = std::format("{:x}", 42); // s == "2a"
Until std::format is widely available you can use the {fmt} library, which std::format is based on (godbolt):
std::string s = fmt::format("{:x}", 42); // s == "2a"
Disclaimer: I'm the author of {fmt} and C++20 std::format.

Use std::stringstream to convert integers into strings and its special manipulators to set the base. For example, like this:
std::stringstream sstream;
sstream << std::hex << my_integer;
std::string result = sstream.str();

Just print it as a hexadecimal number:
int i = /* ... */;
std::cout << std::hex << i;

#include <boost/format.hpp>
...
cout << (boost::format("%x") % 1234).str(); // output is: 4d2

You can try the following; it works:
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
using namespace std;
template <class T>
string to_string(T t, ios_base & (*f)(ios_base&))
{
ostringstream oss;
oss << f << t;
return oss.str();
}
int main ()
{
cout << to_string<long>(123456, hex) << endl;
system("PAUSE");
return 0;
}

Since C++20, with std::format, you might do:
std::format("{:#x}", your_int); // 0x2a
std::format("{:#010x}", your_int); // 0x0000002a
Demo

Have a look at my solution,[1] which I copied verbatim[2] from my project. My goal was to combine flexibility and safety within my actual needs:[3]
no 0x prefix added: caller may decide
automatic width deduction: less typing
explicit width control: widening for formatting, (lossless) shrinking to save space
capable for dealing with long long
restricted to integral types: avoid surprises by silent conversions
ease of understanding
no hard-coded limit
#include <string>
#include <sstream>
#include <iomanip>
/// Convert integer value `val` to text in hexadecimal format.
/// The minimum width is padded with leading zeros; if not
/// specified, this `width` is derived from the type of the
/// argument. Function suitable from char to long long.
/// Pointers, floating point values, etc. are not supported;
/// passing them will result in an (intentional!) compiler error.
/// Basics from: http://stackoverflow.com/a/5100745/2932052
template <typename T>
inline std::string int_to_hex(T val, size_t width=sizeof(T)*2)
{
std::stringstream ss;
ss << std::setfill('0') << std::setw(width) << std::hex << (val|0);
return ss.str();
}
[1] based on the answer by Kornel Kisielewicz
[2] Only the German API doc was translated to English.
[3] Translated into the language of CppTest, this is how it reads:
TEST_ASSERT(int_to_hex(char(0x12)) == "12");
TEST_ASSERT(int_to_hex(short(0x1234)) == "1234");
TEST_ASSERT(int_to_hex(long(0x12345678)) == "12345678");
TEST_ASSERT(int_to_hex((long long)(0x123456789abcdef0)) == "123456789abcdef0");
TEST_ASSERT(int_to_hex(0x123, 1) == "123");
TEST_ASSERT(int_to_hex(0x123, 8) == "00000123");
// width deduction test as suggested by Lightness Races in Orbit:
TEST_ASSERT(int_to_hex(short(0x12)) == "0012");

Thanks to Lincoln's comment below, I've changed this answer.
The following answer properly handles 8-bit ints at compile time. It does, however, require C++17. If you don't have C++17, you'll have to do something else (e.g. provide overloads of this function, one for uint8_t and one for int8_t, or use something besides "if constexpr", such as enable_if).
template< typename T >
std::string int_to_hex( T i )
{
// Ensure this function is called with a template parameter that makes sense. Note: static_assert is only available in C++11 and higher.
static_assert(std::is_integral<T>::value, "Template argument 'T' must be a fundamental integer type (e.g. int, short, etc..).");
std::stringstream stream;
stream << "0x" << std::setfill ('0') << std::setw(sizeof(T)*2) << std::hex;
// If T is an 8-bit integer type (e.g. uint8_t or int8_t) it will be
// treated as an ASCII code, giving the wrong result. So we use C++17's
// "if constexpr" to have the compiler decide at compile time whether it's
// converting an 8-bit int or not.
if constexpr (std::is_same_v<std::uint8_t, T>)
{
// Unsigned 8-bit int type. Cast to int (thanks Lincoln) to
// avoid ASCII code interpretation of the int. The number of hex digits
// in the returned string will still be two, which is correct for 8 bits,
// because of the 'sizeof(T)' above.
stream << static_cast<int>(i);
}
else if constexpr (std::is_same_v<std::int8_t, T>)
{
// For 8-bit signed int, same as above, except we must first cast to
// unsigned, because values above 127 (0x7f) would cause further issues
// if we cast directly to int.
stream << static_cast<int>(static_cast<uint8_t>(i));
}
else
{
// No cast needed for ints wider than 8 bits.
stream << i;
}
return stream.str();
}
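For pre-C++17 compilers, the overload route mentioned above might look like this. This is a sketch of mine, not the answer's actual code; plain overloads widen 8-bit types before streaming:

```cpp
#include <cstdint>
#include <iomanip>
#include <sstream>
#include <string>

// Generic case: stream the value zero-padded to the type's full width.
template <typename T>
std::string int_to_hex(T i) {
    std::stringstream stream;
    stream << "0x" << std::setfill('0') << std::setw(sizeof(T) * 2)
           << std::hex << i;
    return stream.str();
}

// 8-bit overload: widen before streaming so the value is not printed as a char.
inline std::string int_to_hex(std::uint8_t i) {
    std::stringstream stream;
    stream << "0x" << std::setfill('0') << std::setw(2)
           << std::hex << static_cast<unsigned>(i);
    return stream.str();
}

inline std::string int_to_hex(std::int8_t i) {
    // Convert through uint8_t to avoid sign extension.
    return int_to_hex(static_cast<std::uint8_t>(i));
}
```

Non-template overloads win overload resolution against the template for exact matches, so 8-bit arguments take the widened path automatically.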
Original answer that doesn't handle 8-bit ints correctly as I thought it did:
Kornel Kisielewicz's answer is great. But a slight addition helps catch cases where you're calling this function with template arguments that don't make sense (e.g. float) or that would result in messy compiler errors (e.g. user-defined type).
template< typename T >
std::string int_to_hex( T i )
{
// Ensure this function is called with a template parameter that makes sense. Note: static_assert is only available in C++11 and higher.
static_assert(std::is_integral<T>::value, "Template argument 'T' must be a fundamental integer type (e.g. int, short, etc..).");
std::stringstream stream;
stream << "0x"
<< std::setfill ('0') << std::setw(sizeof(T)*2)
<< std::hex << i;
// Optional: replace above line with this to handle 8-bit integers.
// << std::hex << std::to_string(i);
return stream.str();
}
I've edited this to add a call to std::to_string because 8-bit integer types (e.g. std::uint8_t values) passed to std::stringstream are treated as char, which doesn't give you the result you want. Passing such integers to std::to_string handles them correctly and doesn't hurt things when using other, larger integer types. Of course you may suffer a slight performance hit in those cases, since the std::to_string call is unnecessary.
Note: I would have just added this in a comment to the original answer, but I don't have the rep to comment.

I can see all the elaborate coding samples others have used as answers, but there is nothing wrong with simply having this in a C++ application:
printf ("%04x", num);
for num = 128:
0080
https://en.wikipedia.org/wiki/Printf_format_string
C++ largely extends the original C language, so most C code, including printf, is also valid C++.
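If you want the printf-style formatting but a std::string result, you can snprintf into a small local buffer. A sketch; the helper name hex4 is mine:

```cpp
#include <cstdio>
#include <string>

// Format an unsigned value as at least four zero-padded lowercase hex digits.
std::string hex4(unsigned num) {
    char buf[16];
    std::snprintf(buf, sizeof buf, "%04x", num);
    return buf;  // the NUL-terminated buffer converts to std::string
}
```

Note that "%04x" is a minimum width: larger values simply produce more digits.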

A new C++17 way: std::to_chars from <charconv> (https://en.cppreference.com/w/cpp/utility/to_chars):
char addressStr[20] = { 0 };
std::to_chars(std::begin(addressStr), std::end(addressStr), address, 16);
return std::string{addressStr};
This is a bit verbose since std::to_chars works with a pre-allocated buffer to avoid dynamic allocations, but this also lets you optimize the code since allocations get very expensive if this is in a hot spot.
For extra optimization, you can omit pre-initializing the buffer and check the return value of to_chars to check for errors and get the length of the data written. Note: to_chars does NOT write a null-terminator!
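The length and error handling mentioned above can look like this (a sketch; the function name is mine, and it assumes a value no wider than 64 bits):

```cpp
#include <charconv>
#include <string>
#include <system_error>

// Convert 'value' to a lowercase hex std::string via std::to_chars,
// using the returned pointer for the exact length and checking 'ec'.
std::string to_hex(unsigned long long value) {
    char buf[16];  // 16 digits cover any 64-bit value
    auto res = std::to_chars(buf, buf + sizeof buf, value, 16);
    if (res.ec != std::errc{})
        return {};  // buffer too small; cannot happen with this buffer
    return std::string(buf, res.ptr);  // res.ptr is one past the last digit
}
```

No NUL terminator is needed here because the string is constructed from the (begin, end) pointer pair.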

int num = 30;
std::cout << std::hex << num << std::endl; // prints 1e, the hexadecimal form of 30

I do:
int hex = 10;
std::string hexstring = stringFormat("%X", hex);
Take a look at the SO answer from iFreilicht and the required template header file from the linked GIST!

_itoa_s
char buf[_MAX_U64TOSTR_BASE2_COUNT];
_itoa_s(10, buf, _countof(buf), 16);
printf("%s\n", buf); // a
swprintf_s
uint8_t x = 10;
wchar_t buf[_MAX_ITOSTR_BASE16_COUNT];
swprintf_s(buf, L"%02X", x);

My solution. Only integral types are allowed.
You can test/run on https://replit.com/#JomaCorpFX/ToHex
Update: the optional 0x prefix can be disabled via the second parameter.
definition.h
#include <iomanip>
#include <sstream>
template <class T, class T2 = typename std::enable_if<std::is_integral<T>::value>::type>
static std::string ToHex(const T & data, bool addPrefix = true);
template<class T, class>
inline std::string ToHex(const T & data, bool addPrefix)
{
std::stringstream sstream;
sstream << std::hex;
std::string ret;
if (typeid(T) == typeid(char) || typeid(T) == typeid(unsigned char) || sizeof(T)==1)
{
sstream << static_cast<int>(data);
ret = sstream.str();
if (ret.length() > 2)
{
ret = ret.substr(ret.length() - 2, 2);
}
}
else
{
sstream << data;
ret = sstream.str();
}
return (addPrefix ? u8"0x" : u8"") + ret;
}
main.cpp
#include <iostream>
#include "definition.h"
int main()
{
std::cout << ToHex<unsigned char>(254) << std::endl;
std::cout << ToHex<char>(-2) << std::endl;
std::cout << ToHex<int>(-2) << std::endl;
std::cout << ToHex<long long>(-2) << std::endl;
std::cout<< std::endl;
std::cout << ToHex<unsigned char>(254, false) << std::endl;
std::cout << ToHex<char>(-2, false) << std::endl;
std::cout << ToHex<int>(-2, false) << std::endl;
std::cout << ToHex<long long>(-2, false) << std::endl;
return 0;
}
Results:
0xfe
0xfe
0xfffffffe
0xfffffffffffffffe
fe
fe
fffffffe
fffffffffffffffe

For those of you who found that many/most of the ios::fmtflags don't work with std::stringstream, yet like the template idea that Kornel posted way back when, the following works and is relatively clean:
#include <iomanip>
#include <sstream>
template< typename T >
std::string hexify(T i)
{
std::stringbuf buf;
std::ostream os(&buf);
os << "0x" << std::setfill('0') << std::setw(sizeof(T) * 2)
<< std::hex << i;
return buf.str();
}
int someNumber = 314159265;
std::string hexified = hexify< int >(someNumber);

Code for your reference:
#include <iomanip>
#include <sstream>
...
string intToHexString(int intValue) {
string hexStr;
/// integer value to hex-string
std::stringstream sstream;
sstream << "0x"
<< std::setfill ('0') << std::setw(2)
<< std::hex << (int)intValue;
hexStr= sstream.str();
sstream.clear(); // clears the stream's error flags, not its contents
return hexStr;
}
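A quick check of the function above, restated with explicit std:: qualification so it compiles standalone:

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Integer value to hex string with "0x" prefix and at least two digits.
std::string intToHexString(int intValue) {
    std::stringstream sstream;
    sstream << "0x" << std::setfill('0') << std::setw(2)
            << std::hex << intValue;
    return sstream.str();
}
```

Since setw sets a minimum width, single-digit values are padded (10 becomes "0x0a") while larger values keep all their digits (4660 becomes "0x1234").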

I would like to add an answer that enjoys the beauty of the C++ language: its adaptability to work at both high and low levels. Happy programming.
template <class T, class U> U* Int2Hex(T lnumber, U* buffer)
{
const char* ref = "0123456789ABCDEF";
T hNibbles = (lnumber >> 4);
unsigned char* b_lNibbles = (unsigned char*)&lnumber;
unsigned char* b_hNibbles = (unsigned char*)&hNibbles;
U* pointer = buffer + (sizeof(lnumber) << 1);
*pointer = 0;
do {
*--pointer = ref[(*b_lNibbles++) & 0xF];
*--pointer = ref[(*b_hNibbles++) & 0xF];
} while (pointer > buffer);
return buffer;
}
Examples:
char buffer[100] = { 0 };
Int2Hex(305419896ULL, buffer);//returns "0000000012345678"
Int2Hex(305419896UL, buffer);//returns "12345678"
Int2Hex((short)65533, buffer);//returns "FFFD"
Int2Hex((char)18, buffer);//returns "12"
wchar_t buffer[100] = { 0 };
Int2Hex(305419896ULL, buffer);//returns L"0000000012345678"
Int2Hex(305419896UL, buffer);//returns L"12345678"
Int2Hex((short)65533, buffer);//returns L"FFFD"
Int2Hex((char)18, buffer);//returns L"12"

for fixed number of digits, for instance 2:
static const char* digits = "0123456789ABCDEF";//dec 2 hex digits positional map
char value_hex[3];//2 digits + terminator
value_hex[0] = digits[(int_value >> 4) & 0x0F]; // shift right by 4 bits (one hex digit) and keep the low 4; for higher digits shift by multiples of 4
value_hex[1] = digits[int_value & 0x0F]; // the lowest digit needs no shift
value_hex[2] = '\0'; //terminator
you can also write a loop variant to handle a variable number of digits
benefits:
speed: minimal bit operations, no external function calls
memory: it uses a local buffer, with no allocation outside the function's stack frame and no freeing needed. If the value must outlive the stack frame, you can store it in a field or a global instead of the local value_hex
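The loop variant mentioned might look like this (a sketch; the helper name is mine):

```cpp
#include <string>

// Fixed-width hex conversion: 'digits' hex characters, most significant first.
std::string to_fixed_hex(unsigned value, int digits) {
    static const char* map = "0123456789ABCDEF";
    std::string out(digits, '0');
    for (int i = digits - 1; i >= 0; --i) {
        out[i] = map[value & 0x0F];  // low nibble of the remaining value
        value >>= 4;                 // drop it
    }
    return out;
}
```

Values wider than the requested digit count are silently truncated to the low nibbles, matching the fixed-width spirit of the snippet above.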

Another simple approach
#include <iostream>
#include <iomanip> // for setbase(); works for bases 8, 10 and 16 only
#include <sstream>
using namespace std;
int main(){
int x = (16*16+16+1)*15; // 0xfff
string ans;
stringstream ss;
ss << setbase(16) << x;
ans = ss.str();
cout << ans << endl; // prints fff
return 0;
}

With the variable:
char selA[12];
then:
snprintf(selA, 12, "SELA;0x%X;", 85);
will result in selA containing the string SELA;0x55;
Note that the things surrounding the 55 are just particulars related to the serial protocol used in my application.

#include <iostream>
#include <sstream>
int main()
{
unsigned int i = 4967295; // random number
std::string str1, str2;
unsigned int u1, u2;
std::stringstream ss;
// Using a void pointer:
// INT to HEX
ss << (void*)i; // <- formats the value as a pointer: full pointer-width hex
ss >> str1; // str1 now holds the hex digits of the decimal value given
ss.clear(); // <- Clear bits
// HEX to INT
ss << std::hex << str1; // <- Capitals doesn't matter so no need to do extra here
ss >> u1;
ss.clear();
// Adding 0x:
// INT to HEX with 0x
ss << "0x" << (void*)i; // <- Same as above but adding 0x to beginning
ss >> str2;
ss.clear();
// HEX to INT with 0x
ss << std::hex << str2; // <- the "0x" prefix is understood too, so nothing extra needed
ss >> u2;
ss.clear();
// Outputs:
std::cout << str1 << std::endl; // 004BCB7F
std::cout << u1 << std::endl; // 4967295
std::cout << std::endl;
std::cout << str2 << std::endl; // 0x004BCB7F
std::cout << u2 << std::endl; // 4967295
return 0;
}

char_to_hex returns a string of two characters
const char HEX_MAP[] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F'};
char replace(unsigned char c)
{
return HEX_MAP[c & 0x0f];
}
std::string char_to_hex(unsigned char c)
{
std::string hex;
// High four bits (nibble)
char left = (c >> 4);
// Low four bits (nibble)
char right = (c & 0x0f);
hex += replace(left);
hex += replace(right);
return hex;
}
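Applied to a whole buffer, the natural next step, this approach looks like the following. A sketch of mine, restated so it compiles standalone:

```cpp
#include <cstddef>
#include <string>

const char HEX_MAP[] = {'0', '1', '2', '3', '4', '5', '6', '7',
                        '8', '9', 'A', 'B', 'C', 'D', 'E', 'F'};

// Hex-encode an arbitrary byte buffer, two uppercase digits per byte.
std::string buffer_to_hex(const unsigned char* data, std::size_t len) {
    std::string hex;
    hex.reserve(len * 2);  // one allocation up front
    for (std::size_t i = 0; i < len; ++i) {
        hex += HEX_MAP[data[i] >> 4];    // high nibble
        hex += HEX_MAP[data[i] & 0x0f];  // low nibble
    }
    return hex;
}
```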

All the answers I read are pretty slow, except one of them, but that one only works for little endian CPUs. Here's a fast implementation that works on big and little endian CPUs.
std::string Hex64(uint64_t number)
{
static const char* maps = "0123456789abcdef";
// if you want more speed, pass a buffer as a function parameter and return an std::string_view (or nothing)
char buffer[17]; // = "0000000000000000"; // uncomment if leading 0s are desired
char* c = buffer + 16;
do
{
*--c = maps[number & 15];
number >>= 4;
}
while (number > 0);
// this strips the leading 0s, if you want to keep them, then return std::string(buffer, 16); and uncomment the "000..." above
return std::string(c, 16 - (c - buffer));
}
Compared to std::format and fmt::format("{:x}", value), I get between 2x (for values > (1ll << 60)) and 6x the speed (for smaller values).
Examples of input/output:
const std::vector<std::tuple<uint64_t, std::string>> vectors = {
{18446744073709551615ULL, "ffffffffffffffff"},
{ 4294967295u, "ffffffff"},
{ 16777215, "ffffff"},
{ 65535, "ffff"},
{ 255, "ff"},
{ 16, "10"},
{ 15, "f"},
{ 0, "0"},
};
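A loop over those vectors makes the check executable (restating Hex64 so the sketch is self-contained):

```cpp
#include <cstdint>
#include <string>
#include <tuple>
#include <vector>

// Leading zeros stripped, as in the answer above.
std::string Hex64(uint64_t number) {
    static const char* maps = "0123456789abcdef";
    char buffer[17];
    char* c = buffer + 16;
    do {
        *--c = maps[number & 15];
        number >>= 4;
    } while (number > 0);
    return std::string(c, 16 - (c - buffer));
}

// Returns true if every (input, expected) pair converts correctly.
bool check_vectors() {
    const std::vector<std::tuple<uint64_t, std::string>> vectors = {
        {18446744073709551615ULL, "ffffffffffffffff"},
        {4294967295u, "ffffffff"},
        {255, "ff"},
        {16, "10"},
        {15, "f"},
        {0, "0"},
    };
    for (const auto& [input, expected] : vectors)
        if (Hex64(input) != expected)
            return false;
    return true;
}
```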

You can define a macro to use it as a one-liner:
#include <sstream>
#define to_hex_str(hex_val) (static_cast<std::stringstream const&>(std::stringstream() << "0x" << std::hex << hex_val)).str()

This question is quite old, but the answers given are, in my opinion, not the best.
If you are using C++20 then you have the option to use std::format, which is a very good solution. However, if you are using C++11/14/17 or below, you will not have this option.
Most other answers either use std::stringstream or implement their own conversion by modifying the underlying string buffer directly.
The first option is rather heavyweight. The second option is inherently insecure and bug-prone.
Since I had to implement an integer-to-hex-string conversion lately, I chose a type-safe C++ implementation that uses function overloads and template specialization to let the compiler handle the type checks. The code uses sprintf (a flavor of which is generally used by the standard library for std::to_string), and it relies on template specialization to select the right sprintf format and leading-zero padding. It separately and correctly handles the different pointer and unsigned long sizes on different OSs and architectures (4/4/4, 4/4/8, 4/8/8).
This answer targets C++11
H File:
#ifndef STRINGUTILS_H_
#define STRINGUTILS_H_
#include <string>
namespace string_utils
{
/* ... Other string utils ... */
std::string hex_string(unsigned char v);
std::string hex_string(unsigned short v);
std::string hex_string(unsigned int v);
std::string hex_string(unsigned long v);
std::string hex_string(unsigned long long v);
std::string hex_string(std::ptrdiff_t v);
} // namespace string_utils
#endif
CPP File
#include "stringutils.h"
#include <cstdio>
namespace
{
template <typename T, int Width> struct LModifier;
template <> struct LModifier<unsigned char, sizeof(unsigned char)>
{
static constexpr char fmt[] = "%02hhX";
};
template <> struct LModifier<unsigned short, sizeof(unsigned short)>
{
static constexpr char fmt[] = "%04hX";
};
template <> struct LModifier<unsigned int, sizeof(unsigned int)>
{
static constexpr char fmt[] = "%08X";
};
template <> struct LModifier<unsigned long, 4>
{
static constexpr char fmt[] = "%08lX";
};
template <> struct LModifier<unsigned long, 8>
{
static constexpr char fmt[] = "%016lX";
};
template <> struct LModifier<unsigned long long, sizeof(unsigned long long)>
{
static constexpr char fmt[] = "%016llX";
};
template <> struct LModifier<std::ptrdiff_t, 4>
{
static constexpr char fmt[] = "%08tX";
};
template <> struct LModifier<std::ptrdiff_t, 8>
{
static constexpr char fmt[] = "%016tX";
};
constexpr char LModifier<unsigned char, sizeof(unsigned char)>::fmt[];
constexpr char LModifier<unsigned short, sizeof(unsigned short)>::fmt[];
constexpr char LModifier<unsigned int, sizeof(unsigned int)>::fmt[];
constexpr char LModifier<unsigned long, sizeof(unsigned long)>::fmt[];
constexpr char LModifier<unsigned long long, sizeof(unsigned long long)>::fmt[];
constexpr char LModifier<std::ptrdiff_t, sizeof(std::ptrdiff_t)>::fmt[];
template <typename T, std::size_t BUF_SIZE = sizeof(T) * 2U> std::string hex_string_(T v)
{
std::string ret(BUF_SIZE + 1, 0);
std::sprintf(&ret[0], LModifier<T, sizeof(T)>::fmt, v);
ret.pop_back(); // drop the NUL terminator sprintf wrote, keeping length == BUF_SIZE
return ret;
}
} // anonymous namespace
std::string string_utils::hex_string(unsigned char v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(unsigned short v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(unsigned int v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(unsigned long v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(unsigned long long v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(std::ptrdiff_t v)
{
return hex_string_(v);
}

Related

Cannot directly convert number to hex null-terminated string, has to convert to std::string then use .c_str()

I've tried to convert an integer to a hex null-terminated (or "C-style") string, but I cannot use it with printf or my custom log function. It only works if I convert it to a std::string and then use .c_str() when passing it as a parameter, which produces ugly, hard-to-understand code.
It's important to know that using std::string and appending to it with "str +=" does work.
const char* IntToHexString(int nDecimalNumber) {
int nTemp = 0;
char szHex[128] = { 0 };
char hex[] = { '0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F' };
while (nDecimalNumber > 0) {
nTemp = nDecimalNumber % 16;
sprintf(szHex, "%s%s", hex[nTemp], szHex);
nDecimalNumber = nDecimalNumber / 16;
}
sprintf(szHex, "0x%s", szHex);
return szHex;
}
I've tried to use the Visual Studio debugger, but it doesn't show any error messages, because it crashes somewhere in a DLL that has no symbols loaded.
Your main problem is that you define a variable on the stack, locally in the function, and then return it.
After the function returns, the char* will point "somewhere", to an undefined position. That is a major bug. You also have other bugs that have been commented on already, like sprintf(szHex, "0x%s", szHex);, which is undefined behaviour (UB), or sprintf(szHex, "%s%s", hex[nTemp], szHex);, which has the same problem plus a wrong format string ("%s" with a char argument).
The more C++ solution would be, as already shown in many posts:
#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>
std::string toHexString(unsigned int hexValue)
{
std::ostringstream oss;
oss << "0x" << std::hex << hexValue;
return std::string(oss.str());
}
int main()
{
std::cout << toHexString(15) << '\n';
// or directly
std::cout << "0x" << std::hex << 15 << '\n';
return 0;
}
Of course a C-Style solution is also possible.
But all the following I would not recommend:
If you want to stick to a C-like solution with char*, you could make char szHex[128] = { 0 }; static. Or, even better, pass in a pointer to a buffer and return its address, as in:
#include <stdio.h>
#include <iostream>
char* toHexCharP(unsigned int hexValue, char *outBuffer, const size_t maxSizeOutBuffer)
{
snprintf(outBuffer,maxSizeOutBuffer-1,"0x%X",hexValue);
return outBuffer;
}
constexpr size_t MaxBufSize = 100U;
int main()
{
char buf[MaxBufSize];
std::cout << toHexCharP(15, buf, MaxBufSize) << '\n';
return 0;
}
But as said, I would not recommend it. Too dangerous.
Your solution should look as follows:
std::string IntToHexString(int nDecimalNumber) {
std::ostringstream str;
str << std::hex << nDecimalNumber;
return str.str();
}
// ...
std::string transformed = IntToHexString(123);
You can then use transformed.c_str() to get your string as const char*.
Unless you have reasons to do so, you should never work with const char* in modern C++. Use std::string::c_str() if you need to.
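That bridge to a C API can be as small as the following sketch (restating the function above; log_hex is my name):

```cpp
#include <cstdio>
#include <sstream>
#include <string>

std::string IntToHexString(int nDecimalNumber) {
    std::ostringstream str;
    str << std::hex << nDecimalNumber;
    return str.str();
}

// c_str() stays valid while the owning std::string is alive and unmodified;
// here the temporary lives until the end of the printf call's full expression.
inline void log_hex(int value) {
    std::printf("0x%s\n", IntToHexString(value).c_str());
}
```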

Converting unsigned char * to hexstring

The code below takes a hex string (every byte represented as its corresponding hex value),
converts it to an unsigned char* buffer, and then converts it back to a hex string.
This code tests the conversion from an unsigned char* buffer to a hex string,
which I need to send over the network to a receiver process.
I chose a hex string because an unsigned char can be in the range 0 to 255 and there is no printable character after 127.
The code below shows the portion that bugs me; it's in the comment.
#include <iostream>
#include <sstream>
#include <iomanip>
using namespace std;
// converts a hexstring to the corresponding integer, e.g. "c0" -> 192
int convertHexStringToInt(const string & hexString)
{
stringstream geek;
int x=0;
geek << std::hex << hexString;
geek >> x;
return x;
}
// converts a complete hexstring to unsigned char * buffer
void convertHexStringToUnsignedCharBuffer(string hexString, unsigned char* hexBuffer)
{
int i=0;
while(hexString.length())
{
string hexStringPart = hexString.substr(0,2);
hexString = hexString.substr(2);
int hexStringOneByte = convertHexStringToInt (hexStringPart);
hexBuffer[i] = static_cast<unsigned char>((hexStringOneByte & 0xFF)) ;
i++;
}
}
int main()
{
//below hex string is a hex representation of an unsigned char* buffer.
//it is generated by an encryption algorithm in unsigned char* format;
//I am converting it to a hex string to make it printable for verification purposes,
//and it is taken as the input here to test the conversion logic.
string inputHexString = "552027e33844dd7b71676b963c0b8e20";
string outputHexString;
stringstream geek;
unsigned char * hexBuffer = new unsigned char[inputHexString.length()/2];
convertHexStringToUnsignedCharBuffer(inputHexString, hexBuffer);
for (int i=0;i<inputHexString.length()/2;i++)
{
geek <<std::hex << std::setw(2) << std::setfill('0')<<(0xFF&hexBuffer[i]); // this works
//geek <<std::hex << std::setw(2) << std::setfill('0')<<(hexBuffer[i]); -- > this does not work
// I am not able to figure out why I need to do the bit wise and operation with unsigned char "0xFF&hexBuffer[i]"
// without this the conversion does not work for individual bytes having ascii values more than 127.
}
geek >> outputHexString;
cout << "input hex string: " << inputHexString<<endl;
cout << "output hex string: " << outputHexString<<endl;
if(0 == inputHexString.compare(outputHexString))
cout<<"hex encoding successful"<<endl;
else
cout<<"hex encoding failed"<<endl;
if(NULL != hexBuffer)
delete[] hexBuffer;
return 0;
}
// Can someone explain? I am sure it's something silly that I am missing.
The C++20 way:
#include <algorithm>
#include <format>
#include <iostream>
#include <iterator>
#include <span>
#include <string>

unsigned char data[] = "Hello world\n\t\r"; // sizeof includes the implicit trailing '\0'
std::size_t data_size = sizeof(data); // 15 bytes, terminator included
auto sp = std::span(data, data_size);
std::transform(sp.begin(), sp.end(),
std::ostream_iterator<std::string>(std::cout),
[](unsigned char c) -> std::string {
return std::format("{:02X}", int(c));
});
or if you want to store the result in a string:
std::string result{};
result.reserve(data_size * 2);
for (unsigned char c : sp)
result += std::format("{:02X}", int(c));
Output:
48656C6C6F20776F726C640A090D00
The output of an unsigned char is like the output of a char, which obviously does not do what the OP expects.
I tested the following on coliru:
#include <iomanip>
#include <iostream>
int main()
{
std::cout << "Output of (unsigned char)0xc0: "
<< std::hex << std::setw(2) << std::setfill('0') << (unsigned char)0xc0 << '\n';
return 0;
}
and got:
Output of (unsigned char)0xc0: 0�
This is caused by the std::ostream::operator<<() which is chosen out of the available operators. I looked on cppreference
operator<<(std::basic_ostream) and
std::basic_ostream::operator<<
and found
template< class Traits >
basic_ostream<char,Traits>& operator<<( basic_ostream<char,Traits>& os,
unsigned char ch );
in the former (with a little bit help from M.M).
The OP suggested a fix: a bit-wise AND with 0xff, which seemed to work. Checking this on coliru.com:
#include <iomanip>
#include <iostream>
int main()
{
std::cout << "Output of (unsigned char)0xc0: "
<< std::hex << std::setw(2) << std::setfill('0') << (0xff & (unsigned char)0xc0) << '\n';
return 0;
}
Output:
Output of (unsigned char)0xc0: c0
Really, this seems to work. Why?
0xff is an int constant (strictly speaking: an integer literal) and has type int. Hence, the bit-wise AND promotes (unsigned char)0xc0 to int as well, yields a result of type int, and hence the std::ostream::operator<< for int is applied.
This is one option to solve it. I can provide another: just convert the unsigned char to unsigned.
While the promotion of unsigned char to int introduces a possible sign-bit extension (undesired in this case), this doesn't happen when unsigned char is converted to unsigned. The output stream operator for unsigned provides the intended output as well:
#include <iomanip>
#include <iostream>
int main()
{
std::cout << "Output of (unsigned char)0xc0: "
<< std::hex << std::setw(2) << std::setfill('0') << (unsigned)(unsigned char)0xc0 << '\n';
const unsigned char c = 0xc0;
std::cout << "Output of unsigned char c = 0xc0: "
<< std::hex << std::setw(2) << std::setfill('0') << (unsigned)c << '\n';
return 0;
}
Output:
Output of (unsigned char)0xc0: c0
Output of unsigned char c = 0xc0: c0
Live Demo on coliru

Converting hex number to cstring [duplicate]

How do I convert an integer to a hex string in C++?
I can find some ways to do it, but they mostly seem targeted towards C. It doesn't seem there's a native way to do it in C++. It is a pretty simple problem though; I've got an int which I'd like to convert to a hex string for later printing.
Use <iomanip>'s std::hex. If you print, just send it to std::cout, if not, then use std::stringstream
std::stringstream stream;
stream << std::hex << your_int;
std::string result( stream.str() );
You can prepend the first << with << "0x" or whatever you like if you wish.
Other manips of interest are std::oct (octal) and std::dec (back to decimal).
One problem you may encounter is the fact that this produces the exact amount of digits needed to represent it. You may use setfill and setw this to circumvent the problem:
stream << std::setfill ('0') << std::setw(sizeof(your_type)*2)
<< std::hex << your_int;
So finally, I'd suggest such a function:
template< typename T >
std::string int_to_hex( T i )
{
std::stringstream stream;
stream << "0x"
<< std::setfill ('0') << std::setw(sizeof(T)*2)
<< std::hex << i;
return stream.str();
}
To make it lighter and faster I suggest to use direct filling of a string.
template <typename I> std::string n2hexstr(I w, size_t hex_len = sizeof(I)<<1) {
static const char* digits = "0123456789ABCDEF";
std::string rc(hex_len,'0');
for (size_t i=0, j=(hex_len-1)*4 ; i<hex_len; ++i,j-=4)
rc[i] = digits[(w>>j) & 0x0f];
return rc;
}
You can do it with C++20 std::format:
std::string s = std::format("{:x}", 42); // s == 2a
Until std::format is widely available you can use the {fmt} library, std::format is based on (godbolt):
std::string s = fmt::format("{:x}", 42); // s == 2a
Disclaimer: I'm the author of {fmt} and C++20 std::format.
Use std::stringstream to convert integers into strings and its special manipulators to set the base. For example like that:
std::stringstream sstream;
sstream << std::hex << my_integer;
std::string result = sstream.str();
Just print it as an hexadecimal number:
int i = /* ... */;
std::cout << std::hex << i;
#include <boost/format.hpp>
...
cout << (boost::format("%x") % 1234).str(); // output is: 4d2
You can try the following. It's working...
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
using namespace std;
template <class T>
string to_string(T t, ios_base & (*f)(ios_base&))
{
ostringstream oss;
oss << f << t;
return oss.str();
}
int main ()
{
cout<<to_string<long>(123456, hex)<<endl;
system("PAUSE");
return 0;
}
Since C++20, with std::format, you might do:
std::format("{:#x}", your_int); // 0x2a
std::format("{:#010x}", your_int); // 0x0000002a
Demo
Just have a look on my solution,[1] that I verbatim[2] copied from my project. My goal was to combine flexibility and safety within my actual needs:[3]
no 0x prefix added: caller may decide
automatic width deduction: less typing
explicit width control: widening for formatting, (lossless) shrinking to save space
capable for dealing with long long
restricted to integral types: avoid surprises by silent conversions
ease of understanding
no hard-coded limit
#include <string>
#include <sstream>
#include <iomanip>
/// Convert integer value `val` to text in hexadecimal format.
/// The minimum width is padded with leading zeros; if not
/// specified, this `width` is derived from the type of the
/// argument. Function suitable from char to long long.
/// Pointers, floating point values, etc. are not supported;
/// passing them will result in an (intentional!) compiler error.
/// Basics from: http://stackoverflow.com/a/5100745/2932052
template <typename T>
inline std::string int_to_hex(T val, size_t width=sizeof(T)*2)
{
std::stringstream ss;
ss << std::setfill('0') << std::setw(width) << std::hex << (val|0);
return ss.str();
}
[1] based on the answer by Kornel Kisielewicz
[2] Only the German API doc was translated to English.
[3] Translated into the language of CppTest, this is how it reads:
TEST_ASSERT(int_to_hex(char(0x12)) == "12");
TEST_ASSERT(int_to_hex(short(0x1234)) == "1234");
TEST_ASSERT(int_to_hex(long(0x12345678)) == "12345678");
TEST_ASSERT(int_to_hex((long long)(0x123456789abcdef0)) == "123456789abcdef0");
TEST_ASSERT(int_to_hex(0x123, 1) == "123");
TEST_ASSERT(int_to_hex(0x123, 8) == "00000123");
// width deduction test as suggested by Lightness Races in Orbit:
TEST_ASSERT(int_to_hex(short(0x12)) == "0012");
Thanks to Lincoln's comment below, I've changed this answer.
The following answer properly handles 8-bit ints at compile time. It does, however, require C++17. If you don't have C++17, you'll have to do something else (e.g. provide overloads of this function, one for uint8_t and one for int8_t, or use something besides "if constexpr", maybe enable_if).
template< typename T >
std::string int_to_hex( T i )
{
// Ensure this function is called with a template parameter that makes sense. Note: static_assert is only available in C++11 and higher.
static_assert(std::is_integral<T>::value, "Template argument 'T' must be a fundamental integer type (e.g. int, short, etc..).");
std::stringstream stream;
stream << "0x" << std::setfill ('0') << std::setw(sizeof(T)*2) << std::hex;
// If T is an 8-bit integer type (e.g. uint8_t or int8_t) it will be
// treated as an ASCII code, giving the wrong result. So we use C++17's
// "if constexpr" to have the compiler decide at compile time whether it's
// converting an 8-bit int or not.
if constexpr (std::is_same_v<std::uint8_t, T>)
{
// Unsigned 8-bit int type. Cast to int (thanks Lincoln) to
// avoid ASCII code interpretation of the int. The number of hex digits
// in the returned string will still be two, which is correct for 8 bits,
// because of the 'sizeof(T)' above.
stream << static_cast<int>(i);
}
else if constexpr (std::is_same_v<std::int8_t, T>)
{
// For 8-bit signed int, same as above, except we must first cast to unsigned
// int, because values above 127 (0x7f) would cause further issues
// if we cast directly to int.
stream << static_cast<int>(static_cast<uint8_t>(i));
}
else
{
// No cast needed for ints wider than 8 bits.
stream << i;
}
return stream.str();
}
Original answer that doesn't handle 8-bit ints correctly as I thought it did:
Kornel Kisielewicz's answer is great. But a slight addition helps catch cases where you're calling this function with template arguments that don't make sense (e.g. float) or that would result in messy compiler errors (e.g. user-defined type).
template< typename T >
std::string int_to_hex( T i )
{
// Ensure this function is called with a template parameter that makes sense. Note: static_assert is only available in C++11 and higher.
static_assert(std::is_integral<T>::value, "Template argument 'T' must be a fundamental integer type (e.g. int, short, etc..).");
std::stringstream stream;
stream << "0x"
<< std::setfill ('0') << std::setw(sizeof(T)*2)
<< std::hex << i;
// Optional: replace above line with this to handle 8-bit integers.
// << std::hex << std::to_string(i);
return stream.str();
}
I've edited this to add a call to std::to_string, because 8-bit integer types (e.g. std::uint8_t) passed to std::stringstream are treated as char, which doesn't give you the result you want. Passing such integers to std::to_string handles them correctly and doesn't hurt things when using other, larger integer types. Of course you may suffer a slight performance hit in these cases, since the std::to_string call is unnecessary for the wider types.
Note: I would have just added this in a comment to the original answer, but I don't have the rep to comment.
I can see all the elaborate coding samples others have used as answers, but there is nothing wrong with simply having this in a C++ application:
printf ("%04x", num);
for num = 127:
007f
https://en.wikipedia.org/wiki/Printf_format_string
C++ inherits the C standard library, so the printf family is available in C++ as well (via <cstdio>).
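If you need the result in a std::string rather than printed, the same format string works with std::snprintf. A hedged sketch (to_hex is a hypothetical helper name, not from the answer above):

```cpp
#include <cstdio>
#include <string>

// Capture the printf-style hex formatting into a std::string.
std::string to_hex(unsigned int num) {
    char buf[2 * sizeof(num) + 1];               // up to 8 hex digits plus '\0'
    std::snprintf(buf, sizeof buf, "%04x", num); // zero-pads to at least 4 digits
    return std::string(buf);
}
```

For example, to_hex(127) yields "007f", matching the printf output above.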
A new C++17 way: std::to_chars from <charconv> (https://en.cppreference.com/w/cpp/utility/to_chars):
char addressStr[20] = { 0 };
std::to_chars(std::begin(addressStr), std::end(addressStr), address, 16);
return std::string{addressStr};
This is a bit verbose since std::to_chars works with a pre-allocated buffer to avoid dynamic allocations, but this also lets you optimize the code since allocations get very expensive if this is in a hot spot.
For extra optimization, you can omit pre-initializing the buffer and check the return value of to_chars to check for errors and get the length of the data written. Note: to_chars does NOT write a null-terminator!
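A sketch of that optimization, assuming a 32-bit unsigned input (int_to_hex is a hypothetical wrapper name): the buffer is left uninitialized and the pointer returned by to_chars determines the string length.

```cpp
#include <charconv>
#include <string>
#include <system_error>

// Requires C++17. No pre-initialization; length comes from the result.
std::string int_to_hex(unsigned int value) {
    char buf[2 * sizeof(value)];  // 8 chars suffice for 32-bit hex, no terminator needed
    auto res = std::to_chars(buf, buf + sizeof buf, value, 16);
    if (res.ec != std::errc{})    // out of buffer space (cannot happen for this size)
        return {};
    return std::string(buf, res.ptr); // construct from [buf, one-past-last-written)
}
```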
int num = 30;
std::cout << std::hex << num << std::endl; // prints the hexadecimal of 30: 1e
I do:
int hex = 10;
std::string hexstring = stringFormat("%X", hex);
Take a look at the SO answer from iFreilicht and the required template header file from this GIST!
_itoa_s
char buf[_MAX_U64TOSTR_BASE2_COUNT];
_itoa_s(10, buf, _countof(buf), 16);
printf("%s\n", buf); // a
swprintf_s
uint8_t x = 10;
wchar_t buf[_MAX_ITOSTR_BASE16_COUNT];
swprintf_s(buf, L"%02X", x);
My solution. Only integral types are allowed.
You can test/run on https://replit.com/#JomaCorpFX/ToHex
Update: an optional 0x prefix can be enabled or disabled with the second parameter.
definition.h
#include <iomanip>
#include <sstream>
template <class T, class T2 = typename std::enable_if<std::is_integral<T>::value>::type>
static std::string ToHex(const T & data, bool addPrefix = true);
template<class T, class>
inline std::string ToHex(const T & data, bool addPrefix)
{
std::stringstream sstream;
sstream << std::hex;
std::string ret;
if (typeid(T) == typeid(char) || typeid(T) == typeid(unsigned char) || sizeof(T)==1)
{
sstream << static_cast<int>(data);
ret = sstream.str();
if (ret.length() > 2)
{
ret = ret.substr(ret.length() - 2, 2);
}
}
else
{
sstream << data;
ret = sstream.str();
}
return (addPrefix ? u8"0x" : u8"") + ret;
}
main.cpp
#include <iostream>
#include "definition.h"
int main()
{
std::cout << ToHex<unsigned char>(254) << std::endl;
std::cout << ToHex<char>(-2) << std::endl;
std::cout << ToHex<int>(-2) << std::endl;
std::cout << ToHex<long long>(-2) << std::endl;
std::cout<< std::endl;
std::cout << ToHex<unsigned char>(254, false) << std::endl;
std::cout << ToHex<char>(-2, false) << std::endl;
std::cout << ToHex<int>(-2, false) << std::endl;
std::cout << ToHex<long long>(-2, false) << std::endl;
return 0;
}
Results:
0xfe
0xfe
0xfffffffe
0xfffffffffffffffe
fe
fe
fffffffe
fffffffffffffffe
For those of you who have figured out that many/most of the ios::fmtflags don't work with std::stringstream, yet who like the template idea that Kornel posted way back when, the following works and is relatively clean:
#include <iomanip>
#include <sstream>
template< typename T >
std::string hexify(T i)
{
std::stringbuf buf;
std::ostream os(&buf);
os << "0x" << std::setfill('0') << std::setw(sizeof(T) * 2)
<< std::hex << i;
return buf.str();
}
int someNumber = 314159265;
std::string hexified = hexify< int >(someNumber);
Code for your reference:
#include <iomanip>
#include <sstream>
...
string intToHexString(int intValue) {
string hexStr;
/// integer value to hex-string
std::stringstream sstream;
sstream << "0x"
<< std::setfill ('0') << std::setw(2)
<< std::hex << (int)intValue;
hexStr= sstream.str();
sstream.clear(); // resets the stream's error flags (it does not erase the contents)
return hexStr;
}
I would like to add an answer that enjoys the beauty of the C++ language: its adaptability for working at high and low levels. Happy programming.
template <class T, class U> U* Int2Hex(T lnumber, U* buffer)
{
const char* ref = "0123456789ABCDEF";
T hNibbles = (lnumber >> 4);
unsigned char* b_lNibbles = (unsigned char*)&lnumber;
unsigned char* b_hNibbles = (unsigned char*)&hNibbles;
U* pointer = buffer + (sizeof(lnumber) << 1);
*pointer = 0;
do {
*--pointer = ref[(*b_lNibbles++) & 0xF];
*--pointer = ref[(*b_hNibbles++) & 0xF];
} while (pointer > buffer);
return buffer;
}
Examples:
char buffer[100] = { 0 };
Int2Hex(305419896ULL, buffer);//returns "0000000012345678"
Int2Hex(305419896UL, buffer);//returns "12345678"
Int2Hex((short)65533, buffer);//returns "FFFD"
Int2Hex((char)18, buffer);//returns "12"
wchar_t buffer[100] = { 0 };
Int2Hex(305419896ULL, buffer);//returns L"0000000012345678"
Int2Hex(305419896UL, buffer);//returns L"12345678"
Int2Hex((short)65533, buffer);//returns L"FFFD"
Int2Hex((char)18, buffer);//returns L"12"
For a fixed number of digits, for instance 2:
static const char* digits = "0123456789ABCDEF";//dec 2 hex digits positional map
char value_hex[3];//2 digits + terminator
value_hex[0] = digits[(int_value >> 4) & 0x0F]; // shift by 4 bits (one hex digit) and take the lower 4; for higher digits shift by multiples of 4
value_hex[1] = digits[int_value & 0x0F]; //no need to move the lower digit
value_hex[2] = '\0'; //terminator
You can also write a for-loop variant to handle a variable number of digits.
Benefits:
speed: minimal bit operations, with no external function calls
memory: it uses a local buffer, so there is no allocation outside the function's stack frame and no memory to free. If needed, you can use a field or a global to make value_hex persist beyond the stack frame.
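A sketch of the for-loop variant mentioned above (to_hex_n is a hypothetical name; same bit operations, just looped over a caller-chosen digit count):

```cpp
#include <string>

// Emit num_digits hex digits, most significant first (extra high digits are truncated).
std::string to_hex_n(unsigned int int_value, int num_digits) {
    static const char* digits = "0123456789ABCDEF"; // dec-to-hex positional map
    std::string out(num_digits, '0');
    for (int i = num_digits - 1; i >= 0; --i) {
        out[i] = digits[int_value & 0x0F]; // take the lowest 4 bits
        int_value >>= 4;                   // shift the next digit down
    }
    return out;
}
```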
Another simple approach:
#include <iostream>
#include <sstream> // for stringstream
#include <iomanip> // for setbase(), works for bases 8, 10 and 16 only
using namespace std;
int main(){
int x = (16*16+16+1)*15;
string ans;
stringstream ss;
ss << setbase(16) << x;
ans = ss.str();
cout << ans << endl; // prints fff
}
With the variable:
char selA[12];
then:
snprintf(selA, 12, "SELA;0x%X;", 85);
will result in selA containing the string SELA;0x55;
Note that the things surrounding the 55 are just particulars related to the serial protocol used in my application.
#include <iostream>
#include <sstream>
int main()
{
unsigned int i = 4967295; // random number
std::string str1, str2;
unsigned int u1, u2;
std::stringstream ss;
Using void pointer:
// INT to HEX
ss << (void*)i; // <- FULL hex address using void pointer
ss >> str1; // giving address value of one given in decimals.
ss.clear(); // <- Clear bits
// HEX to INT
ss << std::hex << str1; // <- Capitalization doesn't matter, so nothing extra to do here
ss >> u1;
ss.clear();
Adding 0x:
// INT to HEX with 0x
ss << "0x" << (void*)i; // <- Same as above but adding 0x to beginning
ss >> str2;
ss.clear();
// HEX to INT with 0x
ss << std::hex << str2; // <- 0x is also understood, so nothing extra to do here
ss >> u2;
ss.clear();
Outputs:
std::cout << str1 << std::endl; // 004BCB7F
std::cout << u1 << std::endl; // 4967295
std::cout << std::endl;
std::cout << str2 << std::endl; // 0x004BCB7F
std::cout << u2 << std::endl; // 4967295
return 0;
}
char_to_hex returns a string of two characters
const char HEX_MAP[] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F'};
char replace(unsigned char c)
{
return HEX_MAP[c & 0x0f];
}
std::string char_to_hex(unsigned char c)
{
std::string hex;
// High four bits
char left = (c >> 4);
// Low four bits
char right = (c & 0x0f);
hex += replace(left);
hex += replace(right);
return hex;
}
All the answers I read are pretty slow, except one of them, but that one only works for little endian CPUs. Here's a fast implementation that works on big and little endian CPUs.
std::string Hex64(uint64_t number)
{
static const char* maps = "0123456789abcdef";
// if you want more speed, pass a buffer as a function parameter and return an std::string_view (or nothing)
char buffer[17]; // = "0000000000000000"; // uncomment if leading 0s are desired
char* c = buffer + 16;
do
{
*--c = maps[number & 15];
number >>= 4;
}
while (number > 0);
// this strips the leading 0s, if you want to keep them, then return std::string(buffer, 16); and uncomment the "000..." above
return std::string(c, 16 - (c - buffer));
}
Compared to std::format and fmt::format("{:x}", value), I get between 2x (for values > (1ll << 60)) and 6x the speed (for smaller values).
Examples of input/output:
const std::vector<std::tuple<uint64_t, std::string>> vectors = {
{18446744073709551615ull, "ffffffffffffffff"},
{ 4294967295u, "ffffffff"},
{ 16777215, "ffffff"},
{ 65535, "ffff"},
{ 255, "ff"},
{ 16, "10"},
{ 15, "f"},
{ 0, "0"},
};
You can define a macro to use as a one-liner, like this:
#include <sstream>
#define to_hex_str(hex_val) (static_cast<std::stringstream const&>(std::stringstream() << "0x" << std::hex << hex_val)).str()
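For example (repeating the macro so the snippet is self-contained; describe is a hypothetical caller):

```cpp
#include <sstream>
#include <string>

// The macro builds a temporary stringstream, inserts "0x" and the hex value,
// then casts back to stringstream to call .str() -- all in one expression.
#define to_hex_str(hex_val) (static_cast<std::stringstream const&>(std::stringstream() << "0x" << std::hex << hex_val)).str()

// Usable anywhere an expression yielding std::string fits:
std::string describe(int v) { return "value=" + to_hex_str(v); }
```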
This question is quite old, but the answers given are in my opinion not the best.
If you are using C++20 then you have the option to use std::format which is a very good solution. However if you are using C++11/14/17 or below you will not have this option.
Most other answers either use the std::stringstream or implement their own conversion modifying the underlying string buffer directly by themselves.
The first option is rather heavyweight. The second option is inherently insecure and bug-prone.
Since I had to implement an integer-to-hex-string conversion lately, I chose to do a true C++ type-safe implementation, using function overloads and template partial specialization to let the compiler handle the type checks. The code uses sprintf (one flavor of which is generally used by the standard library for std::to_string), and it relies on template partial specialization to select the right sprintf format and leading-zero padding. It separately and correctly handles different pointer sizes and unsigned long sizes for different OSs and architectures (4/4/4, 4/4/8, 4/8/8).
This answer targets C++11
H File:
#ifndef STRINGUTILS_H_
#define STRINGUTILS_H_
#include <string>
namespace string_utils
{
/* ... Other string utils ... */
std::string hex_string(unsigned char v);
std::string hex_string(unsigned short v);
std::string hex_string(unsigned int v);
std::string hex_string(unsigned long v);
std::string hex_string(unsigned long long v);
std::string hex_string(std::ptrdiff_t v);
} // namespace string_utils
#endif
CPP File
#include "stringutils.h"
#include <cstdio>
namespace
{
template <typename T, int Width> struct LModifier;
template <> struct LModifier<unsigned char, sizeof(unsigned char)>
{
static constexpr char fmt[] = "%02hhX";
};
template <> struct LModifier<unsigned short, sizeof(unsigned short)>
{
static constexpr char fmt[] = "%04hX";
};
template <> struct LModifier<unsigned int, sizeof(unsigned int)>
{
static constexpr char fmt[] = "%08X";
};
template <> struct LModifier<unsigned long, 4>
{
static constexpr char fmt[] = "%08lX";
};
template <> struct LModifier<unsigned long, 8>
{
static constexpr char fmt[] = "%016lX";
};
template <> struct LModifier<unsigned long long, sizeof(unsigned long long)>
{
static constexpr char fmt[] = "%016llX";
};
template <> struct LModifier<std::ptrdiff_t, 4>
{
static constexpr char fmt[] = "%08tX";
};
template <> struct LModifier<std::ptrdiff_t, 8>
{
static constexpr char fmt[] = "%016tX";
};
constexpr char LModifier<unsigned char, sizeof(unsigned char)>::fmt[];
constexpr char LModifier<unsigned short, sizeof(unsigned short)>::fmt[];
constexpr char LModifier<unsigned int, sizeof(unsigned int)>::fmt[];
constexpr char LModifier<unsigned long, sizeof(unsigned long)>::fmt[];
constexpr char LModifier<unsigned long long, sizeof(unsigned long long)>::fmt[];
constexpr char LModifier<std::ptrdiff_t, sizeof(std::ptrdiff_t)>::fmt[];
template <typename T, std::size_t BUF_SIZE = sizeof(T) * 2U> std::string hex_string_(T v)
{
std::string ret(BUF_SIZE + 1, 0);
std::sprintf(&ret[0], LModifier<T, sizeof(T)>::fmt, v);
ret.resize(BUF_SIZE); // drop the terminator sprintf wrote into the buffer
return ret;
}
} // anonymous namespace
std::string string_utils::hex_string(unsigned char v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(unsigned short v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(unsigned int v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(unsigned long v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(unsigned long long v)
{
return hex_string_(v);
}
std::string string_utils::hex_string(std::ptrdiff_t v)
{
return hex_string_(v);
}

How to choose snprintf mask in template function for integral type?

I have a template function that takes an argument of integral type and copies it to a character array on stack with std::snprintf:
static const size_t size = 256;
char buffer[size];
template <class T, typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
bool to_array(T integer) {
auto ret = std::snprintf(buffer, size, "%lld", integer);
return ret > 0;
}
The problem is that if this function is used with the int type, for example, the compiler prints a warning that the "%lld" mask requires the long long int type.
To fix it, I used boost::fusion::map:
namespace bf = boost::fusion;
using integral_masks = bf::map<
bf::pair<char, const char*>,
bf::pair<short, const char*>,
....
bf::pair<unsigned long long, const char*>
>;
integral_masks masks(
bf::make_pair<char>("%c"),
bf::make_pair<int>("%d"),
....
bf::make_pair<unsigned long>("%lu")
bf::make_pair<unsigned long long>("%llu")
);
auto ret = std::snprintf(buffer, size, bf::at_key<T>(masks), integer);
This works, however it looks a bit heavy, and boost::fusion headers increase compile times dramatically. Maybe there is a better and easier way to do it?
Since you're "trying to avoid allocation" and you're using boost anyways: use Boost Iostreams custom devices
P.S. In case it's not obvious, by using streams you get all the goodness:
combine with Boost Format if you want printf style or positional argument format strings
combine with Boost Locale for localized messages (gettext) and formatting (ordinals, dates, numerics, ...)
Live On Coliru
#include <array>
#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/stream.hpp>
#include <iostream>
namespace io = boost::iostreams;
int main()
{
std::array<char, 128> buf;
auto b = buf.begin(), e = buf.end();
io::array_sink as(b, e);
io::stream<io::array_sink> os(as);
os << '1' << uint16_t(42) << uint32_t(42) << std::showbase << std::hex << int64_t(-1) << "\n"
<< std::boolalpha << false << "\n"
<< std::numeric_limits<double>::infinity();
std::cout << "result '" << std::string(b, os.tellp()) << "'\n";
}
This will just stop writing output after buf has been filled.
Realistically, you might just want the back_inserter. That way you get the best of both worlds: control over allocations while not restricting yourself to an arbitrary limit.
See also std::string::reserve for further optimizations. You can reuse the string as often as you wish without incurring more allocations.
Live On Coliru
#include <array>
#include <boost/iostreams/device/back_inserter.hpp>
#include <boost/iostreams/stream.hpp>
#include <iostream>
namespace io = boost::iostreams;
int main()
{
std::string buf;
io::stream<io::back_insert_device<std::string> > os(io::back_inserter(buf));
os << '1' << uint16_t(42) << uint32_t(42) << std::showbase << std::hex << int64_t(-1) << "\n"
<< std::boolalpha << false << "\n"
<< std::numeric_limits<double>::infinity();
os.flush();
std::cout << "result '" << buf << "'\n";
}
Both the above uses use Boost Iostreams in header-only mode (no runtime dependency on Boost (shared) libraries).
You may use constexpr functions:
constexpr const char* format_of(char) { return "%c"; }
constexpr const char* format_of(int) { return "%d"; }
constexpr const char* format_of(unsigned long) { return "%lu"; }
constexpr const char* format_of(unsigned long long) { return "%llu"; }
Live example
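For illustration, here is a sketch of how format_of could plug into the asker's template (a hypothetical variation: it returns the formatted string instead of a bool, so the result is easy to check, and uses a local buffer):

```cpp
#include <cstdio>
#include <string>
#include <type_traits>

// Format strings selected at compile time by ordinary overload resolution.
constexpr const char* format_of(char)               { return "%c"; }
constexpr const char* format_of(int)                { return "%d"; }
constexpr const char* format_of(unsigned long)      { return "%lu"; }
constexpr const char* format_of(unsigned long long) { return "%llu"; }

template <class T, typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
std::string to_array(T integer) {
    char buffer[256];
    // format_of(integer) picks the matching format, so no mask mismatch warning.
    std::snprintf(buffer, sizeof buffer, format_of(integer), integer);
    return buffer;
}
```

Types without a format_of overload (e.g. short) simply fail to compile, which keeps the mask and the argument type in sync.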

How to convert Byte Array to hex string in visual c++?

Declaration of a method are following:
//some.h
void TDES_Decryption(BYTE *Data, BYTE *Key, BYTE *InitalVector, int Length);
I am calling this method from the following code:
//some.c
extern "C" __declspec(dllexport) bool _cdecl OnDecryption(LPCTSTR stringKSN, LPCTSTR BDK){
TDES_Decryption(m_Track1Buffer, m_cryptoKey, init_vector, len);
return m_Track1Buffer;
}
The data type of m_Track1Buffer is BYTE m_Track1Buffer[1000];
Now I want to make some changes in the above method, i.e. I want to return the string in hex instead of bytes. How should I convert this m_Track1Buffer to a hex string?
As you have mentioned C++, here is an answer. <iomanip> is used to write ints in hex form into the stringstream.
#include <iomanip>
#include <sstream>
#include <string>
std::string hexStr(const uint8_t *data, int len)
{
std::stringstream ss;
ss << std::hex;
for( int i(0) ; i < len; ++i )
ss << std::setw(2) << std::setfill('0') << (int)data[i];
return ss.str();
}
This code will convert a byte array of fixed size 100 into a hex string:
BYTE array[100];
char hexstr[201];
int i;
for (i=0; i<ARRAY_SIZE(array); i++) {
sprintf(hexstr+i*2, "%02x", array[i]);
}
hexstr[i*2] = 0;
Here is a somewhat more flexible version (Use uppercase characters? Insert spaces between bytes?) that can be used with plain arrays and various standard containers:
#include <string>
#include <sstream>
#include <iomanip>
template<typename TInputIter>
std::string make_hex_string(TInputIter first, TInputIter last, bool use_uppercase = true, bool insert_spaces = false)
{
std::ostringstream ss;
ss << std::hex << std::setfill('0');
if (use_uppercase)
ss << std::uppercase;
while (first != last)
{
ss << std::setw(2) << static_cast<int>(*first++);
if (insert_spaces && first != last)
ss << " ";
}
return ss.str();
}
Example usage (plain array):
uint8_t byte_array[] = { 0xDE, 0xAD, 0xC0, 0xDE, 0x00, 0xFF };
auto from_array = make_hex_string(std::begin(byte_array), std::end(byte_array), true, true);
assert(from_array == "DE AD C0 DE 00 FF");
Example usage (std::vector):
// fill with values from the array above
std::vector<uint8_t> byte_vector(std::begin(byte_array), std::end(byte_array));
auto from_vector = make_hex_string(byte_vector.begin(), byte_vector.end(), false);
assert(from_vector == "deadc0de00ff");
Using stringstream, sprintf and other such functions inside a loop is simply not good C++. It's horrible for performance, and these kinds of functions usually get called a lot (unless you're just writing some things into the log).
Here's one way of doing it.
Writing directly into the std::string's buffer is discouraged because a specific std::string implementation might behave differently, in which case this will not work; but we avoid one copy of the whole buffer this way:
#include <iostream>
#include <string>
#include <vector>
std::string bytes_to_hex_string(const std::vector<uint8_t> &input)
{
static const char characters[] = "0123456789ABCDEF";
// Zeroes out the buffer unnecessarily, can't be avoided for std::string.
std::string ret(input.size() * 2, 0);
// Hack... Against the rules but avoids copying the whole buffer.
auto buf = const_cast<char *>(ret.data());
for (const auto &oneInputByte : input)
{
*buf++ = characters[oneInputByte >> 4];
*buf++ = characters[oneInputByte & 0x0F];
}
return ret;
}
int main()
{
std::vector<uint8_t> bytes = { 34, 123, 252, 0, 11, 52 };
std::cout << "Bytes to hex string: " << bytes_to_hex_string(bytes) << std::endl;
}
How about using the Boost library, like this (snippet taken from http://theboostcpplibraries.com/boost.algorithm):
#include <boost/algorithm/hex.hpp>
#include <vector>
#include <string>
#include <iterator>
#include <iostream>
using namespace boost::algorithm;
int main()
{
std::vector<char> v{'C', '+', '+'};
hex(v, std::ostream_iterator<char>{std::cout, ""});
std::cout << '\n';
std::string s = "C++";
std::cout << hex(s) << '\n';
std::vector<char> w{'4', '3', '2', 'b', '2', 'b'};
unhex(w, std::ostream_iterator<char>{std::cout, ""});
std::cout << '\n';
std::string t = "432b2b";
std::cout << unhex(t) << '\n';
}