How to change the cout format for pointers - c++

I have to write a C++ program in VS that does the same as a previously written program for Solaris, compiled with gcc.
The following "problem" occurred:
int var1 = 42;
int* var1Ptr = &var1;
cout << "Address of pointer " << var1Ptr << endl;
In the Solaris program this code prints a 0x-prefixed address (0x08FFAFC).
In my VS build it prints 008FFAFC.
Since we only do comparisons within the code this would be fine, but the support team has its own tools which extract data from the logs and look for those 0x-prefixed values. Is there a way to format it like this without adding the 0x prefix every time we write to the logs?
cout << "Maybe this way: " << hex << int(&var1Ptr) << endl;
doesn't have the effect I wanted.

A little helper class:
#include <cstddef>
#include <cstdint>
#include <iomanip>
#include <ostream>

namespace detail {
    template<class T>
    struct debug_pointer
    {
        constexpr static std::size_t pointer_digits()
        {
            return sizeof(void*) * 2;
        }
        static constexpr std::size_t width = pointer_digits();

        std::ostream& operator()(std::ostream& os) const
        {
            std::uintptr_t i = reinterpret_cast<std::uintptr_t>(p);
            return os << "0x" << std::hex << std::setw(width) << std::setfill('0') << i;
        }

        T* p;

        friend std::ostream& operator<<(std::ostream& os, debug_pointer const& dp) {
            return dp(os);
        }
    };
}
offered via a custom manipulator...
template<class T>
auto debug_pointer(T* p)
{
    return detail::debug_pointer<T>{ p };
}
which allows this expression:
int i;
std::cout << debug_pointer(&i) << std::endl;
This will yield either an 8-digit or 16-digit hex pointer value, depending on your architecture (mine is 64-bit):
0x00007fff5fbff3bc
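If you don't need the fixed width, note that the question's int(&var1Ptr) takes the address of the pointer variable (and truncates it to int); a more direct sketch, casting the pointer value itself, could look like this:
#include <cstdint>
#include <iostream>

int main()
{
    int var1 = 42;
    int* var1Ptr = &var1;
    // Cast the pointer value (not the pointer's address) and print it in hex with a 0x prefix.
    std::cout << "Address of pointer 0x" << std::hex
              << reinterpret_cast<std::uintptr_t>(var1Ptr) << std::endl;
}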

Related

C++: Printing with conditional format depending on the base of an int value

In C++, is there any format specifier to print an unsigned in a different base, depending on its value? A format specifier expressing something like this:
using namespace std;
if(x > 0xF000)
cout << hex << "0x" << x;
else
cout << dec << x ;
Because I will have to do this a lot in my current project, I would like to know if C++ provides such a format specifier.
There is no such functionality built into C++. You can use a simple wrapper to accomplish this, though:
struct large_hex {
    unsigned int x;
};

ostream& operator <<(ostream& os, const large_hex& lh) {
    if (lh.x > 0xF000) {
        return os << "0x" << hex << lh.x << dec;
    } else {
        return os << lh.x;
    }
}
Use as cout << large_hex{x}.
If you want to make the threshold configurable you could make it a second field of large_hex or a template parameter (exercise for the reader).
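For instance, a sketch of the template-parameter variant (my example, not part of the original answer):
#include <iostream>

// The threshold becomes a compile-time parameter; 0xF000 keeps the original behaviour.
template <unsigned int Threshold = 0xF000>
struct large_hex_t {
    unsigned int x;
};

template <unsigned int Threshold>
std::ostream& operator<<(std::ostream& os, const large_hex_t<Threshold>& lh) {
    if (lh.x > Threshold) {
        return os << "0x" << std::hex << lh.x << std::dec;
    }
    return os << lh.x;
}

int main() {
    std::cout << large_hex_t<>{0xFF00} << "\n";    // prints 0xff00
    std::cout << large_hex_t<0x100>{0x80} << "\n"; // prints 128
}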

How does the virtual inheritance table work in g++?

I'm trying to get a better understanding of how virtual inheritance works in practice (that is, not according to the standard, but in an actual implementation like g++). The actual question is at the bottom, in bold face.
So, I built myself an inheritance hierarchy which has, among other things, these simple types:
struct A {
    unsigned a;
    unsigned long long u;
    A() : a(0xAAAAAAAA), u(0x1111111111111111ull) {}
    virtual ~A() {}
};

struct B : virtual A {
    unsigned b;
    B() : b(0xBBBBBBBB) {
        a = 0xABABABAB;
    }
};
(In the whole hierarchy I also have a C : virtual A and a BC : B, C, so that the virtual inheritance makes sense.)
I wrote a couple of functions to dump the layout of instances, taking the vtable pointer and printing the first 6 8-byte values (arbitrary, to fit on screen), and then dumping the actual memory of the object. It looks something like this:
Dumping an A object:
actual A object of size 24 at location 0x936010
vtable expected at 0x402310 {
401036, 401068, 434232, 0, 0, 0,
}
1023400000000000aaaaaaaa000000001111111111111111
[--vtable ptr--]
Dumping a B object and where the A object is located, which is indicated by printing a lot of As at the respective position:
actual B object of size 40 at location 0x936030
vtable expected at 0x4022b8 {
4012d2, 40133c, fffffff0, fffffff0, 4023c0, 4012c8,
}
b822400000000000bbbbbbbb00000000e022400000000000abababab000000001111111111111111
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA (offset: 16)
As you can see, the A part of B is located at an offset of 16 bytes from the start of the B object (which could be different if I had instantiated a BC and dyn-casted it to a B*!).
I would have expected the 16 (or at least a 2, due to alignment) to show up somewhere in the table, because the program has to look up the actual location (offset) of A at runtime. So what does the layout really look like?
Edit: The dump is done by calling dump and dumpPositions:
using std::cout;
using std::endl;
template <typename FROM, typename TO, typename STR> void dumpPositions(FROM const* x, STR name) {
    uintptr_t const offset {reinterpret_cast<uintptr_t>(dynamic_cast<TO const*>(x)) - reinterpret_cast<uintptr_t>(x)};
    for (unsigned i = 0; i < sizeof(FROM); i++) {
        if (offset <= i && i < offset+sizeof(TO))
            cout << name << name;
        else
            cout << " ";
    }
    cout << " (offset: " << offset << ")";
    cout << endl;
}

template <typename T> void hexDump(T const* x, size_t const length, bool const comma = false) {
    for (unsigned i = 0; i < length; i++) {
        T const& value {static_cast<T const&>(x[i])};
        cout.width(sizeof(T)*2);
        if (sizeof(T) > 1)
            cout.fill(' ');
        else
            cout.fill('0');
        cout << std::hex << std::right << (unsigned)value << std::dec;
        if (comma)
            cout << ",";
    }
    cout << endl;
}

template <typename FROM, typename STR> void dump(FROM const* x, STR name) {
    cout << name << " object of size " << sizeof(FROM) << " at location " << x << endl;
    uintptr_t const* const* const vtable {reinterpret_cast<uintptr_t const* const*>(x)};
    cout << "vtable expected at " << reinterpret_cast<void const*>(*vtable) << " {" << endl;
    hexDump(*vtable, 6, true);
    cout << "}" << endl;
    hexDump(reinterpret_cast<unsigned char const*>(x), sizeof(FROM));
}
The answer is actually documented here, in the Itanium ABI. In particular, section 2.5 describes the layout of the virtual table.
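To make that concrete: under the Itanium ABI the 16 is stored in the vtable, but at a negative index (the virtual-base offset sits just below the offset-to-top and RTTI entries), so it never shows up among the six function-pointer slots the dump prints. A rough, implementation-specific sketch of peeking at it (my example, assuming the Itanium ABI and a 64-bit g++ build):
#include <cstddef>
#include <iostream>

struct A {
    unsigned a = 0xAAAAAAAA;
    unsigned long long u = 0x1111111111111111ull;
    virtual ~A() {}
};
struct B : virtual A {
    unsigned b = 0xBBBBBBBB;
};

int main()
{
    B obj;
    // The vptr points at the first virtual function slot; walking backwards the
    // entries are the RTTI pointer (-1), offset-to-top (-2) and, for a single
    // virtual base, the virtual-base offset (-3).
    auto vptr = *reinterpret_cast<std::ptrdiff_t const* const*>(&obj);
    std::cout << "offset of virtual base A inside B: " << vptr[-3] << "\n"; // should match the 16 seen in the dump
}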

Custom stream manipulator for streaming integers in any base

I can make an std::ostream object output integer numbers in hex, for example
std::cout << std::hex << 0xabc; //prints `abc`, not the base-10 representation
Is there any manipulator that is universal for all bases? Something like
std::cout << std::base(4) << 20; //I want this to output 110
If there is one, then I have no further question.
If there isn't one, then can I write one? Won't it require me to access private implementation details of std::ostream?
Note that I know I can write a function that takes a number and converts it to a string which is the representation of that number in any base. Or I can use one that already exists. I am asking about custom stream manipulators - are they possible?
You can do something like the following. I have commented the code to explain what each part is doing, but essentially it's this:
Create a "manipulator" struct which stores some data in the stream using xalloc and iword.
Create a custom num_put facet which looks for your manipulator and applies the manipulation.
Here is the code...
Edit: Note that I'm not sure I handled the std::ios_base::internal flag correctly here, as I didn't actually know what it's for.
Edit 2: I found out what std::ios_base::internal is for, and updated the code to handle it.
Edit 3: Added a call to std::locale::global to show how to make all the standard stream classes support the new stream manipulator by default, rather than having to imbue them.
#include <algorithm>
#include <cassert>
#include <climits>
#include <iomanip>
#include <iostream>
#include <locale>
namespace StreamManip {
// Define a base manipulator type; it's what the built-in stream manipulators
// do when they take parameters, only they return an opaque type.
struct BaseManip
{
int mBase;
BaseManip(int base) : mBase(base)
{
assert(base >= 2);
assert(base <= 36);
}
static int getIWord()
{
// call xalloc once to get an index at which we can store data for this
// manipulator.
static int iw = std::ios_base::xalloc();
return iw;
}
void apply(std::ostream& os) const
{
// store the base value in the manipulator.
os.iword(getIWord()) = mBase;
}
};
// We need this so we can apply our custom stream manipulator to the stream.
std::ostream& operator<<(std::ostream& os, const BaseManip& bm)
{
bm.apply(os);
return os;
}
// Convenience function, so we can do std::cout << base(16) << 100;
BaseManip base(int b)
{
return BaseManip(b);
}
// A custom number output facet. These are used by the std::locale code in
// streams. The num_put facet handles the output of numeric values as characters
// in the stream. Here we create one that knows about our custom manipulator.
struct BaseNumPut : std::num_put<char>
{
// These absVal functions are needed as std::abs doesn't support
// unsigned types, but the templated doPutHelper works on signed and
// unsigned types.
unsigned long int absVal(unsigned long int a) const
{
return a;
}
unsigned long long int absVal(unsigned long long int a) const
{
return a;
}
template <class NumType>
NumType absVal(NumType a) const
{
return std::abs(a);
}
template <class NumType>
iter_type doPutHelper(iter_type out, std::ios_base& str, char_type fill, NumType val) const
{
// Read the value stored in our xalloc location.
const int base = str.iword(BaseManip::getIWord());
// we only want this manipulator to affect the next numeric value, so
// reset its value.
str.iword(BaseManip::getIWord()) = 0;
// normal number output, use the built in putter.
if (base == 0 || base == 10)
{
return std::num_put<char>::do_put(out, str, fill, val);
}
// We want to convert the base, so do it and output.
// Base conversion code lifted from Nawaz's answer
int digits[CHAR_BIT * sizeof(NumType)];
int i = 0;
NumType tempVal = absVal(val);
while (tempVal != 0)
{
digits[i++] = tempVal % base;
tempVal /= base;
}
// Get the format flags.
const std::ios_base::fmtflags flags = str.flags();
// Add the padding if needed (i.e. they have used std::setw).
// Only applies if we are right aligned, or none specified.
if (flags & std::ios_base::right ||
!(flags & std::ios_base::internal || flags & std::ios_base::left))
{
std::fill_n(out, str.width() - i, fill);
}
if (val < 0)
{
*out++ = '-';
}
// Handle the internal adjustment flag.
if (flags & std::ios_base::internal)
{
std::fill_n(out, str.width() - i, fill);
}
char digitCharLc[] = "0123456789abcdefghijklmnopqrstuvwxyz";
char digitCharUc[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const char *digitChar = (str.flags() & std::ios_base::uppercase)
? digitCharUc
: digitCharLc;
while (i)
{
// out is an iterator that accepts characters
*out++ = digitChar[digits[--i]];
}
// Add the padding if needed (i.e. they have used std::setw).
// Only applies if we are left aligned.
if (str.flags() & std::ios_base::left)
{
std::fill_n(out, str.width() - i, fill);
}
// clear the width
str.width(0);
return out;
}
// Overrides for the virtual do_put member functions.
iter_type do_put(iter_type out, std::ios_base& str, char_type fill, long val) const
{
return doPutHelper(out, str, fill, val);
}
iter_type do_put(iter_type out, std::ios_base& str, char_type fill, unsigned long val) const
{
return doPutHelper(out, str, fill, val);
}
};
} // namespace StreamManip
int main()
{
// Create a locale that uses our custom num_put.
std::locale myLocale(std::locale(), new StreamManip::BaseNumPut());
// Set our locale as the global one, used by default in all streams created
// from here on in. Any streams created in this app will now support the
// StreamManip::base modifier.
std::locale::global(myLocale);
// Imbue std::cout and std::cerr, so they use our custom locale.
std::cout.imbue(myLocale);
std::cerr.imbue(myLocale);
// Output some stuff.
std::cout << std::setw(50) << StreamManip::base(2) << std::internal << -255 << std::endl;
std::cout << StreamManip::base(4) << 255 << std::endl;
std::cout << StreamManip::base(8) << 255 << std::endl;
std::cout << StreamManip::base(10) << 255 << std::endl;
std::cout << std::uppercase << StreamManip::base(16) << 255 << std::endl;
return 0;
}
Custom manipulators are indeed possible. See for example this question. I'm not familiar with any specific one for universal bases.
You really have two separate problems. The one I think you're asking about is entirely solvable. The other, unfortunately, is rather less so.
Allocating and using some space in the stream to hold some stream state is a problem that was foreseen. Streams have a couple of members (xalloc, iword, pword) that let you allocate a spot in an array in the stream, and read/write data there. As such, the stream manipulator itself is entirely possible. You'd basically use xalloc to allocate a spot in the stream's array to hold the current base, to be used by the insertion operator when it converts a number.
The problem for which I don't see a solution is rather simpler: the standard library already provides an operator<< to insert an int into a stream, and it obviously does not know about your hypothetical data to hold the base for a conversion. You can't overload that, because it would need exactly the same signature as the existing one, so your overload would be ambiguous.
The overloads for int, short, etc., however, are overloaded member functions. I guess if you wanted to badly enough, you could get by with using a template to overload operator<<. If I recall correctly, that would be preferred over even an exact match with a non-template function as the library provides. You'd still be breaking the rules, but if you put such a template in namespace std, there's at least some chance that it would work.
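For what it's worth, here is a minimal sketch of the xalloc/iword mechanism described above (the names setbase_hint and base_index are mine). It stores and retrieves the base, but, as explained, the built-in operator<< for int will not consult it:
#include <iostream>

// xalloc() reserves a fresh slot; calling it once via a function-local static
// gives this manipulator its own private slot in every stream.
static int base_index()
{
    static const int idx = std::ios_base::xalloc();
    return idx;
}

// The "manipulator": stores the requested base in the stream's iword array.
struct setbase_hint { int base; };

std::ostream& operator<<(std::ostream& os, setbase_hint h)
{
    os.iword(base_index()) = h.base;
    return os;
}

int main()
{
    std::cout << setbase_hint{4};
    // The stored value is visible to anything with access to the stream, e.g. a
    // custom num_put facet, but plain integer insertion still ignores it.
    std::cout << "stored base: " << std::cout.iword(base_index()) << "\n"; // prints 4
}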
I attempted to write some code, and it works with some limitations. It's not a stream manipulator as such, as that is simply not possible, as pointed out by others (especially @Jerry).
Here is my code:
struct base
{
    mutable std::ostream *_out;
    int _value;

    base(int value = 10) : _value(value) {}

    template<typename T>
    const base& operator << (const T & data) const
    {
        *_out << data;
        return *this;
    }

    const base& operator << (const int & data) const
    {
        switch(_value)
        {
            case 2:
            case 4:
            case 8:  return print(data);
            case 16: *_out << std::hex << data; break;
            default: *_out << data;
        }
        return *this;
    }

    const base & print(int data) const
    {
        int digits[CHAR_BIT * sizeof(int)], i = 0;
        while(data)
        {
            digits[i++] = data % _value;
            data /= _value;
        }
        while(i) *_out << digits[--i];
        return *this;
    }

    friend const base& operator <<(std::ostream& out, const base& b)
    {
        b._out = &out;
        return b;
    }
};
And this is the test code:
int main() {
    std::cout << base(2)  << 255 << ", " << 54 << ", " << 20 << "\n";
    std::cout << base(4)  << 255 << ", " << 54 << ", " << 20 << "\n";
    std::cout << base(8)  << 255 << ", " << 54 << ", " << 20 << "\n";
    std::cout << base(16) << 255 << ", " << 54 << ", " << 20 << "\n";
}
Output:
11111111, 110110, 10100
3333, 312, 110
377, 66, 24
ff, 36, 14
Online demo : http://www.ideone.com/BWhW5
Limitations:
The base cannot be changed twice. So this would be an error:
std::cout << base(4) << 879 << base(8) << 9878 ; //error
Other manipulators cannot be used after base is used:
std::cout << base(4) << 879 << std::hex << 9878 ; //error
std::cout << std::hex << 879 << base(8) << 9878 ; //ok
std::endl cannot be used after base is used:
std::cout << base(4) << 879 << std::endl ; //error
//that is why I used "\n" in the test code.
I don't think that syntax is possible for arbitrary streams (using a manipulator; @gigantt linked an answer that shows some alternative non-manipulator solutions). The standard manipulators merely set options that are implemented inside the stream.
OTOH, you could certainly make this syntax work:
std::cout << base(4, 20);
Where base is an object that provides a stream insertion operator (no need to return a temporary string).
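A minimal sketch of that idea (my example; the names are hypothetical):
#include <climits>
#include <iostream>

// base(radix, value): converts on insertion, no temporary string needed.
struct based_value
{
    int radix;
    unsigned value;
};

based_value base(int radix, unsigned value) { return based_value{radix, value}; }

std::ostream& operator<<(std::ostream& os, const based_value& b)
{
    // Collect digits least-significant first, then emit them in reverse.
    char digits[CHAR_BIT * sizeof(unsigned)];
    int i = 0;
    unsigned v = b.value;
    do {
        digits[i++] = "0123456789abcdefghijklmnopqrstuvwxyz"[v % b.radix];
        v /= b.radix;
    } while (v != 0);
    while (i) os << digits[--i];
    return os;
}

int main()
{
    std::cout << base(4, 20) << "\n"; // prints 110
}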

print time on each call to std::cout

How would someone do that?
For example, if I write:
std::cout << "something";
then it should print the time before "something".
Make your own stream for that :) This should work:
class TimedStream {
public:
    template<typename T>
    TimedStream& operator<<(const T& t) {
        std::cout << getSomeFormattedTimeAsString() << t << std::endl;
        return *this;
    }
};

TimedStream timed_cout;

void func() {
    timed_cout << "123";
}
You'd be able to use this class for every type for which std::cout << obj can be done, so no further work is needed.
But please note that the time will be written before every <<, so you cannot chain them easily. Another solution, with an explicit timestamp, is:
class TimestampDummy {} timestamp;

ostream& operator<<(ostream& o, TimestampDummy& t) {
    return o << yourFancyFormattedTimestamp();
}

void func() {
    cout << timestamp << "123 " << 456 << endl;
}
You could use a simple function that prints the timestamp and then returns the stream for further printing:
std::ostream& tcout() {
    // Todo: get a timestamp in the desired format
    return std::cout << timestamp << ": ";
}
You would then call this function instead of using std::cout directly, whenever you want a timestamp inserted:
tcout() << "Hello" << std::endl;
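One way to fill in the timestamp part (a sketch using <chrono> and std::put_time; the format string is just an example):
#include <chrono>
#include <ctime>
#include <iomanip>
#include <iostream>

// Returns std::cout after inserting the current local time as HH:MM:SS.
std::ostream& tcout()
{
    std::time_t now = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
    return std::cout << std::put_time(std::localtime(&now), "%H:%M:%S") << ": ";
}

int main()
{
    tcout() << "Hello" << std::endl;
}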
ostream& printTimeWithString(ostream& out, const string& value)
{
    out << currentTime() << ' ' << value << std::endl;
    return out;
}
Generate current time using your favourite Boost.DateTime output format.
This looks like homework. You want something in the line of:
std::cout << time << "something";
Find a way to retrieve the time on your system, using a system call.
Then you'll have to implement a << operator for your system-dependent time class/struct.

how do I print an unsigned char as hex in c++ using ostream?

I want to work with unsigned 8-bit variables in C++. Either unsigned char or uint8_t does the trick as far as the arithmetic is concerned (which is expected, since AFAIK uint8_t is just an alias for unsigned char, or so the debugger presents it).
The problem is that if I print out the variables using ostream in C++ it treats it as char. If I have:
unsigned char a = 0;
unsigned char b = 0xff;
cout << "a is " << hex << a <<"; b is " << hex << b << endl;
then the output is:
a is ^#; b is 377
instead of
a is 0; b is ff
I tried using uint8_t, but as I mentioned before, that's typedef'ed to unsigned char, so it does the same. How can I print my variables correctly?
Edit: I do this in many places throughout my code. Is there any way I can do this without casting to int each time I want to print?
Use:
cout << "a is " << hex << (int) a <<"; b is " << hex << (int) b << endl;
And if you want padding with leading zeros then:
#include <iomanip>
...
cout << "a is " << setw(2) << setfill('0') << hex << (int) a ;
As we are using C-style casts, why not go the whole hog with terminal C++ badness and use a macro!
#define HEX( x ) \
    setw(2) << setfill('0') << hex << (int)( x )
you can then say
cout << "a is " << HEX( a );
Edit: Having said that, MartinStettner's solution is much nicer!
I would suggest using the following technique:
struct HexCharStruct
{
unsigned char c;
HexCharStruct(unsigned char _c) : c(_c) { }
};
inline std::ostream& operator<<(std::ostream& o, const HexCharStruct& hs)
{
return (o << std::hex << (int)hs.c);
}
inline HexCharStruct hex(unsigned char _c)
{
return HexCharStruct(_c);
}
int main()
{
char a = 131;
std::cout << hex(a) << std::endl;
}
It's short to write, has the same efficiency as the original solution and it lets you choose to use the "original" character output. And it's type-safe (not using "evil" macros :-))
You can read more about this at http://cpp.indi.frih.net/blog/2014/09/tippet-printing-numeric-values-for-chars-and-uint8_t/ and http://cpp.indi.frih.net/blog/2014/08/code-critique-stack-overflow-posters-cant-print-the-numeric-value-of-a-char/. I am only posting this because it has become clear that the author of the above articles does not intend to.
The simplest and most correct technique to print a char as hex is:
unsigned char a = 0;
unsigned char b = 0xff;
// I only include resetting the io flags because so many answers on this page
// call functions where flags are changed and leave no way to return them to
// the state they were in before the function call.
auto flags = cout.flags();
cout << "a is " << hex << +a << "; b is " << +b << endl;
cout.flags(flags);
The Reader's Digest version of how this works is that the unary + operator forces a no-op type conversion to an int with the correct signedness. So, an unsigned char converts to unsigned int, a signed char converts to int, and a char converts to either unsigned int or int depending on whether char is signed or unsigned on your platform (it comes as a shock to many that char is special and not specified as either signed or unsigned).
The only negative of this technique is that it may not be obvious what is happening to someone who is unfamiliar with it. However, I think it is better to use the technique that is correct and teach others about it, rather than doing something that is incorrect but more immediately clear.
Well, this works for me:
std::cout << std::hex << (0xFF & a) << std::endl;
If you just cast to (int) as suggested, it might add 1s to the left of a if its most significant bit is 1. This binary AND operation guarantees that the upper bits of the output are filled with 0s and also promotes the value to int, forcing cout to print it as a number in hex.
I hope this helps.
In C++20 you'll be able to use std::format to do this:
std::cout << std::format("a is {:x}; b is {:x}\n", a, b);
Output:
a is 0; b is ff
In the meantime you can use the {fmt} library, which std::format is based on. {fmt} also provides the print function that makes this even easier and more efficient (godbolt):
fmt::print("a is {:x}; b is {:x}\n", a, b);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
Hm, it seems I re-invented the wheel yesterday... But hey, at least it's a generic wheel this time :) chars are printed with two hex digits, shorts with 4 hex digits and so on.
template<typename T>
struct hex_t
{
    T x;
};

template<typename T>
hex_t<T> hex(T x)
{
    hex_t<T> h = {x};
    return h;
}

template<typename T>
std::ostream& operator<<(std::ostream& os, hex_t<T> h)
{
    char buffer[2 * sizeof(T)];
    for (auto i = sizeof buffer; i--; )
    {
        buffer[i] = "0123456789ABCDEF"[h.x & 15];
        h.x >>= 4;
    }
    os.write(buffer, sizeof buffer);
    return os;
}
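Usage could look like this (my example, assuming the hex_t/hex/operator<< definitions above are in scope):
#include <iostream>

int main()
{
    unsigned char c = 0x3C;
    unsigned short s = 0x1234;
    std::cout << hex(c) << " " << hex(s) << std::endl; // prints 3C 1234
}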
I think TrungTN's and anon's answers are okay, but MartinStettner's way of implementing the hex() function is not really simple, and a bit obscure, considering hex << (int)mychar is already a workaround.
Here is my solution to make the "<<" operator easier:
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

using namespace std;

string uchar2hex(unsigned char inchar)
{
    ostringstream oss (ostringstream::out);
    oss << setw(2) << setfill('0') << hex << (int)(inchar);
    return oss.str();
}

int main()
{
    unsigned char a = 131;
    std::cout << uchar2hex(a) << std::endl;
}
It's just not worth implementing a stream operator :-)
I think we are missing an explanation of how these type conversions work.
Whether char is signed or unsigned is platform dependent. On x86, char is equivalent to signed char.
When an integral type (char, short, int, long) is converted to a larger-capacity type, the conversion is made by adding zeros to the left in the case of unsigned types, and by sign extension for signed ones. Sign extension consists of replicating the most significant (leftmost) bit of the original number to the left until we reach the bit size of the target type.
Hence if I am on a system where char is signed by default and I do this:
char a = 0xF0; // Equivalent to the binary: 11110000
std::cout << std::hex << static_cast<int>(a);
We would obtain F...F0, since the leading 1 bit has been extended.
If we want to make sure that we only print F0 on any system, we have to make an additional intermediate cast to unsigned char, so that zeros are added instead and, since they are not significant for an integer with only 8 bits, not printed:
char a = 0xF0; // Equivalent to the binary: 11110000
std::cout << std::hex << static_cast<int>(static_cast<unsigned char>(a));
This produces F0
I'd do it like MartinStettner but add an extra parameter for number of digits:
inline HexStruct hex(long n, int w=2)
{
return HexStruct(n, w);
}
// Rest of implementation is left as an exercise for the reader
So you have two digits by default but can set four, eight, or whatever if you want to.
eg.
int main()
{
    short a = 3142;
    std::cout << hex(a, 4) << std::endl;
}
It may seem like overkill but as Bjarne said: "libraries should be easy to use, not easy to write".
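One possible HexStruct along those lines (my sketch of the exercise, not the author's implementation):
#include <iomanip>
#include <iostream>

// Hypothetical HexStruct: carries the value and the desired number of digits.
struct HexStruct
{
    long value;
    int width;
    HexStruct(long v, int w) : value(v), width(w) {}
};

inline std::ostream& operator<<(std::ostream& os, const HexStruct& h)
{
    return os << std::hex << std::setw(h.width) << std::setfill('0') << h.value << std::dec;
}

inline HexStruct hex(long n, int w = 2)
{
    return HexStruct(n, w);
}

int main()
{
    short a = 3142;
    std::cout << hex(a, 4) << std::endl; // prints 0c46
}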
I would suggest:
std::cout << setbase(16) << 32;
Taken from:
http://www.cprogramming.com/tutorial/iomanip.html
You can try the following code:
unsigned char a = 0;
unsigned char b = 0xff;
cout << hex << "a is " << int(a) << "; b is " << int(b) << endl;
cout << hex
<< "a is " << setfill('0') << setw(2) << int(a)
<< "; b is " << setfill('0') << setw(2) << int(b)
<< endl;
cout << hex << uppercase
<< "a is " << setfill('0') << setw(2) << int(a)
<< "; b is " << setfill('0') << setw(2) << int(b)
<< endl;
Output:
a is 0; b is ff
a is 00; b is ff
a is 00; b is FF
I use the following on win32/linux(32/64 bit):
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

template <typename T>
std::string HexToString(T uval)
{
    std::stringstream ss;
    ss << "0x" << std::setw(sizeof(uval) * 2) << std::setfill('0') << std::hex << +uval;
    return ss.str();
}
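For example (my usage note, assuming the template above):
#include <cstdint>
#include <iostream>

int main()
{
    std::cout << HexToString(std::uint16_t(0xBEEF)) << std::endl; // prints 0xbeef
    std::cout << HexToString(std::uint8_t(0x0F)) << std::endl;    // prints 0x0f; unary + avoids char output
}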
I realize this is an old question, but it's also a top Google result when searching for a solution to a very similar problem I had, which is the desire to implement arbitrary integer-to-hex-string conversions within a template class. My end goal was actually a Gtk::Entry subclass template that would allow editing various integer widths in hex, but that's beside the point.
This combines the unary operator+ trick with std::make_unsigned from <type_traits> to prevent the problem of sign-extending negative int8_t or signed char values that occurs in this answer.
Anyway, I believe this is more succinct than any other generic solution. It should work for any signed or unsigned integer types, and throws a compile-time error if you attempt to instantiate the function with any non-integer types.
template <
typename T,
typename = typename std::enable_if<std::is_integral<T>::value, T>::type
>
std::string toHexString(const T v)
{
std::ostringstream oss;
oss << std::hex << +((typename std::make_unsigned<T>::type)v);
return oss.str();
}
Some example usage:
int main(int argc, char**argv)
{
int16_t val;
// Prints 'ff' instead of "ffffffff". Unlike the other answer using the '+'
// operator to extend sizeof(char) int types to int/unsigned int
std::cout << toHexString(int8_t(-1)) << std::endl;
// Works with any integer type
std::cout << toHexString(int16_t(0xCAFE)) << std::endl;
// You can use setw and setfill with strings too -OR-
// the toHexString could easily have parameters added to do that.
std::cout << std::setw(8) << std::setfill('0') <<
toHexString(int(100)) << std::endl;
return 0;
}
Update: Alternatively, if you don't like the idea of the ostringstream being used, you can combine the templating and unary operator trick with the accepted answer's struct-based solution for the following. Note that here, I modified the template by removing the check for integer types. The make_unsigned usage might be enough for compile time type safety guarantees.
template <typename T>
struct HexValue
{
T value;
HexValue(T _v) : value(_v) { }
};
template <typename T>
inline std::ostream& operator<<(std::ostream& o, const HexValue<T>& hs)
{
return o << std::hex << +((typename std::make_unsigned<T>::type) hs.value);
}
template <typename T>
const HexValue<T> toHex(const T val)
{
return HexValue<T>(val);
}
// Usage:
std::cout << toHex(int8_t(-1)) << std::endl;
If you're using fill characters and signed chars, be careful not to append unwanted 'F's:
char out_character = 0xBE;
cout << setfill('0') << setw(2) << hex << (unsigned short)out_character;
prints: ffbe
Using int instead of short results in ffffffbe.
To prevent the unwanted F's you can easily mask them out:
char out_character = 0xBE;
cout << setfill('0') << setw(2) << hex << ((unsigned short)out_character & 0xFF);
I'd like to post my re-re-invented version based on @FredOverflow's. I made the following modifications.
Fixes:
The RHS of operator<< should be of const reference type. In @FredOverflow's code, h.x >>= 4 changes the output h, which is surprisingly not compatible with the standard library, and type T is required to be copy-constructible.
Assume only that CHAR_BIT is a multiple of 4. @FredOverflow's code assumes char is 8 bits, which is not always true; in some implementations on DSPs, in particular, it is not uncommon for char to be 16, 24, or 32 bits.
Improvements:
Support all other standard library manipulators available for integral types, e.g. std::uppercase. Because formatted output is used in _print_byte, standard library manipulators are still available.
Add hex_sep to print separated bytes (note that in C/C++ a 'byte' is by definition a storage unit with the size of char). Add a template parameter Sep and instantiate _Hex<T, false> and _Hex<T, true> in hex and hex_sep respectively.
Avoid binary code bloat. The function _print_byte is extracted out of operator<<, with a function parameter size, to avoid instantiation for different Size.
More on binary code bloat:
As mentioned in improvement 3, no matter how extensively hex and hex_sep are used, only two copies of the (nearly) duplicated function will exist in the binary code: _print_byte<true> and _print_byte<false>. And you might realize that this duplication can also be eliminated using exactly the same approach: add a function parameter sep. Yes, but doing so requires a runtime if(sep). Since I want a common library utility which may be used extensively in the program, I compromised on the duplication rather than the runtime overhead. I achieved this by using a compile-time if, C++11 std::conditional; the overhead of the function call can hopefully be optimized away by inlining.
hex_print.h:
namespace Hex
{
    typedef unsigned char Byte;

    template <typename T, bool Sep> struct _Hex
    {
        _Hex(const T& t) : val(t)
        {}
        const T& val;
    };

    template <typename T, bool Sep>
    std::ostream& operator<<(std::ostream& os, const _Hex<T, Sep>& h);
}

template <typename T> Hex::_Hex<T, false> hex(const T& x)
{ return Hex::_Hex<T, false>(x); }

template <typename T> Hex::_Hex<T, true> hex_sep(const T& x)
{ return Hex::_Hex<T, true>(x); }
#include "misc.tcc"
hex_print.tcc:
namespace Hex
{
    struct Put_space {
        static inline void run(std::ostream& os) { os << ' '; }
    };
    struct No_op {
        static inline void run(std::ostream& os) {}
    };

#if (CHAR_BIT & 3) // can use C++11 static_assert, but no real advantage here
#error "hex print utility need CHAR_BIT to be a multiple of 4"
#endif

    static const size_t width = CHAR_BIT >> 2;

    template <bool Sep>
    std::ostream& _print_byte(std::ostream& os, const void* ptr, const size_t size)
    {
        using namespace std;
        auto pbyte = reinterpret_cast<const Byte*>(ptr);
        os << hex << setfill('0');
        for (int i = size; --i >= 0; )
        {
            os << setw(width) << static_cast<short>(pbyte[i]);
            conditional<Sep, Put_space, No_op>::type::run(os);
        }
        return os << setfill(' ') << dec;
    }

    template <typename T, bool Sep>
    inline std::ostream& operator<<(std::ostream& os, const _Hex<T, Sep>& h)
    {
        return _print_byte<Sep>(os, &h.val, sizeof(T));
    }
}
test:
struct { int x; } output = {0xdeadbeef};
cout << hex_sep(output) << std::uppercase << hex(output) << endl;
output:
de ad be ef DEADBEEF
This will also work:
std::ostream& operator<< (std::ostream& o, unsigned char c)
{
    return o << (int)c;
}

int main()
{
    unsigned char a = 06;
    unsigned char b = 0xff;
    std::cout << "a is " << std::hex << a << "; b is " << std::hex << b << std::endl;
    return 0;
}
I have used it this way:
char strInput[] = "yourchardata";
char chHex[3] = "";
int nLength = strlen(strInput);
char* chResut = new char[(nLength*2) + 1];
memset(chResut, 0, (nLength*2) + 1);

for (int i = 0; i < nLength; i++)
{
    sprintf(chHex, "%02X", strInput[i] & 0x00FF);
    memcpy(&(chResut[i*2]), chHex, 2);
}

printf("\n%s", chResut);

delete[] chResut;
chResut = NULL;