I'm following a college course about operating systems, and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc. Today we learned how signed/unsigned numbers are stored in memory using two's complement (~number + 1).
We have a couple of exercises to do on paper and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
and we need to show the binary representation in memory of a, b and c.
I've done it on paper and it gives me the following results (all the binary representations in memory of the numbers after the two's complement):
a = 00111010 (it's a char, so 1 byte)
b = 00001000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
Is there a way to verify my answer? Is there a standard way in C++ to show the binary representation in memory of a number, or do I have to code each step myself (calculate the two's complement and then convert to binary)? I know the latter wouldn't take long, but I'm curious as to whether there is a standard way to do so.
The easiest way is probably to create an std::bitset representing the value, then stream that to cout.
#include <bitset>
#include <iostream>
...
char a = -58;
std::bitset<8> x(a);
std::cout << x << '\n'; // prints 11000110
short c = -315;
std::bitset<16> y(c);
std::cout << y << '\n'; // prints 1111111011000101
Use on-the-fly conversion to std::bitset. No temporary variables, no loops, no functions, no macros.
#include <iostream>
#include <bitset>
int main() {
int a = -58, b = a>>3, c = -315;
std::cout << "a = " << std::bitset<8>(a) << std::endl;
std::cout << "b = " << std::bitset<8>(b) << std::endl;
std::cout << "c = " << std::bitset<16>(c) << std::endl;
}
Prints:
a = 11000110
b = 11111000
c = 1111111011000101
In C++20 you can use std::format to do this:
#include <format>   // C++20
#include <iostream>
unsigned char a = -58;
std::cout << std::format("{:b}", a);
Output:
11000110
On older systems you can use the {fmt} library, which std::format is based on. {fmt} also provides the print function, which makes this even easier and more efficient:
#include <fmt/format.h>
unsigned char a = -58;
fmt::print("{:b}", a);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
If you want to display the bit representation of any object, not just an integer, remember to reinterpret it as a char array first; then you can print the contents of that array as hex, or even as binary (via bitset):
#include <iostream>
#include <bitset>
#include <climits>
template<typename T>
void show_binrep(const T& a)
{
const char* beg = reinterpret_cast<const char*>(&a);
const char* end = beg + sizeof(a);
while(beg != end)
std::cout << std::bitset<CHAR_BIT>(*beg++) << ' ';
std::cout << '\n';
}
int main()
{
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
show_binrep(a);
show_binrep(b);
show_binrep(c);
float f = 3.14f;
show_binrep(f);
}
Note that most common systems are little-endian, so the output of show_binrep(c) is not the 11111110 11000101 you expect, because that's not how it's stored in memory. If you're looking for the value representation in binary, then a simple cout << bitset<16>(c) works.
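For instance, on a little-endian machine the two approaches would print c = -315 like this (a sketch; the exact byte order depends on your platform):
show_binrep(c); // memory order, least significant byte first: 11000101 11111110
std::cout << std::bitset<16>(c) << '\n'; // value order: 1111111011000101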
Is there a standard way in C++ to show the binary representation in memory of a number [...]?
No. There's no std::bin like std::hex or std::dec, but it's not hard to output a number in binary yourself:
You output the left-most bit by masking all the others, left-shift the value, and repeat for all the bits you have.
(The number of bits in a type is sizeof(T) * CHAR_BIT.)
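A minimal sketch of that mask-and-shift loop (print_bits is a name chosen here for illustration; it assumes an unsigned integer type):
#include <iostream>
#include <climits>
template<typename T>
void print_bits(T v) // assumes T is an unsigned integer type
{
const T top = T(1) << (sizeof(T) * CHAR_BIT - 1); // mask that isolates the left-most bit
for (unsigned i = 0; i < sizeof(T) * CHAR_BIT; ++i) {
std::cout << ((v & top) ? '1' : '0'); // output the left-most bit
v <<= 1; // shift the next bit into the top position
}
std::cout << '\n';
}
// print_bits(static_cast<unsigned short>(-315)); // prints 1111111011000101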
Similar to what has already been posted, just using bit-shift and mask to get each bit; usable for any type, being a template (the number of bits in a byte comes from CHAR_BIT in <climits>, since it is not 8 on every platform).
#include <iostream>
#include <climits>
template<typename T>
void printBin(const T& t){
size_t nBytes = sizeof(T);
const unsigned char* rawPtr = (const unsigned char*)(&t);
for(size_t byte = 0; byte < nBytes; byte++){ // bytes in memory order (low address first)
for(size_t bit = CHAR_BIT; bit-- > 0; ){ // most significant bit of each byte first
std::cout << ((rawPtr[byte] >> bit) & 1);
}
}
std::cout << std::endl;
}
int main(void){
for(int i=0; i<50; i++){
std::cout<<i<<": ";
printBin(i);
}
}
Reusable function:
#include <bitset>
#include <climits>
#include <sstream>
#include <string>
template<typename T>
std::string toBinaryString(const T& x)
{
std::stringstream ss;
ss << std::bitset<sizeof(T) * CHAR_BIT>(x);
return ss.str();
}
Usage:
#include <cstdint>
#include <iostream>
int main(){
uint16_t x = 8;
std::cout << toBinaryString(x); // prints 0000000000001000
}
This works with all kinds of integers.
#include <iostream>
#include <string>
using namespace std;
string show_binary(unsigned int u, int num_of_bits);
int main()
{
cout << show_binary(128, 8) << endl; // should print 10000000
cout << show_binary(128, 5) << endl; // should print 00000
cout << show_binary(128, 10) << endl; // should print 0010000000
return 0;
}
string show_binary(unsigned int u, int num_of_bits)
{
string a = "";
u &= (1u << num_of_bits) - 1; // keep only the low num_of_bits bits (assumes 1 <= num_of_bits <= 31)
unsigned int t = 1u << (num_of_bits - 1); // t starts at the highest power of 2 that fits in num_of_bits
for( ; t > 0; t = t/2) // t iterates through powers of 2
if(u >= t){ // check if u can be represented by current value of t
u -= t;
a += "1"; // if so, add a 1
}
else {
a += "0"; // if not, add a 0
}
return a ; // returns string
}
Using the std::bitset answers and convenience templates:
#include <iostream>
#include <bitset>
#include <climits>
template<typename T>
struct BinaryForm {
BinaryForm(const T& v) : _bs(v) {}
const std::bitset<sizeof(T)*CHAR_BIT> _bs;
};
template<typename T>
inline std::ostream& operator<<(std::ostream& os, const BinaryForm<T>& bf) {
return os << bf._bs;
}
Using it like this:
auto c = 'A';
std::cout << "c: " << c << " binary: " << BinaryForm{c} << std::endl;
unsigned x = 1234;
std::cout << "x: " << x << " binary: " << BinaryForm{x} << std::endl;
int64_t z { -1024 };
std::cout << "z: " << z << " binary: " << BinaryForm{z} << std::endl;
Generates output:
c: A binary: 01000001
x: 1234 binary: 00000000000000000000010011010010
z: -1024 binary: 1111111111111111111111111111111111111111111111111111110000000000
If you are using an older C++ version, you can use this snippet:
#include <iostream>
#include <string>
using namespace std;
template<typename T>
string toBinary(const T& t)
{
string s = "";
int n = sizeof(T)*8;
for(int i = n-1; i >= 0; i--)
{
s += ((t >> i) & 1) ? "1" : "0"; // shift-then-mask avoids overflowing the 1 << i constant for types wider than int
}
return s;
}
int main()
{
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
cout << "a = " << a << " => " << toBinary(a) << endl;
cout << "b = " << b << " => " << toBinary(b) << endl;
cout << "c = " << c << " => " << toBinary(c) << endl;
}
a = => 11000110
b = => 11111000
c = -315 => 1111111011000101
I have had this problem when playing competitive coding games online. Here is a solution that is quick to implement and fairly intuitive. It also avoids outputting leading zeros and does not rely on <bitset>.
unsigned r = 42; // value to print, chosen for illustration; unsigned, because an arithmetic right-shift of a negative value would loop forever
std::string s;
do {
s = std::to_string(r & 1) + s; // prepend the lowest bit
} while ( r >>= 1 );
std::cout << s;
You should note, however, that this solution will increase your runtime, so if you are competing on performance, or not competing at all, you should use one of the other solutions on this page.
Here is one way to get at the bit representation of a number (note that this pointer cast formally violates strict aliasing; see the std::memcpy variant below for the well-defined alternative):
unsigned int i = *(unsigned int*) &x;
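A sketch of the std::memcpy alternative, assuming x is a float (any trivially copyable type of the same size works):
#include <cstring>
float x = 3.14f;
unsigned int i;
static_assert(sizeof i == sizeof x, "sizes must match");
std::memcpy(&i, &x, sizeof i); // copies the object representation of x into i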
Is this what you're looking for?
std::cout << std::hex << val << std::endl;
Related
I have a bitset in which I need to store a number of randomly generated integers (their bit representations, of course). So, the thing is that I am confused about how to do that.
i.e. suppose that I generate the integers (all unsigned int) 8, 15, 20, one at a time. How can I store the recently generated integer in my existing bitset?
Say that I start by generating "8" and store it in the bitset, then I generate "15" and store it in the bitset.
I don't know or don't understand how to store those values within the bitset.
Note: I know the size of the bitset in advance; it is based on the number of integers that I am going to generate, which I also know. So, at the end, what I need is a bitset with all the bits set matching the bits of all the generated integers.
I'd appreciate your help.
How can I store the recently generated integer in my existing bitset?
You can generate a temporary bitset from the integer and then assign values between the two bitsets.
Example program:
#include <iostream>
#include <bitset>
#include <cstdlib>
int main()
{
const int size = sizeof(int)*8;
std::bitset<2*size> res;
std::bitset<size> res1(rand());
std::bitset<size> res2(rand());
for ( size_t i = 0; i < size; ++i )
{
res[i] = res1[i];
res[size+i] = res2[i];
}
std::cout << "res1: " << res1 << std::endl;
std::cout << "res2: " << res2 << std::endl;
std::cout << "res: " << res << std::endl;
return 0;
}
Output:
res1: 01101011100010110100010101100111
res2: 00110010011110110010001111000110
res: 0011001001111011001000111100011001101011100010110100010101100111
Update
A function to set the bitset values given an integer can be used to avoid the cost of creating temporary bitsets.
#include <iostream>
#include <bitset>
#include <cstdlib>
#include <climits>
const int size = sizeof(int)*8;
void setBitsetValue(std::bitset<2*size>& res,
int num,
size_t bitsetIndex,
size_t numIndex)
{
if ( numIndex < size )
{
res[bitsetIndex] = (num >> numIndex) & 0x1;
setBitsetValue(res, num, bitsetIndex+1, numIndex+1);
}
}
int main()
{
std::bitset<2*size> res;
int num1 = rand()%INT_MAX;
int num2 = rand()%INT_MAX;
std::bitset<size> res1(num1);
std::bitset<size> res2(num2);
std::cout << "res1: " << res1 << std::endl;
std::cout << "res2: " << res2 << std::endl;
setBitsetValue(res, num1, 0, 0);
setBitsetValue(res, num2, size, 0);
std::cout << "res: " << res << std::endl;
return 0;
}
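For comparison, an iterative equivalent of setBitsetValue might look like this (a sketch reusing the definitions above; setBitsetValueIter is a hypothetical name):
void setBitsetValueIter(std::bitset<2*size>& res, int num, size_t bitsetIndex)
{
for (int numIndex = 0; numIndex < size; ++numIndex) // one bit per loop iteration instead of one recursive call
res[bitsetIndex + numIndex] = (num >> numIndex) & 0x1;
}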
I have this array: BYTE set[6] = { 0xA8,0x12,0x84,0x03,0x00,0x00 }, and I need to insert the value int Value = 1200; into the last 4 bytes. Practically, to convert from int to hex and then write it inside the array...
Is this possible?
I already have the BitConverter::GetBytes function, but that's not enough.
Thank you,
To answer the original question: sure you can.
As long as sizeof(int) == 4 and sizeof(BYTE) == 1 on your platform.
But I'm not sure what you mean by "converting int to hex". If you want a hex string representation, you'll be much better off just using one of the standard methods of doing it.
For example, on the last line I use std::hex to print the number as hex.
Here is a solution to what you've been asking for, and a little more (live example: http://codepad.org/rsmzngUL):
#include <iostream>
using namespace std;
int main() {
const int value = 1200;
unsigned char set[] = { 0xA8,0x12,0x84,0x03,0x00,0x00 };
for (const unsigned char* c = set; c != set + sizeof(set); ++c) {
cout << static_cast<int>(*c) << endl;
}
cout << endl << "Putting value into array:" << endl;
*reinterpret_cast<int*>(&set[2]) = value; // writes the 4 bytes of value at offsets 2..5, in host byte order (assumes this unaligned write is acceptable on your platform)
for (const unsigned char* c = set; c != set + sizeof(set); ++c) {
cout << static_cast<int>(*c) << endl;
}
cout << endl << "Printing int's bytes one by one: " << endl;
for (int byteNumber = 0; byteNumber != sizeof(int); ++byteNumber) {
const unsigned char oneByte = reinterpret_cast<const unsigned char*>(&value)[byteNumber];
cout << static_cast<int>(oneByte) << endl;
}
cout << endl << "Printing value as hex: " << hex << value << std::endl;
}
UPD: From the comments to your question:
1. If you just need to get the separate digits of the number into separate bytes, that's a different story.
2. Little- vs. big-endianness matters here as well; I did not account for that in my answer. See the byte-by-byte sketch below.
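If a fixed byte order is needed, a sketch like the following stores value byte by byte (least significant byte first, an order chosen here as an assumption) regardless of the host's native endianness:
// Store the 4 bytes of value at offsets 2..5, least significant byte first.
for (int i = 0; i < 4; ++i)
set[2 + i] = static_cast<unsigned char>((value >> (8 * i)) & 0xFF);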
Did you mean this?
#include <stdio.h>
#include <string.h>
#define BYTE unsigned char
int main ( void )
{
BYTE set[6] = { 0xA8,0x12,0x84,0x03,0x00,0x00, } ;
char buf[8] ;
sprintf ( buf , "%d" , 1200 ) ; /* render the digits as text first... */
memcpy ( &set[2] , buf , 4 ) ; /* ...then copy "1200" into the last 4 bytes; sprintf'ing straight into set would overflow it with the terminating NUL */
printf ( "\n%c%c%c%c", set[2],set[3],set[4],set[5] ) ;
return 0 ;
}
output :
1200
I want to work with unsigned 8-bit variables in C++. Either unsigned char or uint8_t does the trick as far as the arithmetic is concerned (which is expected, since AFAIK uint8_t is just an alias for unsigned char, or so the debugger presents it).
The problem is that if I print out the variables using ostream in C++, it treats them as char. If I have:
unsigned char a = 0;
unsigned char b = 0xff;
cout << "a is " << hex << a <<"; b is " << hex << b << endl;
then the output is:
a is ^#; b is 377
instead of
a is 0; b is ff
I tried using uint8_t, but as I mentioned before, that's typedef'ed to unsigned char, so it does the same. How can I print my variables correctly?
Edit: I do this in many places throughout my code. Is there any way I can do this without casting to int each time I want to print?
Use:
cout << "a is " << hex << (int) a <<"; b is " << hex << (int) b << endl;
And if you want padding with leading zeros then:
#include <iomanip>
...
cout << "a is " << setw(2) << setfill('0') << hex << (int) a ;
As we are using C-style casts, why not go the whole hog with terminal C++ badness and use a macro!
#define HEX( x ) \
setw(2) << setfill('0') << hex << (int)( x )
you can then say
cout << "a is " << HEX( a );
Edit: Having said that, MartinStettner's solution is much nicer!
I would suggest using the following technique:
struct HexCharStruct
{
unsigned char c;
HexCharStruct(unsigned char _c) : c(_c) { }
};
inline std::ostream& operator<<(std::ostream& o, const HexCharStruct& hs)
{
return (o << std::hex << (int)hs.c);
}
inline HexCharStruct hex(unsigned char _c)
{
return HexCharStruct(_c);
}
int main()
{
char a = 131;
std::cout << hex(a) << std::endl;
}
It's short to write, has the same efficiency as the original solution and it lets you choose to use the "original" character output. And it's type-safe (not using "evil" macros :-))
You can read more about this at http://cpp.indi.frih.net/blog/2014/09/tippet-printing-numeric-values-for-chars-and-uint8_t/ and http://cpp.indi.frih.net/blog/2014/08/code-critique-stack-overflow-posters-cant-print-the-numeric-value-of-a-char/. I am only posting this because it has become clear that the author of the above articles does not intend to.
The simplest and most correct technique to print a char as hex is:
unsigned char a = 0;
unsigned char b = 0xff;
auto flags = cout.flags(); //I only include resetting the ioflags because so
//many answers on this page call functions where
//flags are changed and leave no way to
//return them to the state they were in before
//the function call
cout << "a is " << hex << +a <<"; b is " << +b << endl;
cout.flags(flags);
The Reader's Digest version of how this works: the unary + operator forces a no-op type conversion to an int with the correct signedness. So, an unsigned char converts to unsigned int, a signed char converts to int, and a char converts to either unsigned int or int depending on whether char is signed or unsigned on your platform (it comes as a shock to many that char is a distinct type, not specified as either signed or unsigned).
The only negative of this technique is that it may not be obvious what is happening to someone who is unfamiliar with it. However, I think that it is better to use the technique that is correct and teach others about it, rather than doing something that is incorrect but more immediately clear.
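For illustration, a minimal sketch of those promotions at work (output widths assume a 32-bit int):
#include <iostream>
int main()
{
signed char sc = -1; // promotes to int -1
unsigned char uc = 0xFF; // promotes to int 255 (value preserved)
std::cout << std::hex << +sc << ' ' << +uc << '\n'; // prints: ffffffff ff
}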
Well, this works for me:
std::cout << std::hex << (0xFF & a) << std::endl;
If you just cast to (int) as suggested, it might add 1s to the left of a if its most significant bit is 1. Masking with 0xFF guarantees the upper bits are filled with 0s instead, and the result of the & is an int, so cout prints it as a number (in hex, thanks to std::hex) rather than as a character.
I hope this helps.
In C++20 you'll be able to use std::format to do this:
std::cout << std::format("a is {:x}; b is {:x}\n", a, b);
Output:
a is 0; b is ff
In the meantime you can use the {fmt} library, which std::format is based on. {fmt} also provides the print function, which makes this even easier and more efficient:
fmt::print("a is {:x}; b is {:x}\n", a, b);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
Hm, it seems I re-invented the wheel yesterday... But hey, at least it's a generic wheel this time :) Chars are printed with two hex digits, shorts with four hex digits, and so on.
template<typename T>
struct hex_t
{
T x;
};
template<typename T>
hex_t<T> hex(T x)
{
hex_t<T> h = {x};
return h;
}
template<typename T>
std::ostream& operator<<(std::ostream& os, hex_t<T> h)
{
char buffer[2 * sizeof(T)];
for (auto i = sizeof buffer; i--; )
{
buffer[i] = "0123456789ABCDEF"[h.x & 15];
h.x >>= 4;
}
os.write(buffer, sizeof buffer);
return os;
}
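A usage sketch, assuming the definitions above:
#include <iostream>
int main()
{
unsigned char c = 0xAB;
short s = 0x1234;
std::cout << hex(c) << ' ' << hex(s) << '\n'; // prints: AB 1234
}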
I think TrungTN and anon's answer is okay, but MartinStettner's way of implementing the hex() function is not really simple, and rather obscure, considering hex << (int)mychar is already a workaround.
here is my solution to make "<<" operator easier:
#include <sstream>
#include <iomanip>
#include <iostream>
#include <string>
using namespace std;
string uchar2hex(unsigned char inchar)
{
ostringstream oss (ostringstream::out);
oss << setw(2) << setfill('0') << hex << (int)(inchar);
return oss.str();
}
int main()
{
unsigned char a = 131;
std::cout << uchar2hex(a) << std::endl;
}
It's just not worth implementing a stream operator :-)
I think we are missing an explanation of how these type conversions work.
Whether char is signed or unsigned is platform dependent. On x86, char is equivalent to signed char.
When an integral type (char, short, int, long) is converted to a larger capacity type, the conversion is made by adding zeros to the left in case of unsigned types and by sign extension for signed ones. Sign extension consists in replicating the most significant (leftmost) bit of the original number to the left till we reach the bit size of the target type.
Hence if I am in a signed char by default system and I do this:
char a = 0xF0; // Equivalent to the binary: 11110000
std::cout << std::hex << static_cast<int>(a);
We would obtain fffffff0 (on a 32-bit int), since the leading 1 bit has been sign-extended.
If we want to make sure we only print f0 on any system, we have to make an additional intermediate cast to unsigned char, so that zeros are added instead; and since leading zeros are not significant for the printed integer, they are not shown:
char a = 0xF0; // Equivalent to the binary: 11110000
std::cout << std::hex << static_cast<int>(static_cast<unsigned char>(a));
This produces f0.
I'd do it like MartinStettner but add an extra parameter for number of digits:
inline HexStruct hex(long n, int w=2)
{
return HexStruct(n, w);
}
// Rest of implementation is left as an exercise for the reader
So you have two digits by default but can set four, eight, or whatever if you want to.
E.g.:
int main()
{
short a = 3142;
std::cout << hex(a,4) << std::endl;
}
It may seem like overkill but as Bjarne said: "libraries should be easy to use, not easy to write".
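The elided HexStruct might look something like this (one possible sketch, since the original deliberately leaves it as an exercise):
#include <iomanip>
#include <iostream>
struct HexStruct
{
long value;
int width;
HexStruct(long v, int w) : value(v), width(w) {}
};
inline std::ostream& operator<<(std::ostream& o, const HexStruct& hs)
{
// print the value zero-padded to the requested width, then restore decimal output
return o << std::setw(hs.width) << std::setfill('0') << std::hex << hs.value << std::dec;
}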
I would suggest:
std::cout << setbase(16) << 32;
Taken from:
http://www.cprogramming.com/tutorial/iomanip.html
You can try the following code:
unsigned char a = 0;
unsigned char b = 0xff;
cout << hex << "a is " << int(a) << "; b is " << int(b) << endl;
cout << hex
<< "a is " << setfill('0') << setw(2) << int(a)
<< "; b is " << setfill('0') << setw(2) << int(b)
<< endl;
cout << hex << uppercase
<< "a is " << setfill('0') << setw(2) << int(a)
<< "; b is " << setfill('0') << setw(2) << int(b)
<< endl;
Output:
a is 0; b is ff
a is 00; b is ff
a is 00; b is FF
I use the following on win32/linux(32/64 bit):
#include <iomanip>
#include <sstream>
#include <string>
template <typename T>
std::string HexToString(T uval)
{
std::stringstream ss;
ss << "0x" << std::setw(sizeof(uval) * 2) << std::setfill('0') << std::hex << +uval;
return ss.str();
}
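A quick usage sketch (illustrative values):
#include <cstdint>
#include <iostream>
int main()
{
std::cout << HexToString(uint16_t(0xBEEF)) << '\n'; // prints: 0xbeef
std::cout << HexToString(uint8_t(7)) << '\n'; // prints: 0x07
}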
I realize this is an old question, but it's also a top Google result when searching for a solution to a very similar problem I had: the desire to implement arbitrary integer-to-hex-string conversions within a template class. My end goal was actually a Gtk::Entry subclass template that would allow editing various integer widths in hex, but that's beside the point.
This combines the unary operator+ trick with std::make_unsigned from <type_traits> to prevent the sign-extension of negative int8_t or signed char values that occurs in this answer.
Anyway, I believe this is more succinct than any other generic solution. It should work for any signed or unsigned integer type, and it throws a compile-time error if you attempt to instantiate the function with any non-integer type.
#include <sstream>
#include <string>
#include <type_traits>
template <
typename T,
typename = typename std::enable_if<std::is_integral<T>::value, T>::type
>
std::string toHexString(const T v)
{
std::ostringstream oss;
oss << std::hex << +((typename std::make_unsigned<T>::type)v);
return oss.str();
}
Some example usage:
#include <iomanip>
#include <iostream>
int main(int argc, char**argv)
{
// Prints 'ff' instead of "ffffffff". Unlike the other answer using the '+'
// operator to extend sizeof(char) int types to int/unsigned int
std::cout << toHexString(int8_t(-1)) << std::endl;
// Works with any integer type
std::cout << toHexString(int16_t(0xCAFE)) << std::endl;
// You can use setw and setfill with strings too -OR-
// the toHexString could easily have parameters added to do that.
std::cout << std::setw(8) << std::setfill('0') <<
toHexString(int(100)) << std::endl;
return 0;
}
Update: Alternatively, if you don't like the idea of the ostringstream being used, you can combine the templating and unary operator trick with the accepted answer's struct-based solution for the following. Note that here, I modified the template by removing the check for integer types. The make_unsigned usage might be enough for compile time type safety guarantees.
template <typename T>
struct HexValue
{
T value;
HexValue(T _v) : value(_v) { }
};
template <typename T>
inline std::ostream& operator<<(std::ostream& o, const HexValue<T>& hs)
{
return o << std::hex << +((typename std::make_unsigned<T>::type) hs.value);
}
template <typename T>
const HexValue<T> toHex(const T val)
{
return HexValue<T>(val);
}
// Usage:
std::cout << toHex(int8_t(-1)) << std::endl;
If you're using prefill and signed chars, be careful not to append unwanted 'F's:
char out_character = 0xBE;
cout << setfill('0') << setw(2) << hex << (unsigned short)out_character;
prints: ffbe
Using int instead of short results in ffffffbe.
To prevent the unwanted f's you can easily mask them out (note the parentheses: << binds more tightly than &, and unsigned short(x) is not a valid functional-style cast, hence the C-style casts):
char out_character = 0xBE;
cout << setfill('0') << setw(2) << hex << ((unsigned short)out_character & 0xFF);
I'd like to post my re-re-inventing version based on #FredOverflow's. I made the following modifications.
fix:
The rhs of operator<< should be of const reference type. In #FredOverflow's code, h.x >>= 4 changes the output h, which is surprisingly not compatible with the standard library, and type T is required to be copy-constructible.
Assume only that CHAR_BIT is a multiple of 4. #FredOverflow's code assumes char is 8 bits, which is not always true; in some implementations on DSPs, particularly, it is not uncommon for char to be 16 bits, 24 bits, 32 bits, etc.
improve:
Support all other standard library manipulators available for integral types, e.g. std::uppercase. Because format output is used in _print_byte, standard library manipulators are still available.
Add hex_sep to print separate bytes (note that in C/C++ a 'byte' is by definition a storage unit with the size of char). Add a template parameter Sep and instantiate _Hex<T, false> and _Hex<T, true> in hex and hex_sep respectively.
Avoid binary code bloat. The function _print_byte is extracted out of operator<<, with a function parameter size, to avoid instantiating it for every distinct size.
More on binary code bloat:
As mentioned in improvement 3, no matter how extensively hex and hex_sep are used, only two copies of a (nearly) duplicated function will exist in the binary code: _print_byte<true> and _print_byte<false>. And you might realize that this duplication can also be eliminated using exactly the same approach: add a function parameter sep. Yes, but doing so requires a runtime if(sep). I want a common library utility which may be used extensively in a program, so I compromised on the duplication rather than the runtime overhead. I achieved this by using a compile-time if: C++11 std::conditional; the overhead of the function call can hopefully be optimized away by inline.
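For comparison, the runtime-if alternative dismissed above might look like this (a hypothetical sketch, separate from the library below; assumes <iomanip>, <climits>, and <cstddef>):
// One copy in the binary, but pays a runtime branch per byte.
std::ostream& print_byte(std::ostream& os, const void* ptr, size_t size, bool sep)
{
auto pbyte = static_cast<const unsigned char*>(ptr);
os << std::hex << std::setfill('0');
for (size_t i = size; i-- > 0; )
{
os << std::setw(CHAR_BIT / 4) << static_cast<short>(pbyte[i]);
if (sep) os << ' '; // the runtime if that the compile-time design avoids
}
return os << std::setfill(' ') << std::dec;
}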
hex_print.h:
#include <ostream>
namespace Hex
{
typedef unsigned char Byte;
template <typename T, bool Sep> struct _Hex
{
_Hex(const T& t) : val(t)
{}
const T& val;
};
template <typename T, bool Sep>
std::ostream& operator<<(std::ostream& os, const _Hex<T, Sep>& h);
}
template <typename T> Hex::_Hex<T, false> hex(const T& x)
{ return Hex::_Hex<T, false>(x); }
template <typename T> Hex::_Hex<T, true> hex_sep(const T& x)
{ return Hex::_Hex<T, true>(x); }
#include "misc.tcc"
hex_print.tcc:
#include <climits>
#include <cstddef>
#include <iomanip>
#include <type_traits>
namespace Hex
{
struct Put_space {
static inline void run(std::ostream& os) { os << ' '; }
};
struct No_op {
static inline void run(std::ostream& os) {}
};
#if (CHAR_BIT & 3) // can use C++11 static_assert, but no real advantage here
#error "hex print utility need CHAR_BIT to be a multiple of 4"
#endif
static const size_t width = CHAR_BIT >> 2;
template <bool Sep>
std::ostream& _print_byte(std::ostream& os, const void* ptr, const size_t size)
{
using namespace std;
auto pbyte = reinterpret_cast<const Byte*>(ptr);
os << hex << setfill('0');
for (int i = size; --i >= 0; )
{
os << setw(width) << static_cast<short>(pbyte[i]);
conditional<Sep, Put_space, No_op>::type::run(os);
}
return os << setfill(' ') << dec;
}
template <typename T, bool Sep>
inline std::ostream& operator<<(std::ostream& os, const _Hex<T, Sep>& h)
{
return _print_byte<Sep>(os, &h.val, sizeof(T));
}
}
test:
struct { int x; } output = {0xdeadbeef};
cout << hex_sep(output) << std::uppercase << hex(output) << endl;
output:
de ad be ef DEADBEEF
This will also work:
// Note: this overload globally changes how unsigned char is streamed to any ostream.
std::ostream& operator<< (std::ostream& o, unsigned char c)
{
return o << (int)c;
}
int main()
{
unsigned char a = 06;
unsigned char b = 0xff;
std::cout << "a is " << std::hex << a <<"; b is " << std::hex << b << std::endl;
return 0;
}
I have used it this way:
#include <cstdio>
#include <cstring>
char strInput[] = "yourchardata";
char chHex[3] = ""; // 2 hex digits plus the terminating NUL (a 2-byte buffer would overflow)
int nLength = strlen(strInput);
char* chResut = new char[(nLength*2) + 1];
memset(chResut, 0, (nLength*2) + 1);
for (int i = 0; i < nLength; i++)
{
sprintf(chHex, "%02X", strInput[i] & 0x00FF);
memcpy(&(chResut[i*2]), chHex, 2);
}
printf("\n%s", chResut);
delete[] chResut; // new[] must be paired with delete[]
chResut = NULL;