C++ cout hex format

I am a C programmer, new to C++.
I tried to print the following with cout and got strange output. Any comment on this behaviour is appreciated.
#include <cstdio>
#include <iostream>
using namespace std;
int main()
{
    unsigned char x = 0xff;
    cout << "Value of x " << hex << x << " hexadecimal" << endl;
    printf(" Value of x %x by printf", x);
}
output:
Value of x ÿ hexadecimal
Value of x ff by printf

<< handles char as a 'character' that you want to output, and just outputs that byte exactly. The hex manipulator only applies to integer-like types, so the following will do what you expect:
cout << "Value of x " << hex << int(x) << " hexadecimal" << endl;
Billy ONeal's suggestion of static_cast would look like this:
cout << "Value of x " << hex << static_cast<int>(x) << " hexadecimal" << endl;

You are doing the hex part correctly, but x is a character, and C++ is trying to print it as a character. You have to cast it to an integer.
#include <cstdio>
#include <iostream>
using namespace std;
int main()
{
    unsigned char x = 0xff;
    cout << "Value of x " << hex << static_cast<int>(x) << " hexadecimal" << endl;
    printf(" Value of x %x by printf", x);
}

If I understand your question correctly, you may also want to know how to convert hex to decimal, given that you have already assigned unsigned char x = 0xff;
#include <iostream>
int main()
{
    unsigned char x = 0xff;
    std::cout << std::dec << static_cast<int>(x) << std::endl;
}
which gives the value 255 instead.
Further detail on the dec manipulator can be found at http://www.cplusplus.com/reference/ios/dec/.
If you want to know the hexadecimal value from the decimal one, here is a simple example
#include <iostream>
#include <iomanip>
int main()
{
    int x = 255;
    std::cout << std::showbase << std::setw(4) << std::hex << x << std::endl;
}
which prints 0xff.
The header <iomanip> is only needed for setw; showbase, which adds the 0x prefix, already comes with <iostream>. The original reply related to hex number printing was at http://www.cplusplus.com/forum/windows/51591/.

Related

unsigned long cannot hold the correct number over 2,147,483,647

Source Code:
#include <iostream>
using namespace std;
int main() {
    unsigned long P;
    P = 0x7F << 24;
    cout << P << endl;
    P = 0x80 << 24;
    cout << P << endl;
    return 0;
}
Output:
2130706432
18446744071562067968
As you can see, the first result is correct.
But the second result is extremely wrong.
The expected result is 2147483648 and it does not match with 18446744071562067968.
I want to know why
The type of the expression 0x80 << 24 is not unsigned long, it’s int. You then assign the result of that expression to P, and in the process convert it to an unsigned long. But at that point it has already overflowed (incidentally causing undefined behaviour). Use unsigned long literals in your expression:
P = 0x80ul << 24;
This problem is not entirely portable, since it depends on the number of bits in your representation of unsigned long. In this case, there is a signed overflow followed by a sign-extending conversion back to unsigned, and the two effects combine to produce your surprising result.
The basic solution is indicated here: ULL suffix on a numeric literal
I've broken it down in the code below.
#include <cstdint>
#include <iostream>
using namespace std;
int main() {
    cout << "sizeof(unsigned long) = " << sizeof(unsigned long) << "\n";
    cout << "sizeof(0x80) = " << sizeof(0x80) << "\n";
    int32_t a = (0x80 << 24);    // overflow: positive to negative
    uint64_t b = a;              // sign extension: negative to huge positive
    uint64_t c = (0x80 << 24);   // simply broken
    uint64_t d = (0x80UL << 24); // simply fixed
    uint32_t e = (0x80U << 24);  // what you probably intended
    cout << "a = " << a << "\n";
    cout << "b = " << b << "\n";
    cout << "c = " << c << "\n";
    cout << "d = " << d << "\n";
    cout << "e = " << e << "\n";
}
Output:
$ ./c-unsigned-long-cannot-hold-the-correct-number-over-2-147-483-647.cpp
sizeof(unsigned long) = 8
sizeof(0x80) = 4
a = -2147483648
b = 18446744071562067968
c = 18446744071562067968
d = 2147483648
e = 2147483648
If you're doing bit-shift operations like this, it probably makes sense to be explicit about the integer sizes (as I have shown in the code above).
What's the difference between long long and long
Fixed width integer types (since C++11)

Why am I getting blank output?

Why am I getting blank output? The pointers are able to modify the value, but I can't read the bytes back. Why?
#include <iostream>
using namespace std;
int main(){
    int a = 0;
    char *x1, *x2, *x3, *x4;
    x1 = (char *)&a;
    x2 = x1; x2++;
    x3 = x2; x3++;
    x4 = x3; x4++;
    *x1 = 1;
    *x2 = 1;
    *x3 = 1;
    *x4 = 1;
    cout << "#" << *x1 << " " << *x2 << " " << *x3 << " " << *x4 << "#" << endl;
    cout << a << endl;
}
[Desktop]👉 g++ test_pointer.cpp
[Desktop]👉 ./a.out
# #
16843009
I want to read the value of the integer through pointers of type char, so that I can read it byte by byte.
You're streaming chars. These get automatically ASCII-ised for you by IOStreams*, so you're seeing (or rather, not seeing) unprintable characters (in fact, all 0x01 bytes).
You can cast to int to see the numerical value, and perhaps add std::hex for a conventional view.
Example:
#include <iostream>
#include <iomanip>
int main()
{
    int a = 0;
    // Alias the first four bytes of `a` using `char*`
    char* x1 = (char*)&a;
    char* x2 = x1 + 1;
    char* x3 = x1 + 2;
    char* x4 = x1 + 3;
    *x1 = 1;
    *x2 = 1;
    *x3 = 1;
    *x4 = 1;
    std::cout << std::hex << std::setfill('0');
    std::cout << '#' << "0x" << std::setw(2) << (int)*x1
              << ' ' << "0x" << std::setw(2) << (int)*x2
              << ' ' << "0x" << std::setw(2) << (int)*x3
              << ' ' << "0x" << std::setw(2) << (int)*x4
              << '#' << '\n';
    std::cout << "0x" << a << '\n';
}
// Output:
// #0x01 0x01 0x01 0x01#
// 0x1010101
Those saying that your program has undefined behaviour are incorrect (assuming your int has at least four bytes in it); aliasing objects via char* is specifically permitted.
The 16843009 output is correct; that's equal to 0x01010101 which you'd again see if you put your stream into hex mode.
N.B. Some people will recommend reinterpret_cast<char*>(&a) and static_cast<int>(*x1), instead of C-style casts, though personally I find them ugly and unnecessary in this particular case. For the output you can at least write +*x1 to get a "free" promotion to int (via the unary + operator), but that's not terribly self-documenting.
* Technically it's something like the opposite; IOStreams usually automatically converts your numbers and booleans and things into the right ASCII characters to appear correct on screen. For char it skips that step, assuming that you're already providing the ASCII value you want.
Assuming an int is at least 4 bytes long on your system, the program manipulates the 4 bytes of int a.
The result 16843009 is the decimal value of 0x01010101, so this is as you might expect.
You don't see anything in the first line of output because you write 4 characters of a binary value 1 (or 0x01) which are invisible characters (ASCII SOH).
When you modify your program like this
*x1='1';
*x2='3';
*x3='5';
*x4='7';
you will see output with the expected characters
#1 3 5 7#
926233393
The value 926233393 is the decimal representation of 0x37353331 where 0x37 is the ASCII value of the character '7' etc.
(These results are valid for a little-endian architecture.)
You can use unary + for converting character type (printed as symbol) into integer type (printed as number):
cout <<"#" << +*x1 << " " << +*x2 << " " << +*x3 << " " << +*x4 << "#"<<endl ;
See integral promotion.
Have a look at your declarations of the x's
char *x1, *x2, *x3, *x4;
These are pointers to chars (characters).
In your stream output they are interpreted as printable characters.
A short look at an ASCII table lets you see that the low numbers are not printable.
Your writes set each of those bytes to 1, which is an unprintable control character (ASCII SOH).
One possibility to get readable output would be to cast the characters to int, so that the stream prints the numerical representation instead of the ASCII character:
cout << "#" << int(*x1) << " " << int(*x2) << " " << int(*x3) << " " << int(*x4) << "#" << endl;
If I understood your problem correctly, this is the solution
#include <iostream>
using namespace std;
int main(){
    int a = 0;
    char *x1, *x2, *x3, *x4;
    x1 = (char*)&a;
    x2 = x1; x2++;
    x3 = x2; x3++;
    x4 = x3; x4++;
    *x1 = 1;
    *x2 = 1;
    *x3 = 1;
    *x4 = 1;
    cout << "#" << (int)*x1 << " " << (int)*x2 << " " << (int)*x3 << " " << (int)*x4 << "#" << endl;
    cout << a << endl;
}

how to initialize an unsigned integer uint8_t

How do I use uint8_t and initialize a variable?
#include <cstdint>
#include <iostream>
using namespace std;
int main()
{
    uint8_t a = 6;
    cout << a;
    return 1;
}
It prints some symbol.
C++ treats uint8_t as char - because that's pretty much what it is.
If you pass a char to cout, it'll print as a char, which, with a value of 6, is the ACK symbol (which would probably display strangely, depending on your terminal settings).
If you want it to be printed as a number, casting it to an unsigned in cout should do the trick:
cout << (unsigned)a;
You can cast the variable a in order to print it as a number and not an ASCII symbol
#include <iostream>
#include <cstdint>
#include <cstdio>
int main()
{
    uint8_t a = 6;
    std::cout << "a: " << a << std::endl;
    std::cout << "a cast to char (the same type, in effect): " << char(a) << std::endl;
    std::cout << "a cast to int: " << int(a) << std::endl;
    getchar();
    return 0;
}
You can use good old type-unsafe printf.
#include <cstdint>
#include <cstdio>
int main()
{
    std::uint8_t a = 6;
    std::printf("%d\n", a);
}

Why do I have to cast a byte as unsigned twice to see hex output? [duplicate]

This question already has answers here:
Are int8_t and uint8_t intended to be char types?
(5 answers)
Closed 7 years ago.
I want to print a variable as hex:
#include <iostream>
#include <string>
#include <cstdint>
int main() {
    auto c = 0xb7;
    std::cout << std::hex << static_cast<unsigned char>(c) << std::endl;
    std::cout << std::hex << static_cast<unsigned>(static_cast<unsigned char>(c)) << std::endl;
    std::cout << std::hex << (uint8_t)(c) << std::endl;
    std::cout << std::hex << (unsigned)(uint8_t)(c) << std::endl;
    return 0;
}
The output seems to be:
\ufffd (tries to print it as a char)
b7
\ufffd (tries to print it as a char)
b7
I do understand that c has higher bits set (10110111), but I cast it to uint8_t and unsigned char once already.
Why do I have to cast uint8_t or unsigned char to unsigned again to get the expected output?
std::hex sets the basefield of the stream str to hex as if by calling str.setf(std::ios_base::hex, std::ios_base::basefield).
When this basefield hex bit is set, iostreams use hexadecimal base for integer I/O.
Code
#include <iostream>
int main()
{
    int i = 0xb7;
    unsigned u = 0xb7;
    char c = static_cast<char>(0xb7);
    unsigned char b = 0xb7;
    std::cout << std::hex << i << std::endl;
    std::cout << std::hex << u << std::endl;
    std::cout << std::hex << c << std::endl;
    std::cout << std::hex << b << std::endl;
    return 0;
}
Output
b7
b7
�
�
I suspect this output will vary on a Windows (non-UTF-8) system.

how to read binary file content as strings?

I need to read 16 bits from the binary file as std::string or char *. For example, a binary file contains 89 ab cd ef, and I want to be able to extract them as std::strings or char *. I have tried the following code:
ifstream *p = new ifstream();
char *buffer;
p->seekg(address, ios::beg);
buffer = new char[16];
memset(buffer, 0, 16);
p->read(buffer, 16);
When I try to std::cout the buffer, nothing appeared. How can I read these characters in the binary file?
EDIT: I was looking for the buffer to be a int type such as "0x89abcdef". Is it possible to achieve?
Something like:
#include <string>
#include <iostream>
#include <fstream>
int main()
{
    if (std::ifstream input{"filename"})
    {
        std::string s(2 /*bytes*/, '\0' /*initial content - irrelevant*/);
        if (input.read(&s[0], 2 /*bytes*/))
            std::cout << "SUCCESS: [0] " << std::hex
                      << (int)(unsigned char)s[0] << " [1] "
                      << (int)(unsigned char)s[1] << '\n';
        else
            std::cerr << "Couldn't read from file\n";
    }
    else
        std::cerr << "Couldn't open file\n";
}
You can't read a binary stream as though it were text.
You can, of course, read as binary (by using "file.read()" and "file.write()" methods on your stream object). Just like what you're doing now :)
You can also convert binary to text: "convert to hex text string" and "uuencode base 64" are two common ways to do this.
You'll want to read the bytes as numbers (of type long long probably).
Then you can print those using formatting specifiers like this:
#include <iostream>
#include <iomanip>
int main()
{
    using namespace std;
    int x = 2;
    int y = 255;
    cout << showbase       // show the 0x prefix
         << internal       // fill between the prefix and the number
         << setfill('0');  // fill with 0s
    cout << hex << setw(4) << x << dec << " = " << setw(3) << x << endl;
    cout << hex << setw(4) << y << dec << " = " << setw(3) << y << endl;
    return 0;
}