c++ (OpenBSD 5.8 amd64), outputting uint64_t as a real number

Trying to understand why I get the following output from my program:
$ ./chartouintest
UInts: 153 97 67 49 139 0 3 129
Hexes: 99 61 43 31 8b 00 03 81
uint64 val: 8103008b31436199
$
I am trying to output just the actual UInt64 numerical value, but can't seem to do it (the output is not right).
Here is the code:
#include <iostream>
#include <iomanip>
#include <cstdint>   // uint64_t
#include <stdlib.h>  // rand()

union bytes {
    unsigned char c[8];
    uint64_t l;
} memobj;

int main() {
    // fill with random bytes
    for (unsigned int i = 0; i < sizeof(memobj.c); ++i) { memobj.c[i] = (unsigned char)rand(); }
    // see values of all elements as unsigned int8's and hex vals
    std::cout << "UInts: ";
    for (int x = 0; x < sizeof(memobj.c); ++x) { std::cout << (unsigned int)memobj.c[x] << " "; }
    std::cout << std::endl;
    std::cout << "Hexes: ";
    for (int x = 0; x < sizeof(memobj.c); ++x) { std::cout << std::setw(2) << std::setfill('0') << std::hex << (unsigned int)memobj.c[x] << " "; }
    std::cout << std::endl;
    std::cout << "uint64 val: " << memobj.l << std::endl;
}
What am I doing wrong?
Thanks in advance for the help!
J

Writing one member of a union and then reading another is undefined behavior (there are exceptions, but this case is UB).
You shouldn't expect anything in particular. The compiler can do whatever it wants with your code, for instance giving a nice "expected" result in debug mode and garbage, or a crash, in release mode. Another compiler might play a different trick. You'll never know for sure, so why bother?
Why not do it the right way? memcpy, perhaps?
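For example, a minimal sketch of the memcpy approach (the byte values are just the ones from the question's output, used for illustration):
#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    unsigned char c[8] = {0x99, 0x61, 0x43, 0x31, 0x8b, 0x00, 0x03, 0x81};
    std::uint64_t l;
    std::memcpy(&l, c, sizeof l);  // well-defined: copies the bytes into the integer
    std::cout << l << std::endl;   // prints the value in decimal
}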
EDIT:
To really answer the question, a note about std::cout: the std::hex manipulator switches the stream to hexadecimal and stays in effect until another base manipulator is used, which is why the final "uint64 val:" line is printed in hex (and not in decimal, as the OP expects). Inserting std::dec before memobj.l restores decimal output. Other than that, nothing is wrong with the output, despite the UB threat.
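A small sketch of that stickiness (the values are arbitrary):
#include <iostream>

int main() {
    std::cout << std::hex << 255 << '\n';  // ff
    std::cout << 255 << '\n';              // still ff: the hex flag persists
    std::cout << std::dec << 255 << '\n';  // 255: decimal restored
}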

Related

My data in the console is displayed in the wrong sequence

Why is my data in the console displayed in the wrong sequence?
I have the following code:
#include <iostream>

template <typename T, typename T2>
T2 printArr(const T* array, int i) {
    for (int j = 0; j < i; j++) {
        std::cout << array[j] << " ";
    }
    std::cout << std::endl;
    return array[i - 1];
}

int main() {
    const int iSize = 3;
    int i_arr[iSize] = {23, 45, 78};
    std::cout << "Int array: ";
    std::cout << "Last element: " << printArr<int, int>(i_arr, iSize) << std::endl;
}
What I get when I compile and run it:
Int array: 23 45 78
Last element: 78
What I think I should get:
Int array: Last element: 23 45 78
78
Most likely I am misunderstanding how the compiler reasons about my code.
Here you can see that the result is the same as I described in my question: http://cpp.sh/2lfbcf
I also tried compiling the code in Visual Studio 2019, and the result is identical.
Before C++17, given an expression E1 << E2, it is unspecified whether every value computation and side effect of E1 is sequenced before every value computation and side effect of E2.
See cppreference note 19
In your code, using a standard before C++17, it is unspecified whether the return value of printArr() is calculated (which as a side-effect, streams to std::cout) before or after std::cout << "Last element: ".
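One way to make the order deterministic under any standard is to give the call its own statement, so its side effect on std::cout is fully sequenced before the second output line; a sketch reusing printArr and i_arr from the question:
int main() {
    const int iSize = 3;
    int i_arr[iSize] = {23, 45, 78};
    std::cout << "Int array: ";
    int last = printArr<int, int>(i_arr, iSize);  // prints "23 45 78" here
    std::cout << "Last element: " << last << std::endl;
}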

Why does conversion of int -> hex/oct/bin print as plain decimal value in output?

#include <iostream>
#include <sstream>
using namespace std;

int main() {
    // <variables>
    int t = 0b11011101;
    stringstream aa;
    int n;
    string hexed;
    int hexedNot;
    // </variables>
    aa << t;
    aa >> n;
    cout << n << endl; // output 221
    aa.clear();
    aa << hex << n;
    aa >> hexed;
    aa.clear();
    aa << hex << n;
    aa >> hexedNot;
    cout << hexed << endl; // output dd
    cout << hexedNot; // output 221
    return 2137;
}
I want to convert decimal ints to hex/oct/bin ints with a stringstream, but I don't know how to approach it properly. If I convert to a string containing the hex text, it's fine, but when I try to do the same with an integer, it just doesn't work. Any ideas? I can't use C++11, and I want to do it in a really slim and easy way.
I know that I can use cout << hex << something;, but that would just change my output; it wouldn't write the "hexified" value into my variable.
The std::string hexed is read from the std::stringstream after you inserted the hexadecimal representation of n into the stream:
aa << hex << n;
The next operation
aa >> hexed;
reads from the stream into the std::string variable. Hence
cout << hexed << endl; // output dd
You seem to have a big misconception:
I know that I can use cout << hex << something;, but that would just change my output and it wouldn't write the hexified value into my variable.
There is no such thing as a "hexified value" in C++ or any other programming language. There are just (integer) values.
Integers are integers; their representation on input/output is a different kettle of fish:
it is not bound to the variable's type or to the representation it was initialized from, but to what you tell the std::istream/std::ostream to do.
To get the hexadecimal representation of 221 printed on the terminal, just write
cout << hex << hexedNot;
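If what you actually need is a std::string holding the hex text (this also works before C++11), a minimal sketch:
#include <iostream>
#include <sstream>
#include <string>

int main() {
    int n = 221;
    std::ostringstream oss;
    oss << std::hex << n;             // write the hex *text* of n into the stream
    std::string hexText = oss.str();  // "dd" -- text, not a different int value
    std::cout << hexText << '\n';
}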
As for your comment:
but I want to have a variable int X = 0xDD or int X = 0b11011101 if that's possible. If not, I'll continue to use the cout << hex << sth; like I always have.
Of course these are possible. Rather than insisting on their textual representation, you should try
int hexValue = 0xDD;
int binValue = 0b11011101;
if (hexValue == binValue) {
    cout << "Wow, 0xDD and 0b11011101 represent the same int value!" << endl;
}
Don't confuse presentation and content.
Integers (as all other values) are stored as binary in computer memory ("content"). Whether cout prints that in binary, hexadecimal, or decimal, is just a formatting thing ("representation"). 0b11011101, 0xdd, and 221 are all just different representations of the same number.
C++, like every other language I know, doesn't store formatting information with integer variables. But you can always create your own type to do that:
#include <iostream>

// The type of std::dec, std::hex, std::oct:
using FormattingType = std::ios_base&(&)(std::ios_base&);

class IntWithRepresentation {
public:
    IntWithRepresentation(int value, FormattingType formatting)
        : value(value), formatting(formatting) {}
    int value;
    FormattingType formatting;
};

// Overload operator<< for std::ostream
std::ostream& operator<<(std::ostream& output_stream, IntWithRepresentation const& i) {
    output_stream << i.formatting << i.value;
    return output_stream;
}

int main() {
    IntWithRepresentation dec = {221, std::dec};
    IntWithRepresentation hex = {0xdd, std::hex};
    IntWithRepresentation oct = {221, std::oct};
    std::cout << dec << std::endl;
    std::cout << hex << std::endl;
    std::cout << oct << std::endl;
}

Construct a string who contain the hexValue of a binary array

I'm trying to construct a string from a byte array (Crypto++) in order to connect to SQS in C++, but I have issues with '0' characters.
The result is almost correct, except that some '0's end up at the end of the string.
std::string shaDigest(const std::string &key = "") {
    byte out[64] = {0};
    CryptoPP::SHA256().CalculateDigest(out, reinterpret_cast<const byte*>(key.c_str()), key.size());
    std::stringstream ss;
    std::string rep;
    for (int i = 0; i < 64; i++) {
        ss << std::hex << static_cast<int>(out[i]);
    }
    ss >> rep;
    rep.erase(rep.begin() + 64, rep.end());
    return rep;
}
output:
correct : c46268185ea2227958f810a84dce4ade54abc4f42a03153ef720150a40e2e07b
mine    : c46268185ea2227958f810a84dce4ade54abc4f42a3153ef72015a40e2e07b00
(two leading zeros are missing in the middle of my string, and two extra '0' characters appear at the end)
Edit: I'm trying to do the same thing that hashlib.sha256('').hexdigest() does in Python.
If that indeed works, here's the solution with my suggestions incorporated.
std::string shaDigest(const std::string &key = "") {
    std::array<byte, CryptoPP::SHA256::DIGESTSIZE> out {};  // 32 bytes for SHA-256
    CryptoPP::SHA256().CalculateDigest(out.data(), reinterpret_cast<const byte*>(key.c_str()), key.size());
    std::stringstream ss;
    ss << std::hex << std::setfill('0');
    for (byte b : out) {
        ss << std::setw(2) << static_cast<int>(b);
    }
    // No `.substr(0,64)` is needed here: the hex form of the 32-byte digest
    // always has exactly 64 characters.
    return ss.str();
}
You correctly convert the bytes to hexadecimal, and it works as long as the byte value is greater than 15. Below that, the first hex digit is a 0, which is not printed by default. The two missing 0s are from 0x03 -> 3 and 0x0a -> a.
You should use:
for (int i = 0; i < 64; i++) {
    ss << std::hex << std::setw(2) << std::setfill('0') << static_cast<int>(out[i]);
}
You need to set the field width for proper zero-padding of numbers that would otherwise have fewer than two hexadecimal digits. Note that setw is reset after each insertion, so you need to set it again before every number inserted into the stream.
Example:
#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::hex << std::setfill('0');
    for (int i = 0; i < 0x11; i++)
        std::cout << std::setw(2) << i << "\n";
}
Output:
$ g++ test.cc && ./a.out
00
01
02
03
04
05
06
07
08
09
0a
0b
0c
0d
0e
0f
10
For reference:
http://en.cppreference.com/w/cpp/io/manip/setw
http://en.cppreference.com/w/cpp/io/manip/setfill

Print out every bit of variable like 0 or 1 in byte blocks [duplicate]

I'm following a college course about operating systems and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc. and today we just learned how signed/unsigned numbers are stored in memory using the two's complement (~number + 1).
We have a couple of exercises to do on paper and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
and we need to show the binary representation in memory of a, b and c.
I've done it on paper and it gives me the following results (all the binary representations in memory of the numbers after the two's complement):
a = 00111010 (it's a char, so 1 byte)
b = 00001000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
Is there a way to verify my answer? Is there a standard way in C++ to show the binary representation in memory of a number, or do I have to code each step myself (calculate the two's complement and then convert to binary)? I know the latter wouldn't take long, but I'm curious whether there is a standard way to do so.
The easiest way is probably to create an std::bitset representing the value, then stream that to cout.
#include <bitset>
...
char a = -58;
std::bitset<8> x(a);
std::cout << x << '\n';
short c = -315;
std::bitset<16> y(c);
std::cout << y << '\n';
Use on-the-fly conversion to std::bitset. No temporary variables, no loops, no functions, no macros.
Live On Coliru
#include <iostream>
#include <bitset>

int main() {
    int a = -58, b = a >> 3, c = -315;
    std::cout << "a = " << std::bitset<8>(a) << std::endl;
    std::cout << "b = " << std::bitset<8>(b) << std::endl;
    std::cout << "c = " << std::bitset<16>(c) << std::endl;
}
Prints:
a = 11000110
b = 11111000
c = 1111111011000101
In C++20 you can use std::format to do this:
unsigned char a = -58;
std::cout << std::format("{:b}", a);
Output:
11000110
On older systems you can use the {fmt} library, which std::format is based on. {fmt} also provides the print function, which makes this even easier and more efficient (godbolt):
unsigned char a = -58;
fmt::print("{:b}", a);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
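The format specification also takes a fill and width, so you can zero-pad to the type's size; a short sketch (assuming a C++20 compiler with <format>):
#include <format>
#include <iostream>

int main() {
    unsigned char a = -58;    // bit pattern 11000110
    unsigned short c = -315;  // bit pattern 1111111011000101
    std::cout << std::format("{:08b}\n", a);   // zero-pad to 8 binary digits
    std::cout << std::format("{:016b}\n", c);  // zero-pad to 16 binary digits
}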
If you want to display the bit representation of any object, not just an integer, remember to reinterpret as a char array first, then you can print the contents of that array, as hex, or even as binary (via bitset):
#include <iostream>
#include <bitset>
#include <climits>

template<typename T>
void show_binrep(const T& a)
{
    const char* beg = reinterpret_cast<const char*>(&a);
    const char* end = beg + sizeof(a);
    while (beg != end)
        std::cout << std::bitset<CHAR_BIT>(*beg++) << ' ';
    std::cout << '\n';
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    show_binrep(a);
    show_binrep(b);
    show_binrep(c);
    float f = 3.14;
    show_binrep(f);
}
Note that most common systems are little-endian, so the output of show_binrep(c) is not the 11111110 11000101 you expect, because that's not how it's stored in memory. If you're looking for the value representation in binary, then a simple cout << bitset<16>(c) works.
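A small sketch contrasting the two views (the memory-order line assumes a little-endian machine):
#include <bitset>
#include <climits>
#include <cstddef>
#include <iostream>

int main() {
    short c = -315;
    // Value representation: most significant bit first.
    std::cout << std::bitset<16>(c) << '\n';              // 1111111011000101
    // Object representation: bytes in memory order (low byte first on little-endian).
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&c);
    for (std::size_t i = 0; i < sizeof c; ++i)
        std::cout << std::bitset<CHAR_BIT>(p[i]) << ' ';  // 11000101 11111110
    std::cout << '\n';
}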
Is there a standard way in C++ to show the binary representation in memory of a number [...]?
No. There is no std::bin manipulator analogous to std::hex or std::dec, but it's not hard to output a number in binary yourself:
you output the left-most bit by masking off all the others, shift left, and repeat for every bit you have.
(The number of bits in a type is sizeof(T) * CHAR_BIT.)
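A minimal sketch of that mask-and-shift idea for unsigned values (print_bits is just an illustrative name):
#include <climits>
#include <iostream>

// Prints the bits of an unsigned value, most significant bit first.
template <typename T>
void print_bits(T value) {
    for (int i = sizeof(T) * CHAR_BIT - 1; i >= 0; --i)
        std::cout << ((value >> i) & 1);
    std::cout << '\n';
}

int main() {
    print_bits<unsigned char>(198);  // 11000110, the bit pattern of (char)-58
}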
Similar to what has already been posted, just using shift and mask to get each bit; being a template, it is usable for any type (CHAR_BIT from <climits> gives the number of bits in a byte). Note that it prints the least significant bit of each byte first.
#include <iostream>
#include <climits>

template<typename T>
void printBin(const T& t) {
    size_t nBytes = sizeof(T);
    const char* rawPtr = (const char*)(&t);
    for (size_t byte = 0; byte < nBytes; byte++) {
        for (size_t bit = 0; bit < CHAR_BIT; bit++) {
            std::cout << ((rawPtr[byte] >> bit) & 1);
        }
    }
    std::cout << std::endl;
}

int main(void) {
    for (int i = 0; i < 50; i++) {
        std::cout << i << ": ";
        printBin(i);
    }
}
Reusable function:
#include <bitset>
#include <sstream>
#include <string>

template<typename T>
static std::string toBinaryString(const T& x)
{
    std::stringstream ss;
    ss << std::bitset<sizeof(T) * 8>(x);
    return ss.str();
}
Usage:
#include <cstdint>
#include <iostream>

int main() {
    uint16_t x = 8;
    std::cout << toBinaryString(x);
}
This works with all kinds of integers.
#include <iostream>
#include <string>
#include <cmath> // in order to use the pow() function
using namespace std;

string show_binary(unsigned int u, int num_of_bits);

int main()
{
    cout << show_binary(128, 8) << endl;  // should print 10000000
    cout << show_binary(128, 5) << endl;  // should print 00000
    cout << show_binary(128, 10) << endl; // should print 0010000000
    return 0;
}

string show_binary(unsigned int u, int num_of_bits)
{
    string a = "";
    u %= (unsigned int)pow(2, num_of_bits);   // keep only the low num_of_bits bits of u
    unsigned int t = pow(2, num_of_bits - 1); // t starts at the highest power of 2 that fits
    for (; t > 0; t = t / 2) {                // t iterates through powers of 2
        if (u >= t) {                         // check if u can be represented by the current value of t
            u -= t;
            a += "1";                         // if so, add a 1
        } else {
            a += "0";                         // if not, add a 0
        }
    }
    return a; // return the string
}
Using the std::bitset answers and convenience templates:
#include <iostream>
#include <bitset>
#include <climits>

template<typename T>
struct BinaryForm {
    BinaryForm(const T& v) : _bs(v) {}
    const std::bitset<sizeof(T) * CHAR_BIT> _bs;
};

template<typename T>
inline std::ostream& operator<<(std::ostream& os, const BinaryForm<T>& bf) {
    return os << bf._bs;
}
Using it like this:
auto c = 'A';
std::cout << "c: " << c << " binary: " << BinaryForm{c} << std::endl;
unsigned x = 1234;
std::cout << "x: " << x << " binary: " << BinaryForm{x} << std::endl;
int64_t z { -1024 };
std::cout << "z: " << z << " binary: " << BinaryForm{z} << std::endl;
Generates output:
c: A binary: 01000001
x: 1234 binary: 00000000000000000000010011010010
z: -1024 binary: 1111111111111111111111111111111111111111111111111111110000000000
With older C++ versions, you can use this snippet:
#include <iostream>
#include <string>
using namespace std;

template<typename T>
string toBinary(const T& t)
{
    string s = "";
    int n = sizeof(T) * 8;
    for (int i = n - 1; i >= 0; i--)
    {
        s += (t & (1 << i)) ? "1" : "0";
    }
    return s;
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    cout << "a = " << a << " => " << toBinary(a) << endl;
    cout << "b = " << b << " => " << toBinary(b) << endl;
    cout << "c = " << c << " => " << toBinary(c) << endl;
}
a = => 11000110
b = => 11111000
c = -315 => 1111111011000101
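Side note: a and b appear to print nothing before the "=>" because operator<< treats char as a character, and these values are not printable; cast to int to see the number (a tiny sketch reusing a and toBinary from the snippet above):
cout << "a = " << static_cast<int>(a) << " => " << toBinary(a) << endl;  // a = -58 => 11000110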
I have had this problem when playing competitive coding games online. Here is a solution that is quick to implement and fairly intuitive. It also avoids outputting leading zeros and doesn't rely on <bitset>.
unsigned int r = 221;  // the value to print; keep it unsigned so the shift loop terminates
std::string s;
do {
    s = std::to_string(r & 1) + s;  // prepend the lowest bit
} while (r >>= 1);
std::cout << s;
Note, however, that this solution will increase your runtime, so if you are competing on performance (or not competing at all) you should use one of the other solutions on this page.
Here is the true way to get the binary representation of a number:
unsigned int i = *(unsigned int*) &x;
Is this what you're looking for?
std::cout << std::hex << val << std::endl;

Using a bitset library

I am taking my first steps with C++, and with some help I have written some code for a simple function. But I have a problem: I am using a bitset class that needs a specific library, and I don't know how to include this library in my code.
I have been reading some information on the net but haven't managed to do it, so I wonder if any of you can tell me in detail how to do it.
To give you an idea, I have been looking at http://www.boost.org/doc/libs/1_36_0/libs/dynamic_bitset/dynamic_bitset.html, http://www.boost.org/doc/libs/1_46_0/libs/dynamic_bitset/dynamic_bitset.html#cons2 and similar places.
I have attached my code so that you can see what I am doing.
Thanks in advance :)
// Program that converts a number from decimal to binary and shows the
// positions where the binary representation of the number contains a 1
#include <iostream>
#include <boost/dynamic_bitset.hpp>

int main() {
    unsigned long long dec;
    std::cout << "Write a number in decimal: ";
    std::cin >> dec;
    boost::dynamic_bitset<> bs(64, dec);
    std::cout << bs << std::endl;
    for (size_t i = 0; i < 64; i++) {
        if (bs[i])
            std::cout << "Position " << i << " is 1" << std::endl;
    }
    //system("pause");
    return 0;
}
If you don't want your bitset to grow dynamically, you can just use the bitset that comes built into all standards-compliant C++ implementations:
#include <iostream>
#include <bitset>

int main() {
    unsigned long long dec;
    std::cout << "Write a number in decimal: ";
    std::cin >> dec;
    const size_t number_of_bits = sizeof(dec) * 8;
    std::bitset<number_of_bits> bs(dec);
    std::cout << bs << std::endl;
    for (size_t i = 0; i < number_of_bits; i++) {
        if (bs[i])
            std::cout << "Position " << i << " is 1" << std::endl;
    }
    return 0;
}
To use the dynamic_bitset class, you have to download the Boost libraries and add the boost folder to your compiler's include directories. If you are using the GNU C++ compiler, you would use something like:
g++ -I path/to/boost_1_46_1 mycode.cpp -o mycode