How to add hex values (no output)? [duplicate] - c++

This question already has answers here:
C++ cout hex values?
(10 answers)
Closed 2 years ago.
My code:
#include <iostream>
using namespace std;

int main() {
    unsigned int a = 0x0009, b = 0x0002;
    unsigned int c = a + b;
    cout << c;
}
This currently prints:
c = 11
but I want it to print:
c = 000B
How can I do this?

When you do this
int main() {
    unsigned int a = 0x0009, b = 0x0002;
    unsigned int c = a + b;
}
then c has the value 11. It also has the value 0x000B, and it also has the value 10 in a representation that uses eleven as its base.
11 and 0x000B (and 10 in base eleven) are different representations of the same value.
When you use std::cout, the number is printed in decimal by default. Which representation you choose to print the value on the screen has no influence whatsoever on the actual value of c.
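A minimal sketch of this point (my addition, not from the original answer): the value stored in c never changes; only the textual representation chosen at output time does.

#include <iostream>

int main() {
    unsigned int c = 0x000B;            // exactly the same value as 11
    std::cout << c << '\n';             // prints 11 (decimal is the default)
    std::cout << std::hex << c << '\n'; // prints b (same value, hex digits)
}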

What I understand is that you want to display the result in a specific hexadecimal format, XXXX.
The addition itself works the same in any number base; you only need to display the result in your format.
You can do this, for instance:
#include <iostream>
#include <iomanip>
#include <sstream> // needed for std::stringstream

std::string displayInPersonalizedHexa(unsigned int a)
{
    std::stringstream ss;
    ss << std::uppercase << std::setfill('0') << std::setw(4) << std::hex << a;
    std::string x;
    ss >> x;
    return x;
}
int main() {
    unsigned int a = 0x0009, b = 0x0002;
    unsigned int c = a + b;
    // displays 000B
    std::cout << displayInPersonalizedHexa(c) << std::endl;
    // adds c=c+1
    c = c + 1;
    // displays 000C
    std::cout << displayInPersonalizedHexa(c) << std::endl;
    // 0xC+5 = 0x11
    c = c + 5;
    // displays 0011
    std::cout << displayInPersonalizedHexa(c) << std::endl;
}
This will output
000B
000C
0011
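If you don't need the helper function, the same formatting can be applied directly to std::cout. A minimal sketch (my addition); note that std::setw only applies to the next item written:

#include <iomanip>
#include <iostream>

int main() {
    unsigned int c = 0x0009 + 0x0002;
    std::cout << std::uppercase << std::setfill('0') << std::setw(4)
              << std::hex << c << '\n'; // prints 000B
}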

Multiplying two uint16_t numbers results in an int [duplicate]

This question already has answers here:
Why must a short be converted to an int before arithmetic operations in C and C++?
(4 answers)
Closed 4 years ago.
Take a look at the following snippet:
#include <iostream>
#include <cstdint>
#include <boost/type_index.hpp>

using boost::typeindex::type_id_with_cvr;

int main(int argc, char** argv)
{
    constexpr uint16_t b = 2;
    constexpr uint16_t c = 3;
    constexpr const auto bc = b * c;
    std::cout << "b: " << type_id_with_cvr<decltype(b)>().pretty_name() << std::endl;
    std::cout << "b * c: " << type_id_with_cvr<decltype(bc)>().pretty_name() << std::endl;
}
This results in the following output:
b: unsigned short const
b * c: int const
Why does multiplying two unsigned shorts result in an int?
Compiler: g++ 5.4.0
unsigned short values are implicitly converted to int before the multiplication.
short and char are considered "storage types" and are implicitly promoted to int before any computation is done. This is why
unsigned char x = 255, y = 1;
printf("%i\n", x + y); // you get 256, not 0
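A minimal sketch of the same rule in action, plus the usual workaround (the static_cast is my addition, not from the answer); it assumes the common case of a 32-bit int:

#include <cstdint>
#include <cstdio>
#include <type_traits>

int main() {
    uint16_t b = 2, c = 3;
    auto bc = b * c; // both operands are promoted to int before the multiply
    static_assert(std::is_same_v<decltype(bc), int>, "the product has type int");
    // To keep the arithmetic unsigned, widen one operand explicitly up front:
    auto bcu = static_cast<uint32_t>(b) * c;
    std::printf("%d %u\n", bc, (unsigned)bcu); // 6 6
}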

Converting unsigned char * to hexstring

The code below takes a hex string (every byte represented as its corresponding hex value),
converts it to an unsigned char * buffer, and then converts it back to a hex string.
It tests the conversion from an unsigned char * buffer to a hex string,
which I need to send over the network to a receiver process.
I chose a hex string because an unsigned char can be in the range 0 to 255 and there is no printable character after 127.
The comments in the code below mark the portion that bugs me.
#include <iostream>
#include <sstream>
#include <iomanip>
using namespace std;

// converts a hex string to the corresponding integer, i.e. "c0" -> 192
int convertHexStringToInt(const string & hexString)
{
    stringstream geek;
    int x = 0;
    geek << std::hex << hexString;
    geek >> x;
    return x;
}

// converts a complete hex string to an unsigned char * buffer
void convertHexStringToUnsignedCharBuffer(string hexString, unsigned char* hexBuffer)
{
    int i = 0;
    while (hexString.length())
    {
        string hexStringPart = hexString.substr(0, 2);
        hexString = hexString.substr(2);
        int hexStringOneByte = convertHexStringToInt(hexStringPart);
        hexBuffer[i] = static_cast<unsigned char>(hexStringOneByte & 0xFF);
        i++;
    }
}

int main()
{
    // The hex string below is the hex representation of an unsigned char * buffer.
    // The buffer is generated by an encryption algorithm in unsigned char * format.
    // I convert it to a hex string to make it printable for verification purposes,
    // and take the hex string as input here to test the conversion logic.
    string inputHexString = "552027e33844dd7b71676b963c0b8e20";
    string outputHexString;
    stringstream geek;
    unsigned char * hexBuffer = new unsigned char[inputHexString.length() / 2];
    convertHexStringToUnsignedCharBuffer(inputHexString, hexBuffer);
    for (size_t i = 0; i < inputHexString.length() / 2; i++)
    {
        geek << std::hex << std::setw(2) << std::setfill('0') << (0xFF & hexBuffer[i]); // this works
        //geek << std::hex << std::setw(2) << std::setfill('0') << hexBuffer[i]; // --> this does not work
        // I cannot figure out why I need the bitwise AND "0xFF & hexBuffer[i]":
        // without it the conversion fails for individual bytes with values above 127.
    }
    geek >> outputHexString;
    cout << "input hex string:  " << inputHexString << endl;
    cout << "output hex string: " << outputHexString << endl;
    if (0 == inputHexString.compare(outputHexString))
        cout << "hex encoding successful" << endl;
    else
        cout << "hex encoding failed" << endl;
    delete[] hexBuffer;
    return 0;
}
Can someone explain? I am sure it's something silly that I am missing.
The C++20 way:
// needs: <algorithm>, <format>, <iostream>, <iterator>, <span>, <string>
unsigned char* data = new unsigned char[]{ "Hello world\n\t\r\0" };
std::size_t data_size = sizeof("Hello world\n\t\r\0") - 1;
auto sp = std::span(data, data_size);
std::transform(sp.begin(), sp.end(),
               std::ostream_iterator<std::string>(std::cout),
               [](unsigned char c) -> std::string {
                   return std::format("{:02X}", int(c));
               });
or if you want to store the result in a string (note: std::back_inserter over a std::string appends single chars, so a std::string-returning lambda passed to std::transform would not compile; std::format_to does the job):
std::string result{};
result.reserve(data_size * 2 + 1);
for (unsigned char c : sp)
    std::format_to(std::back_inserter(result), "{:02X}", int(c));
Output:
48656C6C6F20776F726C640A090D00
The output of an unsigned char is like the output of a char, which is obviously not what the OP expects.
I tested the following on coliru:
#include <iomanip>
#include <iostream>

int main()
{
    std::cout << "Output of (unsigned char)0xc0: "
              << std::hex << std::setw(2) << std::setfill('0') << (unsigned char)0xc0 << '\n';
    return 0;
}
and got:
Output of (unsigned char)0xc0: 0�
This is caused by the std::ostream::operator<<() overload that is chosen from the available candidates. I looked on cppreference at
operator<<(std::basic_ostream) and
std::basic_ostream::operator<<
and found
template< class Traits >
basic_ostream<char,Traits>& operator<<( basic_ostream<char,Traits>& os,
                                        unsigned char ch );
in the former (with a little bit of help from M.M).
The OP suggested a fix: a bit-wise AND with 0xff, which seemed to work. Checking this on coliru.com:
#include <iomanip>
#include <iostream>

int main()
{
    std::cout << "Output of (unsigned char)0xc0: "
              << std::hex << std::setw(2) << std::setfill('0') << (0xff & (unsigned char)0xc0) << '\n';
    return 0;
}
Output:
Output of (unsigned char)0xc0: c0
Really, this seems to work. Why?
0xff is an int constant (strictly speaking: an integer literal) and has type int. Hence, the bit-wise AND promotes (unsigned char)0xc0 to int as well, yields a result of type int, and hence the std::ostream::operator<< for int is applied.
This is one option to solve this. I can provide another one – just converting the unsigned char to unsigned.
A plain char may be signed, and promoting a signed char to int sign-extends (undesired in this case); converting an unsigned char to unsigned, like its promotion to int, always zero-extends, so no unwanted high bits can appear. The output stream operator for unsigned provides the intended output as well:
#include <iomanip>
#include <iostream>

int main()
{
    std::cout << "Output of (unsigned char)0xc0: "
              << std::hex << std::setw(2) << std::setfill('0') << (unsigned)(unsigned char)0xc0 << '\n';
    const unsigned char c = 0xc0;
    std::cout << "Output of unsigned char c = 0xc0: "
              << std::hex << std::setw(2) << std::setfill('0') << (unsigned)c << '\n';
    return 0;
}
Output:
Output of (unsigned char)0xc0: c0
Output of unsigned char c = 0xc0: c0
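As a side note (my addition, not from the original answers): the unary plus operator triggers the same promotion to int, so +c is a common shorthand for forcing numeric output of a byte. A minimal sketch:

#include <iomanip>
#include <iostream>

int main()
{
    const unsigned char c = 0xc0;
    // Unary + promotes the unsigned char to int, selecting the numeric overload:
    std::cout << std::hex << std::setw(2) << std::setfill('0') << +c << '\n'; // c0
}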

Why is an unsigned int right shift filled with '1'?

#include <iostream>
#include <string>
#include <bitset>

int main()
{
    char c = 128;
    unsigned int shift2 = (unsigned int)c;
    std::string shift2bin = std::bitset<8>(shift2).to_string(); // to binary
    std::cout << " shift2bin: " << shift2bin << std::endl;
    unsigned int shift3 = shift2 >> 1;
    std::string shift3bin = std::bitset<8>(shift3).to_string(); // to binary
    std::cout << " shift3bin: " << shift3bin << std::endl;
}
Output:
shift2bin: 10000000
shift3bin: 11000000
I expected the result to be as follows:
shift2bin: 10000000
shift3bin: 01000000
Question: Why does an unsigned int right shift use 1 as the filler?
As seen in this answer, unsigned right shifts always zero-fill. However, try this to print out all the bits in the unsigned int:
std::string shift2bin = std::bitset<sizeof(shift2)*8>(shift2).to_string(); //to binary
std::cout << " shift2bin: " << shift2bin << std::endl;
You will see something like (as you appear to have char signed by default):
shift2bin: 11111111111111111111111110000000
                                   ^^^^^^^^
If you do the same for shift3bin, you will see:
shift3bin: 01111111111111111111111111000000
                                   ^^^^^^^^
So, you can see how you appear to get a "1" fill.
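A minimal sketch of the fix (my addition): store the byte in an explicitly unsigned type, and no sign extension happens, so the shift zero-fills as expected.

#include <bitset>
#include <iostream>

int main()
{
    unsigned char c = 128;             // no sign extension on conversion
    unsigned int shift2 = c;           // 0x00000080
    unsigned int shift3 = shift2 >> 1; // 0x00000040
    std::cout << std::bitset<8>(shift2) << '\n'; // 10000000
    std::cout << std::bitset<8>(shift3) << '\n'; // 01000000
}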

C++ Save/Load double (to/from files)

[Rewritten for clarity.]
I need to write and read doubles to and from files, in a format that always uses the same number of characters. The format doesn't need to be human-readable: it just needs to be quick to load (with as little dynamic memory and conversion work as possible; file space is important but doesn't matter quite as much).
Is there a standard (or at least safe and reliable) way to get at the components of a double, so that I can store the sign as a '1' or '0' and the exponent and significand separately in a hex format with a constant length?
Essentially, how can I grab the specific bit components of a double? Is it even possible to do this across separate systems (assuming the same OS family, such as Windows), or is the layout of the components of a double not fixed per OS?
I am using MinGW and compiling for Windows. I'd like to use the C standard library where possible, not the C++ standard library. I'd also like to avoid other libraries (like Boost), but specific Windows functions would help a lot.
The most direct way of doing so would be to open your fstream in binary mode, and then use the write() and read() methods of fstream to read your double to/from the stream:
#include <fstream>
#include <iostream>

int main( int argc, char** argv ) {
    std::fstream fp( "foo", std::fstream::in |
                            std::fstream::out |
                            std::fstream::trunc |
                            std::fstream::binary );
    double d1, d2;
    d1 = 3.14;
    fp.write( (char*)&d1, sizeof( d1 ) );
    fp.seekg( 0, std::fstream::beg );
    fp.read( (char*)&d2, sizeof( d2 ) );
    std::cout << "d1 = " << d1 << " d2 = " << d2 << std::endl;
}
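One caveat worth adding (my note, not from the original answer): a raw binary dump like this only reads back correctly on machines with the same endianness and the same double representation. That is fine for the single-OS-family use case described in the question, but it is not a portable interchange format.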
Probably you want something like this:
#include <iostream>
#include <sstream>
#include <iomanip>
using namespace std;

template <typename T>
string convertToHex(const T &x)
{
    // Read the bytes as unsigned char: a plain char may be signed, and
    // static_cast<int> of a signed byte above 0x7f would sign-extend
    // and print e.g. ffffff80 instead of 80.
    const unsigned char *xc = (const unsigned char *)&x;
    ostringstream s;
    for (const unsigned char *c = xc; c < xc + sizeof(x); ++c)
        s << hex << setw(2) << setfill('0') << static_cast<int>(*c) << " ";
    return s.str();
}

template <typename T>
void convertFromHex(string s, T &x)
{
    char *xc = (char *)&x;
    istringstream is(s);
    for (char *c = xc; c < xc + sizeof(x); ++c)
    {
        int tmp;
        is >> hex >> tmp;
        *c = tmp;
    }
}
int main()
{
    double a = 10;
    string as = convertToHex(a);
    cout << "a: " << as << endl;
    double b;
    convertFromHex(as, b);
    cout << "b: " << b << endl;
}
Output:
a: 00 00 00 00 00 00 24 40
b: 10
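If you want the individual sign/exponent/fraction fields the question asks about, here is a minimal sketch assuming IEEE 754 binary64 (which holds on Windows/MinGW); it sticks to the C standard library as the OP prefers, and the field names are mine:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    double d = -3.14;
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits); // well-defined, unlike pointer casts
    unsigned sign     = (unsigned)(bits >> 63);           // 1 bit
    unsigned exponent = (unsigned)((bits >> 52) & 0x7FF); // 11 bits, biased by 1023
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;        // low 52 bits
    std::printf("sign=%u exponent=%03X fraction=%013llX\n",
                sign, exponent, (unsigned long long)fraction);
}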
Here is a very simple example with Boost.Serialization (http://www.boost.org/doc/libs/1_54_0/libs/serialization/doc/index.html). I am using boost::archive::text_iarchive and boost::archive::text_oarchive, but you can switch to boost::archive::binary_iarchive and boost::archive::binary_oarchive. It should work.
#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <iostream> // needed for cout
#include <sstream>
#define _USE_MATH_DEFINES
#include <cmath>
using namespace std;

int main()
{
    double a = M_PI;
    string text;
    {
        ostringstream textStream;
        boost::archive::text_oarchive oa(textStream);
        oa << a;
        text = textStream.str();
    }
    cout << "a: " << text << endl;
    double b;
    {
        istringstream textStream(text);
        boost::archive::text_iarchive ia(textStream);
        ia >> b;
    }
    cout << "b: " << b << endl;
}
Output:
a: 22 serialization::archive 9 3.1415926535897931
b: 3.14159

c++ double to string conversion shows extra floating-point digits

I have a string: (66)
Then I convert it to a double and do some math: atof(t.c_str()) / 30
Then I convert it back to a string: string s = boost::lexical_cast<string>(hizdegerd)
The problem is that when I show it on a label it becomes 2,20000001.
I've tried everything, sprintf etc.
I want to show only one digit after the point.
hizdegerd = atof(t.c_str()) / 30;
char buffer[50];
hizdegerd = sprintf(buffer, "%2.2f", hizdegerd);
if (oncekideger != hizdegerd)
{
    txtOyunHiz->SetValue(hizdegerd);
    oncekideger = hizdegerd;
}
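One bug worth pointing out in the snippet above (my observation, not from the original thread): sprintf returns the number of characters written, so hizdegerd = sprintf(...) replaces the computed value with that character count. A minimal sketch of the likely intent, reusing the OP's variable names:

#include <cstdio> // snprintf
// ...
hizdegerd = atof(t.c_str()) / 30; // keep the numeric value in hizdegerd
char buffer[50];
std::snprintf(buffer, sizeof buffer, "%.1f", hizdegerd); // one digit after the point
// display `buffer`; don't assign snprintf's return value back to hizdegerd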
I think I'd wrap the formatting up into a function template, something like this:
#include <iostream>
#include <sstream>
#include <iomanip>

template <class T>
std::string fmt(T in, int width = 0, int prec = 0) {
    std::ostringstream s;
    s << std::setw(width) << std::setprecision(prec) << in;
    return s.str();
}

int main() {
    std::string s = fmt(66.0 / 30.0, 2, 2);
    std::cout << s << "\n";
}
You can use this way of converting back to a string; then only the desired number of digits will be taken into consideration:
ostringstream a;
a.precision(x); // x significant digits in the default (general) floating-point notation
double b = 1.45612356;
a << b;
std::string s = a.str();
Since you wrote "I want to show":
#include <iostream>
#include <iomanip>

int main()
{
    std::cout << std::fixed << std::setprecision(1) << 34.2356457;
}
Output:
34.2
By the way, sprintf is vulnerable to buffer overflows and is not idiomatic C++.