unsigned int to significant number [duplicate] - c++

This question already has answers here:
cout not printing unsigned char
(5 answers)
Closed 3 years ago.
Hello, I am working on a project where I receive unsigned 16-bit numbers and average them. Computing the average is no problem, but when I try to print it to the screen, it prints meaningless symbols. So I figured out that I must convert it to decimal digits myself. VS2015 converts it for me, but I want to do the conversion myself because my code is for a microprocessor.
Here is my simulation...
#include <cstdint>
#include <iostream>
using namespace std;

int main(){
uint32_t x = 0x12345678;
unsigned char c[4];
c[0] = x >> 24; // most significant byte
c[1] = x >> 16;
c[2] = x >> 8;
c[3] = x; // least significant byte
cout << x << endl;
cout << c[0] << endl;
cout << c[1] << endl;
cout << c[2] << endl;
cout << c[3] << endl;
system("pause");
return 0;
}
Output:
305419896
(unprintable control character, 0x12)
4
V
x

The problem here is that the inserter operator << treats char variables as characters, not as numbers. So if a char variable contains 65, it will print 'A', not 65.
You need to convert the value to an unsigned int.
So:
std::cout << static_cast<unsigned int>(c[0]) << "\n";
Then it will give you the expected output.
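A minimal sketch of the corrected program, applying that cast to each byte:
#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t x = 0x12345678;
    unsigned char c[4];
    c[0] = x >> 24; // most significant byte
    c[1] = x >> 16;
    c[2] = x >> 8;
    c[3] = x;       // least significant byte
    for (unsigned char byte : c)
        std::cout << static_cast<unsigned int>(byte) << '\n'; // prints 18 52 86 120
    return 0;
}
Unary plus (std::cout << +byte) achieves the same promotion more tersely.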

Related

How to switch from hexadecimal to 2^16 system in c++

I have a task like this:
The user enters the numbers N1 (str1) and N2 (str2) in hexadecimal. The program must convert the numbers from hexadecimal to a base-2^16 system, compute the sum of N1 and N2 in the 2^16 system, then translate the result back into hexadecimal.
I had such an idea:
first convert from hexadecimal to decimal (I can do this).
Then repeatedly take each number modulo 2^16, as many times as the integer part of the base-2^16 logarithm of N1dec (dec11) or N2dec (dec22), and write the remainders into the corresponding arrays. This is where my problems began: my conversion from decimal to the 2^16 system does not work. Hope you can help.
#include <cmath>
#include <iostream>
#include <sstream>
using namespace std;
int main()
{
//HEX to decimal
const char* const str1 = "101D0"; // 0x101D0 = 66000 = dec1 = N1 (other tests: 0x7A120 = 500000, 0x1F4 = 500)
cout << "Hello!\nFirst number in HEX system is " << str1 << endl;
istringstream is(str1);
int dec1;
is >> hex >> dec1;
if (!is && !is.eof()) throw "dammit!";
cout << "First number in decimal system: " << dec1 << endl;
const char* const str2 = "1567"; // 0x1567 = 5479 = dec2 = N2
cout << "Second number in HEX system is " << str2 << endl;
istringstream iss(str2);
int dec2;
iss >> hex >> dec2;
if (!iss && !iss.eof()) throw "dammit!";
cout << "Second number in decimal system: " << dec2 << endl;
//
//Decimal to 2^16 system
int dec11 = dec1; // work on a copy, because the digit loop below reduces it to 0
int dec22 = dec2; // likewise
int k = 1 << 16;
cout << "2^16 = " << k << endl;
int intPART1 = log(dec11) / log(k);
cout << "Int part of log2^16 (" << dec11 << ") is " << intPART1 << endl << "So num1 in 2^16 system will look like ";
int *n1 = new int[intPART1 + 1];
for (int i = 0; i <= intPART1; i++)
{
if (i != 0)
{
n1[i] = dec11 % k*(1<<16-1);
dec11 = dec11 / k;
}
else
{
n1[i] = dec11 % k;
dec11 = dec11 / k;
}
}
for (int i = intPART1; i >= 0; i--)
{
cout << n1[i] << " ";
}
cout << endl;
int intPART2 = log(dec22) / log(k);
cout << "Int part of log2^16 (" << dec22 << ") is " << intPART2 << endl << "So num2 in 2^16 system will look like ";
int *n2 = new int[intPART2 + 1];
for (int i = 0; i <= intPART2; i++)
{
if (i != 0)
{
n2[i] = dec22 % k*(1 << 16 - 1);
dec22 = dec22 / k;
}
else
{
n2[i] = dec22 % k;
dec22 = dec22 / k;
}
}
for (int i = intPART2; i >= 0; i--)
{
cout << n2[i] << " ";
}
cout << endl;
return 0;
}
Since hexadecimal is base 16 (16^1) and base 2^16 can be rewritten as 16^4, your target base is a power of your source base: each base-2^16 digit corresponds to exactly four hex digits. This makes the computation pretty easy and straightforward. All we have to do is some bit shifting.
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Converts a single hex digit to its numeric value (assumes valid input).
int hexToInt(char c)
{
if (c >= 'a')
return c - 'a' + 10;
if (c >= 'A')
return c - 'A' + 10;
return c - '0';
}
// Converts hex to base 2^16. vector[0] holds the MSB.
std::vector<unsigned short> toBase0x10000(std::string const& hex)
{
std::size_t bufSize = hex.size() / 4 + (hex.size() % 4 > 0);
std::vector<unsigned short> number(bufSize);
int shift = 0;
int value = 0;
std::size_t numIndex = number.size();
for (int i = hex.size() - 1; i >= 0; i--)
{
value |= hexToInt(hex[i]) << shift;
shift += 4;
if (shift == 16)
{
number[--numIndex] = static_cast<unsigned short>(value);
shift = 0;
value = 0;
}
}
if (value != 0)
number[--numIndex] = static_cast<unsigned short>(value);
return number;
}
std::string fromBase0x10000(std::vector<unsigned short> const& num)
{
std::stringstream ss;
bool first = true;
for (auto&& digit : num)
{
// Pad every digit after the most significant one to four hex characters;
// otherwise interior zeros would be dropped.
if (!first)
ss << std::setw(4) << std::setfill('0');
ss << std::hex << digit;
first = false;
}
return ss.str();
}
toBase0x10000 returns a std::vector<unsigned short>, so each element of the vector represents one digit of your base-2^16 number (since unsigned short can hold exactly that value range).
As a side effect, this implementation supports arbitrary-precision numbers, so you are not limited by the value range of numeric types like int or long.
Here is a full example.
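As a rough sketch of how the two helpers combine (the input string here is mine; the linked example may differ):
int main()
{
    // 0x101D0 is 66000; its base-2^16 digits are 1 and 0x01D0 (464).
    auto digits = toBase0x10000("101D0");
    for (unsigned short d : digits)
        std::cout << d << ' ';      // prints: 1 464
    std::cout << '\n' << fromBase0x10000(digits) << '\n'; // prints: 101d0
    return 0;
}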
Since this looks like a learning exercise you want to solve yourself, here are two hints.
A hex digit represents four bits, so each base-65,536 digit consists of four hex digits. You can therefore read the digits in groups of four, with no need to convert to or from decimal. The same algorithm you learned to decode four decimal digits will work for hex, except the multiplications will be even more efficient because the compiler will optimize them into left-shift instructions.
You should use the uint16_t type from <stdint.h> for this arithmetic, as it is exactly the right size and unsigned. Unsigned arithmetic overflow is defined as wrapping around, which is what you want; signed overflow is undefined behavior. (Or #include <cstdint> followed by using std::uint16_t; if you prefer.)
To add digits in any base b, take the sum of the digits modulo b and carry into the next digit when the sum overflows. This is even easier when b is a power of 2, because the x86 and many other CPUs have a 16-bit unsigned add instruction that does this in hardware, and on any machine that doesn't, the compiler can optimize it to the bitmask & 0xFFFFU.
In both cases, you can, if you want, write out the binary optimizations by hand using << and & rather than * and %. This might even improve the generated code, slightly, if you use signed rather than unsigned math. However, any modern compiler is smart enough to perform this kind of micro-optimization for you. You are better off not optimizing prematurely, and writing code that is easier to read and understand.
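As an illustration of these hints, here is a sketch only; the function name and the digit order (least significant first) are my choices, not part of the exercise:
#include <cstddef>
#include <cstdint>
#include <vector>

// Adds two base-2^16 numbers whose digits are stored least significant first.
std::vector<std::uint16_t> addBase0x10000(std::vector<std::uint16_t> const& a,
                                          std::vector<std::uint16_t> const& b)
{
    std::vector<std::uint16_t> sum;
    std::uint32_t carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size(); ++i)
    {
        std::uint32_t s = carry;
        if (i < a.size()) s += a[i];
        if (i < b.size()) s += b[i];
        sum.push_back(static_cast<std::uint16_t>(s & 0xFFFFu)); // digit sum modulo 2^16
        carry = s >> 16;                                        // carry into the next digit
    }
    if (carry != 0)
        sum.push_back(static_cast<std::uint16_t>(carry));
    return sum;
}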

C++ Printing an integer derived from a string's character

I'd like to print the very first digit of the number. For some reason, it's printing as 49 instead of 1...
int n = 111111251;
string s = to_string(n);
int num = s[0];
cout << num << endl;
It's printing 49 because that is the ASCII value of '1'.
If you want to print the character, just print s[0] directly, or convert it to its numeric value properly. Consider the following code:
#include <iostream>
#include <string>
using namespace std;

int main()
{
int n = 111111251;
string s = to_string(n);
cout << s[0] << endl;
int num = s[0];
cout << num << endl;
int num2 = s[0] - '0';
cout << num2 << endl;
return 0;
}
This prints out:
1
49
1
While you did the right thing converting the integer to a string, you then changed the type of the first element from char to int. That affects the output: the character representation is still '1', but its numerical value is 49, its ASCII code (have a look at http://en.wikipedia.org/wiki/ASCII).
You probably want:
char character = s[0];
cout << character << endl;
or:
int num = s[0];
cout << char(num) << endl;
49 is the ASCII value of '1'.
Hope I helped,

Bit Operations, mainly ~

I am currently converting decimal to binary, making sure the result is 8 bits. All bit operations work except ~ (NOT); those come out as a huge integer value, and I am not sure why, since the other bit operations work. Here is my code (the commented-out lines are what is not working):
Edit: If I want to get 8-bit binary strings, what do I do? Use unsigned chars? If I change all the unsigned ints to unsigned chars, then my BinaryToDecimal function produces an incorrect binary conversion.
#include <iostream>
#include <string>
using namespace std;
// Note: despite its name, this converts a decimal value to a binary string.
string BinaryToDecimal(unsigned int dec)
{
string binary = "";
unsigned int remainder = 0;
while( dec != 0 )
{
remainder = dec % 2;
dec /= 2;
if( remainder == 0 )
binary.append("0");
else
binary.append("1");
}
// Reverse binary string
string ret = string(binary.rbegin(), binary.rend());
return ret;
}
int main()
{
unsigned int a = 0;
unsigned int b = 0;
cout << "Enter a number to convert to binary: ";
cin >> a;
cout << "Enter a number to convert to binary: ";
cin >> b;
cout << "A = " << BinaryToDecimal(a) << endl;
cout << "B = " << BinaryToDecimal(b) << endl;
unsigned int c = a & b;
unsigned int d = a | b;
//unsigned int e = ~a;
//unsigned int f = ~b;
unsigned int g = a ^ b;
unsigned int h = a << 2;
unsigned int i = b >> 3;
cout << "A & B = " << BinaryToDecimal(c) << endl;
cout << "A | B = " << BinaryToDecimal(d) << endl;
//cout << "~A = " << BinaryToDecimal(e) << endl;
//cout << "~B = " << BinaryToDecimal(f) << endl;
cout << "A ^ B = " << BinaryToDecimal(g) << endl;
cout << "A << 2 = " << BinaryToDecimal(h) << endl;
cout << "B >> 3 = " << BinaryToDecimal(i) << endl;
}
If you perform a bitwise NOT on a small unsigned integer, you will get a large number as a result, since the high-order bits will be set to 1 (the inverse of what they were in the operand).
In this case you're doing ~0, which will certainly give you a large number; in fact the largest possible unsigned int, since all bits will be set to 1.
(What result were you expecting?)
You are using an unsigned int for the operations, so the inversion of a small number becomes a large number because all the leading bits, from the MSB down, become 1. If you only want an 8-bit representation, you should use unsigned char for storage.
But you cannot simply change a or b to unsigned char; otherwise cin >> a will store the character's ASCII code in a, not a number. For example, if your input is 5, it stores 0x35 ('5'), not the number 5.
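To see the difference concretely (a hypothetical snippet, not from the question's code):
#include <iostream>

int main()
{
    unsigned char uc;
    std::cin >> uc; // typing 5 stores the character '5' (0x35, i.e. 53)
    unsigned int ui;
    std::cin >> ui; // typing 5 stores the number 5
    std::cout << static_cast<unsigned int>(uc) << ' ' << ui << '\n'; // prints: 53 5
    return 0;
}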
If you don't want to change the unsigned int in your code, you can make some minor enhancements:
string BinaryToDecimal(unsigned int dec)
{
string binary = "";
unsigned int remainder = 0;
dec &= 0xff; // only 8 bits you care about
while( dec != 0 )
{
....
But you are using while (dec != 0), which is buggy: if the value is already 0, the function returns an empty string, not "00000000". Instead, you should use a counter and loop exactly 8 times:
for (int i = 0; i < 8; i++) {
if ((dec & 1) != 0) // test the current lowest bit
binary.append("1");
else
binary.append("0");
dec >>= 1; // bits are appended LSB first; the existing reversal fixes the order
}
Also, using a bitwise AND to test whether a bit is 0 or 1, together with a shift operation, is better than the / and % operators.
Finally, for the 8-bit value 5 (0000_0101), its inversion is 250 (1111_1010), not 1010.
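A minimal sketch of that masking approach (variable names are mine):
#include <iostream>

int main()
{
    unsigned int a = 5;          // 0000_0101 in 8 bits
    unsigned int e = ~a & 0xFFu; // invert, then keep only the low 8 bits
    std::cout << e << '\n';      // prints 250, i.e. 1111_1010
    return 0;
}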

How to print unsigned char[] as HEX in C++?

I would like to print the following hashed data. How should I do it?
unsigned char hashedChars[32];
SHA256((const unsigned char*)data.c_str(),
data.length(),
hashedChars);
printf("hashedChars: %X\n", hashedChars); // doesn't seem to work??
The %X format specifier expects a single integer value, but you're passing an array, which decays to a pointer. What you need to do is print the char values individually, as hex values.
printf("hashedChars: ");
for (int i = 0; i < 32; i++) {
printf("%x", hashedChars[i]);
}
printf("\n");
Since you are using C++, though, you should consider using cout instead of printf (it's more idiomatic for C++):
cout << "hashedChars: ";
for (int i = 0; i < 32; i++) {
cout << hex << hashedChars[i];
}
cout << endl;
In C++
#include <iostream>
#include <iomanip>
unsigned char buf0[] = {4, 85, 250, 206};
for (int i = 0; i < sizeof buf0 / sizeof buf0[0]; i++) {
std::cout << std::setfill('0')
<< std::setw(2)
<< std::uppercase
<< std::hex << (0xFF & buf0[i]) << " ";
}
As mentioned in the comments, you need to cast your unsigned char so it is treated as an integral type (int, or unsigned int since you work with positive values), and you need to add leading zeros where the value would otherwise print as one character instead of two:
cout << "hashedChars: ";
for (int i = 0; i < 32; i++) {
cout << std::hex << std::setfill('0')
<< std::setw(2) << static_cast<int>(hashedChars[i]);
}
cout << endl;

How to access each byte in an integer pointer?

I know that if the integer is dissected rather than being computed as a whole, the total of the individual bytes will yield an incorrect result. However, out of curiosity, I want to examine and modify each byte individually. I'm not sure if this is the correct way to inspect each byte through an integer pointer:
#include <iostream>
int main(int argc, char *argv[])
{
using namespace std;
int *num = new int;
*num = 123456789;
cout << "Num: " << *num << '\n';
char* numchar_ptr = reinterpret_cast<char*> (num);
for (int i = 0; i < 4; ++i)
{
cout << "number char: " << i << ' ' << (short) *(numchar_ptr+i) << '\n';
*(numchar_ptr + i) = i;
}
cout << "New num: " << *num << '\n';
delete num;
return 0;
}
According to the loop, the bytes of the integer will be: 0 1 2 3,
which equals 00000000 00000001 00000010 00000011 in binary, or 66051 in decimal.
But I got the result "New num" is 50462976. Why?
Read the Wikipedia page on endianness carefully.
You need to take endianness into account. On your system, numbers are stored in little-endian representation, which means that the lowest-addressed byte is the least-significant.
Therefore, your number is:
0 * (1 << 0)
+ 1 * (1 << 8)
+ 2 * (1 << 16)
+ 3 * (1 << 24)
which is 50462976.
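A short sketch that recomputes this little-endian interpretation directly (assuming a 32-bit int, as in the question):
#include <cstdint>
#include <iostream>

int main()
{
    // Bytes 0, 1, 2, 3 from lowest to highest address, little-endian.
    std::uint32_t value = 0;
    for (std::uint32_t i = 0; i < 4; ++i)
        value |= i << (8 * i); // byte i becomes the i-th least significant byte
    std::cout << value << '\n'; // prints 50462976
    return 0;
}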