I know that if an integer is modified byte by byte rather than as a whole, the result will no longer be the original value. Still, out of curiosity, I want to examine and modify the individual bytes. I'm not sure whether this is the correct way to inspect each byte of an integer through a pointer:
#include <iostream>

int main(int argc, char *argv[])
{
    using namespace std;

    int *num = new int;
    *num = 123456789;
    cout << "Num: " << *num << '\n';

    char *numchar_ptr = reinterpret_cast<char *>(num);
    for (int i = 0; i < 4; ++i)
    {
        cout << "number char: " << i << ' ' << (short) *(numchar_ptr + i) << '\n';
        *(numchar_ptr + i) = i; // overwrite byte i with the value i
    }

    cout << "New num: " << *num << '\n';
    delete num;
    return 0;
}
According to the loop, the bytes in the integer will be: 0 1 2 3
which is equal to 00000000 00000001 00000010 00000011 in binary and 66051 in decimal
But the result I got for "New num" was 50462976. Why?
Read the Wikipedia page on endianness carefully.
You need to take endianness into account. On your system, numbers are stored in little-endian representation, which means that the lowest-addressed byte is the least-significant.
Therefore, your number is:
0 * (1 << 0)
+ 1 * (1 << 8)
+ 2 * (1 << 16)
+ 3 * (1 << 24)
which is 50462976.
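As a quick check, here is a minimal sketch (assuming the same little-endian machine) that rebuilds the number from those bytes; it uses std::memcpy instead of the cast so the byte reinterpretation stays well-defined:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    unsigned char bytes[4] = {0, 1, 2, 3}; // byte 0 sits at the lowest address
    std::uint32_t value = 0;
    std::memcpy(&value, bytes, sizeof value);
    std::cout << value << '\n'; // prints 50462976 on a little-endian machine
    return 0;
}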
I'm working on a C++ application to convert binary numbers to decimal, and I currently have a working application, but now it also has to show intermediate results.
Since I know the conversion on paper, and I know the basics of C++, I didn't think this would be very difficult, but I need some help. I don't need you to write it for me; I need someone to point me in the right direction so I can solve it by myself.
The intermediate results would have to look something like this if I convert 1100 to decimal:
---------------
0 * 2⁰ = 0
0 * 2¹ = 0
1 * 2² = 4
1 * 2³ = 8
-----------------
which would end up being 12 (8 + 4 + 0 + 0 = 12)
the code so far can be found here: https://paste.ubuntu.com/p/fP2GfT5Pzq/
and below
#include <iostream>
#include <cmath>
using namespace std;

int binary_to_decimal(long long);

int main()
{
    long long number;
    cout << "Please enter the Binary you wish to convert: " << endl;
    cin >> number;
    cout << "In binary: '" << number << "' In decimal: '" << binary_to_decimal(number) << "'" << endl;
}

int binary_to_decimal(long long number)
{
    int decimal_number = 0, i = 0, remainder;
    while (number != 0)
    {
        remainder = number % 10;
        // number = number divided by ten (drop the last binary digit)
        number /= 10;
        // decimal_number = decimal_number + remainder * 2^i
        decimal_number += remainder * pow(2, i);
        ++i;
    }
    return decimal_number;
}
Hope you can help :)
If you just want to print intermediate results, then you are very close. The only hint I can give you is: you need to print inside that while loop. Here's the code; you don't have to look at it if you don't want to :)
cout << "---------------" << endl;
while (number != 0)
{
    remainder = number % 10;
    number /= 10;
    int digit = remainder * pow(2, i);
    decimal_number += digit;
    cout << remainder << " * (2^" << i << ") = " << digit << endl;
    ++i;
}
cout << "---------------" << endl;
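For the input 1100 from the question, that loop prints the intermediate results in the requested form, least significant digit first:

---------------
0 * (2^0) = 0
0 * (2^1) = 0
1 * (2^2) = 4
1 * (2^3) = 8
---------------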
This question already has answers here:
cout not printing unsigned char
(5 answers)
Closed 3 years ago.
Hello, I am working on a project: get unsigned 16-bit numbers and average them. There is no problem computing the average, but when I try to print it to the screen, it prints meaningless symbols. So I figured out that I must convert it to decimal myself. VS2015 converts it, but I want to do it myself because my code is for a microprocessor.
Here is my simulation...
#include <cstdint>
#include <cstdlib>
#include <iostream>
using namespace std;

int main(){
    uint32_t x = 0x12345678;
    unsigned char c[4];
    c[0] = x >> 24; // most significant byte
    c[1] = x >> 16;
    c[2] = x >> 8;
    c[3] = x;       // least significant byte
    cout << x << endl;
    cout << c[0] << endl;
    cout << c[1] << endl;
    cout << c[2] << endl;
    cout << c[3] << endl;
    system("pause");
    return 0;
}
Output:
305419896
4
V
x
The problem here is that the inserter operator << treats char variables as characters, not as numbers. So, if the char variable contains 65, it will not print 65 but 'A'.
You need to convert the value to an unsigned int.
So:
std::cout << static_cast<unsigned int>(c[0]) << "\n";
Then it will give you the expected output.
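For completeness, a minimal sketch applying that cast to the code from the question:

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t x = 0x12345678;
    unsigned char c[4];
    c[0] = x >> 24; // most significant byte
    c[1] = x >> 16;
    c[2] = x >> 8;
    c[3] = x;       // least significant byte

    for (unsigned char byte : c)
        std::cout << static_cast<unsigned int>(byte) << "\n"; // prints 18 52 86 120
    return 0;
}

(The unary plus trick, std::cout << +c[0], achieves the same integer promotion.)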
I have a task like this:
The user enters the numbers N1 (str1) and N2 (str2) in hexadecimal. The program must convert the numbers from hexadecimal to base 2^16, compute the sum of N1 and N2 in the base-2^16 system, and then translate the result back into hexadecimal.
I had such an idea:
first convert from hexadecimal to decimal (I can do this).
Then take each number modulo 2^16, repeating this as many times as the integer part of the base-2^16 logarithm of N1dec (dec11) (or N2dec (dec22)), and write the remainders into the corresponding arrays. This is where my problems began: my conversion from decimal to the 2^16 system does not work. I hope you can help.
#include <iostream>
#include <sstream>
#include <cmath>
using namespace std;
int main()
{
//HEX to decimal
const char* const str1 = "101D0";//7A120 = 500000; 101D0 = 66000; //1F4 = 500=dec1=N1
cout << "Hello!\nFirst number in HEX system is " << str1 << endl;
istringstream is(str1);
int dec1;
is >> hex >> dec1;
if (!is && !is.eof()) throw "dammit!";
cout << "First number in decimal system: " << dec1 << endl;
const char* const str2 = "1567";//5479=dec2=num2
cout << "Second number in HEX system is " << str2 << endl;
istringstream iss(str2);
int dec2;
iss >> hex >> dec2;
if (!iss && !iss.eof()) throw "dammit!";
cout << "Second number in decimal system: " << dec2 << endl;
//
//Decimal to 2^16 system
int dec11 = dec1;//because dec11 will be = 0
int dec22 = dec2;//because dec22 will be = 0
int k = 1 << 16;
cout << "2^16 = " << k << endl;
int intPART1 = log(dec11) / log(k);
cout << "Int part of log2^16 (" << dec11 << ") is " << intPART1 << endl << "So num1 in 2^16 system will look like ";
int *n1 = new int[intPART1 + 1];
for (int i = 0; i <= intPART1; i++)
{
if (i != 0)
{
n1[i] = dec11 % k*(1<<16-1);
dec11 = dec11 / k;
}
else
{
n1[i] = dec11 % k;
dec11 = dec11 / k;
}
}
for (int i = intPART1; i >= 0; i--)
{
cout << n1[i] << " ";
}
cout << endl;
int intPART2 = log(dec22) / log(k);
cout << "Int part of log2^16 (" << dec22 << ") is " << intPART2 << endl << "So num2 in 2^16 system will look like ";
int *n2 = new int[intPART2 + 1];
for (int i = 0; i <= intPART2; i++)
{
if (i != 0)
{
n2[i] = dec22 % k*(1 << 16 - 1);
dec22 = dec22 / k;
}
else
{
n2[i] = dec22 % k;
dec22 = dec22 / k;
}
}
for (int i = intPART2; i >= 0; i--)
{
cout << n2[i] << " ";
}
cout << endl;
delete[] n1;
delete[] n2;
return 0;
}
Since hexadecimal values are base 16 (16^1) and base 2^16 is the same as 16^4, the target base is a power of the source base. This makes the computation pretty easy and straightforward: all we have to do is some bit shifting.
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>

int hexToInt(char c)
{
    if (c >= 'a')
        return c - 'a' + 10;
    if (c >= 'A')
        return c - 'A' + 10;
    return c - '0';
}

// Converts hex to base 2^16. vector[0] holds the MSB.
std::vector<unsigned short> toBase0x10000(std::string const& hex)
{
    std::size_t bufSize = hex.size() / 4 + (hex.size() % 4 > 0);
    std::vector<unsigned short> number(bufSize);

    int shift = 0;
    int value = 0;
    std::size_t numIndex = number.size();

    for (int i = hex.size() - 1; i >= 0; i--)
    {
        value |= hexToInt(hex[i]) << shift;
        shift += 4;
        if (shift == 16)
        {
            number[--numIndex] = static_cast<unsigned short>(value);
            shift = 0;
            value = 0;
        }
    }

    if (value != 0)
        number[--numIndex] = static_cast<unsigned short>(value);

    return number;
}

std::string fromBase0x10000(std::vector<unsigned short> const& num)
{
    std::stringstream ss;
    for (std::size_t i = 0; i < num.size(); i++)
    {
        // Pad every digit except the first to four hex characters;
        // otherwise { 1, 2 } would print as "12" instead of "10002".
        if (i > 0)
            ss << std::setw(4) << std::setfill('0');
        ss << std::hex << num[i];
    }
    return ss.str();
}
toBase0x10000 returns a std::vector<unsigned short>, so each element in the vector represents one digit of your base-2^16 number (since unsigned short can hold exactly that value range).
As a side effect, this implementation supports arbitrary-precision numbers, so you are not limited by the value range of numeric types like int or long.
Here is a full example.
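For illustration, a hypothetical usage sketch building on the functions above, using the 101D0 value from the question (0x101D0 = 66000 = 1 * 2^16 + 464):

#include <iostream>

int main()
{
    std::vector<unsigned short> n1 = toBase0x10000("101D0");
    for (unsigned short digit : n1)
        std::cout << digit << ' ';                    // prints: 1 464
    std::cout << '\n' << fromBase0x10000(n1) << '\n'; // prints: 101d0
    return 0;
}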
Since this looks like a learning exercise you want to solve yourself, here are two hints.
A hex digit represents four bits, so each base-65,536 digit consists of four hex digits. You can therefore read the digits in groups of four, with no need to convert to or from decimal. The same algorithm you learned to decode four decimal digits will work for hex, except the multiplications will be even more efficient because the compiler will optimize them into left-shift instructions.
You should use the uint16_t type from <stdint.h> for this arithmetic, as it is exactly the right size and unsigned. Unsigned arithmetic overflow is defined as wrapping around, which is what you want; signed overflow is undefined behavior. (Or #include <cstdint> followed by using std::uint16_t; if you prefer.)
To add digits in any base b, take the sum of the digits modulo b, and carry 1 into the next digit whenever the raw sum reaches b. This will be even easier when b is a power of 2, because the x86 and many other CPUs have a 16-bit unsigned add instruction that does this in hardware, and on any machine that doesn't, the compiler can optimize it to the bitmask & 0xFFFFU.
In both cases, you can, if you want, write out the binary optimizations by hand using << and & rather than * and %. This might even improve the generated code, slightly, if you use signed rather than unsigned math. However, any modern compiler is smart enough to perform this kind of micro-optimization for you. You are better off not optimizing prematurely, and writing code that is easier to read and understand.
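If it helps, here is a minimal sketch of that addition step, assuming the digits are stored least-significant first in a std::vector<std::uint16_t> (the function name and layout are illustrative, not prescribed by the task):

#include <cstddef>
#include <cstdint>
#include <vector>

// Adds two base-2^16 numbers whose digits are stored least-significant first.
std::vector<std::uint16_t> addBase0x10000(std::vector<std::uint16_t> const& a,
                                          std::vector<std::uint16_t> const& b)
{
    std::vector<std::uint16_t> sum;
    std::uint32_t carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size(); i++)
    {
        std::uint32_t s = carry;
        if (i < a.size()) s += a[i];
        if (i < b.size()) s += b[i];
        sum.push_back(static_cast<std::uint16_t>(s & 0xFFFFu)); // digit = s mod 2^16
        carry = s >> 16;                                        // carry = s div 2^16
    }
    if (carry != 0)
        sum.push_back(static_cast<std::uint16_t>(carry));
    return sum;
}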
I have this code
.....
const EVP_CIPHER * cipher = EVP_des_ecb();
uint8_t ot_byte,st_byte;
EVP_CIPHER_CTX ctx;
int trash;
EVP_EncryptInit(&ctx,cipher, key, iv);
cout << size - offset << endl;
int i = 0;
for (; i < size - offset; i++) {
    check = read(input_fd, &ot_byte, 1);
    cout << (i < size - offset) << " " << i << endl;
    EVP_EncryptUpdate(&ctx, &st_byte, &trash, &ot_byte, 1);
    check = write(output_fd, &st_byte, 1);
}
cout << (i < size - offset) << " " << i << endl;
close(output_fd);
close(input_fd);
the output is
702000
1 0
1 1
1 2
1 3
1 4
1 5
1 6
1 7
0 5019693
When I comment out the EVP_EncryptUpdate call, the loop goes through all 702000 iterations. Where is the mistake? Is it possible that EVP somehow writes past its buffer and corrupts stack data?
Yes: a uint8_t output buffer is too small. EVP_EncryptUpdate may write up to inl + cipher_block_size - 1 bytes into the output buffer, and the DES block size is 8 bytes, so it writes past st_byte and corrupts the surrounding stack, including your loop counter.
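A minimal sketch of a fix, keeping the question's byte-at-a-time loop; st_buf and out_len are illustrative names, and the key point is giving EVP_EncryptUpdate an output buffer of at least inl + block size bytes:

uint8_t ot_byte;
unsigned char st_buf[1 + EVP_MAX_BLOCK_LENGTH]; // room for inl + cipher_block_size - 1 bytes
int out_len = 0;
for (int i = 0; i < size - offset; i++) {
    check = read(input_fd, &ot_byte, 1);
    EVP_EncryptUpdate(&ctx, st_buf, &out_len, &ot_byte, 1);
    if (out_len > 0)                    // in ECB mode a full 8-byte block is emitted every 8th byte
        check = write(output_fd, st_buf, out_len);
}
// Remember to call EVP_EncryptFinal at the end for the final (padded) block.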
I am currently converting decimal to binary, making sure it is 8 bits. All bit operations work except the ~ (NOT) operations; they come out as a huge integer value. I am not sure why, since the other bit operations work. Here is my code (the commented-out lines are what is not working):
Edit: if I want to get 8-bit binary strings, what do I do? Use unsigned chars? If I change all the unsigned ints to unsigned chars, then my BinaryToDecimal function produces an incorrect binary conversion.
#include <iostream>
#include <string>
using namespace std;

string BinaryToDecimal(unsigned int dec)
{
    string binary = "";
    float remainder = 0.0f;
    while (dec != 0)
    {
        remainder = dec % 2;
        dec /= 2;
        if (remainder == 0)
            binary.append("0");
        else
            binary.append("1");
    }
    // Reverse binary string
    string ret = string(binary.rbegin(), binary.rend());
    return ret;
}

int main()
{
    unsigned int a = 0;
    unsigned int b = 0;
    cout << "Enter a number to convert to binary: ";
    cin >> a;
    cout << "Enter a number to convert to binary: ";
    cin >> b;
    cout << "A = " << BinaryToDecimal(a) << endl;
    cout << "B = " << BinaryToDecimal(b) << endl;
    unsigned int c = a & b;
    unsigned int d = a | b;
    //unsigned int e = ~a;
    //unsigned int f = ~b;
    unsigned int g = a ^ b;
    unsigned int h = a << 2;
    unsigned int i = b >> 3;
    cout << "A & B = " << BinaryToDecimal(c) << endl;
    cout << "A | B = " << BinaryToDecimal(d) << endl;
    //cout << "~A = " << BinaryToDecimal(e) << endl;
    //cout << "~B = " << BinaryToDecimal(f) << endl;
    cout << "A ^ B = " << BinaryToDecimal(g) << endl;
    cout << "A << 2 = " << BinaryToDecimal(h) << endl;
    cout << "B >> 3 = " << BinaryToDecimal(i) << endl;
}
If you perform a bitwise NOT on a small unsigned integer, you will get a large number as a result, since most of the high-order bits will be set to 1 (the inverse of what they were in the operand).
In this case you're doing ~ 0 which will certainly give you a large number, in fact the largest possible unsigned int, since all bits will be set to 1.
(What result were you expecting?)
You are using an unsigned int for the operations, so inverting a small number produces a large one: all the leading bits down from the MSB become 1. If you only want an 8-bit representation, you should use unsigned char for its storage.
But you cannot simply change a or b to unsigned char; otherwise cin >> a will store the character's ASCII code in a, not the number. For example, if your input is 5, it stores 0x35 ('5'), not the number 5.
If you don't want to change the unsigned ints in your code, you can make some minor enhancements:
string BinaryToDecimal(unsigned int dec)
{
    string binary = "";
    float remainder = 0.0f;
    dec &= 0xff; // only the 8 bits you care about
    while (dec != 0)
    {
        ....
But while( dec != 0 ) is buggy: if the value is already 0, the function returns an empty string instead of "00000000". Instead, you should use a counter and loop over exactly 8 bits:
for (int i = 0; i < 8; i++) {
    if ((dec & 1) != 0)
        binary.append("1");
    else
        binary.append("0");
    dec >>= 1;
}
Also, using a bitwise AND to test whether a bit is 0 or 1, together with a shift operation, is better than using the / and % operators.
Finally, for 8 bit 5 (0000_0101), its inversion is 250 (1111_1010), not 1010.
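A minimal sketch of that masked inversion (e is the variable commented out in the question):

#include <iostream>

int main()
{
    unsigned int a = 5;
    unsigned int e = ~a & 0xffu; // invert, then keep only the low 8 bits
    std::cout << e << '\n';      // prints 250, i.e. 1111_1010 in binary
    return 0;
}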