C++ Math Weird After 65536

This is my first question asked, so I am not sure exactly what to say. Basically, I wrote a program to find the diagonal of a rectangular prism, with the inputs for length, width, and height being whole numbers ranging from 1 to 100,000. (The result is only printed to the console if it is a whole number.) Everything seems to work until it gets to the number 65536, after which the next output is 0.
I am still new to programming, and if I missed anything, feel free to ask, thank you all in advance!
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <math.h>
#include <cmath>
int l = 1;
int w = 1;
int h = 1;
double temp1;
double temp2;
double Hypo1;
double temp3;
double Hypo2;
double temp4;
int main(){
    while(h < 100000){
        //Math to find diagonal of rectangular prism.
        temp1 = l * l;
        temp2 = w * w;
        Hypo1 = temp1 + temp2;
        temp3 = h * h;
        temp4 = Hypo1 + temp3;
        Hypo2 = sqrt(temp4);
        //Output if answer is a whole number.
        if(abs(floor(Hypo2)) == Hypo2){
            std::cout << "<Length = "; std::cout << l;
            std::cout << " | Width = "; std::cout << w;
            std::cout << " | Height = "; std::cout << h;
            std::cout << ">";
            std::cout << " Total:"; std::cout << Hypo2 << std::endl;
        }
        //Add one to each input.
        if(l == w && l == h){
            l++;
        }
        else if(w < l && w == h){
            w++;
        }
        else if(h < l && h < w){
            h++;
        }
    }
}

Welcome to the wonders of Overflow.
So, here's what's happening:
You're using int, which on your platform stores values in a 4-byte (32-bit) variable. When you multiply two numbers stored in X bits, you may need up to 2*X bits to store the result.
In this case, 65536 is, in binary, 0000 0000 0000 0001 0000 0000 0000 0000 (in hex, 0x0001 0000). When you multiply 65536 by itself, the result is 1 0000 0000 0000 0000 0000 0000 0000 0000 (in hex, 0x1 0000 0000). The problem is that this value needs 33 bits to be stored correctly. As the variable only has 32, it keeps the 32 least significant bits and discards the most significant bit. As such, the stored value is 0. The same thing happens with larger values.
To correct this, replace int with long long or, even better, unsigned long long.
As personal advice, get used to uint32_t and the other fixed-width standard types. They will come in handy. To use them, #include <cstdint>. In this case, you would use uint64_t to store unsigned integers in a 64-bit variable.
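To see the difference, here is a minimal sketch (the variable names are mine, and the output of the overflowing multiply is what a typical platform produces; signed overflow is formally undefined behavior):
#include <cstdint>
#include <iostream>
int main()
{
    int h = 65536;
    int overflowed = h * h;                        // 32-bit multiply; wraps to 0 on typical platforms
    std::uint64_t widened = std::uint64_t(h) * h;  // promote one factor first, multiply in 64 bits
    std::cout << overflowed << std::endl;          // 0
    std::cout << widened << std::endl;             // 4294967296
}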

You declared h as an int, so the result of h*h will also be an int. The conversion to double happens after the calculation is already done.
If you take a look at INT_MAX, it's probably 2,147,483,647 on your platform.
So if you look at 65536 * 65536, it's 4,294,967,296, well outside that value range.
If you convert one of the factors to a double value first, you might have more luck: temp3 = double(h) * h
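For example, a sketch of the loop body from the question with each square computed in double (a double can represent every such product exactly for inputs up to 100,000):
temp1 = double(l) * l; // promote before multiplying, so the product is computed in double
temp2 = double(w) * w;
temp3 = double(h) * h;
temp4 = temp1 + temp2 + temp3;
Hypo2 = sqrt(temp4);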

Related

How to read 8-bit RAW data from a file

I have 8-bit raw data representing a grayscale image (int values 0-255) in a file looking like this:
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 001c 354d 6576 8797
9fa4 a7a4 864b 252e 5973 7673 7b86 7e6d
6164 6c7a 8b98 a2ac b8bd ae96 857f 6d40
1d14 160b 0000 0000 0000 0000 0000 0000
and I need to read the values and print them as ints (0-255).
I tried this, but the result looks like this: 0020 0a00 2000 2000 2000 2000 2000 2000
I don't know what is wrong. Is opening with fopen as a binary file OK?
FILE * pFile;
pFile = fopen(inFileName.c_str(), "rb");
if (pFile==NULL){
    cerr << "erro" << endl;
}
uint8_t bufferImmagine[height*width];
fread(bufferImmagine,1,height*width,pFile);
fclose (pFile);
for (int i = 0; i < height*width; ++i)
{
    cout << bufferImmagine[i] << " ";
}
So bufferImmagine is an array of type uint8_t, and on many platforms uint8_t is an alias for unsigned char. When you stream unsigned char into std::cout, the stream treats it like a character, rather than a number.
If you want to print out each byte as a number 0-255, convert each byte to a wider integral type, like unsigned:
std::cout << static_cast<unsigned int>(bufferImmagine[i]) << " ";
I see a few potential problems with the code:
1) As mentioned in the comments, uint8_t bufferImmagine[height*width]; is problematic, in that C++ doesn't support variable-length arrays. Unless height and width can be made into compile-time constants, it would be better to use a std::vector instead (or, if you are allergic to data structures, you could allocate an array with the new operator, but then you risk leaking memory if you aren't very careful). See the sketch at the end of this answer.
2) Depending on the values of width and height, the size of bufferImmagine could be quite large; possibly larger than the amount of stack space available. Another reason to use std::vector or allocate on the heap instead.
3) You never check the return value of the fread() call to see whether it read all of the data. It might read fewer bytes than you asked for (most likely because the file you're reading isn't long enough), but without checking the return value you won't know anything has gone wrong, so you'll be very confused if/when an error occurs.
4) As Jack C. mentioned in his answer, cout may print uint8_t's as ASCII characters rather than integers, which isn't what you want here.
5) Not really a problem, but if you'd like the rows of your output to correspond to the rows of pixels in your file, you'll want a line break at the end of each row. E.g.:
int i = 0;
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        const unsigned int iVal = bufferImmagine[i];
        cout << iVal << " ";
        i++;
    }
    cout << endl;
}
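Putting points 1) through 5) together, a minimal sketch (the dimensions and file name here are illustrative, not from the question):
#include <cstdint>
#include <cstdio>
#include <iostream>
#include <string>
#include <vector>
int main()
{
    const std::size_t height = 32, width = 32;      // hypothetical dimensions
    const std::string inFileName = "image.raw";     // hypothetical file name
    FILE* pFile = fopen(inFileName.c_str(), "rb");
    if (pFile == NULL) {
        std::cerr << "error opening file" << std::endl;
        return 1;
    }
    std::vector<uint8_t> bufferImmagine(height * width);  // heap allocation, no VLA
    const std::size_t nRead = fread(bufferImmagine.data(), 1, bufferImmagine.size(), pFile);
    fclose(pFile);
    if (nRead != bufferImmagine.size()) {                 // check for a short read
        std::cerr << "short read: only " << nRead << " bytes" << std::endl;
        return 1;
    }
    std::size_t i = 0;
    for (std::size_t y = 0; y < height; y++) {
        for (std::size_t x = 0; x < width; x++) {
            std::cout << static_cast<unsigned int>(bufferImmagine[i]) << " ";  // print as a number
            i++;
        }
        std::cout << std::endl;                           // one output line per pixel row
    }
    return 0;
}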

Wrong result with bitwise inclusive OR

I can't figure out why bitwise inclusive OR returns the wrong result.
char arr[] = { 0x0a, 0xc0 };
uint16_t n{};
n = arr[0]; // I get 0x000a here.
n = n << 8; // Shift to the left and get 0x0a00 here.
n = n | arr[1]; // But now the n value is 0xffc0 instead of 0x0ac0.
What is the mistake in this example? Console app, MVS Community 2017.
The unintended 0xff is caused by sign bit extension of 0xc0.
0xc0 = 0b11000000
Hence, the uppermost bit is set, and that is the sign bit of a char (as signed char).
Please note that all arithmetic and bitwise operations in C++ work with at least int (or unsigned int). Smaller types are promoted before the operation and truncated afterwards.
Please note also that char may be signed or unsigned. That's implementation-dependent, and it's obviously signed in the OP's case. To prevent the unintended sign extension, the argument has to be made unsigned (early enough).
Demonstration:
#include <iostream>
#include <cstdint> // for uint16_t
int main()
{
    char arr[] = { '\x0a', '\xc0' };
    uint16_t n{};
    n = arr[0]; // I get 0x000a here.
    n = n << 8; // Shift to the left and get 0x0a00 here.
    n = n | arr[1]; // But now the n value is 0xffc0 instead of 0x0ac0.
    std::cout << std::hex << "n (wrong): " << n << std::endl;
    n = arr[0]; // I get 0x000a here.
    n = n << 8; // Shift to the left and get 0x0a00 here.
    n = n | (unsigned char)arr[1]; // (unsigned char) prevents sign extension
    std::cout << std::hex << "n (right): " << n << std::endl;
    return 0;
}
Session:
g++ -std=c++11 -O2 -Wall -pthread main.cpp && ./a.out
n (wrong): ffc0
n (right): ac0
Live demo on coliru
Note:
I had to change char arr[] = { 0x0a, 0xc0 }; to char arr[] = { '\x0a', '\xc0' }; to get around serious compiler complaints. I guess these complaints were strongly related to this issue.
I got it to work correctly by doing:
int arr[] = { 0x0a, 0xc0 };
int n{};
n = arr[0]; // I get 0x000a here.
n = n << 8; // Shift to the left and get 0x0a00 here.
n = n | arr[1];
std::cout << n << std::endl;
There was some unintended sign extension if you leave the arr array as char.
You have fallen victim to signed integer promotion.
When 0xc0 is assigned to the second element (a signed char by default with MVS), it is represented as follows:
arr[1] = 1100 0000, or -64 in decimal
Before the OR, arr[1] is promoted to int. Sign extension preserves its value of -64, so the upper bits are filled with ones:
arr[1] = 1111 1111 1100 0000 (shown as 16 bits)
due to the 2's complement representation of integers.
Therefore:
n          = 0000 1010 0000 0000
arr[1]     = 1111 1111 1100 0000 (after being promoted)
n | arr[1] = 1111 1111 1100 0000 = 0xffc0
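If you cannot change the array type, masking the promoted value down to its low byte avoids the problem just as well as the (unsigned char) cast shown in the first answer (a sketch using the variables from the question):
n = n | (arr[1] & 0xFF); // the mask clears the sign-extended upper bits before the OR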

c++: How to flip the binary values of each bit in int

Suppose I have an integer int a
In C++, this int occupies 4 bytes (32 bits) of memory, and every bit holds either a 1 or a 0. I wish to flip the value of each bit: wherever there is a 1, convert it to 0, and wherever there is a 0, convert it to 1.
Is there an easy way to go about this?
Edit: I also want to play with boolean algebra, that is, to execute the other basic boolean operations as well.
You're looking for the bitwise NOT operator (~).
So
int a = 0x04;
int b = ~a;
the value of b is 1111 1111 1111 1011 while the value of a is 0000 0000 0000 0100 (showing only the low 16 bits of the 32-bit values).
Wikipedia and the GNU C documentation have plenty of information on these bitwise operators.
Here is an example of bitwise NOT operator:
#include <iostream>
int main()
{
    int a = 0;
    int x = ~a;           // all bits set, printed as signed
    unsigned int y = ~a;  // all bits set, printed as unsigned
    std::cout << x << '\n';
    std::cout << y << '\n';
}
Output:
-1
4294967295
For more information about bitwise operators, try here
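Regarding the edit in the question: the other basic bitwise operators work the same way. A minimal sketch (the 0b literals require C++14):
#include <iostream>
int main()
{
    unsigned int a = 0b1100, b = 0b1010;
    std::cout << (a & b) << '\n';   // AND:  0b1000 = 8
    std::cout << (a | b) << '\n';   // OR:   0b1110 = 14
    std::cout << (a ^ b) << '\n';   // XOR:  0b0110 = 6
    std::cout << (~a) << '\n';      // NOT:  flips all 32 bits -> 4294967283
    std::cout << (a << 1) << '\n';  // left shift:  0b11000 = 24
    std::cout << (a >> 2) << '\n';  // right shift: 0b11 = 3
}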

retrieve last 6 bits from an integer

I need to fetch the last 6 bits of an integer or uint32. For example, if I have a value of 183, I need the last six bits, which are 110111, i.e. 55.
I have written a small piece of code, but it's not behaving as expected. Could you guys please point out where I am making a mistake?
int compress8bitTolessBit( int value_to_compress, int no_of_bits_to_compress )
{
    int ret = 0;
    while(no_of_bits_to_compress--)
    {
        std::cout << " the value of bits "<< no_of_bits_to_compress << std::endl;
        ret >>= 1;
        ret |= ( value_to_compress%2 );
        value_to_compress /= 2;
    }
    return ret;
}
int _tmain(int argc, _TCHAR* argv[])
{
    int val = compress8bitTolessBit( 183, 5 );
    std::cout <<" the value is "<< val << std::endl;
    system("pause>nul");
    return 0;
}
You have entered the realm of binary arithmetic. C++ has built-in operators for this kind of thing. The act of "getting certain bits" of an integer is done with an "AND" binary operator.
    0101 0101
AND 0000 1111
    ---------
    0000 0101
In C++ this is:
int n = 0x55 & 0xF;
// n = 0x5
So to get the right-most 6 bits,
int n = original_value & 0x3F;
And to get the right-most N bits,
int n = original_value & ((1 << N) - 1);
Here is more information on
Binary arithmetic operators in C++
Binary operators in general
I don't get the problem; can't you just use bitwise operators? E.g.
u32 trimmed = value & 0x3F;
This will keep just the 6 least significant bits by using the bitwise AND operator.
tl;dr:
int val = x & 0x3F;
int value = input & ((1 << no_of_bits_to_compress) - 1);
This calculates the n-th power of two, 1 << no_of_bits_to_compress, and subtracts 1 to get a mask with the n lowest bits set.
To get the last k bits of an integer A:
1. A % (1<<k); // simply A % 2^k
2. A - ((A>>k)<<k);
The first method is simply A modulo 2^k, which keeps exactly the last k bits. The second zeroes the last k bits with the two shifts (a divide and a multiply by 2^k) and subtracts, leaving only those bits.
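A quick check with the value from the question (183, last 6 bits, expected 55), comparing these two methods with the mask from the answers above:
#include <iostream>
int main()
{
    int A = 183, k = 6;
    std::cout << (A % (1 << k)) << '\n';         // method 1: 55
    std::cout << (A - ((A >> k) << k)) << '\n';  // method 2: 55
    std::cout << (A & ((1 << k) - 1)) << '\n';   // mask method: 55
}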

Converting to binary in C++

I made a function that converts numbers to binary. For some reason it's not working: it gives the wrong output. The output is in binary format, but it always gives the wrong result for binary numbers that end with a zero (at least, that's what I noticed...)
unsigned long long to_binary(unsigned long long x)
{
    int rem;
    unsigned long long converted = 0;
    while (x > 1)
    {
        rem = x % 2;
        x /= 2;
        converted += rem;
        converted *= 10;
    }
    converted += x;
    return converted;
}
Please help me fix it, this is really frustrating..
Thanks!
Use std::bitset to do the translation:
#include <iostream>
#include <bitset>
#include <limits.h>
int main()
{
    int val;
    std::cin >> val;
    std::bitset<sizeof(int) * CHAR_BIT> bits(val);
    std::cout << bits << "\n";
}
You're reversing the bits.
You also cannot use the remaining value of x as an indicator of when to terminate the loop.
Consider e.g. 4.
After first loop iteration:
rem == 0
converted == 0
x == 2
After second loop iteration:
rem == 0
converted == 0
x == 1
And then you set converted to 1.
Try:
int i = sizeof(x) * 8; // i is now the number of bits in x
while (i > 0) {
    --i;
    converted *= 10;
    converted |= (x >> i) & 1;
    // Shift x right to get bit number i in the rightmost position,
    // then AND with 1 to remove any bits left of bit number i,
    // and finally OR it into the rightmost position in converted
}
Running the above code with x as an unsigned char (8 bits) with value 129 (binary 10000001):
Starting with i = 8 (size of unsigned char * 8). In the first loop iteration i will be 7. We then take x (129) and shift it right 7 bits, which gives the value 1. This is OR'ed into converted, which becomes 1. Next iteration, we start by multiplying converted by 10 (so now it's 10), then shift x right 6 bits (value becomes 2) and AND it with 1 (value becomes 0). We OR 0 into converted, which is then still 10. The 3rd through 7th iterations do the same thing: converted is multiplied by 10, and one specific bit is extracted from x and OR'ed into converted. After these iterations, converted is 1000000.
In the last iteration, converted is first multiplied by 10 and becomes 10000000, then we shift x right 0 bits, yielding the original value 129. We AND x with 1, which gives 1. That 1 is then OR'ed into converted, which becomes 10000001.
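A self-contained sketch of that walkthrough, using an unsigned char so the loop runs exactly 8 times:
#include <iostream>
int main()
{
    unsigned char x = 129;            // binary 10000001
    unsigned long long converted = 0;
    int i = sizeof(x) * 8;            // 8 bits
    while (i > 0) {
        --i;
        converted *= 10;
        converted |= (x >> i) & 1;    // append bit number i as a decimal digit
    }
    std::cout << converted << std::endl;  // prints 10000001
}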
You're doing it wrong ;)
http://www.bellaonline.com/articles/art31011.asp
The remainder of the first division is the rightmost bit in the binary form; with your function it becomes the leftmost bit.
You can do something like this :
unsigned long long to_binary(unsigned long long x)
{
    int rem;
    unsigned long long converted = 0;
    unsigned long long multiplicator = 1;
    while (x > 0)
    {
        rem = x % 2;
        x /= 2;
        converted += rem * multiplicator;
        multiplicator *= 10;
    }
    return converted;
}
edit: the code proposed by CygnusX1 is a little more efficient, though I think less easy to follow; I'd advise taking his version.
improvement: I changed the stop condition of the while loop to x > 0, so we can remove the line adding x at the end.
You are actually reversing the binary number!
to_binary(2) will return 01 instead of 10, and when the leading zeroes are dropped it looks the same as 1.
how about doing it this way:
unsigned long long converted = 0; // accumulate the decimal-digit representation here
unsigned long long digit = 1;     // place value of the current bit
while (x > 0) {
    if (x % 2)
        converted += digit;
    x /= 2;
    digit *= 10;
}
What about std::bitset?
http://www.cplusplus.com/reference/stl/bitset/to_string/
If you want to display your number as binary, you need to format it as a string. The easiest way to do this that I know of is to use the STL bitset.
#include <bitset>
#include <iostream>
#include <sstream>
typedef std::bitset<64> bitset64;
std::string to_binary(const unsigned long long int& n)
{
    const static unsigned long long mask = 0xffffffff; // unsigned, so 0xffffffff fits
    unsigned long long upper = (n >> 32) & mask;
    unsigned long long lower = n & mask;
    bitset64 upper_bs(upper);
    bitset64 lower_bs(lower);
    bitset64 result = (upper_bs << 32) | lower_bs;
    std::stringstream ss;
    ss << result;
    return ss.str();
}
int main()
{
    for(int i = 0; i < 10; ++i)
    {
        std::cout << i << ": " << to_binary(i) << "\n";
    }
    return 0;
}
The output from this program is:
0: 0000000000000000000000000000000000000000000000000000000000000000
1: 0000000000000000000000000000000000000000000000000000000000000001
2: 0000000000000000000000000000000000000000000000000000000000000010
3: 0000000000000000000000000000000000000000000000000000000000000011
4: 0000000000000000000000000000000000000000000000000000000000000100
5: 0000000000000000000000000000000000000000000000000000000000000101
6: 0000000000000000000000000000000000000000000000000000000000000110
7: 0000000000000000000000000000000000000000000000000000000000000111
8: 0000000000000000000000000000000000000000000000000000000000001000
9: 0000000000000000000000000000000000000000000000000000000000001001
If your purpose is only to display them in their binary representation, you may try itoa (nonstandard, so not available with every compiler) or std::bitset
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <bitset>
#include <limits>
using namespace std;
int main()
{
    unsigned long long x = 1234567890;
    // c way
    char buffer[sizeof(x) * 8 + 1]; // +1 for the terminating '\0'
    itoa (x, buffer, 2);
    printf ("binary: %s\n", buffer);
    // c++ way
    cout << bitset<numeric_limits<unsigned long long>::digits>(x) << endl;
    return EXIT_SUCCESS;
}
void To(long long num, char *buff, int base)
{
    if(buff==NULL) return;
    long long m=0, no=num, i=1;
    while((no/=base)>0) i++; // count the digits needed
    buff[i]='\0';
    no=num;
    while(no>0)
    {
        m=no%base;
        no=no/base;
        buff[--i]=(m>9)?('A' + m - 10):('0' + m); // digits above 9 become letters (e.g. hex)
    }
}
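For example, it might be called like this (a hypothetical usage; the buffer must have room for the digits plus the terminator):
#include <iostream>
int main()
{
    char buff[65];                    // up to 64 binary digits plus '\0'
    To(183, buff, 2);
    std::cout << buff << std::endl;   // prints 10110111
}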
Here is a simple solution.
#include <iostream>
using namespace std;
int main()
{
    int num = 241; // print the low 16 bits of num
    for(int i = 15; i >= 0; i--) cout << ((num >> i) & 1); // most significant bit first
    cout << endl;
    for(int i = 0; i < 16; i++) cout << ((num >> i) & 1); // least significant bit first
    cout << endl;
    return 0;
}