Binary conversion with recursive function is giving an odd value - c++

I am making a function that converts numbers to binary with a recursive function, but when I pass in large values it gives me an odd result; for example, when I enter 2000 it gives me -1773891888. When I follow the function with the debugger it shows the correct binary value of 2000 until the very last step.
Thank you!!
#include <iostream>

int Binary(int n);

int main() {
    int n;
    std::cin >> n;
    std::cout << n << " = " << Binary(n) << std::endl;
}

int Binary(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return Binary(n / 2) * 10 + n % 2;
}

Integer values in C++ can only store values in a bounded range (usually -2^31 to +2^31 - 1), which maxes out around two billion and change. That means that if you try storing in an integer a binary value with more than ten digits, you’ll overflow this upper limit. On most systems, this causes the value to wrap around, hence the negative outputs.
To fix this, I’d recommend having your function return a std::string storing the bits rather than an integer, since logically speaking the number you’re returning really isn’t a base-10 integer that you’d like to do further arithmetic operations on. This will let you generate binary sequences of whatever length you need without risking an integer overflow.
At least your logic is correct!
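A minimal sketch of that string-based approach might look like this (it reuses your recursive structure; the exact names and base cases are just illustrative):

#include <iostream>
#include <string>

// Returns the binary digits of a non-negative n as text, so the length
// of the result is not limited by the range of int.
std::string Binary(int n) {
    if (n == 0) return "0";
    if (n == 1) return "1";
    return Binary(n / 2) + (n % 2 ? '1' : '0');
}

int main() {
    int n;
    std::cin >> n;
    std::cout << n << " = " << Binary(n) << std::endl;  // e.g. 2000 = 11111010000
}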

Related

I wrote a program to convert a negative number to binary, is this code correct?

I have used the 2's complement approach to convert a negative number to binary, and I am getting the right answer.
After I converted the number to binary (let's say n = -6):
I ignored the negative sign (by making the number positive)
I took its 2's complement
Now, if the MSB (Most Significant Bit) is 1, that means the number is negative; this answer is stored in newAns
But my doubt is: to print the negative binary number, since the MSB of newAns was 1, do I have to find the 2's complement of newAns again or not?
#include <iostream>
#include <math.h>
using namespace std;

int decimalToBinary(int n) {
    int ans = 0;
    int i = 0;
    while (n != 0) {
        int bit = n & 1;
        ans = (bit * pow(10, i)) + ans;
        n = n >> 1;
        i++;
    }
    return ans;
}

int main() {
    int n;
    cin >> n;
    if (n < 0) {
        // if number is negative
        n = n * (-1);
        int ans = decimalToBinary(n);
        // Find 2's complement of the number
        // 1's comp
        int newAns = (~ans);
        // 2's comp
        newAns = newAns + 1;
        cout << newAns << endl;
    } else {
        // if number is positive
        int ans = decimalToBinary(n);
        cout << ans << endl;
    }
}
I think the biggest issue is a confusion about representing numbers in bases.
int values are always "binary" in memory because memory is fundamentally binary. There's no such thing as a "decimal int" vs. "binary int". Representation in a specific base is a text/string/printing concept. The int is the value itself, and a "decimal int" is how you format it for someone to read. If you want to represent that number as a "decimal" or "binary", then you should convert it to a string.
There's a lot of weirdness in this code. Instead of n = n*(-1); decimalToBinary(n); why not just decimalToBinary(-n)? Very confusing. But also, why are we using two's complement at all? It's just confusing to other developers (i.e. bad). Use the unary operator -().
“Binary” and “decimal” are representations of numbers as sequences of digits. For example, “123” interpreted as decimal means 1 hundred, 2 tens, and 3 ones, and “101” interpreted as binary means 1 four, 0 twos, and 1 one.
Your decimalToBinary routine does not convert to binary, because it does not produce a normal sequence of binary digits. For the number five, it produces the number one-hundred-and-one. When one-hundred-and-one is converted to decimal and printed, the output is “101”. This could be called decimal-coded binary.
Because the value returned by decimalToBinary, which is stored in ans, is not a binary representation of n, ~ans does not take its one’s complement. When ans has the value one-hundred-and-one, decimal 101, its binary representation is 000…0001100101. The ~ operator takes the one’s complement of those bits, yielding 111…1110011010.
This is not a useful way to compute the two’s complement representation of a negative number. Depending on what you are trying to learn with this assignment, two methods of computing and displaying the two’s complement representation are:
Once the number is read with cin >> n, it is already in binary form, as the input routines invoked by cin >> n convert it. Your C++ representation almost certainly uses two’s complement, but you can be sure by using unsigned int u = n;, as the conversion from int to unsigned int will effectively produce a two’s complement representation. Then you can simply write code to print the bits of u: For each bit in it, print “0” or “1” according to what the value of the bit is.
Alternatively, if you want to work with the mathematics of binary representations, write code to convert n to an array of characters, in which each character is “0” or “1”, according to the binary representation of n. If n is negative, you can first convert its negation to binary, and then you can iterate through the array of characters to change each “0” to a “1” and vice-versa, and finally you can write code to add “1” to the binary numeral in the array. That will include writing code to carry from position to position.
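A minimal sketch of the first method (the fixed 32-bit width and the helper name are just illustrative assumptions):

#include <iostream>

// Prints the two's complement bit pattern of n by converting it to
// unsigned and testing each bit from the most significant one down.
void print_bits(int n) {
    unsigned int u = n;               // conversion is defined modulo 2^N, i.e. two's complement
    for (int i = 31; i >= 0; --i)     // assumes a 32-bit unsigned int
        std::cout << ((u >> i) & 1u);
    std::cout << '\n';
}

int main() {
    print_bits(-6);   // prints 11111111111111111111111111111010
}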
Here is the Correct Solution
#include <iostream>
using namespace std;

void print_binary(int num)
{
    // `result` stores the binary notation of `num` in decimal format
    int result = 0;
    // It ignores leading zeros and leading ones
    int place_value = 1;
    while (!(num == 0 || num == -1))
    {
        // Extracting the rightmost bit from `num`
        int bit = num & 1;
        // appending `bit` to `result`
        result += bit * place_value;
        // Shifting `num` to the right
        // so that the second bit (from the right) now becomes the rightmost bit
        num = num >> 1;
        place_value *= 10;
    }
    cout << result << endl;
}

int main()
{
    int number = -6;
    int neg_number = ~number + 1; // Took 2's complement of number
    print_binary(number);
    print_binary(neg_number);
}

Finding the largest prime factor? (Doesn't work with large numbers?)

I am a beginner in C++, and I just finished reading chapter 1 of the C++ Primer. So I tried the problem of computing the largest prime factor, and I found that my program works well up to numbers around 1e9 but fails beyond that, e.g. 600851475143: it always returns a weird number, e.g. 2147483647, when I feed a large number into it. I know a similar question has been asked many times; I just wonder why this could happen to me. Thanks in advance.
P.S. I guess the reason has to do with some part of my program not being able to handle such large numbers.
#include <iostream>

int main()
{
    int val = 0, temp = 0;
    std::cout << "Please enter: " << std::endl;
    std::cin >> val;
    for (int num = 0; num != 1; val = num) {
        num = val / 2;
        temp = val;
        while (val % num != 0)
            --num;
    }
    std::cout << temp << std::endl;
    return 0;
}
Your int type is 32 bits (like on most systems). The largest value a two's complement signed 32-bit value can store is 2^31 - 1, or 2147483647. Take a look at man limits.h if you want to know the constants defining the limits of your types, and/or use larger types (e.g. unsigned would double your range at basically no cost; uint64_t from stdint.h/inttypes.h would expand it by a factor of about 8.6 billion and only cost something meaningful on 32-bit systems).
2147483647 isn't a weird number; it's INT_MAX, which is defined in the climits header file. This happens when you reach the maximum capacity of an int.
You can use a bigger data type such as std::size_t or unsigned long long int for that purpose, which (on typical 64-bit platforms) have a maximum value of 18446744073709551615.
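As a sketch of that fix (this uses the usual trial-division loop rather than your exact algorithm; the key point is only the 64-bit integer type):

#include <iostream>

int main()
{
    unsigned long long val = 0;
    std::cout << "Please enter: " << std::endl;
    std::cin >> val;
    // Largest prime factor by trial division; a 64-bit type keeps
    // inputs like 600851475143 from overflowing.
    unsigned long long largest = 1;
    for (unsigned long long p = 2; p * p <= val; ++p) {
        while (val % p == 0) {
            largest = p;
            val /= p;
        }
    }
    if (val > 1)        // whatever remains after division is itself prime
        largest = val;
    std::cout << largest << std::endl;
    return 0;
}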

Problem: when I use a large value I get wrong output from my function

So I'm new to Stack Overflow and coding. I was learning about functions in C++, how the stack frame works, etc.
As part of that I made a function for factorials and used it to calculate binomial coefficients. It worked fine for small values like n=10 and r=5, but for a medium value like 23C12 it gave 4 as the answer.
I don't know what is wrong with the code or whether I forgot to add something.
My code:
#include <iostream>
using namespace std;

int fact(int n)
{
    int a = 1;
    for (int i = 1; i <= n; i++)
    {
        a *= i;
    }
    return a;
}

int main()
{
    int n, r;
    cin >> n >> r;
    if (n >= r)
    {
        int coeff = fact(n) / (fact(n - r) * fact(r));
        cout << coeff << endl;
    }
    else
    {
        cout << "please enter valid values. i.e n>=r." << endl;
    }
    return 0;
}
Thanks for your help!
You're not doing anything "wrong" per se. It's just that factorials quickly become huge numbers.
In your example you're using ints, which are typically 32-bit variables. If you take a look at a table of factorials, you'll note that log2(13!) = 32.535.... So the largest factorial that will fit in a 32-bit number is 12!. For a 64-bit variable, the largest factorial you can store is 20! (since log2(21!) = 65.469...).
When you get 4 as the result that's because of overflow.
If you need to be able to calculate such huge numbers, I suggest a bignum library such as GMP.
Factorials overflow easily. In practice you rarely need bare factorials; they almost always appear in fractions. In your case:
int coeff = fact(n) / (fact(n - r) * fact(r));
Note that the larger of the two factorials in the denominator cancels entirely against the trailing factors of the numerator. I am not going to provide you the code, but I hope an example will help you understand what to do instead.
Consider n=5, r=3; then coeff is
5*4*3*2*1 / (2*1 * 3*2*1)
Before actually carrying out any calculations you can reduce that to
(5*4) / (2*1)
If you are certain that the final result coeff does fit in an int, you can also calculate it using ints. You just need to take care not to overflow the intermediate terms.
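A minimal sketch of that idea (illustrative only; it assumes the final coefficient fits in a 64-bit integer):

#include <iostream>

// Computes C(n, r) by multiplying one factor and dividing by one factor
// at a time; each division is exact, so intermediate values stay close
// to the size of the final result instead of a full factorial.
long long binomial(int n, int r)
{
    if (r > n - r) r = n - r;   // C(n, r) == C(n, n - r); use the smaller r
    long long result = 1;
    for (int i = 0; i < r; ++i)
        result = result * (n - i) / (i + 1);
    return result;
}

int main()
{
    std::cout << binomial(23, 12) << std::endl;   // 1352078
}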

Trying to convert string array to int but does not work in C++

I am trying to solve Project Euler Problem #16: https://projecteuler.net/problem=16. It requires me to sum up every digit of the result of 2^1000.
I iterate through the string (converted from the double value) by treating the string as an array. However, when I try to convert a string digit back to an int, I always get an error.
I tried stoi; it prompts me with "no matching function for call to 'atoi'". I also tried stringstream for the conversion, but it still did not work.
Please see my following code:
#include <iostream>
#include <complex> // pow
#include <string>  // to_string C++11

const double NUM = pow(2, 1000);

int main(int argc, char const *argv[]) {
    int sum = 0;
    //printf("%.0f\n", NUM); // Remove decimal digits
    auto str = std::to_string(NUM);
    std::string str_trim = str.substr(0, str.length() - 7);
    std::cout << str_trim << std::endl; // 2^1000 in string
    for (int i = 0; i <= str_trim.length(); i++) {
        std::cout << str_trim[i] << std::endl;
        sum = sum + str_trim[i];
        std::cout << sum << std::endl;
    }
    return 0;
}
Any idea to solve this problem? Thank you.
By pure coincidence this approach will work fine on most compilers for 2^1000, because the rounding done by the IEEE 754 double format (the most common format for floating-point numbers in C++) drops low bits by treating them as zeros (and in 2^1000 those bits are indeed zeros).
To sum up the digits you can just iterate over the chars and execute
total += s[i] - '0';
using the fact that in C++ chars are indeed small integers.
If you need to convert a std::string to an int, use std::stoi. C++ also has a few other functions for other types: std::stoi, std::stol, std::stoll, std::stoul, std::stoull, std::stof, std::stod, std::stold.
Still, no standard C++ integer type can hold numbers as large as 2^1000, so you can't use the standard types and conversion functions for such values.
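For illustration, the summing loop from the question could apply that like this (note it also stops at < rather than <=, so it doesn't read one past the end of the string):

int sum = 0;
for (std::size_t i = 0; i < str_trim.length(); i++) {
    sum += str_trim[i] - '0';   // convert the digit character to its numeric value
}
std::cout << sum << std::endl;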
Even if you manage to extract the digits from the NUM your answer will be incorrect:
const double NUM = pow(2, 1000);
cannot be relied upon to store the number exactly.
An alternative way of solving this problem is to evaluate 2 to the 1000th power in binary (the result is simple: it is 1 followed by 1,000 zeros). Your problem is then reduced to converting that back to base 10 and summing the digits; perhaps even doing that part simultaneously. But you will not be able to use any of the built-in types to represent that number. That is what makes this problem interesting.
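A hedged sketch of that idea (just one way to do it): keep the decimal digits of the number in a vector and double it 1000 times, which amounts to computing 2^1000 directly in base 10.

#include <iostream>
#include <vector>

int main() {
    // digits[0] is the least significant decimal digit
    std::vector<int> digits{1};                 // start from 1 == 2^0

    for (int step = 0; step < 1000; ++step) {   // double the number 1000 times
        int carry = 0;
        for (std::size_t i = 0; i < digits.size(); ++i) {
            int d = digits[i] * 2 + carry;
            digits[i] = d % 10;
            carry = d / 10;
        }
        if (carry) digits.push_back(carry);
    }

    int sum = 0;
    for (int d : digits) sum += d;              // sum of the decimal digits of 2^1000
    std::cout << sum << std::endl;
}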

What is wrong with this recursive factorial implementation?

I compiled and ran it on my computer, and it executes correctly. I tried IDEONE, and I got a successful answer.
But when I submit it in SPOJ, I'm getting a wrong answer. Is something wrong in this implementation?
#include <iostream>
#include <cstdio>
using namespace std;

int factorial(int n) {
    if (n <= 1)
        return 1;
    return n * factorial(n - 1);
}

int main() {
    int t;
    int n;
    cout << "";
    cin >> t;
    for (int i = 0; i < t; i++) {
        cout << "";
        cin >> n;
        printf("%d\n", factorial(n));
    }
    return 0;
}
The problem with the above code is due to the finite space we can use to store the value of an int. On a 32-bit machine, an int has 32 bits (each 0 or 1), which means that the maximum value an unsigned int can have is 2^32 - 1 and the maximum value a signed int can have is 2^31 - 1 (since it needs one bit to denote whether it is positive or negative, while an unsigned int is always non-negative and can devote that bit to the regular value).
Now, that aside, you should look into ways of storing the value of a very large number in a different data structure! Maybe an array would be a good choice...
Just to brainstorm, imagine creating an int bigInteger[100] (that should be large enough to hold 100!). To multiply two of your numbers, you could then implement a bitMultiplication(int bitNum[], int num) function that would take in your array by reference and perform bitwise multiplication (see the following post for details: Multiplying using Bitwise Operators).
Use that bitMultiplication(int bitNum[], int num) instead of the regular multiplication in your recursive factorial function, and you should have a function that works on large n!
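As a hedged illustration of the array idea (this variant stores decimal digits and uses ordinary multiplication with a carry, rather than the bitwise multiplication suggested above):

#include <iostream>
#include <vector>

// Prints n! by keeping its decimal digits in a vector,
// least significant digit first, and multiplying with an explicit carry.
void print_factorial(int n) {
    std::vector<int> digits{1};                     // represents the value 1
    for (int k = 2; k <= n; ++k) {
        int carry = 0;
        for (std::size_t i = 0; i < digits.size(); ++i) {
            int d = digits[i] * k + carry;
            digits[i] = d % 10;
            carry = d / 10;
        }
        while (carry) {                             // append any remaining carry digits
            digits.push_back(carry % 10);
            carry /= 10;
        }
    }
    for (std::size_t i = digits.size(); i-- > 0; )  // print most significant digit first
        std::cout << digits[i];
    std::cout << '\n';
}

int main() {
    print_factorial(20);    // 2432902008176640000
    print_factorial(100);   // a 158-digit result no built-in integer type can hold
}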