I am fairly new to C++ and I am trying to write a recursive factorial calculator. I wrote it, but it gives negative values for entries like 20, 21, 22, 33, 40, etc., and it also fails to calculate the factorial of integers greater than 65, even though I am using long long int. Can someone please explain why this is happening? I didn't have any issue in Python. Why does it happen in C++?
Here is my code:
#include "stdafx.h"
#include <iostream>
#include <conio.h>
using namespace std;
long long int factorial(long int n) {
long long int temp;
if (n == 1 || n == 0) {
return 1;
}
else {
temp = n*factorial(n - 1);
return temp;
}
}
int main()
{
int n, i;
cout << "Enter positive integer or zero: ";
cin >> n;
while (n < 0 || cin.fail()) {
cout << "\nFactorial cannot be calculated for n is negative." << endl;
cin.clear();
cin.ignore(numeric_limits<streamsize>::max(), '\n');
cout << "Please try with integer >= 0: ";
cin >> n;
}
cout << factorial(n) << endl;
_getch();
return 0;
}
It's a simple overflow issue. You already know the correct result from Python, so you can check whether it is too big for the type you're using (obviously it is).
As for Python, it has built-in support for arbitrarily large integers: Handling very large numbers in Python
In C++, use a bigint library.
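For illustration, a minimal sketch using Boost.Multiprecision's cpp_int as the bigint type (this assumes Boost is available; any other bigint library works along the same lines):

// Sketch: arbitrary-precision factorial with Boost.Multiprecision (assumes Boost is installed).
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

boost::multiprecision::cpp_int factorial(unsigned n) {
    boost::multiprecision::cpp_int result = 1;
    for (unsigned i = 2; i <= n; ++i)
        result *= i;   // cpp_int grows as needed, so there is no overflow
    return result;
}

int main() {
    std::cout << factorial(40) << '\n';   // prints the exact value of 40!
    return 0;
}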
What you are experiencing is undefined behavior as a result of signed integer overflow. You are using a long long int, which is a signed integer type most likely represented as an 8-byte integer (this is platform specific).
Assuming from here on that your long long int is 8 bytes (64 bits), the maximum positive value it can store is 2^63 - 1, which is approximately 9.223372037e+18.
Calculating the factorial of numbers such as 21, 22, 33 or 40 produces a value larger than the maximum a long long int can store (20! is in fact the largest factorial that still fits), which results in undefined behavior, manifesting in this case as integer wraparound.
To fix this you need an integer type capable of representing larger values. I would start by switching to unsigned long long int, which gives you twice the range of numbers because an unsigned type deals only in non-negative values. That is just a band-aid on the issue, though. To truly handle the problem you will need a library that does arbitrary-precision integer math (a bigint library).
(There are also some platform-specific ways to ask your compiler for a 128-bit int, but the better solution is to switch to a bigint data type.)
EDIT:
I should clarify that by "bigint" I was not necessarily referring to any particular library. As suggested in the comments, there are multiple options as to which library could be used to get the job done.
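A rough sketch of the band-aid approach with built-in types, assuming a 64-bit unsigned long long: detect the overflow before it happens and report it instead of silently wrapping around.

// Sketch: factorial on unsigned long long with an explicit overflow check.
#include <iostream>
#include <limits>
#include <stdexcept>

unsigned long long factorial(unsigned n) {
    unsigned long long result = 1;
    for (unsigned i = 2; i <= n; ++i) {
        // If result * i would exceed the maximum, refuse instead of wrapping around.
        if (result > std::numeric_limits<unsigned long long>::max() / i)
            throw std::overflow_error("factorial overflows unsigned long long");
        result *= i;
    }
    return result;
}

int main() {
    try {
        std::cout << factorial(20) << '\n';   // fits in 64 bits
        std::cout << factorial(21) << '\n';   // throws: 21! no longer fits
    } catch (const std::overflow_error& e) {
        std::cout << e.what() << '\n';
    }
    return 0;
}

With this check, 20! still prints normally, while 21! and above raise an error instead of producing garbage.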
Related
Alright so, I'm a noob first of all. I started studying code (in C++), and I want to make a random number generator. It works, but as far as I've observed, the generated numbers never exceed 32768, even though my variables are all unsigned long long (which, as far as I know, gives the largest pool of numbers). I'm pretty sure it's something small, but it's been bothering me for a day, and I really need answers.
Here's what my current code looks like:
#include <iostream>
#include <cstdlib>   // rand, srand
#include <ctime>     // time, needed for srand(time(NULL))
using namespace std;

int main()
{
    unsigned long long n, m, r, mx;
    cout << "Please Enter The Number Of Desired Randomly Generated Numbers : ";
    cin >> m;
    cout << "Please Enter An Upper Limit to the Random Numbers : ";
    cin >> mx;
    srand(time(NULL));
    for (unsigned long long i = 1; i <= m; i++)
    {
        n = rand() % mx;
        cout << n << endl;
    }
    cout << "Rate this Program Out Of 10: ";
    cin >> r;
    cout << r << " " << "/" << "10";
    return 0;
}
Even though all the variables you use are unsigned long long, rand() will only ever return a number less than or equal to RAND_MAX, which is only guaranteed to be 32767 or more.
To get values larger than 32767 you need the more modern random number facilities in the standard library's <random> header.
Take a look at std::uniform_int_distribution. That page gives an example of how to use it to generate ordinary ints; however, the class takes a template parameter that lets you specify what kind of integer is returned.
In your case you would want to use:
std::uniform_int_distribution<unsigned long long>
Using that in the example on the page will generate 10 unsigned long long numbers (however if you copy the example exactly they will still be limited to between 1 and 6).
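A minimal sketch of that approach, with a hard-coded upper limit standing in for your mx:

// Sketch: generating unsigned long long values with <random> instead of rand().
#include <iostream>
#include <random>

int main() {
    unsigned long long mx = 1000000000000ULL;   // example upper limit, stands in for the user's mx

    std::random_device rd;                      // seed source
    std::mt19937_64 gen(rd());                  // 64-bit Mersenne Twister engine
    std::uniform_int_distribution<unsigned long long> dist(0, mx);

    for (int i = 0; i < 10; ++i)
        std::cout << dist(gen) << '\n';         // values can far exceed RAND_MAX
    return 0;
}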
I'm trying to write a program that uses the Gregory-Leibniz series to compute the value of PI. The user inputs how far they want the program to compute the series, and the program should then output its calculated value of PI. I believe I've written the code correctly, but it does not do well with large numbers and only gives me a few decimal places. When I tried to use cout << fixed << setprecision(42); it just gave me "nan" as the value of PI.
#include <iostream>
#include <iomanip>   // setprecision
#include <cmath>     // pow
using namespace std;

int main() {
    long long seqNum; // sequence number the user will input
    long double val;  // the series output
    cout << "Welcome to the compute PI program." << endl; // welcome message
    cout << "Please enter the sequence number in the form of an integer." << endl;
    cin >> seqNum; // user input
    while (seqNum < 0) // validation, number must be positive
    {
        cout << "Please enter a positive number." << endl;
        cin >> seqNum;
    } // end while
    if (seqNum > 0)
    {
        for (long int i = 0; i < seqNum; i++)
        {
            val = val + 4 * (pow(-1.00, i) / (1 + 2 * i)); // Gregory-Leibniz sum calculation
        } // end for
        cout << val;
    } // end if
    return 0;
}
Any help would be really appreciated. Thank you
Your problem involves a fundamental property of double values: a double, or any floating-point type, can hold only a fixed number of significant digits. There is no unlimited precision with plain, garden-variety doubles; there's a hard upper limit. The exact limit is implementation defined, but on modern C++ implementations it is typically just 16 or 17 digits of precision, nowhere near your desired 42 digits.
#include <limits>
#include <iostream>

int main()
{
    std::cout << std::numeric_limits<double>::max_digits10 << std::endl;
    return 0;
}
This prints the maximum digits of precision for your platform/C++ compiler; it shows 17 digits with g++ 9.2 on Linux (max_digits10 is C++11 or later; use digits10 with older C++ compilers to get a closely related metric).
Your desired 42 digits of precision far exceed what a double can handle. There are various special-purpose math libraries that can calculate with higher precision; you can investigate those if you wish.
You did not initialize or assign any value to val, but you read it in the first iteration of
val = val + 4*(pow(-1.00,i)/(1 + 2*i));
This causes your program to have undefined behavior. Initialize val, probably to zero:
long double val = 0; // the series output
That aside, as mentioned in the answer of #SamVarshavchik, there is a hard limit on the precision you can reach with the built-in floating-point types, and 42 significant digits is almost certainly outside of it. Similarly, the integer types you are using are limited in size, probably to at most 2^64, which is approximately 10^19.
Even if these limits weren't the problem, the series requires summing roughly 10^42 terms to get PI to 42 decimal places. It would take longer than the universe has existed to reach that precision, even with all of Earth's current computing power combined.
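For completeness, a sketch of the loop with val initialized and the pow() call replaced by a simple sign flip; the precision you actually see is still capped by long double no matter what you pass to setprecision (the term count below is just a placeholder for the user's input):

// Sketch: Gregory-Leibniz sum with val initialized and precision bounded by long double.
#include <iostream>
#include <iomanip>
#include <limits>

int main() {
    long long seqNum = 1000000;   // number of terms; stands in for the user's input
    long double val = 0.0L;       // must be initialized before the += below

    for (long long i = 0; i < seqNum; ++i) {
        // Alternate signs without calling pow(): even i adds, odd i subtracts.
        long double term = 4.0L / (2 * i + 1);
        val += (i % 2 == 0) ? term : -term;
    }

    // max_digits10 is the most digits that are meaningful for long double.
    std::cout << std::setprecision(std::numeric_limits<long double>::max_digits10)
              << val << '\n';
    return 0;
}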
I am trying to learn how to program in C++, so I wrote something that lets you enter a minimum and a maximum parameter, computes k+(k+1)+(k+2)+...+(max), and compares the result to the analytical value from the standard formula (n(n+1)/2). It seems to work fine for small numbers, but when I try, for example, min=4, max=4*10^5 (400,000), I get a negative result for the sum but a positive result from the analytical method, even after changing the type from 'int' to 'long'. With other combinations I have seen the opposite, with the analytical method producing a negative sum. I suspect this is related to the fact that an int can only hold a certain number of digits, but I wanted confirmation of that, and if that isn't the cause, what the actual problem is. The code is provided below:
#include <iostream>

// Values are inconsistent when parammin, parammax become large.
// For example, try (parammin, parammax) = (4, 400000)
int main() {
    int parammax, parammin;
    std::cout << "Input a minimum, then maximum parameter to sum up to" << std::endl;
    std::cin >> parammin >> parammax;
    int sum = 0;
    for (int iter = parammin; iter <= parammax; iter++) {
        sum += iter;
    }
    std::cout << "The sum is: " << sum << std::endl;
    const int analyticalmethod = (parammax * (parammax + 1) - parammin * (parammin - 1)) / 2;
    std::cout << "The analytical result for the sum is,"
                 " via (max*(max+1)-min*(min-1))/2: "
              << analyticalmethod << std::endl;
    return 0;
}
Using very large numbers without care is risky in C++. The sizes of the basic types int, long and long long are implementation dependent, with only the following guarantees:
int is at least 16 bits large
long is at least as large as int and at least 32 bits large
long long is at least as large as long and at least 64 bits large
If you think you may need larger values, you should consider a multiple-precision library such as the excellent GMP.
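As a quick sanity check, sketched here with the fixed-width types from <cstdint> (my choice of std::int64_t is an assumption, not something from the question): print the actual range of each basic type on your platform and redo the sum in 64 bits, so the (4, 400000) case no longer overflows.

// Sketch: inspect the integer ranges on this platform and do the sum in 64 bits.
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    std::cout << "int max:       " << std::numeric_limits<int>::max() << '\n';
    std::cout << "long max:      " << std::numeric_limits<long>::max() << '\n';
    std::cout << "long long max: " << std::numeric_limits<long long>::max() << '\n';

    std::int64_t parammin = 4, parammax = 400000;   // the values that overflowed int
    std::int64_t sum = 0;
    for (std::int64_t iter = parammin; iter <= parammax; ++iter)
        sum += iter;

    // The analytical formula now also stays within 64 bits for these inputs.
    std::int64_t analytical = (parammax * (parammax + 1) - parammin * (parammin - 1)) / 2;
    std::cout << sum << " == " << analytical << '\n';
    return 0;
}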
I made a little program to determine the length of a user-provided integer:
#include <iostream>
using namespace std;

int main()
{
    int c = 0; // counter for loop
    int q = 1; // quotient of number upon division
    cout << "Hello Cerberus! Please enter a number." << endl;
    cin >> q;
    if (q > -10 && q < 10)
    {
        cout << "The number you entered is 1 digit long." << endl;
    }
    else
    {
        while (q != 0)
        {
            q = q / 10;
            c++;
        }
        cout << "The number you entered is " << c << " digits long." << endl;
    }
    return 0;
}
It works quite nicely, unless the numbers get too big. Once the input is around 13 digits long, the program falls back to "The number you entered is 1 digit long." (it shouldn't even reach that branch unless the number is between -10 and 10).
Is there a length limit for user-input integers, or is this a sign of my computer's memory limits?
It's a limit of the int type on your architecture, not of your computer's memory. Every numeric type has a fixed upper limit, because the type describes data with a fixed size. Your int almost certainly takes up four bytes in memory (the standard allows other sizes, but your observations point to four), and there are only so many combinations of bits that can be stored in that many bytes.
You can determine the range of int on your platform using std::numeric_limits, but personally I recommend sticking with the fixed-width type aliases (e.g. int32_t, int64_t) and picking whichever ones have sufficient range for your application.
Alternatively, there do exist so-called "bigint" libraries that are essentially classes wrapping integer arrays and adding clever functionality to make arbitrarily-large values work as if they were of arithmetic types. That's probably overkill for you here though.
Just don't be tempted to start using floating-point types (float, double) for their magic range-enhancing abilities; just like with the integral types, their precision is fundamentally limited, but using floating-point types adds additional problems and concerns on top.
There is no fundamental limit on user input, though. That's because your stream is converting text characters, and your stream can basically have as many text characters in it as you could possibly imagine. At that level, you're really only limited by available memory.
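If all you need is how many digits were typed, a sketch of an alternative that sidesteps integer limits entirely is to read the input as text and count the characters (this assumes the user really types only digits, perhaps with a leading sign):

// Sketch: count digits by treating the input as text, so no integer type limit applies.
#include <iostream>
#include <string>

int main() {
    std::string s;
    std::cout << "Please enter a number." << '\n';
    std::cin >> s;

    // Skip a leading sign, then count the remaining characters as digits.
    std::size_t start = (!s.empty() && (s[0] == '-' || s[0] == '+')) ? 1 : 0;
    std::cout << "The number you entered is " << s.size() - start
              << " digits long." << '\n';
    return 0;
}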
Possible Duplicate:
Calculating factorial of large numbers in C
Firstly, I am new to C++. I have tried the program below to calculate the factorial of any number.
#include <iostream>
using namespace std;

unsigned long factorial(unsigned long n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

int main()
{
    cout << "Enter any Number to calculate factorial." << endl;
    unsigned long n;
    cin >> n;
    cout << factorial(n);
}
If I give small numbers like 5, 3 or 20 it returns the exact value, but if I give numbers like 34 it returns zero. I assume it is exceeding the range of the type. Please help me get the exact result whatever number I enter.
Factorials are huge.
34! = 295232799039604140847618609643520000000
This barely fits into 128 bits. If your compiler supports a 128-bit integer type, you can use it to calculate factorials up to 34. If not, or if you need anything larger, you will need some kind of bignum library.
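If your compiler does provide a 128-bit type (GCC and Clang offer the non-standard unsigned __int128), a sketch of that route looks like this; iostream cannot print it directly, so the digits have to be produced by hand:

// Sketch: 34! with the non-standard unsigned __int128 (GCC/Clang extension).
#include <iostream>
#include <string>

std::string to_string_u128(unsigned __int128 v) {
    // iostream cannot print __int128 directly, so build the decimal digits manually.
    if (v == 0) return "0";
    std::string s;
    while (v > 0) {
        s.insert(s.begin(), static_cast<char>('0' + static_cast<int>(v % 10)));
        v /= 10;
    }
    return s;
}

int main() {
    unsigned __int128 result = 1;
    for (int i = 2; i <= 34; ++i)
        result *= i;   // 34! still fits in 128 bits
    std::cout << to_string_u128(result) << '\n';   // 295232799039604140847618609643520000000
    return 0;
}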