Alright, so I'm a noob first of all. I started studying code (in C++), and I want to make a random number generator. It works, but as far as I've observed, the generated numbers never exceed 32768, even though my variables are all "unsigned long long" (I'm pretty sure that's how you get the largest pool of numbers). I'm pretty sure it's something small, but it's been bothering me for a day, and I really need answers.
Here's what my current code looks like:
#include <iostream>
#include <cstdlib> // srand(), rand()
#include <ctime>   // time()
using namespace std;

int main()
{
    unsigned long long n, m, r, mx;
    cout << "Please Enter The Number Of Desired Randomly Generated Numbers : ";
    cin >> m;
    cout << "Please Enter An Upper Limit to the Random Numbers : ";
    cin >> mx;
    srand(time(NULL));
    for (unsigned long long i = 1; i <= m; i++)
    {
        n = rand() % mx;
        cout << n << endl;
    }
    cout << "Rate this Program Out Of 10: ";
    cin >> r;
    cout << r << " " << "/" << "10";
    return 0;
}
Even though all the variables you use are unsigned long long, rand() will only ever return a value less than or equal to RAND_MAX, which is only guaranteed to be 32767 or more (on your implementation it is evidently exactly 32767).
To reliably get values above 32767 you need the more capable random number generation facilities in the standard library's <random> header.
Take a look at the uniform_int_distribution class. That page gives an example of how to use it to generate ordinary ints, but the class takes a template parameter that lets you specify what kind of integer it returns.
In your case you would want to use:
std::uniform_int_distribution<unsigned long long>
Using that in the example on the page will generate 10 unsigned long long numbers (though if you copy the example exactly, they will still be limited to between 1 and 6).
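For instance, here is a minimal sketch of your program rewritten with <random> (the choice of the mt19937_64 engine and the [0, mx] bounds are my own, not anything the page mandates):

#include <iostream>
#include <random>

int main()
{
    unsigned long long m, mx;
    std::cout << "Please Enter The Number Of Desired Randomly Generated Numbers : ";
    std::cin >> m;
    std::cout << "Please Enter An Upper Limit to the Random Numbers : ";
    std::cin >> mx;

    // 64-bit Mersenne Twister, seeded once from the OS entropy source.
    std::mt19937_64 engine{std::random_device{}()};
    // Yields values uniformly distributed over [0, mx], each one a
    // full-width unsigned long long.
    std::uniform_int_distribution<unsigned long long> dist(0, mx);

    for (unsigned long long i = 0; i < m; ++i)
        std::cout << dist(engine) << '\n';
}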
I am trying to learn how to program in C++, so I wrote something that lets you enter a minimum and a maximum parameter; it computes k+(k+1)+(k+2)+...+(max) and compares it to the analytical value using the standard formula (n(n+1)/2). It seems to work fine when I try small numbers, but when trying, for example, min=4, max=4*10^5 (400,000), I get a negative result for the sum but a positive result from the analytical method, even after changing the type from 'int' to 'long'. Trying other combinations, I have achieved the opposite, with the analytical method producing the negative sum. I suspect this is related to the fact that the type int can only go up to a certain number of digits, and I wanted some confirmation of that, or, if that isn't it, of what the actual problem is. The code is provided below:
#include <iostream>

// Values are inconsistent when parammin, parammax become large.
// For example, try (parammin, parammax) = (4, 400000).
int main() {
    int parammax, parammin;
    std::cout << "Input a minimum, then maximum parameter to sum up to" << std::endl;
    std::cin >> parammin >> parammax;
    int sum = 0;
    for (int iter = parammin; iter <= parammax; iter++) {
        sum += iter;
    }
    std::cout << "The sum is: " << sum << std::endl;
    const int analyticalmethod = (parammax*(parammax+1) - parammin*(parammin-1))/2;
    std::cout << "The analytical result for the sum is,"
                 " via (max*(max+1)-min*(min-1))/2: "
              << analyticalmethod << std::endl;
    return 0;
}
Using very large numbers without care is dangerous in C++. The basic types int, long and long long are implementation-dependent, with only the following guarantees:
int is at least 16 bits wide
long is at least as wide as int and at least 32 bits wide
long long is at least as wide as long and at least 64 bits wide
If you think you may need larger values, you should consider a multiple-precision library like the excellent GMP.
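You can check the actual ranges on your own platform with std::numeric_limits; a minimal sketch (the output in the comment is what a typical 64-bit Linux/x86-64 build prints, not a guarantee):

#include <iostream>
#include <limits>

int main() {
    std::cout << "int:       " << std::numeric_limits<int>::max() << '\n'
              << "long:      " << std::numeric_limits<long>::max() << '\n'
              << "long long: " << std::numeric_limits<long long>::max() << '\n';
    // Typical output on 64-bit Linux (LP64):
    // int:       2147483647
    // long:      9223372036854775807
    // long long: 9223372036854775807
    return 0;
}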
I am adding the numbers from 1 to n in C++, using both an iterative method and the mathematical formula. The code works fine for inputs of up to 9 digits.
But when I input a 10-digit number, the formula and the iteration give different answers.
I have tried to look it up on Google but couldn't find any solution. My code:
#include <bits/stdc++.h>
using namespace std;

int main() {
    unsigned long long i, n, sum = 0, out_put;
    cout << "Sum of numbers from 1 to: ";
    cin >> n;

    /// using mathematical formula
    out_put = n*(n+1);
    out_put = out_put/2;
    cout << " = " << out_put << endl;

    /// using iteration
    for (i = 1; i <= n; i++) {
        sum = sum + i;
    }
    cout << " == " << sum << endl;
    return 0;
}
How do I know which one is correct? If I assume the formula can't be wrong, then why is the iteration method giving an incorrect answer? I used unsigned long long to prevent overflow, but it still didn't work.
What you are seeing is overflow happening at different points in your two calculations. 9,999,999,999 * 10,000,000,000 is ~1e20, while an unsigned long long can hold only ~1.8e19. So the product wraps around and you get one answer.
Your for loop is also going to overflow, but it does so at a different point: the running sum only ever grows to half the size of that product, so the modular arithmetic happens with a smaller number and the two answers diverge.
Your problem is that n*(n+1) can be too large to store in an unsigned long long, even though the end result (half of that), which you calculate via iteration, may still fit.
Assuming your unsigned long long has 64 bits, it can hold integers up to 18446744073709551615. Anything above that wraps around to 0.
Edit: As Nathan points out, you can of course have both calculations overflow. The sum would still give the correct result modulo 2^64, but the direct calculation can be off, because the division does not generally yield the same result modulo 2^64 after the product has wrapped around.
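One way to sidestep the intermediate overflow in the formula is to divide before multiplying: exactly one of n and n+1 is even, so you can halve that factor first. A minimal sketch of the idea (my own illustration, not code from the question):

#include <iostream>

int main() {
    unsigned long long n;
    std::cout << "Sum of numbers from 1 to: ";
    std::cin >> n;

    // Halve the even factor before multiplying, so the intermediate
    // value never exceeds the final answer n*(n+1)/2.
    unsigned long long out_put = (n % 2 == 0) ? (n / 2) * (n + 1)
                                              : n * ((n + 1) / 2);
    std::cout << " = " << out_put << '\n';
    return 0;
}

The final answer itself can of course still overflow once n(n+1)/2 exceeds 2^64 - 1 (around n ≈ 6.07e9); past that point you need a wider type or a bigint library.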
I am kind of new to C++ and I am trying to write a recursive factorial calculator. I did write it, but it gives negative values for entries like 20, 21, 22, 33, 40, etc., and the code also fails to calculate the factorial of integers greater than 65, despite my attempt to use long long int. Can someone please explain to me why this is happening? I didn't have any issue in Python. Why is it happening in C++?
Here is my code:
#include "stdafx.h"
#include <iostream>
#include <conio.h>
using namespace std;
long long int factorial(long int n) {
long long int temp;
if (n == 1 || n == 0) {
return 1;
}
else {
temp = n*factorial(n - 1);
return temp;
}
}
int main()
{
int n, i;
cout << "Enter positive integer or zero: ";
cin >> n;
while (n < 0 || cin.fail()) {
cout << "\nFactorial cannot be calculated for n is negative." << endl;
cin.clear();
cin.ignore(numeric_limits<streamsize>::max(), '\n');
cout << "Please try with integer >= 0: ";
cin >> n;
}
cout << factorial(n) << endl;
_getch();
return 0;
}
It's a simple overflow issue. You already know the results from Python, so you can check whether each one is too big for the type you're using (obviously it is).
As for Python, it has built-in support for arbitrarily large integers: Handling very large numbers in Python
Use a C++ bigint library.
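For example, with Boost.Multiprecision (assuming Boost is available; this sketch is mine, not the original poster's code) the factorial simply never wraps:

#include <iostream>
#include <boost/multiprecision/cpp_int.hpp>

using boost::multiprecision::cpp_int; // arbitrary-precision integer

// Iterative factorial; cpp_int grows as needed, so no overflow.
cpp_int factorial(unsigned n) {
    cpp_int result = 1;
    for (unsigned i = 2; i <= n; ++i)
        result *= i;
    return result;
}

int main() {
    std::cout << factorial(65) << '\n'; // prints 65! exactly
    return 0;
}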
What you are experiencing is undefined behavior as a result of signed integer overflow. You are using a long long int, which is a signed integer most likely represented as an 8-byte integer (this is platform specific).
Assuming from here on that your long long int is 8 bytes (64 bits), the maximum positive value it can store is 2^63 - 1, which is approximately 9.223372037e+18.
Calculating the factorial of numbers like 21, 22, 33, 40, etc. produces values larger than that maximum (20! still fits, but 21! is already about 5.1e19), which results in undefined behavior, in this case manifesting as integer wraparound.
To fix this you need an integer data type capable of representing larger values. I would start by switching to unsigned long long int, which gets you twice the range of numbers, because an unsigned type deals only in non-negative values. That is just a band-aid on the issue, though. To truly handle the problem you will need a library that does arbitrary-precision integer math (a bigint library).
(There are also some platform-specific ways to ask your compiler for a 128-bit int, but the better solution is to switch to a bigint data type.)
EDIT:
I should clarify that by "bigint" I was not necessarily referring to any particular library. As suggested in the comments, there are multiple options as to which library could be used to get the job done.
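If you want to stay with built-in types, you can at least detect the overflow before it happens instead of silently wrapping. A minimal sketch (my own illustration; it uses unsigned arithmetic so the check itself is well defined):

#include <iostream>
#include <limits>

// Computes n! into 'result'; returns false if it would not fit.
bool checked_factorial(unsigned n, unsigned long long& result) {
    result = 1;
    for (unsigned i = 2; i <= n; ++i) {
        // Multiplying by i would overflow if result > max / i.
        if (result > std::numeric_limits<unsigned long long>::max() / i)
            return false;
        result *= i;
    }
    return true;
}

int main() {
    unsigned long long f;
    if (checked_factorial(20, f))
        std::cout << "20! = " << f << '\n';
    if (!checked_factorial(21, f))
        std::cout << "21! does not fit in an unsigned long long\n";
    return 0;
}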
I made a little program to determine the length of a user-provided integer:
#include <iostream>
using namespace std;

int main()
{
    int c = 0; // digit counter
    int q = 1; // quotient of the number upon repeated division by 10
    cout << "Hello Cerberus! Please enter a number." << endl;
    cin >> q;
    if (q > -10 && q < 10)
    {
        cout << "The number you entered is 1 digit long." << endl;
    }
    else
    {
        while (q != 0)
        {
            q = q / 10;
            c++;
        }
        cout << "The number you entered is " << c << " digits long." << endl;
    }
    return 0;
}
It works quite nicely, unless the numbers get too big. Once the input is around 13 digits long, the program defaults to "The number you entered is 1 digit long" (it shouldn't even reach that branch unless the number is between -10 and 10).
Is there a length limit for user-input integers, or is this demonstrative of my computer's memory limits?
It's a limit in your computer's architecture. Every numeric type has a fixed upper limit, because the type describes data with a fixed size. For example, your int is likely to take up either four or eight bytes in memory (depending on CPU; based on your observations, I'd say the former), and there are only so many combinations of bits that can be stored in that many bytes of memory.
You can determine the range of int on your platform using std::numeric_limits, but personally I recommend sticking with the fixed-width type aliases (e.g. int32_t, int64_t) and picking whichever one has sufficient range for your application.
Alternatively, there do exist so-called "bigint" libraries that are essentially classes wrapping integer arrays and adding clever functionality to make arbitrarily large values work as if they were of arithmetic types. That's probably overkill for you here, though.
Just don't be tempted to start using floating-point types (float, double) for their magical range-enhancing abilities; just like the integral types, their precision is fundamentally limited, and floating-point adds additional problems and concerns on top.
There is no fundamental limit on user input, though. That's because your stream is converting text characters, and the stream can hold basically as many text characters as you could possibly imagine. At that level, you're really only limited by available memory.
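That observation suggests a simple workaround for this particular program: read the input as text and count the characters, so no integer type ever has to hold the value. A rough sketch (my own, and it assumes the user types a well-formed number):

#include <iostream>
#include <string>

int main()
{
    std::string s;
    std::cout << "Hello Cerberus! Please enter a number." << std::endl;
    std::cin >> s;

    // Skip a leading sign, then count the remaining characters.
    std::size_t digits = s.size();
    if (!s.empty() && (s[0] == '-' || s[0] == '+'))
        --digits;

    std::cout << "The number you entered is " << digits
              << " digits long." << std::endl;
    return 0;
}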
This is a program created for a game, in which I am using rand() to try to randomly generate what types of trees can be found at a settlement. The seed for rand() has been set from the time in main.cpp so that it is unique on each run. My question here, however, is about my modulus: trees[x]=rand()%40;
If I understand correctly how rand() works, once it outputs a number it has already output, it will repeat the same sequence of numbers, because it uses a formula. Is using the modulus limiting my program to only 40 different random number sequences? Or does it continue to draw a new random number for each of the following array elements?
#include <stdafx.h>
#include <iostream>
#include <cstdlib> // for rand() and srand()
#include <ctime>   // for time()
using namespace std;

int forestdeterminator()
{
    int trees[32];
    for (int x = 0; x < 32; ++x)
        trees[x] = rand() % 40;

    if (trees[0] >= 1 && trees[0] <= 9)
        cout << "Birch Trees" << endl;
    if (trees[1] >= 1 && trees[1] <= 3)
        cout << "Mahogany Trees" << endl;
    if (trees[2] >= 1 && trees[2] <= 20)
        cout << "Oak Trees" << endl;
    if (trees[3] >= 1 && trees[3] <= 4)
        cout << "Cherry Trees" << endl;
    if (trees[4] == 1)
        cout << "Tigerwood Trees" << endl;
    if (trees[5] == 1)
        cout << "Swampwood Trees (Swamp Only)" << endl;
    if (trees[6] >= 1 && trees[6] <= 8)
        cout << "Yew Trees" << endl;
    if (trees[7] == 1)
        cout << "Petrified Trees" << endl;
    if (trees[8] >= 1 && trees[8] <= 24)
        cout << "Pine Trees" << endl;
    if etc etc...
No. You are only using the result of the rand() function, so the modulus has absolutely no effect on the output of the PRNG. Only if you used rand()%40 to successively re-seed the PRNG would you run into that kind of limit.
Also, note that a PRNG is typically seeded from the system clock only once, at its initialization. From there on, each number "depends" on the previously output one.
Finally, be aware that applying a modulus to the output of a PRNG will in almost all cases skew the resulting probability distribution. The effect is very small when the modulus is small relative to RAND_MAX, but it may be important to consider, depending on your application.
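If that skew ever matters, <random> does the rebalancing for you. A minimal sketch of drawing unbiased values in [0, 39] (my own example, not the question's code):

#include <iostream>
#include <random>

int main()
{
    std::mt19937 engine{std::random_device{}()};   // seeded once
    std::uniform_int_distribution<int> d40(0, 39); // no modulo bias

    int trees[32];
    for (int x = 0; x < 32; ++x)
        trees[x] = d40(engine); // uniform over 0..39

    for (int t : trees)
        std::cout << t << ' ';
    std::cout << '\n';
    return 0;
}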
Is using the modulus limiting my program to only produce 40 different random number sequences?
Sequences? Debatable. Numbers? Definitely. You can't have effective randomness with only 40 possible outputs; there's just not enough freedom to fluctuate. But no, you're not affecting the numbers rand() itself produces; you're just heavily limiting what the output of your program can be.
What you do is put 32 (pseudo)randomly generated integers in the range 0 to 39 into an array called trees. Doing this doesn't affect how the function rand() works; it'll keep generating numbers from its full range, no matter what operations you apply to its former results.
So, if I understand you, the answer is: no, using rand()%40 somewhere in your code won't magically make the rand() function generate only numbers in the range 0-39.