I have written the following code in C++:
#include <cmath>
#include <iostream>
using namespace std;

int main()
{
    double sum, containers, n, c, max_cap, temp;
    unsigned int j = 1;
    cin >> n >> c;
    sum = containers = n;
    for (unsigned int i = 2; i <= c; ++i)
    {
        max_cap = i * n;
        if (max_cap - sum > 0)
        {
            temp = ceil((max_cap - sum) / i);
            containers += temp;
            sum += i * temp;
        }
    }
    cout << containers << '\n';
}
When the input given to this code is "728 1287644555" it takes about 5 seconds to compute the answer, but when the input is roughly three times as large, i.e. "763 3560664427", it does not produce an answer even after a long time (I waited for around half an hour). As can be seen, the algorithm is linear, so it should take roughly 15 seconds. Why is this happening? Is it because the input is too large in the second case? If so, how is it affecting the time so much?
My guess would be unsigned integer overflow.
for (unsigned int i = 2 ; i <= c; ++i)
i increases until it is > c, but c is a double whereas i is an unsigned int. It reaches the maximum (UINT_MAX) and wraps to 0 before it reaches the value of c.
I.e. if 1287644555 is less than UINT_MAX on your platform but 3560664427 is greater, the first input completes while the second loops forever. Which only raises the question of what strange architecture you are running this on :)
On my own machine (UINT_MAX = 4294967295) the first input takes 16 seconds to process while the second takes 43.5 seconds, pretty much what you'd expect.
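If that diagnosis is right, one possible fix (a sketch, not tested against your exact inputs) is to widen the loop counter to a 64-bit type so it cannot wrap before reaching c:

// A 64-bit counter will not wrap at UINT_MAX, so the loop can reach
// any value of c that was read into the double.
for (unsigned long long i = 2; i <= c; ++i)
{
    max_cap = i * n;
    if (max_cap - sum > 0)
    {
        temp = ceil((max_cap - sum) / i);
        containers += temp;
        sum += i * temp;
    }
}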
#include <iostream>
#include <vector>
#include <cstdlib>
#include <cassert>
using namespace std;

long long int LastDigitofSumofFibonacci(int n)
{
    long long int first = 0;
    long long int second = 1;
    long long int fibOfn;
    long long int sum = 1;
    vector<long long int> V;
    V.push_back(first);
    V.push_back(second);
    if (n == 0) return first;
    else if (n == 1) return second;
    else {
        for (int i = 2; i < 60; i++) {
            fibOfn = first + second;
            first = second;
            second = fibOfn;
            sum = sum + fibOfn;
            //cout<<"i "<<i<<" fibOfn "<<fibOfn<<" fibOfnlastdigit "<<fibOfnlastdigit<<" first "<<first<<" second "<<second<<" sum "<<sum;
            //cout<<endl;
            sum = sum % 10;
            V.push_back(sum);
        }
    }
    //for(auto element:V)
    //    cout<<element<<" ";
    //cout<<endl;
    //cout<<(n)%60<<endl;
    return V[(n) % 60];
}

int main()
{
    int n;
    cin >> n;
    long long int Base = LastDigitofSumofFibonacci(n);
    cout << Base << endl;
}
In this I am trying to calculate the last digit of the sum of the Fibonacci series. I know, and also read on the net, that the last digits follow a pattern with period 60 (indices 0-59). Concept-wise I think my code is OK, but I am still unable to get the correct answers for large inputs.
I cleaned up the code a bit and fixed the issue with second not being computed % 10.
Since only the last digit is relevant, all variables can be just int; there is no need for long long int to store a single digit. It would actually save RAM to use uint8_t as the type for the cache, but 60 bytes versus 240 bytes isn't going to make a difference here.
The result repeats every 60 steps, which is the basis for the algorithm. But why compute this every time the function gets called? So let's make a static array so the computation only happens once. Let's go one step further with constinit and compute it at compile time.
As a last change I made the argument to LastDigitofSumofFibonacci an unsigned int. Unless you want to extend the Fibonacci series backwards into negative indices and extend the algorithm, there is no need for a signed type, and unsigned int generates better code for n % 60.
#include <iostream>
#include <array>

int LastDigitofSumofFibonacci(unsigned int n) {
    // The last digit of `fib(n)` and their sum repeats every 60 steps.
    // https://en.wikipedia.org/wiki/Pisano_period
    // Compute the first 60 values as lookup table at compile time.
    static constinit std::array<int, 60> cache = []() {
        int first = 0, second = 1;
        std::array<int, 60> a{0, 1};
        for (int i = 2; i < 60; i++) {
            int t = first + second;
            first = second;
            second = t % 10;
            a[i] = (a[i - 1] + t) % 10;
        }
        return a;
    }();
    // and now just look up the answer at run time.
    return cache[n % 60];
}

int main() {
    int n;
    std::cin >> n;
    std::cout << LastDigitofSumofFibonacci(n) << std::endl;
}
Somehow the code got a lot shorter just from eliminating some overhead here and there.
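If you want to convince yourself the table is right, here is a quick check (a sketch; it assumes LastDigitofSumofFibonacci from above is visible) that compares the lookup against a direct digit-by-digit computation for the first few hundred values of n:

#include <cassert>

// Compares the table-based function against a straightforward running
// computation of the last digit of fib(n) and of the sum of the series.
void check_first_300() {
    assert(LastDigitofSumofFibonacci(0) == 0);
    assert(LastDigitofSumofFibonacci(1) == 1);
    int first = 0, second = 1, sum = 1;   // last digits of fib(0), fib(1) and their sum
    for (unsigned int n = 2; n <= 300; ++n) {
        int digit = (first + second) % 10;
        first = second;
        second = digit;
        sum = (sum + digit) % 10;
        assert(LastDigitofSumofFibonacci(n) == sum);
    }
}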
In my code I am trying to multiply two numbers. The formula is simply k * (k-1)^(n-1). I stored the product (k-1)^(n-1) in the variable p1 and then multiplied it with k. For n=10, k=10, (k-1)^(n-1) should be 387420489, and I do get this in p1, but on multiplying it with k I get a negative number. I used the modulus, but instead of 3874204890 I get some other large positive number. What is the correct approach?
#include <cstdio>   // for scanf
#include <iostream>
using namespace std;

typedef long long ll;
ll big = 1000000000 + 7;

ll multiply(ll a, ll b)
{
    ll ans = 1;
    for (int i = 1; i <= b; i++)
        ans = ans * a;
    return ans % big;
}

int main()
{
    int t;
    scanf("%d", &t);
    while (t--)
    {
        ll n, k;
        cin >> n >> k;
        ll p1 = multiply(k - 1, n - 1);
        cout << p1 << endl; // this gives correct value
        ll p2 = (k % big) * (p1 % big);
        cout << ((p2 + big) % big) % big << endl;
    }
}
What is the ll type? If it is just int (and I am pretty sure it is), it gets overflowed, because a 32-bit signed type can't store values greater than 2^31 - 1, which is approximately 2 * 10^9. You can use long long int to make it work; then your code will work with results less than 2^63.
It's not surprising you get an overflow. I plugged your equation into Wolfram Alpha, fixing n at 10 and iterating over k from 0 to 100.
The curve gets very vertical, very quickly at around k = 80.
10^21 requires 70 binary bits to represent it, and you only have 63 in a long long.
You're going to have to decide what the limits of this algorithm's parameters are and pick data types correspondingly. Perhaps a double would be more suitable?
link to plot is here
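A minimal way to avoid that kind of overflow (a sketch based on the multiply function above, not necessarily the intended solution) is to reduce modulo big after every multiplication, so no intermediate product ever exceeds roughly 10^18, which still fits in a long long:

ll multiply(ll a, ll b)                    // computes (a^b) % big
{
    ll ans = 1;
    a %= big;                              // keep the factor below big
    for (ll i = 1; i <= b; i++)
        ans = (ans * a) % big;             // product of two values < 10^9 + 7 fits in long long
    return ans;
}

The final answer is then (k % big) * multiply(k - 1, n - 1) % big, which again stays well below the long long limit. For very large exponents, square-and-multiply (binary exponentiation) would cut the loop down to O(log b) steps, but that is a separate change.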
I am trying to solve a question in which I need to find out the number of possible ways to make a team of two members. (Note: a team can have at most two persons.)
After writing this code, it works properly, but in some test cases it shows a floating point error and I can't find out what it is exactly.
Input: 1st line: Number of test cases
2nd line: total number of persons
Thank you
#include <iostream>
#include <algorithm> // for min
using namespace std;

long C(long n, long r)
{
    long f[n + 1];
    f[0] = 1;
    for (long i = 1; i <= n; i++)
    {
        f[i] = i * f[i - 1];
    }
    return f[n] / f[r] / f[n - r];
}

int main()
{
    long n, r, m, t;
    cin >> t;
    while (t--)
    {
        cin >> n;
        r = 1;
        cout << C(n, min(r, n - r)) + 1 << endl;
    }
    return 0;
}
You aren't getting a floating point exception; you are getting a divide-by-zero exception, because your code is attempting to divide by the number 0 (which can't be done on a computer).
When you invoke C(100, 1), the values the main loop inside C stores into the f array grow factorially. Eventually, two values are multiplied such that i * f[i - 1] is zero due to overflow. That leads to all the subsequent f[i] values being initialized to zero. And then the division that follows the loop is a division by zero.
Although purists on these forums will say this is undefined, here's what's really happening on most 2's complement architectures. Or at least on my computer....
At i==21:
f[20] is already equal to 2432902008176640000
21 * 2432902008176640000 overflows for 64-bit signed, and will typically become -4249290049419214848. So at this point, your program is bugged and is now in undefined behavior.
At i==66:
f[65] is equal to 0x8000000000000000. So 66 * f[65] gets calculated as zero for reasons that make sense to me, but should be understood as undefined behavior.
With f[66] assigned to 0, all subsequent assignments of f[i] become zero as well. After the main loop inside C is over, the f[n-r] is zero. Hence, divide by zero error.
Update
I went back and reverse engineered your problem. It seems like your C function is just trying to compute this expression:
N!
-------------
R! * (N-R)!
Which is the "number of unique sorted combinations"
In which case, instead of computing the large factorial N!, we can reduce that expression to this:
     N
     ∏   i
  i=N-R+1
--------------------
      R!
This won't eliminate overflow, but it will allow your C function to handle larger values of N and R without error.
But we can also take advantage of simple reduction before trying to compute a big, long factorial expression.
For example, let's say we were trying to compute C(15,5). Mathematically that is:
15!
--------
10! 5!
Or as we expressed above:
1*2*3*4*5*6*7*8*9*10*11*12*13*14*15
-----------------------------------
1*2*3*4*5*6*7*8*9*10 * 1*2*3*4*5
The first 10 factors of the numerator and denominator cancel each other out:
11*12*13*14*15
-----------------------------------
1*2*3*4*5
But intuitively, you can see that "12" in the numerator is already evenly divisible by denominators 2 and 3. And that 15 in the numerator is evenly divisible by 5 in the denominator. So simple reduction can be applied:
11*2*13*14*3
-----------------------------------
1 * 4
There's even more room for greatest common divisor reduction, but this is a great start.
Let's start with a helper function that computes the product of all the values in a list.
long long multiply_vector(std::vector<int>& values)
{
    long long result = 1;
    for (long i : values)
    {
        result = result * i;
        if (result < 0)
        {
            std::cout << "ERROR - multiply_vector hit overflow" << std::endl;
            return 0;
        }
    }
    return result;
}
Now let's implement C using the above function, after doing the reduction operation:
long long C(int n, int r)
{
    if ((r >= n) || (n < 0) || (r < 0))
    {
        std::cout << "invalid parameters passed to C" << std::endl;
        return 0;
    }

    // compute
    //        n!
    //   -------------
    //   r! * (n-r)!
    //
    // assume (r < n)
    //
    // Which maps to
    //        n
    //        ∏   i
    //     i=n-r+1
    //   --------------------
    //        r!

    int end = n;
    int start = n - r + 1;

    std::vector<int> numerators;
    std::vector<int> denominators;
    long long numerator = 1;
    long long denominator = 1;

    for (int i = start; i <= end; i++)
    {
        numerators.push_back(i);
    }
    for (int i = 2; i <= r; i++)
    {
        denominators.push_back(i);
    }

    size_t n_length = numerators.size();
    size_t d_length = denominators.size();

    for (size_t n = 0; n < n_length; n++)
    {
        int nval = numerators[n];
        for (size_t d = 0; d < d_length; d++)
        {
            int dval = denominators[d];
            if ((nval % dval) == 0)
            {
                denominators[d] = 1;
                nval = nval / dval;      // keep dividing the same numerator entry
                numerators[n] = nval;    // so every cancelled denominator is accounted for
            }
        }
    }

    numerator = multiply_vector(numerators);
    denominator = multiply_vector(denominators);

    if ((numerator == 0) || (denominator == 0))
    {
        std::cout << "Giving up. Can't resolve overflow" << std::endl;
        return 0;
    }

    long long result = numerator / denominator;
    return result;
}
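As a quick check (assuming the two functions above are compiled together with <iostream> and <vector> included), the worked example from earlier comes out as expected:

int main()
{
    std::cout << C(15, 5) << std::endl;    // 3003, matching the hand reduction above
    std::cout << C(100, 2) << std::endl;   // 4950, a case where computing 100! directly would overflow
    return 0;
}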
You are not using floating-point. And you seem to be using variable-length arrays, which are a C feature and possibly a C++ extension, but not standard.
Anyway, you will get overflow and therefore undefined behaviour even for rather small values of n.
In practice the overflow will lead to array elements becoming zero for not much larger values of n.
Your code will then divide by zero and crash.
They might also have a test case like (1000000000, 999999999), which is trivial to solve mathematically, but not for your code, which I bet will crash.
You don't specify what you mean by "floating point error" - I reckon you are referring to the fact that you are doing an integer division rather than a floating point one so that you will always get integers rather than floats.
int a, b;
a = 7;
b = 2;
std::cout << a / b << std::endl;
This will result in 3, not 3.5! If you want a floating point result, you should use floats instead, like this:
float a, b;
a = 7;
b = 2;
std::cout << a / b << std::endl;
So the solution to your problem would simply be to use float instead of long long int.
Note also that you are using variable-length arrays, which are not standard C++ - why not use std::vector instead?
Array syntax is:
type name[size]
Note: size must be a constant, not a variable.
Example #1:
int name[10];
Example #2:
const int asize = 10;
int name[asize];
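Following the std::vector suggestion from the answers above, a minimal sketch of the original C function without the variable-length array might look like this. Note that it only removes the non-standard VLA; the factorials still overflow a 64-bit integer once n goes past 20, as explained above:

#include <vector>

long long C(long long n, long long r)
{
    std::vector<long long> f(n + 1);    // replaces the variable-length array long f[n + 1]
    f[0] = 1;
    for (long long i = 1; i <= n; i++)
        f[i] = i * f[i - 1];            // still overflows for n > 20
    return f[n] / f[r] / f[n - r];
}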
I have programmed a sieve of Eratosthenes algorithm in C++, and it works fine for smaller numbers that I have tested it with. However, when I use large numbers, i.e. 2 000 000 as the upper limit, the program begins giving wrong answers. Can anyone clarify why?
Your help is appreciated.
#include <iostream>
#include <time.h>
using namespace std;

int main() {
    clock_t a, b;
    a = clock();

    int n = 0, k = 2000000; // n = Sum of primes, k = Upper limit
    bool r[k - 2];          // r = All numbers below k and above 1 (if true, it has been marked as a non-prime)

    for (int i = 0; i < k - 2; i++)      // Check all numbers
        if (!r[i]) {                     // If it hasn't been marked as a non-prime yet ...
            n += i + 2;                  // Add the prime to the total sum (+2 because of the shift - index 0 is 2, index 1 is 3, etc.)
            for (int j = 2 * i + 2; j < k - 2; j += i + 2) // Go through all multiples of the prime under the limit
                r[j] = true;             // Mark the multiple as a non-prime
        }

    b = clock();

    cout << "Final Result: " << n << endl;
    cout << b - a << "ms runtime achieved." << endl;
    return 0;
}
EDIT: I just did some debugging and found that it works with the limit at around 400. At 500, however, it is off - it should be 21536, but is 21499
EDIT 2: Ah, I found two errors, and fixing them seems to have solved the problem.
The first was found by others who answered, and is that n is overflowing - after making it a long long data type, it works.
The second, rather facepalm-worthy mistake, was that the booleans in r had to be initialized. After running a loop before the prime check to set all of them to false, the right answer comes out. Does anyone know why this occurred?
You simply get an integer overflow. The C++ type int has a limited range (on a 32-bit system usually from -2^31 to 2^31 - 1, that is, the usual maximum is 2147483647). The specific maximum on your setup can be found out by #including the <limits> header and evaluating std::numeric_limits<int>::max(). Even when k is smaller than that maximum, your code will sooner or later cause an overflow in the expressions n += i + 2 or int j = 2 * i + 2.
You will have to choose a better (read: more appropriate) type like unsigned, which does not support negative numbers and can thus represent positive numbers about twice as large as int can. You can also try unsigned long or even unsigned long long.
Also note that variable length arrays (VLAs; that's what bool r[k - 2] is) are not standard C++. You might want to use std::vector instead. You also did not initialize the array to false (std::vector would do this automatically), which could also be the problem, especially since you say that it does not work even at k = 500.
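For reference, here is a minimal sketch of the program with just those two changes applied (a wider type for the sum and a value-initialized std::vector<bool>); the timing code is left out:

#include <iostream>
#include <vector>

int main() {
    const int k = 2000000;                 // upper limit
    long long n = 0;                       // sum of primes; a plain int overflows here
    std::vector<bool> r(k - 2, false);     // value-initialized, unlike the local bool array
    for (int i = 0; i < k - 2; i++)
        if (!r[i]) {
            n += i + 2;                    // index 0 is 2, index 1 is 3, ...
            for (int j = 2 * i + 2; j < k - 2; j += i + 2)
                r[j] = true;
        }
    std::cout << "Final Result: " << n << std::endl;  // should print 142913828922
}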
In C++, you should also use <ctime> instead of <time.h> (then clock_t and clock() are defined in the std namespace, but since you are using namespace std, this won't make a difference for you), but this is more or less a matter of style.
I found a working example in my "code archive". Although it is not based on yours, you might find it useful:
#include <vector>
#include <iostream>
int main()
{
    typedef std::vector<bool> marked_t;
    typedef marked_t::size_type number_t; // The type used for indexing marked_t.

    const number_t max = 500;
    static const number_t iDif = 2; // Account for the numbers 1 and 2.

    marked_t marked(max - iDif);
    number_t i = iDif;
    while (i * i <= max) {
        while (marked[i - iDif] == true)
            ++i;
        for (number_t fac = iDif; i * fac < max; ++fac)
            marked[i * fac - iDif] = true;
        ++i;
    }

    for (marked_t::size_type i = 0; i < marked.size(); ++i) {
        if (!marked[i])
            std::cout << i + iDif << ',';
    }
}
This is a C++ program that finds primes using the sieve of Eratosthenes. It is then supposed to store the time it takes to do this, and re-run the calculation 100 times, storing the times each time. There are two things that I need help with in this program:
Firstly, I can only test numbers up to 480 million; I would like to get higher than that.
Secondly, when I time the program it only gets the first timing and then prints zeros as the time. This is not correct and I don't know what the problem with the clock is. Thanks for the help.
Here is my code.
#include <iostream>
#include <ctime>
#include <algorithm> // for std::fill_n
using namespace std;

int main ()
{
    int long MAX_NUM = 1000000;
    int long MAX_NUM_ARRAY = MAX_NUM + 1;
    int long sieve_prime = 2;
    int time_store = 0;

    while (time_store <= 100)
    {
        int long sieve_prime_constant = 0;
        int *Num_Array = new int[MAX_NUM_ARRAY];
        std::fill_n(Num_Array, MAX_NUM_ARRAY, 3);
        Num_Array[0] = 1;
        Num_Array[1] = 1;

        clock_t time1, time2;
        time1 = clock();

        while (sieve_prime_constant <= MAX_NUM_ARRAY)
        {
            if (Num_Array[sieve_prime_constant] == 1)
            {
                sieve_prime_constant++;
            }
            else
            {
                Num_Array[sieve_prime_constant] = 0;
                sieve_prime = sieve_prime_constant;
                while (sieve_prime <= MAX_NUM_ARRAY - sieve_prime_constant)
                {
                    sieve_prime = sieve_prime + sieve_prime_constant;
                    Num_Array[sieve_prime] = 1;
                }
                if (sieve_prime_constant <= MAX_NUM_ARRAY)
                {
                    sieve_prime_constant++;
                    sieve_prime = sieve_prime_constant;
                }
            }
        }

        time2 = clock();
        delete[] Num_Array;

        cout << "It took " << (float(time2 - time1) / (CLOCKS_PER_SEC)) << " seconds to execute this loop." << endl;
        cout << "This loop has already been executed " << time_store << " times." << endl;

        float Time_Array[100];
        Time_Array[time_store] = (float(time2 - time1) / (CLOCKS_PER_SEC));
        time_store++;
    }
    return 0;
}
I think the problem is that you don't reset the starting prime:
int long sieve_prime = 2;
Currently that is outside your loop. On second thoughts... That's not the problem. Has this code been edited to incorporate the suggestions in Mats Petersson's answer? I just corrected the bad indentation.
Anyway, for the other part of your question, I suggest you use char instead of int for Num_Array. There is no use using int to store a boolean. By using char you should be able to store about 4 times as many values in the same amount of memory (assuming your int is 32-bit, which it probably is).
That means you could handle numbers up to almost 2 billion. Since you are using signed long as your type instead of unsigned long, that is approaching the numeric limits for your calculation anyway.
If you want to use even less memory, you could use std::bitset, but be aware that performance could be significantly impaired.
By the way, you should declare your timing array at the top of main:
float Time_Array[100];
Putting it inside the loop just before it is used is a bit whack.
Oh, and just in case you're interested, here is my own implementation of the sieve which, personally, I find easier to read than yours....
std::vector<char> isPrime( N, 1 );
for( int i = 2; i < N; i++ )
{
    if( !isPrime[i] ) continue;
    for( int x = i*2; x < N; x+=i ) isPrime[x] = 0;
}
This section of code is supposed to go inside your loop:
int *Num_Array = new int[MAX_NUM_ARRAY];
std::fill_n(Num_Array, MAX_NUM_ARRAY, 3);
Num_Array [0] = 1;
Num_Array [1] = 1;
Edit: and this one needs to be in the loop too:
int long sieve_prime_constant = 0;
When I run this on my machine, it takes 0.2 s per loop. If I add two zeros to MAX_NUM_ARRAY, it takes 4.6 seconds per iteration (up to the 20th loop; I got bored waiting longer than 1.5 minutes).
Agree with the earlier comments. If you really want to juice things up you don't store an array of all possible values (as int, or char), but only keep the primes. Then you test each subsequent number for divisibility by all primes found so far. Now you are only limited by the number of primes you can store. Of course, that's not really the algorithm you wanted to implement any more... but since it only uses integer division, it's quite fast. Something like this:
const int MAX_PRIME = 100;     // how many primes to keep
int myPrimes[MAX_PRIME];
int pCount, ii, jj;

ii = 3;
myPrimes[0] = 2;
for (pCount = 1; pCount < MAX_PRIME; pCount++) {
    for (jj = 1; jj < pCount; jj++) {
        if (ii % myPrimes[jj] == 0) {
            // not a prime
            ii += 2;           // never test even numbers...
            jj = 0;            // restart the scan (the loop's jj++ brings it back to 1)
        }
    }
    myPrimes[pCount] = ii;     // smallest candidate that survived every division
}
Not really what you were asking for, but maybe it is useful.
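For a quick sanity check (assuming the snippet above is placed inside a main() with <iostream> included), printing the table should start with the familiar primes:

for (int idx = 0; idx < MAX_PRIME; idx++)
    std::cout << myPrimes[idx] << ' ';     // 2 3 5 7 11 13 17 19 23 29 ...
std::cout << '\n';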