First of all, sorry for my English.
I am solving this problem:
Tom wants to shoot a cannon at Jerry, and he would like to have as many pieces of ammunition as possible; they must all be the same size and as big as possible. He has only n cannonballs at his disposal, which he can cut into smaller pieces, and he would like to end up with k + 1 pieces to fire at Jerry. He knows the radius of every cannonball. What is the biggest possible volume of one piece? The output is rounded with printf("%.3f\n", answer). The first number of the input is n and the second is k; the next n numbers are the radii of the cannonballs.
Possible input:
3 50
1 2 3
Output: 2.900
Here is my solution: the volume of each piece can only be smaller than or equal to the volume of the smallest cannonball, because you cannot join parts of different cannonballs together. So I binary-search from 0.0 to the minimal volume, and as the predicate I use the function numberOfPieces, which counts how many pieces of a given volume (the midpoint of the binary search) can be cut in total from all the cannonballs. I then compare that count to k + 1: if it is greater than or equal, I use the midpoint as the new low; otherwise I use it as the new high. My solution works for this test input.
The problem is that I get WA (wrong answer) and I cannot see the test input values. Can you please look at my code and check whether I did something wrong? The problem may be numeric inaccuracy, but I use a small EPS, so it should be fine. Thanks in advance for any ideas.
Here is my code:
#include <vector>
#include <iostream>
#include <algorithm>
#include <stdio.h>

#define PI 3.14159265358979323846
#define VC ((4.0/3.0) * PI) // constant for volume calculation
#define EPS 1E-12

using namespace std;

// return the number of pieces depending on the volume
int numberOfPieces(int v[], int n, double volume)
{
    int ans = 0;
    for(int i = 0; i < n; i++)
        ans += (int)(v[i] * VC / volume);
    return ans;
}

double binarySearch(double a, double b, int k, int n, int v[])
{
    double low = a, high = b;
    while(abs(low - high) > EPS)
    {
        double mid = low + (high - low) / 2.0;
        if(numberOfPieces(v, n, mid) >= k)
            low = mid;
        else
            high = mid;
    }
    return (low + high) / 2.0;
}

int main()
{
    int n, k, x;  // n - number of cannonballs, k - number of wanted pieces, x - variable for input
    int v[10001]; // radii ^ 3 of the cannonballs
    scanf("%d%d", &n, &k);
    int minVolume = 9999999;
    for(int i = 0; i < n; i++) {
        scanf("%d", &x);
        minVolume = min(minVolume, x);
        v[i] = x * x * x;
    }
    printf("%.3f\n", binarySearch(0.0, minVolume * minVolume * minVolume * VC, k + 1, n, v));
    return 0;
}
The problem was that I was setting the minimal volume as the high bound of the binary search, when I should have used the maximal volume. The second problem was that I was not passing the maximal radius ^ 3 to the binary search function. Thanks for the help.
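For reference, a minimal sketch of the corrected main() (it reuses numberOfPieces and binarySearch from the code above; maxR3 is a name introduced here, not from the original post):

int main()
{
    int n, k, x;
    int v[10001];
    scanf("%d%d", &n, &k);
    int maxR3 = 0; // largest radius ^ 3 seen so far
    for(int i = 0; i < n; i++) {
        scanf("%d", &x);
        v[i] = x * x * x;
        maxR3 = max(maxR3, v[i]);
    }
    // the high bound is the volume of the *largest* ball, not the smallest
    printf("%.3f\n", binarySearch(0.0, maxR3 * VC, k + 1, n, v));
    return 0;
}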
Hey, I am making a small C++ program to calculate the value of sin(x) to 7 decimal places, but when I calculate sin(PI/2) using this program it gives me 0.9999997 rather than 1.0000000. How can I solve this error?
I know a little bit about why I'm getting this value as output; the question is what my approach should be to fix this logical error.
Here is my code for reference:
#include <iostream>
#include <iomanip>

#define PI 3.1415926535897932384626433832795

using namespace std;

double sin(double x);
int factorial(int n);
double Pow(double a, int b);

int main()
{
    double x = PI / 2;
    cout << setprecision(7) << sin(x);
    return 0;
}

double sin(double x)
{
    int n = 1;      // counter for odd powers
    double Sum = 0; // to store every individual expression
    double t = 1;   // temp variable to store individual expression
    for (n = 1; t > 10e-7; Sum += t, n = n + 2)
    {
        // here I have calculated two terms at a time because the addition
        // of two consecutive terms is always less than 1
        t = (Pow(-1.00, n + 1) * Pow(x, (2 * n) - 1) / factorial((2 * n) - 1))
            +
            (Pow(-1.00, n + 2) * Pow(x, (2 * (n + 1)) - 1) / factorial((2 * (n + 1)) - 1));
    }
    return Sum;
}

int factorial(int n)
{
    if (n < 2)
    {
        return 1;
    }
    else
    {
        return n * factorial(n - 1);
    }
}

double Pow(double a, int b)
{
    if (b == 1)
    {
        return a;
    }
    else
    {
        return a * Pow(a, b - 1);
    }
}
sin(PI/2) ... it gives me 0.9999997 rather than 1.0000000
For values outside [-pi/4 ... +pi/4], the Taylor sin/cos series converges slowly and suffers from cancellation of terms and overflow of int factorial(int n)**. Stay in the sweet range.
Consider using trig properties such as sin(x + pi/2) = cos(x), sin(x + pi) = -sin(x), etc. to bring x into the [-pi/4 ... +pi/4] range.
The code below uses remquo (ref2) to find the remainder and part of the quotient.
// Bring x into the -pi/4 ... pi/4 range (i.e. +/- 45 degrees)
// and then call one's own sin/cos function.
double my_wide_range_sin(double x) {
  if (x < 0.0) {
    return -my_wide_range_sin(-x);
  }
  int quo;
  double x90 = remquo(fabs(x), pi/2, &quo);
  switch (quo % 4) {
    case 0:
      return sin_sweet_range(x90);
    case 1:
      return cos_sweet_range(x90);
    case 2:
      return sin_sweet_range(-x90);
    case 3:
      return -cos_sweet_range(x90);
  }
  return 0.0;
}
This implies OP needs to code up a cos() function too.
** Could use long long instead of int to marginally extend the useful range of int factorial(int n), but that only extends it by a small factor. Could use double.
A better approach would not use factorial() at all, but scale each successive term by 1.0/(n * (n+1)) or the like.
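For illustration, a minimal sketch of that factorial-free update (the names and the 1e-7 cutoff are illustrative, not the OP's code): the running term is multiplied by -x*x / ((2n+2) * (2n+3)) each pass, which advances the power and the factorial together.

double sin_series(double x) {
    double term = x; // n == 0 term: x^1 / 1!
    double sum = x;
    for (int n = 0; term > 1e-7 || term < -1e-7; ++n) {
        // from x^(2n+1)/(2n+1)! to -x^(2n+3)/(2n+3)!
        term *= -x * x / ((2 * n + 2) * (2 * n + 3));
        sum += term;
    }
    return sum;
}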
I see three bugs:
10e-7 is 10*10^(-7) which seems to be 10 times larger than you want. I think you wanted 1e-7.
Your test t > 10e-7 will become false, and exit the loop, if t is still large but negative. You may want abs(t) > 1e-7.
To get the desired accuracy, you need to get up to n = 7, which has you computing factorial(13), which overflows a 32-bit int. (If using gcc you can catch this with -fsanitize=undefined or -ftrapv.) You can gain some breathing room by using long long int which is at least 64 bits, or int64_t.
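Putting the three fixes together, a minimal sketch (hedged: it keeps the question's two-terms-per-iteration structure, with std::pow standing in for the custom Pow):

#include <cmath> // std::pow, std::fabs

long long factorial(long long n) { // long long is exact up to 20!
    return n < 2 ? 1 : n * factorial(n - 1);
}

double sinTaylor(double x) {
    double sum = 0, t = 1;
    for (int n = 1; std::fabs(t) > 1e-7; sum += t, n += 2) {
        t = std::pow(x, 2 * n - 1) / factorial(2 * n - 1)
          - std::pow(x, 2 * n + 1) / factorial(2 * n + 1);
    }
    return sum;
}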
My professor from the Algorithms course gave me the following homework:
Write a C/C++ program that calculates the value of Euler's number (e) with a given accuracy eps > 0.
Hint: The number e = 1 + 1/1! + 1/2! + ... + 1/n! + ... = 2.71828... can be calculated as the sum of the elements of the sequence x_0, x_1, x_2, ..., where x_0 = 1, x_1 = 1 + 1/1!, x_2 = 1 + 1/1! + 1/2!, ...; the summation continues as long as the condition |x_(i+1) - x_i| >= eps holds.
As he further explained, eps is the precision of the algorithm. For example, the precision could be 1/100. Here |x_(i+1) - x_i| denotes the absolute value of x_(i+1) - x_i.
Currently, my program looks in the following way:
#include <iostream>
#include <cstdlib>
#include <math.h>

// Euler's number
using namespace std;

double factorial(double n)
{
    double result = 1;
    for(double i = 1; i <= n; i++)
    {
        result = result * i;
    }
    return result;
}

int main()
{
    long double euler = 2;
    long double counter = 2;
    long double epsilon = 1.0/1000;
    long double moduloDifference;
    do
    {
        euler += 1 / factorial(counter);
        counter++;
        moduloDifference = (euler + 1 / factorial(counter+1) - euler);
    } while(moduloDifference >= epsilon);
    printf("%.35Lf ", euler);
    return 0;
}
Issues:
It seems my epsilon value does not work properly. It is supposed to control the precision. For example, when I want a precision of 5 digits, I initialize it to 1.0/10000, and the output is only correct to about 3 digits (.7180).
When I use the long double data type and epsilon = 1/10000, my epsilon gets the value 0 and my program runs infinitely; yet if I change the data type from long double to double, it works. Why does epsilon become 0 when using the long double data type?
How can I optimize the algorithm for finding Euler's number? I know I can get rid of the function and calculate the value on the fly, but after each attempt to do that I get other errors.
One problem with computing Euler's constant this way is pretty simple: you're starting with some fairly large numbers, but since the denominator in each term is N!, the amount added by each successive term shrinks very quickly. Using naive summation, you quickly reach a point where the value you're adding is small enough that it no longer affects the sum.
In the specific case of Euler's constant, since the numbers constantly decrease, one way we can deal with them quite a bit better is to compute and store all the terms, then add them up in reverse order.
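For instance, a minimal sketch of that store-and-add-in-reverse idea (the function name and term count are illustrative):

#include <numeric>
#include <vector>

long double euler_reverse_sum(int terms) {
    std::vector<long double> t;
    long double fact = 1.0L;
    for (int n = 0; n < terms; ++n) {
        t.push_back(1.0L / fact); // 1/n!
        fact *= n + 1;            // (n+1)! for the next pass
    }
    // add the smallest terms first so they are not swallowed by the big ones
    return std::accumulate(t.rbegin(), t.rend(), 0.0L);
}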
Another possibility that's more general is to use Kahan's summation algorithm instead. This keeps track of a running error while it's doing the summation, and takes the current error into account as it's adding each successive term.
For example, I've rewritten your code to use Kahan summation to compute to (approximately) the limit of precision of a typical (80-bit) long double:
#include <iostream>
#include <cstdlib>
#include <math.h>
#include <vector>
#include <iomanip>
#include <iterator>
#include <limits>

// Euler's number
using namespace std;

long double factorial(long double n)
{
    long double result = 1.0L;
    for(int i = 1; i <= n; i++)
    {
        result = result * i;
    }
    return result;
}

template <class InIt>
typename std::iterator_traits<InIt>::value_type accumulate(InIt begin, InIt end) {
    typedef typename std::iterator_traits<InIt>::value_type real;
    real sum = real();
    real running_error = real();

    for ( ; begin != end; ++begin) {
        real difference = *begin - running_error;
        real temp = sum + difference;
        running_error = (temp - sum) - difference;
        sum = temp;
    }
    return sum;
}

int main()
{
    std::vector<long double> terms;
    long double epsilon = 1e-19;
    long double term;

    for (int i = 0; (term = 1.0L / factorial(i)) >= epsilon; i++)
        terms.push_back(term);

    int width = std::numeric_limits<long double>::digits10;
    std::cout << std::setw(width) << std::setprecision(width)
              << accumulate(terms.begin(), terms.end()) << "\n";
}
Result: 2.71828182845904522
In fairness, I should actually add that I haven't checked what happens with your code using naive summation--it's possible the problem you're seeing is from some other source. On the other hand, this does fit fairly well with a type of situation where Kahan summation stands at least a reasonable chance of improving results.
#include <iostream>
#include <cmath>
#include <iomanip>

#define EPSILON (1.0/10000000)
#define AMOUNT 6

using namespace std;

int main() {
    long double e = 2.0, e0;
    long double factorial = 1;
    int counter = 2;
    long double moduloDifference;

    do {
        e0 = e;
        factorial *= counter++;
        e += 1.0 / factorial;
        moduloDifference = fabs(e - e0);
    } while (moduloDifference >= EPSILON);

    cout << "Result:" << endl;
    cout << setprecision(AMOUNT) << e << endl;
    return 0;
}
This is an optimized version that does not use a separate function to calculate the factorial.
Issue 1: I am still not sure how EPSILON manages the precision.
Issue 2: I do not understand the real difference between long double and double. Regarding my code, why does long double require a decimal point (1.0/someNumber) while double doesn't (1/someNumber)?
I want to write a program where the input is two integer values x and y,
and then:
Let s be the set {x^0, x^1, ..., x^y}; store it in an array.
Repeat:
Partition the set s into two subsets: s1 and s2.
Find the sum of each of the two subsets and store them in variables like sum1, sum2.
Calculate the product of sum1 * sum2.
The program ends after passing over all the partitions that could be formed, and then prints the max value of the product sum1 * sum2.
Example: suppose x=2, y=3, s = {1,2,4,8}. One of the divisions is to take s1 = {1,4}, s2 = {2,8}, so sum1 = 5, sum2 = 10 and the product is 50. That will be compared to other products calculated in the same way, like s1 = {1}, s2 = {2,4,8} with sum1 = 1, sum2 = 14 and product 14, and so on.
My code so far:
#include <iostream>

using namespace std;

int main ()
{
    int a[10000]; // max number of elements expected
    int x;
    int y;
    cin >> x;
    cin >> y;
    int xexpy = 1;
    int k;
    for (int i = 0; i <= y; i++)
    {
        xexpy = 1;
        k = i;
        while(k > 0)
        {
            xexpy = xexpy * x;
            k--;
        }
        cout << "\n" << xexpy;
        a[i] = xexpy;
    }
    return 0;
}
This is not a programming problem, it is a combinatorics problem with a theoretical rather than an empirical approach to its solution. You can simply print the correct solution and not bother iterating over any partitions.
Why is that?
Let

z = sum1 / (sum1 + sum2)

i.e. z is the fraction of the sum of all s elements that's in s1. It holds that

sum2 = (1 - z) * (sum1 + sum2)

and thus the product of both sets satisfies:

sum1 * sum2 = z * (1 - z) * (sum1 + sum2)^2

As a function of z (not of x and y), this is a parabola that takes its maximum at z = 1/2, and there are no other local maximum points, i.e. getting closer to 1/2 necessarily increases that product. Thus what you want to do is partition the full set so that s1 and s2 each have a sum as close as possible to half the sum of all elements.
In general, you might have had to use programming to consider multiple subsets, but your elements are given by a formula - the formula of a geometric sequence - so you don't have to.
First, let's assume x >= 2 and y >= 2, otherwise this is not an interesting problem.
Now, for x >= 2, we know that

x^0 + x^1 + ... + x^(y-1) = (x^y - 1) / (x - 1) <= x^y - 1 < x^y

(the sum of a geometric sequence), i.e. the last element always outweighs all other elements put together. That's why you always want to choose {x^y} as s1 and all the other elements as s2. No need to run any program. You can then also easily calculate the optimum product-of-sums.
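A minimal sketch of that closed-form calculation (hedged: it assumes x >= 2, y >= 2 and that everything fits in unsigned long long; the names are illustrative, not from the question):

#include <iostream>

int main() {
    unsigned long long x = 2, y = 3; // the example from the question
    unsigned long long xy = 1;       // x^y, the lone element of s1
    for (unsigned long long i = 0; i < y; ++i)
        xy *= x;
    unsigned long long rest = (xy - 1) / (x - 1); // x^0 + ... + x^(y-1), the sum of s2
    std::cout << xy * rest << "\n";  // optimum product: 8 * 7 = 56
}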
Note: If we don't make assumptions about the elements of s, except that they're non-negative integers, finding the optimum solution is an optimization version of the Partition problem - which is NP-complete. That means, very roughly, that there is no solution fundamentally more efficient than just trying all possible combinations.
Here's a cheesy all-combinations-of-supplied-arguments generator, provided without comment or explanation because I think this is homework, and the exercise of understanding how and why this does what it does is the point here.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

using namespace std;

int main(int c, const char **v)
{
    basic_string<const char *> options(v);
    auto N(options.length());
    for (auto n = 1u; n < N; ++n) {
        vector<char> pick(N);
        fill_n(pick.rbegin(), n, 1);
        do for (auto j = 1u; j < N; ++j)
            if (pick[j])
                cout << options[j] << ' ';
        while (cout << '\n', next_permutation(begin(pick) + 1, end(pick)));
    }
}
I'm working on a program that approximates the sine function with a Taylor series. The approximation must stop once it reaches a precision of .00001; in other words, when the absolute value of the last approximation minus the current approximation is less than or equal to 0.00001. It also approximates each angle from 0 to 360 degrees in 15-degree increments. My logic seems to be correct, but I cannot figure out why I am getting garbage values. Any help is appreciated!
#include <math.h>
#include <iomanip>
#include <iostream>
#include <string>
#include <stdlib.h>
#include <cmath>

double fact(int x){
    int F = 1;
    for(int i = 1; i <= x; i++){
        F *= i;
    }
    return F;
}

double degreesToRadians(double angle_in_degrees){
    double rad = (angle_in_degrees*M_PI)/180;
    return rad;
}

using namespace std;

double mySine(double x){
    int current = 99999;
    double comSin = x;
    double prev = 0;
    int counter1 = 3;
    int counter2 = 1;
    while(current > 0.00001){
        prev = comSin;
        if((counter2 % 2) == 0){
            comSin += (pow(x,(counter1))/(fact(counter1)));
        }else{
            comSin -= (pow(x,(counter1))/(fact(counter1)));
        }
        current = abs(prev-comSin);
        cout << current << endl;
        counter1 += 2;
        counter2 += 1;
    }
    return comSin;
}

using namespace std;

int main(){
    cout << "Angle\tSine" << endl;
    for (int i = 0; i <= 360; i += 15){
        cout << i << "\t" << mySine(degreesToRadians(i));
    }
}
Here is an example which illustrates how to go about doing this.
Using the pow function and calculating the factorial at each iteration is very inefficient -- these can often be maintained as running values which are updated alongside the sum during each iteration.
In this case, each iteration's addend is the product of two factors: a power of x and a (reciprocal) factorial. To get from one iteration's power factor to the next iteration's, just multiply by x*x. To get from one iteration's factorial factor to the next iteration's, just multiply by ((2*n+1) + 1) * ((2*n+1) + 2), before incrementing n (the iteration number).
And because these two factors are updated multiplicatively, they do not need to exist as separate running values; they can exist as a single running product. This also helps avoid precision problems -- both the power factor and the factorial can become large very quickly, but the ratio of their values goes to zero relatively gradually and is well-behaved as a running value.
So this example maintains these running values, updated at each iteration:
"sum" (of course)
"prod", the ratio: pow(x, 2n+1) / factorial 2n+1
"tnp1", the value of 2*n+1 (used in the factorial update)
The running update value, "prod", is negated every iteration in order to factor in the (-1)^n.
I also included the function "XlatedSine". When x is too far away from zero, the sum requires more iterations for an accurate result, which takes longer to run and also can require more precision than our floating-point values can provide. When the magnitude of x goes beyond PI, "XlatedSine" finds another x, close to zero, with an equivalent value for sin(x), then uses this shifted x in a call to MaclaurinSine.
#include <iostream>
#include <iomanip>

// Importing cmath seemed wrong LOL, so define Abs and PI
static double Abs(double x) { return x < 0 ? -x : x; }
const double PI = 3.14159265358979323846;

// Taylor series about x==0 for sin(x):
//
// Sum(n=[0...oo]) { ((-1)^n) * (x^(2*n+1)) / (2*n + 1)! }
//
double MaclaurinSine(double x) {
    const double xsq = x*x;  // cached constant x squared
    int tnp1 = 3;            // 2*n+1 | n==1
    double prod = xsq*x / 6; // pow(x, 2*n+1) / (2*n+1)! | n==1
    double sum = x;          // sum after n==0
    for(;;) {
        prod = -prod;
        sum += prod;
        static const double MinUpdate = 0.00001; // try zero -- the factorial will always dominate the power of x, eventually
        if(Abs(prod) <= MinUpdate) {
            return sum;
        }
        // Update the two factors in prod
        prod *= xsq;                     // add 2 to the power factor's exponent
        prod /= (tnp1 + 1) * (tnp1 + 2); // update the factorial factor by two iterations
        tnp1 += 2;
    }
}

// XlatedSine translates x to an angle close to zero which will produce the equivalent result.
double XlatedSine(double x) {
    if(Abs(x) >= PI) {
        // Use int casting to do an fmod PI (but symmetric about zero).
        // Keep in mind that a really big x could overflow the int,
        // however such a large double value will have lost so much precision
        // at a sub-PI-sized scale that doing this in a legit fashion
        // would also disappoint.
        const int p = static_cast<int>(x / PI);
        x -= PI * p;
        if(p % 2) {
            x = -x;
        }
    }
    return MaclaurinSine(x);
}

double DegreesToRadians(double angle_deg) {
    return PI / 180 * angle_deg;
}

int main() {
    std::cout << "Angle\tSine\n" << std::setprecision(12);
    for(int i = 0; i <= 360; i += 15) {
        std::cout << i << "\t" << MaclaurinSine(DegreesToRadians(i)) << "\n";
        //std::cout << i << "\t" << XlatedSine(DegreesToRadians(i)) << "\n";
    }
}
In my program, I am trying to find the largest prime factor of the number 600851475143. I have made one for loop that determines all the factors of that number and stores them in a vector. The problem I am having is that I don't know how to determine whether the square root of a factor is a whole number rather than a decimal. My code so far is:
#include <iostream>
#include <vector>
#include <math.h>

using namespace std;

vector <int> factors;

int main()
{
    double num = 600851475143;
    for (int i = 1; i <= num; i++)
    {
        if (fmod(num,i) == 0)
        {
            factors.push_back(i);
        }
    }
    for (int i = 0; i < factors.size(); i++)
    {
        if (sqrt(factor[i])) // ???
    }
}
Can someone show me how to determine whether a number can be square rooted or not through my if statement?
int s = sqrt(factor[i]);
if ((s * s) == factor[i])
As hobbs pointed out in the comments,
Assuming that double is the usual 64-bit IEEE-754 double-precision float, for values less than 2^53 the difference between one double and the next representable double is less than or equal to 1. Above 2^53, the precision is worse than integer.
So if your int is 32 bits you are safe. If you have to deal with numbers bigger than 2^53, you may have some precision errors.
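A standalone sketch illustrating that 2^53 threshold (my example, not from the thread):

#include <iostream>

int main() {
    double a = 9007199254740992.0; // 2^53
    double b = a + 1.0;            // 2^53 + 1 is not representable; it rounds back to 2^53
    std::cout << (a == b) << "\n"; // prints 1
}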
Perfect squares can only end in 0, 1, 4, or 9 in base 16, so for 75% of your inputs (assuming they are uniformly distributed) you can avoid a call to the square root in exchange for some very fast bit twiddling.
int isPerfectSquare(int n)
{
    int h = n & 0xF; // h is the last hex "digit"
    if (h > 9)
        return 0;
    // Use lazy evaluation to jump out of the if statement as soon as possible
    if (h != 2 && h != 3 && h != 5 && h != 6 && h != 7 && h != 8)
    {
        int t = (int) floor( sqrt((double) n) + 0.5 );
        return t*t == n;
    }
    return 0;
}
usage:
for (int i = 0; i < factors.size(); i++) {
    if (isPerfectSquare(factor[i]))
        //...
}
Fastest way to determine if an integer's square root is an integer
The following should work. It takes advantage of integer truncation.
if (int (sqrt(factor[i])) * int (sqrt(factor[i])) == factor[i])
It works because the square root of a non-square number has a fractional part. By converting to an integer, you remove the fractional part of the double. Once you square this, it is no longer equal to the original number.
You also have to take into account the round-off error when comparing to zero. You can use std::round if your compiler supports C++11; if not, you can do it yourself (here):
#include <iostream>
#include <vector>
#include <math.h>

using namespace std;

vector <int> factors;

int main()
{
    double num = 600851475143;
    for (int i = 1; i <= num; i++)
    {
        if (round(fmod(num,i)) == 0)
        {
            factors.push_back(i);
        }
    }
    for (int i = 0; i < factors.size(); i++)
    {
        int s = sqrt(factors[i]);
        if ((s * s) == factors[i])
        {
            // factors[i] is a perfect square
        }
    }
}
You are asking the wrong question; your algorithm is wrong. (Well, not entirely, but if it were corrected following the presented idea, it would be quite inefficient.) With your approach, you also need to check for cubes, fifth powers and all other prime powers, recursively. Try to find all factors of 5120 = 5 * 2^10, for example.
The much easier way is to remove a factor after it is found, by dividing

num = num / i

and only increasing i when it is no longer a factor. Then, if the iteration encounters some i = j^2 or i = j^3, ..., all factors j, if any, were already removed at an earlier stage, when i had the value j, and were accounted for in the factor array.
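A minimal sketch of that division approach (my illustration, not the poster's code; it uses long long instead of double so the value 600851475143 stays exact):

#include <iostream>

int main() {
    long long num = 600851475143LL;
    long long largest = 1;
    for (long long i = 2; i * i <= num; ) {
        if (num % i == 0) {
            largest = i;
            num /= i; // remove the factor; keep i in case it divides again
        } else {
            ++i;      // only advance when i no longer divides num
        }
    }
    if (num > 1) largest = num;   // whatever remains is the largest prime factor
    std::cout << largest << "\n"; // 6857
}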
You could also have mentioned that this is Project Euler, problem 3. Then you would possibly have found the recent discussion "advice on how to make my algorithm faster", where more efficient variants of the factorization algorithm were discussed.
Here is a simple C++ function I wrote for determining whether a number has an integer square root or not:
bool has_sqrtroot(int n)
{
    double sqrtroot = sqrt(n);
    double flr = floor(sqrtroot);
    if(abs(sqrtroot - flr) <= 1e-9)
        return true;
    return false;
}
As the sqrt() function works with floating-point, it is better to avoid working with its return value (floating-point calculation occasionally gives a wrong result because of precision error). Instead, you can write a function isSquareNumber(int n) that decides whether the number is a square number, with the whole calculation done in integers.
bool isSquareNumber(int n){
    int l = 1, h = n;
    while(l <= h){
        int m = (l + h) / 2;
        if(m*m == n){
            return true;
        }else if(m*m > n){
            h = m - 1;
        }else{
            l = m + 1;
        }
    }
    return false;
}

int main()
{
    // ......
    for (int i = 0; i < factors.size(); i++){
        if (isSquareNumber(factor[i]) == true){
            /// code
        }
    }
}