Why do floating point numbers not give the desired answer? - C++

Hey, I am making a small C++ program to calculate the value of sin(x) to 7 decimal places, but when I calculate sin(PI/2) using this program it gives me 0.9999997 rather than 1.0000000. How can I solve this error?
I know a little bit about why I'm getting this value as output; the question is what my approach should be to solve this logical error.
Here is my code for reference:
#include <iostream>
#include <iomanip>
#define PI 3.1415926535897932384626433832795
using namespace std;
double sin(double x);
int factorial(int n);
double Pow(double a, int b);
int main()
{
    double x = PI / 2;
    cout << setprecision(7) << sin(x);
    return 0;
}

double sin(double x)
{
    int n = 1;      // counter for odd powers.
    double Sum = 0; // to store every individual expression.
    double t = 1;   // temp variable to store individual expression
    for (n = 1; t > 10e-7; Sum += t, n = n + 2)
    {
        // here i have calculated two terms at a time because addition of two consecutive terms is always less than 1.
        t = (Pow(-1.00, n + 1) * Pow(x, (2 * n) - 1) / factorial((2 * n) - 1))
            +
            (Pow(-1.00, n + 2) * Pow(x, (2 * (n + 1)) - 1) / factorial((2 * (n + 1)) - 1));
    }
    return Sum;
}

int factorial(int n)
{
    if (n < 2)
    {
        return 1;
    }
    else
    {
        return n * factorial(n - 1);
    }
}

double Pow(double a, int b)
{
    if (b == 1)
    {
        return a;
    }
    else
    {
        return a * Pow(a, b - 1);
    }
}

sin(PI/2) ... it gives me 0.9999997 rather than 1.0000000
For values outside [-pi/4 ... +pi/4] the Taylor sin/cos series converges slowly and suffers from cancellation of terms and overflow of int factorial(int n)**. Stay in the sweet range.
Consider using trig properties such as sin(x + pi/2) = cos(x), sin(x + pi) = -sin(x), etc. to bring x into the [-pi/4 ... +pi/4] range.
The code below uses remquo to find the remainder and part of the quotient.
// Bring x into the -pi/4 ... pi/4 range (i.e. +/- 45 degrees)
// and then call one's own sin/cos functions.
double my_wide_range_sin(double x) {
    if (x < 0.0) {
        return -my_wide_range_sin(-x);
    }
    int quo;
    double x90 = remquo(fabs(x), pi/2, &quo);
    switch (quo % 4) {
        case 0: return sin_sweet_range(x90);
        case 1: return cos_sweet_range(x90);
        case 2: return sin_sweet_range(-x90);
        case 3: return -cos_sweet_range(x90);
    }
    return 0.0;
}
This implies OP needs to code up a cos() function too.
** You could use long long instead of int to marginally extend the useful range of int factorial(int n), but that only buys a few more terms. You could also use double.
A better approach would not use factorial() at all, but would scale each successive term by 1.0/(n * (n+1)) or the like, as sketched below.
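For illustration, a minimal sketch of that idea (my own code, not taken from the answer above): the running term is multiplied by -x*x / ((k + 1) * (k + 2)) to step from the x^k / k! term to the x^(k+2) / (k+2)! term, so no factorial or Pow call is ever needed.

#include <cmath>

double sin_series(double x)
{
    double term = x;    // first term of the series: x^1 / 1!
    double sum  = x;
    for (int k = 1; std::fabs(term) > 1e-10; k += 2)
    {
        term *= -x * x / ((k + 1) * (k + 2));  // next odd-power term
        sum  += term;
    }
    return sum;
}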

I see three bugs:
10e-7 is 10*10^(-7) which seems to be 10 times larger than you want. I think you wanted 1e-7.
Your test t > 10e-7 will become false, and exit the loop, if t is still large but negative. You may want abs(t) > 1e-7.
To get the desired accuracy, you need to get up to n = 7, which has you computing factorial(13), which overflows a 32-bit int. (If using gcc you can catch this with -fsanitize=undefined or -ftrapv.) You can gain some breathing room by using long long int which is at least 64 bits, or int64_t.
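As a hedged illustration (not a drop-in patch of the original program), the loop could look like this with all three fixes applied: 1e-7 instead of 10e-7, fabs() in the loop test, and a 64-bit factorial.

#include <cmath>

long long factorial(int n)   // long long survives up to 20!
{
    return n < 2 ? 1LL : n * factorial(n - 1);
}

double sin_taylor(double x)
{
    double sum = 0;
    double t = 1;                              // dummy value to enter the loop
    for (int n = 1; std::fabs(t) > 1e-7; n++)  // fabs(), and 1e-7 rather than 10e-7
    {
        t = std::pow(-1.0, n + 1) * std::pow(x, 2 * n - 1) / factorial(2 * n - 1);
        sum += t;
    }
    return sum;
}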


Efficiently convert two Integers x and y into the float x.y

Given two integers X and Y, what's the most efficient way of converting them into the X.Y float value in C++?
E.g.
X = 3, Y = 1415 -> 3.1415
X = 2, Y = 12 -> 2.12
Here are some cocktail-napkin benchmark results, on my machine, for all solutions converting two ints to a float, as of the time of writing.
Caveat: I've now added a solution of my own, which seems to do well, and am therefore biased! Please double-check my results.
Test                           Iterations      ns / iteration
#aliberro's conversion v2          79,113,375              13
#3Dave's conversion                84,091,005              12
#einpoklum's conversion         1,966,008,981               0
#Ripi2's conversion                47,374,058              21
#TarekDakhran's conversion      1,960,763,847               0
CPU: Quad Core Intel Core i5-7600K speed/min/max: 4000/800/4200 MHz
Devuan GNU/Linux 3
Kernel: 5.2.0-3-amd64 x86_64
GCC 9.2.1, with flags: -O3 -march=native -mtune=native
Benchmark code (Github Gist).
float sum = x + y / pow(10,floor(log10(y)+1));
log10 returns log (base 10) of its argument. For 1234, that'll be 3 point something.
Breaking this down:
log10(1234) = 3.091315159697223
floor(log10(1234)+1) = 4
pow(10,4) = 10000.0
3 + 1234 / 10000.0 = 3.1234.
But, as #einpoklum pointed out, log(0) is NaN, so you have to check for that.
#include <iostream>
#include <cmath>
#include <vector>
using namespace std;

float foo(int x, unsigned int y)
{
    if (0 == y)
        return x;
    float den = pow(10, -1 * floor(log10(y) + 1));
    return x + y * den;
}

int main()
{
    vector<vector<int>> tests
    {
        {3, 1234},
        {1, 1000},
        {2, 12},
        {0, 0},
        {9, 1}
    };
    for (auto& test : tests)
    {
        cout << "Test: " << test[0] << "," << test[1] << ": " << foo(test[0], test[1]) << endl;
    }
    return 0;
}
See runnable version at:
https://onlinegdb.com/rkaYiDcPI
With test output:
Test: 3,1234: 3.1234
Test: 1,1000: 1.1
Test: 2,12: 2.12
Test: 0,0: 0
Test: 9,1: 9.1
Edit
Small modification to remove division operation.
(reworked solution)
Initially, my thoughts were to improve on the performance of the power-of-10 and division-by-power-of-10 steps by writing specialized versions of these functions for integers. Then there was #TarekDakhran's comment about doing the same for counting the number of digits. And then I realized: that's essentially doing the same thing twice... so let's just integrate everything. This will, specifically, allow us to completely avoid any divisions or inversions at runtime:
inline float convert(int x, int y) {
    float fy(y);
    if (y == 0)   { return float(x); }
    if (y >= 1e9) { return float(x + fy * 1e-10f); }
    if (y >= 1e8) { return float(x + fy * 1e-9f); }
    if (y >= 1e7) { return float(x + fy * 1e-8f); }
    if (y >= 1e6) { return float(x + fy * 1e-7f); }
    if (y >= 1e5) { return float(x + fy * 1e-6f); }
    if (y >= 1e4) { return float(x + fy * 1e-5f); }
    if (y >= 1e3) { return float(x + fy * 1e-4f); }
    if (y >= 1e2) { return float(x + fy * 1e-3f); }
    if (y >= 1e1) { return float(x + fy * 1e-2f); }
    return float(x + fy * 1e-1f);
}
Additional notes:
This will work for y == 0, but not for negative x or y values. Adapting it for negative values is pretty easy and not very expensive, though.
Not sure if this is absolutely optimal. Perhaps a binary search for the number of digits of y would work better? (A sketch of that idea follows these notes.)
A loop would make the code look nicer; but the compiler would need to unroll it. Would it unroll the loop and compute all those floats beforehand? I'm not sure.
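For illustration only, here is a sketch of that binary-search idea (the function name is made up); it narrows down the digit count of y with a handful of comparisons instead of testing every power of ten in order.

#include <cstdint>

inline int digit_count_bsearch(uint32_t y) {
    if (y >= 100000u) {                                         // 6 to 10 digits
        if (y >= 100000000u) return (y >= 1000000000u) ? 10 : 9;
        if (y >= 10000000u)  return 8;
        return (y >= 1000000u) ? 7 : 6;
    }
    if (y >= 1000u) return (y >= 10000u) ? 5 : 4;               // 4 or 5 digits
    if (y >= 10u)   return (y >= 100u) ? 3 : 2;                 // 2 or 3 digits
    return 1;                                                   // 0 to 9
}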
I put some effort into optimizing my previous answer and ended up with this.
#include <cstdint>

inline uint32_t digits_10(uint32_t x) {
    return 1u
        + (x >= 10u)
        + (x >= 100u)
        + (x >= 1000u)
        + (x >= 10000u)
        + (x >= 100000u)
        + (x >= 1000000u)
        + (x >= 10000000u)
        + (x >= 100000000u)
        + (x >= 1000000000u)
        ;
}

inline uint64_t pow_10(uint32_t exp) {
    uint64_t res = 1;
    while (exp--) {
        res *= 10u;
    }
    return res;
}

inline double fast_zip(uint32_t x, uint32_t y) {
    return x + static_cast<double>(y) / pow_10(digits_10(y));
}
double IntsToDbl(int ipart, int decpart)
{
    // The decimal part:
    double dp = (double)decpart;
    while (dp > 1)
    {
        dp /= 10;
    }
    // Join both parts
    return ipart + dp;
}
A simple and very fast solution is to convert both values x and y to strings, concatenate them, and then cast the result to a floating point number, as follows:
#include <string>
#include <iostream>
std::string x_string = std::to_string(x);
std::string y_string = std::to_string(y);
std::cout << x_string +"."+ y_string ; // the result, cast it to float if needed
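A minimal, hedged completion of this idea (the variable values are just an example): build the string, then convert it with std::stof if a numeric value is actually needed (std::stod for a double).

#include <iostream>
#include <string>

int main()
{
    int x = 3, y = 1415;
    std::string s = std::to_string(x) + "." + std::to_string(y);
    float f = std::stof(s);                  // "3.1415" -> 3.1415f
    std::cout << s << " -> " << f << '\n';
    return 0;
}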
(Answer based on the fact that OP has not indicated what they want to use the float for.)
The fastest (most efficient) way is to do it implicitly, but not actually do anything (after compiler optimizations).
That is, write a "pseudo-float" class, whose members are integers of x and y's types before and after the decimal point; and have operators for doing whatever it is you were going to do with the float: operator+, operator*, operator/, operator- and maybe even implementations of pow(), log2(), log10() and so on.
Unless what you were planning to do is literally save a 4-byte float somewhere for later use, it would almost certainly be faster to wait until you have the next operand you need to work with than to actually create a float from just x and y, already losing precision and wasting time.
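A rough sketch of what such a pseudo-float class could look like (all names here are invented for illustration, and only a couple of operations are shown):

#include <cmath>

struct PseudoFloat {
    int whole;        // digits before the decimal point
    int frac;         // digits after the decimal point
    int frac_digits;  // how many decimal digits frac stands for

    // Multiply by an integer without ever leaving integer arithmetic.
    PseudoFloat times(int k) const {
        return {whole * k, frac * k, frac_digits};  // frac may grow past frac_digits; the value stays correct
    }

    // Convert only when a real float is finally required.
    float to_float() const {
        return static_cast<float>(whole + frac / std::pow(10.0, frac_digits));
    }
};

For example, PseudoFloat{3, 1415, 4}.times(2).to_float() gives 6.283 while performing only integer multiplications until the final conversion.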
Try this
#include <iostream>
#include <math.h>
using namespace std;
float int2Float(int integer, int decimal)
{
    float sign = integer / abs(integer);
    float tm = abs(integer), tm2 = abs(decimal);
    int base = decimal == 0 ? -1 : log10(decimal);
    tm2 /= pow(10, base + 1);
    return (tm + tm2) * sign;
}

int main()
{
    int x, y;
    cin >> x >> y;
    cout << int2Float(x, y);
    return 0;
}
version 2, try this out
#include <iostream>
#include <cmath>
using namespace std;
float getPlaces(int x)
{
    unsigned char p = 0;
    while (x != 0)
    {
        x /= 10;
        p++;
    }
    float pow10[] = {1.0f, 10.0f, 100.0f, 1000.0f, 10000.0f, 100000.0f}; // don't need more
    return pow10[p];
}

float int2Float(int x, int y)
{
    if (y == 0) return x;
    float sign = x != 0 ? x / abs(x) : 1;
    float tm = abs(x), tm2 = abs(y);
    tm2 /= getPlaces(y);
    return (tm + tm2) * sign;
}

int main()
{
    int x, y;
    cin >> x >> y;
    cout << int2Float(x, y);
    return 0;
}
If you want something that is simple to read and follow, you could try something like this:
float convertToDecimal(int x)
{
    float y = (float)x;
    while (y > 1) {
        y = y / 10;
    }
    return y;
}

float convertToDecimal(int x, int y)
{
    return (float)x + convertToDecimal(y);
}
This simply reduces one integer to the first floating point less than 1 and adds it to the other one.
This does become a problem if you ever want to use a number like 1.0012 to be represented as 2 integers. But that isn't part of the question. To solve it, I would use a third integer representation to be the negative power of 10 for multiplying the second number. IE 1.0012 would be 1, 12, 4. This would then be coded as follows:
float convertToDecimal(int num, int e)
{
    return ((float)num) / pow(10, e);
}

float convertToDecimal(int x, int y, int e)
{
    return (float)x + convertToDecimal(y, e);
}
It's a little more concise with this answer, but it doesn't help to answer your question. It might help show a problem with using only 2 integers if you stick with that data model.

Composite Simpson's Rule in C++

I've been trying to write a function to approximate the value of an integral using the Composite Simpson's Rule.
template <typename func_type>
double simp_rule(double a, double b, int n, func_type f){
    int i = 1;
    double area = 0;
    double n2 = n;
    double h = (b - a) / (n2 - 1), x = a;
    while (i <= n) {
        area = area + f(x) * pow(2, i % 2 + 1) * h / 3;
        x += h;
        i++;
    }
    area -= (f(a) * h / 3);
    area -= (f(b) * h / 3);
    return area;
}
What I do is multiply each value of the function by either 2 or 4 (and h/3) with pow(2,i%2 + 1) and subtract off the edges as these should only have a weight of 1.
At first, I thought it worked just fine, however, when I compared it to my Trapezoidal Method function it was way more inaccurate which shouldn't be the case.
This is a simpler version of a code I previously wrote which had the same problem, I thought that if I cleaned it up a little the problem would go away, but alas. From another post, I get the idea that there's something going on with the types and the operations I'm doing on them which results in loss of precision, but I just don't see it.
Edit:
For completeness, I was running it for e^x from zero to one.
// function to be approximated
double f(double x) { double a = exp(x); return a; }

int main() {
    int n = 11; // this method works best for odd values of n
    double e = exp(1);
    double exact = e - 1; // value of integral of e^x from 0 to 1
    cout << simp_rule(0, 1, n, f) - exact;
}
Simpson's Rule uses this approximation to estimate a definite integral:
\int_a^b f(x) dx \approx (h/3) [ f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + ... + 2 f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n) ]
where h = (b - a) / n and x_i = a + i*h, so that there are n + 1 equally spaced sample points x_i.
In the posted code, the parameter n passed to the function appears to be the number of points where the function is sampled (while in the previous formula n is the number of intervals, that's not a problem).
The (constant) distance between the points is calculated correctly
double h = (b - a) / (n - 1);
The while loop used to sum the weighted contributions of all the points iterates from x = a up to a point with an abscissa close to b, but probably not exactly b, due to rounding errors. This implies that the last calculated value of f, f(x_n), may be slightly different from the expected f(b).
This is nothing, though, compared to the error caused by the fact that those end points are summed inside the loop with a weight of 4 and then subtracted after the loop with weight 1, while all the inner points have their weights switched (2 where Simpson's rule wants 4, and vice versa), so the code effectively evaluates the sum with the wrong weights.
Also, using pow(2, i%2 + 1) to generate the sequence 4, 2, 4, 2, ..., 4 is a waste in terms of efficiency, and may add (depending on the implementation) other unnecessary rounding errors.
The following algorithm shows how to obtain the same (fixed) result, without a call to that library function.
template <typename func_type>
double simpson_rule(double a, double b,
                    int n, // Number of intervals
                    func_type f)
{
    double h = (b - a) / n;

    // Internal sample points, there should be n - 1 of them
    double sum_odds = 0.0;
    for (int i = 1; i < n; i += 2)
    {
        sum_odds += f(a + i * h);
    }
    double sum_evens = 0.0;
    for (int i = 2; i < n; i += 2)
    {
        sum_evens += f(a + i * h);
    }

    return (f(a) + f(b) + 2 * sum_evens + 4 * sum_odds) * h / 3;
}
Note that this function requires the number of intervals (e.g. use 10 instead of 11 to obtain the same results of OP's function) to be passed, not the number of points.
Testable here.
The above excellent and accepted solution could benefit from liberal use of std::fma() and from being templatized on the floating-point type.
https://en.cppreference.com/w/cpp/numeric/math/fma
#include <cmath>

template <typename fptype, typename func_type>
fptype simpson_rule(fptype a, fptype b,
                    int n, // Number of intervals
                    func_type f)
{
    fptype h = (b - a) / n;

    // Internal sample points, there should be n - 1 of them
    fptype sum_odds = 0.0;
    for (int i = 1; i < n; i += 2)
    {
        sum_odds += f(std::fma(i, h, a));
    }
    fptype sum_evens = 0.0;
    for (int i = 2; i < n; i += 2)
    {
        sum_evens += f(std::fma(i, h, a));
    }

    return (std::fma(2, sum_evens, f(a)) +
            std::fma(4, sum_odds, f(b))) * h / 3;
}

How to fix "PI recursive function" code to work with all values of n?

I'm working on a program that calculates PI with n terms. However, my code only works correctly for some values of n.
With this piece of code, even values of n do not work, and when I switch the negative sign, the odd values do not work.
double PI(int n, double y=2){
    double sum = 0;
    if (n == 0){
        return 3;
    } else if (n % 2 != 0){
        sum = (4/(y*(y+1)*(y+2))) + (PI(n - 1, y+2));
    } else {
        sum = -(4/(y*(y+1)*(y+2))) + PI(n - 1, y+2);
    }
    return sum;
}

int main(int argc, const char * argv[]) {
    double n = PI(2, 2);
    cout << n << endl;
}
For n = 2 I expected a result of 3.1333 but I got a value of 2.86667
This is the formula I am using to calculate PI (y is the denominator of each term and n is the number of terms): pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
Firstly, I will assume that a complete runnable case of your code looks like
#include <iostream>
using namespace std;

double PI(int n, double y=2){
    double sum = 0;
    if (n == 0){
        return 3;
    } else if (n % 2 != 0){
        sum = (4/(y*(y+1)*(y+2))) + (PI(n - 1, y+2));
    } else {
        sum = -(4/(y*(y+1)*(y+2))) + PI(n - 1, y+2);
    }
    return sum;
}

int main(int argc, const char * argv[]) {
    double n = PI(2, 2);
    cout << n << endl;
}
I believe that you are attempting to compute pi through the formula
(pi - 3)/4 = \sum_{k = 1}^{\infty} (-1)^{k+1} / (2k(2k+1)(2k+2)),
(where here and elsewhere I use LaTeX code to represent mathy things). This is a good formula that converges pretty quickly despite being so simple. If you were to use the first two terms of the sum, you would find that
(pi - 3)/4 \approx 1/(2*3*4) - 1/(4*5*6) ==> pi \approx 3.13333,
which you seem to indicate in your question.
To see what's wrong, you might trace through your first function call with PI(2, 2). This produces three terms.
n=2: 2 % 2 == 0, so the first term is -4/(2*3*4) + PI(1, 4). This is the wrong sign.
n=1: 1 % 2 == 1, so the second term is 4/(4*5*6), which is also the wrong sign.
n=0: n == 0, so the third term is 3, which is the correct sign.
So you have computed
3 - 4/(2*3*4) + 4/(4*5*6)
and we can see that there are many sign errors.
The underlying reason is because you are determining the sign based on n, but if you examine the formula the sign depends on y. Or in particular, it depends on whether y/2 is odd or even (in your formulation, where you are apparently only going to provide even y values to your sum).
You should change y and n appropriately. Or you might recognize that there is no reason to decouple them, and use something like the following code. In this code, n represents the number of terms to use and we compute y accordingly.
#include <iostream>
using namespace std;

double updatedPI(int n)
{
    int y = 2 * n;
    if (n == 0) { return 3; }
    else if (n % 2 == 1)
    {
        return 4. / (y * (y + 1) * (y + 2)) + updatedPI(n - 1);
    }
    else
    {
        return -4. / (y * (y + 1) * (y + 2)) + updatedPI(n - 1);
    }
}

int main() {
    double n = updatedPI(3);
    cout << n << endl;
}
The only problem with your code is that y is calculated incorrectly. It has to be equal to 2 * n. Simply modifying your code that way gives correct results:
Live demo: https://wandbox.org/permlink/3pZNYZYbtHm7k1ND
That is, get rid of the y function parameter and set int y = 2 * n; in your function.

How do I end this while loop with a precision of 0.00001 ([C++],[Taylor Series])?

I'm working on a program that approximates a Taylor series function. I have to approximate it so that the Taylor series stops approximating the sin function with a precision of .00001. In other words, the absolute value of the last approximation minus the current approximation must be less than or equal to 0.00001. It also approximates each angle from 0 to 360 degrees in 15 degree increments. My logic seems to be correct, but I cannot figure out why I am getting garbage values. Any help is appreciated!
#include <math.h>
#include <iomanip>
#include <iostream>
#include <string>
#include <stdlib.h>
#include <cmath>
double fact(int x){
    int F = 1;
    for(int i = 1; i <= x; i++){
        F *= i;
    }
    return F;
}

double degreesToRadians(double angle_in_degrees){
    double rad = (angle_in_degrees * M_PI) / 180;
    return rad;
}

using namespace std;

double mySine(double x){
    int current = 99999;
    double comSin = x;
    double prev = 0;
    int counter1 = 3;
    int counter2 = 1;
    while(current > 0.00001){
        prev = comSin;
        if((counter2 % 2) == 0){
            comSin += (pow(x, (counter1)) / (fact(counter1)));
        }else{
            comSin -= (pow(x, (counter1)) / (fact(counter1)));
        }
        current = abs(prev - comSin);
        cout << current << endl;
        counter1 += 2;
        counter2 += 1;
    }
    return comSin;
}

using namespace std;

int main(){
    cout << "Angle\tSine" << endl;
    for (int i = 0; i <= 360; i += 15){
        cout << i << "\t" << mySine(degreesToRadians(i));
    }
}
Here is an example which illustrates how to go about doing this.
Using the pow function and calculating the factorial at each iteration is very inefficient -- these can often be maintained as running values which are updated alongside the sum during each iteration.
In this case, each iteration's addend is the product of two factors: a power of x and a (reciprocal) factorial. To get from one iteration's power factor to the next iteration's, just multiply by x*x. To get from one iteration's factorial factor to the next iteration's, just multiply by ((2*n+1) + 1) * ((2*n+1) + 2), before incrementing n (the iteration number).
And because these two factors are updated multiplicatively, they do not need to exist as separate running values, they can exists as a single running product. This also helps avoid precision problems -- both the power factor and the factorial can become large very quickly, but the ratio of their values goes to zero relatively gradually and is well-behaved as a running value.
So this example maintains these running values, updated at each iteration:
"sum" (of course)
"prod", the ratio: pow(x, 2n+1) / factorial 2n+1
"tnp1", the value of 2*n+1 (used in the factorial update)
The running update value, "prod", is negated every iteration in order to factor in the (-1)^n.
I also included the function "XlatedSine". When x is too far away from zero, the sum requires more iterations for an accurate result, which takes longer to run and also can require more precision than our floating-point values can provide. When the magnitude of x goes beyond PI, "XlatedSine" finds another x, close to zero, with an equivalent value for sin(x), then uses this shifted x in a call to MaclaurinSine.
#include <iostream>
#include <iomanip>

// Importing cmath seemed wrong LOL, so define Abs and PI
static double Abs(double x) { return x < 0 ? -x : x; }
const double PI = 3.14159265358979323846;

// Taylor series about x==0 for sin(x):
//
// Sum(n=[0...oo]) { ((-1)^n) * (x^(2*n+1)) / (2*n + 1)! }
//
double MaclaurinSine(double x) {
    const double xsq = x * x;    // cached constant x squared
    int tnp1 = 3;                // 2*n+1                      | n==1
    double prod = xsq * x / 6;   // pow(x, 2*n+1) / (2*n+1)!   | n==1
    double sum = x;              // sum after n==0
    for (;;) {
        prod = -prod;
        sum += prod;
        static const double MinUpdate = 0.00001; // try zero -- the factorial will always dominate the power of x, eventually
        if (Abs(prod) <= MinUpdate) {
            return sum;
        }
        // Update the two factors in prod
        prod *= xsq;                       // add 2 to the power factor's exponent
        prod /= (tnp1 + 1) * (tnp1 + 2);   // update the factorial factor by two iterations
        tnp1 += 2;
    }
}

// XlatedSine translates x to an angle close to zero which will produce the equivalent result.
double XlatedSine(double x) {
    if (Abs(x) >= PI) {
        // Use int casting to do an fmod PI (but symmetric about zero).
        // Keep in mind that a really big x could overflow the int,
        // however such a large double value will have lost so much precision
        // at a sub-PI-sized scale that doing this in a legit fashion
        // would also disappoint.
        const int p = static_cast<int>(x / PI);
        x -= PI * p;
        if (p % 2) {
            x = -x;
        }
    }
    return MaclaurinSine(x);
}

double DegreesToRadians(double angle_deg) {
    return PI / 180 * angle_deg;
}

int main() {
    std::cout << "Angle\tSine\n" << std::setprecision(12);
    for (int i = 0; i <= 360; i += 15) {
        std::cout << i << "\t" << MaclaurinSine(DegreesToRadians(i)) << "\n";
        //std::cout << i << "\t" << XlatedSine(DegreesToRadians(i)) << "\n";
    }
}

Finding square root without using sqrt function?

I was working out the algorithm for finding the square root without using the sqrt function, and then tried to turn it into a program. I ended up with this working code in C++:
#include <iostream>
using namespace std;

double SqrtNumber(double num)
{
    double lower_bound = 0;
    double upper_bound = num;
    double temp = 0;
    int nCount = 50;

    while (nCount != 0)
    {
        temp = (lower_bound + upper_bound) / 2;
        if (temp * temp == num)
        {
            return temp;
        }
        else if (temp * temp > num)
        {
            upper_bound = temp;
        }
        else
        {
            lower_bound = temp;
        }
        nCount--;
    }
    return temp;
}

int main()
{
    double num;
    cout << "Enter the number\n";
    cin >> num;
    if (num < 0)
    {
        cout << "Error: Negative number!";
        return 0;
    }
    cout << "Square roots are: +" << SqrtNumber(num) << " and -" << SqrtNumber(num);
    return 0;
}
Now the problem is initializing the number of iterations nCount in the declaration (here it is 50). For example, finding the square root of 36 takes 22 iterations, so there is no problem, whereas finding the square root of 15625 takes more than 50 iterations, so it would return the value of temp after 50 iterations. Please give a solution for this.
There is a better algorithm, which needs at most 6 iterations to converge to maximum precision for double numbers:
#include <math.h>

double sqrt(double x) {
    if (x <= 0)
        return 0;       // if negative number throw an exception?

    int exp = 0;
    x = frexp(x, &exp); // extract binary exponent from x
    if (exp & 1) {      // we want exponent to be even
        exp--;
        x *= 2;
    }
    double y = (1 + x) / 2; // first approximation
    double z = 0;
    while (y != z) {    // yes, we CAN compare doubles here!
        z = y;
        y = (y + x / y) / 2;
    }
    return ldexp(y, exp / 2); // multiply answer by 2^(exp/2)
}
The algorithm starts with 1 as the first approximation for the square root value.
Then, on each step, it improves next approximation by taking average between current value y and x/y. If y = sqrt(x), it will be the same. If y > sqrt(x), then x/y < sqrt(x) by about the same amount. In other words, it will converge very fast.
UPDATE: To speed up convergence on very large or very small numbers, changed sqrt() function to extract binary exponent and compute square root from number in [1, 4) range. It now needs frexp() from <math.h> to get binary exponent, but it is possible to get this exponent by extracting bits from IEEE-754 number format without using frexp().
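As a hedged illustration of that last remark (this is not part of the answer's code), the biased exponent of a positive, normalized double can be read directly from its bit pattern; note that frexp()'s convention differs by one, since frexp() returns a mantissa in [0.5, 1).

#include <cstdint>
#include <cstring>

int binary_exponent(double x)   // assumes x > 0 and normalized
{
    uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);                  // copy the raw IEEE-754 bits
    int biased = static_cast<int>((bits >> 52) & 0x7FF);  // 11-bit exponent field
    return biased - 1023;                                 // remove the IEEE-754 bias
}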
Why not try to use the Babylonian method for finding a square root.
Here is my code for it:
double sqrt(double number)
{
    double error = 0.00001; // define the precision of your result
    double s = number;

    while ((s - number / s) > error) // loop until precision satisfied
    {
        s = (s + number / s) / 2;
    }
    return s;
}
Good luck!
Remove your nCount altogether (as there are some roots that this algorithm will take many iterations for).
double SqrtNumber(double num)
{
    double lower_bound = 0;
    double upper_bound = num;
    double temp = 0;

    while (fabs(num - (temp * temp)) > SOME_SMALL_VALUE)
    {
        temp = (lower_bound + upper_bound) / 2;
        if (temp * temp >= num)
        {
            upper_bound = temp;
        }
        else
        {
            lower_bound = temp;
        }
    }
    return temp;
}
I know this question is old and has many answers, but I have an answer which is simple and works great.
#define EPSILON 0.0000001 // least minimum value for comparison

double SquareRoot(double _val) {
    double low = 0;
    double high = _val;
    double mid = 0;

    while (high - low > EPSILON) {
        mid = low + (high - low) / 2; // finding mid value
        if (mid * mid > _val) {
            high = mid;
        } else {
            low = mid;
        }
    }
    return mid;
}
I hope it will be helpful for future users.
If you need to find the square root without using sqrt(), use root = pow(x, 0.5), where x is the value whose square root you need to find.
// long division method
#include <iostream>
using namespace std;

int main() {
    int n, i = 1, divisor, dividend, j = 1, digit;
    cin >> n;

    while (i * i < n) {
        i = i + 1;
    }
    i = i - 1;
    cout << i << '.';

    divisor = 2 * i;
    dividend = n - (i * i);

    while (j <= 5) {
        dividend = dividend * 100;
        digit = 0;
        while ((divisor * 10 + digit) * digit < dividend) {
            digit = digit + 1;
        }
        digit = digit - 1;
        cout << digit;
        dividend = dividend - ((divisor * 10 + digit) * digit);
        divisor = divisor * 10 + 2 * digit;
        j = j + 1;
    }
    cout << endl;
    return 0;
}
Here is a very simple but unsafe approach to find the square root of a number.
Unsafe because it only works for natural numbers, where you know that the base and therefore the root are natural numbers. I had to use it for a task where I was neither allowed to use the #include <cmath> library, nor allowed to use pointers.
power = base ^ exponent
// FUNCTION: square-root
int sqrt(int x)
{
    int quotient = 0;
    int i = 0;

    bool resultfound = false;
    while (resultfound == false) {
        if (i * i == x) {
            quotient = i;
            resultfound = true;
        }
        i++;
    }
    return quotient;
}
This is a very simple recursive approach.
#include <cmath>

double mySqrt(double v, double test) {
    if (std::fabs(test * test - v) < 0.0001) {
        return test;
    }
    double highOrLow = v / test;
    return mySqrt(v, (test + highOrLow) / 2.0);
}

double mySqrt(double v) {
    return mySqrt(v, v / 2.0);
}
Here is a very awesome piece of code to find the square root, and it is even faster than the original sqrt function.
float InvSqrt(float x)
{
    float xhalf = 0.5f * x;
    int i = *(int*)&x;
    i = 0x5f375a86 - (i >> 1);
    x = *(float*)&i;
    x = x * (1.5f - xhalf * x * x);
    x = x * (1.5f - xhalf * x * x);
    x = x * (1.5f - xhalf * x * x);
    x = 1 / x;
    return x;
}
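For what it's worth, here is a hedged variant of the same trick (my own rewrite, not the original post) that copies the bits with std::memcpy instead of casting pointers, which avoids the undefined behaviour of the pointer casts while keeping the same magic constant and Newton steps.

#include <cstdint>
#include <cstring>

float InvSqrtSafe(float x)
{
    float xhalf = 0.5f * x;
    std::uint32_t i;
    std::memcpy(&i, &x, sizeof i);     // reinterpret the float's bits safely
    i = 0x5f375a86 - (i >> 1);         // initial guess from the magic constant
    std::memcpy(&x, &i, sizeof x);
    x = x * (1.5f - xhalf * x * x);    // Newton-Raphson refinement steps
    x = x * (1.5f - xhalf * x * x);
    x = x * (1.5f - xhalf * x * x);
    return 1.0f / x;                   // invert 1/sqrt(x) to obtain sqrt(x)
}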
After looking at the previous responses, I hope this will help resolve any ambiguities. In case the similarities between the previous solutions and my solution are not obvious, or this method of solving for roots is unclear, I've also made a graph which can be found here.
This is a working root function capable of solving for any nth-root
(default is square root for the sake of this question)
#include <cmath> // for "pow" function

double sqrt(double A, double root = 2) {
    const double e = 2.71828182846;
    return pow(e, (pow(10.0, 9.0) / root) * (1.0 - (pow(A, -pow(10.0, -9.0)))));
}
Explanation:
This works via Taylor series, logarithmic properties, and a bit of algebra.
Take, for example:
log_x(A) = N
*Note: for square-root, N = 2; for any other root you only need to change the one variable, N.
1) Change the base, convert the base-x log function to natural log,
log_x(A) => ln(A)/ln(x) = N
2) Rearrange to isolate ln(x), and eventually just 'x',
ln(A)/N = ln(x)
3) Set both sides as exponents of 'e',
e^(ln(A)/N) = e^(ln(x)) >~{ e^ln(x) == x }~> e^(ln(A)/N) = x
4) The Taylor series represents "ln" as an infinite series,
ln(x) = \sum_{k=1}^{\infty} ((-1)^(k+1) / k) (x-1)^k
<~~~ expanded ~~~>
[(x-1)] - [(1/2)(x-1)^2] + [(1/3)(x-1)^3] - [(1/4)(x-1)^4] + . . .
*Note: Continue the series for increased accuracy. For brevity, 10^9 is used in my function which expresses the series convergence for the natural log with about 7 digits, or the 10-millionths place, for precision,
ln(x) = 10^9(1-x^(-10^(-9)))
5) Now, just plug in this equation for natural log into the simplified equation obtained in step 3.
e^[((10^9)/N)(1 - A^(-10^-9))] = nth-root of (A)
6) This implementation might seem like overkill; however, its purpose is to demonstrate how you can solve for roots without having to guess and check. Also, it would enable you to replace the pow function from the cmath library with your own pow function:
double power(double base, double exponent) {
    if (exponent == 0) return 1;
    int wholeInt = (int)exponent;
    double decimal = exponent - (double)wholeInt;
    if (decimal) {
        int powerInv = 1 / decimal;
        if (!wholeInt) return root(base, powerInv);
        else return power(root(base, powerInv), wholeInt, true);
    }
    return power(base, exponent, true);
}

double power(double base, int exponent, bool flag) {
    if (exponent < 0) return 1 / power(base, -exponent, true);
    if (exponent > 0) return base * power(base, exponent - 1, true);
    else return 1;
}

int root(int A, int root) {
    return power(E, (1000000000000 / root) * (1 - (power(A, -0.000000000001))));
}