program newton_raphson
  implicit none
  real, parameter :: error = 1e-4
  integer :: i
  real :: xo, x1, f, fd
  print *, "Please enter the initial guess !!!"
  read *, xo
  i = 1
10 x1 = xo - (f(xo)/fd(xo))
  if (abs((x1-xo)/x1) < error) then
    print *, "root is", x1, "no. of iteration=", i
  else
    xo = x1
    i = i + 1
    goto 10
  end if
end

real function f(x)
  real :: x
  f = cos(x)
end

real function fd(x)
  real :: x
  fd = -sin(x)
end
This program works fine when I give 1 as an initial guess, but it fails or doesn't print anything when I give 0 as an initial guess. Is there any mistake in my code?
Although the Newton–Raphson method converges fast near the root, its global convergence characteristics are poor, because the tangent line is not always an acceptable approximation of the function. Your guess of 0 is the worst case: fd(0) = -sin(0) = 0, so the very first iteration divides by zero. You could combine your code with the bisection method, and this way you can improve the results; a sketch of such a safeguarded Newton step follows.
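A minimal sketch of the combination, assuming a bracket [a,b] with f(a)*f(b) < 0 is known in advance (the bracket, tolerance, and iteration cap below are illustrative): take the Newton step whenever the derivative is not tiny and the step stays inside the bracket, otherwise bisect, and shrink the bracket as you go.

program safeguarded_newton
  implicit none
  real, parameter :: tol = 1e-4
  real :: a, b, x, xnew
  integer :: i
  a = 0.0                       ! illustrative bracket: cos(0) > 0, cos(3) < 0
  b = 3.0
  x = 0.0                       ! the "bad" initial guess from the question
  do i = 1, 100
    if (abs(fd(x)) > tiny(x)) then
      xnew = x - f(x)/fd(x)     ! ordinary Newton step
    else
      xnew = a - 1.0            ! flag an unusable step when fd(x) is ~0
    end if
    if (xnew <= a .or. xnew >= b) xnew = 0.5*(a + b)   ! fall back to bisection
    if (f(a)*f(xnew) < 0.0) then                       ! keep the sign change bracketed
      b = xnew
    else
      a = xnew
    end if
    if (abs(xnew - x) < tol) exit
    x = xnew
  end do
  print *, "root is", xnew, "no. of iterations =", i
contains
  real function f(x)
    real :: x
    f = cos(x)
  end function
  real function fd(x)
    real :: x
    fd = -sin(x)
  end function
end program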
I am solving the evolution of the time-dependent Schrödinger equation under a harmonic potential with an initial Gaussian wavefunction. Treating hcut=1 and 2m=1, and separating the wavefunction into real and imaginary parts, I obtain two coupled equations in terms of the real and imaginary parts, termed yr and yc respectively.
xrange is [xi,xf]
trange is [0,tf]
The method I used is:
first, separating the wavefunction into real and imaginary parts, namely yr(x,t) and yc(x,t);
then, treating hcut=2m=1 and writing the wavefunction as yr[x,t]+i*yc[x,t], we get two coupled equations from the TDSE:
1. D[yr[x,t],t] = -D[yc[x,t],x,x] + V[x]*yc[x,t]
2. D[yc[x,t],t] = D[yr[x,t],x,x] - V[x]*yr[x,t]
Then I specified the initial wavefunction as
yr[x,0]=exp[-x^2]
yc[x,0]=0
After that, using a finite difference scheme, I tried to find y[x,t] from y[x,0], i.e.,
yr[x,t+d] = yr[x,t] + d*D[yr[x,t],t]
          = yr[x,t] + d*(-D[yc[x,t],x,x] + V[x]*yc[x,t])
and similarly for the imaginary part yc, using equation 2.
The values of y[x[i],t[j]] are stored as y[i,j], since an array cannot have a real index.
The code I used is given below:
function v(x) result(s)
  real :: s, x
  s = x**2
end function v

real :: t(10000), x(10000), yc(10000,10000), yr(10000,10000), tf, xi, xf, d
integer :: i, j, k, l, m, n, time
write(*,*) "time of plot divided by step size"
read(*,*) time
tf = 50
xi = -10
xf = 10
d = 0.01
x(1) = xi
t(1) = 0
i = 1
do while (x(i) .lt. xf)        ! fill the x(i) array with all values of x
  x(i+1) = x(i) + d
  i = i + 1
end do
n = 1
do while (t(n) .lt. tf)        ! fill the t(n) array with all values of t
  t(n+1) = t(n) + d
  n = n + 1
end do
do j = 1, i
  yr(j,1) = exp(-(x(j)-5)**2)  ! initial wavefunction
  yc(j,1) = 0
end do
! calculation of the wavefunction at later times using the finite difference
! scheme y[x,t+d] = y[x,t] + d*D[y[x,t],t], replacing the time derivative
! using equations 1 and 2
l = 1
do while (l .le. i-2)
  k = 1
  do while (t(k) .lt. tf)
    yr(l,k+1) = yr(l,k) - (yc(l+2,k) - 2*yc(l+1,k) + yc(l,k))/d &
                + v(x(l))*yc(l,k)*d
    yc(l,k+1) = yc(l,k) + (yr(l+2,k) - 2*yr(l+1,k) + yr(l,k))/d &
                - v(x(l))*yr(l,k)*d
    k = k + 1
  end do
  l = l + 1
end do
open(1, file="q.dat")
do m = 1, i-2
  write(1,*) x(m), (yr(m,time))**2 + (yc(m,time))**2
end do
close(1)
end
Expected result: y(x,t)^2=yr(x,t)^2+yc(x,t)^2
Error: the wavefunction does not stay regular. After only t=0.05 or 0.06 it blows up and the maximum becomes of the order of 1e30, even though the Gaussian shape remains almost unchanged, as expected, since only 0.05 seconds have passed.
This is not really a coding problem but a numerical-mathematics one, and I suggest you first study some tutorials about numerical methods for the Schrödinger equation and similar PDEs, and then ask for more at sites devoted to scientific computation.
First, the method you chose does not conserve the norm of the solution. That can be patched by normalizing at each time step, but it is quite ugly.
Second, the method you chose is explicit (in fact it is the forward Euler method, which is unconditionally unstable for this equation). That means the allowable time step is severely limited by the diffusion-like term (even though it is complex here). A simple implicit method that also conserves the norm quite well is the Crank-Nicolson method. Being implicit, it requires solving a system of linear equations at each time step, but that is quite simple in 1D because the system is tridiagonal. There are also some explicit schemes that may work, but they are more involved than the naive scheme you tried.
Third, you do not have to split the system into real and imaginary parts; it can be done using complex numbers.
Fourth, you should keep your spatial grid step dx and time step dt as separate variables with known values. You can compute the final coefficient as their ratio, but you should at least know your values of dx and dt.
Fifth, you do not have to store the solution at every time step. That quickly becomes prohibitively expensive in terms of memory for larger problems. It is enough to store the previous time step and the current one (more for multistep methods, plus some auxiliary storage for Runge-Kutta methods).
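To illustrate the second and third points, here is a minimal Crank-Nicolson sketch that keeps the wavefunction complex and solves the tridiagonal system with the Thomas algorithm; the grid size, time step, step count, and homogeneous Dirichlet boundaries are illustrative choices, not taken from the question.

program tdse_cn
  implicit none
  integer, parameter :: nx = 2001, nsteps = 500
  real, parameter :: xi = -10.0, xf = 10.0, dt = 1.0e-3
  complex, parameter :: iu = (0.0, 1.0)
  complex :: psi(nx), rhs(nx), diag(nx), cp(nx), dp(nx), off, m
  real :: x(nx), dx
  integer :: j, step

  dx = (xf - xi)/real(nx - 1)
  do j = 1, nx
    x(j) = xi + real(j-1)*dx
    psi(j) = exp(-x(j)**2)        ! initial Gaussian, imaginary part zero
  end do
  psi(1) = 0.0                    ! homogeneous Dirichlet boundaries
  psi(nx) = 0.0

  ! Crank-Nicolson: (I + i*dt/2*H) psi_new = (I - i*dt/2*H) psi_old,
  ! where H*psi = -psi_xx + V*psi is discretized by centered differences
  off = -iu*dt/(2.0*dx**2)        ! constant off-diagonal entry
  do j = 1, nx
    diag(j) = 1.0 + iu*dt/2.0*(2.0/dx**2 + v(x(j)))
  end do

  do step = 1, nsteps
    do j = 2, nx-1
      rhs(j) = psi(j) + iu*dt/2.0*((psi(j+1) - 2.0*psi(j) + psi(j-1))/dx**2 &
                                   - v(x(j))*psi(j))
    end do
    ! Thomas algorithm on the interior points, with psi(1) = psi(nx) = 0
    cp(2) = off/diag(2)
    dp(2) = rhs(2)/diag(2)
    do j = 3, nx-1
      m = diag(j) - off*cp(j-1)
      cp(j) = off/m
      dp(j) = (rhs(j) - off*dp(j-1))/m
    end do
    psi(nx-1) = dp(nx-1)
    do j = nx-2, 2, -1
      psi(j) = dp(j) - cp(j)*psi(j+1)
    end do
  end do

  do j = 1, nx
    write(*,*) x(j), abs(psi(j))**2
  end do

contains
  real function v(x)
    real :: x
    v = x**2                      ! the harmonic potential from the question
  end function v
end program tdse_cn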
I am writing a code for a Monte Carlo simulation in Fortran, but I am having a lot of problems because of the small numbers involved.
The biggest problem is that in my code particle positions are not updated; the offending code looks like this:
x=x+step*cos(p)*sin(t)
with step=0.001. With this, the code won't update the position and I get an infinite loop because the particle never exits the region. If I modify my code to something like this:
x=x+step
or
x=x+step*cos(t)
there is no problem. So it seems that the product step*cos(p)*sin(t) (of the order of 10**-4) is too small and is treated as zero.
x is of the order 10**4.
How do I solve this problem in a portable way?
My compiler is the latest f95.
Your problem is essentially the one of this other question. However, it's useful to add some Fortran-specific comments.
As in that other question, the discrete nature of floating point numbers means that there is a point where one number is too small to make a difference when added to another. In the case of this question:
if (1e4+1e-4==1e4) print *, "Oh?"
if (1d4+1d-4==1d4) print *, "Really?"
end
That is, you may be able to simply use double precision reals and you'll see the problem go away.
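As a quick sketch of that fix (the position and angle values below are illustrative, not taken from the question):

program dp_fix
  implicit none
  integer, parameter :: dp = kind(1d0)
  real(dp) :: x, step, p, t
  x = 1.0e4_dp                 ! position of the order 10**4
  step = 1.0e-3_dp
  p = 0.3_dp                   ! illustrative angles
  t = 0.7_dp
  ! the increment is ~6e-4, far above SPACING(1d4), which is ~1.8e-12
  x = x + step*cos(p)*sin(t)
  print *, x                   ! the position now actually changes
end program dp_fix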
What is the smallest number you can add to 1e4 to get something different from 1e4 (or to 1d4)?
print *, 1e4 + SPACING(1e4), 1e4+SPACING(1e4)/2
print *, 1d4 + SPACING(1d4), 1d4+SPACING(1d4)/2
end
This spacing varies with the size of the number. For large numbers it is large and around 1 it is small.
print*, EPSILON(1e0), SPACING([(1e2**i,i=0,5)])
print*, EPSILON(1d0), SPACING([(1d2**i,i=0,5)])
end
I developed the code below for finding the roots of the given polynomial. It works fine, but I need to adapt it to find all roots and not simply stop when it converges. How do I go about doing this? I thought about creating an outer do loop for values of x, but I'm uncertain whether this is the right approach. Any help is appreciated, thanks in advance.
PROGRAM nr
  integer :: i
  real :: x, f, df
  write(*,*) "x=?"
  read(*,*) x
  write(*,*) '# Initial value: x=', x
  do i = 1, 100
    f  = x**4 - 26*(x**3) + 131*(x**2) - 226*x + 120
    df = 4*(x**3) - 3.0*26*(x**2) + 2.0*131*x - 226
    write(*,*) i, x, f, df
    x = x - f/df
  end do
  write(*,*) '#x = ', x
END PROGRAM
A possible algorithm to find all roots of the polynomial P consists of:
Start from some X0 and find a root R, using Newton's algorithm.
Divide P by (X-R): the division is exact (up to numerical error) since R is a root. This step is called deflation; a sketch of it follows this list.
Restart from the beginning if the quotient has degree > 1.
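A minimal sketch of the deflation step by synthetic division, assuming the polynomial is stored as coefficients c(0:n) with c(n) the leading coefficient (the array layout and the known root are illustrative):

program deflate_demo
  implicit none
  real :: c(0:4), q(0:3), r
  integer :: k
  c = [120.0, -226.0, 131.0, -26.0, 1.0]  ! x**4 - 26x**3 + 131x**2 - 226x + 120
  r = 1.0                                 ! a root of this polynomial
  ! synthetic division by (x - r): the quotient ends up in q(0:3)
  q(3) = c(4)
  do k = 3, 1, -1
    q(k-1) = c(k) + r*q(k)
  end do
  print *, 'quotient coefficients (low to high):', q
  print *, 'remainder (should be ~0 since r is a root):', c(0) + r*q(0)
end program deflate_demo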
There are some subtleties:
If your polynomial has real coefficients and X0 is real, you will only find a real root, if there is any. To find complex roots, you then have to start from a complex X0, and of course use complex arithmetic.
Not all values of X0 will guarantee convergence, as Newton-Raphson is a local method. See Newton-Kantorovitch theorem and basins of attraction of the Newton method.
If there are multiple (repeated) roots, convergence to them is much slower. There are ways to adapt Newton's method to deal with this.
Numerical errors introduced in the deflation step add up and usually lead to poor accuracy of the subsequent roots. This is especially a problem for polynomials of high degree (how high depends on the polynomial). In extreme cases, the computed roots can be quite far from the exact roots. Furthermore, some polynomials are more "difficult" than others: see for instance Wilkinson's polynomial.
There are ways to find information on the roots (bounds on the absolute values, circles enclosing the roots...), and this can help to pick a good X0 in the algorithm: see geometrical properties of polynomial roots. If you are only looking for the real roots of a real polynomial, you can first find bounds, then use Sturm's theorem to find intervals enclosing the roots. Another approach in the real case is to first find the roots of the derivative P' (and to find those, first find the roots of P'', etc.), as the roots of P' separate the roots of P (except for multiple roots).
Another suggested reading: Roots of polynomials. Note that there are much better algorithms than Newton+deflation. There are also algorithms designed to find all the roots.
However, if you are only interested in this specific polynomial, I suggest first having a look at the problem, as it's much easier than the general case: WolframAlpha.
Here, starting with integer values of X0 is going to work quite well...
Anyway, I give you below what I think is a better Newton-Raphson code, using Fortran 90 function declarations and a "while" loop rather than a blind "do". Best, Fabio M. S. Lima
PROGRAM Newton2
  ! Newton-Raphson method in Fortran-90
  implicit none
  integer i, imax
  real x, xnew, tol
  parameter (imax = 30)
  parameter (tol = 0.00001)
  print *, '*** Initial value: '
  read *, x
  xnew = 0.5   ! just to begin the loop below
  i = 0
  do while ((abs(xnew-x) > tol) .and. (i < imax))
    i = i + 1
    x = xnew
    xnew = x - f(x)/f1(x)
    print *, i, xnew, f(xnew)
  end do
  if (i > imax-2) then
    print *, '# Error: Newton-Raphson has not converged after ', i, ' iterations!'
  else
    print *, '* After ', i, ' iterations, we found a root at ', xnew
  end if
contains
  function f(z)
    real z, f
    f = z**4 - 26.0*(z**3) + 131.0*(z**2) - 226.0*z + 120.0
  end function f
  function f1(z)
    real z, f1
    f1 = 4.0*(z**3) - 3.0*26.0*(z**2) + 2.0*131.0*z - 226.0
  end function f1
END PROGRAM Newton2
I want to check how much time it takes the computer to compute a function. To do this I wanted to compare the values given by the cpu_time subroutine before and after calling my function. To my surprise, the values before and after were the same, as if it took zero time to perform the function. To check it, I created a simple piece of code:
call cpu_time(time)
write(*,*) time
do i = 1, 10000
  ! simple math equation here
end do
call cpu_time(time)
write(*,*) time
After running the program, the value printed before and after the loop was exactly the same. My guess is that the clock is not precise enough to distinguish such small changes, but does that really make sense? All in all, I don't know how to measure the time needed to execute my function when this doesn't work properly.
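One common workaround (a sketch, in which the loop body is only a stand-in for the function under test) is to time many repetitions with system_clock, whose resolution is often finer than cpu_time's, and divide by the repetition count; using the result afterwards also keeps the compiler from optimizing the loop away.

program timing_demo
  implicit none
  integer :: count0, count1, rate, i
  real :: y, elapsed
  y = 0.0
  call system_clock(count0, rate)   ! rate = clock ticks per second
  do i = 1, 1000000
    y = y + sqrt(real(i))           ! stand-in for the function under test
  end do
  call system_clock(count1)
  elapsed = real(count1 - count0)/real(rate)
  print *, 'total:', elapsed, 's   per call:', elapsed/1.0e6, 's'
  print *, y                        ! use y so the loop is not optimized away
end program timing_demo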
I have a grid with thousands of double precision reals.
It's iterating through, and I need it to stop when it's reached convergence to 3 decimal places.
The target is to have it run as fast as possible, but it needs to give the same result (to 3 dp) every time.
At the minute I'm doing something like this:
REAL(KIND=DP) :: TOL = 0.001_DP
DO WHILE (.NOT. CONVERGED)
  CONVERGED = .TRUE.
  DO I = 1, NUM_POINTS
    NEW_POTENTIAL = !blah blah blah
    IF (CONVERGED) THEN
      IF (NEW_POTENTIAL < OLD_POTENTIAL - TOL .OR. NEW_POTENTIAL > OLD_POTENTIAL + TOL) THEN
        CONVERGED = .FALSE.
      END IF
    END IF
    OLD_POTENTIAL = NEW_POTENTIAL
  END DO
END DO
I'm thinking that many IF statements can't be great for performance. I thought about checking for convergence at the end instead: finding the average value (summing the whole grid and dividing by num_points) and checking whether that has converged in the same way as above, but I'm not convinced this will always be accurate.
What is the best way of doing this?
If I understand correctly you've got some kind of time-stepping going on, where you create the values in new_potential by calculations on old_potential. Then make old equal to new and carry on.
You could replace your existing convergence tests with the single statement
converged = all(abs(new_potential - old_potential)<tol)
which might be faster. If the speed of the test is a major concern, you could test only every other (or every third, fourth, ...) iteration.
A few comments:
1) If you used a potential array with 2 planes, instead of an old_ and a new_potential, you could transfer new_ into old_ by swapping indices at the end of each iteration (a sketch follows these comments). As your code stands, there's a lot of data movement going on.
2) While semantically you are right to have a while loop, I'd always use a do loop with a maximum number of iterations, just in case the convergence criterion is never met.
3) In your declaration REAL(KIND=DP) :: TOL = 0.001_DP, the _DP suffix on the value of TOL is practically redundant; REAL(KIND=DP) :: TOL = 0.001 is adequate for a tolerance (the two initializations differ only in low-order bits). I'd also make this a parameter; the compiler may be able to optimise its use if it knows that it is immutable.
4) If you adopt the all(...) test above, you no longer need to execute CONVERGED = .TRUE. inside the outermost loop, since the flag is assigned afresh on every sweep; that will save you a nanosecond or two.
Finally, if your convergence criterion is that every element in the potential array has converged to 3 dp, then that is what you should test for. It would be relatively easy to construct counterexamples for your suggested average. However, my concern would be that your system will never converge on every element, and that you should be using some matrix norm computation to determine convergence. SO is not the place for a lesson in that topic.
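Putting comment 1 and the all(...) test into a runnable form, here is a minimal sketch; the Jacobi-style update rule, grid size, and fixed boundaries are illustrative stand-ins, not the actual update from the question.

program two_planes
  implicit none
  integer, parameter :: dp = kind(1d0)
  integer, parameter :: num_points = 1000, max_iter = 100000
  real(dp), parameter :: tol = 0.001_dp
  real(dp) :: potential(num_points, 2)
  integer :: old, new, tmp, i, iter
  logical :: converged

  call random_number(potential(:,1))       ! illustrative initial guess
  old = 1
  new = 2
  do iter = 1, max_iter
    ! stand-in update: average the neighbours (a Jacobi-style sweep)
    do i = 2, num_points - 1
      potential(i,new) = 0.5_dp*(potential(i-1,old) + potential(i+1,old))
    end do
    potential(1,new) = potential(1,old)    ! fixed boundaries
    potential(num_points,new) = potential(num_points,old)
    converged = all(abs(potential(:,new) - potential(:,old)) < tol)
    tmp = old; old = new; new = tmp        ! swap plane roles instead of copying
    if (converged) exit
  end do
  print *, 'iterations:', min(iter, max_iter), ' sample value:', potential(1,old)
end program two_planes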
What are the costs of the convergence-criterion calculations? Unless they are worse than the calculations that advance the potential, it is probably better to keep the IF statement and terminate the loop as soon as possible, rather than guess at a very large number of iterations to be sure of obtaining a good solution.
Re High Performance Mark's suggestion #1, if the copying operation is a significant portion of the run time, you could also use pointers.
The only way to be sure about this stuff is to measure the run time; Fortran provides intrinsic functions to measure both CPU and clock time. Otherwise you may modify some portion of your code to make it faster, perhaps making it harder to understand and possibly introducing a bug, without much improvement in runtime: if that portion was taking a small share of the total runtime, no amount of cleverness will make much difference.
As High Performance Mark says, though the current semantics are elegant, you probably want to guard against an infinite loop. One approach:
PotentialLoop: do i = 1, MaxIter
  blah
  Converged = test...
  if (Converged) exit PotentialLoop
  blah
end do PotentialLoop
if (.NOT. Converged) write (*, *) "error, did not converge"
I = 1
DO
  NEWPOT = !bla bla bla
  IF (ABS(NEWPOT-OLDPOT) .LT. TOL) EXIT
  OLDPOT = NEWPOT
  I = MOD(I, NUMPOINTS) + 1
END DO
Maybe better:
I = 1
DO
  NEWPOT = !bla bla bla
  IF (ABS(NEWPOT-OLDPOT) .LT. TOL) EXIT
  OLDPOT = NEWPOT
  IF (I .EQ. NUMPOINTS) THEN
    I = 1
  ELSE
    I = I + 1
  END IF
END DO