I am solving the time evolution of the time-dependent Schrödinger equation (TDSE) under a harmonic potential with an initial Gaussian wavefunction. Taking hcut = 1 and 2m = 1, and separating the wavefunction into real and imaginary parts, termed yr and yc respectively, two coupled equations are obtained.
xrange is [xi,xf]
trange is [0,tf]
The method I used is:
first separating the wavefunction into real and imaginary parts, namely yr(x,t) and yc(x,t);
then, setting hcut = 2m = 1 and writing the wavefunction as yr[x,t] + i*yc[x,t], we get two coupled equations from the TDSE:
1. D[yr[x,t],t] = -D[yc[x,t],x,x] + V[x]*yc[x,t]
2. D[yc[x,t],t] = D[yr[x,t],x,x] - V[x]*yr[x,t]
Then I specified the initial wavefunction as
yr[x,0] = exp[-x^2]
yc[x,0] = 0
After that, using a finite-difference scheme, I tried to find y[x,t] from y[x,0],
i.e.,
yr[x,t+d] = yr[x,t] + d*D[yr[x,t],t]
          = yr[x,t] + d*(-D[yc[x,t],x,x] + V[x]*yc[x,t])
and likewise for the imaginary part yc, using equation 2.
The values of y[x[i],t[j]] are stored as y[i,j], since an array cannot have a real-valued index.
The code I used is given below:
program tdse_euler
implicit none
real :: t(10000), x(10000), yc(10000,10000), yr(10000,10000), tf, xi, xf, d
integer :: i, j, k, l, m, n, time

write(*,*) "time of plot divided by step size"
read(*,*) time
tf = 50
xi = -10
xf = 10
d  = 0.01   ! used as both the grid spacing dx and the time step dt

x(1) = xi
t(1) = 0
i = 1
do while (x(i) .lt. xf)   ! fill the spatial grid x(i)
   x(i+1) = x(i) + d
   i = i + 1
end do
n = 1
do while (t(n) .lt. tf)   ! fill the time grid t(n)
   t(n+1) = t(n) + d
   n = n + 1
end do

do j = 1, i   ! initial wavefunction: a Gaussian (centred here at x = 5)
   yr(j,1) = exp(-(x(j)-5)**2)
   yc(j,1) = 0
end do

! Calculation of the wavefunction at later times with the explicit update
! y[x,t+d] = y[x,t] + d*D[y[x,t],t], replacing the time derivative using
! equations 1 and 2. The second x-derivative is the centred difference
! (y(l+1,k) - 2*y(l,k) + y(l-1,k))/d**2; multiplied by the time step d this
! gives the overall factor 1/d below. Time must be the outer loop, because
! the update at t(k+1) needs the neighbouring values at t(k).
do k = 1, n-1
   do l = 2, i-1
      yr(l,k+1) = yr(l,k) - (yc(l+1,k) - 2*yc(l,k) + yc(l-1,k))/d &
                  + v(x(l))*yc(l,k)*d
      yc(l,k+1) = yc(l,k) + (yr(l+1,k) - 2*yr(l,k) + yr(l-1,k))/d &
                  - v(x(l))*yr(l,k)*d
   end do
   yr(1,k+1) = 0; yc(1,k+1) = 0   ! hard-wall boundary values
   yr(i,k+1) = 0; yc(i,k+1) = 0
end do

open(1, file="q.dat")
do m = 1, i
   write(1,*) x(m), yr(m,time)**2 + yc(m,time)**2
end do
close(1)

contains

function v(x) result(s)   ! harmonic potential V(x) = x**2
   real :: s, x
   s = x**2
end function v

end program tdse_euler
Expected result: |y(x,t)|^2 = yr(x,t)^2 + yc(x,t)^2 should remain a well-behaved Gaussian.
Error: the wavefunction is not staying regular. After only t = 0.05 or 0.06 the wavefunction blows up and the maximum becomes of the order of 1e30; in spite of that, the Gaussian shape remains almost unchanged, as expected, since only 0.05 seconds has passed.
This is not really a coding problem but a numerical-mathematics one, and I suggest you first study some tutorials about numerical methods for the Schrödinger equation and similar PDEs, and then ask for more at sites devoted to scientific computation.
First, the method you chose does not conserve the norm of the solution. That can be saved by normalizing at each time step, but it is quite ugly.
Second, the method you chose is explicit (in fact, it is the forward Euler method, which is unconditionally unstable for this equation). That means the allowable time step is going to be severely limited by the diffusion-like term (even though it is complex here). A simple implicit method that also conserves the norm quite well is the Crank-Nicolson method. Being implicit, it requires solving a system of linear equations at each time step; that is quite simple in 1D because the system is tridiagonal (a sketch is given after these points). There are also some explicit schemes that may work, but they are more involved, not the naive scheme you tried.
Third, you do not have to split the system into the real part and the imaginary part; it can be done using complex numbers.
Fourth, you should keep your spatial grid step dx and your time step dt as separate variables with known values. You can compute the final coefficient d as their ratio, but you should at least know your values of dx and dt.
Fifth, you do not have to store the solution at all time steps. That will quickly become prohibitively expensive in terms of memory for larger problems. It is enough to store the last time step and the current time step (more for multistep methods, plus some auxiliary storage for Runge-Kutta methods).
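To make the Crank-Nicolson suggestion concrete, here is a minimal sketch (my code, not the poster's): it advances psi through (1 + i*dt/2*H) psi_new = (1 - i*dt/2*H) psi_old with H = -d2/dx2 + V, and solves the tridiagonal system with the Thomas algorithm. The grid sizes, the hard-wall boundary condition psi = 0 at both ends, and the displaced Gaussian are assumptions chosen to mirror the question.

program cn_tdse
  implicit none
  integer, parameter :: nx = 2001, nt = 5000
  real,    parameter :: xi = -10.0, xf = 10.0, dt = 1.0e-3
  real    :: dx, x(nx)
  complex :: psi(nx), rhs(nx), diag(nx), cp(nx), dp(nx), r
  integer :: j, n

  dx = (xf - xi)/real(nx - 1)
  do j = 1, nx
     x(j) = xi + real(j-1)*dx
     psi(j) = exp(-(x(j) - 5.0)**2)           ! same displaced Gaussian as above
  end do
  r = (0.0,1.0)*dt/(2.0*dx*dx)                ! i*dt/(2*dx**2)

  do n = 1, nt
     do j = 2, nx-1                           ! rhs = (1 - i*dt/2*H) psi
        rhs(j)  = (1.0 - 2.0*r - (0.0,1.0)*0.5*dt*v(x(j)))*psi(j) &
                  + r*(psi(j+1) + psi(j-1))
        diag(j) = 1.0 + 2.0*r + (0.0,1.0)*0.5*dt*v(x(j))
     end do
     cp(2) = -r/diag(2)                       ! Thomas algorithm, forward sweep,
     dp(2) = rhs(2)/diag(2)                   ! for the system (-r, diag, -r)
     do j = 3, nx-1
        cp(j) = -r/(diag(j) + r*cp(j-1))
        dp(j) = (rhs(j) + r*dp(j-1))/(diag(j) + r*cp(j-1))
     end do
     psi(nx-1) = dp(nx-1)                     ! back substitution
     do j = nx-2, 2, -1
        psi(j) = dp(j) - cp(j)*psi(j+1)
     end do
     psi(1) = 0.0; psi(nx) = 0.0              ! hard-wall boundaries
  end do
  print *, 'norm =', sum(abs(psi)**2)*dx      ! stays ~1.25 (= sqrt(pi/2))

contains
  function v(x) result(s)                     ! same harmonic potential as above
     real :: s, x
     s = x**2
  end function v
end program cn_tdse

Because the Crank-Nicolson propagator is a rational (Cayley) approximation of the unitary exp(-i*H*dt), the printed norm stays essentially constant, which is exactly what the forward Euler scheme above fails to do.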
I am writing code for a Monte Carlo simulation in Fortran, but I am having a lot of problems because of the small numbers involved.
The biggest problem is that in my code the particle positions are not updated; the offending code looks like this:
x=x+step*cos(p)*sin(t)
with step=0.001. With this, the code won't update the position and I get an infinite loop because the particle never exits the region. If I modify my code to something like this:
x=x+step
or
x=x+step*cos(t)
there is no problem. So it seems that the product step*cos(p)*sin(t) (of the order 10**-4) is too small and is effectively treated as zero.
x is of the order 10**4.
How do I solve this problem in a portable way?
My compiler is the latest f95.
Your problem is essentially the one of this other question. However, it's useful to add some Fortran-specific comments.
As in that other question, the discrete nature of floating-point numbers means that there is a point where one number is too small to make a difference when added to another. In the case of this question:
if (1e4+1e-4==1e4) print *, "Oh?"
if (1d4+1d-4==1d4) print *, "Really?"
end
That is, you may be able to switch to double-precision reals and you'll see the problem go away.
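As a hedged illustration (the variable names and values here are mine, not from the question), promoting the accumulated coordinate to double precision makes the ~10**-4 increment survive addition to a value of order 10**4:

program walk_fix
  implicit none
  integer, parameter :: dp = kind(1d0)   ! double-precision kind
  real(dp), parameter :: step = 0.001_dp
  real(dp) :: x, p, t
  x = 1.0e4_dp
  p = 0.3_dp
  t = 0.7_dp
  print *, 'before:', x
  x = x + step*cos(p)*sin(t)             ! increment ~6e-4 now survives
  print *, 'after: ', x
end program walk_fix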
What is the smallest number you can add to 1e4 to get something different from 1e4 (or to 1d4)?
print *, 1e4 + SPACING(1e4), 1e4+SPACING(1e4)/2
print *, 1d4 + SPACING(1d4), 1d4+SPACING(1d4)/2
end
This spacing varies with the size of the number. For large numbers it is large and around 1 it is small.
print*, EPSILON(1e0), SPACING([(1e2**i,i=0,5)])
print*, EPSILON(1d0), SPACING([(1d2**i,i=0,5)])
end
I developed the code below for finding the roots of the given polynomial. It works fine, but I need to adapt it to find all roots and not simply stop when it converges. How do I go about doing this? I thought about creating an outer do loop for values of x, but I'm uncertain whether this is the right approach. Any help is appreciated, thanks in advance.
PROGRAM nr
   integer :: i
   real :: x, f, df
   write (*,*) "x=?"
   read (*,*) x
   write (*,*) '# Initial value: x=', x
   do i = 1, 100
      f  = x**4 - 26*(x**3) + 131*(x**2) - 226*x + 120
      df = 4*(x**3) - 3.0*26*(x**2) + 2.0*131*x - 226
      write (*,*) i, x, f, df
      x = x - f/df
   end do
   write (*,*) '#x = ', x
END PROGRAM
A possible algorithm to find all roots of the polynomial P consists of:
Start from some X0 and find a root R, using Newton's algorithm.
Divide P by (X-R): the division is exact (up to numerical error) since R is a root. This step is called deflation; a sketch of it follows this list.
Restart from the beginning if the quotient has degree > 1.
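A minimal sketch of the deflation step (the subroutine name and interface are mine, for illustration): dividing P(x) = a(n)*x**n + ... + a(0) by (x - r) with synthetic division yields the quotient coefficients, and the remainder should be ~0 when r is a root.

program deflate_demo
  implicit none
  real :: a(0:4), q(0:3), rem
  a = [120.0, -226.0, 131.0, -26.0, 1.0]   ! a(k) multiplies x**k; the quartic from the question
  call deflate(4, a, 1.0, q, rem)          ! x = 1 is a root of this quartic
  print *, 'quotient coefficients:', q     ! expect -120, 106, -25, 1
  print *, 'remainder (should be ~0):', rem
contains
  subroutine deflate(n, a, r, q, rem)
    integer, intent(in) :: n
    real, intent(in)    :: a(0:n), r
    real, intent(out)   :: q(0:n-1), rem
    integer :: k
    q(n-1) = a(n)                          ! synthetic division by (x - r)
    do k = n-1, 1, -1
       q(k-1) = a(k) + r*q(k)
    end do
    rem = a(0) + r*q(0)
  end subroutine deflate
end program deflate_demo

Running this with the root r = 1 leaves the cubic x**3 - 25*x**2 + 106*x - 120, which you then feed back into Newton's method.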
There are some subtleties:
If your polynomial has real coefficients and X0 is real, you will only find a real root, if there is any. To find complex roots, you then have to start from a complex X0, and of course use complex arithmetic.
Not all values of X0 will guarantee convergence, as Newton-Raphson is a local method. See the Newton-Kantorovich theorem and the basins of attraction of Newton's method.
If there are multiple roots, convergence near them is much slower. There are ways to adapt Newton's method to deal with this.
Numerical errors introduced in the deflation step add up and usually lead to poor accuracy of the subsequent roots. This is especially a problem for polynomials of high degree (how high depends on the polynomial). In extreme cases, the computed roots can be quite far from the exact roots. Furthermore, some polynomials are more "difficult" than others: see for instance Wilkinson's polynomial.
There are ways to find information on the roots (bounds on the absolute values, circles enclosing the roots...), which can help to pick a good X0 in the algorithm: see Geometrical properties of polynomial roots. If you are only looking for real roots of a real polynomial, you can first find bounds, then use Sturm's theorem to find intervals enclosing the roots. Another approach in the real case is to first find the roots of the derivative P' (and to find those, first find the roots of P'', etc.), as the roots of P' separate the roots of P (except at multiple roots).
Another suggested reading: Roots of polynomials. Note that there are much better algorithms than Newton+deflation. There are also algorithms designed to find all the roots.
However, if you are only interested in this specific polynomial, I suggest first having a look at the problem, as it's much easier than the general case: WolframAlpha.
Here, starting with integer values of X0 is going to work quite well...
Anyway, I give you below what I think is a better Newton-Raphson code, using Fortran-90 function declarations and a "while" loop rather than a blind "do". Best, Fabio M. S. Lima
PROGRAM Newton2
   ! Newton-Raphson method in Fortran-90
   implicit none
   integer, parameter :: imax = 30
   real, parameter :: tol = 0.00001
   integer :: i
   real :: x, xnew

   print*, '*** Initial value: '
   read*, x
   xnew = x - f(x)/f1(x)   ! first Newton step from the initial guess
   i = 1
   print*, i, xnew, f(xnew)
   do while ((abs(xnew-x) > tol) .and. (i < imax))
      i = i + 1
      x = xnew
      xnew = x - f(x)/f1(x)
      print*, i, xnew, f(xnew)
   end do
   if (i >= imax) then
      print*, '# Error: Newton-Raphson has not converged after ', i, ' iterations!'
   else
      print*, '* After ', i, ' iterations, we found a root at ', xnew
   endif

contains

   function f(z)
      real :: z, f
      f = z**4 - 26.0*(z**3) + 131.0*(z**2) - 226.0*z + 120.0
   end function f

   function f1(z)
      real :: z, f1
      f1 = 4.0*(z**3) - 3.0*26.0*(z**2) + 2.0*131.0*z - 226.0
   end function f1

END PROGRAM Newton2
I have been looking at an engineering paper here which describes an old FORTRAN code for solving pipe flow equations (it's dated 1974, before FORTRAN was standardised as FORTRAN 77). On page 42 of this document the old code calls the following subroutine:
C SYSTEM SUBROUTINE FROM UNIVAC MATH-PACK TO
C SOLVE LINEAR SYSTEM OF EQ.
CALL GJR(A,51,50,NP,NPP,$98,JC,V)
It's a bit of a long shot, but do any veterans or ancient-code buffs recall this system subroutine and its input arguments? I'm having trouble finding any information about it.
If I can adapt the old code to my current application I may rewrite this in C++ or VBA, and will be looking for an equivalent function in those languages.
I'll add to this answer if I find anything more detailed, but I have a place to start looking for the arguments to GJR.
This function is part of the Sperry UNIVAC MATH-PACK library - a full list of functions in the library can be found in http://www.dtic.mil/dtic/tr/fulltext/u2/a170611.pdf GJR is described as "determinant; inverse; solution of simultaneous equations". Marginally helpful.
A better description comes from http://nvlpubs.nist.gov/nistpubs/jres/74B/jresv74Bn4p251_A1b.pdf
A FORTRAN subroutine, one of the Univac 1108 Math Pack programs, available on the library tapes at the University of Maryland computing center. It solves simultaneous equations, computes a determinant, or inverts a matrix, or any combination of the three above, by using a Gauss-Jordan elimination technique with column pivoting.
This is slightly more useful, but what we really want is "MATH-PACK, Programmer Reference", UP-7542 Rev. 1 from Sperry-UNIVAC (Unisys). I find a lot of references to this document but no full-text PDF of the document itself.
I'd take a look at the arguments in the function call, how they are set up and how the results are used, then look for equivalent routines in LAPACK or BLAS. See http://www.netlib.org/lapack/
I have a few books on piping networks including "Analysis of Flow in Pipe Networks" by Jeppson (same author as in the original PDF hosted by USU) https://books.google.com/books/about/Analysis_of_flow_in_pipe_networks.html?id=peZSAAAAMAAJ - I'll see if I can dig that up. The book may have a more portable matrix solver than the proprietary Sperry-UNIVAC library.
Update:
From p. 41 of http://ngds.egi.utah.edu/files/GL04099/GL04099_1.pdf I found documentation for the CGJR function, the complex version of GJR from the same library. It is likely the only difference in the arguments is variable type (COMPLEX instead of REAL):
CGJR is a subroutine which solves simultaneous equations, computes a determinant, inverts a matrix, or does any combination of these three operations, by using a Gauss-Jordan elimination technique with column pivoting.
The procedure for using CGJR is as follows:
Calling statement: CALL CGJR(A,NC,NR,N,MC,$K,JC,V)
where
A is the matrix whose inverse or determinant is to be determined. If simultaneous equations are solved, the last MC-N columns of the matrix are the constant vectors of the equations to be solved. On output, if the inverse is computed, it is stored in the first N columns of A. If simultaneous equations are solved, the last MC-N columns contain the solution vectors. A is a complex array.
NC is an integer representing the maximum number of columns of the array A.
NR is an integer representing the maximum number of rows of the array A.
N is an integer representing the number of rows of the array A to be operated on.
MC is the number of columns of the array A, representing the coefficient matrix if simultaneous equations are being solved; otherwise it is a dummy variable.
K is a statement number in the calling program to which control is returned if an overflow or singularity is detected.
1) If an overflow is detected, JC(1) is set to the negative of the last correctly completed row of the reduction and control is then returned to statement number K in the calling program.
2) If a singularity is detected, JC(1)is set to the number of the last correctly completed row, and V is set to (0.,0.) if the determinant was to be computed. Control is then returned to statement number K in the calling program.
JC is a one-dimensional permutation array of N elements which is used for permuting the rows and columns of A if an inverse is being computed. If an inverse is not computed, this array must have at least one cell for the error return identification. On output, JC(1) is N if control is returned normally.
V is a complex variable. On input, REAL(V) is the option indicator, set as follows:
1. invert matrix
2. compute determinant
3. do 1. and 2.
4. solve system of equations
5. do 1. and 4.
6. do 2. and 4.
7. do 1., 2. and 4.
Notes on usage of row dimension arguments N and NR:
The arguments N and NR refer to the row dimensions of the A matrix. N gives the number of rows operated on by the subroutine, while NR refers to the total number of rows in the matrix as dimensioned by the calling program. NR is used only in the dimension statement of the subroutine. Through proper use of these parameters, the user may specify that only a submatrix, instead of the entire matrix, be operated on by the subroutine.
In your application (pipe flow), look at how matrix A and vector V are populated before the call to GJR and how they are used after the call.
You may be able to replace the call to GJR with a call to LAPACK's SGESV or DGESV without much difficulty.
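For instance, here is a hedged sketch of what a DGESV-based replacement looks like (the 3x3 system is made up by me for illustration; link against LAPACK, e.g. with -llapack). In the pipe-flow code you would pass the N x N block of A and its right-hand-side column instead:

program solve_demo
  implicit none
  integer, parameter :: n = 3
  integer :: ipiv(n), info
  double precision :: a(n,n), b(n)
  ! small example system; the exact solution is (1, 1, 1)
  a = reshape([2d0, 1d0, 0d0,  1d0, 3d0, 1d0,  0d0, 1d0, 2d0], [n, n])
  b = [3d0, 5d0, 3d0]
  ! DGESV(N, NRHS, A, LDA, IPIV, B, LDB, INFO):
  ! A is overwritten with its LU factors, B with the solution
  call dgesv(n, 1, a, n, ipiv, b, n, info)
  if (info /= 0) print *, 'DGESV failed, info =', info
  print *, b
end program solve_demo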
Aside: The Fortran community really needs a drop-in 'Rosetta library' that wraps LAPACK, etc. for replacing legacy/proprietary IBM, UNIVAC, and Numerical Recipes math functions. The perfect case would be that maintainers would replace legacy functions with de facto standard math functions but in the real world, many of these older programs are un(der)maintained and there simply isn't the will (or, as in this case, the ability) to update them.
Update 2:
I started work on a compatibility library for the Sperry MATH-PACK and STAT-PACK routines as well as a few other legacy libraries, posted at https://bitbucket.org/apthorpe/alfc
Further, I located my copy of Jeppson's Analysis of Flow in Pipe Networks, which is a slightly more legible version of the PDF of Steady Flow Analysis of Pipe Networks: An Instructional Manual, and modernized the codes listed in the text. I have posted those at https://bitbucket.org/apthorpe/jeppson_pipeflow
Note that I found a number of errors in both the code listings and in the example problems given for many of the codes. If you're trying to learn how to write a pipe flow solver based on Jeppson's paper or text, I'd strongly suggest reviewing my updated codes and test cases because they will save you hours of effort trying to understand why the code doesn't work and why you can't replicate the example cases. This took a fair amount of forensic computing to sort out.
Update 3:
The source to CGJR and DGJR can be found in http://www.dtic.mil/dtic/tr/fulltext/u2/a110089.pdf. DGJR is the closest to what you want, though it references more routines that aren't available (proprietary UNIVAC error-handling routines). It should be easy to convert DGJR to single precision and skip the proprietary calls. Otherwise, use the compatibility library mentioned above.
I've seen this pseudo-random number generator for use in shaders referred to here and there around the web:
float rand(vec2 co){
return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}
It's variously called "canonical", or "a one-liner I found on the web somewhere".
What's the origin of this function? Are the constant values as arbitrary as they seem or is there some art to their selection? Is there any discussion of the merits of this function?
EDIT: The oldest reference to this function that I've come across is this archive from Feb '08, the original page now being gone from the web. But there's no more discussion of it there than anywhere else.
Very interesting question!
I am trying to figure this out while typing the answer :)
First an easy way to play with it: http://www.wolframalpha.com/input/?i=plot%28+mod%28+sin%28x*12.9898+%2B+y*78.233%29+*+43758.5453%2C1%29x%3D0..2%2C+y%3D0..2%29
Then let's think about what we are trying to do here: For two input coordinates x,y we return a "random number". Now this is not a random number though. It's the same every time we input the same x,y. It's a hash function!
The first thing the function does is to go from 2D to 1D. That is not interesting in itself, but the numbers are chosen so that the result typically does not repeat. We also have a floating-point addition there. There will be a few more bits from y or x, but the numbers might just be chosen right so that it mixes.
Then we sample a black box sin() function. This will depend a lot on the implementation!
Lastly, it amplifies the error in the sin() implementation by multiplying and taking the fractional part.
I don't think this is a good hash function in the general case. The sin() is a black box, on the GPU, numerically. It should be possible to construct a much better one by taking almost any hash function and converting it. The hard part is to turn the typical integer operations used in CPU hashing into float (half or 32-bit) or fixed-point operations, but it should be possible.
Again, the real problem with this as a hash function is that sin() is a black box.
The origin is probably the paper: "On generating random numbers, with help of y= [(a+x)sin(bx)] mod 1", W.J.J. Rey, 22nd European Meeting of Statisticians and the 7th Vilnius Conference on Probability Theory and Mathematical Statistics, August 1998
EDIT: Since I can't find a copy of this paper and the "TestU01" reference may not be clear, here's the scheme as described in TestU01 in pseudo-C:
#define A1 ???
#define A2 ???
#define B1 pi*(sqrt(5.0)-1)/2
#define B2 ???
uint32_t n; // position in the stream
double next() {
double t = fract(A1 * sin(B1*n));
double u = fract((A2+t) * sin(B2*t));
n++;
return u;
}
where the only recommended constant value is B1.
Notice that this is for a stream. Converting to a 1D hash, 'n' becomes the integer grid. So my guess is that someone saw this and converted 't' into a simple function f(x,y). Using the original constants above, that would yield:
float hash(vec2 co){
float t = 12.9898*co.x + 78.233*co.y;
return fract((A2+t) * sin(t)); // any B2 is folded into 't' computation
}
The constant values are arbitrary, especially in that they are very large and a couple of decimals away from prime numbers.
Taking the fractional part of a high-amplitude sine multiplied by ~43758 gives a periodic function. It is like a window blind or a sheet of corrugated metal made very fine (because of the large multiplier) and turned at an angle by the dot product.
Since the function is 2-D, the dot product has the effect of rotating the periodic function obliquely relative to the X and Y axes, by a ratio of approximately 13/79. This is inefficient: you could achieve much the same by taking the sine of (13x + 79y), I think, with less maths.
If you find the period of the function in both X and Y, you can sample it so that it looks like a simple sine wave again.
Here is a zoomed-in picture of it: graph
I don't know the origin, but it is similar to many others; if you used it in graphics at regular intervals it would tend to produce moiré patterns, and you could see that it eventually wraps around.
Maybe it's some non-recurrent chaotic mapping, which could explain many things, but it may also be just an arbitrary manipulation with large numbers.
EDIT: Basically, the function fract(sin(x) * 43758.5453) is a simple hash-like function. The sin(x) provides smooth interpolation between -1 and 1, so sin(x) * 43758.5453 interpolates from -43758.5453 to 43758.5453. This is quite a huge range, so even a small step in x produces a large step in the result and a really large variation in the fractional part. The fract is needed to bring the values into the range [0, 1).
Now that we have something like a hash function, we need a function that produces a hash from a vector. The simplest way is to call the hash separately for the x and y components of the input vector, but then we would get symmetric values. So we should first reduce the vector to a scalar: pick some fixed vector and take the dot product with it, and here we go: fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
Also, the chosen vector should be long enough that the result of the dot product spans several periods of the sin function. (A small CPU port for experimenting follows below.)
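For experimenting on the CPU, here is a hedged Fortran port of the one-liner (mine, not canonical); a GPU's sin() and float precision will differ, which is part of why results vary between implementations:

program rand_demo
  implicit none
  ! with single-precision reals, s ~ 4e4 leaves only ~8 bits for the
  ! fractional part, one reason the quality varies by platform
  print *, hash([0.25, 0.75])
contains
  function hash(co) result(h)
     real, intent(in) :: co(2)
     real :: h, s
     s = sin(dot_product(co, [12.9898, 78.233]))*43758.5453
     h = s - real(floor(s))   ! GLSL fract()
  end function hash
end program rand_demo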
I do not believe this to be the true origin, but OP's code is presented as a code example in "The Book of Shaders" by Patricio Gonzalez Vivo and Jen Lowe ( https://thebookofshaders.com/10/ ). In their code, Patricio Gonzalez Vivo is cited as the author, i.e., "// Author #patriciogv - 2015"
Since the OP's research dates back even further (to '08), the source might at least explain its popularity, and the author might be able to shed some light on his source.
My program tries to solve a system of linear equations. In order to do that, it assembles matrix coeff_matrix and vector value_vector, and uses Eigen to solve them like:
Eigen::VectorXd sol_vector = coeff_matrix
.colPivHouseholderQr().solve(value_vector);
The problem is that the system can be both over- and under-determined. In the former case, Eigen gives either a correct or an incorrect solution, and I check the solution using coeff_matrix * sol_vector - value_vector.
However, please consider the following system of equations:
a + b - c = 0
c - d = 0
c = 11
- c + d = 0
In this particular case, Eigen solves the three latter equations correctly, but it also picks particular values for a and b, even though they are not uniquely determined.
What I would like to achieve is that only the equations which have only one solution would be solved, and the remaining ones (the first equation here) would be retained in the system.
In other words, I'm looking for a method to find out which equations can be solved in a given system of equations at the time, and which cannot because there will be more than one solution.
Could you suggest any good way of achieving that?
Edit: please note that in most cases the matrix won't be square. I've added one more row here just to note that over-determination can happen too.
I think what you want is the singular value decomposition (SVD), which will give you exactly what you want. After the SVD, "the equations which have only one solution will be solved", and the solution is the pseudoinverse one. It will also give you the null space (where the infinitely many solutions come from) and the left null space (where inconsistency comes from, i.e. no solution).
Based on the SVD comment, I was able to do something like this:
Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu();
Eigen::VectorXd sol_vector = lu.solve(value_vector);
Eigen::VectorXd null_vector = lu.kernel().rowwise().sum();
AFAICS, the null_vector rows corresponding to single solutions are 0s while the ones corresponding to indeterminate solutions are 1s. I can reproduce this throughout all my examples with the default threshold Eigen has.
However, I'm not sure if I'm doing something correct or just noticed a random pattern.
What you need is to calculate the determinant of your system. If the determinant is 0, there is no unique solution (the system has either none or infinitely many). If the determinant is very small, a solution exists, but I wouldn't trust the solution found by a computer (it will lead to numerical instabilities).
Here is a link to what is the determinant and how to calculate it: http://en.wikipedia.org/wiki/Determinant
Note that Gaussian elimination should also work: http://en.wikipedia.org/wiki/Gaussian_elimination
With this method, you end up with rows of 0s if there are infinitely many solutions (a sketch follows below).
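Here is a hedged sketch of that rank check via Gaussian elimination with partial pivoting, written in Fortran for concreteness (the question itself uses C++/Eigen), applied to the augmented matrix of the question's system; rows that reduce to all zeros are the dependent equations:

program rank_demo
  implicit none
  integer, parameter :: m = 4, nv = 4            ! equations, unknowns
  real, parameter :: eps = 1.0e-6
  real :: ab(m, nv+1), tmp(nv+1)
  integer :: row, col, piv, k, rank

  ! augmented matrix [A|b] for: a+b-c=0, c-d=0, c=11, -c+d=0
  ab(1,:) = [1.0, 1.0, -1.0,  0.0,  0.0]
  ab(2,:) = [0.0, 0.0,  1.0, -1.0,  0.0]
  ab(3,:) = [0.0, 0.0,  1.0,  0.0, 11.0]
  ab(4,:) = [0.0, 0.0, -1.0,  1.0,  0.0]

  row = 1
  do col = 1, nv
     piv = row - 1 + maxloc(abs(ab(row:m, col)), dim=1)  ! partial pivoting
     if (abs(ab(piv, col)) < eps) cycle                  ! no pivot in this column
     tmp = ab(row,:); ab(row,:) = ab(piv,:); ab(piv,:) = tmp
     do k = row + 1, m
        ab(k,:) = ab(k,:) - (ab(k,col)/ab(row,col)) * ab(row,:)
     end do
     row = row + 1
     if (row > m) exit
  end do
  rank = row - 1

  print *, 'rank =', rank, ' with', nv, 'unknowns'
  if (rank < nv) print *, 'under-determined: some variables not pinned down'
  do k = rank + 1, m        ! a zero row with nonzero RHS means inconsistency
     if (abs(ab(k, nv+1)) > eps) print *, 'inconsistent equation found'
  end do
end program rank_demo

For this system the rank comes out as 3 with 4 unknowns: c and d are pinned down, while a and b are only constrained through their sum, matching what the asker observed.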
Edit
In case the matrix is not square, you first need to extract a square matrix. There are two cases:
You have more variables than equations: then you have either no solution, or an infinite number of them.
You have more equations than variables: in this case, find a square sub-matrix with non-null determinant. Solve for this matrix and check the solution. If the solution doesn't fit, you have no solution. If it fits, the extra equations were linearly dependent on the extracted ones.
In both cases, before checking the dimension of the matrix, remove rows and columns with only 0s.
As for Gaussian elimination, it should work directly with non-square matrices. However, this time, you should check that the number of non-empty rows (i.e. rows with some non-zero values) is equal to the number of variables. If it's less, you have an infinite number of solutions, and if it's more, you don't have any solutions.