I need to calculate the number of steps for a numerical method such that for i=0 the time is t=tstart and for i=nsteps the time is t=tstop.
I know the real numbers tstart, tstop and dt, and to calculate the integer nsteps I use
nsteps = FLOOR((tstop-tstart)/dt)
but I'm worried, because FLOOR could give an integer one less than I need. The following is the loop over time:
DO i=0,nsteps
t = tstart + i*dt
END DO
I think this is a very common calculation, but I don't know what the best way to do it is. Maybe there is a better idea than a DO loop, maybe a DO WHILE loop.
Thanks in advance for your comments.
I am trying to run a code for vortex simulations in parallel using OpenMP. These are similar to particle simulations: at each time step, the position of a vortex at the next time step has to be computed from its velocity, which is determined by the positions of all the other vortices at the current time step. Vortices are deleted once they leave the domain. I compare the number of vortices at each time step between the parallel and serial versions of the code, and run each version multiple times.
For the serial versions, vortex counts match exactly at every time step. For the parallel case, all the runs match the serial case for a few tens of time steps, after which each parallel run shows a difference but remains within a 7-10% error bound of the serial case (as can be seen in the result link below). I know that this may be because of round-off errors in the parallel case, owing to the difference in the order of computational steps due to distribution among the different threads, but should the error really be as high as 10%?
I have only used the reduction clause in a parallel do construct. The only parallel region in the whole code is within a function vblob(), which is inside a module that I call from a main code. The functions called within vblob(), namely ixi() and fxi(), are outside this module.
function vblob(blobs,xj,gj)
  complex(8), intent(in) :: blobs(:,:), xj
  complex(8) :: delxi, delxic, di, gvic, xi
  real(8), intent(in) :: gj
  real(8) :: vblob(2)
  integer :: p

  gvic = 0.0; delxi = 0.0; delxic = 0.0; di = 0.0; xi = 0.0
  !$omp parallel do private(xi,delxic,delxi,di) shared(xj) reduction(+:gvic)
  do p = 1, size(blobs,1)
    xi = ixi(blobs(p,1))
    delxic = xj-conjg(xi)
    delxi = xj-xi
    di = del*fxi(xi)
    gvic = gvic + real(blobs(p,2))*1/delxic
    if (abs(delxi) .gt. 1.E-4) then
      gvic = gvic + (-1)*real(blobs(p,2))*1/delxi
    end if
  end do
  !$omp end parallel do
  gvic = j*gvic*fxi(xj)/(2*pi)
  vblob(1) = real(gvic)
  vblob(2) = -imag(gvic)
end function vblob
If the way I have constructed the parallel code is wrong, then errors should show up within the first few time steps, right?
(As can be seen in this result, the 'blobs' and 'sheets' are just types of vortex elements; the blue line is the total number of elements. P and S stand for parallel and serial respectively, and R stands for runs. The solid plot markers are the serial code and the hollow ones are the three runs of the parallel code.)
EDIT: When I change the numerical precision of my variables to real(4) instead, the divergence in results happens at an earlier time step than in the real(8) case above. So it's clearly a round-off error issue.
TL;DR: I want to check this with anyone else who has seen such a result over a range of time steps, where the parallel code matches the serial code for the first few time steps and then diverges.
Your code essentially sums up a lot of terms into gvic. Floating-point arithmetic is not associative; that is, (a+b)+c is not the same as a+(b+c) due to rounding. Also, depending on the values and the signs of the terms, there may be a serious loss of precision in each operation. See here for essential reading on the subject.
While the sequential loop computes (given no clever compiler optimisations):
gvic = (...((((g_1 + g_2) + g_3) + g_4) + g_5) + ...)
where g_i is the value added to gvic by iteration i, the parallel version computes:
gvic = t_0 + t_1 + t_2 + ... t_(#threads-1)
where t_i is the accumulated private value of gvic in thread i (threads in OpenMP are 0-numbered even in Fortran). The order in which the different t_is are reduced is unspecified. The OpenMP implementation is free to choose whatever it deems fine. Even if all t_is are summed in order, the result will still differ from the one computed by the sequential loop. Unstable numerical algorithms are exceptionally prone to producing different results when parallelised.
This is something you can hardly avoid completely, but instead learn to control or simply live with its consequences. In many cases, the numerical solution to a problem is an approximation anyway. You should focus on conserved or statistical properties. For example, an ergodic molecular dynamics simulation may produce a completely different phase trajectory in parallel, but values such as the total energy or the thermodynamic averages will be pretty close (unless there is some serious algorithmic error or really bad numerical instability).
A side note - you are actually lucky to enter this field now, when most CPUs use standard 32- and 64-bit floating-point arithmetic. Years ago, when x87 was a thing, floating-point operations were done with 80-bit internal precision and the end result would depend on how many times a value leaves and re-enters the FPU registers.
I am writing a code for a Monte Carlo simulation in Fortran, but I am having a lot of problems because of the small numbers involved.
The biggest problem is that in my code particle positions are not updated; the offending code looks like this
x=x+step*cos(p)*sin(t)
with step=0.001. With this, the code won't update the position and I get an infinite loop because the particle never exits the region. If I modify my code with something like this:
x=x+step
or
x=x+step*cos(t)
there is no problem. So it seems that the product step*cos(p)*sin(t) (of the order 10**-4) is too small and is treated as zero.
x is of the order 10**4.
How do I solve this problem in portable way?
My compiler is the latest f95.
Your problem is essentially the one of this other question. However, it's useful to add some Fortran-specific comments.
As in that other question, the discrete nature of floating-point numbers means that there is a point where one number is too small to make a difference when added to another. In the case of this question:
if (1e4+1e-4==1e4) print *, "Oh?"
if (1d4+1d-4==1d4) print *, "Really?"
end
That is, you may be able to use double precision reals and you'll see the problem go away.
What is the smallest number you can add to 1e4 to get something different from 1e4 (or to 1d4)?
print *, 1e4 + SPACING(1e4), 1e4+SPACING(1e4)/2
print *, 1d4 + SPACING(1d4), 1d4+SPACING(1d4)/2
end
This spacing varies with the size of the number. For large numbers it is large and around 1 it is small.
print*, EPSILON(1e0), SPACING([(1e2**i,i=0,5)])
print*, EPSILON(1d0), SPACING([(1d2**i,i=0,5)])
end
I have a grid with thousands of double precision reals.
My code iterates over the grid, and I need it to stop when it has reached convergence to 3 decimal places.
The target is to have it run as fast as possible, but it needs to give the same result (to 3 dp) every time.
At the minute I'm doing something like this
REAL(KIND=DP) :: TOL = 0.001_DP
DO WHILE (.NOT. CONVERGED)
  CONVERGED = .TRUE.
  DO I = 1, NUM_POINTS
    NEW_POTENTIAL = !blah blah blah
    IF (CONVERGED) THEN
      IF (NEW_POTENTIAL < OLD_POTENTIAL - TOL .OR. NEW_POTENTIAL > OLD_POTENTIAL + TOL) THEN
        CONVERGED = .FALSE.
      END IF
    END IF
    OLD_POTENTIAL = NEW_POTENTIAL
  END DO
END DO
I'm thinking that so many IF statements can't be great for performance. I thought about checking for convergence at the end: finding the average value (summing the whole grid and dividing by num_points) and checking whether that has converged in the same way as above, but I'm not convinced this will always be accurate.
What is the best way of doing this?
If I understand correctly you've got some kind of time-stepping going on, where you create the values in new_potential by calculations on old_potential. Then make old equal to new and carry on.
You could replace your existing convergence tests with the single statement
converged = all(abs(new_potential - old_potential)<tol)
which might be faster. If the speed of the test is a major concern you could test only every other (or every third or fourth, ...) iteration.
A few comments:
1) If you used a potential array with 2 planes, instead of an old_ and new_potential, you could transfer new_ into old_ by swapping indices at the end of each iteration. As your code stands there's a lot of data movement going on.
2) While semantically you are right to have a while loop, I'd always use a do loop with a maximum number of iterations, just in case the convergence criterion is never met.
3) In your declaration REAL(KIND=DP) :: TOL = 0.001_DP, the _DP suffix on the literal is not strictly necessary here: a bare 0.001 is a default-real constant that would be converted on assignment, and for a tolerance the tiny difference is immaterial (though the suffix costs nothing, so keeping it is harmless). I'd also make this a parameter; the compiler may be able to optimise its use if it knows that it is immutable.
4) You don't really need to execute CONVERGED = .TRUE. inside the outermost loop, set it before the first iteration -- this will save you a nanosecond or two.
Finally, if your convergence criterion is that every element in the potential array has converged to 3dp then that is what you should test for. It would be relatively easy to construct counterexamples for your suggested averages. However, my concern would be that your system will never converge on every element and that you should be using some matrix norm computation to determine convergence. SO is not the place for a lesson in that topic.
What are the calculations for the convergence criteria? Unless they are worse than the calculations to advance the potential, it is probably better to have the IF statement terminate the loop as soon as possible, rather than guess a very large number of iterations to be sure of obtaining a good solution.
Re High Performance Mark's suggestion #1, if the copying operation is a significant portion of the run time, you could also use pointers.
The only way to be sure about this stuff is to measure the run time; Fortran provides intrinsic functions to measure both CPU and wall-clock time. Otherwise you may modify some portion of your code to make it faster, perhaps making it harder to understand and possibly introducing a bug, possibly without much improvement in runtime. If that portion was taking a small amount of the total runtime, no amount of cleverness can make much difference.
As High Performance Mark says, though the current semantics are elegant, you probably want to guard against an infinite loop. One approach:
PotentialLoop: do i = 1, MaxIter
  blah
  Converged = test...
  if (Converged) exit PotentialLoop
  blah
end do PotentialLoop
if (.NOT. Converged) write (*, *) "error, did not converge"
I = 1
DO
  NEWPOT = !bla bla bla
  IF (ABS(NEWPOT-OLDPOT).LT.TOL) EXIT
  OLDPOT = NEWPOT
  I = MOD(I,NUMPOINTS) + 1
END DO
Maybe better
I = 1
DO
  NEWPOT = !bla bla bla
  IF (ABS(NEWPOT-OLDPOT).LT.TOL) EXIT
  OLDPOT = NEWPOT
  IF (I.EQ.NUMPOINTS) THEN
    I = 1
  ELSE
    I = I + 1
  END IF
END DO
I just have a quick question on how to speed up calculations of infinite series.
This is just one of the examples:
arctan(x) = x - x^3/3 + x^5/5 - x^7/7 + ....
Let's say you have some library which allows you to work with big numbers. Then the first obvious solution would be to start adding/subtracting each element of the sequence until you reach some target N.
You can also pre-save x^n, so for each next element, instead of calculating x^(n+2) you can do lastX*(x^2).
But overall it seems to be a very sequential task; what can you do to utilize multiple processors (8+)?
Thanks a lot!
EDIT:
I will need to calculate something from 100k to 1M iterations. This is a C++-based application, but I am looking for an abstract solution, so it shouldn't matter.
Thanks for reply.
You need to break the problem down to match the number of processors or threads you have. In your case you could have for example one processor working on the even terms and another working on the odd terms. Instead of precalculating x^2 and using lastX*(x^2), you use lastX*(x^4) to skip every other term. To use 8 processors, multiply the previous term by x^16 to skip 8 terms.
P.S. Most of the time when presented with a problem like this, it's worthwhile to look for a more efficient way of calculating the result. Better algorithms beat more horsepower most of the time.
If you're trying to calculate the value of pi to millions of places or something, you first want to pay close attention to choosing a series that converges quickly and is amenable to parallelization. Then, if you have enough digits, it will eventually become cost-effective to split them across multiple processors; you will have to find or write a bignum library that can do this.
Note that you can factor out the variables in various ways; e.g.:
atan(x)= x - x^3/3 + x^5/5 - x^7/7 + x^9/9 ...
= x*(1 - x^2*(1/3 - x^2*(1/5 - x^2*(1/7 - x^2*(1/9 ...
Although the second line is more efficient than a naive implementation of the first line, the latter calculation still has a linear chain of dependencies from beginning to end. You can improve your parallelism by combining terms in pairs:
= x*(1-x^2/3) + x^3*(1/5-x^2/7) + x^5*(1/9 ...
= x*( (1-x^2/3) + x^2*((1/5-x^2/7) + x^2*(1/9 ...
= [yet more recursive computation...]
However, this speedup is not as simple as you might think, since the time taken by each computation depends on the precision needed to hold it. In designing your algorithm, you need to take this into account; your algebra is intimately involved too. For the above case, you'll get infinitely repeating fractions if you do regular divisions by your constant numbers, so you need to figure out some way to deal with that, one way or another.
Well, for this example, you might sum the series (if I've got the brackets in the right places):
(-1)^i * (x^(2i + 1))/(2i + 1)
Then on processor 1 of 8 compute the sum of the terms for i = 1, 9, 17, 25, ...
Then on processor 2 of 8 compute the sum of the terms for i = 2, 10, 18, 26, ...
and so on, finally adding up the partial sums.
Or, you could do as you (nearly) suggest: give i = 1..16 (say) to processor 1, i = 17..32 to processor 2, and so on, and they can compute each successive power of x from the previous one. If you want more than 8x16 elements in the series, then assign more to each processor in the first place.
I doubt whether, for this example, it is worth parallelising at all; I suspect that you will reach double-precision accuracy on 1 processor while the parallel threads are still waking up. But that's just a guess for this example, and you can probably find many series for which parallelisation is worth the effort.
And, as @Mark Ransom has already said, a better algorithm ought to beat brute force and a lot of processors every time.
I understand this is a classic programming problem and therefore I want to be clear I'm not looking for code as a solution, but would appreciate a push in the right direction. I'm learning C++ and as part of the learning process I'm attempting some programming problems. I'm attempting to write a program which deals with numbers up to the factorial of 1 billion. Obviously these are going to be enormous numbers and way too big to deal with using normal arithmetic operations. Any indication as to what direction I should go in trying to solve this type of problem would be appreciated.
I'd rather try to solve this without using additional libraries if possible
Thanks
PS - the problem is here http://www.codechef.com/problems/FCTRL
Here's the method I used to solve the problem; I arrived at it by reading the comments below:
Solution -- The number 5 is a prime factor of any number ending in zero. Therefore, by dividing N by 5 repeatedly and adding the quotients, you get the number of trailing zeros in N!
E.G. - Number of trailing zeros in 126! = 31
126/5 = 25 remainder 1
25/5 = 5 remainder 0
5/5 = 1 remainder 0
25 + 5 + 1 = 31
This works for any value; just keep dividing until the quotient is less than 5.
Skimmed this question, not sure if I really got it right but here's a deductive guess:
First question: how do you get a zero on the end of the number? By multiplying by 10.
How do you multiply by 10? Either by multiplying by 10 directly, or by 2 x 5...
So, for X! how many 10s and 2x5s do you have...?
(luckily 2 & 5 are prime numbers)
edit: Here's another hint - I don't think you need to do any multiplication. Let me know if you need another hint.
Hint: you may not need to calculate N! in order to find the number of zeros at the end of N!
To solve this question, as Chris Johnson said, you have to look at the number of 0s.
The prime factors of 10 are 2 and 5. So you can go through each of the numbers of N! and write them in terms of 2^x * 5^y * 10^z, discarding the other factors.
Now the answer will be min(x,y) + z.
One interesting thing I learned from this question is that it's always better to store the factorial of a number in terms of its prime factors for easy comparisons.
To actually compute x^y there is an easy method (repeated squaring) used in the RSA algorithm, whose details I don't remember. I will try to update the post if I find it.
This isn't a good answer to your question as you've modified it a bit from what I originally read. But I will leave it here anyway to demonstrate the impracticality of actually trying to do the calculations by main brute force.
One billion factorial is going to be out of reach of any bignum library. Such numbers will require more space to represent than almost anybody has in RAM. You are going to have to start paging the numbers in from storage as you work on them. There are ways to do this. The guy who recently calculated π out to 2700 billion places used such a library
Do not use the naive method. If you need to calculate the factorial, use a fast algorithm: http://www.luschny.de/math/factorial/FastFactorialFunctions.htm
I think that you should come up with a way to solve the problem in pseudo code before you begin to think about C++ or any other language for that matter. The nature of the question as some have pointed out is more of an algorithm problem than a C++ problem. Those who suggest searching for some obscure library are pointing you in the direction of a slippery slope, because learning to program is learning how to think, right? Find a good algorithm analysis text and it will serve you well. In our department we teach from the CLRS text.
You need a "big number" package - either one you use or one you write yourself.
I'd recommend doing some research into "large number algorithms". You'll want to implement the C++ equivalent of Java's BigDecimal.
Another way to look at it is using the gamma function. You don't need to multiply all those values to get the right answer.
To start you off, you should store the number in some sort of array like a std::vector (a digit for each position in the array) and you need to find a certain algorithm that will calculate a factorial (maybe in some sort of specialized class). ;)
//SIMPLE FUNCTION TO COMPUTE THE FACTORIAL OF A NUMBER
//THIS ONLY WORKS UP TO N = 20 (UNSIGNED 64-BIT ARITHMETIC OVERFLOWS AT 21!)
//CAN YOU SUGGEST HOW WE CAN IMPROVE IT TO COMPUTE THE FACTORIAL OF 400 PLEASE?
#include <iostream>
using namespace std;

unsigned long long factorial(int x); //function to compute factorial, described below

int main()
{
    int N; //you can also hard-code this instead of reading it with cin
    cout << "Enter integer\n";
    cin >> N;
    factorial(N);
    return 0;
} //end of main

unsigned long long factorial(int x) //function to compute the factorial
{
    unsigned long long results = 1;
    for (int i = 1; i <= x; i++)
    {
        results = results * i;
    }
    cout << "Factorial of " << x << " is " << results << endl;
    return results;
}