Do loop with 2 variables changing in each step of loop - fortran

I'm working in Fortran 90. I need to calculate a recursion like x(n) = 6*x(n-1) + 7*x(n-2), where x(n-1) is the value at step n-1 and x(n-2) is the value at step n-2. So if we set x1 = 2 and x0 = 1, we get x2 = 6*x1 + 7*x0, and so on for each n.
So I wrote
x0 = 1.
x1 = 2.
do i = 1,20
xn = 6.*x1 + 7.*x0
x1 = xn
x0 = x1
end do
but this code replaces both x0 and x1 with xn, and I need to replace x1 with xn and x0 with x1 in each step. I've tried many things but failed. Any idea how to do that?

Though this question has already been answered, let me address a more general case that comes up frequently: the next value in the iteration depends on the n previous values (in the present case n = 2). The general strategy is to keep a 1-d array of size n holding the initial values x(1), x(2), ..., x(n). In each iteration we use these values to calculate the next value x(n+1), then update the array by replacing x(1) with x(2), x(2) with x(3), ..., x(n) with x(n+1), and use the updated values to calculate the next value of x, and so on. A particular example where such a strategy must necessarily be used is the integration of time-delayed systems.
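As a minimal sketch of this shifting idea (written in Python for brevity, using the recursion from the question; the same pattern carries over directly to a Fortran array x(1:n)):
from collections import deque

# the n most recent values, oldest first: here n = 2 with x0 = 1, x1 = 2
prev = deque([1.0, 2.0])

for i in range(20):
    x_next = 6.0 * prev[-1] + 7.0 * prev[-2]   # next value from the n previous ones
    prev.popleft()                             # drop the oldest value ...
    prev.append(x_next)                        # ... and append the newest one

print(prev[-1])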

@parthian-shot has given the correct answer in the comments. But that leaves the question marked as unanswered, so I am repeating it here:
You are assigning the value of xn to x1, and then the value of x1 (which is now the same as xn) to x0. You just need to swap the order of the two assignments:
do i = 1,20
xn = 6.*x1 + 7.*x0
x0 = x1
x1 = xn
end do

Related

Infinite loop while implementing Runge-Kutta method with step correction

I'm implementing the Runge-Kutta-4 method for ODE approximation with a step correction procedure.
Here's the code:
function RK4 (v,h,cant_ec) !4th-order Runge-Kutta
  real(8),dimension(0:cant_ec)::RK4,v
  real::h
  integer::cant_ec
  real(8),dimension(0:cant_ec)::k1,k2,k3,k4
  k1 = h*vprima(v)
  k2 = h*vprima(v+k1/2.0)
  k3 = h*vprima(v+k2/2.0)
  k4 = h*vprima(v+k3)
  v = v + (k1+2.0*k2+2.0*k3+k4)/6.0 !the current approximation with step h
  RK4 = v
end function RK4
subroutine RK4h1(v,h,xf,tol,cant_ec) !Runge-Kutta with step correction by method 1
  real(8),dimension(0:cant_ec)::v
  real::h,tol,xf
  integer::cant_ec,i,n
  real(8),dimension(0:cant_ec)::v1,v2
  real(8)::error
  n = int((xf-v(0))/h +0.5)
  open(2,file = "derivada.txt",status="replace")
  error = 2*tol
  do i = 1,n, 1
    do while(error > tol)
      v1 = RK4(v,h,cant_ec)
      v2 = RK4(v,h/2,cant_ec)
      v2 = v2 + RK4(v+v2,h/2,cant_ec)
      error = MAXVAL(ABS(v1-v2))
      if (error > tol) then
        h = h/2
      end if
    end do
  end do
  write(*,*)v1
  write(2,*)v1
  close(2,status="keep")
  call system("gnuplot -persist 'derivada.p'")
end subroutine RK4h1
Here h is the step size, v is a vector of cant_ec components corresponding to the order of the ODE (that is: v(0) = x, v(1) = y, v(2) = y', etc.), tol is the error tolerance, and xf is the end of the x interval (assuming it starts at 0). All these values are entered by the user before the subroutine call. The initial value given for this particular function is y(0) = -1. Everything else is defined by the user when running the program.
The differential equation is given by:
function vprima(v,x,y) !definition of the derivative function
  real(8),dimension(0:cant_ec)::v,vprima
  vprima(0) = 1.0
  vprima(1) = (-v(1)/(v(0)**2+1))
end function vprima
noting that on the main program this assignment occurs:
v(0) = x
v(1) = y
where x and y are the initial values of the function, given by the user.
My issue is that the program seems to get stuck in an infinite loop when I call RK4h1.
Any help, or clue, would be appreciated. Thank you.
v2 = v2 + RK4(v+v2,h/2,cant_ec) is wrong, it should be v2 = RK4(v2,h/2,cant_ec), as the result of RK4 is the next point, not the update to the next point. As the error calculation is thus wrong, the step size gets reduced indefinitely. After some 50 reductions, the RK4 step will no longer advance the state, the increment being too small.
Having a fixed number of steps combined with a variable step size will also lead to problems.
The inner loop does not make any sense whatsoever. The overall effect is that after each step size reduction i gets incremented by one. So theoretically, if n<50 the program should terminate, but with a final state very close to the initial state.
The local error should be compared to tol*h, so that the global error becomes proportional to tol.
There should also be an instruction that increases h if the local error becomes too small.
See How to perform adaptive step size using Runge-Kutta fourth order (Matlab)? for another example of using RK4 with step-size control by double steps.
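For reference, here is a minimal sketch of this kind of step-doubling error control in Python (a sketch only, not a drop-in fix for the Fortran code above; it assumes the scalar ODE y' = -y/(x**2 + 1) from vprima, with y(0) = -1):
def f(x, y):                       # derivative from vprima: y' = -y/(x**2 + 1)
    return -y / (x**2 + 1.0)

def rk4_step(f, x, y, h):          # one classical RK4 step
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6.0

def rk4_adaptive(f, x, y, xf, h, tol):
    while x < xf:
        h = min(h, xf - x)
        y_big = rk4_step(f, x, y, h)                                  # one step of size h
        y_small = rk4_step(f, x + h/2, rk4_step(f, x, y, h/2), h/2)   # two steps of size h/2
        err = abs(y_big - y_small)
        if err > tol * h:          # local error compared to tol*h, as discussed above
            h /= 2                 # reject the step and retry with a smaller h
            continue
        x, y = x + h, y_small      # accept the more accurate double-step result
        if err < tol * h / 64:     # error much smaller than needed: grow the step again
            h *= 2
    return x, y

print(rk4_adaptive(f, 0.0, -1.0, 5.0, 0.1, 1e-6))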

Given N lines on a Cartesian plane. How to find the bottommost intersection of lines efficiently?

I have N distinct lines on a Cartesian plane. Since the slope-intercept form of a line is y = mx + c, the slope and y-intercept of each line are given. I have to find the y-coordinate of the bottommost intersection point of any two lines.
I have implemented an O(N^2) brute-force solution in C++, but it is too slow for N = 10^5. Here is my code:
#include <iostream>
#include <vector>
#include <utility>
#include <algorithm>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<pair<int, int>> lines(n);
    for (int i = 0; i < n; ++i) {
        int slope, y_intercept;
        cin >> slope >> y_intercept;
        lines[i].first = slope;
        lines[i].second = y_intercept;
    }
    double min_y = 1e9;
    for (int i = 0; i < n; ++i) {
        for (int j = i + 1; j < n; ++j) {
            // since the lines are distinct, two lines with the same slope never intersect
            if (lines[i].first == lines[j].first)
                continue;
            double x = (double) (lines[j].second - lines[i].second) / (lines[i].first - lines[j].first); // x-coordinate of intersection point
            double y = lines[i].first * x + lines[i].second; // y-coordinate of intersection point
            min_y = min(y, min_y);
        }
    }
    cout << min_y << endl;
}
How to solve this efficiently?
In case you are considering solving this by means of Linear Programming (LP), it can be done efficiently, since the solution which minimizes or maximizes the objective function always lies at a vertex of the feasible region, i.e. at an intersection of the constraint boundaries. I will show you how to model this problem as a maximization LP. Suppose you have N = 2 first-degree equations to consider:
y = 2x + 3
y = -4x + 7
then you will set up your simplex tableau like this:
x0  x1  x2  x3   b
-2   1   1   0   3
 4   1   0   1   7
where the x0 column holds the negated coefficient of x from the original first-degree functions, the x1 column holds the coefficient of y (generally +1), x2 and x3 form the N-by-N identity matrix (they are the slack variables), and b holds the value of the independent term. In this case, the constraints use the <= operator.
Now, the objective function should be:
x0 x1 x2 x3
1 1 0 0
To solve this LP, you may use the "simplex" algorithm which is generally efficient.
Furthermore, the result will be an array holding the value assigned to each variable. In this scenario the solution is:
x0 x1 x2 x3
0.6666666667 4.3333333333 0.0 0.0
The pair (x0, x1) represents the point you are looking for, where x0 is its x-coordinate and x1 is its y-coordinate. There are other results you could get; for example, there could be no solution at all. You can find out more in plenty of books, such as "Linear Programming and Extensions" by George Dantzig.
Keep in mind that the simplex algorithm only works for nonnegative values of x0, x1, ..., xn. This means that before applying the simplex method, you must make sure the optimum point you are looking for is not outside the feasible region.
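If you just want to try this quickly without writing a simplex routine yourself, any LP solver reproduces the two-line example above. A small sketch in Python using SciPy's linprog (which minimizes, so the maximization objective is negated):
from scipy.optimize import linprog

# lines y = 2x + 3 and y = -4x + 7, written as -m*x + y <= c
A_ub = [[-2, 1],     # -2x + y <= 3
        [ 4, 1]]     #  4x + y <= 7
b_ub = [3, 7]
c = [-1, -1]         # maximize x + y  ==  minimize -x - y

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)         # approximately [0.6667, 4.3333]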
EDIT 2:
I believe making the problem feasible can be done easily in O(N) by shifting the original functions into a new position, by adding a large constant to the independent term of each function. Check my comment below. (EDIT 3: this means it won't work for every possible scenario, though it's quite easy to implement. If you want an exact answer for any possible scenario, see the following explanation of how to convert the infeasible quadrants into the feasible one and back.)
EDIT 3:
A better method to address this problem, one that can infer the minimum point precisely even if it lies on the negative side of either x or y, is to convert the other three quadrants into quadrant 1.
Consider the following generic first degree function template:
f(x) = mx + k
Consider the following generic cartesian plane point template:
p = (p0, p1)
Converting a function and a point from y-negative quadrants to y-positive:
y_negative_to_y_positive( f(x) ) = -mx - k
y_negative_to_y_positive( p ) = (p0, -p1)
Converting a function and a point from x-negative quadrants to x-positive:
x_negative_to_x_positive( f(x) ) = -mx + k
x_negative_to_x_positive( p ) = (-p0, p1)
Summarizing:
Quadrant     sign of (x, y)     converting f(x) or p to Q1
Quadrant 1   (+, +)             f(x)
Quadrant 2   (-, +)             x_negative_to_x_positive( f(x) )
Quadrant 3   (-, -)             y_negative_to_y_positive( x_negative_to_x_positive( f(x) ) )
Quadrant 4   (+, -)             y_negative_to_y_positive( f(x) )
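In code, the two conversions are just sign flips (a small Python sketch; a line y = m*x + k is represented by the pair (m, k)):
def y_negative_to_y_positive(line):        # mirror across the x-axis
    m, k = line
    return (-m, -k)

def x_negative_to_x_positive(line):        # mirror across the y-axis
    m, k = line
    return (-m, k)

def y_negative_to_y_positive_point(p):
    return (p[0], -p[1])

def x_negative_to_x_positive_point(p):
    return (-p[0], p[1])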
Now convert the functions from quadrants 2, 3 and 4 into quadrant 1. Run simplex 4 times: once on the original quadrant 1 and three more times on the converted quadrants 2, 3 and 4. For the cases originating from a y-negative quadrant, you will need to model the simplex as a minimization instance, with negative slack variables, which turns the constraints into the >= format. I will leave the details of modelling the same problem as a minimization task to you.
Once you have the results for each quadrant, you will have at most 4 points in hand (you might find out, for example, that there is no point in a specific quadrant). Convert each of them back to its original quadrant, reversing the original conversion in an analogous manner.
Now you may freely compare the 4 points with each other and decide which one is the one you need.
EDIT 1:
Note that N, the number of first-degree functions, can be as large as you wish.
Other methods for solving this problem could be better.
EDIT 3: Check out the complexity of the simplex algorithm; in the average case it runs efficiently.
Cheers!

spss IF loop MISSINGS ignored in special cases

I want to compute a variable X = x1 + x2 + x3, where x1, x2 and x3 each have the value 1 or 0. I recoded system-missing values to -77. There are two conditions that should be met.
1) If there is a missing value in x1, x2 or x3, it should be ignored as long as at least one of the other variables has the value 1; in that case the sum should still be calculated despite the missing value (e.g. X = x1 + x2 + x3 = 0 + missing + 1 = 1).
2) If there is a missing value in x1, x2 or x3 and there is no 1 at all, the missing value should not be ignored and the sum should not be calculated (e.g. X = x1 + x2 + x3 = 0 + missing + missing = missing).
I tried to do this with IF statements but it doesn't work and I just can't figure out why.
COMPUTE X =SUM(x1, x2, x3).
IF (x1=-77 AND x2~=1 AND x3~=1) X=999.
IF (x2=-77 AND x1~=1 AND x3~=1) X=999.
IF (x3=-77 AND x1~=1 AND x2 ~=1)X=999.
EXECUTE.
These are the returned results:
When x1=1, x2=0, x3=-77, then X=1. (That is the result I want.)
The problem arises when x1=-77, x2=0, x3=0, because then X=0 and not 999 as I want it to be.
I think that with the syntax above I am close to the result, but something is missing.
Below I post some other attempts I made, but none of them worked, and I think the one above is the closest to the right answer.
Thank you so much for your help and happy Easter!
Cheers desperate Ichav :)
COMPUTE X = x1 + x2 + x3.
RECODE X (SYSMIS=-77).
IF ((X = -77 AND x1 = 1) OR (X = -77 AND x2 = 1) OR (X = -77 AND x3 = 1)) X =1.
EXECUTE.
Here X is always returned as -77.
This is to create some sample data to work on:
data list list/x1 x2 x3.
begin data
1 0 -77
0 -77 0
0 0 0
1 0 1
end data.
MISSING VALUES x1 x2 x3 (-77).
As you can see, this assumes -77 was defined as a missing value - otherwise calculating SUM(x1, x2, x3) will not work as intended.
COMPUTE X=SUM(x1, x2, x3).
if X=0 and nmiss(x1, x2, x3)>0 X=999.
Now, if there were no 1 values, the sum is 0. If the sum is zero and any (more than 0) of the values involved were missing, the sum is changed to 999.
If you somehow calculated X indirectly, without turning -77 into a missing value, you would be able to use the value -77 in IF statements (as you tried before). An easier way to do it in that case:
if X=0 and any(-77, x1, x2, x3) X=999.
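Just to illustrate the rule outside SPSS, here is the same logic as a small Python sketch (None plays the role of a missing value; this is only an illustration, not part of the SPSS solution):
rows = [(1, 0, None), (0, None, 0), (0, 0, 0), (1, 0, 1)]

for x1, x2, x3 in rows:
    present = [v for v in (x1, x2, x3) if v is not None]
    X = sum(present)                  # like SUM(...), missing values are ignored
    if X == 0 and len(present) < 3:   # no 1 anywhere and at least one value missing
        X = 999
    print((x1, x2, x3), '->', X)      # prints 1, 999, 0 and 2 for the sample rows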

Can a modulo operation be expressed as a constraint in CPLEX?

I have a situation where I want to add a fixed cost to my objective function if a summation is even. That is, if
(x1 + x2 + x3)%2 = 0
Is there a way this can be modeled in CPLEX? All the variables are binary, so really, I just want to express x1 + x2 + x3 = 0 OR x1 + x2 + x3 = 2
Yes, you can do this by introducing a new binary variable. (Note that we are modifying the underlying formulation, not tinkering with CPLEX per se for the modulo.)
Your constraints are
x1 + x2 + x3 = 0 OR 2
Let's introduce a new binary variable Y and rewrite the constraint.
Combined Constraint: x1 + x2 + x3 = 0(1-Y) + 2Y
This works because if Y is 0, one of the choices gets selected, and if Y=1 the other choice gets selected.
When simplified:
x1+x2+x3-2Y = 0
x_i, Y binary
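A quick way to convince yourself that the reformulation is equivalent is to enumerate all binary assignments (a throwaway Python check, not CPLEX code):
from itertools import product

for x1, x2, x3 in product((0, 1), repeat=3):
    s = x1 + x2 + x3
    representable = any(s == 2 * Y for Y in (0, 1))   # x1 + x2 + x3 - 2Y = 0 for some binary Y
    print((x1, x2, x3), 'sum even:', s % 2 == 0, 'constraint satisfiable:', representable)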
Addendum
In your specific case, the constraint simplified because one of the right-hand-side terms was 0. More generally, if you had b1 and b2 as the two right-hand-side choices,
the constraint would become
x1 + x2 + x3 = b1(Y) + b2(1-Y).
If you had inequalities in your constraint (<=), you'd use the Big-M trick, and then introduce a new binary variable, thereby making the model choose one of the constraints.
Hope that helps.

How do I encode Manhattan distance in Mixed Integer Programming

Let's say we have two points, (x1, y1) and (x2, y2):
dx = |x1 - x2|
dy = |y1 - y2|
D_manhattan = dx + dy where dx,dy >= 0
I am a bit stuck on how to make x1 - x2 positive for |x1 - x2|. Presumably I introduce a binary variable representing the polarity, but I am not allowed to multiply a polarity switch by x1 - x2, as they are all unknown variables and that would result in a quadratic term.
If you are minimizing an increasing function of |x| (or maximizing a decreasing one, of course),
you can always represent the absolute value of any quantity x in an LP with a variable absx such that:
absx >= x
absx >= -x
It works because the value absx will 'tend' to its lower bound, so it will settle at max(x, -x) = |x|.
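As a concrete sketch of this minimization trick (Python with SciPy's linprog; the numbers are made up: point 1 is fixed at (3, 7) and point 2 = (x2, y2) is a decision variable restricted to 5 <= x2 <= 10 and 0 <= y2 <= 10 so the answer is not trivially zero):
from scipy.optimize import linprog

x1, y1 = 3, 7
# variables, in order: x2, y2, dx, dy; objective: minimize dx + dy
c = [0, 0, 1, 1]
A_ub = [
    [-1,  0, -1,  0],   # x1 - x2 <= dx   (i.e. dx >= x1 - x2)
    [ 1,  0, -1,  0],   # x2 - x1 <= dx   (i.e. dx >= x2 - x1)
    [ 0, -1,  0, -1],   # y1 - y2 <= dy
    [ 0,  1,  0, -1],   # y2 - y1 <= dy
]
b_ub = [-x1, x1, -y1, y1]
bounds = [(5, 10), (0, 10), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)   # x2 = 5, y2 = 7, dx = 2, dy = 0, Manhattan distance = 2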
On the other hand, if you are minimizing a decreasing function of |x|, your problem is not convex and cannot be modelled as an LP.
For all these kinds of questions, it would be much better to include a simplified version of your problem together with the objective, as that is often what determines which modelling technique applies.
Edit
What I meant is that there is no general solution to this kind of problem: you cannot in general represent an absolute value in a linear problem, although in practical cases it is often possible.
For example, consider the problem:
max y
y <= | x |
-1 <= x <= 2
0 <= y
it is bounded and has an obvious solution (2, 2), but it cannot be modelled as an LP because the domain is not convex (it looks like the region under an 'M' if you draw it).
Without your model, it is not possible to answer the question I'm afraid.
Edit 2
I am sorry, I did not read the question correctly. If you can use binary variables and if all your distances are bounded by some constant M, you can do something.
We use:
a continuous variable ax to represent the absolute value of the quantity x
a binary variable sx to represent the sign of x (sx = 1 if x >= 0)
These constraints are slack (automatically satisfied) when sx = 0, and enforce ax = x when sx = 1 (the case x >= 0):
ax <= x + M * (1 - sx)
ax >= x - M * (1 - sx)
These constraints are slack when sx = 1, and enforce ax = -x when sx = 0 (the case x < 0):
ax <= -x + M * sx
ax >= -x - M * sx
This is a variation of the "big M" method that is often used to linearize quadratic terms. Of course you need an upper bound on all possible values of x (which, in your case, will be the value of your distance; this will typically exist if your points lie in some bounded area).
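A quick numerical check of these four constraints (a Python sketch; it additionally assumes ax is kept nonnegative, since it stands for |x|):
M = 100.0

def ax_range(x, sx):
    # interval of ax values allowed by the four constraints plus ax >= 0, or None if empty
    lo = max(x - M * (1 - sx), -x - M * sx, 0.0)
    hi = min(x + M * (1 - sx), -x + M * sx)
    return (lo, hi) if lo <= hi else None

for x in (-7.5, 0.0, 4.25):
    print(x, {sx: ax_range(x, sx) for sx in (0, 1)})

# For x = 4.25, sx = 1 pins ax to exactly 4.25 and sx = 0 is infeasible;
# for x = -7.5, sx = 0 pins ax to exactly 7.5 and sx = 1 is infeasible.
# So in every feasible combination ax = |x|.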