Is there a Python 2.7 package that contains a Gauss-Seidel solver for systems with more than 3 linear algebraic equations containing more than 3 unknowns? A simple example of the sort of problem I would like to solve is given below. If there are no templates or packages available, is it possible to solve this in Python? If so, please could you advise on the best way of going about it. Thanks.
An example of three linear algebraic equations with three unknowns (x,y,z):
x - 3y + z = 10
2x + 5y + z = 4
-x + y - 2z = -13
After a bit of searching around I found that the solution was to use the numpy.linalg.solve command. The command uses the LAPACK gesv routine, which is not an iterative technique at all: it solves the system directly via LU decomposition with partial pivoting.
Here is the code to solve the problem if anyone is interested:
import numpy as np

a = np.array([[1, -3, 1], [2, 5, 1], [-1, 1, -2]])  # coefficient matrix
b = np.array([10, 4, -13])                          # right-hand side
x = np.linalg.solve(a, b)
print x
print np.allclose(np.dot(a, x), b)  # check that the computed solution satisfies the system
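For completeness, since the question asks specifically about Gauss-Seidel: NumPy/SciPy do not ship a ready-made Gauss-Seidel routine, but the iteration itself is only a few lines. The sketch below is my own (the function name, tolerance and iteration cap are arbitrary choices); note that Gauss-Seidel is only guaranteed to converge for matrices that are strictly diagonally dominant or symmetric positive definite, which the example matrix above is not without reordering its rows.

import numpy as np

def gauss_seidel(a, b, x0=None, tol=1e-10, max_iter=1000):
    # Sweep through the rows repeatedly, using each newly updated component
    # immediately (this is what distinguishes Gauss-Seidel from Jacobi).
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = np.dot(a[i, :i], x[:i]) + np.dot(a[i, i+1:], x[i+1:])
            x[i] = (b[i] - s) / a[i, i]
        if np.abs(x - x_old).max() < tol:
            break
    return x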
I am working on a problem that essentially comes down to solving the equation (b/n)*((b-1)/(n-1)) = 0.5, where the only constraint is that the lower limit of n is 10**12. I was able to solve the problem making use of the methods described here: https://www.alpertron.com.ar/QUAD.HTM
However, I also tried solving the problem as a quadratic equation, checking that the answers are integers and that the required ratio is reached. The program works for lower values of n, but as soon as it starts approaching the required limit (10**12), it starts giving false solutions. For example, the program yields
b = 707106783028 and
n = 1000000002604
as a set of solutions, and yet it is not one: (b/n)*((b-1)/(n-1)) gives 0.499999999999..., but Python just takes it as 0.5. I tried using x.hex() to try to account for that, but it did not help. Is there any way to make Python store/display the true (or most accurate) value of a float?
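One way to sidestep the rounding problem entirely is to do the check in exact arithmetic rather than with floats, since Python integers (and fractions.Fraction) have unlimited precision. A minimal sketch using the candidate pair quoted above:

from fractions import Fraction

b = 707106783028
n = 1000000002604

# Exact rational arithmetic: no rounding, so the comparison can be trusted.
print(Fraction(b, n) * Fraction(b - 1, n - 1) == Fraction(1, 2))

# Equivalent integer-only check of (b/n)*((b-1)/(n-1)) == 1/2:
print(2 * b * (b - 1) == n * (n - 1))

Both print False for this pair, confirming that it is one of the false solutions.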
My son is learning how to calculate the formula for a parabola using a directrix and focus point on his Khan Academy course. (a,b) is the focus point, k is the parameter for the directrix as y=k. I wanted to show him a simple way to check his results using Sympy; programming helps hugely in solidifying internal algorithms. Step 1 is clearly to set the equation out.
Parabola = Eq(sqrt((y-k)**2),sqrt((x-a)**2+(y-b)**2))
I first solved this for y, intending then to show how to substitute values and derive the equation, thus:
Y = solve(Parabola,y)
This was in a reasonable form, having collected the 1/(2b-2k) to the outside.
Next, I substituted the value of the focus and directrix into the equation, obtaining the equation y= 1/6*(x**2+16*x+49), which is correct.
Next he needed to express this in the form (x+c1)(x+c2) + remainder. There does not seem to be a direct way to factor the equation above into this form, at least not after an hour of searching the docs.
Answer = Y[0].subs({a:-8,b:-1,k:-4})
factor(Answer,deep=True)
Of course I understand how to reduce to a square factorization plus remainder; my question is solely whether this is possible in sympy and, if so, how?
A second, perhaps trivial, question is why Sympy returns some factorizations as (constant - x) where (x - constant) is preferred: is there a way of specifying the form?
Thanks for any help, on behalf of my son, to whom I am showing the wonders of Sympy.
The process is usually called "completing the square". It is not implemented as a single SymPy method, but one can use the SymPy equation solver to find the coefficients of such a form of the polynomial:
>>> var('A B C')
>>> solve(Eq(Answer, A*(x-B)**2 + C), [A, B, C])
[(1/6, -8, -5/2)]
So the parabola's vertex is at (-8, -5/2), and the polynomial can be written as 1/6*(x+8)**2 - 5/2.
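For reference, a self-contained reconstruction of the whole session described above might look like this (the symbol definitions are assumed, since they were not shown in the question):

from sympy import symbols, Eq, sqrt, solve, expand

x, y, a, b, k = symbols('x y a b k')
A, B, C = symbols('A B C')

# Distance to the directrix y = k equals distance to the focus (a, b)
Parabola = Eq(sqrt((y - k)**2), sqrt((x - a)**2 + (y - b)**2))
Y = solve(Parabola, y)

# Focus (-8, -1), directrix y = -4
Answer = Y[0].subs({a: -8, b: -1, k: -4})
print(expand(Answer))      # x**2/6 + 8*x/3 + 49/6, i.e. 1/6*(x**2 + 16*x + 49)

# Complete the square by matching coefficients of A*(x - B)**2 + C
print(solve(Eq(Answer, A*(x - B)**2 + C), [A, B, C]))   # [(1/6, -8, -5/2)]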
First let me explain the problem I'm trying to solve. I'm integrating my code with 3rd party library which does quite complicated financial predictions. For the purposes of this question let's just say I have a blackbox which returns y when I pass in x.
Now, what I need to do is find input (x) for a given output (y). Since I know lowest and highest possible input values I wrote the following algorithm:
1. Define the starting input range (minimum input value to maximum input value).
2. Divide the range into two equal parts and find the output for the middle value.
3. Find which half the target output falls into.
4. Repeat steps 2 and 3 until the range is too small to divide any further.
This algorithm does the job nicely; I don't see any problems with it. However, is there a faster way to solve this problem?
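For reference, the search described above is a plain bisection on the (assumed monotonically increasing) blackbox; in Python it might look something like this, where f, target and the bounds are placeholders for the real library call and values:

def invert_blackbox(f, target, x_low, x_high, tol=1e-9):
    # Assumes f is increasing on [x_low, x_high] and that target lies
    # between f(x_low) and f(x_high).
    while x_high - x_low > tol:
        x_mid = (x_low + x_high) / 2.0
        if f(x_mid) < target:
            x_low = x_mid      # target must be in the upper half
        else:
            x_high = x_mid     # target must be in the lower half
    return (x_low + x_high) / 2.0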
It sounds like x and y are monotonically related (i.e. as x increases, so does y), as otherwise your divide and conquer algorithm wouldn't work.
Assuming this is the case, and you could work out a correlation factor, then you might be able to use that factor to place the trial point and potentially home in on the expected value more quickly.
Please note that I've not tested this idea at all, but it's something to think about. Possible improvements would be to make the correlationFactor a moving average, or precompute it based on, say, the deciles between xLow and xHigh.
Also, this assumes that calling f(x) is relatively inexpensive. If it is expensive, then the increased number of calls to f(x) would dwarf any savings. In fact - I'm starting to think this is a stupid idea...
Hopefully the following sketch illustrates what I mean (written as Python for concreteness, with f standing in for the blackbox):

def divide_and_conquer(x_low, x_high, correlation_factor, expected_value, tol=1e-6):
    # correlation_factor is the fraction of the current range at which the
    # trial point is placed; 0.5 reproduces plain bisection.
    x_mid = x_low + (x_high - x_low) * correlation_factor
    x_mid = min(max(x_mid, x_low), x_high)   # make sure x_mid stays within the bracket
    y = f(x_mid)
    if abs(y - expected_value) <= tol or (x_high - x_low) <= tol:
        return x_mid
    if y < expected_value:
        # Assuming f is increasing, the target lies above x_mid; re-estimate the
        # fraction by linear interpolation over the new bracket [x_mid, x_high].
        correlation_factor = (expected_value - y) / (f(x_high) - y)
        return divide_and_conquer(x_mid, x_high, correlation_factor, expected_value, tol)
    else:
        # Target lies below x_mid; interpolate over [x_low, x_mid].
        correlation_factor = (expected_value - f(x_low)) / (y - f(x_low))
        return divide_and_conquer(x_low, x_mid, correlation_factor, expected_value, tol)
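As a quick sanity check of the sketch above, with a made-up monotone function standing in for the blackbox:

def f(x):
    return x * x       # stand-in for the real blackbox; increasing on [0, 10]

print(divide_and_conquer(0.0, 10.0, 0.5, expected_value=49.0))   # converges to ~7.0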
I have been scouring the internet for quite some time now, trying to find a simple, intuitive, and fast way to fit a 2nd degree polynomial to 5 data points.
I am using VC++ 2008.
I have come across many libraries, such as cminipack, cmpfit, lmfit, etc... but none of them seem very intuitive and I have had a hard time implementing the code.
Ultimately I have a set of discrete values stored in a 1D array, and I am trying to find the 'virtual max point' by curve-fitting the data and then finding the maximum of the fitted curve at a non-integer position (whereas just reading the array directly, an integer index would be the best accuracy I could get).
Anyway, if someone has done something similar to this, and can point me to the package they used, and maybe a simple implementation of the package, that would be great!
I am happy to provide some test data and graphs to show you what kind of stuff I'm working with, but I feel my request is pretty straightforward. Thank you so much.
EDIT: Here is the code I wrote which works!
http://pastebin.com/tUvKmGPn
Change size to change how many inputs are used. Example run:
0 0
1 1
2 4
4 16
7 49
a: 1 b: 0 c: 0
Press any key to continue . . .
Thanks for the help!
Assuming that you want to fit a standard parabola of the form
y = ax^2 + bx + c
to your 5 data points, all you need to do is solve a 3 x 3 matrix equation (the normal equations of the least-squares fit). Take a look at this example http://www.personal.psu.edu/jhm/f90/lectures/lsq2.html - it works through the same problem you seem to be describing (only using more data points). If you have a basic grasp of calculus and are able to invert a 3x3 matrix (or do something nicer numerically, which I am guessing you can given you refer specifically to SVD in your question title), then this example will clarify what you need to do.
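The question is about VC++, but as a quick cross-check of the numbers (not a drop-in for the C++ code), the same least-squares fit can be reproduced in a couple of lines of Python with NumPy, using the data from the edit above:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0, 7.0])
y = np.array([0.0, 1.0, 4.0, 16.0, 49.0])

# Least-squares fit of y = a*x^2 + b*x + c
a, b, c = np.polyfit(x, y, 2)
print(a, b, c)            # ~1, ~0, ~0, matching the output quoted above

# The extremum (the 'virtual max point') of the fitted parabola sits at x = -b/(2a)
print(-b / (2 * a))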
Look at this Wikipedia page on Polynomial Regression
Is there a command in sympy to simplify sinh(x)+cosh(x) to exp(x)? If I issue
from sympy import *
x = Symbol('x')
(sinh(x)+cosh(x)).simplify()
I just get sinh(x)+cosh(x) back, but I want to see exp(x) instead.
Even if SymPy's simplify function were very good, what you suggest might not have worked, because what counts as "simple" is not rigorously defined.
I think what you want is the functionality present in .rewrite:
In [1]: (sinh(x)+cosh(x)).rewrite(exp)
Out[1]: exp(x)
You can use .rewrite for many other transformations including gamma <-> combinatorics and inverse trig <-> logarithms.
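For instance, continuing the same session (the exact printing may differ depending on your pretty-printer settings):

In [2]: factorial(x).rewrite(gamma)
Out[2]: gamma(x + 1)

In [3]: asin(x).rewrite(log)
Out[3]: -I*log(I*x + sqrt(1 - x**2))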