The solution given by solveset is in the following form
{A}\{B}
How can I assign A to a new variable?
For example
solveset((x-y)/(x-t),x,domain=S.Reals)
returns R intersected with {y}, minus {t}
This should literally do what you ask, but your intentions are not that clear. Is this what you mean?
>>> complement = solveset((x-y)/(x-t),x,domain=S.Reals)
>>> f, c = complement.args
>>> new_var = f.args[1].args[0]; new_var
y
If you mean that you don't want it showing as an intersection with R, then declare y to be real: y = symbols('y', real=True). In that case you will just get a FiniteSet as the first argument of the Complement instead of an Intersection.
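For example, a minimal sketch of that (exact output may vary slightly between SymPy versions):
>>> from sympy import symbols, solveset, S
>>> x, t = symbols('x t')
>>> y = symbols('y', real=True)
>>> sol = solveset((x - y)/(x - t), x, domain=S.Reals)
>>> first, second = sol.args  # first is now the FiniteSet {y}, not an Intersection
>>> first.args[0]
y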
In Python 2, I can write:
In [5]: points = [ (1,2), (2,3)]
In [6]: min(points, key=lambda (x, y): (x*x + y*y))
Out[6]: (1, 2)
But that is not supported in 3.x:
File "<stdin>", line 1
min(points, key=lambda (x, y): (x*x + y*y))
^
SyntaxError: invalid syntax
The straightforward workaround is to index explicitly into the tuple that was passed:
>>> min(points, key=lambda p: p[0]*p[0] + p[1]*p[1])
(1, 2)
This is very ugly. If the lambda were a function, I could do
def some_name_to_think_of(p):
x, y = p
return x*x + y*y
But because the lambda only supports a single expression, it's not possible to put the x, y = p part into it.
How else can I work around this limitation?
No, there is no other way. You covered it all. The way to go would be to raise this issue on the Python ideas mailing list, but be prepared to argue a lot over there to gain some traction.
Actually, just so as not to say "there is no way out", a third way could be to implement one more level of lambda calling just to unfold the parameters - but that would be both less efficient and harder to read than your two suggestions:
min(points, key=lambda p: (lambda x,y: (x*x + y*y))(*p))
Python 3.8 update
Since the release of Python 3.8, assignment expressions (PEP 572) have been available as a tool.
So, if one uses a trick to execute multiple expressions inside a lambda (I usually do that by creating a tuple and returning its last component), it is possible to do the following:
>>> a = lambda p:(x:=p[0], y:=p[1], x ** 2 + y ** 2)[-1]
>>> a((3,4))
25
One should keep in mind that this kind of code will seldom be more readable or practical than having a full function. Still, there are possible uses: if there are various one-liners that would operate on this point, it could be worth having a namedtuple and using the assignment expression to effectively "cast" the incoming sequence to the namedtuple:
>>> from collections import namedtuple
>>> point = namedtuple("point", "x y")
>>> b = lambda s: (p:=point(*s), p.x ** 2 + p.y ** 2)[-1]
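Calling it with the same point as before gives the same result:
>>> b((3, 4))
25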
According to http://www.python.org/dev/peps/pep-3113/, tuple parameter unpacking is gone, and 2to3 will translate such lambdas like so:
As tuple parameters are used by lambdas because of the single
expression limitation, they must also be supported. This is done by
having the expected sequence argument bound to a single parameter and
then indexing on that parameter:
lambda (x, y): x + y
will be translated into:
lambda x_y: x_y[0] + x_y[1]
Which is quite similar to your implementation.
I don't know any good general alternatives to the Python 2 argument unpacking behaviour. Here are a couple of suggestions that might be useful in some cases:
If you can't think of a name, use the name of the keyword parameter:
def key(p): # a more specific name would be better
    x, y = p
    return x**2 + y**3
result = min(points, key=key)
You could see whether a namedtuple makes your code more readable if the list is used in multiple places:
from collections import namedtuple
from itertools import starmap
points = [ (1,2), (2,3)]
Point = namedtuple('Point', 'x y')
points = list(starmap(Point, points))
result = min(points, key=lambda p: p.x**2 + p.y**3)
While destructuring of arguments was removed in Python 3, it was not removed from comprehensions. It is possible to abuse this to obtain similar behavior in Python 3.
For example:
points = [(1,2), (2,3)]
print(min(points, key=lambda y: next(x*x + y*y for (x,y) in [y])))
In comparison with the accepted answer of using a wrapper, this solution is able to completely destructure the arguments while the wrapper only destructures the first level. That is, you can do
values = [(('A',1),'a'), (('B',0),'b')]
print(min(values, key=lambda y: next(b for ((a,b),c) in (y,))))
In comparison to the accepted answer using an unwrapping lambda:
values = [(('A',1),'a'), (('B',0),'b')]
print(min(values, key=lambda p: (lambda a,b: (lambda x,y: (y))(*a))(*p)))
Alternatively one can also use a list instead of a tuple.
values = [(('A',1),'a'), (('B',0),'b')]
print(min(values, key=lambda y: next(b for (a,b),c in [y])))
This is just to suggest that it can be done, and should not be taken as a recommendation. However, IMO, this is better than the hack of using multiple expressions in a tuple and returning the last one.
I think a better syntax would be x * x + y * y let x, y = point; the let keyword should be chosen more carefully.
The double lambda is the closest version.
lambda point: (lambda x, y: x * x + y * y)(*point)
A higher-order helper function would be useful if we give it a proper name.
def destruct_tuple(f):
    return lambda args: f(*args)
destruct_tuple(lambda x, y: x * x + y * y)
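For example, applied to the points from the question:
>>> points = [(1, 2), (2, 3)]
>>> min(points, key=destruct_tuple(lambda x, y: x * x + y * y))
(1, 2)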
Consider whether you need to unpack the tuple in the first place:
min(points, key=lambda p: sum(x**2 for x in p))
or whether you need to supply explicit names when unpacking:
min(points, key=lambda p: abs(complex(*p)**2))
Based on Cuadue's suggestion and your comment about unpacking still being present in comprehensions, you can use numpy.argmin:
import numpy
result = points[numpy.argmin([x*x + y*y for x, y in points])]
Another option is to write it as a generator producing tuples where the key is the first element. Tuples are compared from beginning to end, so the tuple with the smallest first element is returned. You can then index into the result to get the value.
min((x * x + y * y, (x, y)) for x, y in points)[1]
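On the points from the question this looks like the following (note that if two points had the same squared distance, the comparison would fall back to comparing the points themselves):
>>> points = [(1, 2), (2, 3)]
>>> min((x * x + y * y, (x, y)) for x, y in points)[1]
(1, 2)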
There may be a real solution to this, using PyFunctional!
Although not currently supported, I've submitted a tuple arg unpacking feature request to support:
(
    seq((1, 2), (3, 4))
    .map(unpack=lambda a, b: a + b)
)  # => [3, 7]
Since questions on Stack Overflow are not supposed to contain the answer in the question, nor have explicit "update" sections, I am converting OP's original "updates" to a proper answer and making it community wiki.
OP originally claimed that this solution was "extending the idea in the answer". I cannot discern which answer that meant, or which idea. The idea is functionally the same as anthony.hl's answer, but that came years later. Considering the state of answers at the time, I think this qualifies as OP's original work.
Make a wrapper function that generalizes the process of unpacking the arguments, like so:
def star(f):
    return lambda args: f(*args)
Now we can use this to transform the lambda we want to write, into one that will receive the argument properly:
min(points, key=star(lambda x, y: (x*x + y*y)))
We can further clean this up by using functools.wraps:
import functools
def star(f):
    @functools.wraps(f)
    def f_inner(args):
        return f(*args)
    return f_inner
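As a small, hypothetical usage sketch (the norm_squared name is made up here), the wrapped key keeps the original function's metadata thanks to functools.wraps, using the star defined above:
points = [(1, 2), (2, 3)]

def norm_squared(x, y):
    """Squared distance from the origin."""
    return x * x + y * y

key = star(norm_squared)       # wrap the two-argument function
print(key.__name__)            # 'norm_squared', preserved by functools.wraps
print(min(points, key=key))    # (1, 2)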
I want to take the derivative of the following function
y=(np.log(x))/(1+x),
if I am using sympy, it gives me the following error
from sympy import *
y1=Derivative((np.log(x))/(1+x), x)
print y1
sequence too large; cannot be greater than 32
Do it like this:
>>> from sympy import *
>>> var('x')
x
>>> y1 = diff(log(x)/(1+x))
>>> y1
-log(x)/(x + 1)**2 + 1/(x*(x + 1))
As Sanjeev mentioned in a comment, you need to define the variables in one way or another.
np.log in your code is a function that accepts a numerical value and returns a numerical value; sympy needs to see function names that it knows in symbolic terms, such as log.
In this context, you need to use sympy's diff function, rather than Derivative.
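To illustrate the difference (a small sketch): Derivative builds an unevaluated derivative object that you evaluate with .doit(), while diff evaluates immediately.
>>> from sympy import symbols, log, Derivative, diff
>>> x = symbols('x')
>>> Derivative(log(x)/(1 + x), x).doit()
-log(x)/(x + 1)**2 + 1/(x*(x + 1))
>>> diff(log(x)/(1 + x), x)
-log(x)/(x + 1)**2 + 1/(x*(x + 1))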
I'm new to python so please bear with me.
The problem is:
"Write a function polar(z) to convert a complex number to its polar form (r,theta). You may use the math.atan2 and math.hypot functions but not the cmath library."
I don't even know where to start with this one, but so far I have:
import math
def polar(z):
    z = a + bj
    r = math.hypot(a,b)
    theta = math.atan2(b,a)
    print "(",r,",",theta,")"
Any help will do!
You can use object.real and object.imag to get the real and imaginary parts. Check this answer.
import math
def polar(z):
    a = z.real
    b = z.imag
    r = math.hypot(a, b)
    theta = math.atan2(b, a)
    return r, theta  # use return instead of print.

u = 3+5j
print polar(u)
Output:
(5.830951894845301, 1.0303768265243125)
Read up on the difference between print and return in functions.
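As a quick sanity check, the polar form can be converted back to the original complex number with math.cos and math.sin (this uses the polar function defined above):
>>> import math
>>> r, theta = polar(3+5j)
>>> round(r * math.cos(theta), 10), round(r * math.sin(theta), 10)
(3.0, 5.0)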
I am trying to optimize parameters for a known function to fit an experimental data plot. The function is fairly involved:
y = x^2 * p^2 * g / ((c^2 - x^2)^2 + x^2 * g^2)
where x sweeps along a known set of numbers and p, g and c are the independent parameters to be optimized. Any ideas or resources that could be of assistance?
I would not recommend genetic algorithms. Instead, go for straightforward optimization.
Scipy has some resources.
You haven't provided any data, so I'll just go for something that should run. Below is something to get you started. I can't know whether it works well without seeing the data. Also, there is probably a way to dynamically feed objectivefunc your x and y data; that should be covered in the docs for scipy.optimize.minimize.
What I've done: create a function to minimize; here I've called it objectivefunc. For that I've taken your function y = x^2 * p^2 * g / (...) and rearranged it into the form x^2 * p^2 * g - y * (...) = 0. Then square the left-hand side and try to minimise it. Because you will have multiple (x, y) data samples, I'd minimise the sum of the squares. Put it all in a function and pass it to minimize from scipy.
import numpy as np
from scipy.optimize import minimize
def objectivefunc(pgq):
    """Your function rearranged so that it can be minimised.
    I've named the input pgq, so that pgq[0] is p, pgq[1] is g and pgq[2] is c.
    """
    p = pgq[0]
    g = pgq[1]
    c = pgq[2]
    x = [10, 9.4, 17]  # Some input data.
    y = [12, 42, 0.8]
    sum_ = 0
    for i in range(len(x)):
        sum_ += (x[i]**2 * p**2 * g - y[i] * ((c**2 - x[i]**2)**2 + x[i]**2 * g**2))**2
    return sum_

pgq = np.array([1.3, 0.7, 0.5])  # Supply sensible initial values
res = minimize(objectivefunc, pgq, method='nelder-mead',
               options={'xtol': 1e-8, 'disp': True})
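After the run, the fitted values can be read from the result object; a small sketch (res.x is in the same p, g, c order as the initial guess):
p_fit, g_fit, c_fit = res.x        # optimised parameters
print(res.fun)                     # final sum of squared residuals
print(res.success, res.message)    # whether the optimiser reports convergence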
Have you tried the good old Levenberg-Marquardt, as implemented in Levenberg-Marquardt.vi? If it does not suit your needs, you can try the Waptia library for LabVIEW with one of the genetic algorithms implemented.