Hello, I am new to SymPy and I am trying to solve a couple of non-linear equations with sympy.

Here is the problem:
Constants: (given in an image, not reproduced here)
dependent value functions:
def v_ego(self, ego_vel_x, ego_a_ini_x, t):
    v_ego = ego_vel_x + ego_a_ini_x * t
    return v_ego

def x_obj(self, x_obj_ini, obj_vel_x, obj_a_ini_x, t):
    x_obj = x_obj_ini + obj_vel_x * t + 0.5 * obj_a_ini_x * t ** 2
    return x_obj

def x_ego(self, x_ego_ini, ego_vel_x, ego_a_ini_x, t):
    x_ego = x_ego_ini + ego_vel_x * t + 0.5 * ego_a_ini_x * t ** 2
    return x_ego

def y_obj(self, y_obj_ini, obj_vel_y, obj_a_ini_y, t):
    y_obj = y_obj_ini + obj_vel_y * t + 0.5 * obj_a_ini_y * t ** 2
    return y_obj

def y_t(self):
    y_t = math.sqrt(self._r_t ** 2 - self._l_f ** 2) - (self._w_ego / 2)
    return y_t

def y_r(self, ego_vel_x, ego_a_ini_x, t):
    y_r = math.sqrt(max(0, (self.v_ego(ego_vel_x, ego_a_ini_x, t) ** 2
                            / (self._Mu_rt * self._g)) ** 2 - self._l_c ** 2))
    return y_r

def y_min(self, ego_vel_x, ego_a_ini_x, t):
    y_min = max(self.y_t(), self.y_r(ego_vel_x, ego_a_ini_x, t))
    return y_min

def r_min(self, ego_vel_x, ego_a_ini_x, t):
    r_min = max(self._r_t, math.sqrt(self._l_f ** 2
                + (self.y_min(ego_vel_x, ego_a_ini_x, t) + self._w_ego / 2) ** 2))
    return r_min

tts, delta_t = sym.symbols('tts,delta_t')
e_10 = sym.Eq(
    math.atan(
        (self.x_obj(x_obj_ini, obj_vel_x, obj_a_ini_x, tts + delta_t)
         - self.x_ego(x_ego_ini, ego_vel_x, ego_a_ini_x, tts) + self._l_f)
        / (self.y_min(ego_vel_x, ego_a_ini_x, tts)
           - self.y_obj(y_obj_ini, obj_vel_y, obj_a_ini_y, tts + delta_t)))
    - ((self.v_ego(ego_vel_x, ego_a_ini_x, tts) * delta_t)
       / self.r_min(ego_vel_x, ego_a_ini_x, tts)
       - math.asin(min(1.0, self._l_f / self.r_min(ego_vel_x, ego_a_ini_x, tts)))),
    0)
e_11 = sym.Eq(
    (self.x_obj(x_obj_ini, obj_vel_x, obj_a_ini_x, tts + delta_t)
     - self.x_ego(x_ego_ini, ego_vel_x, ego_a_ini_x, tts) + self._l_f) ** 2
    + (self.y_min(ego_vel_x, ego_a_ini_x, tts)
       - self.y_obj(y_obj_ini, obj_vel_y, obj_a_ini_y, tts + delta_t)) ** 2
    - self.r_min(ego_vel_x, ego_a_ini_x, tts) ** 2,
    0)
print(sym.solve([e_10, e_11], (tts, delta_t)))
I am getting TypeError: cannot determine truth value of Relational
These are the equations: (image: the non-linear equations I am trying to solve)
and these are the dependent values that need to be calculated: (image: the dependent functions)
Any help is appreciated.

When you use Python's built-in min with SymPy expressions, it will complain if it can't figure out which value is smaller, e.g. min(x, y) -> TypeError: cannot determine truth value of Relational.
Since you are only selecting the minimum of two values, the function is easy to rewrite as a Piecewise, as in the following example where the equation x - min(y, z) is being solved.
Replace max with a similar rewrite, too.
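A sketch of that rewrite (x, y, z here are illustrative symbols, not the variables from the question):

```python
import sympy as sym

x, y, z = sym.symbols('x y z')

# Python's built-in min(y, z) must evaluate y < z to True or False, which is
# impossible for bare symbols -- that's the "truth value of Relational" error.
# A Piecewise keeps the choice symbolic instead:
min_yz = sym.Piecewise((y, y < z), (z, True))

# Solving x - min(y, z) = 0 now works without ever evaluating the comparison.
sol = sym.solve(sym.Eq(x - min_yz, 0), x)
print(sol)
```

SymPy also ships symbolic sym.Min and sym.Max, which stay unevaluated on symbols and can be rewritten with .rewrite(sym.Piecewise). Note that math.atan, math.asin and math.sqrt in your equations have the same problem as min/max: the math module cannot accept symbolic arguments, so they need to become sym.atan, sym.asin and sym.sqrt.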

Related

How do I generate Bezier Curves and NURBS in C++ and import it as an igs?

I am new to the C++ NURBS library. I learned to generate a line (with CLine, from nurbs.h) and save it as an IGS file. But in the case of multiple control points, how do I generate a curve? Every other tutorial uses graphics.h (putpixel), but I couldn't find anything about IGS.
This should be a simple problem, but I have no idea which function can help me here.
Thanks in advance.
We have 4 control points here to begin with.
for (float t = 0.0; t <= 1.0; t += 0.2) {
    double xt = 0.0, yt = 0.0;
    xt = pow(1 - t, 3) * x[0] + 3 * t * pow(1 - t, 2) * x[1]
       + 3 * pow(t, 2) * (1 - t) * x[2] + pow(t, 3) * x[3];
    yt = pow(1 - t, 3) * y[0] + 3 * t * pow(1 - t, 2) * y[1]
       + 3 * pow(t, 2) * (1 - t) * y[2] + pow(t, 3) * y[3];
    count = count + 1;
    //Math::Vector4f c(xt, yt, 0);
    for (int i = 1; i < 3; i++) {
        listt[i][0] = xt;
        listt[i][1] = yt;
        Math::Vector4f a(listt[i][0], listt[i][1], 0);
        myvector.push_back(&a);
    }
}
......
.....
igs.Write("test.igs");
This is to create the points, but after that I don't know how to use the points to create a Bezier curve.
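Not an answer to the IGS part, but the sampling half can be sanity-checked in isolation; here is a minimal sketch in Python of the same cubic Bernstein evaluation (control points are arbitrary):

```python
# Cubic Bezier point for parameter t in [0, 1], using the same Bernstein
# weights as the C++ loop above: (1-t)^3, 3t(1-t)^2, 3t^2(1-t), t^3.
def bezier_point(ctrl, t):
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = ctrl
    b0 = (1 - t) ** 3
    b1 = 3 * t * (1 - t) ** 2
    b2 = 3 * t ** 2 * (1 - t)
    b3 = t ** 3
    return (b0 * x0 + b1 * x1 + b2 * x2 + b3 * x3,
            b0 * y0 + b1 * y1 + b2 * y2 + b3 * y3)

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
curve = [bezier_point(ctrl, i / 10) for i in range(11)]  # 11 samples along the curve
```

Separately, `myvector.push_back(&a)` in the snippet stores the address of a stack local that dies at the end of each iteration, leaving dangling pointers; pushing the vector by value avoids that.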

Argmax function in C++?

I am trying to make an argmax function in C++.
For example,
C = wage * hour * shock + t + a * R + (1 - delta) * d - d_f - a_f - fx1 * (1 - delta) * d;
double result;
result = (1 / (1 - sig)) * pow(pow(C, psi) * pow(d, 1 - psi), (1 - sig));
Suppose every variable except for 'd_f' and 'a_f' is given, and both 'd_f' and 'a_f' are some number (double) in [0, 24].
If I want to get the combination of 'd_f' and 'a_f' that maximizes 'result', is there a proper function that I can use?
Thanks.
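There is no single standard argmax-over-two-parameters function, but a brute-force grid search is a direct way to get the maximizing pair. A minimal sketch in Python (all constants besides d_f and a_f are arbitrary placeholder values, not taken from the question):

```python
import itertools

# Objective from the question, with placeholder constants (assumptions).
def utility(d_f, a_f, wage=1.0, hour=8.0, shock=1.0, t=0.0, a=1.0, R=1.05,
            delta=0.1, d=10.0, fx1=0.05, sig=2.0, psi=0.5):
    C = wage * hour * shock + t + a * R + (1 - delta) * d - d_f - a_f \
        - fx1 * (1 - delta) * d
    if C <= 0:
        return float('-inf')  # fractional power of a non-positive base is undefined
    return (1 / (1 - sig)) * (C ** psi * d ** (1 - psi)) ** (1 - sig)

# Evaluate every (d_f, a_f) pair on a 0.5-spaced grid over [0, 24] and keep
# the pair with the largest utility.
grid = [i * 0.5 for i in range(49)]
best = max(itertools.product(grid, grid), key=lambda p: utility(*p))
```

In C++ the same double loop over a grid works; for a smooth objective like this one you could also refine around the best grid cell or use a proper optimizer.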

C++ What is wrong with this version of the quadratic formula?

My book asks me the following question (this is for CompSci 1):
What is wrong with this version of the quadratic formula?
x1 = (-b - sqrt(b * b - 4 * a * c)) / 2 * a;
x2 = (-b + sqrt(b * b - 4 * a * c)) / 2 * a;
The equation your code is translating is x = ((-b ± sqrt(b*b - 4*a*c)) / 2) * a, which of course is not the solution for quadratic equations. You want a solution for this equation: x = (-b ± sqrt(b*b - 4*a*c)) / (2*a).
What's the difference? In the first one you compute the numerator, then you divide by two, then you multiply by a. That's what your code is doing. In the second one you compute the numerator, then you compute the denominator, finally you divide them.
So with additional variables:
num1 = -b - sqrt(b * b - 4 * a * c);
num2 = -b + sqrt(b * b - 4 * a * c);
den = 2 * a;
x1 = num1 / den;
x2 = num2 / den;
which can of course be written as:
x1 = (-b - sqrt(b * b - 4 * a * c)) / (2 * a);
x2 = (-b + sqrt(b * b - 4 * a * c)) / (2 * a);
where you have to add those parentheses in order to force the denominator to be computed before the division, as suggested in the comment by @atru.
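A quick numeric check of the two groupings (coefficients chosen so the roots are 3 and 0.5):

```python
import math

a, b, c = 2.0, -7.0, 3.0              # 2x^2 - 7x + 3 has roots 3 and 0.5
disc = math.sqrt(b * b - 4 * a * c)   # sqrt(49 - 24) = 5.0

wrong = (-b - disc) / 2 * a           # ((7 - 5) / 2) * 2 = 2.0  -- not a root
right = (-b - disc) / (2 * a)         # (7 - 5) / 4 = 0.5        -- the smaller root
```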

Python implementation of Gradient Descent Algorithm isn't working

I am trying to implement a gradient descent algorithm for simple linear regression. For some reason it doesn't seem to be working.
from __future__ import division
import random

def error(x_i, z_i, theta0, theta1):
    return z_i - theta0 - theta1 * x_i

def squared_error(x_i, z_i, theta0, theta1):
    return error(x_i, z_i, theta0, theta1) ** 2

def mse_fn(x, z, theta0, theta1):
    m = 2 * len(x)
    return sum(squared_error(x_i, z_i, theta0, theta1) for x_i, z_i in zip(x, z)) / m

def mse_gradient(x, z, theta0, theta1):
    m = 2 * len(x)
    grad_0 = sum(error(x_i, z_i, theta0, theta1) for x_i, z_i in zip(x, z)) / m
    grad_1 = sum(error(x_i, z_i, theta0, theta1) * x_i for x_i, z_i in zip(x, z)) / m
    return grad_0, grad_1

def minimize_batch(x, z, mse_fn, mse_gradient_fn, theta0, theta1, tolerance=0.000001):
    step_sizes = 0.01
    theta0 = theta0
    theta1 = theta1
    value = mse_fn(x, z, theta0, theta1)
    while True:
        grad_0, grad_1 = mse_gradient(x, z, theta0, theta1)
        next_theta0 = theta0 - step_sizes * grad_0
        next_theta1 = theta1 - step_sizes * grad_1
        next_value = mse_fn(x, z, next_theta0, theta1)
        if abs(value - next_value) < tolerance:
            return theta0, theta1
        else:
            theta0, theta1, value = next_theta0, next_theta1, next_value

# The data
x = [i + 1 for i in range(40)]
y = [random.randrange(1, 30) for i in range(40)]
z = [2 * x_i + y_i + (y_i / 7) for x_i, y_i in zip(x, y)]
theta0, theta1 = [random.randint(-10, 10) for i in range(2)]
q = minimize_batch(x, z, mse_fn, mse_gradient, theta0, theta1, tolerance=0.000001)
When I run it I get the following error:
return error(x_i,z_i,theta0,theta1)**2 OverflowError: (34, 'Result too large')
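Two things worth checking in the code above (my reading, not a confirmed diagnosis): since error is z - theta0 - theta1*x, the true gradient is the negative of the sums computed in mse_gradient, so each update actually climbs the loss until it overflows; and next_value is evaluated with theta1 instead of next_theta1. A minimal working sketch with those changes (data and step size are illustrative):

```python
# Batch gradient descent for simple linear regression, with the gradient sign
# taken from differentiating (z - t0 - t1*x)**2 and a step size small enough
# for inputs up to x = 40.
def descend(x, z, theta0=0.0, theta1=0.0, step=0.0015, tol=1e-9, max_iter=50000):
    m = len(x)
    for _ in range(max_iter):
        err = [z_i - theta0 - theta1 * x_i for x_i, z_i in zip(x, z)]
        g0 = -2 * sum(err) / m                                # d(MSE)/d(theta0)
        g1 = -2 * sum(e * x_i for e, x_i in zip(err, x)) / m  # d(MSE)/d(theta1)
        if abs(g0) < tol and abs(g1) < tol:
            break                       # gradient is (near) zero: converged
        theta0 = theta0 - step * g0
        theta1 = theta1 - step * g1
    return theta0, theta1

x = list(range(1, 41))
z = [2 * x_i + 5 for x_i in x]          # noiseless line: slope 2, intercept 5
t0, t1 = descend(x, z)
```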

Which is more costly for the processor?

I am working on a small math library for 3D graphics.
I'm not sure which approach costs more CPU/GPU time.
Right now I am using this to multiply 4x4 matrices:
tmpM.p[0][0] = matA.p[0][0] * matB.p[0][0] + matA.p[0][1] * matB.p[1][0] + matA.p[0][2] * matB.p[2][0] + matA.p[0][3] * matB.p[3][0];
tmpM.p[0][1] = matA.p[0][0] * matB.p[0][1] + matA.p[0][1] * matB.p[1][1] + matA.p[0][2] * matB.p[2][1] + matA.p[0][3] * matB.p[3][1];
tmpM.p[0][2] = matA.p[0][0] * matB.p[0][2] + matA.p[0][1] * matB.p[1][2] + matA.p[0][2] * matB.p[2][2] + matA.p[0][3] * matB.p[3][2];
tmpM.p[0][3] = matA.p[0][0] * matB.p[0][3] + matA.p[0][1] * matB.p[1][3] + matA.p[0][2] * matB.p[2][3] + matA.p[0][3] * matB.p[3][3];
tmpM.p[1][0] = matA.p[1][0] * matB.p[0][0] + matA.p[1][1] * matB.p[1][0] + matA.p[1][2] * matB.p[2][0] + matA.p[1][3] * matB.p[3][0];
tmpM.p[1][1] = matA.p[1][0] * matB.p[0][1] + matA.p[1][1] * matB.p[1][1] + matA.p[1][2] * matB.p[2][1] + matA.p[1][3] * matB.p[3][1];
tmpM.p[1][2] = matA.p[1][0] * matB.p[0][2] + matA.p[1][1] * matB.p[1][2] + matA.p[1][2] * matB.p[2][2] + matA.p[1][3] * matB.p[3][2];
tmpM.p[1][3] = matA.p[1][0] * matB.p[0][3] + matA.p[1][1] * matB.p[1][3] + matA.p[1][2] * matB.p[2][3] + matA.p[1][3] * matB.p[3][3];
tmpM.p[2][0] = matA.p[2][0] * matB.p[0][0] + matA.p[2][1] * matB.p[1][0] + matA.p[2][2] * matB.p[2][0] + matA.p[2][3] * matB.p[3][0];
tmpM.p[2][1] = matA.p[2][0] * matB.p[0][1] + matA.p[2][1] * matB.p[1][1] + matA.p[2][2] * matB.p[2][1] + matA.p[2][3] * matB.p[3][1];
tmpM.p[2][2] = matA.p[2][0] * matB.p[0][2] + matA.p[2][1] * matB.p[1][2] + matA.p[2][2] * matB.p[2][2] + matA.p[2][3] * matB.p[3][2];
tmpM.p[2][3] = matA.p[2][0] * matB.p[0][3] + matA.p[2][1] * matB.p[1][3] + matA.p[2][2] * matB.p[2][3] + matA.p[2][3] * matB.p[3][3];
tmpM.p[3][0] = matA.p[3][0] * matB.p[0][0] + matA.p[3][1] * matB.p[1][0] + matA.p[3][2] * matB.p[2][0] + matA.p[3][3] * matB.p[3][0];
tmpM.p[3][1] = matA.p[3][0] * matB.p[0][1] + matA.p[3][1] * matB.p[1][1] + matA.p[3][2] * matB.p[2][1] + matA.p[3][3] * matB.p[3][1];
tmpM.p[3][2] = matA.p[3][0] * matB.p[0][2] + matA.p[3][1] * matB.p[1][2] + matA.p[3][2] * matB.p[2][2] + matA.p[3][3] * matB.p[3][2];
tmpM.p[3][3] = matA.p[3][0] * matB.p[0][3] + matA.p[3][1] * matB.p[1][3] + matA.p[3][2] * matB.p[2][3] + matA.p[3][3] * matB.p[3][3];
Is this a bad/slow idea?
Will it be more efficient to use a loop?
It will mostly depend on what the compiler manages to figure out from it.
If you can't time the operation in a context similar or identical to its actual use (which remains the best way to find out), then I would guess a loop with a functor (function object or lambda) is likely the best bet for letting the compiler figure out a cache-friendly unroll and cheap access to the operation.
A half-decent modern compiler will also most likely vectorize it on a CPU.
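For reference, the 16 unrolled assignments collapse to the classic triple loop; a sketch in Python (the question's code is C++, but the structure carries over directly):

```python
# 4x4 matrix product: row i, column j, inner dot-product index k.
# Each output element is the same four-term sum as one unrolled assignment.
def mat4_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
M = [[float(4 * i + j) for j in range(4)] for i in range(4)]
```

In C++ the equivalent is three nested for loops over i, j, k; whether that beats the hand-unrolled version is exactly the kind of question only a benchmark of the compiled code can settle.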