QuTiP: How to multiply a symbol with a matrix - sympy

I am trying to multiply a symbol with a matrix defined as a QuTiP quantum object, but I get this error:
TypeError: Incompatible object for multiplication
I used:
from qutip import *
import sympy as sp
w0 = sp.Symbol(r'\omega_{0}')
w0*destroy(4)
Did I miss something?

The object destroy(4) contains a lot more information than just the matrix representation of the annihilation operator, so it is represented as the type Qobj in QuTiP. The Qobj type currently doesn't support multiplication with the type sympy.Symbol. Note that you can look under the __rmul__ method of Qobj to find which types are supported for multiplying with a Qobj on the right.
If you're happy working with the matrix representations of these operators, you can multiply a symbol with the matrix corresponding to destroy(4) as follows:
w0 * destroy(4).data.todense()
The result is a numpy matrix containing symbols, and you can multiply it with the matrices corresponding to the other operators in your calculation (at a great loss of efficiency!).
Otherwise this might be worth posting as an issue on their GitHub. An implementation might be possible based on how __rmul__ is dispatched to numbers.Number here.
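For instance, here is a minimal sketch of that workflow (the sp.Matrix wrapper and the symbolic Hamiltonian are my illustration, not part of the original answer; .full() is QuTiP's dense-array convenience, equivalent to .data.todense() here):
from qutip import destroy
import sympy as sp

w0 = sp.Symbol(r'\omega_{0}')
# .full() returns a dense numpy array; sympy.Matrix then handles symbolic entries
a = sp.Matrix(destroy(4).full())
H = w0 * a.H * a  # e.g. a symbolic number operator scaled by \omega_0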

Related

Eigen complex matrix-vector multiplication

I have these Eigen complex matrices:
Eigen::MatrixXcd matrix;
Eigen::VectorXcd rhs;
Eigen::VectorXcd solution;
I can successfully load values and compute the solution, but if I try:
rhs = matrix*solution;
I get compiler errors related to overloaded "+=" operator and double/complex conversion in Eigen files GeneralProduct.h and CxRealAbs.h
Similar issues arise when trying to compute a residual.
Is there an explanation for this? Any help would be appreciated.
Thanks,
Kevin
According to their documentation, Eigen checks the validity of operations:
Validity of operations
Eigen checks the validity of the operations that you perform. When possible, it checks them at compile-time, producing compilation errors. These error messages can be long and ugly, but Eigen writes the important message in UPPERCASE_LETTERS_SO_IT_STANDS_OUT. For example:
Matrix3f m;
Vector4f v;
v = m*v; // Compile-time error: YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES
Of course, in many cases, for example when checking dynamic sizes, the check cannot be performed at compile time. Eigen then uses runtime assertions. This means that the program will abort with an error message when executing an illegal operation if it is run in "debug mode", and it will probably crash if assertions are turned off.
MatrixXf m(3,3);
VectorXf v(4);
v = m * v; // Run-time assertion failure here: "invalid matrix product"
Your RHS is a vector, and you are trying to assign it the result of a matrix-vector product. Even from a mathematical perspective, you should ask yourself whether this is a valid type of operation.
Consider that M and N are matrices and A and B are vectors; what are the results of the following operations?
M = AB // Cross product and not a dot product
M = AM, MA, BM, MB // yields a transformed matrix using the original matrix
M = AN, NA, BN, NB // yields a transformed matrix using a supplied matrix
A = AA, AB, BA // cross product and not the dot product
A = AM, MA, AN, NA, BM, MB, BN, NB // yields what?
So Eigen is checking the validity of your operators at compile time, and it is giving you the compile-time error that += is not defined for the operand types involved and that you have not provided an overloaded version of it. It simply doesn't know how to perform the intended operation. The underlying type of the matrix and vector classes shouldn't matter; the issue is that an operator you are trying to use is not defined somewhere.
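As a rough analogy (my illustration, not from the original answer), numpy enforces the same dimension rules, just at runtime rather than at compile time:
import numpy as np

M = np.zeros((3, 3))
A = np.zeros(3)
B = np.zeros(4)

M @ A          # valid: (3,3) @ (3,) -> (3,)
try:
    M @ B      # invalid: inner dimensions 3 and 4 do not match
except ValueError as err:
    print(err)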
Edit
Here is a link that describes matrix-vector multiplication. Intuitively we assume that it yields a vector, but from a matrix perspective that vector can be a row or a column, so the returned vector could be either a 1xM or an Mx1 matrix.
Here is a link showing the different interpretations, even though they return the same result. This can be found in section 3.2.3 of the documentation: Matrix-Vector Product
The number of operations differs between the two interpretations, although they produce the same final result. Without explicitly stating whether it should be a row or column product, this could lead to some ambiguity. I don't know how Eigen determines which method to use; I would assume it chooses the one requiring the fewest computations.
This isn't the main issue of your problem, though. At the end of the day, the compiler is still generating an error message that an overloaded operator is not defined, and I don't know where this is coming from within your code. It could be within the library itself when trying to perform the operations on its version of the complex types. Without being able to compile and run your full source code, I cannot easily determine what exactly is generating this compiler error, and I also don't know which compiler you are using.
The problem was mine. I had overloaded operator* for std::complex * real because some earlier versions of the standard library were incomplete. The first error listed pointed into Eigen, which led me astray.
GeneralProduct.h(287): no viable overloaded '+='
CxRealAbs.h(111): cannot convert 'const Eigen::Map<Eigen::Matrix<std::complex<double>, -1, 1, 0, -1, 1>, 1, Eigen::Stride<0, 0> >' to 'double' without a conversion
Removing my overload, it compiles and runs OK. No problems with Eigen dynamic matrices.
Thanks for the replies.
Kevin

Is the quaternion that Eigen extracts from an Affine3d 3x3 rotation matrix normalized?

Here ( https://docs.ros.org/kinetic/api/eigen_conversions/html/eigen__msg_8cpp_source.html#l00093 ) I do not quite understand the syntax of this line: Eigen::Quaterniond q = (Eigen::Quaterniond)e.linear();
But anyway, assuming e is of type Eigen::Affine3d (3x3 rotation matrix), where can I figure out how a quaternion is constructed from a rotation matrix in the Eigen library? I did not see any method in the docs, except https://eigen.tuxfamily.org/dox/classEigen_1_1QuaternionBase.html#title25, but I'm a real noob in C++ so I do not even understand that one...
I hope it does something like this, https://arc.aiaa.org/doi/abs/10.2514/2.4654?journalCode=jgcd, because I need the extracted quaternion to be normalized, and I would like to ensure this.
You can ensure that your Eigen::Quaterniond is normalised by calling its normalize() method.
The line in question is explicitly converting whatever e.linear() returns to an Eigen::Quaterniond.
As to how you go from a rotation matrix to a quaternion, Wikipedia describes the relationship. It's just solving a known set of simultaneous equations. I expect the body of the method to look (more or less) like a bunch of algebra.
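To make that concrete, here is a minimal numpy sketch of the trace-based branch of that algebra (my illustration, not Eigen's code; Eigen's actual implementation handles all four sign branches for numerical stability):
import numpy as np

def quat_from_rotation_matrix(R):
    # One branch of the standard construction, stable when trace(R) > 0
    w = np.sqrt(max(np.trace(R) + 1.0, 0.0)) / 2.0
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    q = np.array([w, x, y, z])
    return q / np.linalg.norm(q)  # what Quaterniond's normalize() would ensure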

How to update the covariance of a multi-camera system when a rigid motion is applied to all of them?

For example, with 6-DoF camera states, two cameras have 12 state parameters and a 12x12 covariance matrix (assume a Gaussian distribution). How does this covariance change when a 6-DoF rigid motion is applied to the cameras?
What if the 6-DoF motion is Gaussian-distributed too?
You can use the "forward propagation" theorem (you can find it in Hartley and Zisserman's Multiple View Geometry book, chapter 5, page 139).
Basically, if you have a random variable x with mean x_m and covariance C, and a differentiable function f that you apply to x, then the mean of f(x) will be f(x_m) and its covariance C_f will be approximately J*C*J^t, where ^t denotes the transpose and J is the Jacobian matrix of f evaluated at x_m.
Let's now consider the problems of the covariance propagation separately for camera positions and camera orientations.
First, see what happens to the translation parameters of the camera; let's denote them x_t. In your case, f is a rigid transformation, which means that
f(x_t) = R*x_t + T  // R is a rotation and T a translation, x_t is the position of the camera
Now the Jacobian of f with respect to x_t is simply R, so the covariance is given by
C_f=RCR^T
which is an interesting result: it indicates that the change in covariance depends only on the rotation. This makes sense, since intuitively, translating the (positional) data doesn't actually change the axes along which it varies (think about principal component analysis).
Also note that if C is isotropic, i.e. a diagonal matrix lambda*Identity, then C_f=lambda*Identity, which also makes sense, since intuitively we don't expect an isotropic covariance to change with a rotation.
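A quick numerical check of both claims (a sketch using numpy/scipy, not from the original answer):
import numpy as np
from scipy.spatial.transform import Rotation

R = Rotation.random(random_state=0).as_matrix()   # rotation part of the rigid motion
C = np.diag([1.0, 2.0, 3.0])                      # anisotropic position covariance
C_f = R @ C @ R.T                                 # forward-propagated covariance

C_iso = 0.5 * np.eye(3)                           # isotropic: lambda*Identity
assert np.allclose(R @ C_iso @ R.T, C_iso)        # unchanged by the rotation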
Now consider the orientation parameters. Let's use the Lie algebra of the SO(3) group. In that case, the yaw, pitch and roll will be parametrized as v=[alpha_1, alpha_2, alpha_3]^t (they are basically Lie algebra coefficients). In the following, we will use the exponential and logarithm maps between the Lie algebra so(3) and the group SO(3). We can write our function as
f(v)=log(R*exp(v))
In the above, exp(v) is the rotation matrix of your camera, and R is the rotation from your rigid transformation.
Note that translation doesn't affect the orientation parameters. Computing the Jacobian of f with respect to v is mathematically involved. I suspect you could do it using the adjoint representation of the Lie algebra, or the Baker-Campbell-Hausdorff formula (truncated, so at limited precision). Here, we'll take a shortcut and use the result given in this question.
jacobian_f_with_respect_to_v=R*inverse(R*exp(v))
=R*exp(v)^t*R^t
So, our covariance will be
R*exp(v)^t*R^t * Cov(v) * (R*exp(v)^t*R^t)^t
=R*exp(v)^t*R^t * Cov(v) * R * exp(v) * R^t
Again, we observe the same thing: if Cov(v) is isotropic then so is the covariance of f.
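This too can be checked numerically: since R*exp(v)^t*R^t is itself orthogonal, an isotropic Cov(v) passes through the sandwich unchanged (a sketch; the hat helper is my addition, not from the original answer):
import numpy as np
from scipy.linalg import expm

def hat(v):
    # so(3) hat operator: maps a 3-vector to a skew-symmetric matrix
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

R = expm(hat(np.array([0.3, -0.1, 0.4])))               # rotation of the rigid motion
Q = R @ expm(hat(np.array([0.1, -0.2, 0.3]))).T @ R.T   # the matrix R*exp(v)^t*R^t
assert np.allclose(Q @ (0.7 * np.eye(3)) @ Q.T, 0.7 * np.eye(3))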
Edit: Answers to the questions you asked in the comments
Why did you assume conditional independence between translation/rotation?
Conditional independence between translation and orientation parameters is often assumed in many works (especially in the pose graph literature, e.g. see Hauke Strasdat's thesis), and I've always found that in practice this works a lot better (not a very convincing argument, I know). However, I admit that I didn't put much thought (if any) into this when writing this answer, because my main point was "use the forward propagation theorem". You can apply it jointly to orientation/position, and all that changes is that your Jacobian will look like
J = [J_R J_T]  // J_R: Jacobian w.r.t. orientation, J_T: Jacobian w.r.t. position
and then the "densification" of the covariance matrix will happen as a result of the propagation J*C*J^t.
Why did you use SO(3) instead of SE(3)?
You said it yourself, I separated the translation parameters from the orientation. SE(3) is the space of rigid transformation, which includes translations. It wouldn't have made sense for me to use it since I already had taken care of the position parameters.
What about the covariance between two cameras?
I think we can still apply the same theorem. The difference is that now your rigid transformation will be a function M(x_1, x_2) of 12 parameters, and your Jacobian will look like [J_R_1 J_R_2 J_T_1 J_T_2]. These can be tedious to compute, as you know, so if you can, just try numeric or automatic differentiation, as in the sketch below.
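For instance, a minimal central-difference Jacobian (a generic sketch of the numeric-differentiation route, not code from the original post; x must be a float array) that you could then plug into J*C*J^t:
import numpy as np

def numeric_jacobian(f, x, eps=1e-6):
    # Central differences, perturbing one state parameter at a time
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2.0 * eps)
    return J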

Least Squares Solution of Overdetermined Linear Algebraic Equation Ax = By

I have a linear algebraic equation of the form Ax = By, where A is a 6x5 matrix, x a vector of size 5, B a 6x6 matrix and y a vector of size 6. A, B and y are known, with their values arriving in real time from sensors; x is unknown and has to be found. One solution is the least-squares estimate, x = [(A^T*A)^-1]*(A^T)*B*y, the conventional solution of such linear algebraic equations. I used Eigen's QR decomposition to solve this, as below:
matrixA = getMatrixA();
matrixB = getMatrixB();
vectorY = getVectorY();
//LSE Solution
Eigen::ColPivHouseholderQR<Eigen::MatrixXd> dec1(matrixA);
vectorX = dec1.solve(matrixB*vectorY);
Everything is fine until now. But when I check the error e = Ax - By, it's not always zero. The error is not very big, but not ignorable either. Is there another type of decomposition that is more reliable? I have gone through one of the pages but could not understand the meaning or how to implement this. Below are lines from the reference on how to solve the problem. Could anybody suggest how to implement this?
The solution of such equations Ax = By is obtained by forming the error vector e = Ax - By and then finding the unknown vector x that minimizes the weighted error e^T*W*e, where W is a weighting matrix. For simplicity, this weighting matrix is chosen to be of the form W = K*S, where S is a constant diagonal scaling matrix and K is a scalar weight. Hence the solution to the equation becomes
x = [(A^T*W*A)^-1]*(A^T)*W*B*y
I did not understand how to form the matrix W.
Your statement "But when I check the error e = Ax - By, it's not always zero" will almost always be true, regardless of your technique or what weighting you choose. When you have an overdetermined system, you are basically trying to fit a straight line to a slew of points. Unless, by chance, all the points happen to lie exactly on a single perfectly straight line, there will be some error. So no matter what technique you use to choose the line (weights and so on), you will always have some error if the points are not collinear. The alternative would be to use some kind of spline, or in higher dimensions to allow for warping. In those cases, you can choose to fit all the points exactly to a more complicated shape and end up with zero error.
So the choice of a weight matrix simply changes which straight line you get by giving each point a slightly different weight; it will never completely remove the error. But if there are a few particular points you care about more than the others, you can give the error at those points a higher weight when choosing the least-squares fit.
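A small sketch of how the weighting works in practice (numpy here rather than Eigen, and the per-equation weight vector w is a made-up example, not from the original post): scaling each row of A and of B*y by sqrt(w_i) and solving the ordinary least-squares problem minimizes exactly e^T*W*e with W = diag(w):
import numpy as np

def weighted_lsq(A, B, y, w):
    # w: one weight per equation, e.g. np.array([1, 1, 1, 1, 1, 10.0])
    # to make the fit respect the last equation more
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], (B @ y) * sw, rcond=None)
    return x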
For spline fitting see:
http://en.wikipedia.org/wiki/Spline_interpolation
For the nicest spline-curve interpolation you can use centripetal Catmull-Rom, which, in addition to finding a curve that fits all the points, prevents the unnecessary loops and self-intersections that can come up during abrupt changes in the data direction.
Catmull-rom curve with no cusps and no self-intersections

Alglib: solving A * x = b in a least squares sense

I have a somewhat complicated algorithm that requires the fitting of a quadric to a set of points. This quadric is given by its parametrization (u, v, f(u,v)), where f(u,v) = au^2+bv^2+cuv+du+ev+f.
The coefficients of the f(u,v) function need to be found, since I have a set of exactly 6 constraints this function should obey. The problem is that this set of constraints, although yielding a problem of the form A*x = b, is not well behaved enough to guarantee a unique solution.
Thus, to cut it short, I'd like to use alglib's facilities to somehow either determine A's pseudoinverse or directly find the best fit for the x vector.
Apart from computing the SVD, is there a more direct algorithm implemented in this library that can solve a system in a least-squares sense (again, apart from the SVD or from using the naive inv(transpose(A)*A)*transpose(A)*b formula for general least-squares problems where A is not a square matrix)?
Found the answer through some careful documentation browsing:
rmatrixsolvels( A, noRows, noCols, b, singularValueThreshold, info, solverReport, x)
The documentation states that the singular value threshold is a clamping threshold: any singular value of the S matrix from the SVD decomposition that falls below it is set to 0. Thus it should be a scalar between 0 and 1.
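For illustration, here is the equivalent clamping written out in numpy (my sketch of the idea, not alglib code; since the threshold is described as a scalar between 0 and 1, I've assumed it is relative to the largest singular value):
import numpy as np

def solve_ls_thresholded(A, b, threshold):
    # Truncated-SVD least squares: singular values below threshold * s_max
    # contribute nothing to the solution
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.array([1.0 / si if si > threshold * s[0] else 0.0 for si in s])
    return Vt.T @ (s_inv * (U.T @ b))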
Hopefully, it will help someone else too.