Sympy -- define a proper domain

I want to perform some calculations with polynomials in Sympy, such as multiplication and so on. The coefficients of these polynomials are noncommutative; they may be matrices, for example. How can I set the appropriate domain in the "Sympy.Poly" class for this problem?
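For ordinary commutative coefficients I know how to set a domain, and at the plain-expression level SymPy's noncommutative symbols behave the way I want, as in this minimal sketch (A and B are my stand-ins for matrix coefficients):

from sympy import Poly, expand, symbols

x = symbols('x')
A, B = symbols('A B', commutative=False)  # stand-ins for matrix coefficients

# Setting a domain works fine for commutative coefficients:
p = Poly(x**2 + 1, x, domain='QQ')

# Plain expressions do respect noncommutativity, so products can at
# least be expanded outside of Poly:
f = A*x + B
g = B*x + A
print(expand(f*g))  # A*B*x**2 + A**2*x + B**2*x + B*A (term order may vary)

But I don't see how to tell Poly itself that its coefficients live in a noncommutative ring.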

Related

Convert powers of trigonometric functions to linear sum of multiple angles

I cannot figure out if sympy has any functions to convert powers of trigonometric functions to a linear combination of multiple angles. For example, sin(x)**4 can be written as (3/2 - 2*cos(2*x) + 1/2 * cos(4*x))/4.
I managed to get this working this way:
from sympy import exp, expand, simplify, sin, symbols

x = symbols('x')
simplify(expand((sin(x)**4).rewrite(exp)))  # cos(4*x)/8 - cos(2*x)/2 + 3/8

Symmetry of autocovariance matrix by multiplying feature matrix with its transpose

There is a mathematical theorem stating that a matrix A multiplied with its transpose yields a symmetric, positive semidefinite matrix (and hence real, nonnegative eigenvalues).
Why does the symmetry test fail here for medium-sized random matrices?
It always works for small matrices (20x20, etc.).
import numpy as np
features = np.random.random((50,70))
autocovar = np.dot(np.transpose(features),features)
print((np.transpose(autocovar) == autocovar).all())
I always get 'False' running this code. What am I doing wrong?
I need the autocovariance matrix to perform a PCA but so far I get complex eigenvalues...
Thanks!
This could be due to errors in floating point arithmetic. Your matrix may be very close to a symmetric matrix numerically, but due to errors in finite precision arithmetic it is technically nonsymmetric. As a result a numerical solver may return complex eigenvalues.
One solution (sort of a hack) is to symmetrize the matrix, i.e., replace it with its symmetric part. This matrix is guaranteed to be symmetric, even in floating point arithmetic, and it will be very close to the matrix you define (near machine precision). This can be achieved via
autocovar_sym = .5*(autocovar+autocovar.T)
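For completeness, a small NumPy sketch reproducing the situation: a tolerance-aware np.allclose check shows the matrix is symmetric up to rounding, and after symmetrization the exact test passes (np.linalg.eigh then gives real eigenvalues for the PCA):

import numpy as np

features = np.random.random((50, 70))
autocovar = features.T @ features

# Exact equality may fail: corresponding sums are accumulated in
# different orders, leaving differences near machine precision.
print((autocovar.T == autocovar).all())     # may print False
print(np.allclose(autocovar.T, autocovar))  # True: symmetric up to rounding

# The symmetrized matrix is exactly symmetric in floating point, and
# eigh (for symmetric matrices) returns real eigenvalues for PCA.
autocovar_sym = 0.5 * (autocovar + autocovar.T)
print((autocovar_sym.T == autocovar_sym).all())  # True
eigenvalues = np.linalg.eigh(autocovar_sym)[0]   # real, suitable for PCA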
Hope this helps.

How to find an inverse of a nearly singular matrix?

I am implementing an algorithm in C++ and CUDA, but I ran into trouble when I tried to find the inverse of a special matrix.
This matrix has the following features:
it is a square matrix (suppose: (m+3)x(m+3), m>0);
it is symmetric (its transpose is itself);
its main diagonal is all zeros;
it has a 3x3 zero block in the bottom right corner;
you can consider this matrix in this form: H = [A, B; B', 0].
I have tried some methods, but all failed:
pseudo-inverse matrix:
I used MATLAB at first and got an error or warning when I tried to use inv(H'*H): "Warning: Matrix is singular to working precision" or "Matrix is close to singular or badly scaled".
some approximation methods:
the reference material is here: approximation. I found two methods: Gauss-Jordan elimination and Cholesky decomposition. When I tried chol in MATLAB, I got the following error: "Matrix must be positive definite".
Can anybody give me some suggestions?
It would be good to know some more information about your specific problem and, in particular, whether you need the inverse per se or whether you just need to solve a linear system of equations. I will try to give you directions for both cases.
Let me start from the observation that your matrix is nearly singular, and so your system is ill-conditioned.
DETERMINING THE INVERSE OF A NEARLY SINGULAR MATRIX
As has been clarified in the comments and answers above, seeking the inverse of a nearly singular matrix is meaningless. What makes sense is to construct a regularized inverse of your matrix. You can do that by resorting to the singular value decomposition (SVD) of your matrix. In more detail, you can construct the singular system, remove the least significant singular values (which are the source of the nearly singular behavior of the matrix), and then use the remaining singular values and vectors to form an approximate inverse. Of course, in this case A*A_inv will only approximate the identity matrix.
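A minimal NumPy sketch of such a truncated-SVD inverse (H is assumed square, as in the question; the truncation threshold tol is an arbitrary choice you would tune to your problem):

import numpy as np

def truncated_svd_inverse(H, tol=1e-10):
    # H = U * diag(s) * V'; drop singular values below the threshold
    # and invert only the remaining ones.
    U, s, Vt = np.linalg.svd(H)
    s_inv = np.zeros_like(s)
    keep = s > tol * s[0]          # s is sorted in descending order
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ np.diag(s_inv) @ U.T

# H @ truncated_svd_inverse(H) only approximates the identity, as noted above.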
How can this be done on a GPU? First, let me say that implementing an SVD algorithm in C++ or CUDA is by no means an easy task, and there are several techniques to choose from depending on the accuracy you need in the singular values. That said, MATLAB has a set of linear algebra functions that work on the GPU, and CULA and MAGMA are two libraries offering SVD calculation routines. You can also consider ArrayFire, which likewise offers linear algebra routines, including the SVD.
INVERTING A NEARLY SINGULAR SYSTEM
In this case, you should consider using some sort of Tikhonov regularization, which consists in formulating the inversion of the linear system as an optimization problem and adding a regularization term, which may depend on the features you already know about your unknowns.
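A sketch of the simplest instance of this idea, ridge (Tikhonov) regularization of the normal equations in NumPy; lam is the regularization weight you would have to tune:

import numpy as np

def tikhonov_solve(H, b, lam=1e-6):
    # Minimizes ||H x - b||**2 + lam * ||x||**2 by solving the
    # regularized normal equations (H'H + lam*I) x = H'b.
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ b)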
For both the cases above, I recommend reading some theory. The book
M. Bertero, P. Boccacci, Introduction to Inverse Problems in Imaging
would be useful whether you have to find an approximate inverse or explicitly invert the linear system.
The pseudo-inverse matrix is inv(H'*H)*H'. Since the condition number of H is very high (try cond(H)), you may need a regularization factor to obtain the pseudo-inverse matrix: inv(H'*H + lambda*eye(size(H)))*H'. The smaller lambda is, the lower the bias of the estimate, but too small a lambda leads to high variance (the problem stays ill-conditioned). You may try to find a best-suited value.
You can of course use pinv(H) directly. The reason why pinv(H)*H ~= eye(size(H)) is that pinv(H) is just an approximation of the inverse of a matrix H whose rank is lower than size(H,1). In other words, the columns of H are not completely independent.
I would like to show you a very simple example:
>>a =
0 1 0
0 0 1
0 1 0
pinv(a) * a
>>
ans =
0 0 0
0 1.0000 0
0 0 1.0000
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>>a =
1 0
0 1
1 0
pinv(a) * a
>>
ans =
1.0000 0
0 1.0000
Note that a * pinv(a) is not the identity matrix, because the columns of a are linearly independent while its rows are not. Check this page for more details.
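The same experiment in NumPy, for readers not using MATLAB (numerically, pinv(a) @ a is the orthogonal projector onto the row space of a):

import numpy as np

a = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 1., 0.]])
print(np.linalg.pinv(a) @ a)  # projector onto the row space, not the identity

b = np.array([[1., 0.],
              [0., 1.],
              [1., 0.]])
print(np.linalg.pinv(b) @ b)  # 2x2 identity: the columns are independent
print(b @ np.linalg.pinv(b))  # 3x3 projector, not identity: rows are dependent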

Kiss FFT seems to multiply data by the number of points that it transforms

My limited understanding of the Fourier transform is that you should be able to toggle between the time and frequency domain without changing the original data. So, here is a summary of what I (think I) am doing:
Using kiss_fft_next_fast_size(994) to determine that I should use 1000.
Using kiss_fft_alloc(...) to create a kiss_fft_cfg with nfft = 1000.
Extending my input data from size 994 to 1000 by padding extra points as zero.
Passing kiss_fft_cfg to kiss_fft(...) along with my input and output arrays.
Using kiss_fft_alloc(...) to create an inverse kiss_fft_cfg with nfft = 1000.
Passing the inverse kiss_fft_cfg to kiss_fft(...) inputting the previous output array.
Expecting the original data back, but getting each datum exactly 1000 times bigger!
I have put a full example here, and my 50-odd lines of code can be found right at the end. Although I can work around this by dividing each result by the value of OPTIMAL_SIZE (i.e. 1000), that fix makes me very uneasy without understanding why.
Can you please advise what simple, stupid thing(s) I am doing wrong?
This is to be expected: the inverse discrete Fourier transform (which can be implemented using the Fast Fourier Transform) requires a scaling by 1/N:
The normalization factor multiplying the DFT and IDFT (here 1 and 1/N) and the signs of the exponents are merely conventions, and differ in some treatments. The only requirements of these conventions are that the DFT and IDFT have opposite-sign exponents and that the product of their normalization factors be 1/N. A normalization of sqrt(1/N) for both the DFT and IDFT makes the transforms unitary, which has some theoretical advantages. But it is often more practical in numerical computation to perform the scaling all at once as above (and a unit scaling can be convenient in other ways).
http://en.wikipedia.org/wiki/Dft
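A small NumPy sketch of the same convention (not Kiss FFT itself): np.fft.ifft applies the 1/N factor for you, while an unnormalized inverse, which is what the question observes Kiss FFT computing, returns the input scaled by N:

import numpy as np

N = 1000
x = np.random.rand(N)
X = np.fft.fft(x)            # forward DFT, no scaling

# np.fft.ifft includes the 1/N factor, so the round trip is exact:
print(np.allclose(np.fft.ifft(X).real, x))   # True

# An unnormalized inverse DFT can be emulated via conj(FFT(conj(X)));
# each datum then comes back N times bigger, as in the question:
x_big = np.conj(np.fft.fft(np.conj(X)))
print(np.allclose(x_big.real, N * x))        # True
print(np.allclose(x_big.real / N, x))        # True: dividing by N fixes it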

Polynomials with real coefficients in NTL

Does anyone know if the NTL library supports polynomials with real-number coefficients, such as from the NTL classes RR or xdouble, or just regular C++ floats?
I want to do polynomial multiplication for polynomials with real coefficients and would like to know if there is a class "RR_X" (or something similar), like the ZZ_X and ZZ_pX classes, that supports FFT polynomial multiplication.