I have an Eigen::MatrixXd and a vector<int> of the indices of the rows that I need to erase from the original matrix.
Is there a way to achieve this result as fast as possible?
Example:
Matrix:
1
2
4
0
Indexes of rows to remove {0, 2}.
Matrix:
2
0
Unfortunately, the answer is you'll have to roll your own, i.e. create a VectorXd of the appropriate size and fill it manually in a loop. When asked whether a MATLAB-style conditional creation of matrices (B = A(A(1,:)<3,:)) exists, the dev (ggael) indicated that that feature would come later. I wouldn't be surprised if it's an SO-style 6-8 weeks ;)
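That roll-your-own loop is short. Here is a sketch of the copy-the-kept-rows logic, shown with numpy for brevity; an Eigen version would fill a new MatrixXd row by row in exactly the same way:

```python
import numpy as np

def remove_rows(matrix, rows_to_remove):
    # Build the list of row indices to keep, then copy those rows into a new matrix.
    remove = set(rows_to_remove)
    keep = [i for i in range(matrix.shape[0]) if i not in remove]
    return matrix[keep, :]

m = np.array([[1.0], [2.0], [4.0], [0.0]])
print(remove_rows(m, [0, 2]))  # rows 1 and 3 survive: [[2.], [0.]]
```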
This question already has an answer here: Add a row to a matrix in OpenCV (1 answer). Closed 5 years ago.
Can anyone tell me how to append a couple of rows (as a cv::Mat) to the end of an existing cv::Mat? Since it is a lot of data, I don't want to loop over the rows and add them one by one. So here is what I want to do:
cv::Mat existing; //This is a Matrix, say of size 700x16
cv::Mat appendNew; //This is the new Matrix with additional data, say of size 200x16.
existing.push_back(appendNew);
If I try to push back the smaller matrix, I get an error of non-matching sizes:
OpenCV Error: Sizes of input arguments do not match
(Pushed vector length is not equal to matrix row length)
So I guess .push_back() tries to append the whole matrix like a kind of new channel, which won't work because it is much smaller than the existing matrix. Does someone know if the appending of the rows at the end of the existing matrix is possible as a whole, not going through them with a for-loop?
It seems like an easy question to me, nevertheless I was not able to find a simple solution online... So thanks in advance!
Cheers:)
You can use cv::vconcat() to append rows, either at the top or at the bottom of a given matrix, as:
import cv2
import numpy as np

box = np.ones((50, 50, 3), dtype=np.uint8)
box[:] = np.array([0, 0, 255])         # red in OpenCV's BGR channel order
sample_row = np.ones((1, 50, 3), dtype=np.uint8)
sample_row[:] = np.array([255, 0, 0])  # blue in BGR order
for i in range(5):
    box = cv2.vconcat([box, sample_row])
For visualization purposes I created a red box and appended blue rows at the bottom (OpenCV stores channels in BGR order). You can replace this with your original data; just make sure that both matrices to be concatenated have the same number of columns and the same data type. I have explicitly defined the dtype while creating the matrices.
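Since the question itself is in C++, note that the equivalent one-liner there should be cv::vconcat(existing, appendNew, existing). The shape rule is the same as numpy's vstack, sketched here without OpenCV (the 700x16 and 200x16 sizes are taken from the question):

```python
import numpy as np

existing = np.zeros((700, 16))   # stand-in for the existing 700x16 cv::Mat
append_new = np.ones((200, 16))  # stand-in for the new 200x16 cv::Mat

# Row-wise concatenation only requires the column counts (and dtypes) to match.
combined = np.vstack([existing, append_new])
print(combined.shape)  # (900, 16)
```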
Can someone explain the last line of this MATLAB expression? I need to convert this to C++ and I do not have any experience with MATLAB syntax.
LUT = zeros(fix(Max - Min),1);
Bin= 1+LUT(round(Image));
Image is an input image, Min and Max are image minimum and maximum grey levels.
Is Bin going to be an array? What will it contain? What are its dimensions, the same as LUT or as Image? What does the '1' stand for (adding 1 to each member of the array, or a shift in array positions)? I cannot find any example of this.
Thanks in advance.
LUT is a column vector that has a number of entries that is equal to the difference in maximum and minimum intensities in your image. LUT(round(Image)) retrieves the entries in your vector LUT which are given by the command round(Image). The dimension of Bin will be equal to the size of your matrix Image, and the entries will be equal to the corresponding indices from the LUT vector. So, say you have a 3x3 matrix Image, whose rounded values are as follows:
1 2 3
2 2 4
1 5 1
Then LUT(round(Image)) will return:
LUT(1) LUT(2) LUT(3)
LUT(2) LUT(2) LUT(4)
LUT(1) LUT(5) LUT(1)
And 1+LUT(round(Image)) will return:
1+LUT(1) 1+LUT(2) 1+LUT(3)
1+LUT(2) 1+LUT(2) 1+LUT(4)
1+LUT(1) 1+LUT(5) 1+LUT(1)
Note that this only works if all entries in round(Image) are positive, because you can't use zero/negative indexing in the LUT vector (or any MATLAB matrix/vector, for that matter).
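To sketch what a port of that one line looks like (the LUT values below are made up for illustration), the key detail to watch is that MATLAB indexing is 1-based, so a 0-based language has to subtract 1 from the index:

```python
import numpy as np

def apply_lut(image, lut):
    # MATLAB's 1+LUT(round(Image)): round each pixel, use it as a 1-based
    # index into LUT, then add 1 to every looked-up value.
    idx = np.round(image).astype(int) - 1  # convert 1-based index to 0-based
    return 1 + lut[idx]

lut = np.arange(10, 60, 10)  # illustrative LUT = [10 20 30 40 50]
img = np.array([[1, 2, 3], [2, 2, 4], [1, 5, 1]], dtype=float)
print(apply_lut(img, lut))
# [[11 21 31]
#  [21 21 41]
#  [11 51 11]]
```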
I have observations in vector form and I want to calculate the covariance matrix and the mean from these observations in OpenCV using calcCovarMatrix:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html
My current function call is:
calcCovarMatrix(descriptors.at(j).descriptor.t(), covar, mean, CV_COVAR_ROWS);
Here, descriptors.at(j).descriptor.t() is a matrix with 2 columns and 390 rows, so my "random variables" are the rows of this matrix. covar and mean are empty matrices.
The function calculates covar and returns a 390x390 matrix, but the mean is just a matrix with 1 row and 2 columns. I do not get this. I am expecting a matrix with 1 column and 390 rows (a column vector).
Am I using the wrong variant of the function? If yes, how should I use the correct variant in my case? I am specifically asking about the nsamples parameter; I don't know to what value to set it.
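For comparison, here is how the shapes fall out depending on whether the 390 rows are treated as the samples or as the variables (a numpy sketch of the statistics, not the OpenCV call itself): 390 samples of dimension 2 give a 1x2 mean and a 2x2 covariance, while a 390x390 covariance corresponds to treating the 390 rows as the variables.

```python
import numpy as np

samples = np.random.default_rng(0).random((390, 2))  # 390 rows, 2 columns

mean = samples.mean(axis=0, keepdims=True)   # (1, 2): one mean per column
covar = np.cov(samples, rowvar=False)        # (2, 2): columns are the variables
covar_rows = np.cov(samples, rowvar=True)    # (390, 390): rows are the variables

print(mean.shape, covar.shape, covar_rows.shape)  # (1, 2) (2, 2) (390, 390)
```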
In MSVS C++ I have a multidimensional vector (matrix). I am not using arrays.
For example:
vector< vector<float> > image(1056, vector<float>(366));
After data is included in the vector from another source, how is it possible to create a sub-matrix from this matrix, given a pixel co-ordinate and the number of columns and rows needed?
For example, I have:
1 2 3 4
5 6 7 8
9 10 11 12
I want:
6 7
10 11
Seems basic, but I am new to this concept. There are examples, but they use arrays, and I was unable to adapt them to my own needs.
There is no simple way to do it. You should create a new two-dimensional array of the desired size and copy the relevant pieces of data into it.
Alternatively, you may want to access the matrix through a view: a proxy class that maps view indices to the underlying data indices.
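The copy approach is a two-line slice in Python; a C++ version over vector<vector<float>> is the same idea with two nested loops:

```python
def submatrix(image, row, col, n_rows, n_cols):
    # Copy an n_rows x n_cols block starting at pixel (row, col) into a new matrix.
    return [r[col:col + n_cols] for r in image[row:row + n_rows]]

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12]]
print(submatrix(image, 1, 1, 2, 2))  # [[6, 7], [10, 11]]
```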
I am trying to do a 2D Real To Complex FFT using CUFFT.
I understand that I will get W/2+1 complex values back per row (W being the "width" of my H*W matrix).
The question is: if I want to build a full H*W version of this matrix after the transform, how do I go about copying values from the H*(W/2+1) result matrix back to a full-size matrix so that both halves and the DC value end up in the right place?
Thanks
I'm not familiar with CUDA, so take that into consideration when reading my response. I am familiar with FFTs and signal processing in general, though.
It sounds like you start out with an H (rows) x W (cols) matrix, and that you are doing a 2D FFT that essentially does an FFT on each row, and you end up with an H x W/2+1 matrix. A W-wide FFT returns W values, but the CUDA function only returns W/2+1 because the spectrum of real data is conjugate-symmetric, so the negative frequency data is redundant.
So, if you want to reproduce the missing W/2-1 points, mirror the positive-frequency bins and take their complex conjugates. For instance, if one of the rows is as follows:
Index Data
0 12 + i
1 5 + 2i
2 6
3 2 - 3i
...
The 0 index is your DC power, the 1 index is the lowest positive frequency bin, and so forth. You would thus make your closest-to-DC negative frequency bin the conjugate 5-2i, the next closest 6, and so on. Where you put those values in the array is up to you. I would do it the way MATLAB does it, with the negative frequency data after the positive frequency data.
I hope that makes sense.
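To make that concrete, here is a numpy sketch of rebuilding a full W-bin row from the W/2+1 values an R2C transform returns; the mirrored bins must be complex conjugates for the result to match a full complex FFT of the same real row:

```python
import numpy as np

W = 8
row = np.random.default_rng(0).random(W)  # a real input row
half = np.fft.rfft(row)                   # W/2 + 1 bins: DC .. Nyquist

# Rebuild the full W-bin spectrum: the negative-frequency bins (stored after
# the positive ones, MATLAB-style) are conjugates of the positive bins.
full = np.empty(W, dtype=complex)
full[:W // 2 + 1] = half
full[W // 2 + 1:] = np.conj(half[1:W // 2][::-1])

print(np.allclose(full, np.fft.fft(row)))  # True
```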
There are two ways this can be achieved. You will have to write your own kernel to achieve either of these.
1) You can take the conjugate of the (half) data you get to reconstruct the other half.
2) Since you want the full result anyway, it would be best to convert the input data from real to complex (by padding with zero imaginary parts) and perform a complex-to-complex transform.
From practice I have noticed that there is not much of a difference in speed either way.
I actually searched the nVidia forums and found a kernel that someone had written that did just what I was asking, and that is what I used. If you search the CUDA forum for "redundant results fft" or similar, you will find it.