In OpenCV there's a function called FindHomography which takes matrices as input.
I have data points representing the locations of features in a frame, and I want to put these data points into a matrix using C++ or C.
I want to put them in a 2D array in which each row holds a feature's x and y location.
Can you suggest how to do this?
Let's say I have 20 features in the frame. These feature coordinates are just integers, and I want to put them into a matrix so I can pass them to the function mentioned above.
Here's an example based on the one in O'Reilly's Learning OpenCV (pages 35-36):

float vals[] = { 0.1f, 0.2f, 0.3f, 0.4f };
CvMat mat;
cvInitMatHeader(&mat, 2, 2, CV_32FC1, vals);

This creates a 2x2 float matrix header over the statically allocated data above (the data itself is not copied).
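For the concrete case of 20 features, a minimal sketch along the same lines (the arrays xs and ys are hypothetical stand-ins for wherever your feature coordinates live) could be:

int xs[20], ys[20];      /* hypothetical: filled by your feature detector */
float coords[20 * 2];

for (int i = 0; i < 20; i++) {
    coords[2 * i]     = (float)xs[i]; /* column 0: x location */
    coords[2 * i + 1] = (float)ys[i]; /* column 1: y location */
}

CvMat points;
cvInitMatHeader(&points, 20, 2, CV_32FC1, coords);
/* points is now a 20x2 CV_32FC1 matrix view over coords; pass it, together
   with a matching 20x2 matrix of points from the second frame, to
   cvFindHomography. */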
I'm trying to implement the following pseudo-code in C++ using Eigen:
img_binary = +1*(img>img_mean) + -1*(img<img_mean)
i.e. I'm trying to convert a grayscale image into a binary image such that values greater than the image mean are +1 and values less than it are -1. So far, I have the following:
cv::Mat cv_image;
cv_image = cv::imread(img_path, CV_LOAD_IMAGE_GRAYSCALE);
MatrixXf eig_image;
cv::cv2eigen(cv_image, eig_image); // from <opencv2/core/eigen.hpp>
float image_mean = eig_image.mean();
ArrayXXf bin_image;
bin_image = eig_image.array() > image_mean;
I'm getting an error in the last line saying that I mixed different numeric types. Any suggestion on how I can do element-wise comparisons with Eigen?
The easiest solution in Eigen would be
ArrayXXf bin_image = (eig_image.array() > image_mean).cast<float>()*2.f-1.f;
An alternative would be:
ArrayXXf bin_image = (eig_image.array() > image_mean)
    .select(ArrayXXf::Constant(eig_image.rows(), eig_image.cols(), 1.0f), -1.0f);
Having to wrap one argument in ArrayXXf::Constant is unfortunately necessary, because there is no .select overload that accepts two scalar values.
However, unless you plan to do further processing in Eigen, you should consider using the corresponding OpenCV function, cv::threshold.
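For reference, here is a minimal OpenCV-only sketch of the same binarization, assuming the img_path variable from the question. cv::threshold produces values in {0, maxval}, so a convertTo step maps {0, 255} onto {-1, +1} afterwards:

cv::Mat img = cv::imread(img_path, CV_LOAD_IMAGE_GRAYSCALE);
double img_mean = cv::mean(img)[0];    // mean intensity of the single channel

cv::Mat thresholded;
cv::threshold(img, thresholded, img_mean, 255, cv::THRESH_BINARY);

cv::Mat bin_image;                     // float image with values -1 or +1
thresholded.convertTo(bin_image, CV_32F, 2.0 / 255.0, -1.0);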
I have a 3xN Mat data, which is saved in a YAML file and looks like this:
%YAML:1.0
data1: !!opencv-matrix
   rows: 50
   cols: 3
   dt: d
   data: [ 7.1709999084472656e+01, -2.5729999542236328e+01,
       -1.4074000549316406e+02, 7.1680000305175781e+01,
       -2.5729999542236328e+01, -1.4075000000000000e+02,
       7.1639999389648438e+01, -2.5729999542236328e+01,
       -1.4075000000000000e+02, 7.1680000305175781e+01,
       -2.5729999542236328e+01, -1.4075000000000000e+02, ...
I want to reduce the dimensionality of my 3D data to 1D, or rather 2D, and then visualize it on a QwtPlotCurve. To that end, I have used OpenCV's PCA class, but I have no idea how to get the computed x and y coordinates out of the PCA result:
int numOfComponents= 100;
PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, numOfComponents);
Mat mean= pca.mean.clone();
Mat eigenvalues= pca.eigenvalues.clone();
Mat eigenvectors= pca.eigenvectors.clone();
Here's an example of a 2D data set.
x=[2.5, 0.5, 2.2, 1.9, 3.1, 2.3, 2, 1, 1.5, 1.1];
y=[2.4, 0.7, 2.9, 2.2, 3.0, 2.7, 1.6, 1.1, 1.6, 0.9];
We can write these arrays in OpenCV with the following code.
float X_array[]={2.5,0.5,2.2,1.9,3.1,2.3,2,1,1.5,1.1};
float Y_array[]={2.4,0.7,2.9,2.2,3.0,2.7,1.6,1.1,1.6,0.9};
cv::Mat x(10,1,CV_32F,X_array); //Wrap X_array in a Mat header (PCA needs Mat form; no data is copied)
cv::Mat y(10,1,CV_32F,Y_array); //Wrap Y_array in a Mat header
Next, we combine x and y into a single cv::Mat called data, because all of the samples must be in one matrix for the PCA function to work. (If your data is in 2D format, such as an image, you can simply flatten the 2D data into 1D signals and combine those.)

cv::Mat data(10, 2, CV_32F);  //10x2 matrix that will hold both coordinates
x.col(0).copyTo(data.col(0)); //copy x into first column of data
y.col(0).copyTo(data.col(1)); //copy y into second column of data
After this code, data looks like the following:
data=
[2.5, 2.4;
0.5, 0.7;
2.2, 2.9;
1.9, 2.2;
3.1, 3;
2.3, 2.7;
2, 1.6;
1, 1.1;
1.5, 1.6;
1.1, 0.9]
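As a side note, assuming a reasonably recent OpenCV, the same combination can be done in one call with cv::hconcat:

cv::Mat data;
cv::hconcat(x, y, data); // concatenate the two 10x1 columns into the 10x2 matrix shown above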
With cv::PCA, we can calculate the eigenvalues and eigenvectors of the 2D signal.
cv::PCA pca(data, //Input data
        Mat(), //Mean of the input data; pass Mat() to have PCA compute it
        CV_PCA_DATA_AS_ROW, //int flag: each row is one sample
        2); // number of components that you want to retain (keep)
Mat mean=pca.mean; // mean of the data, in Mat form
Mat eigenvalue=pca.eigenvalues;
Mat eigenvectors=pca.eigenvectors;
The resulting eigenvalues and eigenvectors are:
EigenValue=
[1.155625;
0.044175029]
EigenVectors=
[0.67787337, 0.73517865;
0.73517865, -0.67787337]
As you can see, the first eigenvalue (1.16) is much bigger than the second (0.044). So the first row of the eigenvectors is more important than the second, and if you retain only that corresponding row of EigenVectors, you keep almost the whole of the data in 1D (simply put, you have compressed the data, but your 2D pattern is still available in the new 1D data).
How can we extract the final data?
To extract the final data, you multiply the eigenvector(s) by the original data. For example, to reduce my data to 1D I can use the code below:

Mat final=eigenvectors.row(0)*data.t(); //first_row_in_eigenvectors * transpose(data)
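Strictly speaking, PCA projects the mean-subtracted data; a sketch of the same projection using the PCA object's own project method, which handles the mean subtraction internally:

// Project all samples onto the retained components; with 2 components the
// result is a 10x2 matrix of (x, y) coordinates in PCA space.
Mat projected = pca.project(data);
// projected.col(0) and projected.col(1) are the coordinates you can hand
// to a plotting widget such as QwtPlotCurve.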
In your example, if you want to reduce 3D to 2D, set the number of components to retain to 2; if you want to convert to 1D, set this argument to 1, as below:
1D
int numOfComponents= 1;
PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, numOfComponents);
2D
int numOfComponents= 2;
PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, numOfComponents);
I have a 3 channel Mat image, type is CV_8UC3.
I want to compare, in a loop, the intensity value of each pixel with that of its neighbours, and then set 1 if the neighbour is greater and 0 otherwise.
I can get the intensity calling Img.at<Vec3b>(x,y).
But my question is: how can I compare two Vec3b?
Should I compare the pixel values for each channel (B, G and R, i.e. Vec3b[0], Vec3b[1] and Vec3b[2]) and then merge the three per-channel results into a single Mat object?
Me again :)
If you want to compare (greater or less) two RGB values you need to project the 3-dimensional RGB space onto a plane or axis.
Of course, there are many ways to do this, but an easy one is to use the HSV color space. The hue (H), however, is not suitable as a linear ordering function because it is circular (i.e. the value 1.0 is identical to 0.0, so you cannot decide whether 0.5 > 0.0 or 0.5 < 0.0). The saturation (S) and the value (V), on the other hand, are suitable projection functions for your purpose:
If you want to have colored pixels "larger" than monochrome pixels, you will prefer S.
If you want to have lighter pixels larger than darker pixels, you will probably prefer V.
Also any combination of S and V would be a valid projection function, e.g. S+V.
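A minimal sketch of this idea, comparing two BGR pixels by their V channel (wrapping each pixel in a 1x1 Mat is just one convenient way to reuse cv::cvtColor):

bool lessByValue(const cv::Vec3b& a, const cv::Vec3b& b)
{
    // Wrap each BGR pixel in a 1x1 Mat so cv::cvtColor can convert it.
    cv::Mat pa(1, 1, CV_8UC3, cv::Scalar(a[0], a[1], a[2]));
    cv::Mat pb(1, 1, CV_8UC3, cv::Scalar(b[0], b[1], b[2]));
    cv::Mat ha, hb;
    cv::cvtColor(pa, ha, CV_BGR2HSV);
    cv::cvtColor(pb, hb, CV_BGR2HSV);
    // Channel 2 is V (brightness); use channel 1 instead to compare by S.
    return ha.at<cv::Vec3b>(0, 0)[2] < hb.at<cv::Vec3b>(0, 0)[2];
}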
As far as I understand, you want a measure of the distance/similarity between two Vec3b pixels. This reduces to the general problem of finding the distance between two vectors in an n-dimensional space.
One of the most common measures (and I think what you're asking for) is the Euclidean distance.
If you are using OpenCV, you can simply use:
cv::Vec3b a(1, 1, 1);
cv::Vec3b b(5, 5, 5);
double dist = cv::norm(a, b, CV_L2);
You can refer to the cv::norm documentation for more about its options.
Edit: If you are doing this to measure color similarity, it's recommended to use the Lab color space, as Euclidean distance in Lab space is known to be a good approximation of human color perception.
Edit 2: I see what you mean. For that, you can compute the magnitude of each vector and compare the magnitudes, something like this:
double a_magnitude = cv::norm(a, CV_L2);
double b_magnitude = cv::norm(b, CV_L2);
if(a_magnitude > b_magnitude)
// do something
else
// do something else.
Does anyone know a good way to extract blocks from an Eigen::VectorXf that can be interpreted as a specific Eigen::MatrixXf, without copying data? (The vector should contain several flattened matrices.)
e.g. something like that (pseudocode):
VectorXd W = VectorXd::Zero(8);
// Use data from W and create a matrix view from first four elements
Block<2,2> A = W.blockFromIndex(0, 2, 2);
// Use data from W and create a matrix view from last four elements
Block<2,2> B = W.blockFromIndex(4, 2, 2);
// Should also change data in W
A(0,0) = 1.0;
B(0,0) = 1.0;
The purpose is simply to have several representations that point to the same data in memory.
This can be done in Python/NumPy by extracting submatrix views and reshaping them:

A = numpy.reshape(W[0:0 + 2 * 2], (2,2))

I don't know whether Eigen supports reshape methods for Eigen::Block.
I guess Eigen::Map is very similar, except that it expects plain C arrays / raw memory.
(Link: Eigen::Map).
Chris
If you want to reinterpret a subvector as a matrix then yes, you have to use Map:
Map<Matrix2d> A(W.data());          // view of the first 4 elements
Map<Matrix2d> B(W.tail(4).data());  // view of the last 4 elements
Map<MatrixXd> C(W.data()+6, 2, 2);  // view of elements 6..9, with the sizes
                                    // given at runtime (needs W.size() >= 10)
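A small usage sketch under the same assumptions; writes through the maps are visible in W, since no data is copied:

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    VectorXd W = VectorXd::Zero(8);
    Map<Matrix2d> A(W.data());         // first four elements as a 2x2 matrix
    Map<Matrix2d> B(W.tail(4).data()); // last four elements as a 2x2 matrix
    A(0,0) = 1.0;                      // writes element 0 of W
    B(0,0) = 1.0;                      // writes element 4 of W
    std::cout << W.transpose() << std::endl; // prints: 1 0 0 0 1 0 0 0
}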
I have a vector of 2-dimensional points in OpenCV:
std::vector<cv::Point2f> points;
I would like to calculate the mean values for x and y coordinates in points. Something like:
cv::Point2f mean_point; //will contain mean values for x and y coordinates
mean_point = some_function(points);
This would be simple in Matlab, but I'm not sure whether I can use some high-level OpenCV function to accomplish the same. Any suggestions?
InputArray does a good job here. You can simply call:

cv::Mat mean_;
cv::reduce(points, mean_, 1, CV_REDUCE_AVG); // 1 = reduce to a single column, i.e. average over all points
// convert from Mat to Point - there may be an even simpler conversion,
// but I do not know about it.
cv::Point2f mean(mean_.at<float>(0,0), mean_.at<float>(0,1));
Details:
In newer OpenCV versions, the InputArray data type was introduced. This way, one can pass either matrices (cv::Mat) or vectors as parameters to an OpenCV function. A vector<Vec3f> will be interpreted as a float matrix with three channels, one row, and the number of columns equal to the vector size. Because no data is copied, this transparent conversion is very fast.
The advantage is that you can work with whatever data type fits better in your app, while you can still use OpenCV functions to ease mathematical operations on it.
Since OpenCV's Point_ already defines operator+, this should be fairly simple. First we sum the values:
cv::Point2f zero(0.0f, 0.0f);
cv::Point2f sum = std::accumulate(points.begin(), points.end(), zero); // std::accumulate is in <numeric>
Then we divide to get the average:
Point2f mean_point(sum.x / points.size(), sum.y / points.size());
...or we could use Point_'s operator*:
Point2f mean_point(sum * (1.0f / points.size()));
Unfortunately, at least as far as I can see, Point_ doesn't define operator /, so we need to multiply by the inverse instead of dividing by the size.
You can use the STL's std::accumulate (from <numeric>) as follows:

cv::Point2f sum = std::accumulate(
    points.begin(), points.end(),   // Run from begin to end
    cv::Point2f(0.0f, 0.0f),        // Initialize with a zero point
    std::plus<cv::Point2f>()        // Use addition for each point (the default)
);
cv::Point2f mean = sum / (float)points.size(); // Divide by the count to get the mean
Add them all up and divide by the total number of points.
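For completeness, a minimal loop-based sketch of that approach (assuming points is non-empty):

// Sum all points, then divide each coordinate by the point count.
cv::Point2f mean_point(0.0f, 0.0f);
for (size_t i = 0; i < points.size(); ++i)
    mean_point += points[i];
mean_point.x /= points.size();
mean_point.y /= points.size();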