Multiply std::vector<cv::Point3f> by a cv::Mat? - c++

As the title says, I have a std::vector of cv::Point3f and a transformation matrix. I need to multiply each point in the vector by the inverse of the Mat.
My Mat: (T is the resulting transformation)
cv::Mat R(3,3,rvec.type());
cv::Rodrigues(rvec, R); // R is 3x3
cv::Mat T(4, 4, R.type()); // T is 4x4
T(cv::Range(0, 3), cv::Range(0, 3)) = R * 1; // copies R into T
T(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1; // copies tvec into T
float *p = T.ptr<float>(3);
p[0] = p[1] = p[2] = 0; p[3] = 1;
My vector:
std::vector<cv::Point3f> objectPoints;
I have tried:
cv::Mat V = T.inv() * cv::Mat(objectPoints, false);
V.copyTo(cv::Mat(objectPoints, false));
(Assertion failed, type error)
for (int i = 0; i < objectPoints.size(); i++)
{
    cv::Mat dst = cv::Mat(objectPoints[i], false);
    dst = -T*dst; //USE MATRIX ALGEBRA
    // cv::Point3f tmp3 = cv::Point3f(dst(0, 0), dst(1, 0), dst(2, 0));
}
(Assertion failed)
std::vector<cv::Point3f> p3d;
perspectiveTransform(objectPoints, p3d, -T);
(runs, but the values are very incorrect)
cv::transform(objectPoints, p3d, -T);
(Assertion error)
What is the correct way (if there is a way!) to do this?
Thank you.

As Rick M. pointed out, you're trying to multiply a 4x4 matrix with a length-3 point. To perform a transformation with just one matrix multiplication (i.e. with the 4x4 combined R-T matrix), you first have to represent the point in homogeneous coordinates, which essentially just involves tacking on a 1 as the 4th element of your point; after the transformation, you divide the new point by the 4th element to maintain its value as a 1. Here's a nice source on 3D-3D transformations, with homogeneous coordinates discussed on slide 14.
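In equation form, this is a quick sketch of what the code below computes for each point:
[x', y', z', w']^T = T^(-1) * [x, y, z, 1]^T, and then p_new = (x'/w', y'/w', z'/w').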
Since OpenCV doesn't have a Point4f class, you'll have to add this 1 when you're creating the Mat form of the point. This is untested but might work:
std::vector<cv::Point3f> dstPoint;
for (int i = 0; i < objectPoints.size(); i++) {
    // Convert Point3f to a 4x1 Mat in homogeneous coordinates, with 1 as the 4th element
    cv::Point3f pt = objectPoints[i];
    cv::Mat ptMat = (cv::Mat_<float>(4, 1) << pt.x, pt.y, pt.z, 1);
    // Perform matrix multiplication and store as Mat_ for easy element access
    cv::Mat_<float> dstMat(T.inv() * ptMat);
    // Divide the first three resulting elements by the 4th (homogenizing
    // the point) and store as Point3f
    float scale = dstMat(3, 0);
    cv::Point3f dst(dstMat(0, 0) / scale, dstMat(1, 0) / scale, dstMat(2, 0) / scale);
    dstPoint.push_back(dst);
}
Would test, but I'm at work and don't have OpenCV on this computer.
UPDATE:
When copying to T, try this instead:
cv::Mat T(4, 4, cv::DataType<float>::type);
cv::Mat rot = T(cv::Range(0, 3), cv::Range(0, 3));
cv::Mat trans = T(cv::Range(0, 3), cv::Range(3, 4));
R.copyTo(rot);
tvec.copyTo(trans);

Based on the answer by DCSmith, I have it working. I had to make this small change:
cv::Mat T(4, 4, cv::DataType<float>::type);
R.copyTo(T(cv::Rect(0, 0, 3, 3)));
tvec.copyTo(T(cv::Rect(3, 0, 1, 3)));
To make the entire function look like:
std::vector<cv::Point3f> p3d;
cv::Mat R(3,3, cv::DataType<float>::type);
cv::Rodrigues(rvec, R); // R is 3x3
cv::Mat T(4, 4, cv::DataType<float>::type);
R.copyTo(T(cv::Rect(0, 0, 3, 3)));
tvec.copyTo(T(cv::Rect(3, 0, 1, 3)));
float *p = T.ptr<float>(3);
p[0] = p[1] = p[2] = 0; p[3] = 1;
// Invert T once, outside the loop
cv::Mat Tinv = T.inv();
for (int i = 0; i < objectPoints.size(); i++) {
    // Convert Point3f to a 4x1 Mat in homogeneous coordinates, with 1 as the 4th element
    cv::Point3f pt = objectPoints[i];
    cv::Mat ptMat = (cv::Mat_<float>(4, 1) << pt.x, pt.y, pt.z, 1);
    // Perform matrix multiplication and store as Mat_ for easy element access
    cv::Mat_<float> dstMat = Tinv * ptMat;
    // Divide the first three resulting elements by the 4th (homogenizing
    // the point) and store as Point3f
    float scale = dstMat(3, 0);
    cv::Point3f dst(dstMat(0, 0) / scale, dstMat(1, 0) / scale, dstMat(2, 0) / scale);
    p3d.push_back(dst);
}
Thank you for your help!
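For reference, cv::perspectiveTransform can apply the same 4x4 homogeneous transform to the whole vector in one call (it accepts a 4x4 matrix when the input points are 3-channel); the earlier attempt with it failed because it passed -T, the element-wise negation, rather than the inverse. A minimal sketch:
std::vector<cv::Point3f> p3d;
// Applies T.inv() to each point in homogeneous coordinates and divides by the 4th component
cv::perspectiveTransform(objectPoints, p3d, T.inv());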

Related

Opencv 3D point to 2D image point projection

I'm converting LiDAR points to a camera image.
The LiDAR points and also the camera image are from a simulator.
For simplicity I placed them at the exact same location, facing the same direction, without any roll, pitch, or yaw (so the camera coordinate system is the same as the LiDAR coordinate system).
If I understood correctly, I can then just use zero-filled t_vec/r_vec/d_vec (there is also no distortion in the camera image).
The image is 785x785.
// added for debugging purpose
std::vector<cv::Point3d> points_lidar {
    cv::Point3d {0, 0, 4.5},
    cv::Point3d {1.88, -0.42, 4.50},
    cv::Point3d {1.85, -0.42, 4.49},
    cv::Point3d {1.84, -0.42, 4.49},
    cv::Point3d {1.83, -0.42, 4.51},
    cv::Point3d {1.82, -0.42, 4.52},
    cv::Point3d {1.81, -0.41, 4.52},
};
cv::Mat d_vec = cv::Mat::zeros(4, 1, cv::DataType<double>::type);
cv::Mat r_vec = cv::Mat::zeros(3, 1, cv::DataType<double>::type);
cv::Mat t_vec = cv::Mat::zeros(3, 1, cv::DataType<double>::type);
double camera_mat[3][3] = {
    {785, 0, 0},
    {0, 785, 0},
    {0, 0, 1}
};
cv::Mat camera (3, 3, cv::DataType<double>::type, camera_mat);
std::vector<cv::Point2i> points_camera {};
cv::projectPoints(points_lidar, r_vec, t_vec, camera, d_vec, points_camera);
for (const auto& p : points_camera) {
    cv::Point2i pp;
    pp.x = (int)(p.x + 785/2);
    pp.y = (int)((1 - p.y) + 785/2);
    cv::circle(image->image, pp, 5, cv::Scalar(0, 0, 255), -1);
}
Unfortunately the projected points don't come close to matching the 3D points and end up too far to the right / bottom of the image.
Does anyone see an issue?
Edit:
After the comment from @ema I tested a really simple solution without OpenCV. It actually produces the correct image pixels, but it is far slower (~5ms compared to ~1ms with OpenCV).
for (const auto& p_lidar : points_lidar) {
    cv::Point2d p {p_lidar.x / p_lidar.z, p_lidar.y / p_lidar.z};
    p.x = (p.x + cam.canvas_width / 2) / 2;
    p.y = (p.y + cam.canvas_height / 2) / 2;
    cv::Point2i p_raster;
    p_raster.x = std::floor(p.x * cam.image_width);
    p_raster.y = std::floor((1 - p.y) * cam.image_height);
    cv::circle(image->image, p_raster, 3, cv::Scalar(0, 0, 255), -1);
}
Since the pipeline has about 100ms until the next processing iteration begins, I would still prefer to use OpenCV for its optimized calculations.
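One thing worth double-checking (a sketch only, not verified against the simulator): the camera matrix above has no principal point, and cv::projectPoints returns floating-point pixel coordinates rather than cv::Point2i. With the principal point set to the image centre, most of the manual offset bookkeeping after the call goes away. r_vec, t_vec, d_vec and points_lidar are the same as in the question; the focal length of 785 is just the value used above, for illustration:
double f = 785.0, cx = 785.0 / 2.0, cy = 785.0 / 2.0;   // assumed intrinsics, illustrative only
cv::Mat camera = (cv::Mat_<double>(3, 3) << f, 0, cx,
                                            0, f, cy,
                                            0, 0, 1);
std::vector<cv::Point2d> points_camera;
cv::projectPoints(points_lidar, r_vec, t_vec, camera, d_vec, points_camera);
for (const auto& p : points_camera) {
    // Depending on the simulator's axis convention, a vertical flip (y -> height - y) may still be needed
    cv::circle(image->image, cv::Point2i((int)p.x, (int)p.y), 5, cv::Scalar(0, 0, 255), -1);
}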

Converting vector<Point3d> to a Mat of size (n x 3) and vice versa

I have point clouds in the form of a vector of Point3d (std::vector<cv::Point3d>). If I use the conversion provided by OpenCV, like
cv::Mat tmpMat = cv::Mat(pts); // Here pts is vector<cv::Point3d>
it gets converted into a matrix with 3 channels. I want a single-channel matrix with dimensions n x 3 (where n is the number of elements in the vector). Is there a direct method to convert a vector of Point3d into an OpenCV Mat of size n x 3?
Now I'm doing:
cv::Mat1f tmpMat = cv::Mat::zeros((int)pts.size(), 3, CV_32FC1);
for (int i = 0; i < pts.size(); ++i)
{
    tmpMat(i, 0) = pts[i].x;
    tmpMat(i, 1) = pts[i].y;
    tmpMat(i, 2) = pts[i].z;
}
And from Mat back to vector of Point3d:
vector<cv::Point3d> pts;
for (int i = 0; i < tmpMat.rows; ++i)
{
    pts.push_back(cv::Point3d(tmpMat(i, 0), tmpMat(i, 1), tmpMat(i, 2)));
}
I will be doing this repeatedly. Is there any faster method?
Found the method to convert a 3-channel Mat into a single-channel Mat of size n x 3 at
http://docs.opencv.org/2.4/modules/core/doc/basic_structures.html#mat-reshape
cv::Mat tmpMat = cv::Mat(pts).reshape(1);
To convert from a Mat of size n x 3 back to the vector:
vector<cv::Point3d> pts;
tmpMat.reshape(3, tmpMat.rows).copyTo(pts);
The size of the vector will be equal to the number of rows of the Mat.
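Putting the two directions together, a minimal sketch (illustrative variable names); note that cv::Mat(pts) and reshape() only create new Mat headers over the vector's memory, so the vector-to-Mat direction does not copy any data (keep pts alive while tmpMat is in use):
std::vector<cv::Point3d> pts = { {1, 2, 3}, {4, 5, 6} };

// vector<Point3d> -> n x 3 single-channel Mat (header only, no copy)
cv::Mat tmpMat = cv::Mat(pts).reshape(1);           // CV_64FC1, size pts.size() x 3

// n x 3 single-channel Mat -> vector<Point3d> (this direction copies)
std::vector<cv::Point3d> ptsBack;
tmpMat.reshape(3, tmpMat.rows).copyTo(ptsBack);     // ptsBack.size() == tmpMat.rows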

harris corner points coordinates

I am new to OpenCV, so could you help me find the coordinates of the points detected by Harris with the following code?
The source image is img.
I want to store the coordinates of the corner points in the matrix S.
Mat S;
dst = Mat::zeros( img.size(), CV_32FC1 );
cornerHarris( img, dst, 7, 5, 0.0001, BORDER_DEFAULT );
// Normalizing
normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
convertScaleAbs( dst_norm, dst_norm_scaled );
for( int j = 0; j < dst_norm.rows; j++ ) {
    for( int i = 0; i < dst_norm.cols; i++ ) {
        if( (int) dst_norm.at<float>(j,i) > thresh ) {
            S.at<int>(j, i) = (int) dst_norm.at<int>(j, i);
        }
    }
}
You can store the points coordinates in a Nx2 matrix, where the first column is the x coordinate, and second column is the y coordinate.
You can declare S as an empty CV_32SC1 matrix like:
Mat S(0, 2, CV_32SC1);
(or you can even leave Mat S;, since the type is determined by the first push_back).
Then you can append the coordinates. Inside the if statement, add:
// Create a matrix for the point
Mat pt(1,2,CV_32SC1);
pt.at<int>(0, 0) = i;
pt.at<int>(0, 1) = j;
// Add the point to S
S.push_back(pt);
Note that it can be more straightforward to use std::vector<Point> to store points. In this case, you can declare Svec as:
std::vector<cv::Point> Svec;
and in your if statement you'll have:
Svec.push_back(Point(i,j));
or
Svec.emplace_back(i,j);
If needed, you can convert the vector<Point> to a Mat like:
Mat Z(Svec); // Creates a 2 channels matrix, Nx1
Z = Z.reshape(1); // Creates a 1 channel matrix, Nx2
To be clear, i and j in the snippets above are just your loop counters, so each appended row stores the (x, y) = (i, j) coordinates of a detected corner point.
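Putting the pieces together, a minimal sketch of the whole detection loop using std::vector<cv::Point> (untested; it assumes img is a single-channel 8-bit image and thresh is the integer threshold from the question):
cv::Mat dst, dst_norm;
cv::cornerHarris(img, dst, 7, 5, 0.0001, cv::BORDER_DEFAULT);
cv::normalize(dst, dst_norm, 0, 255, cv::NORM_MINMAX, CV_32FC1, cv::Mat());

std::vector<cv::Point> Svec;
for (int j = 0; j < dst_norm.rows; j++) {
    for (int i = 0; i < dst_norm.cols; i++) {
        if ((int)dst_norm.at<float>(j, i) > thresh) {
            Svec.emplace_back(i, j);   // x = column index i, y = row index j
        }
    }
}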

reprojectImageTo3D() in OpenCV

I've been trying to compute real world coordinates of points from a disparity map using the reprojectImageTo3D() function provided by OpenCV, but the output seems to be incorrect.
I have the calibration parameters, and compute the Q matrix using
stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs, frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);
I believe this first step is correct, since the stereo frames are being rectified properly, and the distortion removal I'm performing also seems all right. The disparity map is being computed with OpenCV's block matching algorithm, and it looks good too.
The 3D points are being calculated as follows:
cv::Mat XYZ(disparity8U.size(),CV_32FC3);
reprojectImageTo3D(disparity8U, XYZ, Q, false, CV_32F);
But for some reason they form some sort of cone, and are not even close to what I'd expect, considering the disparity map. I found out that other people had a similar problem with this function, and I was wondering if someone has the solution.
Thanks in advance!
[EDIT]
stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs,frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);
initUndistortRectifyMap(left_cam_matrix, left_dist_coeffs, R1, P1, frame_size,CV_32FC1, left_undist_rect_map_x, left_undist_rect_map_y);
initUndistortRectifyMap(right_cam_matrix, right_dist_coeffs, R2, P2, frame_size, CV_32FC1, right_undist_rect_map_x, right_undist_rect_map_y);
cv::remap(left_frame, left_undist_rect, left_undist_rect_map_x, left_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::remap(right_frame, right_undist_rect, right_undist_rect_map_x, right_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::Mat disparity32F = Mat( left_undist_rect.rows, left_undist_rect.cols, CV_32F );
StereoBM sbm(StereoBM::BASIC_PRESET,80,5);
sbm.state->preFilterSize = 15;
sbm.state->preFilterCap = 20;
sbm.state->SADWindowSize = 11;
sbm.state->minDisparity = 0;
sbm.state->numberOfDisparities = 80;
sbm.state->textureThreshold = 0;
sbm.state->uniquenessRatio = 8;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 0;
// Compute disparity
sbm(left_undist_rect, right_undist_rect, disparity32F, CV_32F );
// Compute world coordinates from the disparity image
cv::Mat XYZ(disparity32F.size(),CV_32FC3);
reprojectImageTo3D(disparity32F, XYZ, Q, false, CV_32F);
print_3D_points(disparity32F, XYZ);
[EDIT]
Adding the code used to compute 3D coords from disparity:
cv::Vec3f *StereoFrame::compute_3D_world_coordinates(int row, int col,
        shared_ptr<StereoParameters> stereo_params_sptr){

    cv::Mat Q_32F;
    stereo_params_sptr->Q_sptr->convertTo(Q_32F, CV_32F);
    cv::Mat_<float> vec(4, 1);
    vec(0) = col;
    vec(1) = row;
    vec(2) = this->disparity_sptr->at<float>(row, col);
    // Discard points with 0 disparity
    if(vec(2) == 0) return NULL;
    vec(3) = 1;
    vec = Q_32F * vec;
    vec /= vec(3);
    // Discard points that are too far from the camera, and thus are highly
    // unreliable
    if(abs(vec(0)) > 10 || abs(vec(1)) > 10 || abs(vec(2)) > 10) return NULL;

    cv::Vec3f *point3f = new cv::Vec3f();
    (*point3f)[0] = vec(0);
    (*point3f)[1] = vec(1);
    (*point3f)[2] = vec(2);
    return point3f;
}
Your code seems fine to me. It could be a bug in reprojectImageTo3D. Try replacing it with the following code, which plays the same role:
cv::Mat_<cv::Vec3f> XYZ(disparity32F.rows,disparity32F.cols); // Output point cloud
cv::Mat_<float> vec_tmp(4,1);
for(int y = 0; y < disparity32F.rows; ++y) {
    for(int x = 0; x < disparity32F.cols; ++x) {
        vec_tmp(0) = x; vec_tmp(1) = y; vec_tmp(2) = disparity32F.at<float>(y, x); vec_tmp(3) = 1;
        // Q must have the same depth as vec_tmp (CV_32F here); convert it with Q.convertTo(Q, CV_32F) if necessary
        vec_tmp = Q * vec_tmp;
        vec_tmp /= vec_tmp(3);
        cv::Vec3f &point = XYZ.at<cv::Vec3f>(y, x);
        point[0] = vec_tmp(0);
        point[1] = vec_tmp(1);
        point[2] = vec_tmp(2);
    }
}
I have never used reprojectImageTo3D; however, I am successfully using code similar to the snippet above.
[Initial answer]
As explained in the documentation for StereoBM, if you request a CV_16S disparity map, you have to divide each disparity value by 16 before using it.
Hence, you should convert the disparity map as follows before using it:
imgDisparity16S.convertTo( imgDisparity32F, CV_32F, 1./16);
You can also directly request a CV_32F disparity map from the StereoBM structure, in which case you directly get the true disparities.
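For example, a minimal sketch of both options (untested; imgDisparity16S stands for the CV_16S output of StereoBM, and Q is the matrix from stereoRectify):
// Option 1: convert a CV_16S disparity map to true (float) disparities
cv::Mat imgDisparity32F;
imgDisparity16S.convertTo(imgDisparity32F, CV_32F, 1.0 / 16.0);

// Option 2: ask StereoBM for CV_32F output directly
// sbm(left_undist_rect, right_undist_rect, imgDisparity32F, CV_32F);

// Reproject; handleMissingValues=true maps invalid disparities to a very large Z
cv::Mat XYZ;
cv::reprojectImageTo3D(imgDisparity32F, XYZ, Q, true, CV_32F);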

fourier transform gaussian filter error

I'm new to OpenCV and I'm trying to filter an image with a Gaussian filter in the frequency domain, but I get a runtime error:
"assertion failed (type == srcB.type() && srcA.size() == srcB.size()) in cv::mulSpectrum"
I know it is caused by the type of the Mat returned by my filter; the types don't match and I don't know how to make them match.
Here is the filter function (my guess is that the Mat returned from this function has the wrong type):
cv::Mat createGaussianHighPassFilter(cv::Size size, double cutoffInPixels){
    Mat ghpf(size, CV_64F);
    cv::Point center(size.width / 2, size.height / 2);
    for(int u = 0; u < ghpf.rows; u++)
    {
        for(int v = 0; v < ghpf.cols; v++)
        {
            ghpf.at<double>(u, v) = gaussianCoeff(u - center.x, v - center.y, cutoffInPixels); // kernel for the 128x128 Gaussian filter
        }
    }
    return ghpf;
}
and this is the main function:
Mat mask = createGaussianHighPassFilter(complexI.size(),16);
shift(mask);
Mat AX[] = {Mat::zeros(complexI.size(), CV_32F), Mat::zeros(complexI.size(), CV_32F)};
Mat kernel_spec;
AX[0] = mask; // real part
AX[1] = mask; // imaginary part
merge(AX, 2, kernel_spec);
cout<<complexI.type()<<endl<<kernel_spec.type(); //the result is 13 and 14, the type doesn't match
mulSpectrums(complexI, kernel_spec, complexI, DFT_ROWS); // only DFT_ROWS accepted
updateMag(complexI); // show spectrum
updateResult(complexI); // do inverse transform
Well of course they don't match. kernel_spec ends up as CV_64F, because the mask returned by createGaussianHighPassFilter is CV_64F and AX[0] = mask simply re-points the array entries at it (the CV_32F zeros are discarded), while complexI is CV_32F: type 13 is CV_32FC2 and type 14 is CV_64FC2. Do a Mat::convertTo() on the mask to CV_32F before merging and it should work.
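Concretely, a minimal sketch of that fix, reusing the variables from the question (untested):
// Convert the CV_64F mask to CV_32F so the merged spectrum matches complexI (CV_32FC2)
Mat mask32F;
mask.convertTo(mask32F, CV_32F);

Mat AX[] = { mask32F, mask32F };   // real and imaginary parts
Mat kernel_spec;
merge(AX, 2, kernel_spec);         // kernel_spec is now CV_32FC2

mulSpectrums(complexI, kernel_spec, complexI, DFT_ROWS);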
HTH