I've got an affine transform matrix in OpenCV from the KeypointBasedMotionEstimator class.
It comes in a form like:
[1.0008478, -0.0017408683, -10.667297;
0.0011812132, 1.0009096, -3.3626099;
0, 0, 1]
I would now like to apply the transform to a vector<Point2f>, so that each point is transformed just as it would be inside the image.
OpenCV does not seem to allow transforming points only; the function:
void cv::warpAffine(InputArray src, OutputArray dst, InputArray M, Size dsize,
                    int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT,
                    const Scalar& borderValue = Scalar())
Only seems to take images as inputs and outputs.
Is there a way I can apply an affine transform to single points in OpenCV?
You can use
void cv::perspectiveTransform(InputArray src, OutputArray dst, InputArray m)
e.g.
cv::Mat yourAffineMatrix(3,3,CV_64FC1);
[...] // fill your transformation matrix
std::vector<cv::Point2f> yourPoints;
yourPoints.push_back(cv::Point2f(4,4));
yourPoints.push_back(cv::Point2f(0,0));
std::vector<cv::Point2f> transformedPoints;
cv::perspectiveTransform(yourPoints, transformedPoints, yourAffineMatrix);
I'm not sure about the Point data type, but the transformation matrix must be of double type, e.g. CV_64FC1.
See http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#perspectivetransform as well.
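If you prefer to stay affine-only, cv::transform with the top 2x3 part of the matrix should also work; here is a minimal sketch, assuming the 3x3 matrix from the question (values copied, not verified):
cv::Mat M = (cv::Mat_<double>(3, 3) << 1.0008478, -0.0017408683, -10.667297,
                                       0.0011812132, 1.0009096, -3.3626099,
                                       0.0, 0.0, 1.0);
std::vector<cv::Point2f> yourPoints;
yourPoints.push_back(cv::Point2f(4, 4));
yourPoints.push_back(cv::Point2f(0, 0));
std::vector<cv::Point2f> transformedPoints;
// with a 2x3 matrix, cv::transform applies dst(i) = M23 * [x, y, 1]^T to each point
cv::transform(yourPoints, transformedPoints, M.rowRange(0, 2));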
It's a bit clumsy, but you can matrix-multiply your points manually:
// the transformation matrix
Mat_<float> M(3,3);
M << 1.0008478, -0.0017408683, -10.667297,
0.0011812132, 1.0009096, -3.3626099,
0, 0, 1;
// a point
Point2f p(4,4);
// make a Mat for multiplication,
// must have same type as transformation mat !
Mat_<float> pm(3,1);
pm << p.x,p.y,1.0;
// now , just multiply:
Mat_<float> pr = M * pm;
// retrieve point:
Point2f pt(pr(0), pr(1));
cerr << pt << endl;
[-6.67087, 0.645753]
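To handle the whole vector<Point2f> from the question, the same multiplication can be run in a loop; a minimal sketch, reusing the Mat_<float> M built above:
// sketch: apply M to every point in a vector (assumes M from above)
vector<Point2f> points;
points.push_back(Point2f(4, 4));
points.push_back(Point2f(0, 0));
vector<Point2f> transformed;
for (size_t i = 0; i < points.size(); i++)
{
    Mat_<float> pm(3, 1);
    pm << points[i].x, points[i].y, 1.0f;
    Mat_<float> pr = M * pm;
    transformed.push_back(Point2f(pr(0), pr(1)));
}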
I'm trying to apply a homomorphic filter in my video player program.
While writing code using UMat, I found something incompatible with my existing Mat-based code.
In the Mat code:
cv::Mat temp;
someImage.convertTo(temp, CV_32FC1);
temp = temp + 0.01;
What does the line temp = temp + 0.01 mean?
And how can I use this operation with a UMat?
OpenCV's operator+(const Mat& a, const Scalar& s) adds a scalar value to each element of the matrix. It's practically the same as calling void add(InputArray src1, InputArray src2, OutputArray dst, InputArray mask=noArray(), int dtype=-1).
InputArray interface accepts Mats and UMats as well as Scalars, so you can just call
cv::UMat temp(3, 3, CV_32FC1, cv::Scalar(0));
cv::add(temp, 0.01, temp);
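So the Mat snippet from the question could look roughly like this with UMat (a sketch; someImage is the cv::Mat from the question, loaded elsewhere):
cv::UMat temp;
someImage.copyTo(temp);                      // upload the Mat into a UMat
cv::UMat temp32;
temp.convertTo(temp32, CV_32FC1);            // same conversion as in the Mat version
cv::add(temp32, cv::Scalar(0.01), temp32);   // equivalent of temp = temp + 0.01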
I'm trying to implement color conversion from RGB to LMS and back from LMS to RGB, using reshape for the matrix multiplication, following the answer from this question: Fastest way to apply color matrix to RGB image using OpenCV 3.0?
My ori Mat object comes from an image with 3 channels (RGB), and I need to multiply it by a 1-channel matrix (lms); it seems like I have an issue with the matrix type. I've read the reshape docs and questions related to this issue, like Issues multiplying Mat matrices, and I believe I have followed the instructions.
Here's my code: [UPDATED: convert into a flat image]
void test(const Mat &forreshape, Mat &output, Mat &pic, int rows, int cols)
{
    Mat lms(3, 3, CV_32FC3);
    Mat rgb(3, 3, CV_32FC3);
    Mat intolms(rows, cols, CV_32F);
    lms = (Mat_<float>(3, 3) << 1.4671, 0.1843, 0.0030,
                                3.8671, 27.1554, 3.4557,
                                4.1194, 45.5161, 17.884);
    /* switch the order of the matrix according to the BGR order of color on OpenCV */
    Mat transpose = (3, 3, CV_32F, lms).t(); // this will do transpose from matrix lms
    pic = forreshape.reshape(1, rows*cols);
    Mat flatFloatImage;
    pic.convertTo(flatFloatImage, CV_32F);
    rgb = flatFloatImage*transpose;
    output = rgb.reshape(3, cols);
}
I define my Mat objects, and I have converted the input into float using convertTo:
Mat ori = imread("ori.png", CV_LOAD_IMAGE_COLOR);
int rows = ori.rows;
int cols = ori.cols;
Mat forreshape;
ori.convertTo(forreshape, CV_32F);
Mat pic(rows, cols, CV_32FC3);
Mat output(rows, cols, CV_32FC3);
The error is:
OpenCV Error: Assertion failed (type == B.type() && (type == CV_32FC1 || type == CV_64FC1 || type == CV_32FC2 || type == CV_64FC2)) ,
so it's a type issue.
I tried to change all the types to either CV_32FC3 or CV_32FC1, but it doesn't seem to work. Any suggestions?
I believe what you need is to convert your input to a flat image and then multiply:
float lms [] = {1.4671, 0.1843, 0.0030,
3.8671, 27.1554, 3.4557,
4.1194, 45.5161 , 17.884};
Mat lmsMat(3, 3, CV_32F, lms );
Mat flatImage = ori.reshape(1, ori.rows * ori.cols);
Mat flatFloatImage;
flatImage.convertTo(flatFloatImage, CV_32F);
Mat mixedImage = flatFloatImage * lmsMat;
Mat output = mixedImage.reshape(3, ori.rows);
I might have messed up the lms matrix there, but I guess you can take it from here.
Also see 3D matrix multiplication in opencv for RGB color mixing
EDIT:
The problem with the distortion is that you get overflow after the float-to-8U conversion. This should do the trick:
rgb = flatFloatImage*transpose;
rgb.convertTo(pic, CV_32S);
output = pic.reshape(3, rows);
Output: (image omitted)
Also, I'm not sure, but a quick Google search gives me a different matrix for LMS, see here. Also note that OpenCV stores colors in B-G-R order instead of RGB, so change your mixing matrices accordingly.
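Putting the answer and the edit together, a consolidated sketch of the whole pipeline might look like this (the LMS coefficients are copied from the question and not verified, and the 1/255 scaling is my own assumption to keep values in a small range):
Mat ori = imread("ori.png", CV_LOAD_IMAGE_COLOR);            // 8U, B-G-R order
// mixing matrix copied from the question (may need reordering for BGR)
Mat lms = (Mat_<float>(3, 3) << 1.4671, 0.1843, 0.0030,
                                3.8671, 27.1554, 3.4557,
                                4.1194, 45.5161, 17.884);
// flatten to a (rows*cols) x 3 single-channel matrix and convert to float
Mat flat = ori.reshape(1, ori.rows * ori.cols);
Mat flatFloat;
flat.convertTo(flatFloat, CV_32F, 1.0 / 255.0);              // scale to [0,1] to tame overflow
// one pixel per row, multiplied by the transposed mixing matrix
Mat mixed = flatFloat * lms.t();
// back to a rows x cols, 3-channel image and to 8 bit (values are saturated)
Mat result = mixed.reshape(3, ori.rows);
Mat result8u;
result.convertTo(result8u, CV_8U, 255.0);
imwrite("lms.png", result8u);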
I am currently trying to do a kind of self-test of OpenCV's implementation of the function findTransformECC (http://docs.opencv.org/3.0-beta/modules/video/doc/motion_analysis_and_object_tracking.html#findtransformecc).
To do so, I create a warp matrix myself and do the affine transformation with the warpAffine function provided by OpenCV.
After the affine transform, I pass the input of the affine transform and the warped output to findTransformECC. I hoped I would get back the same matrix I used in the affine transform, but sadly it differs a lot; it is a completely different one.
In the code example I put at the end of the post, I got the following matrix for the affine transform:
[0.850332161003909, 0.1778601204261232, 0]
[-0.06752637272097255, 0.3701713812908899, 712.799877929688]
But the calculated Matrix by findTransformECC is:
[1.0151283, -0.0033983635, -5.6531301]
[-0.023056569, 1.038756, -8.7541409]
I also compared the eigenvalues.
For the matrix used in the affine transform:
[127021.1755113913]
[0]
Eigenvalues of the calculated warp matrix:
[2.945044]
[0]
Has anyone had the same experience, or does anyone know what causes this error?
I appreciate everyone's help.
Point2f srcTri[3];
Point2f dstTri[3];
float scale = (float)1 / (float)255;
//Matrix for first affine transform
Mat warp_mat(2, 3, CV_32FC1);
//Matrix for back affine transform
Mat warp_mat2 = Mat::eye(2, 3, CV_32FC1);
Mat src, warp_dst, warp_rotate_dst;
//Get image from the Image class and convert it to 8 bit, which is used by
//findTransformECC
this->DemosaicedDestination.convertTo(src, CV_8UC1, scale, 0);
//container for warp destination
warp_dst = Mat::zeros(src.rows, src.cols, CV_8UC1);
//Points for warp matrix
srcTri[0] = Point2f(0, 0);
srcTri[1] = Point2f(src.cols - 1, 0);
srcTri[2] = Point2f(0, src.rows - 1);
dstTri[0] = Point2f(src.cols*0.0, src.rows*0.33);
dstTri[1] = Point2f(src.cols*0.85, src.rows*0.25);
dstTri[2] = Point2f(src.cols*0.15, src.rows*0.7);
//get the affine transformation warp mat
warp_mat = getAffineTransform(srcTri, dstTri);
//Do the affine transformation
warpAffine(src, warp_dst, warp_mat, warp_dst.size());
//show affine transformation
imshow("src", src);
waitKey(0);
cout << "warp matrix used: " << warp_mat << endl;
namedWindow("warped to ", WINDOW_NORMAL);
imshow("warped to",warp_dst);
TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, 2500, 1e-1);
try {
    this->cc = findTransformECC(warp_dst, src, warp_mat2, MOTION_AFFINE, criteria);
}
catch (Exception& e) {
    const char* err_msg = e.what();
    cout << err_msg;
    return false;
}
//Do the affine transformation back
warpAffine(warp_dst, src, warp_mat2, src.size());
//show the back-warped affine transform
namedWindow("back warped transform", WINDOW_NORMAL);
imshow("back warped transform", src);
waitKey(0);
//show calculated matrix, should be the same as the warp matrix used
cout << "calculated warp matrix 1 " << warp_mat2 << endl;
Inverting the matrix with the corresponding OpenCV function gave a correct solution.
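If "the corresponding OpenCV function" is cv::invertAffineTransform (that is my assumption; the post does not name it), the check could look roughly like this:
// sketch: invert the 2x3 warp estimated by findTransformECC and compare it
// with the matrix that was used for warpAffine
Mat warp_mat2_inv;
invertAffineTransform(warp_mat2, warp_mat2_inv);
cout << "inverted estimate: " << warp_mat2_inv << endl;   // should be close to warp_mat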
For my employer I am comparing the results of an already implemented image rectification method with the results of the corresponding OpenCV implementation. However, an exception is thrown once the OpenCV function is called.
The header of the OpenCV rectification function is
void stereoRectify(InputArray cameraMatrix1, InputArray distCoeffs1,
InputArray cameraMatrix2, InputArray distCoeffs2,
Size imageSize, InputArray R, InputArray T, OutputArray R1,
OutputArray R2, OutputArray P1, OutputArray P2,
OutputArray Q, int flags=CALIB_ZERO_DISPARITY,
double alpha=-1, Size newImageSize=Size(),
Rect* validPixROI1=0, Rect* validPixROI2=0);
As InputArray and OutputArray I used objects of type cv::Mat. Since the calibration of the cameras is already known, I initialized the input matrices manually with the correct values. The matrices have the following sizes, in accordance with the corresponding documentation page:
cv::Mat cameraMatrix1; // 3x3 matrix
cv::Mat distCoeffs1; // 5x1 matrix for five distortion coefficients
cv::Mat cameraMatrix2; // 3x3 matrix
cv::Mat distCoeffs2; // 5x1 matrix
cv::Mat R; // 3x3 matrix, rotation left to right camera
cv::Mat T; // 4x1 matrix, translation left to right proj. center
I initialized the matrices like this:
T = cv::Mat::zeros(4, 1, CV_64F);
T.at<double>(0, 0) = proj_center_right.x - proj_center_left.x;
T.at<double>(1, 0) = proj_center_right.y - proj_center_left.y;
T.at<double>(2, 0) = proj_center_right.z - proj_center_left.z;
For all matrices I used CV_64F as value type.
I printed the contents of the matrices to the console to verify that all values are set correctly (rounded):
cameraMatrix1: | 6654; 0; 1231 | | 0; 6654; 1037 | | 0; 0; 1 |
distCoeffs1: | -4.57e-009; 5.94e-017; 3.68e-008; -3.46e-008; 6.37e-023 |
cameraMatrix2: | 6689; 0; 1249 | | 0; 6689; 991 | | 0; 0; 1 |
distCoeffs2: | -4.72e-009; 2.88e-016; 6.2e-008; -8.74e-008; -8.18e-024 |
R: | 0.87; -0.003; -0.46 | | 0.001; 0.999; -0.003 | | 0.46; 0.002; 0.89 |
T: | 228; 0; 0; 0 |
Everything seems correct to me so far. Further, I initialized the output matrices as identity matrices (using cv::Mat::eye(...)), with the following sizes:
cv::Mat R1; // 3x3 matrix
cv::Mat R2; // 3x3 matrix
cv::Mat P1; // 3x4 matrix
cv::Mat P2; // 3x4 matrix
cv::Mat Q; // 4x4 matrix
Finally the required cv::Size object is set to width 2448 and height 2050 (size of the images acquired by the cameras). Once I pass the parameters to OpenCV as
cv::stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imgSize, R, T, R1, R2, P1, P2, Q);
, the program crashes. The error message on the console states
opencv_core248, void cdecl cv::error(class cv::Exception const & ptr64) +0x152 (invalid frame pointer)
Since all matrices and the cv::Size object are initialized correctly, I do not see what might be wrong. I am thankful for any suggestions.
Your code initially crashed for me in gemm(); changing T to a 3x1 vector seemed to help:
// Mat_<double> used here for easy << initialization
cv::Mat_<double> cameraMatrix1(3,3); // 3x3 matrix
cv::Mat_<double> distCoeffs1(5,1); // 5x1 matrix for five distortion coefficients
cv::Mat_<double> cameraMatrix2(3,3); // 3x3 matrix
cv::Mat_<double> distCoeffs2(5,1); // 5x1 matrix
cv::Mat_<double> R(3,3); // 3x3 matrix, rotation left to right camera
cv::Mat_<double> T(3,1); // * 3 * x1 matrix, translation left to right proj. center
// ^^ that's the main diff to your code, (3,1) instead of (4,1)
cameraMatrix1 << 6654, 0, 1231, 0, 6654, 1037, 0, 0, 1;
cameraMatrix2 << 6689, 0, 1249, 0, 6689, 991, 0, 0, 1;
distCoeffs1 << -4.57e-009, 5.94e-017, 3.68e-008, -3.46e-008, 6.37e-023;
distCoeffs2 << -4.72e-009, 2.88e-016, 6.2e-008, -8.74e-008, -8.18e-024;
R << 0.87, -0.003, -0.46, 0.001, 0.999, -0.003, 0.46, 0.002, 0.89;
T << 228, 0, 0;
cv::Mat R1,R2,P1,P2,Q; // you're safe to leave the OutputArrays empty!
Size imgSize(3000,3000); // wild guess from your camera matrix (not that it matters)
cv::stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imgSize, R, T, R1, R2, P1, P2, Q);
cerr << "Q" << Q << endl;
I'm looking to undistort an image using the distortion coefficients that I've computed for my camera, without changing the camera matrix. This is exactly what undistort() does, but I wanted to draw the output to a larger canvas image.
When I tried this:
Mat drawtransform = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, size, 1.0, size * 2);
undistort(inputimage, undistorted, cameraMatrix, distCoeffs, drawtransform);
It still wrote out the same sized image, but only the top left quarter of the scaled-up-by-two undistorted result. Like the documentation says, undistort writes into a target image of the same size.
It's pretty obvious that I can just copy out and reimplement a slightly tweaked version of undistort(), but I am having some trouble understanding what it is doing. Here's the source:
void cv::undistort( InputArray _src, OutputArray _dst, InputArray _cameraMatrix,
                    InputArray _distCoeffs, InputArray _newCameraMatrix )
{
    Mat src = _src.getMat(), cameraMatrix = _cameraMatrix.getMat();
    Mat distCoeffs = _distCoeffs.getMat(), newCameraMatrix = _newCameraMatrix.getMat();
    _dst.create( src.size(), src.type() );
    Mat dst = _dst.getMat();
    CV_Assert( dst.data != src.data );
    int stripe_size0 = std::min(std::max(1, (1 << 12) / std::max(src.cols, 1)), src.rows);
    Mat map1(stripe_size0, src.cols, CV_16SC2), map2(stripe_size0, src.cols, CV_16UC1);
    Mat_<double> A, Ar, I = Mat_<double>::eye(3,3);
    cameraMatrix.convertTo(A, CV_64F);
    if( distCoeffs.data )
        distCoeffs = Mat_<double>(distCoeffs);
    else
    {
        distCoeffs.create(5, 1, CV_64F);
        distCoeffs = 0.;
    }
    if( newCameraMatrix.data )
        newCameraMatrix.convertTo(Ar, CV_64F);
    else
        A.copyTo(Ar);
    double v0 = Ar(1, 2);
    for( int y = 0; y < src.rows; y += stripe_size0 )
    {
        int stripe_size = std::min( stripe_size0, src.rows - y );
        Ar(1, 2) = v0 - y;
        Mat map1_part = map1.rowRange(0, stripe_size),
            map2_part = map2.rowRange(0, stripe_size),
            dst_part = dst.rowRange(y, y + stripe_size);
        initUndistortRectifyMap( A, distCoeffs, I, Ar, Size(src.cols, stripe_size),
                                 map1_part.type(), map1_part, map2_part );
        remap( src, dst_part, map1_part, map2_part, INTER_LINEAR, BORDER_CONSTANT );
    }
}
About half of the lines here are for sanity checking and initializing input parameters. What I'm confused about is what's going on with map1 and map2. These names are sadly less descriptive than most. I must be missing some explanation, maybe it's tucked away in some introduction page, or under the doc for another function.
map1 is a two channel signed short integer matrix and map2 is an unsigned short integer matrix, both are of dimension (height, max(4096/width, 1)). The question is, why? What will these maps contain? What is the significance and purpose of this striping? What is the significance and purpose of the strange dimension of the stripes?
Use initUndistortRectifyMap to obtain the transformation at the scale you desire, then apply its output (the two matrices you mention) to remap.
The first map is used to transform the x coordinate at each pixel position; the second is used to transform the y coordinate.
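A minimal sketch of that approach for the original question, drawing the undistorted result onto a canvas twice the size (the factor of two and the variable names follow the question and are otherwise assumptions):
// build the maps yourself at the larger output size, then remap into the bigger canvas
Size bigSize(inputimage.cols * 2, inputimage.rows * 2);
Mat newCameraMatrix = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs,
                                                inputimage.size(), 1.0, bigSize);
Mat map1, map2;
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(), newCameraMatrix,
                        bigSize, CV_16SC2, map1, map2);
Mat undistortedBig;
remap(inputimage, undistortedBig, map1, map2, INTER_LINEAR, BORDER_CONSTANT);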
You might want to read the description for the function remap. The map represents the pixel X,Y location in the source image for every pixel in the destination image. Map1_part is every X location in the source, and Map2_part is every Y location in the source.
Without reading into it much, the striping could be a method of speeding up the transformation process; judging from the source above, it also keeps map1 and map2 down to a small number of rows (roughly 4096 pixels per stripe) instead of allocating full-image maps.
EDIT:
Also, if you are looking to just scale your image to a larger dimension, you could simply resize the output image:
double scaleX = 2.0;
double scaleY = 2.0;
cv::Mat undistortedScaled;
cv::resize(undistorted, undistortedScaled, cv::Size(0,0), scaleX, scaleY);