OpenCV: Find original coordinates of a rotated point - c++

I have the following problem. I'm searching for eyes within an image using Haar classifiers. Due to the rotation of the head, I'm trying to find eyes at different angles. For that, I rotate the image by different angles. To rotate the frame, I use this code (written in C++):
Point2i rotCenter;
rotCenter.x = scaledFrame.cols / 2;
rotCenter.y = scaledFrame.rows / 2;
Mat rotationMatrix = getRotationMatrix2D(rotCenter, angle, 1);
warpAffine(scaledFrame, scaledFrame, rotationMatrix, Size(scaledFrame.cols, scaledFrame.rows));
This works fine and I am able to extract two ROI rectangles for the eyes. So I have the top-left coordinates of each ROI as well as their width and height. However, these are the coordinates in the rotated image. I don't know how to backproject this rectangle onto the original frame.
Assuming I have obtained the eye pair ROIs for the unscaled frame (full_image), but still rotated:
eye0_roi and eye1_roi
How can I rotate them back so that they map to their correct positions?
Best regards,
Andre

You can use invertAffineTransform to get the inverse matrix, and use this matrix to rotate a point back:
Mat RotateImg(const Mat& img, double angle, Mat& invertMat)
{
    Point center = Point(img.cols / 2, img.rows / 2);
    double scale = 1;
    Mat warpMat = getRotationMatrix2D(center, angle, scale);
    Mat dst = Mat(img.size(), CV_8U, Scalar(128));
    warpAffine(img, dst, warpMat, img.size(), INTER_LINEAR, BORDER_CONSTANT, Scalar(255, 255, 255));
    // Keep the inverse transform so points can be mapped back to the original image
    invertAffineTransform(warpMat, invertMat);
    return dst;
}

Point RotateBackPoint(const Point& dstPoint, const Mat& invertMat)
{
    // Multiply the 2x3 inverse affine matrix with the point (x, y, 1)
    cv::Point orgPoint;
    orgPoint.x = invertMat.at<double>(0, 0) * dstPoint.x + invertMat.at<double>(0, 1) * dstPoint.y + invertMat.at<double>(0, 2);
    orgPoint.y = invertMat.at<double>(1, 0) * dstPoint.x + invertMat.at<double>(1, 1) * dstPoint.y + invertMat.at<double>(1, 2);
    return orgPoint;
}
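For the original question's ROIs, a minimal sketch of mapping one back (assuming eye0_roi is a cv::Rect found in the rotated frame, and invertMat was filled by RotateImg above): map all four corners, because an axis-aligned rectangle in the rotated image generally becomes a tilted quadrilateral in the original frame, then take its bounding box if you need an upright cv::Rect again.
// Hypothetical usage: map the corners of eye0_roi back to the original frame
std::vector<cv::Point> backCorners = {
    RotateBackPoint(eye0_roi.tl(), invertMat),
    RotateBackPoint(cv::Point(eye0_roi.x + eye0_roi.width, eye0_roi.y), invertMat),
    RotateBackPoint(eye0_roi.br(), invertMat),
    RotateBackPoint(cv::Point(eye0_roi.x, eye0_roi.y + eye0_roi.height), invertMat)
};
cv::Rect eye0_orig = cv::boundingRect(backCorners);  // upright bounding box in the original frame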

Related

Moving cv::RotatedRect in the same position after rotating a cv::Mat without cropping

I'm currently having trouble understanding what's necessary to transform a cv::RotatedRect after rotating an image without cropping, using the following code by Lars Schillingmann in this question.
Here's the code he provided as answer:
#include "opencv2/opencv.hpp"
int main()
{
cv::Mat src = cv::imread("im.png", CV_LOAD_IMAGE_UNCHANGED);
double angle = -45;
// get rotation matrix for rotating the image around its center in pixel coordinates
cv::Point2f center((src.cols-1)/2.0, (src.rows-1)/2.0);
cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
// determine bounding rectangle, center not relevant
cv::Rect2f bbox = cv::RotatedRect(cv::Point2f(), src.size(), angle).boundingRect2f();
// adjust transformation matrix
rot.at<double>(0,2) += bbox.width/2.0 - src.cols/2.0;
rot.at<double>(1,2) += bbox.height/2.0 - src.rows/2.0;
cv::Mat dst;
cv::warpAffine(src, dst, rot, bbox.size());
cv::imwrite("rotated_im.png", dst);
return 0;
}
In my case, I have a cv::RotatedRect which matches a certain position in the src image. This cv::RotatedRect should match the same position after the transformation/rotation is applied to the src mat. Currently, I struggle with doing it the right way.
From what I know, to rotate a cv::RotatedRect, it's only necessary to directly modify the members of the structure, e.g. angle. I'm quite sure that I only have to modify the center, but the new position is always a bit off from the expected location. I initially expected that I only have to add the difference between the bbox and src dimensions to get what I'm looking for, but it turns out not to be the case (including the rotation, of course).
connected_components[i].center.x += ...
connected_components[i].center.y += ...
cv::RotatedRect newRect(connected_components[i].center, connected_components[i].size, connected_components[i].angle - median);
The answer is quite simple: we can reuse the transformation matrix to transform the rectangle's corner points with cv::transform. Sample code is below:
cv::Point2f points[4];
connected_components[i].points(points);
std::vector<cv::Point2f> old_points;
old_points.insert(old_points.begin(), std::begin(points), std::end(points));
std::vector<cv::Point2f> new_points;
cv::transform(old_points, new_points, rotation_matrix);
for (unsigned int j = 0; j < 4; ++j) {
    cv::line(dest, new_points[j], new_points[(j + 1) % 4], cv::Scalar(0, 255, 0));
}
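If you need the result as a cv::RotatedRect again rather than four loose corner points, one option (my addition, not part of the original answer) is to rebuild it from the transformed corners with cv::minAreaRect:
// Recover a RotatedRect from the four transformed corner points
cv::RotatedRect newRect = cv::minAreaRect(new_points);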

triangle mask with opencv

I have this image.
I want to create a triangle mask to get only this zone,
but with the following code I get this result:
Moments mu = moments(red,true);
Point center;
center.x = mu.m10 / mu.m00;
center.y = mu.m01 / mu.m00;
circle(red, center, 2, Scalar(0, 0, 255));
cv::Size sz = red.size();
int imageWidth = sz.width;
int imageHeight = sz.height;
Mat mask3(red.size(), CV_8UC1, Scalar::all(0));
// Create Polygon from vertices
vector<Point> ptmask3(3);
ptmask3.push_back(Point(imageHeight-1, imageWidth-1));
ptmask3.push_back(Point(center.x, center.y));
ptmask3.push_back(Point(0, red.rows - 1));
vector<Point> pt;
approxPolyDP(ptmask3, pt, 1.0, true);
// Fill polygon white
fillConvexPoly(mask3, &pt[0], pt.size(), 255, 8, 0);
// Create new image for result storage
Mat hide3(red.size(), CV_8UC3);
// Cut out ROI and store it in imageDest
red.copyTo(hide3, mask3);
imshow("mask3", hide3);
Updated Version (with the help of Dan Mašek)
Your Triangle is wrong
This is because you're initializing the vector with size 3, then putting another three points into it, for a total of 6 points of which three have default values. Try this instead:
vector<Point> ptmask3;
Also, make sure that the coordinates of the points are correct: cv::Point takes (x, y), but your code passes Point(imageHeight-1, imageWidth-1), which swaps the axes. You'll want to have a point in the bottom left corner, but it doesn't seem like your current triangle has one like that.
Your image is gray
You need to initialize hide3 properly, like this:
cv::Mat hide3(img.size(), CV_8UC3, cv::Scalar(0));
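Putting both fixes together, a corrected version of the masking part might look like this (my consolidation of the fixes above; the exact triangle corners are one plausible reading of the intended zone):
// Triangle: bottom-left corner, centroid, bottom-right corner.
// Note that cv::Point takes (x, y), so width-based values come first.
vector<Point> ptmask3;  // start empty and push exactly three points
ptmask3.push_back(Point(0, red.rows - 1));             // bottom-left
ptmask3.push_back(center);                             // centroid from the moments
ptmask3.push_back(Point(red.cols - 1, red.rows - 1));  // bottom-right
Mat mask3(red.size(), CV_8UC1, Scalar::all(0));
fillConvexPoly(mask3, &ptmask3[0], (int)ptmask3.size(), 255, 8, 0);
Mat hide3(red.size(), CV_8UC3, Scalar(0));             // initialized, so the background stays black
red.copyTo(hide3, mask3);
imshow("mask3", hide3);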

How to use Multi-band Blender in opencv

I want to blend two images using multi-band blending, but I am not clear about the input parameters of this function:
void detail::Blender::prepare(const std::vector<Point>& corners, const std::vector<Size>& sizes)
In my case, I just input two warped images with black gaps, and with masks all white. (Forgive me, I cannot add pictures...)
And I set the two corners to (0,0), because the warped images have already been registered.
But my result is not good enough: there is an obvious seam in the result.
Can someone tell me why? How can I solve this problem?
I'm not sure what you mean when you say "my result is not good enough". It would be better to see that result, but I'll try to guess. The main part of my code, which builds the panorama, looks like this:
void makePanorama(Rect bounding_box, vector<Mat> images, vector<Mat> homographies, vector<vector<Point>> corners) {
    detail::MultiBandBlender blender;
    blender.prepare(bounding_box);
    Mat mask, bigImage, curImage;
    for (int i = 0; i < (int)images.size(); ++i) {
        warpPerspective(images[i], curImage, homographies[i],
                        bounding_box.size(), INTER_LINEAR, BORDER_TRANSPARENT);
        mask = makeMask(curImage.size(), corners[i], homographies[i]);
        blender.feed(curImage.clone(), mask, Point(0, 0));
    }
    blender.blend(bigImage, mask);
    bigImage.convertTo(bigImage, (bigImage.type() / 8) * 8);
    imshow("Result", bigImage);
    waitKey();
}
So: prepare the blender, then loop (warp the image, make the mask for the warped image, and feed the blender). At the end, run the blender and that's all. I met two problems which badly influenced my result. Maybe you have one of them, or both.
The first is the type. My images were of type CV_16SC3, and after blending you need to convert the blended image to an unsigned type, like this:
bigImage.convertTo(bigImage, (bigImage.type() / 8) * 8);
If you don't, the result image will be gray.
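A side note on that expression (my explanation, not from the original answer): OpenCV packs the depth in the low 3 bits of the type code and the channel count in the higher bits, so (bigImage.type() / 8) * 8 zeroes the depth bits, giving depth 0 (CV_8U) while keeping the channel count. For a CV_16SC3 blend result, a more explicit equivalent would be:
// Explicit equivalent: convert the CV_16SC3 blend result to 8-bit, keeping 3 channels
bigImage.convertTo(bigImage, CV_8UC3);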
The second is borders. In the beginning, my function makeMask was computing the non-black area of the warped images. As a result, one could see the borders of the warped images in the blended image. The solution is to make the mask smaller than the non-black warped image area. So my function makeMask looks like this:
Mat makeMask(Size sz, vector<Point2f> imageCorners, Mat homography) {
    Scalar white(255, 255, 255);
    Mat mask = Mat::zeros(sz, CV_8U);
    Point2f innerPoint(0, 0);
    vector<Point2f> transformedCorners(4);
    perspectiveTransform(imageCorners, transformedCorners, homography);
    // Calculate inner point (centroid of the transformed corners)
    for (auto& point : transformedCorners)
        innerPoint += point;
    innerPoint.x /= 4;
    innerPoint.y /= 4;
    // Move each corner towards the inner point by a fixed indent
    // (settings.indent is an external tuning parameter);
    // corners must start as copies of the transformed corners
    vector<Point> corners(transformedCorners.begin(), transformedCorners.end());
    for (int ind = 0; ind < 4; ++ind) {
        Point2f direction = innerPoint - transformedCorners[ind];
        double normOfDirection = norm(direction);
        corners[ind].x += settings.indent * direction.x / normOfDirection;
        corners[ind].y += settings.indent * direction.y / normOfDirection;
    }
    // Draw borders
    Point prevPoint = corners[3];
    for (auto& point : corners) {
        line(mask, prevPoint, point, white);
        prevPoint = point;
    }
    // Fill the area inside the borders with white
    floodFill(mask, Point(innerPoint), white);
    return mask;
}
I took these pieces of code from my real code, so I could possibly have forgotten to specify something. But I hope the idea of how to work with MultiBandBlender is clear.

Get angle from OpenCV Canny edge detector

I want to use OpenCV's Canny edge detector, as outlined in this question. For example:
cv::Canny(image,contours,10,350);
However, I wish to not only get the final thresholded image out, but I also wish to get the detected edge angle at each pixel. Is this possible in OpenCV?
Canny doesn't give you this directly.
However, you can calculate the angle from the Sobel transform, which is used internally in Canny().
For example, assuming image is a single-channel 8-bit input:
cv::Mat contours, dx, dy;
cv::Canny(image, contours, 10, 350);
cv::Sobel(image, dx, CV_64F, 1, 0, 3, 1, 0, cv::BORDER_REPLICATE);
cv::Sobel(image, dy, CV_64F, 0, 1, 3, 1, 0, cv::BORDER_REPLICATE);
// Gradient direction at every edge pixel reported by Canny
cv::Mat angle = cv::Mat::zeros(image.size(), CV_64F);
for (int i = 0; i < contours.rows; ++i)
    for (int j = 0; j < contours.cols; ++j)
        if (contours.at<uchar>(i, j) > 0)
            angle.at<double>(i, j) = std::atan2(dy.at<double>(i, j), dx.at<double>(i, j));
Instead of using a for loop, you can also pass the dx and dy gradients to the phase function, which returns a grayscale image of angle directions; then pass it to the applyColorMap function and mask it with the edges, so the background is black.
Here is the workflow:
Get the angles
Mat angles;
phase(dx, dy, angles, true);
The true argument indicates that the angles are returned in degrees.
Change the range of angles to 0-255 so you can convert to CV_8U without data loss
angles = angles / 360 * 255;
Note that angles is still of type CV_64F, as it comes from the Sobel function.
Convert to CV_8U
angles.convertTo(angles, CV_8U);
Apply color map of your choice
applyColorMap(angles, angles, COLORMAP_HSV);
In this case I chose the HSV colormap. See this for more info: https://www.learnopencv.com/applycolormap-for-pseudocoloring-in-opencv-c-python/
Apply the edges mask so the background is black
Mat colored;
angles.copyTo(colored, contours);
Finally, display the image :D
imshow("Colored angles", colored);
In case your source is a video or webcam, before applying the mask of edges you must additionally clear the colored image, to prevent accumulation across frames:
colored.release();
angles.copyTo(colored, contours);
Full code here:
Mat angles, colored;
phase(dx, dy, angles, true);
angles = angles / 360 * 255;
angles.convertTo(angles, CV_8U);
applyColorMap(angles, angles, COLORMAP_HSV);
colored.release();
angles.copyTo(colored, contours);
imshow("Colored angles", colored);

Rotate Opencv Matrix by 90, 180, 270 degrees [duplicate]

This question already has answers here:
Rotate image by 90, 180 or 270 degrees
I'm capturing an image from a webcam and I need to rotate it by a right angle. I found these functions:
getRotationMatrix2D - to create a rotation matrix (whatever it is)
transform - to transform one matrix to another by a rotation matrix
But I get nothing but a black area. This is my code:
if (rotate_button.click % 4 > 0) {
    double angle = (rotate_button.click % 4) * 90;  // button increments its click by 1 per click
    Mat transform_m = getRotationMatrix2D(Point(cam_frame_width / 2, cam_frame_height / 2), angle, 1);  // Creating rotation matrix
    Mat current_frame;
    transform(cam_frame, current_frame, transform_m);  // Transforming captured image into a new one
    cam_frame = Mat((int)current_frame.cols, (int)current_frame.rows, cam_frame_type) = Scalar(0, 128, 0);  // resizing captured matrix, so I can copy the resized one onto it
    current_frame.copyTo(cam_frame);  // Copy resized to original
}
Outputs just black screen.
The above answers are too complex and hog your CPU. Your question was not about arbitrary rotation, but about 'Rotate Opencv Matrix by 90, 180, 270 degrees'.
UPDATE 30 JUN 2017:
This functionality is supported by OpenCV, but not documented: https://github.com/opencv/opencv/blob/master/modules/core/include/opencv2/core.hpp#L1041
void rotate(InputArray src, OutputArray dst, int rotateCode);
with
enum RotateFlags {
    ROTATE_90_CLOCKWISE = 0,        // Rotate 90 degrees clockwise
    ROTATE_180 = 1,                 // Rotate 180 degrees clockwise
    ROTATE_90_COUNTERCLOCKWISE = 2, // Rotate 270 degrees clockwise
};
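Using it is then a one-liner (a minimal sketch; src stands for any loaded cv::Mat):
cv::Mat src = cv::imread("image.jpg");  // any input frame
cv::Mat dst;
cv::rotate(src, dst, cv::ROTATE_90_CLOCKWISE);  // rotate 90 degrees clockwise in one call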
Original Answer & Arbitrary degree rotation:
You can also do this by using the transpose and flip operations, i.e. for 90 degrees clockwise:
transpose(matSRC, matROT);
flip(matROT, matROT,1); //transpose+flip(1)=CW
etc. Figure out the other combinations yourself (thinking = learning) by familiarizing yourself with the transpose and flip operations from the docs.
void rot90(cv::Mat &matImage, int rotflag) {
    // 1=CW, 2=CCW, 3=180
    if (rotflag == 1) {
        transpose(matImage, matImage);
        flip(matImage, matImage, 1);   // transpose + flip(1) = CW
    } else if (rotflag == 2) {
        transpose(matImage, matImage);
        flip(matImage, matImage, 0);   // transpose + flip(0) = CCW
    } else if (rotflag == 3) {
        flip(matImage, matImage, -1);  // flip(-1) = 180
    } else if (rotflag != 0) {         // if not 0,1,2,3:
        cout << "Unknown rotation flag(" << rotflag << ")" << endl;
    }
}
So you call it like this, and note the matrix is passed by reference.
cv::Mat matImage;
//Load in sensible data
rot90(matImage,3); //Rotate it
//Note if you want to keep an original unrotated version of
// your matrix as well, just do this
cv::Mat matImage;
//Load in sensible data
cv::Mat matRotated = matImage.clone();
rot90(matImage,3); //Rotate it
Rotate by arbitrary degrees
While I'm at it, here is how to rotate by an arbitrary degree, which I expect to be about 50x more expensive. Note that rotation in this manner will include black padding, and edges will be rotated outside of the image's original size.
void rotate(cv::Mat& src, double angle, cv::Mat& dst) {
    cv::Point2f ptCp(src.cols * 0.5, src.rows * 0.5);
    cv::Mat M = cv::getRotationMatrix2D(ptCp, angle, 1.0);
    cv::warpAffine(src, dst, M, src.size(), cv::INTER_CUBIC);  // Nearest is too rough
}
Calling this for a rotation of 10.5 degrees then is obviously:
cv::Mat matImage, matRotated;
//Load in data
rotate(matImage, 10.5, matRotated);
I find it remarkable that these kinds of extremely basic functions are not part of OpenCV, while OpenCV does have native things like face detection (which is not really well maintained and has questionable performance). Remarkable.
Cheers
Use warpAffine. Try:
Point2f src_center(source.cols/2.0F, source.rows/2.0F);
Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0);
Mat dst;
warpAffine(source, dst, rot_mat, source.size());
dst is the final image
@Abhishek Thakur's answer only works well for rotating the image by 180 degrees. It does not handle rotation by 90 degrees because
the center of rotation supplied to getRotationMatrix2D is incorrect, and
the output size passed to warpAffine is incorrect.
Here's the code that rotates an image by 90 degrees:
Mat src = imread("image.jpg");
Mat dst;
double angle = 90; // or 270
Size src_sz = src.size();
Size dst_sz(src_sz.height, src_sz.width);
int len = std::max(src.cols, src.rows);
Point2f center(len/2., len/2.);
Mat rot_mat = cv::getRotationMatrix2D(center, angle, 1.0);
warpAffine(src, dst, rot_mat, dst_sz);
Edit: Another approach to rotate images by 90,180 or 270 degrees involves doing matrix transpose and then flip. This method is probably faster.
The above code works just fine, but introduces numerical error in the image due to the matrix computations being done in floating point and the warpAffine interpolation.
For a 90-degree increment rotation I prefer to use the following (in Python/OpenCV), since OpenCV images in Python are 2D NumPy arrays.
90 deg.
theImage = numpy.rot90( theImage, 1 )
270 deg.
theImage = numpy.rot90( theImage, 3 )
Note: I only tested this on grayscale images of shape (X, Y).
If you have a color (or other multi-channel) image you might need to reshape it first to make sure that the rotation works along the correct axis.