Rotate Opencv Matrix by 90, 180, 270 degrees [duplicate] - c++

I'm capturing an image from a webcam and I need to rotate it by a right angle. I found these functions:
getRotationMatrix2D - to create a rotation matrix (whatever that is)
transform - to transform one matrix to another by a rotation matrix
But I don't get anything but a black area. This is my code:
if (rotate_button.click % 4 > 0) {
    double angle = (rotate_button.click % 4) * 90; //button increments its click by 1 per click
    Mat transform_m = getRotationMatrix2D(Point(cam_frame_width/2, cam_frame_height/2), angle, 1); //Creating rotation matrix
    Mat current_frame;
    transform(cam_frame, current_frame, transform_m); //Transforming captured image into a new one
    cam_frame = Mat((int)current_frame.cols, (int)current_frame.rows, cam_frame_type) = Scalar(0,128,0); //resizing captured matrix, so I can copy the resized one on it
    current_frame.copyTo(cam_frame); //Copy resized to original
}
The output is just a black screen.

The above answers are too complex and hog your CPU. Your question was not about arbitrary rotation, but 'Rotate Opencv Matrix by 90, 180, 270 degrees'.
UPDATE 30 JUN 2017:
This functionality is supported by OpenCV, but not documented: https://github.com/opencv/opencv/blob/master/modules/core/include/opencv2/core.hpp#L1041
void rotate(InputArray src, OutputArray dst, int rotateCode);
with
enum RotateFlags {
    ROTATE_90_CLOCKWISE = 0,        //Rotate 90 degrees clockwise
    ROTATE_180 = 1,                 //Rotate 180 degrees clockwise
    ROTATE_90_COUNTERCLOCKWISE = 2, //Rotate 270 degrees clockwise
};
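So a 90-degree clockwise rotation becomes a one-liner. A minimal usage sketch (assuming src already holds your image):
cv::Mat dst;
cv::rotate(src, dst, cv::ROTATE_90_CLOCKWISE);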
Original Answer & Arbitrary degree rotation:
You can also do this using the transpose and flip operations, i.e. for 90° CW:
transpose(matSRC, matROT);
flip(matROT, matROT, 1); //transpose + flip(1) = CW
etc. Figure out the other combinations yourself (thinking = learning) by familiarizing yourself with the transpose and flip operations from the docs.
void rot90(cv::Mat &matImage, int rotflag){
    //1=CW, 2=CCW, 3=180
    if (rotflag == 1){
        transpose(matImage, matImage);
        flip(matImage, matImage, 1); //transpose+flip(1)=CW
    } else if (rotflag == 2) {
        transpose(matImage, matImage);
        flip(matImage, matImage, 0); //transpose+flip(0)=CCW
    } else if (rotflag == 3){
        flip(matImage, matImage, -1); //flip(-1)=180
    } else if (rotflag != 0){ //if not 0,1,2,3
        cout << "Unknown rotation flag(" << rotflag << ")" << endl;
    }
}
So you call it like this; note the matrix is passed by reference.
cv::Mat matImage;
//Load in sensible data
rot90(matImage, 3); //Rotate it

//Note: if you want to keep an original unrotated version of your matrix as well, just rotate a clone:
cv::Mat matImage;
//Load in sensible data
cv::Mat matRotated = matImage.clone();
rot90(matRotated, 3); //Rotate the copy; matImage stays untouched
Rotate by arbitrary degrees
While I'm at it, here is how to rotate by an arbitrary degree, which I expect to be about 50x more expensive. Note that rotation in this manner will include black padding, and edges will be rotated outside of the image's original size.
void rotate(cv::Mat& src, double angle, cv::Mat& dst){
    cv::Point2f ptCp(src.cols*0.5, src.rows*0.5);
    cv::Mat M = cv::getRotationMatrix2D(ptCp, angle, 1.0);
    cv::warpAffine(src, dst, M, src.size(), cv::INTER_CUBIC); //Nearest is too rough
}
Calling this for a rotation of 10.5 degrees is then simply:
cv::Mat matImage, matRotated;
//Load in data
rotate(matImage, 10.5, matRotated);
I find it remarkable that these kinds of extremely basic functions are not part of OpenCV, while OpenCV does have native things like face detection (which isn't really maintained and has questionable performance). Remarkable.
Cheers

Use warpAffine. Try:
Point2f src_center(source.cols/2.0F, source.rows/2.0F);
Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0);
Mat dst;
warpAffine(source, dst, rot_mat, source.size());
dst is the final image.

@Abhishek Thakur's answer only works well for rotating the image by 180 degrees. It does not handle rotation by 90 degrees because
the center of rotation supplied to getRotationMatrix2D is incorrect, and
the output matrix size passed to warpAffine is incorrect.
Here's the code that rotates an image by 90 degrees:
Mat src = imread("image.jpg");
Mat dst;
double angle = 90; // or 270
Size src_sz = src.size();
Size dst_sz(src_sz.height, src_sz.width);
int len = std::max(src.cols, src.rows);
Point2f center(len/2., len/2.);
Mat rot_mat = cv::getRotationMatrix2D(center, angle, 1.0);
warpAffine(src, dst, rot_mat, dst_sz);
Edit: Another approach to rotating images by 90, 180 or 270 degrees involves doing a matrix transpose and then a flip. This method is probably faster.
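For example, a 90-degree clockwise rotation with this approach would look like the sketch below (the same idea as the rot90 helper in the earlier answer):
Mat rotated;
transpose(src, rotated);
flip(rotated, rotated, 1); //transpose + horizontal flip = 90 degrees CW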

The above code works just fine, but it introduces numerical error into the image because the matrix computations are done in floating point and warpAffine interpolates.
For a 90-degree increment rotation I prefer to use the following (in Python/OpenCV), since OpenCV images in Python are 2D NumPy arrays.
90 deg.
theImage = numpy.rot90( theImage, 1 )
270 deg.
theImage = numpy.rot90( theImage, 3 )
Note: I only tested this on grayscale images of shape (X, Y).
If you have a color (or other multi-channel) image you might need to reshape it first to make sure that the rotation works along the correct axes.

Related

Moving cv::RotatedRect in the same position after rotating a cv::Mat without cropping

I'm currently having trouble understanding what's necessary to transform a cv::RotatedRect after rotating an image without cropping, using the following code by Lars Schillingmann in this question.
Here's the code he provided as an answer:
#include "opencv2/opencv.hpp"
int main()
{
cv::Mat src = cv::imread("im.png", CV_LOAD_IMAGE_UNCHANGED);
double angle = -45;
// get rotation matrix for rotating the image around its center in pixel coordinates
cv::Point2f center((src.cols-1)/2.0, (src.rows-1)/2.0);
cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
// determine bounding rectangle, center not relevant
cv::Rect2f bbox = cv::RotatedRect(cv::Point2f(), src.size(), angle).boundingRect2f();
// adjust transformation matrix
rot.at<double>(0,2) += bbox.width/2.0 - src.cols/2.0;
rot.at<double>(1,2) += bbox.height/2.0 - src.rows/2.0;
cv::Mat dst;
cv::warpAffine(src, dst, rot, bbox.size());
cv::imwrite("rotated_im.png", dst);
return 0;
}
In my case, I have a cv::RotatedRect which matches a certain position in the src image. This cv::RotatedRect should match the same position after the transformation/rotation has been applied to the src mat. Currently, I struggle with doing it the right way.
From what I know, to rotate a cv::RotatedRect, it's only necessary to directly modify the members of the structure, e.g. angle. I'm quite sure that I only have to modify the center, but the new position is always a bit off from the expected location. I initially expected that I only have to add the difference between the bbox and src dimensions to get what I'm looking for, but it turns out not to be the case (including the rotation, of course).
connected_components[i].center.x += ...
connected_components[i].center.y += ...
cv::RotatedRect newRect(connected_components[i].center, connected_components[i].size, connected_components[i].angle- median);
The answer is quite simple. We can reuse the transformation matrix for a point transform using cv::transform. Sample code is below:
cv::Point2f points[4];
connected_components[i].points(points);
std::vector<cv::Point2f> old_points;
old_points.insert(old_points.begin(), std::begin(points), std::end(points));
std::vector<cv::Point2f> new_points;
cv::transform(old_points, new_points, rotation_matrix);
for (unsigned int j = 0; j < 4; ++j) {
    cv::line(dest, new_points[j], new_points[(j + 1) % 4], cv::Scalar(0, 255, 0));
}
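If you need a cv::RotatedRect again rather than just the corners, one option (not part of the original answer, just a standard OpenCV call) is to rebuild it from the transformed points:
// minAreaRect recovers center, size and angle from the four transformed corners
cv::RotatedRect newRect = cv::minAreaRect(new_points);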

3D Reconstruction of Planar Markers using OpenCV

I am trying to perform 3D reconstruction (structure from motion) from multiple images of planar markers. I am very new to MVG and OpenCV.
As far as I have understood, I have to do the following steps:
1. Identify corresponding 2D corner points in the first image.
2. Calculate the camera pose of the first image using cv::solvePnP (assuming the origin to be the center of the marker).
3. Repeat 1 and 2 for the second image.
4. Estimate the relative motion of the camera by Rot_relative = R2 - R1, Trans_relative = T2 - T1.
5. Now assume the first camera to be the origin and construct the 3x4 projection matrices for both views: P1 = [I|0]*CameraMatrix (known from calibration) and P2 = [Rot_relative|Trans_relative].
6. Use the created projection matrices and 2D corner points to triangulate the 3D coordinates using cv::triangulatePoints(P1, P2, point1, point2, OutMat).
7. The 3D coordinates can be found by dividing each row of OutMat by the 4th row.
I was hoping to keep my "First View" as the origin and iterate through n views, repeating steps 1-7 (I suppose this is called global SfM).
I was hoping to get (n-1) sets of 3D corner points with "the first view as origin", which we could then optimize using bundle adjustment.
But the result I get is very disappointing: the calculated 3D points are displaced by a huge factor.
These are my questions:
1. Is there something wrong with the steps I followed?
2. Should I use cv::findHomography() and cv::decomposeHomographyMat() to find the relative motion of the camera?
3. Should point1 and point2 in cv::triangulatePoints(P1, P2, point1, point2, OutMat) be normalized and undistorted? If yes, how should the "OutMat" be interpreted?
Can anyone with insight into the topic point out my mistake?
P.S. I have come to the above understanding after reading "Multiple View Geometry in Computer Vision".
Please find the code snippet below:
cv::Mat Reconstruction::Triangulate(std::vector<cv::Point2f> ImagePointsFirstView,
                                    std::vector<cv::Point2f> ImagePointsSecondView)
{
    cv::Mat rVecFirstView, tVecFirstView;
    cv::Mat rVecSecondView, tVecSecondView;
    cv::Mat RotMatFirstView = cv::Mat(3, 3, CV_64F);
    cv::Mat RotMatSecondView = cv::Mat(3, 3, CV_64F);

    cv::solvePnP(RealWorldPoints, ImagePointsFirstView, cameraMatrix, distortionMatrix, rVecFirstView, tVecFirstView);
    cv::solvePnP(RealWorldPoints, ImagePointsSecondView, cameraMatrix, distortionMatrix, rVecSecondView, tVecSecondView);
    cv::Rodrigues(rVecFirstView, RotMatFirstView);
    cv::Rodrigues(rVecSecondView, RotMatSecondView);

    cv::Mat RelativeRot = RotMatFirstView - RotMatSecondView;
    cv::Mat RelativeTrans = tVecFirstView - tVecSecondView;
    cv::Mat RelativePose;
    cv::hconcat(RelativeRot, RelativeTrans, RelativePose);

    cv::Mat ProjectionMatrix_0 = cameraMatrix * cv::Mat::eye(3, 4, CV_64F);
    cv::Mat ProjectionMatrix_1 = cameraMatrix * RelativePose;

    cv::Mat X;
    cv::undistortPoints(ImagePointsFirstView, ImagePointsFirstView, cameraMatrix, distortionMatrix, cameraMatrix);
    cv::undistortPoints(ImagePointsSecondView, ImagePointsSecondView, cameraMatrix, distortionMatrix, cameraMatrix);
    cv::triangulatePoints(ProjectionMatrix_0, ProjectionMatrix_1, ImagePointsFirstView, ImagePointsSecondView, X);

    X.row(0) = X.row(0) / X.row(3);
    X.row(1) = X.row(1) / X.row(3);
    X.row(2) = X.row(2) / X.row(3);
    return X;
}

Coordinate transform from ROI in original image

I have a little problem with some projection and geometry. I have an image in which I detect a square. After the square detection, I crop the square from the image. In the ROI I detect the point P(x,y) (see the image below).
My problem is that I know the coordinates of point P in the ROI, the coordinates of A, B, C, D, and the rotation of the ROI (RotatedRect::angle), but I want the coordinates of P in the original image. Any advice would help.
For the ROI crop I have this code:
vector<RotatedRect> rect(squares.size());
for (int i = 0; i < squares.size(); i++)
{
    rect[i] = minAreaRect(Mat(squares[i]));
    Mat M, rotated, cropped;
    float angle = rect[i].angle;
    Size rect_size = rect[i].size;
    if (rect[i].angle < -45)
    {
        angle += 90;
        swap(rect_size.width, rect_size.height);
    }
    M = getRotationMatrix2D(rect[i].center, angle, 1.0);
    warpAffine(cameraFeed, rotated, M, cameraFeed.size(), INTER_CUBIC);
    getRectSubPix(rotated, rect_size, rect[i].center, cropped);
    cropped.copyTo(SatelliteClass[i].m_matROIcropped);
    SatelliteClass[i].m_vecRect = rect[i];
}
It's basically a question of vector addition: first add the ROI's top-left corner to P (so P is expressed in the coordinates of the rotated full frame), then apply the inverse of M to rotate P back into the original frame.
There might be a way to do this within the API you're using instead of reinventing the wheel.
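A minimal sketch of that, reusing M, rect_size and rect[i].center from the loop above (P is a hypothetical point detected in the cropped ROI):
Point2f P; //the detected point, in cropped-ROI coordinates
//1. Shift P from ROI coordinates into the rotated full frame:
//getRectSubPix extracted a rect_size patch centered on rect[i].center,
//so the ROI's top-left corner sits at center - size/2 in the rotated image.
Point2f topLeft(rect[i].center.x - rect_size.width * 0.5f,
                rect[i].center.y - rect_size.height * 0.5f);
Point2f pRotated = P + topLeft;
//2. Map back into the original image with the inverse of M.
Mat Minv;
invertAffineTransform(M, Minv);
Point2f pOriginal(Minv.at<double>(0,0)*pRotated.x + Minv.at<double>(0,1)*pRotated.y + Minv.at<double>(0,2),
                  Minv.at<double>(1,0)*pRotated.x + Minv.at<double>(1,1)*pRotated.y + Minv.at<double>(1,2));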

OpenCV: Find original coordinates of a rotated point

I have the following problem. I'm searching for eyes within an image using HaarClassifiers. Due to the rotation of the head I'm trying to find eyes within different angles. For that, I rotate the image by different angles. For rotating the frame, I use the code (written in C++):
Point2i rotCenter;
rotCenter.x = scaledFrame.cols / 2;
rotCenter.y = scaledFrame.rows / 2;
Mat rotationMatrix = getRotationMatrix2D(rotCenter, angle, 1);
warpAffine(scaledFrame, scaledFrame, rotationMatrix, Size(scaledFrame.cols, scaledFrame.rows));
This works fine and I am able to extract two ROI Rectangles for the eyes. So, I have the top/left coordinates of each ROI as well as their width and height. However, these coordinates are the coordinates in the rotated image. I don't know how I can backproject this rectangle onto the original frame.
Assuming I have obtained the eye-pair ROIs for the unscaled frame (full_image), but still rotated:
eye0_roi and eye1_roi
How can I rotate them back, such that they map to their correct positions?
Best regards,
Andre
You can use invertAffineTransform to get the inverse matrix and use this matrix to rotate the point back:
Mat RotateImg(const Mat& img, double angle, Mat& invertMat)
{
    Point center = Point(img.cols/2, img.rows/2);
    double scale = 1;
    Mat warpMat = getRotationMatrix2D(center, angle, scale);
    Mat dst = Mat(img.size(), CV_8U, Scalar(128));
    warpAffine(img, dst, warpMat, img.size(), 1, 0, Scalar(255, 255, 255));
    invertAffineTransform(warpMat, invertMat);
    return dst;
}

Point RotateBackPoint(const Point& dstPoint, const Mat& invertMat)
{
    cv::Point orgPoint;
    orgPoint.x = invertMat.at<double>(0,0)*dstPoint.x + invertMat.at<double>(0,1)*dstPoint.y + invertMat.at<double>(0,2);
    orgPoint.y = invertMat.at<double>(1,0)*dstPoint.x + invertMat.at<double>(1,1)*dstPoint.y + invertMat.at<double>(1,2);
    return orgPoint;
}
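A small usage sketch (the angle and the detected ROI are placeholders for whatever your detector produced):
Mat invertMat;
Mat rotated = RotateImg(scaledFrame, angle, invertMat);
//... run the eye detector on 'rotated', obtaining eye0_roi ...
Point cornerInRotated(eye0_roi.x, eye0_roi.y); //top-left of the detected ROI
Point cornerInOriginal = RotateBackPoint(cornerInRotated, invertMat);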

Get angle from OpenCV Canny edge detector

I want to use OpenCV's Canny edge detector, such as is outlined in this question. For example:
cv::Canny(image,contours,10,350);
However, I wish to not only get the final thresholded image out, but I also wish to get the detected edge angle at each pixel. Is this possible in OpenCV?
Canny doesn't give you this directly.
However, you can calculate the angle from the Sobel derivatives, which are used internally in Canny().
In C++ this looks roughly like:
cv::Mat contours, dx, dy;
cv::Canny(image, contours, 10, 350);
cv::Sobel(image, dx, CV_64F, 1, 0, 3, 1, 0, cv::BORDER_REPLICATE);
cv::Sobel(image, dy, CV_64F, 0, 1, 3, 1, 0, cv::BORDER_REPLICATE);
cv::Mat angle = cv::Mat::zeros(image.size(), CV_64F);
for (int i = 0; i < contours.rows; ++i)
    for (int j = 0; j < contours.cols; ++j)
        if (contours.at<uchar>(i, j) > 0) //only edge pixels
            angle.at<double>(i, j) = std::atan2(dy.at<double>(i, j), dx.at<double>(i, j));
Instead of using a for loop you can also pass the dx and dy gradients to the phase function, which returns a grayscale image of angle directions, then pass that to the applyColorMap function and finally mask it with the edges, so the background is black.
Here is the workflow:
Get the angles
Mat angles;
phase(dx, dy, angles, true);
The true argument indicates that the angles are returned in degrees.
Change the range of angles to 0-255 so you can convert to CV_8U without data loss
angles = angles / 360 * 255;
note that angles is still of type CV_64F, since dx and dy come from the Sobel function as CV_64F
Convert to CV_8U
angles.convertTo(angles, CV_8U);
Apply color map of your choice
applyColorMap(angles, angles, COLORMAP_HSV);
in this case I chose the HSV colormap. See this for more info: https://www.learnopencv.com/applycolormap-for-pseudocoloring-in-opencv-c-python/
Apply the edges mask so the background is black
Mat colored;
angles.copyTo(colored, contours);
Finally, display the image :D
imshow("Colored angles", colored);
In case your source is a video or webcam, you must additionally clear the colored image before applying the edge mask, to prevent angles accumulating across frames:
colored.release();
angles.copyTo(colored, contours);
Full code here (Canny and Sobel setup included so dx and dy are defined, as in the previous answer):
Mat contours, dx, dy, angles, colored;
Canny(image, contours, 10, 350);
Sobel(image, dx, CV_64F, 1, 0, 3, 1, 0, BORDER_REPLICATE);
Sobel(image, dy, CV_64F, 0, 1, 3, 1, 0, BORDER_REPLICATE);
phase(dx, dy, angles, true);
angles = angles / 360 * 255;
angles.convertTo(angles, CV_8U);
applyColorMap(angles, angles, COLORMAP_HSV);
colored.release();
angles.copyTo(colored, contours);
imshow("Colored angles", colored);