Rotate ROI in a picture - C++

Given the use case, I want to rotate a ROI in a specific image. For example, I have predefined bounding boxes within an image, and the task is to rotate the part of the image inside these boxes by a given angle.
Furthermore, the rotation must stay inside the bounding boxes, so the ROI is cropped after the transformation.
I tried an approach of creating a subimage of the bounding box, rotating it, and putting it back at its original position in the source image. The problem is that I have a huge dataset of images (> 100,000), and I think my method will probably slow down the process.
Is there another way to accomplish this transformation?
Edit: For better understanding
Here is a quick mockup of how it should look after the transformation.

#include <opencv2/opencv.hpp>

// Rotate src by the given angle (in degrees) around the center of cdst,
// using warpAffine; the output has cdst's size, so anything rotated
// outside that area is cropped.
cv::Mat rotateMatImage(const cv::Mat& src, double angle, const cv::Mat& cdst) {
    cv::Point2f src_center(cdst.cols / 2.0F, cdst.rows / 2.0F);
    cv::Mat rot_mat = cv::getRotationMatrix2D(src_center, angle, 1.0);
    cv::Mat rotated;
    cv::warpAffine(src, rotated, rot_mat, cdst.size(), cv::INTER_CUBIC);
    return rotated;
}
Refer to the documentation: http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html?highlight=warpaffine
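A hypothetical call might look like this (the file name is illustrative):

// Rotate a copy of the image by 30 degrees; src doubles as the
// size/center reference passed as the third parameter.
cv::Mat src = cv::imread("input.png");
cv::Mat dst = rotateMatImage(src, 30.0, src);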

If you are using cv::Mat, yourMat(yourRect) will give you the ROI of the image that should be rotated. The fastest way to do this is to calculate the affine transform that rotates the four corners of the ROI to their destination points, and then apply the transform to the ROI. Using ROI selection and an affine transform is much faster than doing the pixel operations yourself, but I am not sure whether there is a faster method than an affine transform.
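Here is a minimal sketch of that idea; the names image and box are illustrative, and the warp goes through a temporary because warping a ROI onto itself is not safe:

#include <opencv2/opencv.hpp>

// Rotate the contents of a bounding box in place; pixels rotated
// outside the box are cropped, matching the requirement above.
void rotateRoiInPlace(cv::Mat& image, const cv::Rect& box, double angle) {
    cv::Mat roi = image(box); // a header into image, no pixel copy
    cv::Point2f center(box.width / 2.0F, box.height / 2.0F);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
    cv::Mat rotated;
    cv::warpAffine(roi, rotated, rot, roi.size(), cv::INTER_LINEAR);
    rotated.copyTo(roi); // write the rotated patch back into the source image
}

Since roi is only a header into the source image, the copyTo at the end writes the result directly back, which avoids an extra full-image copy per bounding box.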

Related

Projecting 3D points to an undistorted ROI using OpenCV

Assume I have a camera that has been calibrated using the full camera frame to obtain the camera matrix and distortion coefficients. Also, assume that I have a 3D world point expressed in that camera's frame.
I know that I can use cv::projectPoints() with rvec=tvec=(0,0,0), the camera matrix, and the distortion coefficients to project the point to the (full) distorted frame. I also know that if I am receiving an ROI from the camera (a cropped portion of the full distorted frame), I can adjust for the ROI simply by subtracting the (x,y) coordinate of the ROI's top-left corner from the result of cv::projectPoints(). Lastly, I know that if I use cv::projectPoints() with rvec=tvec=(0,0,0), the camera matrix, and zero distortion coefficients, I can project the point to the full undistorted frame (correct me if I'm wrong, but I think this requires using the same camera matrix in cv::undistort() and not using a newCameraMatrix).
How do I handle the case where I want to project to the undistorted version of an ROI that I am receiving? That is, I get a (distorted) ROI, use cv::undistort() on it with the method described here to account for the fact that it's an ROI, and then I want to project the 3D point to the resulting image.
If there is a better way to go about all this, I am open to suggestions as well. My goal is to be able to project 3D points to distorted and undistorted frames, with or without an ROI, where the ROI is always originally defined by the camera feed and therefore always defined in the distorted frame (i.e. 4 different cases: distorted full frame, distorted ROI, undistorted full frame, and the undistorted version of a distorted ROI).
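For reference, here is a minimal sketch of the cases the question already covers (the helper name is made up; the undistorted-ROI case remains the open question):

#include <opencv2/opencv.hpp>
#include <vector>

// Project a 3D point given in the camera frame to pixel coordinates.
// Pass roiTopLeft = (0,0) for the full frame, or the ROI's top-left
// corner to get ROI-relative coordinates.
cv::Point2f projectToFrame(const cv::Point3f& pt, const cv::Mat& K,
                           const cv::Mat& distCoeffs, bool undistorted,
                           const cv::Point2f& roiTopLeft) {
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);
    // An empty distortion vector means zero distortion, i.e. the
    // projection lands in the undistorted frame.
    cv::Mat d = undistorted ? cv::Mat() : distCoeffs;
    std::vector<cv::Point2f> out;
    cv::projectPoints(std::vector<cv::Point3f>{pt}, rvec, tvec, K, d, out);
    return out[0] - roiTopLeft;
}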

OpenCV undistorts only a central part of fisheye image

I'm trying to perform fisheye camera calibration via OpenCV 3.4.0 (C++, MS Windows).
I used cv::fisheye::calibrate to obtain K and D (the camera matrix and the radial distortion coefficients). Then I used cv::fisheye::initUndistortRectifyMap to produce maps for the X and Y coordinates.
Finally, I used cv::remap to undistort the image from the fisheye camera via the maps from initUndistortRectifyMap.
Everything looks right, but OpenCV dewarps only the central part of the fisheye image.
The edges are moved outside the frame.
I'd like to dewarp the whole image.
I tried changing the focal length in the K matrix manually and got undistorted edges, but they became very blurry.
I found some results for this task, for example:
https://www.youtube.com/watch?v=Ll8KCnCw4iU
and
https://www.youtube.com/watch?v=p1kCR1i2nF0
As you can see, these results are very similar to mine.
Does anybody have a solution to this problem?
I analyzed a lot of papers in the last two weeks, and I think I found the source of the problem. The OpenCV 3.4.0 fisheye undistortion method is based on a pinhole camera model. We have an angle between the optical axis of the camera and the ray of light from some object. We also have an angle between the direction to the undistorted point of this object and the camera's optical axis. If the fisheye image were undistorted correctly, these two angles would be equal. The FOV of my fisheye camera is 180 degrees. This means that the distance from the undistorted image center to the point corresponding to the edge of the undistorted image is infinite.
In other words, if we have a fisheye camera with an FOV around 180 degrees, undistortion (via OpenCV) of 100% of the fisheye image surface is impossible.
It can only be achieved by using a projection instead of trying to undistort the image.
More info here: OpenCV fisheye calibration cuts too much of the resulting image
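As a tiny numeric illustration of the pinhole argument (the focal length is a made-up value; only the divergence matters):

#include <cmath>
#include <cstdio>

// In the pinhole model, a ray at angle theta from the optical axis
// projects to radius r = f * tan(theta); as theta approaches 90 degrees
// (the edge of a 180-degree FOV), r grows without bound.
int main() {
    const double f = 300.0; // hypothetical focal length in pixels
    const double pi = 3.14159265358979323846;
    for (double deg : {45.0, 80.0, 89.0, 89.9})
        std::printf("theta = %4.1f deg -> r = %10.1f px\n",
                    deg, f * std::tan(deg * pi / 180.0));
    return 0;
}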
Example result:

Fourier Angle Transformation of Picture, C++

I need to get the angle of an object in a picture using the Fourier transform.
I have an object in a picture that I rotated. There are no problems with the Fourier implementation; it shows clear gradient lines that confirm the correct object angle.
Q: How do I extract the angle from the Fourier gradient lines so that I can transform the object back to horizontal?
You don't show us the image or its transform, so I'll assume the gradient lines are indeed clear enough. In that case, you'd use the Hough transform. The output is a set of lines, each with an angle.
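A rough sketch of that step, assuming spectrum is an 8-bit magnitude image of the Fourier transform (the threshold and vote count are placeholder values to tune):

#include <opencv2/opencv.hpp>
#include <vector>

// Estimate the dominant line angle (in degrees) in a spectrum image.
double dominantAngleDeg(const cv::Mat& spectrum) {
    cv::Mat binary;
    cv::threshold(spectrum, binary, 200, 255, cv::THRESH_BINARY);
    std::vector<cv::Vec2f> lines;
    // rho resolution: 1 px; theta resolution: 1 degree; vote threshold: 100
    cv::HoughLines(binary, lines, 1, CV_PI / 180.0, 100);
    if (lines.empty()) return 0.0;
    return lines[0][1] * 180.0 / CV_PI; // theta of the strongest line
}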

OpenCV Warp perspective whole image

I'm struggling with this problem:
I have an image and I want to apply a warp perspective to it (I already have the transformation matrix), but instead of the output containing only the transformed area (like the example below), I want to be able to see the whole image.
EXAMPLE http://docs.opencv.org/trunk/_images/perspective.jpg
Instead of producing only the transformed region, as in this example, I want to transform the whole original image.
How can I achieve this?
Thanks!
It seems that you are computing the perspective transform by selecting the corners of the sudoku grid in the input image and requesting them to be warped at fixed location in the output image. In your example, it seems that you are requesting the top-left corner to be warped at coordinates (0,0), the top-right corner at (300,0), the bottom-left at (0,300) and the bottom-right at (300,300).
This will always result in the cropping of the image area to the left of the two left corners and above the two top corners (i.e. the image area where x < 0 or y < 0 in the output image). Also, if you specify an output image size of 300x300, this results in the cropping of the image area to the right of the right corners and below the bottom corners.
If you want to keep the whole image, you need to use different output coordinates for the corners. For example warp TLC to (100, 100), TRC to (400,100), BLC to (100,400) and BRC to (400,400), and specify an output image size of 600x600 for instance.
You can also calculate the optimal coordinates as follows (see the sketch further below):
Compute the default perspective transform H0 (as you are doing now)
Transform the corners of the input image using H0, and compute the minimum and maximum values for the x and y coordinates of these corners. Let's denote them xmin, xmax, ymin, ymax.
Compute the translation necessary to map the point (xmin,ymin) to (0,0). The matrix of this translation is T = [1, 0, -xmin; 0, 1, -ymin; 0, 0, 1].
Compute the optimised perspective transform H1 = T*H0 and specify an output image size of (xmax-xmin) x (ymax-ymin).
This way, you are guaranteed that:
the four corners of your input sudoku grid will form a true square
the output image will be translated so that no useful image data is cropped above or to the left of the grid corners
the output image will be sized so that no useful image data is cropped below or to the right of the grid corners
However, this will generate black areas, since the warped input is no longer a perfect rectangle: some pixels in the output image have no correspondence in the input image.
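Here is a minimal sketch of steps 1-4; src and the default transform H0 are assumed given, and H0 must be CV_64F, as returned by getPerspectiveTransform:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Warp src with H0, translated and enlarged so the whole input stays visible.
cv::Mat warpWholeImage(const cv::Mat& src, const cv::Mat& H0) {
    // Step 2: transform the input corners and find xmin/xmax/ymin/ymax.
    std::vector<cv::Point2f> corners = {
        {0.0F, 0.0F}, {(float)src.cols, 0.0F},
        {0.0F, (float)src.rows}, {(float)src.cols, (float)src.rows}};
    std::vector<cv::Point2f> warped;
    cv::perspectiveTransform(corners, warped, H0);
    float xmin = warped[0].x, xmax = warped[0].x;
    float ymin = warped[0].y, ymax = warped[0].y;
    for (const auto& p : warped) {
        xmin = std::min(xmin, p.x); xmax = std::max(xmax, p.x);
        ymin = std::min(ymin, p.y); ymax = std::max(ymax, p.y);
    }
    // Step 3: T maps (xmin, ymin) to (0, 0).
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -xmin,
                                           0, 1, -ymin,
                                           0, 0, 1);
    // Step 4: H1 = T * H0, output size (xmax - xmin) x (ymax - ymin).
    cv::Mat dst;
    cv::warpPerspective(src, dst, T * H0,
                        cv::Size((int)std::ceil(xmax - xmin),
                                 (int)std::ceil(ymax - ymin)));
    return dst;
}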
Edit 1: If you want to replace the black areas with something else, you can initialize the destination matrix as you wish and then set the borderMode parameter of the warpPerspective function to BORDER_TRANSPARENT.

How do I find the new position of a feature after undistorting the image?

I have an image with a circle in it, and I use OpenCV methods to detect the circle and display its edges and center on the image before the image is rectified and undistorted.
I rectify and undistort the image using initUndistortRectifyMap in OpenCV. After remapping, the image is warped and the circle has an oval shape due to the change in perspective. The position coordinates of the center obviously change as well.
I cannot do the circle detection step after rectifying because this will produce inaccurate results, due to the perspective change.
My question is, how can I find the position of the center after the image has been undistorted and rectified?
There is an undistortPoints function which can transform a vector of Point2f or Point2d.
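A minimal sketch, assuming K, distCoeffs, R, and P are the same parameters you passed to initUndistortRectifyMap:

#include <opencv2/opencv.hpp>
#include <vector>

// Map a point detected in the original image into the rectified,
// undistorted image.
cv::Point2f undistortCenter(const cv::Point2f& center, const cv::Mat& K,
                            const cv::Mat& distCoeffs, const cv::Mat& R,
                            const cv::Mat& P) {
    std::vector<cv::Point2f> in{center}, out;
    // Supplying R and P makes the output rectified pixel coordinates
    // rather than normalized image coordinates.
    cv::undistortPoints(in, out, K, distCoeffs, R, P);
    return out[0];
}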