OpenCV C++ to C conversion

In OpenCV with the C++ API, while creating a rotation matrix I write
Mat rot_matrix = getRotationMatrix2D(src_center, angle, 1.0);
How should I write this in OpenCV with C? What should I write instead of Mat? Is it like this:
CvMat* rot_mat = cv2DRotationMatrix( center, angle, scale, rot );
Is the above declaration correct? If yes, then how do I use it in the warp affine function? Is it like this:
cvWarpAffine( src, dst, rot_mat );

I think you are a bit confused, judging from our little chat in the comments section, so I decided to write an answer that would try to make it a bit clearer.
First, for the original question: as you wrote, Mat is indeed the C++ form.
In C you use CvMat, and the function cv2DRotationMatrix() already takes CvMat* as one of its parameters, so it can be used like this:
cv2DRotationMatrix(center, angle, scale, rot_mat);
where:
center – CvPoint2D32f, the center of the rotation in the source image (width/2, height/2).
angle – the desired rotation angle (in degrees).
scale – isotropic scale factor (1 means the picture is kept at the same size).
mapMatrix – pointer to the destination matrix (your 2x3 matrix that will be used as the rotation matrix).
Now rot_mat holds the rotation matrix:
[  α   β   (1−α)·center.x − β·center.y ]
[ −β   α   β·center.x + (1−α)·center.y ]
where α = scale·cos(angle) and β = scale·sin(angle).
Now you would like to calculate the position of each pixel after rotating the whole picture by x degrees (an affine transformation of the pixels/picture/image).
At this stage you have the rotation matrix to be used in the affine transformation, and you want to perform that transformation (rotate the image), so you can use the function cvWarpAffine(). In our case:
cvWarpAffine( src, dst, rot_mat );
where:
src – Source image
dst – Destination image
rot_mat – our mapMatrix (the transformation matrix).
*There is also a fourth parameter, flags, but the default is OK for our case.
What does it do?
It transforms the source image using the specified matrix: dst(x, y) = src(M11·x + M12·y + M13, M21·x + M22·y + M23).
*Or, in simpler words, as I described before, it "just" calculates the new position of each pixel after the rotation (using the rotation matrix as input to the affine function).
rot_mat should be a 2x3 matrix – you can create it by calling cvCreateMat().
src, dst are IplImage* (because we said it's C code).
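Putting the pieces together, a minimal sketch of the whole flow with the legacy C API (OpenCV 2.x/3.x) could look like the code below; the file names, the 45-degree angle and the flags are just placeholder values for illustration.

#include <opencv2/core/core_c.h>
#include <opencv2/imgproc/imgproc_c.h>
#include <opencv2/highgui/highgui_c.h>

int main(void)
{
    IplImage* src = cvLoadImage("input.png", CV_LOAD_IMAGE_COLOR);  /* hypothetical input file */
    if (!src)
        return 1;
    IplImage* dst = cvCloneImage(src);                              /* same size/type as src */

    CvPoint2D32f center = cvPoint2D32f(src->width / 2.0f, src->height / 2.0f);
    CvMat* rot_mat = cvCreateMat(2, 3, CV_32FC1);                   /* the 2x3 map matrix */

    cv2DRotationMatrix(center, 45.0, 1.0, rot_mat);                 /* 45 degrees, scale 1.0 */
    cvWarpAffine(src, dst, rot_mat, CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));

    cvSaveImage("rotated.png", dst, 0);
    cvReleaseMat(&rot_mat);
    cvReleaseImage(&dst);
    cvReleaseImage(&src);
    return 0;
}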
*The technical details of the function are from:
http://opencv.willowgarage.com/documentation/geometric_image_transformations.html

It seems like you are porting a C++ program to C. As per Wikipedia, OpenCV also has a C interface which you can look into for your purpose.
The declaration seems right, provided the parameters are pointers rather than references, since C does not support references.
You can use pointers to wrap the class objects. The C API declares it like:
CvMat* cv2DRotationMatrix( CvPoint2D32f center, double angle, double scale, CvMat* map_matrix );
As you have pointed out, you need to wrap the C++ functions behind a pure C interface.

Related

Can I create a transformation matrix from rotation/translation vectors?

I'm trying to deskew an image that has an element of known size. Given this image:
I can use aruco::estimatePoseBoard, which returns rotation and translation vectors. Is there a way to use that information to deskew everything that's in the same plane as the marker board? (Unfortunately my linear algebra is rudimentary at best.)
Clarification
I know how to deskew the marker board. What I want to be able to do is deskew the other things (in this case, the cloud-shaped object) in the same plane as the marker board. I'm trying to determine whether or not that's possible and, if so, how to do it. I can already put four markers around the object I want to deskew and use the detected corners as input to getPerspectiveTransform along with the known distance between them. But for our real-world application it may be difficult for the user to place markers exactly. It would be much easier if they could place a single marker board in the frame and have the software deskew the other objects.
Since you tagged OpenCV:
From the image I can see that you have detected the corners of all the black boxes. So just get the outermost border points one way or another:
Then it is like this:
std::vector<cv::Point2f> src_points = {/*Fill your 4 corners here*/};
std::vector<cv::Point2f> dst_points = {cv::Point2f(0, 0), cv::Point2f(width, 0), cv::Point2f(width, height), cv::Point2f(0, height)};
auto H = cv::getPerspectiveTransform(src_points, dst_points);
cv::Mat cropped_image;
cv::warpPerspective(full_image, cropped_image, H, cv::Size(width, height));
I was stuck on the assumption that the destination points in the call to getPerspectiveTransform had to be the corners of the output image (as they are in Humam's suggestion). Once it dawned on me that the destination points could be somewhere within the output image I had my answer.
float boardX = 1240;
float boardY = 1570;
float boardWidth = 1730;
float boardHeight = 1400;
vector<Point2f> destinationCorners;
destinationCorners.push_back(Point2f(boardX + boardWidth, boardY));
destinationCorners.push_back(Point2f(boardX + boardWidth, boardY + boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY + boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY));
Mat h = getPerspectiveTransform(detectedCorners, destinationCorners);
Mat bigImage(image.size() * 3, image.type(), Scalar(0, 50, 50));
warpPerspective(image, bigImage, h, bigImage.size());
This fixed the perspective of the board and everything in its plane. (The waviness of the board is due to the fact that the paper wasn't lying flat in the original photo.)

OpenCV How to rotate cv::RotatedRect?

How do I apply a transformation (e.g. a rotation) to a cv::RotatedRect?
I tried using cv::warpAffine, but that won't work, as it is supposed to be applied to a cv::Mat...
You can control rotation, translation and scale directly using the member variables angle, center and size; see the documentation.
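For instance, a rotation about the rectangle's own center, a translation and a scaling can be expressed by adjusting those members directly; a minimal sketch (the numbers are arbitrary):

#include <opencv2/core.hpp>

int main()
{
    cv::RotatedRect box(cv::Point2f(100.f, 100.f), cv::Size2f(60.f, 30.f), 15.f);

    box.angle += 30.f;                      // rotate a further 30 degrees about its own center
    box.center += cv::Point2f(10.f, 0.f);   // translate
    box.size.width  *= 2.f;                 // scale
    box.size.height *= 2.f;
    return 0;
}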
More general transformations require getting the vertices using points() and manipulating them, for example with cv::warpAffine, but once you do that you will no longer have a cv::RotatedRect (by definition).
If you are planning to do complex operations like affine or perspective transforms, you should work with the points of the rotated rect, and the result may be a quadrilateral, not a rectangle.
cv::warpAffine works on images; for points you should use cv::transform and cv::perspectiveTransform.
They take an array of points and produce an array of points.
Example:
cv::RotatedRect rect;
//fill rect somehow
cv::Point2f rect_corners[4];
rect.points(rect_corners);
std::vector<cv::Point2f> rect_corners_transformed(4);
cv::Mat M;
//fill M with a 2x3 affine transformation matrix (e.g. from cv::getRotationMatrix2D)
cv::transform(std::vector<cv::Point2f>(std::begin(rect_corners), std::end(rect_corners)), rect_corners_transformed, M);
// your transformed points are in rect_corners_transformed
TLDR: Create a new rectangle.
I don't know if it will help you, but I solved a similar problem by creating a new rectangle and ignoring the old one. In other words, I calculated the new angle, and then assigned it and the values of the old rectangle (the center point and the size) to the new rectangle:
RotatedRect newRotatedRectangle(oldRectangle.center, oldRectangle.size, newAngle);

How to undo a perspective transform for a single point in opencv

I am trying to do some image analysis using an inverse perspective map. I used the OpenCV functions getTransform and findHomography to generate a transformation matrix and apply it to the source image. This works well, and I am able to get the points I want from the image. The problem is, I don't know how to take individual point values and undo the transform to draw them back on the original picture. I want to undo the transform for only this set of points, to find their original locations. How does one do this?
The points are in the form Point(x,y) from the openCV library.
To invert a homography (e.g. perspective transformation) you typically just invert the transformation matrix.
So to transform some points back from your destination image to your source image, you invert the transformation matrix and transform those points with the result. To transform a point with a transformation matrix, you multiply the matrix by the point (the point on the right, in homogeneous coordinates), possibly followed by a de-homogenization.
Luckily, OpenCV provides not only the warpAffine/warpPerspective methods, which transform each pixel of one image to the other image, but also a method to transform single points.
Use the cv::perspectiveTransform(inputVector, emptyOutputVector, yourTransformation) method to transform a set of points, where
inputVector is a std::vector<cv::Point2f> (you can use an nx2 or 2xn matrix, too, but that is sometimes error-prone). You could instead use the cv::Point3f type, but I'm not sure whether those would be homogeneous coordinate points or 3D points for a 3D transformation (or maybe both?).
outputVector is an empty std::vector<cv::Point2f> where the result will be stored
yourTransformation is a double-precision 3x3 cv::Mat transformation matrix (like the one provided by findHomography), or 4x4 for 3D points.
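A minimal C++ sketch of the inverse mapping, assuming H is the 3x3 homography you obtained earlier and the point coordinates are just placeholders:

#include <opencv2/core.hpp>
#include <vector>

std::vector<cv::Point2f> mapBackToSource(const cv::Mat& H,
                                         const std::vector<cv::Point2f>& warpedPoints)
{
    cv::Mat Hinv = H.inv();                                        // invert the homography
    std::vector<cv::Point2f> originalPoints;
    cv::perspectiveTransform(warpedPoints, originalPoints, Hinv);  // map points back to the source image
    return originalPoints;
}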
Here's a Python example (the sample point and homography below are only placeholders):
import cv2
import numpy as np

# trans: a 3x3 homography, e.g. from cv2.getPerspectiveTransform or cv2.findHomography
trans = np.array([[1, 0, 5], [0, 1, 10], [0, 0, 1]], dtype=np.float32)
# points must be float32 arrays of shape (N, 1, 2)
point_original = np.array([[[10.0, 20.0]]], dtype=np.float32)

# Forward transform
point_transformed = cv2.perspectiveTransform(point_original, trans)

# Reverse transform
inv_trans = np.linalg.pinv(trans)
round_tripped = cv2.perspectiveTransform(point_transformed, inv_trans)

# Now, round_tripped should be approximately equal to point_original
You can use cv::perspectiveTransform(inputVector, emptyOutputVector, yourTransformation) to apply a perspective transform to points.
Python: cv2.perspectiveTransform(src, m) → dst
src – input two-channel or three-channel floating-point array; each element is a 2D/3D vector to be transformed.
m – 3x3 or 4x4 floating-point transformation matrix calculated earlier by cv2.getPerspectiveTransform(_src, _dst)
In Python, you have to pass the points in a numpy array, as shown below:
points_to_be_transformed = np.array([[[0, 0]]], dtype=np.float32)
transformed_points = cv2.perspectiveTransform(points_to_be_transformed, m)
transformed_points will have the same shape as the input array: points_to_be_transformed

Resolving rotation matrices to obtain the angles

I have used this code as a basis to detect my rectangular target in a scene. I use ORB and the FLANN matcher. I have been able to draw the bounding box of the detected target in my scene successfully using the findHomography() and perspectiveTransform() functions.
The reference image (img_object in the above code) is a straight view of only the rectangular target. Now the target in my scene image may be tilted forwards or backwards. I want to find out the angle by which it has been tilted. I have read various posts and came to the conclusion that the homography returned by findHomography() can be decomposed into a rotation matrix and a translation vector. I have used code from https://gist.github.com/inspirit/740979 recommended by this link, translated to C++. This is the Zhang SVD decomposition code from the camera calibration module of OpenCV. I got the complete explanation of this decomposition code from O'Reilly's Learning OpenCV book.
I also used solvePnP() on the keypoints returned by the matcher to cross-check the rotation matrix and the translation vector returned from the homography decomposition, but they do not seem to be the same.
I already have the measurements of the tilts of all my scene images. I found two ways to retrieve the angles from the rotation matrix to check how well they match my values.
Given a 3×3 rotation matrix
R = [ r11  r12  r13 ]
    [ r21  r22  r23 ]
    [ r31  r32  r33 ]
The 3 Euler angles are:
theta_x = atan2(r32, r33)
theta_y = atan2(-r31, sqrt(r32^2 + r33^2))
theta_z = atan2(r21, r11)
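A minimal C++ sketch of these formulas, assuming R is a 3x3 CV_64F cv::Mat (e.g. from the homography decomposition):

#include <opencv2/core.hpp>
#include <cmath>

// Returns (theta_x, theta_y, theta_z) in radians; R must be a 3x3 CV_64F rotation matrix.
cv::Vec3d rotationMatrixToEuler(const cv::Mat& R)
{
    double thetaX = std::atan2(R.at<double>(2, 1), R.at<double>(2, 2));
    double thetaY = std::atan2(-R.at<double>(2, 0),
                               std::sqrt(R.at<double>(2, 1) * R.at<double>(2, 1) +
                                         R.at<double>(2, 2) * R.at<double>(2, 2)));
    double thetaZ = std::atan2(R.at<double>(1, 0), R.at<double>(0, 0));
    return cv::Vec3d(thetaX, thetaY, thetaZ);
}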
The axis-angle representation – R being a general rotation matrix, its corresponding rotation axis u
and rotation angle θ can be retrieved from:
cos(θ) = (trace(R) − 1) / 2
[u]× = (R − Rᵀ) / (2·sin(θ))
I calculated the angles using both methods for the rotation matrices obtained from the homography decomposition and from solvePnP(). All the angles are different and give very unexpected values.
Is there a hole in my understanding? I do not understand where my calculations are wrong. Are there any alternatives I can use?
Why do you expect them to be the same? They are not the same thing at all.
The Euler angles are three angles of rotation about one axis at a time, starting from the world frame.
Rodrigues' formula gives the components of one vector in the world frame, and an angle of rotation about that vector.
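If you want the axis-angle form, OpenCV exposes the conversion directly through cv::Rodrigues; a minimal sketch, assuming R is a 3x3 CV_64F rotation matrix:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// The returned vector points along the rotation axis and its norm is the angle in radians.
cv::Vec3d rotationMatrixToAxisAngle(const cv::Mat& R)
{
    cv::Mat rvec;
    cv::Rodrigues(R, rvec);   // 3x3 rotation matrix -> 3x1 rotation vector
    return cv::Vec3d(rvec.at<double>(0), rvec.at<double>(1), rvec.at<double>(2));
}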

Camera Calibration with OpenCV

I want to perform camera calibration with OpenCV C++ API, using a set of known world to image point matches.
OpenCV has a function called cv::calibrateCamera, as documented here. The documentation clearly mentions that the function can deduce the intrinsic camera matrix for planar objects and that it expects the user to specify the matrix for non-planar 3D environments.
In my point correspondences, the world coordinates are not planar. And I do not have a qualified guess for the internal camera matrix.
How would I go about calibrating the camera in this case?
Currently, I am using a simple DLT based approach for the calculation using the cv::SVD::solveZ function. But I would like to use the non-linear estimation that OpenCV performs.
This page explains how to perform camera auto-calibration. This includes a method using Kruppa equations which appears to be solvable using the non-linear techniques you desire.
I was in the same situation: I have a non-planar 3D target, but I wanted to use OpenCV's non-linear LM optimization for the calibration process. (Zhang's initialization method used by OpenCV only allows planar calibration targets.)
What you can do is extract the camera matrix from your own DLT result and use it as an initial guess for calibrateCamera. It is sufficient to do this for a single pair (image points – object points) only. Even though the other pairs might produce other camera matrices, they will hopefully be similar, and you need that matrix only for initialization anyway.
Note, I do assume, though, that with your own DLT you obtain a projection matrix P which maps homogeneous world points X to homogeneous image points x via x = P * X.
This would be the way to go. It is in Python, though; you should be able to adapt it to your own needs:
P = YOUR_DLT(imagePoints[0], objectPoints[0])
cameraMatrix, _, _, _, _, _, _ = cv2.decomposeProjectionMatrix(P)
cameraMatrix /= cameraMatrix[2,2] # ensure element [2,2] is 1
cameraMatrix[0,1] = 0 # ensure no skew
cameraMatrix[0,0] = abs(cameraMatrix[0,0]) # ensure positive focal lengths
cameraMatrix[1,1] = abs(cameraMatrix[1,1])
# ensure the principal point lies within the image (resX, resY: image resolution)
cameraMatrix[0,2] = min(resX-1, max(0, cameraMatrix[0,2]))
cameraMatrix[1,2] = min(resY-1, max(0, cameraMatrix[1,2]))
retval, cameraMatrix, distCoeffs, rvecs, tvecs = \
    cv2.calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, None,
                        flags=cv2.CALIB_USE_INTRINSIC_GUESS)
Note, since calibrateCamera assumes cameraMatrix[2,2] == 1 and is constrained to positive focal lengths and zero skew, the camera matrix likely needs to be corrected, as I've shown in the code above. Also note the CALIB_USE_INTRINSIC_GUESS flag: without it, calibrateCamera ignores the provided matrix and falls back to its own (planar) initialization.