I use SDL and I want to display images from a USB camera.
I'll use YUV-formatted images and SDL_Overlay.
I also want to rotate the image.
Can I rotate an SDL_Overlay object?
Or must I convert the YUV image to BMP?
The rotation angle is 90 degrees.
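SDL_Overlay has no built-in rotation, and converting to BMP should not be necessary: a common approach is to rotate each YUV plane in memory before copying it into the overlay's pixel buffers. A minimal sketch of a 90-degree clockwise plane rotation (plain C++, no SDL types; the function name is mine, and for YUV420 the same routine would be applied to the U and V planes at half resolution):

```cpp
#include <cstdint>
#include <vector>

// Rotate one 8-bit image plane (e.g. the Y plane of a YUV frame)
// 90 degrees clockwise. The result is h pixels wide and w pixels tall
// when the source is w x h.
std::vector<uint8_t> rotatePlane90CW(const std::vector<uint8_t>& src,
                                     int w, int h)
{
    std::vector<uint8_t> dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            // Source pixel (x, y) lands at column (h - 1 - y), row x
            // of the rotated plane, whose row stride is h.
            dst[x * h + (h - 1 - y)] = src[y * w + x];
    return dst;
}
```

The rotated planes are then copied into the locked overlay as usual; remember that the overlay's width/height are swapped relative to the camera frame.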
Assume I have a camera that has been calibrated using the full camera frame to obtain the camera matrix and distortion coefficients. Also, assume that I have a 3D world point expressed in that camera's frame.
I know that I can use cv::projectPoints() with rvec=tvec=(0,0,0), the camera matrix, and distortion coefficients to project the point to the (full) distorted frame. I also know that if I am receiving an ROI from the camera (which is a cropped portion of the full distorted frame), I can adjust for the ROI simply by subtracting the (x,y) coordinate of the top left corner of the ROI from the result of cv::projectPoints(). Lastly, I know that if I use cv::projectPoints() with rvec=tvec=(0,0,0), the camera matrix, and zero distortion coefficients I can project the point to the full undistorted frame (correct me if I'm wrong, but I think this requires that you use the same camera matrix in cv::undistort() and don't use a newCameraMatrix).
How do I handle the case where I want to project to the undistorted version of an ROI that I am receiving (i.e. I get a (distorted) ROI and then use cv::undistort() on it using the method described here to account for the fact that it's an ROI, and then I want to project the 3D point to that resulting image)?
If there is a better way to go about all this I am open to suggestions as well. My goal is that I want to be able to project 3D points to distorted and undistorted frames with or without the presence of an ROI where the ROI is always originally defined by the feed from the camera and therefore always defined in the distorted frame (i.e. 4 different cases: distorted full frame, distorted ROI, undistorted full frame, undistorted version of distorted ROI).
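The four cases can be handled by one routine if the projection is written out by hand. The sketch below reproduces the math cv::projectPoints performs with rvec = tvec = (0,0,0) under the standard plumb-bob model; passing zero distortion coefficients gives the undistorted-frame cases, and the ROI offset is the subtraction described above. The struct and function names are mine, for illustration:

```cpp
#include <cmath>

struct Pixel { double u, v; };

// Plumb-bob distortion coefficients (k1, k2, p1, p2, k3), as in OpenCV.
struct Distortion { double k1, k2, p1, p2, k3; };

// Project a 3D point expressed in the camera frame to pixel coordinates,
// mirroring cv::projectPoints with rvec = tvec = 0. Pass all-zero
// distortion coefficients to project to the undistorted frame (assuming
// the undistorted image was produced with the same camera matrix and no
// newCameraMatrix). roiX/roiY is the top-left corner of the ROI in the
// full frame; use (0, 0) for the full frame.
Pixel projectPoint(double X, double Y, double Z,
                   double fx, double fy, double cx, double cy,
                   const Distortion& d, double roiX = 0, double roiY = 0)
{
    double x = X / Z, y = Y / Z;                    // normalized coordinates
    double r2 = x * x + y * y;
    double radial = 1 + d.k1 * r2 + d.k2 * r2 * r2 + d.k3 * r2 * r2 * r2;
    double xd = x * radial + 2 * d.p1 * x * y + d.p2 * (r2 + 2 * x * x);
    double yd = y * radial + d.p1 * (r2 + 2 * y * y) + 2 * d.p2 * x * y;
    return { fx * xd + cx - roiX, fy * yd + cy - roiY };
}
```

For the fourth case: if the distorted ROI was undistorted with a camera matrix whose principal point was shifted by the ROI's top-left corner (the method linked in the question), then projecting with zero distortion and subtracting that same corner should land on the correct pixel.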
I'm trying to perform fisheye camera calibration via OpenCV 3.4.0 (C++, MS Windows).
I used cv::fisheye::calibrate to compute K and D (the camera matrix and the radial distortion coefficients). Then I used cv::fisheye::initUndistortRectifyMap to produce maps for the X and Y coordinates.
Finally, I used cv::remap with those maps to undistort the image from the fisheye camera.
Everything looks right, but OpenCV dewarps only the central part of the fisheye image.
The edges are pushed outside the frame.
I'd like to dewarp the whole image.
I tried changing the focal length in the K matrix manually, and the edges became undistorted, but they also became very blurry.
I found some results for this task, for example:
https://www.youtube.com/watch?v=Ll8KCnCw4iU
and
https://www.youtube.com/watch?v=p1kCR1i2nF0
As you can see, these results are very similar to mine.
Does anybody have a solution to this problem?
I analyzed a lot of papers in the last 2 weeks. I think I found the source of the problem. The OpenCV 3.4.0 fisheye undistortion method is based on a pinhole camera model. We have an angle between the optical axis of the camera and the ray of light from some object. We also have an angle between the direction to the undistorted point of this object and the camera's optical axis. If the fisheye image were undistorted correctly, these two angles would be equal. The FOV of my fisheye camera is 180 degrees. That means the distance from the center of the undistorted image to the point corresponding to the edge of the undistorted image is infinite.
In other words if we have a fisheye camera with FOV around 180 degrees, undistortion (via OpenCV) of 100% of fisheye image surface is impossible.
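This argument can be checked numerically. A pinhole (rectilinear) projection maps a ray at angle theta from the optical axis to radius r = f * tan(theta), which diverges as theta approaches 90 degrees, while the equidistant fisheye model maps it to r = f * theta, which stays finite. A minimal sketch (function names are mine; the equidistant model is one common fisheye model, not necessarily the exact one your lens follows):

```cpp
#include <cmath>

// Radial distance from the image center for a ray at angle `theta`
// (radians) from the optical axis, under two camera models.

// Pinhole / rectilinear model: diverges as theta -> pi/2.
double pinholeRadius(double f, double theta)
{
    return f * std::tan(theta);
}

// Equidistant fisheye model: stays finite at theta = pi/2.
double equidistantRadius(double f, double theta)
{
    return f * theta;
}
```

With f = 1, a ray at 89.9 degrees lands at r of roughly 573 in the pinhole model but only about 1.57 in the equidistant model, which is why a 180-degree fisheye frame cannot fit into any finite rectilinear image.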
It can only be achieved by reprojecting the image onto a different projection surface instead of trying to undistort it.
More info here: OpenCV fisheye calibration cuts too much of the resulting image
Example result:
I have calibrated my stereo cameras using OpenCV's stereoCalibrate() and rectified them using stereoRectify(). This is successful (cameras' epipolar lines correctly aligned and can obtain accurate 3D point locations from corresponding 2D point left/right pairs).
I'm now trying to use WebGL-based three.js to display the same 3D points in the same projection as I have in OpenCV. In other words, if I overlaid the calibrated and rectified 2D image from my left camera onto the three.js output, the three.js 3D points should visually align with where they are on the OpenCV 2D image. This could be used for augmented reality.
Just dealing with the left camera (camera 1):
stereoRectify() provides projection matrix P1 and rectification transform R1. Using only P1 I can convert a 3D point to the correct 2D screen position in OpenCV. However, I am having difficulty using P1 to get an equivalent camera.projectionMatrix in three.js.
I'm using the OpenCV camera matrix to OpenGL projection conversion suggested by Guillaume Noctua here. I'm taking the camera matrix to be the top-left 3x3 of P1. That produces a three.js camera view that looks similar, but not quite aligned (the camera appears rotated along all axes by a degree or so, with perhaps some other small but clearly erroneous distortions/translations). So I'm missing something. Do I need to use the rectification transform R1 somehow too? I've tried using it as a rotation matrix and rotating the three.js camera by this amount, but it doesn't improve.
Any thoughts on using OpenCV's P1 and R1 matrix to make an equivalent camera view in three.js would be much appreciated.
Given the use case, I want to rotate an ROI in a specific image. For example, I have predefined bounding boxes within an image, and the task is to rotate the part of the image inside these boxes by a given angle.
Furthermore, the rotation must stay inside the bounding boxes, so the ROI is cropped after the transformation.
I tried an approach of creating a subimage of the bounding box, rotating it, and putting it back at its original position in the source image. The problem is that I have a huge dataset of images (> 100,000), and I think my method will probably slow down the process.
Is there another way to accomplish this transformation?
Edit: For better understanding, here is a quick mockup of how it should look after the transformation.
Mat rotateMatImage(const Mat& src, double angle) {
    // Rotate src about its own center by `angle` degrees using warpAffine
    cv::Point2f src_center(src.cols / 2.0F, src.rows / 2.0F);
    cv::Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0);
    cv::Mat rotated;
    warpAffine(src, rotated, rot_mat, src.size(), cv::INTER_CUBIC);
    return rotated;
}
Refer to the documentation: http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html?highlight=warpaffine
If you are using cv::Mat, yourMat(yourRect) will give you the ROI of the image that should be rotated. The fastest way to do this is to calculate the affine transform needed to rotate the four corners of the ROI to their destination points, and then apply the transform to the ROI. Using ROI selection and an affine transform is much faster than doing the pixel operations yourself, but I am not sure whether there is a faster method than the affine transform.
I have an image with a circle in it, and I use the openCV methods to detect it and display its edges and center on the image before the image is rectified and undistorted.
I rectify the image and undistort it using initUndistortRectifyMap in OpenCV. After remapping, the image is warped and the circle has an oval shape due to the change in perspective. The position coordinates of the center obviously change as well.
I cannot do the circle detection step after rectifying because this will produce inaccurate results, due to the perspective change.
My question is, how can I find the position of the center after the image has been undistorted and rectified?
There is an undistortPoints function which can transform a vector of Point2f or Point2d.