When using findContours to extract contours, the output contour points miss a pixel at concave corners. As shown in this image, two pixels are missing. How can I fill in these pixels and ensure that any two consecutive contour points are always vertically or horizontally adjacent?
Problem Image
For reference, the original image is attached below:
Original Image
A solution or another OpenCV API would be appreciated.
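One way to post-process the result (a minimal sketch, assuming the contour comes from cv2.findContours with cv2.CHAIN_APPROX_NONE on a binary mask; the function and file names are placeholders): walk the contour and, at every diagonal step, insert the in-between pixel that lies on the object, so consecutive points are always 4-connected:

```python
import cv2
import numpy as np

def fill_diagonal_steps(contour, binary):
    """Make one contour from cv2.findContours(..., cv2.CHAIN_APPROX_NONE)
    4-connected: every step becomes purely horizontal or vertical.

    At each diagonal step the two candidate in-between pixels are tested
    against the binary image, and the one on the object is inserted,
    which fills the pixel skipped at a concave corner."""
    pts = contour.reshape(-1, 2)            # (N, 2), columns are (x, y)
    out = []
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]           # contour is closed, so wrap around
        out.append((x0, y0))
        if x0 != x1 and y0 != y1:           # diagonal step (8-connected only)
            a, b = (x1, y0), (x0, y1)       # the two possible corner pixels
            out.append(a if binary[a[1], a[0]] else b)  # prefer the foreground one
    return np.array(out, dtype=np.int32).reshape(-1, 1, 2)

# usage sketch (OpenCV 4.x return signature)
binary = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
fixed = [fill_diagonal_steps(c, binary) for c in contours]
```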
Related
Desired end result: detecting the ellipse in the photo and drawing its edges accurately:
Picture I'm trying to work with:
I'm looking to detect the ellipse of an eye in a side-view image, but one problem is that when I run the Canny function and draw the edges, it can only find edges on the eyelashes and forms some sort of ellipse there. This noise also causes problems in my Hough-transform ellipse function: I am thresholding for all values higher than 0.9 (and lower than 0.7), and these pixels on the eyelashes have the maximum intensity values, so they are taken into account.
Is there a way to remove the noise (the eyelashes)? I am using skimage (scikit-image) for all of this. This is a side view of the eye.
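One classical pre-processing option (a sketch, not the only way): eyelashes are thin and dark, so a grayscale morphological closing wider than a lash suppresses them, and a median filter cleans up the remaining speckle before Canny. The file name and structuring-element radii below are placeholders to tune:

```python
from skimage import io, color, img_as_ubyte
from skimage.filters import median
from skimage.morphology import closing, disk
from skimage.feature import canny

img = img_as_ubyte(color.rgb2gray(io.imread("eye_side.png")))

# Eyelashes are thin, dark structures: a grayscale closing with a disk
# wider than a lash removes them while leaving the larger eye contour.
no_lashes = closing(img, disk(7))

# A median filter knocks down the remaining speckle before edge detection.
smooth = median(no_lashes, disk(3))

edges = canny(smooth, sigma=2)   # feed this to hough_ellipse instead of the raw image
```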
Assume I have a camera that has been calibrated using the full camera frame to obtain the camera matrix and distortion coefficients. Also, assume that I have a 3D world point expressed in that camera's frame.
I know that I can use cv::projectPoints() with rvec=tvec=(0,0,0), the camera matrix, and distortion coefficients to project the point to the (full) distorted frame. I also know that if I am receiving an ROI from the camera (which is a cropped portion of the full distorted frame), I can adjust for the ROI simply by subtracting the (x,y) coordinate of the top left corner of the ROI from the result of cv::projectPoints(). Lastly, I know that if I use cv::projectPoints() with rvec=tvec=(0,0,0), the camera matrix, and zero distortion coefficients I can project the point to the full undistorted frame (correct me if I'm wrong, but I think this requires that you use the same camera matrix in cv::undistort() and don't use a newCameraMatrix).
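For reference, the three cases above might look like this in Python (a sketch; the calibration values, 3D point, and ROI offset are illustrative placeholders):

```python
import numpy as np
import cv2

# assumed calibration results and a 3D point in the camera frame
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])    # illustrative values
pt3d = np.array([[0.1, -0.05, 1.0]])
rvec = tvec = np.zeros(3)
roi_tl = np.array([300.0, 200.0])                # ROI top-left in the full frame

# 1) distorted full frame
uv_dist, _ = cv2.projectPoints(pt3d, rvec, tvec, K, dist)

# 2) distorted ROI: shift by the ROI's top-left corner
uv_dist_roi = uv_dist.reshape(-1, 2) - roi_tl

# 3) undistorted full frame: same camera matrix, zero distortion
uv_undist, _ = cv2.projectPoints(pt3d, rvec, tvec, K, None)
```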
How do I handle the case where I want to project to the undistorted version of an ROI that I am receiving (i.e. I get a (distorted) ROI and then use cv::undistort() on it using the method described here to account for the fact that it's an ROI, and then I want to project the 3D point to that resulting image)?
If there is a better way to go about all this, I am open to suggestions as well. My goal is to be able to project 3D points to distorted and undistorted frames, with or without an ROI, where the ROI is always originally defined by the camera feed and therefore always defined in the distorted frame (i.e. four different cases: distorted full frame, distorted ROI, undistorted full frame, and the undistorted version of a distorted ROI).
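Continuing the sketch above, one consistent way to handle the fourth case, assuming the ROI is undistorted as in the linked method (i.e. with the principal point shifted by the crop offset), is to use that same shifted matrix for both the undistortion and the projection:

```python
# 4) undistorted version of a distorted ROI: shift the principal point
# by the crop offset, undistort the ROI with that matrix, and project
# with the same matrix and zero distortion, so the image and the
# projected point share one pixel coordinate system.
K_roi = K.copy()
K_roi[0, 2] -= roi_tl[0]
K_roi[1, 2] -= roi_tl[1]

# undistorted_roi = cv2.undistort(roi_img, K_roi, dist)   # roi_img: the cropped frame
uv_undist_roi, _ = cv2.projectPoints(pt3d, rvec, tvec, K_roi, None)
```

The key point is that the image and the projected point must use the same camera matrix and the same (zero) distortion model.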
I am using two different methods to render an image (as an OpenCV matrix):
1. an implemented projection function that uses the camera intrinsics (focal length, principal point; distortion is disabled) - this function is used in other software packages and is supposed to work correctly (repository)
2. a 2D-to-2D image warping (here, I determine the intersections of my camera's corner rays with the 2D image that should be warped into my camera frame); this backprojection of the corner points uses the same camera model as above (a sketch of this step follows below)
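The warping step looks roughly like this (a minimal sketch; all values are placeholders): with the four backprojected corner intersections as source points, the warp is a single homography:

```python
import numpy as np
import cv2

plane_img = cv2.imread("plane.png")    # the 2D image to be warped (placeholder)
h, w = 480, 640                        # camera frame size (placeholder)

# intersections of the four camera corner rays with the plane,
# expressed in the plane image's pixel coordinates (placeholder values)
src = np.float32([[12, 34], [610, 20], [590, 470], [25, 455]])
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])   # camera frame corners

H = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(plane_img, H, (w, h))
```

Note that the principal point enters this warp only through the corner rays, so if the rays were generated as if the principal point were at the image center, that offset gets baked into H.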
Now I overlay these two images, and what should happen is that the projected pen tip (method 1) lines up with the line drawn on the warped image (method 2). However, this is not happening.
There is a tiny shift in both directions, depending on the orientation of the pen that is writing, and it shrinks when I shift the principal point of the camera. My question is: since I am not considering the principal point in the 2D-to-2D image warping, can this be the cause of the mismatch? Or is it generally impossible to align the two, since the image warping is a simplification of the projection process?
Grey Point: projected origin (should fall in line with the edges of the white area)
Blue Reticle: penTip that should "write" the Bordeaux-colored line
Grey Line: pen approximation
Red Edge: "x-axis" of white image part
Green Edge: "y-axis" of white image part
EDIT:
I also did the same projection with the origin of the coordinate system, and here the mismatch grows the further the origin moves from the center of the image (so delta[warp, project] gets larger at the image borders compared to the center).
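For reference, in the pinhole model (distortion disabled, as in method 1) the principal point enters only as a constant pixel offset:

$$ u = f_x \frac{X}{Z} + c_x, \qquad v = f_y \frac{Y}{Z} + c_y $$

So omitting (c_x, c_y) from the warp alone would shift every point by the same amount; a mismatch that grows towards the borders would seem to point at the warp's corner points themselves being slightly off, rather than at a pure principal-point translation.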
I have an image consisting of concentric circles. I'm trying to undistort the image so that the circles are equally spaced around the edges, as if the camera were parallel to the plane. (Some circles will appear closer to the next, which is fine; I just want equal spacing all around the edges between two adjacent circles.)
I've tried estimating a rigid transform by specifying points on the outer circle, but it distorts the inner circles too much, and I've tried findHomography by specifying points on all the circles and comparing them with points on the circles where they should be.
From what I can see, the outer circles are 'squished' vertically, so they need to be compressed horizontally, but the inner circles are more circular. What can I do to undistort this image?
https://code.google.com/p/ipwithopencv/source/browse/trunk/ThinPlateSpline/ThinPlateSpline/?r=4
Using this implementation of Thin Plate Spline, I was able to input a set of points representing all the distorted circles, and another set of points which represent where they should be, to get the desired output. It isn't absolutely perfect, but it's more than good enough!
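As an aside, OpenCV itself ships a thin-plate-spline transformer in its shape module, which can do the same thing without the external code. A sketch (point values and file name are placeholders):

```python
import numpy as np
import cv2

img = cv2.imread("circles.png")

# point correspondences: where the circle points are, and where they
# should be (placeholder values; shapes must be float32 of shape (1, N, 2))
src = np.float32([[30, 40], [320, 12], [600, 38], [310, 470]]).reshape(1, -1, 2)
dst = np.float32([[20, 20], [320, 10], [620, 20], [320, 460]]).reshape(1, -1, 2)
matches = [cv2.DMatch(i, i, 0) for i in range(src.shape[1])]

tps = cv2.createThinPlateSplineShapeTransformer()
# quirk: to warp the image so that src points land on dst points,
# the target shape goes first in estimateTransformation
tps.estimateTransformation(dst, src, matches)
out = tps.warpImage(img)
```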
I have an image with a circle in it, and I use OpenCV methods to detect it and display its edges and center on the image before the image is rectified and undistorted.
I rectify and undistort the image using initUndistortRectifyMap in OpenCV. After remapping, the image is warped and the circle has an oval shape due to the change in perspective. The position coordinates of the center obviously change as well.
I cannot do the circle detection step after rectifying because this will produce inaccurate results, due to the perspective change.
My question is, how can I find the position of the center after the image has been undistorted and rectified?
There is an undistortPoints function which can transform a vector of Point2f or Point2d.
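A sketch of how that could be used for the circle center (the calibration values here are placeholders; R and P must be the same rectification rotation and new projection matrix that were passed to initUndistortRectifyMap):

```python
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])    # illustrative
R = np.eye(3)                                   # rectification rotation (e.g. from stereoRectify)
P = K.copy()                                    # new projection matrix

center = np.float32([[[300.0, 250.0]]])         # detected center (x, y) in the distorted image

# maps the distorted-image point into the rectified/undistorted image;
# pass the same R and P that were used in initUndistortRectifyMap
rect_center = cv2.undistortPoints(center, K, dist, R=R, P=P)
print(rect_center)                              # (1, 1, 2) array with the rectified coordinates
```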