Rotated image when using "findHomography" and "warpPerspective" - c++

I am currently developing an ALPR system. To detect the plate, I am first using the method described here.
The problem comes with certain images: after using findHomography and warpPerspective, the plate image comes out rotated.
This is the original image that gives me issues.
This is the detected plate contour
And this is the warped image
As you can see, it is rotated 90 degrees. In other examples the detection works great.
The specific piece of code:
cv::Mat warpped_plate( PLATE_HEIGHT, PLATE_WIDTH, CV_8UC3 );
vector< cv::Point> real_plate_polygons;
real_plate_polygons = {cv::Point(PLATE_WIDTH, PLATE_HEIGHT), cv::Point(0, PLATE_HEIGHT), cv::Point(0, 0), cv::Point(PLATE_WIDTH, 0)};
cv::Mat homography = cv::findHomography( plate_polygons, real_plate_polygons );
cv::warpPerspective(source_img, warpped_plate, homography, cv::Size(PLATE_WIDTH, PLATE_HEIGHT));
Where plate_polygons contains the four points of the plate (and they are correct, because they were used to draw the white box in the mask).
Any ideas? Thanks in advance!

As nico mentioned, the problem was with the order of the points in plate_polygons. The algorithm generating them was not consistent about the starting point (in my case, it started from the lower corner).
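For anyone hitting the same issue, here is a minimal, untested sketch of one common way to force a consistent corner order (top-left, top-right, bottom-right, bottom-left) before calling findHomography; the helper name orderCorners and the use of cv::Point2f are just illustrative (needs <opencv2/opencv.hpp> and <algorithm>):
std::vector<cv::Point2f> orderCorners(std::vector<cv::Point2f> pts)
{
    // Top-left has the smallest x+y, bottom-right the largest;
    // top-right has the smallest y-x, bottom-left the largest.
    auto bySum  = [](const cv::Point2f& a, const cv::Point2f& b) { return a.x + a.y < b.x + b.y; };
    auto byDiff = [](const cv::Point2f& a, const cv::Point2f& b) { return a.y - a.x < b.y - b.x; };
    std::vector<cv::Point2f> ordered(4);
    ordered[0] = *std::min_element(pts.begin(), pts.end(), bySum);   // top-left
    ordered[1] = *std::min_element(pts.begin(), pts.end(), byDiff);  // top-right
    ordered[2] = *std::max_element(pts.begin(), pts.end(), bySum);   // bottom-right
    ordered[3] = *std::max_element(pts.begin(), pts.end(), byDiff);  // bottom-left
    return ordered;
}
The destination points then have to be listed in the same order for the homography to come out unrotated.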

Can I create a transformation matrix from rotation/translation vectors?

I'm trying to deskew an image that has an element of known size. Given this image:
I can use aruco::estimatePoseBoard, which returns rotation and translation vectors. Is there a way to use that information to deskew everything that's in the same plane as the marker board? (Unfortunately my linear algebra is rudimentary at best.)
Clarification
I know how to deskew the marker board. What I want to be able to do is deskew the other things (in this case, the cloud-shaped object) in the same plane as the marker board. I'm trying to determine whether or not that's possible and, if so, how to do it. I can already put four markers around the object I want to deskew and use the detected corners as input to getPerspectiveTransform along with the known distance between them. But for our real-world application it may be difficult for the user to place markers exactly. It would be much easier if they could place a single marker board in the frame and have the software deskew the other objects.
Since you tagged OpenCV:
From the image I can see that you have detected the corners of the whole black box. So just get the four outermost corner points, one way or another:
Then it is like this:
std::vector<cv::Point2f> src_points = {/*Fill your 4 corners here*/};
std::vector<cv::Point2f> dst_points = {cv::Point2f(0, 0), cv::Point2f(width, 0), cv::Point2f(width, height), cv::Point2f(0, height)};
auto H = cv::getPerspectiveTransform(src_points, dst_points);
cv::Mat cropped_image;
cv::warpPerspective(full_image, cropped_image, H, cv::Size(width, height));
I was stuck on the assumption that the destination points in the call to getPerspectiveTransform had to be the corners of the output image (as they are in Humam's suggestion). Once it dawned on me that the destination points could be somewhere within the output image I had my answer.
float boardX = 1240;
float boardY = 1570;
float boardWidth = 1730;
float boardHeight = 1400;
vector<Point2f> destinationCorners;
destinationCorners.push_back(Point2f(boardX + boardWidth, boardY));
destinationCorners.push_back(Point2f(boardX + boardWidth, boardY + boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY + boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY));
Mat h = getPerspectiveTransform(detectedCorners, destinationCorners);
Mat bigImage(image.size() * 3, image.type(), Scalar(0, 50, 50));
warpPerspective(image, bigImage, h, bigImage.size());
This fixed the perspective of the board and everything in its plane. (The waviness of the board is due to the fact that the paper wasn't lying flat in the original photo.)

segmentation of the source image in opencv based on canny edge outline attained from processing the said source image

I have a source image. I need a particular portion to be segmented from it and saved as another image. I have the Canny outline of the portion I need to segment out, but how do I use it to cut that portion from the source image? I have attached both the source image and the Canny edge outline. Please suggest a solution.
EDIT-1: Alexander Kondratskiy, is this what you meant by filling the boundary?
EDIT-2: According to Kanat, I have done this.
Now how do I separate the regions that are outside and inside of the contour into two separate images?
EDIT-3: I thought of AND-ing the mask and the contour-lined source image. Since I am using C, I am having a little difficulty.
This is the code I use to AND:
hsv_gray = cvCreateImage( cvSize(seg->width, seg->height), IPL_DEPTH_8U, 1 );
cvCvtColor( seg, hsv_gray, CV_BGR2GRAY );
hsv_mask=cvCloneImage(hsv_gray);
IplImage* contourImg =cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
IplImage* newImg=cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
cvAnd(contourImg, hsv_mask,newImg,NULL);
I always get a size or type mismatch error. I adjusted the size, but I can't seem to adjust the type, since one image (hsv_mask) has 1 channel and the others have 3 channels.
@kanat - I also tried your boundingRect approach but could not seem to get it right with the C API.
Use cv::findContours on your second image to find the contour of the segment. Then use cv::boundingRect to find the bounding box for this segment. After that you can create a matrix and save the cropped bounding box from your second image into it (as I see it is a binary image). To crop the needed region use this:
cv::getRectSubPix(your_image, BB_size, cv::Point(BB.x + BB.width/2, BB.y + BB.height/2), new_image);
Then you can save new_image using cv::imwrite. That's it.
EDIT:
If you found only one contour then use this (otherwise iterate over the found contours). The following code shows the steps, but sorry, I can't test it right now:
std::vector<std::vector<cv::Point>> contours;
// cv::findContours(..., contours, ...);
cv::Rect BB = cv::boundingRect(cv::Mat(contours[0]));
cv::Mat new_image;
cv::getRectSubPix(your_image, BB.size(), cv::Point(BB.x + BB.width/2,
BB.y + BB.height/2), new_image);
cv::imwrite("new_image_name.jpg", new_image);
You could fill the boundary created by the canny-edge detector, and use that as an alpha mask on the original image.
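A rough sketch of that idea with the C++ API (the question uses the C API), assuming edges is the binary Canny output containing a single closed boundary and source is the original image; it also splits the inside and outside regions as asked in EDIT-2:
// Find the boundary in the Canny output and fill it to build a mask.
std::vector<std::vector<cv::Point> > contours;
cv::findContours(edges.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
cv::Mat mask = cv::Mat::zeros(edges.size(), CV_8UC1);
cv::drawContours(mask, contours, -1, cv::Scalar(255), cv::FILLED);
// Copy only the pixels inside the boundary; everything outside stays black.
cv::Mat inside;
source.copyTo(inside, mask);
// The outside region is simply the complement of the mask.
cv::Mat outside;
source.copyTo(outside, ~mask);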

Stereo rectify - ROI have different sizes

I have done a stereo calibration and I got validPixROI1 and 2 (green border). Now I want to use StereoSGBM, but the ROIs from calibration (from stereoRectify) are not the same size. Does anyone know how to solve this?
Actually I do something like this:
Rect roiLeft(...);
Rect roiRight(...);
Mat cLeft(rLeft, roiLeft);
//Mat cRight(rRight, roiRight); // not same size...
Mat cRight(rRight, roiLeft);
stereoBM(cLeft,cRight, dst);
If I crop my images with that ROI, will the picture middle point stay the same?
Here it works.
Why not run stereoBM on the (uncropped) calibrated images? Then you can use those ROIs afterwards to mask out the invalid bits of the result...
stereoBM(rLeft,rRight, disp);
//get intersection of both rois or use target image roi, if you know the target image
cv::Rect visibleRoi = roiLeft & roiRight;
cv::Mat cDisp(disp,visibleRoi);
Now you have no issues with different size inputs, or different centers and such.
Cheers
According to the wiki:
A point R at the intersection of the optical axis and the image plane. This point is referred to as the principal point or image center.
So I don't think the center will be the same.
Refer to this site. In one of the examples there, the principal point is (302.71656, 242.33386) for a 640x480 pixel camera, which shows that the principal point and the image center are not the same.
Run the block matcher on the uncropped rectified images and then use:
cv::getValidDisparityROI(roi1, roi2, minDisparity, numberOfDisparities, SADWindowSize);
That call returns a cv::Rect that will be a bounding box for all the valid pixels in the left image and the disparity map. The valid pixels are only pixels that both cameras can "see" (caveat on occluded edges).
Once you have the disparity map the right image becomes useless.
Be aware that the ROIs returned from stereoRectify only cover the pixels that are valid after the remap from the camera intrinsics.
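As a rough sketch of how those pieces fit together (OpenCV 3.x-style API; rLeft/rRight are the rectified images, and roi1/roi2 and the matcher parameters are assumed to be the same ones used with stereoRectify and the block matcher):
cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(numberOfDisparities, SADWindowSize);
cv::Mat disp;
bm->compute(rLeft, rRight, disp);                    // full-size disparity map
cv::Rect validRoi = cv::getValidDisparityROI(roi1, roi2, minDisparity,
                                             numberOfDisparities, SADWindowSize);
cv::Mat validDisp = disp(validRoi);                  // crop to the valid region afterwards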

How to detect and draw a circle around the iris region of an eye?

I have been trying to detect the iris region of an eye and thereafter draw a circle around the detected area. I have managed to obtain a clear black and white eye image containing just the pupil, upper eyelid line and eyebrow using the threshold function.
Once this is achieved, HoughCircles is applied to detect whether there are any circles in the image. However, it never detects any circular regions. After reading up on HoughCircles, I found that the documentation states that
the Hough gradient method works as follows:
First the image needs to be passed through an edge detection phase (in this case, cvCanny()).
I then added a Canny detector after the threshold function. This still detected zero circles. If I remove the threshold function, the eye image becomes busy with unnecessary lines; hence I included it.
cv::equalizeHist(gray, img);
medianBlur(img, img, 1);
IplImage img1 = img;
cvAddS(&img1, cvScalar(70,70,70), &img1);
//converting IplImage to cv::Mat
Mat imgg = cvarrToMat(&img1);
medianBlur(imgg, imgg, 1);
cv::threshold(imgg, imgg, 120, 255, CV_THRESH_BINARY);
cv::Canny(img, img, 0, 20);
medianBlur(imgg, imgg, 1);
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
HoughCircles(imgg, circles, CV_HOUGH_GRADIENT, 1, imgg.rows/8, 100, 30, 1, 5);
How can I overcome this problem?
Would hough circle method work?
Is there a better solution to detecting the iris region?
Are the parameters chosen correct?
Also note that the image is directly obtained from the webcam.
Try using Daugman's integro-differential operator. It calculates the centre of the iris and pupil and draws an accurate circle on the iris and pupil boundaries. The MATLAB code is available here: iris boundary detection using Daugman's method. Since I'm not familiar with OpenCV, you could convert it yourself.
The binary eye image contained three different elements: the eyelashes, the eye and the eyebrow. The main aim is to get to the region of interest, which is the eye/iris, excluding the eyebrows and eyelashes. I followed these steps (a rough code sketch follows the bounding-box link below):
Step 1: Discard the upper half of the eye image, so we are left with the eyelashes, the eye region and small shadow regions.
Step 2: Find the contours.
Step 3: Find the largest contour, so that we have just the eye region.
Step 4: Use a bounding box to create a rectangle around the eye area:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
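A rough, untested C++ sketch of those four steps, assuming binaryEye is the thresholded black-and-white eye image:
// Step 1: discard the upper half (eyebrow) and keep the lower half.
cv::Mat lowerHalf = binaryEye(cv::Rect(0, binaryEye.rows / 2, binaryEye.cols, binaryEye.rows / 2));
// Step 2: find the contours.
std::vector<std::vector<cv::Point> > contours;
cv::findContours(lowerHalf.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
// Step 3: keep the largest contour, assumed to be the eye region.
size_t largest = 0;
for (size_t i = 1; i < contours.size(); ++i)
    if (cv::contourArea(contours[i]) > cv::contourArea(contours[largest]))
        largest = i;
// Step 4: bounding box around the eye area (assumes at least one contour was found).
cv::Rect eyeBox = cv::boundingRect(contours[largest]);
cv::Mat eyeRoi = lowerHalf(eyeBox);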
Now we have the region of interest. From this point I am extracting these images and using a neural network to train the system to emulate the properties of a mouse. I'm currently learning about neural networks (link1) and how to use them in OpenCV.
The previous methods, which included detecting the iris point, creating an eye vector, tracking it and calculating the gaze on the screen, were time consuming. Also, there is light reflected on the iris, making it difficult to detect.

Contours opencv : How to eliminate small contours in a binary image

I am currently working on an image processing project. I am using OpenCV 2.3.1 with VC++.
I have written the code such that the input image is filtered to only the blue color and converted to a binary image. The binary image has some small objects which I don't want. I wanted to eliminate those small objects, so I used OpenCV's cvFindContours() method to detect contours in the binary image, but the problem is I can't eliminate the small objects in the output image. I used the cvContourArea() function, but it didn't work properly, and the erode function also didn't work properly.
So could someone please help me with this problem?
The binary image which I obtained :
The result/output image which I want to obtain :
Ok, I believe your problem could be solved with the bounding box demo recently introduced by OpenCV.
As you have probably noticed, the object you are interested in should be inside the largest rectangle drawn in the picture. Luckily, this code is not very complex, and I'm sure you can figure it all out by investigating and experimenting with it.
Here is my solution to eliminate small contours.
The basic idea is to check the length/area of each contour, then delete the smaller ones from the vector container.
Normally you will get contours like this:
Mat canny_output; //example from OpenCV Tutorial
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Canny(src_img, canny_output, thresh, thresh*2, 3);//with or without, explained later.
findContours(canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0,0));
With Canny() pre-processing, you will get contour segments; however, each segment is stored with its boundary pixels as a closed ring. In this case, you can check the length and delete the small ones like this:
for (vector<vector<Point> >::iterator it = contours.begin(); it!=contours.end(); )
{
if (it->size()<contour_length_threshold)
it=contours.erase(it);
else
++it;
}
Without Canny() preprocessing, you will get the contours of whole objects.
Similarly, you can also use the area to define a threshold for eliminating small objects, as the OpenCV tutorial shows; a full area-based filter loop is sketched after the snippet below:
vector<Point> contour = contours[i];
double area0 = contourArea(contour);
Note that contourArea() computes the area enclosed by the contour (via the Green formula), which may differ from the number of non-zero pixels inside it.
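For completeness, a sketch of the corresponding area-based filter, with contour_area_threshold as a value you would tune for your application:
double contour_area_threshold = 100.0; // tune for your images
for (vector<vector<Point> >::iterator it = contours.begin(); it != contours.end(); )
{
    if (contourArea(*it) < contour_area_threshold)
        it = contours.erase(it);
    else
        ++it;
}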
Are you sure filtering by small contour area didn't work? It's always worked for me. Can we see your code?
Also, as sue-ling mentioned, it's a good idea to use both erode and dilate to approximately preserve area. To remove small noisy bits, use erode first, and to fill in holes, use dilate first.
And another aside, you may want to check out the new C++ versions of the cv* functions if you weren't aware of them already (documentation for findContours). They're much easier to use, in my opinion.
Judging by the before and after images, you need to determine the area of all the white areas or blobs, then apply an area threshold value. This would eliminate all areas smaller than the threshold and leave only the large white region seen in the second image. After using the cvFindContours function, try using zero-order moments. These would return the area of the blobs in the image. This link might be helpful in implementing what I've just described.
http://www.aishack.in/2010/07/tracking-colored-objects-in-opencv/
I believe you can use morphological operators like erode and dilate (read more here).
You need to perform erosion with a kernel size close to the radius of the circle on the right (the one you want to eliminate),
followed by dilation using the same kernel to fill the gaps created by the erosion step.
FYI, erosion followed by dilation with the same kernel is called opening.
The code will be something like this:
int erosion_size = 30; // adjust for your application
Mat erode_element = getStructuringElement( MORPH_ELLIPSE,
Size( 2*erosion_size + 1, 2*erosion_size+1 ),
Point( erosion_size, erosion_size ) );
erode( binary_img, binary_img, erode_element );
dilate( binary_img, binary_img, erode_element );
It is not a fast way but may be useful in some cases.
There is a new function in OpenCV 3.0 - connectedComponentsWithStats. With it we can get the area of each connected component and eliminate the unnecessary ones. So we can easily remove a circle with holes that has the same bounding box as a solid circle.
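A minimal sketch of that approach, assuming binary_img is a CV_8UC1 binary image and min_area is the smallest component you want to keep:
cv::Mat labels, stats, centroids;
int n = cv::connectedComponentsWithStats(binary_img, labels, stats, centroids);
int min_area = 100; // tune for your application
cv::Mat cleaned = cv::Mat::zeros(binary_img.size(), CV_8UC1);
for (int label = 1; label < n; ++label) // label 0 is the background
{
    if (stats.at<int>(label, cv::CC_STAT_AREA) >= min_area)
        cleaned.setTo(255, labels == label); // keep only the large components
}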