Stereo rectify - ROIs have different sizes - C++

I have done a stereo calibration and got validPixROI1 and validPixROI2 (green border). Now I want to use StereoSGBM, but the ROIs from the calibration (from stereoRectify) are not the same size. Does anyone know how to solve this?
Currently I do something like this:
Rect roiLeft(...);   // validPixROI1 from stereoRectify
Rect roiRight(...);  // validPixROI2 from stereoRectify
Mat cLeft(rLeft, roiLeft);
//Mat cRight(rRight, roiRight); // not the same size as cLeft...
Mat cRight(rRight, roiLeft);    // so crop the right image with the left ROI instead
stereoBM(cLeft, cRight, dst);
If I crop both images with that ROI, will the image midpoint still be the same in both?
Here it works.

Why not run stereoBM on the (uncropped) rectified images? Then you can use those ROIs afterwards to mask out the invalid bits of the result...
stereoBM(rLeft, rRight, disp);
// get the intersection of both ROIs, or use the target image ROI if you know the target image
cv::Rect visibleRoi = roiLeft & roiRight;
cv::Mat cDisp(disp, visibleRoi);
Now you have no issues with different size inputs, or different centers and such.
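For reference, a minimal sketch of the same flow with the modern C++ API (the matcher parameters here are assumptions and need tuning):
cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(/*numDisparities=*/64, /*blockSize=*/21);
cv::Mat disp;
bm->compute(rLeft, rRight, disp);          // disparity over the full rectified images
cv::Rect visibleRoi = roiLeft & roiRight;  // pixels valid in both views
cv::Mat cDisp = disp(visibleRoi);          // crop once, at the end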
Cheers

According to Wikipedia:
A point R at the intersection of the optical axis and the image plane. This point is referred to as the principal point or image center.
So I don't think the centers will be the same.
Refer to this site. In one of its examples the principal point is 302.71656, 242.33386 for a 640x480 camera, which shows that the principal point and the image center are not necessarily the same.
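To check this for your own calibration, you can read the principal point straight out of the camera matrix (a sketch; K is assumed to be the 3x3 CV_64F camera matrix from calibration, and image the calibrated image):
double cx = K.at<double>(0, 2);  // principal point x
double cy = K.at<double>(1, 2);  // principal point y
std::cout << "principal point: (" << cx << ", " << cy << "), "
          << "image center: (" << image.cols / 2.0 << ", " << image.rows / 2.0 << ")\n";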

Run the block matcher on the uncropped rectified images and then use:
cv::getValidDisparityROI(roi1, roi2, minDisparity, numberOfDisparities, SADWindowSize);
That call returns a cv::Rect that is a bounding box for all the valid pixels in the left image and the disparity map. The valid pixels are only those that both cameras can "see" (with a caveat on occluded edges).
Once you have the disparity map the right image becomes useless.
Be aware that the ROIs returned from stereoRectify are just the valid pixels after the remap from the camera intrinsics.
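Roughly, the flow looks like this (a sketch; the matcher parameters are assumptions, and the values passed to getValidDisparityROI must match the ones the matcher was created with):
int minDisparity = 0, numberOfDisparities = 64, SADWindowSize = 21;
cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(numberOfDisparities, SADWindowSize);
cv::Mat disp;
bm->compute(rectifiedLeft, rectifiedRight, disp);
cv::Rect validRoi = cv::getValidDisparityROI(roi1, roi2, minDisparity,
                                             numberOfDisparities, SADWindowSize);
cv::Mat validDisp = disp(validRoi);  // disparity restricted to valid pixels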

Related

How to undistort an image without pixel loss using OpenCV

As we all know, we can use cv::getOptimalNewCameraMatrix() with alpha = 1 to get a new camera matrix, and then pass that matrix to cv::undistort() to get the undistorted image. However, I find that the undistorted image is the same size as the original image, and part of it is covered in black.
So my question is: does this mean that original image pixels are lost? And is there any way to avoid pixel loss, or to get an output image larger than the original, with OpenCV?
cv::Mat NewKMatrixLeft = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft,
                                                       cv::Size(image.cols, image.rows), 1);
cv::undistort(image, show_image, KMatrixLeft, DistMatrixLeft, NewKMatrixLeft);
The sizes of image and show_image are both 640x480; however, in my view the undistorted image should be larger than 640x480, because part of it is meaningless (black).
Thanks!
In order to correct distortion, you basically have to reverse the process that caused it. This implies that pixels are stretched and squashed along various directions. In some cases this moves pixels away from the image edge, and OpenCV fills the vacated area with black pixels. There is nothing wrong with this approach; you can then choose how to crop the result to remove the black pixels at the edges.
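If you do want to keep every source pixel, one option (a sketch, not from the original answer) is to undistort onto a larger canvas: pass a bigger newImgSize to getOptimalNewCameraMatrix and remap into it. The 2x margin below is an assumption; pick whatever your lens actually needs.
cv::Size bigSize(image.cols * 2, image.rows * 2);
cv::Mat newK = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft,
                                             image.size(), 1.0, bigSize);
cv::Mat map1, map2, undistorted;
cv::initUndistortRectifyMap(KMatrixLeft, DistMatrixLeft, cv::Mat(), newK,
                            bigSize, CV_16SC2, map1, map2);
cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR);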

find image coordinates after blending

I implemented a stitching algorithm in OpenCV. What I want to do now is to find a certain point from one of the original images in the final stitched image. I can find this point in the image after it is warped, but then all the images are blended together and I want the coordinates of this point in the result.
For example, I have this image with size 499x369:
Now I warp this image and I want to find a point (for example 255x185).
The warped image has size 475x372 and the selected point from the original image is at 232x184.
After this, I use OpenCV's MultiBand blending function and blend the two warped images together.
Is there some way I could find that point after blending them together?
CODE:
Here I warp the images:
Ptr<detail::RotationWarper> warp = warper_->create(float(scale));
warp->warp(img, K, rotationMatrix, INTER_LINEAR, BORDER_REFLECT, img_warped);
//now I can find where a certain point pt ends up in the warped plane
Point2f warpedPt = warp->warpPoint(pt, K, rotationMatrix);
//now I blend the images
blender_->prepare(corners, sizes);
blender_->feed(img_warped_s, mask_warped, corners[img_idx]);
blender_->blend(result, result_mask);
So result is the last image that I posted, and I want to find that point in it.
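One way to recover the point (a sketch, not from the original post): warpPoint() returns coordinates in the global warped frame, which is the same frame the corners passed to prepare() live in, so subtracting the panorama's top-left corner should give the position in the blended result:
#include <opencv2/stitching/detail/util.hpp>

cv::Point2f warpedPt = warp->warpPoint(pt, K, rotationMatrix);
// Top-left of the union of all warped image rectangles = origin of `result`.
cv::Rect panoRoi = cv::detail::resultRoi(corners, sizes);
cv::Point2f ptInResult(warpedPt.x - panoRoi.x, warpedPt.y - panoRoi.y);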

OpenCV Binary Image Mask for Image Analysis in C++

I'm trying to analyse some images which have a lot of noise around the outside of the image, but a clear circular centre with a shape inside. The centre is the part I'm interested in, but the outside noise is affecting my binary thresholding of the image.
To ignore the noise, I'm trying to set up a circular mask of known centre position and radius whereby all pixels outside this circle are changed to black. I figure that everything inside the circle will now be easy to analyse with binary thresholding.
I'm just wondering if someone might be able to point me in the right direction for this sort of problem, please? I've had a look at this solution: How to black out everything outside a circle in OpenCV, but some of my constraints are different, and I'm confused by the way source images are loaded there.
Thank you in advance!
//First load your source image, here loaded as gray scale
cv::Mat srcImage = cv::imread("sourceImage.jpg", cv::IMREAD_GRAYSCALE);
//Then define your mask image
cv::Mat mask = cv::Mat::zeros(srcImage.size(), srcImage.type());
//Define your destination image
cv::Mat dstImage = cv::Mat::zeros(srcImage.size(), srcImage.type());
//I assume you want to draw the circle at the center of your image, with a radius of 50
cv::circle(mask, cv::Point(mask.cols/2, mask.rows/2), 50, cv::Scalar(255, 0, 0), -1, 8, 0);
//Now you can copy your source image to destination image with masking
srcImage.copyTo(dstImage, mask);
Then do your further processing on your dstImage. Assume this is your source image:
Then the above code gives you this as gray scale input:
And this is the binary mask you created:
And this is your final result after masking operation:
Since you are looking for a clear circular center with a shape inside, you could use the Hough Transform to find that area; a careful selection of parameters will help you get it exactly.
A detailed tutorial is here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
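For instance (a sketch using the grayscale srcImage from above; all parameter values are assumptions that need tuning for your images):
std::vector<cv::Vec3f> circles;
cv::Mat blurred;
cv::GaussianBlur(srcImage, blurred, cv::Size(9, 9), 2, 2);  // reduce noise first
cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                 1,                 // inverse accumulator resolution ratio
                 blurred.rows / 8,  // min distance between circle centers
                 100, 30,           // Canny high threshold, accumulator threshold
                 20, 0);            // min / max radius (0 = no upper bound)
// Each element of `circles` is (center_x, center_y, radius); use the best
// hit as the center and radius for the mask described below.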
For setting pixels outside a region black:
Create a mask image:
cv::Mat mask = cv::Mat::zeros(img_src.size(), img_src.type()); // zero-initialized, otherwise the mask contains garbage
Mark the points inside with white color:
cv::circle( mask, center, radius, cv::Scalar(255,255,255), -1, 8, 0 );
You can now use cv::bitwise_and to get an output image containing only the pixels enclosed in the mask.
cv::bitwise_and(mask,img_src,output);

OpenCV measure rectangular image size

I have an app that finds an object in a frame and uses warpPerspective to correct the image to be square. In the course of doing so, you specify an output image size. However, I want to do this without changing the object's apparent size. How can I unwarp the 4 corners of the image without changing the size of the image? I don't need the image itself; I just want to measure its height and width in pixels within the original image.
Get a transform matrix that will square up the corners.
std::vector<cv::Point2f> transformedPoints;
cv::Mat M = cv::getPerspectiveTransform(points, objectCorners);
cv::perspectiveTransform(points, transformedPoints, M);
This will square up the image, but in terms of the objectCorners coordinate system. Which is -0.5f to 0.5f not the original image plane.
BoundingRect almost does what I want.
cv::Rect boundingRectangle = cv::boundingRect(points);
But as the documentation states
The function calculates and returns the minimal up-right bounding rectangle for the specified point set.
And what I want is the bounding rectangle after it has been squared-up, not without squaring it up.
According to my understanding of your post, here is something that should help you.
OpenCV perspective transform example.
Update: if that still doesn't help you find the height and width within the image, take the minimum-area bounding rectangle of the points:
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
As the minAreaRect reference on OpenCV's website states
Finds a rotated rectangle of the minimum area enclosing the input 2D point set.
You can read box.size to get the width and height.
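For example (a sketch; the corner coordinates below are hypothetical stand-ins for your detected points):
#include <opencv2/opencv.hpp>
#include <iostream>

std::vector<cv::Point2f> points = { {12.f, 34.f}, {410.f, 40.f},
                                    {402.f, 310.f}, {8.f, 300.f} };
cv::RotatedRect box = cv::minAreaRect(points);
std::cout << "width: " << box.size.width
          << ", height: " << box.size.height << std::endl;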

openCV combining an image to another image on given coordinates

I wrote face & eye detection code. The next step is to put an image at the coordinates of the detected eye (for example: an eye patch, eye glasses).
I couldn't find the function to combine the source frame with the image I want to add.
Any suggestions?
Thanks
You can use cvCopy with a mask to do this. If the images do not have the same height and width, set the ROI of the destination image before using cvCopy.
See OpenCV documentation:
cvCopy
cvSetImageROI
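With the C++ API, the same idea is a masked copyTo into a sub-matrix ROI (a sketch; the names frame, patch, mask and the eye position are assumptions, not from the original answer):
#include <opencv2/opencv.hpp>

// Overlay `patch` onto `frame` at the detected eye position (ex, ey),
// using `mask` (CV_8U, non-zero = opaque) to skip transparent pixels.
void overlayPatch(cv::Mat& frame, const cv::Mat& patch,
                  const cv::Mat& mask, int ex, int ey)
{
    cv::Rect eyeRect(ex, ey, patch.cols, patch.rows);
    cv::Mat roi = frame(eyeRect);  // a view into frame, not a copy
    patch.copyTo(roi, mask);       // masked copy, like cvCopy with a mask
}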