find image coordinates after blending - c++

I implemented a stitching algorithm in OpenCV. What I want to do now is to find a certain point from one of the original images in the final stitched image. I can find this point in the image after it is warped, but then all the images are blended together, and I want to find the coordinates of this point in the blended result.
For example, I have this image with size 499x369:
Now I warp this image and I want to find a point (for example 255x185).
This warped image has size 475x372, and the selected point from the original image is at 232x184.
After this, I use OpenCV's MultiBand blending function and blend the 2 warped images together.
Is there some way I could find that point after blending them together?
CODE:
Here I warp the images:
Ptr<detail::RotationWarper> warp = warper_->create(float(scale));
warp->warp(img, K, rotationMatrix, INTER_LINEAR, BORDER_REFLECT, img_warped);
//now I can find where a certain point in warped image is
cv::Point2f warpedPt = warp->warpPoint(pt, K, rotationMatrix); // pt: the point in the original image, e.g. (255, 185)
//now I blend the images
blender_->prepare(corners, sizes);
blender_->feed(img_warped_s, mask_warped, corners[img_idx]);
blender_->blend(result, result_mask);
So result is the last image that I posted, and I want to find that point in that image.
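A minimal sketch of one way this could work, assuming the variables from the code above (warper_, K, rotationMatrix, corners, sizes) and that blender_->prepare(corners, sizes) places the panorama so that its top-left pixel corresponds to detail::resultRoi(corners, sizes).tl(); the blended-image coordinate would then be the warped point minus that offset:
#include <opencv2/stitching/detail/warpers.hpp>
#include <opencv2/stitching/detail/util.hpp>   // cv::detail::resultRoi

cv::Point2f pointInBlendedResult(const cv::Point2f& ptInOriginal,
                                 const cv::Ptr<cv::detail::RotationWarper>& warp,
                                 const cv::Mat& K, const cv::Mat& rotationMatrix,
                                 const std::vector<cv::Point>& corners,
                                 const std::vector<cv::Size>& sizes)
{
    // Point in the common warped coordinate frame (the same frame as `corners`)
    cv::Point2f warpedPt = warp->warpPoint(ptInOriginal, K, rotationMatrix);

    // Top-left corner of the panorama the blender was prepared with
    cv::Rect panoRoi = cv::detail::resultRoi(corners, sizes);

    // Shift into the blended image's own pixel coordinates
    return cv::Point2f(warpedPt.x - panoRoi.x, warpedPt.y - panoRoi.y);
}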

Related

How to dedistort an image without pixel loss using opencv

As we all know, we can use the function cv::getOptimalNewCameraMatrix() with alpha = 1 to get a new camera matrix, and then pass that new camera matrix to cv::undistort() to get the undistorted image. However, I find that the undistorted image is the same size as the original image, and part of it is covered by black.
So my question is: does this mean that some of the original image pixels are lost? And is there any way with OpenCV to avoid pixel loss, or to get an output image larger than the original?
cv::Mat NewKMatrixLeft = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft, cv::Size(image.cols, image.rows), 1);
cv::undistort(image, show_image, KMatrixLeft, DistMatrixLeft, NewKMatrixLeft);
The size of image and show_image is 640x480 in both cases; however, in my view the undistorted image should be larger than 640x480, because part of it is meaningless (black).
Thanks!
In order to correct distortion, you basically have to reverse the process that caused the initial distortion. This implies that pixels are stretched and squashed along various directions to correct the distortion. In some cases, this would move the pixels away from the image edge. In OpenCV, this is handled by inserting black pixels. There is nothing wrong with this approach. You can then choose how to crop it to remove the black pixels at the edges.
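A hedged sketch of one way to keep all source pixels, assuming the matrices from the question (KMatrixLeft, DistMatrixLeft): pass a larger newImgSize to cv::getOptimalNewCameraMatrix() and remap into an output of that size with cv::initUndistortRectifyMap() and cv::remap(). The 2x output size below is an arbitrary example, not a recommendation:
cv::Size srcSize(image.cols, image.rows);
cv::Size dstSize(image.cols * 2, image.rows * 2);   // big enough to hold the stretched content

// alpha = 1 keeps all source pixels; newImgSize = dstSize spreads them over a larger image
cv::Mat newK = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft, srcSize, 1.0, dstSize);

// Build undistortion maps for the larger output size and remap into it
cv::Mat map1, map2;
cv::initUndistortRectifyMap(KMatrixLeft, DistMatrixLeft, cv::Mat(), newK, dstSize, CV_32FC1, map1, map2);

cv::Mat undistorted;
cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR);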

Warp perspective and stitch/overlap images (C++)

I am detecting and matching features of a pair of images, using a typical detector-descriptor-matcher combination and then findHomography to produce a transformation matrix.
After this, I want the two images to be overlapped (the second one, imgTrain, over the first one, imgQuery), so I warp the second image with the transformation matrix:
cv::Mat imgQuery, imgTrain;
...
TRANSFORMATION_MATRIX = cv::findHomography(...)
...
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgTrain.size());
From here on, I don't know how to produce an image that contains the original imgQuery with the warped imgTrainWarped on it.
I consider two scenarios:
1) One where the size of the final image is the size of imgQuery
2) One where the size of the final image is big enough to accommodate both imgQuery and imgTrainWarped, since they overlap only partially, not completely. I understand this second case might have black/blank space somewhere around the images.
You should warp to a destination matrix that has the same dimensions as imgQuery. After that, loop over the whole warped image and copy pixels to the first image, but only where the warped image actually holds a warped pixel. That is most easily done by warping an additional mask. Please try this:
cv::Mat imgMask = cv::Mat(imgTrain.size(), CV_8UC1, cv::Scalar(255));
cv::Mat imgMaskWarped;
cv::warpPerspective(imgMask, imgMaskWarped, TRANSFORMATION_MATRIX, imgQuery.size());
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgQuery.size());
// now copy only masked pixel:
imgTrainWarped.copyTo(imgQuery, imgMaskWarped);
Please try this and tell me whether it is ok and solves scenario 1. For scenario 2, you would test how big the image must be before warping (by using the transformation) and copy both images into a destination image that is big enough; see the sketch below.
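A rough sketch of scenario 2 under those assumptions (names taken from the question; the extra translation keeps all warped coordinates non-negative):
// Project the corners of imgTrain with the homography to see how big the canvas must be
std::vector<cv::Point2f> trainCorners = {
    cv::Point2f(0.f, 0.f),
    cv::Point2f((float)imgTrain.cols, 0.f),
    cv::Point2f((float)imgTrain.cols, (float)imgTrain.rows),
    cv::Point2f(0.f, (float)imgTrain.rows)
};
std::vector<cv::Point2f> warpedCorners;
cv::perspectiveTransform(trainCorners, warpedCorners, TRANSFORMATION_MATRIX);

// Bounding box that contains both the warped train image and the query image
cv::Rect box = cv::boundingRect(warpedCorners) | cv::Rect(0, 0, imgQuery.cols, imgQuery.rows);

// Translation that shifts the box's top-left corner to (0, 0)
cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -box.x,
                                       0, 1, -box.y,
                                       0, 0, 1);

// Copy imgQuery into the canvas, then warp imgTrain on top of it
cv::Mat canvas = cv::Mat::zeros(box.size(), imgQuery.type());
imgQuery.copyTo(canvas(cv::Rect(-box.x, -box.y, imgQuery.cols, imgQuery.rows)));
cv::warpPerspective(imgTrain, canvas, T * TRANSFORMATION_MATRIX, canvas.size(),
                    cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);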
Are you trying to create a panoramic image out of two overlapping pictures taken from the same viewpoint in different directions? If so, I am concerned about the "the second one over the first one" part. The correct way to stitch the panorama together is to cut both images off down the central line (symmetry axis) of the overlapping part, and not to add a part of one image to the (whole) other one.
The accepted answer works, but this can be done more easily by using BORDER_TRANSPARENT:
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), INTER_LINEAR, BORDER_TRANSPARENT);
When using BORDER_TRANSPARENT, the pixels of imgQuery that are not covered by the warped image remain untouched.
For OpenCV 4, INTER_LINEAR and BORDER_TRANSPARENT can be resolved by using the scoped enums cv::InterpolationFlags::INTER_LINEAR and cv::BorderTypes::BORDER_TRANSPARENT, e.g.
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), cv::InterpolationFlags::INTER_LINEAR, cv::BorderTypes::BORDER_TRANSPARENT);

Opencv Resize a stitch

I'm building a mosaic from a video in OpenCV. I'm using this example for stitching the frames of the video: http://docs.opencv.org/doc/tutorials/features2d/feature_detection/feature_detection.html. At the end, I do this to merge the new frame with the stitch created in the previous iteration:
Mat H = findHomography(obj, scene, CV_RANSAC);
static Mat rImg;
warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);//(vImg[0], rImg, H, Size(vImg[0].cols * 2, vImg[0].rows * 2), CV_INTER_LINEAR);
static Mat final_img(Size(rImg.cols*2, rImg.rows*2), CV_8UC3);
static Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);
imwrite("stitch.jpg", final_img);
vImg[0] = final_img;
So here's my problem: obviously the stitch becomes larger at each iteration, so how can I resize it to make it fit in the final_img image?
EDIT
Sorry but I had to remove images
For the second question, what you observe is an error in the homography that was estimated. This may come either from:
drift (if you chain homographies along the sequence), i.e., small errors that accumulate and become large after dozens of frames,
or (more likely) because your reference image is too old with respect to your new image, so they exhibit too few matching points to give an accurate homography, yet still enough to find one that passes the quality test inside cv::findHomography().
For your first question, you need to add some code that keeps track of the current bounds of the stitched image in a fixed coordinate frame.
I would suggest choosing the coordinate frame linked to the first image.
Then, when you stitch a new image, what you really do is project this image onto that coordinate frame.
For example, you can first compute the projected coordinates of the 4 corners of the incoming frame, test whether they fit into the current stitching result, copy the result to a new (bigger) image if necessary, and then proceed with stitching the new image, as in the sketch below.
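A rough sketch of that bookkeeping (names are illustrative, not from the question); for simplicity it assumes the projected corners stay at non-negative coordinates, otherwise an extra translation is needed as in the warpPerspective thread above:
// mosaic: the current stitching result; frame: the incoming frame;
// H: homography mapping frame coordinates into mosaic coordinates
std::vector<cv::Point2f> frameCorners = {
    cv::Point2f(0.f, 0.f),
    cv::Point2f((float)frame.cols, 0.f),
    cv::Point2f((float)frame.cols, (float)frame.rows),
    cv::Point2f(0.f, (float)frame.rows)
};
std::vector<cv::Point2f> projected;
cv::perspectiveTransform(frameCorners, projected, H);

// Grow the canvas only if the projected frame no longer fits
cv::Rect needed = cv::boundingRect(projected) | cv::Rect(0, 0, mosaic.cols, mosaic.rows);
if (needed.width > mosaic.cols || needed.height > mosaic.rows)
{
    cv::Mat bigger = cv::Mat::zeros(needed.size(), mosaic.type());
    mosaic.copyTo(bigger(cv::Rect(0, 0, mosaic.cols, mosaic.rows)));
    mosaic = bigger;
}

// BORDER_TRANSPARENT leaves the existing mosaic pixels untouched
cv::warpPerspective(frame, mosaic, H, mosaic.size(), cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);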

Stereo rectify - ROI have different sizes

I have done a stereo calibration and got validPixROI1 and 2 (green border). Now I want to use StereoSGBM, but the ROIs from calibration (from stereoRectify) are not the same size. Does anyone know how to solve this?
Actually I do something like this:
Rect roiLeft(...);
Rect roiRight(...);
Mat cLeft(rLeft, roiLeft);
//Mat cRight(rRight, roiRight); // not same size...
Mat cRight(rRight, roiLeft);
stereoBM(cLeft,cRight, dst);
If I crop my images with that ROI, will the picture middle point be the same?
Here it works.
Why not run stereoBM on the (uncropped) calibrated images? Then you can use those ROIs afterwards to mask out the invalid parts of the result:
stereoBM(rLeft,rRight, disp);
//get intersection of both rois or use target image roi, if you know the target image
cv::Rect visibleRoi = roiLeft & roiRight;
cv::Mat cDisp(disp,visibleRoi);
Now you have no issues with different size inputs, or different centers and such.
Cheers
According to Wikipedia:
A point R at the intersection of the optical axis and the image plane. This point is referred to as the principal point or image center.
So I don't think the center will be the same.
Refer to this site. In one of its examples the principal point is (302.71656, 242.33386) for a 640x480 pixel camera, which shows that the principal point and the image center are not the same.
Run the block matcher on the uncropped rectified images and then use:
cv::getValidDisparityROI(roi1, roi2, minDisparity, numberOfDisparities, SADWindowSize);
That call returns a cv::Rect that will be a bounding box for all the valid pixels in the left image and the disparity map. The valid pixels are only pixels that both cameras can "see" (caveat on occluded edges).
Once you have the disparity map the right image becomes useless.
Be aware that the ROIs returned from stereoRectify just mark the valid pixels after the remap from the cameras' intrinsics.
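Putting that together, a short sketch (variable names are illustrative; rLeft/rRight are the rectified, uncropped 8-bit grayscale images and roiLeft/roiRight come from stereoRectify's validPixROI1/validPixROI2):
int minDisparity = 0, numDisparities = 64, blockSize = 21;   // example values only

// Run block matching on the full rectified pair
cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(numDisparities, blockSize);
cv::Mat disp;
bm->compute(rLeft, rRight, disp);

// Bounding box of the pixels that are valid in both views and in the disparity map
cv::Rect validRoi = cv::getValidDisparityROI(roiLeft, roiRight, minDisparity, numDisparities, blockSize);
cv::Mat dispValid = disp(validRoi);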

Create the voronoi diagram with openCv and C++

I have a little problem. I need to create the Voronoi diagram of a BW image using OpenCV and C++. I should get something like the output of the Matlab function voronoin.
The goal is to create a mask for each region of the diagram.
This is an example I made in Matlab:
matlab voronoi diagram
So, for each region I should create a mask or to have a different color.
I tried the OpenCV function distanceTransform in order to get the Voronoi labels.
Mat bwCoresGoodInv = 255 - bwCoresGood;
distanceTransform(bwCoresGoodInv, distTr, voronoiLabels, CV_DIST_L2, CV_DIST_MASK_PRECISE, DIST_LABEL_PIXEL);
namedWindow( "voronoiDistLab", CV_WINDOW_AUTOSIZE );
voronoiLabels = voronoiLabels*5;
imshow( "voronoiDistLab", voronoiLabels );
The result is the following image:
voronoi labels openCV
As you can see, in each region there are different colors (in particular there is something in correspondence to the cell). Is there a way to have just one color per region?
Thank you in advance.
If you are asking how to get different colors than the grayscale values provided by displaying the labels, one approach (probably not the most efficient) is to run cv::findContours on an edge-detected version of the label image, and then iterate through each contour found and draw it onto a new image, either filled or outlined. It's not super exact and can leave gaps; some dilation on the edge image may be required.
It would be very nice if distanceTransform returned a data structure that mapped each label value in the label image to every pixel that has that value, maybe as a vector of binary images where the nth image in the vector is a binary mask isolating the nth label region - but I think, as it is now, this has to be done by the user, e.g. as in the sketch below.
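A small sketch of that per-label lookup, assuming voronoiLabels is the CV_32S label image produced above; comparing it against each label value yields one binary mask per region:
// Number of labels = largest value in the label image
double minVal, maxVal;
cv::minMaxLoc(voronoiLabels, &minVal, &maxVal);

// One CV_8U mask (255 inside the region) per Voronoi label
std::vector<cv::Mat> regionMasks;
for (int label = 1; label <= (int)maxVal; ++label)
    regionMasks.push_back(voronoiLabels == label);
Note that with DIST_LABEL_PIXEL every zero pixel gets its own label; if one mask per cell (connected component) is wanted, DIST_LABEL_CCOMP may be the better labelType.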