Warp perspective and stitch/overlap images (C++)

I am detecting and matching features of a pair of images, using a typical detector-descriptor-matcher combination and then findHomography to produce a transformation matrix.
After this, I want the two images to be overlapped (the second one, imgTrain, over the first one, imgQuery), so I warp the second image using the transformation matrix:
cv::Mat imgQuery, imgTrain;
...
cv::Mat TRANSFORMATION_MATRIX = cv::findHomography(...);
...
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgTrain.size());
From here on, I don't know how to produce an image that contains the original imgQuery with the warped imgTrainWarped on it.
I consider two scenarios:
1) One where the size of the final image is the size of imgQuery
2) One where the size of the final image is big enough to accommodate both imgQuery and imgTrainWarped, since they overlap only partially, not completely. I understand this second case might have black/blank space somewhere around the images.

You should warp to a destination matrix that has the same dimensions as imgQuery. After that, loop over the whole warped image and copy its pixels to the first image, but only where the warped image actually holds a warped pixel. That is most easily done by warping an additional mask. Please try this:
cv::Mat imgMask = cv::Mat(imgTrain.size(), CV_8UC1, cv::Scalar(255));
cv::Mat imgMaskWarped;
cv::warpPerspective(imgMask , imgMaskWarped, TRANSFORMATION_MATRIX, imgQuery.size());
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgQuery.size());
// now copy only masked pixel:
imgTrainWarped.copyTo(imgQuery, imgMaskWarped);
Please try it and tell me whether this is OK and solves scenario 1. For scenario 2, you would compute how big the destination image must be before warping (by applying the transformation to the corners of imgTrain) and then copy both images into a destination image of that size.
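A rough, untested sketch of that size computation (assuming TRANSFORMATION_MATRIX maps imgTrain coordinates into imgQuery coordinates and both images have the same type):
// Warp the corners of imgTrain to find out where it lands in imgQuery's frame.
std::vector<cv::Point2f> corners = {
    {0.f, 0.f}, {(float)imgTrain.cols, 0.f},
    {(float)imgTrain.cols, (float)imgTrain.rows}, {0.f, (float)imgTrain.rows}};
std::vector<cv::Point2f> warpedCorners;
cv::perspectiveTransform(corners, warpedCorners, TRANSFORMATION_MATRIX);
// Bounding box of the warped train image together with the query image.
cv::Rect box = cv::boundingRect(warpedCorners) | cv::Rect(0, 0, imgQuery.cols, imgQuery.rows);
// Shift everything so coordinates are non-negative, then warp into the big canvas.
cv::Mat offset = (cv::Mat_<double>(3, 3) << 1, 0, -box.x, 0, 1, -box.y, 0, 0, 1);
cv::Mat canvas(box.size(), imgQuery.type(), cv::Scalar::all(0));
imgQuery.copyTo(canvas(cv::Rect(-box.x, -box.y, imgQuery.cols, imgQuery.rows)));
cv::warpPerspective(imgTrain, canvas, offset * TRANSFORMATION_MATRIX, canvas.size(),
                    cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);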

Are you trying to create a panoramic image out of two overlapping pictures taken from the same viewpoint in different directions? If so, I am concerned about the "the second one over the first one" part. The correct way to stitch the panorama together is to cut both images along the central line (symmetry axis) of the overlapping part, not to add a part of one image onto the (whole) other one.

The accepted answer works, but this can be done more easily by using BORDER_TRANSPARENT:
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), INTER_LINEAR, BORDER_TRANSPARENT);
When using BORDER_TRANSPARENT, the pixels of imgQuery that are not covered by the warped imgTrain remain untouched.

For OpenCV 4, INTER_LINEAR and BORDER_TRANSPARENT can be resolved by using the scoped enums cv::InterpolationFlags::INTER_LINEAR and cv::BorderTypes::BORDER_TRANSPARENT, e.g.
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), cv::InterpolationFlags::INTER_LINEAR, cv::BorderTypes::BORDER_TRANSPARENT);

Related

How to dedistort an image without pixel loss using opencv

As we all know, we can use the function cv::getOptimalNewCameraMatrix() with alpha = 1 to get a new camera matrix. Then we can use the function cv::undistort() with the new camera matrix to get the undistorted image. However, I find that the undistorted image is the same size as the original image, and part of it is covered by black.
So my question is: does this mean that some original image pixels are lost? Is there any way to avoid pixel loss, or to get an undistorted image larger than the original, with OpenCV?
cv::Mat NewKMatrixLeft = cv::getOptimalNewCameraMatrix(KMatrixLeft,DistMatrixLeft ,cv::Size(image.cols,image.rows),1);
cv::undistort(image, show_image, KMatrixLeft, DistMatrixLeft,NewKMatrixLeft);
The size of image and show_image are both 640×480; however, from my point of view, the undistorted image should be larger than 640×480, because otherwise part of it is meaningless.
Thanks!
In order to correct distortion, you basically have to reverse the process that caused the initial distortion. This implies that pixels are stretched and squashed along various directions to correct the distortion. In some cases, this would move the pixels away from the image edge. In OpenCV, this is handled by inserting black pixels. There is nothing wrong with this approach. You can then choose how to crop it to remove the black pixels at the edges.
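If the goal is just to remove the black border, a minimal sketch using the validPixROI output of cv::getOptimalNewCameraMatrix might look like this (variable names follow the question; untested):
cv::Rect validROI;
cv::Mat NewKMatrixLeft = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft,
                                                       image.size(), 1, image.size(), &validROI);
cv::undistort(image, show_image, KMatrixLeft, DistMatrixLeft, NewKMatrixLeft);
cv::Mat cropped = show_image(validROI).clone();   // keeps only the all-valid-pixels region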

OpenCV color histogram calcHist considering only specific pixels (and not full image)

I want to calculate the color histogram of an image but only taking into account specific pixels (whose 2D coordinates I know).
Is it possible to use calcHist specifying that only these concrete pixels should be taken into consideration (instead of the whole cv::Mat and all the pixels in it)? If not, is it possible to create a new Mat including only those specific pixels at known positions, and how? (Considering that for a histogram the pixel coordinates do not matter, could they be added to a (1 x number_of_specific_pixels)-dim Mat keeping the original type of the Mat?)
Thanks a lot in advance!
calcHist takes a mask parameter.
So, create a new single-channel 8-bit cv::Mat with the same size as your input image. It should contain 255 where you want the histogram to be calculated and 0 where you do not. Then pass it as the mask.
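For example, a minimal sketch (assuming img is the input cv::Mat and points is a std::vector<cv::Point> of the known coordinates; both names are illustrative):
cv::Mat mask = cv::Mat::zeros(img.size(), CV_8UC1);
for (const cv::Point& p : points)
    mask.at<uchar>(p) = 255;                 // include only these pixels

int channels[] = {0};                        // histogram of channel 0
int histSize[] = {256};
float range[] = {0.f, 256.f};
const float* ranges[] = {range};
cv::Mat hist;
cv::calcHist(&img, 1, channels, mask, hist, 1, histSize, ranges);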

Depth/Disparity Map from a moving camera in OpenCV

Is it possible to get a depth/disparity map from a moving camera? Let's say I capture an image at location x, then travel, say, 5 cm and capture another picture, and from those two I calculate the depth map.
I have tried using block matching in OpenCV, but the result is not good. The first and second images are as follows:
first image, second image, disparity map (colour), disparity map
My code is as follows:
GpuMat leftGPU;
GpuMat rightGPU;
leftGPU.upload(left);
rightGPU.upload(right);
GpuMat disparityGPU;
GpuMat disparityGPU2;
Mat disparity;
Mat disparity1, disparity2;
Ptr<cuda::StereoBM> stereo = createStereoBM(256,3);
stereo->setMinDisparity(-39);
stereo->setPreFilterCap(61);
stereo->setPreFilterSize(3);
stereo->setSpeckleRange(1);
stereo->setUniquenessRatio(0);
stereo->compute(leftGPU,rightGPU,disparityGPU);
drawColorDisp(disparityGPU, disparityGPU2,256);
disparityGPU.download(disparity);
disparityGPU2.download(disparity2);
imshow("display img",disparityGPU);
How can I improve upon this? In the colour disparity map there are quite a lot of errors (e.g. the tall circle is red, the same colour as part of the table). Also, the disparity map contains some noise (all the black dots in the picture); how can I fill those black dots with nearby disparities?
It is possible if the object is static.
To properly do stereo matching, you first need to rectify your images! If you don't have calibrated cameras, you can do this from detected feature points. Also note that for cuda::StereoBM the default minimum disparity is 0. (I have never used CUDA, but I don't think your setMinDisparity call is doing anything; see this answer.)
Now, in your example images corresponding points are only about 1 row apart, so your disparity map actually doesn't look too bad. Maybe a larger blockSize would already help in this special case.
Finally, your objects have very low texture, therefore the block matching algorithm can't detect much.
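For the rectification step, a rough sketch from matched feature points might look like this (pts1/pts2 are assumed to come from your own feature matching; untested):
std::vector<cv::Point2f> pts1, pts2;   // matched points in the left/right image
cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99);

cv::Mat H1, H2;
cv::stereoRectifyUncalibrated(pts1, pts2, F, left.size(), H1, H2);

cv::Mat leftRect, rightRect;
cv::warpPerspective(left, leftRect, H1, left.size());
cv::warpPerspective(right, rightRect, H2, right.size());
// run the block matcher on leftRect/rightRect instead of the raw images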

How to align 2 images based on their content with OpenCV

I am totally new to OpenCV and I have started to dive into it. But I'd need a little bit of help.
So I want to combine these 2 images:
I would like the 2 images to match along their edges (ignoring the very right part of the image for now)
Can anyone please point me into the right direction? I have tried using the findTransformECC function. Here's my implementation:
cv::Mat im1 = [imageArray[1] CVMat3];
cv::Mat im2 = [imageArray[0] CVMat3];
// Convert images to gray scale;
cv::Mat im1_gray, im2_gray;
cvtColor(im1, im1_gray, CV_BGR2GRAY);
cvtColor(im2, im2_gray, CV_BGR2GRAY);
// Define the motion model
const int warp_mode = cv::MOTION_AFFINE;
// Set a 2x3 or 3x3 warp matrix depending on the motion model.
cv::Mat warp_matrix;
// Initialize the matrix to identity
if ( warp_mode == cv::MOTION_HOMOGRAPHY )
warp_matrix = cv::Mat::eye(3, 3, CV_32F);
else
warp_matrix = cv::Mat::eye(2, 3, CV_32F);
// Specify the number of iterations.
int number_of_iterations = 50;
// Specify the threshold of the increment
// in the correlation coefficient between two iterations
double termination_eps = 1e-10;
// Define termination criteria
cv::TermCriteria criteria (cv::TermCriteria::COUNT+cv::TermCriteria::EPS, number_of_iterations, termination_eps);
// Run the ECC algorithm. The results are stored in warp_matrix.
findTransformECC(
im1_gray,
im2_gray,
warp_matrix,
warp_mode,
criteria
);
// Storage for warped image.
cv::Mat im2_aligned;
if (warp_mode != cv::MOTION_HOMOGRAPHY)
// Use warpAffine for Translation, Euclidean and Affine
warpAffine(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
else
// Use warpPerspective for Homography
warpPerspective (im2, im2_aligned, warp_matrix, im1.size(),cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
UIImage* result = [UIImage imageWithCVMat:im2_aligned];
return result;
I have tried playing around with the termination_eps and number_of_iterations and increased/decreased those values, but they didn't really make a big difference.
So here's the result:
What can I do to improve my result?
EDIT: I have marked the problematic edges with red circles. The goal is to warp the bottom image and make it match with the lines from the image above:
I did a little bit of research and I'm afraid the findTransformECC function won't give me the result I'd like to have :-(
Something important to add:
I actually have an array of those image "stripes", 8 in this case; they all look similar to the images shown here, and they all need to be processed to match the lines. I have tried experimenting with the stitch function of OpenCV, but the results were horrible.
EDIT:
Here are the 3 source images:
The result should be something like this:
I transformed every image along the lines that should match. Lines that are too far away from each other can be ignored (the shadow and the piece of road on the right portion of the image).
From your images, it seems that they overlap. Since you said the stitch function didn't give you the desired results, implement your own stitching. I'm trying to do something close to that too. Here is a tutorial on how to implement it in C++: https://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
You can use the Hough transform with a high threshold on the two images and then compare the vertical lines in both of them: most of them should be shifted a bit but keep the same angle.
This is what I've got from running this algorithm on one of the pictures:
Filtering out horizontal lines should be easy (as the lines are represented as Vec4i), and then you can align the remaining lines with each other.
Here is the example of using it in OpenCV's documentation.
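A rough sketch of that detection and filtering step (thresholds are illustrative, not tuned to your images):
cv::Mat gray, edges;
cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
cv::Canny(gray, edges, 50, 150);

std::vector<cv::Vec4i> lines, verticalLines;
cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 100, 50, 10);
for (const cv::Vec4i& l : lines) {
    double angle = std::atan2(l[3] - l[1], l[2] - l[0]) * 180.0 / CV_PI;
    if (std::abs(std::abs(angle) - 90.0) < 20.0)   // keep lines within 20 degrees of vertical
        verticalLines.push_back(l);
}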
UPDATE: another thought. Aligning the lines can be done with a concept similar to how a cross-correlation function works. It doesn't matter if picture 1 has 10 lines and picture 2 has 100 lines; the shift at which most lines align (which is, roughly, the maximum of the CCF) should be pretty close to the answer, though this might require some tweaking, for example giving a weight to every line based on its length, angle, etc. Computer vision never has a direct way, huh :)
UPDATE 2: I actually wonder if taking the bottom pixel row of the top image as array 1 and the top pixel row of the bottom image as array 2, running a general CCF over them, and then using its maximum as the shift could work too... But I think it would be a known method if it worked well.
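For what it's worth, that row-correlation idea could be prototyped with matchTemplate, roughly like this (top/bottom stand for the two strips; purely illustrative):
cv::Mat rowTop, rowBottom;
cv::cvtColor(top.row(top.rows - 1), rowTop, cv::COLOR_BGR2GRAY);
cv::cvtColor(bottom.row(0), rowBottom, cv::COLOR_BGR2GRAY);

// use a central slice of one row as the template so it can slide over the other
int margin = rowBottom.cols / 4;
cv::Mat tmpl = rowBottom.colRange(margin, rowBottom.cols - margin);
cv::Mat response;
cv::matchTemplate(rowTop, tmpl, response, cv::TM_CCORR_NORMED);

cv::Point bestLoc;
cv::minMaxLoc(response, nullptr, nullptr, nullptr, &bestLoc);
int shift = bestLoc.x - margin;   // estimated horizontal shift between the strips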

Multiply Images In OpenCv & Apply Laplacian Filter On It

In my previous question (here's the link), according to the answer I obtained the desired image, which is white flood-filled.
Now, after applying morphological erosion to the white flood-filled image, I get the new masked image.
Your answer helped a lot. Now what I am trying to do is multiply the new masked image with the original grayscale image in order to get the vein pattern. But it gives me the same result as the image I get after performing erosion on the white flood-filled image. After completing this step I have to apply the Laplacian function to get the vein pattern. I am attaching the original image and the result image that I want. I hope you will look into the matter.
Original Image.
Result Image.
If I understand you correctly, you only want to extract the veins from the grayscale hand image, right? To do something like this, you would obviously multiply both of them as,
finalimg = grayimg * veinmask;
If you have done the above, I think it would be more helpful to post a portion of your code so that experts here might be able to point out what's wrong; the output image that you're getting, and the one you want, would also help.
I hope I understand you correctly: you have a grayscale image showing a hand (the first image in your question),
you create a mask image that looks like the second image you posted,
and multiplying both gives you back the mask image?
If that is the case, check your values. If you work with 8-bit images, your mask image must contain the values 0 and 1, not 0 and 255; otherwise the multiplication result for non-zero mask pixels exceeds 255 and saturates!
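A minimal sketch of the fix, assuming grayimg is the CV_8UC1 hand image and veinmask is the CV_8UC1 mask with values 0/255:
// either rescale the mask to 0/1 before multiplying ...
cv::Mat mask01 = veinmask / 255;            // 0/255 -> 0/1
cv::Mat veins = grayimg.mul(mask01);        // gray values kept where the mask is 1

// ... or skip the multiplication and copy with the 0/255 mask directly:
cv::Mat veins2 = cv::Mat::zeros(grayimg.size(), grayimg.type());
grayimg.copyTo(veins2, veinmask);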