I am trying to compute an inverse perspective transform that maps a frame captured in real time back to the camera plane, using the following code:
Mat dst;
dst = dst.zeros(frame.cols, frame.rows, frame.type());
if (Found) {
    Mat mmat;
    mmat.create(3, 3, CV_32FC1);
    mmat = getPerspectiveTransform(templPoints, imgPoints);
    cout << mmat << endl << endl;
    warpPerspective(frame, dst, Homo, dst.size(), INTER_LINEAR);
    imshow("out", dst);
}
The problem is that the dst image is completely black. What's wrong with my code?
A black or badly distorted output is usually the result of passing the source points to getPerspectiveTransform in the wrong order. If the points cross each other, the mapping folds and triangular shapes appear. Check the order of the source points and make sure it matches the order of the destination points.
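As a minimal sketch of what a consistent ordering looks like (the template size and the detected image coordinates below are made-up placeholders, not values from your program):
#include <opencv2/imgproc.hpp>
#include <vector>
int main() {
    // Corners of a 300x200 template, listed clockwise starting at the top-left.
    std::vector<cv::Point2f> templPoints = {
        {0.0f, 0.0f}, {300.0f, 0.0f}, {300.0f, 200.0f}, {0.0f, 200.0f}
    };
    // The detected corners in the image must follow the same clockwise order,
    // starting from the corner that corresponds to the template's top-left.
    std::vector<cv::Point2f> imgPoints = {
        {120.0f, 80.0f}, {420.0f, 95.0f}, {400.0f, 310.0f}, {100.0f, 290.0f}
    };
    cv::Mat M = cv::getPerspectiveTransform(templPoints, imgPoints);
    return 0;
}
If the two lists are ordered differently, the mapping folds over itself and produces the kind of output described above.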
You need to provide some more details.
Why are you calling both findHomography and getPerspectiveTransform? Since you are calling both, I assume that both templPoints and imgPoints are of size 4, in which case the call to findHomography is redundant (and RANSAC does nothing at all for you).
Have you looked (e.g. using matlab or octave, or by hand) at the values of mmat * templPoints? They should be equal to imgPoints, and all inside the [0, dst.width] x [0, dst.height] rectangle.
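One way to check that directly in code, assuming the same four-point vectors as in your snippet (this is only a sanity check, not a fix):
#include <opencv2/core.hpp>
#include <cmath>
#include <iostream>
#include <vector>
// Prints how far the estimated transform maps each template point from its
// measured image point; large distances usually mean the point order is wrong.
void checkTransform(const cv::Mat& M,
                    const std::vector<cv::Point2f>& templPoints,
                    const std::vector<cv::Point2f>& imgPoints) {
    std::vector<cv::Point2f> projected;
    cv::perspectiveTransform(templPoints, projected, M);
    for (size_t i = 0; i < projected.size(); ++i) {
        float dx = projected[i].x - imgPoints[i].x;
        float dy = projected[i].y - imgPoints[i].y;
        std::cout << "expected " << imgPoints[i]
                  << ", got " << projected[i]
                  << ", error " << std::hypot(dx, dy) << std::endl;
    }
}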
Is it possible to get a depth/disparity map from a moving camera? Say I capture an image at location x, then travel about 5 cm and capture another picture, and from these two I calculate the depth map of the scene.
I have tried using block matching in OpenCV, but the result is not good. The first and second images are as follows:
first image, second image
disparity map (colour), disparity map
My code is as follows:
GpuMat leftGPU;
GpuMat rightGPU;
leftGPU.upload(left);
rightGPU.upload(right);
GpuMat disparityGPU;
GpuMat disparityGPU2;
Mat disparity;
Mat disparity1, disparity2;
Ptr<cuda::StereoBM> stereo = createStereoBM(256, 3);
stereo->setMinDisparity(-39);
stereo->setPreFilterCap(61);
stereo->setPreFilterSize(3);
stereo->setSpeckleRange(1);
stereo->setUniquenessRatio(0);
stereo->compute(leftGPU, rightGPU, disparityGPU);
drawColorDisp(disparityGPU, disparityGPU2, 256);
disparityGPU.download(disparity);
disparityGPU2.download(disparity2);
imshow("display img", disparityGPU);
How can I improve on this? In the colour disparity map there are quite a lot of errors (e.g. the tall circle is red, the same colour as parts of the table). Also, the disparity map contains small noise (all the black dots in the picture); how can I fill those black dots with nearby disparities?
It is possible if the object is static.
To properly do stereo matching, you first need to rectify your images! If you don't have calibrated cameras, you can do this from detected feature points. Also note that for cuda::StereoBM the default minimum disparity is 0. (I have never used the CUDA module, but I don't think your setMinDisparity call is doing anything; see this answer.)
Now, in your example images corresponding points are only about one row apart, so your disparity map actually doesn't look too bad. Maybe a larger blockSize would already be enough in this special case.
Finally, your objects have very low texture, so the block-matching algorithm can't find much.
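If you only have feature correspondences and no calibration, a rough rectification could look like the following sketch (plain CPU API; the function name, variable names and thresholds are mine and untuned):
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
// Rectifies a stereo pair from point correspondences so that corresponding
// points end up on the same row, which is what StereoBM assumes.
void rectifyUncalibrated(const cv::Mat& left, const cv::Mat& right,
                         const std::vector<cv::Point2f>& ptsLeft,
                         const std::vector<cv::Point2f>& ptsRight,
                         cv::Mat& leftRect, cv::Mat& rightRect) {
    // Fundamental matrix from the correspondences (RANSAC rejects outliers).
    cv::Mat F = cv::findFundamentalMat(ptsLeft, ptsRight, cv::FM_RANSAC, 3.0, 0.99);
    // Homographies that map each image into a common rectified frame.
    cv::Mat H1, H2;
    cv::stereoRectifyUncalibrated(ptsLeft, ptsRight, F, left.size(), H1, H2);
    cv::warpPerspective(left, leftRect, H1, left.size());
    cv::warpPerspective(right, rightRect, H2, right.size());
}
The rectified pair can then be uploaded to GpuMat and fed to cuda::StereoBM as before.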
I am totally new to OpenCV and I have started to dive into it. But I'd need a little bit of help.
So I want to combine these 2 images:
I would like the 2 images to match along their edges (ignoring the very right part of the image for now)
Can anyone please point me into the right direction? I have tried using the findTransformECC function. Here's my implementation:
cv::Mat im1 = [imageArray[1] CVMat3];
cv::Mat im2 = [imageArray[0] CVMat3];
// Convert images to gray scale;
cv::Mat im1_gray, im2_gray;
cvtColor(im1, im1_gray, CV_BGR2GRAY);
cvtColor(im2, im2_gray, CV_BGR2GRAY);
// Define the motion model
const int warp_mode = cv::MOTION_AFFINE;
// Set a 2x3 or 3x3 warp matrix depending on the motion model.
cv::Mat warp_matrix;
// Initialize the matrix to identity
if ( warp_mode == cv::MOTION_HOMOGRAPHY )
warp_matrix = cv::Mat::eye(3, 3, CV_32F);
else
warp_matrix = cv::Mat::eye(2, 3, CV_32F);
// Specify the number of iterations.
int number_of_iterations = 50;
// Specify the threshold of the increment
// in the correlation coefficient between two iterations
double termination_eps = 1e-10;
// Define termination criteria
cv::TermCriteria criteria (cv::TermCriteria::COUNT+cv::TermCriteria::EPS, number_of_iterations, termination_eps);
// Run the ECC algorithm. The results are stored in warp_matrix.
findTransformECC(
im1_gray,
im2_gray,
warp_matrix,
warp_mode,
criteria
);
// Storage for warped image.
cv::Mat im2_aligned;
if (warp_mode != cv::MOTION_HOMOGRAPHY)
// Use warpAffine for Translation, Euclidean and Affine
warpAffine(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
else
// Use warpPerspective for Homography
warpPerspective (im2, im2_aligned, warp_matrix, im1.size(),cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
UIImage* result = [UIImage imageWithCVMat:im2_aligned];
return result;
I have tried playing around with the termination_eps and number_of_iterations and increased/decreased those values, but they didn't really make a big difference.
So here's the result:
What can I do to improve my result?
EDIT: I have marked the problematic edges with red circles. The goal is to warp the bottom image and make it match with the lines from the image above:
I did a little bit of research and I'm afraid the findTransformECC function won't give me the result I'd like to have :-(
Something important to add:
I actually have an array of those image "stripes" (8 in this case); they all look similar to the images shown here and they all need to be warped so that the lines match. I have tried experimenting with the stitch function of OpenCV, but the results were horrible.
EDIT:
Here are the 3 source images:
The result should be something like this:
I transformed every image along the lines that should match. Lines that are too far away from each other can be ignored (the shadow and the piece of road on the right portion of the image)
From your images, it seems that they overlap. Since you said the stitch function didn't give you the desired results, implement your own stitching. I'm trying to do something close to that too. Here is a tutorial on how to implement it in C++: https://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
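A rough outline of such a hand-rolled stitcher, along the lines of that tutorial; this is only a sketch, and the ORB settings, canvas size and RANSAC threshold are guesses that would need tuning for your stripes:
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
// Warps img2 into the coordinate frame of img1 using matched ORB features,
// then pastes img1 on top. Assumes the pair has enough overlapping texture.
cv::Mat stitchPair(const cv::Mat& img1, const cv::Mat& img2) {
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);
    // Cross-checked brute-force matching on the binary descriptors.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);
    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }
    // Homography that maps img2 points onto img1 points.
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);
    // Crude canvas: twice the height of img1, since the stripes stack vertically.
    cv::Mat canvas;
    cv::warpPerspective(img2, canvas, H, cv::Size(img1.cols, img1.rows * 2));
    img1.copyTo(canvas(cv::Rect(0, 0, img1.cols, img1.rows)));
    return canvas;
}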
You can run the Hough transform with a high threshold on both images and then compare the vertical lines in them; most of them should be shifted a bit but keep the same angle.
This is what I've got from running this algorithm on one of the pictures:
Filtering out the horizontal lines should be easy (each segment is returned as a Vec4i), and then you can align the remaining lines with each other.
Here is an example of using it in OpenCV's documentation.
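A sketch of the detection and filtering step; the Canny and Hough parameters and the 20 degree angle threshold are arbitrary and would need tuning:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>
// Returns the detected segments that are within maxAngleDeg of vertical;
// these are the ones worth aligning between the two images.
std::vector<cv::Vec4i> verticalSegments(const cv::Mat& gray, double maxAngleDeg = 20.0) {
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);
    std::vector<cv::Vec4i> lines, vertical;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 50, 10);
    for (const cv::Vec4i& l : lines) {
        double dx = l[2] - l[0], dy = l[3] - l[1];
        // Angle measured from the vertical axis, folded into [0, 90].
        double angle = std::abs(std::atan2(dx, dy)) * 180.0 / CV_PI;
        if (angle > 90.0) angle = 180.0 - angle;
        if (angle < maxAngleDeg)
            vertical.push_back(l);
    }
    return vertical;
}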
UPDATE: another thought. Aligning the lines can be done with a concept similar to how a cross-correlation function works. It doesn't matter if picture 1 has 10 lines and picture 2 has 100 lines; the shift at which the most lines are aligned (which is, roughly, the maximum of the CCF) should be pretty close to the answer, though this might require some tweaking, for example weighting every line by its length, angle, etc. Computer vision never has a direct way, huh :)
UPDATE 2: I actually wonder whether taking the bottom pixel row of the top image as array 1 and the top pixel row of the bottom image as array 2, then running a general CCF over them and using its maximum as the shift, could work too... But I think it would be a well-known method if it worked well.
I'm trying to deskew an image that has an element of known size. Given this image:
I can use aruco::estimatePoseBoard, which returns rotation and translation vectors. Is there a way to use that information to deskew everything that's in the same plane as the marker board? (Unfortunately my linear algebra is rudimentary at best.)
Clarification
I know how to deskew the marker board. What I want to be able to do is deskew the other things (in this case, the cloud-shaped object) in the same plane as the marker board. I'm trying to determine whether or not that's possible and, if so, how to do it. I can already put four markers around the object I want to deskew and use the detected corners as input to getPerspectiveTransform along with the known distance between them. But for our real-world application it may be difficult for the user to place markers exactly. It would be much easier if they could place a single marker board in the frame and have the software deskew the other objects.
Since you tagged OpenCV:
From the image I can see that you have detected the corners of all the black boxes. So just get the four outermost corner points one way or another:
Then it is like this:
std::vector<cv::Point2f> src_points = {/*Fill your 4 corners here*/};
std::vector<cv::Point2f> dst_points = {cv::Point2f(0, 0), cv::Point2f(width, 0), cv::Point2f(width, height), cv::Point2f(0, height)};
auto H = cv::getPerspectiveTransform(src_points, dst_points);
cv::Mat cropped_image;
cv::warpPerspective(full_image, cropped_image, H, cv::Size(width, height));
I was stuck on the assumption that the destination points in the call to getPerspectiveTransform had to be the corners of the output image (as they are in Humam's suggestion). Once it dawned on me that the destination points could be somewhere within the output image I had my answer.
float boardX = 1240;
float boardY = 1570;
float boardWidth = 1730;
float boardHeight = 1400;
vector<Point2f> destinationCorners;
destinationCorners.push_back(Point2f(boardX + boardWidth, boardY));
destinationCorners.push_back(Point2f(boardX + boardWidth, boardY + boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY + boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY));
Mat h = getPerspectiveTransform(detectedCorners, destinationCorners);
Mat bigImage(image.size() * 3, image.type(), Scalar(0, 50, 50));
warpPerspective(image, bigImage, h, bigImage.size());
This fixed the perspective of the board and everything in its plane. (The waviness of the board is due to the fact that the paper wasn't lying flat in the original photo.)
I am detecting and matching features of a pair of images, using a typical detector-descriptor-matcher combination and then findHomography to produce a transformation matrix.
After this, I want the two images to be overlapped (the second one, imgTrain, over the first one, imgQuery), so I warp the second image using the transformation matrix:
cv::Mat imgQuery, imgTrain;
...
TRANSFORMATION_MATRIX = cv::findHomography(...)
...
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgTrain.size());
From here on, I don't know how to produce an image that contains the original imgQuery with the warped imgTrainWarped on it.
I consider two scenarios:
1) One where the size of the final image is the size of imgQuery
2) One where the size of the final image is big enough to accommodate both imgQuery and imgTrainWarped, since they overlap only partially, not completely. I understand this second case might have black/blank space somewhere around the images.
You should warp to a destination matrix that has the same dimensions as imgQuery. After that, loop over the whole warped image and copy each pixel to the first image, but only where the warped image actually holds a warped pixel. That is most easily done by warping an additional mask. Please try this:
cv::Mat imgMask = cv::Mat(imgTrain.size(), CV_8UC1, cv::Scalar(255));
cv::Mat imgMaskWarped;
cv::warpPerspective(imgMask, imgMaskWarped, TRANSFORMATION_MATRIX, imgQuery.size());
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgQuery.size());
// now copy only masked pixel:
imgTrainWarped.copyTo(imgQuery, imgMaskWarped);
Please try it and tell me whether this is OK and solves scenario 1. For scenario 2 you would test how big the image must be before warping (by applying the transformation to its corners) and copy both images into a destination image big enough.
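For scenario 2, here is a sketch of how the output size and offset could be derived from the transformation before warping; the function name is mine and H is assumed to be the CV_64F matrix returned by findHomography:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <vector>
// Builds a canvas big enough for imgQuery plus the warped imgTrain, warps
// imgTrain with a translated homography and then pastes imgQuery on top.
cv::Mat composeOnCanvas(const cv::Mat& imgQuery, const cv::Mat& imgTrain, const cv::Mat& H) {
    // See where the corners of imgTrain land under H.
    std::vector<cv::Point2f> corners = {
        {0.f, 0.f}, {(float)imgTrain.cols, 0.f},
        {(float)imgTrain.cols, (float)imgTrain.rows}, {0.f, (float)imgTrain.rows}
    };
    std::vector<cv::Point2f> warped;
    cv::perspectiveTransform(corners, warped, H);
    // Bounding box of imgQuery and the warped corners.
    float minX = 0.f, minY = 0.f;
    float maxX = (float)imgQuery.cols, maxY = (float)imgQuery.rows;
    for (const cv::Point2f& p : warped) {
        minX = std::min(minX, p.x); minY = std::min(minY, p.y);
        maxX = std::max(maxX, p.x); maxY = std::max(maxY, p.y);
    }
    // Shift everything so nothing ends up at negative coordinates.
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -minX, 0, 1, -minY, 0, 0, 1);
    cv::Size canvasSize(cvRound(maxX - minX), cvRound(maxY - minY));
    cv::Mat canvas;
    cv::warpPerspective(imgTrain, canvas, T * H, canvasSize);
    imgQuery.copyTo(canvas(cv::Rect(cvRound(-minX), cvRound(-minY), imgQuery.cols, imgQuery.rows)));
    return canvas;
}
The translation T only shifts the warp so that the whole bounding box fits inside the canvas.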
Are you trying to create a panoramic image out of two overlapping pictures taken from the same viewpoint in different directions? If so, I am concerned about the "the second one over the first one" part. The correct way to stitch the panorama together is to cut both images off down the central line (symmetry axis) of the overlapping part, and not to add a part of one image to the (whole) other one.
The accepted answer works, but this can be done more easily by using BORDER_TRANSPARENT:
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), INTER_LINEAR, BORDER_TRANSPARENT);
When using BORDER_TRANSPARENT, the pixels of imgQuery that fall outside the warped imgTrain remain untouched.
For OpenCV 4, INTER_LINEAR and BORDER_TRANSPARENT can be resolved by using cv::InterpolationFlags::INTER_LINEAR and cv::BorderTypes::BORDER_TRANSPARENT, e.g.
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), cv::InterpolationFlags::INTER_LINEAR, cv::BorderTypes::BORDER_TRANSPARENT);
I'm getting results I don't expect when I use OpenCV 3.0 calibrateCamera. Here is my algorithm:
Load in 30 image points
Load in 30 corresponding world points (coplanar in this case)
Use points to calibrate the camera, just for un-distorting
Un-distort the image points, but don't use the intrinsics (coplanar world points, so intrinsics are dodgy)
Use the undistorted points to find a homography, transforming to world points (can do this because they are all coplanar)
Use the homography and perspective transform to map the undistorted points to the world space
Compare the original world points to the mapped points
The points I have are noisy and only a small section of the image. There are 30 coplanar points from a single view so I can't get camera intrinsics, but should be able to get distortion coefficients and a homography to create a fronto-parallel view.
As expected, the error varies depending on the calibration flags. However, it varies opposite to what I expected. If I allow all variables to adjust, I would expect error to come down. I am not saying I expect a better model; I actually expect over-fitting, but that should still reduce error. What I see though is that the fewer variables I use, the lower my error. The best result is with a straight homography.
I have two suspected causes, but they seem unlikely and I'd like to hear an unadulterated answer before I air them. I have pulled out the code to just do what I'm talking about. It's a bit long, but it includes loading the points.
The code doesn't appear to have bugs; I've used "better" points and it works perfectly. I want to emphasize that the solution here can't be to use better points or perform a better calibration; the whole point of the exercise is to see how the various calibration models respond to different qualities of calibration data.
Any ideas?
Added
To be clear, I know the results will be bad and I expect that. I also understand that I may learn bad distortion parameters which lead to worse results when testing points that have not been used to train the model. What I don't understand is how the distortion model has more error when using the training set as the test set. That is, cv::calibrateCamera is supposed to choose parameters that reduce the error over the training set of points provided, yet it produces more error than if it had just selected 0s for K1, K2, ..., K6, P1, P2. Bad data or not, it should at least do better on the training set. Before I can say the data is not appropriate for this model, I have to be sure I'm doing the best I can with the data available, and I can't say that at this stage.
Here is an example image:
The points with the green pins are marked. This is obviously just a test image.
Here is more example stuff
In the following, the image is cropped from the big one above. The centre has not changed. This is what happens when I undistort using just the points marked manually at the green pins, allowing only K1 to vary from 0:
Before
After
I would put it down to a bug, but when I use a larger set of points that covers more of the screen, even from a single plane, it works reasonably well. This looks terrible. However, the error is not nearly as bad as you might think from looking at the picture.
// Load image points
std::vector<cv::Point2f> im_points;
im_points.push_back(cv::Point2f(1206, 1454));
im_points.push_back(cv::Point2f(1245, 1443));
im_points.push_back(cv::Point2f(1284, 1429));
im_points.push_back(cv::Point2f(1315, 1456));
im_points.push_back(cv::Point2f(1352, 1443));
im_points.push_back(cv::Point2f(1383, 1431));
im_points.push_back(cv::Point2f(1431, 1458));
im_points.push_back(cv::Point2f(1463, 1445));
im_points.push_back(cv::Point2f(1489, 1432));
im_points.push_back(cv::Point2f(1550, 1461));
im_points.push_back(cv::Point2f(1574, 1447));
im_points.push_back(cv::Point2f(1597, 1434));
im_points.push_back(cv::Point2f(1673, 1463));
im_points.push_back(cv::Point2f(1691, 1449));
im_points.push_back(cv::Point2f(1708, 1436));
im_points.push_back(cv::Point2f(1798, 1464));
im_points.push_back(cv::Point2f(1809, 1451));
im_points.push_back(cv::Point2f(1819, 1438));
im_points.push_back(cv::Point2f(1925, 1467));
im_points.push_back(cv::Point2f(1929, 1454));
im_points.push_back(cv::Point2f(1935, 1440));
im_points.push_back(cv::Point2f(2054, 1470));
im_points.push_back(cv::Point2f(2052, 1456));
im_points.push_back(cv::Point2f(2051, 1443));
im_points.push_back(cv::Point2f(2182, 1474));
im_points.push_back(cv::Point2f(2171, 1459));
im_points.push_back(cv::Point2f(2164, 1446));
im_points.push_back(cv::Point2f(2306, 1474));
im_points.push_back(cv::Point2f(2292, 1462));
im_points.push_back(cv::Point2f(2278, 1449));
// Create corresponding world / object points
std::vector<cv::Point3f> world_points;
for (int i = 0; i < 30; i++) {
world_points.push_back(cv::Point3f(5 * (i / 3), 4 * (i % 3), 0.0f));
}
// Perform calibration
// Flags are set out so they can be commented out and "freed" easily
int calibration_flags = 0
| cv::CALIB_FIX_K1
| cv::CALIB_FIX_K2
| cv::CALIB_FIX_K3
| cv::CALIB_FIX_K4
| cv::CALIB_FIX_K5
| cv::CALIB_FIX_K6
| cv::CALIB_ZERO_TANGENT_DIST
| 0;
// Initialise matrix (it is CV_64F, so it must be filled with doubles, not floats)
cv::Mat intrinsic_matrix = cv::Mat::eye(3, 3, CV_64F);
cv::Mat distortion_coeffs = cv::Mat::zeros(5, 1, CV_64F);
// Rotation and translation vectors
std::vector<cv::Mat> undistort_rvecs;
std::vector<cv::Mat> undistort_tvecs;
// Wrap in an outer vector for calibration
std::vector<std::vector<cv::Point2f>>im_points_v(1, im_points);
std::vector<std::vector<cv::Point3f>>w_points_v(1, world_points);
// Calibrate; only 1 plane, so intrinsics can't be trusted
cv::Size image_size(4000, 3000);
calibrateCamera(w_points_v, im_points_v,
image_size, intrinsic_matrix, distortion_coeffs,
undistort_rvecs, undistort_tvecs, calibration_flags);
// Undistort im_points
std::vector<cv::Point2f> ud_points;
cv::undistortPoints(im_points, ud_points, intrinsic_matrix, distortion_coeffs);
// ud_points have been "unintrinsiced", but we don't know the intrinsics, so reverse that
double fx = intrinsic_matrix.at<double>(0, 0);
double fy = intrinsic_matrix.at<double>(1, 1);
double cx = intrinsic_matrix.at<double>(0, 2);
double cy = intrinsic_matrix.at<double>(1, 2);
for (std::vector<cv::Point2f>::iterator iter = ud_points.begin(); iter != ud_points.end(); iter++) {
iter->x = iter->x * fx + cx;
iter->y = iter->y * fy + cy;
}
// Find a homography mapping the undistorted points to the known world points, ground plane
cv::Mat homography = cv::findHomography(ud_points, world_points);
// Transform the undistorted image points to the world points (2d only, but z is constant)
std::vector<cv::Point2f> estimated_world_points;
std::cout << "homography" << homography << std::endl;
cv::perspectiveTransform(ud_points, estimated_world_points, homography);
// Work out error
double sum_sq_error = 0;
for (int i = 0; i < 30; i++) {
double err_x = estimated_world_points.at(i).x - world_points.at(i).x;
double err_y = estimated_world_points.at(i).y - world_points.at(i).y;
sum_sq_error += err_x*err_x + err_y*err_y;
}
std::cout << "Sum squared error is: " << sum_sq_error << std::endl;
I would take random samples of the 30 input points, compute the homography in each case along with the errors under the estimated homographies (a RANSAC-style scheme), and verify the consensus between the error levels and the homography parameters; this can serve as a verification of the global optimisation process. I know that might seem unnecessary, but it is just a sanity check for how sensitive the procedure is to the input (noise levels, location).
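A sketch of that sampling check, reusing the ud_points and world_points from the code in the question; the subset size of 15 and the number of trials are arbitrary choices:
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>
// Fits a homography to a random subset of the correspondences and reports
// the sum of squared errors over all 30 points, repeated for several subsets.
void subsetSanityCheck(const std::vector<cv::Point2f>& ud_points,
                       const std::vector<cv::Point3f>& world_points) {
    std::vector<cv::Point2f> world2d;
    for (const cv::Point3f& p : world_points)
        world2d.push_back(cv::Point2f(p.x, p.y));
    std::mt19937 rng(42);
    std::vector<int> idx(ud_points.size());
    std::iota(idx.begin(), idx.end(), 0);
    for (int trial = 0; trial < 10; ++trial) {
        std::shuffle(idx.begin(), idx.end(), rng);
        std::vector<cv::Point2f> src, dst;
        for (int i = 0; i < 15; ++i) {
            src.push_back(ud_points[idx[i]]);
            dst.push_back(world2d[idx[i]]);
        }
        cv::Mat H = cv::findHomography(src, dst);
        std::vector<cv::Point2f> mapped;
        cv::perspectiveTransform(ud_points, mapped, H);
        double err = 0;
        for (size_t i = 0; i < mapped.size(); ++i) {
            double ex = mapped[i].x - world2d[i].x;
            double ey = mapped[i].y - world2d[i].y;
            err += ex * ex + ey * ey;
        }
        std::cout << "trial " << trial << ": sum squared error " << err << std::endl;
    }
}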
Also, it seems logical that fixing most of the variables gives you the smallest errors, since there are fewer degrees of freedom in the minimisation process. I would try fixing different subsets to establish another consensus. At least this would tell you which variables are most sensitive to the noise level of the input.
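For instance, the RMS reprojection error that calibrateCamera itself returns could be compared across a few flag combinations; this is a sketch built around the im_points_v, w_points_v and image_size from the question, and the chosen flag sets are just examples:
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <utility>
#include <vector>
// Runs calibrateCamera with several sets of fixed distortion terms and prints
// the RMS reprojection error reported by the optimiser for each set.
void compareFlagSets(const std::vector<std::vector<cv::Point3f>>& w_points_v,
                     const std::vector<std::vector<cv::Point2f>>& im_points_v,
                     cv::Size image_size) {
    const int fixAll = cv::CALIB_FIX_K1 | cv::CALIB_FIX_K2 | cv::CALIB_FIX_K3 |
                       cv::CALIB_FIX_K4 | cv::CALIB_FIX_K5 | cv::CALIB_FIX_K6 |
                       cv::CALIB_ZERO_TANGENT_DIST;
    const std::vector<std::pair<const char*, int>> flagSets = {
        { "fix everything", fixAll },
        { "free K1 only", fixAll & ~cv::CALIB_FIX_K1 },
        { "free everything", 0 }
    };
    for (const auto& fs : flagSets) {
        cv::Mat K = cv::Mat::eye(3, 3, CV_64F);
        cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);
        std::vector<cv::Mat> rvecs, tvecs;
        double rms = cv::calibrateCamera(w_points_v, im_points_v, image_size,
                                         K, dist, rvecs, tvecs, fs.second);
        std::cout << fs.first << ": RMS reprojection error " << rms << std::endl;
    }
}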
Hopefully, such a small section of the image will be close to the image centre, as that incurs the least lens distortion. Is using a different distortion model possible in your case? A more viable way is to adapt the number of distortion parameters to the position of the pattern with respect to the image centre.
Without knowing the constraints of the algorithm, I might have misunderstood the question; that is also possible, in which case I can roll back.
I would rather post this as a comment, but I do not have enough points.
OpenCV runs the Levenberg-Marquardt algorithm inside calibrateCamera.
https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm/
This algorithm works fine in problems with one minimum. With a single image, points located close to each other, and a high-dimensional problem (n = number of coefficients), the algorithm may be unstable, especially with a wrong initial guess of the camera matrix. The convergence of the algorithm is well described here:
https://na.math.kit.edu/download/papers/levenberg.pdf/
As you wrote, the error depends on the calibration flags: the number of flags changes the dimension of the problem being optimised.
Camera calibration also calculates the pose of the camera, which will be bad in models with a wrong calibration matrix.
As a solution I suggest changing the approach. You don't need to calculate the camera matrix and the pose in this step. Since you know that the points are located on a plane, you can use the 3D-to-2D plane projection equation to determine the distribution of the points. By distribution I mean that all points should end up spread evenly over some kind of trapezoid.
Then you can use cv::undistort with different distCoeffs on your test image and calculate the image point distribution and the distribution error.
The last step is to wrap these steps into a target function for some optimisation algorithm, with the distortion coefficients being the parameters to optimise.
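A sketch of what such a target function could look like: it scores one candidate set of distortion coefficients by undistorting the image points with it, fitting a plane-to-plane homography and returning the residual; the camera matrix passed in is whatever fixed guess you choose, and the names are mine:
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
// Cost of one candidate set of distortion coefficients: undistort the image
// points with it, fit a homography to the coplanar world points and return
// the sum of squared residuals. An external optimiser would minimise this.
double distortionCost(const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                      const std::vector<cv::Point2f>& imagePoints,
                      const std::vector<cv::Point2f>& planePoints) {
    std::vector<cv::Point2f> undistorted;
    // Passing cameraMatrix as P keeps the result in pixel-like coordinates.
    cv::undistortPoints(imagePoints, undistorted, cameraMatrix, distCoeffs,
                        cv::noArray(), cameraMatrix);
    cv::Mat H = cv::findHomography(undistorted, planePoints);
    std::vector<cv::Point2f> mapped;
    cv::perspectiveTransform(undistorted, mapped, H);
    double cost = 0;
    for (size_t i = 0; i < mapped.size(); ++i) {
        double ex = mapped[i].x - planePoints[i].x;
        double ey = mapped[i].y - planePoints[i].y;
        cost += ex * ex + ey * ey;
    }
    return cost;
}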
This is not the easiest solution, but I hope it will help you.