Low quality aerial stitching with OpenCV - C++

I've been trying to stitch low quality, low resolution (320x180) images, taken by a quadrocopter, in OpenCV recently. Here is what I got:
http://postimg.org/gallery/1rqsycyk/
The pictures are taken almost at nadir and, as you can see, they overlap a lot. Between each shot there is a translation, and I placed objects on the ground to keep the scene almost planar, so as not to violate the requirements for a homography. Still, quite a few pictures are not taken into account during the stitching process.
Here another example, (only three images are stitched together):
http://postimg.org/gallery/1wpt3lmo/
I'm using the SURF feature detector and suspect that the low quality of the images is causing problems for it, but I'm not sure about that.
Here's the code I use; I found it in a similar question, OpenCV non-rotational image stitching, and decided to use it since it worked better than mine:
Mat pano;
Stitcher stitcher = Stitcher::createDefault(false);
stitcher.setWarper(new PlaneWarper());
stitcher.setFeaturesFinder(new detail::SurfFeaturesFinder(1000,3,4,3,4));
stitcher.setRegistrationResol(0.1);
stitcher.setSeamEstimationResol(0.1);
stitcher.setCompositingResol(1);
stitcher.setPanoConfidenceThresh(1);
stitcher.setWaveCorrection(true);
stitcher.setWaveCorrectKind(detail::WAVE_CORRECT_HORIZ);
stitcher.setFeaturesMatcher(new detail::BestOf2NearestMatcher(false,0.3));
stitcher.setBundleAdjuster(new detail::BundleAdjusterRay());
Stitcher::Status status = Stitcher::ERR_NEED_MORE_IMGS;
try {
    status = stitcher.stitch(picturesTaken, pano);
}
catch (const cv::Exception& e) {
    // swallowing the exception hides failures; consider logging e.what()
}
My other thought is to do the stitching manually instead of using the Stitcher class, but I'm not sure it would change much. So the question is: how can I make the stitching process more robust despite the low quality of the images? Also: does defining ROIs only affect performance, or does it also affect the chance of a successful stitch?

The result is not that bad given the quality of the input images!
To improve the quality of the output, I would do the following (in priority order):
estimate the camera (lens) distortion so you can undistort the images and make the matching easier
perform some histogram or lighting equalization before stitching (see the sketch after this list)
try to increase the temporal gap between pictures, or use another stitcher. Part of the blur in the output is created by the stitcher when it blends the images in their overlap areas.
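For the first two points, a minimal preprocessing sketch could look like the following. It assumes you already have cameraMatrix and distCoeffs from a calibration step (e.g. calibrateCamera with a chessboard), and the luminance-only equalization is just one possible choice:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Undistort each frame and equalize its luminance before stitching.
// cameraMatrix / distCoeffs are assumed to come from a prior calibration.
std::vector<Mat> preprocess(const std::vector<Mat>& frames,
                            const Mat& cameraMatrix, const Mat& distCoeffs)
{
    std::vector<Mat> out;
    for (size_t i = 0; i < frames.size(); ++i)
    {
        Mat undistorted;
        undistort(frames[i], undistorted, cameraMatrix, distCoeffs);

        // Equalize only the luminance channel so colors are preserved.
        Mat ycrcb;
        cvtColor(undistorted, ycrcb, CV_BGR2YCrCb);
        std::vector<Mat> channels;
        split(ycrcb, channels);
        equalizeHist(channels[0], channels[0]);
        merge(channels, ycrcb);

        Mat result;
        cvtColor(ycrcb, result, CV_YCrCb2BGR);
        out.push_back(result);
    }
    return out;
}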

I believe the problem is that you are taking pictures of textureless regions, and it's hard to extract good, distinctive keypoints from such smooth regions.
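One quick way to check this hypothesis is to count how many keypoints each frame actually yields; frames dominated by grass, sand or asphalt often produce very few. A small diagnostic sketch (ORB is used here only so it builds without the nonfree module):
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>
using namespace cv;

// Print the number of detected keypoints per frame; frames with very few
// keypoints are the likely reason the stitcher drops them.
void countKeypoints(const std::vector<Mat>& frames)
{
    ORB orb(1000); // detect up to 1000 keypoints per image
    for (size_t i = 0; i < frames.size(); ++i)
    {
        std::vector<KeyPoint> kp;
        orb.detect(frames[i], kp);
        printf("frame %u: %u keypoints\n", (unsigned)i, (unsigned)kp.size());
    }
}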

I found this question very helpful for me. I have investigated this topic and have some more tips for you:
About finding similar images:
You set SurfFeaturesFinder with minHessian = 1000. That is a really big value (OpenCV suggests 300; I sometimes use 100). This is probably why only some of the images are matched rather than all of them.
You set the pano confidence threshold to 1; try setting it to 0.8 instead, it will stitch more images (see the snippet below).
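In code, these two changes would look like this (the values are starting points to experiment with, not proven settings):
stitcher.setFeaturesFinder(new detail::SurfFeaturesFinder(300)); // lower minHessian, so more keypoints survive
stitcher.setPanoConfidenceThresh(0.8); // accept lower-confidence image pairs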
About the look of the stitched images:
There are some other functions in the Stitcher pipeline. Try using:
stitcher.setSeamFinder(new detail::GraphCutSeamFinder(detail::GraphCutSeamFinderBase::COST_COLOR));
stitcher.setBlender(detail::Blender::createDefault(detail::Blender::MULTI_BAND, false));
stitcher.setExposureCompensator(detail::ExposureCompensator::createDefault(detail::ExposureCompensator::GAIN_BLOCKS));
Maybe this will be helpful for you!

Related

OpenCV HSV conversion looks weird

I am working on a project that detects hematomas on skin. I am having an issue with color after conversion from RGB to HSV. My algorithm detects a hematoma by its color.
With some images I have good results, like here:
Original img: http://imgur.com/WHiOWdj
Result img: http://imgur.com/PujbnHa
But with some images I get bad results, like this:
Original img: http://imgur.com/OshB99r
Result img: http://imgur.com/CuNzAId
The same original image after conversion to HSV: http://imgur.com/lkVwtCs
Do you have any ideas how to fix it?
Thanks
Looking at your result image, I think you are only using the H channel of the original image in your algorithm. The false positive detections can come from the fact that some parts of the healthy skin have almost the same H value as the hematoma. You can see on the grey-scale image of the H channel that both parts have similar values:
The difference between the two parts is the saturation value. On the following image you can see the S channel of the original image, and it shows clearly that at the hematoma the saturation is much higher than at the other parts of the arm:
This is expected, because the hematoma has a much stronger color than the healthy skin.
So I suggest you use both the H and S channels in your algorithm, that is, take into account only those parts of the H image where the S image contains high saturation values. A simple way to do that is to binarize both the H and S images and combine them with an AND operation:
H image after binarisation:
S image after binarisation:
Image after H&S operation:
You can see that on the result image only the hematoma part is white (except for some noise, but you can eliminate that easily, for example by blob size or by morphological filtering).
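A minimal sketch of the H/S masking idea described above; the threshold values here are illustrative guesses and would have to be tuned on your images:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    Mat bgr = imread("arm.jpg"); // hypothetical input image
    Mat hsv;
    cvtColor(bgr, hsv, CV_BGR2HSV);

    std::vector<Mat> ch;
    split(hsv, ch); // ch[0]=H, ch[1]=S, ch[2]=V

    // Binarize H and S separately; the ranges below are only examples.
    Mat hMask, sMask;
    inRange(ch[0], Scalar(100), Scalar(140), hMask);  // assumed hue range of the hematoma
    threshold(ch[1], sMask, 80, 255, THRESH_BINARY);  // keep strongly saturated pixels

    // Keep only pixels that pass both tests.
    Mat hematomaMask;
    bitwise_and(hMask, sMask, hematomaMask);

    // Optional cleanup of small noise blobs.
    morphologyEx(hematomaMask, hematomaMask, MORPH_OPEN,
                 getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));

    imshow("mask", hematomaMask);
    waitKey(0);
    return 0;
}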
EDIT
It is important to note that binarization is one of the most important (and sometimes also very complicated) steps in object detection algorithms, because binarization is what first highlights the objects to detect.
If the external conditions (lighting, color of objects, etc.) do not change significantly from image to image, you can use fixed binarization thresholds. If such a constant environment cannot be ensured, you have to use more sophisticated methods. There are a lot of possibilities; here you can read about some of them:
Wikipedia - Thresholding
Wikipedia - Balanced histogram thresholding
Several solutions are based on histogram analysis: in the histograms of images containing objects there are always several local maxima, whose positions can vary depending on the environment, and if you find them you can adapt the binarization threshold easily.
For example, the histogram of the H channel of the original image is the following:
The first maximum belongs to the background, the second to the skin and the last to the hematoma. It can be assumed that these three maxima can be found in every image, only their positions vary depending on the lighting or other conditions. Putting a threshold between the 2nd and the 3rd local maximum is a good way to highlight the hematoma.
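A rough sketch of that idea: compute the H histogram, smooth it, and cut somewhere between the last two peaks. The peak-finding below is deliberately naive and only meant to illustrate the approach:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Find a binarization threshold for the H channel by locating the valley
// between the last two histogram peaks. Illustration only.
int valleyThreshold(const Mat& hChannel)
{
    int channels[] = {0};
    int histSize = 180;
    float hrange[] = {0, 180};
    const float* ranges[] = {hrange};
    Mat hist;
    calcHist(&hChannel, 1, channels, Mat(), hist, 1, &histSize, ranges);

    GaussianBlur(hist, hist, Size(1, 9), 0); // smooth the 1-D histogram

    std::vector<int> peaks;
    for (int i = 1; i < histSize - 1; ++i)
        if (hist.at<float>(i) > hist.at<float>(i - 1) &&
            hist.at<float>(i) > hist.at<float>(i + 1))
            peaks.push_back(i);

    if (peaks.size() < 2)
        return 90; // arbitrary fallback

    // Lowest bin between the last two peaks.
    int a = peaks[peaks.size() - 2], b = peaks[peaks.size() - 1], best = a;
    for (int i = a; i <= b; ++i)
        if (hist.at<float>(i) < hist.at<float>(best))
            best = i;
    return best;
}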
Finally, I suggest you read the following article about thresholding in OpenCV:
OpenCV - Thresholding

How to detect image location before stitching with OpenCV / C++

I'm trying to merge/stitch 2 images together but found that the default stitcher class in OpenCV could not handle my images.
So I started to write my own.
Unfortunately the images are too large to attach to this message (they are both 12600x9000 pixels in size), so I'll try to explain as well as possible.
The 2 images are not pictures taken by a camera but TIFF files extracted from a PDF file.
The images themselves are actually CAD drawings, so there are not many gradients in them, and I think that is why the default stitcher class could not handle them.
So far, I managed to extract the features and match them.
Also I used the following well known example to stitch them together:
Mat WarpedImage;
cv::warpPerspective(img_2,WarpedImage,homography,cv::Size(2*img_2.cols,2*img_2.rows));
Mat half(WarpedImage,Rect(0,0,img_1.cols,img_1.rows));
img_1.copyTo(half);
I sort of made it fit, because my problem is that in my case the 2 images could be aligned vertically or horizontally.
By default, all stitching examples on the internet assume the first image is the left image and the second image is the right image.
So my first question would be:
How can I detect whether the second image is to the left, right, above or below the first image, and create a properly sized new image?
Secondly:
Currently I'm getting the proper image; however, because I don't have decent code to compute the ideal width and height of the new image, I end up with a lot of black/empty space in it.
What would be the best C++ code to remove those black areas?
(I'm seeing a lot of Python scripts on the net, but no C++ examples of this, and I have zero Python skills.)
Thank you very much in advance for your help.
Greetings,
Floris.
You can reproject the corners of the second image with perspectiveTransform. With the transformed points you can find the relative position of your images and calculate the new image size that will fit both of them. This will also let you deal with the black areas, since you know the boundaries of the two images.
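A sketch of that idea; the variable names are illustrative, and homography is assumed to map img_2 into img_1's coordinate frame, as in your snippet:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Warp img_2 onto a canvas large enough to hold both images, whatever
// their relative position (left/right/above/below), then paste img_1.
void stitchPair(const Mat& img_1, const Mat& img_2, const Mat& homography, Mat& pano)
{
    // Reproject the corners of img_2 into img_1's coordinate frame.
    std::vector<Point2f> corners(4), projected;
    corners[0] = Point2f(0, 0);
    corners[1] = Point2f((float)img_2.cols, 0);
    corners[2] = Point2f((float)img_2.cols, (float)img_2.rows);
    corners[3] = Point2f(0, (float)img_2.rows);
    perspectiveTransform(corners, projected, homography);

    // Bounding box of both images in img_1's frame.
    Rect box = Rect(0, 0, img_1.cols, img_1.rows) | boundingRect(projected);

    // Shift everything so the box starts at (0,0), then warp onto the canvas.
    Mat shift = (Mat_<double>(3, 3) << 1, 0, -box.x, 0, 1, -box.y, 0, 0, 1);
    warpPerspective(img_2, pano, shift * homography, box.size());

    // Paste img_1 at its shifted position.
    Mat roi(pano, Rect(-box.x, -box.y, img_1.cols, img_1.rows));
    img_1.copyTo(roi);
}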

opencv: performing night vision

First, I am not talking about real night vision. I am talking about the technique used to improve picture brightness when lighting conditions are poor. You can see this technique working perfectly on smartphones, and superbly on phablets. I know the idea behind it: take the existing light and use it to make the picture clearer. But how do I do this in OpenCV? Any method or step-by-step process?
There are essentially 2 ways to brighten your image:
Get more photons in the camera
Give each photon more 'weight'
For approach 1, if you can't control the lighting, the only way to get more photons is to expose your sensor for a longer period of time. That assumes you can change your camera's integration time. The drawback of this approach is that you can get more motion blur.
For approach 2, this amounts to applying a multiplicative gain to the input image, which makes each photon contribute more DN's to the resulting image. Applying such a gain though supposes that you have a priori information about the input image's brightness. If your gain value is not good you'll have an image that's either saturated or too dark.
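For example, a fixed gain of 2 applied to a hypothetical input Mat named dark (values that overflow are saturated to 255):
Mat brighter;
dark.convertTo(brighter, -1, 2.0, 0); // dst = saturate(2.0 * src + 0)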
To improve your image automatically, the best approach would be to use OpenCV's equalizeHist function, as described here. The operation isn't exactly a multiplicative gain but the effect is similar.
The last step would be, as previously suggested in comments, to apply a gamma correction as described here. Gamma correction tends to reduce the contrast in an image, but since you improved the contrast using histogram equalization you should get good results.
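Here is a gamma correction sketch using a 256-entry lookup table; gamma values below 1 brighten the dark regions, and the 0.5 in the usage comment is only an example:
#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

// Apply gamma correction via a lookup table.
Mat gammaCorrect(const Mat& src, double gamma)
{
    Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i)
        lut.at<uchar>(i) = saturate_cast<uchar>(pow(i / 255.0, gamma) * 255.0);
    Mat dst;
    LUT(src, lut, dst);
    return dst;
}
// Usage (hypothetical): Mat brightened = gammaCorrect(equalized, 0.5);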
As Michel points out try equalizeHist.
Here's a minimal example:
#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <vector>
using namespace cv;
using std::vector;
int main(int argc, char *argv[])
{
    namedWindow("input");
    namedWindow("output");
    Mat in = imread("yourDarkImage.jpg");
    Mat out;
    if (in.empty()) exit(1);
    // equalize histograms per channel
    vector<Mat> colors;
    split(in, colors);
    equalizeHist(colors[0], colors[0]);
    equalizeHist(colors[1], colors[1]);
    equalizeHist(colors[2], colors[2]);
    merge(colors, out);
    imshow("input", in);
    imshow("output", out);
    waitKey(0);
    return 0;
}

Stitcher class OpenCV

I have two images with an overlapping area of about 25%, but the stitching fails.
How can I handle this problem?
I tried using ORB and SURF, and I also changed the threshold. Are there any other options I should consider?
Mat pano;
Stitcher stitcher = Stitcher::createDefault(false); // try_use_gpu = false
//Stitcher::Status status = stitcher.stitch(imgs, pano);
//stitcher.setWarper(new PlaneWarper());
stitcher.setWarper(new SphericalWarper());
stitcher.setFeaturesFinder(new detail::SurfFeaturesFinder(1000,3,4,3,4));
//stitcher.setFeaturesFinder(new detail::OrbFeaturesFinder());
stitcher.setRegistrationResol(0.1);
stitcher.setSeamEstimationResol(0.1);
stitcher.setCompositingResol(0.6);
stitcher.setPanoConfidenceThresh(1);
stitcher.setWaveCorrection(true);
stitcher.setWaveCorrectKind(detail::WAVE_CORRECT_HORIZ);
stitcher.setFeaturesMatcher(new detail::BestOf2NearestMatcher(false,0.3));
stitcher.setBundleAdjuster(new detail::BundleAdjusterRay());
tstart = clock();
Stitcher::Status status = stitcher.stitch(imgs, pano);
25% overlap is definitely not enough. 40% will give somewhat better results, but still not good enough. If you want good overlap, try something between 60% and 80%. It is important that the next image in your sequence overlaps the central area of the previous one, since there is little to no distortion there. With 80% overlap, for example, not only does that happen, but the central areas of both images also come pretty close together, so you can neglect distortion and find plenty of matches, provided that the texture quality of your images allows it.
My advice is to first look at the samples provided by the library itself. You can find the latest versions at https://github.com/Itseez/opencv/tree/master/samples/cpp/stitcher.cpp and https://github.com/Itseez/opencv/blob/master/samples/cpp/stitching_detailed.cpp. Then it is also good to dig into the source itself (https://github.com/Itseez/opencv/blob/master/modules/stitching/src/stitcher.cpp) and to look up the stitching module in the OpenCV reference manual (online or as a PDF download).

Inconsistent outcome of findChessboardCorners() in opencv

I am writing C++ code with OpenCV in which I'm trying to detect a chessboard in an image (loaded from a .jpg file) in order to warp the perspective of the image. When the chessboard is found by findChessboardCorners(), the rest of my code works perfectly. But sometimes the function does not detect the pattern, and this behavior seems to be random.
For example, there is one image that works at its original resolution of 2560x1920, but not if I scale it down to 800x600 with GIMP first. However, another image seems to do the opposite: it doesn't work at the original resolution, but does work scaled down.
Here's the bit of my code that does the detection:
Mat grayimg = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
if (grayimg.data == NULL) {
    printf("Unable to read image");
    return 0;
}
bool patternfound = findChessboardCorners(grayimg, patternsize, corners,
        CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK);
if (!patternfound) {
    printf("Chessboard not found");
    return 0;
}
Is there some kind of bug in OpenCV causing this behavior? Does anyone have tips on how to pre-process the image so that the function works more consistently?
I have already tried playing around with the parameters CALIB_CB_ADAPTIVE_THRESH, CALIB_CB_NORMALIZE_IMAGE, CALIB_CB_FILTER_QUADS and CALIB_CB_FAST_CHECK. I get the same results when I pass in a color image.
Thanks in advance
EDIT: I'm using OpenCV version 2.4.1
I had a very hard time getting findChessboardCorners to work until I added a white border around the chessboard.
I found that as a hint somewhere in the more recent documentation.
Before adding the border, it would sometimes be impossible to recognize the chessboard, but with the white border it works every time.
Welcome to the joys of real-world computer vision :-)
You don't post any images, and findChessboardCorners is a bit too high-level to debug. I suggest displaying (in Octave, or MATLAB, or with more OpenCV code) the locations of the detected corners on top of the image, to see whether enough are detected. If none are, try running cvCornerHarris by itself on the image.
Sometimes the cause of the problem is excessive graininess of the image: try blurring it just a little and see if that helps.
Also, try removing the CALIB_CB_FAST_CHECK option and give it another go, as in the sketch below.
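A small sketch of those suggestions: blur slightly, detect without CALIB_CB_FAST_CHECK, and draw whatever corners were found on top of the image (the blur kernel size is just a value to experiment with):
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// ... assuming grayimg and patternsize from the question ...
Mat blurred;
GaussianBlur(grayimg, blurred, Size(3, 3), 0); // very light blur

std::vector<Point2f> corners;
bool found = findChessboardCorners(blurred, patternsize, corners,
                                   CALIB_CB_ADAPTIVE_THRESH); // no FAST_CHECK

// Visualize whatever was detected, even on failure.
Mat vis;
cvtColor(blurred, vis, CV_GRAY2BGR);
drawChessboardCorners(vis, patternsize, Mat(corners), found);
imshow("corners", vis);
waitKey(0);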
CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK is not the same as CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_FAST_CHECK; you should use | (binary OR).