I am writing C++ code with OpenCV where I'm trying to detect a chessboard in an image (loaded from a .jpg file) in order to warp the perspective of the image. When findChessboardCorners() finds the chessboard, the rest of my code works perfectly. But sometimes the function does not detect the pattern, and this behavior seems to be random.
For example, there is one image that works at its original resolution of 2560x1920, but not if I first scale it down to 800x600 with GIMP. Another image does the opposite: it doesn't work at the original resolution, but does work scaled down.
Here's the bit of my code that does the detection:
Mat grayimg = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
if (grayimg.empty()) {
    printf("Unable to read image");
    return 0;
}
bool patternfound = findChessboardCorners(grayimg, patternsize, corners,
CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK);
if (!patternfound) {
    printf("Chessboard not found");
    return 0;
}
Is there some kind of bug in OpenCV causing this behavior? Does anyone have any tips on how to pre-process the image so the function works more consistently?
I already tried playing around with the flags CALIB_CB_ADAPTIVE_THRESH, CALIB_CB_NORMALIZE_IMAGE, CALIB_CB_FILTER_QUADS and CALIB_CB_FAST_CHECK. I also get the same results when I pass in a color image.
Thanks in advance
EDIT: I'm using OpenCV version 2.4.1
I had a very hard time getting findChessboardCorners to work until I added a white border around the chessboard.
I found that hint somewhere in the more recent documentation.
Before adding the border it would sometimes be impossible to recognize the chessboard, but with the white border it works every time.
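A quick way to test this, if your board sits close to the image edge, is to pad the image with white pixels before running the detection. A minimal sketch, reusing the variables from the question (the 50-pixel border width is just a starting value to tune):
Mat padded;
copyMakeBorder(grayimg, padded, 50, 50, 50, 50, BORDER_CONSTANT, Scalar(255));
bool patternfound = findChessboardCorners(padded, patternsize, corners,
    CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE);
// the detected corners are offset by the border width; subtract 50 from
// x and y before using them on the original image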
Welcome to the joys of real-world computer vision :-)
You don't post any images, and findChessboardCorners is a bit too high-level to debug. I suggest displaying (in Octave, or MATLAB, or with more OpenCV code) the locations of the detected corners on top of the image, to see whether enough of them are found. If none are, try running cvCornerHarris by itself on the image.
Sometimes the cause of the problem is excessive graininess of the image: try to blur it just a little and see if that helps.
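For the visualization, OpenCV itself has drawChessboardCorners. A minimal debugging sketch, again reusing the question's variables (the 3x3 Gaussian kernel is just a starting point for the slight blur):
Mat smoothed, vis;
GaussianBlur(grayimg, smoothed, Size(3, 3), 0);
bool found = findChessboardCorners(smoothed, patternsize, corners,
    CALIB_CB_ADAPTIVE_THRESH);
cvtColor(smoothed, vis, CV_GRAY2BGR); // color image so the markers are visible
drawChessboardCorners(vis, patternsize, Mat(corners), found);
imshow("detected corners", vis);
waitKey(0);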
Also, try removing the CALIB_CB_FAST_CHECK option and see if that makes a difference.
CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK is not in general the same as CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_FAST_CHECK; for flag values you should combine them with | (bitwise OR).
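For example, the call from the question would become:
bool patternfound = findChessboardCorners(grayimg, patternsize, corners,
    CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_FAST_CHECK);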
I am writing a program to analyze pictures and crop them around an object in the picture. The program crops the images well, but it leaves a weird gap on the side.
I copied the code from the accepted answer to this question:
Opencv c++ detect and crop white region on image
The image I start with looks like this on a larger canvas. I get this result, but I want to get rid of the extra white space on the left side in order to crop as closely as possible to the phone case.
Please help. I am using OpenCV and C++ in Visual Studio 2015.
This picture is not correctly cropped because of salt-and-pepper noise. To get rid of it, use a median blur. You can use the blurred image to fill nonBlackList, and then use that list to crop the original image correctly. Since it appears the image was slightly magnified after the noise appeared, you should probably try an aperture size of at least 5 to get rid of it completely.
cv::Mat in = cv::imread("CropWhite.jpg");
cv::Mat blurred;
cv::medianBlur(in, blurred, 5);
...
if (blurred.at<cv::Vec3b>(j,i) != cv::Vec3b(255,255,255))
{
    nonBlackList.push_back(cv::Point(i,j));
}
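For reference, here is a complete sketch of that approach; the pixel loop and the final boundingRect crop follow the linked answer, and the file names are placeholders:
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat in = cv::imread("CropWhite.jpg");
    cv::Mat blurred;
    cv::medianBlur(in, blurred, 5);   // remove the salt-and-pepper noise first

    // collect every pixel that is not pure white, based on the blurred image
    std::vector<cv::Point> nonBlackList;
    for (int j = 0; j < blurred.rows; ++j)
        for (int i = 0; i < blurred.cols; ++i)
            if (blurred.at<cv::Vec3b>(j, i) != cv::Vec3b(255, 255, 255))
                nonBlackList.push_back(cv::Point(i, j));

    // crop the ORIGINAL image to the bounding box of those pixels
    cv::Rect bb = cv::boundingRect(nonBlackList);
    cv::imwrite("CropWhiteResult.jpg", in(bb));
    return 0;
}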
Is there a way to produce glare on an image? Given an image with an object, I want to produce glare on a portion of it. If the image is 256x256, I want to produce glare on the first 64x64 patch. Is there a function in OpenCV I can use for that? If not, what is a good way to go about this problem?
I think this example does what you need. Each time it saves a face, it flashes the part of the screen where the face was recognised, so the glare changes place and size every time.
You can find it here:
https://github.com/MasteringOpenCV/code/tree/master/Chapter8_FaceRecognition
Look for this part in main.cpp:
// Make a white flash on the face, so the user knows a photo has been taken.
Mat displayedFaceRegion = displayedFrame(faceRect);
displayedFaceRegion += CV_RGB(90,90,90);
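Applied to your 256x256 example, the same trick on the first 64x64 patch would look something like this (the file name is a placeholder; the offset of 90 matches the snippet above):
cv::Mat img = cv::imread("input.png");
// a ROI Mat shares pixel data with img, so brightening the patch
// modifies the original image in place
cv::Mat patch = img(cv::Rect(0, 0, 64, 64));
patch += cv::Scalar(90, 90, 90);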
I'm looking for a way to take a screenshot of a particular area of the screen in C++ (so not the whole screen). It should then be saved as .png, .jpg, whatever, to use with another function afterwards.
Also, I am going to use it, somehow, with OpenCV. Thought I'd mention that, maybe it's a helpful detail.
OpenCV cannot take screenshots from your computer directly; you will need a different framework/method to do this. @Ben is correct, this link would be worth investigating.
Once you have captured this image, you will need to store it in a cv::Mat so that you are able to perform OpenCV operations on it.
In order to crop an image in OpenCV, the following code snippet would help.
IplImage * imagesource;
// Transform the C-style image into the C++ cv::Mat format
cv::Mat image(imagesource);
// Setup a rectangle to define your region of interest
cv::Rect myROI(10, 10, 100, 100);
// Crop the full image to that image contained by the rectangle myROI
// Note that this doesn't copy the data
cv::Mat croppedImage = image(myROI);
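If you need the crop to own its pixels (for example, because the source buffer will be freed), one way is to make a deep copy:
cv::Mat croppedCopy = image(myROI).clone();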
I have the following OpenCV2 code:
cv::Mat old = imread("some.JPG", CV_LOAD_IMAGE_COLOR);
cv::resize(old, old, cv::Size(342,228));
//cv::resize(old, old, cv::Size(342*2,228*2));
which, when displayed in a Qt container using
QImage qimg((uchar*)old.data, old.cols, old.rows,QImage::Format_RGB888);
ui->ImgA->setPixmap(QPixmap::fromImage(qimg));
gives me this result (ignore the unrelated slight green tint, that was my screenshot tool being slow...):
When I switch to the commented-out resize (i.e. 4x the size), I get a beautiful sunset photo with proper colors. It also works fine if I swap width and height. Is there something I'm missing in my code that's causing the wrong offset at certain resized sizes? (Note: the original JPG is 5472 by 3648 pixels.)
Try this:
QImage qimg((uchar*)old.data, old.cols, old.rows,old.step,QImage::Format_RGB888);
I posted this as a comment, but I have now tested it on my computer, and without the step parameter I get the same wrong picture, so I'm sure this is the solution to your problem. Without an explicit stride, QImage assumes each scanline is padded to a 4-byte boundary; at 342x228 a row is 342 * 3 = 1026 bytes, which is not a multiple of 4, so Qt reads each row with a growing offset. Passing old.step tells Qt the actual number of bytes per row of the cv::Mat (and explains why the 684x456 version, with 2052 bytes per row, happens to work).
I've been trying to stitch low-quality, low-resolution (320x180) images, taken by a quadrocopter, in OpenCV recently. Here is what I got:
http://postimg.org/gallery/1rqsycyk/
The pictures are taken almost at nadir and, as you can see, overlap a lot. Between each shot there is a translation, and I placed objects on the ground to keep the scene almost planar, so as not to violate the requirements for a homography. Even so, quite a few pictures are not taken into account during the stitching process.
Here is another example (only three images are stitched together):
http://postimg.org/gallery/1wpt3lmo/
I'm using the SURF feature detector and believe that the low quality of the images is not working out right for it, but I'm not sure about that.
Here's the code I use. I found it in a similar question, OpenCV non-rotational image stitching, and decided to use it since it worked better than mine:
Mat pano;
Stitcher stitcher = Stitcher::createDefault(false);
stitcher.setWarper(new PlaneWarper());
stitcher.setFeaturesFinder(new detail::SurfFeaturesFinder(1000,3,4,3,4));
stitcher.setRegistrationResol(0.1);
stitcher.setSeamEstimationResol(0.1);
stitcher.setCompositingResol(1);
stitcher.setPanoConfidenceThresh(1);
stitcher.setWaveCorrection(true);
stitcher.setWaveCorrectKind(detail::WAVE_CORRECT_HORIZ);
stitcher.setFeaturesMatcher(new detail::BestOf2NearestMatcher(false,0.3));
stitcher.setBundleAdjuster(new detail::BundleAdjusterRay());
Stitcher::Status status = Stitcher::ERR_NEED_MORE_IMGS;
try {
    status = stitcher.stitch(picturesTaken, pano);
}
catch (const cv::Exception& e) {
    std::cerr << e.what() << std::endl; // don't swallow stitching errors silently
}
My other guess is to do the stitching process manually instead of using the Stitcher class, but I'm not sure it would change much. So the question is: how can I make the stitching process more robust despite the low quality of the images? Also: does defining ROIs have an impact only on performance, or also on the chance of a successful stitch?
The result is not that bad given the quality of the input images!
To improve the quality of the output, I would do the following (in priority order):
1. estimate the camera distortion, in order to correct it and make the matching easier;
2. perform some histogram or lighting equalization before stitching (a sketch follows this list);
3. try to increase the temporal gap between pictures, or use another stitcher. Part of the blur in the output is created by the stitcher when merging the images in their overlap areas.
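For the equalization step, here is a minimal sketch of one common approach (an assumption on my part, not from the original answer): equalize only the luma channel in YCrCb, so the colors are preserved, and apply it to every image before stitching.
#include <opencv2/opencv.hpp>
#include <vector>

// Equalize lighting on one BGR frame. Apply to every image in
// picturesTaken before calling stitcher.stitch().
cv::Mat equalizeLighting(const cv::Mat& bgr)
{
    cv::Mat ycrcb;
    cv::cvtColor(bgr, ycrcb, CV_BGR2YCrCb);
    std::vector<cv::Mat> channels;
    cv::split(ycrcb, channels);
    cv::equalizeHist(channels[0], channels[0]); // luma channel only
    cv::merge(channels, ycrcb);
    cv::Mat out;
    cv::cvtColor(ycrcb, out, CV_YCrCb2BGR);
    return out;
}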
I believe the problem is that you take pictures of textureless regions and it's hard to extract good distinctive keypoints from such smooth regions.
I found this question very helpful for me. I have investigated this topic, and I have some other tips for you:
About finding similar images:
You set SurfFeaturesFinder with minHessian = 1000. That is a really big value (OpenCV suggests 300; I sometimes use 100). This is why only some of the images produce enough matches, not all of them.
You set PanoConfidenceThresh to 1; maybe you should set it to 0.8, which will stitch more images.
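In code, those two suggestions would look something like this (the values are just the suggestions above):
stitcher.setFeaturesFinder(new detail::SurfFeaturesFinder(300));
stitcher.setPanoConfidenceThresh(0.8);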
About the look of stitched images:
There are some other functions in the Stitcher pipeline. Try using:
stitcher.setSeamFinder(new detail::GraphCutSeamFinder(detail::GraphCutSeamFinderBase::COST_COLOR));
stitcher.setBlender(detail::Blender::createDefault(detail::Blender::MULTI_BAND, false));
stitcher.setExposureCompensator(detail::ExposureCompensator::createDefault(detail::ExposureCompensator::GAIN_BLOCKS));
Maybe this will be helpful for you!