GridAdaptedFeatureDetector disappeared from OpenCV 3.1? - c++

I'm working on an algorithm that should recognize an object from an image in a video file. For now, I want to use ORB (I know that SURF and SIFT are better at this kind of job, but I want to base that statement on my own results). Now I have one problem: when I run my program, the keypoints in one of the images are detected in a different area than in the other image, and it hardly finds any matches. In OpenCV 2.4 there was GridAdaptedFeatureDetector, a class that allows you to partition the source image into a grid and detect points in each cell. But I'm using OpenCV 3.1 (Visual Studio 2015) and it seems to have disappeared. Please help me find a solution.

They removed a lot of the feature detector/extractor adapters in OpenCV 3.
One way to get them back is to copy them into your project from OpenCV 2.4. It worked for me with OpponentSiftDescriptor. You will need to fix the interfaces, because they moved from the DescriptorExtractor and FeatureDetector interfaces to Features2D. You can copy the code from here: https://github.com/kipr/opencv/blob/master/modules/features2d/src/detectors.cpp
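Alternatively, you can emulate the old behaviour yourself. Below is a minimal sketch (not the original GridAdaptedFeatureDetector code): it runs ORB on each grid cell and shifts the keypoints back into full-image coordinates. The function name and the per-cell feature budget are my own choices; descriptors can then be computed on the full image with the combined keypoint list.

#include <opencv2/features2d.hpp>
#include <vector>

std::vector<cv::KeyPoint> detectGrid(const cv::Mat& image, int rows, int cols,
                                     int featuresPerCell)
{
    std::vector<cv::KeyPoint> all;
    cv::Ptr<cv::ORB> orb = cv::ORB::create(featuresPerCell);
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            // Bounds of the current grid cell.
            cv::Range rowRange(image.rows * r / rows, image.rows * (r + 1) / rows);
            cv::Range colRange(image.cols * c / cols, image.cols * (c + 1) / cols);
            std::vector<cv::KeyPoint> cell;
            orb->detect(image(rowRange, colRange), cell);
            // Shift keypoints from cell coordinates back to image coordinates.
            for (cv::KeyPoint& kp : cell) {
                kp.pt.x += static_cast<float>(colRange.start);
                kp.pt.y += static_cast<float>(rowRange.start);
                all.push_back(kp);
            }
        }
    }
    return all;
}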

This is in Python, so it might still be useful (I found this question while looking for a Python solution, so hopefully someone else does too). This is what I used to iterate over sub-blocks of the image:
import numpy as np

def blocks(img, rows, cols):
    # Split img into an evenly spaced rows x cols grid and yield each cell,
    # e.g.: for cell in blocks(img, 4, 4): detect keypoints in each cell.
    h, w = img.shape[:2]
    # Cell boundaries, rounded to integer pixel indices.
    xs = np.uint32(np.rint(np.linspace(0, w, num=cols + 1)))
    ys = np.uint32(np.rint(np.linspace(0, h, num=rows + 1)))
    ystarts, yends = ys[:-1], ys[1:]
    xstarts, xends = xs[:-1], xs[1:]
    for y1, y2 in zip(ystarts, yends):
        for x1, x2 in zip(xstarts, xends):
            yield img[y1:y2, x1:x2]

There is a recent paper that tackles the problem of homogeneous keypoint distribution across the image. C++, Python, and Matlab interfaces are provided in this repository.

Related

Adding Gaussian noise to an image in OpenCV/C++ and then denoising it?

I'm trying to add noise to an image and then denoise it, to see the difference in my object detection algorithm. I developed OpenCV code in C++ for detecting some objects in an image. I would like to test the robustness of the code, so I tried to add some noise. That way I can check how the object detection rate changes when noise is added to the image. So, first I added some random Gaussian noise like this:
cv::Mat noise(src.size(), src.type());
cv::Scalar m(10, 12, 34);    // per-channel mean
cv::Scalar sigma(1, 5, 50);  // per-channel standard deviation
cv::randn(noise, m, sigma);
src += noise;
I got these two images (the original and the noisy one):
So, is there a better noise model? And then, how do I denoise the image? Are there any denoising algorithms in OpenCV?
OpenCV comes with the photo module, in which you can find an implementation of the Non-Local Means denoising algorithm. The documentation can be found here:
http://docs.opencv.org/3.0-beta/modules/photo/doc/denoising.html
As far as I know it's the only suitable denoising algorithm available in both OpenCV 2.4 and OpenCV 3.x.
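For instance, a minimal C++ sketch applied to the noisy src from the question (the parameter values are just the commonly recommended defaults, not something tuned for your images):

#include <opencv2/photo.hpp>

cv::Mat denoised;
cv::fastNlMeansDenoisingColored(src, denoised,
                                10.0f,  // h: filter strength for luminance
                                10.0f,  // hColor: filter strength for color
                                7,      // templateWindowSize, should be odd
                                21);    // searchWindowSize, should be odd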
I'm not aware of any noise models in OpenCV other than randn. It shouldn't be a problem, however, to add a custom function that does that. There are some nice examples in Python (you should have no problem rewriting them in C++, as the OpenCV API remains roughly identical): How to add noise (Gaussian/salt and pepper etc) to image in Python with OpenCV
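As an illustration of such a custom function, here is a rough C++ sketch of salt-and-pepper noise of the kind those examples implement; it assumes an 8-bit, 3-channel image, and the 2% density is an arbitrary choice:

#include <opencv2/core.hpp>

void addSaltAndPepper(cv::Mat& img, double density = 0.02)
{
    // Assumes img is CV_8UC3; corrupt a fraction of the pixels.
    cv::RNG& rng = cv::theRNG();
    int n = static_cast<int>(img.total() * density);
    for (int i = 0; i < n; ++i) {
        int x = rng.uniform(0, img.cols);
        int y = rng.uniform(0, img.rows);
        // Half the corrupted pixels become black ("pepper"), half white ("salt").
        img.at<cv::Vec3b>(y, x) = (i % 2 == 0) ? cv::Vec3b(0, 0, 0)
                                               : cv::Vec3b(255, 255, 255);
    }
}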
There's also one thing I don't understand: if you can generate the noise yourself, why would you denoise the image with some algorithm when you already have the original image without noise?
Check this tutorial, it might help you:
http://docs.opencv.org/trunk/d5/d69/tutorial_py_non_local_means.html
Especially this part:
OpenCV provides four variations of this technique.
cv2.fastNlMeansDenoising() - works with a single grayscale image
cv2.fastNlMeansDenoisingColored() - works with a color image
cv2.fastNlMeansDenoisingMulti() - works with an image sequence captured in a short period of time (grayscale images)
cv2.fastNlMeansDenoisingColoredMulti() - same as above, but for color images
Common arguments are:
h : parameter deciding filter strength. A higher h value removes noise better, but removes image detail too. (10 is OK)
hForColorComponents : same as h, but for color images only. (normally the same as h)
templateWindowSize : should be odd. (recommended 7)
searchWindowSize : should be odd. (recommended 21)
And to add Gaussian noise to an image, maybe this thread will be helpful:
How to add Noise to Color Image - Opencv

Refining Camera parameters and calculating errors - OpenCV

I've been trying to refine my camera parameters with CvLevMarq, but after reading about it, it seems to produce mixed results - which is exactly what I am experiencing. I read about the alternatives and came upon Eigen - and also found this library that utilizes it.
However, the library above seems to use a stitching class that doesn't support OpenCV and will probably require me to port it to OpenCV.
Before going ahead and doing so, which will probably not be an easy task, I figured I'd ask around first and see if anyone else had the same problem?
I'm currently using:
1. Detecting features with FastFeatureDetector
Ptr<FeatureDetector> detector = new FastFeatureDetector(5,true);
detector->detect(firstGreyImage, features_global[firstImageIndex].keypoints); // Previous picture
detector->detect(secondGreyImage, features_global[secondImageIndex].keypoints); // New picture
2. Extracting descriptors with SiftDescriptorExtractor
Ptr<SiftDescriptorExtractor> extractor = new SiftDescriptorExtractor();
extractor->compute(firstGreyImage, features_global[firstImageIndex].keypoints, features_global[firstImageIndex].descriptors); // Previous Picture
extractor->compute(secondGreyImage, features_global[secondImageIndex].keypoints, features_global[secondImageIndex].descriptors); // New Picture
3. Matching features with BestOf2NearestMatcher
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_use_gpu, 0.50f);
matcher(features_global, pairwise_matches);
matcher.collectGarbage();
4. CameraParams.R quaternion passed from a device (slightly inaccurate which causes the issue)
5. CameraParams.Focal == 389.0f -- Played around with this value, 389.0f is the only value that matches the images horizontally but not vertically.
6. Bundle Adjustment (cvLevMarq, calcError & calcJacobian)
Ptr<BPRefiner> adjuster = new BPRefiner();
adjuster->setConfThresh(0.80f);
adjuster->setMaxIterations(5);
(*adjuster)(features,pairwise_matches,cameras);
7. ExposureCompensator (GAIN)
8. OpenCV MultiBand Blender
What works so far:
SeamFinder - works to some extent, but it depends on the result of the cvLevMarq algorithm. I.e. if the algorithm is off, SeamFinder is going to be off too.
HomographyBasedEstimator works beautifully. However, since it "relies" on the features, it's unfortunately not the method that I'm looking for.
I wouldn't want to rely on the features since I already have the matrix, if there's a way to "refine" the current matrix instead - then that would be the targeted result.
Results so far:
cvLevMarq "Russian roulette" 6/10:
This is what I'm trying to achieve 10/10 times. But 4/10 times, it looks like the picture below this one.
By simply re-running the algorithm, the results change. 4/10 times it looks like this (or worse):
cvLevMarq "Russian roulette" 4/10:
Desired Result:
I'd like to "refine" my camera parameters with the features that I've matched, in the hope that the images would align perfectly. Instead of hoping that cvLevMarq will do the job for me (which it won't 4/10 times), is there another way to ensure that the images will be aligned?
Update:
I've tried these versions:
OpenCV 3.1: Using CvLevMarq with 3.1 is like playing Russian roulette. Sometimes it can align them perfectly, and other times it estimates the focal length as NaN, which causes a segfault in the MultiBand Blender (ROI = 0,0,1,1 because of the NaN).
OpenCV 2.4.9/2.4.13: Using CvLevMarq with 2.4.9 or 2.4.13 is unfortunately the same thing, minus the NaN issue. 6/10 times it can align the images perfectly, but the other 4 times it's completely off.
My Speculations / Thoughts:
Template matching using OpenCV. Maybe if I template match the ends of the images (i.e. x = 0, y = 0, height = image.height, width = 50). Any thoughts about this?
I found this interesting paper about Levenberg Marquardt applied in Homography. That looks like something that could solve my problem since the paper uses corner detection and whatnot to detect the features in the images. Any thoughts about this?
Maybe the problem isn't in CvLevMarq but instead in BestOf2NearestMatcher? However, I've searched for days and I couldn't find another method that returns the pairwise matches to pass to BPRefiner.
Hough Line Transform. Detecting the lines in the first/second image and using them to align the images. Any thoughts on this? One concern: what if the images don't have any lines, i.e. an empty wall?
Maybe I'm overcomplicating something simple... or maybe I'm not? Basically, I'm trying to align a set of images so I can warp them without them overlapping each other. Drop a comment if it doesn't make sense :)
Update Aug 12:
After trying all kinds of combinations, the absolute best so far is CvLevMarq. The only problem with it is the mixed results shown in the images above. If anyone has any input, I'd be forever grateful.
It seems your parameter initialization is the problem. I would use a linear estimator first, i.e. ignore your noisy sensor, and then use the result as the initial values for the non-linear optimizer.
A quick method is to use getAffineTransform, as you have mostly rotation.
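For what it's worth, a minimal sketch of that idea: estimate an affine transform from three matched point pairs and use it (or a homography from cv::findHomography with more matches) as the initial estimate for the refinement. The point values below are placeholders, not real matches:

#include <opencv2/imgproc.hpp>

// Three matched point pairs (placeholder coordinates).
cv::Point2f srcPts[3] = { {0.f, 0.f}, {100.f, 0.f}, {0.f, 100.f} };  // image 1
cv::Point2f dstPts[3] = { {2.f, 1.f}, {101.f, 3.f}, {1.f, 102.f} };  // image 2
cv::Mat A = cv::getAffineTransform(srcPts, dstPts);  // 2x3 affine matrix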
Maybe you want to take a look at this library: https://github.com/ethz-asl/kalibr.
Cheers
If you want to stitch the images, you should see stitching_detailed.cpp. It will probably solve your problem.
In addition, I have used the Graph Cut seam finding method with Canny edge detection for better stitching results in this code. If you want to optimize this code, see here.
Also, if you are going to use it for personal use, SIFT is good. You should know that SIFT is patented and will cost you money if you use it for commercial purposes. Use ORB instead.
Hope it helps!

efficient way to grayscale a frame without using OpenCV

I am capturing live video from my web camera into Mat objects.
Is there any efficient way to convert a Mat object into a grayscale image frame without using an API such as OpenCV?
I have tried it using OpenCV, but I'd like to implement it myself in plain C++. Is there any way to do it?
I would recommend you use OpenCV. OpenCV already contains optimized implementations for converting between various color spaces, including from RGB (actually BGR in OpenCV) to grayscale.
See for more details: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html.
OpenCV is already implemented in C++.
If you really want to implement your own for didactic purposes (I don't see any reason why you would do it otherwise), then the simple way to do it is to iterate over the R, G, B values in the Mat and apply the formula:
resultingValue = 0.299 * R + 0.587 * G + 0.114 * B
(See also the Stack Overflow question Converting RGB to grayscale/intensity for a more detailed discussion of why the R, G, B components typically get weighted differently.)
This assumes you want to convert RGB to gray. For other color space conversions, please look at the OpenCV documentation, which also details how the transformations are done (see the link provided above).
What's more, OpenCV is open source. This means that if you want to see what an optimal implementation might look like, you can download the source code and take a look.
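As a starting point, here is a minimal (unoptimized) sketch of that per-pixel loop, assuming an 8-bit, 3-channel cv::Mat in OpenCV's usual BGR channel order:

#include <opencv2/core.hpp>

cv::Mat toGray(const cv::Mat& bgr)
{
    cv::Mat gray(bgr.rows, bgr.cols, CV_8UC1);
    for (int y = 0; y < bgr.rows; ++y) {
        const cv::Vec3b* in = bgr.ptr<cv::Vec3b>(y);
        uchar* out = gray.ptr<uchar>(y);
        for (int x = 0; x < bgr.cols; ++x) {
            // in[x][0] = B, in[x][1] = G, in[x][2] = R
            out[x] = static_cast<uchar>(0.114 * in[x][0] +
                                        0.587 * in[x][1] +
                                        0.299 * in[x][2]);
        }
    }
    return gray;
}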
Google tells me that you have to average the R, G and B values of each pixel. Some algorithms are discussed here:
http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
The simplest is to replace each pixel's R, G and B values with their average (R+G+B)/3. Check the link above for the results of a few different averaging methods.

How to create a depth map from PointGrey BumbleBee2 stereo camera using Triclops and FlyCapture SDKs?

I've got the BumbleBee 2 stereo camera and two mentioned SDKs.
I've managed to capture video from it in my program, rectify the stereo images and get a disparity map. The next thing I'd like to have is a depth map similar to the one the Kinect gives.
The Triclops documentation is rather short; it only lists the functions, without describing a typical workflow. The workflow is shown in the examples.
So far I've found two relevant functions: the family of triclopsRCDxxToXYZ() functions and the triclopsExtractImage3d() function.
Functions from the first family calculate the x, y and z coordinates of a single pixel. The z coordinate corresponds perfectly to the depth in meters. However, to use these functions I have to write two nested loops, as shown in the stereo3dpoints example. That adds a lot of overhead, because each call also returns two coordinates I don't need.
The second function, triclopsExtractImage3d(), always returns the error TriclopsErrorInvalidParameter. The documentation only says that "there is a geometry mismatch between the context and the TriclopsImage3d", which is not clear to me.
The examples in the Triclops 3.3.1 SDK do not show how to use it. Google brings up an example from Triclops SDK 3.2, which is absent in 3.3.1.
I've tried adding lines 253-273 from the link above to the current stereo3dpoints example - and got that error.
Does anyone have experience with this?
Is it valid to use triclopsExtractImage3d(), or is it obsolete?
I also tried plotting values of disparity vs. z, obtained from triclopsRCDxxToXYZ().
The plot shows an almost exact inverse proportionality, that is, z = k / disparity. But k is not constant across the image; it varies from approximately 2.5e-5 to 1.4e-3, i.e. two orders of magnitude. Therefore, it is incorrect to calculate this value once and use it forever.
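(For reference: an ideally rectified stereo pair relates depth and disparity by z = f * B / d, with f the focal length in pixels and B the baseline, so k = f * B should indeed be constant. The values below are made up for illustration, not the BumbleBee2's actual calibration.)

const double f = 800.0;  // focal length in pixels (assumed value)
const double B = 0.12;   // baseline in meters (assumed value)
double depthFromDisparity(double d) { return f * B / d; }  // z = f * B / d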
Maybe it is a bit too late and you have figured it out by yourself, but:
To use triclopsExtractImage3d you have to create a TriclopsImage3d first.
TriclopsImage3d *depthImage;
triclopsCreateImage3d(triclopsContext, &depthImage); // allocate an image matching the context geometry
triclopsExtractImage3d(triclopsContext, depthImage); // fill it with 3D points
// ... use depthImage here ...
triclopsDestroyImage3d(&depthImage);                 // free it when done

OpenCV: label connected components and compute feature measurements for image regions

I need help with the following Matlab code:
[labelMap_1,num] = bwlabel(labelMap == 1);                % label connected components
labelMap1Stat = imfeature(labelMap_1,'Area','Centroid');  % area and centroid of each region
In OpenCV I found a few threads saying that I must use bloblib for it.
But suppose I don't want to use it, because I need to port this code to Android and I am concerned about the size. How can I achieve the same thing without the blob library overhead?
If there is no solution, then which methods inside bloblib will produce the same results as these two functions?
Thanks in advance.
Try using functions related to contours, like cvFindContours().
This article provides some insights on how to use OpenCV for blobs.
You can calculate centroid information by using the cvMoments() function.
The center of mass is then given by xc = M10 / M00 and yc = M01 / M00, where M10, M01 and M00 are fields in the structure returned by the Moments call.
Use cvContourArea() to find the area.
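Putting it together with the C++ API (cv::findContours, cv::moments and cv::contourArea are the modern equivalents of the functions above), here is a minimal sketch of the bwlabel/imfeature workflow; it assumes labelMap is a single-channel 8-bit image:

#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat mask = (labelMap == 1);  // binary mask, like bwlabel(labelMap == 1)
std::vector<std::vector<cv::Point>> contours;
cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const std::vector<cv::Point>& contour : contours) {
    double area = cv::contourArea(contour);  // 'Area'
    cv::Moments m = cv::moments(contour);
    if (m.m00 != 0) {                        // skip degenerate contours
        cv::Point2d centroid(m.m10 / m.m00, m.m01 / m.m00);  // 'Centroid'
    }
}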