OpenCV: label connected components and compute feature measurements for image regions - C++

I need help with the following Matlab code:
[labelMap_1,num] = bwlabel(labelMap == 1);
labelMap1Stat = imfeature(labelMap_1,'Area','Centroid');
In OpenCV I found a few threads saying that I must use bloblib for this.
But suppose I don't want to use it, because I need to port this code to Android and I am concerned about the binary size. How can I achieve the same thing without the overhead of the blob library?
If there is no such solution, which methods inside bloblib will produce the same results as these two functions?
Thanks in advance.

Try using functions related to contours, such as cvFindContours().
This article provides some insights on how to use OpenCV for blobs.
You can calculate centroid information by using the cvMoments() function.
The center of mass is then given by xc = M10 / M00 and yc = M01 / M00, where M10, M01 and M00 are the spatial moments in the structure filled by the cvMoments() call.
Use cvContourArea() to find area.
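For reference, here is a minimal C++ sketch of that recipe using the C++ API (cv::findContours, cv::moments and cv::contourArea instead of the old cv* calls). The function name and variables are placeholders, and the input is assumed to be an 8-bit binary mask such as the result of (labelMap == 1):
#include <opencv2/opencv.hpp>
#include <vector>

// Compute 'Area' and 'Centroid' for every connected region in a binary mask.
void regionStats(const cv::Mat& binaryMask)
{
    std::vector<std::vector<cv::Point> > contours;
    // findContours may modify its input, so work on a copy
    cv::findContours(binaryMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i) {
        double area = cv::contourArea(contours[i]);           // 'Area'
        cv::Moments m = cv::moments(contours[i]);
        if (m.m00 == 0.0) continue;                           // skip degenerate contours
        cv::Point2d centroid(m.m10 / m.m00, m.m01 / m.m00);   // 'Centroid'
        // ... use area and centroid here
    }
}
Note that cv::contourArea measures the polygon enclosed by the contour, so it can differ slightly from bwlabel's pixel count; if you need the exact pixel count, count the non-zero pixels of each component instead.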

Related

Refining Camera parameters and calculating errors - OpenCV

I've been trying to refine my camera parameters with CvLevMarq, but after reading about it, it seems to give mixed results - which is exactly what I am experiencing. I read about the alternatives and came upon Eigen - and also found this library that utilizes it.
However, the library above seems to use a stitching class that doesn't support OpenCV and will probably require me to port it to OpenCV.
Before going ahead and doing so, which will probably not be an easy task, I figured I'd ask around first and see if anyone else had the same problem?
I'm currently using:
1. Calculating features with FastFeatureDetector
Ptr<FeatureDetector> detector = new FastFeatureDetector(5,true);
detector->detect(firstGreyImage, features_global[firstImageIndex].keypoints); // Previous picture
detector->detect(secondGreyImage, features_global[secondImageIndex].keypoints); // New picture
2. Extracting features with SiftDescriptorExtractor
Ptr<SiftDescriptorExtractor> extractor = new SiftDescriptorExtractor();
extractor->compute(firstGreyImage, features_global[firstImageIndex].keypoints, features_global[firstImageIndex].descriptors); // Previous Picture
extractor->compute(secondGreyImage, features_global[secondImageIndex].keypoints, features_global[secondImageIndex].descriptors); // New Picture
3. Matching features with BestOf2NearestMatcher
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_use_gpu, 0.50f);
matcher(features_global, pairwise_matches);
matcher.collectGarbage();
4. CameraParams.R quaternion passed from a device (slightly inaccurate which causes the issue)
5. CameraParams.Focal == 389.0f -- Played around with this value, 389.0f is the only value that matches the images horizontally but not vertically.
6. Bundle Adjustment (cvLevMarq, calcError & calcJacobian)
Ptr<BPRefiner> adjuster = new BPRefiner();
adjuster->setConfThresh(0.80f);
adjuster->setMaxIterations(5);
(*adjuster)(features,pairwise_matches,cameras);
7. ExposureCompensator (GAIN)
8. OpenCV MultiBand Blender
What works so far:
SeamFinder - works to some extent, but it depends on the result of the cvLevMarq algorithm. I.e. if the algorithm is off, SeamFinder is going to be off too.
HomographyBasedEstimator works beautifully. However, since it "relies" on the features, it's unfortunately not the method that I'm looking for.
I wouldn't want to rely on the features since I already have the matrix, if there's a way to "refine" the current matrix instead - then that would be the targeted result.
Results so far:
cvLevMarq "Russian roulette" 6/10:
This is what I'm trying to achieve 10/10 times. But 4/10 times, it looks like the picture below this one.
By simply just re-running the algorithm, the results change. 4/10 times it looks like this (or worse):
cvLevMarq "Russian roulette" 4/10:
Desired Result:
I'd like to "refine" my camera parameters with the features that I've matched - in hope that the images would align perfectly. Instead of hoping that cvLevMarq will do the job for me (which it won't 4/10 times), is there another way to ensure that the images will be aligned?
Update:
I've tried these versions:
OpenCV 3.1: Using CvLevMarq with 3.1 is like playing Russian roulette. Sometimes it can align the images perfectly, and other times it estimates the focal length as NaN, which causes a segfault in the MultiBand Blender (ROI = 0,0,1,1 because of the NaN).
OpenCV 2.4.9/2.4.13: Using CvLevMarq with 2.4.9 or 2.4.13 is unfortunately the same thing, minus the NaN issue. 6/10 times it can align the images perfectly, but the other 4 times it's completely off.
My Speculations / Thoughts:
Template matching using OpenCV. Maybe I could template match the ends of the images (i.e. x = 0, y = 0, height = image.height, width = 50)? Any thoughts about this?
I found this interesting paper about Levenberg Marquardt applied in Homography. That looks like something that could solve my problem since the paper uses corner detection and whatnot to detect the features in the images. Any thoughts about this?
Maybe the problem isn't in CvLevMarq but instead in BestOf2NearestMatcher? However, I've searched for days and I couldn't find another method that returns the pairwise matches to pass to BPRefiner.
Hough Line Transform. Detecting the lines in the first/second image and using them to align the images. Any thoughts on this? -- One concern might be: what if the images don't have any lines, i.e. an empty wall?
Maybe I'm overcomplicating something simple... or maybe I'm not? Basically, I'm trying to align a set of images so I can warp them without overlapping each other. Drop a comment if it doesn't make sense :)
Update Aug 12:
After trying all kinds of combinations, the absolute best so far is CvLevMarq. The only problem with it is the mixed results shown in the images above. If anyone has any input, I'd be forever grateful.
It seems your parameter initialization is the problem. I would use a linear estimator first, i.e. ignore your noisy sensor, and then use that as the initial value for the non-linear optimizer.
A quick method is to use getAffineTransform, as you have mostly rotation.
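A rough C++ sketch of that idea, assuming you already have the keypoints and the DMatch list from the matching step (e.g. from pairwise_matches[i].matches). It uses cv::estimateRigidTransform rather than getAffineTransform so that all matches can be used instead of exactly three point pairs:
#include <opencv2/opencv.hpp>
#include <vector>

// Fit a 2x3 rotation + uniform scale + translation matrix to the matched
// keypoints, to be used as a linear initial estimate for the refinement.
cv::Mat linearInitialEstimate(const std::vector<cv::KeyPoint>& kpts1,
                              const std::vector<cv::KeyPoint>& kpts2,
                              const std::vector<cv::DMatch>& matches)
{
    std::vector<cv::Point2f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); ++i) {
        pts1.push_back(kpts1[matches[i].queryIdx].pt);
        pts2.push_back(kpts2[matches[i].trainIdx].pt);
    }
    // fullAffine = false restricts the model to rotation, uniform scale and
    // translation, matching the "mostly rotation" assumption.
    return cv::estimateRigidTransform(pts1, pts2, false); // empty Mat on failure
}
The resulting matrix (or a homography from cv::findHomography with RANSAC) can then seed the camera parameters instead of the noisy sensor quaternion.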
Maybe you want to take a look at this library: https://github.com/ethz-asl/kalibr.
Cheers
If you want to stitch the images, you should see stitching_detailed.cpp. It will probably solve your problem.
In addition, I have used Graph Cut Seam Finding method with Canny Edge Detection for better stitching results in this code. If you want to optimize this code, see here.
Also, if you are going to use it for personal use, SIFT is fine. But you should know that SIFT is patented and will cost you if you use it for commercial purposes. Use ORB instead.
Hope it helps!

GridAdaptedFeatureDetector disappeared from OpenCV 3.1?

I'm working on an algorithm that should recognize an object from an image in a video file. For now, I want to use ORB (I know that SURF and SIFT are better at this kind of job, but I want to base that claim on my own results). Now I have one problem: when I run my program, in one of the images the keypoints are detected in a different area than in the other image, and it hardly finds any matches. Now, in OpenCV 2.4 there was GridAdaptedFeatureDetector, a class that allows you to partition the source image into a grid and detect points in each cell. But I'm using OpenCV 3.1 (Visual Studio 2015) and it seems to have disappeared. Please help me find a solution.
They removed a lot of the adapter feature detectors/extractors in OpenCV 3.1.
One way to get them back is to copy them into your project from OpenCV 2.4. It worked for me with OpponentSiftDescriptor. You will need to fix the interfaces, because they moved from the separate DescriptorExtractor and FeatureDetector interfaces to Feature2D. You can copy the code from here: https://github.com/kipr/opencv/blob/master/modules/features2d/src/detectors.cpp
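If you would rather not carry the 2.4 code over, the same behaviour is easy to approximate by hand: split the image into a grid, run the detector on each cell, and shift the resulting keypoints back into full-image coordinates. A minimal sketch for OpenCV 3.x with ORB (the grid size and per-cell keypoint budget are arbitrary choices):
#include <opencv2/opencv.hpp>
#include <vector>

// Detect ORB keypoints per grid cell so no region of the image is left empty.
std::vector<cv::KeyPoint> detectOnGrid(const cv::Mat& gray, int rows, int cols)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(100); // ~100 keypoints per cell
    std::vector<cv::KeyPoint> all;
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            int x = c * gray.cols / cols, y = r * gray.rows / rows;
            int w = (c + 1) * gray.cols / cols - x;
            int h = (r + 1) * gray.rows / rows - y;
            std::vector<cv::KeyPoint> cell;
            orb->detect(gray(cv::Rect(x, y, w, h)), cell);
            for (size_t i = 0; i < cell.size(); ++i) {
                cell[i].pt.x += x;               // shift back to image coordinates
                cell[i].pt.y += y;
                all.push_back(cell[i]);
            }
        }
    }
    return all;
}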
This is in Python, so it may or may not be useful (I found this question when looking for a Python solution, so hopefully someone else does too...), but this is what I used to iterate over sub-blocks of the image:
import numpy as np

def blocks(img, rows, cols):
    """Yield the sub-blocks of img laid out on a rows x cols grid."""
    h, w = img.shape[:2]
    xs = np.uint32(np.rint(np.linspace(0, w, num=cols + 1)))
    ys = np.uint32(np.rint(np.linspace(0, h, num=rows + 1)))
    ystarts, yends = ys[:-1], ys[1:]
    xstarts, xends = xs[:-1], xs[1:]
    for y1, y2 in zip(ystarts, yends):
        for x1, x2 in zip(xstarts, xends):
            yield img[y1:y2, x1:x2]
There is a recent paper that tackles the problem of homogeneous keypoint distribution on the image. C++, Python, and Matlab interfaces are provided in this repository.

How to create a depth map from PointGrey BumbleBee2 stereo camera using Triclops and FlyCapture SDKs?

I've got the BumbleBee 2 stereo camera and two mentioned SDKs.
I've managed to capture video from it in my program, rectify the stereo images and get a disparity map. The next thing I'd like to have is a depth map similar to the one the Kinect gives.
The Triclops documentation is rather short; it only lists the functions, without a typical workflow description. The workflow is described only in the examples.
Up to now I've found 2 relevant functions: family of triclopsRCDxxToXYZ() functions and triclopsExtractImage3d() function.
Functions from the first family calculate the x, y and z coordinates for a single pixel. The z coordinate perfectly corresponds to the depth in meters. However, to use these functions I have to write two nested loops, as shown in the stereo3dpoints example. That gives too much overhead, because each call returns two more coordinates than I need.
The second function, triclopsExtractImage3d(), always returns the error TriclopsErrorInvalidParameter. The documentation says only that "there is a geometry mismatch between the context and the TriclopsImage3d", which is not clear to me.
The examples in the Triclops 3.3.1 SDK do not show how to use it. Google brings up an example from Triclops SDK 3.2, which is absent in 3.3.1.
I've tried adding lines 253-273 from the link above to the current stereo3dpoints example - and got that error.
Does anyone have an experience with it?
Is it valid to use triclopsExtractImage3d() or is it obsolete?
I also tried plotting the values of disparity vs. z obtained from triclopsRCDxxToXYZ().
The plot shows an almost exact inverse proportionality, that is z = k / disparity. But k is not constant across the image; it varies from approximately 2.5e-5 to 1.4e-3, i.e. by two orders of magnitude. Therefore, it is incorrect to calculate this value once and use it forever.
Maybe it is a bit too late and you have already figured it out yourself, but:
To use triclopsExtractImage3d you have to create a TriclopsImage3d first:
TriclopsImage3d *depthImage;
triclopsCreateImage3d(triclopsContext, &depthImage);  // allocate an image matching the context geometry
triclopsExtractImage3d(triclopsContext, depthImage);  // fill it with per-pixel XYZ data
// ... use depthImage here ...
triclopsDestroyImage3d(&depthImage);                  // release it when done
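As for the z = k / disparity observation above: for an ideally rectified pair the textbook relation is z = f * B / disparity (focal length in pixels times baseline), so k would simply be the constant f * B. The fact that the values coming out of triclopsRCDxxToXYZ() imply a pixel-dependent k suggests those functions also apply per-pixel corrections, so treating k as a single constant is indeed not a safe shortcut - this is only an educated guess, though.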

Edge Detection, Matlab Vision System Toolbox

I have several images in which I need to find an edge. I have tried the vision.EdgeDetector System object in Matlab, following the example they give here: http://www.mathworks.com/help/vision/ref/vision.edgedetectorclass.html
They give the example:
hedge = vision.EdgeDetector;
hcsc = vision.ColorSpaceConverter('Conversion', 'RGB to intensity');
hidtypeconv = vision.ImageDataTypeConverter('OutputDataType', 'single');
img = step(hcsc, imread('picture.png'));
img1 = step(hidtypeconv, img);
edges = step(hedge, img1);
imshow(edges);
Which I have followed exactly in my code.
However, this code doesn't produce all the edges I would like; it seems as though Matlab only picks up about half of the edges in the entire image. Is there a different approach I can take to finding all the edges, or a way to improve upon the vision.EdgeDetector object in Matlab?
By default, hedge = vision.EdgeDetector has a Threshold value of 20. Try changing it to hedge = vision.EdgeDetector('Threshold', Value) and play with Value to see what works best for you.
Try:
imgGray = rgb2gray(imgRGB);
imgEdge = edge(imgGray,'canny');
This should give you most of the edge points; if not, change the THRESH and SIGMA parameters accordingly. Also check the following for other methods:
help edge
You do not have to use the vision.EdgeDetector System object; some things are easier without it! ;)

Can findContour in OpenCV work like bwlabel in Matlab?

Some people on this Q&A site suggested I use findContour to imitate what bwlabel does in Matlab. But I am not sure, because I think a contour is a closed shape of detected edges, while an element from bwlabel is a connected region. I guess they might be logically the same. What about them in practice? Are they really the same?
Use either of these two libraries, cvBlobsLib or cvblob. You will get many features of the connected components, such as size, contour, ellipticity and bounding box; you can filter blobs and merge two or more blobs together. Try it. Under the hood, the algorithm of bwlabel is a two-scan connected-component labelling, whereas cvblob and cvBlobsLib use a one-scan algorithm.
bwlabel will give you the image's connected components, i.e. a different label for each connected object against the background.
Probably what you mean is what the combination of im2bw and imcontour provides, i.e. binarizing the image and then trivially finding the single contour (boundary) per retained object in the output.
Consider the following example:
I = imread('coins.png'); % grayscale
level = graythresh(I); % find threshold
BW = im2bw(I, level); % threshold image
imcontour(BW, 1); % plot single contour
For a grayscale image you can increase the number of requested contour levels, though OpenCV's findContours operates on binary images.
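For comparison, a rough OpenCV/C++ counterpart of that Matlab snippet (Otsu thresholding standing in for graythresh; coins.png is just the same example image):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat I = cv::imread("coins.png", cv::IMREAD_GRAYSCALE);           // grayscale
    cv::Mat BW;
    cv::threshold(I, BW, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);   // find threshold and binarize
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(BW.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    cv::Mat canvas = cv::Mat::zeros(I.size(), CV_8UC3);
    cv::drawContours(canvas, contours, -1, cv::Scalar(0, 255, 0));       // plot the contours
    cv::imshow("contours", canvas);
    cv::waitKey(0);
    return 0;
}
Like imcontour(BW, 1), this draws one outer boundary per object; RETR_EXTERNAL skips the holes.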
I found an article exactly about this. The quick answer is "Yeah, their eventual output will be the same." So I might go with findContour after all, considering that cvBlob still uses the old C-style API and has its own implementation of contour finding.