SIFT, HOG and SURF c++, opencv - c++

I have a simple question: what libraries are available and give good results for implementing SIFT, HOG (Histogram of Oriented Gradients) and SURF in C++ or OpenCV?
Specifically: 1. If you can, please give me a link to the code; I would really appreciate it.
2. If you know of any of them, or any other information that would lead me to what I want, I would appreciate that as well.
Thanks

Check these:

SURF
- great article: http://people.csail.mit.edu/kapu/papers/mar_mir08.pdf

SIFT
- great source, I tried it on the iPhone: http://blogs.oregonstate.edu/hess/

FAST
- fast corner detection library: http://svr-www.eng.cam.ac.uk/~er258/work/fast.html

Example of SURF code in OpenCV:
https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/matching_to_many_images.cpp

Not sure if this is still relevant, but OpenCV also ships two implementations for computing HOG descriptors, i.e. both GPU and CPU versions of the HOG code.
For the CPU version you can check this blog post; however, with the CPU version you would need to write your own logic for sliding windows.
The GPU version is fairly straightforward; you can read the documentation here.
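As a rough illustration of the CPU path, here is a minimal sketch that computes a single HOG descriptor for one 64x128 window with the default HOGDescriptor parameters (the image path is just a placeholder; the sliding-window loop over a full image is left to you):

```cpp
#include <opencv2/objdetect.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main() {
    // Load a grayscale patch; the path is a placeholder.
    cv::Mat img = cv::imread("patch.png", cv::IMREAD_GRAYSCALE);
    cv::resize(img, img, cv::Size(64, 128));   // default HOG detection window size

    cv::HOGDescriptor hog;                     // defaults: 64x128 window, 8x8 cells, 9 bins
    std::vector<float> descriptor;
    hog.compute(img, descriptor);              // one descriptor for this window

    // For whole-image detection you would slide this window yourself,
    // or use hog.detectMultiScale() with a trained SVM instead.
    return descriptor.empty() ? 1 : 0;
}
```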

It might help you to know that SIFT and SURF implementations are already integrated into OpenCV.
http://opencv.willowgarage.com/documentation/cpp/features2d__feature_detection_and_descriptor_extraction.html
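For instance, with the current features2d/xfeatures2d API, extracting SURF keypoints and descriptors looks roughly like this (a minimal sketch; it assumes OpenCV was built with the opencv_contrib modules, where SURF now lives, and the Hessian threshold of 400 is just a common starting point):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>   // SURF lives in opencv_contrib
#include <vector>

int main() {
    cv::Mat img = cv::imread("scene.jpg", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400);

    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    surf->detectAndCompute(img, cv::noArray(), keypoints, descriptors);

    // keypoints holds the interest points; descriptors has one row per keypoint.
    return 0;
}
```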

Be careful with the OpenCV implementations, because recent versions of OpenCV have moved the SIFT and SURF implementations into the nonfree module http://docs.opencv.org/modules/nonfree/doc/nonfree.html.
You can still use them, but they are patented algorithms, so they are subject to licensing and may not be usable in commercial solutions.

This one uses descriptors based on HoG, Sobel and Lab channels for detection: Class-Specific Hough Forests for Object Detection (OpenCV/C source code).
Rather than performing detection at every possible location, this approach computes a vote for each descriptor; put together, the votes produce a voting cloud whose maximum corresponds to the most probable location of the target. Combined with cvGoodFeaturesToTrack, it can produce very good results, even with a small training database.

Related

Stitching images can't detect common feature points

I wish to stitch two or more images using OpenCV and C++. The images have regions of overlap, but common feature points are not being detected there. I tried using the homography detector. Can someone please suggest what other methods I should use? Also, I wish to use the ORB algorithm, not SIFT or SURF.
The images can be found at-
https://drive.google.com/open?id=133Nbo46bgwt7Q4IT2RDuPVR67TX9xG6F
This is a very common problem, because images like these do not actually have much in common: the overlap region is not rich in features. What you can do is dig into the OpenCV stitcher code, where a confidence factor is used for feature matching; you can play with that confidence factor to get matches in this case. But this will only work if your feature detector is able to detect some features in the overlapping region.
You can also look at this post:
Related Question
It might be helpful for you.
"OpenCV stitching code"
This is the full pipeline of the OpenCV stitching code. You can see that there are a lot of parameters you can change to make your code give a good stitching result. Also, I would suggest using a small image (640x480) for the feature detection step; using small images works better than using very large ones.
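A minimal sketch of that idea with the high-level Stitcher API, swapping in ORB and lowering the confidence threshold (this assumes OpenCV 4.x, where setFeaturesFinder takes a Feature2D; the ORB keypoint count and the 0.3 threshold are just starting points to play with):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/stitching.hpp>
#include <vector>

int main() {
    std::vector<cv::Mat> imgs = {
        cv::imread("left.jpg"), cv::imread("right.jpg")
    };

    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    stitcher->setFeaturesFinder(cv::ORB::create(3000));  // more keypoints than the default 500
    stitcher->setPanoConfidenceThresh(0.3);              // lower than the default 1.0 to accept weak matches

    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(imgs, pano);
    if (status != cv::Stitcher::OK)
        return 1;   // still not enough matches in the overlap region

    cv::imwrite("pano.jpg", pano);
    return 0;
}
```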

dlib vs opencv which one to use when

I am currently learning the OpenCV API with Python and it's all good. I am making decent progress. Part of that comes from the simplicity of Python's syntax, as opposed to using it with C++, which I haven't attempted yet. I have come to realize that I will have to get my hands dirty with the C++ bindings for OpenCV at some point if I intend to do anything production quality.
Just recently I came across dlib, which also claims to do all the things OpenCV does and more. It's written in C++ and offers a Python API too (surprise). Can anybody vouch for dlib based on their own implementation experience?
I have used both OpenCV and dlib extensively for face detection and face recognition, and dlib is much more accurate than the OpenCV Haar-based face detector. (Note that OpenCV now has a DNN module which provides Deep Learning based face detector and face recognizer models.)
I'm in the middle of comparing OpenCV-DNN vs dlib for face detection/recognition; I will post the results once I'm done with it.
There are many useful functions available in dlib, but I prefer OpenCV for any other CV tasks.
EDIT : As promised, I have made a detailed comparison of OpenCV vs Dlib Face Detection methods.
Here is my conclusion :
General Case
In most applications, we won't know the size of the face in the image beforehand. Thus, it is better to use the OpenCV-DNN method, as it is pretty fast and very accurate, even for small faces. It also detects faces at various angles. We recommend using OpenCV-DNN in most cases.
For medium to large image sizes
Dlib's HoG detector is the fastest method on CPU, but it does not detect small faces (< 70x70). So, if you know that your application will not be dealing with very small faces (for example, a selfie app), the HoG-based face detector is a better option. Also, if you can use a GPU, the MMOD face detector is the best option, as it is very fast on a GPU and also provides detection at various angles.
For more details, you can have a look at this blog
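To give a flavour of the OpenCV-DNN detector mentioned above, here is a minimal sketch. It assumes the ResNet-10 SSD model files distributed with the OpenCV face_detector sample (deploy.prototxt and res10_300x300_ssd_iter_140000.caffemodel); the 0.5 confidence threshold is an assumption you should tune:

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main() {
    cv::dnn::Net net = cv::dnn::readNetFromCaffe(
        "deploy.prototxt", "res10_300x300_ssd_iter_140000.caffemodel");
    cv::Mat img = cv::imread("people.jpg");

    // The model expects 300x300 BGR input with these mean values subtracted.
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(300, 300),
                                          cv::Scalar(104.0, 177.0, 123.0));
    net.setInput(blob);
    cv::Mat out = net.forward();   // 4D blob: 1 x 1 x N x 7

    cv::Mat det(out.size[2], out.size[3], CV_32F, out.ptr<float>());
    for (int i = 0; i < det.rows; ++i) {
        float confidence = det.at<float>(i, 2);
        if (confidence < 0.5f) continue;   // assumed threshold
        int x1 = static_cast<int>(det.at<float>(i, 3) * img.cols);
        int y1 = static_cast<int>(det.at<float>(i, 4) * img.rows);
        int x2 = static_cast<int>(det.at<float>(i, 5) * img.cols);
        int y2 = static_cast<int>(det.at<float>(i, 6) * img.rows);
        cv::rectangle(img, cv::Point(x1, y1), cv::Point(x2, y2),
                      cv::Scalar(0, 255, 0), 2);
    }
    cv::imwrite("faces.jpg", img);
    return 0;
}
```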

FFT based image registration (optionally using OpenCV) in cpp?

I'm trying to align two images taken from a handheld camera.
At first, I was trying to use the OpenCV warpPerspective method based on SIFT/SURF feature points. The problem is that the feature extraction & matching process may be extremely slow when the image resolution is high (3000x4000). I tried scaling down the images before finding feature points, but the result is not as good as before. (The Mat generated from findHomography shouldn't be affected by scaling down the image, right?) And sometimes, due to a lack of good feature point matches, the result is quite strange.
After searching on this topic, it seems that solving the problem in Fourier domain will speed up the registration process. And I've found this question which leads me to the code here.
The only problem is that the code is written in Python with NumPy (not even using OpenCV), which makes it quite hard to rewrite as C++ code using OpenCV (in OpenCV I can only find dft; there's no fftshift or other fft helpers, I'm not very familiar with NumPy, and I'm not brave enough to simply ignore the missing methods). So I'm wondering why there isn't such a Fourier-domain image registration implementation in C++?
Can you give me some suggestions on how to implement one, give me a link to an existing C++ implementation, or help me turn the Python code into C++?
Big thanks!
I'm fairly certain that the FFT method can only recover a similarity transform, that is, only a (2d) rotation, translation and scale. Your results might not be that great using a handheld camera.
This is not quite a direct answer to your question, but, as a suggestion for a speed improvement, have you tried using a faster feature detector and descriptor? In OpenCV SIFT/SURF are some of the slowest methods they have for feature extraction/matching. You could try testing some of their other methods first, they all work quite well and are faster than SIFT/SURF. Especially if you use their FLANN-based matcher.
I've had to do this in the past with similar sized imagery, and using the binary descriptors OpenCV has increases the speed significantly.
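For example, swapping in ORB (a binary descriptor) with a Hamming-distance brute-force matcher, followed by the same findHomography/warpPerspective step, looks roughly like this (a minimal sketch; the 0.75 ratio-test threshold and the 5000 keypoint budget are just common choices):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main() {
    cv::Mat img1 = cv::imread("a.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("b.jpg", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::ORB> orb = cv::ORB::create(5000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat des1, des2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, des1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, des2);

    // Hamming distance for binary descriptors, with a ratio test to drop ambiguous matches.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(des1, des2, knn, 2);

    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : knn) {
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
            pts1.push_back(kp1[m[0].queryIdx].pt);
            pts2.push_back(kp2[m[0].trainIdx].pt);
        }
    }

    cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC);
    cv::Mat aligned;
    cv::warpPerspective(img1, aligned, H, img2.size());
    cv::imwrite("aligned.jpg", aligned);
    return 0;
}
```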
If you need only a shift, you can use OpenCV's phaseCorrelate.
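A minimal sketch of that approach (phaseCorrelate works on single-channel float images of the same size; the Hanning window is optional but usually helps):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    cv::Mat a = cv::imread("a.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("b.jpg", cv::IMREAD_GRAYSCALE);

    cv::Mat a32, b32, window;
    a.convertTo(a32, CV_32F);
    b.convertTo(b32, CV_32F);
    cv::createHanningWindow(window, a.size(), CV_32F);

    // Returns the (sub-pixel) translation of b relative to a.
    cv::Point2d shift = cv::phaseCorrelate(a32, b32, window);
    std::cout << "shift: " << shift.x << ", " << shift.y << std::endl;
    return 0;
}
```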

Best stereo correspondence algorithm in opencv

Well, I have a stereo setup that computes the disparity of stereo image pairs using the SGBM (semi-global block matching), BM (block matching) and variational matching algorithms from the OpenCV library. But the disparities are not as good as the ground truth disparities.
All I want to know is whether OpenCV provides any function or program that could compute the ground truth disparity. Papers such as "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms" by Daniel Scharstein and Richard Szeliski say that the belief propagation algorithm is the best stereo correspondence algorithm.
Is there any existing code in OpenCV that computes disparities using the graph-cut or belief propagation algorithms?
I don't think there is in OpenCV, but you do have alternatives. There is C++ code available, and it wouldn't be hard to make it interact with OpenCV:
- The Middlebury stereo website includes graph cut and belief propagation code for stereo.
- There is also graph cut code from the University of Western Ontario which is really good.
I think the Semi-Global Block Matching algorithm by Hirschmüller is one of the best stereo correspondence algorithms.
This algorithm is provided in the OpenCV library.
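For reference, a minimal sketch of using it (the parameter values below are just typical starting points; numDisparities must be a multiple of 16, and the returned disparity is fixed-point, scaled by 16):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/calib3d.hpp>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    int blockSize = 5;
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(
        0,                            // minDisparity
        64,                           // numDisparities (multiple of 16)
        blockSize,
        8  * blockSize * blockSize,   // P1: penalty for small disparity changes
        32 * blockSize * blockSize,   // P2: penalty for larger disparity changes
        1, 63, 10, 100, 32,           // disp12MaxDiff, preFilterCap, uniqueness, speckle window/range
        cv::StereoSGBM::MODE_SGBM);

    cv::Mat disp16, disp;
    sgbm->compute(left, right, disp16);        // 16-bit fixed-point disparity
    disp16.convertTo(disp, CV_32F, 1.0 / 16);  // convert to real disparity values
    return 0;
}
```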
The OpenCV implementation of belief propagation is only offered for the GPU (CUDA and OpenCL), not as a CPU implementation. They also have the constant-space variant of belief propagation.
The Middlebury website keeps an updated state of the art on these algorithms, so keep an eye out for new entries.
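A minimal sketch of the CUDA path (assuming OpenCV was built with the cudastereo module; the parameter values are simply the documented defaults):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/cudastereo.hpp>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    cv::cuda::GpuMat d_left(left), d_right(right), d_disp;

    // 64 disparities, 5 iterations, 5 pyramid levels.
    cv::Ptr<cv::cuda::StereoBeliefPropagation> bp =
        cv::cuda::createStereoBeliefPropagation(64, 5, 5);
    bp->compute(d_left, d_right, d_disp);

    cv::Mat disp;
    d_disp.download(disp);   // disparity map back on the CPU
    return 0;
}
```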
Yeah, TSGO is not open-sourced, and its article ("Accurate Stereo Matching by Two-Step Energy Minimization") is not free either. Has anyone evaluated it?
There is an OpenCV-based implementation of the graph cut one, and it seems quite good; it deserves a try. http://daily-tech.hatenablog.com/entry/2016/06/25/233203

High Quality Image Magnification on GPU

I'm looking for interesting algorithms for image magnification that can be implemented on a GPU for real-time scaling of video. Linear and bicubic interpolation algorithms are not good enough.
Suggestions?
Here are some papers I've found; I'm unsure about their suitability for GPU implementation.
Adaptive Interpolation
Level Set
I've seen some demos of scaling on the Cell processor used in TVs which had some impressive results; no link, unfortunately.
Lanczos3 is a very nice interpolation algorithm (you can test it in GIMP or VirtualDub). It generally performs better than cubic interpolation and can be parallelized.
A GPU-based version is implemented in Chromium:
http://code.google.com/p/chromium/issues/detail?id=47447
Check out the Chromium source code.
It may still be too slow for real-time video processing, but it may be worth trying if you don't use too high a resolution.
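If you just want to try Lanczos resampling first, OpenCV exposes a Lanczos-4 variant through cv::resize (a minimal sketch; note this is the CPU path, not a GPU implementation):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main() {
    cv::Mat src = cv::imread("frame.png");

    // 2x magnification with Lanczos resampling over an 8x8 neighbourhood.
    cv::Mat dst;
    cv::resize(src, dst, cv::Size(), 2.0, 2.0, cv::INTER_LANCZOS4);

    cv::imwrite("frame_2x.png", dst);
    return 0;
}
```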
You may also want to try out CUVI Lib which offers a good set of GPU acceleration Image Processing algorithms. Find about it on: http://www.cuvilib.com
Disclosure: I am part of the team that developed CUVI.
Still slightly a 'work in progress', but GpuCV is a drop-in replacement for the OpenCV image processing functions, implemented in OpenCL on a GPU.
Prefiltered cubic b-spline interpolation delivers good results (you can have a look here for some theoretical background).
CUDA source code can be downloaded here.
WebGL examples can be found here.
edit: The cubic interpolation code is now available on github: CUDA version and WebGL version.
You may want to have a look at Super Resolution Algorithms. Starting Point on CiteseerX