I am currently attempting to use OpenCV 2.3 and C++ to detect a hand (wearing a green glove), and distinguish between different hand gestures.
At this very moment, my next step is to acquire specific features of the hand (convex defects).
So far I've used these functions in my process:
mixChannels(); //to subtract non-green channels from green.
threshold(); //to convert to binary
erode(); dilate(); //to get rid of any excess noise
findContours(); //to find contours of course
These have worked splendidly, and I have been able to output the findContours() results through drawContours().
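For reference, the pipeline described above might look roughly like this. This is a minimal sketch, not the asker's actual code: the filename and threshold value are placeholders, and split() is used in place of mixChannels() to isolate the green channel.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat frame = cv::imread("hand.png");      // BGR frame with the green glove
    // Keep only strongly green pixels: G minus the larger of B and R.
    std::vector<cv::Mat> bgr;
    cv::split(frame, bgr);
    cv::Mat green = bgr[1] - cv::max(bgr[0], bgr[2]);
    // Binarize, then remove speckle noise with erode/dilate.
    cv::Mat mask;
    cv::threshold(green, mask, 40, 255, CV_THRESH_BINARY);
    cv::erode(mask, mask, cv::Mat());
    cv::dilate(mask, mask, cv::Mat());
    // Find and draw the outer contours.
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    cv::drawContours(frame, contours, -1, cv::Scalar(0, 0, 255), 2);
    return 0;
}
```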
Next step, and this is where I'm at, is using convexHull(), which also works in OpenCV 2.3. I have, however, yet to find out what the vector results of convexHull() actually look like (what features they contain).
But this is where the tricky part comes.
I found that the older version of OpenCV (the C interface, which uses IplImage) has a neat little function called cvConvexityDefects(), which can give a set of deficiencies on the convex hull. These are what I need, but there seems to be no such function in OpenCV 2.3, and I don't see how I can use the old syntax to get these results.
Here's a link to the OpenCV documentation on cvConvexityDefects.
What I'm asking for is either a similar OpenCV 2.3 function, a self-written piece of code or algorithm for finding these defects, or a way to use the old 2.1 syntax to get a vector result or something like that.
(I know that I can use other features, such as rectangular bounding boxes and fitted circles, but I'm sure that convexity defects yield the most distinguishable features.)
Solution - I ended up using a C++ wrapper from this post.
The only thing not working with this wrapper seems to be a leak of the defects vector, which should be easy to solve.
Next step is getting some usable data from these defects. (At first glance, the data seems to be single points on the convex hull or the contour, or a count of these. I had at first expected a set of two points, or a single point and a length, which it does not seem to be. If I hit a brick wall with this, I'll make another post.)
The new C++ interface does not (yet) wrap all the functions from the C API, and the opposite is also true (not everything in C++ exists in C). The reasons vary, but the good news is that you can still use whichever function you want. For example, here you have to convert your contours to sequences (CvSeq) and pass them to the C function.
Moreover, the findContours method is a wrapper over cvFindContours. You can call it as
IplImage img = matImage;  // cv::Mat converts to an IplImage header
cvFindContours(&img, ...);
and then use the result directly.
Another way would be to create a nice, clean C++ wrapper over cvConvexityDefects() and submit it to OpenCV. In the findContours source you will find some help for that (the opposite transformation).
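Such a wrapper might look roughly like the sketch below. It is untested against 2.3 and the names are mine, but the trick of wrapping the existing vectors in CvMat headers (instead of building a real CvSeq) is the usual way to feed C++ contours to cvConvexityDefects(). Note that each CvConvexityDefect holds pointers to the start and end points of the hull edge plus the deepest contour point between them and a depth value, so the struct members are copied out before the storage is released:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc_c.h>
#include <vector>

// Plain copy of a defect, so nothing points into released storage.
struct Defect {
    cv::Point start, end, depthPoint;
    float depth;
};

// contour: one contour from findContours();
// hullIdx: indices from convexHull(contour, hullIdx, false, false).
std::vector<Defect> convexityDefects(const std::vector<cv::Point>& contour,
                                     const std::vector<int>& hullIdx)
{
    std::vector<Defect> result;
    // Wrap the existing vectors in CvMat headers; cvConvexityDefects accepts
    // a CvArr* of points and a CvArr* of hull *indices* (not hull points).
    CvMat contourMat = cvMat(1, (int)contour.size(), CV_32SC2, (void*)&contour[0]);
    CvMat hullMat    = cvMat(1, (int)hullIdx.size(), CV_32SC1, (void*)&hullIdx[0]);
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* seq = cvConvexityDefects(&contourMat, &hullMat, storage);
    for (int i = 0; i < seq->total; ++i) {
        CvConvexityDefect* d = (CvConvexityDefect*)cvGetSeqElem(seq, i);
        Defect out = { cv::Point(d->start->x, d->start->y),
                       cv::Point(d->end->x, d->end->y),
                       cv::Point(d->depth_point->x, d->depth_point->y),
                       d->depth };
        result.push_back(out);
    }
    cvReleaseMemStorage(&storage);
    return result;
}
```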
I would like to attempt to convert my convexHull vector from the c++ syntax to the sequence which you mentioned, but I don't know where to start. Could you perhaps shed some light on this?
Check out this question here; I believe it goes into it.
Convexity defects C++ OpenCv
Related
I'm having trouble successfully tracking a user-selected feature point using the KLT tracking algorithm and header files. I'm programming in C++ with VS Express 2010, and am trying to do this without OpenCV or other libraries. Here are the steps I'm taking to track each point, one at a time:
Make a tracking context
Make a feature list
Adjust feature list count (to one), set x and y position
Fill two image data containers
Call KLTTrackFeatures() with appropriate arguments
Take new x and y values from feature list, and insert back into custom data structure
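The steps above can be sketched as follows, assuming Birchfield's standalone KLT library ("klt.h") is the one in use; the function names come from that library, but the wrapper function and its parameters are mine. One thing worth checking: a coordinate of -1 usually means the `val` field of the feature went negative, which encodes why the track was lost (out of bounds, residual too large, etc.).

```c
#include "klt.h"

/* Sketch: track a single user-selected point between two grayscale frames.
 * img1/img2 are row-major unsigned char buffers of size ncols x nrows. */
void track_one_point(unsigned char* img1, unsigned char* img2,
                     int ncols, int nrows, float* x, float* y, int* ok)
{
    KLT_TrackingContext tc = KLTCreateTrackingContext();
    KLT_FeatureList fl = KLTCreateFeatureList(1);

    fl->feature[0]->x = *x;      /* seed with the selected position */
    fl->feature[0]->y = *y;
    fl->feature[0]->val = 0;     /* mark the slot as a valid feature */

    KLTTrackFeatures(tc, img1, img2, ncols, nrows, fl);

    /* val < 0 encodes a failure reason (KLT_NOT_FOUND, KLT_OOB, ...);
     * only read back the coordinates if the track survived. */
    *ok = (fl->feature[0]->val >= 0);
    if (*ok) { *x = fl->feature[0]->x; *y = fl->feature[0]->y; }

    KLTFreeFeatureList(fl);
    KLTFreeTrackingContext(tc);
}
```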
That's pretty much it. I've fiddled with some attributes like borders, etc., to no avail. The results I get vary from coordinates marked as -1 to positions where the point shouldn't be. I've found it difficult to search for this one, because most searches turn up OpenCV-related topics that don't deal directly with KLT itself. Does anyone have any thoughts or suggestions on how to solve this?
Kind regards,
OJnr.
I'm trying to align two images taken from a handheld camera.
At first, I was trying to use the OpenCV warpPerspective method based on SIFT/SURF feature points. The problem is that feature extraction and matching can be extremely slow when the image resolution is high (3000x4000). I tried scaling down the images before finding feature points, but the result was not as good as before. (The Mat generated from findHomography shouldn't be affected by scaling down the image, right?) And sometimes, due to a lack of good feature-point matches, the result is quite strange.
After searching on this topic, it seems that solving the problem in Fourier domain will speed up the registration process. And I've found this question which leads me to the code here.
The only problem is that the code is written in Python with NumPy (not even using OpenCV), which makes it quite hard to rewrite as C++ code using OpenCV. (In OpenCV I can only find dft; there is no fftshift or fft, I'm not very familiar with NumPy, and I'm not brave enough to simply ignore the missing methods.) So I'm wondering why there isn't such a Fourier-domain image registration implementation in C++.
Can you guys give me some suggestion on how to implement one, or give me a link to the already implemented C++ version? Or help me to turn the python code into C++ code?
Big thanks!
I'm fairly certain that the FFT method can only recover a similarity transform, that is, only a (2D) rotation, translation, and scale. Your results might not be that great with a handheld camera.
This is not quite a direct answer to your question, but, as a suggestion for a speed improvement, have you tried using a faster feature detector and descriptor? In OpenCV, SIFT/SURF are among the slowest methods available for feature extraction/matching. You could try some of the other methods first; they all work quite well and are faster than SIFT/SURF, especially if you use the FLANN-based matcher.
I've had to do this in the past with similar sized imagery, and using the binary descriptors OpenCV has increases the speed significantly.
If you need only a shift, you can use OpenCV's phaseCorrelate.
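To illustrate what phase correlation computes (and what the NumPy code is doing with fft/fftshift), here is a dependency-free 1-D sketch: normalize the cross-power spectrum to unit magnitude, inverse-transform, and take the argmax. A naive O(n²) DFT is used for clarity; a real implementation would use cv::dft on 2-D images.

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Recover the circular shift between two 1-D signals by phase correlation.
// Returns d such that b is (circularly) a shifted right by d samples.
int phaseCorrelate1D(const std::vector<double>& a, const std::vector<double>& b) {
    const int n = (int)a.size();
    const double PI = 3.14159265358979323846;
    std::vector<std::complex<double> > A(n), B(n), R(n);
    for (int k = 0; k < n; ++k)               // forward DFT of both signals
        for (int t = 0; t < n; ++t) {
            std::complex<double> w = std::polar(1.0, -2 * PI * k * t / n);
            A[k] += a[t] * w;
            B[k] += b[t] * w;
        }
    for (int k = 0; k < n; ++k) {             // normalized cross-power spectrum
        std::complex<double> cps = B[k] * std::conj(A[k]);
        double m = std::abs(cps);
        R[k] = (m > 1e-12) ? cps / m : std::complex<double>(0, 0);
    }
    int best = 0; double bestVal = -1;
    for (int t = 0; t < n; ++t) {             // inverse DFT, track the peak
        std::complex<double> sum(0, 0);
        for (int k = 0; k < n; ++k)
            sum += R[k] * std::polar(1.0, 2 * PI * k * t / n);
        if (sum.real() > bestVal) { bestVal = sum.real(); best = t; }
    }
    return best;
}
```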
I have a really basic question about OpenCV and C++. I am trying to graph something in real time using OpenCV. I have been looking for a function to draw a graph in real time, but so far without success.
I need a function that takes two arrays as input, one for the x axis and one for the y axis. I tried this, but it does not seem to work in real time: http://www.shervinemami.info/graphs.html
I just need to know whether there is something available in OpenCV or not.
Thanks in advance.
OpenCV provides only low-level drawing primitives, so you have to look to other libraries to plot charts, or write the code yourself.
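Rolling your own plotting on top of those primitives mostly comes down to mapping data values to pixel coordinates. A sketch of that mapping, using a raw byte buffer in place of a cv::Mat canvas (the function name is mine; with OpenCV you would draw cv::line between consecutive points and call imshow each frame for real-time updates):

```cpp
#include <algorithm>
#include <vector>

// Map (x[i], y[i]) samples onto a w x h canvas and set those pixels.
void plot(const std::vector<double>& x, const std::vector<double>& y,
          std::vector<unsigned char>& canvas, int w, int h)
{
    canvas.assign((size_t)w * h, 0);
    if (x.empty() || x.size() != y.size()) return;
    double xmin = *std::min_element(x.begin(), x.end());
    double xmax = *std::max_element(x.begin(), x.end());
    double ymin = *std::min_element(y.begin(), y.end());
    double ymax = *std::max_element(y.begin(), y.end());
    double xs = (xmax > xmin) ? (w - 1) / (xmax - xmin) : 0;
    double ys = (ymax > ymin) ? (h - 1) / (ymax - ymin) : 0;
    for (size_t i = 0; i < x.size(); ++i) {
        int px = (int)((x[i] - xmin) * xs + 0.5);
        int py = (h - 1) - (int)((y[i] - ymin) * ys + 0.5);  // y axis points up
        canvas[(size_t)py * w + px] = 255;
    }
}
```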
I'm trying to use the BRISK implementation in OpenCV (for C++) in order to check whether an image (or part of an image) appears in a photo. For example, I take a photo and try to match it against a set of images in a database, and I would like to select the best corresponding image (or get an error message if none of the images is good enough).
So, I'm just testing OpenCV for the moment. I've simply taken the sample included in the framework (matching_to_many_images) and changed the detector and descriptor from SURF to BRISK.
However, I have weird results. These are the results of matching (BruteForce Hamming):
In the first one, the scenes are entirely different, but there are a lot of matches!
In the second one, the scenes are pretty similar, but some matches are wrong.
I think this is a parameter issue, because in the BRISK demo videos the results are significant.
Have you seen the OpenCV documentation for BRISK? I'm not sure what parameters you're using now, but you can specify the threshold and octaves, as well as the pattern. Documentation at
http://docs.opencv.org/modules/features2d/doc/feature_detection_and_description.html#brisk
Also, you could try a different feature-matching algorithm, although it appears that the BRISK paper also used Hamming distance.
Lastly, it's not too unexpected to have erroneous feature matches; try different scenes as well as different feature parameters and see how your results change.
There are commonly many incorrect initial matches when doing feature-feature matching using SIFT, SURF, BRISK, or any other local descriptor.
Many of these initial matches will be incorrect due to ambiguous features or features that arise from background clutter. [From Distinctive Image Features from Scale Invariant Keypoints]
The next step is to select only a subset of those matches that all agree on a common transformation between the two images. This is explained in sections 7.3 and 7.4 of Distinctive Image Features from Scale Invariant Keypoints.
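The consensus idea can be shown with the simplest possible motion model, a pure translation. This is an illustrative sketch only (the names are mine, and it is not how OpenCV implements it); in practice, findHomography(src, dst, CV_RANSAC, thresh, mask) applies the same idea with a full homography and returns an inlier mask you can use to filter the matches:

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// RANSAC-style consensus: each match proposes a translation; keep the
// translation that the largest number of matches agree with, and return
// the indices of those inlier matches.
std::vector<int> translationInliers(const std::vector<Pt>& src,
                                    const std::vector<Pt>& dst,
                                    double thresh)
{
    std::vector<int> best;
    for (size_t i = 0; i < src.size(); ++i) {   // each match is a hypothesis
        double dx = dst[i].x - src[i].x, dy = dst[i].y - src[i].y;
        std::vector<int> inliers;
        for (size_t j = 0; j < src.size(); ++j) {
            double ex = src[j].x + dx - dst[j].x;
            double ey = src[j].y + dy - dst[j].y;
            if (std::sqrt(ex * ex + ey * ey) < thresh)
                inliers.push_back((int)j);
        }
        if (inliers.size() > best.size()) best = inliers;
    }
    return best;
}
```

With this kind of filtering, the many raw matches between two unrelated scenes collapse to a small, inconsistent set, which is exactly the signal you can use to reject a non-matching database image.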
The OpenCV tutorial gives an excellent example of how to extract features and calculate a homography (a transformation that tells you how to transform each point from one image to the other).
You can replace the feature detector/descriptor with any other one, which will result in different robustness to certain transformations (like rotation or scaling) and degradations (like blur or illumination change). The basic implementation of BRISK already has meaningful parameters defined.
Last but not least, if you try to match two completely different images, what would you expect as a result? The algorithm will try to find similarities, and therefore always calculate a result, even if it is non-sense and the scores are very low. Just keep in mind: Garbage in -> Garbage out.
I'm looking for a way to warp an image similar to how the Liquify/IWarp tool works in Photoshop/GIMP.
I would like to use it to move a few points on an image to make it look wider than it was originally.
Does anyone have ideas about libraries that could be used to do this? I'm currently using OpenCV in the same project, so if there's a way using that it would be easiest, but I'm open to anything really.
Thanks.
EDIT: Here's an example of what I'm looking to do: http://i.imgur.com/wMOzq.png
All I've done there is pull a few points out sideways, and that's what I'm looking to do from inside my application.
From the search 'image warp operator source c++' I get:
"... Added function 'CImg<T>::[get_]warp()' that can warp an image using a deformation ... Added function 'CImg<T>::save_cpp()' allowing to save an image directly as a C/C++ source code. ..."
so CImg could work well for you.
OpenCV's remap can accomplish this; you only have to provide x and y displacement maps. You can create the displacement map directly if you are clever, and this works well for brush-stroke manipulation similar to Photoshop's Liquify. The mesh-warp and sparse-point-map approaches are other options, but they essentially compute the displacement map by interpolation.
You may want to take a look at http://code.google.com/p/imgwarp-opencv/. This library seems to be exactly what you need: image warping based on a sparse grid.
Another option is, of course, to generate the displacements yourself and use OpenCV's cv::remap() function.