Point cloud line searching algorithms - c++

I'm working with a 2D point cloud and I want to find the shape that it follows, and determine whether it is a closed loop or not. You'll have to excuse my terrible paint illustration, but essentially I want to be able to draw the red line from the black dots, and confirm whether it is closed or not. I'm currently looking through methods like RANSAC etc but they all seem suited to slightly different tasks than the one I face. I'll be implementing the solution in C++.
Does anyone know a good approach/algorithm to calculate the red line, given the black points, and determine whether it is closed?

My recommendation would be to:
Triangulate the point set.
Find a short path through the resulting edges, which is basically the traveling salesman problem. The greedy algorithm is probably good enough for this purpose.
Use a line simplification algorithm to "tidy up" the line (RDP algorithm works pretty well).
Treat any segment longer than X times the average distance between points (from step 2), i.e. an outlier in the data, as a gap, and use the presence of gaps to determine whether the shape is open or closed (see the sketch after this list).
Keep in mind steps #3 and #4 are highly opinionated. They both require you to define what you consider is "good enough", and will require some experimentation to set the best values.
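For illustration, here is a minimal C++ sketch of steps 2 and 4 only (greedy nearest-neighbour ordering plus the gap test); the triangulation and RDP steps are omitted, and the `factor` parameter is the "X" above, a guess you would tune experimentally:

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct Pt { double x, y; };

double dist(const Pt& a, const Pt& b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Step 2: greedy nearest-neighbour ordering - repeatedly hop to the closest
// unvisited point. Good enough as a TSP stand-in for well-sampled outlines.
std::vector<Pt> greedyOrder(std::vector<Pt> pts) {
    std::vector<Pt> path;
    if (pts.empty()) return path;
    path.push_back(pts.back());
    pts.pop_back();
    while (!pts.empty()) {
        std::size_t best = 0;
        double bestD = std::numeric_limits<double>::max();
        for (std::size_t i = 0; i < pts.size(); ++i) {
            double d = dist(path.back(), pts[i]);
            if (d < bestD) { bestD = d; best = i; }
        }
        path.push_back(pts[best]);
        pts.erase(pts.begin() + best);
    }
    return path;
}

// Step 4: closed if no segment (including the wrap-around one) is an outlier,
// i.e. longer than `factor` times the average segment length.
bool isClosed(const std::vector<Pt>& path, double factor = 3.0) {
    if (path.size() < 3) return false;
    double total = 0;
    for (std::size_t i = 0; i < path.size(); ++i)
        total += dist(path[i], path[(i + 1) % path.size()]);
    double avg = total / path.size();
    for (std::size_t i = 0; i < path.size(); ++i)
        if (dist(path[i], path[(i + 1) % path.size()]) > factor * avg)
            return false;
    return true;
}
```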

Related

Averaging SIFT features to do pose estimation

I have created a point cloud of an irregular (non-planar) complex object using SfM. Each one of those 3D points was viewed in more than one image, so it has multiple (SIFT) features associated with it.
Now, I want to solve for the pose of this object in a new, different set of images using a PnP algorithm matching the features detected in the new images with the features associated with the 3D points in the point cloud.
So my question is: which descriptor do I associate with the 3D point to get the best results?
So far I've come up with a number of possible solutions...
Average all of the descriptors associated with the 3D point (taken from the SfM pipeline) and use that "mean descriptor" to do the matching in PnP. This approach seems a bit far-fetched to me - I don't know enough about feature descriptors (specifically SIFT) to comment on the merits and drawbacks of this approach.
"Pin" all of the descriptors calculated during the SfM pipeline to their associated 3D point. During PnP, you would essentially have duplicate points to match with (one duplicate for each descriptor). This is obviously intensive.
Find the "central" viewpoint that the feature appears in (from the SfM pipeline) and use the descriptor from this view for PnP matching. So if the feature appears in images taken at -30, 10, and 40 degrees ( from surface normal), use the descriptor from the 10 degree image. This, to me, seems like the most promising solution.
Is there a standard way of doing this? I haven't been able to find any research or advice online regarding this question, so I'm really just curious if there is a best solution, or if it is dependent on the object/situation.
The descriptors that are used for matching in most SLAM or SFM systems are rotation and scale invariant (and to some extent, robust to intensity changes). That is why we are able to match them from different view points in the first place. So, in general it doesn't make much sense to try to use them all, average them, or use the ones from a particular image. If the matching in your SFM was done correctly, the descriptors of the reprojection of a 3d point from your point cloud in any of its observations should be very close, so you can use any of them [1].
Also, it seems to me that you are trying to directly match the 2d points to the 3d points. From a computational point of view, I think this is not a very good idea, because by matching 2d points with 3d ones, you lose the spatial information of the images and have to search for matches in a brute-force manner. This in turn can introduce noise. But if you do your matching from image to image and then propagate the results to the 3d points, you will be able to enforce priors: if you roughly know where you are (e.g. from an IMU), or if you know that your images are close, you can restrict the neighborhood where you look for matches in your images, and so on. Additionally, once you have computed your pose and refined it, you will need to add more points, no? How will you do it if you haven't done any 2d/2d matching, but just 2d/3d matching?
Now, the way to implement that usually depends on your application (how much covisibility or baseline you have between the poses from your SFM, etc). As an example, let's call your candidate image I_0, and the images from your SFM I_1, ..., I_n. First, match between I_0 and I_1. Now, assume q_0 is a 2d point from I_0 that has successfully been matched to q_1 from I_1, which corresponds to some 3d point Q. Now, to ensure consistency, consider the reprojection of Q in I_2, and call it q_2. Match I_0 and I_2. Does the point to which q_0 is matched in I_2 fall close to q_2? If yes, keep the 2d/3d match between q_0 and Q, and so on.
I don't have enough information about your data and your application, but I think that depending on your constraints (real-time or not, etc), you could come up with some variation of the above. The key idea anyway is, as I said previously, to try to match from frame to frame and then propagate to the 3d case.
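A minimal OpenCV sketch of that consistency check; all the input names here are assumptions (q0MatchInI2 is where q_0 independently matched in I_2, Q is the 3d point behind q_1, K is the camera matrix, (rvec2, tvec2) is the pose of I_2 from the SfM), and the pixel tolerance is an arbitrary guess:

```cpp
#include <cmath>
#include <vector>
#include <opencv2/opencv.hpp>

// Reproject Q into I_2 and verify that q_0's independent match in I_2
// lands close to the reprojection q_2.
bool consistent(const cv::Point2f& q0MatchInI2, const cv::Point3f& Q,
                const cv::Mat& K, const cv::Mat& rvec2, const cv::Mat& tvec2,
                double pixelTol = 3.0)
{
    std::vector<cv::Point2f> proj;
    cv::projectPoints(std::vector<cv::Point3f>{Q}, rvec2, tvec2, K,
                      cv::noArray(), proj);      // q_2, reprojection of Q in I_2
    cv::Point2f d = proj[0] - q0MatchInI2;
    return std::hypot(d.x, d.y) < pixelTol;      // keep the 2d/3d match if close
}
```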
Edit: Thank you for your clarifications in the comments. Here are a few thoughts (feel free to correct me):
Let's consider a SIFT descriptor s_0 from I_0, and let's denote your aggregated descriptor by F(s_1,...,s_n) (it can be an average or a concatenation of the SIFT descriptors s_i in their corresponding I_i, etc). Then when matching s_0 with F, you will only want to use a subset of the s_i that belong to images that have close viewpoints to I_0 (because of the 30deg problem that you mention, although I think it should be 50deg). That means that you have to attribute a weight to each s_i that depends on the pose of your query I_0. You obviously can't do that when constructing F, so you have to do it when matching. However, you don't have a strong prior on the pose (otherwise, I assume you wouldn't be needing PnP). As a result, you can't really determine this weight. Therefore I think there are two conclusions/options here:
SIFT descriptors are not adapted to the task. You can try coming up with a perspective-invariant descriptor. There is some literature on the subject.
Try to keep some visual information in the form of "Key-frames", as in many SLAM systems. It wouldn't make sense to keep all of your images anyway, just keep a few that are well distributed (pose-wise) in each area, and use those to propagate 2d matches to the 3d case.
If you only match between the 2d point of your query and 3d descriptors without any form of consistency check (as the one I proposed earlier), you will introduce a lot of noise...
tl;dr I would keep some images.
[1] Since you say that you obtain your 3d reconstruction from an SFM pipeline, some of the descriptors are probably considered inliers and some outliers (indicated by a boolean flag). If they are outliers, just ignore them; if they are inliers, then they are the result of matching and triangulation, and their position has been refined multiple times, so you can trust any of their descriptors.

Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition

One of the most interesting projects I've worked on in the past couple of years was a project about image processing. The goal was to develop a system to be able to recognize Coca-Cola 'cans' (note that I'm stressing the word 'cans', you'll see why in a minute). You can see a sample below, with the can recognized in the green rectangle with scale and rotation.
Some constraints on the project:
The background could be very noisy.
The can could have any scale or rotation or even orientation (within reasonable limits).
The image could have some degree of fuzziness (contours might not be entirely straight).
There could be Coca-Cola bottles in the image, and the algorithm should only detect the can!
The brightness of the image could vary a lot (so you can't rely "too much" on color detection).
The can could be partly hidden on the sides or the middle and possibly partly hidden behind a bottle.
There could be no can at all in the image, in which case you had to find nothing and write a message saying so.
So you could end up with tricky things like this (which in this case had my algorithm totally fail):
I did this project a while ago, and had a lot of fun doing it, and I had a decent implementation. Here are some details about my implementation:
Language: Done in C++ using OpenCV library.
Pre-processing: For the image pre-processing, i.e. transforming the image into a more raw form to give to the algorithm, I used the following methods (a rough sketch follows the list):
Changing color domain from RGB to HSV and filtering based on "red" hue, saturation above a certain threshold to avoid orange-like colors, and filtering of low value to avoid dark tones. The end result was a binary black and white image, where all white pixels would represent the pixels that match this threshold. Obviously there is still a lot of crap in the image, but this reduces the number of dimensions you have to work with.
Noise filtering using median filtering (replacing each pixel with the median value of its neighbors) to reduce noise.
Using the Canny Edge Detection Filter to get the contours of all items after the two preceding steps.
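A hedged OpenCV sketch of what such a pipeline might look like; the hue/saturation/value thresholds are placeholder assumptions, not the values the author used:

```cpp
#include <opencv2/opencv.hpp>

// HSV red filter -> median blur -> Canny edges, as described above.
cv::Mat preprocess(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    // Red hue wraps around 0 in OpenCV's 0-179 hue range, so threshold two bands.
    cv::Mat lo, hi, mask;
    cv::inRange(hsv, cv::Scalar(0, 120, 70),   cv::Scalar(10, 255, 255),  lo);
    cv::inRange(hsv, cv::Scalar(170, 120, 70), cv::Scalar(179, 255, 255), hi);
    mask = lo | hi;                      // binary image of "red enough" pixels
    cv::medianBlur(mask, mask, 5);       // median filtering for noise suppression
    cv::Mat edges;
    cv::Canny(mask, edges, 50, 150);     // contours of the remaining blobs
    return edges;
}
```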
Algorithm: The algorithm I chose for this task was taken from this awesome book on feature extraction; it is called the Generalized Hough Transform (pretty different from the regular Hough Transform). It basically says a few things:
You can describe an object in space without knowing its analytical equation (which is the case here).
It is resistant to image deformations such as scaling and rotation, as it will basically test your image for every combination of scale factor and rotation factor.
It uses a base model (a template) that the algorithm will "learn".
Each pixel remaining in the contour image will vote for another pixel which will supposedly be the center (in terms of gravity) of your object, based on what it learned from the model.
In the end, you end up with a heat map of the votes, for example here all the pixels of the contour of the can will vote for its gravitational center, so you'll have a lot of votes in the same pixel corresponding to the center, and will see a peak in the heat map as below:
Once you have that, a simple threshold-based heuristic can give you the location of the center pixel, from which you can derive the scale and rotation and then plot your little rectangle around it (final scale and rotation factor will obviously be relative to your original template). In theory at least...
Results: Now, while this approach worked in the basic cases, it was severely lacking in some areas:
It is extremely slow! I'm not stressing this enough. Almost a full day was needed to process the 30 test images, obviously because I searched over a very large range of scale and rotation factors, since some of the cans were very small.
It was completely lost when bottles were in the image, and for some reason almost always found the bottle instead of the can (perhaps because bottles were bigger, thus had more pixels, thus more votes)
Fuzzy images were also no good, since the votes ended up in pixels at random locations around the center, resulting in a very noisy heat map.
Invariance to translation and rotation was achieved, but not to orientation, meaning that a can that was not directly facing the camera wasn't recognized.
Can you help me improve my specific algorithm, using exclusively OpenCV features, to resolve the four specific issues mentioned?
I hope some people will learn something out of it as well; after all, I think it's not only the people who ask questions who should learn. :)
An alternative approach would be to extract features (keypoints) using the scale-invariant feature transform (SIFT) or Speeded Up Robust Features (SURF).
You can find a nice OpenCV code example in Java, C++, and Python on this page: Features2D + Homography to find a known object
Both algorithms are invariant to scaling and rotation. Since they work with features, you can also handle occlusion (as long as enough keypoints are visible).
Image source: tutorial example
The processing takes a few hundred ms for SIFT; SURF is a bit faster, but neither is suitable for real-time applications. ORB uses FAST, which is weaker regarding rotation invariance.
The original papers:
SURF: Speeded Up Robust Features
Distinctive Image Features from Scale-Invariant Keypoints
ORB: an efficient alternative to SIFT or SURF
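A condensed sketch of that tutorial's pipeline, assuming OpenCV 4.x (where SIFT lives in the main features2d module); the 0.75 ratio and the RANSAC reprojection threshold are conventional defaults used here as assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>

// Detect SIFT keypoints in a template and a scene, match them, and estimate
// the homography mapping the template onto the scene (empty Mat on failure).
cv::Mat findLogo(const cv::Mat& templ, const cv::Mat& scene) {
    auto sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kt, ks;
    cv::Mat dt, ds;
    sift->detectAndCompute(templ, cv::noArray(), kt, dt);
    sift->detectAndCompute(scene, cv::noArray(), ks, ds);

    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(dt, ds, knn, 2);

    std::vector<cv::Point2f> src, dst;
    for (const auto& m : knn)                     // Lowe's ratio test
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
            src.push_back(kt[m[0].queryIdx].pt);
            dst.push_back(ks[m[0].trainIdx].pt);
        }
    if (src.size() < 4) return cv::Mat();         // homography needs >= 4 points
    return cv::findHomography(src, dst, cv::RANSAC, 3.0);
}
```

Occlusion is handled naturally here: as long as enough keypoints survive the ratio test, RANSAC can still recover the homography.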
To speed things up, I would take advantage of the fact that you are not asked to find an arbitrary image/object, but specifically one with the Coca-Cola logo. This is significant because this logo is very distinctive, and it should have a characteristic, scale-invariant signature in the frequency domain, particularly in the red channel of RGB. That is to say, the alternating pattern of red-to-white-to-red encountered by a horizontal scan line (trained on a horizontally aligned logo) will have a distinctive "rhythm" as it passes through the central axis of the logo. That rhythm will "speed up" or "slow down" at different scales and orientations, but will remain proportionally equivalent. You could identify/define a few dozen such scanlines, both horizontally and vertically through the logo and several more diagonally, in a starburst pattern. Call these the "signature scan lines."
Searching for this signature in the target image is a simple matter of scanning the image in horizontal strips. Look for a high-frequency transition in the red channel (indicating a move from a red region to a white one), and once found, see if it is followed by one of the frequency rhythms identified in the training session. Once a match is found, you will instantly know the scan-line's orientation and location in the logo (if you keep track of those things during training), so identifying the boundaries of the logo from there is trivial.
I would be surprised if this weren't a linearly-efficient algorithm, or nearly so. It obviously doesn't address your can-bottle discrimination, but at least you'll have your logos.
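To make the idea concrete, here is a rough sketch of how one scan line's "rhythm" might be encoded; the per-pixel red test and its thresholds are placeholders, not part of the original suggestion:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Encode a row as the run lengths of red/non-red segments, normalised so the
// rhythm is proportionally equivalent at any scale.
std::vector<double> scanlineRhythm(const cv::Mat& bgr, int row) {
    std::vector<double> runs;
    bool prevRed = false;
    int runLen = 0;
    for (int x = 0; x < bgr.cols; ++x) {
        const cv::Vec3b& p = bgr.at<cv::Vec3b>(row, x);
        bool red = p[2] > 150 && p[1] < 100 && p[0] < 100;  // crude red test
        if (x > 0 && red != prevRed) {       // a run just ended
            runs.push_back(runLen);
            runLen = 0;
        }
        ++runLen;
        prevRed = red;
    }
    runs.push_back(runLen);
    for (double& r : runs) r /= bgr.cols;    // proportions only -> scale invariant
    return runs;  // compare against the template's rhythm with some tolerance
}
```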
(Update: for bottle recognition I would look for coke (the brown liquid) adjacent to the logo -- that is, inside the bottle. Or, in the case of an empty bottle, I would look for a cap, which will always have the same basic shape, size, and distance from the logo and will typically be all white or red. Search for a solid-color elliptical shape where a cap should be, relative to the logo. Not foolproof of course, but your goal here should be to find the easy ones fast.)
(It's been a few years since my image processing days, so I kept this suggestion high-level and conceptual. I think it might slightly approximate how a human eye might operate -- or at least how my brain does!)
Fun problem: when I glanced at your bottle image I thought it was a can too. But, as a human, what I did to tell the difference is that I then noticed it was also a bottle...
So, to tell cans and bottles apart, how about simply scanning for bottles first? If you find one, mask out the label before looking for cans.
Not too hard to implement if you're already doing cans. The real downside is it doubles your processing time. (But thinking ahead to real-world applications, you're going to end up wanting to do bottles anyway ;-)
Isn't it difficult even for humans to distinguish between a bottle and a can in the second image (provided the transparent region of the bottle is hidden)?
They are almost the same except for a very small region: the width at the top of the can is slightly smaller, while the wrapper of the bottle is the same width throughout. A minor difference, right?
The first thing that came to my mind was to check for the red top of the bottle. But it is still a problem if there is no top on the bottle, or if it is partially hidden (as mentioned above).
The second thing I thought about was the transparency of the bottle. There has been some work in OpenCV on finding transparent objects in an image. Check the links below.
OpenCV Meeting Notes Minutes 2012-03-19
OpenCV Meeting Notes Minutes 2012-02-28
Particularly look at this to see how accurately they detect glass:
OpenCV Meeting Notes Minutes 2012-04-24
See their implementation result:
They say it is the implementation of the paper "A Geodesic Active Contour Framework for Finding Glass" by K. McHenry and J. Ponce, CVPR 2006.
It might be somewhat helpful in your case, but the problem arises again if the bottle is filled.
So I think you can search for the transparent body of the bottles first, or for a red region connected laterally to two transparent regions, which is obviously the bottle. (Ideally, the result would be an image like the one below.)
Now you can remove the yellow region, that is, the label of the bottle and run your algorithm to find the can.
Anyway, this solution also has different problems like in the other solutions.
It works only if your bottle is empty. Otherwise, you will have to search for the red region between two dark regions (if the Coca-Cola liquid reads as black).
Another problem arises if the transparent part is covered.
But anyway, if none of the above problems occur in the pictures, this seems to be a better way.
I really like Darren Cook's and stacker's answers to this problem. I was in the midst of throwing my thoughts into a comment on those, but I believe my approach is too answer-shaped to not leave here.
In short summary, you've identified an algorithm to determine that a Coca-Cola logo is present at a particular location in space. You're now trying to determine, for arbitrary orientations and arbitrary scaling factors, a heuristic suitable for distinguishing Coca-Cola cans from other objects, inclusive of: bottles, billboards, advertisements, and Coca-Cola paraphernalia all associated with this iconic logo. You didn't call out many of these additional cases in your problem statement, but I feel they're vital to the success of your algorithm.
The secret here is determining what visual features a can contains or, through the negative space, what features are present for other Coke products that are not present for cans. To that end, the current top answer sketches out a basic approach for selecting "can" if and only if "bottle" is not identified, either by the presence of a bottle cap, liquid, or other similar visual heuristics.
The problem is this breaks down. A bottle could, for example, be empty and lack the presence of a cap, leading to a false positive. Or, it could be a partial bottle with additional features mangled, leading again to false detection. Needless to say, this isn't elegant, nor is it effective for our purposes.
To this end, the most correct selection criteria for cans appear to be the following:
Is the shape of the object silhouette, as you sketched out in your question, correct? If so, +1.
If we assume the presence of natural or artificial light, do we detect a chrome outline on the object that signifies it is made of aluminum? If so, +1.
Do we determine that the specular properties of the object are correct, relative to our light sources (illustrative video link on light source detection)? If so, +1.
Can we determine any other properties about the object that identify it as a can, including, but not limited to, the topological image skew of the logo, the orientation of the object, the juxtaposition of the object (for example, on a planar surface like a table or in the context of other cans), and the presence of a pull tab? If so, for each, +1.
Your classification might then look like the following:
For each candidate match, if the presence of a Coca Cola logo was detected, draw a gray border.
For each match over +2, draw a red border.
This visually highlights to the user what was detected, emphasizing weak positives that may, correctly, be detected as mangled cans.
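As a toy sketch of this scoring scheme (the property tests themselves are placeholders; only the accumulation and the border drawing are shown):

```cpp
#include <opencv2/opencv.hpp>

// Accumulate one point per satisfied criterion from the list above.
struct CanEvidence {
    bool silhouetteOk  = false;  // criterion 1: silhouette shape
    bool chromeOutline = false;  // criterion 2: aluminum outline
    bool specularOk    = false;  // criterion 3: specular properties
    int  extraCues     = 0;      // criterion 4: pull tab, context, skew, ...
    int score() const { return silhouetteOk + chromeOutline + specularOk + extraCues; }
};

void drawVerdict(cv::Mat& img, const cv::Rect& match, const CanEvidence& e) {
    cv::rectangle(img, match, cv::Scalar(128, 128, 128), 2);  // logo found: gray
    if (e.score() > 2)
        cv::rectangle(img, match, cv::Scalar(0, 0, 255), 2);  // strong match: red
}
```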
The detection of each property carries a very different time and space complexity, and for each approach, a quick pass through http://dsp.stackexchange.com is more than reasonable for determining the most correct and most efficient algorithm for your purposes. My intent here is, purely and simply, to emphasize that detecting if something is a can by invalidating a small portion of the candidate detection space isn't the most robust or effective solution to this problem, and ideally, you should take the appropriate actions accordingly.
And hey, congrats on the Hacker News posting! On the whole, this is a pretty terrific question worthy of the publicity it received. :)
Looking at shape
Take a gander at the shape of the red portion of the can/bottle. Notice how the can tapers off slightly at the very top whereas the bottle label is straight. You can distinguish between these two by comparing the width of the red portion across the length of it.
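A small sketch of that width-comparison test, assuming you already have a binary mask of the red region; the sampling bands and the 0.95 ratio are guesses to tune:

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Measure the width of the red mask row by row and compare the very top of
// the region against its middle; can labels taper at the top, bottle labels
// don't. The ROI is assumed to lie inside the mask.
bool tapersLikeACan(const cv::Mat& redMask, const cv::Rect& roi) {
    auto rowWidth = [&](int y) {
        return cv::countNonZero(redMask(cv::Rect(roi.x, y, roi.width, 1)));
    };
    int band = std::max(1, roi.height / 10);
    double top = 0, mid = 0;
    for (int y = roi.y; y < roi.y + band; ++y) top += rowWidth(y);
    int midStart = roi.y + roi.height / 2 - band / 2;
    for (int y = midStart; y < midStart + band; ++y) mid += rowWidth(y);
    return (top / band) < 0.95 * (mid / band);  // narrower at the top -> can
}
```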
Looking at highlights
One way to distinguish between bottles and cans is the material. A bottle is made of plastic whereas a can is made of aluminum metal. In sufficiently well-lit situations, looking at the specularity would be one way of telling a bottle label from a can label.
As far as I can tell, that is how a human would tell the difference between the two types of labels. If the lighting conditions are poor, there is bound to be some uncertainty in distinguishing the two anyways. In that case, you would have to be able to detect the presence of the transparent/translucent bottle itself.
Please take a look at Zdenek Kalal's Predator tracker. It requires some training, but it can actively learn how the tracked object looks at different orientations and scales and does it in realtime!
The source code is available on his site. It's in MATLAB, but perhaps there is a Java implementation already done by a community member. I have successfully re-implemented the tracker part of TLD in C#. If I remember correctly, TLD uses Ferns as the keypoint detector. I use either SURF or SIFT instead (already suggested by @stacker) to reacquire the object if it was lost by the tracker. The tracker's feedback makes it easy to build a dynamic list of SIFT/SURF templates over time that enable reacquiring the object with very high precision.
If you're interested in my C# implementation of the tracker, feel free to ask.
If you are not limited to just a camera (which wasn't one of your constraints), perhaps you can move to using a range sensor like the Xbox Kinect. With this you can perform depth- and colour-based segmentation of the image, which allows for faster separation of objects. You can then use ICP matching or similar techniques to match the shape of the can rather than just its outline or colour, and given that it is cylindrical, this may be a valid option for any orientation if you have a previous 3D scan of the target. These techniques are often quite quick, especially when used for such a specific purpose, which should solve your speed problem.
I could also suggest, not necessarily for accuracy or speed but for fun, using a trained neural network on your hue-segmented image to identify the shape of the can. These are very fast and can often be 80-90% accurate. Training would be a bit of a long process, though, as you would have to manually identify the can in each image.
I would detect red rectangles: RGB -> HSV, filter red -> binary image, close (dilate then erode, known as imclose in matlab)
Then look through rectangles from largest to smallest. Rectangles that have smaller rectangles in a known position/scale can both be removed (assuming bottle proportions are constant, the smaller rectangle would be a bottle cap).
This would leave you with red rectangles, then you'll need to somehow detect the logos to tell if they're a red rectangle or a coke can. Like OCR, but with a known logo?
This may be a very naive idea (or may not work at all), but the dimensions of all coke cans are fixed. So maybe if the same image contains both a can and a bottle, you can tell them apart by size (bottles are going to be larger). Now, because of the missing depth (i.e. the 3D-to-2D mapping), it's possible that a bottle may appear shrunk and there isn't a size difference. You may recover some depth information using stereo imaging and then recover the original size.
Hmm, I actually think I'm onto something (this is like the most interesting question ever - so it'd be a shame not to continue trying to find the "perfect" answer, even though an acceptable one has been found)...
Once you find the logo, your troubles are half done. Then you only have to figure out the differences between what's around the logo. Additionally, we want to do as little extra as possible. I think this is actually the easy part...
What is around the logo? For a can, we can see metal, which despite the effects of lighting, does not change whatsoever in its basic colour. As long as we know the angle of the label, we can tell what's directly above it, so we're looking at the difference between these:
Here, what's above and below the logo is completely dark, consistent in colour. Relatively easy in that respect.
Here, what's above and below is light, but still consistent in colour. It's all-silver, and all-silver metal actually seems pretty rare, as well as silver colours in general. Additionally, it's in a thin sliver and close enough to the red that has already been identified, so you could trace its shape for its entire length to calculate a percentage of what can be considered the metal ring of the can. Really, you only need a small fraction of that anywhere along the can to tell it is part of it, but you still need to find a balance that ensures it's not just an empty bottle with something metal behind it.
And finally, the tricky one. But not so tricky, once we're only going by what we can see directly above (and below) the red wrapper. It's transparent, which means it will show whatever is behind it. That's good, because things that are behind it aren't likely to be as consistent in colour as the silver circular metal of the can. There could be many different things behind it, which would tell us that it's an empty (or filled with clear liquid) bottle, or a consistent colour, which could either mean that it's filled with liquid or that the bottle is simply in front of a solid colour. We're working with what's closest to the top and bottom, and the chances of the right colours being in the right place are relatively slim. We know it's a bottle, because it hasn't got that key visual element of the can, which is relatively simplistic compared to what could be behind a bottle.
(that last one was the best I could find of an empty large coca cola bottle - interestingly the cap AND ring are yellow, indicating that the redness of the cap probably shouldn't be relied upon)
In the rare circumstance that a similar shade of silver is behind the bottle, even after the abstraction of the plastic, or the bottle is somehow filled with the same shade of silver liquid, we can fall back on what we can roughly estimate as being the shape of the silver - which as I mentioned, is circular and follows the shape of the can. But even though I lack any certain knowledge in image processing, that sounds slow. Better yet, why not deduce this by for once checking around the sides of the logo to ensure there is nothing of the same silver colour there? Ah, but what if there's the same shade of silver behind a can? Then, we do indeed have to pay more attention to shapes, looking at the top and bottom of the can again.
Depending on how flawless this all needs to be, it could be very slow, but I guess my basic concept is to check the easiest and closest things first. Go by colour differences around the already matched shape (which seems the most trivial part of this anyway) before going to the effort of working out the shape of the other elements. To list it, it goes:
Find the main attraction (red logo background, and possibly the logo itself for orientation, though in case the can is turned away, you need to concentrate on the red alone)
Verify the shape and orientation, yet again via the very distinctive redness
Check colours around the shape (since it's quick and painless)
Finally, if needed, verify the shape of those colours around the main attraction for the right roundness.
In the event you can't do this, it probably means the top and bottom of the can are covered, and the only possible things that a human could have used to reliably distinguish the can from the bottle are the occlusion and reflection of the can, which would be a much harder battle to process. However, to go even further, you could follow the angle of the can/bottle to check for more bottle-like traits, using the semi-transparent scanning techniques mentioned in the other answers.
Interesting additional nightmares might include a can conveniently sitting behind the bottle at such a distance that the metal of it just so happens to show above and below the label. This would still fail as long as you're scanning along the entire length of the red label. It's actually more of a problem here because you're failing to detect a can where you could have, rather than accidentally detecting a bottle as a can. The glass is half empty, in that case!
As a disclaimer, I have no experience in nor have ever thought about image processing outside of this question, but it is so interesting that it got me thinking pretty deeply about it, and after reading all the other answers, I consider this to possibly be the easiest and most efficient way to get it done. Personally, I'm just glad I don't actually have to think about programming this!
EDIT
Additionally, look at this drawing I did in MS Paint... It's absolutely awful and quite incomplete, but based on the shape and colours alone, you can guess what it's probably going to be. In essence, these are the only things that one needs to bother scanning for. When you look at that very distinctive shape and combination of colours so close, what else could it possibly be? The bit I didn't paint, the white background, should be considered "anything inconsistent". If it had a transparent background, it could go over almost any other image and you could still see it.
I am a few years late in answering this question. With the state of the art pushed to its limits by CNNs in the last 5 years, I wouldn't use OpenCV to do this task now! (I know you specifically wanted OpenCV features in the question.) I feel object detection algorithms such as Faster R-CNN, YOLO, and SSD would ace this problem by a significant margin compared to OpenCV features. If I were to tackle this problem now (after 6 years !!) I would definitely use Faster R-CNN.
I'm not aware of OpenCV, but looking at the problem logically, I think you could differentiate between bottle and can by changing the image you are looking for, i.e. the Coca-Cola template. You should include the top portion of the can, since there is a silver lining at the top of a can of Coca-Cola, whereas a bottle has no such silver lining.
But obviously this algorithm will fail in cases where the top of the can is hidden. In such cases, though, even a human would not be able to differentiate between the two (if only the Coca-Cola portion of the bottle/can is visible).
I like the challenge and wanted to give an answer that, I think, solves the issue.
Extract features (keypoints, descriptors such as SIFT, SURF) of the logo
Match the points with a model image of the logo (using a matcher such as brute force)
Estimate the coordinates of the rigid body (PnP problem - SolvePnP)
Estimate the cap position according to the rigid body
Do back-projection and calculate the image pixel position (ROI) of the cap of the bottle (I assume you have the intrinsic parameters of the camera)
Check with some method whether the cap is there or not. If it is there, then this is a bottle.
Detection of the cap is another issue. It can be either complicated or simple. If I were you, I would simply check the color histogram in the ROI for a simple decision.
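A minimal sketch of steps 3-5 with OpenCV's solvePnP and projectPoints; the cap's 3d offset in the logo's model frame and the ROI radius are assumptions you would calibrate:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Solve the pose from 2d/3d logo correspondences, then back-project an
// assumed 3d cap position into the image to get a cap ROI.
cv::Rect capRoi(const std::vector<cv::Point3f>& modelPts,   // 3d logo points
                const std::vector<cv::Point2f>& imagePts,   // their 2d matches
                const cv::Mat& K, const cv::Mat& distCoeffs,
                const cv::Point3f& capOffset, float capRadiusPx = 20.f)
{
    cv::Mat rvec, tvec;
    cv::solvePnP(modelPts, imagePts, K, distCoeffs, rvec, tvec);  // step 3
    std::vector<cv::Point2f> proj;
    cv::projectPoints(std::vector<cv::Point3f>{capOffset},        // steps 4-5
                      rvec, tvec, K, distCoeffs, proj);
    int cx = cvRound(proj[0].x), cy = cvRound(proj[0].y), r = cvRound(capRadiusPx);
    return cv::Rect(cx - r, cy - r, 2 * r, 2 * r);
    // step 6: check a hue histogram inside this ROI to decide bottle vs can
}
```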
Please give feedback if I am wrong. Thanks.
I like your question, regardless of whether it's off topic or not :P
An interesting aside; I've just completed a subject in my degree where we covered robotics and computer vision. Our project for the semester was incredibly similar to the one you describe.
We had to develop a robot that used an Xbox Kinect to detect coke bottles and cans in any orientation in a variety of lighting and environmental conditions. Our solution involved using a band-pass filter on the Hue channel in combination with the Hough circle transform. We were able to constrain the environment a bit (we could choose where and how to position the robot and Kinect sensor); otherwise we were going to use the SIFT or SURF transforms.
You can read about our approach on my blog post on the topic :)
Deep Learning
Gather at least a few hundred images containing cola cans and annotate bounding boxes around them as the positive class; include cola bottles, other cola products, and random objects labelled as negative classes.
Unless you collect a very large dataset, use the trick of extracting deep learning features from a pretrained model for your small dataset, ideally combining Support Vector Machines (SVM) with deep neural nets.
Once you feed the images to a pretrained deep learning model (e.g. GoogLeNet), instead of using the neural network's final decision layer to do the classification, use the previous layer's data as features to train your classifier.
OpenCV and Google Net:
http://docs.opencv.org/trunk/d5/de7/tutorial_dnn_googlenet.html
OpenCV and SVM:
http://docs.opencv.org/2.4/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.html
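A hedged sketch of that trick with OpenCV's dnn and ml modules. The layer name ("pool5/7x7_s1", GoogLeNet's last pooling layer), the input size, and the mean values are assumptions tied to the Caffe GoogLeNet model from the tutorial above:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/ml.hpp>
#include <vector>

// Run the net up to its last pooling layer and use that output as a feature.
cv::Mat deepFeature(cv::dnn::Net& net, const cv::Mat& img) {
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224),
                                          cv::Scalar(104, 117, 123));
    net.setInput(blob);
    return net.forward("pool5/7x7_s1").reshape(1, 1).clone();  // 1 x 1024 row
}

// Train an SVM on those features instead of using the net's decision layer.
cv::Ptr<cv::ml::SVM> trainOnDeepFeatures(cv::dnn::Net& net,
                                         const std::vector<cv::Mat>& images,
                                         const std::vector<int>& labels) {
    cv::Mat train, responses(labels, true);   // e.g. can = 1, not-can = 0
    for (const auto& img : images) train.push_back(deepFeature(net, img));
    auto svm = cv::ml::SVM::create();
    svm->setKernel(cv::ml::SVM::LINEAR);
    svm->train(train, cv::ml::ROW_SAMPLE, responses);
    return svm;
}
```

Here `net` would come from something like cv::dnn::readNetFromCaffe with the GoogLeNet prototxt and caffemodel files used in the first tutorial.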
There are a bunch of color descriptors used to recognise objects; the paper below compares a lot of them. They are especially powerful when combined with SIFT or SURF. SURF or SIFT alone are not very useful in a Coca-Cola can image because they don't detect many interest points; you need the color information to help. I used BIC (Border/Interior pixel Classification) with SURF in a project, and it worked great for recognising objects.
Color descriptors for Web image retrieval: a comparative study
You need a program that learns and improves classification accuracy organically from experience.
I'll suggest deep learning; with deep learning, this becomes a trivial problem.
You can retrain the inception v3 model on Tensorflow:
How to Retrain Inception's Final Layer for New Categories.
In this case, you will be training a convolutional neural network to classify an object as either a coca-cola can or not.
As an alternative to all these nice solutions, you can train your own classifier and make your application robust to errors. For example, you can use Haar training, providing a good number of positive and negative images of your target.
It can be useful to extract only cans and can be combined with the detection of transparent objects.
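Once a cascade has been trained (say with opencv_traincascade), using it is only a few lines; the cascade file name here is a placeholder:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect can candidates with a pre-trained Haar cascade.
std::vector<cv::Rect> detectCans(const cv::Mat& frame) {
    static cv::CascadeClassifier cascade("can_cascade.xml");  // your trained file
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);                  // normalize lighting a bit
    std::vector<cv::Rect> cans;
    cascade.detectMultiScale(gray, cans, 1.1, 3);  // scale step, min neighbours
    return cans;
}
```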
There is a computer vision package called HALCON from MVTec whose demos could give you good algorithm ideas. There are plenty of examples similar to your problem that you could run in demo mode and then look at the operators in the code and see how to implement them from existing OpenCV operators.
I have used this package to quickly prototype complex algorithms for problems like this and then find how to implement them using existing OpenCV features. In particular for your case you could try to implement in OpenCV the functionality embedded in the operator find_scaled_shape_model. Some operators point to the scientific paper regarding algorithm implementation which can help to find out how to do something similar in OpenCV.
Maybe I'm too many years late, but nevertheless here is a theory to try.
The ratio of the bounding rectangle of the red logo region to the overall dimension of the bottle/can is different. In the case of a can, it should be around 1:1, whereas it will be different for a bottle (with or without the cap).
This should make it easy to distinguish between the two.
Update:
The horizontal curvature of the logo region will be different between the can and the bottle due to their respective size difference. This could be specifically useful if your robot needs to pick up the can/bottle and you decide the grip accordingly.
If you are interested in it being realtime, then what you need is to add a pre-processing filter to determine what gets scanned with the heavy-duty stuff. A good, fast, very real-time pre-processing filter that lets you scan things that are more likely to be a Coca-Cola can than not, before moving on to more iffy things, is something like this: search the image for the biggest patches of color that are within a certain tolerance of your Coca-Cola red, measured as the Euclidean distance sqrt(pow(r - r0, 2) + pow(g - g0, 2) + pow(b - b0, 2)) from a reference color (r0, g0, b0). Start with a very strict color tolerance, and work your way down to more lenient tolerances. Then, when your robot runs out of the allotted time to process the current frame, it uses the currently found candidates for your purposes. Please note that you will have to tweak the reference RGB values to get them just right.
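A naive sketch of that pre-processing filter; the reference colour is a made-up stand-in you would have to tweak, as noted above:

```cpp
#include <cmath>
#include <opencv2/opencv.hpp>

// Keep only pixels whose Euclidean RGB distance to a reference can colour is
// within the tolerance; rerun with looser tolerances as time allows.
cv::Mat colorDistanceMask(const cv::Mat& bgr, double tolerance) {
    const cv::Vec3d ref(40, 40, 200);   // B, G, R guess for "Coca-Cola red"
    cv::Mat mask(bgr.size(), CV_8U);
    for (int y = 0; y < bgr.rows; ++y)
        for (int x = 0; x < bgr.cols; ++x) {
            cv::Vec3b p = bgr.at<cv::Vec3b>(y, x);
            double d = std::sqrt(std::pow(p[0] - ref[0], 2) +
                                 std::pow(p[1] - ref[1], 2) +
                                 std::pow(p[2] - ref[2], 2));
            mask.at<uchar>(y, x) = d <= tolerance ? 255 : 0;
        }
    return mask;  // next: find the biggest connected patches in this mask
}
```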
Also, this is gonna seem really dumb, but did you make sure to turn on -Ofast compiler optimizations when you compiled your C++ code?
The first thing I would look for is color, as with RED when doing red-eye detection in an image: there is a certain color range to detect, and some characteristics about it considering the surrounding area (such as the distance from the other eye, if it is indeed visible in the image).
1: The first characteristic is color, and red is very dominant. After detecting the Coca-Cola red, there are several items of interest:
1A: How big is this red area? (Is it of sufficient quantity to make a determination of a true can or not? 10 pixels is probably not enough.)
1B: Does it contain the colors of the label: the "Coca-Cola" lettering or the wave?
1B1: Is there enough of it to consider a high probability that it is a label?
Item 1 is kind of a shortcut: pre-process, and if that red does not exist in the image, move on.
If it does exist, I can then take that segment of my image and start looking a little more zoomed out of the area in question, basically looking at the surrounding region/edges...
2: Given the image area identified in 1, verify the surrounding points [edges] of the item in question.
A: Is there what appears to be a can top or bottom - silver?
B: A bottle might appear transparent, but so might a glass table, so is there a glass table/shelf or a transparent area? If so, there are multiple possible outcomes. A bottle MIGHT have a red cap, it might not, but it should have either the shape of the bottle top/screw threads or a cap.
C: Even if this fails A and B, it can still be a can, just a partial one.
This is more complex when the object is partial, because a partial bottle and a partial can might look the same, so some more processing is needed to measure the red region edge to edge; a small bottle might be similar in size.
3: After the above analysis is when I would look at the lettering and the wave logo, because I can orient my search for some of the letters in the words. You might not have all of the text due to not having all of the can, but the wave aligns with the text at certain points (distance-wise), so I could search for that probability and know which letters should exist at that point of the wave, at distance x.

Detecting curves in OpenCV

I am just starting to use OpenCV to detect specific curves in an image. First, I want to verify if there is a curve, and next, I would like to identify the type of curve according to vertical or horizontal convex or concave curve. Is there an available function in OpenCV? If not, can you give me some ideas about how can I possibly write such a function? Thanks! By the way, I'm using C++.
Template matching is not a robust way to solve this problem (it's like looking at an object through a small pinhole), and edge detectors don't necessarily return the true edges in the image; false edges, such as those due to shadows, are returned too. Furthermore, you have to deal with the problem of incomplete edges and other problems that scale up with the complexity of the scene in your image.
The problem you posed, in general, is a very challenging one and, except for toy examples, there are no good solutions.
A rough attempt could be to first try to detect plausible edges using an edge detector (e.g. the Canny edge detector), and then use RANSAC to try to fit a subset of the points in the detected edges to your curve model.
For example, let's say you are trying to detect a curve of the form f(x) = ax^2 + bx + c. RANSAC will basically try to find, from among the points in the detected edges, a subset that best fits this curve model. To detect different curves, change f(x) accordingly and run RANSAC for each of them. You can then try to determine whether the curve represented by f(x) really exists in your image using some heuristic applied to the points that were assigned to it by RANSAC (e.g. if too few points were fitted to the model, the curve is likely not there; but how do you determine a good threshold for the number of points?). Your model will get more complex when you have to account for allowable transformations such as rotation, etc.
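To make this concrete, here is a plain C++ sketch of RANSAC for the parabola model above; the tolerance, iteration count, and inlier threshold are exactly the kind of heuristics the text warns you will have to tune:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct P2 { double x, y; };

// Repeatedly fit f(x) = ax^2 + bx + c to 3 random edge points and keep the
// fit with the most inliers within `tol` vertical distance.
bool ransacParabola(const std::vector<P2>& pts, double& a, double& b, double& c,
                    double tol = 2.0, int iters = 500, std::size_t minInliers = 50) {
    if (pts.size() < 3) return false;
    std::size_t bestCount = 0;
    for (int it = 0; it < iters; ++it) {
        const P2& p = pts[std::rand() % pts.size()];
        const P2& q = pts[std::rand() % pts.size()];
        const P2& r = pts[std::rand() % pts.size()];
        // Solve for the coefficients via Lagrange interpolation through p, q, r.
        double d1 = (p.x - q.x) * (p.x - r.x), d2 = (q.x - p.x) * (q.x - r.x),
               d3 = (r.x - p.x) * (r.x - q.x);
        if (d1 == 0 || d2 == 0 || d3 == 0) continue;  // degenerate sample
        double ca = p.y / d1 + q.y / d2 + r.y / d3;
        double cb = -p.y * (q.x + r.x) / d1 - q.y * (p.x + r.x) / d2
                    - r.y * (p.x + q.x) / d3;
        double cc = p.y * q.x * r.x / d1 + q.y * p.x * r.x / d2
                    + r.y * p.x * q.x / d3;
        std::size_t count = 0;
        for (const P2& s : pts)
            if (std::abs(ca * s.x * s.x + cb * s.x + cc - s.y) < tol) ++count;
        if (count > bestCount) { bestCount = count; a = ca; b = cb; c = cc; }
    }
    return bestCount >= minInliers;  // too few inliers -> curve likely absent
}
```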
The problem with this approach is that you are basically trying to fit what you think should be in the image to the points, and sometimes, even if what you are looking for is not there, it will return you the "best possible" fit. For example, if you have a whole bunch of points detected from concentric circles and you try to detect straight lines from them, RANSAC will return you the best-fit line! In fact, it could give you many different lines from different runs depending on which points it selected during its random initialization stage.
For more details on how to use RANSAC on this sort of problem, have a look at RANSAC for Dummies by Marco Zuliani. He also has a nice MATLAB toolbox to accompany this tech report, which you can probably port to the language of your choice.
Unless you know what your background looks like, or you are in control of it, e.g. by forcing a clean background, this is a very difficult problem to solve.

GJK collision detection implementation from 2D to 3D

I apologize for the length of this question and give a pre-emptive thanks for anyone who reads through this!
So I've spent the last few days going over the GJK algorithm. I understand the general concepts behind it, and understand most of the nitty-gritty details of its implementation in 2D thanks to the wonderful article by William Bittle at http://www.codezealot.org/archives/88 .
I've implemented his pseudo code (found at the end of the article) in my own C++ project; however, I want to make a 3D implementation. My weakness is in using the dot products to test the Voronoi regions and the triple products to get perpendicular lines. But I'm trying to read up more on that.
My problem comes down to the containsOrigin function. I'm having trouble visualizing and accounting for the new Voronoi regions that the z axis adds. I just can't seem to wrap my head around how to determine which region contains the origin. I assume there are 4 I have to account for, each extending from the triangular planes that comprise the 4 faces of the tetrahedron simplex. If the origin is not within any of those regions, then it is contained, and we have a collision.
How do I go about testing whether it is contained in a particular Voronoi region / which triangular face is pointing in the direction of the origin?
The current 2D algorithm checks if a triangle is made; if not, then the simplex is a line and it finds the 3rd point. I assume the 3D algorithm will check if a tetrahedron is made; if not, it will check for a triangle, and if that is true it will find a 4th point to make a tetrahedron (how would I get this? Using a normal in the direction of the origin?). If a triangle isn't made, it will find a 3rd point to make a triangle (do I still use the triple product for this like in 2D?).
Any suggestions, outlines, resources, code augmentations, or comments are much appreciated.
Depending on what result you expect from the GJK algorithm you might want to look at this nice tutorial from Molly Rocket: https://mollyrocket.com/849
Be aware though that his implementation only outputs whether there is an intersection (yes/no). But it might be a nice start.
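For what it's worth, here is a hedged C++ sketch of the tetrahedron containsOrigin test; it assumes the newest point is the last entry in the simplex, and a full implementation would fall through to the triangle case on the kept face rather than just returning:

```cpp
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator-() const { return {-x, -y, -z}; }
};
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Tetrahedron case: a = s[3] is the newest point. Only the three faces that
// contain a need checking; the origin cannot lie beyond face bcd, because a
// was found by searching from that face toward the origin.
bool tetrahedronContainsOrigin(std::vector<Vec3>& s, Vec3& dir) {
    Vec3 a = s[3], b = s[2], c = s[1], d = s[0];
    Vec3 ao = -a, ab = b - a, ac = c - a, ad = d - a;
    // Orient each face normal away from the vertex opposite that face.
    auto outward = [](Vec3 n, const Vec3& toOpposite) {
        return dot(n, toOpposite) > 0 ? -n : n;
    };
    Vec3 nABC = outward(cross(ab, ac), ad);
    Vec3 nACD = outward(cross(ac, ad), ab);
    Vec3 nADB = outward(cross(ad, ab), ac);
    // If the origin is outside a face, drop the opposite vertex and search on.
    if (dot(nABC, ao) > 0) { s = {c, b, a}; dir = nABC; return false; }
    if (dot(nACD, ao) > 0) { s = {d, c, a}; dir = nACD; return false; }
    if (dot(nADB, ao) > 0) { s = {b, d, a}; dir = nADB; return false; }
    return true;  // inside all four faces: the shapes intersect
}
```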

Generating contour lines from regularly spaced data

I am currently working on a data visualization project. My aim is to produce contour lines, in other words iso-lines, from gridded data. The data can be temperature, weather data, or any other kind of environmental parameter; the only condition is that it must be regularly spaced.
I searched the internet; however, I could not find a good algorithm, pseudo-code, or source code for producing contour lines from grids.
Does anybody know a library, source code, or an algorithm for producing contour lines from gridded data?
It will be good if your suggestion has good runtime performance; I don't want to keep my users waiting too long :)
Edit: thanks for the responses, but isolines have some constraints, e.g. they should not intersect, so just generating Bezier curves does not accomplish my goal.
See this question: How to approximate a vector contour from an elevation raster?
It's a near duplicate, but uses quite different terminology. You'll find that cartography and computer graphics solve many of the same problems, but use different terminology for them.
There's some reasonably good contouring available in gnuplot; if you're able to use GPL code, that may help.
If your data is placed at regular intervals, this can be done fairly easily (assuming I understand your problem correctly). First you need to determine at what interval you want your contours. Next, create the grid you are going to use to store the contour information (I'm assuming just simple on/off or "elevation at this contour level" type data), which should be one interval smaller than the source data.
Now the trick here is to offset the 2 grids by half an interval (this won't actually show up in code like this, but it's the concept I'm dealing with here) and compare the 4 coordinates surrounding the current point in the contour data grid you are calculating. If any of the 4 points fall in a different interval range, then that "pixel" in the contour grid should be set to true (or to the value of the contour range being crossed).
With this method, there will be a problem when the interval is too fine, which will cause several contours to overlap each other.
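Here is a minimal C++ sketch of that scheme (a "contour grid" one cell smaller than the data grid, marking cells whose four surrounding samples straddle a contour band); a non-empty rectangular grid is assumed:

```cpp
#include <cmath>
#include <vector>

// `interval` is the iso-line spacing; a cell is marked when its 4 corners
// do not all fall in the same contour band.
std::vector<std::vector<bool>>
contourCells(const std::vector<std::vector<double>>& data, double interval) {
    if (data.size() < 2 || data[0].size() < 2) return {};
    std::size_t rows = data.size(), cols = data[0].size();
    std::vector<std::vector<bool>> cells(rows - 1, std::vector<bool>(cols - 1));
    for (std::size_t r = 0; r + 1 < rows; ++r)
        for (std::size_t c = 0; c + 1 < cols; ++c) {
            // Bucket each corner by the contour band it falls in.
            int b0 = (int)std::floor(data[r][c]         / interval);
            int b1 = (int)std::floor(data[r][c + 1]     / interval);
            int b2 = (int)std::floor(data[r + 1][c]     / interval);
            int b3 = (int)std::floor(data[r + 1][c + 1] / interval);
            // The cell lies on a contour when the corners span different bands.
            cells[r][c] = !(b0 == b1 && b1 == b2 && b2 == b3);
        }
    return cells;  // marching squares then turns these cells into line segments
}
```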
As the link from Paul Tomblin suggests, Bezier curves (which are a subset of B-splines) are a ripe solution for your problem. If runtime performance is an issue, Bezier curves have the added benefit of being constructable via the very fast de Casteljau algorithm, instead of drawing them according to the parametric equations. On the off chance you're working with DirectX, it has a library function for the de Casteljau, but it should not be challenging to brew one yourself using the 1001 web pages that describe it.