Detect person in bed - computer-vision

Suppose I want to find out if there is a person in a bed or not using cameras and computer vision algorithms. One can assume that the camera provides RGB, infrared and depth data.
I don't really have a good idea how to solve this. So far I came up with this:
Estimate a plane for the bed surface using RANSAC. This plane should be further away from the ground plane if there is a person in the bed. This seems very unstable though: it assumes that the normal height of the bed is known, and it can easily be broken if the bed has an adjustable head section (e.g. in a hospital).
Face detection. Try to detect a face in the bed. This probably isn't very reliable either, since the face can be turned sideways to the camera and partly covered.
Use the infrared image. I am not sure how much you would see through the blanket, and what would happen if the person has just left the bed and it is still warm?
Is there a good way to do this? Or, to be reliable, would you have to use pressure sensors in the bed?
Thanks!

I don't know about infrared images, but for camera-based video processing this kind of problem is widely studied.
If your problem is to detect a person in a bed that is normally empty, then I think the simplest algorithm would be to capture successive frames and calculate their difference.
The presence of a human in the frame will make it differ from a frame capturing only the empty bed; see the sketch below. Variations of this basic idea will give you different levels of reliability.
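A minimal sketch of that frame-differencing check with OpenCV in Python; the file names, blur size, and thresholds are just placeholders to tune:

```python
import cv2

# Minimal frame-differencing sketch (file names and thresholds are placeholders).
empty_bed = cv2.imread("empty_bed.png", cv2.IMREAD_GRAYSCALE)   # reference frame
current   = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)     # frame to test

# Blur a little to suppress sensor noise before differencing.
empty_bed = cv2.GaussianBlur(empty_bed, (5, 5), 0)
current   = cv2.GaussianBlur(current, (5, 5), 0)

diff = cv2.absdiff(current, empty_bed)
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

# If a large fraction of the bed region changed, assume someone is in the bed.
changed_ratio = cv2.countNonZero(mask) / float(mask.size)
print("person in bed?", changed_ratio > 0.15)
```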
Otherwise you can go directly for human detection in video frames. One possible algorithm is described here.
Edit:
Your problem is harder than I thought. The following approach might cover the different cases.
The main idea is to use a bunch of features at once to get higher accuracy and remove false positives.
Use a HOG person detector at the top level to detect a person's entry into the scene (a sketch follows this list). If the positions of the possible entry doors are known, or detectable using edge lines in the scene, use them to increase accuracy. (At the point of entry, the difference in successive frames will be located near the doors.)
Use edge lines to track the human, and use the bed edges to track the human's position relative to the bed. The edges of the human should be bounded by the edges of the bed.
If the frame difference is located within the bed, this implies the human is in the bed but moving.
If needed, include texture analysis and connected-component analysis as a preprocessing step to remove other moving objects in the room for higher accuracy (for example, clothes moving because of airflow).
Also use face detectors to increase accuracy.
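As a rough sketch of the HOG step above, OpenCV ships a default people detector; note it is trained on upright pedestrians, so it mainly helps at the moment of entry rather than for someone lying down. The video path and parameters are placeholders:

```python
import cv2

# Rough sketch of the HOG entry-detection step (video path is a placeholder).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("room.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # detectMultiScale returns bounding boxes of upright people in the frame.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("entry detection", frame)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break
cap.release()
```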

The infrared that the camera uses has a different wavelength than the infrared emitted by a warm object. Unless you are using military-grade thermal IR scanners, you can forget about the IR-warmth connection. But IR is still useful if there is limited light or if you use it for depth maps.
Go with depth (Kinect style) and segment the bed out of your image. It should have some distinctive features in depth (certain dimensions, flatness, etc.). The bed is usually surrounded by walls or floor that are easy to segment out. Your algorithm can also be tuned to the distance to the bed and cut it out based purely on depth range; a sketch of this follows.
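A minimal sketch of the depth-range idea, assuming a millimetre depth map and a roughly known bed distance; all numbers are made up and would need calibration:

```python
import cv2
import numpy as np

# Sketch: cut the bed out of a Kinect-style depth map by depth range alone.
# The file name and the 1.5 m - 2.5 m range are assumptions for illustration.
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)  # depth in mm

bed_mask = (depth > 1500) & (depth < 2500)          # keep pixels between 1.5 m and 2.5 m

# Compare the median depth inside the bed region to the known empty-bed depth:
# a person lying on the mattress raises the surface by several centimetres.
empty_bed_depth = 2300.0                             # measured once, in mm (assumption)
median_depth = np.median(depth[bed_mask])
print("occupied?", (empty_bed_depth - median_depth) > 80)   # ~8 cm closer to the camera
```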
As other people said, it would be useful to learn more about your particular goal or application. What is the background or environment around the bed? How does it look when there is no person in it? Can a person simulate his/her presence (as in a prison-escape scenario)? Etc.

how to detect the coordinates of certain points on image

I'm using the ORB algorithm to detect and get the coordinates of the crossings of rope shown in the image, which is represented by the red dot. I want to detect the coordinates of the four points surrounding the crossing represented by the blue dots. All the four points have the same distance from the red spot.
Any idea how to get their coordinates by making use of the red spot coordinate?
Thank you in advance.
Although you're using ORB, you're still going to need an algorithm to segment the rope from the background, or at least some technique to identify image chunks that belong to the rope and that are equidistant from the red dot. There are a number of options to explore.
It's important to consider your lighting & imaging as separate problems to be solved if this is meant to be a real-world application. This looks a bit like a problem for a class rather than for an application you'll sell and support, but you should still consider lighting:
Will your algorithm(s) still work when light level is reduced?
How will detection be affected by changes in camera pose relative to the surface where the rope will be located?
If you'll be detecting "black" rope, will the algorithm also be required to detect rope of different colors? dirty rope? rope on different backgrounds?
Since your object of interest is rope, you have to consider a class of algorithms suitable for detecting non-rigid objects. Always consider the simplest solution first!
Connected Components
Connected components labeling is a traditional image processing algorithm and still suitable as the starting point for many applications. The last I knew, this was implemented in OpenCV as findContours(). This can also be called "blob finding" or some variant thereof.
https://en.wikipedia.org/wiki/Connected-component_labeling
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours
Depending on lighting, you may have to take different steps to binarize the image before running connected components. As a start, convert the color image to grayscale, which will simplify the task significantly.
Try a manual threshold since you can quickly test a number of values to see the effect. Don't be too discouraged if the binarization isn't quite right--this can often be fixed with preprocessing.
If a range of manual thresholds works (e.g. 52 - 76 in an 8-bit grayscale range), then use an algorithm that will automatically calculate the threshold for you: Otsu, entropy-based methods, etc., will all offer comparable performance. Whichever technique works best, the code/algorithm can be tweaked further to optimize for your rope application.
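A minimal sketch of that grayscale / Otsu / connected-components pipeline in OpenCV (Python); the file name is a placeholder and the largest-blob assumption is just for illustration:

```python
import cv2

# Sketch of the binarization + connected-components (contour) pipeline.
img = cv2.imread("rope.png")                       # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The rope is dark on a light background, so invert so the rope becomes white.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# findContours is OpenCV's blob-boundary extraction (OpenCV 4.x returns two values here).
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rope = max(contours, key=cv2.contourArea)          # assume the rope is the biggest blob
print("rope blob area:", cv2.contourArea(rope))
```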
If thresholding and binarization don't work--which for your rope application seems unlikely, at least how you've presented it--then switch to thinking in terms of gradient-based (edge-based, energy-based) techniques.
But assuming you can separate the rope from the background, you're still going to need a method to start at the red dot [within the rope] and move equal distances out to the blue points. More about that later after a discussion of other rope segmentation methods.
Note: connected components labeling can work in scenarios beyond just binarizing black & white images. If you can create a texture field or some other 2D representation of the image that makes it possible to distinguish the black rope from the relatively light background, you may be able to use a connected components algorithm. (Finding a "more complicated" or "more modern" algorithm isn't necessarily going to be the right approach.)
In a binarized image, blobs can be nested: on a white background you can have several black blobs, inside of one or more of which are white blobs, inside of which are black blobs, etc. An earlier version of OpenCV handled this reasonably well. (OpenCV is a nice starting point, and a touchpoint for many, but for a number of reasons it doesn't always compare favorably to other open source and commercial packages; popularity notwithstanding, OpenCV has some issues.)
Once you have a "blob" (a 4-connected region of pixels) in a 2D digital image, you can treat the blob as an object, at which point you have a number of options:
Edge tracing: trace around the inside and outside edges of the blob. From what I recall, OpenCV does (or at least should) have some relatively straightforward method to get the edges.
Split the blob into component blobs, each of which can be treated separately
Convert the blob to a polygon
...
A connected components algorithm should be high on the list of techniques to try if you have a non-rigid object.
Boolean Operations
Once you have the rope as a connected component (and possibly even without this), you can use boolean image operations to find the spots at the blue dots in your image:
Create a circular region in data, or even in the image
Find the intersection of the circle (really a thin annulus, since the drawn circle has some width) and the black region representing the rope. Using your original image, you should get four regions.
Find the center point of the intersection regions.
You could even try this without using connected components at all, but using connected components as part of the solution could make it more robust.
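A rough sketch of that circle-intersection trick, assuming the red-dot position and the distance D are already known; the numbers are placeholders:

```python
import cv2
import numpy as np

# Sketch: intersect a thin ring of radius D around the red dot with the binarized rope,
# then take the centroid of each intersection blob (ideally four -> the blue dots).
img = cv2.imread("rope.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

cx, cy, D = 240, 180, 60                           # placeholder red-dot position and radius
ring = np.zeros_like(binary)
cv2.circle(ring, (cx, cy), D, 255, thickness=3)    # a 3-pixel-wide "annulus"

crossings = cv2.bitwise_and(binary, ring)
contours, _ = cv2.findContours(crossings, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        print("blue dot at", (m["m10"] / m["m00"], m["m01"] / m["m00"]))
```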
Polygon Simplification
If you have a blob, which in your application would be a connected set of black pixels representing the rope on the floor, then you can consider converting this blob to one or more polygons for further processing. There are advantages to working with polygons.
If you consider only the outside boundary of the rope, then you can see that the set of pixels defining the boundary represents a polygon. It's a polygon with a lot of points, and not a convex polygon, but a polygon nonetheless.
To simplify the polygon, you can use an algorithm such as Ramer-Douglas-Peucker:
https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
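OpenCV exposes Ramer-Douglas-Peucker as approxPolyDP; a short sketch, reusing the thresholding from the connected-components section (epsilon is a tuning knob):

```python
import cv2

# Sketch: simplify the rope's outer boundary with Ramer-Douglas-Peucker (approxPolyDP).
img = cv2.imread("rope.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rope = max(contours, key=cv2.contourArea)

epsilon = 0.005 * cv2.arcLength(rope, True)        # tolerance: 0.5% of the perimeter
simplified = cv2.approxPolyDP(rope, epsilon, True)
print(len(rope), "boundary points reduced to", len(simplified))
```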
Once you have a simplified polygon, you can try a few techniques to render useful data from the polygon:
Angle Bisector Network
Triangulation (e.g. using ear clipping)
Triangulation is typically sensitive to initial conditions, so the resulting triangulation can differ noticeably for slightly different polygons (that is, slightly different outputs of the rope -> blob -> polygon -> simplified polygon pipeline). So in your application it might be useful to triangulate the dark rope region, and then to connect the center of one triangle to the center of the next nearest triangle. You'll also have to deal with crossings, such as the rope overlap. Ultimately this can yield a "skeletonization" of the rope. Speaking of which...
Skeletonization
If the rope problem was posed to you as a class exercise, then it may have been a prompt to try skeletonization. You can read about it here:
https://en.wikipedia.org/wiki/Topological_skeleton
Skeletonization and thinning have their own problems, but you should dig into them a bit and see what those problems are for yourself.
The Medial Axis Transform (MAT) is a related concept. Long story there.
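If you want to experiment, a minimal thinning sketch is below; it assumes the opencv-contrib package is installed so that cv2.ximgproc is available:

```python
import cv2

# Sketch: thin the binarized rope to a one-pixel-wide skeleton.
# cv2.ximgproc.thinning requires the opencv-contrib build of OpenCV.
img = cv2.imread("rope.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

skeleton = cv2.ximgproc.thinning(binary)           # Zhang-Suen thinning by default
cv2.imwrite("skeleton.png", skeleton)
```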
Edge-based techniques
There are a number of techniques to generate "edge images" based on edge strength, energy, entropy, etc. Making them robust takes a little effort. If you've had academic training in image processing you've likely heard of Harris, Sobel, Canny, and similar processing methods--none are magic bullets, but they're simple and dependable and will yield the data you need.
An "edge image" consists of pixels representing the image gradient strength [and sometimes the gradient direction]. People may call this edge image something else, but it's the concept that matters.
What you then do with the edge data is another subject altogether. But one reason to think of edge images (or at least object borders) is that it reduces the amount of information your algorithm(s) will need to process.
Mean Shift (and related)
To get back to segmentation mentioned in the section on connected components, there are other methods for segmenting figures from a background: K-means, mean shift, and so on. You probably won't need any of those, but they're neat and worth studying.
Stroke Width Transform
This is an intriguing technique used to extract text from noisy backgrounds. Although it's intended for OCR, it could work for rope since the rope width is relatively constant, the rope shape varies, there are crossings, etc.
In short, and simplifying quite a bit, you can think of SWT as a means to find "strokes" (thick lines) by finding gradients antiparallel to each other. On either side of a stroke (or line), the edge gradient points normal to the object edge. The normal on one side of the stroke points opposite the direction of the normal on the other side of the stroke. By filtering for pixel-gradient pairs within a certain distance of each other, you can isolate certain strokes--even automatically. For your example the collection of points representing edge pairs for the rope would be much more common than other point pairs.
Non-Rigid Matching
There are techniques for matching non-rigid shapes, but they are probably not worth exploring for this problem yet. If any of the techniques I mentioned above is unfamiliar to you, explore some of those first before you try any fancier algorithms.
CNNs, machine learning, etc.
Just don't even think of these methods as a starting point.
Other Considerations
If this were an application for industry, security, or whatnot, you'd have to determine how well your image processing worked under all environmental considerations. That's not an easy task, and can make all the difference between a setup that "works" in the lab and a setup that actually works in practice.
I hope that's of some help. Feel free to post a reply if I've confused more than helped, or if you want to explore some idea in more detail. Though I tried to touch on some common(ish) techniques, I didn't mention all the different ways of addressing this problem.
And briefly: once you have a skeleton, point network, or whatever representing a reduced data set for the rope and the red dot (the identified feature), a few techniques to find the items at the blue dots:
For a skeleton, trace along each "branch" of the rope outward from the knot until the geodesic distance or straight-line 2D distance is the distance D that you want.
To use geometry, create a circle of width 1 - 2 pixels. Find the intersection of that circle and the rope. Find the center point of the intersections of circle and rope. (Also described above.)
Good luck!

Cocos2d - Moving objects in a curved path with different velocities

I'm trying to develop a game where cars move along roads and stop according to the signal of the traffic lights. They've got different velocities. Sometimes cars need to decelerate in order to not hit the leading car. They need to stop at the red lights. They have to make turns, and so on. This is all relatively easy when working with straight intersecting roads. But how can I move a car/cars along a curved path? So far it was easy because I was just using either x or y of a car's position. But this time it's not the case: both coordinates seem to be necessary for moving it ahead. With straight roads I can just give a car an arbitrary speed and it will move along the x or y axis with that speed. But how can I determine the velocity if both coordinates have to be taken into account? Acceleration and deceleration are also a mystery to me in this case. Thanks ahead.
Although this is about moving a train over a freeform track, the same issues and principles apply to cars moving across freeform roads. Actually, cars may be easier because they don't need to stick to their track 100% accurately.
In short: it's not easy, but doable. How hard it is going to be depends on how realistic you want your cars to look and finding corners to cut.
In your case the cars should simply follow a path (a series of points). Since CCActions are bad for frequent direction/velocity changes, you should use your own system of detecting path points and heading to the next one; a rough sketch of that idea follows. Movement along a bezier curve is not going to move your cars at constant speed, which rules out the CCBezier* actions.
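A minimal language-agnostic sketch (written in Python here) of that per-tick waypoint stepping, assuming the road is given as a list of points; all names and numbers are illustrative:

```python
import math

# Sketch of constant-speed movement along a path of waypoints, independent of
# any Cocos2d action. Call update() once per frame with the elapsed time dt.
class PathFollower:
    def __init__(self, points, speed):
        self.points = points            # list of (x, y) waypoints along the road
        self.speed = speed              # pixels per second
        self.index = 0                  # current segment
        self.pos = list(points[0])

    def update(self, dt):
        remaining = self.speed * dt
        while remaining > 0 and self.index < len(self.points) - 1:
            tx, ty = self.points[self.index + 1]
            dx, dy = tx - self.pos[0], ty - self.pos[1]
            dist = math.hypot(dx, dy)
            if dist <= remaining:       # reach the waypoint, continue to the next
                self.pos = [tx, ty]
                self.index += 1
                remaining -= dist
            else:                       # move part-way along the current segment
                self.pos[0] += dx / dist * remaining
                self.pos[1] += dy / dist * remaining
                remaining = 0
        return tuple(self.pos)

car = PathFollower([(0, 0), (100, 0), (150, 50), (150, 150)], speed=80)
print(car.update(1.0 / 60))            # new position after one 60 fps frame
```

Acceleration and deceleration then just mean varying the speed value between ticks (e.g. when approaching a red light or the leading car).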

Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition

One of the most interesting projects I've worked on in the past couple of years was a project about image processing. The goal was to develop a system to be able to recognize Coca-Cola 'cans' (note that I'm stressing the word 'cans', you'll see why in a minute). You can see a sample below, with the can recognized in the green rectangle with scale and rotation.
Some constraints on the project:
The background could be very noisy.
The can could have any scale or rotation or even orientation (within reasonable limits).
The image could have some degree of fuzziness (contours might not be entirely straight).
There could be Coca-Cola bottles in the image, and the algorithm should only detect the can!
The brightness of the image could vary a lot (so you can't rely "too much" on color detection).
The can could be partly hidden on the sides or the middle and possibly partly hidden behind a bottle.
There could be no can at all in the image, in which case you had to find nothing and write a message saying so.
So you could end up with tricky things like this (which in this case made my algorithm fail completely):
I did this project a while ago, and had a lot of fun doing it, and I had a decent implementation. Here are some details about my implementation:
Language: Done in C++ using OpenCV library.
Pre-processing: For the image pre-processing, i.e. transforming the image into a more raw form to give to the algorithm, I used 2 methods:
Changing color domain from RGB to HSV and filtering based on "red" hue, saturation above a certain threshold to avoid orange-like colors, and filtering of low value to avoid dark tones. The end result was a binary black and white image, where all white pixels would represent the pixels that match this threshold. Obviously there is still a lot of crap in the image, but this reduces the number of dimensions you have to work with.
Noise filtering using median filtering (replacing each pixel by the median value of its neighbors) to reduce noise.
Using a Canny edge-detection filter to get the contours of all items after the two preceding steps (a rough sketch of this preprocessing follows).
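For illustration, the preprocessing above might look roughly like this in OpenCV (Python); the HSV bounds and Canny thresholds are placeholder guesses, not the actual values used:

```python
import cv2

# Rough sketch of the preprocessing described above; the exact HSV bounds are guesses.
img = cv2.imread("scene.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around hue 0 in OpenCV's 0-179 hue range, so combine two bands.
lower = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
upper = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
red_mask = cv2.bitwise_or(lower, upper)

red_mask = cv2.medianBlur(red_mask, 5)             # median filtering to reduce noise
edges = cv2.Canny(red_mask, 50, 150)               # contours for the Hough voting step
```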
Algorithm: The algorithm itself I chose for this task was taken from this awesome book on feature extraction and called Generalized Hough Transform (pretty different from the regular Hough Transform). It basically says a few things:
You can describe an object in space without knowing its analytical equation (which is the case here).
It is resistant to image deformations such as scaling and rotation, as it will basically test your image for every combination of scale factor and rotation factor.
It uses a base model (a template) that the algorithm will "learn".
Each pixel remaining in the contour image will vote for another pixel which will supposedly be the center (in terms of gravity) of your object, based on what it learned from the model.
In the end, you end up with a heat map of the votes, for example here all the pixels of the contour of the can will vote for its gravitational center, so you'll have a lot of votes in the same pixel corresponding to the center, and will see a peak in the heat map as below:
Once you have that, a simple threshold-based heuristic can give you the location of the center pixel, from which you can derive the scale and rotation and then plot your little rectangle around it (final scale and rotation factor will obviously be relative to your original template). In theory at least...
Results: Now, while this approach worked in the basic cases, it was severely lacking in some areas:
It is extremely slow! I'm not stressing this enough. Almost a full day was needed to process the 30 test images, obviously because I had a very high scaling factor for rotation and translation, since some of the cans were very small.
It was completely lost when bottles were in the image, and for some reason almost always found the bottle instead of the can (perhaps because bottles were bigger, thus had more pixels, thus more votes)
Fuzzy images were also no good, since the votes ended up in pixels at random locations around the center, resulting in a very noisy heat map.
Invariance to translation and rotation was achieved, but not to orientation, meaning that a can that was not directly facing the camera lens wasn't recognized.
Can you help me improve my specific algorithm, using exclusively OpenCV features, to resolve the four specific issues mentioned?
I hope some people will also learn something out of it as well, after all I think not only people who ask questions should learn. :)
An alternative approach would be to extract features (keypoints) using the scale-invariant feature transform (SIFT) or Speeded Up Robust Features (SURF).
You can find a nice OpenCV code example in Java, C++, and Python on this page: Features2D + Homography to find a known object
Both algorithms are invariant to scaling and rotation. Since they work with features, you can also handle occlusion (as long as enough keypoints are visible).
Image source: tutorial example
The processing takes a few hundred ms for SIFT; SURF is a bit faster, but still not suitable for real-time applications. ORB uses FAST, which is weaker regarding rotation invariance.
The original papers
SURF: Speeded Up Robust Features
Distinctive Image Features from Scale-Invariant Keypoints
ORB: an efficient alternative to SIFT or SURF
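A minimal sketch in the spirit of the Features2D + Homography tutorial, using SIFT (available as cv2.SIFT_create in recent OpenCV builds, while SURF lives in opencv-contrib); file names are placeholders:

```python
import cv2
import numpy as np

# Sketch: match SIFT keypoints between a logo template and a scene, then fit a homography.
template = cv2.imread("coke_logo.png", cv2.IMREAD_GRAYSCALE)
scene    = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(scene, None)

matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = template.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    print("logo corners in the scene:\n", cv2.perspectiveTransform(corners, H))
```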
To speed things up, I would take advantage of the fact that you are not asked to find an arbitrary image/object, but specifically one with the Coca-Cola logo. This is significant because this logo is very distinctive, and it should have a characteristic, scale-invariant signature in the frequency domain, particularly in the red channel of RGB. That is to say, the alternating pattern of red-to-white-to-red encountered by a horizontal scan line (trained on a horizontally aligned logo) will have a distinctive "rhythm" as it passes through the central axis of the logo. That rhythm will "speed up" or "slow down" at different scales and orientations, but will remain proportionally equivalent. You could identify/define a few dozen such scanlines, both horizontally and vertically through the logo and several more diagonally, in a starburst pattern. Call these the "signature scan lines."
Searching for this signature in the target image is a simple matter of scanning the image in horizontal strips. Look for a high-frequency in the red-channel (indicating moving from a red region to a white one), and once found, see if it is followed by one of the frequency rhythms identified in the training session. Once a match is found, you will instantly know the scan-line's orientation and location in the logo (if you keep track of those things during training), so identifying the boundaries of the logo from there is trivial.
I would be surprised if this weren't a linearly-efficient algorithm, or nearly so. It obviously doesn't address your can-bottle discrimination, but at least you'll have your logos.
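One possible way to turn the "signature scan line" idea into code is to record normalised run lengths of red/non-red pixels along a row; this is just an illustrative interpretation, not the exact scheme described:

```python
import cv2
import numpy as np

# Illustrative reading of the scan-line "rhythm": for a given row, record the run
# lengths of red / non-red pixels and normalise them, so the same rhythm matches
# regardless of scale. Thresholds and file names are placeholders.
def row_signature(red_mask, y):
    row = (red_mask[y] > 0).astype(np.int8)
    changes = np.flatnonzero(np.diff(row)) + 1          # indices where red/non-red flips
    runs = np.diff(np.concatenate(([0], changes, [len(row)])))
    return runs / runs.sum()                            # scale-invariant run-length pattern

img = cv2.imread("scene.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
red_mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

print(row_signature(red_mask, img.shape[0] // 2))       # signature of the middle row
```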
(Update: for bottle recognition I would look for coke (the brown liquid) adjacent to the logo -- that is, inside the bottle. Or, in the case of an empty bottle, I would look for a cap which will always have the same basic shape, size, and distance from the logo and will typically be all white or red. Search for a solid color elliptical shape where a cap should be, relative to the logo. Not foolproof of course, but your goal here should be to find the easy ones fast.)
(It's been a few years since my image processing days, so I kept this suggestion high-level and conceptual. I think it might slightly approximate how a human eye might operate -- or at least how my brain does!)
Fun problem: when I glanced at your bottle image I thought it was a can too. But, as a human, what I did to tell the difference is that I then noticed it was also a bottle...
So, to tell cans and bottles apart, how about simply scanning for bottles first? If you find one, mask out the label before looking for cans.
Not too hard to implement if you're already doing cans. The real downside is it doubles your processing time. (But thinking ahead to real-world applications, you're going to end up wanting to do bottles anyway ;-)
Isn't it difficult even for humans to distinguish between a bottle and a can in the second image (provided the transparent region of the bottle is hidden)?
They are almost the same except for a very small region (that is, the width at the top of the can is a little smaller, while the wrapper of the bottle is the same width throughout; a minor difference, right?).
The first thing that came to my mind was to check for the red top of bottle. But it is still a problem, if there is no top for the bottle, or if it is partially hidden (as mentioned above).
The second thing I thought about was the transparency of the bottle. There has been some work in OpenCV on finding transparent objects in an image. Check the links below.
OpenCV Meeting Notes Minutes 2012-03-19
OpenCV Meeting Notes Minutes 2012-02-28
Particularly look at this to see how accurately they detect glass:
OpenCV Meeting Notes Minutes 2012-04-24
See their implementation result:
They say it is the implementation of the paper "A Geodesic Active Contour Framework for Finding Glass" by K. McHenry and J. Ponce, CVPR 2006.
It might be a little helpful in your case, but the problem arises again if the bottle is filled.
So I think you can search for the transparent body of the bottle first, or for a red region laterally connected to two transparent regions, which is obviously the bottle. (Working ideally, it would produce an image like the following.)
Now you can remove the yellow region, that is, the label of the bottle and run your algorithm to find the can.
Anyway, this solution also has its own problems, like the other solutions.
It works only if your bottle is empty. Otherwise, you will have to search for the red region between the two dark regions (since the Coca-Cola liquid is nearly black).
Another problem arises if the transparent part is covered.
But anyway, if none of the above problems are present in the pictures, this seems to be a better way.
I really like Darren Cook's and stacker's answers to this problem. I was in the midst of throwing my thoughts into a comment on those, but I believe my approach is too answer-shaped to not leave here.
In short summary, you've identified an algorithm to determine that a Coca-Cola logo is present at a particular location in space. You're now trying to determine, for arbitrary orientations and arbitrary scaling factors, a heuristic suitable for distinguishing Coca-Cola cans from other objects, inclusive of: bottles, billboards, advertisements, and Coca-Cola paraphernalia all associated with this iconic logo. You didn't call out many of these additional cases in your problem statement, but I feel they're vital to the success of your algorithm.
The secret here is determining what visual features a can contains or, through the negative space, what features are present for other Coke products that are not present for cans. To that end, the current top answer sketches out a basic approach for selecting "can" if and only if "bottle" is not identified, either by the presence of a bottle cap, liquid, or other similar visual heuristics.
The problem is this breaks down. A bottle could, for example, be empty and lack the presence of a cap, leading to a false positive. Or, it could be a partial bottle with additional features mangled, leading again to false detection. Needless to say, this isn't elegant, nor is it effective for our purposes.
To this end, the most correct selection criteria for cans appear to be the following:
Is the shape of the object silhouette, as you sketched out in your question, correct? If so, +1.
If we assume the presence of natural or artificial light, do we detect a chrome outline to the bottle that signifies whether this is made of aluminum? If so, +1.
Do we determine that the specular properties of the object are correct, relative to our light sources (illustrative video link on light source detection)? If so, +1.
Can we determine any other properties about the object that identify it as a can, including, but not limited to, the topological image skew of the logo, the orientation of the object, the juxtaposition of the object (for example, on a planar surface like a table or in the context of other cans), and the presence of a pull tab? If so, for each, +1.
Your classification might then look like the following:
For each candidate match, if the presence of a Coca Cola logo was detected, draw a gray border.
For each match over +2, draw a red border.
This visually highlights to the user what was detected, emphasizing weak positives that may, correctly, be detected as mangled cans.
The detection of each property carries a very different time and space complexity, and for each approach, a quick pass through http://dsp.stackexchange.com is more than reasonable for determining the most correct and most efficient algorithm for your purposes. My intent here is, purely and simply, to emphasize that detecting if something is a can by invalidating a small portion of the candidate detection space isn't the most robust or effective solution to this problem, and ideally, you should take the appropriate actions accordingly.
And hey, congrats on the Hacker News posting! On the whole, this is a pretty terrific question worthy of the publicity it received. :)
Looking at shape
Take a gander at the shape of the red portion of the can/bottle. Notice how the can tapers off slightly at the very top whereas the bottle label is straight. You can distinguish between these two by comparing the width of the red portion across the length of it.
Looking at highlights
One way to distinguish between bottles and cans is the material. A bottle is made of plastic whereas a can is made of aluminum metal. In sufficiently well-lit situations, looking at the specularity would be one way of telling a bottle label from a can label.
As far as I can tell, that is how a human would tell the difference between the two types of labels. If the lighting conditions are poor, there is bound to be some uncertainty in distinguishing the two anyways. In that case, you would have to be able to detect the presence of the transparent/translucent bottle itself.
Please take a look at Zdenek Kalal's Predator tracker. It requires some training, but it can actively learn how the tracked object looks at different orientations and scales and does it in realtime!
The source code is available on his site. It's in MATLAB, but perhaps there is a Java implementation already done by a community member. I have successfully re-implemented the tracker part of TLD in C#. If I remember correctly, TLD uses ferns as the keypoint detector. I use either SURF or SIFT instead (already suggested by @stacker) to reacquire the object if it was lost by the tracker. The tracker's feedback makes it easy to build up, over time, a dynamic list of SIFT/SURF templates that enable reacquiring the object with very high precision.
If you're interested in my C# implementation of the tracker, feel free to ask.
If you are not limited to just a camera (which wasn't one of your constraints), perhaps you can move to using a range sensor like the Xbox Kinect. With this you can perform depth- and colour-based segmentation of the image, which allows for faster separation of objects. You can then use ICP matching or similar techniques to match the shape of the can rather than just its outline or colour, and given that it is cylindrical this may be a valid option for any orientation if you have a previous 3D scan of the target. These techniques are often quite quick, especially when used for such a specific purpose, which should solve your speed problem.
Also I could suggest, not necessarily for accuracy or speed but for fun you could use a trained neural network on your hue segmented image to identify the shape of the can. These are very fast and can often be up to 80/90% accurate. Training would be a little bit of a long process though as you would have to manually identify the can in each image.
I would detect red rectangles: RGB -> HSV, filter red -> binary image, close (dilate then erode, known as imclose in matlab)
Then look through rectangles from largest to smallest. Rectangles that have smaller rectangles in a known position/scale can both be removed (assuming bottle proportions are constant, the smaller rectangle would be a bottle cap).
This would leave you with red rectangles; then you'll need to somehow detect the logos to tell whether they're just a red rectangle or a Coke can. Like OCR, but with a known logo?
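A rough sketch of that red-rectangle filter (HSV threshold, morphological close, then bounding boxes from largest to smallest); all thresholds are guesses:

```python
import cv2

# Sketch of the red-rectangle idea: HSV red filter, morphological close,
# then bounding boxes sorted from largest to smallest. Thresholds are guesses.
img = cv2.imread("scene.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
red = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
      cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
red = cv2.morphologyEx(red, cv2.MORPH_CLOSE, kernel)    # dilate then erode ("imclose")

contours, _ = cv2.findContours(red, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rects = sorted([cv2.boundingRect(c) for c in contours],
               key=lambda r: r[2] * r[3], reverse=True) # largest area first
for x, y, w, h in rects:
    print("candidate red rectangle:", x, y, w, h)
```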
This may be a very naive idea (or may not work at all), but the dimensions of all Coke cans are fixed. So maybe if the same image contains both a can and a bottle, you can tell them apart by size considerations (bottles are going to be larger). Now, because of the missing depth (i.e. the 3D to 2D mapping), it's possible that a bottle may appear shrunk and there isn't a size difference. You may recover some depth information using stereo imaging and then recover the original size.
Hmm, I actually think I'm onto something (this is like the most interesting question ever - so it'd be a shame not to continue trying to find the "perfect" answer, even though an acceptable one has been found)...
Once you find the logo, your troubles are half done. Then you only have to figure out the differences between what's around the logo. Additionally, we want to do as little extra as possible. I think this is actually the easy part...
What is around the logo? For a can, we can see metal, which despite the effects of lighting, does not change whatsoever in its basic colour. As long as we know the angle of the label, we can tell what's directly above it, so we're looking at the difference between these:
Here, what's above and below the logo is completely dark, consistent in colour. Relatively easy in that respect.
Here, what's above and below is light, but still consistent in colour. It's all-silver, and all-silver metal actually seems pretty rare, as well as silver colours in general. Additionally, it's in a thin sliver and close enough to the red that has already been identified, so you could trace its shape for its entire length to calculate a percentage of what can be considered the metal ring of the can. Really, you only need a small fraction of that anywhere along the can to tell it is part of it, but you still need to find a balance that ensures it's not just an empty bottle with something metal behind it.
And finally, the tricky one. But not so tricky, once we're only going by what we can see directly above (and below) the red wrapper. It's transparent, which means it will show whatever is behind it. That's good, because things that are behind it aren't likely to be as consistent in colour as the silver circular metal of the can. There could be many different things behind it, which would tell us that it's an empty (or filled with clear liquid) bottle, or a consistent colour, which could either mean that it's filled with liquid or that the bottle is simply in front of a solid colour. We're working with what's closest to the top and bottom, and the chances of the right colours being in the right place are relatively slim. We know it's a bottle, because it hasn't got that key visual element of the can, which is relatively simplistic compared to what could be behind a bottle.
(that last one was the best I could find of an empty large coca cola bottle - interestingly the cap AND ring are yellow, indicating that the redness of the cap probably shouldn't be relied upon)
In the rare circumstance that a similar shade of silver is behind the bottle, even after the abstraction of the plastic, or the bottle is somehow filled with the same shade of silver liquid, we can fall back on what we can roughly estimate as being the shape of the silver - which as I mentioned, is circular and follows the shape of the can. But even though I lack any certain knowledge in image processing, that sounds slow. Better yet, why not deduce this by for once checking around the sides of the logo to ensure there is nothing of the same silver colour there? Ah, but what if there's the same shade of silver behind a can? Then, we do indeed have to pay more attention to shapes, looking at the top and bottom of the can again.
Depending on how flawless this all needs to be, it could be very slow, but I guess my basic concept is to check the easiest and closest things first. Go by colour differences around the already matched shape (which seems the most trivial part of this anyway) before going to the effort of working out the shape of the other elements. To list it, it goes:
Find the main attraction (red logo background, and possibly the logo itself for orientation, though in case the can is turned away, you need to concentrate on the red alone)
Verify the shape and orientation, yet again via the very distinctive redness
Check colours around the shape (since it's quick and painless)
Finally, if needed, verify the shape of those colours around the main attraction for the right roundness.
In the event you can't do this, it probably means the top and bottom of the can are covered, and the only possible things that a human could have used to reliably make a distinction between the can and the bottle is the occlusion and reflection of the can, which would be a much harder battle to process. However, to go even further, you could follow the angle of the can/bottle to check for more bottle-like traits, using the semi-transparent scanning techniques mentioned in the other answers.
Interesting additional nightmares might include a can conveniently sitting behind the bottle at such a distance that the metal of it just so happens to show above and below the label, which would still fail as long as you're scanning along the entire length of the red label - which is actually more of a problem because you're not detecting a can where you could have, as opposed to considering that you're actually detecting a bottle, including the can by accident. The glass is half empty, in that case!
As a disclaimer, I have no experience in nor have ever thought about image processing outside of this question, but it is so interesting that it got me thinking pretty deeply about it, and after reading all the other answers, I consider this to possibly be the easiest and most efficient way to get it done. Personally, I'm just glad I don't actually have to think about programming this!
EDIT
Additionally, look at this drawing I did in MS Paint... It's absolutely awful and quite incomplete, but based on the shape and colours alone, you can guess what it's probably going to be. In essence, these are the only things that one needs to bother scanning for. When you look at that very distinctive shape and combination of colours so close, what else could it possibly be? The bit I didn't paint, the white background, should be considered "anything inconsistent". If it had a transparent background, it could go over almost any other image and you could still see it.
I am a few years late in answering this question. With the state of the art pushed to its limits by CNNs in the last 5 years, I wouldn't use OpenCV to do this task now! (I know you specifically wanted OpenCV features in the question.) I feel object detection algorithms such as Faster R-CNN, YOLO, and SSD would ace this problem by a significant margin compared to OpenCV features. If I were to tackle this problem now (after 6 years!!) I would definitely use Faster R-CNN.
I'm not aware of OpenCV, but looking at the problem logically, I think you could differentiate between bottle and can by changing the image that you are looking for, i.e. the Coca-Cola template. You should include the top portion of the can in the template, since a can has a silver lining at the top of the Coca-Cola label, whereas a bottle has no such silver lining.
Obviously this algorithm will fail in cases where the top of the can is hidden, but in that case even a human would not be able to differentiate between the two (if only the Coca-Cola portion of the bottle/can is visible).
I like the challenge and wanted to give an answer which, I think, solves the issue.
Extract features (keypoints, descriptors such as SIFT, SURF) of the logo
Match the points with a model image of the logo (using a matcher such as brute force)
Estimate the coordinates of the rigid body (PnP problem - SolvePnP)
Estimate the cap position according to the rigid body
Do back-projection and calculate the image pixel position (ROI) of the cap of the bottle (I assume you have the intrinsic parameters of the camera)
Check with a method whether the cap is there or not. If there, then this is the bottle
Detection of the cap is another issue. It can be either complicated or simple. If I were you, I would simply check the color histogram in the ROI for a simple decision.
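A minimal sketch of the solvePnP / back-projection steps; the logo's 3D model points, the cap offset, and the camera intrinsics below are invented placeholders:

```python
import cv2
import numpy as np

# Sketch of the solvePnP / back-projection steps. The 3D model points of the logo,
# the cap offset, and the camera intrinsics are all made-up placeholders.
object_pts = np.float32([[0, 0, 0], [6, 0, 0], [6, 4, 0], [0, 4, 0]])      # logo corners in cm
image_pts  = np.float32([[320, 240], [400, 238], [402, 300], [322, 302]])  # matched corners

K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])   # intrinsic matrix (assumed)
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)

cap_3d = np.float32([[3, 12, 0]])                    # hypothetical cap centre above the logo
cap_2d, _ = cv2.projectPoints(cap_3d, rvec, tvec, K, dist)
print("check this pixel neighbourhood for a cap:", cap_2d.ravel())
```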
Please give feedback if I am wrong. Thanks.
I like your question, regardless of whether it's off topic or not :P
An interesting aside; I've just completed a subject in my degree where we covered robotics and computer vision. Our project for the semester was incredibly similar to the one you describe.
We had to develop a robot that used an Xbox Kinect to detect Coke bottles and cans in any orientation in a variety of lighting and environmental conditions. Our solution involved using a band-pass filter on the hue channel in combination with the Hough circle transform. We were able to constrain the environment a bit (we could choose where and how to position the robot and Kinect sensor); otherwise we were going to use the SIFT or SURF transforms.
You can read about our approach on my blog post on the topic :)
Deep Learning
Gather at least a few hundred images containing cola cans and annotate the bounding boxes around them as the positive class; include cola bottles and other cola products, as well as random objects, labelled as negative classes.
Unless you collect a very large dataset, use the trick of applying deep-learning features to a small dataset, ideally a combination of Support Vector Machines (SVM) with deep neural nets.
Once you feed the images to a previously trained deep-learning model (e.g. GoogLeNet), instead of using the neural network's decision (final) layer to do classification, use the previous layer(s)' data as features to train your classifier.
OpenCV and Google Net:
http://docs.opencv.org/trunk/d5/de7/tutorial_dnn_googlenet.html
OpenCV and SVM:
http://docs.opencv.org/2.4/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.html
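A rough sketch of the "CNN features + SVM" idea, assuming you have the stock bvlc_googlenet Caffe files used in the tutorial above and scikit-learn installed; the layer name and file paths are assumptions to adapt:

```python
import cv2
import numpy as np
from sklearn import svm

# Sketch of "CNN features + SVM": take an intermediate GoogLeNet layer as a feature
# vector and train an SVM on it. Model file names and the layer name are assumptions
# based on the stock bvlc_googlenet release; adapt to whatever model you use.
net = cv2.dnn.readNetFromCaffe("bvlc_googlenet.prototxt", "bvlc_googlenet.caffemodel")

def features(img):
    blob = cv2.dnn.blobFromImage(img, 1.0, (224, 224), (104, 117, 123))
    net.setInput(blob)
    return net.forward("pool5/7x7_s1").flatten()     # 1024-d vector before the classifier

# X: feature vectors of your annotated crops, y: 1 for "can", 0 for everything else.
X = [features(cv2.imread(p)) for p in ["can_01.jpg", "bottle_01.jpg"]]   # placeholder paths
y = [1, 0]
clf = svm.SVC(kernel="linear").fit(X, y)
print(clf.predict([features(cv2.imread("test.jpg"))]))
```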
There are a bunch of color descriptors used to recognise objects; the paper below compares a lot of them. They are especially powerful when combined with SIFT or SURF. SURF or SIFT alone are not very useful on a Coca-Cola can image because they don't find a lot of interest points; you need the color information to help. I used BIC (Border/Interior Pixel Classification) with SURF in a project and it worked great for recognising objects.
Color descriptors for Web image retrieval: a comparative study
You need a program that learns and improves classification accuracy organically from experience.
I'll suggest deep learning; with deep learning this becomes a trivial problem.
You can retrain the inception v3 model on Tensorflow:
How to Retrain Inception's Final Layer for New Categories.
In this case, you will be training a convolutional neural network to classify an object as either a coca-cola can or not.
As an alternative to all these nice solutions, you can train your own classifier and make your application robust to errors. For example, you can use Haar training, providing a good number of positive and negative images of your target.
It can be useful to extract only cans and can be combined with the detection of transparent objects.
There is a computer vision package called HALCON from MVTec whose demos could give you good algorithm ideas. There are plenty of examples similar to your problem that you could run in demo mode, then look at the operators in the code and see how to implement them from existing OpenCV operators.
I have used this package to quickly prototype complex algorithms for problems like this and then find how to implement them using existing OpenCV features. In particular for your case you could try to implement in OpenCV the functionality embedded in the operator find_scaled_shape_model. Some operators point to the scientific paper regarding algorithm implementation which can help to find out how to do something similar in OpenCV.
Maybe too many years late, but nevertheless a theory to try.
The ratio of the bounding rectangle of the red logo region to the overall dimensions of the bottle/can is different. In the case of a can, it should be about 1:1, whereas it will be different for a bottle (with or without the cap).
This should make it easy to distinguish between the two.
Update:
The horizontal curvature of the logo region will differ between the can and the bottle due to their size difference. This could be especially useful if your robot needs to pick up the can/bottle and you decide the grip accordingly.
If you are interested in it being real-time, then what you need is to add a pre-processing filter to determine what gets scanned with the heavy-duty stuff. A good, fast, very real-time pre-processing filter that lets you scan the things more likely to be a Coca-Cola can before moving on to iffier things is something like this: search the image for the biggest patches of color that are within a certain tolerance of the sqrt(pow(red,2) + pow(blue,2) + pow(green,2)) of your Coca-Cola can. Start with a very strict color tolerance and work your way down to more lenient tolerances. Then, when your robot runs out of its allotted time to process the current frame, it uses the currently found bottles for your purposes. Please note that you will have to tweak the RGB colors in the sqrt(pow(red,2) + pow(blue,2) + pow(green,2)) to get them just right. A rough sketch of this idea follows.
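One way to read that pre-filter is as a per-pixel Euclidean distance in RGB space to a reference Coca-Cola red, thresholded with a tolerance that you gradually relax; the reference colour and tolerances below are placeholders:

```python
import cv2
import numpy as np

# Illustrative pre-filter: per-pixel Euclidean distance in RGB space to a reference
# Coca-Cola red, thresholded with a tolerance that is relaxed over time.
img = cv2.imread("frame.jpg").astype(np.float32)
coke_red_bgr = np.float32([40, 40, 200])             # reference colour (placeholder)

dist = np.sqrt(np.sum((img - coke_red_bgr) ** 2, axis=2))

for tolerance in (40, 80, 120):                      # strict first, then more lenient
    mask = (dist < tolerance).astype(np.uint8) * 255
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        print("tolerance", tolerance, "biggest red patch area:",
              stats[biggest, cv2.CC_STAT_AREA])
```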
Also, this is going to seem really dumb, but did you make sure to turn on -Ofast compiler optimizations when you compiled your code?
The first thing I would look for is color, like RED, as when doing red-eye detection in an image: there is a certain color range to detect, plus some characteristics about the surrounding area, such as the distance from the other eye if it is indeed visible in the image.
1: The first characteristic is color, and red is very dominant. After detecting the Coca-Cola red there are several items of interest:
1A: How big is this red area (is it of sufficient quantity to make a determination of a true can or not - 10 pixels is probably not enough),
1B: Does it contain the color of the Label - "Coca-Cola" or wave.
1B1: Is there enough to consider a high probability that it is a label.
Item 1 is kind of a shortcut: pre-process, and if that red does not exist in the image, move on.
If it does exist, I can then utilize that segment of my image and zoom out of the area in question a little bit, basically looking at the surrounding region / edges...
2: Given the above image area ID'd in 1 - verify the surrounding points [edges] of the item in question.
A: Is there what appears to be a can top or bottom - silver?
B: A bottle might appear transparent, but so might a glass table; so is there a glass table/shelf or a transparent area? If so, there are multiple possible outcomes. A bottle MIGHT have a red cap or it might not, but it should have either the shape of the bottle top / screw threads, or a cap.
C: Even if A and B fail, it can still be a can, just a partial one.
This is more complex when it is partial, because a partial bottle and a partial can might look the same, so some more processing is needed to measure the red region edge to edge; a small bottle might be similar in size.
3: After the above analysis is when I would look at the lettering and the wave logo, because I can orient my search for some of the letters in the words. You might not have all of the text due to not having all of the can, but the wave aligns to the text at certain points (distance-wise), so I could search for that probability and know which letters should exist at that point of the wave at distance x.

Lane Detection in an artificial Environment

I'm writing an app that can detect lanes in a driving simulator. The environment is relatively simple: it's mostly straight multi-lane roads with almost no curvature at all. At the moment, I can successfully detect lines using the (classical) Hough transform, but the issue is that the HT naturally also detects lines that are not lanes.
How can I be more selective? I do not draw horizontal lines already, but still some lines creep in. Ideally, I would like to detect the lane boundaries that the vehicle is traveling in. The following is a typical image of the environment
Here is what I'm doing so far:
1. Because the environment is more or less the same wherever I drive, I set the region of interest (RoI) to exclude the horizon and anything above it.
2. Threshold the image (I'll explain my reason for thresholding in a bit)
3. Canny Edge Detection
4. Apply a Hough Transform
5. Draw the detected lines excluding those which have a gradient of 0.0 or nearly 0.0
The reason for thresholding the image is as follows. If you take a look at the environment photograph linked above, you'll see a grayish line running parallel to the road. Because it's a continuous line (unlike the lane markers), the HT ends up detecting it. I cannot exclude it based on gradient, as it has the same gradient as the lane markers. With thresholding, I can remove it and therefore only detect lines that are the actual lane markers.
Here is the result of the above operations
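For reference, steps 1-5 might look roughly like this with OpenCV's probabilistic Hough transform (the ROI boundary and every threshold here are placeholders, and the classical accumulator version works the same way):

```python
import cv2
import numpy as np

# Sketch of steps 1-5 above; the ROI boundary and all thresholds are placeholders.
img = cv2.imread("road.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

h, w = gray.shape
gray[: int(h * 0.55), :] = 0                        # 1. crop out the horizon and above

_, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)   # 2. keep bright lane paint
edges = cv2.Canny(binary, 50, 150)                  # 3. edge detection

lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=20)        # 4. Hough transform
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        slope = (y2 - y1) / float(x2 - x1) if x2 != x1 else np.inf
        if abs(slope) > 0.3:                        # 5. drop near-horizontal lines
            cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("lanes.png", img)
```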
I understand that there are many solutions to this problem and I have read countless papers on it, but they all seem to handle environments vastly more complicated than this and/or are simply way over my head. For what it's worth, just a little more than a month ago I had no background in computer vision, so all of this is very new to me.
UPDATE 1:
I guess to put this in better terms, I'm looking for a way to model the lanes so that lines that do not fit the model are not included. Unfortunately, I do not have a clue about where to begin with models. Any suggestions?
For what it's worth, I have managed to identify the lanes that the vehicle is traveling within and can exclude the extra lines that are not part of the "active" lane, so to speak. Hopefully this photo will help.
It's not perfect, but it's something, I guess. My ultimate goal, after modeling, is to generate a heading/position for the vehicle, but I just want to get relatively robust lane detection first. I'm hoping there is a relatively simple technique that can help achieve this (something that does not depend on the system's parameters such as focal length or field of view).
One way to go would be to use prior knowledge of the scene you are looking at. You could have a model with a hidden state, comprising more or less static parameters such as camera height, camera tilt or lane width, and dynamic parameters such as camera yaw, lateral displacement of the camera within the lane, road curvature, etc. You could handle such model in the frame of a Kalman filter. An advantage of such a model would be an ability to tolerate other road surface markings such as direction arrows, zebras and such. Good luck!
Perhaps you could try to find only lines on edges found at grey-white transitions rather than on all edges in the entire image?

Simulating a car moving along a track

For Operating Systems class I'm going to write a scheduling simulator entitled "Jurassic Park".
The ultimate goal is for me to have a series of cars following a set path and passengers waiting in line at a set location for those cars to return to so they can be picked up and be taken on the tour. This will be a simple 2d, top-down view of the track and the cars moving along it.
While I can code this easily without having to visually display anything I'm not quite sure what the best way would be to implement a car moving along a fixed track.
To start out, I'm going to simply use OpenGL to draw my cars as rectangles but I'm still a little confused about how to approach updating the car's position and ensuring it is moving along the set path for the simulated theme park.
Should I store vertices of the track in a list and have each call to update() move the cars a step closer to the next vertex?
If you want curved track, you can use splines, which are mathematically defined curves specified by two vector endpoints. You plop down the endpoints, and then solve for a nice curve between them. A search should reveal source code or math that you can derive into source code. The nice thing about this is that you can solve for the heading of your vehicle exactly, as well as get the next location on your path by doing a percentage calculation. The difficult thing is that you have to do a curve length calculation if you don't want the same number of steps between each set of endpoints.
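One common choice is a Catmull-Rom spline, which passes through its control points; a small sketch where the parameter t plays the role of the "percentage calculation" mentioned above (the track points are placeholders):

```python
# Sketch of evaluating a Catmull-Rom spline segment; t in [0, 1] is the
# "percentage" along the segment between control points p1 and p2.
def catmull_rom(p0, p1, p2, p3, t):
    x = 0.5 * (2 * p1[0] + (-p0[0] + p2[0]) * t
               + (2 * p0[0] - 5 * p1[0] + 4 * p2[0] - p3[0]) * t * t
               + (-p0[0] + 3 * p1[0] - 3 * p2[0] + p3[0]) * t * t * t)
    y = 0.5 * (2 * p1[1] + (-p0[1] + p2[1]) * t
               + (2 * p0[1] - 5 * p1[1] + 4 * p2[1] - p3[1]) * t * t
               + (-p0[1] + 3 * p1[1] - 3 * p2[1] + p3[1]) * t * t * t)
    return x, y

track = [(0, 0), (100, 0), (150, 80), (100, 160), (0, 160)]
# Interpolate between track[1] and track[2] at 25% of the way along that segment.
print(catmull_rom(track[0], track[1], track[2], track[3], 0.25))
```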
An alternate approach is to use a hidden bitmap with the path drawn on it as a single pixel wide curve. You can find the next location in the path by matching the pixels surrounding your current location to a direction-of-travel vector, and then updating the vector with a delta function at each step. We used this approach for a path traveling prototype where a "vehicle" was being "driven" along various paths using a joystick, and it works okay until you have some intersections that confuse your vector calculations. But if it's a unidirectional closed loop, this would work just fine, and it's dead simple to implement. You can smooth out the heading angle of your vehicle by averaging the last few deltas. Also, each pixel becomes one "step", so your velocity control is easy.
In the former case, you can have specially tagged endpoints for start/stop locations or points of interest. In the latter, just use a different color pixel on the path for special nodes. In either case, what you display will probably not be the underlying path data, but some prettied up representation of your "park".
Just pick whatever is easiest, and write a tick() function that steps to the next path location and updates your vehicle heading whenever the car is in motion. If you're really clever, you can do some radius based collision handling so that cars will automatically stop when a car in front of them on the track has halted.
I would keep it simple:
Run a timer (every 100 msec), and on each tick draw each of the cars at its new location. The location is read from a file, which contains the 2D coordinates of the car (each car?).
If you design the road to be very long (let's say, 30 seconds), writing 30*10 points would be... hard. So how about storing in the file the location at every full second? Then between two stored locations you will have 9 blind spots; just move the car at constant speed (x += dx/9, y += dy/9).
I would like to hear a better approach :)
Well, you could use a path as you describe, either a fixed point path or a spline, then move at a fixed "velocity" along this path. This may look stiff if the car moves at the same speed on the straights as when cornering.
So you could then have speeds for each path section, but you would need many speed set points, or blend the speeds, otherwise you'll get jerky speed changes.
Or you could go for a full car simulation and use A* to build the optimal path. That's overkill but very cool.
If there is only going forward and backward, and you know that you want to go forward, you could just look at the cells around you, find the ones that are the color of the road and move so you stay in the center of the road.
If you assume that you won't have abrupt curves then you can assume that the road is directly in front of you and just scan to the left and right to see if the road curves a bit, to stay in the center, to cut down on processing.
There are other approaches that could work, but this one is simple, IMO, and allows you to have gentle curves in your road.
Another approach is just to have it be tile-based, so you just look at the tile in front of you, and have different tiles for changes in road direction so you know how to turn the car to stay on the tile.
This wouldn't be as smooth but is also easy to do.