Can I track objects by mapping their coordinates from a sequence of images? - computer-vision

I have a video of simple moving dots (that sometimes overlap) that is saved as a sequence of images. In each image I detect all the dots and save their coordinates:
(snapshot 1 -> snapshot 2)
I would like to infer the trajectory of each dot. The dots move smoothly and not too fast from one frame to the next, but if, for each point in the first image, I simply take its closest point in the next image, the trajectory reconstruction often fails.
I tried the OpenCV multi-trackers, but the trackers very quickly lose their target by jumping to a different dot when the dots overlap. The detection itself works very nicely, though.
The video and the objects to track are simple, so I am reluctant to believe that I need to implement something much more sophisticated to track these dots accurately. That is why I decided to ask here; I am out of ideas. Any tip or advice is appreciated. Thanks.
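For reference, a minimal sketch of the greedy closest-point matching described above, assuming the detections of two consecutive frames are available as N x 2 NumPy arrays (the function and array names are just for illustration):

```python
import numpy as np

def greedy_nearest_neighbour(prev_pts, next_pts):
    """Match each dot in the previous frame to its closest dot in the next frame.

    prev_pts: (N, 2) array of dot coordinates from the previous image.
    next_pts: (M, 2) array of dot coordinates from the next image.
    Returns a list of (prev_index, next_index) pairs.

    This is the naive per-point matching that tends to break down when
    dots overlap or pass close to each other.
    """
    # Pairwise Euclidean distances between the two detection sets.
    dists = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
    # A global assignment over `dists` (e.g. scipy.optimize.linear_sum_assignment)
    # is the usual next step when this greedy matching fails.
    return [(i, int(np.argmin(row))) for i, row in enumerate(dists)]
```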

Related

Detect a 2 x 3 Matrix of white dots in an image

I want to locate a service robot via infrared landmarks. The idea is to detect two landmarks, get the distance to them, and calculate the robot's position from this information (the positions of the landmarks are known).
For this I have built an artificial 2x3 matrix of IR LEDs, which are visible in the robot's infrared camera image (shown in the image below).
As a first step, I want to detect a single landmark in a picture and get its x-y coordinates. I can use these coordinates later to get the distance from the depth image provided.
My first approach was to convert the image to a black and white image. Then I tried to filter out different clusters of points (which I had dilated and contoured beforehand). I couldn't get this method to work.
Now I wonder if there are any pattern recognition/computer vision methods which can help me to quite "easily" detect the pattern.
I've added a picture of the infrared image with the landmark in it and a converted black/white image.
a) Which method can help me to solve this problem?
b) Should I use a 3x3 matrix or some other geometric form instead of the 2x3 matrix?
IR-Image
Black-White Image
A direct answer:
1) find all small circles in the image; 2) look among these small circles for ones that are the same size and close together, and, say, form parallel lines.
The reason for this approach is that you have coded the robot with a specific pattern of small objects. Therefore, look for the objects and then look for the pattern. (If the orientation and size never changed, you could just look for a sub-image within the larger image; because they can change, you need to look for elements of the pattern that stay consistent under motion in 3D space, that is, the parallel lines.)
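As a rough illustration of steps 1) and 2), assuming the black/white image is already available (the file name, size limits and similar-size check are placeholders; the parallel-line grouping is only hinted at in the comments):

```python
import cv2
import numpy as np

# Thresholded IR image (file name is a placeholder).
img = cv2.imread("ir_landmark_bw.png", cv2.IMREAD_GRAYSCALE)

# Step 1: find all small bright blobs ("small circles").
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 5        # placeholder size limits
params.maxArea = 200
params.blobColor = 255    # look for bright blobs on a dark background
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)

# Step 2: keep only blobs of roughly the same size; among those, a 2x3 LED
# matrix should appear as two roughly parallel rows of three blobs, which
# can then be checked with a line fit over the remaining centres.
sizes = np.array([kp.size for kp in keypoints])
centres = np.array([kp.pt for kp in keypoints])
if len(sizes):
    median = np.median(sizes)
    candidates = centres[np.abs(sizes - median) < 0.5 * median]
    print(len(candidates), "candidate LEDs of similar size")
```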
This will work on the example images, but to know whether it will work more generally, we need to know more than you have told us: it depends on whether the variation in the images of the matrix and the variation in the background still let the two be distinguished. If not, you may need a cleverer algorithm or a different pattern of lights. In the extreme case, it is obvious that having another 2x3 matrix in the scene would not be enough. It all depends on the variation of the object to be identified and the variation within the background scene, and because you tell us neither, it is hard to say what the best approach is, what is good enough, what would be better, and so on.
If you have the choice, and here it sounds like you do, good data is better than clever analysis. For this problem, I'd call good data anything that clearly distinguishes the object from the background. Think of it this way: look at what the background is and at all the different perspectives on the lights that are possible, and make sure the two can never be confused.
For example, if you have a lot of control over this, and enough time, temporal variation is often the easiest to exploit. Turning the lights (or a subset of the lights) on and off and then looking for the expected temporal variation is often the surest way to distinguish signal from noise; but again, this is just making an assumption about the background and foreground (i.e., that the background will not vary with that particular time pattern).
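If the on/off route is available, the check can be as simple as differencing a frame with the LEDs on against one with them off (file names and the threshold are placeholders; the OpenCV 4 return signature of findContours is assumed):

```python
import cv2

# Frames captured with the LEDs on and off (file names are placeholders).
frame_on = cv2.imread("leds_on.png", cv2.IMREAD_GRAYSCALE)
frame_off = cv2.imread("leds_off.png", cv2.IMREAD_GRAYSCALE)

# Anything that changes between the two frames is a candidate LED;
# the static background cancels out.
diff = cv2.absdiff(frame_on, frame_off)
_, mask = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)  # placeholder threshold
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours), "blinking regions found")
```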

OpenCV: Letters and words detection from edge detection image

I am currently dealing with text recognition. Here is part of a binarized image after edge detection (using Canny):
EDIT: I am posting a link to an image. I don't have 10 rep points so I cannot post an image.
EDIT 2: And here's the same piece after thresholding. Honestly, I don't know which approach would be better.
The questions remain the same:
How should I detect certain letters? I need to determine the location of every letter and then of every word.
Is it a problem that some letters are "open"? I mean that they are not closed areas.
If I use cv::matchTemplate, does that mean I need to have 24 templates for the letters plus 10 for the digits, and then loop over my image to determine the best correlation?
If both the letters and the boxes they are in are one pixel wide, what filters/operations should I use to close the open letters? I tried various combinations of dilate and erode, with no effect.
The question is basically "how do I do OCR with OpenCV?", and the answer is that it is an involved process and quite difficult.
But here are some pointers. Firstly, it's hard to detect letters that are only outlined; most of the tools are designed for filled letters. But that image looks as if there would be only one non-letter distractor left if you fill all loops using a certain size threshold. You can get rid of the non-letter lines because they form one huge connected object.
Once you've filled the letters, they can be skeletonised.
You can't use morphological operations like open and close very sensibly on images where the details are one pixel wide. You can put the image through the operation, but essentially there is no distinction between detail and noise if all features are one pixel. However, once you fill the letters, that problem goes away.
This isn't in any way telling you how to do it, just giving some pointers.
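A rough illustration of the fill-then-discard idea, assuming the Canny edge image is available (file name, the filling strategy and the area logic are simplifications, not a complete recipe; OpenCV 4 return signatures are assumed):

```python
import cv2
import numpy as np

# Binarised edge image of the text (file name is a placeholder).
edges = cv2.imread("text_edges.png", cv2.IMREAD_GRAYSCALE)

# Fill the closed outlines so the letters become solid strokes.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
filled = np.zeros_like(edges)
cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)

# Label connected components and drop the largest one, which should be
# the huge non-letter line/box object mentioned above.
n, labels, stats, _ = cv2.connectedComponentsWithStats(filled)
if n > 1:
    areas = stats[1:, cv2.CC_STAT_AREA]      # label 0 is the background
    biggest = 1 + int(np.argmax(areas))
    filled[labels == biggest] = 0
```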
As mentioned in the previous answer by malcolm, OCR will work better on filled letters, so you can do the following:
1. Use your second approach, but take the inverse result, not the one you are showing.
2. Run connected component labeling.
3. For each component, run the OCR algorithm.
In order to discard outliers, I would try to use the spatial relation between the detected letters: they should have another letter horizontally or vertically next to them.
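A minimal sketch of steps 1-3, assuming the thresholded image is available (the file name, the noise threshold and the OCR hook are placeholders):

```python
import cv2

# Thresholded image with filled letters; invert it so the letters are
# white on black (file name is a placeholder).
img = cv2.imread("text_threshold.png", cv2.IMREAD_GRAYSCALE)
inverted = cv2.bitwise_not(img)

# Connected component labeling with per-component bounding boxes.
n, labels, stats, _ = cv2.connectedComponentsWithStats(inverted)

# Crop each component and hand it to whatever OCR / template matching
# routine is being used.
for i in range(1, n):                  # label 0 is the background
    x, y, w, h, area = stats[i]
    if area < 10:                      # placeholder noise threshold
        continue
    letter = inverted[y:y + h, x:x + w]
    # run_ocr(letter)                  # hypothetical OCR hook
```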
Good luck

OpenCV C++ extract features from binary image

I have written an algorithm to process a camera capture and extract a binary image of two features I'm interested in. I'm trying to find the best (fastest) way of detecting when the two features intersect and where the lowest point (the one with the greatest y coordinate) is; this will be the intersection.
I do not want to use a findContours() based method as this is too slow and, in my opinion, unnecessary. I also think blob detection libraries are too bloated for this.
I have two sample images (sorry for low quality):
(not touching: http://i.imgur.com/7bQ9qMo.jpg)
(touching: http://i.imgur.com/tuSmKw7.jpg)
Due to the way these images are created, there is often noise in the top-right corner that looks like pixelated lines, but methods such as dilation and erosion lose resolution around the features I'm trying to find.
My initial thought would be to use direct pixel access to form a width filter and a height filter. The lowest point in the image is therefore the intersection.
I have no idea how to detect when they touch... logically I can see that a triangle is formed when they intersect, and otherwise there is no enclosed black area. Can I fill the image starting from the corner with, say, red, and then calculate how much of the image is still black?
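A quick way to test that fill-from-the-corner idea, assuming white features on a black background as in the linked images (the file name and seed point are placeholders; adjust if the polarity is reversed):

```python
import cv2
import numpy as np

# Binary image with the two white features on a black background
# (file name and polarity are assumptions based on the linked examples).
img = cv2.imread("features.png", cv2.IMREAD_GRAYSCALE)

# Flood-fill the background from a corner with a marker value; any black
# pixels left afterwards belong to an enclosed region, i.e. the features
# have intersected and closed off an area.
flooded = img.copy()
cv2.floodFill(flooded, None, (0, 0), 128)
enclosed = int(np.count_nonzero(flooded == 0))
print("touching" if enclosed > 0 else "not touching", enclosed)
```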
Does anyone have any suggestions?
Thanks
Your suggestion is far slower than finding contours. For binary images, finding contours is very easy and quick because you just need to find a black pixel followed by a white pixel, or vice versa.
Anyway, if you don't want to use it, you can use the vertical projection (vertical profile): from it you will see whether the objects intersect or not.
For example, in the following image, look at the letter "n", which resembles non-intersecting objects, and the letter "o", which resembles intersecting objects:
By analyzing the histograms you can tell which one is intersecting and which is not.
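The vertical projection is just the per-column count of foreground pixels. A sketch of the check described above, assuming white features on a black background (the file name and the gap test are simplifications of the idea, not a robust intersection test):

```python
import cv2
import numpy as np

# Binary image with the two features (file name is a placeholder).
img = cv2.imread("features.png", cv2.IMREAD_GRAYSCALE)
binary = (img > 0).astype(np.uint8)

# Vertical projection: number of foreground pixels in every column.
profile = binary.sum(axis=0)

# Separate objects leave an empty column (or a deep dip) between them;
# once they intersect, the columns around the crossing stay occupied.
occupied = np.flatnonzero(profile)
first, last = occupied[0], occupied[-1]
has_gap = bool(np.any(profile[first:last + 1] == 0))
print("separate objects" if has_gap else "objects overlap in this projection")
```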

Which object recognition algorithm should I use?

I am pretty new to CV, so forgive my stupid questions...
What I want to do:
I want to recognize an RC plane in live video (for now it's only a recorded video).
What I have done so far (a rough sketch of these steps follows the list):
Differences between frames
Convert it to grey scale
GaussianBlur
Threshold
findContours
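The pipeline above as a minimal sketch, assuming two frames of the recorded video are available (file names, the threshold and the kernel size are placeholders; the OpenCV 4 return signature of findContours is assumed):

```python
import cv2

# Two consecutive frames from the recorded video (file names are placeholders).
prev = cv2.imread("frame_000.png")
curr = cv2.imread("frame_001.png")

# 1. difference between frames
diff = cv2.absdiff(prev, curr)
# 2. convert to grey scale
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
# 3. Gaussian blur
blur = cv2.GaussianBlur(gray, (5, 5), 0)
# 4. threshold (value is a placeholder)
_, thresh = cv2.threshold(blur, 25, 255, cv2.THRESH_BINARY)
# 5. findContours, then take bounding rectangles as candidate objects
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]
```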
Here are some example frames:
But there are also frames with noise, so there are more objects in the frame.
I thought I could do something like this:
Use some object recognition algorithm on every contour that has been found, and compute the feature vector only for each of these bounding rectangles.
Is it possible to compute SURF/SIFT/... only for a specific patch (smaller part) of the image?
Since it is important that the algorithm can process real-time video, I think this will only be feasible if I don't look at the whole image all the time. Or maybe I could decide, for example, that if there are more than 10 bounding rectangles I check the whole image instead of every rectangle.
Then I will look at the next frame and try to match my feature vector with the previous one. That way I will be able to trace my objects. Once these objects cross the red line in the middle of the picture it will trigger another event. But that's not important here.
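Computing features only on a patch is straightforward, since the detectors accept any image region. A minimal sketch using ORB on a single bounding rectangle (the frame name and box values are placeholders; matching against the previous frame is only hinted at):

```python
import cv2

# Current frame and one candidate bounding rectangle from findContours
# (file name and box values are placeholders).
frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
x, y, w, h = 100, 120, 60, 40

# Features can be computed on just the patch instead of the whole image.
orb = cv2.ORB_create()
patch = frame[y:y + h, x:x + w]
keypoints, descriptors = orb.detectAndCompute(patch, None)

# Descriptors from the previous frame's patch could then be matched, e.g.:
# matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# matches = matcher.match(prev_descriptors, descriptors)
```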
I need to make sure that not every object crossing or sitting behind that red line triggers the event. So the object needs to appear in at least 2 or 3 consecutive frames, and only if it then crosses the line should the event be triggered.
There are so many variations of object recognition algorithms that I am a bit overwhelmed.
SIFT/SURF/ORB/... you get what I am saying.
Can anyone give me a hint about which one I should choose, or whether what I am doing even makes sense?
Assuming the plane location doesn't change a lot from one frame to the next, I think you should look at object tracking instead of trying to estimate the location independently in each frame.
http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html
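For example, one of the trackers from that module, pyramidal Lucas-Kanade optical flow, can carry points found on the plane from one frame to the next. A minimal sketch (file names and parameters are placeholders):

```python
import cv2

# Two consecutive greyscale frames (file names are placeholders).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick trackable points, ideally restricted to the region where the plane
# was detected in the previous frame.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.3, minDistance=7)

# Propagate the points to the next frame with pyramidal Lucas-Kanade.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
tracked = p1[status.flatten() == 1]
```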

OpenCV Developing Motion detection Software

I am at the start of developing a piece of software using OpenCV in Microsoft Visual 2010 Express. What I need to know before I get into coding is which procedure I have to follow.
Overview:
I want to develop software that detects simple boxing moves, such as a left punch or a right punch, and outputs the results.
Where I am struggling is which approach to take and how to tackle this development, i.e.:
Capture video footage and extract, let's say, every 5th frame for processing.
Do I have to extract and store this frame, and perhaps keep a REFERENCE image to subtract the captured frame from?
Once I capture a frame, what would be the best way to process it:
* Threshold it, then
* Detect the edges, then
* Smooth the edges using some filter, then
* Draw some BOUNDING boxes....?
What is your view on this, guys? Am I missing something, or are there better, simpler ways? Any suggestions?
Any answer will be much appreciated
PS: it's not my homework :)
I'm not sure if analyzing only every 5th frame will be enough, because usually punches are so fast that they could be overlooked.
I assume what you actually want to find is fast forward (towards the camera) movements of the fists.
In the case of OpenCV I would first start off with such movements of faces, since some examples of how to do that are already provided with the software package.
To detect and track faces you can use CvHaarClassifierCascade, but since this won't be fast enough for runtime detection, continue tracking a face found this way with Lucas-Kanade. Just pick some good-to-track points inside the previously found face, remember their distance from an arbitrary face centre, and update it at each frame. See this video http://www.youtube.com/watch?v=zNqCNMefyV8 for an example of some random points tracked with Lucas-Kanade. Note that, unlike faces, fists may not be so easy to track since their surface is rather uniform, so it is worth checking the Lucas-Kanade demo in OpenCV first.
Of course the tracked face will drift away over the frames, so once in a while re-run CvHaarClassifierCascade and interpolate your currently held face position towards its result.
You should be able to do the above for fists as well, but that will require training a classifier with pictures of fists (a classifier trained on faces is already provided with OpenCV).
Now, having the fists/face tracked, you may try observing what happens to the points: when someone punches, the points move rapidly in some direction, while on a fist that stays still they don't move much. So, when you calculate the average movement of the individual points over recent frames, the higher the value, the bigger the chance that there was a punch. Alternatively, if you have managed to track the points accurately, an increasing distance between them means the object is getting closer to the camera, and therefore a punch is likely.
Note that without at least knowing the change in the size of the fist in the picture, it might be hard to distinguish whether a hand movement was forward or backward, or whether the user was faking it by moving the fists left or right. You may have to come up with some specialised check (perhaps by trial and error) to detect that, such as, say, an increase in the number of pixels of the relevant colour at the location where the fist was previously found.
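The "average movement of the points" check comes down to a couple of lines once the tracked point sets from two consecutive frames are available (the arrays and the thresholds below are placeholders):

```python
import numpy as np

# Tracked fist points in two consecutive frames, e.g. from Lucas-Kanade
# optical flow (the coordinates and thresholds are placeholders).
pts_prev = np.array([[100.0, 200.0], [110.0, 205.0], [105.0, 212.0]])
pts_curr = np.array([[112.0, 198.0], [123.0, 204.0], [117.0, 210.0]])

# Average displacement magnitude: large values suggest a fast punch.
avg_move = np.linalg.norm(pts_curr - pts_prev, axis=1).mean()

# Change in spread of the points: if the distances between them grow,
# the fist is getting closer to the camera, another cue for a punch.
spread_prev = np.linalg.norm(pts_prev - pts_prev.mean(axis=0), axis=1).mean()
spread_curr = np.linalg.norm(pts_curr - pts_curr.mean(axis=0), axis=1).mean()

punch_likely = avg_move > 8.0 and spread_curr > spread_prev
print(avg_move, punch_likely)
```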
What you are looking for is the research field of action recognition, e.g. www.nada.kth.se/cvap/actions/. A possible solution is, for example, the STIP (space-time interest points) method, www.di.ens.fr/~laptev/actions/. But ultimately this is a tough job if you have to deal with occlusion or different points of view.