Detect valid area of a target - C++

I'm building a DIY laser target.
So far, on the software side, I have managed to detect when a laser is shot at the target by calculating frame differences against a threshold, and then to find where it hit in relation to the whole image.
Now the next problem I am facing is finding where it hit in relation to the target center.
I have a proper box assembled for the target, but while testing this is what I have (the inner circle should have been filled with black):
It's not pretty and has lots of unrelated noise, but it should be enough to test with. This is the best possible scenario, from within the box:
To find where it hit in relation to the target center, I suppose I'll have to make a binary image out of those two black rings and calculate the average point (how?).
Then (supposing the target is perfectly parallel, not even a little bit in perspective), with the center known, I would need to know how far the outer ring is from it.
With that known, it should be a matter of trigonometry to tell the score (11 down to 0, with 0 being outside the outer ring).
To summarize the question:
I need to detect a known figure (a ring), tell where its center is, and determine its radius.
I am not worried about performance since this will only be done once at each program run to calibrate.
Some sample C++ code (besides pointing to the functions I can use) would be great since I'm new to both C++ and OpenCV.
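For the calibration step, here is a minimal sketch of one possible approach using cv::HoughCircles (OpenCV 3+ naming; the file name and all parameter values are placeholders to tune for your capture):

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    // Load the calibration image of the target (path is a placeholder).
    cv::Mat gray = cv::imread("target.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Blur first so the unrelated noise doesn't create spurious circles.
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2.0);

    // Detect circles; param1 is the Canny high threshold, param2 the
    // accumulator threshold (lower = more, possibly false, circles).
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                     1,                 // accumulator resolution
                     gray.rows / 8.0,   // min distance between circle centers
                     100, 50,           // param1, param2
                     20, 0);            // min radius, max radius (0 = none)

    // Take the largest detected circle as the outer ring; its center
    // doubles as the target center.
    cv::Vec3f outer(0, 0, 0);
    for (const cv::Vec3f& c : circles)
        if (c[2] > outer[2]) outer = c;

    std::printf("center = (%.1f, %.1f), outer radius = %.1f\n",
                outer[0], outer[1], outer[2]);
    return 0;
}
```

With the center (cx, cy) and outer radius R in hand, scoring a hit at (x, y) reduces to the distance check described above: compute dist = hypot(x - cx, y - cy) and map dist/R onto the 11..0 rings.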

Related

Plot a curved trajectory on a 2D coordinate system

I have a few ordered points (fewer than 10) in a 2D coordinate system.
I have an agent moving in the system and I want to find the shortest path between those points following their order.
For background, the agent can be given a position to go to with a thrust, and my objective is to plot the fastest course, given that the agent has a maximum thrust and a maximum angular velocity.
After some research, I realized that I may be looking for a curve-fitting algorithm, but I don't know the underlying function since the points are randomly distributed in the coordinate system.
Please help me find a solution to this problem.
I am open to any suggestion; my preferred programming language is C++.
I'm sure there is a purely mathematical solution, such as spacecraft trajectory optimization, but here is how I would consider approaching it from a programming/annealing perspective.
Even though you need a continuous path, you might start the path search with discrete steps and a tolerance for getting close enough to each point.
Assign a certain amount of time to each step and vary applied angular force and thrust at each step.
Calculate resulting angular momentum and position for the start of the next step.
Select the parameters for the next step either with a random search, or iterate through each to find the closest to the next target point (quantize the selection of angles and thrust to begin with). Repeat steps until you are close enough to the next target point. Then repeat to get to the next point.
Once you have a rough path you can start refining it (perhaps using the rough points from the previous run as target points in the new one) by reducing the time/size of the steps and the tolerance to target points.
When evaluating the parameters' fitness at each step, you might want to consider that once you reach a target point you may also want momentum in the direction of the next point. This should be an emergent property if multiple paths are considered and the fitness function rewards the shortest total time.
C++ can help here if you use std::priority_queue and std::map, for example, to keep track of the potential paths.
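A minimal sketch of that idea, assuming a toy 2D model with quantized thrust/turn controls; the dynamics, depth cap, and expansion budget are all illustrative, not a real optimizer:

```cpp
#include <cmath>
#include <cstdio>
#include <map>
#include <queue>
#include <tuple>
#include <vector>

// One candidate state of the agent at a discrete time step.
struct State {
    double x, y;      // position
    double vx, vy;    // velocity
    double heading;   // facing angle in radians
    int steps;        // elapsed steps (the quantity we minimize)
};

int main()
{
    const double dt = 1.0, maxThrust = 1.0, maxTurn = 0.3; // per-step limits
    const double tx = 50.0, ty = 30.0, tolerance = 2.0;    // next target point

    // Expand the state whose "steps so far + straight-line distance left"
    // is smallest first -- a crude best-first search.
    auto score = [&](const State& s) {
        return s.steps + std::hypot(tx - s.x, ty - s.y);
    };
    auto worse = [&](const State& a, const State& b) { return score(a) > score(b); };
    std::priority_queue<State, std::vector<State>, decltype(worse)> open(worse);
    open.push({0, 0, 0, 0, 0, 0});

    // Remember the fewest steps seen for each quantized state to prune paths.
    std::map<std::tuple<int, int, int>, int> best;

    int budget = 200000;                       // hard cap on expansions
    while (!open.empty() && budget-- > 0) {
        State s = open.top();
        open.pop();

        if (std::hypot(tx - s.x, ty - s.y) < tolerance) {
            std::printf("reached target in %d steps\n", s.steps);
            return 0;
        }

        auto key = std::make_tuple((int)s.x, (int)s.y, (int)(s.heading * 10));
        auto it = best.find(key);
        if (it != best.end() && it->second <= s.steps) continue; // seen better
        best[key] = s.steps;

        // Quantized controls: a few turn rates times a few thrust levels.
        for (double turn : {-maxTurn, 0.0, maxTurn})
            for (double thrust : {0.0, maxThrust}) {
                State n = s;
                n.heading += turn * dt;
                n.vx += thrust * std::cos(n.heading) * dt;
                n.vy += thrust * std::sin(n.heading) * dt;
                n.x += n.vx * dt;
                n.y += n.vy * dt;
                n.steps += 1;
                open.push(n);
            }
    }
    std::printf("no path found within budget\n");
    return 0;
}
```

Each pass of this with a smaller dt and tolerance is one refinement step of the annealing-style scheme described above.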

Region of Interest Uniqueness and Identity

I'm currently working on a computer vision application with OpenCV. The application involves target identification and characteristic determination. Generally, I'm going to have a target cross into the visible region and slowly move through it over a couple of seconds. This should give me upwards of 50-60 frames from the camera in which I'll be able to find the target.
We have successfully implemented the detection algorithms using SWT and OCR (the targets all have alphanumeric identifiers, which makes them relatively easy to pick out). What I want to do is use as much of the data as possible from all 50-60 shots of each target. To do this, I need some way to identify that a particular ROI of image 2 contains the same target as another ROI from image 1.
What I'm asking for is a little advice from anyone who may have come across this before. How can I easily/quickly identify, within a reasonable error margin, that ROI #2 has the same target as ROI #1? My first instinct is something like this:
1. Detect targets in frame 1.
2. Calculate certain unique features of each of the targets in frame 1. Save.
3. Get frame 2.
4. Immediately look for ROIs which have the same features as those calculated in step 2. Grab these and send them down the line for further processing, skipping step 5.
5. Detect new targets in frame 2.
6. Pass targets to a thread to calculate shape, color, GPS coordinates, etc.
7. Lather, rinse, repeat.
I'm thinking that SURF or SIFT features might be a way to accomplish this, but I'm concerned that they might have trouble identifying targets as the same from frame to frame due to distortion or color fade. I don't know how to set a threshold on SIFT/SURF features.
Thank you in advance for any light you can shed on this matter.
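On the threshold question: a common recipe is not to threshold raw SIFT/SURF descriptor distances, but to apply Lowe's ratio test to the two nearest matches. A sketch using ORB (to avoid the non-free modules; OpenCV 3+ API, and both cutoff values are guesses to tune on your footage):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Decide whether two ROIs show the same target by counting descriptor
// matches that pass Lowe's ratio test (SIFT/SURF work the same way).
bool sameTarget(const cv::Mat& roiA, const cv::Mat& roiB)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    orb->detectAndCompute(roiA, cv::noArray(), kpA, descA);
    orb->detectAndCompute(roiB, cv::noArray(), kpB, descB);
    if (descA.empty() || descB.empty()) return false;

    // For each descriptor in A, fetch its two nearest neighbors in B.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descA, descB, knn, 2);

    int good = 0;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            ++good;   // best match clearly beats the runner-up: unambiguous

    // Declare "same target" if enough keypoints match unambiguously;
    // the cutoff (here 10) is something to tune.
    return good >= 10;
}
```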
One thing you can do is locally equalize brightness and possibly saturation levels. If you aren't already using a color space such as YCrCb or HSV, I suggest you try them.
Can you assume that the object is not moving too fast? If you feed the previous position into the detection routine, you can decrease the size of the window you are looking at. The same goes for the speed and direction of movement.
I've successfully used histogram composition and shape descriptors of a region in order to reliably detect it; you can use that or add it to a SURF/SIFT classifier.
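A sketch of the equalize-then-compare-histograms idea (OpenCV 3+ names; the bin counts and whatever cutoff you apply to the returned correlation are placeholders):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Equalize brightness in YCrCb, as suggested above, without touching color.
static cv::Mat normalizeRoi(const cv::Mat& bgr)
{
    cv::Mat ycrcb;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);
    std::vector<cv::Mat> planes;
    cv::split(ycrcb, planes);
    cv::equalizeHist(planes[0], planes[0]);   // equalize the luma plane only
    cv::merge(planes, ycrcb);
    cv::Mat out;
    cv::cvtColor(ycrcb, out, cv::COLOR_YCrCb2BGR);
    return out;
}

// Returns a correlation score in [-1, 1]; closer to 1 means more similar.
// Pick a cutoff (say 0.8) empirically for your targets.
static double roiSimilarity(const cv::Mat& roiA, const cv::Mat& roiB)
{
    cv::Mat hsvA, hsvB;
    cv::cvtColor(normalizeRoi(roiA), hsvA, cv::COLOR_BGR2HSV);
    cv::cvtColor(normalizeRoi(roiB), hsvB, cv::COLOR_BGR2HSV);

    const int histSize[] = {30, 32};            // hue, saturation bins
    const float hRange[] = {0, 180}, sRange[] = {0, 256};
    const float* ranges[] = {hRange, sRange};
    const int channels[] = {0, 1};

    cv::Mat histA, histB;
    cv::calcHist(&hsvA, 1, channels, cv::Mat(), histA, 2, histSize, ranges);
    cv::calcHist(&hsvB, 1, channels, cv::Mat(), histB, 2, histSize, ranges);
    cv::normalize(histA, histA, 0, 1, cv::NORM_MINMAX);
    cv::normalize(histB, histB, 0, 1, cv::NORM_MINMAX);

    return cv::compareHist(histA, histB, cv::HISTCMP_CORREL);
}
```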

OpenCV Developing Motion detection Software

I am at the start of developing software using OpenCV in Microsoft Visual 2010 Express. What I need to know before I get into coding are the procedures I have to follow.
Overview:
I want to develop software that detects simple boxing moves (such as a left punch or a right punch) and outputs the results.
Where I am struggling is in what approach I should take and how I should tackle this development, i.e.:
Capture video footage and be able to extract, let's say, every 5th frame for processing.
Do I have to extract and store this frame, perhaps keeping a REFERENCE image to subtract the captured frame from?
Once I capture a frame, what would be the best way to process it:
* Threshold it, then
* Detect the edges, then
* Smooth the edges using some filter, then
* Draw some BOUNDING boxes....?
What is your view on this, guys? Am I missing something, or are there better, simpler ways? Any suggestions?
Any answer will be much appreciated.
P.S. It's not my homework :)
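For what it's worth, a minimal sketch of the reference-subtraction pipeline outlined in the question (OpenCV C++ API; every numeric threshold here is a placeholder to tune):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);                  // default camera
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, reference, diff, mask;
    int frameIndex = 0;

    while (cap.read(frame)) {
        if (++frameIndex % 5 != 0) continue;  // process every 5th frame

        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);

        if (reference.empty()) {              // first processed frame = reference
            gray.copyTo(reference);
            continue;
        }

        cv::absdiff(reference, gray, diff);                       // subtract
        cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);    // threshold
        cv::dilate(mask, mask, cv::Mat(), cv::Point(-1, -1), 2);  // smooth blobs

        // Bounding boxes around each sufficiently large changed region.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours)
            if (cv::contourArea(c) > 500)     // ignore tiny noise
                cv::rectangle(frame, cv::boundingRect(c), cv::Scalar(0, 255, 0), 2);

        cv::imshow("motion", frame);
        if (cv::waitKey(1) == 27) break;      // Esc to quit
    }
    return 0;
}
```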
I'm not sure if analyzing only every 5th frame will be enough, because usually punches are so fast that they could be overlooked.
I assume what you actually want to find is fast forward (towards camera) movements of fists.
In the case of OpenCV, I would first start off with such movements of faces, since some examples of how to do that are already provided in the software package.
To detect and track faces you can use CvHaarClassifierCascade, but since this won't be fast enough for runtime detection, continue tracking the found face with Lucas-Kanade. Just pick some good-to-track points inside the previously found face, remember their distance from an arbitrary face middle, and update it at each frame. See this video http://www.youtube.com/watch?v=zNqCNMefyV8 for an example of random points tracked with Lucas-Kanade. Note that unlike faces, fists may not be as easy to track since their surface is rather uniform; better check the Lucas-Kanade demo in OpenCV.
Of course the tracked face will drift away with each frame, so once in a while re-run CvHaarClassifierCascade and interpolate your currently held face position toward the new detection.
You should be able to do the above for fists as well, but that will require training a classifier with pictures of fists (a classifier trained on faces is already provided with OpenCV).
Now, having fists/face tracked, you may try observing what happens to the points: when someone punches, they move rapidly in some direction, while on a fist that remains still they don't move much. So when you calculate the average movement of individual points over recent frames, the higher the value, the bigger the chance that there was a punch. Alternatively, if you've somehow managed to track them accurately, an increase in the distance between the points means the object is closer to the camera, and so a likely punch.
Note that without at least knowing the change in size of the fist in the picture, it might be hard to distinguish whether a hand movement was forward or backward, or whether the user was faking it by moving the fists left or right. You may have to come up with some specialized algorithm (maybe by trial and error) to detect that, like, say, checking for an increase in the number of skin-colored pixels in the location where the fist was previously found.
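A rough sketch of that detect-then-track loop, using the C++ cv::CascadeClassifier instead of the C-era CvHaarClassifierCascade (the cascade path, point counts, and the displacement threshold are placeholders):

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    // Stock frontal-face cascade shipped with OpenCV (path is a placeholder).
    cv::CascadeClassifier face("haarcascade_frontalface_default.xml");
    cv::VideoCapture cap(0);
    if (face.empty() || !cap.isOpened()) return 1;

    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> points;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        if (points.size() < 10) {
            // (Re)run the cascade when tracking has degraded, as suggested above.
            std::vector<cv::Rect> faces;
            face.detectMultiScale(gray, faces, 1.2, 3, 0, cv::Size(80, 80));
            if (!faces.empty()) {
                // Seed good-to-track corners inside the detected face.
                cv::goodFeaturesToTrack(gray(faces[0]), points, 30, 0.01, 5);
                for (auto& p : points)          // shift from ROI to image coords
                    p += cv::Point2f((float)faces[0].x, (float)faces[0].y);
            }
        } else if (!prevGray.empty()) {
            // Track the points with pyramidal Lucas-Kanade between frames.
            std::vector<cv::Point2f> next;
            std::vector<unsigned char> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, gray, points, next, status, err);

            double motion = 0;                  // average per-point displacement
            std::vector<cv::Point2f> kept;
            for (size_t i = 0; i < next.size(); ++i)
                if (status[i]) {
                    motion += cv::norm(next[i] - points[i]);
                    kept.push_back(next[i]);
                }
            if (!kept.empty()) motion /= kept.size();
            points = kept;

            // A large average displacement is a crude "fast movement" signal.
            if (motion > 15) std::puts("rapid movement detected");
        }

        gray.copyTo(prevGray);
        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}
```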
What you are looking for is the research field of action recognition, e.g. www.nada.kth.se/cvap/actions/. One possible solution is the STIP (space-time interest points) method: www.di.ens.fr/~laptev/actions/. But ultimately this is a tough job if you have to deal with occlusion or different points of view.

Minecraft like collision

I'm working on a Minecraft-like game for educational purposes. The rendering is great so far, even with 1024x1204 blocks, but now that I've started integrating player collision, I'm having problems.
I have an AABB for the player and AABBs for all the blocks around him. These are created dynamically, and it works out pretty fast.
My problem goes as following:
I have a velocity vector and the current position. For each axis I calculate the potential position and build an AABB from it. I check for collisions; if the space is free I move there, otherwise I set the velocity for that component to 0. I handle the axes separately since I want my player to slide along a wall when partially facing it.
The order for the axes is y, x, z. The collision response is great, but I'm having some problems with the corners, as the player sometimes gets stuck in the world without being able to move. I'm not sure what the reason for this is.
I do not want to implement actual physics since those are more demanding and basically just too much for what I need.
Do you guys have any suggestions on how to implement this in a nice way? I did some searching but I didn't find anything useful for this particular situation.
This is a bit abstract, in the sense that the cause of your problem could be related to many things. Off the top of my head: maybe a bug in your collision detection code somehow allows objects to cross boundaries by one (or more) unit, so when the next collision is computed, one or more dimensions are stuck (imagine having an arm already inside the wall when the collision is detected: you can't get your arm out, because it collides with the interior of the wall boundary).
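One way to rule that failure mode out is to never take a step that ends inside a block: clamp the per-axis movement to the largest collision-free fraction instead of only testing the full step. A sketch (the single hard-coded wall stands in for your dynamic block lookup):

```cpp
// Axis-aligned bounding box with min/max corners per axis.
struct AABB {
    float min[3], max[3];
    bool overlaps(const AABB& o) const {
        for (int i = 0; i < 3; ++i)
            if (max[i] <= o.min[i] || min[i] >= o.max[i]) return false;
        return true;
    }
};

// Placeholder world: a single wall block; replace with your block query.
static const AABB wall = {{4, 0, 0}, {5, 10, 10}};
static bool collides(const AABB& box) { return box.overlaps(wall); }

// Move the player along one axis. If the full step collides, binary-search
// the largest safe fraction so the player ends flush against the obstacle
// instead of either stopping short or sinking into the wall.
void moveAxis(AABB& player, float& velocity, int axis, float dt)
{
    float delta = velocity * dt;

    AABB moved = player;
    moved.min[axis] += delta;
    moved.max[axis] += delta;
    if (!collides(moved)) {          // whole step is free: take it
        player = moved;
        return;
    }

    float lo = 0.0f, hi = 1.0f;      // lo always stays on the safe side
    for (int i = 0; i < 16; ++i) {
        float mid = 0.5f * (lo + hi);
        AABB test = player;
        test.min[axis] += delta * mid;
        test.max[axis] += delta * mid;
        if (collides(test)) hi = mid; else lo = mid;
    }
    player.min[axis] += delta * lo;
    player.max[axis] += delta * lo;
    velocity = 0.0f;                 // blocked on this axis: stop, allow sliding
}
```

Applied per axis in your y, x, z order, this keeps the box from ever starting a frame inside geometry, which is the precondition for the stuck state described above.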

Detecting Chess moves from successive image differences using OpenCV tools

Hey, I am coding up a simple chess-playing robot's vision system. I am trying to improve on some previous research to allow a camera and a standard chess set to be used, with both allowed to move during the game. So far I can locate the board in an image acquired via webcam, and I want to detect moves by taking the difference of successive images to determine what has changed, then use previous information about board occupancy to detect the moves.
My problem is that I can't seem to reliably detect changes at the moment, my current pipeline goes like this:
Subtract two images -> histogram-equalize the difference image -> erode and dilate the diff image to remove minor changes -> make a binary copy and apply a distance transform -> get the largest blob (corresponding to the highest value after the DT) and flood-fill that blob -> repeat until the DT returns a value small enough to ignore the change.
I am coding all this in OpenCV and C++, but my flood fill always seems to fail to fill the blobs, so in most cases I get only one change detected. I have also tried cv::inpaint, but that didn't help either. So my question is: am I just using the wrong approach, or can tuning somehow make the change detection more reliable? In case of the former, could people suggest alternative routes, preferably codable in C++/Python and/or OpenCV in a reasonable time?
Thanks.
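One alternative route, sketched under the assumption of OpenCV 3+: label every changed blob in a single call with cv::connectedComponentsWithStats, instead of looping over the distance transform and flood fill (thresholds are placeholders to tune against your lighting):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Return a bounding box for every sufficiently large changed region
// between two successive frames.
std::vector<cv::Rect> changedBlobs(const cv::Mat& prev, const cv::Mat& curr)
{
    cv::Mat diff, mask;
    cv::absdiff(prev, curr, diff);
    if (diff.channels() > 1) cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, cv::Mat()); // drop specks

    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);

    std::vector<cv::Rect> blobs;
    for (int i = 1; i < n; ++i)                       // label 0 is background
        if (stats.at<int>(i, cv::CC_STAT_AREA) > 100) // ignore small noise
            blobs.push_back(cv::Rect(stats.at<int>(i, cv::CC_STAT_LEFT),
                                     stats.at<int>(i, cv::CC_STAT_TOP),
                                     stats.at<int>(i, cv::CC_STAT_WIDTH),
                                     stats.at<int>(i, cv::CC_STAT_HEIGHT)));
    return blobs;
}
```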
The problems of getting a fix on the board and detecting movement of the pieces can be solved independently, assuming one does not move the board while also moving pieces around.
Some thoughts on how I would approach it:
Detecting the orientation of the board
You have to be able to handle the board being rotated in place, as well as moved around as long as some angle is maintained that lets you see the pieces. It would help if there were something on the board that you could easily identify (e.g. a marker on each corner) so that if you lose orientation (e.g. someone moves the board away from the camera completely) you could easily find it again.
In order to keep track of the board you need to model the position of the camera relative to the board in 3D space. This is the same problem as determining the location of the camera being moved around a fixed board. A problem of Egomotion. Once you solve that you can move on to the next stage, which is detecting movement and tracking objects.
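A minimal sketch of that pose-recovery step with cv::solvePnP, assuming the four corner markers suggested above have already been found in the image; the board size and the camera intrinsics are made-up placeholders for your calibration:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Given the four corner markers of the board located in the image (in a
// fixed order), recover the camera pose relative to the board.
void boardPose(const std::vector<cv::Point2f>& imageCorners,
               cv::Mat& rvec, cv::Mat& tvec)
{
    const float side = 0.40f;                        // board side length, meters
    std::vector<cv::Point3f> objectCorners = {
        {0, 0, 0}, {side, 0, 0}, {side, side, 0}, {0, side, 0}};

    // Camera intrinsics from a prior cv::calibrateCamera run (values made up).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                           0, 800, 240,
                                           0,   0,   1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);

    cv::solvePnP(objectCorners, imageCorners, K, dist, rvec, tvec);
    // rvec/tvec now map board coordinates into the camera frame; with them
    // you can project any board square into the image via cv::projectPoints.
}
```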
Detecting movement of pieces
This is probably the simpler part of the problem. There are lots of algorithms out there for object detection in video. I would only add that you can use "key" frames: identify those frames in which you see only the board, before and after a single move, e.g. frames where the hand isn't obscuring the pieces. Once you have the before/after frames, you can figure out what moved and where it is positioned relative to the board.
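A sketch of that before/after comparison, assuming both key frames have already been warped to the same top-down board view (e.g. via cv::getPerspectiveTransform and cv::warpPerspective); the per-square threshold is a guess to tune:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Compare a "before" and an "after" key frame square by square and return
// the (file, rank) of every square whose content changed noticeably --
// for a normal move, exactly two squares.
std::vector<cv::Point> changedSquares(const cv::Mat& before, const cv::Mat& after)
{
    CV_Assert(before.size() == after.size());
    cv::Mat diff;
    cv::absdiff(before, after, diff);
    if (diff.channels() > 1) cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);

    const int cell = diff.rows / 8;        // assumes a square 8x8 warped board
    std::vector<cv::Point> changed;
    for (int r = 0; r < 8; ++r)
        for (int c = 0; c < 8; ++c) {
            cv::Rect square(c * cell, r * cell, cell, cell);
            double meanDiff = cv::mean(diff(square))[0];
            if (meanDiff > 20)             // threshold to be tuned
                changed.push_back(cv::Point(c, r));
        }
    return changed;
}
```

Combined with the known board occupancy, the changed squares (source and destination, plus the extra squares of a capture or castle) narrow the move down directly.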
You can probably get away with not being able to recognize the shape of each piece if you assume continuity (i.e. that you've tracked all movements since the initial arrangement of the board, which is well known).