How to use marching squares for multiple contours? - c++

How can I make marching squares continue after it finds the first contour?
The contours in the image I'm working on are going to change quite often, and because I'm in an embedded environment (Android/iOS) I want a fast solution above all.
Using an external library isn't an option.
I tried connected component labeling but never got it to work, since my input is a PNG that isn't black and white (it isn't thresholded), and if I'm not mistaken CCL only works on black and white (binary) images.
I thought about saving the blob information to another vector and checking whether newly found pixels fall within the earlier found blobs, but I don't think that's fast enough: as the vector fills with more and more blobs, checking every blob inside it gets more and more expensive.
Which leaves me with my almost finished current approach: erase each contour I find and repeat until there's nothing left. But that approach seems expensive too.
If there is no fast solution, can anyone suggest a different approach, even if that means a different algorithm?
Edit 1: I chose marching squares because I only need the outlines of the contours, even if there are holes in them.

I solved my problem by using an implementation of the marching squares algorithm that I found in the Chipmunk2D physics library.
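For anyone who lands here later, below is a minimal sketch (in the spirit of that solution, not the Chipmunk2D code itself) of how a single marching squares pass can trace every outline: lattice points consumed by a trace are flagged in a visited buffer, so the row scan simply skips blobs that have already been traced. The names (solid, marchAll) and the threshold parameter are made up for this sketch.

    #include <cstdint>
    #include <vector>

    struct Point { int x, y; };
    using Contour = std::vector<Point>;

    // 1 if the pixel is "solid", 0 otherwise; pixels outside the image count
    // as empty. `thresh` lets this work on a grayscale (non-binary) buffer.
    static int solid(const uint8_t* img, int w, int h, int x, int y, uint8_t thresh) {
        if (x < 0 || y < 0 || x >= w || y >= h) return 0;
        return img[y * w + x] >= thresh ? 1 : 0;
    }

    // Marching squares over the whole image: every outline is traced once,
    // because lattice points already used by a trace are marked in `visited`.
    std::vector<Contour> marchAll(const uint8_t* img, int w, int h, uint8_t thresh) {
        enum Dir { NONE, UP, DOWN, LEFT, RIGHT };
        std::vector<uint8_t> visited((w + 1) * (h + 1), 0);
        std::vector<Contour> contours;

        auto cellCase = [&](int x, int y) {
            return solid(img, w, h, x - 1, y - 1, thresh) * 1   // top-left
                 + solid(img, w, h, x,     y - 1, thresh) * 2   // top-right
                 + solid(img, w, h, x - 1, y,     thresh) * 4   // bottom-left
                 + solid(img, w, h, x,     y,     thresh) * 8;  // bottom-right
        };

        for (int sy = 0; sy <= h; ++sy) {
            for (int sx = 0; sx <= w; ++sx) {
                int c = cellCase(sx, sy);
                // Skip empty/full cells, already-traced points, and the
                // ambiguous saddles (6, 9) as *starting* points; saddles are
                // still resolved mid-trace below via the previous direction.
                if (c == 0 || c == 15 || c == 6 || c == 9) continue;
                if (visited[sy * (w + 1) + sx]) continue;

                Contour contour;
                int x = sx, y = sy;
                Dir prev = NONE;
                do {
                    int cc = cellCase(x, y);
                    if (cc != 6 && cc != 9) visited[y * (w + 1) + x] = 1;
                    contour.push_back({x, y});
                    Dir d;
                    switch (cc) {                       // solid-on-the-right walk
                        case 1: case 3: case 11:  d = LEFT;  break;
                        case 2: case 10: case 14: d = UP;    break;
                        case 4: case 5: case 7:   d = DOWN;  break;
                        case 8: case 12: case 13: d = RIGHT; break;
                        case 6: d = (prev == RIGHT) ? DOWN : UP;    break;
                        case 9: d = (prev == DOWN)  ? LEFT : RIGHT; break;
                        default: d = NONE; break;       // shouldn't happen
                    }
                    if (d == UP) --y; else if (d == DOWN) ++y;
                    else if (d == LEFT) --x; else if (d == RIGHT) ++x;
                    prev = d;
                    if (d == NONE) break;
                } while (x != sx || y != sy);
                contours.push_back(contour);
            }
        }
        return contours;
    }

Note that hole boundaries come out as their own contours; if you only want the outer outlines, filter by winding direction.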

Related

OpenCV edge based object detection C++

I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're not fast and accurate enough. So I've decided to first detect edges in the template and the search area, using Canny (or a faster edge detection algorithm), and then match the edges to find the position, orientation, and size of the match found.
All this needs to be done in less than a second.
I've tried using matchTemplate() and matchShapes(), but the former is NOT scale and rotation invariant, and the latter doesn't work well with the actual images. Rotating the template image in order to match is also time consuming.
So far I have been able to detect the edges of the template but I don't know how to match them with the scene.
I've already gone through the following, but wasn't able to get them to work (they either use an old version of OpenCV, or just don't work with images other than those in the demo):
https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching
Angle and Scale Invariant template matching using OpenCV
https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/
Can someone please suggest an approach for this? Or a code snippet for the same, if possible?
This is my sample input image (the parts to detect are marked in red).
These are some software packages that do this, and also how I want it to behave:
This topic is what I have actually been dealing with for a year on a project, so I will try to explain what my approach is and how I am doing it. I assume that you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.). And be sure you clean the noise in the image.
Note: In my approach, I collect data from the contours of a reference image, which shows my desired object. Then I compare this data with the other contours in the big image.
Use Canny edge detection and find the contours on the reference image. You need to be sure here that it doesn't miss any parts of the contours; if it does, the preprocessing probably has problems. The other important point is that you need to find an appropriate mode for findContours, because every mode has different properties, so you need to find the right one for your case. At the end, you need to select the contours that are acceptable for your case and eliminate the rest.
After getting the contours from the reference, you can find the length of every contour (the number of points, or the perimeter via arcLength()) from the output of findContours(). You can compare these values against the contours of the big image and eliminate the ones that are very different.
minAreaRect computes a fitted, minimum-area enclosing rectangle for each contour. In my case, this function is very good to use. I extract two parameters using it:
a) Calculate the short and long edges of the fitted rectangle and compare the values with the other contours on the big image.
b) Calculate the percentage of blackness or whiteness (if your image is grayscale, get the percentage of pixels close to white or black) and compare at the end.
matchShapes can be applied at the end to the remaining contours, or you can apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the three steps above and only then applying matchShapes works very well on my side (a rough sketch of this pipeline follows below).
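To make those steps concrete, here is a rough sketch of the pipeline in OpenCV C++. It is an illustration, not the answerer's actual code: the Canny levels, the 30% length/size tolerances, and the 0.1 matchShapes cutoff are placeholder values to tune on your own images, and the blackness-percentage check from (b) is omitted for brevity.

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <cmath>
    #include <vector>

    std::vector<std::vector<cv::Point>> findCandidates(const cv::Mat& refGray,
                                                       const cv::Mat& sceneGray)
    {
        auto contoursOf = [](const cv::Mat& gray) {
            cv::Mat edges;
            cv::Canny(gray, edges, 50, 150);                 // step 1: edges
            std::vector<std::vector<cv::Point>> cs;
            cv::findContours(edges, cs, cv::RETR_EXTERNAL,   // pick the mode
                             cv::CHAIN_APPROX_SIMPLE);       // for your case
            return cs;
        };

        std::vector<std::vector<cv::Point>> ref = contoursOf(refGray);
        if (ref.empty()) return {};

        // Take the longest reference contour as "the object".
        std::size_t best = 0;
        for (std::size_t i = 1; i < ref.size(); ++i)
            if (cv::arcLength(ref[i], true) > cv::arcLength(ref[best], true))
                best = i;
        const std::vector<cv::Point>& obj = ref[best];

        double objLen = cv::arcLength(obj, true);            // step 2: length
        cv::RotatedRect objBox = cv::minAreaRect(obj);       // step 3: fitted box
        double objLong = std::max(objBox.size.width, objBox.size.height);

        std::vector<std::vector<cv::Point>> hits;
        for (const auto& c : contoursOf(sceneGray)) {
            double len = cv::arcLength(c, true);             // step 2 filter
            if (std::abs(len - objLen) > 0.3 * objLen) continue;

            cv::RotatedRect box = cv::minAreaRect(c);        // step 3 filter
            double cLong = std::max(box.size.width, box.size.height);
            if (std::abs(cLong - objLong) > 0.3 * objLong) continue;

            // Step 4: shape comparison on whatever survived the cheap filters.
            if (cv::matchShapes(obj, c, cv::CONTOURS_MATCH_I1, 0.0) < 0.1)
                hits.push_back(c);
        }
        return hits;
    }

The cheap geometric filters run first so that the comparatively expensive matchShapes call only sees a handful of surviving candidates.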
I think matchTemplate is not good to use directly. I draw every contour onto a separate zeroed Mat (a blank black image) as a template image and then compare it with the others. Using a reference template image directly doesn't give good results.
OpenCV has some good algorithms for finding circles, convexity, etc. If your situation involves them, you can also use them as a step.
At the end, you just gather all the data and values, and you can make a table in your mind. The rest is a kind of statistical analysis.
Note: I think the most important part is the preprocessing. So be sure that you have a clean, almost noiseless image and reference.
Note: Training can be a good solution for your case if you just want to know whether the objects exist or not. But if you are trying to do something for an industrial application, this is totally the wrong way. I tried the YOLO and Haar cascade training algorithms several times and also trained some objects with them. The experience I got is this: they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be totally correct even if your calibration is correct. On the other hand, training time and collecting data are painful.
You have rather bad image quality and very bad lighting conditions, so you have only two options:
1. Use filters -> binary threshold -> findContours -> matchShapes. But this is a very unstable algorithm for your object type and image quality; you will get a lot of wrong contours, and it's hard to filter them.
2. Haar cascades -> cut the bounding box -> check the shape inside.
All "special points / edge matching" algorithms will not work in such bad conditions.

Noise Removal From Image Using OpenCV

I have performed the thinning operation on a binary image with the code provided here. The source image which I used was this one.
And the result image which I obtained after applying thinning operation on the source image was this one
The problem I am facing is how to remove the noise in the image, which is visible around the thinned white lines.
In such a particular case, the easiest and safest solution is to label the connected components (union-find algorithm) and delete those with an area lower than one or two pixels.
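If OpenCV is already on hand, its built-in labeling does that pass for you; here is a minimal sketch (the 8-connectivity and the 3-pixel minimum area are assumptions to adjust):

    #include <opencv2/opencv.hpp>

    // Remove connected components smaller than minArea from a binary image.
    cv::Mat removeSpecks(const cv::Mat& binary, int minArea = 3) {
        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(binary, labels, stats,
                                                 centroids, 8);
        cv::Mat cleaned = cv::Mat::zeros(binary.size(), CV_8UC1);
        for (int label = 1; label < n; ++label) {           // 0 = background
            if (stats.at<int>(label, cv::CC_STAT_AREA) >= minArea)
                cleaned.setTo(255, labels == label);        // keep big blobs
        }
        return cleaned;
    }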
FiReTiTi and kcc__ have already provided good answers, but I thought I'd provide another perspective. Having looked through some of your previous posts, it appears that you're trying to build software that uses vascular patterns on the hand to identify people. So at some point, you will need to build some kind of classification algorithm.
I bring this up because many such algorithms are quite robust in the presence of this kind of noise. For example, if you intend to use supervised learning to train a convolutional neural net (which would be a reasonable approach assuming you can collect a decent amount of training samples), you may find that extensive pre-processing of this sort is unnecessary, and may even degrade the performance.
Just some thoughts to consider. Cheers!
Another simple but perhaps not so robust approach is to use contour area to remove small connected regions, then use erode/dilate before applying the thinning process.
However, you can also process your thinned image directly by using cv::findContours() and masking out contours with a small area. This is similar to what FiReTiTi answered.
You can use the findContours example from OpenCV to build contour detection using an edge detector such as Canny. The example can be ported directly for your requirement.
Once you have the contours in vector<vector<Point>> contours, you can iterate over each contour and use cv::contourArea to find the area of each region. Using a pre-defined threshold, you can remove unwanted areas.
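A sketch of that idea, assuming the thinned image is an 8-bit binary Mat; maxNoiseArea is a placeholder to pick from the size of your speckles:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Erase thinned-line noise by painting small contours black.
    void maskSmallContours(cv::Mat& thinned, double maxNoiseArea = 4.0) {
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(thinned.clone(), contours,             // clone: older
                         cv::RETR_LIST, cv::CHAIN_APPROX_NONE); // versions modify input
        for (std::size_t i = 0; i < contours.size(); ++i) {
            if (cv::contourArea(contours[i]) <= maxNoiseArea)   // tiny region ->
                cv::drawContours(thinned, contours, (int)i, 0,  // paint it black
                                 cv::FILLED);
        }
    }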
In my opinion, why don't you use a distance transform on the first image, and then use a size filter on the result to de-speckle the image?

Nodes collision testing - but not on bounding rectangles

I'm looking for some simple and somehow efficient way to detect collisions of Nodes (not necessarily Sprites) while ignoring collisions of transparent parts of Nodes' images.
It is easy to implement bounding Rects collision... but it does not reflect the transparency.
Another published approach is named "pixel-perfect"... OK, but I see it as inefficient and somewhat complicated, at least in the case of full-HD and bigger displays...
I suppose it could be possible to build "non-transparent" masks of both sprites, take them only within the intersection of their bounding rects, and finally perform an AND operation on those masks...
Please, has anybody seen something similar? Or better? Can I extract the transparent and non-transparent parts of a node's image?
I also found this and this as well. I haven't studied them yet, but now I'd like to use cocos2d-js... :)
Thanks a lot
Give quadtrees a try. The algorithm works by recursively splitting space into four quadrants, building a list of likely collideables. There are plenty of resources about them.
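A minimal, engine-agnostic sketch of such a quadtree (the capacity of 8 items, the depth limit of 6, the Box struct, and the integer ids are all illustrative choices):

    #include <utility>
    #include <vector>

    struct Box { float x, y, w, h; };

    static bool overlaps(const Box& a, const Box& b) {
        return a.x < b.x + b.w && b.x < a.x + a.w &&
               a.y < b.y + b.h && b.y < a.y + a.h;
    }

    struct QuadTree {
        Box bounds;
        int depth;
        std::vector<std::pair<int, Box>> items;   // (node id, bounding box)
        std::vector<QuadTree> kids;               // empty or exactly 4

        QuadTree(Box b, int d = 0) : bounds(b), depth(d) {}

        // Boxes entirely outside the root bounds are silently dropped.
        void insert(int id, const Box& box) {
            if (!kids.empty()) {                  // pass down to children
                for (auto& k : kids)
                    if (overlaps(k.bounds, box)) k.insert(id, box);
                return;
            }
            items.push_back({id, box});
            if (items.size() > 8 && depth < 6) split();  // capacity/depth caps
        }

        void split() {
            float hw = bounds.w / 2, hh = bounds.h / 2;
            kids.push_back(QuadTree({bounds.x,      bounds.y,      hw, hh}, depth + 1));
            kids.push_back(QuadTree({bounds.x + hw, bounds.y,      hw, hh}, depth + 1));
            kids.push_back(QuadTree({bounds.x,      bounds.y + hh, hw, hh}, depth + 1));
            kids.push_back(QuadTree({bounds.x + hw, bounds.y + hh, hw, hh}, depth + 1));
            for (auto& it : items)                // redistribute, then clear
                for (auto& k : kids)
                    if (overlaps(k.bounds, it.second)) k.insert(it.first, it.second);
            items.clear();
        }

        // Collect ids whose boxes might collide with 'box' (candidates only;
        // follow up with a narrow-phase test such as a per-pixel mask check).
        void query(const Box& box, std::vector<int>& out) const {
            if (!overlaps(bounds, box)) return;
            for (auto& it : items)
                if (overlaps(it.second, box)) out.push_back(it.first);
            for (auto& k : kids) k.query(box, out);
        }
    };

An item straddling a split line is stored in every child it touches, so query() can return duplicate ids; deduplicate before running the narrow-phase test.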
I combined this with a pixel-perfect check. I'm afraid it's the only way to properly check images' collisions, short of using the per-pixel data to create unique concave and convex shapes to represent the collision space and then using other algorithms (such as Minkowski portal refinement). However, if your images change or are animated, your game can take an equal performance hit.
Interestingly enough, pixel-perfect intersection was used in games like Pong, with much less processing power. It also gives the most accurate intersection compared to other algorithms.
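For reference, a sketch of the mask-AND idea from the question: intersect the two bounding rects first, then test per-pixel opacity only inside the overlap. The Mask struct and pixelCollide are illustrative names; in practice the masks would be baked once from each texture's alpha channel.

    #include <algorithm>
    #include <cstdint>

    struct Mask {
        int w, h;                 // size in pixels
        const uint8_t* opaque;    // w*h entries, 1 = non-transparent
    };

    // a positioned at (ax, ay), b at (bx, by), both in the same screen space.
    bool pixelCollide(const Mask& a, int ax, int ay,
                      const Mask& b, int bx, int by) {
        // Intersection of the two bounding rects; empty means no collision.
        int x0 = std::max(ax, bx), x1 = std::min(ax + a.w, bx + b.w);
        int y0 = std::max(ay, by), y1 = std::min(ay + a.h, by + b.h);
        if (x0 >= x1 || y0 >= y1) return false;

        // AND the two masks only inside the overlap region.
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                if (a.opaque[(y - ay) * a.w + (x - ax)] &&
                    b.opaque[(y - by) * b.w + (x - bx)])
                    return true;
        return false;
    }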
You can use physics collisions, and set up the physics body to approximate the non-transparent part of the node.

OpenCV Developing Motion detection Software

I am at the start of developing software using OpenCV in Microsoft Visual Studio 2010 Express. What I need to know before I get into coding is the procedure I have to follow.
Overview:
I want to develop software that detects simple boxing moves such as (Left punch, right punch) and outputs the results.
Now, where I am struggling is what approach I should take and how I should tackle this development, i.e.:
Capture video footage and be able to extract, let's say, every 5th frame for processing.
Do I have to extract and store this frame, and perhaps keep a REFERENCE image to subtract the captured frame from?
Once I capture a frame, what would be the best way to process it (see the sketch after this list):
* Threshold it, then
* Detect the edges, then
* Smooth the edges using some filter, then
* Draw some BOUNDING boxes....?
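For what it's worth, here is one way such a pipeline could be wired together with the OpenCV C++ API. A sketch only, not a recommendation: the gray-level threshold of 25, the 5x5 blur, and the 200-pixel minimum blob area are placeholder values to tune on real footage.

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0);                     // or a video file path
        cv::Mat frame, gray, reference, diff, mask;
        long frameNo = 0;

        while (cap.read(frame)) {
            if (frameNo++ % 5 != 0) continue;        // every 5th frame only
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0); // smooth noise

            if (reference.empty()) { gray.copyTo(reference); continue; }

            cv::absdiff(gray, reference, diff);      // subtract reference frame
            cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);

            std::vector<std::vector<cv::Point>> contours;
            cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                             cv::CHAIN_APPROX_SIMPLE);
            for (const auto& c : contours) {
                if (cv::contourArea(c) < 200) continue;       // ignore specks
                cv::rectangle(frame, cv::boundingRect(c), {0, 0, 255}, 2);
            }
            cv::imshow("motion", frame);
            if (cv::waitKey(1) == 27) break;         // Esc quits
        }
        return 0;
    }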
What is your view on this, guys? Am I missing something, or are there better, simpler ways? Any suggestions?
Any answer will be much appreciated.
PS: it's not my homework :)
I'm not sure if analyzing only every 5th frame will be enough, because usually punches are so fast that they could be overlooked.
I assume what you actually want to find is fast forward (towards camera) movements of fists.
In the case of OpenCV, I would first start off with such movements of faces, since some examples of how to do that are already provided in the software package.
To detect and track faces you can use CvHaarClassifierCascade, but since this won't be fast enough for runtime detection, continue tracking a found face with Lucas-Kanade. Just pick some good-to-track points inside the previously found face, remember their distance from an arbitrary face middle, and update it at each frame. See http://www.youtube.com/watch?v=zNqCNMefyV8 for an example of some random points tracked with Lucas-Kanade. Note that unlike faces, fists may not be so easy to track since their surface is rather uniform; better check the Lucas-Kanade demo in OpenCV.
Of course, with each frame the actual face will drift away, so once in a while re-run CvHaarClassifierCascade and interpolate your currently held face position toward its result.
You should be able to do the above for fists also, but that will require training the classifier with pictures of fists (a classifier trained on faces is already provided with OpenCV).
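A sketch of that detect-then-track loop using the C++ wrappers (cv::CascadeClassifier rather than the old CvHaarClassifierCascade); the 30-frame re-detect interval and the corner-count numbers are arbitrary choices.

    #include <opencv2/opencv.hpp>
    #include <cstdint>
    #include <vector>

    int main() {
        cv::CascadeClassifier face;
        face.load("haarcascade_frontalface_default.xml"); // ships with OpenCV

        cv::VideoCapture cap(0);
        cv::Mat frame, gray, prevGray;
        std::vector<cv::Point2f> pts;
        long frameNo = 0;

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

            // Lucas-Kanade: carry last frame's points into this frame.
            if (!prevGray.empty() && !pts.empty()) {
                std::vector<cv::Point2f> nextPts;
                std::vector<uint8_t> ok;
                std::vector<float> err;
                cv::calcOpticalFlowPyrLK(prevGray, gray, pts, nextPts, ok, err);
                pts.clear();
                for (std::size_t i = 0; i < nextPts.size(); ++i)
                    if (ok[i]) pts.push_back(nextPts[i]);  // drop lost points
            }

            // Haar is slow, so re-run it only occasionally (or when tracking
            // has lost too many points) and reseed trackable corners inside
            // the detected rectangle.
            if (pts.size() < 10 || frameNo % 30 == 0) {
                std::vector<cv::Rect> faces;
                face.detectMultiScale(gray, faces);
                if (!faces.empty()) {
                    cv::goodFeaturesToTrack(gray(faces[0]), pts, 50, 0.01, 5);
                    for (auto& p : pts)                    // ROI -> image coords
                        p += cv::Point2f((float)faces[0].x, (float)faces[0].y);
                }
            }

            gray.copyTo(prevGray);
            ++frameNo;
            // ...average the recent per-point motion here to flag a punch...
        }
        return 0;
    }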
Now, having fists/face tracked, you may try observing what happens to the points: when someone punches, they move rapidly in some direction, while on a fist that remains still they don't move much. So, when you calculate the average movement of the individual points over recent frames, the higher the value, the bigger the chance that there was a punch. Alternatively, if you've somehow managed to track them accurately, then if the distance between each of them increases, the object is closer to the camera, and so a punch is likely.
Note that without at least knowing the change in the size of the fist in the picture, it might be hard to distinguish whether a hand movement was forward or backward, or whether the user was faking it by moving the fists left or right. You may have to come up with some specialized algorithm (maybe by trial and error) to detect that, like, say, an increase in the number of screen pixels of the fist's color in the location where the fist was previously found.
What you are looking for is the research field of action recognition, e.g. www.nada.kth.se/cvap/actions/. A possible solution is, e.g., the STIP (space-time interest points) method, www.di.ens.fr/~laptev/actions/. But in the end this is a tough job if you have to deal with occlusion or different points of view.

Generating contour lines from regularly spaced data

I am currently working on a data visualization project. My aim is to produce contour lines, in other words iso-lines, from gridded data. The data can be temperature, weather data, or any other kind of environmental parameter; the only condition is that it must be regularly spaced.
I searched the internet, but I could not find a good algorithm, pseudo-code, or source code for producing contour lines from grids.
Does anybody know of a library, source code, or an algorithm for producing contour lines from gridded data?
It would be good if your suggestion has good runtime performance; I don't want to make my users wait too long :)
Edit: thanks for the responses, but isolines have some constraints, such as that they must not intersect, so just generating Bezier curves does not accomplish my goal.
See this question: How to approximate a vector contour from an elevation raster?
It's a near duplicate, but uses quite different terminology. You'll find that cartography and computer graphics solve many of the same problems, but use different terminology for them.
There's some reasonably good contouring available in gnuplot; if you're able to use GPL code, that may help.
If your data is placed at regular intervals, this can be done fairly easily (assuming I understand your problem correctly). First you need to determine at what interval you want your contours. Next, create the grid you are going to use to store the contour information (I'm assuming just a simple on/off, or "elevation at this contour level", type of data), which should be one interval smaller than the source data.
Now, the trick here is to offset the two grids by half an interval (it won't actually show up in code like this, but that's the concept I'm dealing with) and compare the four data points surrounding the current point in the contour grid you are calculating. If any of the four points fall in a different interval range, then that "pixel" in the contour grid should be set to true (or to the value of the contour range being crossed). A sketch of this idea follows below.
With this method, there will be a problem when the interval is too fine, which will cause several contours to overlap each other.
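A sketch of that marking pass, assuming a w x h grid of doubles; storing the highest band crossed is an arbitrary choice (a simple boolean flag works just as well):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // A cell in the (w-1) x (h-1) contour grid sits between four samples of
    // the w x h data grid (the half-interval offset described above). A cell
    // is marked when its four samples do not all fall in the same band.
    std::vector<int> markContourCells(const std::vector<double>& data,
                                      int w, int h, double interval) {
        std::vector<int> cells((w - 1) * (h - 1), -1);       // -1 = no contour
        auto band = [&](int x, int y) {
            return (int)std::floor(data[y * w + x] / interval);
        };
        for (int y = 0; y < h - 1; ++y) {
            for (int x = 0; x < w - 1; ++x) {
                int b0 = band(x, y),     b1 = band(x + 1, y);
                int b2 = band(x, y + 1), b3 = band(x + 1, y + 1);
                int lo = std::min(std::min(b0, b1), std::min(b2, b3));
                int hi = std::max(std::max(b0, b1), std::max(b2, b3));
                if (hi != lo)                    // a contour level crosses here
                    cells[y * (w - 1) + x] = hi; // store the band crossed
            }
        }
        return cells;
    }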
As the link from Paul Tomblin suggests, Bezier curves (which are a special case of B-splines) are a ripe solution for your problem. If runtime performance is an issue, Bezier curves have the added benefit of being evaluable via the very fast de Casteljau algorithm, instead of drawing them according to the parametric equations. On the off chance you're working with DirectX, it has a library function for de Casteljau, but it should not be challenging to brew one yourself using the 1001 web pages that describe it.
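For completeness, a hand-rolled de Casteljau evaluator is only a few lines; this sketch evaluates one point on a Bezier curve of any degree:

    #include <vector>

    struct Vec2 { double x, y; };

    // De Casteljau: repeatedly lerp neighbouring control points until one
    // point remains. Numerically stable; assumes pts is non-empty and
    // t is in [0, 1].
    Vec2 deCasteljau(std::vector<Vec2> pts, double t) {
        for (std::size_t n = pts.size() - 1; n > 0; --n)
            for (std::size_t i = 0; i < n; ++i) {
                pts[i].x += t * (pts[i + 1].x - pts[i].x);
                pts[i].y += t * (pts[i + 1].y - pts[i].y);
            }
        return pts[0];
    }

Sampling t in small steps (say 32 or 64 per segment) turns each curve into a polyline ready for drawing.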