How to compare two edge images in OpenCV (not matchShapes) - C++

A little introduction to what I'm doing...
For academic purposes I am creating an application in C++ using OpenCV for the detection of static objects in a scene.
The application is based on a combined approach of background subtraction and tracking, and the detection of events related to the abandonment of objects works fine.
But at the moment I have a problem that I can't solve; I have to implement a finite state machine to detect the event of object removal, both before and after the object has entered the background.
To do this I was ordered by my superiors to use the edges of objects.
And now the problem.
After detecting a vehicle illegally parked along a road, I need to compare the edges of various images (the background captured at the time of the alarm, the current background, the current frame) to understand what the vehicle does (starts moving, remains parked, or starts moving after having entered the background).
I run these comparisons on the region of the scene in which the vehicle is located (vehicles typically have different sizes); I extract the edges using the Canny algorithm, obtaining a binarized CV_8UC1 cv::Mat.
At this point I have to compare them.
I tried to detect the contours with findContours and compare them with matchShapes, but it does not seem to be the right way: I would have to compare each contour of the first image with every contour of the second, and in addition the two images to compare typically have a different number of contours (for example the original background and the current background, because the edges of the current background increase when the vehicle enters the background).
I also tried to create a new image in which each pixel corresponds to the absolute difference of the other two, then I counted the white pixels of the difference image (wPx) and used this number for the comparison as follows: I set two thresholds (thr1 and thr2) and counted the pixels on the perimeter of the vehicle's bounding rect (perim); if wPx < thr1*perim the images are considered equal, and if wPx > thr2*perim the images are considered different.
(I use percentage thresholds and multiply them by the perimeter of the bounding box to adapt the thresholds to the vehicle's dimensions.)
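For reference, a minimal sketch of that comparison, assuming the two edge patches are the binarized CV_8UC1 Canny outputs already cropped to the vehicle's bounding rect (the threshold values below are placeholders):

```cpp
#include <opencv2/opencv.hpp>

// Compare two binarized edge patches (CV_8UC1) of the vehicle region.
// Returns +1 if "different", -1 if "equal", 0 if inconclusive.
// thr1/thr2 are placeholder percentage thresholds, perim is the
// perimeter of the vehicle's bounding rect.
int compareEdgePatches(const cv::Mat& edgesA, const cv::Mat& edgesB,
                       double perim, double thr1 = 0.1, double thr2 = 0.3)
{
    cv::Mat diff;
    cv::absdiff(edgesA, edgesB, diff);          // pixel-wise absolute difference
    int wPx = cv::countNonZero(diff);           // number of differing (white) pixels

    if (wPx < thr1 * perim) return -1;          // images considered equal
    if (wPx > thr2 * perim) return +1;          // images considered different
    return 0;                                   // inconclusive
}
```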
This solution, however, does not seem very robust.
Do you have anything simple to suggest?
Thank you very much in any case; you Stack Overflow users have helped me more than once!
PS: THIS is an example of the images that I have to compare
The first is the background without the vehicle stationary, contains the edges of the street;
the second is the original background, the one captured when the stationary vehicle is detected;
the third is the current background (which in this case is equal to the original, being the same frame, but it will change later);
the fourth is the current frame of the video;

You may want to take a look at this paper: A Novel SIFT-Like-Based Approach
for FIR-VS Images Registration. Aguilera et al. propose an Edge Oriented Histogram descriptor (EOH-SIFT).
The paper addresses registering multispectral images, i.e. visible and infrared images, to each other. Because of the different characteristics of the images, the authors first extract edges/contours in both images, which results in images similar to yours.
So, you can describe your image patches using this descriptor, illustrated in the following figure (taken from the above paper):
Subdivide your image patch into 4x4 zones
For each of the 16 subregions compose a histogram of contour orientations (5 bins)
Put the histograms together into one descriptor vector of size 16x5=80 bins
Normalize the feature vector
So, every image you want to compare (in your case 4) is described by its 80-dimensional feature vector. You can compare them to each other by calculating and evaluating the Euclidean distance between them.
Note: a patch size of 80x80 or 100x100 (NxN) pixels is suggested here. You may have to adjust the sizes to your image sizes.
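As an illustration, here is a rough C++/OpenCV sketch of such a descriptor. It is a simplification of the paper's EOH: it bins Sobel gradient orientations into 5 bins per 4x4 cell rather than using the paper's exact directional filter bank, and patch extraction is left to the caller:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Build a simple 4x4 x 5-bin edge-orientation histogram (80-dim) for a
// single-channel edge/gray patch. Only an approximation of the paper's
// EOH descriptor: it uses Sobel gradient orientations instead of the
// original directional filter bank.
std::vector<float> edgeOrientationDescriptor(const cv::Mat& patch)
{
    CV_Assert(patch.type() == CV_8UC1);

    cv::Mat dx, dy, mag, ang;
    cv::Sobel(patch, dx, CV_32F, 1, 0);
    cv::Sobel(patch, dy, CV_32F, 0, 1);
    cv::cartToPolar(dx, dy, mag, ang);               // ang in [0, 2*pi)

    const int cells = 4, bins = 5;
    std::vector<float> desc(cells * cells * bins, 0.f);

    for (int y = 0; y < patch.rows; ++y) {
        for (int x = 0; x < patch.cols; ++x) {
            float m = mag.at<float>(y, x);
            if (m <= 0.f) continue;                   // skip non-edge pixels
            // fold orientation to [0, pi) so opposite directions share a bin
            float a = std::fmod(ang.at<float>(y, x), (float)CV_PI);
            int bin = std::min(bins - 1, (int)(a / CV_PI * bins));
            int cx  = std::min(cells - 1, x * cells / patch.cols);
            int cy  = std::min(cells - 1, y * cells / patch.rows);
            desc[(cy * cells + cx) * bins + bin] += m;
        }
    }
    cv::normalize(desc, desc, 1.0, 0.0, cv::NORM_L2); // normalize the feature vector
    return desc;
}

// Euclidean distance between two descriptors: small distance = similar patches.
float descriptorDistance(const std::vector<float>& a, const std::vector<float>& b)
{
    return (float)cv::norm(a, b, cv::NORM_L2);
}
```

In your case you would compute the descriptor for each of the four edge patches and compare them pairwise with descriptorDistance().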

Related

Motion detection by eliminating constant movements

I am trying to implement motion detection in OpenCV C++. I tried various methods like MOG and optical flow, which work fine, but is there a way to eliminate constant movements in the scene, like a constant fan motion etc.? I have OpenCV's accumulateWeighted() in mind but am not sure if it works. Is there any better way we can do it?
I don't have a fully robust solution and I also don't have any experience with video processing, but I will share the idea I have come up with so far for this problem:
First consider a few pairs of consecutive image frames from the video and convert them to gray scale for more robust comparison.
Raster scan each image pair and compute the difference of the pair by comparing corresponding pixels.
The resulting image will give the pixel locations where there is a change between the two images in a pair; cluster these pixel locations and put a bounding box around them, so that the bounding box region marks an object which is translating/rotating.
Now, having applied the above image-difference operation over several pairs, we will have a rotating/translating bounding box in each image-pair difference.
Now check each resulting difference image for the pixels covered by a bounding box.
Compare the central location of a bounding box in one difference image with those in the other difference images. If a bounding box with only a very slight variation in its central location exists across all difference images, then the object contained in that bounding box is undergoing in-place motion like a fan or leaves, and the remaining bounding boxes represent the actual translating objects in the video. A rough sketch of this idea is given below.
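A rough C++/OpenCV sketch of this pipeline, assuming the frames are already grayscale; the difference threshold, minimum blob area and drift tolerance are placeholder values to tune:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Difference two consecutive grayscale frames and return bounding boxes
// around the regions that changed. Threshold and minimum-area values are
// placeholders to be tuned for the actual footage.
std::vector<cv::Rect> changedRegions(const cv::Mat& prevGray, const cv::Mat& currGray)
{
    cv::Mat diff, mask;
    cv::absdiff(prevGray, currGray, diff);
    cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);

    // Merge nearby changed pixels so one moving object gives one blob.
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7)));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> boxes;
    for (const auto& c : contours)
        if (cv::contourArea(c) > 50.0)               // ignore tiny specks
            boxes.push_back(cv::boundingRect(c));
    return boxes;
}

// A box whose center stays (nearly) fixed across many frame-pair differences
// is likely constant in-place motion such as a fan; boxes whose centers
// drift are genuine translating objects.
bool isStationaryMotion(const std::vector<cv::Point2f>& centersOverTime, float maxDrift = 5.f)
{
    if (centersOverTime.size() < 2) return false;
    for (size_t i = 1; i < centersOverTime.size(); ++i) {
        float dx = centersOverTime[i].x - centersOverTime.front().x;
        float dy = centersOverTime[i].y - centersOverTime.front().y;
        if (std::hypot(dx, dy) > maxDrift) return false;
    }
    return true;
}
```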

Detect ball/circle in OpenCV (C++)

I am trying to detect a ball in a filtered image.
In this image I've already removed the stuff that can't be part of the object.
Of course I tried the HoughCircle function, but I did not get the expected output.
Either it didn't find the ball or there were too many circles detected.
The problem is that the ball isn't completely round.
Screenshots:
I had the idea that it could work, if I identify single objects, calculate their center and check whether the radius is about the same in different directions.
But it would be nice if it could detect the ball even when it isn't completely visible.
And with that method I can't detect semi-circles or something like that.
EDIT: These images are from a video stream (real time).
What other method could I try?
It looks like you've used difference imaging or something similar to obtain the images you have. Instead of looking for circles, look for a more generic loop. Suggestions:
Separate all connected components.
For every connected component -
Walk around the contour and collect all contour pixels in a list
Suggestion 1: Use least squares to fit an ellipse to the contour points
Suggestion 2: Study the curvature of every contour pixel and check if it fits a circle or ellipse. This check may be done by computing a histogram of edge orientations for the contour pixels, or by checking the gradients of orientations from contour pixel to contour pixel. In the second case, for a circle or ellipse, the gradients should be almost uniform (ask me if this isn't very clear).
Apply constraints on perimeter, area, lengths of major and minor axes, etc. of the ellipse or loop. Collect these properties as features.
You can either use hard-coded heuristics/thresholds to classify a set of features as ball/non-ball, or use a machine learning algorithm. I would first keep it simple and simply use thresholds obtained after studying some images.
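A rough sketch of the first suggestion (connected components via findContours, then a least-squares ellipse fit with cv::fitEllipse); the constraint thresholds are placeholders you would tune on your images:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Find loop-like blobs in a binary image and keep only those whose
// least-squares ellipse fit looks "ball-like". Thresholds are placeholders.
std::vector<cv::RotatedRect> findBallCandidates(const cv::Mat& binary)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    std::vector<cv::RotatedRect> candidates;
    for (const auto& c : contours) {
        if (c.size() < 5) continue;                 // fitEllipse needs >= 5 points

        cv::RotatedRect e = cv::fitEllipse(c);      // least-squares ellipse fit
        float major = std::max(e.size.width, e.size.height);
        float minor = std::min(e.size.width, e.size.height);
        if (minor < 1.f) continue;

        double area      = cv::contourArea(c);
        double perimeter = cv::arcLength(c, true);

        // Constraints on axis ratio, size and perimeter (tune these).
        bool roundEnough = (major / minor) < 1.5f;
        bool bigEnough   = area > 100.0 && perimeter > 30.0;
        if (roundEnough && bigEnough)
            candidates.push_back(e);
    }
    return candidates;
}
```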
Hope this helps.

painting DICOM ROI

I'm at a point where I need to mix the DICOM Region of Interest (ROI) Relative Electron Density (RED) with the information from DICOM CTs, where some of the ROIs should override the CT info. [I'm working in C# by the way.] My question is that I need to draw the ROIs filled, in the correct way, such that lungs for instance are shown with low RED while the body is water equivalent. I can use the bounding rectangle to get an idea whether one is possibly inside the other, but once that is known, I still need to determine if they overlap or if one is completely contained within another. I can do a raw draw of each ROI on a separate bitmap and do a voxel-by-voxel comparison per slice, but this seems likely to be slow. I have not found a good answer, and I'm hoping someone knows a better way to determine the ordering of drawing (painting filled) that works in a fast manner.
Thanks
An ROI in DICOM is normally defined as a list of points forming a polygon (or several) on the plane of the related CT-scan slice (they share the same frame of reference UID). So, you can draw your CT slice and then draw the ROI polygons on top, or you can query every CT point you draw as to whether or not it belongs to the ROI polygon set, and change the color correspondingly.
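You are working in C#, but to illustrate the idea, here is a small C++/OpenCV sketch of the first option (paint filled ROI polygons into a per-slice RED map). The Roi structure, the default RED value and the paint-larger-ROIs-first ordering are all assumptions for illustration, not part of any DICOM library:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Illustrative only: one ROI = one polygon (in slice pixel coordinates)
// plus its relative electron density.
struct Roi {
    std::vector<cv::Point> polygon;
    float red;                              // relative electron density
};

// Paint ROIs onto a per-slice RED map, larger ROIs first, so that smaller
// contained ROIs (e.g. lungs inside the body) override the outer ones.
cv::Mat paintRedMap(cv::Size sliceSize, std::vector<Roi> rois, float defaultRed = 1.0f)
{
    cv::Mat redMap(sliceSize, CV_32FC1, cv::Scalar(defaultRed));

    std::sort(rois.begin(), rois.end(), [](const Roi& a, const Roi& b) {
        return cv::contourArea(a.polygon) > cv::contourArea(b.polygon);
    });

    for (const Roi& r : rois) {
        std::vector<std::vector<cv::Point>> polys{ r.polygon };
        cv::fillPoly(redMap, polys, cv::Scalar(r.red));   // filled polygon paint
    }
    return redMap;
}

// Alternative per-point query instead of painting:
//   double inside = cv::pointPolygonTest(roi.polygon, cv::Point2f(x, y), false);
//   // inside >= 0 means the CT voxel lies within that ROI polygon.
```

The per-point query (shown in the comment) avoids painting altogether, at the cost of one test per voxel per ROI.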

How can I detect TV Screen from an Image with OpenCV or Another Library?

I've been working on this for some time now, and can't find a decent solution.
I use OpenCV for image processing and my workflow is something like this:
Take a picture of a TV.
Split the image into R, G, B planes - I'm starting to test using H, S, V too and it seems a bit promising.
For each plane, threshold the image for a range of values in 0 to 255.
Reduce noise, detect edges with Canny, find the contours and approximate them.
Select contours that contain the center of the image (I can assume that the center of the image is inside the TV screen).
Use convexHull and HoughLines to filter and refine invalid contours.
Select contours with a certain area (between 10% and 90% of the image).
Keep only contours that have exactly 4 points.
But this is too slow (loop on each channel (RGB), then loop for the threshold, etc.) and is not good enough, as it fails to detect many TVs.
My base code is the squares.cpp example of the OpenCV framework.
The main problems of TV Screen detection, are:
Images that are half dark and half bright or have many dark/bright items on screen.
Elements on the screen that have the same color of the tv frame.
Blurry tv edges (in some cases).
I also have searched many SO questions/answers on Rectangle detection, but all are about detecting a white page on a dark background or a fixed color object on a contrast background.
My final goal is to implement this on Android/iOS for near-real time tv screen detection. My code takes up to 4 seconds on a Galaxy Nexus.
Hope anyone could help . Thanks in advance!
Update 1: Just using Canny and HoughLines does not work, because there can be very many lines, and selecting the correct ones can be very difficult. I think that some sort of "cleaning" of the image should be done first.
Update 2: This question is one of the closest to the problem, but for the TV screen it didn't work.
Hopefully these points provide some insight:
1)
If you can properly segment the image via foreground and background, then you can easily set a bounding box around the foreground. Graph cuts are very powerful methods of segmenting images. It appears that OpenCV provides easy to use implementations for it. So, for example, you provide some brush strokes which cover "foreground" and "background" pixels, and your image is converted into a digraph which is sliced optimally to split the two. Here is a fun example:
http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html
This is a quick something I put together to illustrate its effectiveness:
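For example, a minimal rect-initialized GrabCut sketch in C++; the rectangle is only an assumed rough prior that the screen sits near the image center:

```cpp
#include <opencv2/opencv.hpp>

// Rect-initialized GrabCut: everything outside 'rect' is treated as sure
// background, and the foreground is refined iteratively inside it.
cv::Mat grabCutForeground(const cv::Mat& bgr)
{
    // Assumed prior: the screen occupies roughly the central part of the image.
    cv::Rect rect(bgr.cols / 8, bgr.rows / 8, bgr.cols * 3 / 4, bgr.rows * 3 / 4);

    cv::Mat mask, bgdModel, fgdModel;
    cv::grabCut(bgr, mask, rect, bgdModel, fgdModel, 5, cv::GC_INIT_WITH_RECT);

    // Keep pixels labeled as (probable) foreground.
    cv::Mat fg = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);

    cv::Mat result;
    bgr.copyTo(result, fg);
    return result;
}
```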
2)
If you decide to continue down the edge detection route, then consider using Mathematical Morphology to "clean up" the lines you detect before trying to fit a bounding box or contour around the object.
http://en.wikipedia.org/wiki/Mathematical_morphology
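For instance, a closing followed by an opening on the Canny output can bridge small gaps in the screen border and remove isolated specks (the kernel size is a placeholder to tune):

```cpp
#include <opencv2/opencv.hpp>

// Clean up a Canny edge map with morphological closing (bridge small gaps)
// followed by opening (remove isolated specks). Kernel size needs tuning.
cv::Mat cleanEdges(const cv::Mat& edges)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat cleaned;
    cv::morphologyEx(edges, cleaned, cv::MORPH_CLOSE, kernel);
    cv::morphologyEx(cleaned, cleaned, cv::MORPH_OPEN, kernel);
    return cleaned;
}
```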
3)
You could train across a dataset containing TVs and use the Viola-Jones algorithm for object detection. Traditionally it is used for face detection, but you can adapt it for TVs given enough data. For example, you could script downloading images of living rooms with TVs as your positive class and living rooms without TVs as your negative class.
http://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
http://docs.opencv.org/trunk/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html
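Once such a cascade has been trained, using it from OpenCV is straightforward; the cascade file name below is hypothetical and stands for whatever you train:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect TVs with a cascade trained on TV images.
// "tv_cascade.xml" is a hypothetical file you would have to train yourself.
std::vector<cv::Rect> detectTvs(const cv::Mat& bgr)
{
    cv::CascadeClassifier cascade;
    if (!cascade.load("tv_cascade.xml"))
        return {};

    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> tvs;
    cascade.detectMultiScale(gray, tvs, 1.1, 3, 0, cv::Size(80, 80));
    return tvs;
}
```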
4)
You could perform image registration using cross correlation, like this nice MATLAB example demonstrates:
http://www.mathworks.com/help/images/examples/registering-an-image-using-normalized-cross-correlation.html
As for your template TV image which would be slid across the search image, you could obtain a bunch of pictures of TVs and create "Eigenscreens" similar to how Eigenfaces are used for facial recognition and generate an average TV image:
http://jeremykun.com/2011/07/27/eigenfaces/
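In OpenCV, the normalized cross-correlation step can be done with matchTemplate; the "average TV" template below is an assumed input you would build from your TV dataset:

```cpp
#include <opencv2/opencv.hpp>

// Slide an "average TV" template over the scene using normalized
// cross-correlation and return the best-matching location.
cv::Rect locateTemplate(const cv::Mat& sceneGray, const cv::Mat& templGray)
{
    cv::Mat response;
    cv::matchTemplate(sceneGray, templGray, response, cv::TM_CCORR_NORMED);
    // cv::TM_CCOEFF_NORMED is a common, often more robust alternative.

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(response, &minVal, &maxVal, &minLoc, &maxLoc);

    // maxVal close to 1.0 means a strong match at maxLoc.
    return cv::Rect(maxLoc, templGray.size());
}
```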
5)
It appears OpenCV has plenty of fun tools for describing shape and structure features, which appears to be mainly what you're interested in. Worth a look if you haven't seen this already:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
Best of luck.

Finding Circle Edges :

Here are the two sample images that I have posted.
I need to find the edges of the circle.
Is it possible to develop one generic circle algorithm that could find all possible circles in all scenarios? Like below:
1. The circle may be a different color (white, black, gray, red)
2. The background color may be different
3. The circle may differ in size
http://postimage.org/image/tddhvs8c5/
http://postimage.org/image/8kdxqiiyb/
Please suggest some ideas for writing an algorithm that works for the circles above.
Sounds like a job for the Hough circle transform:
I have not used it myself so far, but it is included in OpenCV. Among other parameters, you can give it a minimum and maximum radius.
Here are links to documentation and a tutorial.
I'd imagine your second example picture will be very hard to detect, though.
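A minimal usage sketch (OpenCV 3+ constant names; all parameters - blur size, thresholds, radius range - are placeholders to tune per image):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect circles with the Hough circle transform. All parameters
// (blur size, Canny/accumulator thresholds, radius range) are placeholders.
std::vector<cv::Vec3f> detectCircles(const cv::Mat& gray)
{
    cv::Mat blurred;
    cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2.0);  // reduce noise first

    std::vector<cv::Vec3f> circles;                        // each: (x, y, radius)
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                     1,                 // dp: accumulator resolution
                     blurred.rows / 8,  // minimum distance between centers
                     100, 30,           // Canny high threshold, accumulator threshold
                     10, 200);          // min and max radius
    return circles;
}
```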
You could apply an edge detection transformation to both images.
Here is what I did in Paint.NET using the outline effect:
You could try edge detection too, but that requires more contrast in the images.
Another thing to take into consideration is what exactly it is that you want to detect: in the first image, do you want to detect the white ring or the disc inside? In the second image, do you want to detect all the circles (there are many tiny ones) or just the big one(s)? These requirements will influence which transformation to use and how to initialize it.
After transforming the images into versions that 'highlight' the circles you'll need an algorithm to find them.
Again, there are more options than just one. Here is a paper describing an algorithm.
Searching the web for image processing circle recognition gives lots of results.
I think you will have to use a couple of different feature calculations that can be used for segmentation. In the first picture the circle is recognizable by intensity alone, so that one is easy. In the second picture it is mostly the texture that differentiates the circle edge; in that case a feature image based on some kind of texture filter will be needed, and calculating the local variance, for instance, will result in a scalar image that can segment out the circle. If there are other features that define the circle in other scenarios (different colors for background/foreground etc.) you might need other explicit filters that give a scalar difference for those cases.
When you have scalar images where the circles stand out you can use the circular Hough transform to find the circle. Either run it for different circle sizes or modify it to detect a range of sizes.
If you know that there will be only one circle and you know the kind of noise that will be present (vertical/horizontal lines etc) an alternative approach is to design a more specific algorithm e.g. filter out the noise and find center of gravity etc.
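A minimal sketch of the local-variance feature image mentioned above (the window size is a placeholder):

```cpp
#include <opencv2/opencv.hpp>

// Local variance image: var = E[x^2] - (E[x])^2 over a sliding window.
// High values mark textured regions such as the circle edge in the second image.
cv::Mat localVariance(const cv::Mat& gray, int windowSize = 7)
{
    cv::Mat f, fsq, mean, meanSq;
    gray.convertTo(f, CV_32F);
    fsq = f.mul(f);

    cv::boxFilter(f,   mean,   CV_32F, cv::Size(windowSize, windowSize));
    cv::boxFilter(fsq, meanSq, CV_32F, cv::Size(windowSize, windowSize));

    cv::Mat variance = meanSq - mean.mul(mean);
    return variance;  // a scalar image that can be thresholded or fed to the Hough stage
}
```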
Answer to comment:
The idea is to separate the algorithm into independent stages. I do not know how the specific algorithm you have works, but presumably it could take a binary or grayscale image where high values mean a pixel is part of the circle and low values mean it is not; the present algorithm also needs to give some kind of confidence value for the circle it finds. This present algorithm would then represent some stage(s) at the end of the complete algorithm. You will then have to add the first stage, which is to generate feature images for all the kinds of input you want to handle. For the two examples it should suffice to have one intensity image (simply grayscale) and one image where each pixel represents the local variance. In the color case, do a color transform and use the hue value, perhaps? For every input, feed all feature images to the later stage and use the confidence value to select the most likely candidate. If there are other unknowns that your algorithm needs as input parameters (circle size etc.), just iterate over the possible values and make sure your later stages return confidence values.