Difference between pixel-based and frame-based methods - C++

I am working on video frames using OpenCV. My question might be rather basic, but I want to clarify it first.
There are plenty of pixel-based methods available in OpenCV, but can I change them into frame-based ones?
To me they seem similar, since a whole frame is also stored in one matrix, and I can read that matrix from beginning to end to process it. So, for instance, to find an average value, the only thing I should change is to compute the average over all pixels of one frame, instead of looking at one pixel across several frames and deciding that pixel's average based on them. But when it comes to building models like GMM, I cannot differentiate the two.
Could someone help explain it clearly?
Can I use or change OpenCV's GMM for global usage?

I think the following is a good way to frame the problem, even though in both cases you are working with pixels.
Pixel-based methods: The information of the pixel(x,y) in the resulting processed image is the result of applying transformations to the pixel(x,y) of the original image.
Region-based methods: The pixels in the original image are grouped into contiguous regions and transformations are applied to the whole region. Example: the resulting pixel(x,y) is the mean of a patch around the original pixel (x,y).
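To make the distinction concrete, here is a minimal OpenCV/C++ sketch (the file name and parameters are placeholders): a pixel-based operation transforms each pixel independently, while a region-based operation such as a box blur replaces each pixel with the mean of a surrounding patch.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE); // hypothetical input
    if (frame.empty()) return 1;

    // Pixel-based: each output pixel depends only on the same input pixel.
    cv::Mat brightened;
    frame.convertTo(brightened, -1, 1.0, 30.0); // out(x,y) = in(x,y) + 30

    // Region-based: each output pixel is the mean of a 5x5 patch around it.
    cv::Mat smoothed;
    cv::blur(frame, smoothed, cv::Size(5, 5));

    return 0;
}
```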

Related

OpenCV - PCA analysis on BW image - area vs shape - is there already an implementation?

The shape of an object is detected on a bw image. The object is a black continuous shape, the background is white.
We use PCA (http://docs.opencv.org/3.1.0/d1/dee/tutorial_introduction_to_pca.html) to get the object direction and align the object. Currently the shape itself (the points on the contour) is the input to the opencv PCA implementation. This usually works very well. But from time to time there is small dirt on the object border, causing the shape to pass around the dirt. This causes more points and more weight on one side, slightly turning the object.
Idea: Instead of the contour, we use the area of the object as input for our PCA analysis. The issue there is that checking every point for whether it is inside the contour, and then using it for PCA, slows the application down. This part will be about 52352 times slower.
New Approach: We take random points in the image, check if they are inside the shape and if so, use them for our PCA. We have to see if we can get the consistent quality needed from this approach.
Is there already a similar implementation in OpenCV which uses the area instead of the shape?
Another approach would be to put a mesh over the object and use the mesh points inside the object for PCA.
Is there already something similar available that one can just use, or does one need to quickly implement something like this?
Going for straight lines around the object isn't an option.
Given that we have received very limited information about your problem (posting images would help a lot) and you do not seem to know the probability density function of the noise, your best bet is to consider the noise to be Gaussian.
As such, and following your intuition, my suggested approach is to take a few (by a few I mean statistically relevant but not raising the computation time that much) random points that lie inside the object and compute the PCA.
Repeat this procedure in an iterative loop and store somewhere the resulting rotation angles you get from the application of the PCA to the object shape.
Stop once you have enough points and compute the mean of the rotation angles: this is a decent estimate of the true angle. Also compute the standard deviation to get a measure of the quality of your estimation. As for "enough points", ~30 samples are usually considered representative of the underlying population according to the central limit theorem.
If you want, you can improve on this approach in many ways, for example doing robust estimation of the true angle once you have collected enough points. It all depends on the data you have at hand...take my suggestion just as a starting point.
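As a rough sketch of that sampling idea (not a definitive implementation; the contour, image size, sample count, and seed are assumptions), one could pick random points, keep those inside the contour via pointPolygonTest, and run cv::PCA on them:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <random>

// Estimate the object orientation from random points inside the contour.
// 'contour' and 'imageSize' are assumed to come from your existing pipeline.
double orientationFromAreaSamples(const std::vector<cv::Point>& contour,
                                  cv::Size imageSize, int numSamples = 500) {
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> dx(0, imageSize.width - 1);
    std::uniform_int_distribution<int> dy(0, imageSize.height - 1);

    std::vector<cv::Point2f> inside;
    int attempts = 0;
    while ((int)inside.size() < numSamples && attempts++ < 100000) {
        cv::Point2f p((float)dx(rng), (float)dy(rng));
        if (cv::pointPolygonTest(contour, p, false) > 0) // strictly inside the shape
            inside.push_back(p);
    }

    // Pack the samples into a data matrix (one point per row) and run PCA.
    cv::Mat data((int)inside.size(), 2, CV_32F);
    for (int i = 0; i < (int)inside.size(); ++i) {
        data.at<float>(i, 0) = inside[i].x;
        data.at<float>(i, 1) = inside[i].y;
    }
    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW);

    // Angle of the first eigenvector = dominant direction of the sampled area.
    return std::atan2(pca.eigenvectors.at<float>(0, 1),
                      pca.eigenvectors.at<float>(0, 0));
}
```

Calling this in a loop and averaging the returned angles, as described above, would then give the estimate and its standard deviation.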
There are a few parameters that you could change which may improve your system.
The first is the threshold you use to binarize your image. I don't know what your application is about, but you could use other color systems, or normalize your image by chromaticity, and after that apply the new threshold.
Another aspect is to exclude shapes (contours) whose area is bigger or smaller than what you are expecting.
In addition, you may apply a blur filter before detecting contours.
I don't know what the noise looks like, but when you say "small dirt" I think it might be only a few pixels, a lot smaller than the object itself, though possibly attached to it. To reduce this noise it might be possible to perform an opening (morphology) on the binary image.
http://docs.opencv.org/2.4/doc/tutorials/imgproc/opening_closing_hats/opening_closing_hats.html
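A minimal sketch of such an opening, assuming you already have the binarized image (the kernel size is just a starting point you would tune):

```cpp
#include <opencv2/opencv.hpp>

// Remove small specks from the binary mask before contour extraction / PCA.
cv::Mat removeSmallNoise(const cv::Mat& binary) {
    // 3x3 elliptical structuring element; the size is something to experiment with.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::Mat opened;
    cv::morphologyEx(binary, opened, cv::MORPH_OPEN, kernel);
    return opened;
}
```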

Built-in function for interpolating single pixels and small blobs

Problem
Is there a built-in function for interpolating single pixels?
Given a normal image as Mat and a Point, e.g. an anomaly of the sensor or an outlier, is there some function to repair this Point?
Furthermore, if I have more than one Point connected (let's say a blob with an area smaller than 10x10), is there a possibility to fix them too?
Attempts, but not really solutions
It seems that interpolation is implemented for the geometric transformations, including resizing images and extrapolating pixels outside of the image with borderInterpolate, but I haven't found a way to do it for single pixels or small clusters of pixels.
A solution with medianBlur, as suggested here, does not seem appropriate as it changes the whole image.
Alternative
If there isn't a built-in function, my idea would be to look at all 8-connected surrounding pixels which are not part of the blob and calculate their mean or weighted mean. Done iteratively, all missing or erroneous pixels should be filled. But this method would depend on the order in which the pixels are corrected. Are there other suggestions?
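For illustration, a rough sketch of that neighbour-averaging idea (assuming a greyscale CV_8UC1 Mat and a mask that is non-zero where pixels must be repaired; both are placeholders for your actual data):

```cpp
#include <opencv2/opencv.hpp>

// Iteratively fill masked pixels with the mean of their valid 8-neighbours.
// Only neighbours that were already valid at the start of a pass are used,
// which reduces the dependence on traversal order.
void fillByNeighbourMean(cv::Mat& gray, const cv::Mat& maskIn) {
    cv::Mat mask = maskIn.clone();
    bool changed = true;
    while (changed && cv::countNonZero(mask) > 0) {
        changed = false;
        cv::Mat nextMask = mask.clone();
        for (int y = 0; y < gray.rows; ++y)
            for (int x = 0; x < gray.cols; ++x) {
                if (!mask.at<uchar>(y, x)) continue; // pixel is already valid
                int sum = 0, count = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int ny = y + dy, nx = x + dx;
                        if (ny < 0 || nx < 0 || ny >= gray.rows || nx >= gray.cols) continue;
                        if (mask.at<uchar>(ny, nx)) continue; // neighbour still invalid
                        sum += gray.at<uchar>(ny, nx);
                        ++count;
                    }
                if (count > 0) { // at least one valid neighbour: fill and mark as repaired
                    gray.at<uchar>(y, x) = (uchar)(sum / count);
                    nextMask.at<uchar>(y, x) = 0;
                    changed = true;
                }
            }
        mask = nextMask;
    }
}
```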
Update
Here is an image to illustrate the problem. On the left is the original image with a contour marking the pixels to fix; the right side shows the fixed pixels. I hope to find a more sophisticated algorithm to fix the pixels.
The built-in function inpaint of OpenCV does the desired interpolation of chosen pixels. Simply create a mask with all pixels to be repaired.
See the OpenCV 3.2 documentation for inpaint (description and function reference).
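A minimal usage sketch (file names, the marked mask locations, and the inpaint radius are placeholders):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp> // cv::inpaint

int main() {
    cv::Mat image = cv::imread("image.png"); // hypothetical input
    if (image.empty()) return 1;

    // Mark the pixels (or small blobs) to be repaired with non-zero mask values.
    cv::Mat mask = cv::Mat::zeros(image.size(), CV_8UC1);
    mask.at<uchar>(120, 200) = 255;                               // single bad pixel
    cv::rectangle(mask, cv::Rect(50, 60, 8, 8), 255, cv::FILLED); // small blob

    cv::Mat repaired;
    cv::inpaint(image, mask, repaired, 3 /*radius*/, cv::INPAINT_TELEA);
    cv::imwrite("repaired.png", repaired);
    return 0;
}
```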

Which object recognition algorithm should I use?

I am pretty new to CV, so forgive my stupid questions...
What I want to do:
I want to recognize an RC plane in live video (for now it's only a recorded video).
What I have done so far:
Differences between frames
Convert it to grey scale
GaussianBlur
Threshold
findContours
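For reference, one iteration of the steps listed above might look roughly like this (the threshold value and blur kernel size are guesses to be tuned):

```cpp
#include <opencv2/opencv.hpp>

// One pass of the pipeline: frame difference -> grey -> blur -> threshold -> contours.
std::vector<std::vector<cv::Point>> detectMovingObjects(const cv::Mat& prevFrame,
                                                        const cv::Mat& currFrame) {
    cv::Mat diff, gray, blurred, binary;
    cv::absdiff(prevFrame, currFrame, diff);                     // differences between frames
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);                 // convert to grey scale
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);           // GaussianBlur
    cv::threshold(blurred, binary, 25, 255, cv::THRESH_BINARY);   // Threshold
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE); // findContours
    return contours;
}
```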
Here are some example frames:
But there are also frames with noise, so there are more objects in the frame.
I thought I could do something like this:
Use some object recognition algorithm on every contour that has been found, and compute the feature vector only for each of these bounding rectangles.
Is it possible to compute SURF/SIFT/... only for a specific patch (smaller part) of the image?
Since it is important that the algorithm can process real-time video, I think this will only be possible if I don't look at the whole image all the time. Or maybe decide, for example, that if there are more than 10 bounding rectangles I check the whole image instead of every rectangle.
Then I will look at the next frame and try to match my feature vector with the previous one. That way I will be able to trace my objects. Once these objects cross the red line in the middle of the picture it will trigger another event. But that's not important here.
I need to make sure that not every object which is crossing or behind that red line triggers that event. So there need to be at least 2 or 3 consecutive frames which contain that object, and only if it then crosses should the event be triggered.
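(For illustration, computing descriptors only inside a detected bounding rectangle might look roughly like this; ORB is used here as a freely available stand-in for SIFT/SURF, and all names are placeholders.)

```cpp
#include <opencv2/opencv.hpp>

// Compute ORB features only inside one bounding rectangle of the frame.
void describePatch(const cv::Mat& frame, const cv::Rect& box,
                   std::vector<cv::KeyPoint>& keypoints, cv::Mat& descriptors) {
    cv::Mat patch = frame(box); // ROI: just a view on the sub-image, no copy
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    orb->detectAndCompute(patch, cv::noArray(), keypoints, descriptors);
    // Shift keypoints back into full-image coordinates for matching across frames.
    for (auto& kp : keypoints) kp.pt += cv::Point2f((float)box.x, (float)box.y);
}
```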
There are so many variations of object recognition algorithms, I am a bit overwhelmed.
SIFT/SURF/ORB/... you get what I am saying.
Can anyone give me a hint which one I should choose, or whether what I am doing even makes sense?
Assuming the plane location doesn't change a lot from one frame to the next, I think you should look at object tracking instead of trying to estimate the location independently in each frame.
http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html
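A rough sketch of per-frame tracking with pyramidal Lucas-Kanade optical flow from that module (the frame and point variables are placeholders; this is just one of the available tracking approaches):

```cpp
#include <opencv2/opencv.hpp>

// Track points from the previous frame into the current one.
// Both frames are assumed to be single-channel greyscale images.
std::vector<cv::Point2f> trackPoints(const cv::Mat& prevGray, const cv::Mat& currGray,
                                     const std::vector<cv::Point2f>& prevPts) {
    std::vector<cv::Point2f> currPts;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

    // Keep only the points that were tracked successfully.
    std::vector<cv::Point2f> tracked;
    for (size_t i = 0; i < currPts.size(); ++i)
        if (status[i]) tracked.push_back(currPts[i]);
    return tracked;
}
```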

How to Detect Objects in the Image without using any library in C++?

I am writing an application in C++ that requires a little bit of image processing. Since I am completely new to this field I don't quite know where to begin.
Basically I have an image that contains a rectangle with several boxes. What I want is to be able to isolate that rectangle (x, y, width, height) as well as get the center coordinates of each of the boxes inside (18 total).
I was thinking of using a simple for-loop to loop through the pixels in the image until I find a pattern but I was wondering if there is a more efficient approach. I also want to see if I can do it efficiently without using big libraries like OpenCV.
Here are a couple example images, any help would be appreciated:
Also, what are some good resources where I could learn more about image processing like this?
The detection algorithm here can be fairly simple. Your box-of-squares (BOS) is always aligned with the edge of the image, and has a simple structure. Here's how I'd approach it.
1. Choose a colorspace. Assume RGB is OK for now, but it may work better in something else.
2. For each line:
2.1. For each pixel, calculate the magnitude difference between the pixel and the pixel immediately below it. The magnitude difference is simply sqrt((X-x)^2 + (Y-y)^2 + (Z-z)^2), where X,Y,Z are the color coordinates of the first pixel and x,y,z are the color coordinates of the pixel below it. For RGB, XYZ = RGB of course.
2.2. Calculate the maximum run length of consecutive difference magnitudes that are below a certain threshold magThresh. You may also choose a forgiving version of this: the maximum run length, but allowing intrusions up to intrLen pixels long that must be followed by runs up to contLen pixels long. This is to take care of possible line-to-line differences at the edges of the squares.
3. Find the largest set of consecutive lines whose maximum run lengths are above minWidth and below maxWidth.
Thus you've found the lines which contain the box, and by recalculating data in 2.1 above, you'll get to know where the boxes are in horizontal coordinates.
Detecting box edges is done by repeating the same thing but scanning left-to-right within the box. At that point you'll have approximate box centroids that take no notice of bleeding between pixels.
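A rough plain-C++ sketch of steps 2.1/2.2 above (the pixel layout, struct, and parameter names are assumptions, not part of any library):

```cpp
#include <cmath>
#include <vector>

struct Pixel { unsigned char r, g, b; };

// Image stored row-major: pixels[y * width + x].
double colorDiff(const Pixel& a, const Pixel& b) {
    double dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return std::sqrt(dr * dr + dg * dg + db * db); // step 2.1: magnitude difference
}

// Step 2.2: longest run of pixels in row y whose difference to the pixel below
// stays under magThresh (the simple version, without the intrusion allowance).
int maxRunLength(const std::vector<Pixel>& pixels, int width, int height,
                 int y, double magThresh) {
    int best = 0, run = 0;
    for (int x = 0; x < width && y + 1 < height; ++x) {
        double d = colorDiff(pixels[y * width + x], pixels[(y + 1) * width + x]);
        if (d < magThresh) { ++run; if (run > best) best = run; }
        else run = 0;
    }
    return best;
}
```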
This can all be accomplished by repeatedly running the image through various convolution kernels followed by thresholding, I'd think. The good thing is that both of those operations have very fast library implementations. You do not want to reimplement them by hand; it will likely be significantly slower.
If you insist on doing it yourself (personally I'd use OpenCV, it's industrial-strength and free), you're going to need an edge detection algorithm first. There are a good few out there on the internet, but be prepared for some frightening mathematics...
Many involve iterating over each pixel, lifting it and its neighbours' values into a matrix, and then convolving with a kernel matrix. Be aware that this has to be done for every pixel (in principle, though, in your case you can stop at the first discovered rectangle) and for each colour channel - so it would be highly advisable to push this onto the GPU.
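For illustration, a bare-bones single-channel 3x3 convolution without any library could look roughly like this (the image layout and names are assumptions); the kernel in the comment is the usual horizontal Sobel kernel used for edge detection:

```cpp
#include <cstdlib>
#include <vector>

// Convolve a single-channel image (row-major, values 0..255) with a 3x3 kernel.
// Border pixels are left untouched to keep the sketch short.
std::vector<int> convolve3x3(const std::vector<int>& img, int width, int height,
                             const int kernel[3][3]) {
    std::vector<int> out(img);
    for (int y = 1; y + 1 < height; ++y)
        for (int x = 1; x + 1 < width; ++x) {
            int sum = 0;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    sum += kernel[ky + 1][kx + 1] * img[(y + ky) * width + (x + kx)];
            out[y * width + x] = std::abs(sum); // e.g. edge magnitude for a Sobel kernel
        }
    return out;
}

// Example kernel: horizontal Sobel for edge detection.
// const int sobelX[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
```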

Image Segmentation with boost graph

I recently discovered boost::graph.
Since I have never used graph theory before, I was wondering how I would solve the following problem with boost::graph.
Let's say I've got a simple (greyscale) 2D image and I'd like to extract regions from it which satisfy a specific criterion, e.g. pixel value > threshold.
Let's say above the threshold is white, below is black.
How would I implement that?
My first idea was to add a single vertex to the graph for every pixel in the image.
And then connect every pixel vertex to its neighbours with the same colour (white/black).
And then I could extract regions with the connected_components() function.
Or is it more effective to connect all neighbouring pixels and encode the border information into the edge (border edge, non-border edge)?
Actually there are some interesting graph-theory based segmentation algorithms out there, called graph-cut segmentation. They use colored edges to encode differential information between neighboring pixels.
For your very simple segmentation, though, using graphs at all seems like overkill to me.
I would definitely do the former where you create a vertex for each pixel, and then connect pixels (or adjacent pixels depending on what you are trying to do) that share your criterion. That way you could do a "pixel-walk" to find all the areas of your image (or at least adjacent areas) that satisfy a specific criterion.
In order to find the first pixel that fits your criterion in order to start the walking sequence, there are a couple of methods you could use: 1) a random pick of pixels from the image, 2) save a list of pointers to pixels that fit your different criteria (you only need one pixel for each criterion), or 3) save some type of gradient information on the image so that by picking just one pixel from the image, you can then search along the gradient flows to find the pixel you're looking for (i.e., the gradients would give you directional information on where you need to pick your next pixel to get closer to the desired criterion). I would think choices 1 or 2 would be easiest to implement.
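A rough boost::graph sketch of the vertex-per-pixel idea with connected_components (4-connectivity and a simple threshold criterion; the image layout and names are placeholders):

```cpp
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/connected_components.hpp>
#include <vector>

// Greyscale image stored row-major: img[y * width + x], values 0..255.
// Returns a component label for every pixel.
std::vector<int> labelRegions(const std::vector<unsigned char>& img,
                              int width, int height, unsigned char threshold) {
    typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS> Graph;
    Graph g(width * height); // one vertex per pixel

    auto above = [&](int x, int y) { return img[y * width + x] > threshold; };

    // Connect 4-neighbours that fall on the same side of the threshold.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            if (x + 1 < width && above(x, y) == above(x + 1, y))
                boost::add_edge(y * width + x, y * width + x + 1, g);
            if (y + 1 < height && above(x, y) == above(x, y + 1))
                boost::add_edge(y * width + x, (y + 1) * width + x, g);
        }

    // Each pixel gets a component label; white and black regions come out separate.
    std::vector<int> component(boost::num_vertices(g));
    boost::connected_components(g, component.data());
    return component;
}
```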
Hope this helps,
Jason