Remove artifacts/noise from a depth analyzer - C++

Given two images, I find the depth by scanning each row and taking differences. Does anybody know how to remove the noise?
For example, the expected output is on the left and mine is on the right:
http://sphotos-a.xx.fbcdn.net/hphotos-prn1/28905_10151279071291130_752649318_n.jpg
Does anybody have any coding tips to get rid of some of that noise?

I'm assuming this is caused by the hardware you are using.
You could average out the pixel colors based on neighboring pixels. Iterate through each column/row and adjust your color based on the surrounding colors. Do several passes on the entire image to give a greater effect.
This wouldn't get rid of the noise completely but it would give a softer image.
Additionally, if you are processing this on a graphics card, you might consider writing a shader; that would dramatically increase the speed.
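A minimal sketch of that averaging idea, assuming an 8-bit single-channel depth map and using OpenCV's box filter (the 3x3 kernel size and the number of passes are illustrative knobs, not tuned values):

    #include <opencv2/opencv.hpp>

    // Soften depth-map noise by repeatedly averaging each pixel with its
    // 3x3 neighborhood; more passes give a stronger (softer) effect.
    cv::Mat softenDepth(const cv::Mat& depth, int passes = 3)
    {
        cv::Mat a = depth.clone(), b;
        for (int i = 0; i < passes; ++i) {
            cv::blur(a, b, cv::Size(3, 3)); // mean of the surrounding pixels
            cv::swap(a, b);
        }
        return a;
    }

As noted above, this softens rather than removes the noise; a cv::medianBlur pass is often better at killing isolated outliers while preserving edges.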

Related

Measuring on a blurred image with OpenCV and C++

I'm using a 3D scanner to scan rectangular objects and measure them (width and length). But, due to the object's position with respect to the sensor, and also at the vertices of the rectangle, blur appears on some sides. This causes the measurement to lack accuracy.
What kind of preprocessing (OpenCV with C++) do you suggest in order to find the correct contour? Do you think there is a better solution than preprocessing? Note that the intensity of a pixel is a translation of its height with respect to the zero plane.
Here you have an example: a rubber eraser in three different places. As you can see, blur appears on one side depending on the placement. The real size of the rubber is (in the image) 179x182 px.
Thank you!
EDIT: I forgot to say that the blur affects different sides depending on the rubber's position with respect to the horizontal axis (middle row).
I can't see how you can ever measure it accurately if the image is blurred. I don't think this is a C++ question; it's a mechanical one.
You need a model/concept of the reason for the blur. Without one, there is no computation that comes closer to the truth. With such a model you may be able to adjust your computations. If, for example, your idea is that the blur is caused by the object not being orthogonal to the sensor, so that data from another side of the object leaks in, you might want to cut off the object at the maximum value (closest to the sensor). For this you could use a threshold close to the maximum value. Please note that this is just an example.
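A minimal sketch of that cut-off idea, assuming an 8-bit single-channel depth image where brighter means closer to the sensor (the 0.9 fraction is an illustrative parameter):

    #include <opencv2/opencv.hpp>

    // Keep only the pixels close to the maximum height value (the top face),
    // discarding data that leaks in from the object's sides.
    cv::Mat cutAtMaximum(const cv::Mat& depth, double fraction = 0.9)
    {
        double minVal, maxVal;
        cv::minMaxLoc(depth, &minVal, &maxVal);

        cv::Mat topFace;
        cv::threshold(depth, topFace, fraction * maxVal, 255, cv::THRESH_BINARY);
        return topFace; // binary mask of the face closest to the sensor
    }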
You will need to de-blur your image before doing the measurement. It all depends on how the blurring happened, so somehow you will need to estimate it. The simplest model is called shift-invariant blur; take a look at this MATLAB link and this link
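For the shift-invariant case, here is a minimal Wiener-deconvolution sketch in OpenCV C++. The blur kernel and the noise-to-signal ratio nsr are assumptions you would have to estimate yourself; this illustrates the idea rather than a tuned restoration:

    #include <opencv2/opencv.hpp>

    // Wiener deconvolution: restored = F^-1( conj(H) * F(image) / (|H|^2 + nsr) ),
    // where H is the DFT of the (known, CV_32F) blur kernel.
    cv::Mat wienerDeconvolve(const cv::Mat& blurred, const cv::Mat& kernel, double nsr)
    {
        cv::Mat img;
        blurred.convertTo(img, CV_32F);

        // Zero-pad the kernel to the image size before taking its DFT.
        cv::Mat psf = cv::Mat::zeros(img.size(), CV_32F);
        kernel.copyTo(psf(cv::Rect(0, 0, kernel.cols, kernel.rows)));

        cv::Mat IMG, PSF;
        cv::dft(img, IMG, cv::DFT_COMPLEX_OUTPUT);
        cv::dft(psf, PSF, cv::DFT_COMPLEX_OUTPUT);

        cv::Mat planes[2];
        cv::split(PSF, planes);
        cv::Mat denom = planes[0].mul(planes[0]) + planes[1].mul(planes[1]) + nsr;

        cv::mulSpectrums(IMG, PSF, IMG, 0, true); // IMG * conj(PSF)
        cv::split(IMG, planes);
        planes[0] /= denom;
        planes[1] /= denom;
        cv::merge(planes, 2, IMG);

        cv::Mat restored;
        cv::idft(IMG, restored, cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
        return restored; // note: circularly shifted by the kernel's anchor
    }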

Noise reduction OpenCV skindetection sample

I'm working on a face detection app using segmentation of skin pixels with a predefined skin model in YCrCb space.
I'm loosely basing my algorithms on this report: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=767122&tag=1, by Douglas Chai and King N. Ngan.
I first segment out all skin pixels (see left).
After that I perform some calculations to reduce the noise (see steps below). This results in a filtered bitmap 1/8th the size of the original. Ideally it would be noise-free in both the face and background areas, but it isn't. I have already tried to reduce the noise by using my density map and then checking neighbouring 3x3 pixel areas, eroding/dilating pixel values depending on their neighbours. Then I resize this bitmap and apply the result as a mask on the original image (see the right image for the result; ignore my censorship).
My question is, what methods do you recommend for getting rid of the noise?
Also, are there any good methods to get smoother contours? Ideally I would not like to use "find biggest contour and flood fill"; preferably something more sophisticated.
There also seems to be a bit of a displacement of the resized mask (it cuts off a bit too much on the right side of my face, and shows a bit too much on the left side). What could be causing this?
The easiest way to get smoother contours is to interpolate your data to a higher resolution using an interpolation scheme. You can look in OpenCV; that will result in smoother transitions between points.
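A minimal sketch of that suggestion, assuming the small mask is an 8-bit single-channel cv::Mat (cubic interpolation and the 127 re-threshold are illustrative choices):

    #include <opencv2/opencv.hpp>

    // Upscale the 1/8-size mask with cubic interpolation, then re-threshold
    // so the full-size mask has smoother contours than nearest-neighbour scaling.
    cv::Mat upscaleMask(const cv::Mat& smallMask, const cv::Size& fullSize)
    {
        cv::Mat resized, smooth;
        cv::resize(smallMask, resized, fullSize, 0, 0, cv::INTER_CUBIC);
        cv::threshold(resized, smooth, 127, 255, cv::THRESH_BINARY);
        return smooth;
    }

Regarding the displacement: it may be worth checking whether your resize treats pixels as cell centers or corners; a half-pixel error at 1/8 scale becomes a four-pixel shift at full size.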
I hope it will help a little bit. Good luck.

Pixel shader in Direct2D render error along the middle

I've created a pixel shader for Direct2D that blurs along edges with an alpha channel lower than 1.0.
For every pixel I sample a blurRadius of pixels up, down, left and right. In the middle (vertically and horizontally) some render errors occur. These go away when I only sample pixels backward of the pixel I'm processing. I think my input image (size 1920x1080) is chopped up and I'm sampling outside the piece the pixel is in.
Does anyone have a clue what is going on, if my assumption is right and what to do about it? See below for the resulting image. The dark lines are not supposed to be there and go away when I only sample surrounding pixels at the left or top.
I finally fixed this and I hope this can help others.
I found out that the problem only occurred when the input size differed from the output size. I had created my effect based on the sample effect from Microsoft, which can be found here. The problem is that it maps pInputRects[0] to the pOutputRect in MapOutputRectToInputRects, which somehow messes up the sampling in the effect. The fix is to set pInputRects[0] to the pInputRects[0] from the MapInputRectsToOutputRect function.
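A hedged sketch of what that fix might look like, based on the ID2D1Transform interface; the class name BlurTransform and the m_inputRect member (used to carry the rectangle between the two callbacks) are hypothetical:

    // Called first: cache the real input rectangle for later.
    IFACEMETHODIMP BlurTransform::MapInputRectsToOutputRect(
        const D2D1_RECT_L* pInputRects, const D2D1_RECT_L* pInputOpaqueSubRects,
        UINT32 inputRectCount, D2D1_RECT_L* pOutputRect,
        D2D1_RECT_L* pOutputOpaqueSubRect)
    {
        if (inputRectCount != 1) return E_INVALIDARG;
        m_inputRect = pInputRects[0];           // hypothetical cached member
        *pOutputRect = pInputRects[0];
        *pOutputOpaqueSubRect = D2D1_RECT_L{};  // no guaranteed opaque region
        return S_OK;
    }

    IFACEMETHODIMP BlurTransform::MapOutputRectToInputRects(
        const D2D1_RECT_L* pOutputRect, D2D1_RECT_L* pInputRects,
        UINT32 inputRectCount) const
    {
        if (inputRectCount != 1) return E_INVALIDARG;
        // The sample assigns *pOutputRect here, which breaks sampling when
        // the input and output sizes differ; report the cached input rect.
        pInputRects[0] = m_inputRect;
        return S_OK;
    }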

Difference between pixel based and frame based methods

I am working on video frames using OpenCV. My question might be low-level, but I want to clarify it first.
There are plenty of pixel-based methods available in OpenCV, but can I change them into frame-based ones?
To me they seem similar, since the whole frame is also stored in one matrix, and I read that matrix from beginning to end to handle it. For instance, to find an average value, the only thing I should change is to compute the total average over all pixels of one frame, instead of looking at one pixel across several frames and deciding that pixel's average from them. But when it comes to building models like GMM, I cannot differentiate the two.
Could someone help explain it clearly?
Can I use or change OpenCV's GMM for global usage?
I think this is a good definition for the problem, even though you are working with pixels in both cases:
Pixel-based methods: The information of the pixel(x,y) in the resulting processed image is the result of applying transformations to the pixel(x,y) of the original image.
Region-based methods: The pixels in the original image are grouped into contiguous regions, and transformations are applied to the whole region. Example: the resulting pixel(x,y) is the mean of a patch around the original pixel(x,y).
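A minimal sketch of the contrast, assuming 8-bit grayscale frames (the two transforms are just illustrative examples):

    #include <opencv2/opencv.hpp>

    // Pixel-based: output(x,y) depends only on input(x,y).
    cv::Mat invertPixels(const cv::Mat& frame)
    {
        return 255 - frame; // per-pixel transformation
    }

    // Region-based: output(x,y) is the mean of a 3x3 patch around input(x,y).
    cv::Mat patchMean(const cv::Mat& frame)
    {
        cv::Mat out;
        cv::blur(frame, out, cv::Size(3, 3));
        return out;
    }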

How to detect points which are drastically different from their neighbours

I'm doing some image processing and am trying to keep track of points similar to those circled below: a very dark spot a couple of pixels in diameter, with all neighbouring pixels being bright. I'm sure there are algorithms and methods designed for this, but I just don't know what they are. I don't think edge detection would work, as I only want the small spots. I've read a little about morphological operators; could these be a suitable approach?
Thanks
Loop over each pixel in your image. When you are done considering a pixel, mark it as "used" (change it to some sentinel value, or keep this data in a separate array parallel to the image).
When you come across a dark pixel, perform a flood-fill on it, marking all those pixels as "used", and keep track of how many pixels were filled in. During the flood-fill, make sure that any pixel you consider which isn't dark is sufficiently bright.
After the flood-fill, you'll know the size of the dark area you filled in, and whether the border of the fill was exclusively bright pixels. Now, continue the original loop, skipping "used" pixels.
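A minimal sketch of that loop, assuming an 8-bit grayscale image and using OpenCV's cv::floodFill with a mask as the "used" array (the darkness threshold and maximum blob area are illustrative):

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point> findDarkSpots(const cv::Mat& gray,
                                         uchar darkThresh = 50, int maxArea = 25)
    {
        cv::Mat dark = gray < darkThresh; // 255 where the pixel is dark
        // floodFill's mask must be 2 pixels larger in each dimension;
        // it doubles as the "used" marker from the description above.
        cv::Mat mask = cv::Mat::zeros(gray.rows + 2, gray.cols + 2, CV_8U);

        std::vector<cv::Point> spots;
        for (int y = 0; y < dark.rows; ++y)
            for (int x = 0; x < dark.cols; ++x)
                if (dark.at<uchar>(y, x) && mask.at<uchar>(y + 1, x + 1) == 0) {
                    int area = cv::floodFill(dark, mask, cv::Point(x, y),
                                             cv::Scalar(255), nullptr,
                                             cv::Scalar(0), cv::Scalar(0),
                                             4 | (255 << 8));
                    if (area <= maxArea)
                        spots.push_back(cv::Point(x, y)); // small isolated dark blob
                }
        return spots;
    }

Checking that the border of the fill is exclusively bright (rather than merely not dark) is left out here for brevity.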
How about some kind of median filtering? Sample values from a 3x3 grid (or some other suitable size) around the pixel and set the value of the pixel to the median of those 9 pixels.
Then if most of the neighbours are bright the pixel becomes bright etc.
Edit: After some thinking, I realized that this will not detect the outliers; it will remove them. So this is not the solution the original poster was asking for.
Are you sure that you don't want to do an edge detection-like approach? It seems like comparing the current pixel to the average value of the neighborhood pixels would do the trick. (I would evaluate various neighborhood sizes to be sure.)
Personally, I like this manual of corner detection algorithms.
Also, you can work out a naive corner detection algorithm by exploiting the idea that an isolated pixel is a pixel through which the intensity changes drastically in every direction. It is just a starting idea to begin from, before moving on to better algorithms.
I can think of these methods that might work with some tweaking of parameters:
Adaptive thresholds
Morphological operations
Corner detection
I'm actually going to suggest simple template matching for this, if all your features are of roughly the same size.
Just copy-paste the pixels of one (or a few) features to create a few templates, and then use Normalized Cross Correlation or any other score that OpenCV provides in its template matching routines to find similar regions. In the result, detect all the maximal peaks of the response (OpenCV has a function for this too), and those are your feature coordinates.
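A minimal sketch of that approach using cv::matchTemplate; the score threshold is illustrative, and a simple threshold on the response stands in for proper peak detection:

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point> matchFeature(const cv::Mat& image, const cv::Mat& templ,
                                        float scoreThresh = 0.9f)
    {
        cv::Mat response;
        cv::matchTemplate(image, templ, response, cv::TM_CCORR_NORMED);

        // Collect every response above the threshold as a candidate peak;
        // real code would suppress non-maximal neighbours as well.
        std::vector<cv::Point> peaks;
        for (int y = 0; y < response.rows; ++y)
            for (int x = 0; x < response.cols; ++x)
                if (response.at<float>(y, x) >= scoreThresh)
                    peaks.push_back(cv::Point(x, y));
        return peaks;
    }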
Blur (3x3) a copy of your image, then diff it against the original. The pixels with the highest values are the ones most different from their neighbors. This could be used as an edge detection algorithm, but points are like super-edges, so set your threshold higher.
What a single off pixel looks like (assume surrounding pixels are all 1):

    original    blurred             diff
    1, 1, 1     8/9, 8/9, 8/9       1/9, 1/9, 1/9
    1, 0, 1     8/9, 8/9, 8/9       1/9, 8/9, 1/9
    1, 1, 1     8/9, 8/9, 8/9       1/9, 1/9, 1/9

What an edge looks like (assume surrounding pixels are the same as their closest neighbor):

    original    blurred             diff
    1, 0, 0     6/9, 3/9, 0/9       3/9, 3/9, 0/9
    1, 0, 0     6/9, 3/9, 0/9       3/9, 3/9, 0/9
    1, 0, 0     6/9, 3/9, 0/9       3/9, 3/9, 0/9
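A minimal sketch of this blur-and-diff approach, assuming an 8-bit grayscale image (the threshold value is illustrative):

    #include <opencv2/opencv.hpp>

    cv::Mat detectOutliers(const cv::Mat& gray, double thresh = 60.0)
    {
        cv::Mat blurred, diff, mask;
        cv::blur(gray, blurred, cv::Size(3, 3)); // 3x3 box blur of a copy
        cv::absdiff(gray, blurred, diff);        // pixels unlike their neighbors score high
        cv::threshold(diff, mask, thresh, 255, cv::THRESH_BINARY);
        return mask; // white where a pixel differs strongly from its neighborhood
    }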
It's been a few years since I did any image processing, but I would probably start by converting to a binary representation. It doesn't seem like you're overly interested in the grey middle values, just the very dark/very light regions, so get rid of all the grey. At that point, various morphological operations can accentuate the points you're interested in. Opening and closing are pretty easy to implement and can yield pretty nice results, leaving you with a field of black everywhere except the points you're interested in.
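A minimal sketch of that idea in OpenCV terms, assuming an 8-bit grayscale image; the 7x7 structuring element is an illustrative size chosen to be larger than the spots:

    #include <opencv2/opencv.hpp>

    cv::Mat isolateDarkSpots(const cv::Mat& gray)
    {
        cv::Mat binary, opened, spots;
        // Dark pixels become foreground (white) via an inverted Otsu threshold.
        cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
        // Opening with an element larger than the spots erases them, so
        // subtracting the opened image leaves only the small isolated points.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7));
        cv::morphologyEx(binary, opened, cv::MORPH_OPEN, kernel);
        cv::subtract(binary, opened, spots);
        return spots;
    }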
Have you tried extracting connected components using cvFindContours? First threshold the image (using Otsu's method, say) and then extract each contour. Since the spots you wish to track are (from what I see in your image) somewhat isolated from the neighborhood, they will show up as separate contours. Now if we compute the area of the bounding rectangle of each contour and filter out the larger ones, we'd be left with only small dots separate from dark neighbors.
As suggested earlier, a bit of morphological tinkering before the contour separation should yield good results.
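A minimal sketch of that pipeline with the modern C++ API (the maximum bounding-box side is an illustrative filter value):

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Rect> smallDarkComponents(const cv::Mat& gray, int maxSide = 10)
    {
        cv::Mat binary;
        // Otsu's method picks the threshold; inverted so dark spots are foreground.
        cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Rect> spots;
        for (const auto& c : contours) {
            cv::Rect box = cv::boundingRect(c);
            if (box.width <= maxSide && box.height <= maxSide)
                spots.push_back(box); // small isolated component: likely a spot
        }
        return spots;
    }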