Which filter can best remove horizontal/vertical banding noise (HVBN) from an image? - C++

I would like to know if, among the OpenCV filter functions, there is a particular one suited to removing horizontal/vertical banding noise in an image, the way a median filter is the usual choice for salt-and-pepper noise. Or is removing banding noise a combination of different filters?
I have already tried several filters, such as bilateral, Gaussian, median, and blur (homogeneous), but the vertical banding lines don't disappear or reduce significantly, or else the resulting image comes out over-smoothed. I'd prefer the image detail to be retained, with only the noise removed.
I am interested in doing the noise removal in OpenCV, C++ in particular. Any basic steps, concepts, or tutorials would be great. Thanks.
The sample images below show obvious vertical bands of noise. These images are outputs of a scanner, by the way.
https://drive.google.com/file/d/0B1aXcXzD_OADMkNuNnJxNGw2NTA/edit?usp=sharing
https://drive.google.com/file/d/0B1aXcXzD_OADNkFBbGgxU20yMjg/edit?usp=sharing

I would do it like this:
1. Find your lines:
   for example, search for repetitive bumps/edges in the first derivative along x or y;
   then select only those that sit at the average period positions
   and whose neighbouring lines/rows also have the bump near the same place.
2. Remove the pixels of the selected lines:
   just take the colour before and after the bump
   and interpolate between them (the average colour, for example);
   in some cases you can instead subtract a constant colour from all pixels of the line,
   depending on the colour-mixing mechanism that created those lines.
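A minimal C++ sketch of this idea for vertical bands, assuming an 8-bit grayscale input; the deviation threshold of 3 and the plain two-neighbour averaging are placeholder choices, and the average-period check is omitted for brevity:

    #include <opencv2/opencv.hpp>
    #include <cmath>

    // Columns whose mean intensity bumps away from both neighbours are
    // treated as band lines and re-interpolated from the adjacent columns.
    void removeVerticalBands(cv::Mat& gray) {  // CV_8UC1 input assumed
        cv::Mat colMean;
        cv::reduce(gray, colMean, 0, cv::REDUCE_AVG, CV_32F); // 1 x cols averages
        for (int x = 1; x + 1 < gray.cols; ++x) {
            float left  = colMean.at<float>(0, x - 1);
            float mid   = colMean.at<float>(0, x);
            float right = colMean.at<float>(0, x + 1);
            // a "bump": the column is darker/brighter than both neighbours
            if (std::abs(mid - left) > 3.0f && std::abs(mid - right) > 3.0f) {
                for (int y = 0; y < gray.rows; ++y) {
                    int a = gray.at<uchar>(y, x - 1);
                    int b = gray.at<uchar>(y, x + 1);
                    gray.at<uchar>(y, x) = static_cast<uchar>((a + b) / 2);
                }
            }
        }
    }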

You can reduce vertical or horizontal noise by adjusting the kernel size in a simple blur operation.
For example, for your vertically banded image you could use
blur(src, dst, Size(11, 1));
Note that Size takes (width, height), so this kernel is larger in the horizontal direction; averaging across the vertical bands is what smooths them out.
See the image before and after filtering.
Similarly, for horizontal bands just reverse the kernel size:
blur(src, dst, Size(1, 11));
See the result.
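For completeness, a minimal runnable version of the above; the file names are placeholders:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat src = cv::imread("scan.png");    // placeholder input
        cv::Mat dst;
        cv::blur(src, dst, cv::Size(11, 1));     // 11 wide x 1 tall: smears across vertical bands
        // cv::blur(src, dst, cv::Size(1, 11));  // use this instead for horizontal bands
        cv::imwrite("filtered.png", dst);
        return 0;
    }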

Related

Noise reduction OpenCV skin detection sample

I'm working on a face detection app using segmentation of skin pixels with a predefined skin model in YCrCb space.
I'm loosely basing my algorithm on this report by Douglas Chai and King N. Ngan: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=767122&tag=1
I first segment out all skin pixels (see left).
After that I perform some calculations to reduce the noise (see the steps below). This results in a filtered bitmap 1/8th the size of the original. Ideally this would be noise-free in both the face and background areas, but it isn't. I have already tried to reduce the noise by using my density map, checking the neighbouring 3x3 pixel areas and eroding/dilating pixel values depending on their neighbours. Then I resize this bitmap and apply the result as a mask on the original image (see the right image for the result; ignore my censorship).
My question is: what methods do you recommend for getting rid of the noise?
Also, are there any good methods to get smoother contours? Ideally I would not like to use "find the biggest contour and flood fill"; preferably something more sophisticated.
There also seems to be a bit of displacement in the resized mask (it cuts off a bit too much on the right side of my face, and shows a bit too much on the left side). What could be causing this?
The easiest way to get smoother contours is to interpolate your data to a higher resolution using an interpolation scheme; you can look in OpenCV for this. It will result in smoother transitions between points.
I hope it helps a little bit. Good luck.
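A minimal sketch of that suggestion in OpenCV C++; the cubic interpolation and the re-threshold level of 127 are my assumptions, not something the answer prescribes:

    #include <opencv2/opencv.hpp>

    // Upscale the 1/8th-size mask smoothly, then re-binarize it.
    cv::Mat smoothMask(const cv::Mat& smallMask, cv::Size originalSize) {
        cv::Mat up, bin;
        cv::resize(smallMask, up, originalSize, 0, 0, cv::INTER_CUBIC);
        cv::threshold(up, bin, 127, 255, cv::THRESH_BINARY);
        return bin;
    }

Resizing directly to the original image size, rather than by a rounded 8x factor, may also remove the mask displacement mentioned in the question.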

Python: Reduce rectangles on images to their border

I have many grayscale input images which contain several rectangles. Some of them overlap and some go over the border of the image. An example image could look like this:
Now I have to reduce the rectangles to their borders. My idea was to make every non-white pixel that is less than N (e.g. 3) pixels away from the border or from a white pixel (using the Manhattan distance) white. The output should look like this (sorry for the different-sized borders):
It is not very hard to implement this. Unfortunately, the implementation must be fast, because the input may contain extremely many images (e.g. 100,000) and the user has to wait until this step is finished.
I thought about using fromimage and then doing everything with numpy, but I did not find a good solution.
Maybe someone has an idea or a hint how this problem could be solved very efficiently?
Calculate the distance transform of the image (OpenCV's distanceTransform: http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html).
In the resulting image, zero out all the pixels whose value is bigger than 3; what remains non-zero is the border.
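The question asks for Python, but the recipe is easiest to show directly with OpenCV calls; here it is as a C++ sketch (OpenCV 3+ constant names, placeholder file names):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat img = cv::imread("rects.png", cv::IMREAD_GRAYSCALE);
        // Non-white pixels become foreground, so the transform measures each
        // rectangle pixel's Manhattan distance to the nearest white pixel.
        cv::Mat mask, dist;
        cv::threshold(img, mask, 254, 255, cv::THRESH_BINARY_INV);
        cv::distanceTransform(mask, dist, cv::DIST_L1, 3);
        img.setTo(255, dist > 3);  // whiten everything deeper than 3 px, keeping the borders
        cv::imwrite("borders.png", img);
        return 0;
    }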

How can I remove small parallel lines in an image?

I have a black and white image after binarization. After that I have an image like the one below:
How can I remove the small lines parallel to the long curves using OpenCV? I can remove them by removing all small objects, but I want to remove only the small parallel lines.
This looks like a Canny artifact (or some kind of ringing artifact) to me. There are several ways to remove them.
An empiric but not too computationally intensive method would be to locate all small features and superimpose them with the same image shifted by [+/-]X, [+/-]Y. If the feature is completely coincident with the shifted image, i.e. all pixels in the white feature are also white in the shifted image, then you are probably looking at an artifact.
To evaluate the "smallness" of a feature, you can use a basic flood fill. This method is cheap because you can simulate the shifting with pointers, without really allocating four shifted images. It is prone to false positives wherever you really do have small parallel lines, and to false negatives if the artifacts are very large.
Another method would be to posterize the original image twice with different thresholds. While the "real" lines will stay together, the ringing artifacts will have a different strength. At that point you evaluate the image difference and consider as "artifact" all features that are farther than a given threshold from the image track. This is a bit more computationally intensive and yields better results, but it depends on what you have for an original image, i.e. what your workflow is.
It is possible that reevaluating the workflow (altering the edge detection phase) could avoid creating the artifacts altogether.
Use the cvBlobsLib library to detect the white patches as blobs. cvBlobsLib provides functions for computing different features of the blobs, like area and ellipticity. So if you want only the smaller patches parallel to the long curve, then:
Get the long curve on the basis of the area covered by the blob, or its perimeter, i.e. the contour length of the blob.
Get the ellipticity, or the orientation of the major axis, of the long curve after fitting an ellipse (cvBlobsLib will do that for you).
Filter out all those blobs that are below a threshold in terms of area or contour length and have the same orientation as the long curve.
Hope this might work.
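For anyone without cvBlobsLib, here is a rough translation of those steps to plain OpenCV contours, whose APIs I'm more certain of; fitEllipse stands in for the ellipse fitting, and the length and angle thresholds are guesses to tune:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    void dropSmallParallelBlobs(cv::Mat& binary) {
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(binary.clone(), contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        if (contours.empty()) return;
        // 1) the long curve = the contour with the greatest perimeter
        size_t longest = 0;
        for (size_t i = 1; i < contours.size(); ++i)
            if (cv::arcLength(contours[i], true) >
                cv::arcLength(contours[longest], true))
                longest = i;
        float mainAngle = cv::fitEllipse(contours[longest]).angle;
        // 2) erase short contours oriented like the long curve
        for (size_t i = 0; i < contours.size(); ++i) {
            if (i == longest || contours[i].size() < 5) continue; // fitEllipse needs 5+ points
            bool small = cv::arcLength(contours[i], true) < 100.0;
            bool parallel =
                std::abs(cv::fitEllipse(contours[i]).angle - mainAngle) < 15.0f;
            if (small && parallel)
                cv::drawContours(binary, contours, (int)i, cv::Scalar(0), cv::FILLED);
        }
    }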
If you know the orientation of your line in advance, you can do a morphological closing with a custom structuring element adapted to your needs.
See mathematical morphology on Wikipedia.
See the OpenCV documentation.
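For instance, a sketch assuming the long curves run roughly horizontally; the 15x1 line-shaped element is a guess to adapt to your orientation and scale:

    #include <opencv2/opencv.hpp>

    // Closing with a line-shaped structuring element bridges gaps along the
    // line direction while leaving perpendicular strokes comparatively untouched.
    cv::Mat closeAlongLines(const cv::Mat& binary) {
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(15, 1));
        cv::Mat closed;
        cv::morphologyEx(binary, closed, cv::MORPH_CLOSE, kernel);
        return closed;
    }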
Perhaps similar to what the others said, but in simpler words: since the small lines seem to have roughly half the thickness of the long ones, if you don't really care about preserving the long lines exactly as they are, you could apply a simple algorithm that "makes the lines thinner" several times, until the small ones disappear. What you need to do is scan the image pixel by pixel, and when you detect a white pixel above, below, to the left, or to the right of a black pixel, store its coordinates in a vector. After you traverse the entire image, make all the pixels at the coordinates stored in the vector black. You could define the number of iterations of this algorithm empirically.
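The pass described is exactly a one-pixel erosion of the white regions with 4-connectivity, so in OpenCV it can be sketched as repeated erode calls; the iteration count is the empirical knob:

    #include <opencv2/opencv.hpp>

    cv::Mat thinLines(const cv::Mat& binary, int iterations) {
        // MORPH_CROSS matches the above/below/left/right neighbourhood
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
        cv::Mat out;
        cv::erode(binary, out, kernel, cv::Point(-1, -1), iterations);
        return out;
    }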
Here are steps exploiting the fact that parallel lines increase the edge density.
1) Apply an adaptive threshold on the gray image to get many edges.
2) Apply a 3x3 erode (or experiment, but keep it small) morphological operation.
3) Take the logical NOT to get the edge density.
4) Apply a dilate of about 3x3 or 5x5. It will dilate the edges so they merge into a region.
5) Now apply a 7x7 erode (or experiment with something larger than the last dilate). It will remove most of the non-required regions, the long lines, and small stray areas.
The output is a MASK of the regions to remove. You can apply contour detection on the original image and remove the contour objects at matching positions in the mask for high-precision removal.
Or, if you don't need a high-precision result, simply AND the image with the mask's NOT.
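The five steps collected into one sketch; the kernel sizes follow the text, while the adaptiveThreshold block size and constant (11 and 2) are placeholders of mine:

    #include <opencv2/opencv.hpp>

    cv::Mat buildRemovalMask(const cv::Mat& gray) {
        cv::Mat edges, tmp, mask;
        cv::adaptiveThreshold(gray, edges, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                              cv::THRESH_BINARY, 11, 2);       // 1) many edges
        cv::erode(edges, tmp, cv::Mat::ones(3, 3, CV_8U));     // 2) small erode
        cv::bitwise_not(tmp, tmp);                             // 3) logical NOT
        cv::dilate(tmp, tmp, cv::Mat::ones(5, 5, CV_8U));      // 4) merge into regions
        cv::erode(tmp, mask, cv::Mat::ones(7, 7, CV_8U));      // 5) larger erode
        return mask;                                           // removal mask
    }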
Why not do something like:
Find the long curves (using findContours and a filter by size).
Find the small curves.
For each long curve, calculate the minimal distance between each point of every small curve and the long curve.
Calculate the mean and the standard deviation of these minimal distances.
Reject small curves for which either the mean minimal distance to the long curve is too large, or for which the standard deviation of the minimal distances is large.
The result will probably be better (and faster) if you skeletonize the image first.
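A sketch of the per-curve statistics, assuming contours from findContours; pointPolygonTest with measureDist=true returns a signed distance to the contour, which covers the minimal-distance step:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // Mean and standard deviation of the minimal distances from one small
    // curve to the long curve.
    void distanceStats(const std::vector<cv::Point>& longCurve,
                       const std::vector<cv::Point>& smallCurve,
                       double& mean, double& stddev) {
        std::vector<double> d;
        for (const cv::Point& p : smallCurve)
            d.push_back(std::abs(cv::pointPolygonTest(
                longCurve, cv::Point2f((float)p.x, (float)p.y), true)));
        cv::Scalar m, s;
        cv::meanStdDev(d, m, s);
        mean = m[0];
        stddev = s[0];
    }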
Good luck with it,

How best to approach a localised thresholding OpenGL function

I would like to take a photo of some text and make the text easier to read. The tricky part is that the initial photo may have dark regions as well as light regions, and I want the OpenGL function to enhance the text in all of these regions.
Here is an example. On top is the original image. On the bottom are the processed images.
[edited]
I have added a better example picture of what is happening. I am able to enhance the text, but in areas where there is no text, this simple thresholding creates speckled noise (bottom-left image).
If I wind back the threshold, then I lose the text in the darker region (bottom right).
At the moment, the processed image only picks up some of the text, not all of it. The original algorithm I used was pretty simple:
- sample 8 pixels around the current pixel (pixels about 4-5 away seem to work best)
- find the lightest and darkest pixels in this sample
- if the current pixel is closer to the darkest value, make it black, and vice versa
This seemed to work very well around text, but for non-text areas it produced a very noisy image (even when I added an initial rejection threshold).
I modified this algorithm to assume that text is always close to black. This produced the bottom image above, but once again I am not able to pull out all the text features I want.
Before implementing this as a program, you might want to take the source photo and play with it in GIMP or another editor to see what you can do.
One way to deal with shadows is to run a high-pass filter before thresholding.
This is how you do it in an image editor (manually, without a "highpass" filter plugin):
1. Convert the image to grayscale and save it to "layer_A".
2. Make a copy of "layer_A" called "layer_B".
3. Invert the colors in "layer_B".
4. Gaussian-blur "layer_B" with a radius larger than the largest feature you want to preserve (a blur radius larger than a letter).
5. Merge "layer_A" with "layer_B", where result = "layer_A" * 0.5 + "layer_B" * 0.5.
6. Increase the contrast in the resulting image.
7. Run the threshold.
In OpenGL it would be done in the same fashion (and without multiple layers).
It won't work well with strong/crisp shadows (obviously), but it will eliminate the large smooth shadows that occur due to the page being bent, etc.
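For prototyping the recipe outside the shader, here is the same sequence as an OpenCV C++ sketch; the sigma of 21 stands in for "larger than a letter" and is a placeholder:

    #include <opencv2/opencv.hpp>

    cv::Mat highpassThreshold(const cv::Mat& gray) {           // 8-bit grayscale in
        cv::Mat blurred, inv, hp, out;
        cv::GaussianBlur(gray, blurred, cv::Size(0, 0), 21.0); // steps 2 + 4
        inv = 255 - blurred;                                   // step 3
        cv::addWeighted(gray, 0.5, inv, 0.5, 0.0, hp);         // step 5
        cv::normalize(hp, hp, 0, 255, cv::NORM_MINMAX);        // step 6: contrast
        cv::threshold(hp, out, 0, 255,
                      cv::THRESH_BINARY | cv::THRESH_OTSU);    // step 7
        return out;
    }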
The technique (a high-pass filter) is frequently used for making seamless textures, and you should be able to find several such tutorials and additional info with Google (try "GIMP seamless texture high pass" or "GIMP high pass").
By the way, if you want to improve readability, you might want to keep the image grayscale (while improving the contrast) instead of converting it to black and white (1-bit color). Overly sharp letter edges make text harder to read.
Thanks for your help.
In the end I went for quite a basic approach:
Take a sample of 8 nearby pixels, determine the max and min, and derive the local threshold (max - min). Then:

    // p11 is the current pixel; currentMin/currentMax come from the 8 samples,
    // and localthreshold = currentMax - currentMin. ("smooth" is a reserved
    // word in later GLSL versions, hence the rename to "shade".)
    float shade = dot(vec3(1.0/3.0), smoothstep(currentMin, currentMax, p11).rgb);
    shade = (localthreshold < threshold) ? 1.0 : shade; // flat areas go white
    return vec4(shade, shade, shade, 1.0);

This does not show the text nicely in both the dark and the light regions, which would be the ideal, but it nicely cleans up the text in the lighter region.
Mike

How to detect points which are drastically different from their neighbours

I'm doing some image processing, and am trying to keep track of points similar to those circled below: very dark spots a few pixels in diameter, with all neighbouring pixels being bright. I'm sure there are algorithms and methods designed for this, but I just don't know what they are. I don't think edge detection would work, as I only want the small spots. I've read a little about morphological operators; could these be a suitable approach?
Thanks
Loop over each pixel in your image. When you are done considering a pixel, mark it as "used" (change it to some sentinel value, or keep this data in a separate array parallel to the image).
When you come across a dark pixel, perform a flood fill from it, marking all those pixels as "used", and keep track of how many pixels were filled in. During the flood fill, make sure that any pixel you consider that isn't dark is sufficiently bright.
After the flood fill, you'll know the size of the dark area you filled in, and whether the border of the fill consisted exclusively of bright pixels. Now continue the original loop, skipping "used" pixels.
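If a hand-rolled flood fill isn't required, OpenCV's connectedComponentsWithStats does the filling and pixel counting in one call. A sketch; the darkness and size limits are placeholders, and the "border is exclusively bright" check from above is omitted:

    #include <opencv2/opencv.hpp>
    #include <cstdio>

    void findDarkSpots(const cv::Mat& gray) {
        cv::Mat dark, labels, stats, centroids;
        cv::threshold(gray, dark, 50, 255, cv::THRESH_BINARY_INV); // dark -> white
        int n = cv::connectedComponentsWithStats(dark, labels, stats, centroids);
        for (int i = 1; i < n; ++i)                      // label 0 is the background
            if (stats.at<int>(i, cv::CC_STAT_AREA) <= 9) // spots a few pixels across
                std::printf("spot at (%d, %d)\n",
                            stats.at<int>(i, cv::CC_STAT_LEFT),
                            stats.at<int>(i, cv::CC_STAT_TOP));
    }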
How about some kind of median filtering? Sample values from a 3x3 grid (or some other suitable size) around the pixel and set the value of the pixel to the median of those 9 pixels.
Then if most of the neighbours are bright, the pixel becomes bright, etc.
Edit: After some thinking, I realized that this will not detect the outliers; it will remove them. So this is not the solution the original poster was asking for.
Are you sure that you don't want an edge-detection-like approach? It seems like comparing the current pixel to the average value of the neighbourhood pixels would do the trick. (I would evaluate various neighbourhood sizes to be sure.)
Personally, I like this corner detection algorithms manual.
You can also work out a naive corner detection algorithm by exploiting the idea that an isolated pixel is a pixel through which the intensity changes drastically in every direction. It is just a starting idea to begin from and move on to better algorithms.
I can think of these methods that might work with some tweaking of parameters:
Adaptive thresholds
Morphological operations
Corner detection
I'm actually going to suggest simple template matching for this, if all your features are of roughly the same size.
Just copy-paste the pixels of one (or a few) features to create a few templates, and then use normalized cross-correlation or any other score that OpenCV provides in its template-matching routines to find similar regions. In the result, detect all the maximal peaks of the response (OpenCV has a function for this too), and those are your feature coordinates.
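A sketch of the matching step, with a simple score cutoff standing in for proper peak detection; the 0.8 cutoff is a guess, and the template is assumed to be a patch copied from one feature:

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point> matchSpots(const cv::Mat& image, const cv::Mat& tmpl) {
        cv::Mat response;
        cv::matchTemplate(image, tmpl, response, cv::TM_CCORR_NORMED);
        std::vector<cv::Point> hits;
        for (int y = 0; y < response.rows; ++y)
            for (int x = 0; x < response.cols; ++x)
                if (response.at<float>(y, x) > 0.8f)
                    hits.push_back(cv::Point(x, y)); // top-left corner of a match
        return hits;
    }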
Blur (3x3) a copy of your image, then diff it against the original image. The pixels with the highest values are the ones that are most different from their neighbours. This could be used as an edge detection algorithm, but points are like super-edges, so set your threshold higher.
What a single off pixel looks like (assume the surrounding pixels are all 1):

    original    blurred          diff
    1,1,1       8/9,8/9,8/9      1/9,1/9,1/9
    1,0,1       8/9,8/9,8/9      1/9,8/9,1/9
    1,1,1       8/9,8/9,8/9      1/9,1/9,1/9

What an edge looks like (assume the surrounding pixels are the same as their closest neighbour):

    original    blurred          diff
    1,0,0       6/9,3/9,0/9      3/9,3/9,0/9
    1,0,0       6/9,3/9,0/9      3/9,3/9,0/9
    1,0,0       6/9,3/9,0/9      3/9,3/9,0/9
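In OpenCV terms this is three calls. For 8-bit images, the tables above put an isolated point at about 8/9 * 255 ≈ 227 versus 3/9 * 255 ≈ 85 for an edge, so a cutoff between the two (100 here, as an assumption) keeps the points and drops the edges:

    #include <opencv2/opencv.hpp>

    cv::Mat pointResponse(const cv::Mat& gray) {
        cv::Mat blurred, diff, points;
        cv::blur(gray, blurred, cv::Size(3, 3));
        cv::absdiff(gray, blurred, diff);
        cv::threshold(diff, points, 100, 255, cv::THRESH_BINARY);
        return points;
    }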
It's been a few years since I did any image processing, but I would probably start by converting to a binary representation. It doesn't seem like you're overly interested in the grey middle values, just the very dark/very light regions, so get rid of all the grey. At that point, various morphological operations can accentuate the points you're interested in. Opening and closing are pretty easy to implement, and can yield pretty nice results, leaving you with a field of black everywhere except the points you're interested in.
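A sketch of that idea, assuming dark spots on a bright background; note that opening also deletes any spot smaller than the structuring element, so keep the element small:

    #include <opencv2/opencv.hpp>

    cv::Mat isolateSpots(const cv::Mat& gray) {
        cv::Mat bin, opened;
        // inverse Otsu threshold: dark spots become white, grey goes away
        cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
        cv::Mat k = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
        cv::morphologyEx(bin, opened, cv::MORPH_OPEN, k);
        return opened;
    }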
Have you tried extracting connected components using cvFindContours? First threshold the image (using Otsu's method, say) and then extract each contour. Since the spots you wish to track are (from what I see in your image) somewhat isolated from their neighbourhood, they will show up as separate contours. Now if we compute the area of the bounding rectangle of each contour and filter out the larger ones, we'd be left with only small dots separated from dark neighbours.
As suggested earlier, a bit of morphological tinkering before the contour separation should yield good results.
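A sketch of this approach with the C++ API (findContours in place of the old cvFindContours); the 100-pixel area limit is a placeholder:

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Rect> smallDarkComponents(const cv::Mat& gray) {
        cv::Mat bin;
        cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        std::vector<cv::Rect> spots;
        for (const auto& c : contours) {
            cv::Rect r = cv::boundingRect(c);
            if (r.area() <= 100)          // filter out the larger components
                spots.push_back(r);
        }
        return spots;
    }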