OpenCV: Denoising image / video frame - c++

I want to denoise a video using OpenCV and C++. I found on the OpenCV doc site this:
fastNlMeansDenoising(contourImage,contourImage2);
Every time a new frame is loaded, my program should denoise the current frame (contourImage) and write it to contourImage2.
But if I run the code, it returns 0 and exits. What am I doing wrong or is there an alternative way to denoise an image? (It should be fast, because I am processing a video)

Since you are using C++, you are not providing the full set of arguments. Try it this way:
cv::fastNlMeansDenoisingColored(contourImage, contourImage2, 10, 10, 7, 21);
// The underlying function; its C++ signature (per the docs) is:
// void fastNlMeansDenoising(InputArray src, OutputArray dst, float h = 3,
//                           int templateWindowSize = 7, int searchWindowSize = 21);
Parameters:
src – Input 8-bit 1-channel, 2-channel or 3-channel image.
dst – Output image with the same size and type as src .
templateWindowSize – Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels.
searchWindowSize – Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowSize – greater denoising time. Recommended value 21 pixels.
h – Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise.
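For a color video, fastNlMeansDenoisingColored is the appropriate variant. A minimal sketch of a per-frame denoising loop (the file name and window name are assumptions):

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("input.avi");          // assumed video source
    if (!cap.isOpened()) return 1;

    cv::Mat contourImage, contourImage2;
    while (cap.read(contourImage)) {
        // h = 10, hColor = 10, templateWindowSize = 7, searchWindowSize = 21
        cv::fastNlMeansDenoisingColored(contourImage, contourImage2,
                                        10, 10, 7, 21);
        cv::imshow("denoised", contourImage2);
        if (cv::waitKey(1) == 27) break;        // Esc to stop
    }
    return 0;
}

Note that non-local means denoising is computationally heavy; if real-time speed matters more than quality, a cheaper filter such as cv::GaussianBlur, cv::medianBlur, or cv::bilateralFilter may be a better fit.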

Related

Thresholding an image in OpenCV using another image matrix of pixel standard deviations as threshold values

I have two videos, one of a background and one of that same background with a person sitting in the frame. I generated two images from the video of just the background: the mean image of the background video (by accumulating the frames and dividing by the number of frames) and an image of standard deviations from the mean per pixel, taken over the frames. In other words, I have two images representing the Gaussian distribution of the background video. Now, I want to threshold an image, not using one fixed threshold value for all pixels, but using the standard deviations from the image (a different threshold per pixel). However, as far as I understand, OpenCV's threshold() function only allows for one fixed threshold. Are there functions I'm missing, or is there a workaround?
A cv::Mat provides the functionality to accomplish this.
The setTo() method takes an optional InputArray as a mask.
Assuming the following:
std is your standard-deviations cv::Mat, in is the cv::Mat you want to threshold, and thresh is the factor for your standard deviations.
Using these values the custom thresholding could be done like this:
// Compute the threshold values from the input image and the standard
// deviations (pick + or - depending on the thresholding direction):
cv::Mat mask = in + (std * thresh);   // or: in - (std * thresh)
// Set in.at<>(x,y) to 0 if value is lower than mask.at<>(x,y)
in.setTo(0, in < mask);
The in < mask expression creates a new MatExpr object, a matrix which is 0 at every pixel where the predicate is false, 255 otherwise.
This is a handy way to implement custom thresholding.
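Applied to the setup in the question (a per-pixel mean image and a per-pixel standard-deviation image), a self-contained sketch could look like this; the names mean, stddev, and k are assumptions:

#include <opencv2/opencv.hpp>

// frame: grayscale image to threshold; mean, stddev: per-pixel
// background statistics, both CV_32FC1; k: std-deviation factor.
cv::Mat thresholdPerPixel(const cv::Mat& frame, const cv::Mat& mean,
                          const cv::Mat& stddev, double k) {
    cv::Mat f;
    frame.convertTo(f, CV_32F);

    // A pixel counts as background if it lies within k standard
    // deviations of the per-pixel background mean.
    cv::Mat lower = mean - stddev * k;
    cv::Mat upper = mean + stddev * k;
    cv::Mat isBackground = (f > lower) & (f < upper);

    cv::Mat out = f.clone();
    out.setTo(0, isBackground);   // zero out background pixels
    return out;
}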

OpenCV: weird HSV conversion

I am working on a project that detects hematomas on skin. I am having an issue with colors after conversion from RGB to HSV. My algorithm detects hematomas by their color.
With some images I have good results like here:
Original img: http://imgur.com/WHiOWdj
Result img: http://imgur.com/PujbnHa
But with some images I have bad results like this:
Original img: http://imgur.com/OshB99r
Result img: http://imgur.com/CuNzAId
The same original image after conversion to HSV: http://imgur.com/lkVwtCs
Do you have any ideas how to fix it?
Thanks
Looking at your result image, I think that you are only using the H channel of the original image in your algorithm. The false positives can come from the fact that some parts of the healthy skin have nearly the same H value as the hematoma. You can see on the grey-scale image of the H channel that both parts have similar values:
The difference between the two parts is the saturation value. The following image shows the S channel of the original image, and it demonstrates perfectly that at the hematoma the saturation is much higher than at the other parts of the arm:
This was expected, because the hematoma has a much stronger color than the healthy skin.
So, I suggest you use both the H and S channels in your algorithm; that is, take into account only those parts of the H image where the S image contains high saturation values. A possible and simple solution is to binarize both the H and S images and combine them with an AND operation (see the sketch after the images below):
H image after binarisation:
S image after binarisation:
Image after H&S operation:
You can see that on the result image only the hematoma part is white (except for some noise, which you can eliminate easily, for example by size or by morphological filtering).
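A rough sketch of the H&S filtering described above; the function name and all threshold values here are placeholders that would have to be tuned to the actual images:

#include <opencv2/opencv.hpp>

cv::Mat filterHS(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);   // ch[0] = H, ch[1] = S, ch[2] = V

    // Binarize H and S separately (the ranges below are assumptions).
    cv::Mat hBin, sBin, result;
    cv::inRange(ch[0], cv::Scalar(100), cv::Scalar(140), hBin);
    cv::threshold(ch[1], sBin, 120, 255, cv::THRESH_BINARY);

    cv::bitwise_and(hBin, sBin, result);   // the H&S operation
    return result;
}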
EDIT
It is important to note that binarization is one of the most important (and sometimes also very complicated) steps in object detection algorithms, since binarization is what first highlights the objects to detect.
If the external conditions (lighting, color of objects, etc.) do not change significantly from image to image, you can use fixed binarization thresholds. If such a constant environment cannot be ensured, you have to use more sophisticated methods. There are a lot of possibilities; here you can read about some of them:
Wikipedia - Thresholding
Wikipedia - Balanced histogram thresholding
Several solutions are based on histogram analysis: the histogram of an image containing objects always has several local maxima, whose positions can vary depending on the environment; if you find them, you can adapt the binarization threshold easily.
For example the histogram of the H channel of the original image is the following:
The first maximum belongs to the background, the second to the skin, and the last to the hematoma. It can be assumed that these 3 maxima can be found in every image, only their positions varying with the lighting or other conditions. Putting a threshold between the 2nd and 3rd local maxima can be a good choice to highlight the hematoma.
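As a sketch of the idea, the H-channel histogram can be computed with cv::calcHist and then scanned for its peaks; the valley between the last two peaks becomes the threshold. The peak finding itself is left out here, since it depends on how much smoothing the histogram needs:

#include <opencv2/opencv.hpp>

// Returns the 180-bin histogram of the H channel (H is 0..179 in OpenCV).
cv::Mat hueHistogram(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> planes;
    cv::split(hsv, planes);

    int channels[] = {0};
    int histSize[] = {180};
    float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};

    cv::Mat hist;
    cv::calcHist(&planes[0], 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    return hist;   // hist.at<float>(i) = number of pixels with H == i
}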
Finally, I suggest you read the following article about thresholding in OpenCV:
OpenCV - Thresholding

Background extraction

Can anyone suggest me a fast way of getting the foreground image?
Currently I am using the BackgroundSubtractorMOG2 class to do this. It is very slow, and my task doesn't need such a complex algorithm.
I can get an image of the background at the beginning. The camera position will not change, so I believe there is an easy way to do this.
I need to capture a blob of the object moving in front of the camera, and there will always be only one object.
I suggest the following simple solution; a combined sketch follows the steps:
Compute difference matrix:
cv::absdiff(frame, background, absDiff);
This sets each pixel (i,j) in absDiff to |frame(i,j) - background(i,j)|. Each channel (e.g. R, G, B) is processed independently.
Convert the result to a single-channel grayscale image:
cv::cvtColor(absDiff, absDiffGray, cv::COLOR_BGR2GRAY);
Apply binary filter:
cv::threshold(absDiffGray, absDiffGrayThres, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
Here we used Otsu's method to determine an appropriate threshold level. If there was any noise left from step 2, the binary filter will remove most of it.
Apply blob detection to the absDiffGrayThres image. This can be one of the built-in OpenCV methods or manually written code that looks for the positions of pixels whose value is 255 (remember to use the fast OpenCV pixel access operations).
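Putting steps 1-3 together as one function (names are assumptions; the blob-detection step is indicated only as a comment):

#include <opencv2/opencv.hpp>

// frame and background are BGR images of the same size; the returned
// mask is 255 where the frame differs from the background.
cv::Mat extractForeground(const cv::Mat& frame, const cv::Mat& background) {
    cv::Mat absDiff, absDiffGray, absDiffGrayThres;

    // Step 1: per-pixel, per-channel absolute difference.
    cv::absdiff(frame, background, absDiff);

    // Step 2: collapse to a single grayscale channel.
    cv::cvtColor(absDiff, absDiffGray, cv::COLOR_BGR2GRAY);

    // Step 3: binarize; Otsu picks the threshold level automatically.
    cv::threshold(absDiffGray, absDiffGrayThres, 0, 255,
                  cv::THRESH_BINARY | cv::THRESH_OTSU);
    return absDiffGrayThres;
}

// Step 4 could then use, e.g., cv::findContours on the returned mask.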
This process is fast enough to handle 640x480 RGB images at a frame rate of at least 30 fps on a fairly old Core 2 Duo 2.1 GHz with 4 GB RAM, without GPU support.
Hardware remark: make sure that your camera's aperture is not set to auto-adjust. Imagine the following situation: you compute a background image at the beginning. Then some object appears and covers a bigger part of the camera view. Less light reaches the lens and, because of automatic light adjustment, the camera increases the aperture; the background color changes, and the difference yields a blob in a place where there actually is no object.

Using the Matrix operations of OpenCV (Addition and Subtraction) - OpenCV C++

I added two images together using the addWeighted function of OpenCV:
addWeighted(ROI,1,watermark,0.5,0.0,ROI);
However, when I try to do the reverse, I get patches of black instead of removing the second image from the resultant image:
addWeighted(ROI,1,watermark,-0.5,0.0,ROI);
I have tried using subtract as well but I am getting the same result.
The image below describes what I'm talking about.
Do note that my algorithm did not correctly detect all the watermarked areas, but for those which were detected correctly, I am unable to subtract the watermark from it.
It would be greatly appreciated if you guys could advise me on what to do for the subtraction.
Thank you.
According to the docs of addWeighted, you are giving half weight to the watermark (can you explain why?), and the last argument is the depth type, not an array, so it should be -1 if watermark and ROI have the same depth, or else the depth value you want. Also note from the docs that the final value is saturated: if it exceeds 255, it is pulled down to 255. So it is no wonder that when you subtract, you don't get back the exact values.
**EDIT:**
For you, I + 0.5W = R, where I is the Lena image, W is the watermark, and R is the resultant image. Since R gets truncated above 255, store R in an integer matrix (CV_32SC3; OpenCV has no 32-bit unsigned type). Since you are using OpenCV 2.1, it is better to perform the weighted addition by scanning the image rather than using the OpenCV API. That way you can save R in an integer matrix where the maximum value you can get is 255 + 255, which is easily stored. For display, use the uchar matrix (the truncated one), and for reversing the process use the integer matrix. A sketch follows.
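With a modern OpenCV, the same idea can be sketched directly with matrix expressions instead of manual scanning (file names are placeholders; the two images are assumed to be the same size):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("lena.png");          // I
    cv::Mat watermark = cv::imread("watermark.png"); // W
    if (image.empty() || watermark.empty()) return 1;

    // Work in signed 32-bit so I + 0.5W is never saturated at 255.
    cv::Mat img32, wm32;
    image.convertTo(img32, CV_32SC3);
    watermark.convertTo(wm32, CV_32SC3);

    cv::Mat exact = img32 + wm32 / 2;   // R = I + 0.5W, kept exactly
    cv::Mat shown;
    exact.convertTo(shown, CV_8UC3);    // saturated copy, for display only

    // Reversing on the integer matrix recovers the original exactly.
    cv::Mat diff = exact - wm32 / 2;
    cv::Mat restored;
    diff.convertTo(restored, CV_8UC3);
    return 0;
}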

OpenCV filter image - replace kernel with local maximum

Some details about my problem:
I'm trying to implement a corner detector in OpenCV (a different algorithm from the built-in ones: Canny, Harris, etc.).
I've got a matrix filled with response values. The bigger the response value, the higher the probability that a corner was detected.
I have the problem that in the neighbourhood of a point several corners are detected (though there is really only one). I need to reduce the number of falsely detected corners.
Exact problem:
I need to walk through the matrix with a kernel, compute the maximum value within every kernel window, keep that maximum, and set all other values in the window to zero.
Are there built-in OpenCV functions to do this?
This is how I would do it:
Create a kernel; it defines a pixel's neighbourhood.
Create a new image by dilating your image using this kernel. This dilated image contains the maximum neighbourhood value for every point.
Do an equality comparison between these two arrays. Wherever they are equal is a valid neighbourhood maximum, and is set to 255 in the comparison array.
Multiply the comparison array and the original array together (scaling appropriately).
This is your final array, containing only neighbourhood maxima.
This is illustrated by these zoomed in images:
9 pixel by 9 pixel original image:
After processing with a 5 by 5 pixel kernel, only the local neighbourhood maxima remain (i.e. maxima separated by more than 2 pixels from a pixel with a greater value):
There is one caveat. If two nearby maxima have the same value then they will both be present in the final image.
Here is some Python code that does it; it should be very easy to convert to C++:
import cv
im = cv.LoadImage('fish2.png',cv.CV_LOAD_IMAGE_GRAYSCALE)
maxed = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
comp = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
#Create a 5*5 kernel anchored at 2,2
kernel = cv.CreateStructuringElementEx(5, 5, 2, 2, cv.CV_SHAPE_RECT)
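#Dilate: each pixel becomes the maximum of its 5x5 neighbourhood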
cv.Dilate(im, maxed, element=kernel, iterations=1)
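#Mark pixels that equal their neighbourhood maximum (255 there, 0 elsewhere)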
cv.Cmp(im, maxed, comp, cv.CV_CMP_EQ)
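#Keep original values only at those maxima (the mask is scaled from 255 to 1)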
cv.Mul(im, comp, im, 1/255.0)
cv.ShowImage("local max only", im)
cv.WaitKey(0)
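Since the conversion to C++ is straightforward, a rough equivalent using the C++ API might look like this (a sketch; the file name is kept from the Python version):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat im = cv::imread("fish2.png", cv::IMREAD_GRAYSCALE);
    if (im.empty()) return 1;

    // Dilate with a 5x5 rectangular kernel: every pixel becomes the
    // maximum of its 5x5 neighbourhood.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat maxed;
    cv::dilate(im, maxed, kernel);

    // Pixels equal to their neighbourhood maximum are local maxima.
    cv::Mat localMax = (im == maxed);

    // Keep the original response only at the local maxima.
    cv::Mat result = cv::Mat::zeros(im.size(), im.type());
    im.copyTo(result, localMax);

    cv::imshow("local max only", result);
    cv::waitKey(0);
    return 0;
}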
I didn't realise until now, but this is what @sansuiso suggested in his/her answer.
This is possibly better illustrated with this image, before:
after processing with a 5 by 5 kernel:
Solid regions are due to the shared local-maximum values.
I would suggest a 2-step procedure (more efficient approaches may exist) that uses OpenCV built-in functions:
Step 1: morphological dilation with a square kernel (corresponding to your neighbourhood). This step gives you another image, with each pixel value replaced by the maximum value inside the kernel.
Step 2: test whether the cornerness value of each pixel of the original response image is equal to the maximum value given by the dilation step. If not, then obviously a better corner exists in the neighbourhood.
If you are looking for some built-in functionality, FilterEngine will help you make a custom filter (kernel).
http://docs.opencv.org/modules/imgproc/doc/filtering.html#filterengine
Also, I would recommend some kind of noise reduction, usually a blur, before all processing, unless you really want to work on the raw image.