I added two images together using the addWeighted function of OpenCV:
addWeighted(ROI,1,watermark,0.5,0.0,ROI);
However, when I try to do the reverse, I get patches of black instead of removing the second image from the resultant image:
addWeighted(ROI,1,watermark,-0.5,0.0,ROI);
I have tried using subtract as well, but I get the same result.
The image below shows what I mean.
Note that my algorithm did not correctly detect all the watermarked areas, but for those that were detected correctly, I am unable to subtract the watermark.
Any advice on how to do the subtraction would be greatly appreciated.
Thank you.
According to the docs of addWeighted, you are giving half weight to the watermark (can you explain why?), and the last argument is the depth type, not an array, so it should be -1 if watermark and ROI are of the same depth, or whatever depth value you want. Also note from the docs that the final value is saturated: if it exceeds 255, it is pulled down to 255. So it is no wonder that, when you subtract, you don't get back the exact values.
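As an illustration of the saturation problem: if a pixel has I = 200 and the watermark has W = 200, then I + 0.5*W = 300, which is stored as 255; subtracting 0.5*W afterwards gives 155 instead of the original 200.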
**EDIT:**
For you, I + 0.5W = R, where I is the Lena image, W is the watermark and R is the resultant image. Since R gets truncated above 255, store R in a wider integer matrix (for example CV_32SC3; OpenCV has no unsigned 32-bit type). Since you are using OpenCV 2.1, it is better to perform the weighted addition by scanning the image yourself rather than using the OpenCV API. That way you can save R in an integer matrix where the maximum value you can get is (255 + 255), which is easily stored. For display use the uchar (truncated) matrix, and for reversing the process use the integer matrix.
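A minimal sketch of that idea (the variable names and the use of Mat expressions are mine, and it assumes a recent enough OpenCV C++ API; on 2.1 a manual pixel loop would do the same thing):

// ROI and watermark are CV_8UC3 images of the same size.
cv::Mat roiWide, wmWide;
ROI.convertTo(roiWide, CV_32SC3);
watermark.convertTo(wmWide, CV_32SC3);

cv::Mat halfWm = wmWide / 2;           // 0.5 * W, computed once so it is identical on both sides
cv::Mat sumWide = roiWide + halfWm;    // I + 0.5*W, no saturation in the 32-bit matrix

cv::Mat display;
sumWide.convertTo(display, CV_8UC3);   // truncated copy, for showing/saving only

cv::Mat recovered;
cv::Mat diff = sumWide - halfWm;       // exact reversal using the wide matrix
diff.convertTo(recovered, CV_8UC3);    // recovered is identical to ROI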
I'm using OpenCV and I have a gray-scale image that is the result of a smoothing operation on a binary mask:
I would like to apply this mask to a given RGB image, but using the copyTo method with the mask option takes into account all the non-zero pixels of the mask. However, what I'm interested in is to obtain an output image whose RGB pixel values are the input values 'scaled' pixel-wise by the factor given by the gray-scale mask.
I have the feeling that this is possible by using the built-in functions of OpenCV, but so far I couldn't find any way to do what I want.
I would know how to do that from scratch in a brute force fashion, but I'd prefer - if possible - to use built-in functions.
Thank you in advance!
As @api55 pointed out, the solution to my problem is:
Normalize the mask through the function cv::normalize
Multiply the normalized mask with the input image through the function cv::multiply
In particular, the type of the normalized mask must be set to CV_32F (otherwise it won't work). As a consequence, the input image has to be converted as well (e.g., with convertTo).
Example code:
cv::normalize(mask,mask,0.,1.,cv::NORM_MINMAX,CV_32F);
image.convertTo(image,CV_32F);
cv::multiply(image,mask,image);
image.convertTo(image,CV_8U); // Convert back the input image to the original type
In my previous question (here's the link), I followed the answer and obtained the desired image, which is white flood-filled.
Now, after applying a morphological erosion to the white flood-filled image, I get the new masked image.
Your answer helped a lot. Now I am multiplying the new masked image with the original grayscale image in order to get the vein pattern, but the result is the same image I get after performing erosion on the white flood-filled image. After this step I have to apply the Laplacian function to get the vein pattern. I am attaching the original image and the result image that I want. I hope you will look into the matter.
Original Image.
Result Image.
If I understand you correctly, you only want to extract the veins from the grayscale hand image, right? To do something like this, you would obviously multiply the two element-wise, as in:
finalimg = grayimg.mul(veinmask);  // element-wise (per-pixel) multiplication
If you have done the above, I think it would be more helpful to post a portion of your code so the experts here can point out what's wrong. The output image you are getting, and the one you want, would also help.
I hope I understand you correctly. You have a grayscale image showing a hand (the first image in your question).
You create a mask image that looks like the second image you posted.
Multiplication of both results in the mask image?
If that is the case, check your values. If you work with byte images, your mask must contain the values 0 and 1, not 0 and 255; otherwise the multiplication result for non-zero mask pixels exceeds 255 and gets saturated!
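A small sketch of that fix (the names handGray and mask are placeholders, not from the question):

// handGray and mask are CV_8UC1 images of the same size; mask holds 0 or 255.
cv::Mat binMask = mask / 255;              // bring the mask down to 0/1
cv::Mat veins;
cv::multiply(handGray, binMask, veins);    // the product now stays within the 8-bit range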
I want to denoise a video using OpenCV and C++. I found on the OpenCV doc site this:
fastNlMeansDenoising(contourImage,contourImage2);
Every time a new frame is loaded, my program should denoise the current frame (contourImage) and write it to contourImage2.
But when I run the code, it returns 0 and exits. What am I doing wrong, or is there an alternative way to denoise an image? (It should be fast, because I am processing a video.)
Since you are using C++, you are not providing the full set of arguments. Try it this way:
cv::fastNlMeansDenoisingColored(contourImage, contourImage2, 10, 10, 7, 21);
// For reference, this is the function as documented:
cv::fastNlMeansDenoising(src[, dst[, h[, templateWindowSize[, searchWindowSize]]]]) → dst
Parameters:
src – Input 8-bit 1-channel, 2-channel or 3-channel image.
dst – Output image with the same size and type as src .
templateWindowSize – Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels.
searchWindowSize – Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: a greater searchWindowSize means greater denoising time. Recommended value 21 pixels.
h – Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise.
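For the video case, a minimal sketch of the frame loop might look like this (assuming OpenCV 2.4+ with the photo module, a color input video, and an illustrative file name):

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("input.avi");       // hypothetical input file
    if (!cap.isOpened()) return -1;

    cv::Mat frame, denoised;
    while (cap.read(frame)) {
        // Recommended parameters: h = 10, hColor = 10, templateWindowSize = 7, searchWindowSize = 21
        cv::fastNlMeansDenoisingColored(frame, denoised, 10, 10, 7, 21);
        cv::imshow("denoised", denoised);
        if (cv::waitKey(1) == 27) break;     // ESC to quit
    }
    return 0;
}

Note that non-local means denoising is relatively slow, so for real-time processing you may need to work on reduced-resolution frames or switch to a faster filter.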
Some details about my problem:
I'm trying to implement a corner detector in OpenCV (a different algorithm from the built-in ones: Canny, Harris, etc.).
I've got a matrix filled with response values: the bigger the response value, the higher the probability that a corner was detected there.
The problem is that several corners are detected in the neighborhood of a point even though there is really only one. I need to reduce the number of falsely detected corners.
Exact problem:
I need to walk through the matrix with a kernel, find the maximum value inside each kernel window, keep that maximum, and set all other values inside the window to zero.
Are there built-in OpenCV functions to do this?
This is how I would do it:
Create a kernel; it defines a pixel's neighbourhood.
Create a new image by dilating your image using this kernel. This dilated image contains the maximum neighbourhood value for every point.
Do an equality comparison between these two arrays. Wherever they are equal is a valid neighbourhood maximum, and is set to 255 in the comparison array.
Multiply the comparison array, and the original array together (scaling appropriately).
This is your final array, containing only neighbourhood maxima.
This is illustrated by these zoomed-in images:
9 pixel by 9 pixel original image:
After processing with a 5 by 5 pixel kernel, only the local neighbourhood maxima remain (i.e. maxima separated by more than 2 pixels from a pixel with a greater value):
There is one caveat. If two nearby maxima have the same value then they will both be present in the final image.
Here is some Python code that does it, it should be very easy to convert to c++:
import cv
im = cv.LoadImage('fish2.png',cv.CV_LOAD_IMAGE_GRAYSCALE)
maxed = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
comp = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
#Create a 5*5 kernel anchored at 2,2
kernel = cv.CreateStructuringElementEx(5, 5, 2, 2, cv.CV_SHAPE_RECT)
cv.Dilate(im, maxed, element=kernel, iterations=1)  # each pixel becomes its neighbourhood maximum
cv.Cmp(im, maxed, comp, cv.CV_CMP_EQ)               # 255 where the pixel equals that maximum
cv.Mul(im, comp, im, 1/255.0)                       # keep only the local maxima
cv.ShowImage("local max only", im)
cv.WaitKey(0)
I didn't realise until now, but this is what @sansuiso suggested in his/her answer.
This is possibly better illustrated with this image, before:
after processing with a 5 by 5 kernel:
The solid regions are due to the shared local maxima values.
I would suggest an original two-step procedure (there may be more efficient approaches) that uses OpenCV built-in functions:
Step 1: morphological dilation with a square kernel (corresponding to your neighborhood). This step gives you another image, where each pixel value is replaced by the maximum value inside the kernel.
Step 2: test whether the cornerness value of each pixel of the original response image is equal to the max value given by the dilation step. If not, then obviously there is a better corner in the neighborhood.
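Neither answer included C++ code, so here is a rough sketch of the dilate-and-compare idea in C++ (names are mine; the response matrix is assumed to be a single-channel image such as CV_32F):

#include <opencv2/opencv.hpp>

cv::Mat suppressNonMaxima(const cv::Mat& response, int ksize = 5)
{
    cv::Mat dilated, isLocalMax, result;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(ksize, ksize));
    cv::dilate(response, dilated, kernel);                    // each pixel -> neighbourhood maximum
    cv::compare(response, dilated, isLocalMax, cv::CMP_EQ);   // 255 where the pixel equals that maximum
    response.copyTo(result, isLocalMax);                      // zero everywhere except the local maxima
    return result;
}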
If you are looking for some built-in functionality, FilterEngine will help you make a custom filter (kernel).
http://docs.opencv.org/modules/imgproc/doc/filtering.html#filterengine
Also, I would recommend some kind of noise reduction, usually a blur, before any processing, unless you really want to work on the raw image.
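For example, a simple Gaussian blur before computing the corner response (the 5x5 kernel size here is just an illustrative choice):

cv::Mat smoothed;
cv::GaussianBlur(inputImage, smoothed, cv::Size(5, 5), 0);  // sigma is derived from the kernel size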
There are plenty of tutorials showing how to blend two images in OpenCV:
http://opencv.itseez.com/doc/tutorials/core/adding_images/adding_images.html
http://aishack.in/tutorials/transparent-image-overlays-in-opencv/
But all of them are based on this equation:
g(x) = (1 - alpha) * f0(x) + alpha * f1(x)
which means that I would be combining the two images by averaging them, and consequently losing intensity in both.
For instance, let alpha = 0.5, f0(x) = 255, and f1(x) = 0. After applying this equation, the result image g(x) = 127. That is not what I need. The first image should remain unchanged. And the transparency must be applied in the second one.
My problem is:
the first image f0(x) should not be changed and an alpha should be applied to the second image f1(x) when it overlays the first image f0(x).
I cannot figure out how to do this. Any help?
Unfortunately, alpha channels are not supported by OpenCV. From the imread documentation:
Note that in the current implementation the alpha channel, if any, is stripped from the output image. For example, a 4-channel RGBA image is loaded as RGB if flags > 0.
See this SO post for a possible workaround using ImageMagick.
Hope that is helpful!