Equivalent of MATLAB’s caxis in OpenCV - c++

Recently, I used the OpenCV library to process a grayscale image. I had previously processed it on the MATLAB platform (where it works well), like this:
imagesc(I), colormap('jet'), caxis([0 1]); % show the pseudocolor picture
As you can see, MATLAB has a function called caxis, which is used to scale the pseudocolor axis.
My question is: is there any function in OpenCV that provides the equivalent of MATLAB's caxis, and if not, how should I implement it?

You have the applyColorMap function. However, it always maps values between 0 and 255, even if all your image's pixels have values between 20 and 30 (for example).
If you want the whole colormap applied between your min and max values, you will need an intermediate mapping that "normalizes" your values to the [0, 255] range. Alternatively, you could create your own colormap that takes a min and a max as arguments and builds its full lookup table accordingly.
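A minimal sketch of that first approach (the file name and the [20, 30] limits are placeholder assumptions; applyColorMap lives in imgproc in OpenCV 3+ and in the contrib module in 2.4): rescale the chosen range to [0, 255] with convertTo, then apply the colormap:
#include <opencv2/opencv.hpp>

int main() {
    // Hypothetical grayscale input; substitute your own image.
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    // Emulate caxis([lo hi]): linearly map [lo, hi] to [0, 255].
    // Values outside the range saturate, much as caxis clamps them.
    double lo = 20.0, hi = 30.0;  // example limits
    cv::Mat scaled;
    gray.convertTo(scaled, CV_8U, 255.0 / (hi - lo), -lo * 255.0 / (hi - lo));

    // Map the rescaled values through the Jet colormap.
    cv::Mat color;
    cv::applyColorMap(scaled, color, cv::COLORMAP_JET);

    cv::imshow("pseudocolor", color);
    cv::waitKey(0);
    return 0;
}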

Related

Applying adaptive thresholding to inrange function opencv c++

I want to take a video and create a binary image from it: a pixel should be included in the binary image if it falls within a certain range. In other words, I want an upper and a lower bound, as in the inRange() function, rather than a single cutoff point, as in the threshold() function.
I also want to use adaptive thresholding to account for differences in lighting in my video. Is there a way to do this? I know inRange() does the former and adaptiveThreshold() does the latter, but I don't know if there is a way to do both.
Apply adaptiveThreshold() to the whole original image, then apply inRange() to the original image and use the result of inRange() as a mask:
adaptiveThreshold(original_image, dst_image ... );
inRange(original_image, minArray, maxArray, mask);
Mat output = dst_image.mul(mask);
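If you want a complete, self-contained version of that idea, here is a hedged sketch (the range bounds, block size, and file name are arbitrary placeholders; bitwise_and is used instead of mul to combine the two binary results):
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat frame = cv::imread("frame.png");  // stand-in for a video frame
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    // Adaptive threshold handles lighting differences across the frame.
    cv::Mat thresh;
    cv::adaptiveThreshold(gray, thresh, 255,
                          cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY, 11, 2);

    // Range test on the original frame; the result is a 0/255 mask.
    cv::Mat mask;
    cv::inRange(frame, cv::Scalar(0, 0, 100), cv::Scalar(80, 80, 255), mask);

    // Keep only pixels that pass both tests.
    cv::Mat output;
    cv::bitwise_and(thresh, mask, output);

    cv::imshow("combined", output);
    cv::waitKey(0);
    return 0;
}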

OpenCV: Denoising image / video frame

I want to denoise a video using OpenCV and C++. I found this on the OpenCV documentation site:
fastNlMeansDenoising(contourImage,contourImage2);
Every time a new frame is loaded, my program should denoise the current frame (contourImage) and write it to contourImage2.
But if I run the code, it returns 0 and exits. What am I doing wrong or is there an alternative way to denoise an image? (It should be fast, because I am processing a video)
Since you are using C++, you are not providing the full set of arguments. Try it this way:
cv::fastNlMeansDenoisingColored(contourImage, contourImage2, 10, 10, 7, 21);
// The original function, as documented:
cv::fastNlMeansDenoising(src[, dst[, h[, templateWindowSize[, searchWindowSize]]]]) → dst
Parameters:
src – Input 8-bit 1-channel, 2-channel or 3-channel image.
dst – Output image with the same size and type as src .
templateWindowSize – Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels.
searchWindowSize – Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: a greater searchWindowSize means greater denoising time. Recommended value 21 pixels.
h – Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise.
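Putting that together for a video, a minimal sketch (the file name and parameter values are assumptions; fastNlMeansDenoising is applied to a grayscale frame here, use fastNlMeansDenoisingColored for color frames):
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("input.avi");  // hypothetical video file
    if (!cap.isOpened()) return -1;

    cv::Mat frame, gray, denoised;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // h = 10, templateWindowSize = 7, searchWindowSize = 21.
        cv::fastNlMeansDenoising(gray, denoised, 10, 7, 21);
        cv::imshow("denoised", denoised);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}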

How to filter a single column mat with Gaussian in OpenCV

I have a Mat with only one column and 1600 rows. I want to filter it using a Gaussian.
I tried the following:
Mat AFilt=Mat(palm_contour.size(),1,CV_32F);
GaussianBlur(A,AFilt,cv::Size(20,1),3);
But I get the exact same values in AFilt (the filtered mat) and A. It looks like GaussianBlur has done nothing.
What's the problem here? How can I smooth a single-column mat with a Gaussian kernel?
I read about BaseColumnFilter, but haven't seen any usage examples, so I'm not sure how to use it.
Any help given will be greatly appreciated as I don't have a clue.
I'm working with OpenCV 2.4.5 on Windows 8 using Visual Studio 2012.
Thanks
Gil.
You have a single column, but you are specifying a large width for the Gaussian instead of specifying the height! OpenCV uses row,col or x,y notation depending on the context. A general rule: whenever you use Point or Size, they behave like x,y, and whenever the parameters are separate values, they behave like row,col.
The kernel size should also be odd. If you specify the kernel size, you can set sigma to zero to let OpenCV compute a suitable sigma value.
To conclude, this should work better:
GaussianBlur(A,AFilt,cv::Size(1,21),0);
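As a self-contained sketch of the fix (the random data is just a stand-in for the real single-column values):
#include <opencv2/opencv.hpp>

int main() {
    // Hypothetical single-column signal: 1600 rows, 1 column, float.
    cv::Mat A(1600, 1, CV_32F);
    cv::randu(A, cv::Scalar(0.0f), cv::Scalar(1.0f));  // dummy data

    // Size is (width, height), so a vertical kernel is Size(1, 21).
    // The kernel side must be odd; sigma = 0 lets OpenCV derive it.
    cv::Mat AFilt;
    cv::GaussianBlur(A, AFilt, cv::Size(1, 21), 0);
    return 0;
}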
The documentation of GaussianBlur says the kernel size must be odd; I would try using an odd-size kernel and see if that makes any difference.

multidimensional discrete wavelet transform

Can anyone tell me the correct way to use the getOutputValue function in the following link? Also, how does the author get the 2nd and 3rd images from the code?
http://www.codeproject.com/Articles/385658/Multidimensional-Discrete-Wavelet-Transform
Thanks
Okay, usage:
I haven't tried it yet, but from what I can tell, you simply call getOutputValue() to get one result. The parameter is a vector containing the "coordinates" (based on the number of dimensions in your input).
Images:
In this example, the author obviously used the image data as the discrete values, e.g. a black pixel would be 0 and a white pixel would be 255, with all other shades of grey in between (a default 8-bit grayscale image).
He then used the output signal/result to recreate an image (i.e. interpreted the values as pixels once again).

openCV filter image - replace kernel with local maximum

Some details about my problem:
I'm trying to implement a corner detector in OpenCV (a different algorithm from the built-in ones: Canny, Harris, etc.).
I've got a matrix filled with response values: the bigger the response value, the higher the probability that a corner was detected.
My problem is that several corners are detected in the neighborhood of a point where there is really only one. I need to reduce the number of falsely detected corners.
Exact problem:
I need to walk through the matrix with a kernel, find the maximum value within each kernel window, keep that maximum, and set all other values in the window to zero.
Are there built-in OpenCV functions to do this?
This is how I would do it:
Create a kernel; it defines a pixel's neighbourhood.
Create a new image by dilating your image using this kernel. This dilated image contains the maximum neighbourhood value for every point.
Do an equality comparison between these two arrays. Wherever they are equal is a valid neighbourhood maximum, and is set to 255 in the comparison array.
Multiply the comparison array and the original array together (scaling appropriately).
This is your final array, containing only neighbourhood maxima.
This is illustrated by these zoomed-in images:
9 pixel by 9 pixel original image:
After processing with a 5 by 5 pixel kernel, only the local neighbourhood maxima remain (i.e. maxima separated by more than 2 pixels from a pixel with a greater value):
There is one caveat. If two nearby maxima have the same value then they will both be present in the final image.
Here is some Python code that does it; it should be very easy to convert to C++:
import cv
im = cv.LoadImage('fish2.png',cv.CV_LOAD_IMAGE_GRAYSCALE)
maxed = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
comp = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
#Create a 5*5 kernel anchored at 2,2
kernel = cv.CreateStructuringElementEx(5, 5, 2, 2, cv.CV_SHAPE_RECT)
cv.Dilate(im, maxed, element=kernel, iterations=1)
cv.Cmp(im, maxed, comp, cv.CV_CMP_EQ)
cv.Mul(im, comp, im, 1/255.0)
cv.ShowImage("local max only", im)
cv.WaitKey(0)
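Since it is indeed easy to convert, here is a rough equivalent using the C++ API (an untested sketch with the same 5 by 5 rectangular kernel):
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat im = cv::imread("fish2.png", cv::IMREAD_GRAYSCALE);

    // 5x5 rectangular kernel defining the neighbourhood.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));

    // Dilation: every pixel becomes the maximum of its neighbourhood.
    cv::Mat maxed;
    cv::dilate(im, maxed, kernel);

    // A pixel equal to its neighbourhood maximum is a local maximum.
    cv::Mat comp;
    cv::compare(im, maxed, comp, cv::CMP_EQ);  // 255 where equal, 0 elsewhere

    // Keep original values only at the local maxima (scale mask 255 -> 1).
    cv::Mat result = im.mul(comp, 1.0 / 255.0);

    cv::imshow("local max only", result);
    cv::waitKey(0);
    return 0;
}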
I didn't realise until now, but this is what @sansuiso suggested in his/her answer.
This is possibly better illustrated with this image. Before:
After processing with a 5 by 5 kernel:
The solid regions are due to shared local maxima values.
I would suggest an original 2-step procedure (more efficient approaches may exist) that uses OpenCV built-in functions:
Step 1: morphological dilation with a square kernel (corresponding to your neighborhood). This step gives you another image, where each pixel value is replaced by the maximum value inside the kernel.
Step 2: test whether the cornerness value of each pixel in the original response image is equal to the maximum value given by the dilation step. If not, then there is obviously a better corner in the neighborhood.
If you are looking for built-in functionality, FilterEngine will help you build a custom filter (kernel).
http://docs.opencv.org/modules/imgproc/doc/filtering.html#filterengine
Also, I would recommend some kind of noise reduction, usually a blur, before all processing, unless you really want the raw image.