I'm trying to apply derivative-based color constancy to images, and I'm using OpenCV in C++ to do so.
I'm using Sobel derivatives to calculate my gradients, but I don't know what type I should cast the pixel values of the gradient image to in order to read and change them.
Sobel( gradr1, gradr1, gradr1.depth(), 1, 0, 3, 1, 0, BORDER_DEFAULT );
gradr1.at<char>(i,j)
What should I use instead of char?
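For reference, this is the variant I am experimenting with: writing the gradient into a separate matrix with a floating-point depth (CV_32F here is my guess, not something I have confirmed), so that negative gradient values are preserved:
Mat grad32;
Sobel( gradr1, grad32, CV_32F, 1, 0, 3, 1, 0, BORDER_DEFAULT );
float g = grad32.at<float>(i, j); // CV_32F pixels are read as float
grad32.at<float>(i, j) = g * 0.5f; // and can be written the same way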
I am attempting to use my own kernel to blur an image (for educational purposes). But my kernel just makes my whole image white. Is my blur kernel correct? I believe the proper name of the blur filter I am trying to apply is a normalised blur.
void blur_img(const Mat& src, Mat& output) {
// src is a 1 channel CV_8UC1
float kdata[] = { 0.0625f, 0.125f, 0.0625f,
                  0.125f,  0.25f,  0.125f,
                  0.0625f, 0.125f, 0.0625f };
//float kdata[] = { -1,-1,-1, -1,8,-1, -1,-1,-1}; // outline filter works fine
Mat kernel(3, 3, CV_32F, kdata);
// results in output being a completely white image
filter2D(src, output, CV_32F, kernel);
}
Your image is not white, it is float. I am sure that you are displaying the image with imshow later on, and it looks all white. This is explained in the imshow documentation. Specifically:
If the image is 32-bit floating-point, the pixel values are multiplied
by 255. That is, the value range [0,1] is mapped to [0,255].
This means that if the image is float, its values have to be in [0,1] to be displayed correctly.
Now that we know what causes it, let's see how to solve it. I can think of 3 possible ways:
1) normalize the image to [0,1]
cv::Mat dst;
cv::normalize(outputFromBlur, dst, 0, 1, cv::NORM_MINMAX);
This function rescales the values, so it may shift the colors... it is not the best option for ordinary images, but rather for depth maps or other matrices with an unknown value range.
2) convertTo uchar:
cv::Mat dst;
outputFromBlur.convertTo(dst, CV_8U);
This function applies saturate_cast, so it handles possible overflow/underflow.
3) use filter2D with another output depth:
cv::filter2D(src, output, -1, kernel);
With -1, the output will have the same depth as the source (I assume your source is CV_8U).
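Putting option 3 into your function, a minimal sketch of the fix (same kernel as yours; the only change is the output depth):
void blur_img(const Mat& src, Mat& output) {
    // src is a 1 channel CV_8UC1
    float kdata[] = { 0.0625f, 0.125f, 0.0625f,
                      0.125f,  0.25f,  0.125f,
                      0.0625f, 0.125f, 0.0625f };
    Mat kernel(3, 3, CV_32F, kdata);
    // -1 keeps the source depth (CV_8U), so imshow displays it correctly
    filter2D(src, output, -1, kernel);
}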
I hope this helps you; if not, leave a comment.
I am trying to calculate the mean, standard deviation, and maximum value of hue, saturation, and value of an image given in HSV colour space. I split it into three channels to calculate the maximum value of each channel. The problem is that I am getting exactly the same value for each channel: the same mean, std, and maximum for hue, saturation, and value. I think maybe I am not understanding what the functions I am using return. Here is my code:
Scalar mean, std;
meanStdDev(image, mean, std, Mat());
vector <Mat> HSV;
split(image, HSV);
double MaxValueH, MaxValueS, MaxValueV;
minMaxLoc(HSV[0], 0, &MaxValueH, 0, 0);
minMaxLoc(HSV[1], 0, &MaxValueS, 0, 0);
minMaxLoc(HSV[2], 0, &MaxValueV, 0, 0);
colour farbe(mean[0], std[0], MaxValueH, mean[1], std[1], MaxValueS, mean[2], std[2], MaxValueV);
return farbe;
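To check what the functions actually return, I print each value separately (just my debugging sketch):
cout << "H: mean=" << mean[0] << " std=" << std[0] << " max=" << MaxValueH << endl;
cout << "S: mean=" << mean[1] << " std=" << std[1] << " max=" << MaxValueS << endl;
cout << "V: mean=" << mean[2] << " std=" << std[2] << " max=" << MaxValueV << endl;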
I want to implement this MATLAB statement in OpenCV C++:
bwImgLabeled(bwImgLabeled > 0) = 1;
As far as I understand from the OpenCV docs, http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#threshold,
I need to do:
cv::threshold(dst, dst, 0, 1, CV_THRESH_BINARY);
Am I correct here?
Yes, you are correct. The MATLAB code searches for any pixels that are non-zero and sets them to 1.
Recall the definition of cv::threshold:
double threshold(InputArray src, OutputArray dst,
double thresh, double maxval, int type)
So the first two inputs are the source and destination images; in your case, you want to take the destination image and mutate it to contain the final result. thresh = 0 and maxval = 1, with type=CV_THRESH_BINARY. Recall that when using CV_THRESH_BINARY, the following relationship holds:
dst(x,y) = maxval if src(x,y) > thresh, and 0 otherwise (source: opencv.org)
Therefore, if you specify thresh to be 0 and maxval to be 1, you are effectively doing what the MATLAB code does: any pixel that is greater than thresh=0, i.e. non-zero, has its intensity set to 1. I'm assuming you want the input and output images to be floating-point, so make sure the image is of a compatible type, such as CV_32FC1 or CV_32FC3.
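For completeness, a minimal sketch of the whole operation (the variable names here are placeholders, and the convertTo step is only needed if your labeled image is not already floating-point):
cv::Mat labels;  // your labeled image
cv::Mat binary;
labels.convertTo(binary, CV_32F); // ensure a floating-point depth
cv::threshold(binary, binary, 0, 1, CV_THRESH_BINARY); // non-zero pixels become 1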
I'm new to OpenCV and trying to convert the following MATLAB code to OpenCV using C++:
[FX,FY]=gradient(mycell{index});
I have tried the following so far, but my values are completely different from my MATLAB results:
Mat FXR, FYR;
Mat abs_FXR;
Mat abs_FYR;
int scale = 1;
int delta = 0;
// Gradient X
Sobel(myImg, FXR, CV_64F, 1, 0, 3, scale, delta, BORDER_DEFAULT);
convertScaleAbs( FXR, abs_FXR );
imshow( window_name2, abs_FXR );
// Gradient Y
Sobel(myImg, FYR, CV_64F, 0, 1, 3, scale, delta, BORDER_DEFAULT);
convertScaleAbs( FYR, abs_FYR );
imshow( window_name3, abs_FYR );
I also tried using filter2D as per this question, but it still gave different results: Matlab gradient equivalent in opencv
Mat kernelx = (Mat_<float>(1,3)<<-0.5, 0, 0.5);
Mat kernely = (Mat_<float>(3,1)<<-0.5, 0, 0.5);
filter2D(myImg, FXR, -1, kernelx);
filter2D(myImg, FYR, -1, kernely);
imshow( window_name2, FXR );
imshow( window_name3, FYR );
I don't know if this is way off track or if it's just a parameter I need to change. Any help would be appreciated.
UPDATE
Here is my expected output from MATLAB:
But here is what I'm getting from OpenCV using Sobel:
And here is my output from OpenCV using the filter2D method (I have tried increasing the size of my Gaussian filter, but I still get different results compared to MATLAB):
I have also converted my image to double precision using:
eye_rtp.convertTo(eye_rt,CV_64F);
It is correct that you need to do a central difference computation instead of using the Sobel filter (although Sobel does give a nice derivative) in order to match MATLAB's gradient. By the way, if you have the Image Processing Toolbox, imgradient and imgradientxy have the option of using Sobel to compute the gradient. (Note that the answer in the question you referenced is wrong in saying that Sobel only provides a second derivative; there are first and second order Sobel operators available.)
Regarding the differences you are seeing, you may need to convert myImg to float or double before calling filter2D. Check the output type of FXR, etc.
Also, double precision is CV_64F and single precision is CV_32F, although this will probably only cause very small differences in this case.
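A minimal sketch of a central difference computation that should match MATLAB's gradient in the interior of the image (note that MATLAB uses one-sided differences at the borders, so the outermost rows and columns will still differ; the border handling here is an assumption):
Mat img64;
myImg.convertTo(img64, CV_64F); // work in double precision, like MATLAB
Mat kernelx = (Mat_<double>(1,3) << -0.5, 0, 0.5);
Mat kernely = (Mat_<double>(3,1) << -0.5, 0, 0.5);
Mat FX, FY;
filter2D(img64, FX, -1, kernelx); // (f(x+1) - f(x-1)) / 2
filter2D(img64, FY, -1, kernely); // (f(y+1) - f(y-1)) / 2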
I am trying to normalize one matrix in OpenCV, I am doing it like this:
cv::Mat matrix = cv::Mat::zeros ( 3, 480000, CV_8UC1 );
cv::Mat matrix_norm = cv::Mat::zeros ( 3, 480000, CV_8UC1 );
... // give values to matrix
I read the documentation for the "normalize" function, but couldn't fully understand what values to give for "alpha" and "beta". So, following the example:
http://docs.opencv.org/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.html
I did it like:
cv::normalize ( matrix, matrix_norm, 0, 255, NORM_MINMAX, CV_8UC1, Mat() );
But it crashed here, which doesn't surprise me. I think the matrix size is too big, right? Or am I doing the normalization incorrectly here?
And is there any way to speed up the normalization?
It can be useful to normalize your matrix by writing your own code. Using a histogram to normalise your matrix values helps you customise the function, and it can even be faster than the normalisation function provided by OpenCV.
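As a starting point, here is a minimal hand-rolled min-max normalization to [0,255] (just a sketch of the idea; it assumes the matrix is not constant, i.e. mx > mn):
double mn, mx;
cv::minMaxLoc(matrix, &mn, &mx); // find the value range
// map [mn, mx] to [0, 255]; convertTo saturate-casts to 8 bit
matrix.convertTo(matrix_norm, CV_8UC1, 255.0 / (mx - mn), -mn * 255.0 / (mx - mn));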