I want to implement this MATLAB statement in OpenCV C++:
bwImgLabeled(bwImgLabeled > 0) = 1;
As far as I understand from the OpenCV docs, http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#threshold,
I need to do:
cv::threshold(dst, dst, 0, 1, CV_THRESH_BINARY);
Am I correct here?
Yes, you are correct. What the MATLAB code does is search for any pixels that are non-zero and set them to 1.
Recall the definition of cv::threshold:
double threshold(InputArray src, OutputArray dst,
double thresh, double maxval, int type)
So the first two inputs are the source and destination images; in your case, you want to take the destination image and mutate it in place to contain the final result. Set thresh = 0 and maxval = 1, with type = CV_THRESH_BINARY. Recall that with CV_THRESH_BINARY, the following relationship holds:
dst(x,y) = maxval if src(x,y) > thresh, and 0 otherwise
(source: opencv.org)
Therefore, if you specify thresh to be 0 and maxval to be 1, you are effectively doing what the MATLAB code does: any pixel greater than thresh = 0, which is essentially any non-zero pixel, has its intensity set to 1. I'm assuming you want the input and output images to be floating-point, so make sure the image is of a compatible type, such as CV_32FC1 or CV_32FC3.
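For instance, here is a minimal sketch of the call (the tiny label matrix is hypothetical, just to make the effect visible):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Hypothetical label image with labels 0, 1, 2
    cv::Mat bwImgLabeled = (cv::Mat_<float>(2, 3) << 0, 1, 2,
                                                     0, 2, 1);
    // Equivalent of bwImgLabeled(bwImgLabeled > 0) = 1;
    cv::threshold(bwImgLabeled, bwImgLabeled, 0, 1, cv::THRESH_BINARY);
    std::cout << bwImgLabeled << std::endl; // prints [0, 1, 1; 0, 1, 1]
    return 0;
}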
I am trying to update part of a Mat based on another Mat. For example, I want to select a part of img that is not zero in mask and add a constant value to it. When I try this:
Mat mask = imread("some grayscale image with a white area in a black background", IMREAD_GRAYSCALE);
Mat img = Mat::zeros(mask.rows, mask.cols, CV_8UC1);
Mat bnry, locations;
threshold(mask, bnry, 100, 255, THRESH_BINARY);
findNonZero(bnry, locations);
img(locations) += 5;
I get this error:
Error: Assertion failed ((int)ranges.size() == d)
img and mask have the same size.
How can I select an area of an image based on another image (mask)?
Many OpenCV functions support a mask by default; in other words, you don't need to find the non-zero values yourself and then perform the sum operation on them. You just need to use the cv::add function, which accepts a mask as an argument:
cv::add(img, 10, img, mask); // 10 is an arbitrary constant value
And about your code:
img(locations) += 5;
As far as I know, OpenCV has no overloaded operator like this to use; Mat::operator() expects a Rect or Ranges rather than a matrix of point locations, which is why you get the assertion failure.
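For reference, a minimal sketch of the masked add (the mask file name is hypothetical):

#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical grayscale mask: a white area on a black background
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);

    cv::Mat bnry;
    cv::threshold(mask, bnry, 100, 255, cv::THRESH_BINARY);

    // Adds 5 only where bnry is non-zero; no findNonZero needed
    cv::add(img, cv::Scalar(5), img, bnry);
    return 0;
}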
I need to get the number of contours (only closed/looped contours) from my image. For this purpose I use the cv::connectedComponents function. As the documentation says:
returns N, the total number of labels [0, N-1] where 0 represents the background label
So to get the real number of contours I just need to decrement the returned value (subtracting the background label). This method works fine for most of the images I need to process (actually, they are AutoCAD files). However, I've got one image that is processed incorrectly: the returned value for it is 4, yet we can see that there are 4 circles in the image plus the background, so the returned value should be 5.
Here is the image I got the problem with:
Here is the code I use:
void run_test()
{
cv::Mat img, img_edge, labels;
img = cv::imread("G:\\test.jpg", cv::IMREAD_GRAYSCALE);
cv::threshold(img, img_edge, 128, 255, cv::THRESH_BINARY);
int res = cv::connectedComponents(img_edge, labels, 8, CV_16U);
}
So I've got two questions: why is the returned value for this image 4 (and not 5), and is connectedComponents the correct way to get the number of contours?
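For reference, here is the counting approach described above as a complete sketch (the path is taken from the question; the decrement removes the background label):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("G:\\test.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat img_edge, labels;
    cv::threshold(img, img_edge, 128, 255, cv::THRESH_BINARY);

    // N labels in [0, N-1], label 0 being the background
    int res = cv::connectedComponents(img_edge, labels, 8, CV_16U);
    std::cout << "contours: " << res - 1 << std::endl;
    return 0;
}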
I am attempting to use my own kernel to blur an image (for educational purposes). But my kernel just makes my whole image white. Is my blur kernel correct? I believe the proper name of the blur filter I am trying to apply is a normalised blur.
void blur_img(const Mat& src, Mat& output) {
// src is a 1 channel CV_8UC1
float kdata[] = { 0.0625f, 0.125f, 0.0625f,
                  0.125f,  0.25f,  0.125f,
                  0.0625f, 0.125f, 0.0625f };
//float kdata[] = { -1,-1,-1, -1,8,-1, -1,-1,-1}; // outline filter works fine
Mat kernel(3, 3, CV_32F, kdata);
// results in output being a completely white image
filter2D(src, output, CV_32F, kernel);
}
Your image is not white, it is float. I am sure that you are displaying the image with imshow somewhere later, and it looks all white. This is explained in the imshow documentation. Specifically:
If the image is 32-bit floating-point, the pixel values are multiplied
by 255. That is, the value range [0,1] is mapped to [0,255].
This means that if the image is float, its values have to be in [0,1] to be displayed correctly.
Now that we know what causes it, let's see how to solve it. I can think of 3 possible ways:
1) normalize the image to [0,1]
cv::Mat dst;
cv::normalize(outputFromBlur, dst, 0, 1, cv::NORM_MINMAX);
This function normalizes the values, so it may shift the colors... it is not the best option for images with a known range, but rather for depth maps or other matrices whose value range is unknown.
2) convertTo uchar:
cv::Mat dst;
outputFromBlur.convertTo(dst, CV_8U);
This function does a saturate_cast, so it handles possible overflow/underflow.
3) use filter2D with another output depth:
cv::filter2D(src, output, -1, kernel);
With -1, the desired output will be of the same type as the source (I assume your source is CV_8U).
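For example, a minimal end-to-end version of option 3 (the input file name is hypothetical):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    // The 3x3 kernel from the question; its weights sum to 1
    float kdata[] = { 0.0625f, 0.125f, 0.0625f,
                      0.125f,  0.25f,  0.125f,
                      0.0625f, 0.125f, 0.0625f };
    cv::Mat kernel(3, 3, CV_32F, kdata);

    // ddepth = -1 keeps the CV_8U depth of src, so imshow displays it correctly
    cv::Mat output;
    cv::filter2D(src, output, -1, kernel);

    cv::imshow("blurred", output);
    cv::waitKey(0);
    return 0;
}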
I hope this helps you, if not leave a comment.
I am computing the mean image of two images and don't know the correct way to use the function mean() in OpenCV.
Mat img1,img2,img3;
img1=imread("picture1.jpg");
img2=imread("picture2.jpg");
img3=mean(img1,img2);
However it says
R6010
- abort() has been called
How can I get the average of img1 & img2?
Thanks.
You could use cv::accumulate:
Mat img3 = Mat::zeros(img1.size(), CV_32F); //larger depth to avoid saturation
cv::accumulate(img1, img3);
cv::accumulate(img2, img3);
img3 = img3/2;
According to the OpenCV documentation:
"The function mean calculates the mean value M of array elements, independently for each channel, and return it:"
This means it should return a scalar for each channel of your image, and the second parameter is a mask of pixels over which to perform the computation. So in your code, img2 is being interpreted as a mask (and the result is a cv::Scalar, not a Mat), which is most likely what triggers the abort.
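For example (the mask here is a hypothetical CV_8UC1 image):

cv::Scalar m  = cv::mean(img1);       // per-channel means of the whole image
cv::Scalar m2 = cv::mean(img1, mask); // per-channel means over the masked pixels only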
Have you simply tried to do something like this?
img3 = (img1+img2) * 0.5;
[EDIT] To avoid losses when values go above 255, you should probably convert your images to CV_32F before performing the computation, then cast the result of your operation back into CV_8U using cv::Mat::convertTo (see the OpenCV documentation on convertTo).
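A sketch of that edit, assuming both images load correctly and have the same size and number of channels:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img1 = cv::imread("picture1.jpg");
    cv::Mat img2 = cv::imread("picture2.jpg");

    // Convert to float so the sum cannot saturate at 255
    cv::Mat f1, f2;
    img1.convertTo(f1, CV_32F);
    img2.convertTo(f2, CV_32F);

    // Average in float, then cast back to 8-bit for display or saving
    cv::Mat avg = (f1 + f2) * 0.5;
    cv::Mat img3;
    avg.convertTo(img3, CV_8U);
    return 0;
}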
I'm new to OpenCV and trying to convert the following MATLAB code to OpenCV using C++:
[FX,FY]=gradient(mycell{index});
I have tried the following so far, but my values are completely different from my MATLAB results:
Mat abs_FXR;
Mat abs_FYR;
int scale = 1;
int delta = 0;
// Gradient X
Sobel(myImg, FXR, CV_64F, 1, 0, 3, scale, delta, BORDER_DEFAULT);
convertScaleAbs( FXR, abs_FXR );
imshow( window_name2, abs_FXR );
// Gradient Y
Sobel(myImg, FYR, CV_64F, 0, 1, 3, scale, delta, BORDER_DEFAULT);
convertScaleAbs( FYR, abs_FYR );
imshow( window_name3, abs_FYR );
I also tried using filter2D as per this question, but it still gave different results: Matlab gradient equivalent in opencv
Mat kernelx = (Mat_<float>(1,3)<<-0.5, 0, 0.5);
Mat kernely = (Mat_<float>(3,1)<<-0.5, 0, 0.5);
filter2D(myImg, FXR, -1, kernelx);
filter2D(myImg, FYR, -1, kernely);
imshow( window_name2, FXR );
imshow( window_name3, FYR );
I don't know if this is way off track or if it's just a parameter I need to change. Any help would be appreciated.
UPDATE
Here is my expected output from MATLAB:
But here is what I'm getting from OpenCV using Sobel:
And here is my output from OpenCV using the filter2D method (I have tried increasing the size of my Gaussian filter but still get different results compared to MATLAB):
I have also converted my image to double precision using:
eye_rtp.convertTo(eye_rt,CV_64F);
It is correct that you need to do a central-difference computation, instead of using the Sobel filter (although Sobel does give a nice derivative), in order to match MATLAB's gradient. BTW, if you have the Image Processing Toolbox, imgradient and imgradientxy have the option of using Sobel to compute the gradient. (Note that the answer in the question you referenced is wrong in claiming that Sobel only provides a second derivative; first- and second-order Sobel operators are both available.)
Regarding the differences you are seeing, you may need to convert myImg to float or double before calling filter2D; with ddepth = -1, an 8-bit input produces an 8-bit output, so the negative halves of the derivatives saturate to zero. Check the output type of FXR, etc.
Also, double precision is CV_64F and single precision is CV_32F, although this will probably only cause very small differences in this case.
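Putting this together, a sketch of the central-difference approach (the input file name is hypothetical; note that MATLAB's gradient switches to one-sided differences at the borders, so some mismatch at the image edges is expected with any fixed OpenCV border mode):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat myImg = cv::imread("eye.png", cv::IMREAD_GRAYSCALE);

    // Convert to double first so the filtering happens in floating point
    cv::Mat imgF;
    myImg.convertTo(imgF, CV_64F);

    // Central-difference kernels matching MATLAB's gradient in the interior
    cv::Mat kernelx = (cv::Mat_<double>(1, 3) << -0.5, 0, 0.5);
    cv::Mat kernely = (cv::Mat_<double>(3, 1) << -0.5, 0, 0.5);

    cv::Mat FXR, FYR;
    cv::filter2D(imgF, FXR, -1, kernelx, cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);
    cv::filter2D(imgF, FYR, -1, kernely, cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);
    return 0;
}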