First, in case you don't know it, I should explain what NDWI is. NDWI stands for Normalized Difference Water Index. It is a graphical indicator for water, and its values range over [-1, 1]. NDWI is defined as follows:
(Green - NIR) / (Green + NIR)
I am in the middle of writing a simple coastline extraction tool based on OpenCV. I have already accomplished this in MATLAB, and the result looks like this:
However, the OpenCV version of the result looks binarized:
When I debugged the program, I saw that the minimum value in the NDWI matrix is zero. This is wrong, because it should be -0.8057. The code responsible for the NDWI calculation (OpenCV version) is as follows:
Mat ndwi = (greenRoi - nirRoi) / (greenRoi + nirRoi);
double min;
double max;
minMaxIdx(ndwi, &min, &max);
Mat adjNDWI;
convertScaleAbs(ndwi, adjNDWI, 255 / max);
What is the problem here, and how can I calculate the correct NDWI values?
Note:
greenRoi and nirRoi are created in this way:
Rect rectangle = boundingRect(Mat(testCorners)); //vector<Point2f> testCorners(4);
Mat testImgGreen = imread((LPCSTR)testImgGreenPath, 0);
Mat testImgNir = imread((LPCSTR)testImgNirPath, 0);
Mat greenRoi(testImgGreen, rectangle);
Mat nirRoi(testImgNir, rectangle);
You need to explicitly create a floating point cv::Mat
cv::Mat image(rows, cols, CV_32FC1), or CV_64FC1 if you need doubles
Elements of greenRoi, nirRoi and ndwi will all be uchars (the Mat type will be CV_8UC1).
Let's say greenRoi = 10 and nirRoi = 40.
Your answer is not (10 - 40) / (10 + 40) = -0.6. The answer has to be positive (because it is unsigned) and can't be a fraction: the subtraction 10 - 40 saturates to 0, so the whole expression gives 0.
@Martin Beckett is correct: convert testImgGreen and testImgNir to matrices with a float type and it will work. You need:
testImgGreen.convertTo(testImgGreen, CV_32F);
testImgNir.convertTo(testImgNir, CV_32F);
Mat greenRoi(testImgGreen, rectangle);
Mat nirRoi(testImgNir, rectangle);
Mat ndwi = (greenRoi - nirRoi) / (greenRoi + nirRoi);
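One hedged aside: once ndwi holds floats, its values can be negative, and the convertScaleAbs call from the question folds negatives into their absolute values. A minimal sketch of stretching the actual [min, max] range onto [0, 255] for display instead:
// map the float NDWI range (e.g. -0.8..1) linearly onto 0..255
// and store the result as 8-bit for display
Mat adjNDWI;
normalize(ndwi, adjNDWI, 0, 255, NORM_MINMAX, CV_8U);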
Related
I have a matrix img (480*640 pixels, 64-bit float) to which I apply a complex mask. After this, I need to multiply the matrix by a value, but to save time I want to perform this multiplication only on the non-zero elements. The multiplication is currently too slow because I have to repeat the operation 2000 times, on 2000 different matrices but always with the same mask. So I found the indices (on the x/y axes) of the non-zero pixels, which I keep in a vector of Point. But I cannot manage to use this vector to perform the multiplication only on the pixels it indexes.
Here is an example (with a simple mask) to illustrate my problem:
Mat img_temp(480, 640, CV_64FC1);
Mat img = img_temp.clone();
Mat mask = Mat::ones(img.size(), CV_8UC1);
double value = 3.56;
// Apply mask
img_temp.copyTo(img, mask);
// Finding non zero elements
vector<Point> nonZero;
findNonZero(img, nonZero);
// Previous multiplication (slow because it runs on all pixels)
Mat result = img.clone()*value;
// What I wish to do: multiplication only on non-zero pixels (not functional)
Mat result = Mat::zeros(img.size(), CV_64FC1);
result.at<int>(nonZero) = img.at(nonZero).clone() * value
What is tricky is that my pixels are not contiguous (for example, pixels 3 and 4, then 50 and 51 on the same row).
Thank you in advance.
I would suggest using Mat.convertTo.
Basically, for the parameter alpha, which is the scaling factor, use your multiplication factor (3.56 in your case). Make sure that the Mat is of a float type, CV_32F or CV_64F.
This will be faster than finding all non-zero pixels, saving their coordinates in a Vector and iterating (it was faster for me in Java).
Hope it helps!
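For illustration, a minimal sketch of the convertTo suggestion, assuming img is CV_64FC1 as in the question (the masked-out pixels are already zero, and zero times the factor stays zero, so scaling everything gives the same result as scaling only the non-zero pixels):
// scale every pixel by 'value'; rtype -1 keeps the source depth
Mat result;
img.convertTo(result, -1, value);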
Constructing the vector of points will also increase computation time. I think you should consider iterating over all pixels and multiplying only those that are not zero. Iterating will be faster if you access the matrix as raw data.
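A minimal sketch of that idea, again assuming a CV_64FC1 matrix as in the question:
// walk each row through a raw double pointer and scale only the
// non-zero entries in place
for (int y = 0; y < img.rows; ++y)
{
    double* row = img.ptr<double>(y);
    for (int x = 0; x < img.cols; ++x)
    {
        if (row[x] != 0.0)
            row[x] *= value;
    }
}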
If you do
Mat result = img*value;
instead of
Mat result = img.clone()*value;
it will be almost 10 times as fast, because clone() first makes a full copy of the matrix.
I have also tested your suggestion with the vector, but it is even slower than your first solution.
Below is the code I used to test your first suggestion:
cv::Mat multMask(cv::Mat &img, std::vector<cv::Point> mask, double fact)
{
    if (img.type() != CV_64FC1) throw "invalid format";
    cv::Mat res = cv::Mat::zeros(img.size(), img.type());
    int iLen = (int)mask.size();
    for (int i = 0; i < iLen; i++)
    {
        cv::Point &p = mask[i];
        ((double*)(res.data + res.step.p[0] * p.y))[p.x] =
            ((double*)(img.data + img.step.p[0] * p.y))[p.x] * fact;
    }
    return res;
}
I have an image which has areas of high intensity, and I would like to magnify those intensities. I accomplished this in MATLAB by converting an integer array in (0, 255) to floating point in (0, 1), then squaring each value, and finally multiplying by 255 and converting back to integer.
How would something like this be done in OpenCV? Is there a way to access the elements one by one? Even so, I suppose that would be inefficient, and I wonder if there are OpenCV methods which are vectorized or otherwise optimized to accomplish this.
Given an input grayscale image:
the result of your algorithm is:
You can:
convert and scale with convertTo;
square each pixel with element-wise multiplication (mul), or use pow to raise to an arbitrary power.
Here is the simple code:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    Mat img = imread("path_to_image", IMREAD_GRAYSCALE);
    imshow("Original", img);

    // convert to float in (0, 1)
    img.convertTo(img, CV_32F, 1.0 / 255.0);

    // raise to an arbitrary power; use 2 to square
    pow(img, 2, img);

    // multiply by 255 and convert back to integer
    img.convertTo(img, CV_8U, 255.0);

    imshow("Result", img);
    waitKey();
    return 0;
}
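If you only need squaring, the mul route mentioned above is a one-liner (shown here as an alternative sketch, not part of the demo):
// element-wise product of the image with itself, i.e. per-pixel square
img = img.mul(img);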
I am looking to normalize the pixel values of an image to the range [0..1] using C++/OpenCV. However, when I do the normalization, using either image *= 1./255 or the normalize function, the pixel values are rounded down to zero. I have tried setting the image to type CV_32FC3.
Below is the code I have:
Mat image;
image = imread(imageLoc, CV_LOAD_IMAGE_COLOR | CV_LOAD_IMAGE_ANYDEPTH);
Mat tempImage;
// (didn't work) tempImage *= 1./255;
image.convertTo(tempImage, CV_32F, 3);
normalize(image, tempImage, 0, 1, CV_MINMAX);
int r = 100;
int c = 150;
uchar* ptr = (uchar*)(tempImage.data + r * tempImage.step);
Vec3f tempVals;
tempVals.val[0] = ptr[3*c+1];
tempVals.val[1] = ptr[3*c+2];
tempVals.val[2] = ptr[3*c+3];
cout<<" temp image - "<< tempVals << endl;
uchar* ptr2 = (uchar*)(image.data + r * image.step);
Vec3f imVals;
imVals.val[0] = ptr2[3*c+1];
imVals.val[1] = ptr2[3*c+2];
imVals.val[2] = ptr2[3*c+3];
cout<<" image - "<< imVals << endl;
This produces the following output in the console:
temp image - [0, 0, 0]
image - [90, 78, 60]
You can make convertTo() do the normalization for you:
image.convertTo(tempImage, CV_32FC3, 1.f/255);
You are passing 3 to convertTo(), presumably as a channel count, but that's not how the signature works: the third parameter is the scale factor alpha.
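Note also that once the data is float, reading it back through a uchar* as in the question will print garbage. A minimal sketch of reading one CV_32FC3 pixel, reusing r and c from the question:
// channels are zero-based: [0], [1], [2], not 3*c+1 .. 3*c+3
Vec3f tempVals = tempImage.at<Vec3f>(r, c);
cout << " temp image - " << tempVals << endl;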
I used the normalize function and it worked (Java):
Core.normalize(src,dst,0.0,1.0,Core.NORM_MINMAX,CvType.CV_32FC1);
You should use a 32F depth for your destination image. I believe the reason is that, since you need to get decimal values, you should use a non-integer OpenCV data type. According to this table, the float types correspond to the 32F depth. I chose the number of channels to be 1 and it worked: CV_32FC1.
Remember also that you are unlikely to spot any visual difference in the image.
Finally, since you probably have thousands of pixels in your image, your console might appear to print only zeros. Because of the large amount of output, try using CTRL+F to see what's going on. Hope this helps.
I'm using the OpenCV library on an Ubuntu build with Qt Creator, and I have the following problem. I am trying to calculate the dot product of a vector and the RGB values within an image, and then return these values to a separate matrix which holds the result. After this operation I want to subtract this matrix from another; however, to ensure the matrix is of the correct data type, I use the convertTo() function, and I found that this throws the segmentation fault.
It appears to be something to do with element writing; if I change the input from the dot product to a predefined value, it works.
I have spent a few hours trying to get this running and I am not sure what I am doing wrong. Any help would be greatly appreciated.
int x,y;
float Xn = 0.95;
float Zn = 1.089;
//destination matrix
Mat XYZ_mat(10, 10, CV_32FC3, Scalar(1.0,1.0,1.0));
//source matrix
Mat BGR_mat(10, 10, CV_32FC3, Scalar(1.0,1.0,1.0));
//source vectors
float LAB_mult_x[3][1] ={0.4, 0.2, 0.01};
Mat LAB_Mult_x(3, 1, CV_32FC1, LAB_mult_x);
float LAB_mult_y[3][1] ={0.35, 0.71, 0.11};
Mat LAB_Mult_y(3, 1, CV_32FC1, LAB_mult_y);
float LAB_mult_z[3][1] ={0.18, 0.07, 0.95};
Mat LAB_Mult_z(3, 1, CV_32FC1, LAB_mult_z);
for (x = 0; x <= XYZ_mat.rows; x++){
    for (y = 0; y <= XYZ_mat.cols; y++){
        // extracts BGR vals from image
        Vec3f temp1 = BGR_mat.at<Vec3b>(x, y);
        Mat temp2 = Mat(temp1);
        XYZ_mat.at<Vec3b>(x, y)[0] = float(temp2.dot(LAB_Mult_x) / Xn);
        XYZ_mat.at<Vec3b>(x, y)[1] = float(temp2.dot(LAB_Mult_y));
        XYZ_mat.at<Vec3b>(x, y)[2] = float(temp2.dot(LAB_Mult_z) / Zn);
    }
}
//segmentation fault is thrown here
XYZ_mat.convertTo(XYZ_mat,CV_32FC3);
Many thanks
Laurence
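Two things in the loop stand out: it runs with <= and so goes one past the last row and column, and a CV_32FC3 matrix is accessed with Vec3b instead of Vec3f. Both produce out-of-place writes that can corrupt memory and only surface later, for example inside convertTo(). A hedged sketch of the loop with those two issues fixed (assuming temp2 was meant to wrap temp1):
// use < so the indices stay inside the 10x10 matrices, and Vec3f
// because the Mats are CV_32FC3
for (int x = 0; x < XYZ_mat.rows; x++)
{
    for (int y = 0; y < XYZ_mat.cols; y++)
    {
        Vec3f temp1 = BGR_mat.at<Vec3f>(x, y);
        Mat temp2 = Mat(temp1); // 3x1 CV_32FC1 copy of the pixel
        XYZ_mat.at<Vec3f>(x, y)[0] = float(temp2.dot(LAB_Mult_x) / Xn);
        XYZ_mat.at<Vec3f>(x, y)[1] = float(temp2.dot(LAB_Mult_y));
        XYZ_mat.at<Vec3f>(x, y)[2] = float(temp2.dot(LAB_Mult_z) / Zn);
    }
}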
I am using the 2.4.4 version of OpenCV. I know it's a beta,
but there is an example of the cv::calcOpticalFlowSF method in the samples folder, called simpleflow_demo.cpp. When I copy this demo and use it with my input images, it starts processing, and after a few seconds it comes back with a crash report.
The documentation for the method is a little bit strange: it says the outputs are an x-flow and a y-flow, instead of the single cv::Mat& flow which the method actually takes.
Any ideas how to fix the problem and get the function working?
Try this simple demo that worked for me, then modify it for your needs (display help from here):
Mat frame1 = imread("/home/radford/Desktop/1.png");
Mat frame2 = imread("/home/radford/Desktop/2.png");
namedWindow("flow");
Mat flow;
calcOpticalFlowSF(frame1, frame2, flow, 3, 2, 4);
Mat xy[2];
split(flow, xy);
//calculate angle and magnitude
Mat magnitude, angle;
cartToPolar(xy[0], xy[1], magnitude, angle, true);
//translate magnitude to range [0;1]
double mag_max;
minMaxLoc(magnitude, 0, &mag_max);
magnitude.convertTo(magnitude, -1, 1.0/mag_max);
//build hsv image
Mat _hsv[3], hsv;
_hsv[0] = angle;
_hsv[1] = Mat::ones(angle.size(), CV_32F);
_hsv[2] = magnitude;
merge(_hsv, 3, hsv);
//convert to BGR and show
Mat bgr;//CV_32FC3 matrix
cvtColor(hsv, bgr, COLOR_HSV2BGR);
imshow("flow", bgr);
waitKey(0);
In the example opencv/samples/cpp/simpleflow_demo.cpp there is a code block
if (frame1.type() != 16 || frame2.type() != 16) {
printf(APP_NAME "Images should be of equal type CV_8UC3\n");
exit(1);
}
So grey images should first be converted to CV_8UC3, for example using cvtColor(grey, grey3, CV_GRAY2RGB);
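A minimal sketch of that pre-conversion, assuming the inputs were loaded as grayscale (the file name here is hypothetical):
// the demo exits unless both inputs are CV_8UC3 (type() == 16), so
// expand a single-channel image to three channels before calling it
Mat grey = imread("1.png", 0); // loads as CV_8UC1
Mat grey3;
cvtColor(grey, grey3, CV_GRAY2RGB); // now CV_8UC3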