Dot product and setting an element of a matrix in OpenCV C++

I'm using the OpenCV library on Ubuntu with Qt Creator and I have the following problem. I am trying to calculate the dot product of a vector and the RGB values within an image, and I then want to write these values into a separate matrix which holds the result. After this operation I want to subtract this matrix from another; however, to ensure the matrix is in the correct data type I use the convertTo() function, and found that this throws a segmentation fault.
It appears to be something to do with writing the elements: if I change the input from the dot product to a predefined value, it works.
I have spent a few hours trying to get this running and I am not sure what I am doing wrong. Any help would be greatly appreciated.
int x,y;
float Xn = 0.95;
float Zn = 1.089;
//destination matrix
Mat XYZ_mat(10, 10, CV_32FC3, Scalar(1.0,1.0,1.0));
//source matrix
Mat BGR_mat(10, 10, CV_32FC3, Scalar(1.0,1.0,1.0));
//source vectors
float LAB_mult_x[3][1] ={0.4, 0.2, 0.01};
Mat LAB_Mult_x(3, 1, CV_32FC1, LAB_mult_x);
float LAB_mult_y[3][1] ={0.35, 0.71, 0.11};
Mat LAB_Mult_y(3, 1, CV_32FC1, LAB_mult_y);
float LAB_mult_z[3][1] ={0.18, 0.07, 0.95};
Mat LAB_Mult_z(3, 1, CV_32FC1, LAB_mult_z);
for (x = 0; x <= XYZ_mat.rows; x++){
    for (y = 0; y <= XYZ_mat.cols; y++){
        //extracts BGR vals from image
        Vec3f temp1 = BGR_mat.at<Vec3b>(x,y);
        Mat temp2 = Mat(temp1);
        XYZ_mat.at<Vec3b>(x,y)[0] = float(temp2.dot(LAB_Mult_x)/Xn);
        XYZ_mat.at<Vec3b>(x,y)[1] = float(temp2.dot(LAB_Mult_y));
        XYZ_mat.at<Vec3b>(x,y)[2] = float(temp2.dot(LAB_Mult_z)/Zn);
    }
}
//segmentation fault is thrown here
XYZ_mat.convertTo(XYZ_mat,CV_32FC3);
Many thanks
Laurence
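For reference (not part of the original post), here is a minimal sketch of how the inner loop might look once the element accesses use Vec3f (the element type of a CV_32FC3 matrix, rather than Vec3b) and the loop conditions use < instead of <=. The out-of-bounds writes produced by <= and the mismatched element type are a likely cause of the segmentation fault reported above:

for (x = 0; x < XYZ_mat.rows; x++) {
    for (y = 0; y < XYZ_mat.cols; y++) {
        // CV_32FC3 elements are Vec3f; reading them as Vec3b misinterprets the float data
        Vec3f bgr = BGR_mat.at<Vec3f>(x, y);
        Mat temp = Mat(bgr);                 // 3x1 CV_32FC1 column vector, same shape as LAB_Mult_x
        XYZ_mat.at<Vec3f>(x, y)[0] = float(temp.dot(LAB_Mult_x) / Xn);
        XYZ_mat.at<Vec3f>(x, y)[1] = float(temp.dot(LAB_Mult_y));
        XYZ_mat.at<Vec3f>(x, y)[2] = float(temp.dot(LAB_Mult_z) / Zn);
    }
}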

Related

How can I write a float image in OpenCV?

Someone gave me this function:
Mat tan_triggs_preprocessing(InputArray src, float alpha = 1, float gamma = 10.0,
                             float tau = 1000.0, int sigma1 = 2) {
    Mat X = src.getMat();
    Mat I, tmp, tmp2;
    double meanI;
    X.convertTo(X, CV_32FC1);
    pow(X, gamma, I);

    meanI = 0.0;
    pow(abs(I), alpha, tmp);
    meanI = mean(tmp).val[0];
    I = I / pow(meanI, 1.0/alpha);

    meanI = 0.0;
    pow(min(abs(I), tau), alpha, tmp2);
    meanI = mean(tmp2).val[0];
    I = I / pow(meanI, 1.0/alpha);

    for(int r = 0; r < I.rows; r++) {
        for(int c = 0; c < I.cols; c++) {
            I.at<float>(r,c) = tanh(I.at<float>(r,c) / tau);
        }
    }
    I = tau * I;
    return I;
}
The function takes a grayscale image (CV_8UC1 type) as input and outputs a matrix of CV_32FC1 type. All I know is that the function makes the input image lighter and increases its contrast. When I show the image using the imshow function, I can see the output of tan_triggs_preprocessing very clearly, and the output is indeed lighter and has more contrast compared to the source image. But the problem is that when I save it in an image format (JPG for example) using the imwrite function, it's totally black. I can't see anything.
I checked the values of the elements in the output, and I saw that they are between [0.06..., 2.3...]. Here are my questions; hopefully you can help me, thank you so much.
Can I write a CV_32FC1 matrix to an image file format?
Why is the file written by imwrite above totally black?
I also looked for the min and max values in the output, so I could normalize it into 256 bins for CV_8UC1, but it doesn't work, even when I use imshow or imwrite.
How can I convert it to CV_8UC1 or write it to an image file format? I used convertTo but it doesn't work either.
Thanks a lot.
imwrite/imread can only handle 8/16/24/32-bit integral data, not floats (unless you count OpenEXR).
You probably want:
Mat gray_in = ...
Mat gray_out;
cv::normalize( tan_triggs_preprocessing(gray_in), gray_out, 0, 255, NORM_MINMAX, CV_8UC1);
(admittedly hard to spot, but it's even in the small print of bytefish's code ;)
Also, please look at alternatives to that, like equalizeHist and CLAHE.
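As a rough sketch of those alternatives (not from the original answer; the clip limit and tile grid size are just example values, and gray_in is an 8-bit grayscale image as above):

// global histogram equalization
Mat equalized;
equalizeHist(gray_in, equalized);

// CLAHE: contrast-limited adaptive histogram equalization
Ptr<CLAHE> clahe = createCLAHE(2.0, Size(8, 8));   // example clip limit and tile grid
Mat clahe_out;
clahe->apply(gray_in, clahe_out);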

Understanding OpenCV's undistort function

I'm looking to undistort an image using the distortion coefficients that I've computed for my camera, without changing the camera matrix. This is exactly what undistort() does, but I wanted to draw the output to a larger canvas image.
When I tried this:
Mat drawtransform = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, size, 1.0, size * 2);
undistort(inputimage, undistorted, cameraMatrix, distCoeffs, drawtransform);
It still wrote out the same sized image, but only the top left quarter of the scaled-up-by-two undistorted result. Like the documentation says, undistort writes into a target image of the same size.
It's pretty obvious that I could just copy out and reimplement a slightly tweaked version of undistort(), but I am having some trouble understanding what it is doing. Here's the source:
void cv::undistort( InputArray _src, OutputArray _dst, InputArray _cameraMatrix,
                    InputArray _distCoeffs, InputArray _newCameraMatrix )
{
    Mat src = _src.getMat(), cameraMatrix = _cameraMatrix.getMat();
    Mat distCoeffs = _distCoeffs.getMat(), newCameraMatrix = _newCameraMatrix.getMat();

    _dst.create( src.size(), src.type() );
    Mat dst = _dst.getMat();

    CV_Assert( dst.data != src.data );

    int stripe_size0 = std::min(std::max(1, (1 << 12) / std::max(src.cols, 1)), src.rows);
    Mat map1(stripe_size0, src.cols, CV_16SC2), map2(stripe_size0, src.cols, CV_16UC1);

    Mat_<double> A, Ar, I = Mat_<double>::eye(3,3);

    cameraMatrix.convertTo(A, CV_64F);
    if( distCoeffs.data )
        distCoeffs = Mat_<double>(distCoeffs);
    else
    {
        distCoeffs.create(5, 1, CV_64F);
        distCoeffs = 0.;
    }

    if( newCameraMatrix.data )
        newCameraMatrix.convertTo(Ar, CV_64F);
    else
        A.copyTo(Ar);

    double v0 = Ar(1, 2);
    for( int y = 0; y < src.rows; y += stripe_size0 )
    {
        int stripe_size = std::min( stripe_size0, src.rows - y );
        Ar(1, 2) = v0 - y;
        Mat map1_part = map1.rowRange(0, stripe_size),
            map2_part = map2.rowRange(0, stripe_size),
            dst_part = dst.rowRange(y, y + stripe_size);

        initUndistortRectifyMap( A, distCoeffs, I, Ar, Size(src.cols, stripe_size),
                                 map1_part.type(), map1_part, map2_part );
        remap( src, dst_part, map1_part, map2_part, INTER_LINEAR, BORDER_CONSTANT );
    }
}
About half of the lines here are for sanity checking and initializing input parameters. What I'm confused about is what's going on with map1 and map2. These names are sadly less descriptive than most. I must be missing some explanation, maybe it's tucked away in some introduction page, or under the doc for another function.
map1 is a two-channel signed short integer matrix and map2 is an unsigned short integer matrix; both have min(max(1, 4096/width), height) rows and width columns. The question is, why? What will these maps contain? What is the significance and purpose of this striping? What is the significance and purpose of the strange dimension of the stripes?
Use initUndistortRectifyMap to obtain the transformation to the scale you desire, then apply its output (the two matrices you mention) to remap.
The first map is used to transform the x coordinate at each pixel position, and the second is used to transform the y coordinate.
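A rough sketch of that approach (not from the original answer), reusing the cameraMatrix, distCoeffs, and inputimage names from the question; the doubled canvas size is just an example:

Size largeSize(inputimage.cols * 2, inputimage.rows * 2);
Mat newCamMat = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs,
                                          inputimage.size(), 1.0, largeSize);
Mat map1, map2;
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(), newCamMat,
                        largeSize, CV_16SC2, map1, map2);
Mat undistortedLarge;
remap(inputimage, undistortedLarge, map1, map2, INTER_LINEAR, BORDER_CONSTANT);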
You might want to read the description for the function remap. The map represents the pixel X,Y location in the source image for every pixel in the destination image. Map1_part is every X location in the source, and Map2_part is every Y location in the source.
Without reading into it much, the striping could be a method of speeding up the transformation process.
EDIT:
Also, if you are just looking to scale your image to larger dimensions, you could simply resize the output image.
double scaleX = 2.0;
double scaleY = 2.0;
cv::Mat undistortedScaled;
cv::resize(undistorted, undistortedScaled, cv::Size(0,0), scaleX, scaleY);

Normalize pixel values between 0 and 1

I am looking to normalize the pixel values of an image to the range [0, 1] using C++/OpenCV. However, when I do the normalization, using either image *= 1./255 or the normalize function, the pixel values are rounded down to zero. I have tried setting the image to type CV_32FC3.
Below is the code I have:
Mat image;
image = imread(imageLoc, CV_LOAD_IMAGE_COLOR | CV_LOAD_IMAGE_ANYDEPTH);
Mat tempImage;
// (didn't work) tempImage *= 1./255;
image.convertTo(tempImage, CV_32F, 3);
normalize(image, tempImage, 0, 1, CV_MINMAX);
int r = 100;
int c = 150;
uchar* ptr = (uchar*)(tempImage.data + r * tempImage.step);
Vec3f tempVals;
tempVals.val[0] = ptr[3*c+1];
tempVals.val[1] = ptr[3*c+2];
tempVals.val[2] = ptr[3*c+3];
cout<<" temp image - "<< tempVals << endl;
uchar* ptr2 = (uchar*)(image.data + r * image.step);
Vec3f imVals;
imVals.val[0] = ptr2[3*c+1];
imVals.val[1] = ptr2[3*c+2];
imVals.val[2] = ptr2[3*c+3];
cout<<" image - "<< imVals << endl;
This produces the following output in the console:
temp image - [0, 0, 0]
image - [90, 78, 60]
You can make convertTo() do the normalization for you:
image.convertTo(tempImage, CV_32FC3, 1.f/255);
You are passing 3 to convertTo(), presumably as a channel count, but that's not what the third argument means in that signature: it is the scale factor.
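As a small follow-up (not from the original answer): with a CV_32FC3 destination the elements are Vec3f, so the read-back in the question should use the matching type rather than uchar pointer arithmetic with channel indices 1..3. Something like:

Vec3f px = tempImage.at<Vec3f>(100, 150);   // row 100, column 150; channels are indexed 0..2
cout << " temp image - " << px << endl;     // now prints the normalized float values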
I used the normalize function and it worked (Java):
Core.normalize(src,dst,0.0,1.0,Core.NORM_MINMAX,CvType.CV_32FC1);
You should use a 32F depth for your destination image. I believe the reason for this is that since you need decimal values, you should use a non-integer OpenCV data type. According to this table, the float types correspond to the 32F depth. I chose the number of channels to be 1 and it worked: CV_32FC1.
Remember also that you're unlikely to spot any visual difference in the image.
Finally, since you probably have thousands of pixels in your image, your console might appear to print only zeros. Due to the large amount of data, try using CTRL+F to see what's going on. Hope this helps.

Creating an NDWI matrix in OpenCV

Firstly, in case you don't know, I should explain what NDWI is. NDWI stands for normalized difference water index. It is a graphical indicator for water, and its value range is [-1, 1]. NDWI is defined as follows:
(Green - NIR) / (Green + NIR)
I am in the middle of writing a simple coastline extraction tool based on OpenCV. I have already accomplished it in MATLAB, and the result looks like this:
However, the OpenCV version of the result looks binarized:
When I debugged the program, I saw that the minimum value in the NDWI matrix is zero, and this is wrong because it should be -0.8057. The code responsible for the NDWI calculation (OpenCV version) is as follows:
Mat ndwi = (greenRoi - nirRoi) / (greenRoi + nirRoi);
double min;
double max;
minMaxIdx(ndwi, &min, &max);
Mat adjNDWI;
convertScaleAbs(ndwi, adjNDWI, 255 / max);
What is the problem here, and how can I calculate the right NDWI values?
Note:
greenRoi and nirRoi are created in this way:
Rect rectangle = boundingRect(Mat(testCorners)); //vector<Point2f> testCorners(4);
Mat testImgGreen = imread((LPCSTR)testImgGreenPath, 0);
Mat testImgNir = imread((LPCSTR)testImgNirPath, 0);
Mat greenRoi(testImgGreen, rectangle);
Mat nirRoi(testImgNir, rectangle);
You need to explicitly create a floating-point cv::Mat:
cv::Mat image(rows, cols, CV_32FC1); // or CV_64FC1 if you need doubles
Elements of greenRoi, nirRoi and ndwi will all be uchars (the Mats will be CV_8UC1).
Let's say greenRoi = 10 and nirRoi = 40.
Your answer is not (10 - 40)/(10 + 40) = -0.6. The answer has to be non-negative (because it is unsigned) and can't be a fraction. According to my calculator, this will give 0.
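A tiny illustration of that saturating integer arithmetic, using the same example values:

Mat g(1, 1, CV_8UC1, Scalar(10)), n(1, 1, CV_8UC1, Scalar(40));
Mat ndwi8 = (g - n) / (g + n);                  // 10 - 40 saturates to 0, then 0 / 50 == 0
cout << (int)ndwi8.at<uchar>(0, 0) << endl;     // prints 0, not -0.6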
@Martin Beckett is correct: convert testImgGreen and testImgNir to matrices with a float type and it will work. You need:
testImgGreen.convertTo(testImgGreen, CV_32F);
testImgNir.convertTo(testImgNir, CV_32F);
Mat greenRoi(testImgGreen, rectangle);
Mat nirRoi(testImgNir, rectangle);
Mat ndwi = (greenRoi - nirRoi) / (greenRoi + nirRoi);
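And if the goal is an 8-bit image for display, comparable to the MATLAB figure, one option (not part of the answers above) is a min-max normalization rather than convertScaleAbs, since convertScaleAbs discards the sign of the negative NDWI values:

Mat adjNDWI;
normalize(ndwi, adjNDWI, 0, 255, NORM_MINMAX, CV_8U);   // map the [-1, 1] range onto [0, 255] for display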

How to get cv::calcOpticalFlowSF to work?

I am using version 2.4.4 of OpenCV (I know it's a beta), and there is an example for the cv::calcOpticalFlowSF method in the samples folder, called simpleflow_demo.cpp. But when I copy this demo and use it with my input images, it starts processing and after a few seconds it comes back with a crash report.
The documentation for the method is a little bit strange, saying the outputs are an x-flow and a y-flow instead of the single cv::Mat& flow which the method actually wants.
Any ideas how to fix the problem and get the function working?
Try this simple demo that worked for me, then modify it for your needs (display help from here):
Mat frame1 = imread("/home/radford/Desktop/1.png");
Mat frame2 = imread("/home/radford/Desktop/2.png");
namedWindow("flow");
Mat flow;
calcOpticalFlowSF(frame1, frame2, flow, 3, 2, 4);
Mat xy[2];
split(flow, xy);
//calculate angle and magnitude
Mat magnitude, angle;
cartToPolar(xy[0], xy[1], magnitude, angle, true);
//translate magnitude to range [0;1]
double mag_max;
minMaxLoc(magnitude, 0, &mag_max);
magnitude.convertTo(magnitude, -1, 1.0/mag_max);
//build hsv image
Mat _hsv[3], hsv;
_hsv[0] = angle;
_hsv[1] = Mat::ones(angle.size(), CV_32F);
_hsv[2] = magnitude;
merge(_hsv, 3, hsv);
//convert to BGR and show
Mat bgr;//CV_32FC3 matrix
cvtColor(hsv, bgr, COLOR_HSV2BGR);
imshow("flow", bgr);
waitKey(0);
In the example opencv/samples/cpp/simpleflow_demo.cpp there is a code block
if (frame1.type() != 16 || frame2.type() != 16) {
    printf(APP_NAME "Images should be of equal type CV_8UC3\n");
    exit(1);
}
So grey images should be converted to CV_8UC3, for example using cvtColor(grey, grey3, CV_GRAY2RGB);
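A short sketch of that conversion (assuming two single-channel input frames named grey1 and grey2):

Mat grey1_bgr, grey2_bgr;
cvtColor(grey1, grey1_bgr, CV_GRAY2RGB);   // now CV_8UC3, as the demo expects
cvtColor(grey2, grey2_bgr, CV_GRAY2RGB);
Mat flow;
calcOpticalFlowSF(grey1_bgr, grey2_bgr, flow, 3, 2, 4);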