OpenCV 2.3 Mat: accessing single-channel matrix elements - C++

This is how it should be done, and if I try it in simple code it works:
Mat a= Mat(4,3, CV_32FC1);
float elem_a= a.at<float>(i,j);
But after doing some math, this code gives wrong results:
Mat intrinsics(3, 3, CV_32FC1 );
Mat distortion( 5, 1, CV_32FC1 );
fs["camera_matrix"] >> intrinsics; //3*3
fs["distortion_coefficients"] >> distortion; //5*1
Mat rvec( 1, 3, CV_32FC1 );
Mat tvec( 1, 3, CV_32FC1 );
Mat R( 3, 3, CV_32FC1 );
Mat A( 3, 3, CV_32FC1 );
solvePnP( Mat(objectPoints), Mat(imagePoints), intrinsics, distortion, rvec, tvec, false );
Rodrigues( rvec, R );
A = intrinsics * R;
cout << "A = " << A << endl;
cout << "A[0] = " << A.at<float>(0,0) << "A[1] = " << A.at<float>(0,1) << endl;
Output:
A =
[-123.6820813196553, 792.0751394843999, -359.9404307669494;
668.8426426360758, -15.08087511838299, -513.8498143647524;
-0.3389607187919322, -0.03644067597638417, -0.9400945209128925]
A[0] = 4.12987e+09 A[1] = -3.48313
What am I doing wrong?
Thanks.

Please check the data type of the A matrix. I think it was silently converted to CV_64F somewhere in the pipeline (solvePnP and Rodrigues typically produce double-precision output). Reading a CV_64F matrix with at<float> reinterprets the raw double bytes, which produces garbage like the 4.12987e+09 above.
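A minimal sketch of that check, assuming the A matrix from the question (the branch taken tells you the actual element type):
if (A.type() == CV_64F) {
    // double-precision elements must be read with at<double>
    cout << "A[0] = " << A.at<double>(0, 0) << endl;
} else {
    cout << "A[0] = " << A.at<float>(0, 0) << endl;
}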

Related

Accessing elements in a multi-channel OpenCV Mat

This is my first post on Stack Overflow, so I hope I'm doing everything right; sorry if not.
I'm writing a function to convert a single RGB value into the CIE L*a*b* color space. The function is supposed to take an array of three floats (RGB channels with values in [0-255]) and output an array of three floats with the L*a*b* values. To do so, I'm using the cvtColor function available in OpenCV.
As suggested on the OpenCV website, I'm creating the Mat structures (needed by cvtColor) by constructor.
My problem is that, although I think the code runs properly and performs the conversion, I'm unable to get the values contained in the Mat structure back out.
Here's my code:
float * rgb2lab(float rgb[3]) {
    // bring input in range [0,1]
    rgb[0] = rgb[0] / 255;
    rgb[1] = rgb[1] / 255;
    rgb[2] = rgb[2] / 255;
    // copy rgb in Mat data structure and check values
    cv::Mat rgb_m(1, 1, CV_32FC3, cv::Scalar(rgb[0], rgb[1], rgb[2]));
    std::cout << "rgb_m = " << std::endl << " " << rgb_m << std::endl;
    cv::Vec3f elem = rgb_m.at<cv::Vec3f>(1, 1);
    float R = elem[0];
    float G = elem[1];
    float B = elem[2];
    printf("RGB =\n [%f, %f, %f]\n", R, G, B);
    // create lab data structure and check values
    cv::Mat lab_m(1, 1, CV_32FC3, cv::Scalar(0, 0, 0));
    std::cout << "lab_m = " << std::endl << " " << lab_m << std::endl;
    // convert
    cv::cvtColor(rgb_m, lab_m, CV_RGB2Lab);
    // check lab value after conversion
    std::cout << "lab_m2 = " << std::endl << " " << lab_m << std::endl;
    cv::Vec3f elem2 = lab_m.at<cv::Vec3f>(1, 1);
    float l = elem2[0];
    float a = elem2[1];
    float b = elem2[2];
    printf("lab =\n [%f, %f, %f]\n", l, a, b);
    // generate the output and return
    static float lab[] = { l, a, b };
    return lab;
}
As you can see, I'm extracting all channels from the Mat structure with the at function and then accessing them individually from the returned vector. This is the solution proposed in many places (one of them).
But if I run this code (the input vector was {123, 10, 200}), cout correctly prints the contents of the Mat structures (from which I gather the algorithm is converting correctly), yet as you can see the extracted values are wrong:
rgb_m =
[0.48235294, 0.039215688, 0.78431374]
RGB =
[0.000000, 0.000000, -5758185472.000000]
lab_m =
[0, 0, 0]
lab_m2 =
[35.198029, 70.120964, -71.303688]
lab =
[0.000000, 0.000000, 4822177514157213323960797626368.000000]
Anyone have an idea of what I'm doing wrong?
Thank you so much for all your help!
The first element of a cv::Mat is always at (0, 0), and your matrices are only 1x1, so indexing (1, 1) reads past the end of the data (at<> does no bounds checking in release builds). Correct cv::Vec3f elem = rgb_m.at<cv::Vec3f>(1, 1); to cv::Vec3f elem = rgb_m.at<cv::Vec3f>(0, 0); and cv::Vec3f elem2 = lab_m.at<cv::Vec3f>(1, 1); to cv::Vec3f elem2 = lab_m.at<cv::Vec3f>(0, 0);
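Applied to the function above, only the two reads change (a minimal sketch):
cv::Vec3f elem = rgb_m.at<cv::Vec3f>(0, 0);   // the only pixel of the 1x1 Mat
// ... and after the cvtColor call:
cv::Vec3f elem2 = lab_m.at<cv::Vec3f>(0, 0);  // same fix for the converted Mat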

How to convert cv::Mat to cv::Matx33f

I have a cv::Mat that I want to convert into a cv::Matx33f. I try to do it like this:
cv::Mat m;
cv::Matx33f m33;
.........
m33 = m;
but all the data gets lost! Any idea how to do this?
UPDATE
Here is the part of the code that causes my problem:
cv::Point2f Order::warpPoint(cv::Point2f pTmp){
    cv::Matx33f warp = this->getTransMatrix(); // the getter returns a cv::Mat
    transformMatrix.copyTo(warp); // because the first method didn't work, I tried to use the copyTo function
    // and the last try was
    warp = cv::Matx33f(transformationMatrix); // and warp is still 0
    cv::Point3f warpPoint = cv::Matx33f(transformMatrix) * pTmp;
    cv::Point2f result(warpPoint.x, warpPoint.y);
    return result;
}
To convert from Mat to Matx, one can use the data pointer. For example,
cv::Mat m; // assume we know it is CV_32F type, and its size is 3x3
cv::Matx33f m33((float*)m.ptr());
This should do the job, assuming continuous memory in m. You can check it by:
std::cout << "m " << m << std::endl;
std::cout << "m33 " << m33 << std::endl;
I realise that the question is old, but this should also work:
auto m33 = Matx33f(m.at<float>(0, 0), m.at<float>(0, 1), m.at<float>(0, 2),
                   m.at<float>(1, 0), m.at<float>(1, 1), m.at<float>(1, 2),
                   m.at<float>(2, 0), m.at<float>(2, 1), m.at<float>(2, 2));
http://opencv.willowgarage.com/documentation/cpp/core_basic_structures.html says:
"If you need to do some operation on Matx that is not implemented, it is easy to convert the matrix to Mat and backwards."
Matx33f m(1, 2, 3,
          4, 5, 6,
          7, 8, 9);
cout << sum(Mat(m*m.t())) << endl;
There are now special conversion operators available in the cv::Mat class for both directions:
cv::Mat {
    template<typename _Tp, int m, int n> operator Matx<_Tp, m, n>() const;
}
cv::Mat tM = cv::getPerspectiveTransform(uvp, svp);
auto ttM = cv::Matx33f(tM);
...
tM = cv::Mat(ttM);

OpenCV - how does the filter2D() method actually work?

I did look for the source code to filter2D but could not find it. Neither could Visual C++.
Are there any experts on the filter2D algorithm here? I know how it's supposed to work but not how it actually works. I made my own filter2d() function to test things, and the results are substantially different from OpenCV's filter2D(). Here's my code:
Mat myfilter2d(Mat input, Mat filter){
    Mat dst = input.clone();
    cout << " filter data successfully found. Rows:" << filter.rows << " cols:" << filter.cols << " channels:" << filter.channels() << "\n";
    cout << " input data successfully found. Rows:" << input.rows << " cols:" << input.cols << " channels:" << input.channels() << "\n";
    for (int i = 0 - (filter.rows / 2); i < input.rows - (filter.rows / 2); i++){
        for (int j = 0 - (filter.cols / 2); j < input.cols - (filter.cols / 2); j++){ // adding k and l to i and j will make up the difference and allow us to process the whole image
            float filtertotal = 0;
            for (int k = 0; k < filter.rows; k++){
                for (int l = 0; l < filter.cols; l++){
                    if (i + k >= 0 && i + k < input.rows && j + l >= 0 && j + l < input.cols){ // don't try to process pixels off the edge of the map
                        float a = input.at<uchar>(i + k, j + l);
                        float b = filter.at<float>(k, l);
                        float product = a * b;
                        filtertotal += product;
                    }
                }
            }
            // filter all processed for this pixel, write it to dst
            dst.at<uchar>(i + (filter.rows / 2), j + (filter.cols / 2)) = filtertotal;
        }
    }
    return dst;
}
Anybody see anything wrong with my implementation? (besides being slow)
Here is my execution:
cvtColor(src,src_grey,CV_BGR2GRAY);
Mat dst = myfilter2d(src_grey,filter);
imshow("myfilter2d",dst);
filter2D(src_grey,dst2,-1,filter);
imshow("filter2d",dst2);
Here is my kernel:
float megapixelarray[basesize][basesize] = {
    {1, 1, -1, 1, 1},
    {1, 1, -1, 1, 1},
    {1, 1,  1, 1, 1},
    {1, 1, -1, 1, 1},
    {1, 1, -1, 1, 1}
};
And here are the (substantially different) results:
Thoughts, anyone?
EDIT: Thanks to Brian's answer I added this code:
// normalize the kernel so its sum = 1
Scalar mysum = sum(dst);
dst = dst / mysum[0]; // make sure it's not 0
dst = dst * -1; // show negative
and filter2D worked better. Certain filters give an exact match, while other filters, like the Sobel, fail miserably.
I'm getting close to the actual algorithm, but not there yet. Anyone else with any ideas?
I think the issue is probably one of scale: if your input image is an 8-bit image, most of the time the convolution will produce a value that overflows the maximum value 255.
In your implementation it looks like you are getting the wrapped-around value, but most OpenCV functions handle overflow by capping to the maximum (or minimum) value. That explains why most of the output of OpenCV's function is white, and also why you are getting concentric shapes in your output too.
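The two behaviours are easy to see in isolation (a minimal illustration; saturate_cast is the clamping conversion OpenCV applies internally):
int v = 300;                                  // a convolution sum above the 8-bit range
uchar wrapped = (uchar)v;                     // 44: wraps modulo 256, like the manual loop
uchar capped  = cv::saturate_cast<uchar>(v);  // 255: clamped to the maximum, like filter2D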
To account for this, normalize your megapixelarray filter by dividing every value by the entire sum of the filter (i.e. make sure that the sum of the filter values is 1):
For example, instead of this filter (sum = 10):
1 1 1
1 2 1
1 1 1
Try this filter (sum = 1):
0.1 0.1 0.1
0.1 0.2 0.1
0.1 0.1 0.1
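In code, that normalization might look like this before calling filter2D (a sketch; it assumes the 5x5 float megapixelarray from the question and a nonzero sum):
cv::Mat kernel(5, 5, CV_32F, megapixelarray); // wraps the raw array without copying
double s = cv::sum(kernel)[0];                // total of all coefficients
if (s != 0)
    kernel = kernel / s;                      // now the coefficients sum to 1
filter2D(src_grey, dst2, -1, kernel);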
Here is my solution, implementing filter2D manually:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
using namespace std;
int main(int argc, const char * argv[]) {
    Mat img;
    Mat img_conv;
    Mat my_kernel;
    Mat my_conv;

    // Check that the image is loaded correctly
    img = imread("my_image.jpg", CV_LOAD_IMAGE_COLOR);
    if (!img.data)
    {
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }
    imshow("original image", img);
    img.convertTo(img, CV_64FC3);

    int kernel_size; // permitted sizes: 3, 5, 7, 9 etc.
    cout << "Select the size of kernel (it should be an odd number from 3 onwards): \n" << endl;
    cin >> kernel_size;

    // Defining the kernel here
    // (note: the initializer lists below supply only 9 coefficients, so the
    // kernels are filled correctly only for kernel_size == 3)
    int selection;
    cout << "Select the type of kernel:\n" << "1. Identity Operator \n2. Mean Filter \n3. Spatial shift \n4. Sharpening\n-> ";
    cin >> selection;
    switch (selection) {
        case 1:
            my_kernel = (Mat_<double>(kernel_size, kernel_size) << 0, 0, 0, 0, 1, 0, 0, 0, 0);
            break;
        case 2:
            my_kernel = (Mat_<double>(kernel_size, kernel_size) << 1, 1, 1, 1, 1, 1, 1, 1, 1) / (kernel_size * kernel_size);
            break;
        case 3:
            my_kernel = (Mat_<double>(kernel_size, kernel_size) << 0, 0, 0, 0, 0, 1, 0, 0, 0);
            break;
        case 4:
            my_kernel = (Mat_<double>(kernel_size, kernel_size) << -1, -1, -1, -1, 17, -1, -1, -1, -1) / (kernel_size * kernel_size);
            break;
        default:
            cerr << "Invalid selection";
            return 1;
    }
    cout << "my kernel:\n " << my_kernel << endl;

    // Adding a contour of nulls around the original image, to avoid border problems during convolution
    // (the image is copied at offset (1, 1), so the zero border is centered only for 3x3 kernels)
    img_conv = Mat(img.rows + my_kernel.rows - 1, img.cols + my_kernel.cols - 1, CV_64FC3, CV_RGB(0, 0, 0));
    for (int x = 0; x < img.rows; x++) {
        for (int y = 0; y < img.cols; y++) {
            img_conv.at<Vec3d>(x + 1, y + 1)[0] = img.at<Vec3d>(x, y)[0];
            img_conv.at<Vec3d>(x + 1, y + 1)[1] = img.at<Vec3d>(x, y)[1];
            img_conv.at<Vec3d>(x + 1, y + 1)[2] = img.at<Vec3d>(x, y)[2];
        }
    }

    // Performing the convolution
    my_conv = Mat(img.rows, img.cols, CV_64FC3, CV_RGB(0, 0, 0));
    for (int x = (my_kernel.rows - 1) / 2; x < img_conv.rows - ((my_kernel.rows - 1) / 2); x++) {
        for (int y = (my_kernel.cols - 1) / 2; y < img_conv.cols - ((my_kernel.cols - 1) / 2); y++) {
            double comp_1 = 0;
            double comp_2 = 0;
            double comp_3 = 0;
            // accumulate the weighted sum of the kernel neighbourhood, per channel
            for (int u = -(my_kernel.rows - 1) / 2; u <= (my_kernel.rows - 1) / 2; u++) {
                for (int v = -(my_kernel.cols - 1) / 2; v <= (my_kernel.cols - 1) / 2; v++) {
                    comp_1 += img_conv.at<Vec3d>(x + u, y + v)[0] * my_kernel.at<double>(u + ((my_kernel.rows - 1) / 2), v + ((my_kernel.cols - 1) / 2));
                    comp_2 += img_conv.at<Vec3d>(x + u, y + v)[1] * my_kernel.at<double>(u + ((my_kernel.rows - 1) / 2), v + ((my_kernel.cols - 1) / 2));
                    comp_3 += img_conv.at<Vec3d>(x + u, y + v)[2] * my_kernel.at<double>(u + ((my_kernel.rows - 1) / 2), v + ((my_kernel.cols - 1) / 2));
                }
            }
            my_conv.at<Vec3d>(x - ((my_kernel.rows - 1) / 2), y - (my_kernel.cols - 1) / 2)[0] = comp_1;
            my_conv.at<Vec3d>(x - ((my_kernel.rows - 1) / 2), y - (my_kernel.cols - 1) / 2)[1] = comp_2;
            my_conv.at<Vec3d>(x - ((my_kernel.rows - 1) / 2), y - (my_kernel.cols - 1) / 2)[2] = comp_3;
        }
    }
    my_conv.convertTo(my_conv, CV_8UC3);
    imshow("convolution - manual", my_conv);

    // Performing the filtering using the OpenCV functions
    Mat dst;
    filter2D(img, dst, -1, my_kernel, Point(-1, -1), 0, BORDER_DEFAULT);
    dst.convertTo(dst, CV_8UC3);
    imshow("convolution - opencv", dst);
    waitKey();
    return 0;
}

Results of compareHist in OpenCV

I'm trying to compare two histograms which I stored as arrays. I'm new to the C++ interface (cv::Mat) and to calculating histograms in OpenCV.
My code:
int testArr1[4] = {12, 10, 11, 11};
int testArr2[4] = {12, 0, 11, 0};
cv::Mat M1 = cv::Mat(1,4,CV_8UC1, testArr1);
cv::Mat M2 = cv::Mat(1,4,CV_8UC1, testArr2);
int histSize = 4;
float range[] = {0, 20};
const float* histRange = {range};
bool uniform = true;
bool accumulate = false;
cv::Mat a1_hist, a2_hist;
cv::calcHist(&M1, 1, 0, cv::Mat(), a1_hist, 1, &histSize, &histRange, uniform, accumulate );
cv::calcHist(&M2, 1, 0, cv::Mat(), a2_hist, 1, &histSize, &histRange, uniform, accumulate );
double compar_c = cv::compareHist(a1_hist, a2_hist, CV_COMP_CORREL);
double compar_chi = cv::compareHist(a1_hist, a2_hist, CV_COMP_CHISQR);
double compar_bh = cv::compareHist(a1_hist, a2_hist, CV_COMP_BHATTACHARYYA);
double compar_i = cv::compareHist(a1_hist, a2_hist, CV_COMP_INTERSECT);
cout << "compare(CV_COMP_CORREL): " << compar_c << "\n";
cout << "compare(CV_COMP_CHISQR): " << compar_chi << "\n";
cout << "compare(CV_COMP_BHATTACHARYYA): " << compar_bh << "\n";
cout << "compare(CV_COMP_INTERSECT): " << compar_i << "\n";
The results are a bit unexpected:
compare(CV_COMP_CORREL): 1
compare(CV_COMP_CHISQR): 0
compare(CV_COMP_BHATTACHARYYA): 0
compare(CV_COMP_INTERSECT): 4
For intersection, for example, I expected something like 0.5. What am I doing wrong? Can I not put arrays in a cv::Mat? Or did I choose the wrong histogram "settings"?
The problem is in your first four lines, where you convert a C array of ints into a matrix of chars. The constructor assumes a char array and therefore can't read the values properly, so your matrices M1 and M2 don't contain the correct values.
But if you change the following lines, so that the type of the array matches the type of the matrix:
char testArr1[4] = {12, 10, 11, 11};
char testArr2[4] = {12, 0, 11, 0};
I get the following output from your program:
compare(CV_COMP_CORREL): 0.57735
compare(CV_COMP_CHISQR): 2.66667
compare(CV_COMP_BHATTACHARYYA): 0.541196
compare(CV_COMP_INTERSECT): 2
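If you instead want to keep values that don't fit in 8 bits, calcHist also accepts 32-bit float input, so a hedged alternative is to match the types the other way around:
float testArr1[4] = {12, 10, 11, 11};
float testArr2[4] = {12, 0, 11, 0};
cv::Mat M1 = cv::Mat(1, 4, CV_32FC1, testArr1);
cv::Mat M2 = cv::Mat(1, 4, CV_32FC1, testArr2);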

How to get the max value from n dimensional array in OpenCV

I am trying to get the max value from a 3-d Mat, but both minMaxIdx and minMaxLoc failed to do this.
int sz[] = {BIN, BIN, BIN};
Mat accumarray(3, sz, CV_8U, Scalar::all(0));
double testMaxval = 0;
int minIdx = accumarray.dims;
minMaxIdx(accumarray, NULL, &testMaxval, NULL, minIdx, NULL);
cout << testMaxval << endl;
This code doesn't work. Can I use max(), minMaxIdx(), or minMaxLoc() to get the max value efficiently, without manually processing the entire n-dimensional array?
The following code works for me with OpenCV 2.3.1:
int sz[] = {3, 3, 3};
Mat accumarray(3, sz, CV_8U, Scalar::all(0));
accumarray.at<uchar>(0, 1, 2) = 20;
double testMaxval;
int maxIdx[3];
minMaxIdx(accumarray, 0, &testMaxval, 0, maxIdx);
cout << testMaxval << endl ;
cout << maxIdx[0] << ", " << maxIdx[1] << ", " << maxIdx[2] << endl;
Use Mat() instead of NULL for the mask argument, or you will trip an assertion on mask.empty():
Mat m; // assume m has been filled with data
double minVal, maxVal;
int minIdx[2], maxIdx[2]; // minMaxIdx expects one index slot per dimension
cv::minMaxIdx(m, &minVal, &maxVal, minIdx, maxIdx, Mat());