I am trying to access a 3D histogram of an RGB image. I want to iterate through the histogram and check the individual values in the 3D matrix, but when I check the number of rows and columns of the matrix, I get -1, as shown below.
CODE
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main( int argc, const char** argv ) {
Mat image = imread("fl.png");
int histSize[3] = {8, 8, 8};
float range[2] = {0, 256};
const float * ranges[3] = {range, range, range};
int channels[3] = {0, 1, 2};
Mat hist;
calcHist(&image, 1, channels, Mat(), hist, 3, histSize, ranges);
cout << "Hist.rows = "<< hist.rows << endl;
cout << "Hist.cols = "<< hist.cols << endl;
return 0;
}
OUTPUT
Hist.rows = -1
Hist.cols = -1
What mistake am I making? How can I access the individual matrix values?
From the documentation of Mat:
//! the number of rows and columns or (-1, -1) when the array has more than 2 dimensions
But you have 3 dimensions.
You can access individual values of your histogram using hist.at<float>(i, j, k) (calcHist stores the bins as float, i.e. CV_32F).
Or you can use iterators as described in the documentation here.
Code
// Build with g++ main.cpp -lopencv_highgui -lopencv_core -lopencv_imgproc
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
using std::cout;
using std::endl;
using namespace cv; // Please, don't include whole namespaces!
int main( int argc, const char** argv ) {
Mat image = imread("good.jpg");
int histSize[3] = {8, 8, 8};
float range[2] = {0, 256};
const float * ranges[3] = {range, range, range};
int channels[3] = {0, 1, 2};
Mat hist;
calcHist(&image, 1, channels, Mat(), hist, 3, histSize, ranges);
cout << "Hist.dims = " << hist.dims << endl;
cout << "Value: " << hist.at<double>(0,0, 0) << endl;
cout << "Hist.rows = "<< hist.rows << endl;
cout << "Hist.cols = "<< hist.cols << endl;
return 0;
}
Iterate through every value:
for (MatConstIterator_<float> it = hist.begin<float>(); it != hist.end<float>(); ++it) {
    cout << "Value: " << *it << "\n";
}
cout << std::flush;
Iterate through every value using indices:
for (int i = 0; i < histSize[0]; i++) {
    for (int j = 0; j < histSize[1]; j++) {
        for (int k = 0; k < histSize[2]; k++) {
            cout << "Value(" << i << ", " << j << ", " << k << "): " << hist.at<float>(i, j, k) << "\n";
        }
    }
}
cout << std::flush;
Related
I want to use this code to create a 3-D matrix:
int size[3] = { 100, 100,100};
cv::Mat mat3D(3, size, CV_8UC1, cv::Scalar(0));
but when I inspect it in the debugger, it seems that I don't get the right matrix. What's the problem?
Your code seems OK; the debugger is probably just misinterpreting the multidimensional Mat.
This code correctly displays all 24 elements (I changed the dimensions to get a smaller matrix):
int size[3] = { 2, 3, 4};
cv::Mat mat3D(3, size, CV_8UC1, cv::Scalar(0));
std::cout << "Total size " << mat3D.size << std::endl;
int counter = 0;
for (cv::MatConstIterator_<uchar> it = mat3D.begin<uchar>(); it != mat3D.end<uchar>(); ++it) {
    std::cout << " " << (int) *it;
    counter++;
}
std::cout << std::endl;
std::cout << counter << " elts" << std::endl;
Your code is correct.
Concerning your debugger output: when dealing with a multidimensional cv::Mat this is the expected behaviour. The OpenCV documentation for cv::Mat::rows reads: "the number of rows and columns or (-1, -1) when the matrix has more than 2 dimensions". Indeed, your debugger displays channels x rows x columns (1 x -1 x -1).
You can try this to enumerate some cv::Mat related attributes:
std::cout
<< "dims : " << mat3D.rows
<< "\nchannels() : " << mat3D.channels()
<< "\nrows : " << mat3D.rows
<< "\ncols : " << mat3D.cols
<< "\nsize() : " << mat3D.size()
<< "\nsize : " << mat3D.size;
Your output should be:
dims : 3
channels() : 1
rows : -1
cols : -1
size() : [-1 x -1]
size : 100 x 100 x 100
You can try this:
int Dimensions3D[] = { 100, 100, 100 };
cv::Mat RTstruct3D(3, Dimensions3D, CV_8U, cv::Scalar(0));
One way is to use .reshape().
If you have a one-dimensional array, you can convert it to a 3x3 array using the code below:
int x[] = {1,2,3,4,5,6,7,8,9};
int len_x = sizeof(x)/sizeof(x[0]);
cv::Mat mat3D(1,len_x, CV_32S,x);
mat3D = mat3D.reshape(1, 3); // reshape(channels, rows): 1 channel, 3 rows -> a 3x3 matrix
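A complete minimal sketch of the same idea (my own illustration, not part of the original answer), showing that the reshaped header shares the data of the original array:
#include <iostream>
#include <opencv2/core/core.hpp>

int main() {
    int x[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    int len_x = sizeof(x) / sizeof(x[0]);
    cv::Mat flat(1, len_x, CV_32S, x);    // 1x9 header over x, no data copied
    cv::Mat square = flat.reshape(1, 3);  // reshape(channels, rows): 1 channel, 3 rows -> 3x3
    std::cout << square << std::endl;     // prints [1, 2, 3; 4, 5, 6; 7, 8, 9]
    return 0;
}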
I have been working with retina images. I have read many posts and tried to replicate them. I read an article that uses the following formula to normalize the image so that the background does not affect the blood vessels, but when I use it in OpenCV it does not work the same way, so I do not know how to normalize so that the vessels are not affected by the retina background. Below is the code I used:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char** argv ){
Mat green= cv::imread("green.png");
Mat img= cv::imread("img.jpg");
double minVal;
double maxVal;
minMaxLoc( green, &minVal, &maxVal);
cout << "min val : " << minVal << endl;
cout << "max val: " << maxVal << endl;
double minVal2;
double maxVal2;
double media = 58;
minMaxLoc( img, &minVal2, &maxVal2);
cout << "min val : " << minVal2 << endl;
cout << "max val: " << maxVal2 << endl;
Mat eqIm(green.rows,green.cols,green.type());
int nl = img.rows; // number of lines
int nc = img.cols * img.channels();
for (int j = 0; j<nl; j++) {// j is each row
for (int ec = 0; ec < nc; ec++) {//ec is each col and channels
eqIm.data[j*img.cols*img.channels() + ec] =
((green.data[j*img.cols*img.channels() + ec] - (maxVal)) *(((maxVal2-minVal2/((minVal)-(maxVal)))+ (minVal2))));
}
}
imwrite("eqIm.png", eqIm);
waitKey(0);
return 0;
}
I also show the images; below you can see the original images:
The result image:
The formula used:
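Judging from the code, the formula is a linear min-max rescale. For comparison, the same kind of rescaling can be done in a single call with cv::normalize; a minimal sketch (an assumption about the intended formula, with placeholder values for the target range):
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    // Assumption: 'green' is the extracted green channel, loaded as a single-channel image.
    Mat green = imread("green.png", IMREAD_GRAYSCALE);
    double minVal2 = 0, maxVal2 = 255;  // target range; in the question these come from 'img'
    Mat eqIm;
    // NORM_MINMAX linearly maps green's [min, max] onto [minVal2, maxVal2].
    normalize(green, eqIm, minVal2, maxVal2, NORM_MINMAX, CV_8U);
    imwrite("eqIm_normalize.png", eqIm);
    return 0;
}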
In the OpenCV Tutorial
http://docs.opencv.org/master/d6/d6d/tutorial_mat_the_basic_image_container.html
there is the following example for creating a Mat:
int sz[3] = {2,2,2};
Mat L(3,sz, CV_8UC(1), Scalar::all(0));
This works fine, but when I try to print the Mat my program crashes.
cout << "L = " << endl << " " << L << endl << endl;
Why doesn't this work?
Is there a way to do this without loops or splitting the Mat L?
To print an n-dimensional matrix you can use a matrix slice. Since 2-D matrices are stored row by row, 3-D matrices plane by plane, and so on, you can use this code:
// Wraps the first plane of L in a header with one fewer dimension.
// Note: the element type is hard-coded to CV_8UC1 and the dim parameter is not used here.
cv::Mat sliceMat(cv::Mat L, int dim, std::vector<int> _sz)
{
    cv::Mat M(L.dims - 1, std::vector<int>(_sz.begin() + 1, _sz.end()).data(), CV_8UC1, L.data + L.step[0] * 0);
    return M;
}
This performs one mat slice; for more dimensions you make more slices. The example below shows 3- and 4-dimensional matrices:
std::cout << "3 dimensions" << std::endl;
std::vector<int> sz = { 3,3,3 };
cv::Mat L;
L.create(3, sz.data(), CV_8UC1);
L = cv::Scalar(255);
std::cout<< sliceMat(L, 1, sz);
std::cout << std::endl;
std::cout <<"4 dimensions"<< std::endl;
sz = { 5,4,3,5 };
L.create(4, sz.data(), CV_8UC1);
L = cv::Scalar(255);
std::cout << sliceMat(sliceMat(L, 1, sz),2, std::vector<int>(sz.begin() + 1, sz.end()));
(Screenshot of the end result.)
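Another option (a sketch of my own, not from the answer above) is to wrap each plane of the 3-D Mat in a 2-D header and print it with the usual operator<<:
#include <iostream>
#include <opencv2/core/core.hpp>

int main() {
    int sz[3] = {2, 2, 2};
    cv::Mat L(3, sz, CV_8UC1, cv::Scalar::all(7));
    for (int i = 0; i < L.size[0]; ++i) {
        // L.ptr(i) points at the start of plane i of a contiguous Mat;
        // wrap it in a size[1] x size[2] 2-D header (no copy).
        cv::Mat plane(L.size[1], L.size[2], L.type(), L.ptr(i));
        std::cout << "plane " << i << " =" << std::endl << plane << std::endl;
    }
    return 0;
}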
I am using the Eigen solver. I am having trouble retrieving the values from the vectors/matrices that I create. For example, the following code compiles without errors, but I get a run-time error.
#include <iostream>
#include <math.h>
#include <vector>
#include <Eigen/Dense>
using namespace std;
using namespace Eigen;
int main()
{
Matrix3f A;
Vector3f b;
vector<float> c;
A << 1, 2, 3, 4, 5, 6, 7, 8, 10;
b << 3, 3, 4;
cout << "Here is the matrix A:\n" << A << endl;
cout << "Here is the vector b:\n" << b << endl;
Vector3f x = A.colPivHouseholderQr().solve(b);
for (int i = 0; i < 3; i++)
{
c[i] = x[i];
cout << c[i] << " ";
}
//cout << "The solution is:\n" << x << endl;
return 0;
}
How do I retrieve the values in x into a variable of my choice? (I need this because it will be a parameter to another function I wrote.)
Use
vector<float> c(3);
Or
for (int i = 0; i < 3; i++)
{
c.push_back(x[i]);
cout << c[i] << " ";
}
As stated in the comment, the problem was that c was not resized before assigning values to it. Additionally, you actually don't need the Eigen::Vector3f x, but you can assign the result of the .solve() operation directly to a Map which points to the data of the vector:
#include <iostream>
#include <vector>
#include <Eigen/QR>
using namespace Eigen;
using namespace std;
int main()
{
Matrix3f A;
Vector3f b;
vector<float> c(A.cols());
A << 1, 2, 3, 4, 5, 6, 7, 8, 10;
b << 3, 3, 4;
cout << "Here is the matrix A:\n" << A << endl;
cout << "Here is the vector b:\n" << b << endl;
Vector3f::Map(c.data()) = A.colPivHouseholderQr().solve(b);
for(int i=0; i<3; ++i) std::cout << "c[" << i << "]=" << c[i] << '\n';
}
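Since the values are needed as a parameter to another function, note that Map also works in the other direction: it can view an existing std::vector as an Eigen vector without copying. A minimal sketch (the consumer function here is hypothetical):
#include <vector>
#include <Eigen/Dense>

// Hypothetical consumer: reads the solution back as an Eigen vector, no copy made.
float sumOfSolution(const std::vector<float>& c)
{
    Eigen::Map<const Eigen::Vector3f> v(c.data());  // requires c.size() >= 3
    return v.sum();
}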
Hi, I'm trying to play a little bit with the Mat class.
I want to do an element-wise product between two images, the C++/OpenCV equivalent of MATLAB's immultiply.
This is my code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
Mat imgA, imgB;
Mat imgAB;
Mat product;
void printMinMax(Mat m, string s) {
double minVal;
double maxVal;
Point minLoc;
Point maxLoc;
minMaxLoc( m, &minVal, &maxVal, &minLoc, &maxLoc );
cout << "min val in " << s << ": " << minVal << endl;
cout << "max val in " << s << ": " << maxVal << endl;
}
int main(int /*argc*/, char** /*argv*/) {
cout << "OpenCV version: " << CV_MAJOR_VERSION << " " << CV_MINOR_VERSION << endl;
imgA = imread("test1.jpg");
cout << "original image size: " << imgA.rows << " " << imgA.cols << endl;
cout << "original type: " << imgA.type() << endl;
cvtColor(imgA, imgA, CV_BGR2GRAY);
printMinMax(imgA, "imgA");
imgB = imread("test2.jpg");
cout << "original image size: " << imgB.rows << " " << imgB.cols << endl;
cout << "original type: " << imgB.type() << endl;
cvtColor(imgB, imgB, CV_BGR2GRAY);
printMinMax(imgB, "imgB");
namedWindow("originals", CV_WINDOW_AUTOSIZE);
namedWindow("product", CV_WINDOW_AUTOSIZE);
imgAB = Mat( max(imgA.rows,imgB.rows), imgA.cols+imgB.cols, imgA.type());
imgA.copyTo(imgAB(Rect(0, 0, imgA.cols, imgA.rows)));
imgB.copyTo(imgAB(Rect(imgA.cols, 0, imgB.cols, imgB.rows)));
product = imgA.mul(imgB);
printMinMax(product, "product");
while( true )
{
char c = (char)waitKey(10);
if( c == 27 )
{ break; }
imshow( "originals", imgAB );
imshow( "product", product );
}
return 0;
}
here is the result:
OpenCV version: 2 4
original image size: 500 500
original type: 16
min val in imgA: 99
max val in imgA: 255
original image size: 500 500
original type: 16
min val in imgB: 0
max val in imgB: 255
init done
opengl support available
min val in product: 0
max val in product: 255
I think that the max value in the product should be greater than 255, but it is truncated to 255 because the type of the two matrices is 16.
I have tried converting the matrices to CV_32F, but then the maxVal of the product is 64009 (a number that I don't understand).
Thanks to Wajih's comment I did some basic testing and debugging, and I got it to work perfectly. I think this could become a mini tutorial on alpha blending and image multiplication, but for now it is only a few lines of commented code.
Note that the two images must be the same size, and of course some error checking should be added for solid code.
Hope it helps someone! And, of course, if you have hints to make this code more readable, more compact (one-liners are very appreciated!), or more efficient, just comment. Thank you a lot!
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
void printMinMax(Mat m, string name) {
double minVal;
double maxVal;
Point minLoc;
Point maxLoc;
if(m.channels() >1) {
cout << "ERROR: matrix "<<name<<" must have 1 channel for calling minMaxLoc" << endl;
}
minMaxLoc( m, &minVal, &maxVal, &minLoc, &maxLoc );
cout << "min val in " << name << ": " << minVal << " in loc: " << minLoc << endl;
cout << "max val in " << name << ": " << maxVal << " in loc: " << maxLoc << endl;
}
int main(int /*argc*/, char** /*argv*/) {
cout << "OpenCV version: " << CV_MAJOR_VERSION << " " << CV_MINOR_VERSION << endl; // 2 4
Mat imgA, imgB;
Mat imgAB;
Mat product;
// fast matrix creation, comma-separated initializer
// example1: create a matrix with value from 0 to 255
imgA = Mat(3, 3, CV_8UC1);
imgA = (Mat_<uchar>(3,3) << 0,1,2,3,4,5,6,7,255);
cout << "test Mat 3x3" << endl << imgA << endl;
// note that if a value exceeds 255 it is truncated to value%256
imgA = (Mat_<uchar>(3,3) << 0, 1, 258, 3, 4, 5, 6, 7, 255);
cout << "test Mat 3x3 with the third element truncated to 258%256=2" << endl << imgA << endl;
// create a second matrix
imgB = Mat(3, 3, CV_8UC1);
imgB = (Mat_<uchar>(3,3) << 0,1,2,3,4,5,6,7,8);
// now the matrix product. we are multiplying a value that can go from 0-255 with another 0-255 value..
// the edge cases are "min * min" and "max * max",
// that means: our product is a function that returns a value in the range 0*0 to 255*255, i.e. 0-65025
// ah, ah! this number exceeds the Mat 8UC1 range! we need a different data type,
// a bigger one.. let's say 32FC1
Mat imgA_32FC1 = imgA.clone();
imgA_32FC1.convertTo(imgA_32FC1, CV_32FC1);
Mat imgB_32FC1 = imgB.clone();
imgB_32FC1.convertTo(imgB_32FC1, CV_32FC1);
// after conversion.. are the values scaled?
cout << "imgA after conversion:" << endl << imgA_32FC1 << endl;
cout << "imgB after conversion:" << endl << imgB_32FC1 << endl;
product = imgA_32FC1.mul( imgB_32FC1 );
// note: the product values are in the range 0-65025
cout << "the product:" << endl << product << endl;
// now, this does not make much sense, because we started from a 0-255 range Mat and now we have a 0-65025 one..
// it is not the uchar range and it is not the full float range (which is a lot bigger than that)
// so, we can normalize back to 0-255
// what do I mean by 'normalize' here?
// I mean: scale all values by a constant that maps 0 to 0 and 65025 to 255..
product.convertTo(product, CV_32FC1, 1.0f/65025.0f * 255);
// but it is still 32FC1.. not the same type as the starting matrix..
cout << "the product, normalized back to 0-255, still in 32FC1:" << endl << product << endl;
product.convertTo(product, CV_8UC1);
cout << "the product, normalized back to 0-255, now int 8UC1:" << endl << product << endl;
cout << "-----------------------------------------------------------" << endl;
// real stuff now.
imgA = imread("test1.jpg");
cvtColor(imgA, imgA, CV_BGR2GRAY);
imgB = imread("test2.jpg");
cvtColor(imgB, imgB, CV_BGR2GRAY);
imgA_32FC1 = imgA.clone();
imgA_32FC1.convertTo(imgA_32FC1, CV_32FC1);
imgB_32FC1 = imgB.clone();
imgB_32FC1.convertTo(imgB_32FC1, CV_32FC1);
product = imgA_32FC1.mul( imgB_32FC1 );
printMinMax(product, "product");
product.convertTo(product, CV_32FC1, 1.0f/65025.0f * 255);
product.convertTo(product, CV_8UC1);
// concat two images in one big image
imgAB = Mat( max(imgA.rows,imgB.rows), imgA.cols+imgB.cols, imgA.type());
imgA.copyTo(imgAB(Rect(0, 0, imgA.cols, imgA.rows)));
imgB.copyTo(imgAB(Rect(imgA.cols, 0, imgB.cols, imgB.rows)));
namedWindow("originals", CV_WINDOW_AUTOSIZE);
namedWindow("product", CV_WINDOW_AUTOSIZE);
while( true )
{
char c = (char)waitKey(10);
if( c == 27 )
{ break; }
imshow( "originals", imgAB );
imshow( "product", product );
}
return 0;
}
You are right, you should convert your matrices imgA and imgB to, say, CV_32FC1. Since the max value in these matrices is 255, the maximum possible product is 65025. However, the maxima of imgA and imgB need not be at the same location, so 64009 is quite possible: for example, a pixel where both images happen to have the value 253 gives 253 * 253 = 64009, which can exceed every other per-pixel product.
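As a quick check, a minimal sketch (assuming imgA and imgB are the already-converted single-channel 8-bit images from the question): multiply the float versions and look at where the maximum of the product actually occurs; it is generally not at the location of either input's maximum.
cv::Mat a32, b32;
imgA.convertTo(a32, CV_32FC1);
imgB.convertTo(b32, CV_32FC1);
cv::Mat prod = a32.mul(b32);

double maxProd;
cv::Point maxLoc;
cv::minMaxLoc(prod, 0, &maxProd, 0, &maxLoc);
std::cout << "max of product: " << maxProd << " at " << maxLoc << std::endl;
// e.g. 64009 = 253 * 253, from a pixel where both inputs happen to be 253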