I load an image in grayscale mode into Mat image. I use image.convertTo(image, CV_32F);
to convert the data type to double. I would like to convert the image into a vector<double>, so I iterate through the matrix in the following way:
int channels = image.channels();
int nRows = image.rows;
int nCols = image.cols;
vector<double> vectorizedMatrix (nRows*nCols);
if (image.isContinuous()) {
    nCols *= nRows;
    nRows = 1;
}
double* pI;
int k = 0;
for (int i=0; i<nRows; i++) {
    pI = image.ptr<double>(i);
    for (int j=0; j<nCols; j++) {
        vectorizedMatrix.at(k) = pI[j];
        k++;
    }
}
return vectorizedMatrix;
When checking the data I get, I see huge values in the area of 10^10, which cannot be right. Am I iterating through the matrix incorrectly, or does convertTo do something I'm not aware of?
"I use image.convertTo(image, CV_32F); to convert the data type to double"
No, that will convert to float. If you want double, use this instead:
image.convertTo(image, CV_64F);
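As a side note, once the matrix really is CV_64F, a minimal sketch (assuming a single-channel image, as in the question) that builds the vector with Mat's own iterators and avoids the manual index bookkeeping:
image.convertTo(image, CV_64F); // 64-bit float == double
// begin<double>()/end<double>() walk the matrix row by row,
// whether or not the data is stored continuously
vector<double> vectorizedMatrix(image.begin<double>(), image.end<double>());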
I do not understand why I am getting a core dump here:
int nrows = 480, ncols = 640;
cv::Mat m3_8u;
// Create a variable of type cv::Mat* named m3_8u which has three channels with a
// depth of 8bit per channel. Then, set the first channel to 255 and display the result.
m3_8u.create(nrows, ncols, CV_8UC3);
for (int i=0; i<nrows; i++){
    for (int j=0; j<ncols; j++){
        m3_8u.at<cv::Vec3f>(i,j)[0] = 255;
    }
}
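For what it's worth, the crash in the snippet above most likely comes from reading the CV_8UC3 matrix as cv::Vec3f (three floats, 12 bytes per pixel) instead of cv::Vec3b (three uchars, 3 bytes per pixel), which runs past the end of the buffer. A minimal sketch of the matching access, keeping the variable names from the question:
// For a CV_8UC3 matrix each pixel is a cv::Vec3b (one uchar per channel),
// so the access type has to be Vec3b, not Vec3f.
for (int i = 0; i < nrows; i++){
    for (int j = 0; j < ncols; j++){
        m3_8u.at<cv::Vec3b>(i,j)[0] = 255; // first channel only
    }
}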
I want to change all the values from the first color dimension.
I want to do an operation like this; however, I cannot get the values out of the vector of Mat and change them. tables is a one-dimensional array, by the way. Thanks.
vector<Mat> orjchannel;
vector<Mat> refchannel;
// There are some functions here
for (int i = 0; i < 512; i++){
    for (int j = 0; j < 512; j++){
        double value = refchannel[i][j]; // This part does not work
        orjchannel[i][j] = tables[value];
    }
}
With OpenCV, you typically access the values of a Mat with the at<DATATYPE>(r,c) command. For example...
// Mat constructor
Mat data(4, 1, CV_64FC1);
// Set Value
data.at<double>(0,0) = 4;
// Get Value
double value = data.at<double>(0,0);
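Applied to the snippet above, the loop would look roughly like this (only a sketch: it assumes the vectors hold single-channel 8-bit planes, that you work on one plane at a time, say the first, and that tables has 256 entries):
// Look up every pixel of one plane in the table and write the result
// into the corresponding output plane.
for (int i = 0; i < refchannel[0].rows; i++){
    for (int j = 0; j < refchannel[0].cols; j++){
        uchar value = refchannel[0].at<uchar>(i, j);
        orjchannel[0].at<uchar>(i, j) = tables[value];
    }
}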
I'm not certain about my mean function. In Matlab, the mean of my image is 135.3565 by using mean2; however, my function gives 140.014 and OpenCV built-in cv::mean gives me [137.67, 152.467, 115.933, 0]. This is my code.
double _mean(const cv::Mat &image)
{
    double N = image.rows * image.cols;
    double mean;
    for (int rows = 0; rows < image.rows; ++rows)
    {
        for (int cols = 0; cols < image.cols; ++cols)
        {
            mean += (float)image.at<uchar>(rows, cols);
        }
    }
    mean /= N;
    return mean;
}
My guess is that you are feeding one type of image to Matlab and another type to your algorithm and to the OpenCV built-in function.
Matlab's mean2 function takes a 2D (grayscale) image. Your function assumes that the image is a 2D matrix of unsigned chars (grayscale too), so when you do this:
mean += (float)image.at<uchar>(rows, cols);
on a color image, incorrect values are retrieved. Try converting your image to grayscale before passing it to your function and compare the result with Matlab.
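For example, a quick check along those lines (assuming the input was loaded in BGR order by cv::imread):
// Convert to grayscale first; the single-channel mean should then be
// comparable to Matlab's mean2.
cv::Mat gray;
cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
double m = _mean(gray);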
For a color image, modify your function to this:
double _mean(const cv::Mat &image)
{
    double N = image.rows * image.cols * image.channels();
    double mean = 0.0; // make sure the accumulator starts at zero
    for (int rows = 0; rows < image.rows; ++rows)
    {
        for (int cols = 0; cols < image.cols; ++cols)
        {
            for (int channels = 0; channels < image.channels(); ++channels)
            {
                mean += image.at<cv::Vec3b>(rows, cols)[channels];
            }
        }
    }
    mean /= N;
    return mean;
}
and in Matlab compute the mean with
mean(image(:))
which will vectorize your image before computing the mean. Compare the results.
The OpenCV function computes the mean of each channel of the image separately, so the result is a vector with the mean of each channel.
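For instance, a small sketch of getting both numbers from OpenCV (cv::mean returns a cv::Scalar with one entry per channel):
// Per-channel means, e.g. [137.67, 152.467, 115.933, 0] for a BGR image.
cv::Scalar channelMeans = cv::mean(image);
// Overall mean across all channels, comparable to mean(image(:)) in Matlab.
double overall = (channelMeans[0] + channelMeans[1] + channelMeans[2]) / 3.0;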
I hope this will help!
I want to declare, populate, and access a multi-dimensional matrix in OpenCV (C++) that is compatible with the cv namespace. I found no quick and easy-to-learn examples of them. Can you please help me out?
Here is a short example from the NAryMatIterator documentation; it shows how to create, populate, and process a multi-dimensional matrix in OpenCV:
void computeNormalizedColorHist(const Mat& image, Mat& hist, int N, double minProb)
{
    const int histSize[] = {N, N, N};

    // make sure that the histogram has a proper size and type
    hist.create(3, histSize, CV_32F);
    // and clear it
    hist = Scalar(0);

    // the loop below assumes that the image
    // is an 8-bit 3-channel image. check it.
    CV_Assert(image.type() == CV_8UC3);
    MatConstIterator_<Vec3b> it = image.begin<Vec3b>(),
                             it_end = image.end<Vec3b>();
    for( ; it != it_end; ++it )
    {
        const Vec3b& pix = *it;
        hist.at<float>(pix[0]*N/256, pix[1]*N/256, pix[2]*N/256) += 1.f;
    }

    minProb *= image.rows*image.cols;

    // iterate through the histogram. on each iteration
    // itN.planes[*] (of type Mat) will be set to the current plane.
    const Mat* arrays[] = { &hist, 0 };
    Mat plane;
    NAryMatIterator itN(arrays, &plane, 1);
    double s = 0;
    for(int p = 0; p < itN.nplanes; p++, ++itN)
    {
        threshold(itN.planes[0], itN.planes[0], minProb, 0, THRESH_TOZERO);
        s += sum(itN.planes[0])[0];
    }

    s = 1./s;
    itN = NAryMatIterator(arrays, &plane, 1);
    for(int p = 0; p < itN.nplanes; p++, ++itN)
        itN.planes[0] *= s;
}
Also, check out the implementation of the cv::compareHist function for another usage example of NAryMatIterator.
To create a multi-dimensional matrix of size 100x100x3, using single-channel floats, with all elements initialized to 10, you can write:
int size[3] = { 100, 100, 3 };
cv::Mat M(3, size, CV_32FC1, cv::Scalar(10));
To loop over and output the elements in the matrix you can do:
for (int i = 0; i < 100; i++)
    for (int j = 0; j < 100; j++)
        for (int k = 0; k < 3; k++)
            std::cout << M.at<float>(i, j, k) << ", ";
However, beware of the troubles with using multi-dimensional matrices as documented here: How do i get the size of a multi-dimensional cv::Mat? (Mat, or MatND)
How would I be able to cycle through an image using OpenCV as if it were a 2D array, to get the RGB values of each pixel? Also, would a Mat be preferable over an IplImage for this operation?
cv::Mat is preferred over IplImage because it simplifies your code
cv::Mat img = cv::imread("lenna.png");
for(int i=0; i<img.rows; i++)
    for(int j=0; j<img.cols; j++)
        // You can now access the pixel value with cv::Vec3b;
        // cast to int so the uchar prints as a number rather than a character
        std::cout << (int)img.at<cv::Vec3b>(i,j)[0] << " " << (int)img.at<cv::Vec3b>(i,j)[1] << " " << (int)img.at<cv::Vec3b>(i,j)[2] << std::endl;
This assumes that you need to use the RGB values together. If you don't, you can use cv::split to get each channel separately. See etarion's answer for the link with an example.
Also, in many cases you simply need the image in grayscale. Then, you can load the image in grayscale and access it as an array of uchar.
cv::Mat img = cv::imread("lenna.png",0);
for(int i=0; i<img.rows; i++)
for(int j=0; j<img.cols; j++)
std::cout << img.at<uchar>(i,j) << std::endl;
UPDATE: Using split to get the 3 channels
cv::Mat img = cv::imread("lenna.png");
std::vector<cv::Mat> three_channels = cv::split(img);
// Now I can access each channel separately
for(int i=0; i<img.rows; i++)
for(int j=0; j<img.cols; j++)
std::cout << three_channels[0].at<uchar>(i,j) << " " << three_channels[1].at<uchar>(i,j) << " " << three_channels[2].at<uchar>(i,j) << std::endl;
// Similarly for the other two channels
UPDATE: Thanks to etarion for spotting the error I introduced when copying and pasting from the cv::Vec3b example.
Since OpenCV 3.0, there is an official and fast way to run a function over all the pixels of a cv::Mat:
void cv::Mat::forEach (const Functor& operation)
If you use this function, the operation runs on multiple cores automatically.
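A minimal sketch of how it can be used (this assumes an 8-bit, 3-channel image; the second lambda parameter is the pixel position, which you can ignore):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("lenna.png"); // assumed to load as CV_8UC3

    // Invert every pixel; the lambda runs in parallel over the image.
    img.forEach<cv::Vec3b>([](cv::Vec3b &pixel, const int* /*position*/) {
        pixel[0] = 255 - pixel[0];
        pixel[1] = 255 - pixel[1];
        pixel[2] = 255 - pixel[2];
    });
    return 0;
}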
Disclosure: I'm a contributor to this feature.
If you use C++, use the C++ interface of OpenCV; then you can access the pixels efficiently as shown in http://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way or by using cv::Mat::at(), for example.
This is an old question, but it needs updating since OpenCV is actively developed. OpenCV has since introduced parallel_for_, which works with C++11 lambda functions. Here is an example:
parallel_for_(Range(0, img.rows * img.cols), [&](const Range& range){
    for (int r = range.start; r < range.end; r++)
    {
        int i = r / img.cols;
        int j = r % img.cols;
        img.ptr<uchar>(i)[j] = doSomethingWithPixel(img.at<uchar>(i,j));
    }
});
It is worth mentioning that this method makes use of the CPU cores of modern computer architectures.
Since OpenCV 3.3 (see changelog) it is also possible to use C++11 style for loops:
// Example 1
Mat_<Vec3b> img = imread("lena.jpg");
for( auto& pixel: img ) {
pixel[0] = gamma_lut[pixel[0]];
pixel[1] = gamma_lut[pixel[1]];
pixel[2] = gamma_lut[pixel[2]];
}
// Example 2
Mat_<float> img2 = imread("float_image.exr", cv::IMREAD_UNCHANGED);
for(auto& p : img2) p *= 2;
The docs include a well-written comparison of the different ways to iterate over a Mat image.
The fastest way is to use C-style pointers. Here is the code copied from the docs:
Mat& ScanImageAndReduceC(Mat& I, const uchar* const table)
{
    // accept only char type matrices
    CV_Assert(I.depth() == CV_8U);

    int channels = I.channels();
    int nRows = I.rows;
    int nCols = I.cols * channels;

    if (I.isContinuous())
    {
        nCols *= nRows;
        nRows = 1;
    }

    int i, j;
    uchar* p;
    for (i = 0; i < nRows; ++i)
    {
        p = I.ptr<uchar>(i);
        for (j = 0; j < nCols; ++j)
        {
            p[j] = table[p[j]];
        }
    }
    return I;
}
Accessing the elements with at<>() is quite slow.
Note that if your operation can be performed using a lookup table, the built-in function cv::LUT is by far the fastest (also described in the docs).
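For example, a small sketch of applying a 256-entry table to the same 8-bit matrix I in one call (the inversion table here is just an illustration):
// Build a 1x256 lookup table; this one simply inverts intensities.
cv::Mat lookUpTable(1, 256, CV_8U);
uchar* lut = lookUpTable.ptr();
for (int i = 0; i < 256; ++i)
    lut[i] = 255 - i;

// Apply the table to every element of I in a single call.
cv::Mat J;
cv::LUT(I, lookUpTable, J);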
If you want to modify RGB pixels one by one, the example below will help!
void LoopPixels(cv::Mat &img) {
    // Accept only char type matrices
    CV_Assert(img.depth() == CV_8U);

    // Get the channel count (3 = rgb, 4 = rgba, etc.)
    const int channels = img.channels();
    switch (channels) {
    case 1:
    {
        // Single colour
        cv::MatIterator_<uchar> it, end;
        for (it = img.begin<uchar>(), end = img.end<uchar>(); it != end; ++it)
            *it = 255;
        break;
    }
    case 3:
    {
        // RGB Color
        cv::MatIterator_<cv::Vec3b> it, end;
        for (it = img.begin<cv::Vec3b>(), end = img.end<cv::Vec3b>(); it != end; ++it) {
            uchar &r = (*it)[2];
            uchar &g = (*it)[1];
            uchar &b = (*it)[0];
            // Modify r, g, b values
            // E.g. r = 255; g = 0; b = 0;
        }
        break;
    }
    }
}