How should I name my RGB channels, using cv::Mat_ - c++

I want to access my matrix elements in the following manner:
frame[i][j].Red
, that is, the (i,j)-th pixel's red channel.
I have tried:
typedef struct{unsigned char Blue,Green,Red;}Pixel;
typedef cv::Mat_<Pixel> Image;
However, when trying imread(), imwrite() or anything else with the type defined this way, g++ greets me with:
OpenCV Error: Assertion failed (func != 0) in convertTo, file /home/users/mvitkov/projects/opencv-legacy/OpenCV-2.3.1/modules/core/src/convert.cpp, line 937
terminate called after throwing an instance of 'cv::Exception'
what(): /home/users/mvitkov/projects/opencv-legacy/OpenCV-2.3.1/modules/core/src/convert.cpp:937: error: (-215) func != 0 in function convertTo
Update: So no answer to my probably badly asked question. Too bad. The essence of the question is how to address the individual channels with sensible names (red, green, blue), and not the C-era array indexing notation [2]. Duh!

Here's how you access each channel:
blue = frame.at<cv::Vec3b>(i,j)[0];
green = frame.at<cv::Vec3b>(i,j)[1];
red = frame.at<cv::Vec3b>(i,j)[2];
The above code assumes that you have a 3-channel image where each value is an 8-bit unsigned char (CV_8UC3). This type is used in many common image formats. However, if you have a different type of 3-channel image, here's what you do:
If the image type is 3-channel float (CV_32FC3), then replace cv::Vec3b with cv::Vec3f
If the image type is 3-channel double (CV_64FC3), then replace cv::Vec3b with cv::Vec3d
If the image type is 3-channel int (CV_32SC3), then replace cv::Vec3b with cv::Vec3i
If the image type is 3-channel signed short (CV_16SC3), then replace cv::Vec3b with cv::Vec3s; if it is 3-channel 16-bit unsigned (CV_16UC3), use cv::Vec3w
Not sure what image format you're using? Try calling getImgType(frame) (see the code below).
string getImgType(cv::Mat frame)
{
    int imgTypeInt = frame.type();
    const int numImgTypes = 28; // 7 depths, with 4 channel options each (C1, ..., C4)
    int enum_ints[] = {CV_8UC1, CV_8UC2, CV_8UC3, CV_8UC4,  CV_8SC1, CV_8SC2, CV_8SC3, CV_8SC4,
                       CV_16UC1, CV_16UC2, CV_16UC3, CV_16UC4,  CV_16SC1, CV_16SC2, CV_16SC3, CV_16SC4,
                       CV_32SC1, CV_32SC2, CV_32SC3, CV_32SC4,  CV_32FC1, CV_32FC2, CV_32FC3, CV_32FC4,
                       CV_64FC1, CV_64FC2, CV_64FC3, CV_64FC4};
    // the string table must stay aligned one-to-one with enum_ints
    string enum_strings[] = {"CV_8UC1", "CV_8UC2", "CV_8UC3", "CV_8UC4",  "CV_8SC1", "CV_8SC2", "CV_8SC3", "CV_8SC4",
                             "CV_16UC1", "CV_16UC2", "CV_16UC3", "CV_16UC4",  "CV_16SC1", "CV_16SC2", "CV_16SC3", "CV_16SC4",
                             "CV_32SC1", "CV_32SC2", "CV_32SC3", "CV_32SC4",  "CV_32FC1", "CV_32FC2", "CV_32FC3", "CV_32FC4",
                             "CV_64FC1", "CV_64FC2", "CV_64FC3", "CV_64FC4"};
    for(int i = 0; i < numImgTypes; i++)
    {
        if(imgTypeInt == enum_ints[i]) return enum_strings[i];
    }
    return "unknown image type";
}
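As a side note on the original question (named access instead of the bare [2] index), one low-tech workaround keeps the image as an ordinary CV_8UC3 Mat, so imread() and imwrite() keep working, and simply names the channel indices. A minimal sketch; the enum, file name and coordinates are only for illustration:

#include <opencv2/opencv.hpp>
#include <cstdio>

enum Channel { Blue = 0, Green = 1, Red = 2 };   // BGR order, as stored by imread()

int main()
{
    cv::Mat frame = cv::imread("image.png");     // loads as CV_8UC3 by default
    if (frame.empty()) return 1;

    const cv::Vec3b& px = frame.at<cv::Vec3b>(10, 20);
    std::printf("R=%d G=%d B=%d\n", px[Red], px[Green], px[Blue]);
    return 0;
}

If you really want named members rather than named indices, cv::Point3_<uchar> has a DataType specialization, so frame.at<cv::Point3_<uchar> >(i, j) works on a CV_8UC3 Mat and gives .x, .y, .z members in BGR order.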

A concrete illustration of the CV_16SC3 / CV_16UC3 distinction: I once addressed a 3-channel 16-bit unsigned image with Vec3s and got a negative value back, G:29096 B:-21671 R:23413.
Addressing the same CV_16UC3 Mat with Vec3w instead gave G:29096 B:43865 R:23413.
The "S" in CV_16SC3 stands for signed, not short.
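For completeness, a sketch of the CV_16UC3 case just described (the file name and coordinates are placeholders):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    // CV_16UC3: each channel is 16-bit *unsigned*, so use cv::Vec3w (Vec<ushort, 3>);
    // cv::Vec3s is signed and turns values above 32767 into negatives.
    cv::Mat img16 = cv::imread("image16.png", -1);   // -1 = load unchanged, keeps the 16-bit depth
    if (img16.empty() || img16.type() != CV_16UC3) return 1;

    int i = 0, j = 0;
    unsigned short blue  = img16.at<cv::Vec3w>(i, j)[0];
    unsigned short green = img16.at<cv::Vec3w>(i, j)[1];
    unsigned short red   = img16.at<cv::Vec3w>(i, j)[2];
    std::printf("B=%d G=%d R=%d\n", blue, green, red);
    return 0;
}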

Related

Map BGR OpenCV Mat to Eigen Tensor

I'm trying to convert an OpenCV 3-channel Mat to a 3D Eigen Tensor.
So far, I can convert 1-channel grayscale Mat by:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_GRAYSCALE);
Eigen::MatrixXd myMatrix;
cv::cv2eigen(mat, myMatrix);
My attempt to convert a BGR mat to a Tensor has been:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR);
Eigen::MatrixXd temp;
cv::cv2eigen(mat, temp);
Eigen::Tensor<double, 3> myTensor = Eigen::TensorMap<Eigen::Tensor<double, 3>>(temp.data(), 3, mat.rows, mat.cols);
However, I'm getting the following error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.1.0) /tmp/opencv-20190505-12101-14vk1fh/opencv-4.1.0/modules/core/src/matrix_wrap.cpp:1195:
error: (-215:Assertion failed) !fixedType() || ((Mat*)obj)->type() == mtype in function 'create'
in the line: cv::cv2eigen(mat, temp);
Any help is appreciated!
The answer might be disappointing for you.
After going through 12 pages, my conclusion is that you have to split the RGB image into individual single-channel Mats and then convert each to an Eigen matrix, or create your own Eigen type and your own OpenCV conversion function.
In OpenCV it is tested like this, and it only allows a single-channel greyscale image:
https://github.com/daviddoria/Examples/blob/master/c%2B%2B/OpenCV/ConvertToEigen/ConvertToEigen.cxx
And in OpenCV it is implemented like this, which doesn't give you much room for a custom type, i.e. cv::Scalar to an Eigen std::vector:
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
And according to this post,
https://stackoverflow.com/questions/32277887/using-eigen-array-of-arrays-for-rgb-images
I think Eigen was not meant to be used in this way (with vectors as "scalar" types).
they also had difficulty dealing with RGB images in Eigen.
Take note that an OpenCV Scalar and an Eigen scalar have different meanings.
It is possible to do this if and only if you use your own data type, i.e. matrix.
So you can either store the 3-channel information in 3 Eigen matrices and use the default Eigen and OpenCV routines:
Mat src = imread("img.png", IMREAD_COLOR); //load image
Mat bgr[3]; //destination array
split(src,bgr);//split source
//Note: OpenCV uses BGR color order
imshow("blue.png",bgr[0]); //blue channel
imshow("green.png",bgr[1]); //green channel
imshow("red.png",bgr[2]); //red channel
Eigen::MatrixXd bm,gm,rm;
cv::cv2eigen(bgr[0], bm);
cv::cv2eigen(bgr[1], gm);
cv::cv2eigen(bgr[2], rm);
Or you can define your own type and write your own version of the OpenCV cv2eigen function.
For a custom Eigen type, follow these (it won't be pretty):
https://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
https://eigen.tuxfamily.org/dox/TopicNewExpressionType.html
Then rewrite your own cv2eigen_custom function similar to this:
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
So good luck.
Edit
Since you need a tensor, forget about the cv conversion function:
Mat image;
image = imread(argv[1], IMREAD_COLOR);
Tensor<float, 3> t_3d(image.rows, image.cols, 3);
// t_3d(i, j, k) where i is the row, j is the column and k is the channel
for (int i = 0; i < image.rows; i++)
    for (int j = 0; j < image.cols; j++)
    {
        t_3d(i, j, 0) = (float)image.at<cv::Vec3b>(i,j)[0];
        t_3d(i, j, 1) = (float)image.at<cv::Vec3b>(i,j)[1];
        t_3d(i, j, 2) = (float)image.at<cv::Vec3b>(i,j)[2];
        // cv ref: Mat.at<data_Type>(row_num, col_num)
    }
Watch out for i, j, as I'm not sure about the order; I only wrote this code from the reference and didn't compile it.
Also watch out for the image-type to tensor-type cast; sometimes you might not get what you wanted.
This code should in principle solve your problem.
Edit number 2
Following this example:
int storage[128]; // 2 x 4 x 2 x 8 = 128
TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);
applied to your case it becomes:
cv::Mat frame = imread("myimg.ppm");
TensorMap<Tensor<unsigned char, 3>> t_3d(frame.data, frame.rows, frame.cols, 3);
The problem is I'm not sure this will work. Even if it works, you still have to figure out how the underlying data is organized so that you can get the shape right. Good luck.
Updated answer - OpenCV now has conversion functions for Eigen::Tensor which will solve your problem. I needed this same functionality too so I made a contribution back to the project for everyone to use. See the documentation here:
https://docs.opencv.org/3.4/d0/daf/group__core__eigen.html
Note: if you want RGB order, you will still need to reorder the channels in OpenCV before converting to Eigen::Tensor
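A minimal sketch of that conversion, assuming an OpenCV build recent enough to ship the Eigen::Tensor overloads and Eigen >= 3.3 with the unsupported Tensor module available (the path is the one from the question):

#include <unsupported/Eigen/CXX11/Tensor>   // depending on the version this may need to come before core/eigen.hpp
#include <opencv2/core/eigen.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat bgr = cv::imread("/image/path.png", cv::IMREAD_COLOR);
    if (bgr.empty()) return 1;

    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);   // reorder the channels if RGB order is wanted
    rgb.convertTo(rgb, CV_32FC3);                // match the tensor's float element type

    Eigen::Tensor<float, 3, Eigen::RowMajor> tensor;   // ends up with shape rows x cols x channels
    cv::cv2eigen(rgb, tensor);
    return 0;
}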

OpenCV look-up table for CV_16UC1

I need to replace the values of a CV_8UC1 Mat [0,255] with values from a cv::Mat lookUpTable(1, 256, CV_16UC1). I checked in this OpenCV tutorial the explanation of which is the fastest method; however, when I check the values assigned to the LUT in each position, I am only saving 8 bits, so I am losing the other 8 bits. This is the source code:
unsigned short int zDTableHexa[256]={0};
.... get the values...
cv::Mat lookUpTable(1, 256, CV_16UC1);
uchar* p = lookUpTable.data;
for( int i = 0; i < 256; i++){
    p[i] = zDTableHexa[i];
    cout<<(int)p[i]<<":"<<zDTableHexa[i]<<sizeof(p[i])<<":"<<sizeof(zDTableHexa[i])<<endl;
}
The printing result are:
104:872
101:869
97:865
93:861
90:858
86:854
83:851
80:848
76:844
73:841
70:838
66:834
63:831
When I check in binary, it is only the first 8 bits.
I understand that the pointer is uchar (8 bits), but how can I assign the full value?
try
unsigned short* p = (unsigned short*) lookUpTable.data;
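Putting that together, a sketch of the corrected fill loop from the question; the only change is addressing the LUT row through a 16-bit pointer:

unsigned short int zDTableHexa[256] = {0};
// .... get the values ...
cv::Mat lookUpTable(1, 256, CV_16UC1);
unsigned short* p = lookUpTable.ptr<unsigned short>(0);  // 16-bit view of the row
for (int i = 0; i < 256; i++)
    p[i] = zDTableHexa[i];   // the full 16-bit value is stored, nothing is truncated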
OpenCV's LUT works only with CV_8U, so what you can do is split the number into 3 parts, that is 3x CV_8U, i.e. CV_8UC3. But you cannot have more than 256 elements in the LUT, and no values other than 0..255 as the index.
In other words, you can map uchars to uchars or uchars to floats: CV_8UC1 -> CV_8UC1 or CV_8UC1 -> CV_8UC3 (I did not try it with CV_8UC4; it could work).
To get at the elements of a CV_8UC3, check cv::Vec3b.
I found this, which might interest you.

OpenCV: Error copying one image to another

I am trying to copy one image to another pixel by pixel (I know there are sophisticated methods available; I am trying to solve another problem, and an answer to this will be useful).
This is my code:
int main()
{
    Mat Img;
    Img = imread("../../../stereo_images/left01.jpg");
    Mat copyImg = Mat::zeros(Img.size(), CV_8U);
    for(int i=0; i<Img.rows; i++){
        for(int j=0; j<Img.cols; j++){
            copyImg.at<uchar>(j,i) = Img.at<uchar>(j,i);
        }
    }
    namedWindow("Image", CV_WINDOW_AUTOSIZE );
    imshow("Image", Img);
    namedWindow("copyImage", CV_WINDOW_AUTOSIZE );
    imshow("copyImage", copyImg);
    waitKey(0);
    return 0;
}
When I run this code in Visual Studio I get the following error:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in cv::Mat::at, file c:\opencv\opencv-2.4.9\opencv\build\include\opencv2\core\mat.hpp, line 537
I know for a fact that Img's type is CV_8U. Why does this happen?
Thanks!
// will read in a 3-channel color image, no matter what the content is
Img = imread("../../../stereo_images/left01.jpg");
To make it read a grayscale image, use:
Img = imread("../../../stereo_images/left01.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Then you don't need to copy per pixel (and you should even avoid that); just use:
Mat im2 = Img.clone();
If you do per-pixel loops, be careful to get the indices right: it's a row-col world here, not x,y, so it should be:
copyImg.at<uchar>(i,j) = Img.at<uchar>(i,j);
in your case
I know for fact that Img's type is CV_8U.
But CV_8U is just the image depth (8-bit U-nsigned). The type also specifies the number of channels, which is usually three: one for blue, one for green and one for red, in this order, as the OpenCV default. The type would then be CV_8UC3 (C-hannels = 3). imread will convert even a black-and-white image to a 3-channel image by default. imread(filename, CV_LOAD_IMAGE_GRAYSCALE) will load a 1-channel image (CV_8UC1). But if you're not sure, the easiest solution is
Mat copyImg = Mat::zeros(Img.size(), Img.type());
To access the array elements you have to know their size. Using .at<uchar>() on a 3-channel image will only access the first channel, because each pixel holds 3*8 bits. So on a 3-channel image you have to use
copyImg.at<Vec3b>(i,j) = Img.at<Vec3b>(i,j);
where Vec3b is a cv::Vec<uchar, 3>. You should also note that the first argument of at<>() is the index along dim 0, which is the rows, and the second argument is the cols. In other words, in classic 2D x-y chart order you access a pixel with .at<>(y,x) == .at<>(Point(x,y)).
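A tiny illustration of that index order (sizes and coordinates are arbitrary):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img(480, 640, CV_8UC3, cv::Scalar::all(0));  // 480 rows (y), 640 cols (x)
    int x = 100, y = 200;
    // both accesses address the same pixel
    cv::Vec3b a = img.at<cv::Vec3b>(y, x);               // at<>(row, col)
    cv::Vec3b b = img.at<cv::Vec3b>(cv::Point(x, y));    // at<>(Point(x, y))
    CV_Assert(a == b);
    return 0;
}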
Your problem is with this line:
copyImg.at<uchar>(j,i) = Img.at<uchar>(j,i);
It should be:
copyImg.at<uchar>(i,j) = Img.at<uchar>(i,j);
Note that if you want to copy an image you can simply do this:
Mat copyImg = Img.clone();

Using Mat::at(i,j) in opencv for a 2-D Mat object

I am using Ubuntu 12.04 and OpenCV 2
I have written the following code :
IplImage* img = 0;
img = cvLoadImage("nature.jpg");
if(img != 0)
{
    Mat Img_mat(img);
    std::vector<Mat> RGB;
    split(Img_mat, RGB);
    int data = RGB[0].at<int>(i,j); /* where i, j are inside the bounds of the matrix size.. I have checked this */
}
The problem is I am getting negative values and very large values in the data variable. I think I have made some mistake somewhere; can you please point it out?
I have been reading the documentation (I have not finished it fully, it is quite large), but from what I have read, this should work. But it isn't. What is going wrong here?
Img_mat is a 3-channel image. Each channel consists of pixel values of uchar data type.
So with split(Img_mat, BGR) the Img_mat is split into 3 planes of blue, green and red, which are collectively stored in a vector BGR. So BGR[0] is the first (blue) plane with uchar-type pixels... hence it will be
int dataB = (int)BGR[0].at<uchar>(i,j);
int dataG = (int)BGR[1].at<uchar>(i,j);
and so on...
You have to specify the correct type for cv::Mat::at(i,j). You are accessing the pixel as int, while it should be a vector of uchar. Your code should look something like this:
IplImage* img = 0;
img = cvLoadImage("nature.jpg");
if(img != 0)
{
    Mat Img_mat(img);
    // access the 3-channel image directly as a vector of uchar
    Vec3b data = Img_mat.at<Vec3b>(i,j);
    // data[0] -> blue
    // data[1] -> green
    // data[2] -> red
}
Why are you loading an IplImage first? You are mixing the C and C++ interfaces.
Loading a cv::Mat with imread directly would be more straightforward.
This way you can also specify the type and use the corresponding type in your at call.
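A sketch of the pure C++-interface version (coordinates are example values; the at type matches the 3-channel data):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat Img_mat = cv::imread("nature.jpg");   // color (CV_8UC3) by default
    if (Img_mat.empty()) return 1;

    int i = 10, j = 20;                           // row, column within bounds
    cv::Vec3b data = Img_mat.at<cv::Vec3b>(i, j);
    // data[0] -> blue, data[1] -> green, data[2] -> red
    return 0;
}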

OpenCV cv::Mat 'ones' for multi-channel matrix?

When working with 1-channel (e.g. CV_8UC1) Mat objects in OpenCV, this creates a Mat of all ones: cv::Mat img = cv::Mat::ones(x,y,CV_8UC1).
However, when I use 3-channel images (e.g. CV_8UC3), things get a little more complicated. Doing cv::Mat img = cv::Mat::ones(x,y,CV_8UC3) puts ones into channel 0, but channels 1 and 2 contain zeros. So, how do I use cv::Mat::ones() for multi-channel images?
Here's some code that might help you to see what I mean:
void testOnes() {
    int x=2; int y=2; //arbitrary
    // 1 channel
    cv::Mat img_C1 = cv::Mat::ones(x,y,CV_8UC1);
    uchar px1 = img_C1.at<uchar>(0,0); //not sure of correct data type for px in 1-channel img
    printf("px of 1-channel img: %d \n", (int)px1); //prints 1
    // 3 channels
    cv::Mat img_C3 = cv::Mat::ones(x,y,CV_8UC3); //note 8UC3 instead of 8UC1
    cv::Vec3b px3 = img_C3.at<cv::Vec3b>(0,0);
    printf("px of 3-channel img: %d %d %d \n", (int)px3[0], (int)px3[1], (int)px3[2]); //prints 1 0 0
}
So, I would have expected to see this printout: px of 3-channel img: 1 1 1, but instead I see this: px of 3-channel img: 1 0 0.
P.S. I did a lot of searching before posting this. I wasn't able to resolve this by searching SO for "[opencv] Mat::ones" or "[opencv] +mat +ones".
I don't use OpenCV much, but I believe I know what's going on here. You define a multi-channel data type, but you are requesting the value '1' for it. The Mat class appears not to pay attention to the fact that you have a multi-channel data type, so it simply fills each element with the scalar 1, which lands in the first channel only.
So instead of using the ones function, just use the scalar constructor:
cv::Mat img_C3( x, y, CV_8UC3, CV_RGB(1,1,1) );
You can also initialize like this:
Mat img;
/// Lots of stuff here ...
// Need to initialize again for some reason:
img = Mat(Size(width, height), CV_8UC3, CV_RGB(255,255,255));
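For reference, a variant that passes a cv::Scalar directly instead of the CV_RGB macro (a minimal sketch with arbitrary sizes):

#include <opencv2/opencv.hpp>

int main()
{
    int x = 2, y = 2;
    // every element gets (1,1,1), i.e. all three channels set to 1
    cv::Mat img_C3(x, y, CV_8UC3, cv::Scalar::all(1));

    // equivalently, after construction:
    img_C3.setTo(cv::Scalar(1, 1, 1));
    return 0;
}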