I have a question about this piece of code.
...............
cv::Mat image;
image = cv::imread(filename.c_str(), CV_LOAD_IMAGE_COLOR);
if (image.empty()) {
    std::cerr << "Couldn't open file: " << filename << std::endl;
    exit(1);
}
cv::cvtColor(image, imageRGBA, CV_BGR2RGBA);
imageGrey.create(image.rows, image.cols, CV_8UC1);
*inputImage = (uchar4 *)imageRGBA.ptr<unsigned char>(0);
*greyImage = imageGrey.ptr<unsigned char>(0);
As I understand it, we create an OpenCV Mat object and read the image into it. But why do we use filename.c_str() instead of just filename? And why do we convert from BGR to RGBA?
cv::cvtColor(image, imageRGBA, CV_BGR2RGBA); I read in the documentation that imread reads the image as RGB, not BGR.
The most confusing part for me is this:
*inputImage = (uchar4 *)imageRGBA.ptr<unsigned char>(0);
*greyImage = imageGrey.ptr<unsigned char>(0);
What's happening here? Why do we need all these casts?
I know this is a lot of questions, but I really want to understand what's happening here.
The c_str() is a leftover from the old C API: cvLoadImage takes a const char*, so a std::string cannot be passed to it directly. The C++ cv::imread used here does accept a std::string, so the c_str() call is harmless but not strictly necessary.
OpenCV stores matrices as BGR, so imread also adheres to this channel order (the documentation might be misleading; don't confuse the image format being read (RGB) with the internal representation (BGR)). Based on your cuda tag, I guess somebody wants to pass the image data to the GPU. GPUs typically work with the RGBA format, so it is not only about BGR<->RGB but also about having four channels in interleaved format.
Mat::ptr() is templated (the template argument is not a cast!) because Mat hides the data type from you. The code is risky, as it simply assumes that imread created a Mat of uchar data and that this is therefore the right type to access. It would be better to start with a cv::Mat_<uchar> in the first place and then use Mat_<T>::operator[] to get a pointer to the first row, etc.
I don't know what comes next in your code but there might be a bug if the stride (step) is not considered.
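A minimal sketch of that safer approach (my own illustration, not the original code; filename is the std::string from the question, and uchar4 comes from CUDA's vector_types.h):
// Read the image and verify the type we are about to assume.
cv::Mat bgr = cv::imread(filename, cv::IMREAD_COLOR);
CV_Assert(!bgr.empty() && bgr.type() == CV_8UC3);
// Convert to four interleaved channels for the GPU.
cv::Mat rgba;
cv::cvtColor(bgr, rgba, cv::COLOR_BGR2RGBA);
// Reinterpret the buffer as uchar4 only if the rows carry no padding.
CV_Assert(rgba.isContinuous());
uchar4 *inputImage = reinterpret_cast<uchar4 *>(rgba.ptr<unsigned char>(0));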
Related
I am trying to create a program which imports an RGB image and converts it to grayscale. I would like the output image to consist of 3 channels. To achieve that I use cv::cvtColor function with dstCn parameter set to 3:
cv::Mat mat = cv::imread("lena.bmp");
std::cout << CV_MAT_CN(mat.type()) << "\n"; // prints "3", OK
cv::cvtColor(mat, mat, cv::COLOR_BGR2GRAY, 3);
std::cout << CV_MAT_CN(mat.type()) << "\n"; // prints "1" regardless of dstCn
but it looks like dstCn isn't taken into account, and the output array has only 1 channel.
The OpenCV documentation says:
dstCn - number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code.
It's a very basic case and I am aware there are plenty of workarounds, but I would like to know whether this is a bug or a misunderstanding on my part.
The answer can be found in the OpenCV source code. Let's have a look at the cvtColor function in imgproc/src/color.cpp. There is a very long switch-case, so I only post here the most interesting part:
void cvtColor( InputArray _src, OutputArray _dst, int code, int dcn )
{
    ...
    switch( code )
    {
    ...
    case COLOR_BGR2GRAY: case COLOR_BGRA2GRAY:
    case COLOR_RGB2GRAY: case COLOR_RGBA2GRAY:
        cvtColorBGR2Gray(_src, _dst, swapBlue(code));
        break;
    }
}
The code from my question uses COLOR_BGR2GRAY. Nothing special is done before the switch statement, and invoking swapBlue does not do anything interesting either. We can see that this case completely ignores dcn (aka dstCn). So it seems to be fully intentional, and my idea was wrong from the start.
I have also found a similar post on the OpenCV forum where Doomb0t pointed out that:
the concept of greyscale is that you have one channel describing the intensity on a gradual scale between black and white. So, it is not clear why would you need a 3 channels greyscale image (...)
Yes, grayscale is one channel, and what you ask doesn't make sense at first sight. However, there could be a legitimate reason: you might want the grayscale data copied into three channels in one operation and then manipulate each of the copies while they are kept in the same container.
swapBlue is there because the default format is BGR.
BTW, you can also read the image directly as grayscale and merge it into a new 3-channel image:
cv::Mat bw = cv::imread("lena.bmp", 0); // 0 = load as single-channel grayscale
std::vector<cv::Mat> ch; // the three planes to merge
cv::Mat bw3 = cv::Mat::zeros(cv::Size(bw.cols, bw.rows), CV_8UC3); // 3 channels, 8-bit unsigned
for (int i = 0; i < 3; i++) ch.push_back(bw); // three copies of the grayscale plane
cv::merge(ch, bw3); // interleave them into the 3-channel image
(Maybe there's a shorter way, I don't know.)
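One shorter route, assuming the goal is simply three identical channels, is to let cvtColor do the replication:
cv::Mat bw3;
cv::cvtColor(bw, bw3, cv::COLOR_GRAY2BGR); // copies the single channel into B, G and R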
More examples with merge
I have an algorithm that does some stuff. Among other things, there is a conversion that works fine if I'm working on a CV_8UC3 image but goes wrong if the type is CV_16UC3.
This is some code:
//new image is created
Mat3w img(100,100,Vec3w(1000,0,0));
//Image Conversion - ERROR!
cv::Mat inputSource;
//saving the image here will work
img.convertTo(inputSource, CV_64FC3);
//saving the image here will not work -> black image
The result of processing the CV_16UC3 image is an image of the right dimensions but fully black.
The problem is in the conversion: saving the image right before it gives a legitimate one, while saving it right after gives an almost completely white one.
EDIT:
I made some changes: cut off some useless code and added the inputSource declaration.
Now, while trying things out, I arrived at the conclusion that either I haven't understood the CV types, or something strange is happening.
I always thought that the number in the type indicated the number of bits per channel. So, in my head, CV_16UC3 is a 3-channel image with 16 bits per channel. That idea is strengthened by the fact that the image I save during tests (before the img.convertTo) actually has the matching number of bits per channel. The strange thing is that the saved inputSource (type CV_64FC3) is an 8 bpc image.
What am I missing?
You are confused about the way imwrite and imread work in OpenCV. From the OpenCV documentation:
imwrite
The function imwrite saves the image to the specified file. The image format is chosen based on the filename extension (see imread() for the list of extensions). Only 8-bit (or 16-bit unsigned (CV_16U) in case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’ channel order) images can be saved using this function. If the format, depth or channel order is different, use Mat::convertTo() , and cvtColor() to convert it before saving. Or, use the universal FileStorage I/O functions to save the image to XML or YAML format.
imread
The function imread loads an image from the specified file and returns it. Possible flags are:
IMREAD_UNCHANGED : If set, return the loaded image as is (with alpha channel, otherwise it gets cropped).
IMREAD_GRAYSCALE : If set, always convert image to the single channel grayscale image.
IMREAD_COLOR : If set, always convert image to the 3 channel BGR color image.
IMREAD_ANYDEPTH : If set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit.
IMREAD_ANYCOLOR : If set, the image is read in any possible color format.
So in your case, CV_16U images are saved without conversion, while CV_64F images are converted and saved as CV_8U. If you want to store double data, you should use FileStorage.
You should also take care to read the image back with the appropriate imread flag.
This example should clarify:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    // Create a 16-bit 3 channel image
    Mat3w img16UC3(100, 200, Vec3w(1000, 0, 0));
    img16UC3(Rect(0, 0, 20, 50)) = Vec3w(0, 2000, 0);
    // Convert to 64-bit (double) 3 channel image
    Mat3d img64FC3;
    img16UC3.convertTo(img64FC3, CV_64FC3);
    // Save to disk
    imwrite("16UC3.png", img16UC3); // No conversion
    imwrite("64FC3.png", img64FC3); // Converted to CV_8UC3
    FileStorage fout("64FC3.yml", FileStorage::WRITE);
    fout << "img" << img64FC3; // No conversion
    fout.release();
    Mat img_maybe16UC3_a = imread("16UC3.png" /*, IMREAD_COLOR*/); // Will be CV_8UC3
    Mat img_maybe16UC3_b = imread("16UC3.png", IMREAD_ANYDEPTH); // Will be CV_16UC1
    Mat img_maybe16UC3_c = imread("16UC3.png", IMREAD_UNCHANGED); // Will be CV_16UC3
    Mat img_maybe64FC3_a = imread("64FC3.png" /*, IMREAD_COLOR*/); // Will be CV_8UC3
    Mat img_maybe64FC3_b = imread("64FC3.png", IMREAD_ANYDEPTH); // Will be CV_8UC1
    Mat img_maybe64FC3_c = imread("64FC3.png", IMREAD_UNCHANGED); // Will be CV_8UC3
    Mat img_mustbe64FC3;
    FileStorage fin("64FC3.yml", FileStorage::READ);
    fin["img"] >> img_mustbe64FC3; // Will be CV_64FC3
    fin.release();
    return 0;
}
I know this question has been asked and answered by others, but I still can't solve my problem. I read a frame from a video, which has the format unsigned char (CV_8U). I hope to convert it to double precision (CV_64F). I do the following:
VideoCapture capture(fileName);
Mat image;
capture >> image;
cvtColor(image, image, CV_BGR2GRAY);
image.convertTo(image, CV_32FC1, 1.0/255);
cout << typeid(image.data[0]).name() << endl;
But the result shows the image is still unsigned char. What's wrong with my code? Thanks.
This is not the right way to test for type conversion.
OpenCV's data member in cv::Mat is always of type uchar*. It is basically a pointer to memory, but that doesn't mean the underlying data is uchar.
To get the type of the image data, use the type() function. Here is an example that tests whether the type was successfully converted to float (it will be):
cv::DataType<float>::type == image.type();
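For instance, a minimal check along those lines (assuming the conversion code from the question has already run):
std::cout << (cv::DataType<float>::type == image.type()) << std::endl; // prints 1
CV_Assert(image.type() == CV_32FC1); // 32-bit float, single channel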
I want to load an image in C++ OpenCV that comes from a PostgreSQL database.
The image, with a jpg extension, is stored as binary data (bytea type) in the database, which I can access thanks to libpqxx.
The problem is that I do not know how to convert the data into a cv::Mat instance. With a regular image I could use imread('myImage.jpg', ...), but in this case I cannot even load the database image into the data attribute of Mat, because it is JPEG and not BMP.
Any idea? Is there some OpenCV method that could understand the binary data directly and convert it to the appropriate structure? The imdecode() function seems to be used for bitmap data.
edit: Berak, using a vector, the imdecode function returns a null matrix, which is what happens "If the buffer is too short or contains invalid data, the empty matrix/image is returned." Here is the code:
pqxx::result r=bdd::requete("SELECT image FROM lrad.img WHERE id=3",1);//returns the bytea image in r[0]["image"]
const char* buffer=r[0]["image"].c_str();
vector<uchar>::size_type size = strlen((const char*)buffer);
vector<uchar> jpgbytes(buffer, buffer+size);
Mat img = imdecode(jpgbytes, CV_LOAD_IMAGE_COLOR);
//jpgbytes.size()=1416562 img.size()=[0 x 0]
What am I missing?
Still, use imdecode. It can handle png, jpg, bmp, ppm, webp, jp2, and exr, but not gif.
vector<uchar> jpgbytes; // from your db
Mat img = imdecode(jpgbytes, IMREAD_COLOR);
(You should do the same for bmp or any other supported format; don't mess with Mat's raw data pointers!)
OK, I have worked out the process to convert bytea data to a cv::Mat; here is the code.
inline int dec(uchar x){ // convert one hex character to its numeric value
    if (x>='0'&&x<='9') return (x-'0');
    else if (x>='a'&&x<='f') return (x-'a'+10);
    else if (x>='A'&&x<='F') return (x-'A'+10);
    return 0;
}
cv::Mat bytea2Mat(const pqxx::result::field& f){
    const char* buffer=f.c_str(); // hex-escaped bytea, e.g. "\x41204230"
    vector<uchar>::size_type size = strlen(buffer); // length of the hex string
    vector<uchar> jpgbytes(size/2-1); // two hex characters per byte, minus the "\x" prefix
    for (size_t i=0; i!=size/2-1; i++) {
        jpgbytes[i]=(dec(buffer[2*(i+1)])<<4)+dec(buffer[2*(i+1)+1]);
    }
    cout << size/2 << ";" << jpgbytes.size() << endl;
    return imdecode(jpgbytes, CV_LOAD_IMAGE_COLOR);
}
The bytea output is encoded as a char* looking like "\x41204230", which is the hex form of the original input "A B0". (The \x prefix may not be present, depending on the data input.)
To get the original data you have to compute each byte from a pair of hex characters ('4','1' -> 0x41 = 65). The vector is half the size of the char*.
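A hypothetical usage, mirroring the query from the edit above:
pqxx::result r = bdd::requete("SELECT image FROM lrad.img WHERE id=3", 1);
cv::Mat img = bytea2Mat(r[0]["image"]);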
I have a raw file which contains a 5-byte header: the number of rows and the number of columns are stored in the first two bytes each, and the 5th byte contains the number of bits per pixel, which is 8 in all cases. The image data follows after that.
Since I am new to OpenCV, I want to ask how to view this RAW image file as a greyscale image using C++.
I know how to read binary data in C++ and have stored the image as a 2-D unsigned char array (since each pixel is 8 bits).
Can anyone please tell me how to view this data as an image using OpenCV?
I am using the code below, but I am getting a completely weird image:
void openRaw() {
    cv::Mat img(numRows, numCols, CV_8U, &(image[0][0]));
    //img.t();
    cv::imshow("img", img);
    cv::waitKey();
}
Any help will be greatly appreciated.
Thanks,
Rohit
You have to convert it to an IplImage.
If you want to see it as a pure grey-scale image, it's actually rather easy.
Example code I use in one application:
CvSize mSize;
mSize.height = 960;
mSize.width = 1280;
IplImage* image1 = cvCreateImage(mSize, 8, 1);
memcpy( image1->imageData, rawDataPointer, sizeOfImage);
cvNamedWindow( "corners1", 1 );
cvShowImage( "corners1", image1 );
At that point you have a valid IplImage, which you can then display (the last two lines of the code above do that).
If the image is Bayer-tiled, you will have to convert it to RGB.
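For example, a sketch of that conversion on the same buffer (cv::COLOR_BayerBG2BGR is only an assumption here; pick the constant matching your sensor's pattern):
cv::Mat bayer(960, 1280, CV_8UC1, rawDataPointer); // wrap the raw sensor buffer
cv::Mat bgr;
cv::cvtColor(bayer, bgr, cv::COLOR_BayerBG2BGR);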
C++ notation:
cv::Mat img(rows, cols, CV_8U, ptrToDat);
cv::imshow("img", img);
cv::waitKey();
*This assumes the data is stored row-wise; if it is stored column-wise, use instead:
cv::Mat img(cols, rows, CV_8U, ptrToDat);
img = img.t();
cv::imshow("img", img);
cv::waitKey();
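Putting the pieces together, here is a minimal self-contained sketch (my assumptions: the file is called "image.raw", rows and cols are 16-bit little-endian values, the 5th byte is the bits per pixel, and the pixel data is row-major):
#include <opencv2/opencv.hpp>
#include <cstdint>
#include <fstream>
#include <vector>

int main()
{
    std::ifstream in("image.raw", std::ios::binary); // hypothetical file name
    uint16_t rows = 0, cols = 0;
    uint8_t bpp = 0;
    in.read(reinterpret_cast<char*>(&rows), 2); // assumed little-endian header
    in.read(reinterpret_cast<char*>(&cols), 2);
    in.read(reinterpret_cast<char*>(&bpp), 1); // expected to be 8

    std::vector<uint8_t> data(static_cast<size_t>(rows) * cols);
    in.read(reinterpret_cast<char*>(data.data()), data.size());

    // Wrap the buffer without copying, then clone so the Mat owns its pixels.
    cv::Mat img = cv::Mat(rows, cols, CV_8U, data.data()).clone();
    cv::imshow("img", img);
    cv::waitKey();
    return 0;
}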