OpenCV: binary JPG image data to cv::Mat - C++

I want to load an image in C++ OpenCV that comes from a PostgreSQL database.
The image, a JPG, is stored as binary data (bytea type) in the database, which I can access thanks to libpqxx.
The problem is that I do not know how to convert the data into a cv::Mat instance. With a regular image I could use imread('myImage.jpg', ...), but in this case I cannot even load the database image into the data attribute of Mat, because it is JPEG and not BMP.
Any ideas? Is there some OpenCV method that could understand the binary data directly and convert it to the appropriate structure? The imdecode() function seems to be meant for bitmap data.
Edit: Berak, using a vector, the imdecode function returns a null matrix, which is what happens "if the buffer is too short or contains invalid data, the empty matrix/image is returned." Here is the code:
pqxx::result r = bdd::requete("SELECT image FROM lrad.img WHERE id=3", 1); // returns the bytea image in r[0]["image"]
const char* buffer = r[0]["image"].c_str();
vector<uchar>::size_type size = strlen(buffer);
vector<uchar> jpgbytes(buffer, buffer + size);
Mat img = imdecode(jpgbytes, CV_LOAD_IMAGE_COLOR);
// jpgbytes.size() = 1416562, img.size() = [0 x 0]
What am I missing?

Still, use imdecode. It can handle PNG, JPG, BMP, PPM, WebP, JP2, and EXR, but not GIF.
vector<uchar> jpgbytes; // from your db
Mat img = imdecode(jpgbytes, CV_LOAD_IMAGE_COLOR);
(You should do the same for BMP or any other supported format; don't mess with Mat's raw data pointers!)

OK, I have the process to convert bytea data to a cv::Mat; here is the code.
inline int dec(uchar x) { // convert a hex digit character to its integer value
    if (x >= '0' && x <= '9') return (x - '0');
    else if (x >= 'a' && x <= 'f') return (x - 'a' + 10);
    else if (x >= 'A' && x <= 'F') return (x - 'A' + 10);
    return 0;
}

cv::Mat bytea2Mat(const pqxx::result::field& f) {
    const char* buffer = f.c_str();
    vector<uchar>::size_type size = strlen(buffer);
    vector<uchar> jpgbytes(size / 2 - 1); // skip the leading "\x"; two hex chars per byte
    for (size_t i = 0; i != size / 2 - 1; i++) {
        jpgbytes[i] = (dec(buffer[2 * (i + 1)]) << 4) + dec(buffer[2 * (i + 1) + 1]);
    }
    cout << size / 2 << ";" << jpgbytes.size() << endl;
    return imdecode(jpgbytes, CV_LOAD_IMAGE_COLOR);
}
The bytea output is encoded as a char* of hex digits looking like "\x41204230", for an original input of "A B0". (The leading \x may not be present, depending on the data input.)
To get the original data you have to reconstruct each byte from its two hex characters ('4','1' = 0x41 = 65). The vector is half the size of the char*.
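For completeness, a minimal usage sketch, assuming the bytea2Mat() above and reusing the query and the bdd::requete() helper from the question:
pqxx::result r = bdd::requete("SELECT image FROM lrad.img WHERE id=3", 1);
cv::Mat img = bytea2Mat(r[0]["image"]); // decode the hex-encoded bytea into a BGR image
if (!img.empty())
    cv::imshow("img", img);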

Related

How to convert an OpenCV Mat to JPEG char data

Recently, I have been having trouble converting a Mat frame captured from my webcam by OpenCV into a normal JPEG unsigned char array. I've tried one or two ways found on Google, but the result does not seem to be a correct JPEG uchar array. Here is a piece of my code:
VideoCapture cap(0);
if (!cap.isOpened())
    return -1;

Mat frame;
cap >> frame;
if (frame.empty())
    return -1;

int size = frame.total() * frame.elemSize();
unsigned char* buffer = new unsigned char[size];
memcpy(buffer, frame.data, size * sizeof(unsigned char));
Then I used fwrite to write that buffer into a file.jpg (it looks silly, but it does work if the buffer is correct), but the file cannot be opened or recognized as a JPEG image.
Can anyone help me figure this out?
Check out the OpenCV function imencode(). It will fill a buffer with data encoded as the correct image type (based on the file type argument) so that it can be written to a file and other programs will know what to do with it.
The problem with your current approach is that you are attempting to write raw image data as a JPEG, but JPEG is a compressed data format, so programs won't know what to do with the data you've written. It would be the equivalent of taking a binary file and just saving it as a JPEG: the file won't have the right headers to be decoded as an image, and the data otherwise likely won't match up with the JPEG format anyway.
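A minimal sketch of that approach, reusing the frame from the question (the quality setting is optional and just an example value):
std::vector<uchar> jpgBuffer;
std::vector<int> params = { CV_IMWRITE_JPEG_QUALITY, 90 }; // optional quality setting
if (cv::imencode(".jpg", frame, jpgBuffer, params)) // encode frame to an in-memory JPEG buffer
{
    FILE* f = fopen("file.jpg", "wb");
    fwrite(jpgBuffer.data(), 1, jpgBuffer.size(), f); // write the encoded bytes, not the raw pixels
    fclose(f);
}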

OpenCV save CV_32FC1 images

A program I am using is reading some bitmaps, and expects 32FC1 images.
I am trying to create these images
cv::Mat M1(255, 255, CV_32FC1, cv::Scalar(0,0,0));
cv::imwrite( "my_bitmap.bmp", M1 );
but when I check the depth, it is always CV_8U.
How can I create the files so that they will contain the correct info?
Update: It makes no difference if I use a different file extension - e.g. tif or png
I am reading it - using code that is already implemented - with cvLoadImage.
I am trying to CREATE the files that the existing code - that checks for the image type - can use.
I cannot convert files in the existing code. The existing code does not try to read random image type and convert it to desired type, but checks that the files are of the type it needs.
I found out - thank you for the answers - that cv::imwrite only writes integer type images.
Is there another way, either using OpenCV or something else, to write the images so that I end up with CV_32F type?
Update again:
The code to read the image, if reading into a cv::Mat:
cv::Mat x = cv::imread(x_files, CV_LOAD_IMAGE_ANYDEPTH|CV_LOAD_IMAGE_ANYCOLOR);
The existing code:
IplImage *I = cvLoadImage(x_files.c_str(), CV_LOAD_IMAGE_ANYDEPTH|CV_LOAD_IMAGE_ANYCOLOR);
The cv::imwrite() .bmp encoder assumes 8-bit channels.
If you only need to write .bmp files with OpenCV, you can convert your 32FC1 image to 8UC4, then use cv::imwrite() to write it, and you will get a 32-bits-per-pixel .bmp file.
I am guessing that your program that reads the file will interpret the 32-bit pixels as 32FC1.
The .bmp format doesn't have an explicit channel structure, just a number of bits per pixel. Therefore you should be able to write 32-bit pixels as 4 channels of 8 bits in OpenCV and read them as single-channel 32-bit pixels in another program; if you do this, you need to be aware of endianness assumptions made by the reader. Something like the following should work:
cv::Mat m1(rows, cols, CV_32FC1);
... // fill m1
cv::Mat m2(rows, cols, CV_8UC4, m1.data); // provide different view of m1 data
// depending on endianess of reader, you may need to swap byte order of m2 pixels
cv::imwrite("my_bitmap.bmp", m2);
You will not be able to read the files you created back properly in OpenCV, because the .bmp decoder in OpenCV assumes the file is 1 or 3 channels of 8-bit data (i.e. it can't read 32-bit pixels).
EDIT
Probably a much better option would be to use the OpenEXR format, for which OpenCV has a codec. I assume you just need to save your files with a .exr extension.
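A quick sketch of that idea, assuming your OpenCV build includes the OpenEXR codec:
cv::Mat m1(255, 255, CV_32FC1, cv::Scalar(0));
cv::imwrite("my_bitmap.exr", m1); // the .exr extension selects the OpenEXR codec and keeps CV_32F data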
Your problem is that bitmaps store data internally as integers, not floats. If your problem is rounding error when saving, you will need to either use a different file format or scale your data up before saving and back down after loading. If you just want to convert the matrix you get after reading the file to float, you can use cv::Mat::convertTo, as sketched below.
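A minimal sketch of that conversion, reusing the imread call from the question (the scale factor is an optional assumption that maps 0..255 to 0..1):
cv::Mat x = cv::imread(x_files, CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);
cv::Mat xFloat;
x.convertTo(xFloat, CV_32F, 1.0 / 255.0); // depth becomes 32F; the channel count is preserved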
I was struggling with the same problem. In the end I decided it would just be easier to write custom functions that can write and load an arbitrary cv::Mat.
#include <fstream>
#include <opencv2/opencv.hpp>

using namespace std;

bool writeRawImage(const cv::Mat& image, const std::string& filename)
{
    ofstream file;
    file.open(filename, ios::out | ios::binary);
    if (!file.is_open())
        return false;

    // Header: rows, cols, depth, type, channels, then the data size in bytes.
    file.write(reinterpret_cast<const char*>(&image.rows), sizeof(int));
    file.write(reinterpret_cast<const char*>(&image.cols), sizeof(int));
    const int depth = image.depth();
    const int type = image.type();
    const int channels = image.channels();
    file.write(reinterpret_cast<const char*>(&depth), sizeof(depth));
    file.write(reinterpret_cast<const char*>(&type), sizeof(type));
    file.write(reinterpret_cast<const char*>(&channels), sizeof(channels));
    int sizeInBytes = image.step[0] * image.rows;
    file.write(reinterpret_cast<const char*>(&sizeInBytes), sizeof(int));

    // Payload: the raw pixel data.
    file.write(reinterpret_cast<const char*>(image.data), sizeInBytes);
    file.close();
    return true;
}

bool readRawImage(cv::Mat& image, const std::string& filename)
{
    int rows, cols, data, depth, type, channels;
    ifstream file(filename, ios::in | ios::binary);
    if (!file.is_open())
        return false;
    try {
        file.read(reinterpret_cast<char*>(&rows), sizeof(rows));
        file.read(reinterpret_cast<char*>(&cols), sizeof(cols));
        file.read(reinterpret_cast<char*>(&depth), sizeof(depth));
        file.read(reinterpret_cast<char*>(&type), sizeof(type));
        file.read(reinterpret_cast<char*>(&channels), sizeof(channels));
        file.read(reinterpret_cast<char*>(&data), sizeof(data));
        image = cv::Mat(rows, cols, type);
        file.read(reinterpret_cast<char*>(image.data), data);
    } catch (...) {
        file.close();
        return false;
    }
    file.close();
    return true;
}
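A quick round-trip usage sketch of these functions (the file name here is just for illustration):
cv::Mat m(255, 255, CV_32FC1, cv::Scalar(0));
writeRawImage(m, "my_float_image.raw");
cv::Mat loaded;
if (readRawImage(loaded, "my_float_image.raw"))
    std::cout << (loaded.type() == CV_32FC1) << std::endl; // prints 1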

Understanding an OpenCV code snippet

I have a question about this piece of code.
...............
cv::Mat image;
image = cv::imread(filename.c_str(), CV_LOAD_IMAGE_COLOR);
if (image.empty()) {
    std::cerr << "Couldn't open file: " << filename << std::endl;
    exit(1);
}
cv::cvtColor(image, imageRGBA, CV_BGR2RGBA);
imageGrey.create(image.rows, image.cols, CV_8UC1);
*inputImage = (uchar4 *)imageRGBA.ptr<unsigned char>(0);
*greyImage = imageGrey.ptr<unsigned char>(0);
As I understand it, we create an OpenCV Mat object and read the image into it. But why do we use filename.c_str() instead of just filename? And why do we convert from BGR to RGBA with cv::cvtColor(image, imageRGBA, CV_BGR2RGBA)? I read in the documentation that imread reads the image as RGB, not BGR.
The most confusing part for me is this:
*inputImage = (uchar4 *)imageRGBA.ptr<unsigned char>(0);
*greyImage = imageGrey.ptr<unsigned char>(0);
What's happening here? Why do we need all these casts?
I know this is a lot of questions, but I really want to know what's happening here.
imread takes a const char* as its first argument, and you cannot pass a std::string directly to it.
OpenCV stores color matrices as BGR, so imread also adheres to this channel order (the documentation might be misleading; don't confuse the image format being read (RGB) with the internal representation (BGR)). Based on your cuda tag, I guess somebody wants to pass the image data to the GPU. GPUs typically work with the RGBA format; it is not only about BGR vs. RGB but also about having four channels in interleaved format.
Mat::ptr() is templated (it is not a cast!) because Mat hides the data type from you. The code is risky, as it just assumes imread created a Mat_<uchar> and that this is the right type to access. It would be better to start with a cv::Mat_<uchar> in the first place, then use Mat_<T>::operator[] to get a pointer to the first row, etc.
I don't know what comes next in your code, but there might be a bug if the stride (step) is not considered; a defensive sketch follows.
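A minimal defensive sketch along these lines, reusing imageRGBA, imageGrey, and the uchar4 CUDA vector type from the question:
if (!imageRGBA.isContinuous())
    imageRGBA = imageRGBA.clone(); // clone() yields a packed buffer with no row padding
*inputImage = reinterpret_cast<uchar4*>(imageRGBA.ptr<unsigned char>(0));
*greyImage = imageGrey.ptr<unsigned char>(0);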

Viewing an 8-bit RAW image file in OpenCV

I have a raw file with a 5-byte header: the number of rows and the number of columns in two bytes each, and a 5th byte containing the number of bits per pixel, which is 8 in all cases. The image data follows after that.
Since I am new to OpenCV, I want to ask how to view this RAW image file as a greyscale image using C++.
I know how to read binary data in C++ and have stored the image as a 2-D unsigned char array (since each pixel is 8 bits).
Can anyone please tell me how to view this data as an image using OpenCV?
I am using the code below, but I am getting a completely weird image:
void openRaw() {
    cv::Mat img(numRows, numCols, CV_8U, &(image[0][0]));
    //img.t();
    cv::imshow("img", img);
    cv::waitKey();
}
Any help will be greatly appreciated.
Thanks,
Rohit
You have to convert it to an IplImage.
If you want to see it as a pure grey-scale image, it's actually rather easy.
Example code I use in one application:
CvSize mSize;
mSize.height = 960;
mSize.width = 1280;
IplImage* image1 = cvCreateImage(mSize, 8, 1);
memcpy( image1->imageData, rawDataPointer, sizeOfImage);
cvNamedWindow( "corners1", 1 );
cvShowImage( "corners1", image1 );
At that point you have a valid IplImage, which you can then display (the last two lines of code display it).
If the image is Bayer-tiled, you will have to convert it to RGB.
In C++ notation:
cv::Mat img(rows, cols, CV_8U, ptrToDat);
cv::imshow("img", img);
cv::waitKey();
This assumes the data is stored row-wise; if it is stored column-wise, use:
cv::Mat img(cols, rows, CV_8U, ptrToDat);
img = img.t();
cv::imshow("img", img);
cv::waitKey();
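Putting the pieces together, a minimal sketch of reading the 5-byte header and the pixel data described in the question (the helper name and the byte order of the two 16-bit header fields are assumptions; adjust them to match the actual file):
#include <opencv2/opencv.hpp>
#include <fstream>
#include <vector>

cv::Mat loadRaw8(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    unsigned char header[5];
    in.read(reinterpret_cast<char*>(header), 5); // rows, cols as 16-bit values, then bits per pixel
    int rows = header[0] | (header[1] << 8); // assumed little-endian
    int cols = header[2] | (header[3] << 8);
    int bpp = header[4]; // expected to be 8 here (unused in this sketch)
    std::vector<unsigned char> pixels((size_t)rows * cols);
    in.read(reinterpret_cast<char*>(pixels.data()), pixels.size());
    return cv::Mat(rows, cols, CV_8U, pixels.data()).clone(); // clone so the Mat owns its data
}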

Save char array as JPG for C++ Windows Store App

Given the following: raw bitmap image data in a char array, the image width and height, and a path wzAppDataDirectory in a std::wstring generated using the following code:
// Get a good path.
wchar_t wzAppDataDirectory[MAX_PATH];
wcscpy_s( wzAppDataDirectory, MAX_PATH, Windows::Storage::ApplicationData::Current->LocalFolder->Path->Data() );
wcscat_s( wzAppDataDirectory, MAX_PATH, (std::wstring(L"\\") + fileName).c_str() );
How can we save the image as a JPG? (Including the encoding, since the char array is in raw bitmap form.)
A code example is very much appreciated.
You'll need to use a library to encode the JPEG. Some possibilities are the Independent JPEG Group's jpeglib, stb_image, or DevIL.
This is example code that I obtained from my friend.
It uses OpenCV's Mat data structure. Note that you need to ensure the unsigned char data array within the cv::Mat is in continuous form; cv::cvtColor will do the trick (or cv::Mat::clone).
Take note: do not use OpenCV's imwrite. As of the time of writing, imwrite doesn't pass the Windows Store Certification Test, because it uses several APIs that are prohibited in WinRT.
void SaveMatAsJPG(const cv::Mat& mat, const std::wstring fileName)
{
    cv::Mat tempMat;
    cv::cvtColor(mat, tempMat, CV_BGR2BGRA); // BGRA matches the Bgra8 pixel format used below

    Platform::String^ pathName = ref new Platform::String(fileName.c_str());

    task<StorageFile^>(ApplicationData::Current->LocalFolder->CreateFileAsync(pathName, CreationCollisionOption::ReplaceExisting)).
    then([=](StorageFile^ file)
    {
        return file->OpenAsync(FileAccessMode::ReadWrite);
    }).
    then([=](IRandomAccessStream^ stream)
    {
        return BitmapEncoder::CreateAsync(BitmapEncoder::JpegEncoderId, stream);
    }).
    then([=](BitmapEncoder^ encoder)
    {
        const Platform::Array<unsigned char>^ pixels =
            ref new Platform::Array<unsigned char>(tempMat.data, tempMat.total() * tempMat.channels());
        encoder->SetPixelData(BitmapPixelFormat::Bgra8, BitmapAlphaMode::Ignore, tempMat.cols, tempMat.rows, 96.0, 96.0, pixels);
        encoder->FlushAsync();
    });
}