Convert a cv::Mat to a pangolin::Image? - c++

I have an application that takes RGB-D images in pangolin::Image format. I would like to feed it a cv::Mat instead. How can I convert a cv::Mat to a pangolin::Image?
(pangolin: https://github.com/stevenlovegrove/Pangolin)
Image header:
https://github.com/stevenlovegrove/Pangolin/blob/master/include/pangolin/image/image.h
currently the format is:
pangolin::ManagedImage<unsigned short> firstData(640, 480);
pangolin::Image<unsigned short> firstRaw(firstData.w, firstData.h, firstData.pitch, (unsigned short*)firstData.ptr);
where firstRaw is then sent through the application.
If I now have:
cv::Mat frame = cv::imread(filepath,0);
What is the conversion from frame to firstRaw?
I start like this:
int loadDepthFromMat(cv::Mat filepath, pangolin::Image<unsigned short> & depth)
{
    int width = filepath.cols;
    int height = filepath.rows;
    pangolin::ManagedImage<unsigned short> depthRaw(width, height);
    pangolin::Image<unsigned short> depthRaw16((unsigned short*)depthRaw.ptr, depthRaw.w, depthRaw.h, depthRaw.w * sizeof(unsigned short));
    //copy data
}
Thank you.

So, assuming you have converted your cv::Mat to unsigned short format with the correct pitch (or number of channels, in OpenCV terms), you can just use memcpy. I've renamed your cv::Mat from filepath to mat (why is it called filepath?):
memcpy((void*)depthRaw16.begin(), (void*)mat.data, mat.total() * mat.elemSize());
Again, be sure your pangolin image has identical dimensions and be sure the cv::Mat is converted to unsigned short.
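Putting the answer together, a minimal sketch of the complete function could look like the following. This is not necessarily the poster's final code: it assumes the cv::Mat has already been converted to CV_16UC1 (unsigned short), is continuous in memory, and that the pangolin image was allocated with matching dimensions and no row padding, as in the question.
// Minimal sketch: copy a continuous CV_16UC1 cv::Mat into a pangolin::Image<unsigned short>.
// Assumes the caller owns the backing ManagedImage, exactly like the firstData/firstRaw pair above.
int loadDepthFromMat(const cv::Mat& mat, pangolin::Image<unsigned short>& depth)
{
    CV_Assert(mat.type() == CV_16UC1 && mat.isContinuous());
    CV_Assert((size_t)mat.cols == depth.w && (size_t)mat.rows == depth.h);
    // Copy the raw 16-bit pixels straight into the pangolin image (no row padding assumed).
    memcpy((void*)depth.begin(), (void*)mat.data, mat.total() * mat.elemSize());
    return 0;
}
Called with the firstData/firstRaw pair from the question (after loading the depth PNG with cv::IMREAD_ANYDEPTH so 16-bit values are preserved, for example), this fills firstRaw with the depth values from the Mat.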

Related

OpenCV how to encode raw image information for imencode?

I am in C++.
Assume some mysterious function getData() returns all of, and only, the pixel information of an image,
i.e. a char* that points only to the pixel data, with no metadata (no width, length, height, or channels of any form).
Thus we have:
unsigned char *raw_data = getData();
Then we have another function that returns a structure containing the metadata.
eg:
struct Metadata {
    int width;
    int height;
    int channels;
    //other useful fields
};
I now need to prepend the object metadata in the correct way to create a valid image buffer.
So instead of [pixel1, pixel2, pixel3 ...]
I would have, for example [width, height, channels, pixel1, pixel2, pixel3...]
What is the correct order to prepend the metadata and are width, height and channels enough?
You can use the Mat constructor to create an image from the data and the metadata:
Mat::Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP); // see the OpenCV documentation
cv::Mat image = cv::Mat(height, width, CV_8UC3, raw_data);
The type argument specifies the number of channels and the data format. For example, typical RGB image data is unsigned char with 3 channels, so its type is CV_8UC3.
The available OpenCV Mat types are defined in cvdef.h.
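For instance, here is a minimal sketch of how this would feed into imencode (the goal in the title). The names getData() and Metadata are the hypothetical ones from the question, and the pixel data is assumed to be tightly packed, 8-bit, and BGR-ordered (OpenCV's native channel order):
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<unsigned char> encodeRaw(unsigned char* raw_data, const Metadata& meta)
{
    // Wrap the raw pixels without copying; width/height/channels come from the metadata.
    int type = CV_8UC(meta.channels);   // e.g. CV_8UC3 for 3-channel 8-bit data
    cv::Mat image(meta.height, meta.width, type, raw_data);
    // Encode to an in-memory PNG buffer; the encoder writes the header itself,
    // so no metadata has to be prepended by hand.
    std::vector<unsigned char> buffer;
    cv::imencode(".png", image, buffer);
    return buffer;
}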

DevIL/OpenIL isn't loading alpha channel

I am having an issue where the .png image that I want to load as a byte array using DevIL does not seem to have an alpha channel.
A completely black image also comes back with all alpha channel values set to 0.
This is my image loading function:
DevILCall(ilGenImages(1, &m_ImageID));
DevILCall(ilBindImage(m_ImageID));
ASSERT("Loading image: " + path);
DevILCall(ilLoadImage(path.c_str()));
GraphicComponents::Image image(
    ilGetData(),
    ilGetInteger(IL_IMAGE_HEIGHT),
    ilGetInteger(IL_IMAGE_WIDTH),
    ilGetInteger(IL_IMAGE_BITS_PER_PIXEL)
);
return image;
The Image object I am using is as follows:
struct Image
{
    ILubyte * m_Image;
    const unsigned int m_Height;
    const unsigned int m_Width;
    const unsigned int m_BPP;
    Image(ILubyte imageData[ ], unsigned int height, unsigned int width, unsigned int bpp);
    ~Image();
};
And this is how I am printing out the image data for now:
for(unsigned int i = 0; i < image->m_Height*image->m_Width*4; i+=4)
{
    LOG("Red:");
    LOG((int) image->m_Image[i]);
    LOG("Green:");
    LOG((int) image->m_Image[i+1]);
    LOG("Blue:");
    LOG((int) image->m_Image[i+2]);
    LOG("Alpha:");
    LOG((int) image->m_Image[i+3]);
}
I also tried using ilTexImage() to convert the loaded image to RGBA format, but that also doesn't seem to work. The printing loop starts reading garbage values when I change the upper bound of the loop variable to 4 times the number of pixels in the image.
The image is also confirmed to have an alpha channel.
What might be going wrong here?
EDIT: ilGetInteger(IL_IMAGE_BPP) returns 3, which should mean RGB for now. When I use ilTexImage() to force 4 channels, ilGetInteger(IL_IMAGE_BPP) returns 4, but I still see garbage values popping up on the standard output.
The problem was fixed by a simple ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE) call after loading the image.
I suppose DevIL loads the image in RGB mode with unsigned byte values by default; to use any other format, you need to convert the loaded image with ilConvertImage().
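For reference, a minimal sketch of the loading function with the fix applied, using the same hypothetical DevILCall/ASSERT macros and Image type as in the question:
DevILCall(ilGenImages(1, &m_ImageID));
DevILCall(ilBindImage(m_ImageID));
ASSERT("Loading image: " + path);
DevILCall(ilLoadImage(path.c_str()));
// Force a 4-channel, 8-bit layout so every pixel really carries an alpha byte.
DevILCall(ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE));
GraphicComponents::Image image(
    ilGetData(),
    ilGetInteger(IL_IMAGE_HEIGHT),
    ilGetInteger(IL_IMAGE_WIDTH),
    ilGetInteger(IL_IMAGE_BITS_PER_PIXEL)   // 32 after the conversion
);
return image;
After the conversion, the RGBA printing loop above indexes valid per-pixel data.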

Convert from RGB to YUYV in OpenCV

Is there a way to convert from RGB to YUYV (YUY 4:2:2) format? I noticed that OpenCV has the reverse operation, but not RGB to YUYV for some reason. Maybe someone can point me to code that does this (even outside of the OpenCV library)?
UPDATE
I found the libyuv library, which may work for this purpose by doing a BGR to ARGB conversion and then ARGB to YUY2 (hopefully this is the same as YUYV 4:2:2). But it doesn't seem to work. Do you happen to know what the yuyv buffer dimensions/type should look like? What should its stride be?
To clarify, YUYV and YUY2 are the same format, if that helps.
UPDATE 2
Here is my code using the libyuv library:
Mat frame;
// Convert original image im from BGR to BGRA for further use in libyuv
cvtColor(im, frame, CVX_BGR2BGRA);
// Actually libyuv requires ARGB (i.e. reverse of BGRA), so I swap channels here
int from_to[] = { 0,3, 1,2, 2,1, 3,0 };
mixChannels(&frame, 1, &frame, 1, from_to, 4);
// This is the most confusing part. Not sure what argb_stride is supposed to be - the length of a row in bytes or the size of a single value in the array?
const uint8_t* argb_data = frame.data;
int argb_stride = 8;
// Also it is not clear what size of yuyv frame should be since we duplicate one Y
Mat yuyv(frame.rows, frame.cols, CVX_8UC2);
uint8_t* yuyv_data = yuyv.data;
int yuyv_stride = 16;
// Do actual conversion
libyuv::ARGBToYUY2(argb_data, argb_stride, yuyv_data, yuyv_stride,
                   frame.cols, frame.rows);
// Then I feed yuyv_data to video stream buffer and see green or purple image instead of video stream.
UPDATE 3
Mat frame;
cvtColor(im, frame, CVX_BGR2BGRA);
// ARGB
int from_to[] = { 0,3, 1,2, 2,1, 3,0 };
Mat rgba(frame.size(), frame.type());
mixChannels(&frame, 1, &rgba, 1, from_to, 4);
const uint8_t* argb_data = rgba.data;
int argb_stride = rgba.cols*4;
Mat yuyv(rgba.rows, rgba.cols, CVX_8UC2);
uint8_t* yuyv_data = yuyv.data;
int yuyv_stride = rgba.cols * 2;
int res = libyuv::ARGBToYUY2(argb_data, argb_stride, yuyv_data, yuyv_stride, rgba.cols, rgba.rows);
It appears that although the method is called ARGBToYUY2, it expects BGRA channel order (not the reverse).
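Based on that conclusion, here is a cleaned-up sketch of the working conversion. The CVX_* constants are replaced with the standard cv::COLOR_* ones, and it assumes (as observed above) that libyuv::ARGBToYUY2 effectively consumes BGRA-ordered bytes on a little-endian machine and that the image width is even:
#include <opencv2/opencv.hpp>
#include <libyuv.h>

cv::Mat bgrToYuyv(const cv::Mat& bgr)
{
    // BGR -> BGRA; on little-endian this byte layout is what libyuv's "ARGB" routines read.
    cv::Mat bgra;
    cv::cvtColor(bgr, bgra, cv::COLOR_BGR2BGRA);

    // YUYV stores 2 bytes per pixel: one Y per pixel, U/V shared by each horizontal pixel pair.
    cv::Mat yuyv(bgra.rows, bgra.cols, CV_8UC2);

    // Strides are row lengths in bytes: 4 bytes/pixel for BGRA, 2 bytes/pixel for YUYV.
    libyuv::ARGBToYUY2(bgra.data, bgra.cols * 4,
                       yuyv.data, yuyv.cols * 2,
                       bgra.cols, bgra.rows);
    return yuyv;
}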

OpenCV Convert Image to Bytes and Back

I am trying to convert images to a vector of bytes and back again, but each image comes out horribly distorted. I was hoping someone could tell me why.
These are my conversion methods:
typedef unsigned char byte;
std::vector<byte> matToBytes(cv::Mat image)
{
    int size = image.total() * image.elemSize();
    std::vector<byte> img_bytes(size);
    img_bytes.assign(image.datastart, image.dataend);
    return img_bytes;
}
cv::Mat bytesToMat(vector<byte> bytes, int width, int height)
{
    cv::Mat image(height, width, CV_8UC3, bytes.data());
    return image;
}
It works, but not well; I hope someone can spot why. I am pretty lost!
I was playing around with my code and I got this to work.
cv::Mat bytesToMat(vector<byte> bytes, int width, int height)
{
    cv::Mat image = cv::Mat(height, width, CV_8UC3, bytes.data()).clone(); // make a copy
    return image;
}
I suppose the .clone() does something important: without it, the returned Mat merely wraps the vector's buffer, which is freed as soon as the vector goes out of scope, so the pixel data becomes invalid.
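A short round-trip sketch of how the two helpers would be used (assuming a 3-channel, continuous BGR image, which is what the CV_8UC3 in bytesToMat implies; "input.png" is just a placeholder path):
cv::Mat original = cv::imread("input.png");   // placeholder input
std::vector<byte> bytes = matToBytes(original);
cv::Mat restored = bytesToMat(bytes, original.cols, original.rows);
// With the .clone() version, restored owns its pixels, so bytes can be freed or go out of scope safely.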

Moving my array to Mat and showing image with open CV

I am having problems using OpenCV to display an image. As my code currently works, I have a function that loads 78 images of size 710x710 of unsigned shorts into a single array. I have verified this works by writing the data to a file and reading it with ImageJ. I am now trying to extract a single image frame from the array and load it into a Mat in order to perform some processing on it. So far I have tried two ways to do this. The code compiles and runs if I do not try to read the output, but the output looks wrong when I print the Mat with cout.
My question is: how do I extract the data from my large 1-D array of 78 images of size 710x710 into single Mat images? Or is there a more efficient way, where I can load the images into a 3-D Mat of dimensions 710x710x78 and operate on each 710x710 slice as needed?
int main(int argc, char *argv[])
{
    Mat OriginalMat, TestImage;
    long int VImageSize = 710*710;
    int NumberofPlanes = 78;
    int FrameNum = 150;
    unsigned short int *PlaneStack = new unsigned short int[NumberofPlanes*VImageSize];
    unsigned short int *testplane = new unsigned short int[VImageSize];
    /////Load PlaneStack/////
    Load_Vimage(PlaneStack, Path, NumberofPlanes);
    // Here I try to extract a single plane image to the mat testplane; I try it two different ways with the same results
    memcpy(testplane, &PlaneStack[710*710*40], VImageSize*sizeof(unsigned short int));
    //copy(&PlaneStack[VImageSize*40],&PlaneStack[VImageSize*41], testplane);
    // move single plane to a mat file
    OriginalMat = Mat(710,710,CV_8U, &testplane);
    //cout<<OriginalMat;
    namedWindow("Original");
    imshow("Original", OriginalMat);
}
The problem is you are using the constructor Mat::Mat(int rows, int cols, int type, void* data) with a pointer to 16 bit data (unsigned short int) but you are specifying the type CV_8U (8 bit).
Therefore the first byte of your 16 bit pixel becomes the first pixel in OriginalMat, and the second byte of the first pixel becomes the second pixel in OriginalMat, etc.
You need to create a 16 bit Mat, then convert it to 8 bit if you want to display it, e.g.:
int main(int argc, char *argv[])
{
    long int VImageSize = 710*710;
    int NumberofPlanes = 78;
    int FrameNum = 150;
    /////Load PlaneStack/////
    unsigned short int *PlaneStack = new unsigned short int[NumberofPlanes*VImageSize];
    Load_Vimage(PlaneStack, Path, NumberofPlanes);
    // Get a pointer to the plane we want to view
    unsigned short int *testplane = &PlaneStack[710*710*40];
    // "move" single plane to a mat file
    // actually nothing gets moved, OriginalMat will just contain a pointer to your data.
    Mat OriginalMat(710, 710, CV_16UC1, testplane);   // pass the data pointer itself, not its address
    double scale_factor = 1.0 / 256.0;
    Mat DisplayMat;
    OriginalMat.convertTo(DisplayMat, CV_8UC1, scale_factor);
    namedWindow("Original");
    imshow("Original", DisplayMat);
}