Fast conversion from gray level image to QImage - c++

In an application I handle images where each pixel is either an unsigned int or a float, with each value representing the grey level of that pixel. I have the source available, so I can access the image data freely.
I need to display, save and load these pictures using the Qt framework. Currently the only way I handle the conversion is to get and set each pixel, which is proving to be a bit slow.
Is there any other way to convert these images?

Instead of using QImage::setPixel you should access the image buffer directly.
After you create the image with the desired format, width and height, you can use QImage::bits() to access the memory buffer, or QImage::scanLine() to retrieve a pointer to the beginning of each line of the image, and set the pixels directly in memory: this is much faster than calling setPixel() for each pixel.
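The row-pointer approach can be sketched without Qt itself: below, a plain byte vector stands in for the QImage buffer, and `toGrey8` converts a float grey image into 8-bit rows the way you would with `scanLine()`. The buffer layout (one padded row per scan line) matches what QImage uses for Format_Grayscale8; the equivalent Qt call is noted in the comments.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Convert a float grey image (values in [0,1]) into an 8-bit buffer laid out
// like QImage's Format_Grayscale8 data: one row per scan line, each row
// starting at a 32-bit aligned offset (bytesPerLine). With a real QImage you
// would write into image.scanLine(y) instead of dst.data() + y * bytesPerLine.
std::vector<uint8_t> toGrey8(const std::vector<float>& src, int width, int height)
{
    const int bytesPerLine = (width + 3) & ~3;   // rows padded to a multiple of 4
    std::vector<uint8_t> dst(static_cast<size_t>(bytesPerLine) * height, 0);
    for (int y = 0; y < height; ++y) {
        uint8_t* line = dst.data() + static_cast<size_t>(y) * bytesPerLine; // == image.scanLine(y)
        const float* srcLine = src.data() + static_cast<size_t>(y) * width;
        for (int x = 0; x < width; ++x)
            line[x] = static_cast<uint8_t>(
                std::min(255.0f, std::max(0.0f, srcLine[x] * 255.0f)));
    }
    return dst;
}
```

Touching each row through a single pointer like this replaces width × height setPixel() calls with simple pointer arithmetic, which is where the speedup comes from.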

QImage has a constructor that takes a pointer to an existing buffer/image:
QImage ( uchar * data, int width, int height, Format format )
It does not take ownership of the buffer nor does it copy the contents, so you are responsible for keeping the buffer valid throughout the lifetime of the QImage.
Note: QImage requires image rows to be 32-bit aligned, so you might need to copy the image row by row into a new buffer with appropriate padding. Since you only have unsigned or float pixels (already 32-bit values), this doesn't apply to you, but keep it in mind should you have different pixel types in the future.

Related

How to write opencv cv::Mat image directly in boost shared memory

I have two processes that want to share cv::Mat image data, and I want to use boost's managed_shared_memory to do it. Since copying an image is really time consuming, I am trying to find a way to write the image directly to shared memory when it first appears.
However, since cv::Mat is only a header holding a pointer to the image data, and the data lives somewhere else, I couldn't get my idea to work. I have some test code, but it is chaotic and doesn't work, so I think I am going in the totally wrong direction. Does anyone have experience with this? Thank you!
The cv::Mat::ptr() function gives you a pointer to the start of an OpenCV image's data.
The size of the data buffer is Channels * Height * Width * elemSize, so you can just use
memcpy(dest, image.ptr(), Channels * Height * Width) if the elements are 1 byte each (based on the CvType).
Caveats:
- The image must be continuous. Use isContinuous() to check. If it fails, clone() the image to get a continuous copy.
- To retrieve the image from shared memory, you will have to construct a new cv::Mat with the same height, width, channels, CvType and step. Then use memcpy.
See Shared Memory Example for a minimal working example.
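The round trip can be sketched with plain stdlib containers: a `std::vector` stands in for the boost::interprocess segment and a bare struct stands in for a continuous cv::Mat, so the only thing demonstrated is the size formula and the two memcpy calls. The names `FakeMat`, `writeToShared` and `readFromShared` are illustrative, not from either library.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

struct FakeMat {                 // minimal stand-in for a continuous cv::Mat
    int rows, cols, channels;
    std::vector<uint8_t> data;   // rows * cols * channels bytes, elemSize == 1
};

// Like memcpy(shared, image.ptr(), channels * height * width) on a real Mat.
void writeToShared(const FakeMat& img, uint8_t* shared)
{
    std::memcpy(shared, img.data.data(),
                static_cast<size_t>(img.channels) * img.rows * img.cols);
}

// Reconstruct: allocate a Mat of the same geometry, then memcpy back.
FakeMat readFromShared(const uint8_t* shared, int rows, int cols, int channels)
{
    FakeMat out{rows, cols, channels,
                std::vector<uint8_t>(static_cast<size_t>(channels) * rows * cols)};
    std::memcpy(out.data.data(), shared, out.data.size());
    return out;
}
```

In the real setup the `shared` pointer would come from the managed_shared_memory segment, and the continuity check from the caveats above still applies before the first memcpy.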

Save raw RGB values to JPG using libjpeg

I have a canvas that is represented by a 2D array of the type ColorData.
The ColorData class simply holds the RGB value of each pixel.
I have been looking at examples of people using libjpeg to write a jpg but none of them seem to use the RGB values.
Is there a way to save raw RGB values to a jpeg using libjpeg? Or better yet, is there an example of code using the raw RGB data for the jpeg data?
Look in example.c in the libjpeg source. It gives a complete example of how to write a JPEG file using RGB data.
The example uses a buffer variable image_buffer and height and width variables image_width and image_height. You will need to adapt it to copy the RGB values from your ColorData class and place them into the image buffer (this can be done one row at a time).
Fill an array of bytes with the RGB data (3 bytes for each pixel) and then set row_buffer[0] to point to the array before calling jpeg_write_scanlines.
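Packing one scan line can be shown on its own: the `ColorData` struct below is a stand-in for the class in the question, and the returned byte vector is what `row_buffer[0]` would point at when calling jpeg_write_scanlines (the libjpeg call itself is only noted in a comment).

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct ColorData { uint8_t r, g, b; };   // hypothetical pixel class

// Interleave one row of pixels as R, G, B: 3 bytes per pixel, the layout
// libjpeg expects for JCS_RGB input.
std::vector<uint8_t> packRow(const std::vector<ColorData>& row)
{
    std::vector<uint8_t> bytes;
    bytes.reserve(row.size() * 3);
    for (const ColorData& p : row) {
        bytes.push_back(p.r);
        bytes.push_back(p.g);
        bytes.push_back(p.b);
    }
    return bytes;  // row_buffer[0] = bytes.data(); jpeg_write_scanlines(&cinfo, row_buffer, 1);
}
```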

OpenCV generate cv::Mat from array using stride

I have an array of pixel data in RGBA format, although I have already converted this data to grayscale on the GPU (so all 4 channels are identical).
I now want to use this grayscale data in OpenCV, and I don't want to store 4 copies of the same data. Is it possible to create a cv::Mat structure from this pixel array by specifying a stride (i.e. only read every 4th byte)?
I am currently using
GLubyte* Img = stuff from GPU;
cv::Mat tmp(height, width, CV_8UC4, Img);
Does this copy all the data, or does it wrap the existing pointer in a cv::Mat without copying it? If it wraps without copying, I will be happy to use standard C++ routines to copy only the data I want from Img into a new section of memory and then wrap that as a cv::Mat.
Otherwise, how would you suggest reducing the amount of data being copied?
Thanks
The code that you are using
cv::Mat tmp(rows, cols, CV_8UC4, dataPointer);
does not perform any copy; it only assigns the data field of the Mat instance.
If it's ok for you to work with a matrix of 4 channels, then just go on.
Otherwise, if you prefer working with a 1-channel matrix, just use cv::cvtColor() to create a new image with a single channel (but then you will get one additional image in memory and pay the CPU cycles for the conversion):
cv::Mat grey;
cv::cvtColor(tmp, grey, CV_RGBA2GRAY);
Finally, one last thing: if you can deinterlace the color planes beforehand (for example on the GPU) and get an image laid out as [blue plane, green plane, red plane], then you can pass CV_8UC1 as the image type when constructing tmp, and you get a single-channel grey image without any data copy.

Converting an OpenCV image to a GDI bitmap fails depending on image size

I have this code that converts an opencv image to a bitmap:
void processimage(cv::Mat imageData)
{
Gdiplus::Bitmap bitmap(imageData.cols,imageData.rows,stride, PixelFormat24bppRGB,imageData.data);
// do some work with bitmap
}
It works well when the size of the image is 2748 x 3664, but when I try to process an image of size 1374 x 1832, it doesn't work.
The error is invalid parameter (2).
I checked and can confirm that:
for 2748 x 3664:
cols = 2748
rows = 3664
stride = 8244
image is continuous.
for 1374 x 1832:
cols = 1374
rows = 1832
stride = 4122
image is continuous.
So everything seems correct to me, but it generates an error.
What is the problem and how can I fix it?
Edit
Based on the answer, which explained why I cannot create the bitmap, I finally implemented it this way:
Mat newImage;
cvtColor(imageData, newImage, CV_BGR2BGRA);
Gdiplus::Bitmap bitmap(newImage.cols,newImage.rows,newImage.step1(), PixelFormat32bppRGB,newImage.data);
So effectively, I convert the input image to 4 bytes per pixel and then convert that to a bitmap.
All credits to Roger Rowland for his answer.
I think the problem is that a BMP format must have a stride that is a multiple of 4.
Your larger image has a stride of 8244, which is valid (8244/4 = 2061) but your smaller image has a stride of 4122, which is not (4122/4 = 1030.5).
As it says on MSDN for the stride parameter (with my emphasis):
Integer that specifies the byte offset between the beginning of one
scan line and the next. This is usually (but not necessarily) the
number of bytes in the pixel format (for example, 2 for 16 bits per
pixel) multiplied by the width of the bitmap. The value passed to this
parameter must be a multiple of four.
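The multiple-of-four rule can be checked in a couple of lines, using the two images from the question as the worked example (the helper name `paddedStride` is mine, not a GDI+ API):

```cpp
#include <cassert>

// Round a row's byte count up to the next multiple of four, as the GDI+
// Bitmap constructor requires for its stride parameter.
int paddedStride(int width, int bytesPerPixel)
{
    int stride = width * bytesPerPixel;
    return (stride + 3) & ~3;            // round up to a multiple of 4
}
```

For the larger image 2748 * 3 = 8244 is already a multiple of four, so the constructor accepts it; for the smaller one 1374 * 3 = 4122 is not, and the rows would have to be padded out to 4124 bytes.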
Assuming your stride values are correct, I think your only option is to copy it row by row. So, something like:
- Create a Gdiplus::Bitmap of the required size and format
- Use LockBits to get the bitmap pixel data
- Copy the OpenCV image one row at a time
- Call UnlockBits to release the bitmap data
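The copy step in that list boils down to a strided row copy, sketched here with plain buffers: `src`/`srcStride` stand in for the tightly packed OpenCV rows, and `dst`/`dstStride` for the padded rows that LockBits would hand back in a BitmapData::Scan0 pointer.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy `rows` rows of `rowBytes` meaningful bytes between buffers whose rows
// start `srcStride` / `dstStride` bytes apart; the padding bytes at the end
// of each destination row are left untouched.
void copyRows(const uint8_t* src, int srcStride,
              uint8_t* dst, int dstStride,
              int rowBytes, int rows)
{
    for (int y = 0; y < rows; ++y)
        std::memcpy(dst + static_cast<size_t>(y) * dstStride,
                    src + static_cast<size_t>(y) * srcStride,
                    rowBytes);
}
```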
You can use my class CGdiPlus that implements all you need to convert from cv::Mat to Gdiplus::Bitmap and vice versa:
OpenCV / Tesseract: How to replace libpng, libtiff etc with GDI+ Bitmap (Load into cv::Mat via GDI+)

scanline function in qimage class

I'm developing an application for editing raster graphics. In this application I have to create a scanline function which does the same thing as the scanline function in the QImage class.
But I'm a little confused about the way the scanline function works, and about scanlines generally.
For example, when I call bytesPerLine() for an image whose height is 177px, I was expecting the value to be 531 (3 bytes for each pixel), but this function returns 520?
Also, when I use
uchar data = image->scanLine(y)[x]
for R=249 G=249 B=249, the value in the variable data is 255.
I really don't understand this value.
Thanks in advance :)
For reliable behavior you should check the return value of QImage::format() to see what underlying format is used before accessing the raw image data.
Qt seems to prefer the RGB32/ARGB32 formats for true-color images, where each pixel takes 4 bytes whether or not an alpha channel exists (for the RGB32 format the alpha byte is simply filled with 0xff). If you load a true-color image, it's probably in one of these two formats.
Besides, the byte order can differ across platforms, so use QRgb to access 32-bit pixels whenever possible.
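That also explains the mystery 255: in (A)RGB32 each pixel is a 32-bit value 0xAARRGGBB, so indexing the scan line with a plain uchar offset reads a single byte of some pixel, not a whole pixel. A small Qt-free sketch (`packArgb` mimics qRgb(), `byteAt` mimics what `scanLine(y)[i]` would read from the in-memory bytes):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Pack an opaque pixel the way qRgb() does: 0xAARRGGBB with alpha 0xff.
uint32_t packArgb(uint8_t r, uint8_t g, uint8_t b)
{
    return 0xFF000000u | (uint32_t(r) << 16) | (uint32_t(g) << 8) | b;
}

// Read one raw byte of the pixel as stored in memory, i.e. what indexing a
// scan line with a uchar offset does. On a little-endian machine the bytes
// of 0xFFF9F9F9 are B, G, R, A = 0xF9, 0xF9, 0xF9, 0xFF.
uint8_t byteAt(uint32_t pixel, int i)
{
    uint8_t bytes[4];
    std::memcpy(bytes, &pixel, 4);       // platform byte order, as in QImage memory
    return bytes[i];
}
```

So for a (249, 249, 249) pixel, landing on the alpha byte yields 255; this is exactly why reading whole pixels through QRgb is the safe approach.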
BTW, shouldn't a scanline be horizontal? I think you should use width() instead of height() to calculate the length of a scanline.