Save raw RGB values to JPG using libjpeg - C++

I have a canvas that is represented by a 2D array of the type colorData.
The class colorData simply holds the RGB value of each pixel.
I have been looking at examples of people using libjpeg to write a jpg but none of them seem to use the RGB values.
Is there a way to save raw RGB values to a jpeg using libjpeg? Or better yet, is there an example of code using the raw RGB data for the jpeg data?

Look in example.c in the libjpeg source. It gives a complete example of how to write a JPEG file using RGB data.
The example uses a buffer variable image_buffer and the dimension variables image_width and image_height. You will need to adapt it to copy the RGB values from your colorData class into the image buffer (this can be done one row at a time).
Fill an array of bytes with the RGB data (3 bytes per pixel), then set row_pointer[0] to point to that array before calling jpeg_write_scanlines.
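The row-packing step above can be sketched in plain C++. This is a minimal sketch, not the libjpeg API itself: the colorData struct here is a hypothetical stand-in for the asker's class, assumed to hold r, g, b byte members.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the asker's colorData class.
struct colorData {
    uint8_t r, g, b;
};

// Pack one canvas row into the 3-bytes-per-pixel layout that
// libjpeg's example.c expects behind row_pointer[0].
std::vector<uint8_t> packRow(const std::vector<colorData>& row) {
    std::vector<uint8_t> bytes;
    bytes.reserve(row.size() * 3);
    for (const colorData& px : row) {
        bytes.push_back(px.r);
        bytes.push_back(px.g);
        bytes.push_back(px.b);
    }
    return bytes;
}
```

In the real compression loop you would point row_pointer[0] at bytes.data() and call jpeg_write_scanlines(&cinfo, row_pointer, 1) once per canvas row.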

Related

Convert RGB32 image to ofPixels in Open Frameworks

I am trying to display a video from a video decoder library.
The video is delivered as a byte array in RGB32 pixel format, meaning every pixel is represented by 32 bits:
RRGGBBFF - 8-bit R, 8-bit G, 8-bit B, 8-bit 0xFF.
Similar to Qt's QImage Format_RGB32.
I think I need to convert the pixel array to ofPixels, then load the pixels into an ofTexture.
Then I can draw the texture.
I don't know how to convert/set the ofPixels from this pixel format.
Any tips/ideas are so so welcome.
Thanks!
Try using an ofThreadChannel as described in this example in order to avoid reading from and writing to your ofTexture / ofPixels at the same time.
Then you can upload a uint8_t* buffer with ofTexture's loadData() method:
// assuming data is populating the externalBuffer
void* externalBuffer;
tex.loadData((uint8_t*)externalBuffer, width, height, GL_RGBA);
Hope this helps,
Best,
P

How does cv::imencode read an image?

I have a question about the cv::imencode function.
It says here that it encodes an image into a buffer. I understood that the result is an array of values in [0, 255]. Is that correct? Let's say it's a grayscale image, to simplify.
Assuming my picture is represented by this grid:
If I were to draw an arrow representing the order in which the pixels are read by the cv::imencode function, what would the result be?
I understood that the result is an array of [0, 255] value. Is that correct?
Not necessarily. The format depends on the encoder. The main point of most encoders is to compress the data; the corresponding decoder decompresses the encoded data back into an n-channel (1-channel for grayscale) dense matrix.
See, for example, how this PAM encoder is implemented in OpenCV. It shows how to access the "raw" image data and this particular way of encoding the image.
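As for the reading order: a dense grayscale cv::Mat is stored row-major, so an encoder that walks the raw data visits pixels left to right within a row, rows from top to bottom. A small sketch of that traversal (plain C++, no OpenCV needed):

```cpp
#include <cstdint>
#include <vector>

// Flatten a grayscale grid in the row-major order OpenCV uses for a
// dense cv::Mat: left to right within a row, rows from top to bottom.
std::vector<uint8_t> rowMajorOrder(const std::vector<std::vector<uint8_t>>& grid) {
    std::vector<uint8_t> out;
    for (const auto& row : grid)   // top to bottom
        for (uint8_t px : row)     // left to right
            out.push_back(px);
    return out;
}
```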

OpenCV generate cv::Mat from array using stride

I have an array of pixel data in RGBA format. I have already converted this data to grayscale on the GPU (so all 4 channels are identical).
I now want to use this grayscale data in OpenCV, and I don't want to store 4 copies of the same data. Is it possible to create a cv::Mat from this pixel array by specifying a stride (i.e. only read out every 4th byte)?
I am currently using
GLubyte* Img = stuff from GPU;
cv::Mat tmp(height, width, CV_8UC4, Img);
Does this copy all the data, or does it wrap the existing pointer in a cv::Mat without copying it? If it wraps without copying, then I will be happy to use standard C++ routines to copy only the data I want from Img into a new section of memory and then wrap that as a cv::Mat.
Otherwise, how would you suggest reducing the amount of data being copied?
Thanks
The code that you are using
cv::Mat tmp(rows, cols, CV_8UC4, dataPointer);
does not perform any copy; it only assigns the data field of the Mat instance.
If it's ok for you to work with a matrix of 4 channels, then just go on.
Otherwise, if you prefer working with a 1-channel matrix, then just use the function cv::cvtColor() to create a new image with a single channel (but then you will get one additional image in memory and pay the CPU cycles for the conversion):
cv::Mat grey;
cv::cvtColor(tmp, grey, CV_BGRA2GRAY); // tmp has 4 channels, so use a 4-channel conversion code
Finally, one last thing: if you can deinterleave the color planes beforehand (for example on the GPU) and get an image laid out as [blue plane, green plane, red plane], then you can pass CV_8UC1 as the image type when constructing tmp and get a single-channel grey image without any data copy.

Converting an OpenCV image to a GDI bitmap doesn't work, depending on image size

I have this code that converts an opencv image to a bitmap:
void processimage(cv::Mat imageData)
{
Gdiplus::Bitmap bitmap(imageData.cols,imageData.rows,stride, PixelFormat24bppRGB,imageData.data);
// do some work with bitmap
}
It works well when the image size is 2748 × 3664, but when I try to process an image of size 1374 × 1832, it doesn't work.
The error is invalid parameter(2).
I checked and can confirm that:
For 2748 × 3664:
cols = 2748
rows = 3664
stride = 8244
image is continuous.
For 1374 × 1832:
cols = 1374
rows = 1832
stride = 4122
image is continuous.
So everything seems correct to me, but it generates an error.
What is the problem and how can I fix it?
Edit
Based on answer which explained why I can not create bitmap. I finally implemented it in this way:
Mat newImage;
cvtColor(imageData, newImage, CV_BGR2BGRA);
Gdiplus::Bitmap bitmap(newImage.cols,newImage.rows,newImage.step1(), PixelFormat32bppRGB,newImage.data);
So effectively, I convert the input image to 4 bytes per pixel and then convert it to a bitmap.
All credits to Roger Rowland for his answer.
I think the problem is that a GDI+ bitmap must have a stride that is a multiple of 4.
Your larger image has a stride of 8244, which is valid (8244/4 = 2061) but your smaller image has a stride of 4122, which is not (4122/4 = 1030.5).
As it says on MSDN for the stride parameter (with my emphasis):
Integer that specifies the byte offset between the beginning of one
scan line and the next. This is usually (but not necessarily) the
number of bytes in the pixel format (for example, 2 for 16 bits per
pixel) multiplied by the width of the bitmap. The value passed to this
parameter must be a multiple of four.
Assuming your stride is correct, I think your only option is to copy it row by row. So, something like:
Create a Gdiplus::Bitmap of the required size and format.
Use LockBits to get the bitmap pixel data.
Copy the OpenCV image one row at a time.
Call UnlockBits to release the bitmap data.
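The row-by-row copy into a 4-byte-aligned destination can be sketched in plain C++, with the GDI+ calls left out. This is a sketch of the padding arithmetic only; in the real code the destination buffer and its stride come from LockBits.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Round a row length in bytes up to the next multiple of 4, as GDI+
// requires for a bitmap stride.
inline int alignedStride(int rowBytes) {
    return (rowBytes + 3) & ~3;
}

// Copy src (tightly packed, srcStride bytes per row) into a buffer whose
// rows are padded to a 4-byte-aligned stride, one row at a time -- the
// same pattern you would use between LockBits and UnlockBits.
std::vector<uint8_t> copyWithAlignedStride(const uint8_t* src, int srcStride,
                                           int rows) {
    const int dstStride = alignedStride(srcStride);
    std::vector<uint8_t> dst(static_cast<size_t>(dstStride) * rows, 0);
    for (int y = 0; y < rows; ++y)
        std::memcpy(&dst[static_cast<size_t>(y) * dstStride],
                    src + static_cast<size_t>(y) * srcStride, srcStride);
    return dst;
}
```

For the question's sizes, alignedStride(8244) is already 8244, while alignedStride(4122) pads each row to 4124 bytes, which is why only the smaller image fails when passed unpadded.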
You can use my class CGdiPlus that implements all you need to convert from cv::Mat to Gdiplus::Bitmap and vice versa:
OpenCV / Tesseract: How to replace libpng, libtiff etc with GDI+ Bitmap (Load into cv::Mat via GDI+)

Direct Show YUY2 Pixel Output from videoInput

I'm using videoInput to interface with DirectShow and get pixel data from my webcam.
From another question I've asked, people have suggested that the pixel format is just appended arrays in the order of the Y, U, and V channels.
FourCC's website suggests that the pixel format does not actually follow this pattern, and is instead |Y0|U0|Y1|V0|Y2|U1|Y3|V1|.
I'm working on a few functions that convert the YUY2 input image into RGB and YV12, and after having little to no success, I thought it might be an issue with how I'm interpreting the initial YUY2 image data.
Am I correct in assuming that the pixel data is in the format from the FourCC website, or are the Y, U, and V channels separate arrays that have been concatenated (so the data is in channel order, for example YYYYUUVV)?
In YUY2 each row is a sequence of 4-byte packets: YUYV describing two adjacent pixels.
In YV12 there are 3 separate planes: first Y of size width*height then V and then U, both of size width/2 * height/2.
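Decoding one such 4-byte packet into two RGB pixels can be sketched as follows. The conversion coefficients here assume full-range BT.601; DirectShow sources may instead deliver limited-range (16-235) video, in which case the equations need the usual range scaling.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

static uint8_t clamp8(int v) {
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

// Convert one YUY2 packet (Y0 U Y1 V -> two adjacent pixels sharing the
// same U and V samples) to RGB using full-range BT.601 equations.
// Output layout: R0 G0 B0 R1 G1 B1.
std::vector<uint8_t> yuy2PacketToRgb(uint8_t y0, uint8_t u, uint8_t y1, uint8_t v) {
    std::vector<uint8_t> rgb;
    for (int y : {int(y0), int(y1)}) {
        rgb.push_back(clamp8(y + (int)(1.402 * (v - 128))));                     // R
        rgb.push_back(clamp8(y - (int)(0.344 * (u - 128) + 0.714 * (v - 128)))); // G
        rgb.push_back(clamp8(y + (int)(1.772 * (u - 128))));                     // B
    }
    return rgb;
}
```

Note that with U = V = 128 the chroma terms vanish and each output pixel is simply grey at its Y value, which is a handy sanity check for a YUY2 interpreter.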