Convert RGB32 image to ofPixels in Open Frameworks - c++

I am trying to display a video from a video decoder library.
The video is delivered as a byte array with RGB32 pixel format.
Meaning every pixel is represented by 32 bits.
RRGGBBFF - 8-bit R, 8-bit G, 8-bit B, 8-bit 0xFF.
Similar to Qt's QImage Format_RGB32.
I think I need to convert the pixel array to ofPixels, then load the pixels into an ofTexture.
Then I can draw the texture.
I don't know how to convert/set the ofPixels from this pixel format.
Any tips/ideas are so so welcome.
Thanks!

Try using an ofThreadChannel as described in this example in order to avoid writing and reading your ofTexture / ofPixels at the same time.
Then you can load a uint8_t* buffer using ofTexture's loadData() method:
// assuming the decoder populates externalBuffer
uint8_t* externalBuffer;
tex.loadData(externalBuffer, width, height, GL_RGBA);
Hope this helps,
Best,
P

Related

Save raw RGB values to JPG using libjpeg

I have a canvas that is represented by a 2D array of the type colorData.
The class colorData simply holds the RGB value of each pixel.
I have been looking at examples of people using libjpeg to write a jpg but none of them seem to use the RGB values.
Is there a way to save raw RGB values to a jpeg using libjpeg? Or better yet, is there an example of code using the raw RGB data for the jpeg data?
Look in example.c in the libjpeg source. It gives a complete example of how to write a JPEG file using RGB data.
The example uses a buffer variable image_buffer and height and width variables image_width and image_height. You will need to adapt it to copy the RGB values from your ColorData class and place them into the image buffer (this can be done one row at a time).
Fill an array of bytes with the RGB data (3 bytes for each pixel) and then set row_pointer[0] to point to the array before calling jpeg_write_scanlines.
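To illustrate that packing step, here is a minimal sketch (the ColorData struct is a hypothetical stand-in for the asker's class) that flattens one row into the RGBRGB... byte layout jpeg_write_scanlines expects:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the asker's colorData class.
struct ColorData { uint8_t r, g, b; };

// Flatten one row of the canvas into a packed RGBRGB... byte buffer,
// 3 bytes per pixel, ready to be handed to jpeg_write_scanlines.
std::vector<uint8_t> packRow(const std::vector<ColorData>& row) {
    std::vector<uint8_t> bytes;
    bytes.reserve(row.size() * 3);
    for (const ColorData& px : row) {
        bytes.push_back(px.r);
        bytes.push_back(px.g);
        bytes.push_back(px.b);
    }
    return bytes;
}
```

Call this once per row inside the write loop, pointing row_pointer[0] at the returned buffer's data.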

Converting an OpenCV image to a GDI bitmap doesn't work, depending on image size

I have this code that converts an opencv image to a bitmap:
void processimage(Mat imageData)
{
Gdiplus::Bitmap bitmap(imageData.cols,imageData.rows,stride, PixelFormat24bppRGB,imageData.data);
// do some work with bitmap
}
It works well when the size of the image is 2748 x 3664, but when I try to process an image with size 1374 x 1832, it doesn't work.
The error is invalid parameter(2).
I checked and can confirm that:
in 2748 x 3664:
cols=2748
rows=3664
stride=8244
image is continuous.
in 1374 x 1832:
cols=1374
rows=1832
stride=4122
image is continuous.
So everything seems correct to me, but it generates an error.
What is the problem and how can I fix it?
Edit
Based on answer which explained why I can not create bitmap. I finally implemented it in this way:
Mat newImage;
cvtColor(imageData, newImage, CV_BGR2BGRA);
Gdiplus::Bitmap bitmap(newImage.cols,newImage.rows,newImage.step1(), PixelFormat32bppRGB,newImage.data);
So effectively, I convert the input image to 4 bytes per pixel and then convert it to a bitmap.
All credits to Roger Rowland for his answer.
I think the problem is that a BMP format must have a stride that is a multiple of 4.
Your larger image has a stride of 8244, which is valid (8244/4 = 2061) but your smaller image has a stride of 4122, which is not (4122/4 = 1030.5).
As it says on MSDN for the stride parameter (with my emphasis):
Integer that specifies the byte offset between the beginning of one
scan line and the next. This is usually (but not necessarily) the
number of bytes in the pixel format (for example, 2 for 16 bits per
pixel) multiplied by the width of the bitmap. The value passed to this
parameter must be a multiple of four.
Assuming your stride is correct, I think your only option is to copy it row by row. So, something like:
Create a Gdiplus::Bitmap of the required size and format.
Use LockBits to get the bitmap pixel data.
Copy the OpenCV image one row at a time.
Call UnlockBits to release the bitmap data.
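The row-by-row copy in the steps above boils down to padding each destination row out to a 4-byte boundary. A minimal sketch of that copy logic (helper names are mine), independent of GDI+:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Round a row's byte count up to the next multiple of 4,
// as GDI+ requires for the bitmap stride.
size_t paddedStride(size_t rowBytes) {
    return (rowBytes + 3) & ~static_cast<size_t>(3);
}

// Copy image rows of srcStride bytes each into a destination whose
// stride is padded to a multiple of 4; the padding bytes stay zeroed.
std::vector<uint8_t> copyWithPadding(const uint8_t* src, size_t srcStride, int rows) {
    size_t dstStride = paddedStride(srcStride);
    std::vector<uint8_t> dst(dstStride * rows, 0);
    for (int y = 0; y < rows; ++y)
        std::memcpy(&dst[y * dstStride], src + y * srcStride, srcStride);
    return dst;
}
```

With the asker's numbers, 8244 is already a multiple of 4 and stays 8244, while 4122 gets padded to 4124, which is exactly why the smaller image needs the copy.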
You can use my class CGdiPlus that implements all you need to convert from cv::Mat to Gdiplus::Bitmap and vice versa:
OpenCV / Tesseract: How to replace libpng, libtiff etc with GDI+ Bitmap (Load into cv::Mat via GDI+)

Save float * images in C++

I wanted to understand how I can save an image of type float:
float * image;
Allocated in this way:
int size = width * height;
image = (float *)malloc(size * sizeof(float));
I tried using the CImg library, but it does not accept float directly. In fact, I only use it to load the image into floats, because I need only float images.
CImg<float> image("image.jpg");
int width = image.width();
int height = image.height();
int size = width*height;
float * image = image.data();
How do I save this float image in a readable format like .jpg or .bmp? I tried opening a write buffer, but it doesn't save anything and I cannot read it back from a file!
Well, what you need first of all is to realize what you are trying to do.
you are creating a pointer to float array
image=(float *)malloc(size* sizeof(float));
and then you're doing
float * image =image.data();
which is a double use of the name image that will cause a compiler error, and would be a bad idea even if it compiled.
Now you should read the CImg documentation here and see that data() returns a pointer to the first pixel of the image.
now that we established all of that let's go to the solution:
if you want to save the float array to a file use this example
#include <fstream>

void saveArray(float* array, int length);

int main()
{
    float image[] = { 15.25, 15.2516, 84.168, 84356 };
    saveArray(image, sizeof(image) / sizeof(image[0]));
    return 0;
}

void saveArray(float* array, int length)
{
    std::ofstream output("output.txt");
    for (int i = 0; i < length; i++)
    {
        output << array[i] << std::endl;
    }
}
Since the JPEG image format only supports 8-bit color components (actually the standard allows for 12 bit, but I have never seen an implementation of that), you cannot do this with JPEG.
You may be able to do this with a .bmp file. See my answer to a question with a possible way to do this with the OpenCV library. With some other library it may be easy with .bmp files because OpenCV assumes 8-bit color channels even though, as far as I know, the .bmp format doesn't dictate that.
Do you need compression? If not just write a binary file, or store the file in yml format, etc.
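A minimal sketch of the no-compression route (file path and helper names are mine): dump the floats as raw binary and read them back.

```cpp
#include <cstdio>
#include <vector>

// Write a float image as a raw binary dump; returns false on failure.
bool saveRaw(const char* path, const float* data, size_t count) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    size_t written = std::fwrite(data, sizeof(float), count, f);
    std::fclose(f);
    return written == count;
}

// Read `count` floats back from a raw binary dump.
std::vector<float> loadRaw(const char* path, size_t count) {
    std::vector<float> out(count);
    FILE* f = std::fopen(path, "rb");
    if (f) { std::fread(out.data(), sizeof(float), count, f); std::fclose(f); }
    return out;
}
```

Note this preserves the floats exactly, but you also need to record width and height somewhere (a small header or a side file), since a raw dump carries no metadata.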
If you need compression OpenEXR would be option to consider. Probably Image Magick would be the best implementation for you as it integrates well with CImg. Since CImg doesn't natively support .jpg, I suspect that you may already have Image Magick.
Well I can see from your code that you are using only 32bit float grayscale (no R,G,B just I)
so this are my suggestions:
Radiance RGBE (.hdr)
It uses an 8-bit mantissa for each of R,G,B and one shared 8-bit exponent, which gives you only 16-bit precision.
But if you also use R,G,B then for simulation purposes this format is not suitable for you (you lose too much precision because the exponent is shared across all channels).
No HDR format is native, so you need to install viewers, and you must code read/write functions for your source code or use libraries.
non HDR formats (bmp,jpg,tga,png,pcx...)
If you use grayscale only, then this is the best solution for you. These formats are usually 8 bits per channel, so you can use 24-32 bits together for your single intensity. Also, you can view/edit these images natively on most OSes. There are 2 ways to do this.
For 32-bit images you can simply copy the float bits into the color: color = ((DWORD*)(&col))[0]; where col is your float pixel. This is the simplest approach, with no precision loss, but if you view the image it will not be pretty :) because floats are stored in a different way than integer types.
Use a color palette. Create a color-scale palette from the minimum to the maximum possible value of your pixel colors (the more colors it has, the more precision is preserved), then bound the whole image to these values. After this, convert each float value to an index in your palette and store it (for saving), and reverse the mapping, recovering the float from the palette index of the color (for loading). This way the picture will be viewable, similar to thermal images. The conversion from float value to index/RGB color can be done linearly (losing lots of precision) or nonlinearly (by exp/log functions or any nonlinear mapping you want). In the best case, if you use 24-bit pixels, have a scale palette covering all 2^24 colors, and use a nonlinear conversion, you lose only 25% of precision (if you really use the whole dynamic range of float; if not, the loss is smaller, even down to zero).
tips for scale:
Look at the light-spectrum colors; it is a good color scale to start with (there are many simple source codes that create it with a few for loops, just google), and you can also use any color-gradient pattern.
The nonlinear function should change less over the float range where you need to keep precision (the range where most of your pixels lie) and change much more where precision is not important (+/- NaN). I usually use exp, ln or tan, but you must scale them to the range of your color-scale palette.
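The linear variant of the float-to-index mapping described above can be sketched like this (helper names are mine):

```cpp
#include <algorithm>
#include <cstddef>

// Linearly map a float in [minVal, maxVal] to a palette index in
// [0, paletteSize-1]; out-of-range values are clamped.
size_t floatToIndex(float v, float minVal, float maxVal, size_t paletteSize) {
    float t = (v - minVal) / (maxVal - minVal);
    t = std::min(1.0f, std::max(0.0f, t));
    return static_cast<size_t>(t * (paletteSize - 1) + 0.5f);
}

// Reverse mapping: approximately recover the float from its index.
float indexToFloat(size_t idx, float minVal, float maxVal, size_t paletteSize) {
    return minVal + (maxVal - minVal) * idx / float(paletteSize - 1);
}
```

A nonlinear version would apply, say, log/exp to t before and after the index conversion, spending more palette entries where the pixel values cluster.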
The BMP file format is pretty simple:
https://en.m.wikipedia.org/wiki/BMP_file_format
Read the header to determine height, width, bpp, and the data start index, then fill in your float array by casting the pixel channel values to float (starting from the index specified in the header), going across the width. When you reach the specified width, go to the next row in the array.
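As a rough sketch of reading those header fields (offsets follow the layout on the Wikipedia page; helper names are mine):

```cpp
#include <cstdint>

// Little-endian readers for the fixed-offset BMP header fields:
// pixel-data start at byte 10, width at 18, height at 22, bpp at 28.
uint32_t le32(const uint8_t* p) {
    return p[0] | (p[1] << 8) | (p[2] << 16) | (uint32_t(p[3]) << 24);
}
uint16_t le16(const uint8_t* p) { return static_cast<uint16_t>(p[0] | (p[1] << 8)); }

struct BmpInfo { uint32_t dataStart; int32_t width, height; uint16_t bpp; };

BmpInfo readHeader(const uint8_t* file) {
    BmpInfo info;
    info.dataStart = le32(file + 10);
    info.width  = static_cast<int32_t>(le32(file + 18));
    info.height = static_cast<int32_t>(le32(file + 22));
    info.bpp    = le16(file + 28);
    return info;
}
```

Remember that BMP rows are padded to 4-byte boundaries and are usually stored bottom-up (positive height), so iterate accordingly when filling the float array.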
JPG decoding is more complex. I would advise against trying to do it yourself.
If you want to save float values, you need to use a format that supports them - which is not JPEG and not BMP. The most likely options are:
TIFF - which requires a library to write
FITS - which is mainly used for Astronomy data, and is not too hard to write
PFM (Portable Float Map), which is a least-common-denominator format in the same vein as the NetPBM formats, and which is described here.
The good news is that CImg supports PFM out-of-the-box with no additional libraries required. So the answer to your question is very simple:
#include "CImg.h"
using namespace cimg_library;
int main() {
CImg<float> image("image.png");
image.normalize(0.0,1.0);
image.save_pfm("result.pfm");
}
If you want to view your image later, ImageMagick understands all the above formats and can convert any of them to anything else:
convert result.pfm image.jpg # convert PFM to JPG
convert result.pfm image.png # convert PFM to PNG
convert result.tif image.bmp # convert TIF to BMP
Keywords: CImg, C++, C, float, floating point, image, image processing, save as float, real, save as real, 32-bit, PFM, Portable Float Map

OpenCV imwrite a float image, which conversion to use?

I need to store a float image in OpenCV. Converting it to a CV8U image as suggested by @tomriddle_1234 still stores a black png.
reference.type() = 5
reference.channels() = 1
reference.depth() = 5
How can I convert the image to a 8bit or 16bit so that imwrite can store the image, while maintaining it's float property i.e: the stored image is not "washed out colours" due to conversion/loss of precision!
imshow("5t aligned Mean", reference); //Displays the correct image
//reference.convertTo(reference, CV_8U); //Convert image to 8Bit INCORRECT
reference.convertTo(reference, CV_8U, 255.0, 1/255.0); //Correct image
imwrite(subject.c_str(), reference); //Stores a completely black png
Any suggestions are much appreciated!
You can convert to 16-bit by multiplying each float pixel by 2^16-1. Floating-point images are stored with values in [0,1], which you want to map to the range [0, 2^16-1].
OpenCV will save 16-bit uncompressed in PNG and TIFF with the normal imwrite().
(It will also save them as JPEG although I've had less luck finding things that read 16bit jpeg)
Normalize the image to the range 0 to 255 before converting, using NORM_MINMAX.
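The [0,1]-to-16-bit scaling suggested above can be sketched without OpenCV (helper name is mine):

```cpp
#include <algorithm>
#include <cstdint>

// Scale a float pixel in [0,1] to the full 16-bit range before writing;
// values outside [0,1] are clamped to avoid wraparound.
uint16_t floatTo16(float v) {
    v = std::min(1.0f, std::max(0.0f, v));
    return static_cast<uint16_t>(v * 65535.0f + 0.5f);
}
```

In OpenCV terms this corresponds to convertTo with target type CV_16U and a scale factor of 65535, after which imwrite to PNG or TIFF keeps the full range.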

Direct Show YUY2 Pixel Output from videoInput

I'm using videoInput to interface with DirectShow and get pixel data from my webcam.
From another question I've asked, people have suggested that the pixel format is just appended arrays in the order of the Y, U, and V channels.
FourCC's website suggests that the pixel format does not actually follow this pattern, and is instead |Y0|U0|Y1|V0|Y2|U1|Y3|V1|
I'm working on a few functions that convert the YUY2 input image into RGB and YV12, and after having little to no success, thought that it might be an issue with how I'm interpreting the initial YUY2 image data.
Am I correct in assuming that the pixel data is in the format from the FourCC website, or are the Y, U and V channels separate arrays that have been concatenated (so the data is in the order of channels, for example: YYYYUUVV)?
In YUY2 each row is a sequence of 4-byte packets: YUYV describing two adjacent pixels.
In YV12 there are 3 separate planes: first Y of size width*height then V and then U, both of size width/2 * height/2.
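To make the packed layout concrete, here is a minimal sketch (function name is mine) that unpacks one YUY2 row into per-pixel Y, U, V triples, assuming an even width as YUY2 requires:

```cpp
#include <cstdint>
#include <vector>

struct Yuv { uint8_t y, u, v; };

// Unpack one row of YUY2 data. Every 4-byte packet Y0 U Y1 V describes
// two adjacent pixels that share the same U and V samples.
std::vector<Yuv> unpackYuy2Row(const uint8_t* row, int width) {
    std::vector<Yuv> px(width);
    for (int x = 0; x < width; x += 2) {
        const uint8_t* p = row + 2 * x;      // 2 bytes per pixel on average
        px[x]     = Yuv{ p[0], p[1], p[3] }; // Y0 with shared U, V
        px[x + 1] = Yuv{ p[2], p[1], p[3] }; // Y1 with the same U, V
    }
    return px;
}
```

Once each pixel has its own Y, U, V, converting to RGB or regrouping into the three YV12 planes (with 2x2 chroma subsampling) is straightforward.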