I have written some code that looks more or less like this:
QVector<QRgb> colorTable(256);
QImage *qi = new QImage(lutData, imwidth, imheight, QImage::Format_Indexed8);
int index = 0;
while (index < 256)
{
    colorTable.replace(index, qRgb(255, 255, 255));
    index++;
}
qi->setColorTable(colorTable);
QPixmap p(QPixmap::fromImage(*qi, Qt::AutoColor));
So lutData (an unsigned char array) holds my indexes into the colorTable. This crashes on the last line of the snippet, and the actual crashing line is in QX11PixmapData, a library I can't see the source of. What am I doing wrong to cause this crash, or is it a Qt bug?
I am running CentOS 5.5 if that matters.
Thanks!
The QImage constructor you called is:
QImage::QImage ( const uchar * data, int width, int height, Format format )
This constructor requires the scanline data to be 32-bit aligned, so make sure it is and that the buffer contains enough bytes. Alternatively, you can use:
QImage::QImage ( uchar * data, int width, int height, int bytesPerLine, Format format )
which lets you specify the bytes per scanline explicitly, without requiring 32-bit alignment. So you can call it this way:
QImage *qi = new QImage(lutData, imwidth, imheight, imwidth, QImage::Format_Indexed8);
For an indexed-color image with no row padding, the number of bytes per scanline is simply the width.
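Putting the pieces together, here is a minimal sketch of the corrected snippet, assuming lutData holds imwidth * imheight index bytes with no row padding; the helper name indexedToPixmap and the grayscale palette are only illustrative, substitute whatever color table you actually need:
#include <QImage>
#include <QPixmap>
#include <QVector>

// lutData, imwidth and imheight are taken from the question.
QPixmap indexedToPixmap(const uchar *lutData, int imwidth, int imheight)
{
    QVector<QRgb> colorTable(256);
    for (int i = 0; i < 256; ++i)
        colorTable[i] = qRgb(i, i, i);   // example palette: grayscale ramp

    // Pass bytesPerLine explicitly so Qt does not assume 32-bit aligned rows.
    QImage img(lutData, imwidth, imheight, imwidth, QImage::Format_Indexed8);
    img.setColorTable(colorTable);

    // QPixmap::fromImage copies the data, so img may go out of scope afterwards.
    return QPixmap::fromImage(img, Qt::AutoColor);
}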
I am having an issue where the .png image that I want to load as a byte array using DevIL does not seem to have an alpha channel.
A completely black image also shows up with alpha channel values of 0.
This is my image loading function:
DevILCall(ilGenImages(1, &m_ImageID));
DevILCall(ilBindImage(m_ImageID));
ASSERT("Loading image: " + path);
DevILCall(ilLoadImage(path.c_str()));
GraphicComponents::Image image(
    ilGetData(),
    ilGetInteger(IL_IMAGE_HEIGHT),
    ilGetInteger(IL_IMAGE_WIDTH),
    ilGetInteger(IL_IMAGE_BITS_PER_PIXEL)
);
return image;
The Image object I am using is as follows:
struct Image
{
    ILubyte * m_Image;
    const unsigned int m_Height;
    const unsigned int m_Width;
    const unsigned int m_BPP;

    Image(ILubyte imageData[], unsigned int height, unsigned int width, unsigned int bpp);
    ~Image();
};
And this is how I am printing out the image data for now:
for (unsigned int i = 0; i < image->m_Height * image->m_Width * 4; i += 4)
{
    LOG("Red:");
    LOG((int) image->m_Image[i]);
    LOG("Green:");
    LOG((int) image->m_Image[i+1]);
    LOG("Blue:");
    LOG((int) image->m_Image[i+2]);
    LOG("Alpha:");
    LOG((int) image->m_Image[i+3]);
}
I also tried using ilTexImage() to force the loaded image into RGBA format, but that doesn't seem to work either. The printing loop starts reading garbage values once I change the upper bound of the loop variable to 4 times the number of pixels in the image.
The image is also confirmed to have an alpha channel.
What might be going wrong here?
EDIT: ilGetInteger(IL_IMAGE_BPP) returns 3, which should mean RGB for now. When I use ilTexImage() to force 4 channels, ilGetInteger(IL_IMAGE_BPP) returns 4, but I still see garbage values popping up on the standard output.
The problem was fixed by a simple ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE) call after loading the image.
I suppose DevIL loads the image in RGB mode with unsigned byte values by default; to use any other layout, you need to convert the loaded image using ilConvertImage().
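For reference, a minimal sketch of the load path with that conversion added. The helper name loadRgbaImage and the error handling are just illustrative, and ilInit() is assumed to have been called already:
#include <IL/il.h>
#include <string>

// Load an image with DevIL and force it to 4-channel RGBA, unsigned byte,
// before the pixel data is used.
ILubyte* loadRgbaImage(const std::string& path, ILint& width, ILint& height)
{
    ILuint imageID = 0;
    ilGenImages(1, &imageID);
    ilBindImage(imageID);

    if (!ilLoadImage(path.c_str()))
        return nullptr;                       // load failed

    // DevIL keeps whatever layout the file had (often RGB, 3 bytes per pixel);
    // convert so the buffer really is RGBA.
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);

    width  = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);
    return ilGetData();
}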
How can I load RAW 16-bit grayscale image with FreeImage?
I have unsigned char* buffer with raw data. I know its dimensions in pixels and I know it is 16bit grayscale.
I'm trying to load it with
FIBITMAP* bmp = FreeImage_ConvertFromRawBits(buffer, 1000, 1506, 2000, 16, 0, 0, 0);
and get a broken RGB888 image. It is unclear what color masks I should use for grayscale, since it has only one channel.
After many experiments I found a partially working solution with FreeImage_ConvertFromRawBitsEx:
FIBITMAP* bmp = FreeImage_ConvertFromRawBitsEx(true, buffer, FIT_UINT16, 1000, 1506, 2000, 16, 0xFFFF, 0xFFFF, 0xFFFF);
(thanks to #1201ProgramAlarm for the hint about the masks).
This way FreeImage loads the data, but in a semi-custom format: most of the conversion and saving functions fail (tried: JPG, PNG, BMP, TIF).
Since I can't load the data in its native 16-bit format, I chose to convert it to 8-bit grayscale:
unsigned short* buffer = new unsigned short[1000 * 1506];
// load data
unsigned char* buffer2 = new unsigned char[1000 * 1506];
for (int i = 0; i < 1000 * 1506; i++)
    buffer2[i] = (unsigned char)(buffer[i] / 256.f);
FIBITMAP* bmp = FreeImage_ConvertFromRawBits(buffer2, 1000, 1506, 1000, 8, 0xFF, 0xFF, 0xFF, true);
This is really not the best solution; I don't even want to mark it as the accepted answer (I will wait for something better). But after this conversion the format is convenient for FreeImage, and it can save/convert the data to whatever is needed.
Concerning your issue: I read the following in the PDF documentation, FreeImage1370.pdf:
FreeImage_ConvertFromRawBits
Supported bit depths: 1, 4, 8, 16, 24, 32
DLL_API FIBITMAP *DLL_CALLCONV FreeImage_ConvertFromRawBits(BYTE *bits, int width,
    int height, int pitch, unsigned bpp, unsigned red_mask, unsigned green_mask,
    unsigned blue_mask, BOOL topdown FI_DEFAULT(FALSE));
Converts a raw bitmap somewhere in memory to a FIBITMAP. The parameters in this
function are used to describe the raw bitmap. The first parameter is a pointer to the start of
the raw bits. The width and height parameter describe the size of the bitmap. The pitch
defines the total width of a scanline in the source bitmap, including padding bytes that may be
applied. The bpp parameter tells FreeImage what the bit depth of the bitmap is. The
red_mask, green_mask and blue_mask parameters tell FreeImage the bit-layout of the color
components in the bitmap. The last parameter, topdown, will store the bitmap top-left pixel
first when it is TRUE or bottom-left pixel first when it is FALSE.
When the source bitmap uses a 32-bit padding, you can calculate the pitch using the
following formula:
int pitch = ((((bpp * width) + 31) / 32) * 4);
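For example, with bpp = 16 and width = 1000 this gives pitch = ((16 * 1000 + 31) / 32) * 4 = 2000, which matches the pitch of 2000 used in the question's call.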
In the code you are showing:
FIBITMAP* bmp = FreeImage_ConvertFromRawBits(buffer, 1000, 1506, 2000, 16, 0, 0, 0);
You have the appropriate FIBITMAP* return type and you pass in your buffer of raw bits. The 2nd and 3rd parameters are the width and height: width = 1000, height = 1506. The 4th parameter is the pitch: pitch = 2000 (if the bitmap uses 32-bit padding, refer to the formula above). The 5th parameter is the bit depth in bpp, which you give as bpp = 16. The next 3 parameters are your RGB color masks, which you set to 0. The last parameter is a bool flag for the orientation of the image:
if (topdown == TRUE) {
    // the top-left pixel is stored first
} else {
    // the bottom-left pixel is stored first
}
You omit this value, so it defaults to FALSE.
Without more code showing how you read the file and parse any header information to prepare your buffer, it is hard to tell where else there may be an error, but from what you have provided I think you need to check the color channel masks for grayscale images.
EDIT - I found another PDF for FreeImage from stanford.edu here that refers to an older version, 3.13.1. However, the function declaration doesn't look like it has changed, and it provides examples for both FreeImage_ConvertToRawBits and FreeImage_ConvertFromRawBits:
// this code assumes there is a bitmap loaded and
// present in a variable called ‘dib’
// convert a bitmap to a 32-bit raw buffer (top-left pixel first)
// --------------------------------------------------------------
FIBITMAP *src = FreeImage_ConvertTo32Bits(dib);
FreeImage_Unload(dib);
// Allocate a raw buffer
int width = FreeImage_GetWidth(src);
int height = FreeImage_GetHeight(src);
int scan_width = FreeImage_GetPitch(src);
BYTE *bits = (BYTE*)malloc(height * scan_width);
// convert the bitmap to raw bits (top-left pixel first)
FreeImage_ConvertToRawBits(bits, src, scan_width, 32,
FI_RGBA_RED_MASK, FI_RGBA_GREEN_MASK, FI_RGBA_BLUE_MASK,
TRUE);
FreeImage_Unload(src);
// convert a 32-bit raw buffer (top-left pixel first) to a FIBITMAP
// ----------------------------------------------------------------
FIBITMAP *dst = FreeImage_ConvertFromRawBits(bits, width, height, scan_width,
32, FI_RGBA_RED_MASK, FI_RGBA_GREEN_MASK, FI_RGBA_BLUE_MASK, FALSE);
I think this should help you with your question about the bit masks for the color channels in a grayscale image.
You already mentioned the FreeImage_ConvertFromRawBitsEx() function, which was added at some point between FreeImage v3.8 and v3.17, but are you calling it correctly? I was able to use this function with 16-bit grayscale data:
int nBytesPerRow = nWidth * 2;
int nBitsPerPixel = 16;
FIBITMAP* pFIB = FreeImage_ConvertFromRawBitsEx(TRUE, pImageData, FIT_UINT16, nWidth, nHeight, nBytesPerRow, nBitsPerPixel, 0, 0, 0, TRUE);
Note that nBytesPerRow and nBitsPerPixel have to be specified correctly for the 16-bit data. Also, I believe the color mask parameters are irrelevant for this data, since it is monochrome.
EDIT: I noticed that you said that saving the 16-bit data did not work correctly. That may be due to the file formats themselves. The only file format that I have found to be compatible with 16-bit grayscale data is TIFF. So, if you have 16-bit grayscale data, you can save a TIFF with FreeImage_Save() but you cannot save a BMP.
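As a rough sketch of that (the file names are placeholders, and pFIB is assumed to be the FIT_UINT16 bitmap created by FreeImage_ConvertFromRawBitsEx above):
// TIFF accepts the FIT_UINT16 grayscale bitmap directly.
if (!FreeImage_Save(FIF_TIFF, pFIB, "output.tif", TIFF_DEFAULT))
{
    // Fallback: convert to a standard 8-bit bitmap, which the common
    // formats (PNG, BMP, JPEG) handle, and save that instead.
    FIBITMAP* p8 = FreeImage_ConvertToStandardType(pFIB, TRUE);
    FreeImage_Save(FIF_PNG, p8, "output.png", PNG_DEFAULT);
    FreeImage_Unload(p8);
}
FreeImage_Unload(pFIB);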
I'm confused about the way libjpeg jpeg_read_scanlines works. It's my understanding that it decompresses a JPEG, row by row, and creates a decompressed pixel buffer.
Typical usage is something like:
jpeg_decompress_struct cinfo;
...
unsigned char* image = new unsigned char[cinfo.image_width * cinfo.image_height];
unsigned char* ptr = image;
int row_stride = cinfo.image_width;
while (cinfo.output_scanline < cinfo.image_height)
{
    jpeg_read_scanlines(&cinfo, &ptr, 1);
    ptr += row_stride;
}
Question: I'm confused about the output buffer size. In all the example code I have seen that uses jpeg_read_scanlines, the size of the output buffer is width x height, where width and height refer to the dimensions of the JPEG file. So for a 10x10 JPEG file we'd have a 100-byte output buffer.
But... isn't the size of each RGB pixel 3 bytes (24-bit)? So shouldn't the uncompressed data actually be width X height X 3 bytes?
Why isn't it?
I notice that with code which uses jpeg_write_scanlines, the buffer to be compressed IS width X height X 3. So why is the buffer used with jpeg_read_scanlines only width X height?
You are only reading 1 line at a time with the line
jpeg_read_scanlines(&cinfo, &ptr, 1);
so you only needed the line
unsigned char* image = new unsigned char[cinfo.image_width * cinfo.image_height];
to be
unsigned char* image = new unsigned char[cinfo.output_width * cinfo.output_components];
With a one-scanline buffer, the start of the buffer is simply re-used for every call (and the ptr += row_stride line goes away); a full-image buffer is not needed when you process one line at a time.
For RGB data, output_components will be 3 (R,G,B).
Here's some related documentation from libjpeg.txt:
output_width              image width and height, as scaled
output_height
out_color_components      # of color components in out_color_space
output_components         # of color components returned per pixel
colormap                  the selected colormap, if any
actual_number_of_colors   number of entries in colormap
output_components is 1 (a colormap index) when quantizing colors; otherwise it
equals out_color_components. It is the number of JSAMPLE values that will be
emitted per pixel in the output arrays.
Typically you will need to allocate data buffers to hold the incoming image.
You will need output_width * output_components JSAMPLEs per scanline in your
output buffer, and a total of output_height scanlines will be returned.
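If you do want to hold the whole decoded image in memory, here is a minimal sketch of an allocation and read loop sized from the decompressor's own fields, following the documentation above and assuming jpeg_start_decompress() has already been called on cinfo:
// One scanline needs output_width * output_components samples, and there
// are output_height scanlines in total.
const int row_stride = cinfo.output_width * cinfo.output_components;
unsigned char* image = new unsigned char[row_stride * cinfo.output_height];

while (cinfo.output_scanline < cinfo.output_height)
{
    // Write each decoded scanline at its own offset in the full buffer.
    unsigned char* row = image + cinfo.output_scanline * row_stride;
    jpeg_read_scanlines(&cinfo, &row, 1);
}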
I am quite surprised that I'm not able to find any method that loads an image from raw data. Is there an elegant way to do it? I just need to create a QImage or similar from raw bitmap binary data (no header).
You can create a QImage object from raw data with the ctor that takes an array of uchars.
You need to specify the format of the data given to the QImage (RGB, RGBA, Indexed, etc.)
QImage ( uchar * data, int width, int height, Format format )
QImage ( const uchar * data, int width, int height, Format format )
QImage ( uchar * data, int width, int height, int bytesPerLine, Format format )
QImage ( const uchar * data, int width, int height, int bytesPerLine, Format format )
http://doc.qt.digia.com/qt/qimage.html
E.g.:
uchar* data = getDataFromSomewhere();
QImage img(data, width, height, QImage::Format_ARGB32);
Hope that helps.
Your question is not entirely clear, but you can use QPixmap and QByteArray; it's very easy.
QPixmap pic;
pic.loadFromData(array); // array contains a byte array of the image.
label->setPixmap(pic);   // do whatever you want with the image; here I set it on a label.
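For context, a small sketch of where such a byte array could come from; note that loadFromData() decodes data in a supported image format (PNG, JPEG, ...), so this only fits the question if the buffer is an encoded image rather than headerless raw pixels. The showImage helper and the file name are just placeholders:
#include <QByteArray>
#include <QFile>
#include <QLabel>
#include <QPixmap>

void showImage(QLabel *label)
{
    QFile file("image.png");                 // placeholder file name
    if (!file.open(QIODevice::ReadOnly))
        return;

    QByteArray array = file.readAll();       // the encoded image bytes

    QPixmap pic;
    if (pic.loadFromData(array))             // decodes PNG/JPEG/... from memory
        label->setPixmap(pic);
}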
I have been able to display an image in a label in Qt using something like the following:
transformPixels(0,0,1,imheight,imwidth,1);//sets unsigned char** imageData
unsigned char* fullCharArray = new unsigned char[imheight * imwidth];
for (int i = 0; i < imheight; i++)
    for (int j = 0; j < imwidth; j++)
        fullCharArray[(i*imwidth)+j] = imageData[i][j];
QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_RGB32);
ui->viewLabel->setPixmap(QPixmap::fromImage(*qi,Qt::AutoColor));
So fullCharArray is an array of unsigned chars that have been mapped from the 2D array imageData, in other words, it is imheight * imwidth bytes.
The problem is, it seems like only a portion of my image is showing in the label. The image is very large. I would like to display the full image, scaled down to fit in the label, with the aspect ratio preserved.
Also, that QImage format was the only one I could find that seemed to give me a close representation of the image I want to display; is that what I should expect? I am only using one byte per pixel (unsigned char, values from 0 to 255), and it seems like RGB32 doesn't make much sense for that data type, but none of the other formats displayed anything remotely correct.
Edit:
Following Dan Gallagher's advice, I implemented this code:
QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_RGB32);
int labelWidth = ui->viewLabel->width();
int labelHeight = ui->viewLabel->height();
QImage small = qi->scaled(labelWidth, labelHeight,Qt::KeepAspectRatio);
ui->viewLabel->setPixmap(QPixmap::fromImage(small,Qt::AutoColor));
But this causes my program to "unexpectedly finish" with code 0
Qt doesn't support grayscale image construction directly. You need to use an 8-bit indexed color image:
QImage * qi = new QImage(imageData, imwidth, imheight, QImage::Format_Indexed8);
for (int i = 0; i < 256; ++i) {
    qi->setColor(i, qRgb(i, i, i));
}
QImage has a scaled member. So you want to change your setPixmap call to something like:
QImage small = qi->scaled(labelWidth, labelHeight, Qt::KeepAspectRatio);
ui->viewLabel->setPixmap(QPixmap::fromImage(small, Qt::AutoColor));
Note that scaled does not modify the original image qi; it returns a new QImage that is a scaled copy of the original.
Re-Edit:
To convert from 1-byte grayscale to 4-byte RGB grayscale:
QImage *qi = new QImage(imwidth, imheight, QImage::Format_RGB32);
for (int i = 0; i < imheight; i++)
{
    for (int j = 0; j < imwidth; j++)
    {
        // setPixel takes (x, y), i.e. (column, row); qRgb builds the RGB value
        qi->setPixel(j, i, qRgb(imageData[i][j], imageData[i][j], imageData[i][j]));
    }
}
Then scale qi and use the scaled copy as the pixmap for viewLabel.
I've also faced a similar problem: QImage::scaled returned black images. The quick workaround that worked in my case was to convert the QImage to a QPixmap, scale it, and convert back, like this:
QImage resultImg = QPixmap::fromImage(image)
.scaled( 400, 400, Qt::KeepAspectRatio )
.toImage();
where "image" is the original image.
I was not aware of the format problem before reading this thread, but indeed, my images are 1-bit black-and-white.
Regards,
Valentin Heinitz