DevIL/OpenIL isn't loading alpha channel - c++

I am having an issue where a .png image that I load as a byte array using DevIL does not have an alpha channel.
A completely black image also reports alpha channel values of 0.
This is my image loading function:
DevILCall(ilGenImages(1, &m_ImageID));
DevILCall(ilBindImage(m_ImageID));
ASSERT("Loading image: " + path);
DevILCall(ilLoadImage(path.c_str()));
GraphicComponents::Image image(
ilGetData(),
ilGetInteger(IL_IMAGE_HEIGHT),
ilGetInteger(IL_IMAGE_WIDTH),
ilGetInteger(IL_IMAGE_BITS_PER_PIXEL)
);
return image;
The Image object I am using is as follows:
struct Image
{
ILubyte * m_Image;
const unsigned int m_Height;
const unsigned int m_Width;
const unsigned int m_BPP;
Image(ILubyte imageData[ ], unsigned int height, unsigned int width, unsigned int bpp);
~Image();
};
And this is how I am printing out the image data for now:
for(unsigned int i = 0; i < image->m_Height*image->m_Width*4; i+=4)
{
LOG("Red:");
LOG((int) image->m_Image[i]);
LOG("Green:");
LOG((int) image->m_Image[i+1]);
LOG("Blue:");
LOG((int) image->m_Image[i+2]);
LOG("Alpha:");
LOG((int) image->m_Image[i+3]);
}
I also tried using ilTexImage() to force the loaded image into RGBA format, but that doesn't seem to work either. The printing loop starts reading garbage values when I change the loop's upper bound to 4 times the number of pixels in the image.
The image is also confirmed to have an alpha channel.
What might be going wrong here?
EDIT: ilGetInteger(IL_IMAGE_BPP) returns 3, which should mean RGB. When I use ilTexImage() to force 4 channels, ilGetInteger(IL_IMAGE_BPP) returns 4, but I still see garbage values appearing on standard output.

The problem was fixed by a simple ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE) call after loading the image.
I suppose DevIL loads the image as RGB with unsigned byte values by default; to work with anything else, you need to convert the loaded image with ilConvertImage().
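For reference, a minimal sketch of the loading function with that conversion added (DevILCall, ASSERT and GraphicComponents::Image are the wrappers from the question; error handling is left out):
DevILCall(ilGenImages(1, &m_ImageID));
DevILCall(ilBindImage(m_ImageID));
ASSERT("Loading image: " + path);
DevILCall(ilLoadImage(path.c_str()));
// Convert whatever DevIL loaded into 4 channels of unsigned bytes.
DevILCall(ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE));
GraphicComponents::Image image(
    ilGetData(),
    ilGetInteger(IL_IMAGE_HEIGHT),
    ilGetInteger(IL_IMAGE_WIDTH),
    ilGetInteger(IL_IMAGE_BPP) // now reports 4 (bytes per pixel)
);
return image;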

Related

uint8_t buffer to cv::Mat conversion results in distorted image

I have a MIPI camera that captures frames and stores them in the buffer struct that you can see below. Once the frame is stored I want to convert it into a cv::Mat, but the Mat ends up looking like the first picture.
The var buf.index is just part of the V4L2 API, useful to understand which buffer I'm using.
//The structure where the data is stored
struct buffer{
void *start;
size_t length;
};
struct buffer *buffers;
//buffer->mat
cv::Mat im = cv::Mat(cv::Size(width, height), CV_8UC3, ((uint8_t*)buffers[buf.index].start));
At first I thought that the data might be corrupted but storing the image with lodepng results in a nice image without any distortion.
unsigned char* out_buf = (unsigned char*)malloc( width * height * 3);
for(int pix = 0; pix < width*height; ++pix) {
memcpy(out_buf + pix*3, ((uint8_t*)buffers[buf.index].start)+4*pix+1, 3);
}
lodepng_encode24_file(filename, out_buf, width, height);
I bet it's something really silly.
The picture you posted has oddly colored pixels, and the patterns look like there is more information than simply 24 bits per pixel.
After inspecting the data, it appears that V4L gives you four bytes per pixel, and the first byte is always 0xFF (let's call that X). Further, the channel order seems to be XRGB.
Create a cv::Mat using CV_8UC4 to contain the data.
To use the picture in OpenCV, you need BGR order. cv::split the received data into its four color planes, which are X, R, G, B. Use cv::merge to reassemble the B, G, R planes into a picture that OpenCV can handle, or reassemble into R, G, B to create a Mat for other purposes (that other library you seem to use).
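A minimal sketch of that split/merge approach, reusing width, height and the V4L2 buffer from the question (requires <opencv2/core.hpp>):
// Wrap the raw XRGB data (4 bytes per pixel) without copying.
cv::Mat xrgb(cv::Size(width, height), CV_8UC4, (uint8_t*)buffers[buf.index].start);
// Split into the four planes: X, R, G, B.
std::vector<cv::Mat> planes;
cv::split(xrgb, planes);
// Reassemble B, G, R into a CV_8UC3 image that OpenCV can handle.
std::vector<cv::Mat> bgrPlanes = { planes[3], planes[2], planes[1] };
cv::Mat im;
cv::merge(bgrPlanes, im);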

Loading RAW grayscale image with FreeImage

How can I load RAW 16-bit grayscale image with FreeImage?
I have unsigned char* buffer with raw data. I know its dimensions in pixels and I know it is 16bit grayscale.
I'm trying to load it with
FIBITMAP* bmp = FreeImage_ConvertFromRawBits(buffer, 1000, 1506, 2000, 16, 0, 0, 0);
and get broken RGB888 image. It is unclear what color masks I should use for grayscale as it has only one channel.
After many experiments I found partially working solution with FreeImage_ConvertFromRawBitsEx:
FIBITMAP* bmp = FreeImage_ConvertFromRawBitsEx(true, buffer, FIT_UINT16, 1000, 1506, 2000, 16, 0xFFFF, 0xFFFF, 0xFFFF);
(thanks #1201ProgramAlarm for hint with masks).
This way FreeImage loads the data, but in a somewhat custom format: most of the conversion and saving functions fail (I tried JPG, PNG, BMP, TIF).
Since I can't load the data in its native 16-bit format, I chose to convert it to 8-bit grayscale instead:
unsigned short* buffer = new unsigned short[1000 * 1506];
// load data
unsigned char* buffer2 = new unsigned char[1000 * 1506];
for (int i = 0; i < 1000 * 1506; i++)
buffer2[i] = (unsigned char)(buffer[i] / 256.f);
FIBITMAP* bmp = FreeImage_ConvertFromRawBits(buffer2, 1000, 1506, 1000, 8, 0xFF, 0xFF, 0xFF, true);
This is really not the best solution, and I don't even want to mark it as the accepted answer (I will wait for something better). But after this the format is convenient for FreeImage, and it can save/convert the data to whatever format is needed.
Concerning your issue: I have read this from their PDF documentation FreeImage1370.pdf:
FreeImage_ConvertFromRawBits
Supported bit depths: 1, 4, 8, 16, 24, 32
DLL_API FIBITMAP *DLL_CALLCONV FreeImage_ConvertFromRawBits(BYTE *bits, int width,
    int height, int pitch, unsigned bpp, unsigned red_mask, unsigned green_mask,
    unsigned blue_mask, BOOL topdown FI_DEFAULT(FALSE));
Converts a raw bitmap somewhere in memory to a FIBITMAP. The parameters in this
function are used to describe the raw bitmap. The first parameter is a pointer to the start of
the raw bits. The width and height parameter describe the size of the bitmap. The pitch
defines the total width of a scanline in the source bitmap, including padding bytes that may be
applied. The bpp parameter tells FreeImage what the bit depth of the bitmap is. The
red_mask, green_mask and blue_mask parameters tell FreeImage the bit-layout of the color
components in the bitmap. The last parameter, topdown, will store the bitmap top-left pixel
first when it is TRUE or bottom-left pixel first when it is FALSE.
When the source bitmap uses a 32-bit padding, you can calculate the pitch using the
following formula:
int pitch = ((((bpp * width) + 31) / 32) * 4);
In the code you are showing:
FIBITMAP* bmp = FreeImage_ConvertFromRawBits(buffer, 1000, 1506, 2000, 16, 0, 0, 0);
You have the appropriate FIBITMAP* return type, and you pass in your buffer of raw bits. The 2nd and 3rd parameters are the width and height: width = 1000, height = 1506. The 4th parameter is the pitch: pitch = 2000 (if the bitmap uses 32-bit padding, refer to the formula above). The 5th parameter is the bit depth in bpp, which you give as bpp = 16. The next 3 parameters are your RGB color masks, which you set to 0. The last parameter is a bool flag for the orientation of the image:
if (topdown == TRUE) {
    // the top-left pixel is stored first
} else {
    // the bottom-left pixel is stored first
}
You omit this value, so it falls back to the default, FALSE.
Without more of the code that reads in the file and parses the header information to prepare your buffer, it is hard to tell where else there may be an error or an issue, but from what you provided, I think you need to check the color channel masks for grayscale images.
EDIT - I found another PDF for FreeImage from stanford.edu here that refers to an older version, 3.13.1; however, the function declaration and definition don't look like they have changed, and it provides examples for both FreeImage_ConvertToRawBits and FreeImage_ConvertFromRawBits:
// this code assumes there is a bitmap loaded and
// present in a variable called 'dib'
// convert a bitmap to a 32-bit raw buffer (top-left pixel first)
// --------------------------------------------------------------
FIBITMAP *src = FreeImage_ConvertTo32Bits(dib);
FreeImage_Unload(dib);
// Allocate a raw buffer
int width = FreeImage_GetWidth(src);
int height = FreeImage_GetHeight(src);
int scan_width = FreeImage_GetPitch(src);
BYTE *bits = (BYTE*)malloc(height * scan_width);
// convert the bitmap to raw bits (top-left pixel first)
FreeImage_ConvertToRawBits(bits, src, scan_width, 32,
FI_RGBA_RED_MASK, FI_RGBA_GREEN_MASK, FI_RGBA_BLUE_MASK,
TRUE);
FreeImage_Unload(src);
// convert a 32-bit raw buffer (top-left pixel first) to a FIBITMAP
// ----------------------------------------------------------------
FIBITMAP *dst = FreeImage_ConvertFromRawBits(bits, width, height, scan_width,
32, FI_RGBA_RED_MASK, FI_RGBA_GREEN_MASK, FI_RGBA_BLUE_MASK, FALSE);
I think this should help you with your question about the bit masks for the color channels in a grayscale image.
You already mentioned the FreeImage_ConvertFromRawBitsEx() function, which was added at some point between FreeImage v3.8 and v3.17, but are you calling it correctly? I was able to use this function with 16-bit grayscale data:
int nBytesPerRow = nWidth * 2;
int nBitsPerPixel = 16;
FIBITMAP* pFIB = FreeImage_ConvertFromRawBitsEx(TRUE, pImageData, FIT_UINT16, nWidth, nHeight, nBytesPerRow, nBitsPerPixel, 0, 0, 0, TRUE);
Note that nBytesPerRow and nBitsPerPixel have to be specified correctly for the 16-bit data. Also, I believe the color mask parameters are irrelevant for this data, since it is monochrome.
EDIT: I noticed that you said that saving the 16-bit data did not work correctly. That may be due to the file formats themselves. The only file format that I have found to be compatible with 16-bit grayscale data is TIFF. So, if you have 16-bit grayscale data, you can save a TIFF with FreeImage_Save() but you cannot save a BMP.
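For instance, saving the FIT_UINT16 bitmap from the snippet above to TIFF would look roughly like this (pFIB is the bitmap returned by FreeImage_ConvertFromRawBitsEx; the file name is just an example):
// TIFF preserves the 16-bit grayscale data; saving the same bitmap as BMP would fail.
if (!FreeImage_Save(FIF_TIFF, pFIB, "frame.tif", 0))
{
    // handle the save error here
}
FreeImage_Unload(pFIB);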

Saving an image with imwrite in opencv writes all black but imshow shows correctly

Original Question
This example code will display the image created correctly, but will save a png with only black pixels. The Mat is in CV_32FC3 format, so 3 channels of floats.
The answered questions I've found deal with image manipulation issues, incorrect conversions, or saving as JPEG with various compression settings.
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main()
{
int i = 0;
int j = 0;
Vec3f intensity;
cv::Mat imageF;
imageF= cv::Mat::zeros(36,36,CV_32FC3);
for(j=0;j<imageF.cols;++j){
for(i=0;i<imageF.rows;++i){
intensity = imageF.at<Vec3f>(j, i);
intensity.val[2] = 0.789347;
intensity.val[1] = 0.772673;
intensity.val[0] = 0.692689;
imageF.at<Vec3f>(j, i) = intensity;
}}
imshow("Output", imageF);
imwrite("test.png", imageF);
waitKey(0);
return 0;
}
What changes need to be made to make it save as expected?
Berriel's Solution
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main() {
int i = 0;
int j = 0;
Vec3f intensity;
cv::Mat imageF;
cv::Mat image;
imageF= cv::Mat::zeros(36,36,CV_32FC3);
for(j=0; j<imageF.cols; ++j) {
for(i=0; i<imageF.rows; ++i) {
intensity = imageF.at<Vec3f>(j, i);
intensity.val[2] = 0.789347;
intensity.val[1] = 0.772673;
intensity.val[0] = 0.692689;
imageF.at<Vec3f>(j, i) = intensity;
}
}
imshow("Output", imageF);
Mat3b imageF_8UC3;
imageF.convertTo(imageF_8UC3, CV_8UC3, 255);
imwrite("test.png", imageF_8UC3);
waitKey(0);
return 0;
}
As you can read in the documentation:
The function imwrite saves the image to the specified file. The image
format is chosen based on the filename extension (see imread() for the
list of extensions). Only 8-bit (or 16-bit unsigned (CV_16U) in case
of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’
channel order) images can be saved using this function. If the format,
depth or channel order is different, use Mat::convertTo() , and
cvtColor() to convert it before saving.
You should use convertTo to convert from CV_32FC3 to CV_8UC3 to get the same result:
Mat3b imageF_8UC3;
imageF.convertTo(imageF_8UC3, CV_8UC3, 255);
imwrite("test.png", imageF_8UC3);
By the way, imshow() displays correctly because...
If the image is 8-bit unsigned, it is displayed as is.
If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0,255*256] is mapped to
[0,255].
If the image is 32-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0,1] is mapped to
[0,255].
Basically, the same trick is what you need to do before writing.
I came to this question because I also had a problem with black .png images. Eventually I realised that my 32-bit image with channels (Red, Green, Blue, Alpha) had a zero-valued alpha channel (full transparency). Thus, programs that are aware of transparency just show the black background behind the image. After changing the transparency to 255 (no transparency), my saved PNG image could be visualized just fine:
MyImage[:,:,3] = 255
You can check that behaviour by assigning a value of 127; you'll get a pale/greyed version of your image.
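The snippet above is NumPy; the same fix in C++/OpenCV for a CV_8UC4 (BGRA) Mat, here hypothetically called img, could be sketched as:
// Split the BGRA image, force the alpha plane to fully opaque, and merge back.
std::vector<cv::Mat> channels;
cv::split(img, channels);
channels[3].setTo(255);
cv::merge(channels, img);
cv::imwrite("test.png", img);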

Moving my array to Mat and showing image with open CV

I am having problems using OpenCV to display an image. As my code currently stands, I have a function that loads 78 images of size 710x710 of unsigned shorts into a single array. I have verified this works by writing the data to a file and reading it with ImageJ. I am now trying to extract a single image frame from the array and load it into a Mat in order to perform some processing on it. Right now I have tried two ways to do this. The code compiles and runs if I do not try to read the output, but if I cout<
My question is: how do I extract the data from my large 1-D array of 78 images of size 710*710 into single Mat images? Or is there a more efficient way, where I can load the images into a 3-D Mat of dimensions 710x710x78 and operate on each 710x710 slice as needed?
int main(int argc, char *argv[])
{
Mat OriginalMat, TestImage;
long int VImageSize = 710*710;
int NumberofPlanes = 78;
int FrameNum = 150;
unsigned short int *PlaneStack = new unsigned short int[NumberofPlanes*VImageSize];
unsigned short int *testplane = new unsigned short int[VImageSize];
/////Load PlaneStack/////
Load_Vimage(PlaneStack, Path, NumberofPlanes);
//Here I try to extract a single plane image to the mat testplane, I try it two different ways with the same results
memcpy(testplane, &PlaneStack[710*710*40], VImageSize*sizeof(unsigned short int));
//copy(&PlaneStack[VImageSize*40],&PlaneStack[VImageSize*41], testplane);
// move single plane to a mat file
OriginalMat = Mat(710,710,CV_8U, &testplane) ;
//cout<<OriginalMat;
namedWindow("Original");
imshow("Original", OriginalMat);
}
The problem is you are using the constructor Mat::Mat(int rows, int cols, int type, void* data) with a pointer to 16 bit data (unsigned short int) but you are specifying the type CV_8U (8 bit).
Therefore the first byte of your 16 bit pixel becomes the first pixel in OriginalMat, and the second byte of the first pixel becomes the second pixel in OriginalMat, etc.
You need to create a 16 bit Mat, then convert it to 8 bit if you want to display it, e.g.:
int main(int argc, char *argv[])
{
long int VImageSize = 710*710;
int NumberofPlanes = 78;
int FrameNum = 150;
/////Load PlaneStack/////
unsigned short int *PlaneStack = new unsigned short int[NumberofPlanes*VImageSize];
Load_Vimage(PlaneStack, Path, NumberofPlanes);
// Get a pointer to the plane we want to view
unsigned short int *testplane = &PlaneStack[710*710*40];
// "move" single plane to a mat file
// actually nothing gets moved, OriginalMat will just contain a pointer to your data.
Mat OriginalMat(710, 710, CV_16UC1, testplane);
double scale_factor = 1.0 / 256.0;
Mat DisplayMat;
OriginalMat.convertTo(DisplayMat, CV_8UC1, scale_factor);
namedWindow("Original");
imshow("Original", DisplayMat);
}
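Regarding the follow-up about a single 3-D Mat: you can also wrap the whole stack in an n-dimensional Mat header and view individual planes without copying anything. A sketch using the sizes from the question (PlaneStack and NumberofPlanes as above):
// Wrap the 78 x 710 x 710 stack of unsigned shorts; no pixel data is copied.
int dims[3] = { NumberofPlanes, 710, 710 };
cv::Mat volume(3, dims, CV_16UC1, PlaneStack);
// View plane 40 as an ordinary 2-D Mat header into the same memory.
cv::Mat plane40(710, 710, CV_16UC1, volume.ptr(40));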

Replacement for glaux function

I'm going through NeHe's tutorials and I'm running into a problem when it comes to bump mapping. Up until now I've been using the SOIL library to load image files into OpenGL, which works great. But the bump mapping tutorial uses a pointer to the image data to modify the colors of the image pixel by pixel. To my knowledge I can't do this with the SOIL library. Is there a good way to get this effect now that glaux is deprecated? Apparently we're trying to set the alpha channel to be the value of the red component of the pixel color. On another note, are we loading these into a char array because C++ doesn't care about the difference between bytes and chars (they're the same size, right?), or is there something else I'm missing in all this?
// Load The Logo-Bitmaps
if (Image=auxDIBImageLoad("Data/OpenGL_ALPHA.bmp")) {
alpha=new char[4*Image->sizeX*Image->sizeY];
// Create Memory For RGBA8-Texture
for (int a=0; a<Image->sizeX*Image->sizeY; a++)
alpha[4*a+3]=Image->data[a*3]; // Pick Only Red Value As Alpha!
if (!(Image=auxDIBImageLoad("Data/OpenGL.bmp"))) status=false;
for (a=0; a<Image->sizeX*Image->sizeY; a++) {
alpha[4*a]=Image->data[a*3]; // R
alpha[4*a+1]=Image->data[a*3+1]; // G
alpha[4*a+2]=Image->data[a*3+2]; // B
}
SOIL_load_image() should give you the raw image bits:
/**
Loads an image from disk into an array of unsigned chars.
Note that *channels return the original channel count of the
image. If force_channels was other than SOIL_LOAD_AUTO,
the resulting image has force_channels, but *channels may be
different (if the original image had a different channel
count).
\return 0 if failed, otherwise returns 1
**/
unsigned char*
SOIL_load_image
(
const char *filename,
int *width, int *height, int *channels,
int force_channels
);
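Putting that together, a rough SOIL-based replacement for the glaux snippet might look like this (file names are taken from the NeHe code, both images are assumed to have the same dimensions, and error handling is omitted):
int w = 0, h = 0, channels = 0;
// Force 3 channels so the layout matches the RGB indexing used below.
unsigned char* alphaImg = SOIL_load_image("Data/OpenGL_ALPHA.bmp", &w, &h, &channels, SOIL_LOAD_RGB);
unsigned char* colorImg = SOIL_load_image("Data/OpenGL.bmp", &w, &h, &channels, SOIL_LOAD_RGB);
// Build an RGBA8 buffer: RGB from the color image, alpha from the red channel
// of the alpha image, mirroring what the glaux version did.
unsigned char* rgba = new unsigned char[4 * w * h];
for (int a = 0; a < w * h; ++a) {
    rgba[4 * a]     = colorImg[3 * a];     // R
    rgba[4 * a + 1] = colorImg[3 * a + 1]; // G
    rgba[4 * a + 2] = colorImg[3 * a + 2]; // B
    rgba[4 * a + 3] = alphaImg[3 * a];     // A = red value of the alpha bitmap
}
SOIL_free_image_data(alphaImg);
SOIL_free_image_data(colorImg);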