How can I tell if a Magick::Image has an alpha channel? - c++

If I load an image into ImageMagick with the read function like so:
Magick::Image image;
image.read(filename);
how can I tell if the loaded image has an alpha channel? I want to direct my program to a different algorithm when I manipulate the pixels of a PNG with transparency vs when I load an opaque JPG.
Is there a simple yes/no test I can do?
The reason I am asking is because a code snippet like the following seems to assign random opacities if the loaded image does not have them, rather than assuming the pixel is completely opaque:
// transform the pixels to something GL can use
Magick::Pixels view(image);
GLubyte *pixels = (GLubyte*)malloc( sizeof(GLubyte)*width*height*4 );
for ( ssize_t row = 0; row < height; row++ ) {
    const Magick::PixelPacket *im_pixels = view.getConst(0, row, width, 1);
    for ( ssize_t col = 0; col < width; col++ ) {
        *(pixels+(row*width+col)*4+0) = (GLubyte)im_pixels[col].red;
        *(pixels+(row*width+col)*4+1) = (GLubyte)im_pixels[col].green;
        *(pixels+(row*width+col)*4+2) = (GLubyte)im_pixels[col].blue;
        *(pixels+(row*width+col)*4+3) = 255-(GLubyte)im_pixels[col].opacity;
    }
}
*pTex = pContext->LoadTexture( pixels, width, height );
free(pixels);

You can use the matte() property to determine if your image supports transparency.
Magick::Image image;
image.read(filename);
if (image.matte())
    executeMethod(image);
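Combined with the pixel loop from the question, a minimal sketch (reusing the question's variables, so not a complete program) could force the alpha byte to fully opaque whenever no matte channel is present:
const bool hasAlpha = image.matte();
for ( ssize_t row = 0; row < height; row++ ) {
    const Magick::PixelPacket *im_pixels = view.getConst(0, row, width, 1);
    for ( ssize_t col = 0; col < width; col++ ) {
        *(pixels+(row*width+col)*4+0) = (GLubyte)im_pixels[col].red;
        *(pixels+(row*width+col)*4+1) = (GLubyte)im_pixels[col].green;
        *(pixels+(row*width+col)*4+2) = (GLubyte)im_pixels[col].blue;
        // only trust the opacity field when a matte channel exists
        *(pixels+(row*width+col)*4+3) = hasAlpha ? 255-(GLubyte)im_pixels[col].opacity : 255;
    }
}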

Related

Create a QImage from a BITMAPINFO and an uchar* data

I am trying to load an image that is given to me as a BITMAPINFO* and a uchar array.
The documentation states that it is a standard Microsoft device independent bitmap (DIB) with 8-bit pixels and a 256-entry color table.
I am currently able to open this image through:
BITMAPINFOHEADER* bmih = (BITMAPINFOHEADER*)givenBITMAPINFO;
uchar* data = givenData;
QImage img = QImage(data, bmih->biWidth, bmih->biHeight, QImage::Format_Grayscale8);
But I have two problems with that:
the image is in QImage::Format_Grayscale8, although the documentation states 8-bit pixels and a 256-entry color table;
the image is upside down and mirrored. This comes from the way bitmap data is stored in Win32.
Does anyone know how I can load this image properly?
By casting the provided header to a BITMAPINFO instead of a BITMAPINFOHEADER, I get access to the color table and can then apply a transformation to get an upright image:
BITMAPINFO* bmi = givenHeader;
uchar* data = givenData;
QImage img = QImage(data, bmi->bmiHeader.biWidth, bmi->bmiHeader.biHeight, QImage::Format_Indexed8);
img.setColorCount(256);
const RGBQUAD* rgbBmi = bmi->bmiColors; // 256-entry color table following the header
for (int i = 0; i < 256; ++i) {
    img.setColor(i, qRgb(rgbBmi[i].rgbRed, rgbBmi[i].rgbGreen, rgbBmi[i].rgbBlue));
}
img = img.mirrored(false, true); // DIB rows are stored bottom-up, so flip vertically
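One caveat worth adding (not part of the original answer): a QImage constructed from an external uchar* wraps the caller's buffer without copying it, so givenData must stay alive as long as the image is used. The mirrored() call above happens to return a detached copy, but if you drop that step, a deep copy makes the image own its pixels:
// detach from givenData by making a deep copy
QImage owned = img.copy();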

DevIL library: save gray scale image in three matrices instead one

I need to write a program that converts an RGB image to a GRAYSCALE image and saves it in PGM format. I use the DevIL library, but when I save the image I always obtain a 3-channel image: it looks grayscale, but if I load it in MATLAB I get 3 matrices instead of just one. How can I obtain just one matrix in my output file using DevIL?
#include <IL/il.h>
#include <IL/ilu.h>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main()
{
    ilInit();
    iluInit(); // required before using ILU calls such as iluFlipImage()
    ilEnable(IL_ORIGIN_SET);
    ilOriginFunc(IL_ORIGIN_UPPER_LEFT);
    ilEnable(IL_FILE_OVERWRITE);
    ILuint ImageName; // the image name to return
    ilGenImages(1, &ImageName);
    ilBindImage(ImageName);
    if (!ilLoadImage("/home/andrea/Scrivania/tests/siftDemoV4/et000.jpg"))
    {
        printf("err");
        exit(1);
    }
    else
        printf("loaded\n");
    ILuint width  = ilGetInteger(IL_IMAGE_WIDTH);
    ILuint height = ilGetInteger(IL_IMAGE_HEIGHT);
    // Rec. 601 luma weights for the RGB -> grayscale conversion
    const double v[3] = { 0.2989, 0.5870, 0.1140 };
    ILubyte *imgValue = ilGetData();
    std::vector<ILubyte> imgNuova(width * height); // VLAs are not standard C++
    for (ILuint i = 0; i < width * height; i++)
    {
        imgNuova[i] = (ILubyte)round( (double)imgValue[3*i]   * v[0]
                                    + (double)imgValue[3*i+1] * v[1]
                                    + (double)imgValue[3*i+2] * v[2] );
    }
    ILuint ImageName2;
    ilGenImages(1, &ImageName2); // generate one name, not two
    ilBindImage(ImageName2);
    ilTexImage(width, height, 1, 1, IL_LUMINANCE, IL_UNSIGNED_BYTE, imgNuova.data());
    iluFlipImage();
    ilSave(IL_PNM, "/home/andrea/Scrivania/tests/siftDemoV4/et000new.pgm");
    return 0;
}
Unfortunately, due to a bug in the PNM export, DevIL can and will only write PPM (Portable Pixmaps, 3 channel RGB) files regardless of the file extension. The only solution to this is to use a different file format, that supports single channel grayscale images, like PNG.
MATLAB should be able to read that just as well. If you absolutely need or want files in the PGM format, you will have to use a converter like png2pnm.
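If you go that route, only the save call needs to change; a minimal sketch (the path is just the original one with the extension swapped):
// save the single-channel luminance image as PNG instead of PNM
ilSave(IL_PNG, "/home/andrea/Scrivania/tests/siftDemoV4/et000new.png");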

How to create video using avcodec from jpeg images of type OpenCV::Mat?

I have colored JPEG images of OpenCV::Mat type and I create a video from them using avcodec. The video that I get is upside-down and black & white, and each row of each frame is shifted, giving a diagonal line. What could be the reason for such output?
Follow this link to watch the video I get using avcodec.
I'm using the avpicture_fill function to create an avFrame from the cv::Mat frame!
P.S.
Each cv::Mat cvFrame has width=810, height=610, step=2432
I noticed that avFrame (that is filled by avpicture_fill) has linesize[0]=2430
I tried manually setting avFrame->linesize[0]=2432 instead of 2430, but it still didn't help.
======== CODE =========================================================
AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
AVStream *outStream = avformat_new_stream(outContainer, encoder);
avcodec_get_context_defaults3(outStream->codec, encoder);
outStream->codec->pix_fmt = AV_PIX_FMT_YUV420P;
outStream->codec->width = 810;
outStream->codec->height = 610;
//...
SwsContext *swsCtx = sws_getContext(outStream->codec->width, outStream->codec->height, PIX_FMT_RGB24,
    outStream->codec->width, outStream->codec->height, outStream->codec->pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);
for (uint i = 0; i < frameNums; i++)
{
    // get frame at location i using OpenCV
    cv::Mat cvFrame;
    myReader.getFrame(cvFrame, i);
    cv::Size frameSize = cvFrame.size();
    // Each cv::Mat cvFrame has width=810, height=610, step=2432
1.  // create AVPicture from cv::Mat frame
2.  avpicture_fill((AVPicture*)avFrame, cvFrame.data, PIX_FMT_RGB24, outStream->codec->width, outStream->codec->height);
3.  avFrame->width = frameSize.width;
4.  avFrame->height = frameSize.height;
    // rescale to outStream format
    sws_scale(swsCtx, avFrame->data, avFrame->linesize, 0, outStream->codec->height, avFrameRescaledFrame->data, avFrameRescaledFrame->linesize);
    avFrameRescaledFrame->pts = i;
    avFrameRescaledFrame->width = frameSize.width;
    avFrameRescaledFrame->height = frameSize.height;
    av_init_packet(&avEncodedPacket);
    avEncodedPacket.data = NULL;
    avEncodedPacket.size = 0;
    // encode rescaled frame
    if (avcodec_encode_video2(outStream->codec, &avEncodedPacket, avFrameRescaledFrame, &got_frame) < 0) exit(1);
    if (got_frame)
    {
        if (avEncodedPacket.pts != AV_NOPTS_VALUE)
            avEncodedPacket.pts = av_rescale_q(avEncodedPacket.pts, outStream->codec->time_base, outStream->time_base);
        if (avEncodedPacket.dts != AV_NOPTS_VALUE)
            avEncodedPacket.dts = av_rescale_q(avEncodedPacket.dts, outStream->codec->time_base, outStream->time_base);
        // outContainer is "mp4"
        av_write_frame(outContainer, &avEncodedPacket);
        av_free_packet(&avEncodedPacket);
    }
}
UPDATED
As @Alex suggested, I replaced lines 1-4 with the code below
int width = frameSize.width, height = frameSize.height;
avpicture_alloc((AVPicture*)avFrame, AV_PIX_FMT_RGB24, outStream->codec->width, outStream->codec->height);
for (int h = 0; h < height; h++)
{
    memcpy(&(avFrame->data[0][h*avFrame->linesize[0]]), &(cvFrame.data[h*cvFrame.step]), width*3);
}
The video (here) I get now is almost perfect. It's NOT upside-down and NOT black & white, BUT it seems that one of the RGB components is missing. Every brown/red color became blue (in the original images it should be the other way around).
What could be the problem? Could rescaling (sws_scale) to the AV_PIX_FMT_YUV420P format cause this?
The problem in a nutshell: avpicture_fill() expects no padding between rows, i.e. the stride (step) to be equal to width*sizeof(pixel), i.e. 810*3 = 2430. The actual stride of the data in cv::Mat, step as you say, is 2432, which is different, so just passing the data directly won't work. There is no way to tell avpicture_fill() to use a different stride for the input data; it is not part of the API (you might say it should be :)
There are two possible solutions:
Create an array in which the input data is contiguous, with no padding between rows. You'd have to memcpy each row from the cv::Mat into that array. Then pass it to avpicture_fill().
int width, height; // get from mat
uint8_t* buf = (uint8_t*)malloc(width * height * 3); // 3 bytes per pixel
for (int i = 0; i < height; i++)
{
    memcpy( &( buf[ i*width*3 ] ), &( mat->data[ i*mat->step ] ), width*3 );
}
avpicture_fill(..., buf, ...)
Btw, to flip the video vertically, you can do this to copy the last row to the first and so forth:
...
memcpy( &( buf[ i*width*3 ] ), &( mat->data[ (height - i - 1)*mat->step ] ), width*3 );
...
Or, fill in the AVPicture yourself:
AVPicture* pic = (AVPicture*)malloc(sizeof(AVPicture));
avpicture_alloc(pic, PIX_FMT_BGR24, width, height);
for (int i = 0; i < height; i++)
{
    memcpy( &( pic->data[0][ i*pic->linesize[0] ] ), &( mat->data[ i*mat->step ] ), width*3);
}
There is no need to allocate pic->data[0] or set pic->linesize[0], avpicture_alloc() should do that. There is also no need to fill in data[1] or data[2], those should be null.
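When the picture is no longer needed, the buffers should also be released; a small sketch not in the original answer:
avpicture_free(pic); // frees the buffers avpicture_alloc() created
free(pic);           // frees the struct itself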
EDIT: Removed old code which showed copying R, G, B to separate planes. PIX_FMT_BGR24 is not a planar format.
I'm not familiar enough with the OpenCV C++ API to figure out how to get the width and height (it's not mat->width, obviously), but I think you know what I mean.
P.S. Btw, your video is not actually black and white. It's just that each successive row is offset by two bytes, so the colors are rotated: red becomes green, green becomes blue, and so forth. The result is grayscale-ish, but if you look closely the individual rows are colored.
Have you considered using OpenCV's features to create the video for you? It's much easier since your data is already stored in a cv::Mat.
If you would like to keep your approach, you could simply rotate the cv::Mat, as in the sketch below.
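A rough sketch of that idea (the codec, frame rate, and output file name are assumptions, and cv::flip stands in for the rotation):
// write BGR cv::Mat frames with OpenCV's own encoder (old C++ API)
cv::VideoWriter writer("out.avi", CV_FOURCC('M','J','P','G'), 25.0, cvFrame.size());
cv::Mat flipped;
cv::flip(cvFrame, flipped, 0); // flipCode 0 flips around the x-axis
writer.write(flipped);         // VideoWriter expects BGR, which cv::Mat already is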
About the color problem in the UPDATE of the original post: is it caused by the OpenCV Mat being BGR while the FFmpeg AVFrame is RGB?
If so, try
cvtColor(cvFrame, cvFrame, CV_BGR2RGB);
before line 1.

FreeImage: Get pixel color

I'm writing a little application that reads the color of each pixel in an image and writes it to a file. First I did it in Python, but it's too slow on big images. Then I discovered the FreeImage library, which I could use, but I can't understand how to use the GetPixelColor method.
Could you please provide an example of how to get the color of, for example, pixel[50:50]?
Here is information about GetPixelColor: http://freeimage.sourceforge.net/fnet/html/13E6BB72.htm.
Thank you very much!
With FreeImagePlus using a 24 or 32 bit image, getting the pixel at coords 50, 50 would look like this:
fipImage input;
RGBQUAD pixel;
input.load("myimage.png");
unsigned height = input.getHeight();
input.getPixelColor(50, height-1-50, &pixel);
Be aware that in FreeImage the origin is bottom left, so y values will probably need to be inverted by subtracting y from the image height as above.
To get the pixel color from an input image img, passed in a function call such as void read_image(const char* img), you can use the code snippet below.
Here is the code snippet for the above read_image function:
FREE_IMAGE_FORMAT fif = FreeImage_GetFIFFromFilename(img);
FIBITMAP *bmp = FreeImage_Load(fif, img);
unsigned width = FreeImage_GetWidth(bmp);
unsigned height = FreeImage_GetHeight(bmp);
int bpp = FreeImage_GetBPP(bmp);
RGBQUAD color;
FreeImage_GetPixelColor(bmp, x, y, &color); // read from the loaded bitmap, not a newly allocated blank one
The variable color will contain the color of the image pixel. You can extract the RGB values as follows:
float r,g,b;
r = color.rgbRed;
g = color.rgbGreen;
b = color.rgbBlue;
Hope it helps!
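For completeness, a self-contained version of the above with error checking and cleanup added (a sketch, with x and y as the coordinates to sample):
#include <FreeImage.h>
#include <cstdio>

void read_image(const char* img, unsigned x, unsigned y)
{
    FREE_IMAGE_FORMAT fif = FreeImage_GetFIFFromFilename(img);
    FIBITMAP* bmp = FreeImage_Load(fif, img);
    if (!bmp) { std::printf("failed to load %s\n", img); return; }
    RGBQUAD color;
    if (FreeImage_GetPixelColor(bmp, x, y, &color))
        std::printf("r=%d g=%d b=%d\n", color.rgbRed, color.rgbGreen, color.rgbBlue);
    FreeImage_Unload(bmp); // release the bitmap when done
}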

Displaying and sizing a grayscale from a QImage in Qt

I have been able to display an image in a label in Qt using something like the following:
transformPixels(0,0,1,imheight,imwidth,1); // sets unsigned char** imageData
unsigned char* fullCharArray = new unsigned char[imheight * imwidth];
for (int i = 0; i < imheight; i++)
    for (int j = 0; j < imwidth; j++)
        fullCharArray[(i*imwidth)+j] = imageData[i][j];
QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_RGB32);
ui->viewLabel->setPixmap(QPixmap::fromImage(*qi,Qt::AutoColor));
So fullCharArray is an array of unsigned chars that have been mapped from the 2D array imageData, in other words, it is imheight * imwidth bytes.
The problem is, it seems like only a portion of my image is showing in the label. The image is very large. I would like to display the full image, scaled down to fit in the label, with the aspect ratio preserved.
Also, that QImage format was the only one I could find that seemed to give me a close representation of the image I want to display. Is that what I should expect? I am only using one byte per pixel (unsigned char, values from 0 to 255), and it seems like RGB32 doesn't make much sense for that data type, but none of the other formats displayed anything remotely correct.
edit:
Following Dan Gallagher's advice, I implemented this code:
QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_RGB32);
int labelWidth = ui->viewLabel->width();
int labelHeight = ui->viewLabel->height();
QImage small = qi->scaled(labelWidth, labelHeight,Qt::KeepAspectRatio);
ui->viewLabel->setPixmap(QPixmap::fromImage(small,Qt::AutoColor));
But this causes my program to "unexpectedly finish" with code 0
Qt doesn't support grayscale image construction directly. You need to use an 8-bit indexed color image:
QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_Indexed8);
for (int i = 0; i < 256; ++i) {
    qi->setColor(i, qRgb(i, i, i));
}
QImage has a scaled member. So you want to change your setPixmap call to something like:
QImage small = qi->scaled(labelWidth, labelHeight, Qt::KeepAspectRatio);
ui->viewLabel->setPixmap(QPixmap::fromImage(small, Qt::AutoColor));
Note that scaled does not modify the original image qi; it returns a new QImage that is a scaled copy of the original.
Re-Edit:
To convert from 1-byte grayscale to 4-byte RGB grayscale:
QImage *qi = new QImage(imwidth, imheight, QImage::Format_RGB32);
for (int i = 0; i < imheight; i++)
{
    for (int j = 0; j < imwidth; j++)
    {
        // setPixel takes (x, y); qRgb builds the gray RGB triple
        qi->setPixel(j, i, qRgb(imageData[i][j], imageData[i][j], imageData[i][j]));
    }
}
Then scale qi and use the scaled copy as the pixmap for viewLabel.
I've also faced a similar problem - QImage::scaled returned black images. The quick work-around which worked in my case was to convert the QImage to a QPixmap, scale it, and then convert back. Like this:
QImage resultImg = QPixmap::fromImage(image)
.scaled( 400, 400, Qt::KeepAspectRatio )
.toImage();
where "image" is the original image.
I was not aware of the format problem before reading this thread - but indeed, my images are 1-bit black and white.
Regards,
Valentin Heinitz