C/C++ library to blend PNG (including alpha) with raw ARGB buffer

I have a PNG with an encoded alpha channel that I want to blend with a raw ARGB image in memory that is stored interleaved. The PNG is of a different resolution to the image buffer and needs to be resized accordingly (preferably with interpolation).
Whilst I appreciate it's not particularly difficult to do this by hand (once the PNG image is loaded into an appropriate structure), I was hoping to find a good open source image processing library to do the work for me.
I've looked at a few including:
libGD
libPNG
openCV
ImageMagick
CxImage
Intel Integrated Performance Primitives (IPP)
But none of them seems to handle all the requirements: loading PNGs, resizing the PNG image, alpha blending into the image data, and handling the ARGB format (as opposed to RGBA).
Performance is a concern, so reducing the number of passes over the image data would be beneficial, especially if the blend can operate on the ARGB data in place rather than copying it into a different data structure.
Does anyone know of any libraries that may be able to help or whether I've missed something in one of the above?
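For what it's worth, once the PNG is decoded and resized, the blend step itself is a single pass over the destination buffer. A minimal sketch (hypothetical helper name, straight alpha, integer math, no library) of an in-place "over" blend on packed 0xAARRGGBB pixels:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// In-place "over" blend of a straight-alpha ARGB source onto an ARGB
// destination. Pixels are packed 0xAARRGGBB, one uint32_t each. This is
// only a sketch; a real version would first resize the decoded PNG
// (e.g. bilinearly) to match the destination.
void blendArgbOver(uint32_t *dst, const uint32_t *src, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i) {
        const uint32_t s = src[i], d = dst[i];
        const uint32_t a = s >> 24;        // source alpha, 0..255
        const uint32_t ia = 255u - a;
        // out = (src * a + dst * (255 - a)) / 255, per channel
        const uint32_t r = (((s >> 16) & 0xFFu) * a + ((d >> 16) & 0xFFu) * ia) / 255u;
        const uint32_t g = (((s >> 8) & 0xFFu) * a + ((d >> 8) & 0xFFu) * ia) / 255u;
        const uint32_t b = ((s & 0xFFu) * a + (d & 0xFFu) * ia) / 255u;
        dst[i] = (d & 0xFF000000u) | (r << 16) | (g << 8) | b;
    }
}
```

Because it writes straight back into dst, no copy of the ARGB data is needed; the resize of the PNG would happen once, before this pass.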

You can do this with SDL by using SDL_gfx and SDL_Image.
// load images using SDL_image
SDL_Surface *image1, *image2;
image1 = IMG_Load("front.png");
image2 = IMG_Load("back.png");
// blit images onto a surface using SDL_gfx
// (NULL rects blit the entire surface; pass &rect to blit a sub-region)
SDL_gfxBlitRGBA(image1, NULL, screen, NULL);
SDL_gfxBlitRGBA(image2, NULL, screen, NULL);

You need to pair a file-format library (libPNG or ImageMagick) with an image-manipulation library. Boost.GIL would be good here. If you can load the ARGB buffer (4 bytes per pixel) into memory, you can create a GIL image with interleaved_view, reinterpret_cast-ing your buffer pointer to a boost::gil::argb32_ptr_t.
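To illustrate the zero-copy idea behind interleaved_view in plain C++ (hypothetical ArgbPixel struct; assumes the buffer really is tightly packed A,R,G,B bytes):

```cpp
#include <cassert>
#include <cstdint>

// A plain struct matching one interleaved ARGB pixel (byte order A,R,G,B).
// Viewing a raw buffer through this type avoids copying it, which is what
// boost::gil::interleaved_view does in a more general, type-safe way.
struct ArgbPixel { uint8_t a, r, g, b; };
static_assert(sizeof(ArgbPixel) == 4, "pixel must be tightly packed");

// Treat a raw byte buffer as a run of pixels; no copy involved, and
// writes through the returned pointer modify the original buffer.
inline ArgbPixel *asPixels(uint8_t *buffer)
{
    return reinterpret_cast<ArgbPixel *>(buffer);
}
```

GIL generalizes this with proper pixel/channel types and iterators, but the underlying mechanism is the same reinterpretation of the buffer with no copying.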

With ImageMagick it is a very easy thing to do, using the appendImages function.
Like this:
#include <list>
#include <Magick++.h>

using namespace std;
using namespace Magick;

int main(int /*argc*/, char ** /*argv*/)
{
    list<Image> imageList;
    readImages(&imageList, "test_image_anim.gif");

    Image appended;
    appendImages(&appended, imageList.begin(), imageList.end());
    appended.write("appended_image.miff");
    return 0;
}

Related

C++ (MFC) using Pylon: How to draw an image which is in an array

Hello, I am working on a Pylon application, and I want to know how to draw an image that is stored in an array.
Basler's Pylon SDK, on memory Image saving
In this link, it shows I can save image data in an array (I guess): Pylon::CImageFormatConverter::Convert(Pylon::CPylonImage[i],Pylon::CGrabResultPtr)
But I cannot figure out how to draw the images that are in the array.
I think it should be an easy problem, but I'd appreciate your patience as it's my first time doing this.
For quick-and-dirty display of Pylon buffers, the Pylon SDK provides a helper class you can drop into the buffer-grabbing loop:
//initialization
CPylonImage img;
CPylonImageWindow wnd;
//somewhere in grabbing loop
wnd.SetImage(img);
wnd.Show();
If you want to use the buffer in your own display procedures (via either GDI or third-party libraries like OpenCV), you can get the image buffer with the GetBuffer() method:
CGrabResultPtr ptrGrabResult;
camera.RetrieveResult( 5000, ptrGrabResult, TimeoutHandling_ThrowException);
const uint8_t *pImageBuffer = (uint8_t *) ptrGrabResult->GetBuffer();
Of course, you have to be aware of your target pixel alignment, and optionally use CImageFormatConverter to produce the output pixel format of your choice.
CImageFormatConverter documentation

High performance graphical overlay on a YUV (UYVY) image (C++)

My requirements: Overlay graphics (with alpha/antialiasing) onto a UYVY image as fast as possible. The rendering should preferably take place in UYVY because I need to both render and encode (H.264 with ffmpeg).
What framework (preferably cross-platform, but Windows-only is OK) should I use to render the image that I will later render/encode?
I looked at OpenCV, and it seems the drawing happens in BGR, which would require converting each frame from UYVY (2 channels) to BGR (3 channels) and then back again.
I looked at SDL, which uses hardware acceleration. It supports multiple textures with different color spaces. However, the method SDL_RenderReadPixels, which I would need to get the resulting composited image, mentions in the documentation "warning This is a very slow operation, and should not be used frequently."
Is there a framework that can draw onto a BYTE array of YUV, possibly with alpha blending/anti-aliasing?
You can also convert YUV to BGRA and then draw using that format. BGRA is more convenient than BGR for drawing because each of its pixels fits in a 32-bit integer. Naturally, after drawing you have to convert the BGRA back to YUV.
There is a fast cross-platform C++ library which can perform these manipulations.
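The forward half of that round trip can be sketched per macropixel: UYVY stores U, Y0, V, Y1 for each pair of pixels. A minimal UYVY-to-BGRA conversion using the common integer BT.601 coefficients (hypothetical function name):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>

static inline uint8_t clamp8(int v) { return (uint8_t)std::min(255, std::max(0, v)); }

// Convert UYVY macropixels (U, Y0, V, Y1 = two pixels) to BGRA, using
// integer BT.601 coefficients. A real implementation would process whole
// rows at a time and is a good candidate for SIMD.
void uyvyToBgra(const uint8_t *uyvy, uint8_t *bgra, std::size_t pairCount)
{
    for (std::size_t i = 0; i < pairCount; ++i) {
        const int u  = uyvy[4 * i + 0] - 128;
        const int y0 = uyvy[4 * i + 1] - 16;
        const int v  = uyvy[4 * i + 2] - 128;
        const int y1 = uyvy[4 * i + 3] - 16;
        for (int j = 0; j < 2; ++j) {
            const int y = 298 * (j ? y1 : y0);
            uint8_t *p = bgra + 8 * i + 4 * j;
            p[0] = clamp8((y + 516 * u + 128) >> 8);            // B
            p[1] = clamp8((y - 100 * u - 208 * v + 128) >> 8);  // G
            p[2] = clamp8((y + 409 * v + 128) >> 8);            // R
            p[3] = 255;                                         // A (opaque)
        }
    }
}
```

The BGRA-to-UYVY direction is analogous, and in a real pipeline both passes would be the main optimization targets.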

reading indexed palette image in C++

My platform is Windows. I didn't expect reading an indexed palette image to be this difficult in C++. In case you are not familiar with it, an indexed palette image is a single-channel image that expresses its pixel colors through a table of 256 colors called a palette.
I was using OpenCV, but its imread just converts the file to a 3-channel image, so I have no way to save it back as an indexed palette image or compare it with another indexed palette image.
I tried to use Bitmap but for some reason, it does not read correct pixel values.
So right now, I am looking for a light library or code to read pixels from indexed palette file.
Using OpenCV to read or write such an image will lose and change the image information, so I prefer to use GDI+, which is more capable of dealing with image-format problems.
As the comments on the question show, I decided to use two methods: OpenCV for non-indexed-palette images and Bitmap (GDI+) for indexed palette images. Now everything is working perfectly.

How can I draw a png ontop of another png?

How can I "draw"/"merge" a PNG on top of another PNG (the background) using libpng, while keeping the alpha channel of the PNG being drawn on top of the background PNG? There don't seem to be any tutorials or anything mentioned in the documentation about it.
libpng is a library for loading images stored in the PNG file format. It is not a library for blitting images, compositing images, or anything of that nature. libpng's basic job is to take a file or memory image and turn it into an array of color values. What you're talking about is very much out of scope for libpng.
If you want to do this, you will have to do the image composition yourself manually. Or use a library that can do image composition (cairo, etc).
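To make the do-it-yourself route concrete: once libpng has decoded both images to 8-bit RGBA rows, the "over" composite is only a few lines per row. A sketch (hypothetical helper, straight non-premultiplied alpha, including the output alpha):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Porter-Duff "over" for straight-alpha 8-bit RGBA rows, in the R,G,B,A
// byte order libpng delivers. Composites src onto dst in place and also
// updates the destination alpha. Hypothetical helper, not part of libpng.
void compositeRgbaRow(uint8_t *dst, const uint8_t *src, std::size_t pixels)
{
    for (std::size_t i = 0; i < pixels; ++i) {
        const uint8_t *s = src + 4 * i;
        uint8_t *d = dst + 4 * i;
        const unsigned sa = s[3], ia = 255 - sa;
        for (int c = 0; c < 3; ++c)                 // R, G, B channels
            d[c] = (uint8_t)((s[c] * sa + d[c] * ia) / 255);
        d[3] = (uint8_t)(sa + d[3] * ia / 255);     // outA = srcA + dstA*(1 - srcA)
    }
}
```

Run this once per row over the overlap region of the two images and the result is the blended PNG, ready to be written back out with libpng.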

Qt and OpenGL how to render a PVR

I'm having a few issues regarding how to render a PVR.
I'm confused about how to get the data from the PVR to the screen. I have a window that is ready to draw a picture, and I'm a bit stuck. What do I need to get from the PVR as parameters to then be able to draw a texture? With JPEGs and PNGs you can just load the image from a directory, but how would the same work for a PVR?
It depends on what format the data inside the PVR is in. If it's a supported standard, then just copy it to a texture with glTexSubImage2D(); otherwise you will need to decompress it into something OpenGL understands, like RGB or RGBA.
edit - OpenGL is a display library (well, much more than that); it doesn't read images, decode movies, or do sound.
TGA files are generally very simple uncompressed RGB or RGBA image data, it should be trivial to decode the file, extract the image data and copy it directly to an opengl texture.
Since you tagged the question Qt, you can use QImage to load the TGA and then use the QImage with OpenGL.