How to play a movie with alpha channel on Qt? - c++

I'm trying to load and view a video with an alpha channel in Qt. The video was encoded with the QuickTime Animation codec set to RGB + Alpha (Millions of Colors+). I'm sure the transparency works, as I loaded the video into After Effects and checked.
I tried the Phonon module with no success: the video loads fine, but the alpha channel is ignored and it just shows a black background. I tried setting the Qt::WA_TranslucentBackground attribute, but that didn't work either. GIF is not an option since the graphics are quite complex.
Is there any way to do this?

I'm not sure if it's possible (I don't know After Effects' export options), but did you try converting the movie to the MNG format? Then you can load it with QMovie, which supports the alpha channel (the file may be quite heavy in size, though).
Maybe this link will help: http://www.libpng.org/pub/mng/mngapcv.html
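For what it's worth, a minimal sketch of the QMovie route (Qt 4 era, assuming the MNG image-format plugin is available; the file name and window-flag handling are placeholders, not from the original question):

```cpp
// Minimal sketch: play an MNG animation with QMovie on a translucent widget.
// "animation.mng" is a placeholder; window-flag handling may need per-platform tweaks.
#include <QApplication>
#include <QLabel>
#include <QMovie>

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);

    QLabel label;
    label.setAttribute(Qt::WA_TranslucentBackground);   // keep the widget background clear
    label.setWindowFlags(Qt::FramelessWindowHint);       // usually needed for translucency on top-level windows

    QMovie movie("animation.mng");                        // MNG preserves the alpha channel
    label.setMovie(&movie);
    movie.start();

    label.show();
    return app.exec();
}
```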

Related

WebP with alpha losing color channels data

We are currently switching to WebP for textures in a video game. We've come across a problem where the areas in an image that have the alpha channel set to zero end up losing all the detail. You can see this effect in the following example:
Original Image (left is color channels, right is alpha channel)
After saving as WebP
As you can see, the zero-alpha areas have lost their detail.
This optimization makes sense when the alpha channel is being used as transparency. However, in our game we are using the alpha for something else and need to maintain the color channel integrity independently from the alpha channel. How do I disable this effect in the encoder so the color channel encodes normally?
I should mention I'm using libwebp in C++, calling the function WebPEncodeRGBA.
Thanks!
https://developers.google.com/speed/webp/docs/cwebp
In this documentation, the -exact parameter is described:
-exact: Preserve RGB values in transparent area. The default is off, to help compressibility.
Found the solution. After tracing through libwebp code I discovered an undocumented option in WebPConfig called "exact". Setting this to 1 will prevent the library from optimizing zero-alpha areas when encoding.
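For completeness, here is a rough sketch of what that looks like with the advanced libwebp API (the simple WebPEncodeRGBA call takes no config, so WebPConfig/WebPPicture are needed; the exact field requires libwebp 0.5 or newer, and the function name below is just illustrative):

```cpp
// Sketch: encode an RGBA buffer to WebP while keeping RGB values in zero-alpha pixels.
#include <webp/encode.h>
#include <cstddef>
#include <cstdint>

// Returns the encoded size, or 0 on failure; *output must be freed with WebPFree().
size_t EncodeRgbaExact(const uint8_t* rgba, int width, int height,
                       int stride, float quality, uint8_t** output) {
    WebPConfig config;
    if (!WebPConfigInit(&config)) return 0;      // ABI/version mismatch check
    config.quality = quality;
    config.exact = 1;                            // preserve RGB where alpha == 0

    WebPPicture pic;
    if (!WebPPictureInit(&pic)) return 0;
    pic.width = width;
    pic.height = height;
    pic.use_argb = 1;                            // keep the full ARGB path
    if (!WebPPictureImportRGBA(&pic, rgba, stride)) return 0;

    WebPMemoryWriter writer;
    WebPMemoryWriterInit(&writer);
    pic.writer = WebPMemoryWrite;                // write the output into memory
    pic.custom_ptr = &writer;

    int ok = WebPEncode(&config, &pic);
    WebPPictureFree(&pic);
    if (!ok) { WebPMemoryWriterClear(&writer); return 0; }

    *output = writer.mem;
    return writer.size;
}
```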

Problems rendering an image in gtk

I'm programming an application in C++ with a GUI in GTK3 that will show the images obtained from a GenICam camera. I've got the camera's official API, which handles the capture and returns an unsigned char* pointing to the buffer where the image is contained.
The problem comes when I try to convert the image to a GTK format to render it in the GUI. I've tried Cairo and GdkPixbuf, but I have problems with both of them, as the image is in MONO8 format and GdkPixbuf only deals with RGB. Cairo can deal with 8-bit images, but only if they have an alpha channel, which is not the case.
Does someone know a way to approach this issue?
Thanks in advance
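Not from the original thread, but one workaround that follows from the constraints described above is to expand the MONO8 buffer to RGB yourself and wrap it in a GdkPixbuf. A rough sketch (the function name is made up):

```cpp
// Sketch: expand a MONO8 buffer to RGB so it can be wrapped in a GdkPixbuf,
// since GdkPixbuf has no 8-bit grayscale format.
#include <gdk-pixbuf/gdk-pixbuf.h>

static void free_rgb_buffer(guchar* pixels, gpointer /*user_data*/) {
    g_free(pixels);   // release the buffer allocated below once the pixbuf is destroyed
}

GdkPixbuf* pixbuf_from_mono8(const unsigned char* mono, int width, int height) {
    guchar* rgb = static_cast<guchar*>(g_malloc(width * height * 3));
    for (int i = 0; i < width * height; ++i) {
        rgb[3 * i + 0] = mono[i];   // copy the gray value into R, G and B
        rgb[3 * i + 1] = mono[i];
        rgb[3 * i + 2] = mono[i];
    }
    return gdk_pixbuf_new_from_data(rgb, GDK_COLORSPACE_RGB,
                                    FALSE /* no alpha */, 8 /* bits per sample */,
                                    width, height, width * 3 /* rowstride */,
                                    free_rgb_buffer, NULL);
}
```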

Windows Enhanced Video Renderer (EVR): Layer multiple 1080p Videos with transparency?

I am looking for ways to layer multiple 1080p videos with transparency on Windows, in C++ with DirectX or OpenGL. The videos will start at different moments in time. Ideally the videos can be blended with a render target containing other game content, so the resulting video texture should contain transparent pixels.
Can this be done with EVR and hardware acceleration? Which codecs are supported? http://en.wikipedia.org/wiki/Media_Foundation mentions transparency, but does not answer my questions. It sounds as if all the videos have to start at the same time and the resulting video texture has no transparency.
TIA
Christoph
These are my research results from around 03/2014, with no definitive answer to this problem.
I did not try the possibility mentioned for Media Foundation, since it sounded as if the result has no transparency.
I was able to use a second grayscale video to mask the RGB video inside a shader (see the sketch below). This can be done with a separate video stream, but syncing is needed. It is also possible to encode a video with the two frames side by side, but many hardware-accelerated video codecs do not allow this, WMF being the exception. Performance is not great, but I was able to play three 1080p30 videos simultaneously.
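Roughly, the masking looks like this in the pixel shader (a sketch only; HLSL for Direct3D 9 embedded as a C++ string, and the sampler names are illustrative, not from my original code):

```cpp
// Sketch: combine a color video texture and a grayscale matte video texture
// into an RGBA output inside a Direct3D 9 pixel shader.
const char* kAlphaMattePixelShader = R"(
sampler2D ColorVideo : register(s0);   // RGB frames of the main video
sampler2D MatteVideo : register(s1);   // grayscale frames used as alpha

float4 main(float2 uv : TEXCOORD0) : COLOR0
{
    float3 rgb   = tex2D(ColorVideo, uv).rgb;
    float  alpha = tex2D(MatteVideo, uv).r;   // luminance of the matte
    return float4(rgb, alpha);                // premultiply here if the blend state expects it
}
)";
```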
On a side note, to my surprise Flash was able to play 5+ 1080p30 videos with transparency simultaneously. The Flash video codec allows alpha values, but I only managed to use them inside Flash.

Combining Direct3D, Axis to make multiple IP camera GUI

Right now, what I'm trying to do is make a new GUI, essentially software using DirectX (more precisely, Direct3D), that displays streaming images from Axis IP cameras.
For the time being I figured that the flow for the entire program would be like this:
1. Use the Axis program to get the streaming images.
2. Pass the images to the Direct3D program.
3. Display the images on the screen.
Currently I have made a somewhat basic Direct3D app that loads and displays video frames from AVI files (for testing). I don't know how to load images directly from videos using DirectX, so I used OpenCV to save frames from the video and had DirectX upload them. Very slow.
Right now I have some unclear things:
1. How to get an Axis program that works in C++ (I'll look up examples later; probably no big deal).
2. How to upload images directly from the Axis IP camera program.
So, do you have any recommendations or suggestions on how to make my program work more efficiently? Anything, just let me know.
Well, you may find it faster to use DirectShow and add a custom renderer at the far end that copies the decompressed video data directly into a Direct3D texture.
It's well worth double buffering that texture, i.e. have texture 0 displaying and texture 1 being uploaded to, then swap the two over when a new frame is available (i.e. display texture 1 while uploading to texture 0).
This way you can decouple the video frame rate from the rendering frame rate, which makes dropped frames a little easier to handle.
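A rough sketch of that double-buffering idea, assuming Direct3D 9 dynamic textures (the struct and member names are illustrative only, and error handling is trimmed):

```cpp
// Sketch: one texture is displayed while the other receives the next decoded
// frame; the roles swap once the upload completes.
#include <d3d9.h>
#include <cstring>
#include <utility>

struct VideoTexturePair {
    IDirect3DTexture9* tex[2] = {nullptr, nullptr};   // assumed created with D3DUSAGE_DYNAMIC in D3DPOOL_DEFAULT
    int display = 0;   // index currently bound for rendering
    int upload  = 1;   // index currently being written to

    // Copy one decoded 32-bit BGRA frame into the upload texture.
    bool UploadFrame(const unsigned char* frame, int width, int height, int srcPitch) {
        D3DLOCKED_RECT lr;
        if (FAILED(tex[upload]->LockRect(0, &lr, nullptr, D3DLOCK_DISCARD)))
            return false;
        for (int y = 0; y < height; ++y) {
            memcpy(static_cast<unsigned char*>(lr.pBits) + y * lr.Pitch,
                   frame + y * srcPitch,
                   width * 4);   // 4 bytes per pixel for A8R8G8B8/X8R8G8B8
        }
        tex[upload]->UnlockRect(0);
        return true;
    }

    // Call from the render thread once a new frame has been uploaded.
    IDirect3DTexture9* SwapAndGetDisplayTexture() {
        std::swap(display, upload);
        return tex[display];
    }
};
```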
I use in-place updates of Direct3D textures (via IDirect3DTexture9::LockRect) and it works very fast. Which part of your program is slow?
For capturing images from Axis cams you can use the iPSi C++ library: http://sourceforge.net/projects/ipsi/
It can be used to capture images and to control camera zoom and rotation (if available).

Does the VFW (Video For Windows) API support Alpha Channel Transparency?

Does the VFW (Video For Windows) API support Alpha Channel Transparency? I want to be able to export video with Alpha channel information. How can I do this in VC6?
I'm pretty sure it does; just set the pixel format to RGB32, which should give you an alpha channel to use.
Of course, finding a video compression format that fits all your needs and supports alpha channel is another problem.
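As a rough sketch (not tested against VC6 specifically), writing an uncompressed 32-bit AVI with the VFW API looks roughly like this; the path, frame rate, and frame data are placeholders, and the fourth byte of each pixel carries the alpha value:

```cpp
// Sketch: write one 32-bit BGRA frame to an uncompressed AVI via the VFW API.
#include <windows.h>
#include <vfw.h>
#pragma comment(lib, "vfw32.lib")

void WriteRgb32Avi(const char* path, const unsigned char* frame,
                   int width, int height) {
    AVIFileInit();

    PAVIFILE file = NULL;
    AVIFileOpenA(&file, path, OF_WRITE | OF_CREATE, NULL);

    AVISTREAMINFOA si = {0};
    si.fccType = streamtypeVIDEO;
    si.dwScale = 1;
    si.dwRate = 30;                          // 30 fps
    si.dwSuggestedBufferSize = width * height * 4;
    SetRect(&si.rcFrame, 0, 0, width, height);

    PAVISTREAM stream = NULL;
    AVIFileCreateStreamA(file, &stream, &si);

    BITMAPINFOHEADER bih = {0};
    bih.biSize = sizeof(bih);
    bih.biWidth = width;
    bih.biHeight = height;                   // positive height: bottom-up DIB rows
    bih.biPlanes = 1;
    bih.biBitCount = 32;                     // BGRA: alpha in the high byte
    bih.biCompression = BI_RGB;              // uncompressed
    bih.biSizeImage = width * height * 4;
    AVIStreamSetFormat(stream, 0, &bih, sizeof(bih));

    // Write a single frame; call again with increasing frame indices for more.
    AVIStreamWrite(stream, 0, 1, (LPVOID)frame, bih.biSizeImage,
                   AVIIF_KEYFRAME, NULL, NULL);

    AVIStreamRelease(stream);
    AVIFileRelease(file);
    AVIFileExit();
}
```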