Playing transparent video over screen with custom user input handling - c++

I need to play animated characters over the screen on Windows. Basically, it will be a character video with transparency, and only the non-transparent parts should accept user input (e.g. mouse clicks); all other events should be passed through to the underlying window.
I've made a simple transparent DirectX window with video in it. But I don't know how to make parts of this window "transparent" to user input. So if I click on the character, my application should accept the click; if I click on a transparent part of the video, the click should be handled by the underlying window. How can I do this?
Thanks in advance.

I assume you mean DirectShow rather than DirectX?
You can do it using the Video Mixing Renderer. As with anything DirectShow, it's not necessarily easy.
First, connect the video to the VMR filter.
Second, for the animating characters all you need to do is build a simple DirectShow push source filter (it's explained really well in the DirectShow samples) that supplies the animation frames.
Third, you need to create an IVMRImageCompositor implementation. You can then use DirectX to composite the images.
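For illustration, here is a rough, untested sketch of how the compositor gets hooked up, using the VMR-9 flavour of the interfaces (IVMRFilterConfig9 / IVMRImageCompositor9). The compositor class itself and the rest of the graph-building code are assumed to exist elsewhere, and error handling is trimmed down.

// Sketch only: register a custom compositor with the Video Mixing Renderer 9
// before its input pins are connected. "pMyCompositor" is assumed to be an
// IVMRImageCompositor9 implementation that alpha-blends the character frames.
#include <dshow.h>
#include <d3d9.h>
#include <vmr9.h>

HRESULT SetupVmr9(IGraphBuilder* pGraph, IVMRImageCompositor9* pMyCompositor)
{
    IBaseFilter* pVmr = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_VideoMixingRenderer9, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                  reinterpret_cast<void**>(&pVmr));
    if (FAILED(hr)) return hr;

    hr = pGraph->AddFilter(pVmr, L"VMR9");

    IVMRFilterConfig9* pConfig = nullptr;
    if (SUCCEEDED(hr))
        hr = pVmr->QueryInterface(IID_IVMRFilterConfig9,
                                  reinterpret_cast<void**>(&pConfig));
    if (SUCCEEDED(hr))
    {
        // Two input streams: the movie and the push source animation filter.
        pConfig->SetNumberOfStreams(2);
        // The VMR will call pMyCompositor->CompositeImage() for every output
        // frame, letting you blend the streams with Direct3D.
        hr = pConfig->SetImageCompositor(pMyCompositor);
        pConfig->Release();
    }
    pVmr->Release();
    return hr;
}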

Related

How to create a cursor in X11 from raw data c++

I have been searching around about this problem for a while. I am making a cross-platform program, and I have figured out how to load an animated cursor with the Windows API and how to create a cursor at run time from raw bitmap data. However, I can't find good documentation for this for X11, for the Unix/Linux build of my program. I know I need to use the XRender extension functions XRenderCreateCursor and XRenderCreateAnimCursor from this documentation https://www.x.org/releases/X11R7.6/doc/libXrender/libXrender.txt, but I do not know how to use these functions and the documentation does not show any examples.
Also, the raw image data is in ARGB format, and I want to support the alpha channel with these cursors if possible.
Could someone show me how to use the X11 and XRender (or XCursor) libraries to create a cursor, static and animated, and possibly how to do it so the cursor can be used with any X11 window?
Thanks!
PS.
I am actually editing an open source library for cross-platform GUI that I am using for my program, and I am trying to add this feature to the library, but I am not used to programming with X11.
When it comes to X, nothing is simple.
First, review the specification of the X render extension.
The steps for creating an animated cursor are as follows.
First, you need to create a PICTURE for each frame of the animated cursor, using CreatePicture.
Use CreateCursor to create a CURSOR from each PICTURE. CreateCursor returns a CURSOR handle.
Then you take the list of CURSORs for all of the frames and use CreateAnimCursor to create a single CURSOR representing the animated cursor.
This all comes down to creating a PICTURE for each frame. A PICTURE is created using CreatePicture from a DRAWABLE and a PICTFORMAT. The DRAWABLE would be the PIXMAP with the actual image data for the cursor's frame, and the PICTFORMAT specifies which channels in the pixmap represent the red, green, blue, and alpha channels; it must be one of the enumerated PICTFORMATs returned from QueryPictFormats.
For more information, see the aforementioned X render extension specification.
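To make those steps concrete, here is a minimal, untested C++ sketch using libXrender. It assumes the frame data is 32-bit premultiplied ARGB (XRender expects premultiplied alpha); the helper names are made up for this example.

// Build one ARGB32 cursor frame with XRender, then combine frames into an
// animated cursor. Error handling is omitted.
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/Xrender.h>
#include <vector>

static Cursor CreateFrameCursor(Display* dpy, unsigned int* argb,
                                unsigned int w, unsigned int h,
                                unsigned int xhot, unsigned int yhot)
{
    Window root = DefaultRootWindow(dpy);

    // 32-bit deep pixmap to hold the frame.
    Pixmap pix = XCreatePixmap(dpy, root, w, h, 32);
    GC gc = XCreateGC(dpy, pix, 0, nullptr);

    // Wrap the raw pixels in an XImage and upload them into the pixmap.
    XVisualInfo vinfo;
    XMatchVisualInfo(dpy, DefaultScreen(dpy), 32, TrueColor, &vinfo);
    XImage* img = XCreateImage(dpy, vinfo.visual, 32, ZPixmap, 0,
                               reinterpret_cast<char*>(argb), w, h, 32, 0);
    XPutImage(dpy, pix, gc, img, 0, 0, 0, 0, w, h);

    // Turn the pixmap into a PICTURE, then into a CURSOR.
    XRenderPictFormat* fmt = XRenderFindStandardFormat(dpy, PictStandardARGB32);
    Picture pict = XRenderCreatePicture(dpy, pix, fmt, 0, nullptr);
    Cursor cur = XRenderCreateCursor(dpy, pict, xhot, yhot);

    XRenderFreePicture(dpy, pict);
    XFreeGC(dpy, gc);
    XFreePixmap(dpy, pix);
    img->data = nullptr;   // the caller owns the pixel buffer, don't let Xlib free it
    XDestroyImage(img);
    return cur;
}

// Combine the per-frame cursors into one animated cursor (delay in milliseconds).
static Cursor CreateAnimatedCursor(Display* dpy, const std::vector<Cursor>& frames,
                                   unsigned long delayMs)
{
    std::vector<XAnimCursor> anim(frames.size());
    for (size_t i = 0; i < frames.size(); ++i) {
        anim[i].cursor = frames[i];
        anim[i].delay  = delayMs;
    }
    // Apply the result to a window with XDefineCursor(dpy, window, cursor).
    return XRenderCreateAnimCursor(dpy, static_cast<int>(anim.size()), anim.data());
}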

Borland C++ stretch a TAnimate

Hi, I have an application made with Borland C++Builder (using RAD Studio).
In the application there is a TForm with a TAnimate (a control for AVI videos) on it. I wanted to know if it is somehow possible to stretch the TAnimate object.
If I change the size of the object:
video->Width = newwidth;
video->Height = newheight;
The video doesn't get stretched but a white border gets added to the video image.
Is there some way to scale the video image?
If someone tells me that it is impossible, that would be OK!
Maybe it is possible to convert the TAnimate content into a scaled TImage.
The AutoSize property of TAnimate doesn't work for this.
TAnimate is just a thin wrapper around a Win32 Animation control, which has no option for stretching/scaling video. Even MSDN says:
Note The AVI file, or resource, must not have a sound channel. The capabilities of the animation control are very limited and are subject to change. If you need a control to provide multimedia playback and recording capabilities for your application, you can use the MCIWnd control. For more information, see MCIWnd Window Class.
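If you do switch to MCIWnd as MSDN suggests, a rough, untested sketch (callable from a C++Builder form, where hwndParent would be the form's Handle) might look like this; the AVI path and sizes are placeholders, and you need vfw.h plus vfw32.lib.

#include <windows.h>
#include <vfw.h>   // link with vfw32.lib

HWND CreateScaledAviWindow(HWND hwndParent, HINSTANCE hInst)
{
    const int newwidth = 640, newheight = 360;   // desired stretched size

    HWND hwndMovie = MCIWndCreate(hwndParent, hInst,
                                  WS_CHILD | WS_VISIBLE |
                                  MCIWNDF_NOMENU | MCIWNDF_NOPLAYBAR,
                                  TEXT("C:\\path\\to\\movie.avi"));   // placeholder

    // By default MCIWnd stretches the movie to the window size (unless
    // MCIWNDF_NOAUTOSIZEMOVIE is set), so resizing the window scales the video.
    MoveWindow(hwndMovie, 10, 10, newwidth, newheight, TRUE);
    MCIWndPlay(hwndMovie);
    return hwndMovie;
}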

Producing buttons with Direct2D and DirectWrite (C++, DirectX)

I've been looking around for a while for how to produce buttons using Direct2D and DirectWrite, with no luck. I'm comfortable with shapes, text and that jazz. However, it suddenly occurred to me that I might be looking at it in the wrong way.
Take the sentence:
you draw your controls and content for your app using the Direct2D and
DirectWrite APIs, handling all the input events directly.
I'm now thinking this means that, instead of being able to quickly produce a fully functional button as I would using XAML, I would draw the button, manually check the location of the mouse on click to see whether it's within the button boundaries, and then handle the event? A similar method for hovering, without the click.
Is this the kind of method required when using Direct2D and DirectWrite?
I haven't any experience with DirectX, but in OpenGL I build my buttons from scratch. Assuming you have animated sprites implemented, your buttons are essentially sprites that play certain animations in response to being clicked, hovered over, etc., and which you can register callbacks with. In my 2D engine, I have a class called UiButton, which inherits Sprite, and listens for various UI events. It gets more complicated when you want to handle keyboard navigation (arrow keys + enter to select) as you have to think about how the buttons are connected and which of them has focus at any given moment.
Here is my implementation for reference:
Headers: https://github.com/RobJinman/dodge/tree/master/Dodge/include/dodge/ui
Source: https://github.com/RobJinman/dodge/tree/master/Dodge/src/ui
If you're not prepared to roll your own, Googling "direct2d gui framework" seems to bring up some promising results.
Sorry I can't be of more help.
Yes, to draw a UI button with Direct2D, you need to handle everything yourself. Why? Direct2D is a 2D graphics API, not a controls library. You need to draw the layout of your button and handle its messages (such as click, mouse hover, ...). You lose a lot of convenience and it's time-consuming, but the most important thing is: you can control it yourself!
Direct2D is a graphics library. UI controls such as text selection, textboxes, and buttons are not part of it. However, the benefit of using Direct2D and DirectWrite is that we can implement our own UI controls and have full control over them.
Please also see ID2D1Geometry::FillContainsPoint() for the hit-testing task.
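To tie the two answers together, here is a small illustrative sketch (not from any framework; the struct and handler names are invented) of what such a hand-rolled Direct2D button could look like:

// A hand-rolled "button": a rounded-rectangle geometry that is filled during
// rendering and hit-tested against the mouse position in the window procedure.
#include <d2d1.h>

struct SimpleButton
{
    ID2D1RoundedRectangleGeometry* geometry = nullptr;
    bool hovered = false;

    HRESULT Create(ID2D1Factory* factory, const D2D1_ROUNDED_RECT& bounds)
    {
        return factory->CreateRoundedRectangleGeometry(bounds, &geometry);
    }

    // x/y are in the same (device-independent pixel) space the button is drawn in.
    bool HitTest(float x, float y) const
    {
        BOOL contains = FALSE;
        D2D1_POINT_2F pt = { x, y };
        geometry->FillContainsPoint(pt, nullptr,
                                    D2D1_DEFAULT_FLATTENING_TOLERANCE, &contains);
        return contains != FALSE;
    }

    void Draw(ID2D1RenderTarget* rt, ID2D1SolidColorBrush* fill) const
    {
        rt->FillGeometry(geometry, fill);
        // Draw the caption on top with ID2D1RenderTarget::DrawText / DrawTextLayout.
    }
};

// In the window procedure, something along these lines:
//   case WM_LBUTTONDOWN:
//       if (button.HitTest((float)GET_X_LPARAM(lParam), (float)GET_Y_LPARAM(lParam)))
//           OnButtonClicked();                    // your own click handler
//       break;
//   case WM_MOUSEMOVE:
//       button.hovered = button.HitTest(...);     // drives the hover state
//       break;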

How to display a video in one layer and camera view in another layer in same window

I need to design a GUI in such a way that a video is played in one thread (as a panel in the frame or window) and camera video is captured and displayed in another thread (as a panel in the same frame or window).
I need a GUI like the one at http://www.youtube.com/watch?v=mA883X4uaHk (please watch from 0:35).
I found that this can be done in Qt, but I am a bit new to Qt. Can someone give me some tips on how to do this in Qt? Anything with basic code to handle frames and panels would be really helpful.
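As a starting point (not from the original thread), here is a minimal Qt 5 sketch of the layout being described: one panel playing a video file and one showing the live camera preview, side by side in a single window. Qt's multimedia backend does the decoding and capture work on its own threads, so you don't have to manage threads manually. The file path is a placeholder; it assumes the multimedia and multimediawidgets modules (QT += widgets multimedia multimediawidgets).

#include <QApplication>
#include <QWidget>
#include <QHBoxLayout>
#include <QUrl>
#include <QMediaPlayer>
#include <QVideoWidget>
#include <QCamera>
#include <QCameraViewfinder>

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QWidget window;
    QHBoxLayout* layout = new QHBoxLayout(&window);

    // Left panel: video file playback.
    QVideoWidget* videoPanel = new QVideoWidget;
    QMediaPlayer* player = new QMediaPlayer(&window);
    player->setVideoOutput(videoPanel);
    player->setMedia(QUrl::fromLocalFile("/path/to/video.mp4"));   // placeholder

    // Right panel: live preview of the default camera.
    QCameraViewfinder* cameraPanel = new QCameraViewfinder;
    QCamera* camera = new QCamera(&window);
    camera->setViewfinder(cameraPanel);

    layout->addWidget(videoPanel);
    layout->addWidget(cameraPanel);

    window.resize(960, 360);
    window.show();

    player->play();
    camera->start();
    return app.exec();
}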

Creating a program that creates a full screen overlay

I want to write a program that would create a transparent overlay filling the entire screen in Windows 7, preferably with C++ and OpenGL. Though, if there is an API written in another language that makes this super easy, I would be more than willing to use that too. In general, I assume I would have to be able to read the pixels that are already on the screen somehow.
Using the same method screen-capture software uses to get the pixels from the screen and then redrawing them would work initially, but the problem arises when the screen updates: my program would then have to minimize/close and reappear in order to read the underlying pixels again.
Windows Vista introduced a new flag for the PIXELFORMATDESCRIPTOR: PFD_SUPPORT_COMPOSITION. If the OpenGL context is created with an alpha channel, i.e. the cAlphaBits field of the PFD is nonzero, the alpha channel of the OpenGL framebuffer is respected by the Windows compositor.
Then, by creating a full-screen, borderless, undecorated window you get exactly the kind of overlay you desire. However, this window will still receive all input events, so you'll have to do some grunt work and pass all input events on to the underlying windows manually.
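For reference, the pixel-format part of that setup might look like the sketch below. The DwmExtendFrameIntoClientArea call is an extra assumption (a common way to make the framebuffer alpha actually show through), not something stated above.

#include <windows.h>
#include <dwmapi.h>   // link with dwmapi.lib (and opengl32.lib for wgl*)

#ifndef PFD_SUPPORT_COMPOSITION
#define PFD_SUPPORT_COMPOSITION 0x00008000
#endif

bool CreateTransparentGLContext(HWND hwnd, HDC hdc, HGLRC* outCtx)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                     PFD_DOUBLEBUFFER | PFD_SUPPORT_COMPOSITION;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cAlphaBits = 8;           // nonzero alpha bits, as described above
    pfd.cDepthBits = 24;
    pfd.iLayerType = PFD_MAIN_PLANE;

    int format = ChoosePixelFormat(hdc, &pfd);
    if (format == 0 || !SetPixelFormat(hdc, format, &pfd))
        return false;

    *outCtx = wglCreateContext(hdc);
    if (!*outCtx || !wglMakeCurrent(hdc, *outCtx))
        return false;

    // Extend the DWM "glass" over the whole client area so pixels rendered
    // with alpha < 1 become see-through.
    MARGINS margins = { -1 };
    DwmExtendFrameIntoClientArea(hwnd, &margins);

    // When rendering, clear with zero alpha wherever the desktop should remain visible:
    //   glClearColor(0.0f, 0.0f, 0.0f, 0.0f); glClear(GL_COLOR_BUFFER_BIT);
    return true;
}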