How to pass D3DDevice in LibVLC to be the "HWND" - c++

I wanted to use libVLC to display a video in my game, but I have a problem when using HWND while the game is in fullscreen: the fullscreen surface overlaps the video.
I do have the D3DDevice handle available, though, so the video could be drawn inside the game's surface.
But all I've found is libvlc_media_player_set_hwnd(), and no way to pass my game's surface to the player for drawing. Is there any way/example to do this?

There is no such function in LibVLC.
I think you need to use the video format callbacks and render the video buffer to a texture yourself. That is the approach I used (from Java with JMonkeyEngine, for example).
See libvlc_video_set_callbacks, libvlc_video_set_format and libvlc_video_set_format_callbacks.
I've seen this play back full HD smoothly, but this will consume more CPU than having VLC render directly into a video surface.
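Below is a minimal, hedged sketch of that callback setup in C++. The fixed 1280x720 size, the "RV32" chroma, and the VideoContext struct are assumptions for illustration; adapt them to however your game uploads textures (e.g. locking a D3D texture and copying the buffer in).

#include <vlc/vlc.h>
#include <cstdint>
#include <mutex>
#include <vector>

// Illustrative context shared between VLC's decoder thread and the game loop.
struct VideoContext {
    std::mutex mutex;
    std::vector<uint8_t> pixels;   // WIDTH * HEIGHT * 4 bytes (BGRA)
    bool dirty = false;            // true when a new frame is ready to upload
};

static const unsigned WIDTH = 1280, HEIGHT = 720;  // pick your own size

static void* lock_cb(void* opaque, void** planes) {
    auto* ctx = static_cast<VideoContext*>(opaque);
    ctx->mutex.lock();
    planes[0] = ctx->pixels.data();    // VLC decodes directly into this buffer
    return nullptr;                    // "picture identifier", unused here
}

static void unlock_cb(void* opaque, void* /*picture*/, void* const* /*planes*/) {
    auto* ctx = static_cast<VideoContext*>(opaque);
    ctx->dirty = true;
    ctx->mutex.unlock();
}

static void display_cb(void* /*opaque*/, void* /*picture*/) {
    // Nothing to do here: the game loop copies ctx->pixels into a D3D texture
    // (e.g. IDirect3DTexture9::LockRect + memcpy) whenever dirty is set.
}

void attach_video_callbacks(libvlc_media_player_t* mp, VideoContext* ctx) {
    ctx->pixels.resize(WIDTH * HEIGHT * 4);
    libvlc_video_set_callbacks(mp, lock_cb, unlock_cb, display_cb, ctx);
    // "RV32" = 32-bit RGB; the last argument is the pitch in bytes per row.
    libvlc_video_set_format(mp, "RV32", WIDTH, HEIGHT, WIDTH * 4);
}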

Related

Qt5: Drawing some graphics on top of QOpenGLWidget

I'm building an app with Qt5 that displays video from a digital camera (via a GStreamer pipeline). With the help of QPainter I then draw some graphics on top of it (text, shapes, and icons).
The thing is that the video refresh rate is ~30 FPS, while I only need to update the graphics at ~10 FPS or so. Currently I redraw the graphics overlay for every video frame, which is very wasteful in terms of CPU.
Is there a better approach in which I can re-use the overlay from the previous frame, and only update the background (the frame from the camera)?
One idea that I had was to paint the overlay into a QImage, and then just paint the image onto the QOpenGLWidget, but it just feels wrong.
Thank you...
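A minimal sketch of that cached-overlay idea, assuming the camera frame arrives as a QImage; the class and member names are made up for illustration:

#include <QOpenGLWidget>
#include <QPainter>
#include <QImage>

class VideoWidget : public QOpenGLWidget {
public:
    // Call at ~30 FPS with the decoded camera frame.
    void setFrame(const QImage& frame) { m_frame = frame; update(); }
    // Call at ~10 FPS, or whenever the overlay data actually changes.
    void invalidateOverlay() { m_overlayDirty = true; }

protected:
    void paintEvent(QPaintEvent*) override {
        // Re-render the overlay only when it is dirty or the widget was resized.
        if (m_overlayDirty || m_overlay.size() != size()) {
            m_overlay = QImage(size(), QImage::Format_ARGB32_Premultiplied);
            m_overlay.fill(Qt::transparent);
            QPainter op(&m_overlay);
            drawOverlay(op);
            m_overlayDirty = false;
        }
        QPainter p(this);
        p.drawImage(rect(), m_frame);   // repainted every video frame
        p.drawImage(0, 0, m_overlay);   // cheap blit of the cached overlay
    }

    // Placeholder for the existing text/shape/icon drawing code.
    void drawOverlay(QPainter& p) { p.drawText(10, 20, QStringLiteral("overlay")); }

private:
    QImage m_frame, m_overlay;
    bool m_overlayDirty = true;
};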

Window for OpenVR with Vulkan needed?

When using Vulkan and OpenVR for a game, do I need to create and open a window to make it work, or can I just submit the image to OpenVR?
Technically you only need to submit the frames to the OpenVR compositor, but it is strongly recommended that you also display those same frames in a window.
The overhead of doing so is minimal: you are literally just displaying the same textures in a window as well as in the HMD.
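For reference, a hedged sketch of the Vulkan submit path; the image, device, and queue handles are assumed to come from your existing renderer, and the image is assumed to already be in a layout the compositor can read:

#include <openvr.h>
#include <vulkan/vulkan.h>

// Submit one eye's rendered image to the OpenVR compositor. No window is
// required for this call itself. All handles are assumed to exist already.
void submitEye(vr::EVREye eye, VkImage eyeImage, VkDevice device,
               VkPhysicalDevice physicalDevice, VkInstance instance,
               VkQueue queue, uint32_t queueFamilyIndex,
               uint32_t width, uint32_t height)
{
    vr::VRVulkanTextureData_t vkData = {};
    vkData.m_nImage            = (uint64_t)eyeImage;
    vkData.m_pDevice           = device;
    vkData.m_pPhysicalDevice   = physicalDevice;
    vkData.m_pInstance         = instance;
    vkData.m_pQueue            = queue;
    vkData.m_nQueueFamilyIndex = queueFamilyIndex;
    vkData.m_nWidth            = width;
    vkData.m_nHeight           = height;
    vkData.m_nFormat           = VK_FORMAT_R8G8B8A8_UNORM;  // match your image
    vkData.m_nSampleCount      = 1;

    vr::Texture_t tex = { &vkData, vr::TextureType_Vulkan, vr::ColorSpace_Auto };
    vr::VRCompositor()->Submit(eye, &tex);
}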

How to take reliable QGLWidget snapshot

In my application I take snapshots of a QGLWidget's contents for two purposes:
Not redraw the scene all the time when only an overlay changes, but use a cached pixmap instead
Let the user take screenshots of the particular plots (3D scene)
The first thing I tried is grabFrameBuffer(). It is natural to use this function because, for the first purpose, what is currently visible in the widget is exactly what I want to cache.
PROBLEM: On some hardware (e.g. Intel integrated graphics, Mac OS X with GeForce graphics), the image obtained does not contain the current screen content, but the content before that. So if the scene is drawn twice, on the screen you see the second drawing, but in the image you see the first one (which would be the content of the back buffer?).
The second thing I tried is renderPixmap(). This renders using paintGL(), but not using paint(). I have all my stuff in paint(), as I use Qt's painting functionality and only a small piece of the code uses native GL (beginNativePainting(), endNativePainting()).
I also tried the regular QWidget snapshot capability (QPixmap::grabWidget(), or whatever it is called), but there the GL framebuffer is black.
Any ideas on how to resolve the issue and get a reliable depiction of the currently drawn scene?
Render the current scene to a framebuffer and save the data from the framebuffer to a file. Or grab the current back buffer after glFlush(). Anything else may include artifacts or an incomplete scene.
It seems that QGLWidget::grabFrameBuffer() internally calls OpenGL's glReadPixels(). On double-buffered configurations the read buffer is initially the back buffer (GL_BACK); switch to the front buffer with glReadBuffer(GL_FRONT) before calling QGLWidget::grabFrameBuffer() so that you grab the image currently displayed on the screen.
The result of QGLWidget::grabFrameBuffer(), like that of every other OpenGL call, depends on the video driver. Qt merely forwards your call to the driver and grabs the image it returns, but the content of the image is not under Qt's control. Other than making sure you have installed the latest driver for your video card, there is not much you can do except report a bug to your video card manufacturer and pray.
I call paintGL() and glFlush() before using grabFrameBuffer(). The paintGL() call draws the current frame again right before grabbing the frame buffer, which gives an exact copy of what is currently showing.
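A small sketch that combines these suggestions (redraw, flush, then read from the front buffer); whether the glReadBuffer() switch is actually needed depends on the driver, as noted above:

#include <QGLWidget>
#include <QImage>

// Grab what is currently visible in a double-buffered QGLWidget.
QImage grabVisibleFrame(QGLWidget* w)
{
    w->makeCurrent();
    w->updateGL();                 // redraw the scene right before grabbing
    glFlush();                     // make sure the GL commands have been issued
    glReadBuffer(GL_FRONT);        // grabFrameBuffer() reads via glReadPixels()
    QImage img = w->grabFrameBuffer();
    glReadBuffer(GL_BACK);         // restore the default read buffer
    return img;
}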

Questions about OpenGL Settings and drawing over a mask in a window

I would like to know the OpenGL rendering settings for having a program render OpenGL on top of any window on screen that has a specific color code (a screen-level buffer?).
For example, VLC Media Player and Media Player Classic both have rendering modes that let you go fullscreen, then minimize the player, but keep watching the media by allowing a specific color to act as a transparent mask. You could set the background color of a terminal application to 0x000010 for VLC or 0x000001 for MPC and then type text over the media (the text keeps its original color). When you take a print screen, all you get is the mask color; however, this is an acceptable side effect.
Is it possible to do this with any OpenGL application, given the right settings and hardware? If so, what are the settings, or at least the terminology of this effect so I can research it further?
What you are trying to implement is called an "overlay". You can try this angelcode tutorial. If I remember correctly, there was also a tutorial in the DirectX SDK.
If you need to use OpenGL, you will need to perform offscreen rendering (using an FBO or a P-buffer), read the results with glReadPixels(), and display them using the overlay.
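A rough sketch of the offscreen half only (desktop GL with a loader such as GLEW assumed, with glewInit() already called on a current context); how the resulting pixels are then shown in the overlay is platform-specific and not covered here:

#include <GL/glew.h>
#include <vector>

// Render one frame into an FBO-backed texture and read the pixels back.
// 'drawScene' stands in for whatever rendering code you already have.
std::vector<unsigned char> renderOffscreen(int width, int height, void (*drawScene)())
{
    GLuint fbo = 0, colorTex = 0;
    glGenFramebuffers(1, &fbo);
    glGenTextures(1, &colorTex);

    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    glViewport(0, 0, width, height);
    drawScene();                                       // offscreen rendering

    std::vector<unsigned char> pixels(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteTextures(1, &colorTex);
    glDeleteFramebuffers(1, &fbo);
    return pixels;
}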

Desktop DirectX Surface (WDM)

I'm trying to make a screen recording app. Is there a way to use DirectX to capture the entire screen and store it as a texture? This would be in WDM. I know there's a way to get the texture for a window, but what about the entire screen?
I've tried the GDI method of using GetDC(NULL), but that's rather slow for my purposes.
There are three methods: the GDI method, the DirectX method, and the Windows Media API. When you need to capture the entire screen as a texture, you should use
IDirect3DDevice9::GetFrontBufferData()
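A hedged Direct3D 9 sketch of that call (error handling omitted; device is assumed to be an existing IDirect3DDevice9*):

#include <d3d9.h>

// Copy the visible front buffer (the whole screen for a windowed device) into
// a system-memory surface. GetFrontBufferData() requires an A8R8G8B8 surface
// created in D3DPOOL_SYSTEMMEM.
IDirect3DSurface9* captureScreen(IDirect3DDevice9* device)
{
    D3DDISPLAYMODE mode = {};
    device->GetDisplayMode(0, &mode);

    IDirect3DSurface9* surface = nullptr;
    device->CreateOffscreenPlainSurface(mode.Width, mode.Height,
                                        D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM,
                                        &surface, nullptr);
    device->GetFrontBufferData(0, surface);   // swap chain 0
    return surface;   // lock/copy it into a texture, then Release() it
}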