Recently I tried to draw some graphics on top of VLC video using vlc-qt (which provides a video widget). The approach was to draw directly on the widget, but it failed due to the fact that vlc-qt's widget uses an internal widget to render the video. (See more details here.)
Now I'm trying something different. I want to draw text (or some rectangles) on the VLC media itself, not on the widget. I suppose that's how the VLC media player renders subtitles, isn't it?
So the question is this: given a vlc-qt interface, how can I access the underlying VLC object and draw something on it using the libVLC API?
I'm afraid the only way to do this with libVLC is to use libvlc_video_set_callbacks together with libvlc_video_set_format_callbacks. libVLC will then decode the media stream's frames into memory that you own, and you can use them as you wish.
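A minimal sketch of that approach, assuming you can get the raw libvlc_media_player_t * out of vlc-qt (recent versions expose it through the VlcMediaPlayer core accessor; check your version). For brevity it uses the simpler fixed-format libvlc_video_set_format instead of the format callbacks; VideoContext, lockCb, displayCb and attachCallbacks are hypothetical names:

#include <vlc/vlc.h>
#include <QImage>
#include <QPainter>

struct VideoContext
{
    QImage frame;                         // libVLC decodes straight into this image
};

static void *lockCb(void *opaque, void **planes)
{
    auto *ctx = static_cast<VideoContext *>(opaque);
    *planes = ctx->frame.bits();          // hand libVLC our pixel buffer
    return nullptr;                       // picture identifier (unused here)
}

static void displayCb(void *opaque, void *picture)
{
    Q_UNUSED(picture);
    auto *ctx = static_cast<VideoContext *>(opaque);
    QPainter p(&ctx->frame);              // draw the overlay on the decoded frame
    p.setPen(Qt::red);
    p.drawRect(20, 20, 120, 40);
    p.drawText(30, 45, "overlay");
    // ... hand ctx->frame to the GUI thread, e.g. with a queued signal
}

void attachCallbacks(libvlc_media_player_t *mp, VideoContext *ctx, int w, int h)
{
    ctx->frame = QImage(w, h, QImage::Format_RGB32);   // RV32 matches Format_RGB32
    libvlc_video_set_format(mp, "RV32", w, h, w * 4);
    libvlc_video_set_callbacks(mp, lockCb, nullptr, displayCb, ctx);
}

Note that with this vmem route you take over rendering entirely: libVLC no longer draws into the vlc-qt widget, so you display the painted image yourself. The callbacks also run on a libVLC thread, so don't touch widgets from them directly.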
Related
I am porting a video streamer application from HTML to Qt. I just got the webcam working in Qt, and now I want to know: how can I get the RGB video buffer from a Qt camera? All the samples I can find capture an image to a file.
I am using Qt Widgets, not QML, since it's a desktop application.
What I am trying to do is get the camera's image buffer, compress it, and send it over the network.
And I want to trigger this manually: I want to request the next frame only when all the compression and sending is done, to prevent timing issues.
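In Qt 5 one way to do this is a QAbstractVideoSurface subclass set as the camera's viewfinder: present() is called for every frame, and mapping the frame gives you the raw pixels. A minimal sketch (FrameGrabber is a hypothetical name; which pixel formats actually arrive depends on your camera backend):

#include <QAbstractVideoSurface>
#include <QVideoFrame>

class FrameGrabber : public QAbstractVideoSurface
{
public:
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
    {
        Q_UNUSED(type);
        return { QVideoFrame::Format_RGB32, QVideoFrame::Format_ARGB32 };
    }

    bool present(const QVideoFrame &frame) override
    {
        QVideoFrame copy(frame);                        // shallow copy so we can map it
        if (copy.map(QAbstractVideoBuffer::ReadOnly)) {
            // copy.bits() points at the raw pixels and copy.bytesPerLine()
            // gives the stride; compress and send from here.
            copy.unmap();
        }
        return true;
    }
};

// Usage:
//   QCamera camera;
//   FrameGrabber grabber;
//   camera.setViewfinder(&grabber);
//   camera.start();

For the manual pacing you describe, one option is to drop frames inside present() (e.g. guard with an atomic "busy" flag) until the previous compress-and-send has finished, rather than trying to throttle the camera itself.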
I'm trying to adapt the non-QML C++ camera example in Qt 5.11 so that it can intercept the video frames and add a visual timestamp to them before they are written to the video file.
I want to achieve this without the use of QML.
There is a way to intercept the frames with QVideoProbe, but the frame is passed in by const reference and so can't be modified.
Any suggestions other than using QML would be appreciated.
Update: the typical way of doing this is to use QAbstractVideoFilter, but all the examples I've found only show the filter being applied from QML, so I'm initially looking at how the filter can be applied to the QCamera pipeline in C++.
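I'm not aware of a supported way to attach a QAbstractVideoFilter to a QCamera pipeline from C++ in Qt 5.11. One workaround is a proxy surface: a QAbstractVideoSurface that maps each frame, paints the timestamp with QPainter, and forwards it to the real sink. A rough sketch under those assumptions (TimestampSurface is a hypothetical name; it only works when the backend delivers mappable, writable frames, and it affects the frames your surface receives rather than what QMediaRecorder writes to file, so recording stamped frames would still need your own encoding step):

#include <QAbstractVideoSurface>
#include <QVideoSurfaceFormat>
#include <QImage>
#include <QPainter>
#include <QDateTime>

class TimestampSurface : public QAbstractVideoSurface
{
public:
    explicit TimestampSurface(QAbstractVideoSurface *target, QObject *parent = nullptr)
        : QAbstractVideoSurface(parent), m_target(target) {}

    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
    {
        return m_target->supportedPixelFormats(type);   // defer to the real sink
    }

    bool start(const QVideoSurfaceFormat &format) override
    {
        return m_target->start(format) && QAbstractVideoSurface::start(format);
    }

    bool present(const QVideoFrame &frame) override
    {
        QVideoFrame copy(frame);
        if (copy.map(QAbstractVideoBuffer::ReadWrite)) {
            // Wrap the mapped bytes in a QImage and stamp the time on it.
            QImage img(copy.bits(), copy.width(), copy.height(), copy.bytesPerLine(),
                       QVideoFrame::imageFormatFromPixelFormat(copy.pixelFormat()));
            QPainter p(&img);
            p.setPen(Qt::yellow);
            p.drawText(10, 20, QDateTime::currentDateTime().toString());
            p.end();
            copy.unmap();
        }
        return m_target->present(copy);                 // forward the stamped frame
    }

private:
    QAbstractVideoSurface *m_target;
};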
I'm using Qt, and I can encapsulate the video stream from my Logitech webcam in a QCamera, subclass QAbstractVideoSurface, and from there feed it to a video widget (specifically a QCameraViewfinder) without problems. As far as I know, the video frames never enter system memory, and that's what I want. However, I further want to manipulate this video (add some overlays). To do this, for each frame I need to get the handle (QAbstractVideoBuffer::handle()) and use it with e.g. the OpenGL API to add my overlays.
bool MyVideoSurface::present(const QVideoFrame& frame)
{
    QVariant h = frame.handle();   // expected to hold e.g. a GL texture id
    // ... manipulate h using OpenGL
    // ... send the frame on to a video widget
    return true;
}
The problem is that when I do this, h is invalid, no matter how I specify the QVideoSurfaceFormat. E.g. earlier at initialization I have
QVideoSurfaceFormat qsvf(surfaceSize, pixelFormat, QAbstractVideoBuffer::GLTextureHandle);
mVideoSurface->start(qsvf); // MyVideoSurface instance
(Relevant docs here and here.) It doesn't matter which handle type I pass in; none of them are valid (not surprisingly, in some cases).
Is there any way for me to access my webcam's video frames without mapping? I'm happy with solutions that break any specific assumptions I've made about which Qt classes are the relevant ones, but the overlays are a requirement and the work needs to happen on the GPU. Is the ability to access frames this way hardware dependent, i.e. do I need to choose a device in advance that supports it?
I'd like to make a detailed video list in my Qt application using vlc-qt. Other playback engines such as QtAV or QtMultimedia are not an option; it has to be vlc-qt (libvlc). That's why I need to get a small preview picture for each video, but I can't find anything suitable for this task except libvlc_video_take_snapshot. That method saves a picture to disk, and I guess it needs a real render window to exist. That's not a good option for me; is there a better solution?
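One possibility is the vmem approach: have libVLC decode into memory via libvlc_video_set_callbacks, which needs no render window at all, then grab a single frame and wrap it in a QImage. A rough, untested sketch (libVLC 3.x signatures; makeThumbnail and ThumbCtx are hypothetical names, and the frame-ready synchronisation is elided):

#include <vlc/vlc.h>
#include <QImage>

struct ThumbCtx
{
    QImage image;
};

static void *thumbLock(void *opaque, void **planes)
{
    auto *ctx = static_cast<ThumbCtx *>(opaque);
    *planes = ctx->image.bits();          // decode directly into the QImage
    return nullptr;
}

static void thumbDisplay(void *opaque, void *picture)
{
    // A complete frame now sits in ctx->image; signal the waiting code here.
    (void)opaque; (void)picture;
}

QImage makeThumbnail(libvlc_instance_t *vlc, const char *path, int w, int h)
{
    ThumbCtx ctx{ QImage(w, h, QImage::Format_RGB32) };
    libvlc_media_t *media = libvlc_media_new_path(vlc, path);
    libvlc_media_player_t *mp = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    libvlc_video_set_format(mp, "RV32", w, h, w * 4);
    libvlc_video_set_callbacks(mp, thumbLock, nullptr, thumbDisplay, &ctx);

    libvlc_media_player_play(mp);
    libvlc_media_player_set_position(mp, 0.1f);   // seek a bit in for a representative frame
    // ... block until thumbDisplay fires (e.g. a QWaitCondition), then:
    libvlc_media_player_stop(mp);
    libvlc_media_player_release(mp);
    return ctx.image;
}

A nice side effect is that requesting the thumbnail size in libvlc_video_set_format makes libVLC do the scaling for you.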
I need to play animated characters over the screen on Windows. Basically, it will be a character video with transparency, and only the non-transparent parts should accept user input (e.g. mouse clicks); all other events should be passed through to the underlying window.
I've made a simple transparent DirectX window with video in it. But I don't know how to make parts of this window "transparent" to user input. So if I click on the character, my application should accept the click; if I click on a transparent part of the video, the click should be handled by the underlying window. How can I do that?
Thanks in advance.
I assume you mean DirectShow rather than DirectX?
You can do it using the Video Mixing Renderer. As with anything DirectShow, it's not necessarily easy.
First, connect the video to the VMR filter.
Second, for the animated characters, all you need to do is build a simple DirectShow push source filter (it's explained really well in the DirectShow samples) that supplies the animation frames.
Third, you need to implement the IVMRImageCompositor interface. You can then use DirectX to composite the images.
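For reference, a rough, untested sketch of wiring this up with the VMR-9 flavour (for VMR-7 the equivalent call is IVMRFilterConfig::SetImageCompositor): create the renderer, put it in mixing mode with two input streams (the main video plus the character animation), and install the custom compositor. setupVmr9 is a hypothetical name and error handling is abbreviated:

#include <dshow.h>
#include <d3d9.h>
#include <vmr9.h>

HRESULT setupVmr9(IGraphBuilder *graph, IVMRImageCompositor9 *compositor)
{
    IBaseFilter *vmr = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_VideoMixingRenderer9, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                  reinterpret_cast<void **>(&vmr));
    if (FAILED(hr))
        return hr;

    hr = graph->AddFilter(vmr, L"VMR9");
    if (SUCCEEDED(hr)) {
        IVMRFilterConfig9 *config = nullptr;
        hr = vmr->QueryInterface(IID_IVMRFilterConfig9,
                                 reinterpret_cast<void **>(&config));
        if (SUCCEEDED(hr)) {
            config->SetNumberOfStreams(2);      // mixing mode: video + character overlay
            hr = config->SetImageCompositor(compositor);
            config->Release();
        }
    }
    vmr->Release();                             // the graph keeps its own reference
    return hr;
}

Your IVMRImageCompositor9 implementation then receives the streams as Direct3D surfaces in CompositeImage and can blend the character frames, alpha included, however you like.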