I am porting a video streamer application from HTML to Qt. I just got the webcam working in Qt, and now I want to know: how can I get the RGB video buffer from a Qt camera? All the samples I can find capture an image to a file.
I am using Qt Widgets rather than QML, since it's a desktop application.
What I am trying to do is get the camera's image buffer, compress it, and send it over the network.
I also want to trigger this manually: the next frame should only be captured once compression and sending of the previous one are finished, to avoid timing issues.
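A minimal sketch of one common Qt 5 Widgets approach (not from the original post): subclass QAbstractVideoSurface and set it as the camera's viewfinder, so every frame is delivered to `present()`. The class name `FrameGrabber` and the chosen pixel formats are assumptions; which formats the backend actually delivers is platform-dependent, so a YUV-to-RGB conversion may still be needed.

```cpp
#include <QAbstractVideoSurface>
#include <QVideoFrame>
#include <QImage>

// Hypothetical frame grabber: receives every viewfinder frame from QCamera.
class FrameGrabber : public QAbstractVideoSurface
{
    Q_OBJECT
public:
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
        QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
    {
        Q_UNUSED(type);
        return { QVideoFrame::Format_RGB32, QVideoFrame::Format_ARGB32 };
    }

    bool present(const QVideoFrame &frame) override
    {
        if (m_busy)                        // drop frames while compressing/sending
            return true;
        QVideoFrame f(frame);              // shallow copy so it can be mapped
        if (!f.map(QAbstractVideoBuffer::ReadOnly))
            return false;
        // Wrap the mapped buffer in a QImage, then deep-copy before unmapping.
        QImage img(f.bits(), f.width(), f.height(), f.bytesPerLine(),
                   QVideoFrame::imageFormatFromPixelFormat(f.pixelFormat()));
        emit frameAvailable(img.copy());
        f.unmap();
        return true;
    }

    void setBusy(bool busy) { m_busy = busy; }

signals:
    void frameAvailable(const QImage &image);

private:
    bool m_busy = false;
};
```

Usage would be along the lines of `camera->setViewfinder(grabber); camera->start();`. The camera pushes frames rather than being polled, so the manual pacing described above can be approximated by setting the busy flag while a frame is being compressed and sent, and clearing it once the next frame may be processed.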
Related
I'm trying to adapt the non-QML C++ camera example in Qt 5.11 so that it can intercept the video frames and add a visual timestamp to them before they are written to the video file.
I want to achieve this without the use of QML.
There is a way to intercept the frames with QVideoProbe, but the frame is passed in by const reference and so can't be modified.
Any suggestions other than using QML would be appreciated.
Update: the typical way of doing this is to use QAbstractVideoFilter, but all the examples I've found only show the filter being applied from QML, so I'm initially looking at how the filter can be applied to the QCamera pipeline in C++.
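For reference, this is roughly what the read-only QVideoProbe tap mentioned above looks like in plain C++ (a sketch, not from the original post; backend support for QVideoProbe varies by platform, and the helper name `attachProbe` is made up). It illustrates the limitation: the frame arrives by const reference, so any changes stay in a local copy and never reach the recording pipeline.

```cpp
#include <QCamera>
#include <QVideoProbe>
#include <QVideoFrame>

// Read-only tap on the camera pipeline: frames can be inspected or copied,
// but a modified copy cannot be fed back to QMediaRecorder.
void attachProbe(QCamera *camera)
{
    auto *probe = new QVideoProbe(camera);
    if (!probe->setSource(camera))         // returns false if the backend has no probe support
        return;
    QObject::connect(probe, &QVideoProbe::videoFrameProbed,
                     [](const QVideoFrame &frame) {
        // map(), inspect, unmap(); drawing a timestamp here only affects a local copy
        Q_UNUSED(frame);
    });
}
```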
I've written a C++ program that receives an RTSP stream via GStreamer and displays the video in a QWidget via Qt 5. As the GStreamer video sink, I use Widgetqt5glvideosink.
The problem is that the received stream has too much red in it. This only occurs when the vertical resolution exceeds roughly 576 pixels (lower resolutions have no issue).
When I use CPU rendering (Widgetqt5videosink) instead of OpenGL rendering, I get a correct image.
When I view the stream via the GStreamer command line or via VLC, it is also correct.
So it seems to be an issue when using an OpenGL-rendered QWidget.
Is this a driver issue or something else?
Info:
Tested on Ubuntu 16.04 and 17.04 for the viewer application.
Links:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/qtvideosink_overview.html
I managed to fix my problem by patching two files in the source code of qt-gstreamer.
There were two incorrect color matrices for BT.709 colorimetry.
Patch to fix red artifact in Widgetqt5glvideosink
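For comparison, these are the standard BT.709 limited-range YCbCr-to-RGB coefficients that the corrected matrices should encode (a reference sketch written from the standard values, not copied from the patch):

```cpp
#include <algorithm>
#include <cstdint>

// Standard BT.709 limited-range YCbCr -> RGB conversion.
static uint8_t clamp255(double v)
{
    return static_cast<uint8_t>(std::min(255.0, std::max(0.0, v)));
}

void bt709ToRgb(uint8_t y, uint8_t cb, uint8_t cr,
                uint8_t &r, uint8_t &g, uint8_t &b)
{
    const double yf = 1.1644 * (y - 16);
    r = clamp255(yf + 1.7928 * (cr - 128));
    g = clamp255(yf - 0.2132 * (cb - 128) - 0.5329 * (cr - 128));
    b = clamp255(yf + 2.1124 * (cb - 128));
}
```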
Recently I tried to draw some graphics on top of VLC video using vlc-qt (which provides a video widget). The approach was to draw directly on the widget, but it failed due to the fact that vlc-qt's widget uses an internal widget to render the video. (See more details here.)
Now I'm trying something different: I want to draw text (or some rectangles) on the VLC media itself (not on the widget). I suppose that's how the VLC media player renders subtitles (isn't it?)
So the question is: given a vlc-qt interface, how can I access the underlying VLC object and draw something on it [using the libVLC API]?
I'm afraid the only way to do it with libVLC is to use libvlc_video_set_callbacks + libvlc_video_set_format_callbacks. This decodes the media stream's frames into memory buffers, which you can use as you wish.
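A rough sketch of what that looks like (assumptions: a fixed 640x480 RV32 output set with libvlc_video_set_format; in real code, libvlc_video_set_format_callbacks lets you adapt to the stream's actual size instead). libVLC decodes each frame into the buffer returned from the lock callback, so drawings can be added in the unlock callback before the display callback hands the buffer to the UI.

```cpp
#include <vlc/vlc.h>
#include <vector>

struct VideoContext {
    std::vector<unsigned char> pixels;   // BGRA buffer, width * height * 4 bytes
    unsigned width = 640, height = 480;  // assumed fixed size for this sketch
};

static void *lock_cb(void *opaque, void **planes)
{
    auto *ctx = static_cast<VideoContext *>(opaque);
    *planes = ctx->pixels.data();        // libVLC decodes the frame into this buffer
    return nullptr;                      // picture identifier (unused here)
}

static void unlock_cb(void *opaque, void *picture, void *const *planes)
{
    // The frame is fully decoded: draw text/rectangles into the pixel buffer here.
    (void)opaque; (void)picture; (void)planes;
}

static void display_cb(void *opaque, void *picture)
{
    // Hand the (possibly modified) buffer to whatever widget renders it.
    (void)opaque; (void)picture;
}

void setup_video_callbacks(libvlc_media_player_t *mp, VideoContext *ctx)
{
    ctx->pixels.resize(ctx->width * ctx->height * 4);
    libvlc_video_set_callbacks(mp, lock_cb, unlock_cb, display_cb, ctx);
    libvlc_video_set_format(mp, "RV32", ctx->width, ctx->height, ctx->width * 4);
}
```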
I'm developing an application that loads frames from an Ethernet camera and displays them in an element within a Qt QWebView.
So I would like to ask: what is the best or most efficient way to display the camera's images in sequence, so that the user sees them as live video?
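One common low-overhead alternative to pushing images into a QWebView (a sketch under my own assumptions, with a made-up class name `VideoWidget`): keep only the newest decoded frame in a plain QWidget and repaint it whenever a new frame arrives.

```cpp
#include <QWidget>
#include <QImage>
#include <QPainter>
#include <QPaintEvent>

// Repaints whatever frame was pushed to it last; Qt coalesces the update()
// calls, so painting happens at most once per event-loop pass.
class VideoWidget : public QWidget
{
    Q_OBJECT
public slots:
    void setFrame(const QImage &frame)
    {
        m_frame = frame;
        update();                          // schedule a repaint
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter p(this);
        if (!m_frame.isNull())
            p.drawImage(rect(), m_frame);  // scaled to the widget size
    }

private:
    QImage m_frame;
};
```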
I'm using the VLC library to create a simple media player; the program will display instructions on top of the video. These instructions vary in position, size, and color. I need to process the video frame before it's displayed so I can add my drawings. How can this be done? And how can I have libVLC show this large text when the volume is changed up or down?
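For drawing on the frames themselves, the libvlc_video_set_callbacks approach from the earlier answer applies here as well. For the large volume text specifically, one option that avoids touching the frames is libVLC's marquee filter, sketched below (the size, position code, and timeout values are illustrative):

```cpp
#include <vlc/vlc.h>

// Show a transient text overlay via libVLC's marquee filter.
void show_overlay_text(libvlc_media_player_t *mp, const char *text)
{
    libvlc_video_set_marquee_int(mp, libvlc_marquee_Enable, 1);
    libvlc_video_set_marquee_string(mp, libvlc_marquee_Text, text);
    libvlc_video_set_marquee_int(mp, libvlc_marquee_Size, 48);       // font size in pixels
    libvlc_video_set_marquee_int(mp, libvlc_marquee_Position, 6);    // VLC position code: 4 (top) | 2 (right)
    libvlc_video_set_marquee_int(mp, libvlc_marquee_Timeout, 2000);  // hide after 2000 ms
}
```

This could be called, for example, as `show_overlay_text(player, "Volume 75%")` whenever the volume changes.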