Display a video using imgui - c++

I'm trying to create an interface that would allow me to drive a remote-controlled car.
I was wondering if it is possible to display a video using ImGui? I know I can split my video into several frames and display each frame one after the other, but is there any other way to do this?
Thank you!

Yes, it is possible to display a video in Dear ImGui.
One example is displaying a webcam feed captured using ESCAPI.
Refer to https://github.com/jarikomppa/escapi/ for more details.
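Whatever the capture library, the usual approach is to copy each decoded frame into a GPU texture and hand that texture to ImGui::Image. A minimal sketch, assuming the OpenGL backend; the capture step that fills `pixels` and the helper name are mine, not part of ESCAPI:

```cpp
#include <cstdint>
#include <GL/gl.h>
#include "imgui.h"

// `pixels` is a w*h RGBA buffer filled by your capture code;
// `tex` was created once with glGenTextures.
void drawVideoFrame(GLuint tex, const unsigned char *pixels, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Re-uploads the whole frame; glTexSubImage2D avoids reallocating
    // storage when the frame size never changes.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    ImGui::Begin("Video");
    // ImGui treats the texture handle as an opaque ImTextureID.
    ImGui::Image((ImTextureID)(intptr_t)tex, ImVec2((float)w, (float)h));
    ImGui::End();
}
```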

I once developed an application that displayed video via ImGui, and it did work, but there were performance limitations. If you don't need to display more than 8 feeds at a time, you should be okay.
You'll need an appsink in your GStreamer pipeline; from the appsink you pull the GstBuffer, convert it to a GL texture, and pass the GL texture to ImGui.
You can reference this repo; it's the same one I used as a starting block:
https://github.com/tbeloqui/gst-imgui
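A minimal sketch of that appsink step, assuming the appsink's caps are set to RGBA and a GL context is current (the helper name is mine, not from the repo):

```cpp
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <GL/gl.h>

// Pull the newest frame from the appsink (non-blocking) and upload it
// into an existing GL texture; pass `tex` to ImGui::Image afterwards.
bool pullFrameToTexture(GstAppSink *appsink, GLuint tex, int width, int height)
{
    GstSample *sample = gst_app_sink_try_pull_sample(appsink, 0);
    if (!sample)
        return false;                       // no new frame this tick

    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, map.data);
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
    return true;
}
```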

Related

How to intercept and modify video frames from QCamera before they are saved to video file

I'm trying to adapt the C++ non-QML camera example in Qt 5.11 so that it can intercept video frames and add a visual time stamp to them before they are written to the video file.
I want to achieve this without the use of QML.
There is a way to intercept the frames with QVideoProbe, but the frame is passed in by const reference and so can't be modified.
Any suggestions other than using QML would be appreciated.
Update - the typical way of doing this is to use QAbstractVideoFilter, but all the examples I've found only show the filter being applied from QML, so I'm initially looking at how the filter can be applied to the QCamera pipeline in C++.
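For reference, read-only interception with QVideoProbe looks roughly like this in C++; the sketch only illustrates the const-ref limitation described above (the function name and lambda body are mine):

```cpp
#include <QCamera>
#include <QObject>
#include <QVideoFrame>
#include <QVideoProbe>

void attachProbe(QCamera *camera)
{
    auto *probe = new QVideoProbe(camera);
    // setSource returns false if the backend doesn't support probing.
    if (probe->setSource(camera)) {
        QObject::connect(probe, &QVideoProbe::videoFrameProbed,
                         [](const QVideoFrame &frame) {
                             // Read-only access: you can inspect or copy the
                             // frame here, but modifying a copy does not change
                             // what gets written to the video file.
                             Q_UNUSED(frame);
                         });
    }
}
```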

Red artifact on visualizing rtsp stream via gstreamer and qt5

I've written a C++ program that receives an RTSP stream via GStreamer and displays the video via Qt5 in a QWidget. As the GStreamer video sink, I use a Widgetqt5glvideosink.
The problem is that the received stream has too much red in it. This only occurs when the vertical resolution exceeds roughly 576 pixels (lower resolutions have no issue).
When I use CPU rendering (Widgetqt5videosink) instead of OpenGL rendering, I get a correct image.
When I view the stream via the gstreamer command line or via VLC, it is also correct.
So it seems to be an issue with an OpenGL-rendered QWidget.
Is this a driver issue or something else?
Info:
Tested on Ubuntu 16.04 and 17.04 for the viewer application.
Links:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/qtvideosink_overview.html
I managed to fix my problem by patching two files in the source code of qt-gstreamer.
There were two wrong color matrices for BT.709 colorimetry.
Patch to fix red artifact in Widgetqt5glvideosink
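I haven't reproduced the exact patch here, but for reference, the standard BT.709 limited-range YCbCr-to-RGB coefficients that the corrected matrices should correspond to look like this:

```cpp
#include <algorithm>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

static uint8_t clamp8(double v)
{
    return static_cast<uint8_t>(std::clamp(v, 0.0, 255.0));
}

// BT.709, 8-bit limited range (Y in [16,235], Cb/Cr in [16,240]).
Rgb bt709ToRgb(uint8_t y8, uint8_t cb8, uint8_t cr8)
{
    const double y  = 1.1644 * (y8 - 16);   // rescale luma to full range
    const double cb = cb8 - 128.0;          // center chroma on zero
    const double cr = cr8 - 128.0;
    return {
        clamp8(y + 1.7927 * cr),
        clamp8(y - 0.2132 * cb - 0.5329 * cr),
        clamp8(y + 2.1124 * cb),
    };
}
```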

OpenGL - Display a video stream of the desktop on Windows

So I am trying to figure out how to get a video feed (or a screenshot feed, if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolkit to make, essentially, a virtual screen. The only issue is that I have tried manually getting the pixels in OpenGL, but I have been unable to properly display them in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but due to all the dependencies, getting ARToolkit code running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are two different problems here:
a) Making an augmentation that plays video
b) Streaming the desktop to somewhere else
For playing video on an augmentation, you basically need a texture that gets updated on each frame. I recall that ARToolkit for Unity has an example that plays video.
Streaming the desktop to another device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me that what you want to do is make a VLC viewer and put it into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.
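If you do want to roll the capture yourself, here is a rough sketch of the GDI route on Windows: grab the desktop and upload it as the per-frame texture mentioned above (error handling omitted; the function name is mine):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <vector>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1   // EXT_bgra; not in the stock GL 1.1 header
#endif

// Grab the primary desktop with GDI and upload it into an OpenGL texture.
// Assumes a current GL context; width/height would typically come from
// GetSystemMetrics(SM_CXSCREEN) / GetSystemMetrics(SM_CYSCREEN).
void captureDesktopToTexture(GLuint tex, int width, int height)
{
    HDC screenDc = GetDC(nullptr);                  // DC for the whole screen
    HDC memDc    = CreateCompatibleDC(screenDc);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDc, width, height);
    HGDIOBJ old  = SelectObject(memDc, bmp);

    // Copy the visible desktop into our bitmap.
    BitBlt(memDc, 0, 0, width, height, screenDc, 0, 0, SRCCOPY);

    // Ask GDI for the raw pixels as 32-bit BGRA, top-down.
    BITMAPINFO bi = {};
    bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth       = width;
    bi.bmiHeader.biHeight      = -height;           // negative = top-down rows
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    GetDIBits(memDc, bmp, 0, height, pixels.data(), &bi, DIB_RGB_COLORS);

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());

    SelectObject(memDc, old);
    DeleteObject(bmp);
    DeleteDC(memDc);
    ReleaseDC(nullptr, screenDc);
}
```

GDI capture is slow at full-screen sizes; the DXGI Desktop Duplication API is the faster route on Windows 8+, but the GDI version is far simpler to get running.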

C++ DirectShow Video and Audio capture - beginning

I have finally managed to move away from VFW after several problems I encountered during application development.
Thanks to Stack Overflow, I am now aware that VFW is obsolete and wish to switch to DShow, so my application can work on Vista/W7.
Unfortunately, the work had been done and the application had been shipped to the client, but as soon as we realized we had trouble with frame rates on Vista/W7, we decided to rewrite the video class and use DirectShow to establish a good audio/video capture engine for webcams.
This will be tricky, as we have never coded with DShow, and right now we are looking for a few specific examples of how to:
Connect to a selected webcamera
similar to: capDriverConnect
Set camera resolution to 640x480 and RGB24 format (we need to convert RGB24 to YUV420 for each frame)
similar to: capSetVideoFormat / capCaptureSetSetup
Set audio capturing for this webcamera
similar to: capSetAudioFormat
Register two callbacks:
One for video frame ( we will pass frames to video encoder )
similar to: capSetCallbackOnVideoStream
One for wave buffer ( we will pass wave buffer to audio encoder )
similar to: capSetCallbackOnWaveStream
Be able to show a preview window somewhere on parent window
similar to: capPreview
Perform Start/Stop operations when needed
Start - would mean connect and start capturing audio/video frames
Stop - would mean disconnect and stop capturing audio/video frames
Perform drawing to the actual frame
similar to:
SetBitmapBits(CameraInput.GetFrameBitmap(), w*h*3, vdhdr->lpData); // copy the captured frame into the bitmap
// draw something with GDI+
GetBitmapBits(CameraInput.GetFrameBitmap(), w*h*3, vdhdr->lpData); // copy the modified bitmap back into the frame data
All of the above was already done with VFW, but as I wrote before, we unfortunately need to switch to DirectShow.
Is there anyone who could help us achieve a class that could rescue us from months of studying DirectShow?
Your best bet for examples will be the ones from Microsoft.
Your questions are still phrased in terms of VFW, so it's hard to answer them as written. For example, in DirectShow you wouldn't register a callback to encode a video frame. Instead, you'd develop an encoder filter that receives data from the capture source.
As an alternative, if you're only targeting Vista and later, there is Microsoft Media Foundation. I have no experience with it, so I don't know how the learning curve compares to DirectShow.
I'd suggest you build a graph in GraphEdit using ffdshow filters; GraphEdit gives a live demonstration of building a DirectShow graph.
I don't think you need to build the filter class on your own. Once you've built the graph, you'll be able to watch the video using GraphEdit, and implementing the same graph in code is a very simple task.
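For a taste of what the DirectShow side looks like in code, here is a rough sketch of the equivalents of capDriverConnect and capPreview, built with ICaptureGraphBuilder2. Error handling and COM cleanup are trimmed, and it assumes CoInitialize has been called; resolution and frame callbacks would be configured separately via IAMStreamConfig and a Sample Grabber:

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Build a preview graph for the first webcam found.
HRESULT buildPreviewGraph(IGraphBuilder **outGraph)
{
    IGraphBuilder *graph = nullptr;
    ICaptureGraphBuilder2 *builder = nullptr;
    ICreateDevEnum *devEnum = nullptr;
    IEnumMoniker *cams = nullptr;
    IMoniker *moniker = nullptr;
    IBaseFilter *camera = nullptr;

    CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void **)&graph);
    CoCreateInstance(CLSID_CaptureGraphBuilder2, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, (void **)&builder);
    builder->SetFiltergraph(graph);

    // Enumerate video capture devices (the capDriverConnect analogue).
    CoCreateInstance(CLSID_SystemDeviceEnum, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void **)&devEnum);
    devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &cams, 0);
    if (cams && cams->Next(1, &moniker, nullptr) == S_OK) {
        moniker->BindToObject(nullptr, nullptr, IID_IBaseFilter, (void **)&camera);
        graph->AddFilter(camera, L"Capture");
        // Render the preview pin into a default video window (capPreview).
        builder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                              camera, nullptr, nullptr);
    }
    *outGraph = graph;   // start it with IMediaControl::Run() on this graph
    return S_OK;
}
```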

Simplest way to get audio/video data from DirectShow

I compiled the DirectShow sample player (from the Windows SDK's "Samples\multimedia\directshow\players\dshowplayer" folder).
Everything works well, but it renders directly to the screen and the audio goes directly to DirectSound. I need to be able to grab the data, write out images to BMPs, and write out the audio to .wav.
Am I using the wrong sample as a starting point? If not, what is the easiest way to modify the sample so I can get access to the video and audio data?
Thanks!
You can insert a Sample Grabber filter before the renderer and use the ISampleGrabberCB interface to access the data. You can still render the video to the screen and output the audio. If you don't want that, use a Null Renderer instead. See also this example on CodeProject.
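A minimal sketch of such a callback, assuming qedit.h (deprecated but still the usual header for the Sample Grabber interfaces); the BMP writing itself is left out:

```cpp
#include <dshow.h>
#include <qedit.h>   // ISampleGrabber / ISampleGrabberCB (deprecated header)

// Receives a copy of each media sample; register it with
// ISampleGrabber::SetCallback(&cb, 1) so that BufferCB is invoked.
class FrameGrabberCB : public ISampleGrabberCB
{
public:
    STDMETHODIMP BufferCB(double sampleTime, BYTE *buffer, long length) override
    {
        // `buffer` holds one frame in the format fixed earlier with
        // ISampleGrabber::SetMediaType; write it out as a BMP here.
        return S_OK;
    }
    STDMETHODIMP SampleCB(double, IMediaSample *) override { return E_NOTIMPL; }

    // Minimal IUnknown for an object that outlives the graph.
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) override
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) {
            *ppv = this;
            return S_OK;
        }
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override { return 1; }
    STDMETHODIMP_(ULONG) Release() override { return 1; }
};
```

Set the media type to 24-bit RGB before connecting the graph and BufferCB will hand you ready-to-save DIB data; a second grabber on the audio branch gives you PCM buffers for the .wav.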