Red artifact when visualizing an RTSP stream via GStreamer and Qt5 - C++

I've written a C++ program that receives an RTSP stream via GStreamer and displays the video in a QWidget via Qt5. As the GStreamer video sink, I use a Widgetqt5glvideosink.
The problem is that the received stream has too much red in it. This only occurs when the vertical resolution exceeds roughly 576 pixels (lower resolutions have no issue).
When I use CPU rendering (Widgetqt5videosink) instead of OpenGL rendering, I get a correct image.
When I view the stream via the GStreamer command line or via VLC, it is also correct.
So it seems to be an issue with the OpenGL-rendered QWidget.
Is this a driver issue or something else?
Info:
Tested the viewer application on Ubuntu 16.04 and 17.04.
Links:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/qtvideosink_overview.html

I managed to fix my problem by patching two files in the source code of qt-gstreamer.
There were two wrong color conversion matrices for BT.709 colorimetry.
Patch to fix red artifact in Widgetqt5glvideosink
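For reference, here is a minimal sketch of the limited-range BT.709 YCbCr-to-RGB transform that those matrices should encode (the coefficients are the standard BT.709 values; the actual patch edits the matrices inside qt-gstreamer's sink, not this standalone function):

#include <algorithm>

struct Rgb { unsigned char r, g, b; };

// Limited-range BT.709 YCbCr -> RGB (Y: 16..235, Cb/Cr: 16..240).
Rgb bt709_to_rgb(unsigned char y, unsigned char cb, unsigned char cr)
{
    const double Y = y - 16.0, Cb = cb - 128.0, Cr = cr - 128.0;

    // Standard BT.709 limited-range coefficients.
    const double r = 1.164 * Y + 1.793 * Cr;
    const double g = 1.164 * Y - 0.213 * Cb - 0.533 * Cr;
    const double b = 1.164 * Y + 2.112 * Cb;

    auto clamp8 = [](double v) {
        return static_cast<unsigned char>(std::min(255.0, std::max(0.0, v)));
    };
    return { clamp8(r), clamp8(g), clamp8(b) };
}

If a wrong matrix is applied (for example BT.601 coefficients, or a matrix with misplaced terms), the red and blue channels shift noticeably, which matches the artifact described above.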

Related

Display a video using imgui

I'm trying to create an interface that would allow me to drive a remote-controlled car.
I was wondering if it is possible to display a video using ImGui? I know I can split my video into several frames and display each frame one after the other, but is there any other way to do this?
Thank you!
Yes, it is possible to display a video in Dear ImGui.
For example, a webcam feed can be captured and displayed this way using ESCAPI; refer to https://github.com/jarikomppa/escapi/ for more details.
I once developed an application that displayed video via ImGui, and it did work, but there were performance limitations. If you don't need to display more than 8 feeds at a time, you should be okay.
You'll need an appsink in your GStreamer pipeline; from the appsink you pull the GstBuffer, convert it to a GL texture, and then pass the GL texture to ImGui (a sketch of this is shown below).
You can reference this repo; it's the same one I used as a starting block:
https://github.com/tbeloqui/gst-imgui
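A minimal sketch of that appsink-to-texture step, assuming the appsink is capped to RGBA, a GL context is current, and tex is an already-created texture (names and the omitted error handling are illustrative only):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <GL/gl.h>
#include "imgui.h"

// Pull one decoded frame from the appsink, upload it to a GL texture,
// and draw it as an ImGui image.
void draw_frame(GstAppSink *sink, GLuint tex, int width, int height)
{
    GstSample *sample = gst_app_sink_pull_sample(sink);  // blocks for a frame
    if (!sample)
        return;  // pipeline is EOS or shutting down

    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        // Upload the raw RGBA pixels into the texture.
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, map.data);
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);

    // ImGui treats the texture like any other image.
    ImGui::Image((ImTextureID)(intptr_t)tex,
                 ImVec2((float)width, (float)height));
}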

Qt widget: How to get the RGB buffer from QCamera

I am porting a video streamer application to Qt (from HTML). I just got the webcam working in Qt, and now I want to know how I can get the RGB video buffer from a Qt camera. All the samples I can find capture an image to a file.
I am using Qt Widgets and not QML, since it's a desktop application.
What I am trying to do is get the camera's image buffer, compress it, and send it over the network.
And I want to trigger this manually, since I want to capture the next frame only when all the compression and sending is done, to prevent timing issues.
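One way to get at the raw frames in Qt5 Widgets is to give the QCamera a custom QAbstractVideoSurface as its viewfinder. A minimal sketch, assuming Qt5 Multimedia and an RGB32-capable camera (processBuffer is a hypothetical placeholder for your compress-and-send step):

#include <QAbstractVideoSurface>
#include <QVideoFrame>
#include <QCamera>

// Receives every viewfinder frame; map() exposes the raw pixel buffer.
class FrameGrabber : public QAbstractVideoSurface
{
public:
    QList<QVideoFrame::PixelFormat>
    supportedPixelFormats(QAbstractVideoBuffer::HandleType) const override
    {
        return { QVideoFrame::Format_RGB32 };
    }

    bool present(const QVideoFrame &frame) override
    {
        if (m_busy)
            return true;  // drop frames while compressing/sending

        QVideoFrame copy(frame);
        if (copy.map(QAbstractVideoBuffer::ReadOnly)) {
            m_busy = true;
            // copy.bits() points at the raw RGB32 pixels.
            // processBuffer(copy.bits(), copy.mappedBytes()); // your code here
            copy.unmap();
            m_busy = false;
        }
        return true;
    }

private:
    bool m_busy = false;
};

// Usage:
//   QCamera camera;
//   FrameGrabber grabber;
//   camera.setViewfinder(&grabber);
//   camera.start();

The busy flag gives you the manual pacing you describe: frames arriving while a buffer is still being compressed and sent are simply dropped.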

OpenGL - Display a video stream of the desktop on Windows

So I am trying to figure out how to get a video feed (or a screenshot feed if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolkit to make, essentially, a virtual screen. The only issue is that I have tried manually getting the pixels in OpenGL, but I have been unable to properly display them in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but due to all the dependencies, getting an ARToolkit example running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are two different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation, you basically need a texture that gets updated on each frame. I recall that ARToolkit for Unity has an example that plays video.
Streaming the desktop to another device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me that what you want to do is make a VLC viewer and put it into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers. A sketch of the capture-to-texture half is shown below.
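For the local case (capturing your own desktop into a texture each frame), a minimal sketch using GDI, assuming a GL context is current and tex is an existing texture; this grabs only the primary monitor and is the simplest path rather than the fastest:

#include <windows.h>
#include <GL/gl.h>
#include <vector>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1
#endif

// Copy the visible desktop into an OpenGL texture (call once per frame).
void capture_desktop_to_texture(GLuint tex)
{
    const int w = GetSystemMetrics(SM_CXSCREEN);
    const int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screen = GetDC(nullptr);
    HDC mem = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
    HGDIOBJ old = SelectObject(mem, bmp);

    BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);  // grab the desktop

    BITMAPINFO bi = {};
    bi.bmiHeader.biSize = sizeof(bi.bmiHeader);
    bi.bmiHeader.biWidth = w;
    bi.bmiHeader.biHeight = -h;            // negative height = top-down rows
    bi.bmiHeader.biPlanes = 1;
    bi.bmiHeader.biBitCount = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    std::vector<BYTE> pixels(static_cast<size_t>(w) * h * 4);
    GetDIBits(mem, bmp, 0, h, pixels.data(), &bi, DIB_RGB_COLORS);

    // Upload as BGRA, the byte order GDI produces.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());

    SelectObject(mem, old);
    DeleteObject(bmp);
    DeleteDC(mem);
    ReleaseDC(nullptr, screen);
}

The resulting texture can then be mapped onto the augmentation quad like any other video texture.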

FFmpeg disables the Desktop Window Manager in my app

I render video using DirectShow & FFmpeg in my app. FFmpeg is only used for decoding MPEG-4 Part 2 frames (custom decoder filter). My app does not play audio (only video).
When I lock my PC (Win7 Pro 64-bit) with Win+L and then unlock it, Windows shows me the following message:
The color scheme has been changed
The following program has performed an action that requires Windows to temporarily change the color scheme to Windows 7 Basic.
...app name, publisher, pid...
Windows will automatically change the color scheme back to Windows Aero when this program or other programs performing similar actions are no longer running.
I also have another custom decoder filter, developed without FFmpeg, and when using it Windows does not show such messages.
I ran the Aero troubleshooter, and it detected that the Desktop Window Manager was disabled.
My main question: why does this message appear after unlocking?
P.S. I'm using the FFmpeg mpeg4 decoder and sws_scale from RGB24 to YUV420P. FFmpeg was built with only the mpeg4 decoder/encoder; everything else was disabled.
The issue was caused by providing a negative height (top-down bitmap) in the BITMAPINFOHEADER when negotiating media types.
I changed the height to be positive (bottom-up bitmap) in my decoder, and the Windows 7 color scheme is no longer touched.
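A minimal sketch of that fix, assuming the decoder fills a VIDEOINFOHEADER during DirectShow media-type negotiation (the field values other than biHeight are illustrative):

#include <dshow.h>

// Describe the decoder's output as a bottom-up RGB24 bitmap.
void fill_output_format(VIDEOINFOHEADER *vih, LONG width, LONG height)
{
    BITMAPINFOHEADER &bmi = vih->bmiHeader;
    bmi.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.biWidth       = width;
    bmi.biHeight      = height;   // positive: bottom-up (was -height, top-down)
    bmi.biPlanes      = 1;
    bmi.biBitCount    = 24;
    bmi.biCompression = BI_RGB;
    bmi.biSizeImage   = width * height * 3;
}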

How to get an off-screen plain surface from a 'normal' IDirect3DSurface9

I'm working on a media player with Media Foundation. I'm trying to use post-processing with DXVA-HD. However, when I call VideoProcessBltHD on the HD device, it fails with E_INVALIDARG. I suspect it is not working correctly with the IDirect3DSurface9 I'm providing as input. I'm getting the input surface from an IMFMediaBuffer (which I get by reading a sample from the SourceReader).
I'm extracting the surface as follows:
CHECK_HR(hr = MFGetService(video_buffer, MR_BUFFER_SERVICE, __uuidof(IDirect3DSurface9), (void**)&pSurface));
However, in the DXVA-HD example on MSDN, VideoProcessBltHD works fine, and the IDirect3DSurface9 in the sample code is an off-screen plain surface.
How do I pass my surface (which has the video data) as an off-screen plain surface to the video processor so that the blt operation succeeds?
Any help would be appreciated.
Thanks
Mots
I would suggest installing the full DirectX SDK, switching the runtime library to debug mode in the DirectX Control Panel, turning on full validation, breaking on errors, and running your app in debug mode. This way, you will get a human-readable DirectX error description.
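If the surface type itself turns out to be the problem, one hedged approach (not confirmed as the fix here) is to create an off-screen plain surface of matching size and format and copy the decoded frame into it with D3DXLoadSurfaceFromSurface from the D3DX9 utility library:

#include <d3d9.h>
#include <d3dx9.h>

// Copy the decoded frame into a freshly created off-screen plain surface.
HRESULT to_offscreen_plain(IDirect3DDevice9 *device,
                           IDirect3DSurface9 *decoded,   // from MFGetService
                           IDirect3DSurface9 **plain)
{
    D3DSURFACE_DESC desc;
    HRESULT hr = decoded->GetDesc(&desc);
    if (FAILED(hr)) return hr;

    hr = device->CreateOffscreenPlainSurface(desc.Width, desc.Height,
                                             desc.Format, D3DPOOL_DEFAULT,
                                             plain, nullptr);
    if (FAILED(hr)) return hr;

    // D3DX handles the surface-to-surface copy (and any format conversion).
    return D3DXLoadSurfaceFromSurface(*plain, nullptr, nullptr,
                                      decoded, nullptr, nullptr,
                                      D3DX_FILTER_NONE, 0);
}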