I render video using DirectShow and FFmpeg in my app. FFmpeg is used only for decoding MPEG-4 Part 2 frames (in a custom decoder filter). My app does not play audio, only video.
When I lock my PC (Windows 7 Pro 64-bit) with Win+L and then unlock it, Windows shows me the following message:
The color scheme has been changed
The following program has performed an action that requires Windows to temporarily change the color scheme to Windows 7 Basic.
...app name, publisher, pid...
Windows will automatically change the color scheme back to Windows Aero when this program or other programs performing similar actions are no longer running.
I also have another custom decoder filter that was developed without FFmpeg; when I use that one, Windows does not show this message.
I ran the Aero troubleshooter, which detected that the Desktop Window Manager was disabled.
My main question: why does this message appear after unlocking?
P.S. I'm using the FFmpeg mpeg4 decoder and sws_scale from RGB24 to YUV420P. FFmpeg was built with only the mpeg4 decoder/encoder; everything else was disabled.
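For context, the conversion step mentioned in the P.S. would look roughly like this with libswscale; this is only a sketch, and the function name, the assumption of tightly packed RGB24 input, and pre-allocated destination planes are mine:

```cpp
// Minimal sketch of the RGB24 -> YUV420P conversion step described above,
// using libswscale. Destination planes are assumed to be pre-allocated
// (e.g. with av_image_alloc).
#include <cstdint>
extern "C" {
#include <libswscale/swscale.h>
}

bool ConvertRgb24ToYuv420p(const uint8_t* rgb, int width, int height,
                           uint8_t* dstData[4], int dstLinesize[4])
{
    // Same source and destination dimensions; only the pixel format changes.
    SwsContext* ctx = sws_getContext(width, height, AV_PIX_FMT_RGB24,
                                     width, height, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (!ctx)
        return false;

    const uint8_t* srcData[1] = { rgb };
    const int srcLinesize[1]  = { 3 * width };   // packed RGB24, no row padding

    sws_scale(ctx, srcData, srcLinesize, 0, height, dstData, dstLinesize);
    sws_freeContext(ctx);
    return true;
}
```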
The issue was caused by providing a negative height (top-down bitmap) in the BITMAPINFOHEADER when negotiating media types.
I changed the height to be positive (bottom-up bitmap) in my decoder, and the Windows 7 color scheme is no longer touched.
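A sketch of what the fix looks like in the decoder's output media type, assuming the DirectShow base classes (CMediaType); the function name and the YV12 subtype are illustrative, and the relevant part is the positive biHeight:

```cpp
// Sketch of the fix during media type negotiation (illustrative names).
#include <streams.h>

HRESULT FillOutputMediaType(CMediaType* pmt, int width, int height)
{
    VIDEOINFOHEADER* vih = reinterpret_cast<VIDEOINFOHEADER*>(
        pmt->AllocFormatBuffer(sizeof(VIDEOINFOHEADER)));
    if (!vih)
        return E_OUTOFMEMORY;
    ZeroMemory(vih, sizeof(*vih));

    vih->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    vih->bmiHeader.biWidth       = width;
    // Positive biHeight = bottom-up bitmap. The original code passed -height
    // (top-down), which is what triggered the Aero fallback described above.
    vih->bmiHeader.biHeight      = height;
    vih->bmiHeader.biPlanes      = 1;
    vih->bmiHeader.biBitCount    = 12;                          // 4:2:0
    vih->bmiHeader.biCompression = MAKEFOURCC('Y', 'V', '1', '2');
    vih->bmiHeader.biSizeImage   = width * height * 3 / 2;

    pmt->SetType(&MEDIATYPE_Video);
    pmt->SetSubtype(&MEDIASUBTYPE_YV12);
    pmt->SetFormatType(&FORMAT_VideoInfo);
    pmt->SetSampleSize(vih->bmiHeader.biSizeImage);
    return S_OK;
}
```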
Related
I am porting a video streamer application to Qt (from HTML). I just got the webcam working with Qt, and now I want to know how I can get the RGB video buffer from a Qt camera. All the samples I can find capture an image to a file.
I am using Qt Widgets and not QML, since it's a desktop application.
What I am trying to do is get the camera's frame buffer, compress it, and send it over the network.
I want to trigger this manually, capturing the next frame only once all the compression and sending is done, to prevent timing issues.
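One common Qt5 Widgets approach (not from the original post) is a custom QAbstractVideoSurface set as the camera viewfinder; the class name, signal, and pixel-format choices below are assumptions:

```cpp
// Minimal sketch: grab raw frames from QCamera via QAbstractVideoSurface.
#include <QCamera>
#include <QAbstractVideoSurface>
#include <QVideoFrame>
#include <QImage>

class FrameGrabber : public QAbstractVideoSurface
{
    Q_OBJECT
public:
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
        QAbstractVideoBuffer::HandleType) const override
    {
        return { QVideoFrame::Format_RGB32, QVideoFrame::Format_ARGB32 };
    }

    bool present(const QVideoFrame& frame) override
    {
        QVideoFrame copy(frame);                       // shallow, shared copy
        if (!copy.map(QAbstractVideoBuffer::ReadOnly))
            return false;

        QImage img(copy.bits(), copy.width(), copy.height(),
                   copy.bytesPerLine(), QImage::Format_RGB32);
        emit frameReady(img.copy());                   // deep copy before unmap

        copy.unmap();
        return true;
    }

signals:
    void frameReady(const QImage& image);
};

// Usage sketch:
//   QCamera* camera = new QCamera;
//   FrameGrabber* grabber = new FrameGrabber;
//   camera->setViewfinder(grabber);   // frames now arrive in present()
//   camera->start();
```

Frames are pushed by the backend, so the "manual trigger" behaviour could be approximated by dropping frames in present() while the previous compress-and-send cycle is still running.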
I've written a C++ program that receives an RTSP stream via GStreamer and displays the video via Qt5 in a QWidget. As the GStreamer video sink, I use a Widgetqt5glvideosink.
The problem is that the received stream has too much red in it. This only occurs when the vertical resolution exceeds roughly 576 pixels (lower resolutions have no issue).
When I use CPU rendering (Widgetqt5videosink) instead of OpenGL rendering, I get a correct image.
When I view the stream via the GStreamer command line or via VLC, it is also correct.
So it seems to be an issue when using an OpenGL-rendered QWidget.
Is this a driver issue or something else?
Info:
Tested on Ubuntu 16.04 and 17.04 for the viewer application.
Links:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/qtvideosink_overview.html
I managed to fix my problem by patching two files in the source code of qt-gstreamer.
There were two wrong color matrices for BT.709 colorimetry.
Patch to fix red artifact in Widgetqt5glvideosink
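The corrected matrix values themselves are in the linked patch; purely for reference, a standard BT.709 limited-range conversion (my own summary of the colorimetry, not the patch contents) looks like this:

```cpp
// Reference only: standard BT.709 limited-range 8-bit YCbCr -> RGB.
void Bt709ToRgb(int y, int cb, int cr, int& r, int& g, int& b)
{
    const double yf  = 1.1644 * (y - 16);
    const double cbf = cb - 128.0;
    const double crf = cr - 128.0;

    r = static_cast<int>(yf + 1.7927 * crf);
    g = static_cast<int>(yf - 0.2132 * cbf - 0.5329 * crf);
    b = static_cast<int>(yf + 2.1124 * cbf);
    // Clamping to [0, 255] omitted for brevity.
}
```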
I was planning to use Qt to record video/audio and to show the camera view while recording. However, there is a bug report about recording video on Windows, as described on this page, and I confirmed that the issue is real. There wasn't any problem recording video and audio on Mac and Linux (Ubuntu).
Due to this issue with Qt, I'm looking for another C++ library that does video and audio recording and can display the camera view. It would be great if it were easy to use and integrate into my project.
I'm working on a project that requires me to record the webcam, the microphone, and the screen. I have webcam recording working, audio is a work in progress, and I stumbled across the CMonitor wrapper (which I modified slightly) to grab RGB images of the desktop on a specified monitor (if there are multiple monitors).
How do I go about pushing my raw RGB frames into Windows Media Foundation to encode them into a video file? My current video encoding uses a slightly modified version of this MSDN sample, if that's easier to modify than writing a new class handler.
Or perhaps there is some Media Foundation route to recording the screen that I don't know about (which is possible; I'm not that great a Win32 programmer)?
Found PushSource in the Windows SDK samples, which does this.
Check the Desktop Duplication API for capturing the desktop. Media Foundation provides two solutions for encoding: the MF Sink Writer for simple encoding, and the Media Session for more flexible control of the media pipeline. Read this overview page first.
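If the Sink Writer route is taken, a minimal sketch might look like the following; the file name, resolution, frame rate, and bitrate are assumptions, and error handling is trimmed to keep it short:

```cpp
// Sketch: configure an H.264 output and push raw RGB32 frames into it.
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <cstring>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

const UINT32 WIDTH = 1920, HEIGHT = 1080, FPS = 30, BITRATE = 4000000;

HRESULT CreateWriter(IMFSinkWriter** writer, DWORD* streamIndex)
{
    MFStartup(MF_VERSION);
    MFCreateSinkWriterFromURL(L"capture.mp4", nullptr, nullptr, writer);

    // Output (encoded) type: H.264.
    IMFMediaType* out = nullptr;
    MFCreateMediaType(&out);
    out->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    out->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
    out->SetUINT32(MF_MT_AVG_BITRATE, BITRATE);
    out->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
    MFSetAttributeSize(out, MF_MT_FRAME_SIZE, WIDTH, HEIGHT);
    MFSetAttributeRatio(out, MF_MT_FRAME_RATE, FPS, 1);
    MFSetAttributeRatio(out, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
    (*writer)->AddStream(out, streamIndex);
    out->Release();

    // Input (raw) type: RGB32 frames as captured from the desktop.
    IMFMediaType* in = nullptr;
    MFCreateMediaType(&in);
    in->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    in->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    in->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
    MFSetAttributeSize(in, MF_MT_FRAME_SIZE, WIDTH, HEIGHT);
    MFSetAttributeRatio(in, MF_MT_FRAME_RATE, FPS, 1);
    MFSetAttributeRatio(in, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
    (*writer)->SetInputMediaType(*streamIndex, in, nullptr);
    in->Release();

    return (*writer)->BeginWriting();
}

HRESULT WriteRgbFrame(IMFSinkWriter* writer, DWORD stream,
                      const BYTE* rgb, LONGLONG time100ns)
{
    const DWORD size = 4 * WIDTH * HEIGHT;           // RGB32, packed rows
    IMFMediaBuffer* buf = nullptr;
    MFCreateMemoryBuffer(size, &buf);

    BYTE* dst = nullptr;
    buf->Lock(&dst, nullptr, nullptr);
    memcpy(dst, rgb, size);
    buf->Unlock();
    buf->SetCurrentLength(size);

    IMFSample* sample = nullptr;
    MFCreateSample(&sample);
    sample->AddBuffer(buf);
    sample->SetSampleTime(time100ns);                // 100-ns units
    sample->SetSampleDuration(10 * 1000 * 1000 / FPS);

    HRESULT hr = writer->WriteSample(stream, sample);
    sample->Release();
    buf->Release();
    return hr;                                       // Finalize() when done
}
```

Depending on where the RGB frames come from, MF_MT_DEFAULT_STRIDE may also need to be set on the input type (negative for top-down buffers).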
Our streaming media player is an in house C++/DirectShow application and runs on XP and greater.
One of our most widely used streaming codecs is WMV, as it's widely supported.
We've noticed that output from WMV media streams looks different on Windows 7 clients than on Windows XP.
Windows 7 output looks much more pixelated / blocky for WMV streams. An identical stream (coming from a media server) on Windows XP appears much smoother / less pixelated.
The same playback graph is used on both platforms and the same media server is used to encode and stream to both clients.
The Windows 7 client has the Windows Media codec that comes as part of the Windows Media Player application.
Has anyone else noticed this issue, or can anyone comment on what I might check / correct on the Windows 7 platform?
From communication with Chris P, a Microsoft MVP:
The VMR9 renderer on Windows 7 appears not to support the texture smoothing properties, as such all video looks like crap. The only viable solution that I've found is to use the EVR or a custom renderer.
It doesn't implement any of these features (but gives no error if you enable them):
MixerPref9_BiLinearFiltering,
MixerPref9_AnisotropicFiltering,
MixerPref9_PyramidalQuadFiltering,
MixerPref9_GaussianQuadFiltering
Instead, it always seems to use MixerPref9_PointFiltering.
The problem, of course, is not WMV per se. It's the implementation of the Video Renderer filter that is different in Windows Vista and Windows 7. The deeper reason is that with Aero turned on there are no overlay surfaces, so you have to look for other means of rendering on Windows 7.
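A sketch of swapping in the EVR, with a VMR-9 fallback for XP where the EVR is not available; graph construction and the window handle are assumed, and error handling is trimmed:

```cpp
// Sketch: prefer the EVR on Vista/7, fall back to the VMR-9 elsewhere.
#include <dshow.h>
#include <mfidl.h>
#include <evr.h>
#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "mfuuid.lib")

HRESULT AddPreferredRenderer(IGraphBuilder* graph, HWND videoWindow)
{
    IBaseFilter* renderer = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_EnhancedVideoRenderer, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&renderer));
    if (FAILED(hr))   // e.g. on XP without the EVR: fall back to the VMR-9
        hr = CoCreateInstance(CLSID_VideoMixingRenderer9, nullptr,
                              CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&renderer));
    if (FAILED(hr))
        return hr;

    hr = graph->AddFilter(renderer, L"Video Renderer");

    // For the EVR, attach the video to our window via IMFVideoDisplayControl.
    IMFGetService* getService = nullptr;
    if (SUCCEEDED(renderer->QueryInterface(IID_PPV_ARGS(&getService))))
    {
        IMFVideoDisplayControl* display = nullptr;
        if (SUCCEEDED(getService->GetService(MR_VIDEO_RENDER_SERVICE,
                                             IID_PPV_ARGS(&display))))
        {
            display->SetVideoWindow(videoWindow);
            display->Release();
        }
        getService->Release();
    }

    renderer->Release();
    return hr;
}
```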