IMemAllocator::GetBuffer hangs - C++

Does anybody know any reasons why IMemAllocator::GetBuffer (DirectShow) hangs, apart from all samples being in use?
I have a DirectShow application which uses GMFBridge by Geraint Davies to connect two graphs. GMFBridge is used so that inputs can be switched, but I am not switching in this situation. The application captures audio and video and should do so non-stop, but after about 10 hours it stops. I found that both the audio and the video thread got stuck in a call to IMemAllocator::GetBuffer:
/* _COM_SMARTPTR_TYPEDEF(IMemAllocator, IID_IMemAllocator); */
/* IMemAllocatorPtr m_pCopyAllocator; */
hr = m_pCopyAllocator->GetBuffer(&pOut, NULL, NULL, 0);
If all samples are in use, this function can block, but I am pretty sure that is not the case here. Two threads call this function, one for the video samples and one for the audio samples. The audio thread blocks first; after GetBuffer has returned buffers for almost 60 more video samples, the video thread blocks too (about 2 seconds later).
After almost 8 hours, both threads continue for a short period: first the audio thread unblocks, and after 45 audio sample buffers have been returned, the video thread unblocks too.
Because the two threads do not block at the same time, it does not look to me like the problem is all samples being in use.
The stack trace shows a function inside quartz.dll being called at that moment.
UPDATE
It looks like there was a memory leak caused by decoder filters already installed on the PC. The graph included MPEG decoding; the audio, for example, was decoded by a CyberLink decoder. After installing ffdshow, the ffdshow audio and video decoders were used instead, and the problem seems to have disappeared. Lesson learned: do not automatically depend on whatever filters happen to be installed.

Not sure that I can debug this from the info given. Can you create a log file (create an empty file c:\gmfbridge.txt, run until it hangs, then zip the file and email it)? Also, if you set up your symbols with _NT_SYMBOL_PATH, you can look at the stack trace to see where in quartz.dll the various threads are.
G

Related

C++ Video Capturing using Sink Writer - Memory consumption

I am writing a C++ program (Win64) using C++ Builder 11.1.5 that captures video from a webcam and stores the captured frames in a WMV file using the sink writer interface, as described in the following tutorial:
https://learn.microsoft.com/en-gb/windows/win32/medfound/tutorial--using-the-sink-writer-to-encode-video?redirectedfrom=MSDN
The video doesn't need to be real time at 30 frames per second, as the process being recorded is a slow one, so I have set the FPS to 5 (which is fine).
The recording needs to run for about 8-12 hours at a time, and using the algorithms in the sink writer tutorial I have seen the memory consumption of the program go up dramatically after 10 minutes of recording (in excess of 10 GB). I have also seen that the final WMV file only becomes populated when the Finalize routine is called. Because of the memory consumption, the program starts to slow down after a while.
First question: Is it possible to flush the sink writer to free up RAM while it is recording?
Second question: Maybe it would be more efficient to save the video in pieces: finalize the recording every 10 minutes or so, then start another recording with a different file name, so that when the 8 hours are done the program can combine all the saved WMV files. How would one go about combining numerous WMV files into one large file?

Monitoring audio recorded via QMediaCaptureSession

Good afternoon.
When using QMediaCaptureSession, QMediaRecorder to capture audio from a microphone to a file, a problem occurs:
I start recording through the record() method of the QMediaRecorder class. Next, I receive a QMediaRecorder::recorderStateChanged signal with the QMediaRecorder::RecordingState value and display a message in the window saying that recording has begun. The problem is that the resulting recording starts with some delay, not at the moment the signal is received: sound or words spoken during the first 1-2 seconds IMMEDIATELY after receiving the QMediaRecorder::RecordingState value are not included in the recording.
However, the Qt documentation has this statement:
https://doc.qt.io/qt-6/qtmultimedia-changes-qt6.html
New features in Qt 6
...
You can now also monitor the audio recorded by a capture session.
...
Questions:
Can I somehow analyze the data that is about to be recorded and detect that data has not yet begun to arrive, so that the "recording has begun" message is not displayed in the program window too early?
What does the Qt documentation mean by saying that you can monitor audio recorded via a capture session?
I did not find information on this issue.

Writing multithreaded video and audio packets with FFmpeg

I couldn't find any information on the way av_interleaved_write_frame deals with video and audio packets.
I have multiple audio and video packets coming from 2 threads. Each thread calls write_video_frame or write_audio_frame, locks a mutex, initializes an AVPacket, and writes the data to an .avi file.
Initialization of the AVCodecContext and AVFormatContext is OK.
-- Edit 1 --
Audio and video are coming from an external source (microphone and camera) and are captured as raw data without any compression (even for video).
I use H.264 to encode the video and no compression for the audio (PCM).
The captured audio is 16-bit, 44100 Hz, stereo.
The captured video is 25 FPS.
Question:
1) Is it a problem if I write multiple video packets at once (say 25 packets/sec) and just one audio packet/sec?
Answer: Apparently not; av_interleaved_write_frame should be able to manage that kind of data as long as pts and dts are managed correctly.
This means I call av_interleaved_write_frame 25 times per second for video and just once per second for audio. Could this be a problem? If it is, how can I deal with this scenario?
2) How can I manage pts and dts in this case? It seems to be the problem in my application, since I cannot correctly render the .avi file. Can I use real time stamps for both video and audio?
Answer: The best thing to do here is to use the timestamp taken when capturing the audio/video as the pts and dts for this kind of application. So these are not exactly real time stamps (from the wall clock) but media capture timestamps.
Thank you for your precious advices.
av_interleaved_write_frame writes output packets in such a way that they are properly interleaved (possibly queueing them internally). "Properly interleaved" depends on the container format, but usually it means that the DTS stamps of the packets in the output file are monotonically increasing.
av_interleaved_write_frame, like most FFmpeg APIs, should never be called simultaneously by two threads on the same AVFormatContext. I assume you ensure that with a mutex; if you do, then it doesn't matter whether the application is multithreaded or not.
Is it a problem if I write multiple video packets at once (let's say 25 packets/sec) and just one audio packet/sec
It is not a problem in general but most audio codecs can't output 1 second long audio packets. Which codec do you use?
How can I manage pts and dts in this case? Can I use real time stamps for both video and audio ?
The same way as you would in a single-threaded application. DTS values are usually generated by the codecs from the pts. The pts usually comes from a capture device/decoder together with the corresponding audio/video data.
Real time stamps might be OK to use, but it really depends on how and when you acquire them. Please elaborate on what exactly you are trying to do. Where is the audio/video data coming from?

Multithreading an OpenCV Program

Thanks for reading my post.
I have a problem with multithreading an opencv application I was hoping you guys could help me out with.
My aim is to save 400 frames (as JPEGs) from the middle of a video sequence for further examination.
I have the code running fine single-threaded, but the multithreading is causing quite a lot of issues, so I'm wondering if I have got the philosophy all wrong.
In terms of a schematic of what I should do, would I be best to:
Option 1: somehow access the single video file simultaneously (or make copies?), with individual threads each cycling through the video frame by frame and saving each frame that falls between predetermined limits. E.g. thread 1 saves frames 50 to 100, thread 2 saves frames 101 to 150, etc.
Option 2: open the file once, cycle through it frame by frame, then pass each frame to one of a series of threads to carry out the saving operation. E.g. frame 1 passed to thread 1 for saving, frame 2 to thread 2, frame 3 to thread 1, frame 4 to thread 2, etc.
Option 3 : some other buffer/thread arrangement which is a better idea than the above!
I'm using visual C++ with the standard libs.
Many thanks for your help on this,
Cheers, Kay
Option 1 is what I have tried to do so far, but because of the errors I was wondering if it is even possible! Can threads usually access the same file? How do I find out how many threads I can have?
Certainly, different threads can access the same file, but it's really a question of whether the supporting libraries support that. For reading a video stream, you can use either OpenCV or FFmpeg (you can use both in the same app: FFmpeg for reading and OpenCV for processing, for example). I haven't looked at the docs, so I'm guessing here: either lib should allow multiple readers on the same file.
To find out the number of cores:
SYSTEM_INFO sysinfo;
GetSystemInfo(&sysinfo);
DWORD numCPU = sysinfo.dwNumberOfProcessors;
from this post. You would create one thread per core as a starting point, then adjust the number based on your performance needs and actual testing.

SDL - Audio callback function is not called in time sometimes

I'm new to SDL.
I'm developing a media player using SDL, and I've run into the problem that the audio callback function is sometimes not called in time, which makes the audio a little choppy.
I use such piece of code to open the audio device:
wanted_spec.xxx = xxx;
wanted_spec.callback = audio_callback; //audio_callback is my audio callback function
SDL_OpenAudio(&wanted_spec, &spec);
My OS is Windows XP.
Do you know anything about this? Can someone suggest how to synchronize the data fed to the callback function with zero latency?
My problem is that instead of providing a whole WAV file through SDL_LoadWAV, I want to pass PCM samples (probably 1024 samples at a time; the design is such that I will be receiving PCM samples).
But the issue is that the callback function is not called in time, or its invocation is delayed, which causes the sound to be choppy. I'm not able to synchronize the passing of data to the callback function.
Can you please suggest a way to synchronize passing data (samples) to the callback function, or point to some application where data is passed in samples?
We need real values to fully answer your question.
What is your attempted buffer size?
Also realize that it is common for SDL not to give you exactly what you ask for, so check what the buffer size in the actual returned spec is.
I've been using a binary mingw32 port of SDL on Windows that won't give me buffers smaller than one second no matter what I request.