How to measure pipeline latency? - C++

I want to measure how much time it takes Media Foundation to process my video samples.
I've tried using the sample time as a unique sample identifier, but discovered that the pipeline adjusts that value, so it drifts (not fast, 0-1 ticks of 100 nanoseconds per frame, but even off-by-one is enough to make the value worthless as a unique ID).
I've tried putting a custom value into the sample attributes. This works on Windows 10 with the nVidia encoder but fails on Windows 7 with the Microsoft encoder: the output frame doesn't contain my value, so apparently the encoder drops all attributes from the samples. I also tried the built-in MFSampleExtension_DeviceTimestamp attribute, with the same result: the value is lost in the pipeline.
Is there any other way to match input samples with output samples? Manually counted sequence numbers seem too fragile to me, since the framework is heavily multithreaded.

You could write a wrapper encoder MFT that wraps the Microsoft encoder on Windows 7: record the sample times and your additional attributes in a queue in IMFTransform::ProcessInput, then in IMFTransform::ProcessOutput look up the queued attributes by sample time and set them on the output samples. Would that work for you?
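A rough sketch of that idea (not a complete IMFTransform; WrapperMFT, m_innerEncoder, m_pending, m_lock and the MY_LATENCY_TAG attribute GUID are made-up names, not Media Foundation APIs):

    // Sketch only: the two relevant methods of a hypothetical wrapper MFT that
    // forwards to the wrapped encoder (m_innerEncoder) and carries a per-sample
    // tag across it by correlating sample times.
    #include <mfidl.h>
    #include <mfapi.h>
    #include <deque>
    #include <mutex>
    #include <cstdlib>

    // Assumed element type of the m_pending queue (a std::deque<Pending> member).
    struct Pending { LONGLONG sampleTime; UINT64 tag; };

    HRESULT WrapperMFT::ProcessInput(DWORD streamId, IMFSample* sample, DWORD flags)
    {
        LONGLONG time = 0;
        UINT64 tag = 0;
        sample->GetSampleTime(&time);
        sample->GetUINT64(MY_LATENCY_TAG, &tag);      // tag set by the application

        {
            std::lock_guard<std::mutex> lock(m_lock);
            m_pending.push_back({ time, tag });       // remember what went in
        }
        return m_innerEncoder->ProcessInput(streamId, sample, flags);
    }

    HRESULT WrapperMFT::ProcessOutput(DWORD flags, DWORD count,
                                      MFT_OUTPUT_DATA_BUFFER* out, DWORD* status)
    {
        HRESULT hr = m_innerEncoder->ProcessOutput(flags, count, out, status);
        if (SUCCEEDED(hr) && out->pSample)
        {
            LONGLONG outTime = 0;
            out->pSample->GetSampleTime(&outTime);

            std::lock_guard<std::mutex> lock(m_lock);
            if (!m_pending.empty())
            {
                // The pipeline may nudge the time by a tick or two, so match the
                // closest pending entry rather than requiring exact equality.
                auto best = m_pending.begin();
                for (auto it = m_pending.begin(); it != m_pending.end(); ++it)
                    if (std::llabs(it->sampleTime - outTime) <
                        std::llabs(best->sampleTime - outTime))
                        best = it;
                out->pSample->SetUINT64(MY_LATENCY_TAG, best->tag);
                m_pending.erase(best);
            }
        }
        return hr;
    }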

What units should a transcoding converter using ffmpeg use to report progress - %, etc.?

I'm going to make a converter to H.265 with ffmpeg, based on the documentation: http://www.ffmpeg.org/doxygen/trunk/transcoding_8c-example.html
I want to add information about the progress, but I have no idea what number I can use to show it, for example as a percentage.
Please help. :)
What about offering several display variants, selectable with an argument?
I think the time elapsed and the estimated time remaining are more informative than a percentage - for example, when you want to leave the machine (or the window) to work and come back to check on it later.
The current frame rate of the conversion is also informative; it can hint that the bitrate should be adjusted if the conversion is too slow.
So you could measure the encoding time so far, estimate the processing frame rate, and from that estimate how much remains, as in the sketch below.
ffmpeg itself displays the current time or current frame of the processed video along with the duration of the video.
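For example, a minimal sketch of such a progress report, assuming you already track the pts of the last processed packet and the wall-clock start time (report_progress and its parameters are made up for illustration):

    // Sketch: progress reporting for a transcode loop. Uses only standard
    // FFmpeg fields (AVFormatContext::duration, AVStream::time_base).
    #include <chrono>
    #include <cstdio>
    extern "C" {
    #include <libavformat/avformat.h>
    }

    void report_progress(const AVFormatContext* fmt_ctx, const AVStream* stream,
                         int64_t current_pts,
                         std::chrono::steady_clock::time_point started)
    {
        // Position of the last processed packet, in seconds.
        double done_sec  = current_pts * av_q2d(stream->time_base);
        // Total input duration; AVFormatContext::duration is in AV_TIME_BASE units.
        double total_sec = fmt_ctx->duration / (double)AV_TIME_BASE;
        if (total_sec <= 0.0) return;                // live input: no % possible

        double percent = 100.0 * done_sec / total_sec;
        double elapsed = std::chrono::duration<double>(
                             std::chrono::steady_clock::now() - started).count();
        double speed   = done_sec / elapsed;         // media seconds per wall second
        double eta     = (total_sec - done_sec) / (speed > 0 ? speed : 1.0);

        std::printf("\r%5.1f%%  elapsed %4.0fs  eta %4.0fs  speed %.2fx",
                    percent, elapsed, eta, speed);
        std::fflush(stdout);
    }

Call it once per processed packet (or every N packets) and you get the percentage, elapsed time, estimated time remaining, and conversion speed in one line.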

How to get the next frame presentation time in Vulkan

Is there a way to get an estimated (or exact) timestamp when the submitted frame will be presented on screen?
I'm interested in WSI windowed presentation as well as fullscreen on Windows and Linux.
UPD: One possible way on Windows is IDCompositionDevice::GetFrameStatistics (msdn), which is used for DirectComposition and DirectManipulation, but I'm not sure whether it is applicable to Vulkan WSI presentation.
The VK_GOOGLE_display_timing extension exposes the timings of past presents and allows supplying a timing hint for a subsequent present, but the extension is only supported on some Android devices.
VK_EXT_display_control provides a vsync counter and a fence that is signaled when vblank starts, but it only works with a VkDisplayKHR-type swapchain and has only limited support on Linux.
The corresponding issue has been raised as Vulkan-Docs#370. Unfortunately, it is taking its time to be resolved.
I don't think you can get the exact presentation time (which would be tricky in any case, since monitors have some internal latency). I think you can get close though: The docs for vkAcquireNextImageKHR say you can pass a fence that gets signaled when the driver is done with the image, which should be close to the time it gets sent off to the display. If you're using VK_PRESENT_MODE_FIFO_KHR you can then use the refresh rate to work out when later images in the queue get presented.
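A rough sketch of that approach, assuming VK_PRESENT_MODE_FIFO_KHR and a known refresh rate (estimate_present_time and imagesAheadInQueue are illustrative names, and the result is only an estimate, not a guaranteed presentation time):

    // Sketch: estimate when an acquired image will reach the screen, treating
    // the acquire-fence signal as the moment the previous use of the image is
    // done. device, swapchain, refreshRateHz and imagesAheadInQueue come from
    // the application.
    #include <vulkan/vulkan.h>
    #include <chrono>

    std::chrono::steady_clock::time_point
    estimate_present_time(VkDevice device, VkSwapchainKHR swapchain,
                          double refreshRateHz, uint32_t imagesAheadInQueue)
    {
        VkFenceCreateInfo fci{ VK_STRUCTURE_TYPE_FENCE_CREATE_INFO };
        VkFence fence = VK_NULL_HANDLE;
        vkCreateFence(device, &fci, nullptr, &fence);

        uint32_t imageIndex = 0;
        vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                              VK_NULL_HANDLE, fence, &imageIndex);
        vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
        vkDestroyFence(device, fence, nullptr);

        auto now = std::chrono::steady_clock::now();
        // With FIFO presentation each image queued ahead of ours occupies
        // roughly one refresh interval.
        auto perFrame = std::chrono::duration<double>(1.0 / refreshRateHz);
        return now + std::chrono::duration_cast<std::chrono::steady_clock::duration>(
                         perFrame * (imagesAheadInQueue + 1));
    }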

DirectShow video stream ends immediately (m_pMediaSample is NULL)

I have a DirectShow video renderer derived from CBaseVideoRenderer. The renderer is used in a graph that receives data from a live source (BDA). It looks like the connections are established properly, but the video rendering ends immediately because there is no sample. Audio rendering works, however: I can hear the sound, while DoRenderSample of my renderer is never called.
Stepping through the code in the debugger, I found out that in CBaseRenderer::StartStreaming the stream ends immediately because the member m_pMediaSample is NULL. If I replace my renderer with the EVR, it shows frames, i.e. the stream does not end before the first frame for the EVR, only for my renderer.
Why is that, and how can I fix it? Following the sample from http://www.codeproject.com/Articles/152317/DirectShow-Filters-Development-Part-Video-Render, I implemented what I understand to be the basic interface (CheckMediaType, SetMediaType and DoRenderSample), so I do not see any way to influence what is happening here...
Edit: This is the graph as seen from the ROT:
What I am basically trying to do is capture a DVB stream that uses VIDEOINFOHEADER2, which is not supported by the standard Sample Grabber. Although the channel is a public German TV channel without encryption, could this be a DRM issue?
Edit 2: I have attached my renderer to another source (a Blackmagic Intensity Shuttle). It seems that the source causes the issue, because I do get samples in the other graph.
Edit 3: Following Roman's suggestion, I have created a transform filter. The resulting graph unfortunately has the same problem, i.e. I do not get any samples (Transform is not called).
You have presumably chosen the wrong path for fetching video frames out of the media pipeline. You are effectively implementing a "network renderer": something that terminates the pipeline in order to send the data onward over the network.
A renderer that accepts the feed sounds appropriate. Implementing a custom renderer, however, is an atypical task, so there is not much information around on it. Additionally, a fully featured renderer typically comes with a sample scheduling part and end-of-stream delivery - things that are relatively easy to break when you customize it by inheriting from the base classes. That is, while the approach sounds good, you might want to compare it to the other option you have, which is...
...a combination of Sample Grabber + Null Renderer, two standard filters to which you can attach your callback and get frames with the pipeline properly terminated. The problem here is that the standard Sample Grabber does not support VIDEOINFOHEADER2. With another video decoder you could possibly have the feed decoded into VIDEOINFOHEADER, which is one option. Improving the Sample Grabber itself is another solution: the DirectX SDK Extras February 2005 (dxsdk_feb2005_extras.exe) included a filter similar to the standard Sample Grabber, called Grabber, under \DirectShow\Samples\C++\DirectShow\Filters\Grabber. It is/was available in source code and came with a good description text file. It is relatively easy to extend so that it accepts VIDEOINFOHEADER2 and makes the payload data available to your application this way.
The easiest way to get data out of a DirectShow graph, if you're not going to use MultiMedia Streaming, is probably to write your own TransInPlace filter, a sub-variety of a Transform filter. Then connect this filter to the desired stream of data you wish to monitor, and then run, pause, seek, or otherwise control the graph. The data, as it passes through the transform filter, can be manipulated however you want. We call this kind of filter a "sample grabber". Microsoft released a limited-functionality sample grabber with DX8.0. This filter is limited because it doesn't deal with DV Data or mediatypes with a format of VideoInfo2. It doesn't allow the user to receive prerolled samples. (What's a preroll sample? See the DX8.1 docs.) Its "OneShot" mode also has some problems.
To add to this, the Grabber sample is pretty simple itself - perhaps 1000 lines of code altogether, including comments.
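For illustration, a minimal sketch of such a TransInPlace "sample grabber" on top of the DirectShow base classes (CMyGrabber and CLSID_MyGrabber are made-up names; error handling, IUnknown plumbing and filter registration are omitted):

    // Sketch: a pass-through grabber built on CTransInPlaceFilter from the
    // DirectShow base classes. It accepts both VIDEOINFOHEADER and
    // VIDEOINFOHEADER2 video, unlike the stock Sample Grabber.
    #include <streams.h>   // DirectShow base classes

    // Placeholder CLSID for illustration; generate your own GUID in real code.
    static const GUID CLSID_MyGrabber =
    { 0xa1b2c3d4, 0x0001, 0x0002, { 0xa0, 0xb0, 0xc0, 0xd0, 0xe0, 0xf0, 0x01, 0x02 } };

    class CMyGrabber : public CTransInPlaceFilter
    {
    public:
        CMyGrabber(IUnknown* pUnk, HRESULT* phr)
            : CTransInPlaceFilter(NAME("My Grabber"), pUnk, CLSID_MyGrabber, phr,
                                  false /*bModifiesData*/) {}

        // Accept video regardless of whether the format is VIDEOINFOHEADER
        // or VIDEOINFOHEADER2.
        HRESULT CheckInputType(const CMediaType* mtIn) override
        {
            if (*mtIn->Type() != MEDIATYPE_Video)
                return VFW_E_TYPE_NOT_ACCEPTED;
            if (*mtIn->FormatType() == FORMAT_VideoInfo ||
                *mtIn->FormatType() == FORMAT_VideoInfo2)
                return S_OK;
            return VFW_E_TYPE_NOT_ACCEPTED;
        }

        // Called for every sample that flows through; the data passes on unchanged.
        HRESULT Transform(IMediaSample* pSample) override
        {
            BYTE* pData = nullptr;
            if (SUCCEEDED(pSample->GetPointer(&pData)))
            {
                REFERENCE_TIME tStart = 0, tStop = 0;
                pSample->GetTime(&tStart, &tStop);
                // Hand pData / pSample->GetActualDataLength() to the application here.
            }
            return S_OK;
        }
    };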
Looks like your decoder or splitter isn't demuxing the video frames. Look further up the chain to see what filters are supplying your renderer pin with data; chances are it's only recognising audio.
Try dropping the file into Graphedit (there's a better one on the web BTW) and see what filters it creates.
Then look at the samples in the DirectShow SDK.

Audio mixing with alsa's dmix plugin in c++

I'm trying to play two wav files at the same time using ALSA. Note that the wav files have different sample rates. This is possible, and the audio streams are mixed and sent to the audio chip. (I'm developing on an embedded Linux device.) But one stream is played a couple of times faster than normal, so I guess there's a problem with the resampling.
I have a default device with the dmix plugin enabled in /etc/asound.conf and the sample rate set to 44100 Hz. But to my understanding, ALSA resamples all streams internally to 48 kHz and mixes them before downsampling them again to my desired output rate, in my case 44.1 kHz.
Is this correct?
When using alsa-lib to play audio files, do I need to set all parameters for that specific wav file?
For example, for 8000 Hz, mono, 16-bit:
snd_pcm_hw_params_set_rate() to 8000 Hz
snd_pcm_hw_params_set_format() to 16-bit LE/BE, signed/unsigned
snd_pcm_hw_params_set_channels() to mono
Does this change the hardware settings for the device, or only for this specific audio stream?
Any clarification would be appreciated.
EDIT:
I might have misinterpreted the following: [ALSA]
When software mixing is enabled, ALSA is forced to resample everything to the same frequency (48000 by default when supported). dmix uses a poor resampling algorithm which produces noticeable sound quality loss.
So, to be clear: if I change the rate of the dmix device in asound.conf to 44100, everything should automatically be resampled to 44100 and mixed?
And the reason that one of my two mixed audio files plays at an incorrect speed is probably incorrect stream settings via alsa-lib?
Because if I play one wav file at a time, both streams sound correct.
It's only when the first one is playing and I mix the other one into the stream at the same time that the speed of the first wav file changes. Note that the hw settings are the same at this point. Why does setting the hw parameters of (and playing) stream 2 change something in stream 1?
ALSA does not have a fixed 48 kHz resampling.
A dmix device uses a fixed sample rate and format, but all the devices using it typically use the plug plugin to enable automatic conversions.
When using alsa-lib, you must set all parameters that are important to you; for any parameters not explicitly set, alsa-lib chooses a somewhat random value.
Different streams can use different parameters.
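For illustration, a sketch of opening one stream through the "default" (plug/dmix) device with alsa-lib and setting per-stream parameters for an 8000 Hz, mono, 16-bit file (open_stream_8k_mono is a made-up helper name):

    // Sketch: per-stream hw_params setup for an 8000 Hz, mono, signed 16-bit
    // little-endian file, played through the "default" device (normally routed
    // via plug -> dmix, so conversion happens in software per stream and the
    // hardware settings stay untouched).
    #include <alsa/asoundlib.h>

    snd_pcm_t* open_stream_8k_mono()
    {
        snd_pcm_t* pcm = nullptr;
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return nullptr;

        snd_pcm_hw_params_t* hw = nullptr;
        snd_pcm_hw_params_alloca(&hw);
        snd_pcm_hw_params_any(pcm, hw);

        snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
        snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
        snd_pcm_hw_params_set_channels(pcm, hw, 1);

        unsigned int rate = 8000;                 // rate of this particular wav file
        snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, nullptr);

        if (snd_pcm_hw_params(pcm, hw) < 0)       // apply: affects this stream only
        {
            snd_pcm_close(pcm);
            return nullptr;
        }
        return pcm;
    }

Open a second PCM the same way with that file's own rate and channel count; each stream keeps its own parameters and dmix mixes the converted results.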

Frame accurate synchronizing of subtitle files with MPEG video using DirectShow

This is a problem I have been dealing with for a while, and haven't been able to get a good answer (even from Microsoft). I'm using the generic dump filter to write hardware compressed MPEG files out to disk. In the graph, I also have a SampleGrabber filter that gets called on every frame. From the SampleGrabber callback, I get a subtitle, along with the DirectShow timestamp and write them out to a SAMI (.smi) subtitle file. This all seems to be working, as the SAMI file contains the correct subtitles for every frame. However, I have a few problems:
1. The first few (usually 3 or 4) DirectShow timestamps are all 0. If I'm getting callbacks from the SampleGrabber, shouldn't these timestamps be incrementing?
2. When I begin playback, the first timestamp shown is about 10-20 subtitles into the SAMI file. I'd assume the first frame would show the first timestamp in the file.
3. This is probably related to #2, but the subtitles are not synchronized to the appropriate frames in the file. They can sometimes be up to 40 frames late.
I'm using DirectShow via C++, capturing with a Hauppauge HVR-1800 under Windows XP SP3 (with latest drivers 09/08/2008), and playing back under Media Player Classic 6.4.9.0. Any ideas are welcome.
Are you using the incoming IMediaSample's GetTime or GetMediaTime? GetTime is what you want, as it represents the stream's presentation time.
Be sure to also check the incoming IMediaSample's IsPreroll method. Preroll samples should be ignored, as they will be output again during playback. Another thing I would do is make sure that your Sample Grabber is as far downstream in your filter graph as it can be, preferably after any demuxers and renderers.
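For illustration, a sketch of those two checks inside an ISampleGrabberCB callback (CSubtitleGrabberCB and the logging are made up; the IUnknown handling is reduced to the minimum for a stack-allocated callback):

    // Sketch: a SampleCB implementation that skips preroll samples and reads
    // the presentation time with GetTime (not GetMediaTime).
    #include <dshow.h>
    #include <qedit.h>     // ISampleGrabberCB (removed from newer SDKs; may need a local copy)
    #include <cstdio>

    class CSubtitleGrabberCB : public ISampleGrabberCB
    {
    public:
        // Minimal IUnknown for an object that lives on the stack.
        STDMETHODIMP_(ULONG) AddRef() override  { return 2; }
        STDMETHODIMP_(ULONG) Release() override { return 1; }
        STDMETHODIMP QueryInterface(REFIID riid, void** ppv) override
        {
            if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB)
            { *ppv = static_cast<ISampleGrabberCB*>(this); AddRef(); return S_OK; }
            *ppv = nullptr; return E_NOINTERFACE;
        }

        STDMETHODIMP SampleCB(double /*SampleTime*/, IMediaSample* pSample) override
        {
            if (pSample->IsPreroll() == S_OK)
                return S_OK;                       // will be delivered again: ignore

            REFERENCE_TIME tStart = 0, tStop = 0;
            if (SUCCEEDED(pSample->GetTime(&tStart, &tStop)))
            {
                // tStart is the presentation time in 100 ns units relative to the
                // start of the stream: this is what the SAMI file should use.
                std::printf("subtitle timestamp: %.3f s\n", tStart / 1e7);
            }
            return S_OK;
        }

        STDMETHODIMP BufferCB(double, BYTE*, long) override { return E_NOTIMPL; }
    };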
Also see the article on TimeStamps in the DirectShow documentation. It outlines the other caveats of using timestamps.
Of course, even after all of the tips above, there is still no absolute guarantee as to how a particular DirectShow filter is going to behave.