GStreamer total latency - gstreamer

Goal:
Measure the total pipeline time that a frame needs from the stream source to the sink. The source is an IP camera, and we need to detect how long a frame takes from the camera to the sink. If the time is too high, we should show something on the display.
Can you explain how this measurement is possible in GStreamer?
Our GStreamer application is written in C++; some hints or code examples are welcome.
Thank you so much, guys.

You can do this with pad probes perhaps:
https://gstreamer.freedesktop.org/documentation/application-development/advanced/pipeline-manipulation.html#using-probes
Depending on your pipeline behavior, you would choose the earliest element that can access reasonable data (not sure what the camera delivers as samples in your case), record the current system time against the sample's DTS/PTS (frame reordering may be a pitfall here), and do the same thing at the last pad you have access to.
Compare the system times of a sample with the same PTS/DTS and you should have the time delta the sample spent in the pipeline. Depending on your required accuracy this may be a good enough estimate.
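A rough C++ sketch of that idea (names such as first_element/last_element are placeholders for whatever your pipeline actually contains, so treat this as an outline rather than a drop-in solution):

```cpp
#include <gst/gst.h>
#include <map>
#include <mutex>

// PTS -> wall-clock time (microseconds) recorded at the entry probe.
static std::map<GstClockTime, gint64> g_entry_times;
static std::mutex g_lock;

static GstPadProbeReturn entry_probe(GstPad *, GstPadProbeInfo *info, gpointer)
{
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
    if (buf && GST_BUFFER_PTS_IS_VALID(buf)) {
        std::lock_guard<std::mutex> g(g_lock);
        g_entry_times[GST_BUFFER_PTS(buf)] = g_get_monotonic_time();
    }
    return GST_PAD_PROBE_OK;
}

static GstPadProbeReturn exit_probe(GstPad *, GstPadProbeInfo *info, gpointer)
{
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
    if (buf && GST_BUFFER_PTS_IS_VALID(buf)) {
        std::lock_guard<std::mutex> g(g_lock);
        auto it = g_entry_times.find(GST_BUFFER_PTS(buf));
        if (it != g_entry_times.end()) {
            gint64 delta_us = g_get_monotonic_time() - it->second;
            g_print("latency for PTS %" GST_TIME_FORMAT ": %" G_GINT64_FORMAT " ms\n",
                    GST_TIME_ARGS(GST_BUFFER_PTS(buf)), delta_us / 1000);
            g_entry_times.erase(it);
        }
    }
    return GST_PAD_PROBE_OK;
}

// After building the pipeline, attach the probes to the first and last pads you
// can reach (the element names here are assumptions):
//   GstPad *in  = gst_element_get_static_pad(first_element, "src");
//   GstPad *out = gst_element_get_static_pad(last_element, "sink");
//   gst_pad_add_probe(in,  GST_PAD_PROBE_TYPE_BUFFER, entry_probe, NULL, NULL);
//   gst_pad_add_probe(out, GST_PAD_PROBE_TYPE_BUFFER, exit_probe,  NULL, NULL);
```

If the measured delta exceeds your threshold, you can trigger whatever warning you want to show on the display.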

Related

Seeking within MP3 file

I am working on the development of driving software for the hardware implementation by these people. The decoder works properly overall, but I am struggling to make it start playing sound from the middle of a file. I suspect this is a common property of MP3 decoders, as they must have some history of data in order to properly reconstruct the current sound (I am not that skilled in MPEG, but have an idea of the basics).
The problem is that this decoder is a black box, and any deepening in its code is an enormous time and effort.
I empirically found out that the sound garbage, when starting somewhere in the middle, lasts no more than 1 (one) second after start with a file at 320 kbps and a 44100 Hz sampling rate. I am actually OK with muting the decoder for a second (while it gathers/decodes the data required for further playback), and then unmuting it to continue playback.
I searched the internet on the matter and did not find anything useful. I tried to invalidate the first frames by corrupting the frame headers (the easiest thing that could be done without digging into the MP3 headers/data), which made things even worse.
Questions:
Is there any body of knowledge on how players perform seeking in MP3 files while keeping the sound uncorrupted?
Does my action plan seem valid - mute for 1 second while the decoder plays garbage? Is there any way to (easily) calculate the time I must mute the output for?
Update: I just tried another file at 128 kbps/48 kHz and the maximal garbage time was about 2 seconds... I cannot believe that a decoder with such limited resources - the input buffer used is 2 kB, and with some intermediate working buffers the total must be no more than 36 kB - can keep 2 seconds of history; either that, or the decoder is having problems finding the sync word in the stream... and thus my driver needs to figure out the frame start (by finding the sync word, reading the frame header, calculating the frame size, and checking that another sync word follows the frame).
I've found workarounds. The difficulty was that there were actually two problems overlapping each other, but it was easy to cope with using a structured approach.
Problem 1: the decoder has issues finding the first sync word of the stream, and works very well when the first bytes supplied to it are FF FB or FF FA. Any other bytes - i.e. starting in the middle of a frame - with very high probability cause major sound corruption until the decoder catches the correct sync. Thus I designed the code to seek to the next frame start after the seek point, checking that it is the actual start of a frame by calculating the frame size and verifying that the next frame also begins with FF FB/FA.
Having fixed problem 1, I still had minor corruption left from the decoder starting to decode a frame without historical data. I solved it by muting the decoder for the first 4 buffering transactions.
Major corruption still happens, but it is rare, and it seems that the nature of the corruption depends on what was in the decoder's buffers (not only the Huffman input buffer, but other intermediate buffers) before the decoder is instructed to start. My hardware clears the input buffers to 0 while the decoder is in the reset state, but that seems to be not enough (or just incorrect)...
The decoder itself is a kind of PoC (proof of concept) work, a student term project with the aim of proving that they were able to make it; the package has test bench code, but lacks low-level documentation/comments in the code, and is not ready for field implementation and production. In general, the fact that it worked for me (almost) out of the box at all is a credit to the developers and a mark of the high quality of their work. I have reviewed and tried several published MP3 decoder projects for silicon implementation (FPGA) and concluded that this one is the best available. In addition, the license they provide their work under is a generous one.
Update: my research has shown that the main problem lies not in the input buffer (though it is possible to improve the situation by uploading 528 bytes of historical data to the decoder's buffer so that it can grab main data from the previous frame), but in the internal state of the decoder. Its documentation says:
To reduce resource usage, part of the RAM for buffering the intermediate data is also shared with Huffman decoding as bit reservoir ...
thus it is the contents of the reservoir and the intermediate computed data that affect the decoding. I confirmed this by playing various sets of frames in different sequences: if the same frames are played in a different order, the nature of the garbage changes, or the garbage may not appear at all.
Thus, unfortunately, my conclusion is: it is not possible to seek properly using this decoder as is. I do not even think it is possible to "fake" playback (to quickly "play" the file up to the needed point just to prime the buffers), as all three clocks are tied to each other.
I will keep my "best tested" implementation, with notes on its quality.
Update 2: I was wrong; it is possible to seek cleanly, but to mitigate the sound corruption (yes, I am still unsure whether I have fixed it completely) I had to find another deficiency in the decoder. It is related to timing: the decoder assumes that further data is always available in the buffer, while it may not be there yet. (This is actually clear from the test bench code supplied with the IP - from the way data was replenished during QA and testing.) In the cases where I caught the corruption, the first frames in the first part of the input buffer RAM were not decoded properly and were skipped; the decoder then quickly skipped ahead to the second part of the RAM, assuming new data was there, while the driving hardware was not yet ready to fetch the required data and put it into the second part of the decoder's buffer RAM. Thus the corruption persisted for quite a long time, with the decoder looping and skipping "invalid" frames until it caught a correct image of a frame and normalized its pace through the buffer.
Now the solution:
play (almost) 5 frames of silence through the decoder before unmuting it. This ensures all of the decoder's internal buffers are purged. It does not take much time, but requires some coding;
introduce the possibility of setting the Huffman decoder's starting pointer readptr (in huffctl.v) after reset to a value other than 0. This gives the flexibility to upload some history data into the decoder's buffer and start the Huffman decoder from the middle of the buffer rather than from its very start;
calculate the position to seek to; for MPEG-1 Layer III CBR it is relatively easy: duration = (fileSize - ID3Size) / (bitrate / 8 * 1000) and newPosition = ID3Size + seekTime * (bitrate / 8 * 1000), with the bitrate in kbps. The duration is needed to check that the position to seek to fits within the play time; alternatively, newPosition can be checked against the file size. These formulas do not take into account older tag versions appearing at the end of the file, but those are usually not more than 128 bytes, and thus more or less negligible for the timing calculation relative to the average MP3 file size; they also assume CBR (VBR requires a completely different approach, needing more processing and data I/O for accurate seeking). Funnily enough, I found web pages with an incorrect duration formula, so beware of posts by ignorant people with cool job titles. (A small code sketch of this step and the next one follows after this list.)
seek to the calculated position, find the next frame start from this position on, calculate the frame size, and ensure that there is another valid frame at that distance. The new pointer will point to this confirmed frame start;
find out the main_data_begin lookback pointer of the frame now being pointed to in step 4. Decrease the new pointer by this value so that it points within the previous frame, to the start of the main data for the current frame - this will be the pointer for the decoder's data start. Note that this will fail if the main data begins more than one frame back (removal of the headers of the previous frame(s) would be required for proper operation);
fill the decoder's buffer starting from the pointer identified in step 5, and set the decoder's decoding start pointer to the one identified in step 4. While the implementation assumes you fill the buffer in halves, do it differently at the start: fill the whole buffer instead of just the first half. For this, after reset, set the idle bit, check for the data request, reset the idle bit, perform two 1024-byte transfers to the decoder's buffer (effectively filling it completely), then set the idle bit, reset it, and set it again;
after performing step 7, continue replenishing normally, 1024 bytes per decoder request.
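A rough illustration of steps 3 and 4 for a CBR MPEG-1 Layer III file (the function names and the fixed FF FB/FF FA sync check are my own simplifications, not part of the original driver):

```cpp
#include <cstdint>
#include <cstddef>

// MPEG-1 Layer III frame length in bytes (CBR):
// 144 * bitrate / sampleRate, plus 1 if the padding bit is set.
static std::size_t frameSize(std::uint32_t bitrateBps, std::uint32_t sampleRate, bool padding)
{
    return 144u * bitrateBps / sampleRate + (padding ? 1u : 0u);
}

// Step 3: newPosition = ID3Size + seekTime * (bitrate / 8), with bitrate in bits per second.
static std::size_t seekOffset(std::size_t id3Size, std::uint32_t bitrateBps, double seekTimeSec)
{
    return id3Size + static_cast<std::size_t>(seekTimeSec * bitrateBps / 8.0);
}

// Step 4: from 'pos', find the next offset that starts with FF FB/FF FA and whose
// computed frame size lands on another FF FB/FF FA sync word; returns -1 on failure.
static long findNextFrame(const std::uint8_t *data, std::size_t len, std::size_t pos,
                          std::uint32_t bitrateBps, std::uint32_t sampleRate)
{
    for (std::size_t i = pos; i + 4 < len; ++i) {
        if (data[i] != 0xFF || (data[i + 1] != 0xFB && data[i + 1] != 0xFA))
            continue;
        const bool padding = (data[i + 2] & 0x02) != 0;    // padding bit of the MP3 header
        const std::size_t next = i + frameSize(bitrateBps, sampleRate, padding);
        if (next + 1 < len && data[next] == 0xFF &&
            (data[next + 1] == 0xFB || data[next + 1] == 0xFA))
            return static_cast<long>(i);                    // confirmed frame start
    }
    return -1;
}
```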
Employing this plan I had zero sound corruption cases. As you can see, it requires some changes to the Verilog, but it should be easy if you know the basics of hardware, know Verilog, and can perform reverse engineering.

How to detect camera frame loss using Windows media API like Media Foundation or DirectShow?

I am writing an application for Windows that runs a CUDA accelerated HDR algorithm. I've set up an external image signal processor device that presents as a UVC device, and delivers 60 frames per second to the Windows machine over USB 3.0.
Every "even" frame is a more underexposed frame, and every "odd" frame is a more overexposed frame, which allows my CUDA code perform a modified Mertens exposure fusion algorithm to generate a high quality, high dynamic range image.
Very abstract example of Mertens exposure fusion algorithm here
My only problem is that I don't know how to know when I'm missing frames, since the only camera API I have interfaced with on Windows (Media Foundation) doesn't make it obvious that a frame I grab with IMFSourceReader::ReadSample isn't the frame that was received after the last one I grabbed.
Is there any way that I can guarantee that I am not missing frames, or at least easily and reliably detect when I have, using a Windows available API like Media Foundation or DirectShow?
It wouldn't be such a big deal to miss a frame and then have to purposefully "skip" the next frame in order to grab the next overexposed or underexposed frame to pair with the last frame we grabbed, but I would need to know how many frames were actually missed since a frame was last grabbed.
Thanks!
There is the IAMDroppedFrames::GetNumDropped method in DirectShow, and chances are that the same information can be retrieved through Media Foundation as well (never tried - it is possibly obtainable with a method similar to this).
The GetNumDropped method retrieves the total number of frames that the filter has dropped since it started streaming.
However, I would question its reliability. The reason is that with both of these APIs, the attribute which is more or less reliable is a frame's time stamp. Capture devices can flexibly reduce the frame rate for a few reasons, both external, like low-light conditions, and internal, like slow blocking processing downstream in the pipeline. This makes it hard to distinguish between odd and even frames, but the time stamp remains accurate, and you can apply frame rate math to convert it to frame indices.
In your scenario, however, I would rather detect large gaps in frame times to identify possible continuity loss, and from there run an algorithm that compares the exposure of the next few consecutive frames to get back in sync with the under-/overexposure alternation. That sounds like a more reliable way out.
After all this exposure problem is highly likely to be pretty much specific to the hardware you are using.
Normally, MFSampleExtension_Discontinuity exists for this. When you use IMFSourceReader::ReadSample, check this attribute on the returned sample.
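For illustration, a hedged C++ sketch combining both suggestions - the discontinuity attribute and timestamp math - around IMFSourceReader::ReadSample (the reader setup is omitted, and frameDuration100ns is an assumed nominal frame period, e.g. 166667 for 60 fps):

```cpp
#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <cstdio>

// Reads one video sample and reports dropped frames, both via the explicit
// MFSampleExtension_Discontinuity flag and via gaps in the sample timestamps.
void CheckOneSample(IMFSourceReader *reader, LONGLONG &lastTimestamp,
                    LONGLONG frameDuration100ns)
{
    DWORD streamIndex = 0, streamFlags = 0;
    LONGLONG timestamp = 0;
    IMFSample *sample = nullptr;

    HRESULT hr = reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                                    &streamIndex, &streamFlags, &timestamp, &sample);
    if (FAILED(hr) || !sample)
        return;

    // 1) The discontinuity flag set by the source/reader, if any.
    UINT32 discontinuity = 0;
    if (SUCCEEDED(sample->GetUINT32(MFSampleExtension_Discontinuity, &discontinuity)) &&
        discontinuity)
        std::printf("discontinuity flagged by the source\n");

    // 2) Timestamp math: estimate how many frame periods elapsed since the last sample.
    if (lastTimestamp != 0) {
        LONGLONG gap = timestamp - lastTimestamp;
        LONGLONG missed = (gap + frameDuration100ns / 2) / frameDuration100ns - 1;
        if (missed > 0)
            std::printf("approximately %lld frame(s) missed\n",
                        static_cast<long long>(missed));
    }
    lastTimestamp = timestamp;

    sample->Release();
}
```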

Changing the pitch of raw waveform data in Windows

I am trying to modify waveform data that I am getting through waveInOpen via WAVEHDR-structs. I want to change the pitch of the sound.
All I have is a pointer to the raw audio data and the number of the used bytes.
I am a little lost because I can't find any examples of how to do this.
I would be really thankful for a starting point on how to edit raw waveform data (or even an example of how to change the pitch would be really awesome).
Thanks!
You can change the pitch by changing the playback rate. Say, for example, you recorded a waveform at a 48 kHz sampling rate and then, when you played it back, you told the system that the sample rate was 96 kHz: the pitch of everything would double. The playback duration would also halve, which you may not want. An alternative to changing the sample rate is to add or remove samples to achieve basically the same effect. Contrary to the other answer, it is not as arbitrary as just adding or removing samples: when you remove samples you need to apply low-pass filtering to prevent aliasing, and when inserting samples you need to apply an interpolation filter. These are not trivial if you don't have a signal processing background. Finally, if your goal is to shift the pitch but keep the original duration, then you need to look at something like a phase vocoder.
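As a rough starting point (my own sketch, not a production resampler): the simplest "add or remove samples" approach reads the input at a non-integer step with linear interpolation. It ignores the anti-aliasing and interpolation filters mentioned above, so expect some artifacts, and the duration changes along with the pitch:

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>
#include <cmath>

// Naive pitch shift of 16-bit mono PCM by resampling with linear interpolation.
// pitchFactor > 1 raises the pitch (and shortens the output); < 1 lowers it.
std::vector<int16_t> pitchShift(const std::vector<int16_t> &in, double pitchFactor)
{
    std::vector<int16_t> out;
    if (in.size() < 2 || pitchFactor <= 0.0)
        return out;

    double pos = 0.0;
    while (pos < static_cast<double>(in.size() - 1)) {
        std::size_t i = static_cast<std::size_t>(pos);
        double frac = pos - static_cast<double>(i);
        // Linear interpolation between neighbouring samples.
        double sample = in[i] * (1.0 - frac) + in[i + 1] * frac;
        out.push_back(static_cast<int16_t>(std::lround(sample)));
        pos += pitchFactor;   // read faster (or slower) than we write
    }
    return out;
}
```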

Programmatically convert WAV

I'm writing a file compressor utility in C++ in which I want to support PCM WAV files; however, I want to keep the PCM encoding and just convert the audio to a lower sample rate and change it from stereo to mono, if applicable, to yield a smaller file size.
I understand the WAV file header, however I have no experience or knowledge of how the actual sound data works. So my question is, would it be relatively easy to programmatically manipulate the "data" sub-chunk in a WAV file to convert it to another sample rate and change the channel number, or would I be much better off using an existing library for it? If it is, then how would it be done? Thanks in advance.
PCM merely means that the value of the original signal is sampled at equidistant points in time.
For stereo, there are two sequences of these values. To convert them to mono, you merely take the piecewise average of the two sequences.
Resampling the signal at a lower sampling rate is a little bit trickier - you have to filter out high frequencies from the signal so as to prevent aliases (spurious low-frequency signals) from being created.
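As a quick sketch of that averaging step (assuming 16-bit interleaved stereo PCM, the most common case for WAV):

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Mix interleaved stereo to mono by averaging each left/right pair.
std::vector<int16_t> stereoToMono(const std::vector<int16_t> &interleaved)
{
    std::vector<int16_t> mono;
    mono.reserve(interleaved.size() / 2);
    for (std::size_t i = 0; i + 1 < interleaved.size(); i += 2) {
        // Average left (i) and right (i + 1); use a wider type to avoid overflow.
        int32_t avg = (static_cast<int32_t>(interleaved[i]) + interleaved[i + 1]) / 2;
        mono.push_back(static_cast<int16_t>(avg));
    }
    return mono;
}
```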
I agree with avakar and nico, but I'd like to add a little more explanation. Lowering the sample rate of PCM audio is not trivial unless two things are true:
Your signal only contains significant frequencies lower than 1/2 the new sampling rate (Nyquist rate). In this case you do not need an anti-aliasing filter.
You are downsampling by an integer factor. In this case, downsampling by N just requires keeping every Nth sample and dropping the rest.
If these are true, you can just drop samples at a regular interval to downsample. However, they are both probably not true if you're dealing with anything other than a synthetic signal.
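A tiny sketch of that easy case (my own illustration): integer-factor decimation by keeping every Nth sample - again, only safe when the content is already band-limited below half the new sample rate:

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Downsample by an integer factor: keep every Nth sample, drop the rest.
std::vector<int16_t> decimate(const std::vector<int16_t> &in, std::size_t factor)
{
    std::vector<int16_t> out;
    if (factor == 0)
        return out;
    for (std::size_t i = 0; i < in.size(); i += factor)
        out.push_back(in[i]);
    return out;
}
```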
To address problem one, you will have to filter the audio samples with a low-pass filter to make sure the resulting signal only contains frequency content up to 1/2 the new sampling rate. If this is not done, high frequencies will not be accurately represented and will alias back into the frequencies that can be properly represented, causing major distortion. Check out the critical frequency section of this wikipedia article for an explanation of aliasing. Specifically, see figure 7 that shows 3 different signals that are indistinguishable by just the samples because the sampling rate is too low.
Addressing problem two can be done in multiple ways. Sometimes it is performed in two steps: an upsample followed by a downsample, therefore achieving rational change in the sampling rate. It may also be done using interpolation or other techniques. Basically the problem that must be solved is that the samples of the new signal do not line up in time with samples of the original signal.
As you can see, resampling audio can be quite involved, so I would take nico's advice and use an existing library. Getting the filter step right will require you to learn a lot about signal processing and frequency analysis. You won't have to be an expert, but it will take some time.
I don't think there's really a need to reinvent the wheel (unless you want to do it for your personal learning).
For instance you can try to use libsnd

encoding camera with audio source in realtime with WMAsfWriter - jitter problem

I built a DirectShow graph consisting of my video capture filter (grabbing the screen) and the default audio input filter, both connected through a splitter to the WM ASF Writer output filter and to a VMR9 renderer. This means I want real-time audio/video encoding to disk together with a preview. The problem is that no matter what WM profile I choose (even a very low resolution profile), the output video file always "jitters" - every few frames there is a delay. The audio is OK - there is no jitter in the audio. The CPU usage is low (< 10%), so I believe this is not a problem of lack of CPU resources. I think I'm time-stamping my frames correctly.
What could be the reason?
Below is a link to a recorded video explaining the problem:
http://www.youtube.com/watch?v=b71iK-wG0zU
Thanks
Dominik Tomczak
I have had this problem in the past. Your problem is the volume of data being written to disk. Writing to a faster drive is a great and simple solution. The other thing I've done is place a video compressor into the graph. You need to make sure both input streams are using the same reference clock. I have had a lot of problems using this compressor scheme while keeping a good preview: my preview's frame rate dies even if I use an Infinite Tee rather than a Smart Tee, though the result written to disk was fine. It's also worth noting that the beefier the machine I ran it on, the less of an issue this was, so if you need both preview and capture it may not actually provide much of a win over sticking a new, faster hard disk in the machine.
I don't think that is the issue. The volume of data written is less than 1 MB/s (at the average compression ratio during encoding). I found the reason: when I build the graph without the audio input (the WM ASF Writer has only a video input pin) and my video capture pin is connected through a Smart Tee to the preview pin and to the WM ASF Writer's video input pin, there is no glitch in the output movie. I reckon this is a problem with audio-to-video synchronization in my graph. The same happens when I build the graph in GraphEdit: without audio, no glitch; with audio, there is a constant glitch every 1 s. I wonder whether I am time-stamping my frames wrongly, but I think I'm doing it correctly. What is the general solution for audio-to-video synchronization in DirectShow graphs?