I created an application using libsndfile and WASAPI that lets me play an audio file slowly and scrub through it manually with a QSlider cursor.
I see that libsndfile works in frames. I want to know how to get the duration of the audio file (I cannot find it in the libsndfile documentation) and how to play parts of the file using milliseconds.
I managed to make it work with frames, but I want to understand how to do it with milliseconds.
The SndfileHandle objects created by libsndfile's C++ API have a frames() method that gives you this information, and samplerate() gives you the sampling rate, so the duration in seconds is:
static_cast<double>(frames())/samplerate()
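For example, here is a minimal sketch using the libsndfile C++ wrapper (sndfile.hh); the file path and the millisecond start position are placeholders. Converting a position given in milliseconds back to a frame index is just the reverse calculation: frame = ms * samplerate / 1000.

#include <sndfile.hh>
#include <cstdio>

int main()
{
    // "input.wav" is a placeholder path.
    SndfileHandle snd("input.wav");
    if (snd.error()) {
        std::printf("open failed: %s\n", snd.strError());
        return 1;
    }

    // Duration in seconds = total frames / sample rate.
    const double durationSec = static_cast<double>(snd.frames()) / snd.samplerate();
    std::printf("duration: %.3f s (%.0f ms)\n", durationSec, durationSec * 1000.0);

    // To play from a position given in milliseconds, convert it back
    // to a frame index and seek there before reading samples.
    const double startMs = 1500.0; // example position
    const sf_count_t startFrame =
        static_cast<sf_count_t>(startMs / 1000.0 * snd.samplerate());
    snd.seek(startFrame, SEEK_SET);

    return 0;
}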
I know that one can limit the download rate with youtube-dl using the -r or --limit-rate flag. As part of a simulation test, I am trying to simulate a user watching a video, so I want to download a video at a rate such that the download takes as long as the video's duration: a 2-minute video should take 2 minutes to download, and so on.
I have meticulously reviewed the available options on the GitHub page, but it seems there is no option that natively does this. The next best thing I can think of is to get the video duration in seconds (let's call it t) and the video size in bytes (let's call it s) and then use s/t as the value for the --limit-rate flag (see the sketch below).
The problem now is that there doesn't seem to be any option/flag to get the video file size in bytes!
Is there any way I can accomplish my goal here? I am open to using other tools/programs if this is outside the capabilities of youtube-dl.
To be more specific, I am working in a Linux server environment (no video card, and everything needs to run headlessly), and the videos I'm dealing with are MPEG-DASH videos described by an MPD file, so whatever tool I use needs to be able to parse and work with MPD files.
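For what it's worth, the s/t arithmetic itself is trivial once both values are known; here is a tiny sketch with made-up numbers (--limit-rate accepts a plain bytes-per-second value):

#include <cstdio>

int main()
{
    // Made-up example values; in practice these would come from whatever
    // tool can report the video's total size and its duration.
    const double sizeBytes       = 150.0 * 1024 * 1024; // s: size in bytes
    const double durationSeconds = 120.0;               // t: duration in seconds

    // Bytes per second so the download takes roughly as long as the video.
    const long rate = static_cast<long>(sizeBytes / durationSeconds);

    std::printf("youtube-dl --limit-rate %ld <URL>\n", rate);
    return 0;
}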
Thank you for your help.
I'm working on a C++ project that generates frames to be converted to a video later.
The project currently dumps all frames as JPG or PNG files into a folder, and then I run ffmpeg manually to generate an MP4 video file.
This project runs on a web server, and an iOS/Android app (under development) will call this web server to have the video generated and downloaded.
The web service is pretty much done and working fine.
I don't like this approach for obvious reasons like the server dependency, cost, etc.
I successfully created a POC that exposes the frame-generator library to Android, and I got it to save the frames in a folder; my next step is to convert them to video. I considered using one of the ffmpeg-for-Android/iOS libraries and just calling it when the frames are done.
Although it seems like I fixed half of the problem, I found a new one: depending on the configuration, each frame can end up being 200 KB+ in size, so depending on the number of frames, this will take a lot of space on the user's device.
I'm sure this will become a huge problem very quickly.
So I believe the ideal solution would be to generate the MP4 file on demand as each frame is created, so that in the end no storage space is taken, since I wouldn't need to save a file for each frame.
The problem is that I don't know how to do that. I don't know much about ffmpeg; I know it's open source, but I have no idea how to reference it from the frame generator and generate the video "on demand".
I heard about libav as well, but again, same problem...
I would really appreciate any suggestion on how to do it. What I need is basically a way to generate an MP4 video file given a list of frames.
Thanks for any help!
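One possible direction, sketched below purely as an illustration: instead of writing image files, stream raw RGB frames straight into an ffmpeg process and let it encode the MP4 as the frames are produced. This assumes a POSIX environment with popen and an ffmpeg executable on the PATH (fine on the server; on mobile the equivalent idea would be feeding frames to the libavcodec/libavformat API or to one of the ffmpeg wrapper libraries instead of spawning a process). The resolution, frame rate, and output path are placeholders.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    // Placeholder video parameters.
    const int width = 640, height = 360, numFrames = 90;

    // Ask ffmpeg to read raw RGB24 frames from stdin and encode an H.264 MP4.
    const char *cmd =
        "ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 640x360 -r 30 -i - "
        "-c:v libx264 -pix_fmt yuv420p output.mp4";

    FILE *pipe = popen(cmd, "w");
    if (!pipe) {
        std::fprintf(stderr, "failed to start ffmpeg\n");
        return 1;
    }

    std::vector<std::uint8_t> frame(width * height * 3);
    for (int i = 0; i < numFrames; ++i) {
        // The real frame generator would fill `frame` here; this just
        // produces a brightness ramp so the sketch runs on its own.
        std::fill(frame.begin(), frame.end(),
                  static_cast<std::uint8_t>(i * 255 / numFrames));

        // Stream the frame to ffmpeg instead of saving it to disk.
        std::fwrite(frame.data(), 1, frame.size(), pipe);
    }

    pclose(pipe); // ffmpeg finalizes output.mp4 once stdin is closed
    return 0;
}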
I am developing an app which needs to sample audio from the microphone. I have used QAudioRecorder and QAudioProbe to do the sampling, and everything works fine. But I have just realized that QAudioRecorder saves the recorded audio to my Documents folder. Maybe I should use QAudioInput instead; I will redo it all if I must. But is there any way to disable the creation of that audio file? I have my samples; I don't need them on my hard drive. Thank you for your help.
Unfortunately, a storage location is inherent to QAudioRecorder, so you must use a lower-level API to capture audio without storing it to disk.
Here is a minimal example using QAudioInput: http://doc.qt.io/qt-5/qtmultimedia-multimedia-audioinput-example.html
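For reference, a minimal sketch of that approach with the Qt 5 API (the format values are just examples and should be checked against what the device supports): QAudioInput::start() hands back a QIODevice you read raw samples from, and nothing is ever written to disk.

#include <QAudioFormat>
#include <QAudioInput>
#include <QByteArray>
#include <QCoreApplication>
#include <QIODevice>
#include <QObject>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Example format; adjust to whatever the input device supports.
    QAudioFormat format;
    format.setSampleRate(44100);
    format.setChannelCount(1);
    format.setSampleSize(16);
    format.setCodec("audio/pcm");
    format.setByteOrder(QAudioFormat::LittleEndian);
    format.setSampleType(QAudioFormat::SignedInt);

    QAudioInput input(format);

    // start() returns a QIODevice; the samples stay in memory only.
    QIODevice *device = input.start();
    QObject::connect(device, &QIODevice::readyRead, [device]() {
        const QByteArray samples = device->readAll();
        // ... analyze `samples` here instead of writing them anywhere ...
        Q_UNUSED(samples);
    });

    return app.exec();
}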
I have two cameras, listed below, that I am trying to use in a Media Foundation topology. Here is a summary of my topology:
Webcam --> MJPG Decoder --> Custom MFT --> H264 Encoder --> MP4 File Sink
The problem with this is that the generated MP4 file has incorrect duration and time scale tags, both for the MP4 container and the H264 stream. I can easily correct this with a tool like MP4Box or YAMB, but my eventual goal is to stream the video.
One potential cause I have identified is that the samples generated by the camera sources do not start at time 0. According to bullet #2 in http://msdn.microsoft.com/en-us/library/windows/desktop/ms700134(v=vs.85).aspx#live_sources, timestamps of a live source should start at 0.
Along this line, I've tried the following to "correct" the sample timestamps:
Re-based the sample time in my custom MFT, using IMFSample::SetSampleTime.
Created a wrapper for the IMFMediaSource and IMFMediaStream objects, which catches and corrects the time stamps associated with the MEMediaSample and MEStreamTick events.
In both of these cases, the media session throws an error 0xC00D4A44 (MF_E_SINK_NO_SAMPLES_PROCESSED), and the resulting MP4 file ends abruptly after the "mdat" atom declaration.
Cameras
Logitech BCC950 ConferenceCam
Thinkpad W520 Integrated Camera
Systems used (both have the same issue):
Windows 7 Professional x64
Windows 8 x86
Questions:
Is there some other cause I have overlooked for incorrect video duration/time scale?
If not, is there a correct approach for how to re-base sample timestamps?
Try resetting the MFSampleExtension_Discontinuity flag for every sample:
pSample->SetUINT32( MFSampleExtension_Discontinuity, FALSE );
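A sketch of how that might look inside the custom MFT's per-sample processing, combined with the timestamp re-basing described in the question so the stream starts at 0; AdjustSample and the two state variables are hypothetical names, not part of any Media Foundation API:

#include <mfapi.h>
#include <mfidl.h>

// Hypothetical helper invoked on each outgoing IMFSample; firstSampleTime
// and haveFirstSampleTime would normally be members of the MFT.
HRESULT AdjustSample(IMFSample *pSample,
                     LONGLONG &firstSampleTime,
                     bool &haveFirstSampleTime)
{
    // Clear the discontinuity flag on every sample, as suggested above.
    HRESULT hr = pSample->SetUINT32(MFSampleExtension_Discontinuity, FALSE);
    if (FAILED(hr))
        return hr;

    LONGLONG sampleTime = 0;
    hr = pSample->GetSampleTime(&sampleTime);
    if (FAILED(hr))
        return hr;

    // Remember the first timestamp seen and subtract it from all samples,
    // so the re-based stream starts at time 0 as expected for live sources.
    if (!haveFirstSampleTime)
    {
        firstSampleTime = sampleTime;
        haveFirstSampleTime = true;
    }

    return pSample->SetSampleTime(sampleTime - firstSampleTime);
}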
This is a problem I have been dealing with for a while and haven't been able to get a good answer for (even from Microsoft). I'm using the generic dump filter to write hardware-compressed MPEG files out to disk. In the graph, I also have a SampleGrabber filter that gets called on every frame. From the SampleGrabber callback, I get a subtitle along with the DirectShow timestamp and write them out to a SAMI (.smi) subtitle file. This all seems to be working, as the SAMI file contains the correct subtitles for every frame. However, I have a few problems:
1. The first few (usually 3 or 4) DirectShow timestamps are all 0. If I'm getting callbacks from the SampleGrabber, shouldn't these timestamps be incrementing?
2. When I begin playback, the first timestamp shown is about 10-20 subtitles into the SAMI file. I'd assume the first frame would show the first timestamp in the file.
3. This is probably related to #2, but the subtitles are not synchronized to the appropriate frames in the file. They can sometimes be up to 40 frames late.
I'm using DirectShow via C++, capturing with a Hauppauge HVR-1800 under Windows XP SP3 (with latest drivers 09/08/2008), and playing back under Media Player Classic 6.4.9.0. Any ideas are welcome.
Are you calling GetTime or GetMediaTime on the incoming IMediaSample? GetTime is what you want, as it represents the stream's presentation time.
Be sure to also check the incoming IMediaSample's IsPreroll method. Preroll samples should be ignored, as they will be output again during playback. Another thing I would do is make sure that your SampleGrabber is as far downstream in your filter graph as it can be, preferably after any demuxers and renderers.
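A minimal sketch of a SampleGrabber callback applying both checks (the actual SAMI writing is left out; REFERENCE_TIME values are in 100-nanosecond units):

#include <dshow.h>
#include <qedit.h>   // ISampleGrabberCB (older Windows SDK header)

class GrabberCallback : public ISampleGrabberCB
{
public:
    // Called for every media sample passing through the SampleGrabber.
    STDMETHODIMP SampleCB(double /*sampleTime*/, IMediaSample *pSample)
    {
        // Skip preroll samples; they are delivered again during playback.
        if (pSample->IsPreroll() == S_OK)
            return S_OK;

        REFERENCE_TIME start = 0, stop = 0;          // 100 ns units
        if (SUCCEEDED(pSample->GetTime(&start, &stop)))
        {
            // start is the presentation time; convert to milliseconds
            // before writing the SAMI entry for this frame.
            const long startMs = static_cast<long>(start / 10000);
            // ... write the subtitle entry with startMs ...
        }
        return S_OK;
    }

    STDMETHODIMP BufferCB(double, BYTE *, long) { return E_NOTIMPL; }

    // Minimal IUnknown implementation for a statically owned callback object.
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB)
        {
            *ppv = static_cast<ISampleGrabberCB *>(this);
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return 1; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
};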
Also see the article on TimeStamps in the DirectShow documentation. It outlines the other caveats of using timestamps.
Of course, even after all of the tips above, there is still no absolute guarantee as to how a particular DirectShow filter is going to behave.