I'm using Media Foundation to create an MP4 (H.264 + AAC) output file from an input MP4 after a series of filters. The creation of the video works perfectly and the video plays back without issues locally. The problem is that when it is played remotely (through a web player or even VLC), the video doesn't start until it's fully downloaded.
I checked and confirmed that the HTTP server hosting the file supports the Accept-Ranges header, and after a while I figured out that the problem happens because the file wasn't created with the "fast start" layout that allows progressive download of the video.
I tried to search online for a solution, but I've been unable to find a way to apply that flag with Media Foundation's Sink Writer. Any idea? (I can't use any external application to do this, as this code is going to run within the Windows Store environment.)
Progressive download requires that the 'moov' box comes before the 'mdat' box in the MPEG-4 file, which typically takes additional effort when the file is generated and which is not the default behavior with Media Foundation.
Media Foundation introduced the MF_MPEG4SINK_MOOV_BEFORE_MDAT attribute to handle this:
The default behavior of the mpeg4 media sink is to write 'moov' after
'mdat' box. Setting this attribute causes the generated file to write
'moov' before 'mdat' box.
In order for the mpeg4 sink to use this attribute, the byte stream
passed in must not be slow seek or remote.
This feature involves an additional file copying/remuxing.
Note the minimum OS requirements for this attribute. Otherwise, you need to post-process the file to move the 'moov' box to the beginning.
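For illustration, a minimal sketch of requesting this behavior through the attributes passed when creating the Sink Writer; whether the Sink Writer forwards the attribute to the underlying MPEG-4 sink depends on the OS version, so treat this as a starting point to verify rather than a guaranteed recipe:

    #include <mfapi.h>
    #include <mfidl.h>
    #include <mfreadwrite.h>

    // Ask for 'moov' before 'mdat' when creating the Sink Writer.
    // MF_MPEG4SINK_MOOV_BEFORE_MDAT requires Windows 8 or later and the
    // byte stream seekability requirements quoted above.
    IMFSinkWriter *pWriter = nullptr;
    IMFAttributes *pAttributes = nullptr;

    HRESULT hr = MFCreateAttributes(&pAttributes, 1);
    if (SUCCEEDED(hr))
        hr = pAttributes->SetUINT32(MF_MPEG4SINK_MOOV_BEFORE_MDAT, TRUE);
    if (SUCCEEDED(hr))
        hr = MFCreateSinkWriterFromURL(L"output.mp4", nullptr, pAttributes, &pWriter);
    // ... add streams, write samples, then call Finalize() as usual ...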
See also:
How to generate "moov before mdat" MP4 video files with Media Foundation
I'm working on a c++ project that generates frames to be converted to a video later.
The project currently dumps all frames as jpg or png files into a folder, and then I run ffmpeg manually to generate an mp4 video file.
This project runs on a web server and an ios/android app (under development) will call this web server to have the video generated and downloaded.
The web service is pretty much done and working fine.
I don't like this approach for obvious reasons like a server dependency, cost etc...
I successfully created a POC that exposes the frame generator lib to Android, and I got it to save the frames in a folder; my next step now is to convert them to video. I considered using an ffmpeg lib for Android/iOS and just calling it when the frames are done.
Although it seems like I've fixed half of the problem, I found a new one: depending on the configuration, each frame could end up being 200 KB+ in size, so depending on the number of frames, it will take a lot of space on the user's device.
I'm sure this will become a huge problem very easily.
So I believe that the ideal solution would be to generate the mp4 file on demand as each frame is created, so in the end there would be no storage space being taken, as I wouldn't need to save a file for each frame.
The problem is that I don't know how to do that. I don't know much about ffmpeg; I know it's open source, but I have no idea how to reference it from the frame generator and generate the video "on demand".
I heard about libav as well but again, same problem...
I would really appreciate any suggestion on how to do it. What I need is basically a way to generate an mp4 video file given a list of frames.
thanks for any help!
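To illustrate the on-demand idea: one approach, where spawning a process is possible (e.g. on the web server), is to pipe raw frames straight into ffmpeg so no intermediate image files are written. A minimal sketch, with made-up resolution, frame rate, and frame count; on iOS/Android you would instead have to link an ffmpeg/libav build as a library:

    #include <cstdio>
    #include <cstdint>
    #include <vector>

    int main()
    {
        const int width = 640, height = 480, frameCount = 300;  // example values

        // ffmpeg reads raw RGB frames from stdin and encodes them to H.264/MP4.
        // Use _popen/_pclose on Windows.
        FILE *pipe = popen(
            "ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 640x480 -r 30 -i - "
            "-c:v libx264 -pix_fmt yuv420p out.mp4", "w");
        if (!pipe) return 1;

        std::vector<uint8_t> frame(width * height * 3);
        for (int i = 0; i < frameCount; ++i)
        {
            // ... fill 'frame' with pixel data from the frame generator ...
            fwrite(frame.data(), 1, frame.size(), pipe);
        }
        pclose(pipe);  // ffmpeg finalizes the file when stdin closes
        return 0;
    }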
First of all, here is what I'm trying to accomplish:
We'd like to add the capability to our commercial application to generate a video file to visualize data. It should be saved in a reasonably compressed format. It is important that the encoding library/codecs are licensed such that we're allowed to use and sell our software without restriction. It's also important that the media format can easily be played by a customer, i.e. can be played by Windows Media Player without requiring a codec pack to be installed.
I'm trying to use DirectShow in C++ by creating a source filter with one output pin that generates the video. I'm closely following the DirectShow samples called Bouncing Ball and Push Source. In GraphEdit I can successfully connect to a video renderer and see the video play. I have also managed to connect to the AVI Mux and then the File Writer to write an uncompressed AVI file. The only issue with this is the huge file size. However, I have not been able to save the video in a compressed format. This problem also happens with the Ball and Push Source samples.
I can connect the output pin to a WM ASF Writer, but when I click play I get "This graph can't play. Unspecified error (Return code: 0x80004005)."
I can't even figure out how to connect to the WMVideo9 Encoder DMO ("These filters cannot agree on a connection"). I could successfully save to MJPEG, but the compression was not very substantial.
Please let me know if I'm doing something wrong in GraphEdit or if my source filter code needs to be modified. Alternatively, if there is another (non-DirectShow) option that would work for me I'm open to suggestions. Thanks.
You don't give enough detail about your modifications to the filters to help with those specifically; however, the Ball sample generates output which can be written to a file.
Your choice of the WM ASF Writer filter is okay - it is a stock filter and it is more or less easy to deal with. There is, however, a caveat: you need to select a video-only profile on the filter first, and then connect the video input. The WM ASF Writer won't run with an unconnected input pin, and in its default state it also has an audio input. Of course, this can also be done programmatically.
The graph can be as simple as this; it can be run and it generates a playable file.
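For the programmatic route, a rough sketch of selecting a profile on the WM ASF Writer could look like the following. The filter is assumed to be created and added to the graph already, and WMProfile_V80_256Video is used only as an example of a predefined system profile GUID; substitute a profile (possibly a custom-built one) that actually contains a single video stream:

    #include <dshow.h>
    #include <dshowasf.h>   // IConfigAsfWriter
    #include <wmsdk.h>      // IWMProfileManager, WMCreateProfileManager (link wmvcore.lib)
    #include <wmsysprf.h>   // predefined system profile GUIDs

    // pAsfWriter: the WM ASF Writer filter already added to the graph.
    HRESULT ConfigureWriterProfile(IBaseFilter *pAsfWriter)
    {
        IConfigAsfWriter *pConfig = nullptr;
        HRESULT hr = pAsfWriter->QueryInterface(__uuidof(IConfigAsfWriter), (void**)&pConfig);
        if (FAILED(hr)) return hr;

        IWMProfileManager *pManager = nullptr;
        hr = WMCreateProfileManager(&pManager);
        if (SUCCEEDED(hr))
        {
            IWMProfile *pProfile = nullptr;
            // Example system profile; pick one with only a video stream so the
            // audio input pin goes away.
            hr = pManager->LoadProfileByID(WMProfile_V80_256Video, &pProfile);
            if (SUCCEEDED(hr))
            {
                hr = pConfig->ConfigureFilterUsingProfile(pProfile);
                pProfile->Release();
            }
            pManager->Release();
        }
        pConfig->Release();
        return hr;
    }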
I've stumbled through some code to enumerate my microphone devices (with some help), and am able to grab the "friendly name" and "clsid" information from each device.
I've done some tinkering with GraphEdit to try to figure out how I can take audio from DirectShow and write it to a file (I'm not currently concerned about the format; WAV should be fine), and I can't seem to find the right combination.
One of the articles I've read linked to this Windows SDK sample, but when I examined the code, I ended up getting pretty confused about how to use it, i.e., setting the output file or specifying which audio capture device to use.
I also came across a codeguru article that has a nicely featured audio recorder, but it does not have an interface for selecting the audio device, and I can't seem to find where it statically picks which recording device to use.
I think I'd be most interested in figuring out how to use the Windows SDK sample, but any explanation on either of the two approaches would be fantastic.
Edit: I should mention my knowledge and ability as a win32 COM programmer is very low on the scale, so if this is easy, just explain it to me like I'm five, please.
Recording audio into a file with DirectShow requires you to build the right filter graph, as you should have figured out already. The parts include:
1. The audio capture device itself, which you instantiate via its moniker (not CLSID!); it typically delivers PCM format
2. A multiplexer component that converts the stream into a container format
3. The File Writer filter, which takes the file-compatible stream and writes it into a file
The tricky part is #2, since there is no standard component available. The Windows SDK samples, however, contain the missing part: the WavDest Filter Sample. After building it and making it ready for use, you can build a graph that records from the device into a .WAV file.
Your graph will look like this, and it's easily built programmatically as well:
I noticed that I have a variation of WavDest installed with Google Earth, in case you have trouble building it yourself and are looking for a prebuilt binary.
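For item #1 in the list above, a minimal sketch of instantiating the capture filter via its moniker, assuming you simply want the first audio input device found (error handling trimmed):

    #include <dshow.h>

    // Returns the capture filter for the first audio input device, if any.
    HRESULT CreateFirstAudioCaptureFilter(IBaseFilter **ppFilter)
    {
        ICreateDevEnum *pDevEnum = nullptr;
        HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum, nullptr,
                                      CLSCTX_INPROC_SERVER,
                                      IID_ICreateDevEnum, (void**)&pDevEnum);
        if (FAILED(hr)) return hr;

        IEnumMoniker *pEnum = nullptr;
        hr = pDevEnum->CreateClassEnumerator(CLSID_AudioInputDeviceCategory, &pEnum, 0);
        pDevEnum->Release();
        if (hr != S_OK) return E_FAIL;   // S_FALSE means no devices in the category

        IMoniker *pMoniker = nullptr;
        hr = E_FAIL;
        if (pEnum->Next(1, &pMoniker, nullptr) == S_OK)
        {
            // BindToObject instantiates the device's capture filter
            hr = pMoniker->BindToObject(nullptr, nullptr, IID_IBaseFilter, (void**)ppFilter);
            pMoniker->Release();
        }
        pEnum->Release();
        return hr;
    }

The resulting filter is then added to the graph and connected to WavDest and the File Writer.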
You can instruct ffmpeg to record from a DirectShow device and output to a file.
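As a rough illustration, you would first list the available DirectShow devices and then record from one of them (the device name below is just a placeholder for whatever the first command prints on your machine):

    ffmpeg -list_devices true -f dshow -i dummy
    ffmpeg -f dshow -i audio="Your Microphone Name" output.wav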
I'm trying to read a .avi file on Mac OS X with Phonon and Qt, but I can't. Even in QtDemo, the "Media Player" example is not able to play that kind of file. Is there a way to make it work?
Try installing the right codecs on the system. "avi" is actually just a container file format that can contain media encoded with many possible audio and video codecs, which might not be supported/installed by default on OS X. Phonon itself doesn't provide media decoding capabilities; it uses whatever is available on the system, or separately configured, as its back end.
I'm currently writing a custom EVR for a Media Foundation player.
So far everything works, but I need to find the native resolution of the video file I'm rendering.
I tried to use the IBasicVideo2 interface to call GetVideoSize, get_VideoHeight, get_SourceWidth, etc., but it always returns E_NOINTERFACE...
So does someone have an easy way of getting the resolution of a video file? Even if it's with a nice light library... just the size, nothing else... Windows manages to find it in the file browser, but I'm totally unable to get it from code...
Thanks!
You can use IMediaDet in DirectShow to get information on the streams in a media file including the resolution of video streams.
There are some caveats, though, so you might want a backup method.
You need suitable DirectShow filters registered which understand the media file being examined. It's possible that you may have a filter installed that gives wrong results - e.g. an audio-only filter registered for the media type that ignores any video streams in the file.
It's currently deprecated, with no indication on the MSDN reference page of what replaces this functionality. It can also be a pain to build against, as the headers have been removed from the Windows SDK.
Here's one case in point where that method doesn't work...
Get MP4 stream lengths
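For reference, a rough sketch of the IMediaDet approach described above, assuming qedit.h is available to your build (it was removed from newer SDKs, per the caveats); error handling and proper media-type cleanup are trimmed:

    #include <dshow.h>
    #include <qedit.h>   // IMediaDet

    HRESULT GetVideoResolution(const wchar_t *path, long *pWidth, long *pHeight)
    {
        IMediaDet *pDet = nullptr;
        HRESULT hr = CoCreateInstance(__uuidof(MediaDet), nullptr, CLSCTX_INPROC_SERVER,
                                      __uuidof(IMediaDet), (void**)&pDet);
        if (FAILED(hr)) return hr;

        BSTR bstrPath = SysAllocString(path);
        hr = pDet->put_Filename(bstrPath);
        SysFreeString(bstrPath);

        long streams = 0;
        if (SUCCEEDED(hr))
            hr = pDet->get_OutputStreams(&streams);

        HRESULT result = E_FAIL;  // stays E_FAIL if no video stream is found
        for (long i = 0; SUCCEEDED(hr) && i < streams; ++i)
        {
            pDet->put_CurrentStream(i);

            AM_MEDIA_TYPE mt = {};
            if (SUCCEEDED(pDet->get_StreamMediaType(&mt)))
            {
                if (mt.majortype == MEDIATYPE_Video && mt.formattype == FORMAT_VideoInfo)
                {
                    VIDEOINFOHEADER *vih = (VIDEOINFOHEADER *)mt.pbFormat;
                    *pWidth  = vih->bmiHeader.biWidth;
                    *pHeight = vih->bmiHeader.biHeight;
                    result = S_OK;
                }
                if (mt.pbFormat) CoTaskMemFree(mt.pbFormat);
                if (mt.pUnk) mt.pUnk->Release();
            }
        }
        pDet->Release();
        return result;
    }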