I use MFCreateSourceReaderFromByteStream to create an IMFSourceReader with a custom IMFByteStream that gets its data from a remote HTTP source.
When the source is an m4a file, everything works as expected. However, when the source is an mp3 file, MFCreateSourceReaderFromByteStream does not return until the whole file has been downloaded. Any idea how to avoid that behavior and start decoding audio before the download has finished?
Assuming you are using the default Media Foundation sources, this is likely just the default behaviour of the MP3 File Source compared to the MPEG-4 File Source.
To confirm this, you can try a custom MPEG audio file source, like the one I implemented: MFSrMpeg12Decoder
This Media Foundation source only handles mp1/mp2 audio files and performs the decoding itself. It is not mp3, but it starts delivering data as soon as it finds a valid MPEG audio header and does not read the full file (you can trust me...).
This will confirm that the default MP3 File Source needs to read the full file before it delivers the byte stream.
One possible explanation is that the MP3 file source reads the entire file to check whether the bit rate is variable, so that it can report the correct duration (MF_PD_DURATION).
For an m4a file, the duration is provided by the moov atom, so there is no need to read the whole file.
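If you want to verify where the time goes, a minimal repro along these lines can help. This is only a sketch: the URL is a placeholder, it uses the source resolver to obtain an IMFByteStream instead of your custom implementation, and error handling is omitted, but the blocking call it measures is the same one the question describes.

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <chrono>
#include <cstdio>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    // Resolve the URL to a byte stream only (no media source yet).
    IMFSourceResolver* pResolver = NULL;
    MFCreateSourceResolver(&pResolver);
    MF_OBJECT_TYPE objType = MF_OBJECT_INVALID;
    IUnknown* pUnk = NULL;
    pResolver->CreateObjectFromURL(L"http://example.com/audio.mp3",   // placeholder URL
                                   MF_RESOLUTION_BYTESTREAM, NULL, &objType, &pUnk);
    IMFByteStream* pStream = NULL;
    pUnk->QueryInterface(IID_PPV_ARGS(&pStream));

    // Measure how long reader creation blocks; for the MP3 source this is where
    // the full download happens, while for m4a it returns much sooner.
    auto t0 = std::chrono::steady_clock::now();
    IMFSourceReader* pReader = NULL;
    MFCreateSourceReaderFromByteStream(pStream, NULL, &pReader);
    auto t1 = std::chrono::steady_clock::now();
    printf("Source reader created after %lld ms\n",
           (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count());

    pReader->Release(); pStream->Release(); pUnk->Release(); pResolver->Release();
    MFShutdown();
    CoUninitialize();
    return 0;
}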
I was able to successfully encode an MP4 file which contains only H.264 encoded video (using the IMFSinkWriter interface). Now I want to add an audio stream to it.
Whenever I try to create a sink writer for the audio using:
MFCreateSinkWriterFromURL(filePath, NULL, NULL, &pSinkWriter)
it deletes the previous file and writes only the audio (well, according to this link it is expected).
So my question is: how to add an audio stream to an existing file which contains only video stream?
Or, if I have raw data for both audio and video, how do I encode them into a single media file? (I suppose I have to do something called multiplexing. If so, can someone provide helpful references?)
The Sink Writer API creates a media file from scratch, starting when you call IMFSinkWriter::BeginWriting and running to final completion when you call IMFSinkWriter::Finalize. You don't add new streams to a finalized file (well, you can do it, but it works differently - see the last paragraph below).
To create a media file with both video and audio you need to add both streams before you begin: two calls to IMFSinkWriter::AddStream, then two calls to IMFSinkWriter::SetInputMediaType, then you start writing with IMFSinkWriter::BeginWriting and feed both video and audio data through IMFSinkWriter::WriteSample, providing the respective stream index. A sketch of that setup follows.
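Here is a minimal sketch of that two-stream setup, assuming MFStartup has already been called; the file name, resolution, frame rate and bitrates are arbitrary placeholders, and error handling is omitted (every call returns an HRESULT you should check).

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

void WriteVideoAndAudio()
{
    IMFSinkWriter* pWriter = NULL;
    MFCreateSinkWriterFromURL(L"output.mp4", NULL, NULL, &pWriter);

    // Target (output) type for the video stream: H.264, 1280x720 @ 30 fps.
    IMFMediaType* pVideoOut = NULL;
    MFCreateMediaType(&pVideoOut);
    pVideoOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    pVideoOut->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
    pVideoOut->SetUINT32(MF_MT_AVG_BITRATE, 4000000);
    pVideoOut->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
    MFSetAttributeSize(pVideoOut, MF_MT_FRAME_SIZE, 1280, 720);
    MFSetAttributeRatio(pVideoOut, MF_MT_FRAME_RATE, 30, 1);
    DWORD videoStream = 0;
    pWriter->AddStream(pVideoOut, &videoStream);

    // Target (output) type for the audio stream: AAC, 44.1 kHz stereo.
    IMFMediaType* pAudioOut = NULL;
    MFCreateMediaType(&pAudioOut);
    pAudioOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    pAudioOut->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_AAC);
    pAudioOut->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
    pAudioOut->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);
    pAudioOut->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, 16);
    pAudioOut->SetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, 16000);
    DWORD audioStream = 0;
    pWriter->AddStream(pAudioOut, &audioStream);

    // Input types describe the raw data you will feed in: NV12 frames and 16-bit PCM.
    IMFMediaType* pVideoIn = NULL;
    MFCreateMediaType(&pVideoIn);
    pVideoIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    pVideoIn->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12);
    pVideoIn->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
    MFSetAttributeSize(pVideoIn, MF_MT_FRAME_SIZE, 1280, 720);
    MFSetAttributeRatio(pVideoIn, MF_MT_FRAME_RATE, 30, 1);
    pWriter->SetInputMediaType(videoStream, pVideoIn, NULL);

    IMFMediaType* pAudioIn = NULL;
    MFCreateMediaType(&pAudioIn);
    pAudioIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    pAudioIn->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM);
    pAudioIn->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
    pAudioIn->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);
    pAudioIn->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, 16);
    pWriter->SetInputMediaType(audioStream, pAudioIn, NULL);

    pWriter->BeginWriting();
    // Wrap each raw frame / PCM block in an IMFSample with a time stamp, then:
    //   pWriter->WriteSample(videoStream, pVideoSample);
    //   pWriter->WriteSample(audioStream, pAudioSample);
    pWriter->Finalize();

    pVideoIn->Release(); pAudioIn->Release();
    pVideoOut->Release(); pAudioOut->Release();
    pWriter->Release();
}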
To add a new stream to an already existing file you need to create a completely new file. One of the options you have is to read the already compressed data from the existing file and write it to the new file using IMFSinkWriter::WriteSample without re-compression; at the same time the second stream can be written with compression. This way you can create an MP4 file with video and audio by taking the video from the existing file and adding/encoding an additional audio track.
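A hedged sketch of the pass-through part of that idea: the source reader hands out the compressed video samples of the existing file in their native format, and because the sink writer's input type equals the stream's target type, WriteSample multiplexes them without re-encoding. File names are placeholders, MFStartup is assumed, and error handling and the audio stream are omitted.

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

void CopyVideoTrack()
{
    IMFSourceReader* pReader = NULL;
    MFCreateSourceReaderFromURL(L"existing_video.mp4", NULL, &pReader);

    // Without SetCurrentMediaType the reader delivers the native (compressed H.264) samples.
    IMFMediaType* pNativeType = NULL;
    pReader->GetNativeMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, &pNativeType);

    IMFSinkWriter* pWriter = NULL;
    MFCreateSinkWriterFromURL(L"new_file.mp4", NULL, NULL, &pWriter);

    DWORD videoStream = 0;
    pWriter->AddStream(pNativeType, &videoStream);                // target type = compressed type
    pWriter->SetInputMediaType(videoStream, pNativeType, NULL);   // same type -> no encoder inserted
    // (Add and feed the audio stream here as shown in the previous sketch.)
    pWriter->BeginWriting();

    for (;;)
    {
        DWORD streamIndex = 0, flags = 0;
        LONGLONG timestamp = 0;
        IMFSample* pSample = NULL;
        pReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                            &streamIndex, &flags, &timestamp, &pSample);
        if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
            break;
        if (pSample)
        {
            pWriter->WriteSample(videoStream, pSample);           // written as-is, no re-compression
            pSample->Release();
        }
    }

    pWriter->Finalize();
    pWriter->Release(); pNativeType->Release(); pReader->Release();
}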
I'm using Media Foundation to create an MP4 (H.264 + AAC) output file out of an input MP4 after a series of filters. The creation of the video works perfectly and the video plays back without issues locally. The problem is that when it is played remotely (through a web player or even VLC), the video doesn't start until it's fully downloaded.
I checked and confirmed that the HTTP server hosting the file supports the Accept-Ranges header field, and after a while I figured out that the problem happens because the file hasn't been created with the "fast start" flag that allows progressive download of the video.
I tried to search online for a solution, but I've been unable to find a way to apply that flag with Media Foundation's Sink Writer. Any idea? (I can't use any external application to do this, as this code is going to run within the Windows Store environment.)
Progressive download requires that the moov box go before the mdat box in the MPEG-4 file, which typically requires additional effort when the file is generated and is not the default behavior with Media Foundation.
Media Foundation introduced the MF_MPEG4SINK_MOOV_BEFORE_MDAT attribute to handle this:
The default behavior of the mpeg4 media sink is to write the 'moov' box after the 'mdat' box. Setting this attribute causes the generated file to write 'moov' before 'mdat'. In order for the mpeg4 sink to use this attribute, the byte stream passed in must not be slow seek or remote. This feature involves an additional file copying/remuxing.
Note the minimal requirements (the attribute is available starting with Windows 8, and the byte stream must not be slow-seek or remote). Otherwise, you need to post-process the file to move the moov box to the beginning.
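As a sketch of how the attribute could be applied - not a confirmed recipe: whether the sink writer forwards it to the MPEG-4 media sink it creates internally is an assumption here, and the file name is a placeholder. If the attribute is ignored on this path, an alternative is to create the MPEG-4 media sink yourself with MFCreateMPEG4MediaSink and wrap it with MFCreateSinkWriterFromMediaSink.

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

HRESULT CreateFastStartWriter(IMFSinkWriter** ppWriter)
{
    IMFAttributes* pAttr = NULL;
    MFCreateAttributes(&pAttr, 2);
    pAttr->SetGUID(MF_TRANSCODE_CONTAINERTYPE, MFTranscodeContainerType_MPEG4);
    pAttr->SetUINT32(MF_MPEG4SINK_MOOV_BEFORE_MDAT, TRUE);   // available starting with Windows 8

    // The output must be a regular local file, per the quoted requirement that the
    // byte stream is not slow-seek or remote.
    HRESULT hr = MFCreateSinkWriterFromURL(L"faststart.mp4", NULL, pAttr, ppWriter);
    pAttr->Release();
    // AddStream / SetInputMediaType / BeginWriting / WriteSample / Finalize follow as usual.
    return hr;
}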
See also:
How to generate "moov before mdat" MP4 video files with Media Foundation
I'm looking to write already compressed (H.264) image data into an MPEG-4 video file. Since this code needs to be optimized to run on an embedded platform, the code should be as simple as possible.
Best would be to just give some header information (like height, width, format, fourcc, etc.), a filename and the compressed data, and have that transformed into a data chunk and written to that file.
So what I need is either of these:
MPEG-4 header information (what goes where exactly)
Is there a main header or are there just headers for each data chunk
What header information is needed for a single video stream (rectangular)
What header information is needed for adding audio
A simple MPEG-4 file writer that does not have to do the compression itself and also allows adding audio frames (C/C++)
The .MP4 file format is described in the MPEG-4 Part 14 specification. It is not just a main header and sub-headers; it has a certain hierarchy of so-called boxes. Some of your choices for writing data into an .MP4 file:
FFmpeg (libavcodec, libavformat) - related Q and related code link
In Windows via DirectShow API - GDCL MP4 Multiplexer or numerous similar commercial offerings
In Windows via Media Foundation API - MPEG-4 File Sink
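For the Media Foundation option, a hedged sketch of the "no compression, just write the chunks" case: add a stream to an IMFSinkWriter with an MFVideoFormat_H264 media type and set the same compressed type as the input type, so the writer only multiplexes. The helper below (names and parameters are placeholders of my choosing) then wraps each compressed frame in an IMFSample and writes it.

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <cstring>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

// Writes one already compressed H.264 frame; frameTime/frameDuration are in 100-ns units.
HRESULT WriteCompressedFrame(IMFSinkWriter* pWriter, DWORD streamIndex,
                             const BYTE* frameData, DWORD frameSize,
                             LONGLONG frameTime, LONGLONG frameDuration)
{
    IMFMediaBuffer* pBuffer = NULL;
    MFCreateMemoryBuffer(frameSize, &pBuffer);

    BYTE* pDst = NULL;
    pBuffer->Lock(&pDst, NULL, NULL);
    memcpy(pDst, frameData, frameSize);
    pBuffer->Unlock();
    pBuffer->SetCurrentLength(frameSize);

    IMFSample* pSample = NULL;
    MFCreateSample(&pSample);
    pSample->AddBuffer(pBuffer);
    pSample->SetSampleTime(frameTime);
    pSample->SetSampleDuration(frameDuration);

    HRESULT hr = pWriter->WriteSample(streamIndex, pSample);  // multiplexed as-is, no encoding

    pSample->Release();
    pBuffer->Release();
    return hr;
}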
I am trying to compress an AVI file using Visual Basic 6.0 but I am having trouble finding information on how to do this.
I am aware that I may be able to use DirectShow with the File Source, AVI Compressor and File Writer filters, but in quick tests I have not been able to get the pins of the filters to connect.
Is there any other simple method of achieving this, such as a DLL?
Thanks.
If you have an AVI file, you can go:
AVI File -> AVI Splitter -> (compressor/encoder like ffdshow video encoder) -> AVI Mux -> File Writer
This turned a 2.16 GB AVI into a 5.56 MB AVI.
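The question mentions VB6, but the same graph can be driven from any COM-capable language. A hedged C++ sketch of the chain above, with placeholder file paths: rather than naming ffdshow specifically, it just binds the first video compressor registered on the machine, and it lets ICaptureGraphBuilder2 insert the AVI Splitter, AVI Mux and File Writer.

#include <dshow.h>
#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitialize(NULL);

    IGraphBuilder* pGraph = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void**)&pGraph);

    ICaptureGraphBuilder2* pBuild = NULL;
    CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, (void**)&pBuild);
    pBuild->SetFiltergraph(pGraph);

    // Source AVI; intelligent connect adds the AVI Splitter when the stream is rendered.
    IBaseFilter* pSource = NULL;
    pGraph->AddSourceFilter(L"C:\\input.avi", L"Source", &pSource);

    // Output: SetOutputFileName adds the AVI Mux and the File Writer for us.
    IBaseFilter* pMux = NULL;
    pBuild->SetOutputFileName(&MEDIASUBTYPE_Avi, L"C:\\output.avi", &pMux, NULL);

    // Pick the first installed video compressor (ffdshow or whatever is registered).
    ICreateDevEnum* pDevEnum = NULL;
    CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void**)&pDevEnum);
    IEnumMoniker* pEnum = NULL;
    pDevEnum->CreateClassEnumerator(CLSID_VideoCompressorCategory, &pEnum, 0);
    IBaseFilter* pEnc = NULL;
    IMoniker* pMoniker = NULL;
    if (pEnum && pEnum->Next(1, &pMoniker, NULL) == S_OK)
    {
        pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pEnc);
        pMoniker->Release();
        pGraph->AddFilter(pEnc, L"Encoder");
    }

    // Source -> (AVI Splitter) -> Encoder -> AVI Mux -> File Writer.
    pBuild->RenderStream(NULL, &MEDIATYPE_Video, pSource, pEnc, pMux);

    // Run the graph and wait until the whole file has been processed.
    IMediaControl* pControl = NULL;
    pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
    IMediaEvent* pEvent = NULL;
    pGraph->QueryInterface(IID_IMediaEvent, (void**)&pEvent);
    pControl->Run();
    long evCode = 0;
    pEvent->WaitForCompletion(INFINITE, &evCode);

    // Cleanup (error handling omitted throughout for brevity).
    pEvent->Release(); pControl->Release();
    if (pEnc) pEnc->Release();
    if (pEnum) pEnum->Release();
    pDevEnum->Release(); pMux->Release(); pSource->Release();
    pBuild->Release(); pGraph->Release();
    CoUninitialize();
    return 0;
}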
Are you sure you mean compress? Your question lacks information, but I think you want to convert the file to another format using ffmpeg/libav.
I'm working on an app in which we use IMediaDet to get stream lengths. Now we're starting to work with MP4 containers. The problem is, when I call IMediaDet::put_Filename() with the MP4 file, I get HRESULT = -2147024770 (ERROR_MOD_NOT_FOUND). Using a commercial MP4 demuxer, I see the video stream uses MPEG-2 encoding.
My questions: How do I get the length of a stream inside an MP4 container? Is there a way to make IMediaDet accept these files? Is there a way to specify which demuxer IMediaDet should use?
Any help would be much appreciated.
Thanks.
Unfortunately, DirectShow does not contain an MP4 parser, even in Windows 7. In Windows 7, the MP4 functionality was added to Media Foundation.
So you have a few options. You can buy or build a DirectShow filter that implements an MP4 demuxer and associate it with the .mp4 file extension, which should allow IMediaDet to properly demux the file. Or you can use Media Foundation, which should be able to return this info. Or you could use a separate library entirely for MP4 files, like MP4v2. (Note that you could also implement an MP4 demux filter with MP4v2, if you want to use DirectShow rather than MP4v2 directly.)
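For the Media Foundation route, a minimal sketch of reading the duration (the file path is a placeholder): the source reader exposes the presentation descriptor's MF_PD_DURATION attribute, which is the stream length IMediaDet would otherwise have reported.

#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <cstdio>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    IMFSourceReader* pReader = NULL;
    MFCreateSourceReaderFromURL(L"C:\\clip.mp4", NULL, &pReader);

    // MF_PD_DURATION is reported by the underlying media source in 100-ns units.
    PROPVARIANT var;
    PropVariantInit(&var);
    pReader->GetPresentationAttribute(MF_SOURCE_READER_MEDIASOURCE, MF_PD_DURATION, &var);
    printf("Duration: %.3f s\n", (double)var.uhVal.QuadPart / 10000000.0);

    PropVariantClear(&var);
    pReader->Release();
    MFShutdown();
    CoUninitialize();
    return 0;
}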