I want to delete the audio stream from a video and keep only the video stream. But when I search on Google, I can't find any tutorial except ones about decoding. Is there a way to delete a specific stream from a video?
You cannot directly delete a stream from a file. You can, however, write a new file that contains all but one (or more) of the original file's streams, which can be done without decoding and re-encoding them.
For this purpose, you can use libavformat, which is part of ffmpeg. You first have to demux the video file, which gives you packets that contain the encoded data of each stream inside the container. Then you write (mux) these packets into a new video container. Take a look at the remuxing example for details.
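As a rough illustration, here is a condensed sketch along the lines of that remuxing example, copying everything except audio streams. It assumes a reasonably recent ffmpeg (where av_register_all() is no longer needed); the file names are placeholders and error handling is omitted:

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    }
    #include <vector>

    int main()
    {
        // Demux side: open the input and read its stream layout.
        AVFormatContext* in_ctx = nullptr;
        avformat_open_input(&in_ctx, "in.mp4", nullptr, nullptr);
        avformat_find_stream_info(in_ctx, nullptr);

        // Mux side: create the output container.
        AVFormatContext* out_ctx = nullptr;
        avformat_alloc_output_context2(&out_ctx, nullptr, nullptr, "out.mp4");

        // Copy every non-audio stream's parameters; -1 marks dropped streams.
        std::vector<int> stream_map(in_ctx->nb_streams, -1);
        for (unsigned i = 0; i < in_ctx->nb_streams; i++) {
            AVStream* in_st = in_ctx->streams[i];
            if (in_st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
                continue;                              // drop audio
            AVStream* out_st = avformat_new_stream(out_ctx, nullptr);
            avcodec_parameters_copy(out_st->codecpar, in_st->codecpar);
            out_st->codecpar->codec_tag = 0;
            stream_map[i] = out_st->index;
        }

        avio_open(&out_ctx->pb, "out.mp4", AVIO_FLAG_WRITE);
        avformat_write_header(out_ctx, nullptr);

        // Pass packets through without decoding, fixing up timestamps.
        AVPacket pkt;
        while (av_read_frame(in_ctx, &pkt) >= 0) {
            if (stream_map[pkt.stream_index] < 0) {    // packet of a dropped stream
                av_packet_unref(&pkt);
                continue;
            }
            AVStream* in_st  = in_ctx->streams[pkt.stream_index];
            pkt.stream_index = stream_map[pkt.stream_index];
            AVStream* out_st = out_ctx->streams[pkt.stream_index];
            av_packet_rescale_ts(&pkt, in_st->time_base, out_st->time_base);
            pkt.pos = -1;
            av_interleaved_write_frame(out_ctx, &pkt); // mux into the new file
        }

        av_write_trailer(out_ctx);
        avformat_close_input(&in_ctx);
        avio_closep(&out_ctx->pb);
        avformat_free_context(out_ctx);
        return 0;
    }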
Note, however, that you can get the same result by calling the ffmpeg program and passing it the appropriate parameters, which is probably easier.
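For example, the following command keeps the video as-is and drops all audio; -c copy means stream copy (no re-encoding) and -an disables audio:

    ffmpeg -i input.mp4 -c copy -an output.mp4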
I was able to successfully encode an MP4 file containing H.264-encoded video only (using the IMFSinkWriter interface). Now I want to add an audio stream to it.
Whenever I try to create a sink writer for an audio using:
MFCreateSinkWriterFromURL(filePath, null, null, &pSinkWriter)
it deletes the previous file and writes only the audio (well, according to this link, that is expected).
So my question is: how to add an audio stream to an existing file which contains only video stream?
Or, if I have raw data for both audio and video, how do I encode both of them into a single media file? (I suppose I have to do something called multiplexing; if so, can someone provide me with helpful references?)
The Sink Writer API creates a media file from scratch, from the moment you call IMFSinkWriter::BeginWriting to final completion when you call IMFSinkWriter::Finalize. You don't add new streams to a finalized file (well, you can do it, but it works differently - see the last paragraph below).
To create a media file with both video and audio, you need to add two streams before you begin: two calls to IMFSinkWriter::AddStream, then two calls to IMFSinkWriter::SetInputMediaType, then you start writing with IMFSinkWriter::BeginWriting and feed it both video and audio data via IMFSinkWriter::WriteSample, providing the respective stream index.
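Here is a minimal sketch of that call sequence. The output subtypes (H.264, AAC) match this question, but real output types need more attributes (frame size, bitrate, sample rate, and so on), and error handling is omitted:

    #include <mfapi.h>
    #include <mfidl.h>
    #include <mfreadwrite.h>

    void WriteVideoAndAudio()
    {
        IMFSinkWriter* pWriter = nullptr;
        MFCreateSinkWriterFromURL(L"output.mp4", nullptr, nullptr, &pWriter);

        // Stream 1: output (encoded) type for the video, H.264.
        IMFMediaType* pVideoOut = nullptr;
        MFCreateMediaType(&pVideoOut);
        pVideoOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
        pVideoOut->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
        DWORD videoStream = 0;
        pWriter->AddStream(pVideoOut, &videoStream);

        // Stream 2: output (encoded) type for the audio, AAC.
        IMFMediaType* pAudioOut = nullptr;
        MFCreateMediaType(&pAudioOut);
        pAudioOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
        pAudioOut->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_AAC);
        DWORD audioStream = 0;
        pWriter->AddStream(pAudioOut, &audioStream);

        // Input (raw) types describe the uncompressed data you will feed in.
        // pWriter->SetInputMediaType(videoStream, pVideoIn, nullptr);
        // pWriter->SetInputMediaType(audioStream, pAudioIn, nullptr);

        pWriter->BeginWriting();

        // Main loop: wrap each raw buffer in an IMFSample and write it to
        // the matching stream index, roughly in timestamp order.
        // pWriter->WriteSample(videoStream, pVideoSample);
        // pWriter->WriteSample(audioStream, pAudioSample);

        pWriter->Finalize();
        pWriter->Release();
    }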
To add a new stream to an already existing file, you need to create a completely new file. One of the options you have is to read the already compressed data from the existing file and write it to the new file using the IMFSinkWriter::WriteSample method, without re-compression. At the same time, the second stream can be written with compression. This way you can create an MP4 file with video and audio by taking the video from the existing file and adding/encoding an additional audio track.
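A rough sketch of that pass-through copy, pulling already-compressed samples with the Source Reader (which by default delivers the stream's native, compressed format) and handing them straight to the Sink Writer; a single video stream is assumed, the second (audio) stream would be added as described above, and error handling is omitted:

    // pWriter is an IMFSinkWriter that has not yet begun writing.
    void CopyVideoStream(IMFSinkWriter* pWriter)
    {
        IMFSourceReader* pReader = nullptr;
        MFCreateSourceReaderFromURL(L"existing.mp4", nullptr, &pReader);

        // Use the stream's native (compressed) type as both the output and
        // input type, so samples pass through without re-compression.
        IMFMediaType* pNativeType = nullptr;
        pReader->GetNativeMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                    0, &pNativeType);
        DWORD videoStream = 0;
        pWriter->AddStream(pNativeType, &videoStream);
        pWriter->SetInputMediaType(videoStream, pNativeType, nullptr);
        pWriter->BeginWriting();

        for (;;) {
            DWORD streamFlags = 0;
            LONGLONG timestamp = 0;
            IMFSample* pSample = nullptr;
            pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                0, nullptr, &streamFlags, &timestamp, &pSample);
            if (streamFlags & MF_SOURCE_READER_STREAMFLAG_ENDOFSTREAM)
                break;
            if (pSample) {
                pWriter->WriteSample(videoStream, pSample); // no re-compression
                pSample->Release();
            }
        }
        pWriter->Finalize();
        pReader->Release();
    }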
I am currently working on a student project; we have to create a live streaming service for videos with these constraints:
We capture the video from the Webcam with OpenCV
We want to stream the video while it's recorded
We have a file capture.avi that is saved to the computer, and while this file is being saved, we want to stream it.
Currently, we have no idea how to do it; we don't know whether the file transferred from A to B will be openable on B (via VLC, for example), and whether we can avoid interruptions.
We plan to use RTSP for the network protocol. We code everything in C++.
Here are the questions:
Does RTSP take care of streaming a file that is still being written?
What source format should we use? Should we stream the frames captured with OpenCV from A to B (so that B has to use OpenCV to turn the frames back into a video), or should we let OpenCV create a video file on A and stream that file from A to B?
Thank you!
I don't believe it is safe to do so; what you need is two buffers.
The first would allow whatever library you want to use to write your recorded video to your local file system.
The latter would allow your video to be streamed over your network.
Both should share the same context, and therefore the same data, which would manage the synchronization of the two buffers.
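One possible shape of that arrangement, assuming a single capture thread fans each frame out to a file-writer thread and a streamer thread (Frame is a placeholder for your actual frame type):

    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <vector>

    using Frame = std::vector<unsigned char>; // placeholder frame type

    // Thread-safe FIFO shared between the capture thread and one consumer.
    struct FrameQueue {
        std::deque<Frame> frames;
        std::mutex mtx;
        std::condition_variable cv;

        void push(Frame f) {
            {
                std::lock_guard<std::mutex> lock(mtx);
                frames.push_back(std::move(f));
            }
            cv.notify_one();
        }
        Frame pop() { // blocks until a frame is available
            std::unique_lock<std::mutex> lock(mtx);
            cv.wait(lock, [this] { return !frames.empty(); });
            Frame f = std::move(frames.front());
            frames.pop_front();
            return f;
        }
    };

    FrameQueue fileQueue, netQueue;

    // Called by the capture thread for every frame grabbed from OpenCV.
    void onCapturedFrame(const Frame& f) {
        fileQueue.push(f); // drained by the thread writing capture.avi
        netQueue.push(f);  // drained by the thread feeding the RTSP stream
    }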
I'm trying to use FFmpeg to create a video. So far I've been playing with a multiplexing example:
http://ffmpeg.org/doxygen/trunk/muxing_8c-source.html, and I'm able to create a compressed video from an already existing video.
Because my program is going to run on an embedded platform, I would like to use some custom code (generated by a colleague) to compress the video data and place it into the video file.
So I'm looking for a way to create a video file in C/C++ using ffmpeg in which I have full control over the compression part (basically, to keep ffmpeg from doing the compression for me and insert my own code instead).
To clarify, I'm planning to use this to save footage from an intelligent camera into a compressed H.264 MPEG-4 file.
You could pipe the output with -vcodec rawvideo to your custom program, or wrap your compressor as a codec and have ffmpeg handle it.
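For example, something along these lines; the pixel format is a placeholder for whatever your encoder expects, and your_encoder stands in for your colleague's compressor reading raw frames from stdin:

    ffmpeg -i input.avi -f rawvideo -pix_fmt yuv420p - | your_encoder > output.h264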
By the way, ffmpeg was superseded by avconv. ffmpeg only exists for backwards compatibility now.
Edit: apparently avconv is a newer fork of ffmpeg and seems to have more support. Either way, the options are almost the same.
We have a requirement to let users record a video of our 3D application. I can already grab the individual rendered frames, so this question is specifically about how to write frames into a video file.
I don't think writing each frame as a separate file and post-processing is a workable option.
I can look at options either to record to a simple video file for later optimising/encoding, or to write directly to a sensibly encoded format.
FFmpeg was suggested in another post, but it looks a bit daunting to me. Is it the best option? If not, what can be suggested? We can work with LGPL but not full GPL.
We're working on Windows (Win32, not MFC) in C++. Sample/pseudo code with your recommended library is very much appreciated... basically I'm after how to do 3 functions:
startRecording() does whatever initialization is needed
recordFrame() takes pointer to frame data and encodes it, ideally with timing data
endRecording() finalizes the video file, shuts down video system, etc
Check out the sources to Taksi on SourceForge: http://taksi.sourceforge.net/
You need 2 things:
1. A codec to compress the frames.
2. A container file format, like AVI or MPG.
Taksi uses the old Video for Windows API and AVI, not the newer COM APIs, but it still might work for you.
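For what it's worth, here is a rough sketch of the three functions from the question on top of that Video for Windows AVIFile API, writing uncompressed 24-bit BGR frames at a fixed frame rate for later encoding. The file name is a placeholder, the width is assumed to be a multiple of 4 so rows need no padding, error handling is omitted, and you must link with vfw32.lib:

    #include <windows.h>
    #include <vfw.h>

    static PAVIFILE         g_file   = nullptr;
    static PAVISTREAM       g_stream = nullptr;
    static LONG             g_frame  = 0;
    static BITMAPINFOHEADER g_fmt    = {};

    void startRecording(int width, int height, int fps)
    {
        AVIFileInit();
        AVIFileOpen(&g_file, TEXT("capture.avi"), OF_CREATE | OF_WRITE, nullptr);

        g_fmt.biSize        = sizeof(g_fmt);
        g_fmt.biWidth       = width;
        g_fmt.biHeight      = height;      // positive height = bottom-up DIB
        g_fmt.biPlanes      = 1;
        g_fmt.biBitCount    = 24;          // 8-bit BGR
        g_fmt.biCompression = BI_RGB;      // uncompressed
        g_fmt.biSizeImage   = width * height * 3;

        AVISTREAMINFO info = {};
        info.fccType = streamtypeVIDEO;
        info.dwScale = 1;
        info.dwRate  = fps;                // dwRate / dwScale = frames per second
        info.dwSuggestedBufferSize = g_fmt.biSizeImage;
        SetRect(&info.rcFrame, 0, 0, width, height);

        AVIFileCreateStream(g_file, &g_stream, &info);
        AVIStreamSetFormat(g_stream, 0, &g_fmt, sizeof(g_fmt));
        g_frame = 0;
    }

    void recordFrame(void* pixels)         // one BGR frame of biSizeImage bytes
    {
        AVIStreamWrite(g_stream, g_frame++, 1, pixels, g_fmt.biSizeImage,
                       AVIIF_KEYFRAME, nullptr, nullptr);
    }

    void endRecording()
    {
        AVIStreamRelease(g_stream);
        AVIFileRelease(g_file);
        AVIFileExit();
    }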
I'm working on an app in which we use IMediaDet to get stream lengths. Now we're starting to work with MP4 containers. The problem is, when I try IMediaDet::put_Filename() with the MP4 file, I get HRESULT = -2147024770 (ERROR_MOD_NOT_FOUND). Using a commercial MP4 demuxer, I can see the video stream uses MPEG-2 encoding.
My questions: How can I get the length of a stream inside an MP4 container? Is there a way to make IMediaDet accept these files? Is there a way to specify which demuxer IMediaDet should use?
Any help would be much appreciated.
Thanks.
Unfortunately, DirectShow does not contain an MP4 parser, even in Windows 7. In Win7, the MP4 functionality was added to Media Foundation instead.
So you have a few options. You can buy or build a DirectShow filter that implements an MP4 demux and associate it with the "mp4" file extension, which should allow IMediaDet to properly demux the file. Or you can use Media Foundation, which should be able to return this info. Or you could use a separate library entirely for MP4 files, like MP4v2. (Note that you could also implement an MP4 demux filter with MP4v2, if you want to use DirectShow rather than MP4v2 directly.)
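If you go the Media Foundation route, getting the duration is fairly direct. A sketch, assuming MFStartup(MF_VERSION) has already been called and with error handling omitted; MF_PD_DURATION is reported in 100-nanosecond units:

    #include <mfapi.h>
    #include <mfidl.h>
    #include <mfreadwrite.h>
    #include <propvarutil.h>

    LONGLONG GetDurationHns(LPCWSTR path)
    {
        IMFSourceReader* pReader = nullptr;
        MFCreateSourceReaderFromURL(path, nullptr, &pReader);

        // Ask the underlying media source for the presentation duration.
        PROPVARIANT var;
        PropVariantInit(&var);
        pReader->GetPresentationAttribute((DWORD)MF_SOURCE_READER_MEDIASOURCE,
                                          MF_PD_DURATION, &var);
        LONGLONG durationHns = (LONGLONG)var.uhVal.QuadPart;
        PropVariantClear(&var);
        pReader->Release();
        return durationHns; // divide by 10,000,000 for seconds
    }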