My application is transforming an AVI video file into another AVI file. I use
the OpenCV library. Unfortunately, videos created with OpenCV have no sound, as the library does not support audio.
Is there any easy way to copy the audio track from one video file to another? Maybe FFmpeg?
My application is written in Visual C++.
You can use FFmpeg. The easiest way would be to just use the command-line tool to extract and reassemble the streams. If you need your application to do it by itself, looking at the FFmpeg sources to see how the command-line tool does it should help.
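For example (the file names here are placeholders, and very old FFmpeg builds use a different -map syntax), a single stream-copy command can remux the video from your OpenCV output together with the audio from the original file, without re-encoding either:

    ffmpeg -i processed.avi -i original.avi -map 0:v -map 1:a -c copy output.avi

Older builds spell the copy options as -vcodec copy -acodec copy instead of -c copy.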
Alternatively, as you mention VC++, why not use DirectShow? It should not be too difficult to sink the audio into a file for extraction and later sink the video/audio mix into a file for composition.
I have a Visual Studio C++ project that renders information from custom telemetry hardware. I need to be able to render that information over video that was shot during the telemetry-gathering process. I've had suggestions to use ffmpeg to extract the video to individual frames; that would work for short videos, but longer ones would require ~2 TB of drive space. How do I read and write .mp4s, frame by frame, in VSC++?
ffmpeg has a libavcodec component that is supposed to do the job, but the instructions for building and incorporating ffmpeg are vague and not recently updated.
How do I pull video frames/audio into a VSC++ application from a file, then write out again to another file?
I'm trying to use FFmpeg to create a video. So far I've been playing with a multiplexing example:
http://ffmpeg.org/doxygen/trunk/muxing_8c-source.html, and I'm able to create a compressed video from an already existing video.
Because my program is going to run on an embedded platform I would like to use some custom code (generated by a colleague) to compress the video data and place it into the video file.
So I'm looking for a way to create a video file in C/C++ using ffmpeg in which I have full control over the compression part (basically to keep ffmpeg from doing the compression for me and to insert my own code instead).
To clarify: I'm planning to use this to save footage from an intelligent camera into a compressed H.264 MPEG-4 file.
You could pipe the output with -vcodec rawvideo to your custom program, or wrap your compressor as an ffmpeg codec and have ffmpeg handle it.
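If you take the piping route, the consuming side only has to read fixed-size frames from stdin. A minimal sketch, assuming the producer is something like ffmpeg -i input.avi -f rawvideo -pix_fmt yuv420p - | your_encoder.exe, and where encode_frame() is just a placeholder for your colleague's compressor:

    #include <cstdio>
    #include <vector>
    #ifdef _WIN32
    #include <io.h>
    #include <fcntl.h>
    #endif

    int main()
    {
        const int width = 640, height = 480;             // must match the piped stream
        const size_t frameSize = width * height * 3 / 2; // YUV420p: 1.5 bytes per pixel
        std::vector<unsigned char> frame(frameSize);

    #ifdef _WIN32
        _setmode(_fileno(stdin), _O_BINARY);             // stop Windows mangling the byte stream
    #endif

        while (fread(frame.data(), 1, frameSize, stdin) == frameSize) {
            // encode_frame(frame.data(), width, height); // hand the raw frame to your encoder
        }
        return 0;
    }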
By the way, ffmpeg was superseded by avconv. ffmpeg only exists for backwards compatibility now.
Edit: apparently avconv is a newer fork of ffmpeg, and seems to have more support. Either way, the options are almost the same.
We have a requirement to let users record a video of our 3D application. I can already grab the individual rendered frames, so this question is specifically about how to write frames into a video file.
I don't think writing each frame as a separate file and post-processing is a workable option.
I can look at options to record to a simple video file for later optimising/encoding, or to write directly to a sensibly encoded format.
FFmpeg was suggested in another post, but it looks a bit daunting to me. Is it the best option? If not, what can be suggested? We can work with LGPL but not full GPL.
We're working on Windows (Win32, not MFC) in C++. Sample/pseudo code with your recommended library is very much appreciated... basically I'm after how to do three functions:
startRecording() does whatever initialization is needed
recordFrame() takes pointer to frame data and encodes it, ideally with timing data
endRecording() finalizes the video file, shuts down the video system, etc.
Check out the sources to Taksi on SourceForge: http://taksi.sourceforge.net/
You need two things:
1. A codec to compress the frames.
2. A container file format, like AVI or MPG.
Taksi uses the old Video for Windows API and AVI, not the newer COM APIs, but it still might work for you.
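If you go the same route, a rough sketch of your three functions on top of the Video for Windows AVI API could look like the following. The function names come from your question; the uncompressed, bottom-up, 32-bit RGB frame layout and the complete lack of error handling are my simplifications (this is the "simple file for later encoding" option, not a finished recorder):

    #include <windows.h>
    #include <vfw.h>
    #pragma comment(lib, "vfw32.lib")

    static PAVIFILE   g_file   = NULL;
    static PAVISTREAM g_stream = NULL;
    static LONG       g_frame  = 0;
    static int        g_width  = 0, g_height = 0;

    void startRecording(const char* path, int width, int height, int fps)
    {
        g_width = width; g_height = height; g_frame = 0;
        AVIFileInit();
        AVIFileOpenA(&g_file, path, OF_WRITE | OF_CREATE, NULL);

        AVISTREAMINFOA si = {0};
        si.fccType = streamtypeVIDEO;
        si.dwScale = 1;                                 // frame rate = dwRate / dwScale
        si.dwRate  = fps;
        si.dwSuggestedBufferSize = width * height * 4;
        AVIFileCreateStreamA(g_file, &g_stream, &si);

        BITMAPINFOHEADER bih = {0};                      // uncompressed 32-bit RGB, bottom-up
        bih.biSize        = sizeof(bih);
        bih.biWidth       = width;
        bih.biHeight      = height;
        bih.biPlanes      = 1;
        bih.biBitCount    = 32;
        bih.biCompression = BI_RGB;
        AVIStreamSetFormat(g_stream, 0, &bih, sizeof(bih));
    }

    void recordFrame(const void* rgbaPixels)
    {
        AVIStreamWrite(g_stream, g_frame++, 1, (LPVOID)rgbaPixels,
                       g_width * g_height * 4, AVIIF_KEYFRAME, NULL, NULL);
    }

    void endRecording()
    {
        if (g_stream) AVIStreamRelease(g_stream);
        if (g_file)   AVIFileRelease(g_file);
        AVIFileExit();
    }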
How do I save an OpenCV IplImage as a Flash file? Maybe there is a library that does that?
If you mean storing your output as Flash video (.flv), just use ffmpeg (libavcodec/libavformat). It is cross-platform, supports the .flv format (besides a massive amount of others), and should be quite easy to use. You can embed audio too.
As a note: ffmpeg is partially included in OpenCV (depending on your build) as a video coder/decoder; I don't know, though, whether you can force it to write .flv (by choosing the right codec string) from within OpenCV. Anyway, it's not too hard to convert an IplImage to an ffmpeg buffer and store it from there.
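If you want to try the built-in OpenCV writer first, a minimal sketch with the 2.1-era C API would look something like this; whether the FLV1 fourcc is actually accepted depends on how your OpenCV build was compiled against ffmpeg, so treat it as an experiment rather than a guarantee:

    #include <opencv/highgui.h>   // header path depends on your OpenCV install

    // Try to write IplImage frames straight to an .flv through OpenCV's own writer.
    void writeFlv(IplImage** frames, int count, double fps)
    {
        CvVideoWriter* writer = cvCreateVideoWriter(
            "out.flv", CV_FOURCC('F', 'L', 'V', '1'), fps,
            cvSize(frames[0]->width, frames[0]->height), 1);

        for (int i = 0; i < count; ++i)
            cvWriteFrame(writer, frames[i]);

        cvReleaseVideoWriter(&writer);
    }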
A problem you might have is that the latest OpenCV (2.1) has trouble building with ffmpeg support, or is built against some ffmpeg version you don't want. But as mentioned above, you don't need to use ffmpeg via the OpenCV 2.1 API, since you can use the ffmpeg API directly.
Look for the examples in libavcodec on how to write a video, and check the OpenCV source on how to convert from IplImage to AVPacket/AVFrame. I've done this before and it was quite easy to do.
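For the IplImage-to-AVFrame step, a minimal sketch using libswscale is below. It assumes a 3-channel BGR IplImage and converts it to a YUV420P AVFrame; the pixel-format constants and allocation functions have been renamed across ffmpeg versions (PIX_FMT_* vs AV_PIX_FMT_*, avcodec_alloc_frame vs av_frame_alloc), so adjust to the version you build against:

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }
    #include <opencv/cxcore.h>

    // Convert a packed BGR IplImage into a freshly allocated YUV420P AVFrame.
    AVFrame* iplToYuv420(const IplImage* img)
    {
        AVFrame* frame = av_frame_alloc();
        frame->format = AV_PIX_FMT_YUV420P;
        frame->width  = img->width;
        frame->height = img->height;
        av_image_alloc(frame->data, frame->linesize,
                       img->width, img->height, AV_PIX_FMT_YUV420P, 32);

        // OpenCV stores IplImage data as interleaved BGR with widthStep bytes per row.
        const uint8_t* srcData[4]     = { (const uint8_t*)img->imageData, NULL, NULL, NULL };
        int            srcLinesize[4] = { img->widthStep, 0, 0, 0 };

        SwsContext* sws = sws_getContext(img->width, img->height, AV_PIX_FMT_BGR24,
                                         img->width, img->height, AV_PIX_FMT_YUV420P,
                                         SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(sws, srcData, srcLinesize, 0, img->height,
                  frame->data, frame->linesize);
        sws_freeContext(sws);
        return frame;
    }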
I don't know Flash much, but you can manipulate the data pointer of an IplImage (named char *imageData). The data is laid out as between 1 and 4 planes, in a format you surely know. Try writing your Flash file from this data pointer.
lital, well, to my knowledge OpenCV doesn't support creating Flash.
My solution for such a problem is the Red5 server, and as their page says:
Red5 is an Open Source Flash Server written in Java that supports:
Streaming Video (FLV, F4V, MP4)
....
You could dump your images in a sequence of files, say img00000.ppm, img00001.ppm, ..., and then delegate the video encoding to MEncoder, which, according to the docs, supports FLV.
That's what we usually do in order to prepare videos such as this one.
I'm working on an app in which we use IMediaDet to get stream lengths. Now we're starting to work with MP4 containers. The problem is, when I try IMediaDet::put_Filename() with the MP4 file, I get HRESULT = -2147024770 (ERROR_MOD_NOT_FOUND). Using a commercial MP4 demuxer, I can see the video stream uses MPEG-2 encoding.
My questions: how do I get the stream length of a stream inside an MP4 container? Is there a way to make IMediaDet accept these files? Is there a way to specify which demuxer IMediaDet should use?
Any help would be much appreciated.
Thanks.
Unfortunately, DirectShow does not contain an MP4 parser, even in Windows 7. In Win7, the MP4 functionality was added to Media Foundation.
So you have a few options. You can buy or build a DirectShow filter that implements an MP4 demux and associate it with the "mp4" file extension, which should allow IMediaDet to properly demux the file. Or you can use Media Foundation, which should be able to return this info. Or you could use a separate library entirely for MP4 files, like MP4v2. (Note that you could also implement an MP4 demux filter with MP4v2, if you want to use DirectShow rather than MP4v2 directly.)
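If the MP4v2 route is enough for you, getting a duration is only a few calls. A minimal sketch (the header name and the exact MP4Read/MP4Close signatures vary slightly between mp4v2 releases, so check the version you link against):

    #include <mp4v2/mp4v2.h>   // older releases ship the header as mp4.h

    // Return the movie duration in seconds, or a negative value on failure.
    double getMp4DurationSeconds(const char* path)
    {
        MP4FileHandle file = MP4Read(path);
        if (file == MP4_INVALID_FILE_HANDLE)
            return -1.0;

        MP4Duration duration  = MP4GetDuration(file);   // in movie-timescale units
        uint32_t    timescale = MP4GetTimeScale(file);  // timescale units per second
        MP4Close(file);

        return timescale ? (double)duration / (double)timescale : -1.0;
    }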