I have an application (Qt C++) that reads data from a USB device and decodes that data into 24-bit RGB pixels, which are stored in a uchar array.
The frame rate is ~10 FPS and the frame size is 128x4096.
The question is: how do I encode these frames into VP8 or H.264 video in real time?
No external processes are allowed, everything needs to run inside my application.
FFmpeg is an option, but how do I include it in my project and use it? The documentation is rather bad, to say the least. x264 could also be an option, but the same question applies as for FFmpeg. A commercial x264 license is also quite expensive: $1 per unit with a minimum of 10,000 units.
A simple guide would be helpful, but I doubt one exists.
The application should run on Windows and Linux.
The problem with the VP8 SDK is that the examples only encode to IVF, a format that appears to have been shut down by Microsoft due to a security flaw (a buffer overflow). It's pretty hard to even get a VP8 project set up when you can't check the results. The SDK at least uses a BSD license and is supposedly unencumbered by patents.
The VP8 SDK has some routines for converting formats, but they are buried in the source tree.
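If you end up rolling the conversion yourself instead, a naive RGB24-to-I420 sketch (standard integer BT.601 coefficients, assumes even width/height) looks roughly like this:

    // Naive RGB24 -> I420 conversion; the SDK's routines are faster,
    // this just shows the plane layout the encoder expects.
    static void rgb24ToI420(const unsigned char *rgb, int w, int h,
                            unsigned char *yp, unsigned char *up, unsigned char *vp)
    {
        for (int j = 0; j < h; ++j) {
            for (int i = 0; i < w; ++i) {
                const unsigned char *p = rgb + 3 * (j * w + i);
                const int r = p[0], g = p[1], b = p[2];
                yp[j * w + i] =
                    (unsigned char)((( 66 * r + 129 * g +  25 * b + 128) >> 8) + 16);
                if ((i & 1) == 0 && (j & 1) == 0) {   // 2x2 chroma subsampling
                    const int c = (j / 2) * (w / 2) + (i / 2);
                    up[c] = (unsigned char)(((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128);
                    vp[c] = (unsigned char)(((112 * r -  94 * g -  18 * b + 128) >> 8) + 128);
                }
            }
        }
    }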
An option not mentioned is the Intel Media SDK, but that locks you to Windows.
There are also Theora and Dirac.
x264 is an encoder, but a commercial license for it would be expensive.
GPLv2 source code is not "free", no matter what they try to get you to believe.
There is also a project called "Revel - the Really Easy Video Encoding Library", which is a path to getting MPEG-4 Part 2 files encoded. H.264 is MPEG-4 Part 10, also called AVC. Revel is also GPL'd.
FFmpeg is a catch-all library that wraps the various encoders/decoders. If you use the x264 encoder with it, the combination becomes GPLv2.
The VP8 SDK has documentation and even some sample code.
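To give a feel for the API, here is a minimal sketch of the libvpx encode loop (error checks omitted; the bitrate is a guess, and writing the container is left to you):

    #include <vpx/vpx_encoder.h>
    #include <vpx/vp8cx.h>

    // Encode a stream of pre-converted I420 frames with VP8.
    void encodeAll()
    {
        vpx_codec_ctx_t codec;
        vpx_codec_enc_cfg_t cfg;
        vpx_codec_enc_config_default(vpx_codec_vp8_cx(), &cfg, 0);
        cfg.g_w = 128;
        cfg.g_h = 4096;
        cfg.g_timebase.num = 1;
        cfg.g_timebase.den = 10;               // ~10 FPS
        cfg.rc_target_bitrate = 2000;          // kbit/s, tune to taste
        vpx_codec_enc_init(&codec, vpx_codec_vp8_cx(), &cfg, 0);

        vpx_image_t img;
        vpx_img_alloc(&img, VPX_IMG_FMT_I420, cfg.g_w, cfg.g_h, 1);

        for (vpx_codec_pts_t pts = 0; /* while frames arrive */; ++pts) {
            // fill img.planes[VPX_PLANE_Y/U/V] from your converted data, then:
            vpx_codec_encode(&codec, &img, pts, 1, 0, VPX_DL_REALTIME);
            vpx_codec_iter_t iter = NULL;
            const vpx_codec_cx_pkt_t *pkt;
            while ((pkt = vpx_codec_get_cx_data(&codec, &iter)) != NULL) {
                if (pkt->kind == VPX_CODEC_CX_FRAME_PKT) {
                    // pkt->data.frame.buf / pkt->data.frame.sz is one
                    // compressed frame; write it to IVF, WebM, or your own container
                }
            }
        }
        vpx_img_free(&img);
        vpx_codec_destroy(&codec);
    }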
I'm trying to use FFmpeg to create a video. So far I've been playing with a multiplexing example:
http://ffmpeg.org/doxygen/trunk/muxing_8c-source.html, and I'm able to create a compressed video from an already existing video.
Because my program is going to run on an embedded platform, I would like to use some custom code (generated by a colleague) to compress the video data and place it into the video file.
So I'm looking for a way to create a video file in C/C++ using FFmpeg in which I have full control over the compression part (basically, to stop FFmpeg from doing the compression for me and insert my own code).
To clarify, I'm planning to use this to save film from an intelligent camera into a compressed H.264 MPEG-4 file.
You could pipe the output with -vcodec rawvideo to your custom program, or write your compressor as a codec and have FFmpeg handle it.
By the way, ffmpeg was superseded by avconv; ffmpeg now only exists for backwards compatibility.
Edit: apparently avconv is a newer fork of ffmpeg, and seems to have more support. Either way, the options are almost the same.
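If you would rather stay in-process with libavformat, the mux-only path is roughly this sketch (modern FFmpeg names; the yourEncoder* variables are placeholders for your colleague's code, and H.264-in-MP4 will additionally want SPS/PPS extradata on the stream):

    extern "C" {
    #include <libavformat/avformat.h>
    }

    // Placeholders: the compressed bitstream your own encoder produces.
    extern uint8_t *yourEncoderOutput;
    extern int      yourEncoderOutputSize;

    // Mux pre-compressed H.264 frames into MP4 without re-encoding.
    void muxPrecompressed()
    {
        AVFormatContext *oc = NULL;
        avformat_alloc_output_context2(&oc, NULL, NULL, "out.mp4");
        AVStream *st = avformat_new_stream(oc, NULL);
        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codecpar->codec_id   = AV_CODEC_ID_H264;
        st->codecpar->width      = 640;        // your camera's resolution
        st->codecpar->height     = 480;
        avio_open(&oc->pb, "out.mp4", AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);

        AVRational inputRate = {1, 25};        // one tick per frame at 25 FPS
        AVPacket *pkt = av_packet_alloc();
        for (int64_t i = 0; /* while frames arrive */; ++i) {
            pkt->data = yourEncoderOutput;     // one compressed frame
            pkt->size = yourEncoderOutputSize;
            pkt->stream_index = st->index;
            pkt->pts = pkt->dts = av_rescale_q(i, inputRate, st->time_base);
            av_interleaved_write_frame(oc, pkt);
        }
        av_write_trailer(oc);
        av_packet_free(&pkt);
        avio_closep(&oc->pb);
        avformat_free_context(oc);
    }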
I need to add webcam video capture to a legacy MFC C++ application. The video needs to be saved as MP4. Did a bit of googling but didn't come across anything that looked promising. Any suggestions on the best approach?
EDIT: Windows platform.
EDIT: Must be compatible with XP.
There are a few popular options to choose from:
DirectShow API - it has no stock MPEG-4 compressors for video and audio, nor a stock multiplexer for the .MP4 format, though there is an excellent free multiplexer from GDCL: http://www.gdcl.co.uk/mpeg4/. There is also decent documentation and a lot of samples.
Media Foundation API - it has everything you need (codecs, multiplexer), but only on Windows 7 (and maybe not even all editions).
FFmpeg and libavcodec/libavformat are definitely relevant; however, the H.264 encoder is only available under the GPL license, I'm not sure about the video capture part, and you might have a hard time finding documentation and samples.
I'd say look at OpenCV as a library and hook into its video capture for that aspect. It can write out to MP4, but you'll need a couple of other libraries for handling the output stream (on Linux I'd say ffmpeg and x264); that should get the buffer into the container with a reasonable amount of hassle.
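If you go the OpenCV route, the capture-and-write loop itself is short. A sketch with the newer C++ API names (whether .mp4 output actually works depends on how your OpenCV build was linked):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);                       // first attached webcam
        if (!cap.isOpened()) return 1;
        const int w = (int)cap.get(cv::CAP_PROP_FRAME_WIDTH);
        const int h = (int)cap.get(cv::CAP_PROP_FRAME_HEIGHT);
        cv::VideoWriter out("capture.mp4",
                            cv::VideoWriter::fourcc('m', 'p', '4', 'v'),
                            30.0, cv::Size(w, h));
        if (!out.isOpened()) return 1;                 // codec missing in this build
        cv::Mat frame;
        for (int i = 0; i < 300 && cap.read(frame); ++i)  // ~10 s at 30 FPS
            out.write(frame);
        return 0;                                      // writer finalizes on destruction
    }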
We have a requirement to let users record a video of our 3D application. I can already grab the individual rendered frames, so this question is specifically about how to write frames into a video file.
I don't think writing each frame as a separate file and post-processing is a workable option.
I can look at options to record to a simple video file for later optimising/encoding, or to write directly to a sensibly encoded format.
FFmpeg was suggested in another post, but it looks a bit daunting to me. Is it the best option? If not, what can you suggest? We can work with LGPL but not full GPL.
We're working on Windows (Win32, not MFC) in C++. Sample/pseudo code with your recommended library is very much appreciated... basically I'm after how to do 3 functions (a rough sketch of what I mean follows the list):
startRecording() does whatever initialization is needed
recordFrame() takes pointer to frame data and encodes it, ideally with timing data
endRecording() finalizes the video file, shuts down video system, etc
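To make the shape concrete, here's roughly how those three functions could map onto libavcodec/libavformat (a sketch only: error handling omitted, RGB24 input and an LGPL FFmpeg build assumed, globals just for brevity):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    }

    static AVFormatContext *fmt = NULL;
    static AVCodecContext  *enc = NULL;
    static AVStream        *st  = NULL;
    static SwsContext      *sws = NULL;
    static int64_t          frameIndex = 0;

    void startRecording(const char *path, int w, int h, int fps) {
        avformat_alloc_output_context2(&fmt, NULL, NULL, path);
        const AVCodec *codec = avcodec_find_encoder(fmt->oformat->video_codec);
        st  = avformat_new_stream(fmt, NULL);
        enc = avcodec_alloc_context3(codec);
        enc->width = w;  enc->height = h;
        enc->pix_fmt = AV_PIX_FMT_YUV420P;
        enc->time_base.num = 1;  enc->time_base.den = fps;
        avcodec_open2(enc, codec, NULL);
        avcodec_parameters_from_context(st->codecpar, enc);
        avio_open(&fmt->pb, path, AVIO_FLAG_WRITE);
        avformat_write_header(fmt, NULL);
        sws = sws_getContext(w, h, AV_PIX_FMT_RGB24, w, h, AV_PIX_FMT_YUV420P,
                             SWS_BILINEAR, NULL, NULL, NULL);
    }

    static void drainPackets(void) {
        AVPacket *pkt = av_packet_alloc();
        while (avcodec_receive_packet(enc, pkt) == 0) {
            av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
            pkt->stream_index = st->index;
            av_interleaved_write_frame(fmt, pkt);
        }
        av_packet_free(&pkt);
    }

    void recordFrame(const uint8_t *rgb) {
        AVFrame *f = av_frame_alloc();
        f->format = enc->pix_fmt;  f->width = enc->width;  f->height = enc->height;
        av_frame_get_buffer(f, 0);
        const uint8_t *src[1] = { rgb };
        const int stride[1]   = { 3 * enc->width };
        sws_scale(sws, src, stride, 0, enc->height, f->data, f->linesize);
        f->pts = frameIndex++;                 // timing, in enc->time_base units
        avcodec_send_frame(enc, f);
        drainPackets();
        av_frame_free(&f);
    }

    void endRecording(void) {
        avcodec_send_frame(enc, NULL);         // flush the encoder
        drainPackets();
        av_write_trailer(fmt);
        avio_closep(&fmt->pb);
        sws_freeContext(sws);
        avcodec_free_context(&enc);
        avformat_free_context(fmt);
    }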
Check out the sources to Taksi on sourceforge. http://taksi.sourceforge.net/
You need 2 things:
1. A codec to compress the frames.
2. A container file format, like AVI or MPG.
Taksi uses the old Video for Windows API and AVI, not the newer COM APIs, but it still might work for you.
How do I save an OpenCV IplImage as a Flash file? Maybe there is a library that does that?
If you mean storing your output as Flash video (.flv), just use ffmpeg (libavcodec/libavformat). It is cross-platform, supports the .flv format (besides a massive amount of others), and should be quite easy to use. You can embed audio too.
As a note: ffmpeg is partially included in OpenCV (depending on your build) as a video coder/decoder, though I don't know if you can force it to write .flv (by choosing the right codec string) from within OpenCV. Anyway, it's not too hard to convert an IplImage to an ffmpeg buffer and store it from there.
A problem you might have is that the latest OpenCV (2.1) has trouble building with ffmpeg support, or is built against some ffmpeg version you don't want. But as mentioned above, you don't need to use ffmpeg via the OpenCV 2.1 API, since you can use the ffmpeg API directly.
Look at the examples in libavcodec on how to write a video, and check the OpenCV source on how to convert from IplImage to AVPacket/AVFrame. I've done this before and it was quite easy to do.
I don't know Flash much, but you can manipulate the data pointer of an IplImage (named char *imageData). The data is accessible as between 1 and 4 bit planes, in a format you surely know. Try writing your Flash file from this data pointer.
lital, well, to my knowledge OpenCV doesn't support creating Flash.
My solution for such a problem is the Red5 Server, and as their page says:
Red5 is an Open Source Flash Server written in Java that supports:
Streaming Video (FLV, F4V, MP4)
....
You could dump your images into a sequence of files, say img00000.ppm, img00001.ppm, ..., and then delegate the video encoding to MEncoder, which, according to the docs, supports FLV.
That's what we usually do in order to prepare videos such as this one.
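For the dumping step, a sketch of writing one IplImage as a binary PPM (assumes an 8-bit, 3-channel BGR image; the helper name is made up):

    #include <cstdio>
    #include <opencv/cv.h>

    // Dump one frame as img00000.ppm, img00001.ppm, ...
    void saveFramePPM(const IplImage *img, int index) {
        char name[64];
        std::sprintf(name, "img%05d.ppm", index);
        std::FILE *f = std::fopen(name, "wb");
        std::fprintf(f, "P6\n%d %d\n255\n", img->width, img->height);
        for (int y = 0; y < img->height; ++y) {
            const unsigned char *row =
                (const unsigned char *)img->imageData + y * img->widthStep;
            for (int x = 0; x < img->width; ++x) {
                // OpenCV stores BGR; PPM wants RGB
                const unsigned char rgb[3] = { row[3*x + 2], row[3*x + 1], row[3*x] };
                std::fwrite(rgb, 1, 3, f);
            }
        }
        std::fclose(f);
    }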
I'm writing a cross-platform Qt-based program that from time to time needs to play back audio supplied externally (outside my control) as raw PCM. The exact format is 16 bit little-endian PCM at various common sample rates.
My first obvious idea was to use Qt's own Phonon for audio playback, but there are two problems with this approach:
As far as I can see, Phonon does not support headerless PCM data. I would have to hack around this and fake a WAV header every time playback starts. Not a showstopper, though.
More importantly: There doesn't seem to be any way to control how Phonon (and its backends such as xine, PulseAudio, DirectX, whatever) prebuffers. Its default behaviour seems to be something like 5 seconds of prebuffering, which is way too much for me. I'd prefer about 1 second, and I'd definitely like to be able to control this!
I'm currently looking at Gstreamer, FFMPEG and libvlc. Any thoughts? Since my audio is in a very simple format and I don't need to do fancy mixing stuff (just volume control), I'd like a simple, free (as in freedom), cross-platform and widely available library.
Qt 4.6 has the new QtMultimedia module.
https://doc.qt.io/archives/4.6/qtmultimedia.html
The QAudioOutput class would seem to do what you want - it just plays raw PCM data.
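A push-mode sketch with the Qt 4.6 names (the write at the end stands in for wherever your PCM chunks arrive; the buffer size is the prebuffering knob you were missing in Phonon):

    #include <QtMultimedia/QAudioOutput>
    #include <QtMultimedia/QAudioFormat>
    #include <QIODevice>

    QAudioOutput *makePcmOutput(int sampleRate)
    {
        QAudioFormat fmt;
        fmt.setFrequency(sampleRate);             // Qt 4.x name for the sample rate
        fmt.setChannels(2);
        fmt.setSampleSize(16);                    // 16-bit samples
        fmt.setSampleType(QAudioFormat::SignedInt);
        fmt.setByteOrder(QAudioFormat::LittleEndian);
        fmt.setCodec("audio/pcm");                // headerless PCM, no WAV faking

        QAudioOutput *out = new QAudioOutput(fmt);
        out->setBufferSize(sampleRate * 2 * 2);   // ~1 s of 16-bit stereo
        return out;
    }

    // Usage, push mode:
    //   QIODevice *dev = out->start();
    //   dev->write(pcmData, pcmSize);            // call as each chunk arrives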
FFmpeg, libvlc and GStreamer have abilities beyond raw PCM, such as codec support.
For your purposes, SDL (example 1, example 2), OpenAL, or QAudioOutput are sufficient. SDL is probably the most popular option.
Also, why do you want to control buffering? Buffering a lot means fewer interrupts and lower power consumption.
Have you looked at OpenAL?