Is this possible? Has anyone tried to do real-time recording of audio and video (of the screen) with ffmpeg? I have read everything Google can find about ffmpeg on the net. The recording variant I tried loads the CPU to 100%, but it still can't convert frames at a speed that keeps up with how fast the frames are captured; the audio comes out fine, but the video drops frames.
Recording audio/video of the screen is possible with ffmpeg; people do this for screencasting. Performance depends on the hardware in use, the codecs used and various other factors.
See this post (or this one) for some further advice and command line use.
This pretty much depends on the codec used, the frame size/complexity and obviously the capabilities of the computer doing the compression. You can try a low complexity codec like MJPEG, which might improve your experience.
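Not tested on your setup, but as a rough starting point on Linux something like the following captures the screen plus ALSA audio straight to MJPEG (the display name, size, frame rate, audio device and output file are placeholders; on Windows you would use a different grab device):

    ffmpeg -f x11grab -s 1280x720 -r 15 -i :0.0 -f alsa -i default \
           -vcodec mjpeg -qscale 3 -acodec pcm_s16le out.avi

MJPEG compresses each frame independently, so it is cheap to encode but produces large files; if the disk can keep up it often avoids the dropped-frame problem.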
I am a student currently working on my final project. Our project focuses on research into a new type of network coding. My task now is to do a real-time video transmission to test the network coding. I have learned something of ffmpeg and OpenCV and have finished a C++ program which can divide the video into frames and send it frame by frame. However, done this way, the amount of data transmitted (the frames) is much larger than the original video file. My professor advised me to find the keyframes and the inter-frame differences of the video (MJPEG format), so that I transmit only the keyframes and inter-frame diffs instead of all the frames with their large amount of redundancy, and thereby reduce the transmitted data. I have no idea how to do this in C++ with ffmpeg or OpenCV. Can anyone give any advice?
For my old program, please refer here: C++ Video streaming and transmission
I would recommend against using ffmpeg/libav* at all, and instead recommend using libx264 directly. By using x264 you can have greater control over NALU slice sizes, as well as lower encoder latency by utilizing its callbacks.
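A very rough sketch of that route, assuming raw I420 frames of known size (the preset/tune values and the send_nal() callback are illustrative, not from the answer above):

    #include <stdint.h>
    extern "C" {
    #include <x264.h>
    }

    // Minimal low-latency x264 encode loop (sketch only, no error handling).
    // send_nal() is a placeholder for whatever network send you already have.
    void encode_and_send(int width, int height, int nframes,
                         void (*send_nal)(const uint8_t*, int))
    {
        x264_param_t param;
        x264_param_default_preset(&param, "veryfast", "zerolatency"); // no B-frames, no lookahead
        param.i_width  = width;
        param.i_height = height;
        param.i_slice_max_size = 1400;   // keep each NAL unit around one UDP payload
        param.i_keyint_max = 30;         // force a keyframe at least every 30 frames
        x264_param_apply_profile(&param, "baseline");

        x264_t* enc = x264_encoder_open(&param);
        x264_picture_t pic_in, pic_out;
        x264_picture_alloc(&pic_in, X264_CSP_I420, width, height);

        for (int i = 0; i < nframes; ++i) {
            // ... copy one I420 frame from your capture code into pic_in.img.plane[0..2] ...
            pic_in.i_pts = i;
            x264_nal_t* nals;
            int nnal = 0;
            if (x264_encoder_encode(enc, &nals, &nnal, &pic_in, &pic_out) > 0)
                for (int n = 0; n < nnal; ++n)
                    send_nal(nals[n].p_payload, nals[n].i_payload); // one NAL per slice
        }
        x264_picture_clean(&pic_in);
        x264_encoder_close(enc);
    }

With i_slice_max_size set, each frame comes out as several independently parseable slices, which is what gives you control over packetization.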
Two questions which may already help you:
How are you interfacing with ffmpeg from C++? "ffmpeg" generally refers to the command line tool; from C++ you generally use the individual libraries that are part of ffmpeg. You should use libavcodec to encode your frames and possibly libavformat to packetize them into a container format.
Which codec do you use?
I am looking for a fast way to load a video file and create images from it at certain intervals (every second, every minute, every hour, etc.).
I tried using DirectShow, but it just ran too slow to open the video file, move to a certain location, get the data and save it out to an image, even when I disabled the reference clock. I tried OpenCV, but it has trouble opening the AVI file unless I know the exact codec information, so if there is a way to get the codec information out of OpenCV I may give it another shot. I tried FFMPEG, but I don't have as much control over it as I would wish.
Any advice would be greatly appreciated. This is being developed on a Windows box since it has to be hosted on a Windows box.
MPEG-4 is not an intra-coded format, so you can't just jump to a random frame and decode it on its own, as most frames only encode the differences from one or more other frames. I suspect your decoding is slow because when you land on such a frame, several other frames that it depends on have to be decoded first.
One way to improve performance would be to determine which frames are keyframes (sometimes also called 'sync points') and limit your decoding to those frames, since they can be decoded on their own.
I'm not very familiar with DirectShow capabilities, but I would expect it has some API to expose sync points.
Also, I should mention that the QuickTime SDK on Windows is possibly another good option for decoding frames from movies. You should first check that your AVI movies play correctly in the QuickTime Player. The QT SDK does expose sync points; see the section Finding Interesting Times in the QT SDK documentation.
ffmpeg's libavformat might work for ya...
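If you go that way, grabbing the frame nearest to a given time by seeking to the previous keyframe looks roughly like the sketch below (old-style decode API, no error handling; grab_frame_at() is just an illustrative name and exact function names vary between ffmpeg/libav versions):

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    // Decode the first frame at or after the keyframe preceding 'seconds'.
    // 'out' must be allocated by the caller (av_frame_alloc / avcodec_alloc_frame).
    bool grab_frame_at(const char* filename, double seconds, AVFrame* out)
    {
        av_register_all();

        AVFormatContext* fmt = nullptr;
        if (avformat_open_input(&fmt, filename, nullptr, nullptr) < 0) return false;
        avformat_find_stream_info(fmt, nullptr);

        int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        AVStream* st = fmt->streams[vstream];
        AVCodecContext* ctx = st->codec;   // deprecated in newer versions, but short
        avcodec_open2(ctx, avcodec_find_decoder(ctx->codec_id), nullptr);

        // Seek to the keyframe at or before the requested time, then decode forward.
        int64_t ts = (int64_t)(seconds / av_q2d(st->time_base));
        av_seek_frame(fmt, vstream, ts, AVSEEK_FLAG_BACKWARD);
        avcodec_flush_buffers(ctx);

        AVPacket pkt;
        int got = 0;
        while (!got && av_read_frame(fmt, &pkt) >= 0) {
            if (pkt.stream_index == vstream)
                avcodec_decode_video2(ctx, out, &got, &pkt);
            av_free_packet(&pkt);
        }
        // This stops at the first decodable picture after the seek point; keep
        // decoding until the frame's timestamp reaches 'ts' if you need the exact frame.

        avcodec_close(ctx);
        avformat_close_input(&fmt);
        return got != 0;
    }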
I have a custom library that can decode to RGBA or any other format.
What is the best way to marry it with OpenGL to decode onto a texture so that it won't drop frames?
Or is there a better way that skips textures completely?
Edit:
Full HD video streamed over the net, so performance is an issue. 30 Hz. Recorded.
glTexSubImage2D() is quick and easy. You may be able to get more throughput with a PBO-based pipeline, at the expense of more latency.
Looks like glover may have some example code, as well as Ye Olde NeHe #35.
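For reference, the glTexSubImage2D approach amounts to allocating the texture once and overwriting it each frame; a minimal sketch (the function names here are just illustrative):

    #include <GL/gl.h>

    // One-time setup: allocate storage for an RGBA texture, no data yet.
    GLuint create_video_texture(int width, int height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        return tex;
    }

    // Per frame: overwrite the existing storage instead of recreating the texture.
    void upload_frame(GLuint tex, int width, int height, const void* rgba)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    }

The PBO variant replaces the direct upload with writing the decoded frame into a mapped GL_PIXEL_UNPACK_BUFFER, so the driver can do the transfer asynchronously while you decode the next frame.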
Is it possible to use the iPhone's hardware accelerated decoding of mp3s and AAC when using the OpenAL library?
I suppose there are two possible approaches if this is possible.
iPhone specific OpenAL extensions.
iPhone APIs to decode audio into raw bytes.
I have two specific use cases.
Completely decode a short sound bite.
Piecewise decode a larger sound file so it can be streamed into OpenAL rather than loaded all at once.
update
Boy! No one has an answer for this? Does Apple's NDA stifle these kinds of questions? What's going on? Surely someone else using OpenAL has wanted better audio performance.
There is at least one hardware (or hardware assisted) decoder in all iPhone device models. It can be accessed to convert mp3 and AAC files into raw PCM bytes by using the Audio Queue Services API. From thence you can process those bytes or send them to OpenAL.
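For the streaming case (the second use above), the OpenAL side usually rotates a small pool of buffers. A sketch, where decode_next_chunk() is a placeholder for whatever pulls raw PCM out of the decoder (16-bit mono at 44.1 kHz is assumed here purely for illustration):

    #include <OpenAL/al.h>   // <AL/al.h> on non-Apple platforms

    // Placeholder: fills 'out' with up to 'max' bytes of decoded PCM and returns
    // the number of bytes written (0 when the source material is exhausted).
    extern int decode_next_chunk(char* out, int max);

    void stream_into_openal(ALuint source)
    {
        const int kNumBuffers = 3;
        const int kChunkBytes = 32 * 1024;
        ALuint buffers[kNumBuffers];
        char pcm[kChunkBytes];

        alGenBuffers(kNumBuffers, buffers);

        // Prime the queue so playback has data before it starts.
        for (int i = 0; i < kNumBuffers; ++i) {
            int n = decode_next_chunk(pcm, kChunkBytes);
            alBufferData(buffers[i], AL_FORMAT_MONO16, pcm, n, 44100);
            alSourceQueueBuffers(source, 1, &buffers[i]);
        }
        alSourcePlay(source);

        for (;;) {
            ALint processed = 0;
            alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
            while (processed-- > 0) {
                ALuint buf;
                alSourceUnqueueBuffers(source, 1, &buf);
                int n = decode_next_chunk(pcm, kChunkBytes);
                if (n <= 0) return;                        // end of stream
                alBufferData(buf, AL_FORMAT_MONO16, pcm, n, 44100);
                alSourceQueueBuffers(source, 1, &buf);
            }
            // sleep or yield here in a real application
        }
    }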
AFAIK, there is no hardware audio decoder in the iPhone, 3G, and 3GS. This might have changed in the iPhone 4, but I have not heard anything to make me believe so.
I build a DirectShow graph consisting of my video capture filter (grabbing the screen) and the default audio input filter, both connected through a splitter to the WM ASF Writer output filter and to a VMR9 renderer. This means I want real-time audio/video encoding to disk together with a preview. The problem is that no matter what WM profile I choose (even a very low resolution profile), the output video file always "jitters" - every few frames there is a delay. The audio is OK - there is no jitter in the audio. The CPU usage is low (< 10%), so I believe this is not a problem of lack of CPU resources. I think I'm time-stamping my frames correctly.
What could be the reason?
Below is a link to a recorded video demonstrating the problem:
http://www.youtube.com/watch?v=b71iK-wG0zU
Thanks
Dominik Tomczak
I have had this problem in the past. Your problem is the volume of data being written to disk. Writing to a faster drive is a great and simple solution. The other thing I've done is place a video compressor into the graph. You need to make sure both input streams are using the same reference clock. I have had a lot of problems using this compressor scheme while keeping a good preview: my preview's frame rate dies even if I use an Infinite Tee rather than a Smart Tee, although the result written to disk was fine. It's also worth noting that the beefier the machine I ran this on, the less of an issue it was, so if you need both encoding and preview it may not actually provide much of a win over sticking a new, faster hard disk in the machine.
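To put the whole graph on one reference clock, one approach (sketch only, COM error handling omitted) is to set a single clock, e.g. the system clock, on the filter graph manager, which then distributes it to every filter:

    #include <dshow.h>

    // pGraph is your already-built IGraphBuilder.
    void use_single_clock(IGraphBuilder* pGraph)
    {
        IReferenceClock* pClock = nullptr;
        CoCreateInstance(CLSID_SystemClock, nullptr, CLSCTX_INPROC_SERVER,
                         IID_IReferenceClock, (void**)&pClock);

        IMediaFilter* pMediaFilter = nullptr;
        pGraph->QueryInterface(IID_IMediaFilter, (void**)&pMediaFilter);
        pMediaFilter->SetSyncSource(pClock);   // all filters now time-stamp against this clock

        pMediaFilter->Release();
        pClock->Release();
    }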
I don't think this is the issue. The volume of data written is less than 1 MB/s (average compression ratio during encoding). I found the reason: when I build the graph without audio input (the WM ASF Writer has only a video input pin) and my video capture pin is connected through a Smart Tee to the preview pin and to the WM ASF Writer video input pin, there is no glitch in the output movie. I reckon the problem is audio-to-video synchronization in my graph. The same happens when I build the graph in GraphEdit: without audio, no glitch; with audio, there is a constant glitch every 1 s. I wonder whether I am time-stamping my frames wrongly, but I think I'm doing it correctly. What is the general solution for audio-to-video synchronization in DirectShow graphs?
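For what it's worth, the usual timestamping pattern in a push source filter's FillBuffer looks roughly like the sketch below (the class and member names are illustrative, not taken from your filter); as long as the audio capture filter stamps its samples against the same stream time, the ASF writer should be able to interleave the two streams without the periodic glitch:

    // Requires the DirectShow base classes (streams.h); sketch only.
    HRESULT CScreenCapturePin::FillBuffer(IMediaSample* pSample)
    {
        // ... copy the captured screen bits into the sample's buffer ...

        REFERENCE_TIME rtStart = m_rtPosition;               // 100-ns units of stream time
        REFERENCE_TIME rtStop  = rtStart + m_rtFrameLength;  // e.g. 10000000 / fps
        m_rtPosition = rtStop;

        pSample->SetTime(&rtStart, &rtStop);
        pSample->SetSyncPoint(TRUE);    // every uncompressed captured frame is a key frame
        return S_OK;
    }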