I'm trying to get FFmpeg to seek in H.264 interlaced videos, and I found that I can seek to any frame if I just force it.
I already hacked the decoder to consider I-frames as keyframes, and it works nicely with the videos I need it to work with. And there will NEVER be any videos encoded with different encoders.
However, I'd like the seek to find me an I-frame and not just any frame.
What I'd need to do is hack the AVIndexEntry creation so that it marks any frame that is an I-frame as a keyframe.
Or alternatively, hack the index search to return I-frames.
The code gets a tad difficult to follow at this point.
Can someone please point me at the right place in the FFmpeg code that handles this?
This isn't possible as far as I can tell.
But if you do know where the I-frames are, either by decoding the entire video or by just knowing in advance, you can insert entries into the AVIndexEntry information stored in the stream.
Each AVIndexEntry has a flag that tells whether it's a keyframe; just set it on the I-frames.
Luckily, I happen to know where they are in my videos :)
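In code, that boils down to something like the following sketch (untested; the byte position and timestamp are placeholders for values you obtained elsewhere):

    #include <libavformat/avformat.h>

    /* Sketch: mark a position we already know holds an I-frame as a keyframe in
     * the stream's index, so av_seek_frame() / av_index_search_timestamp() will
     * land on it.  pos is the byte offset of the frame in the file and ts is its
     * timestamp in the stream's time_base. */
    static void mark_iframe(AVStream *st, int64_t pos, int64_t ts)
    {
        av_add_index_entry(st, pos, ts,
                           0,                  /* packet size, 0 if unknown      */
                           0,                  /* distance to previous keyframe  */
                           AVINDEX_KEYFRAME);  /* the flag the seek code checks  */
    }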
-mika
I'm trying to get information about frames in an H.264 bitstream, especially the motion vectors of macroblocks. I think I have to use the FFmpeg code for it, but it's really huge and hard to understand.
So, can someone give me some tips or examples of partially decoding a single frame from the raw data of an H.264 stream?
Thank you.
Unfortunately, to get that level of information from the bitstream you have to decode every macroblock; there's no quick option like there would be for getting information from the slice header.
One option is to use the H.264 reference software, turn on the verbose debug output, and/or add your own printfs where needed, but this is also a large code base to navigate:
http://iphome.hhi.de/suehring/tml/
(You can also use FFmpeg and add output where needed, as you said, but that would take some understanding of that code base too.)
There are also graphical tools for analyzing video bitstreams that will show this type of information on a per-macroblock basis; many are expensive, but free trial versions are sometimes available.
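As a starting point for the FFmpeg route: reasonably recent FFmpeg builds can export per-macroblock motion vectors as frame side data, so you don't have to add printfs inside the decoder. A rough, untested sketch (file handling simplified, error checks omitted):

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/motion_vector.h>

    /* Sketch: decode a file with the "export_mvs" flag set and dump the motion
     * vectors attached to each decoded frame as side data. */
    static void dump_motion_vectors(const char *filename)
    {
        AVFormatContext *fmt = NULL;
        avformat_open_input(&fmt, filename, NULL, NULL);
        avformat_find_stream_info(fmt, NULL);

        int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        const AVCodec *dec = avcodec_find_decoder(fmt->streams[vidx]->codecpar->codec_id);
        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[vidx]->codecpar);

        AVDictionary *opts = NULL;
        av_dict_set(&opts, "flags2", "+export_mvs", 0);   /* the important part */
        avcodec_open2(ctx, dec, &opts);

        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vidx && avcodec_send_packet(ctx, pkt) >= 0) {
                while (avcodec_receive_frame(ctx, frame) >= 0) {
                    AVFrameSideData *sd =
                        av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
                    if (!sd)
                        continue;
                    const AVMotionVector *mv = (const AVMotionVector *)sd->data;
                    for (size_t i = 0; i < sd->size / sizeof(*mv); i++)
                        printf("%dx%d block: (%d,%d) -> (%d,%d)\n",
                               mv[i].w, mv[i].h,
                               mv[i].src_x, mv[i].src_y,
                               mv[i].dst_x, mv[i].dst_y);
                }
            }
            av_packet_unref(pkt);
        }
        /* cleanup (free frame/packet, close codec and input) omitted for brevity */
    }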
I am currently attempting to develop a player that can perform accurate seeking in an MPEG-4 elementary video stream. I'm in the planning stage, trying to decide how to go about things, and I'd like some advice before I start.
Some things to note are:
I will have complete control over the encoding of the file.
The original content will be I-frame only
FFmpeg is the encoding/decoding library
Audio can be disregarded for now. I will only be dealing with the video stream.
Frame accurate seeking must be implemented
So, when I'm encoding the content, can I query what type of frame (I, P, or B) has just been encoded, so that I can construct an additional index stream for the seeking operation? If not, I can scan the GOP after it has been encoded to find the I-frame.
As for playback, the user needs to be able to type in a specific time and go to that frame (the nearest I-frame will be suitable for now). We can assume that the GOP is closed and fairly short (e.g. 15 frames). My thought is to query the index stream that I created during encoding and determine the relevant offset into the stream for the requested time.
I'm not sure how to seek using the FFmpeg library when playing back files.
Has anyone done anything similar, and if so, can you give a brief explanation of how you did it?
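For reference, this is roughly the kind of thing I'm picturing, pieced together from the FFmpeg docs (untested; the index writing is just a printf placeholder here):

    #include <stdio.h>
    #include <inttypes.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Encode side: after feeding a frame to the encoder, check whether the packet
     * that comes back is a keyframe and record its timestamp (here just printed;
     * in the real thing it would go into my index stream). */
    static void drain_and_index(AVCodecContext *enc)
    {
        AVPacket *pkt = av_packet_alloc();
        while (avcodec_receive_packet(enc, pkt) >= 0) {
            if (pkt->flags & AV_PKT_FLAG_KEY)            /* an I-frame in my closed GOPs */
                printf("keyframe at pts %" PRId64 "\n", pkt->pts);
            /* ...write pkt to the output file here... */
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
    }

    /* Playback side: jump to the keyframe at or before target_ts (in the stream's
     * time_base) and reset the decoder before decoding forward to the exact frame. */
    static void seek_to(AVFormatContext *fmt, AVCodecContext *dec,
                        int video_stream, int64_t target_ts)
    {
        av_seek_frame(fmt, video_stream, target_ts, AVSEEK_FLAG_BACKWARD);
        avcodec_flush_buffers(dec);
    }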
I've always wanted to try to make a media player, but I don't understand how. I found FFmpeg and GStreamer, and I seem to be favoring FFmpeg despite its worse documentation, even though I haven't written anything at all yet. That being said, I feel I would understand how things worked more if I knew what they were doing. I have no idea how video/audio streams work, or about the various media types, so that doesn't help. At the end of the day, I'm just 'emulating' some of the code samples.
Where do I start learning how to encode/decode/play back video/audio streams without having to read hundreds of pages of several 'standards'? Ideally I'd also like enough knowledge, to a certain extent, to play back media without relying on another API. Googling 'basic video audio decoding encoding' doesn't seem to help. :(
This seems to be a black art that nobody is out to tell anyone about.
The first part is extracting streams from the container. From there, you need to decode the streams into media. I recommend finding a small Theora video and seeing how the pieces relate there.
So you want us to write one answer, you read it, and you become a master of the multimedia domain..!!!!
Anyway, that can't be done in one answer.
First of all, understand this terminology by googling:
1> container -- muxer/demuxer
2> codec -- coder/decoder
If you like FFmpeg, then go with its basic video player application. It is well documented here: http://dranger.com/ffmpeg/ . It shows how to demux a container and decode any elementary stream with the FFmpeg API; there is more about this at http://ffmpeg.org/ffplay.html
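The core loop that tutorial builds up is, very roughly, this (a bare sketch with error handling and cleanup stripped out; "input.mp4" is just an example file name):

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Minimal demux -> decode flow: open the container, find the video stream,
     * then read packets and turn them into raw frames. */
    static void decode_all_video(const char *path)   /* e.g. "input.mp4" */
    {
        AVFormatContext *fmt = NULL;
        avformat_open_input(&fmt, path, NULL, NULL);        /* open the container */
        avformat_find_stream_info(fmt, NULL);

        int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        const AVCodec *dec = avcodec_find_decoder(fmt->streams[vidx]->codecpar->codec_id);
        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[vidx]->codecpar);
        avcodec_open2(ctx, dec, NULL);

        AVPacket *pkt = av_packet_alloc();                   /* compressed data   */
        AVFrame  *frm = av_frame_alloc();                    /* decoded picture   */
        while (av_read_frame(fmt, pkt) >= 0) {               /* demux one packet  */
            if (pkt->stream_index == vidx) {
                avcodec_send_packet(ctx, pkt);               /* feed the decoder  */
                while (avcodec_receive_frame(ctx, frm) >= 0) {
                    /* frm->data now holds one raw (YUV) frame: display it,
                     * convert it with libswscale, etc. */
                }
            }
            av_packet_unref(pkt);
        }
        /* flushing the decoder and freeing everything omitted for brevity */
    }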
I like GStreamer more than FFmpeg. It has good documentation. It would be a good choice if you start with GStreamer.
I want to make a screen capture utility. So far I am able to capture the screen at a regular interval to get a numbered sequence of images, and now I want to encode them into a video format, preferably FLV (because of good compression and web support).
I tried ffmpeg.exe for that, but for some strange reason it didn't work on my Vista Ultimate... only the first picture is encoded, and I don't know what happened to the rest.
Also, I would prefer doing the encoding programmatically (using a C/C++ library API, if there is one for that purpose) rather than using tools such as ffmpeg.exe, and I am interested in encoding a picture sequence to video, not capturing continuous video directly.
I searched the internet... there are lots of libraries and tutorials for converting between video formats, but I didn't find anything useful for my problem.
I am not very proficient with video formats and SDK libraries; I just need a quick way to encode some pictures to video with some basic control (such as the time interval between two consecutive frames).
So can you help me with some pointers as to which library I should use and how (a code fragment and a short descriptive answer would greatly help)? Please don't recommend any .NET solution; I need to learn something out of this and don't want to apply some brute-force approach to the problem.
Sorry for my English... and thanks in advance.
It appears that an .avi file can more or less be made directly out of .jpgs:
An AVI file may carry audio/visual data inside the chunks in virtually any compression scheme, including Full Frame (Uncompressed), ..., Motion JPEG.
Also, something very similar has been discussed here before.
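Since the JPEGs are already valid Motion JPEG frames, libavformat can in principle write them straight into an AVI without re-encoding. A rough, untested sketch (the file naming pattern, frame size and 5 fps rate are just assumptions; error handling is omitted):

    #include <stdio.h>
    #include <libavformat/avformat.h>

    int main(void)
    {
        const int fps = 5;                       /* assumed capture rate          */
        AVFormatContext *oc = NULL;
        avformat_alloc_output_context2(&oc, NULL, "avi", "out.avi");

        AVStream *st = avformat_new_stream(oc, NULL);
        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codecpar->codec_id   = AV_CODEC_ID_MJPEG;
        st->codecpar->width      = 1280;         /* must match the screenshots    */
        st->codecpar->height     = 720;
        st->time_base            = (AVRational){1, fps};

        avio_open(&oc->pb, "out.avi", AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);

        for (int i = 0; ; i++) {
            char name[64];
            snprintf(name, sizeof(name), "shot%04d.jpg", i);   /* assumed naming */
            FILE *f = fopen(name, "rb");
            if (!f)
                break;                           /* stop at the first missing file */
            fseek(f, 0, SEEK_END);
            long size = ftell(f);
            fseek(f, 0, SEEK_SET);

            AVPacket *pkt = av_packet_alloc();
            av_new_packet(pkt, size);
            fread(pkt->data, 1, size, f);        /* each .jpg is one MJPEG frame  */
            fclose(f);

            pkt->stream_index = 0;
            pkt->pts = pkt->dts = i;             /* one tick per frame at 1/fps   */
            pkt->duration = 1;
            av_packet_rescale_ts(pkt, (AVRational){1, fps}, st->time_base);
            av_interleaved_write_frame(oc, pkt);
            av_packet_free(&pkt);
        }

        av_write_trailer(oc);
        avio_closep(&oc->pb);
        avformat_free_context(oc);
        return 0;
    }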
I have a set of mp3 files, some of which have extended periods of silence or periodic intervals of silence. How can I programmatically detect this?
I am looking for a library in C++, or preferably C#, that will allow me to examine the sound content of these files for the silences.
EDIT: I should elaborate on what I am trying to achieve. I am capturing streaming sports commentary using VLC and saving it to MP3. When a game is delayed or cancelled, the streaming commentary is replaced by a repetitive message saying commentary is not available. By looking for these periodic silences (or total silence), I can detect that there is no commentary and stop the streaming recording.
For this reason I am reluctant to decompress the MP3, because it would mean my test for these silences would be very slow. Unless I can decode just the last 5 minutes of the file?
Thanks
Andrew
I'm not aware of a library that will detect silence directly in the MP3-encoded data, since it's not a trivial task to detect silence without first decompressing. Luckily, it's easy to find libraries that decode MP3 files and expose them as PCM data, and it's trivial to detect silence in PCM data. Here is one such library for C# I found, but I'm sure there are tons: http://www.robburke.net/mle/mp3sharp/
Once you decode the data, you will have a list of PCM samples. In its most basic form, the algorithm to detect silence is simply to analyze small chunks (as little as 0.25 s or as much as several seconds each) and make sure that the absolute value of every sample in the chunk is below a threshold. The threshold determines how 'quiet' the sound has to be to count as silence, and the chunk size determines how long the volume needs to stay below that threshold to count as silence. (If you go with very short chunks, you will get lots of false positives from samples near zero crossings, but 0.25 s or longer should be OK.) There are improvements to the basic approach, such as hysteresis (basically using two thresholds, one for the transition to silence and one for the transition out of silence) and filtering.
Unfortunately, I don't know of a C++ or C# library that implements level detection off hand, and nothing immediately springs up on Google, but at least the simple version is pretty easy to code.
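For example, the core check might look something like this (shown in C for brevity, but the same loops translate directly to C#; the 16-bit mono samples, 0.25 s chunk size and threshold of 500 are just illustrative assumptions):

    #include <stdlib.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* True if every sample in the chunk is below the threshold. */
    static bool chunk_is_silent(const short *samples, size_t count, int threshold)
    {
        for (size_t i = 0; i < count; i++)
            if (abs(samples[i]) > threshold)
                return false;
        return true;
    }

    /* Scan decoded PCM chunk by chunk and return the longest run of silent
     * chunks.  At 44.1 kHz mono, a 0.25 s chunk is about 11025 samples, and a
     * threshold of 500 is roughly -36 dBFS. */
    static size_t longest_silent_run(const short *pcm, size_t total,
                                     size_t chunk, int threshold)
    {
        size_t run = 0, best = 0;
        for (size_t off = 0; off + chunk <= total; off += chunk) {
            if (chunk_is_silent(pcm + off, chunk, threshold)) {
                if (++run > best)
                    best = run;
            } else {
                run = 0;
            }
        }
        return best;
    }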
Edit: Also, this library seems interesting: http://naudio.codeplex.com/
Also, while not a true duplicate question, the answers here will be useful for you:
Detecting audio silence in WAV files using C#