FFmpeg resample audio while decoding - c++

My task is to build a decoder that produces exactly one raw audio frame for each raw video frame from an encoded MPEG-TS network stream, so that users of the API can call getFrames() and receive exactly those two frames.
Currently I read with av_read_frame in a thread and decode packets as they arrive, audio or video, collecting them until a video packet is hit. The problem is that multiple audio packets are usually received before any video is seen.
av_read_frame blocks and returns once a certain amount of audio data has been collected (1152 samples per packet for MP2); decoding that packet yields a raw AVFrame with duration T (which depends on the sample rate), whereas a video frame generally has a duration larger than T (which depends on the fps), so several audio frames arrive before each video frame.
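For example, at a 48 kHz sample rate an MP2 frame of 1152 samples lasts 1152 / 48000 = 24 ms, while at 25 fps a video frame lasts 40 ms, so roughly 1.7 audio frames accumulate per video frame.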
My guess is that I have to merge the collected audio frames into one single frame at the moment a video frame is hit, and that resampling and setting the timestamp to align with the video are also needed. I don't know whether this approach is even valid, though.
What is the smoothest way to sync video and audio in this manner?
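One way to do the merging (a minimal sketch, not a definitive implementation: the helper names are mine, it assumes the decoded audio is already stereo in one known sample format, it uses the pre-FFmpeg-5.1 channel_layout field, and differing source formats would first be converted with libswresample's swr_convert):

extern "C" {
#include <libavutil/audio_fifo.h>
#include <libavutil/frame.h>
#include <libavutil/channel_layout.h>
}

// Queue the samples of every decoded audio AVFrame.
static int queue_audio(AVAudioFifo *fifo, const AVFrame *af) {
    return av_audio_fifo_write(fifo, (void **)af->data, af->nb_samples);
}

// When a video frame is decoded, pull exactly the samples that cover its
// duration and return them as a single AVFrame stamped like the video.
static AVFrame *merge_audio_for_video(AVAudioFifo *fifo, int sample_rate,
                                      enum AVSampleFormat fmt,
                                      int64_t video_pts,
                                      double video_duration_sec) {
    int wanted = (int)(video_duration_sec * sample_rate + 0.5);
    if (av_audio_fifo_size(fifo) < wanted)
        return nullptr;                        // not enough audio buffered yet

    AVFrame *out = av_frame_alloc();
    out->nb_samples     = wanted;
    out->format         = fmt;
    out->channel_layout = AV_CH_LAYOUT_STEREO; // assumption: stereo input
    out->sample_rate    = sample_rate;
    if (av_frame_get_buffer(out, 0) < 0) {
        av_frame_free(&out);
        return nullptr;
    }
    av_audio_fifo_read(fifo, (void **)out->data, wanted);
    out->pts = video_pts; // in the video time base; av_rescale_q if needed
    return out;
}

The FIFO would be created once with av_audio_fifo_alloc(fmt, 2, 4096) and grows automatically on av_audio_fifo_write, so samples left over after a read simply stay queued for the next video frame, which keeps the merged audio aligned with the video over time.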

Related

Getting frame timestamp on RTSP playback

I'm making a simple camera-playback video player in Qt with GStreamer. Is it possible to get the frame date/time (or only the time) from the OSD, i.e. the timestamp burned into the picture? I currently work with Hikvision cameras. I tried to find the timestamp in RTP packets dumped with Wireshark, but they only carry timestamps relative to the first frame.

Create I Frame out of P and B frames

I've written a C++ converter based on FFmpeg that takes a link to an HLS stream and converts it into a local .mp4 video. So far, so good; the converter works like a charm, no questions about that.
PROBLEM: No matter what input source I provide to the converter, at the end of the conversion I need to receive a video with key frames ONLY. I need such a video for perfect seeking, forward and reverse.
It's a well-known fact that dependent video frames (P and B) rely on their reference frame (the I-frame), because that frame contains the full pixel map. Based on that, we can recreate an I-frame for each P and B frame by merging its data with its I-frame. That's why an ffmpeg command such as ffmpeg -i video.mp4 output%4d.jpg works.
QUESTION: How can I implement an algorithm that merges frames so that the output video contains key frames only? What quirks do I need to know about when merging the data of AVPackets?
Thanks.
You cannot "merge" the P- and B-frames of a compressed stream (e.g. one using the H.264 codec) to obtain I-frames.
What ffmpeg does with
ffmpeg -i video.mp4 output%4d.jpg
is decode each frame (so it must start from an I-frame and then decode all subsequent P- and B-frames in the stream), compress each decoded frame back to JPEG, and output one JPEG image per frame of the original input stream.
If you want to convert an input stream with P/B frames to an intra-only stream (with all I-frames), you need to transcode the stream.
That means decoding all frames from the original stream and re-encoding them into an intra-only stream.
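In practice that transcode is a one-liner (a sketch, assuming the libx264 encoder is available; -g 1 forces a GOP size of 1, so every output frame is a key frame):

ffmpeg -i video.mp4 -c:v libx264 -g 1 -c:a copy intra_only.mp4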

Why can't my app decode the RTSP stream?

I use live555 to receive RTP video frames (encoded in H.264). I have Live555 open my local .sdp file to receive the frame data, and I can see DummySink::afterGettingFrame being called continuously. If fReceiveBuffer in DummySink is correct, why can't FFmpeg decode the frame? Is my code wrong?
Here is my code snippet:
http://paste.ubuntu.com/12529740/
The function avcodec_decode_video2 always fails; its return value is less than zero.
Does fReceiveBuffer hold exactly one video frame?
Here is my FFmpeg init code that opens the corresponding video decoder:
http://paste.ubuntu.com/12529760/
I read the H.264 documentation again and found out that an I-frame (IDR) needs the SPS/PPS NAL units, each preceded by the 0x00000001 start code, inserted before it in the buffer so that the decoder can decode the frame correctly. Here are related solutions:
FFmpeg can't decode H264 stream/frame data
Decoding h264 frames from RTP stream
Now my app works fine: it can decode the frames and convert them to an OSD image for display on screen.
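For reference, the fix amounts to something like this (my own sketch, not the asker's code; sps and pps would come e.g. from the SDP's sprop-parameter-sets attribute, base64-decoded):

#include <cstdint>
#include <vector>

// Build an Annex B buffer (start code + SPS + start code + PPS +
// start code + frame NAL), which is what the FFmpeg H.264 decoder
// expects when it is fed raw IDR frames.
static std::vector<uint8_t> make_annexb(const std::vector<uint8_t> &sps,
                                        const std::vector<uint8_t> &pps,
                                        const uint8_t *nal, size_t nal_len) {
    static const uint8_t start_code[4] = {0x00, 0x00, 0x00, 0x01};
    std::vector<uint8_t> buf;
    auto append = [&](const uint8_t *p, size_t n) {
        buf.insert(buf.end(), start_code, start_code + 4);
        buf.insert(buf.end(), p, p + n);
    };
    append(sps.data(), sps.size()); // sequence parameter set
    append(pps.data(), pps.size()); // picture parameter set
    append(nal, nal_len);           // the frame from fReceiveBuffer
    return buf;
}

The resulting buffer is what goes into the AVPacket handed to avcodec_decode_video2. Strictly, the SPS/PPS only have to precede the first IDR frame, since the decoder caches them, but prepending them to every IDR is a common and safe choice.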

recording with uncompressed audio speeds up video

I have been using a recorder (based on the muxer example) satisfactorily for quite some time with various formats. Now I need to use uncompressed audio with MJPEG video, and I notice the video speeds up considerably (about 10 times as fast) in the recorded file. The audio is OK, and if I use a compressed audio format (like MP3) the video is fine as always. Does anyone have an idea why the video speeds up the moment I use uncompressed audio (CODEC_ID_PCM_S16LE)?

MP4 video created using a DirectShow filter is not playing

Using DirectShow filters I have created an MP4 file writer with two input pins (one for audio and one for video). I write the audio samples received through one pin into one track and the video samples received from the other pin into another track, but the video does not play. If I connect only one pin, either audio or video, I can play the output file; that is, it works if there is only one track.
I am using an H.264 encoder for the video and an MPEG-4 encoder for the audio. The encoders are working fine, as I am able to play the audio and video separately.
I am setting the track count to 2. Is there any information that must be provided in the moov box to make the video play? Or should we tell the decoder which track is audio and which track is video? Since we already set those fields in the track information I don't think that is the issue, but why is my video not playing?