I'm making a simple camera playback video player in Qt with GStreamer. Is it possible to get the frame date/time (or only the time) shown in the OSD? (An example of the timestamp is visible on the picture.) I'm currently working with a Hikvision camera. I tried to find the timestamp in the RTP packets dumped with Wireshark, but they only contain timestamps relative to the first frame.
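To show what I mean by "relative to the first frame": the RTP header only carries a 32-bit media-clock timestamp (90 kHz for H264 video), so from the packets alone I can only compute elapsed time since the first frame, not a calendar date/time. A small example with made-up values:

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    const uint32_t first_rtp_ts   = 123450000;  // RTP timestamp of the first frame
    const uint32_t current_rtp_ts = 123720000;  // RTP timestamp of a later frame
    const double   clock_rate     = 90000.0;    // H264 RTP media clock rate (Hz)

    // Only a relative time is recoverable from the RTP timestamps themselves.
    double elapsed = (current_rtp_ts - first_rtp_ts) / clock_rate;
    printf("frame is %.3f s after the first frame\n", elapsed);  // prints 3.000 s
    return 0;
}
```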
Related
I'm trying to capture sound with ALSA (asound) plus video via v4l2 on a Raspberry Pi; each part works fine separately. But when both run at the same time, some audio frames are lost during the VIDIOC_QBUF ioctl call:
ioctl(fd, VIDIOC_QBUF, &bufferinfo[i])
and capturing audio with
snd_pcm_readi (capture_handle, input_buffer_audio, audio_frames)
On every VIDIOC_QBUF, snd_pcm_readi loses/hangs for around ~300 audio frames. I also tried running the audio capture and the video capture in separate test apps on the RPi; the problem is reproduced in that case as well.
I don't see any CPU overload or anything else indicating the problem (the load is below 12% on my RPi 3B). The problem is not reproduced on a Linux PC with the same ALSA+v4l2 stack and the same camera (Logitech C270).
With a 30 fps camera this is a big problem, because losing that many audio frames makes the sound laggy.
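A simplified, single-threaded sketch of how the two calls interleave in my test (my real code differs, and device setup, buffer allocation and hardware parameters are omitted here):

```cpp
#include <alsa/asoundlib.h>
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <cstdio>

void capture_loop(int v4l2_fd, snd_pcm_t *capture_handle,
                  v4l2_buffer *bufferinfo, int buffer_count,
                  short *input_buffer_audio, snd_pcm_uframes_t audio_frames)
{
    for (int i = 0; ; i = (i + 1) % buffer_count) {
        // Dequeue a filled video buffer (the frame is processed elsewhere).
        if (ioctl(v4l2_fd, VIDIOC_DQBUF, &bufferinfo[i]) < 0) {
            perror("VIDIOC_DQBUF");
            break;
        }

        // Re-queue the buffer for the driver. On the RPi this is the call
        // during which the concurrent audio capture loses ~300 frames.
        if (ioctl(v4l2_fd, VIDIOC_QBUF, &bufferinfo[i]) < 0) {
            perror("VIDIOC_QBUF");
            break;
        }

        // Read the next chunk of audio; a negative result is an error such
        // as an overrun (-EPIPE), which snd_pcm_recover tries to fix.
        snd_pcm_sframes_t got =
            snd_pcm_readi(capture_handle, input_buffer_audio, audio_frames);
        if (got < 0)
            got = snd_pcm_recover(capture_handle, (int)got, 0);
        if (got < 0) {
            fprintf(stderr, "snd_pcm_readi: %s\n", snd_strerror((int)got));
            break;
        }
    }
}
```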
My task is to build a decoder that generates exactly 1 raw audio frame for every 1 raw video frame from an encoded MPEG-TS network stream, so that users of the API can call getFrames() and receive exactly these two frames.
Currently I am reading with av_read_frame in a thread, decoding packets as they come (audio or video) and collecting until a video packet is hit. The problem is that multiple audio packets are generally received before a video packet is seen.
av_read_frame blocks and returns once a certain amount of audio data has been collected (1152 samples for MP2); decoding that packet gives a raw AVFrame with a duration of T (which depends on the sample rate), whereas the video frame generally has a duration larger than T (which depends on the fps), so multiple audio frames arrive before it.
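For reference, this is roughly what my reading thread does (a simplified sketch using the send/receive decode API; the format/decoder contexts and stream indices are set up elsewhere, the names are placeholders, and return-value handling is trimmed):

```cpp
#include <vector>
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

// Collect decoded audio frames until one video frame has been decoded.
bool collect_until_video(AVFormatContext *fmt_ctx,
                         AVCodecContext *audio_dec_ctx, int audio_stream_index,
                         AVCodecContext *video_dec_ctx, int video_stream_index,
                         std::vector<AVFrame*> &audio_frames, AVFrame *&video_frame)
{
    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        const bool is_video = (pkt->stream_index == video_stream_index);
        const bool is_audio = (pkt->stream_index == audio_stream_index);
        if (is_video || is_audio) {
            AVCodecContext *dec = is_video ? video_dec_ctx : audio_dec_ctx;
            avcodec_send_packet(dec, pkt);
            AVFrame *frame = av_frame_alloc();
            while (avcodec_receive_frame(dec, frame) == 0) {
                if (is_video) {                  // first decoded video frame: stop here
                    video_frame = frame;
                    av_packet_unref(pkt);
                    av_packet_free(&pkt);
                    return true;
                }
                audio_frames.push_back(frame);   // keep collecting audio frames
                frame = av_frame_alloc();
            }
            av_frame_free(&frame);               // last allocation was never filled
        }
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    return false;                                // end of stream before a video frame
}
```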
My guess is that I have to merge the collected audio frames into a single frame at the moment a video frame is hit. Resampling and setting the timestamp to align with the video are probably also needed. I don't know if this approach is even valid, though.
What is the smoothest way to sync video and audio in this manner?
I use live555 to receive RTP video frames (encoded in H264). I use live555 to open my local .sdp file and receive the frame data. I can see that DummySink::afterGettingFrame is called continuously. If fReceiveBuffer in DummySink is correct, why can't FFmpeg decode the frame? Is my code wrong?
Here is my Code Snippet:
http://paste.ubuntu.com/12529740/
The function avcodec_decode_video2 always fails; its return value is less than zero.
Does fReceiveBuffer contain one complete video frame?
Here is my FFmpeg init code that opens the corresponding video decoder:
http://paste.ubuntu.com/12529760/
I read the H264 documentation again and found out that an I-frame (IDR) needs the SPS/PPS, each preceded by the 0x00000001 start code, inserted before it so the decoder is able to decode the frame correctly. Here are related solutions:
FFmpeg can't decode H264 stream/frame data
Decoding h264 frames from RTP stream
Now my app works fine: it can decode the frames and convert them to an OSD image for display on screen.
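For reference, a simplified sketch of the fix: before passing an IDR NAL unit from fReceiveBuffer to avcodec_decode_video2, I prepend the 0x00000001 start code plus the SPS and PPS (in my case they come from the sprop-parameter-sets line of the .sdp file; your source may differ). The function and buffer names here are only illustrative:

```cpp
#include <cstdint>
#include <vector>

static const uint8_t kStartCode[4] = {0x00, 0x00, 0x00, 0x01};

// Build an Annex-B buffer the decoder can handle: [SC][SPS][SC][PPS][SC][NAL]
std::vector<uint8_t> BuildAnnexBFrame(const uint8_t *sps, size_t sps_len,
                                      const uint8_t *pps, size_t pps_len,
                                      const uint8_t *nal, size_t nal_len)
{
    std::vector<uint8_t> out;
    auto append = [&out](const uint8_t *p, size_t n) {
        out.insert(out.end(), kStartCode, kStartCode + 4);  // 0x00000001 start code
        out.insert(out.end(), p, p + n);
    };
    append(sps, sps_len);   // sequence parameter set
    append(pps, pps_len);   // picture parameter set
    append(nal, nal_len);   // the NAL unit received in fReceiveBuffer
    return out;             // out.data()/out.size() go into the AVPacket
}
```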
Using DirectShow filters I have created an MP4 file writer with two input pins (one for audio and one for video). I write the audio samples received through one pin into one track and the video samples received from the other pin into another track. But my video is not playing. If I connect only one pin, either audio or video, I can play the output file, i.e. if there is only one track.
I am using an H264 encoder for the video and an MPEG-4 encoder for the audio. The encoders are working fine, since I am able to play the audio and video separately.
I am setting the track count to 2. Is there any information that needs to be provided in the moov box to make the video play? Or should we tell the decoder which track is audio and which track is video? Since we already set those fields in the track information, I don't think that is the issue, but why is my video not playing?
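For context, the fields I mean live in the 'hdlr' box inside each track's mdia: its handler_type ('vide' vs 'soun') is what identifies a track as video or audio to a player. A simplified sketch of how such a box can be written (the function name and layout details here are just an illustration, not my actual writer):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Build the 'hdlr' box that goes inside trak/mdia. handler_type must be
// "vide" for the video track and "soun" for the audio track.
std::vector<uint8_t> BuildHdlrBox(const std::string &handler_type,
                                  const std::string &name)
{
    std::vector<uint8_t> box;
    auto put32 = [&box](uint32_t v) {
        box.push_back(uint8_t(v >> 24)); box.push_back(uint8_t(v >> 16));
        box.push_back(uint8_t(v >> 8));  box.push_back(uint8_t(v));
    };
    auto put_bytes = [&box](const char *p, size_t n) {
        box.insert(box.end(), p, p + n);
    };

    put32(0);                                 // box size, patched at the end
    put_bytes("hdlr", 4);                     // box type
    put32(0);                                 // version (1 byte) + flags (3 bytes)
    put32(0);                                 // pre_defined
    put_bytes(handler_type.c_str(), 4);       // handler_type: 'vide' or 'soun'
    put32(0); put32(0); put32(0);             // reserved[3]
    put_bytes(name.c_str(), name.size() + 1); // null-terminated handler name

    const uint32_t size = uint32_t(box.size());
    box[0] = uint8_t(size >> 24); box[1] = uint8_t(size >> 16);
    box[2] = uint8_t(size >> 8);  box[3] = uint8_t(size);
    return box;
}

// e.g. BuildHdlrBox("vide", "VideoHandler") for the video trak,
//      BuildHdlrBox("soun", "SoundHandler") for the audio trak.
```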
I have two dump files of raw video and raw audio from an encoder and I want to be able to measure the "lip-sync". Imagine a video of a hammer striking an anvil. I want to go frame by frame and see that when the hammer finally hits the anvil, there is a spike in amplitude on the audio track.
Because of the speed at which everything happens, I cannot merely listen to the audio; I need to see the waveform in the time domain.
Are there any tools out there that will let me see both the video and audio?
If you are concerned with validating a decoder, then from a validation perspective the goal is generally to check the audio and video PTS values against a common real-time clock.
Raw YUV and PCM files do not include timestamps. If you know the frame rate and sample rate, you can use a raw YUV file viewer (I wrote my own) to figure out the time (from the start of the file) of a given frame in the video, and a tool like Audacity to figure out the time from the start of the file to the start of a tone in the audio file. This still may not tell you the whole story, since tools usually embed a delay between the audio and video in the TS/PS file. Or you can hook up an oscilloscope and go old school.
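As a worked example of that arithmetic (the frame index, sample offset, frame rate and sample rate below are made-up values):

```cpp
#include <cstdio>

int main()
{
    const double fps = 29.97;          // frame rate of the raw YUV dump
    const double sample_rate = 48000;  // sample rate of the raw PCM dump

    const long hit_frame   = 123;      // frame where the hammer visibly hits
    const long tone_sample = 198000;   // first sample of the audio spike

    const double video_time = hit_frame / fps;            // seconds from start of YUV file
    const double audio_time = tone_sample / sample_rate;  // seconds from start of PCM file

    // A positive offset means the audio event occurs later than the video event.
    printf("video %.3f s, audio %.3f s, A/V offset %.1f ms\n",
           video_time, audio_time, (audio_time - video_time) * 1000.0);
    return 0;
}
```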