MP4 video created using DirectShow filter is not playing - C++

Using DirectShow filters, I have created an MP4 file writer with two input pins (one for audio and one for video). I write the audio samples received through one pin into one track and the video samples received through the other pin into another track. But my video is not playing. If I connect only one pin, either audio or video, I can play the output file; that is, playback works when there is only one track.
I am using an H.264 encoder for video and an MPEG-4 encoder for audio. The encoders are working fine, as I am able to play the audio and the video separately.
I am setting the track count to 2. Is there any information that must be provided in the moov box to make the video play? Or should we tell the decoder which track is audio and which track is video? Since we already set those fields in the track information, I don't think that is the problem, but why is my video not playing?
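One thing players are strict about is the hdlr box inside each track's trak/mdia hierarchy: its handler_type must be 'vide' for the video track and 'soun' for the audio track, which is exactly "telling the decoder which track is which". If both tracks carry the same handler type (or a blank one), many players will render only one of them. Below is a minimal sketch of serializing that box; the field layout follows ISO/IEC 14496-12, but the helper function itself is illustrative, not part of any DirectShow API:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Build an ISO BMFF 'hdlr' box. handlerType must be "vide" for the video
// track and "soun" for the audio track; players use this field to decide
// how to interpret each trak.
std::vector<uint8_t> buildHdlrBox(const std::string& handlerType,
                                  const std::string& name) {
    std::vector<uint8_t> box;
    auto put32 = [&box](uint32_t v) {
        box.push_back(uint8_t(v >> 24));
        box.push_back(uint8_t(v >> 16));
        box.push_back(uint8_t(v >> 8));
        box.push_back(uint8_t(v));
    };
    uint32_t size = 4 + 4                 // box size + box type
                  + 4                     // version + flags
                  + 4                     // pre_defined
                  + 4                     // handler_type
                  + 12                    // reserved (3 x 32 bits)
                  + uint32_t(name.size()) + 1;  // name, NUL-terminated
    put32(size);
    box.insert(box.end(), {'h', 'd', 'l', 'r'});
    put32(0);                                       // version = 0, flags = 0
    put32(0);                                       // pre_defined
    box.insert(box.end(), handlerType.begin(), handlerType.end());
    for (int i = 0; i < 12; ++i) box.push_back(0);  // reserved
    box.insert(box.end(), name.begin(), name.end());
    box.push_back(0);                               // NUL terminator
    return box;
}
```

A quick sanity check with a box inspector (e.g. MP4Box or mp4dump) on a file that *does* play will show one hdlr with 'vide' and one with 'soun'; comparing your writer's output against that is usually the fastest way to find the missing field.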

Related

Getting frame timestamp on RTSP playback

I'm making a simple camera-playback video player in Qt with GStreamer. Is it possible to get the frame date/time (or only the time) from the OSD? (See the example of the timestamp on the picture.) I currently work with Hikvision cameras. I tried to find the timestamp in RTP packets dumped with Wireshark, but there are only timestamps relative to the first frame.

GStreamer filesink black screen first few seconds

I decode an RTP H.264 stream and display it on the screen. In a parallel thread, recording to an MP4 file is sometimes performed. During recording, I also mux the sound into the file through mp4mux. Separately, sound and video are written perfectly, but as soon as I combine them, a problem appears: the first few seconds of the video are a black screen, although there is sound. At the same time, sound and video are synchronous. How can I solve this problem? Thank you in advance.
Video has a higher latency than audio; that's why you get audio sooner. So you would need to trim the file afterwards if you don't want that, or add some logic in your code that drops all audio until the first video frame is decoded.
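The "drop audio until the first video" logic from the answer boils down to a tiny gate shared between the two branches. In a GStreamer app this decision would typically run inside a buffer pad probe on the audio branch before mp4mux; the class below is an illustrative sketch of just the logic, not a GStreamer API:

```cpp
#include <atomic>

// Minimal sketch of "drop all audio until the first video buffer is seen".
// The flag is atomic because GStreamer streaming threads for the audio and
// video branches generally run concurrently.
class AudioGate {
public:
    // Call from the video branch when a decoded video buffer arrives.
    void onVideoBuffer() { videoSeen_.store(true, std::memory_order_release); }

    // Call from the audio branch's pad probe; returns true if the audio
    // buffer should be forwarded, false if it should be dropped
    // (i.e. return GST_PAD_PROBE_DROP from the probe).
    bool shouldPassAudio() const {
        return videoSeen_.load(std::memory_order_acquire);
    }

private:
    std::atomic<bool> videoSeen_{false};
};
```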

How to add additional metadata to individual frames, DDBs, when creating an AVI file with ffmpeg

I'm creating AVI videos from device-dependent bitmaps (DDBs).
The pipeline is quite simple: a GigE camera delivers frames one by one, and each frame, a DDB, is piped to an ffmpeg process that creates the final AVI file using H.264 compression.
These videos are scientific in nature, and we would like to store/embed experimental hardware information, such as the states of a few digital lines, with each frame.
This information needs to be available in the final AVI video.
Question is, is this possible?
Looking at this: https://learn.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmap it does not seem that adding additional data to the DDB themselves is possible, but I'm not sure.
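You're right that the BITMAP structure has no room for extra payload, and ffmpeg's CLI does not offer a way to attach arbitrary per-frame metadata to an AVI stream. A pragmatic workaround (an assumption on my part, not something ffmpeg defines) is a sidecar file written in lockstep with the frames you pipe to ffmpeg: one record per frame index, carrying the digital-line states. A sketch, with the CSV format and class name being our own choices:

```cpp
#include <cstdint>
#include <fstream>
#include <string>

// Sketch of a sidecar log: one line per frame, pairing the frame index
// with the sampled digital-line states packed into a bitmask. The log is
// written at the same moment each DDB is piped to ffmpeg, so row N of the
// CSV describes frame N of the AVI. The format is our own convention.
class FrameMetadataLog {
public:
    explicit FrameMetadataLog(const std::string& path) : out_(path) {
        out_ << "frame,digital_lines\n";   // CSV header
    }
    void record(uint64_t frameIndex, uint32_t digitalLines) {
        out_ << frameIndex << ',' << digitalLines << '\n';
    }
private:
    std::ofstream out_;
};
```

If the metadata truly must live inside the container, a format with native per-frame data support (e.g. MKV attachments or a data stream in MP4) may be a better target than AVI, though that changes your delivery format.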

FFmpeg resample audio while decoding

My task is to build a decoder that generates exactly one raw audio frame for each raw video frame from an encoded MPEG-TS network stream, so that users of the API can call getFrames() and receive exactly these two frames.
Currently I am reading with av_read_frame in a thread and decoding packets, audio or video, as they arrive, collecting them until a video packet is hit. The problem is that multiple audio packets generally arrive before any video is seen.
av_read_frame is blocking and returns once a certain amount of audio data has been collected (1152 samples for MP2); decoding that packet gives a raw AVFrame with a duration T (which depends on the sample rate), whereas the video frame generally has a duration larger than T (which depends on the fps), so multiple audio frames arrive before it.
My guess is that I have to merge the collected audio frames into one single frame at the moment video is hit. Resampling and setting the timestamp to align with the video are probably also needed. I don't know if this approach is even valid, though.
What is the smoothest way to sync video and audio in this manner?
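The approach guessed at above is valid and is what FFmpeg's AVAudioFifo exists for: decoded audio samples accumulate in the FIFO (av_audio_fifo_write), and when a video frame arrives you pop exactly the matching span (av_audio_fifo_read), optionally after libswresample has converted the rate. Here is an FFmpeg-free sketch of just the sample accounting, so it stays self-contained; the class and method names are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

// Sketch of the sample accounting for "one audio frame per video frame":
// decoded audio accumulates in a FIFO; on each video frame, exactly
// round(videoFrameDuration * sampleRate) samples are popped as one merged
// audio frame. In an FFmpeg-based decoder the FIFO role would be played
// by AVAudioFifo (av_audio_fifo_write / av_audio_fifo_read).
class AudioAccumulator {
public:
    explicit AudioAccumulator(int sampleRate) : sampleRate_(sampleRate) {}

    // Append samples from one decoded audio packet (mono for brevity;
    // interleaved multichannel works the same way with samples * channels).
    void push(const std::vector<float>& samples) {
        fifo_.insert(fifo_.end(), samples.begin(), samples.end());
    }

    // On each video frame, pop the matching span of audio. Returns fewer
    // samples only if audio has not caught up with video yet.
    std::vector<float> popForVideoFrame(double videoFrameDuration) {
        size_t want = size_t(videoFrameDuration * sampleRate_ + 0.5);
        size_t take = std::min(want, fifo_.size());
        std::vector<float> out(fifo_.begin(), fifo_.begin() + take);
        fifo_.erase(fifo_.begin(), fifo_.begin() + take);
        return out;
    }

private:
    int sampleRate_;
    std::deque<float> fifo_;
};
```

The merged frame's timestamp would then simply be the video frame's pts, which sidesteps per-packet audio timestamps entirely.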

Recording with uncompressed audio speeds up video

I have been using a recorder (based on the muxer example) satisfactorily for quite some time with various formats. Now I need to use uncompressed audio together with MJPEG video, and I notice that the video speeds up considerably (about 10 times as fast) in the recorded file. The audio is OK, and if I use a compressed audio format (like MP3) the video is fine as always. Does anyone have an idea why the video speeds up the moment I use uncompressed audio (CODEC_ID_PCM_S16LE)?
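A hedged guess at the cause: compressed encoders dictate samples-per-packet through the codec context's frame_size, but PCM codecs such as CODEC_ID_PCM_S16LE report 0 (or 1) there, and feeding that value into the muxer example's "samples per audio frame" and the pts math derived from it is a classic way to get wildly wrong A/V rates. The sketch below shows the shape of the workaround; the fallback of 1024 samples is an arbitrary illustrative choice, not an FFmpeg constant:

```cpp
// For compressed codecs the encoder dictates samples-per-packet via
// frame_size; PCM codecs report 0 (or 1). Pick an explicit chunk size
// when the codec does not impose one, so the audio pts can advance by a
// sane, fixed number of samples per packet and stay consistent with the
// video timeline. 1024 is an arbitrary illustrative fallback.
int samplesPerAudioPacket(int codecFrameSize) {
    return (codecFrameSize > 1) ? codecFrameSize : 1024;
}
```

Worth checking in your recorder: dump the per-stream time_base and the pts you hand to each audio and video packet in both the MP3 and the PCM case; if the video pts values are identical but playback speed differs, the discrepancy is almost certainly coming from the audio packet sizing above.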