I receive audio packets from the network (4 packets per second, 250 ms each) and video at 15 fps. Everything carries my own timestamps. How should I sync them? I've seen the source code of one of our developers: he synced the VIDEO to the audio, i.e. audio is always played immediately and video can be dropped or buffered. I don't think that is correct, because audio can run ahead of video by a second or two, and in that case we won't have up-to-date video frames at all.
I'd like to know some basics of sync. What should be buffered? Should audio and video be played in separate thread(s) when synced? Any clues would be appreciated!
Thanks a lot!
I needed something like this: http://www.freepatentsonline.com/7680153.html
It's pretty difficult to understand, but I think this patent explains the basics of syncing.
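For what it's worth, here is a rough sketch of the audio-as-master-clock scheme you describe (video frames are delayed or dropped against the audio clock). All helper names (VideoFrame, next_video_frame, get_audio_clock, render, drop) are hypothetical, not a real API:

```cpp
#include <chrono>
#include <thread>

struct VideoFrame { double pts; /* ... pixel data ... */ };

VideoFrame next_video_frame();   // blocking read from the video jitter buffer
double get_audio_clock();        // seconds of audio actually played so far
void render(const VideoFrame&);
void drop(const VideoFrame&);

void video_loop() {
    const double max_lag = 0.100;                 // drop frames >100 ms late
    for (;;) {
        VideoFrame f = next_video_frame();
        double diff = f.pts - get_audio_clock();  // >0: early, <0: late
        if (diff > 0)                             // early: wait for the clock
            std::this_thread::sleep_for(std::chrono::duration<double>(diff));
        else if (-diff > max_lag) {               // hopelessly late: drop it
            drop(f);
            continue;
        }
        render(f);
    }
}
```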
Related
I couldn't find any information on how av_interleaved_write_frame deals with video and audio packets.
I have multiple audio and video packets coming from 2 threads. Each thread calls write_video_frame or write_audio_frame, locks a mutex, initializes an AVPacket and writes data to an .avi file.
Initialization of AVCodecContext and AVFormatContext is OK.
-- Edit 1 --
Audio and video are coming from an external source (microphone and camera) and are captured as raw data without any compression (even for video).
I use h264 to encode the video and no compression for the audio (PCM).
Audio is captured as 16-bit, 44100 Hz, stereo.
Video is captured at 25 FPS.
Question:
1) Is it a problem if I write multiple video packets at once (say 25 packets/sec) and just one audio packet per second?
Answer: Apparently not; av_interleaved_write_frame should be able to manage that kind of data as long as pts and dts are managed correctly.
This means I call av_interleaved_write_frame 25 times per second for video and just once per second for audio. Could this be a problem? If so, how can I deal with this scenario?
2) How can I manage pts and dts in this case? It seems to be the problem in my application, since I cannot render the .avi file correctly. Can I use real-time stamps for both video and audio?
Answer: The best thing to do here is to use the timestamps taken when capturing the audio/video as pts and dts for this kind of application. So these are not exactly real-time stamps (from a wall clock) but media capture timestamps.
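For illustration, here is a minimal sketch of that answer, assuming FFmpeg and a capture clock counting microseconds since the first frame; stamp_packet is a made-up helper name:

```cpp
extern "C" {
#include <libavformat/avformat.h>
}

// Convert a capture timestamp (microseconds since the first frame)
// into pts/dts in the stream's time_base.
void stamp_packet(AVPacket* pkt, AVStream* st, int64_t capture_us) {
    AVRational us = {1, 1000000};        // capture clock ticks: microseconds
    pkt->pts = av_rescale_q(capture_us, us, st->time_base);
    pkt->dts = pkt->pts;                 // fine for PCM and B-frame-free video
    pkt->stream_index = st->index;
}
```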
Thank you for your valuable advice.
av_interleaved_write_frame writes output packets in such a way that they are properly interleaved (possibly queueing them internally). What "properly interleaved" means depends on the container format, but usually it means that the DTS stamps of the packets in the output file are monotonically increasing.
av_interleaved_write_frame, like most FFmpeg APIs, should never be called simultaneously by two threads with the same AVFormatContext. I assume you make sure of that with a mutex. If you do, then it doesn't matter whether the application is multithreaded or not.
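A minimal sketch of what that serialization looks like in practice (mux_lock and write_packet are illustrative names, not FFmpeg API):

```cpp
#include <mutex>
extern "C" {
#include <libavformat/avformat.h>
}

std::mutex mux_lock;  // guards the shared AVFormatContext

// Both the audio and the video capture threads call this.
int write_packet(AVFormatContext* fmt_ctx, AVPacket* pkt) {
    std::lock_guard<std::mutex> guard(mux_lock);  // one writer at a time
    return av_interleaved_write_frame(fmt_ctx, pkt);
}
```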
Is it a problem if I write multiple video packets at once (let's say 25 packets/sec) and just one audio packet/sec?
It is not a problem in general, but most audio codecs can't output packets that are 1 second long. Which codec do you use?
How can I manage pts and dts in this case? Can I use real time stamps for both video and audio?
The same way you would in a single-threaded application. DTS are usually generated by codecs from PTS. PTS usually come from a capture device/decoder together with the corresponding audio/video data.
Real-time stamps might be OK to use, but it really depends on how and when you acquire them. Please elaborate on what exactly you are trying to do. Where is the audio/video data coming from?
I have written an on-demand RTSP server in C++ using live555, and I am able to host an RTSP stream. I then used VLC to connect to the server over the WAN, and the image streams and looks great. But when I connected to the RTSP stream from a second computer, I saw that both videos became choppy.
The data is h264-compressed and the image resolution is 800x600. The symptoms look like there isn't enough bandwidth.
Basically my question is: how many concurrent RTSP connections can be served over the WAN with live555? Has anyone else been able to stream reliably over the WAN using live555?
Thanks in advance.
This is mostly dependent on your WAN up-link bandwidth and your video bit-rate.
Let's try to estimate the bit-rate of your video. A very good explanation can be found here. Assuming a moderate level of motion and 30 fps video, this comes out to about 3 Mbps (800 x 600 x 30 x 3 x 0.07) in your case. So if your up-link bandwidth is less than 6 Mbps, you cannot stream 2 such videos simultaneously.
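If you want to play with the numbers, the rule-of-thumb formula above is trivial to code up (the 0.07 constant and the motion factor come from the linked explanation):

```cpp
#include <cstdio>

int main() {
    const double width = 800, height = 600, fps = 30;
    const double motion = 3;     // motion rank: 1 = low, 3 = high
    const double k = 0.07;       // constant from the linked rule of thumb
    double bps = width * height * fps * motion * k;
    std::printf("~%.1f Mbit/s per stream\n", bps / 1e6);  // ~3.0 Mbit/s
    return 0;
}
```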
Other than that, live555 doesn't have any hard-coded limitations in this regard.
I am currently using the libraries from FFmpeg to stream some MPEG-2 TS (h264-encoded) video. The streaming is done via UDP multicast.
I am currently having two main issues. There is a long initial connection time before the video shows up (the stream also contains metadata, and that stream is detected by my media tool immediately).
Once the video gets going, things are fine, but it is always delayed by that initial connection time.
I am trying to get as near to LIVE streaming as possible.
I am currently using av_dict_set(&dict, "tune", "zerolatency", 0) and the "profile" -> "baseline" option.
GOP size = 12;
At first I thought the issue was an I-frame issue, but the initial delay is there whether the GOP size is 12 or the default 250. Sometimes the video connects quickly, but it is immediately dropped, the delay occurs, then it starts back up and is good from that point on.
According to the documentation, the zerolatency option should send many I-frames, to limit initial syncing delays.
I am starting to think it's a buffering issue: when I close the application and leave the media player up, it fast-forwards through the delay until it hits the point where the file stops streaming.
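For reference, here is roughly how the options mentioned above (tune=zerolatency, profile=baseline, GOP size 12) would typically be applied when opening a libx264 encoder; open_low_latency_encoder is just an illustrative wrapper and error handling is omitted:

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>
}

bool open_low_latency_encoder(AVCodecContext* enc_ctx, const AVCodec* codec) {
    enc_ctx->gop_size = 12;              // small GOP: frequent I-frames
    enc_ctx->max_b_frames = 0;           // baseline profile has no B-frames
    AVDictionary* opts = nullptr;
    av_dict_set(&opts, "tune", "zerolatency", 0);
    av_dict_set(&opts, "profile", "baseline", 0);
    int ret = avcodec_open2(enc_ctx, codec, &opts);
    av_dict_free(&opts);
    return ret >= 0;
}
```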
So while I don't completely understand what was wrong, I at least fixed the problem I was having.
The issue came from using av_interleaved_write_frame() instead of the regular av_write_frame() (the latter works for live streaming) when writing out the video frames. I'll have to dig into the differences a bit more to fully understand it, but it's funny how sometimes you figure out the problem you are having on a total whim after bashing your face for a few days.
I can get pretty good live-ish video streaming with the tune "zerolatency" option set.
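For anyone hitting the same thing, the change amounted to this (illustrative wrapper; error handling omitted):

```cpp
extern "C" {
#include <libavformat/avformat.h>
}

// Write packets straight to the output instead of letting the muxer
// queue them for interleaving; that internal queueing is what delayed
// the start of the stream in my case.
int write_live_packet(AVFormatContext* fmt_ctx, AVPacket* pkt) {
    return av_write_frame(fmt_ctx, pkt);
    // was: return av_interleaved_write_frame(fmt_ctx, pkt);
}
```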
I am developing a player which opens an RTSP stream using Live555 and uses FFmpeg to decode the video stream. I am stuck at a point where an IDR frame gets lost over the network, so that after decoding its successor B/P frames, the video shows a jittering effect. It gives very bad video performance.
So my question is: how can I handle I-frame packet loss? I would like to know if there is any strategy/algorithm for handling packet loss so that the video stays smooth and clear.
Any help will be appreciated.
Thank You.
If it's a first approach, I guess you decode the frames synchronously, i.e. Live555's afterGetting callback directly calls FFmpeg's avcodec_decode_video2.
In that case the receiving socket is not read during decoding, so packets are buffered until the buffer overflows.
You can try different workarounds, like increasing the socket buffer or using RTP over TCP, but a real solution needs to be more asynchronous: for instance, afterGetting can push data to a FIFO and the decoding thread can read from it.
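A bare-bones sketch of that FIFO idea (Packet and decode() are illustrative; the point is that afterGetting only enqueues, so the socket keeps being drained while a separate thread decodes):

```cpp
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <queue>
#include <vector>

struct Packet { std::vector<uint8_t> data; };

void decode(const Packet& p);  // hypothetical wrapper around avcodec_decode_video2

std::queue<Packet> fifo;
std::mutex fifo_lock;
std::condition_variable fifo_cv;

// Called from Live555's afterGetting: only enqueue, never decode here,
// so the receiving socket is read as fast as packets arrive.
void after_getting(const uint8_t* buf, std::size_t len) {
    {
        std::lock_guard<std::mutex> g(fifo_lock);
        fifo.push(Packet{std::vector<uint8_t>(buf, buf + len)});
    }
    fifo_cv.notify_one();
}

// Separate decoding thread: blocks on the FIFO and decodes at its own pace.
void decode_thread() {
    for (;;) {
        std::unique_lock<std::mutex> g(fifo_lock);
        fifo_cv.wait(g, [] { return !fifo.empty(); });
        Packet p = std::move(fifo.front());
        fifo.pop();
        g.unlock();
        decode(p);
    }
}
```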
Well, once an I-frame is lost, it's lost. You can't really do anything on the client side. The only way we could attack this problem was to configure the server (i.e. the streamer) to send I-frames either more frequently (MORE I-frames in the stream) or less frequently (FEWER I-frames in the stream). If you use ffmpeg/libx264, it can be fine-tuned with an incredible level of precision as to when to send I-frames.
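If you control the server and use FFmpeg/libx264, a hedged sketch of forcing a strict, frequent IDR interval might look like this (the x264-params keys are real libx264 options; the wrapper name is made up):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>
}

void force_frequent_idr(AVCodecContext* enc_ctx, AVDictionary** opts) {
    enc_ctx->gop_size = 25;  // one I-frame per second at 25 fps
    // Keep the keyframe interval strict and independent of scene changes:
    av_dict_set(opts, "x264-params",
                "keyint=25:min-keyint=25:scenecut=0", 0);
}
```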
I am writing an application which is kind of a video streamer. The client receives a video stream over a UDP socket, and I want to play the stream as I receive it. This is different from playing a local video file on your hard disk, which can be as simple as running the file with system("vlc filename"). Here, many issues are involved: there can be delays in receiving, and the player will have to wait for the incoming data. I have learned that VLC can be used to play a video stream. Can you please elaborate the steps for playing the stream using VLC? I am implementing my application in C++.
EDIT: Can somebody give me some idea of a VLC API which can be used to stream a given video to a particular destination, receive that stream at the other end, and play it?
with regards,
Mawia
Well, you can always take a look at VideoLAN's own homepage.
Other than that, streaming is quite straightforward:
1. Decide on a video codec that supports streaming. (OK, obvious and probably already done.)
2. Choose an appropriate packet size.
3. Choose an appropriate video quality.
4. At the client side: pre-buffer at least 2 seconds of video and audio.
Numbers 2 and 3 sound strange, but they are worth thinking about:
If you have a broadband connection, you can afford to pump big packets over to the client. Note: "packets" here means consistent units of data that the client needs to receive completely before it can decode the next bit of video. If you send big packets, say 4 seconds of video, you risk lag while waiting for the complete data unit of, well, a full 4 seconds, whereas small 0.5-second packets would get you laggy but still recognizable and relatively fluent video on a bad connection.
The same goes for quality. Pixelated, artifact-ridden video is bad; stuttering video and desynced sound are worse. Rather switch down to a lower quality/higher compression setting.
If your question is purely about getting it done, points 1 and 4 should do for you.
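A toy illustration of point 4, the client-side pre-buffer (the types and the 2-second threshold are made up for the example):

```cpp
#include <deque>

struct Chunk { double duration_s; /* ... media payload ... */ };

// Hold back playback until at least two seconds of media are queued.
class PreBuffer {
    std::deque<Chunk> q_;
    double buffered_s_ = 0;
    bool playing_ = false;
public:
    void push(const Chunk& c) {
        q_.push_back(c);
        buffered_s_ += c.duration_s;
        if (!playing_ && buffered_s_ >= 2.0)  // 2 s pre-buffer threshold
            playing_ = true;
    }
    bool ready() const { return playing_; }
};
```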
You might ask: "What if I want to do real-time live video?"
All of the advice above still applies, but it all has to be done smarter. First things first: you cannot do real time over bad connections. That's just reality. If your connection is fat enough, you can get close to real time: just pump each image and a small sound sample out without much processing or any buffering at all. It is possible to get a good client experience from that, but such connections are highly unlikely. The usual trick is to transmit a video quality slightly lower than the connection would allow in theory, and to still wiggle caching and packet reordering in there... have fun. It is hard.
Unfortunately, the only API VLC really has is the command line or its equivalent (you can start player instances, passing them essentially what you would put on the command line). You can use libvlc if you need multiple instances or callbacks, but it's still pretty opaque...
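That said, a minimal libvlc player for a network stream is not much code. A hedged sketch against the classic libvlc 2.x/3.x API (the MRL is a placeholder):

```cpp
#include <vlc/vlc.h>
#include <chrono>
#include <thread>

int main() {
    libvlc_instance_t* vlc = libvlc_new(0, nullptr);
    libvlc_media_t* media =
        libvlc_media_new_location(vlc, "udp://@:1234");     // placeholder MRL
    libvlc_media_player_t* player =
        libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);    // the player keeps its own reference

    libvlc_media_player_play(player);
    std::this_thread::sleep_for(std::chrono::seconds(30));  // play for a while

    libvlc_media_player_stop(player);
    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}
```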