I am developing a player that opens an RTSP stream using Live555 and decodes the video with FFmpeg. I am stuck at a point where an IDR frame gets lost over the network, so that decoding the B/P frames that follow it produces a jittering effect and very poor video quality.
So my question is: how can I handle I-frame packet loss? Is there any strategy/algorithm for handling packet loss so that the video stays smooth and clear?
Any help will be appreciated.
Thank You.
If this is a first approach, I guess you decode frames synchronously, i.e. the Live555 afterGetting callback directly calls FFmpeg's avcodec_decode_video2.
In that case the receiving socket is not read during decoding, so packets are buffered until the buffer overflows.
You can try workarounds like increasing the socket buffer or using RTP over TCP, but a real solution needs to be more asynchronous: for instance, afterGetting can push data into a FIFO and a decoding thread can pull from it, as in the sketch below.
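A minimal sketch of that producer/consumer split, assuming a hand-rolled FrameQueue type (not part of Live555 or FFmpeg; the names here are illustrative):

#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// Thread-safe FIFO between the network thread and the decoder thread.
struct FrameQueue {
    std::deque<std::vector<uint8_t>> frames;
    std::mutex mtx;
    std::condition_variable cv;

    void push(const uint8_t* data, size_t size) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            frames.emplace_back(data, data + size);
        }
        cv.notify_one();
    }

    std::vector<uint8_t> pop() {  // blocks until a frame is available
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [this] { return !frames.empty(); });
        std::vector<uint8_t> f = std::move(frames.front());
        frames.pop_front();
        return f;
    }
};

// In the Live555 afterGetting callback: enqueue and return immediately,
// so the event loop keeps draining the socket:
//     queue.push(fReceiveBuffer, frameSize);
// In the decoder thread: block on the queue and feed FFmpeg:
//     for (;;) { auto frame = queue.pop(); /* avcodec_decode_video2(...) */ }

The point of the design is that the Live555 event loop never blocks on the decoder, so the socket keeps being drained even while a heavy frame is being decoded.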
Well, once an I-frame is lost, it's lost. You can't really do anything on the client side. The only way we could attack this problem was to configure the server (i.e. the streamer) to send I-frames either more frequently (MORE I-frames in the stream) or less frequently (FEWER I-frames in the stream). If you use ffmpeg/libx264, when to send I-frames can be fine-tuned to an incredible level of precision.
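For illustration, a hedged sketch of what that server-side tuning might look like with FFmpeg's encoding API; the field and option names are from the FFmpeg/libx264 API, but the specific values are just examples:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}

void configure_keyframes(AVCodecContext* ctx) {
    // One I-frame every 30 frames instead of, say, every 250.
    ctx->gop_size = 30;
    // libx264-specific fine tuning via its private options: pins the
    // keyframe cadence so rate control and scene-cut detection can't
    // stretch it.
    av_opt_set(ctx->priv_data, "x264-params",
               "keyint=30:min-keyint=30:scenecut=0", 0);
}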
I write the received packets into binary files. When the recording of the first file is complete, I flush the encoder:
avcodec_send_frame(context, NULL);
This is the signal to end the stream. But when I then send a new frame to the encoder, the function returns AVERROR_EOF (per the documentation: the encoder has been flushed, and no new frames can be sent to it). What can I do to make the encoder accept frames after flushing?
Example: when decoding, you can call:
avcodec_flush_buffers(context);
This function resets the decoder state, but it exists only for decoding.
Is there an analogous function for encoding?
Ideas:
1) Do not call flush. But the encoder buffers frames internally and emits some packets only after flushing (I use H.264 with B-frames), so some packets would end up in the next file.
2) Recreate codec context?
Details: using Windows 7, Qt 5.10, FFmpeg 4.0.2.
The correct answer is that you should create a new codec context for each file, or headaches will follow. The small expense of additional headers and keyframes should be negligible unless you are doing something very exotic.
B-frames can refer to both previous and future frames; how would you even split such a beast across two files?
In theory you could probably force a keyframe at the cut and hope for the best, but then there is really no point in not starting a new context, unless the few hundred bytes of H.264 init data are a problem.
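A minimal sketch of that per-file lifecycle using the FFmpeg 4.x API from the question; error handling is omitted and the copied parameters are only examples:

extern "C" {
#include <libavcodec/avcodec.h>
}

// Flush the old context, free it, and open a fresh one for the next file.
AVCodecContext* reopen_encoder(AVCodecContext* old_ctx, AVCodec* codec) {
    // 1. Drain: send NULL, then collect the remaining buffered packets.
    avcodec_send_frame(old_ctx, nullptr);
    AVPacket* pkt = av_packet_alloc();
    while (avcodec_receive_packet(old_ctx, pkt) == 0) {
        // ... write pkt into the *current* file ...
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);

    // 2. A flushed context cannot accept frames again, so discard it.
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width     = old_ctx->width;
    ctx->height    = old_ctx->height;
    ctx->pix_fmt   = old_ctx->pix_fmt;
    ctx->time_base = old_ctx->time_base;
    ctx->gop_size  = old_ctx->gop_size;
    avcodec_free_context(&old_ctx);

    // 3. The fresh context starts the new file with its own headers
    //    and a keyframe.
    avcodec_open2(ctx, codec, nullptr);
    return ctx;
}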
I have written an application which triggers an IP camera to stream its data (MPEG-4) over RTP. This works fine so far: I set up and start the stream with the corresponding RTSP commands (DESCRIBE, SETUP and PLAY).
While streaming I receive the usual Sender Reports and send my own Receiver Reports; everything is working fine here.
Now, with the application mentioned above, I do NOT read the stream. I have separate hardware which just logs all the traffic going over the Ethernet (a little like Wireshark). When the whole streaming session is finished, I can download those logs from my hardware and extract data from them.
So what I have then is a logfile with all the data from the RTP stream as raw data.
My question now is: how do I write this appropriately into an MPEG-4 file? I know this is a very broad question and I don't expect a step-by-step tutorial, but I am a bit overwhelmed and don't know where to start. If I just memcpy all the payload from the RTP messages sequentially into an MPEG-4 file, it doesn't work. I am also a bit confused by SDP and such.
Well, maybe someone has a link or some help for me?
You should first read RFC 3016, which describes the RTP payload format for MPEG-4 streams; then you'll know how to extract MPEG-4 frames from the RTP stream.
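To give an idea of the first step, here is a hedged sketch of stripping the RTP (RFC 3550) header to get at the RFC 3016 payload; it assumes one logged UDP datagram per call and ignores the padding bit for brevity:

#include <cstddef>
#include <cstdint>

struct RtpPayload {
    const uint8_t* data;
    size_t size;
    uint32_t timestamp;  // packets with the same timestamp belong to one frame
    bool marker;         // set on the last packet of a frame
};

// Skips the fixed header, the CSRC list and any header extension.
bool parse_rtp(const uint8_t* pkt, size_t len, RtpPayload* out) {
    if (len < 12 || (pkt[0] >> 6) != 2) return false;  // RTP version must be 2
    size_t cc = pkt[0] & 0x0F;                         // CSRC count
    size_t offset = 12 + 4 * cc;
    if (pkt[0] & 0x10) {                               // extension bit
        if (len < offset + 4) return false;
        size_t extWords = ((size_t)pkt[offset + 2] << 8) | pkt[offset + 3];
        offset += 4 + 4 * extWords;
    }
    if (offset > len) return false;
    out->marker    = (pkt[1] & 0x80) != 0;
    out->timestamp = ((uint32_t)pkt[4] << 24) | ((uint32_t)pkt[5] << 16) |
                     ((uint32_t)pkt[6] << 8)  |  (uint32_t)pkt[7];
    out->data = pkt + offset;
    out->size = len - offset;
    return true;
}

Per RFC 3016, concatenating the payloads of packets that share a timestamp (in sequence-number order) yields one MPEG-4 VOP; those elementary-stream frames then still need a container muxer (FFmpeg can do this) before the result is a playable file.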
I actually changed from MPEG-4 to H.264; it turned out to be a little easier to write a video file that way. For H.264, this answer covers it pretty well:
How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter
Totally new to this! As the title says, I'm trying to serve a webcam capture from OpenCV through Live555 as an H.264 stream.
I've tried something like:
#define LOCALADDRESS "rtsp://localhost:8081" // Address media is served
#define FOURCCCODEC CV_FOURCC('H','2','6','4') // H.264 codec
#define FPS 25 // Frame rate things run at
m_writer = cvCreateVideoWriter(LOCALADDRESS, FOURCCCODEC, FPS, cvSize(VIDEOWIDTH, VIDEOHEIGHT));
since reading an RTSP stream is done similarly:
CvCapture *capture = cvCreateFileCapture(LOCALADDRESS);
which doesn't work, so I'm turning to Live555. How do I feed a CvCapture, encoded in H.264, to be served by Live555? There doesn't seem to be a straightforward way to pipe a bytestream from one to the other, or perhaps I'm missing something.
There really isn't a straight-forward way I know of; certainly nothing that will happen in anything less than a few hundred lines of code.
I'm assuming you want to use an on-demand RTSP server (this is where the server's just sitting there, waiting for a client to connect, and then it starts streaming when the client establishes a connection and makes a request)? If so, this item in the Live555 FAQ applies.
However, Live555 is a weird (possibly misguided?) library, so it's unfortunately a bit more complicated than that. Live555 uses a single thread of operation with an event loop, so what you'll have to do is push your raw bytestream into a buffer or queue, and then, in your subsession class for streaming H.264, check whether there's data available in the queue: if so, pass it along; if not, schedule another check a few milliseconds later. You'll also need to strip off any NALU start codes before you pass the units to Live555.
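A compressed sketch of that pattern, modeled on Live555's DeviceSource.cpp template; the NalQueue type is hypothetical, while the Live555 names (FramedSource, fTo, fMaxSize, afterGetting, scheduleDelayedTask) are real library API:

#include "FramedSource.hh"
#include <cstring>
#include <deque>
#include <mutex>
#include <vector>
#include <sys/time.h>

// Minimal thread-safe queue fed by the capture/encode thread (illustrative).
struct NalQueue {
    std::deque<std::vector<uint8_t>> q;
    std::mutex m;
    void push(std::vector<uint8_t> nal) {
        std::lock_guard<std::mutex> lk(m);
        q.push_back(std::move(nal));
    }
    bool tryPop(std::vector<uint8_t>& out) {
        std::lock_guard<std::mutex> lk(m);
        if (q.empty()) return false;
        out = std::move(q.front());
        q.pop_front();
        return true;
    }
};

class QueueSource : public FramedSource {
public:
    QueueSource(UsageEnvironment& env, NalQueue& q)
        : FramedSource(env), fQueue(q) {}

private:
    void doGetNextFrame() override { deliver(); }

    void deliver() {
        std::vector<uint8_t> nal;
        if (!fQueue.tryPop(nal)) {
            // Nothing buffered yet: yield back to Live555's event loop
            // and poll again in a few milliseconds.
            envir().taskScheduler().scheduleDelayedTask(
                5000 /*microseconds*/, retry, this);
            return;
        }
        // Strip the Annex-B start code; Live555's discrete framer expects
        // bare NAL units.
        size_t skip = 0;
        if (nal.size() >= 4 && nal[0] == 0 && nal[1] == 0 &&
            nal[2] == 0 && nal[3] == 1) skip = 4;
        else if (nal.size() >= 3 && nal[0] == 0 && nal[1] == 0 &&
                 nal[2] == 1) skip = 3;

        fFrameSize = nal.size() - skip;
        if (fFrameSize > fMaxSize) {           // truncate if it won't fit
            fNumTruncatedBytes = fFrameSize - fMaxSize;
            fFrameSize = fMaxSize;
        }
        memcpy(fTo, nal.data() + skip, fFrameSize);
        gettimeofday(&fPresentationTime, nullptr);
        FramedSource::afterGetting(this);      // hand the NAL to the sink
    }

    static void retry(void* self) {
        static_cast<QueueSource*>(self)->deliver();
    }

    NalQueue& fQueue;
};

In your subsession's createNewStreamSource() you would then wrap this source in an H264VideoStreamDiscreteFramer, which is why the start codes are stripped first.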
I am implementing RTSP in C# using an Axis IP camera. Everything is working fine, but when I try to display the video, the first few frames contain lots of green patches. I suspect the issue is that I am not sending an I-frame first to the client.
Hence, I want to know the algorithm required to detect an I-frame in an RTP packet.
When initiating an RTSP session, the server normally starts the RTP stream with config data followed by the first I-frame.
It is conceivable that your Axis camera is set to "always multicast"; in this case the RTSP communication leads to an SDP description which tells the client all the network and streaming details necessary for receiving the multicast stream.
Since the multicast stream is always present, you most probably receive some P- or B-frames first (depending on the GOP size).
You can detect these P/B-frames in your RTP client the same way you detect I-frames, as Ralf suggested: by identifying them via the NAL unit type. Simply skip all frames in the RTP client until you receive the first I-frame.
Now you can forward all following frames to the decoder.
Or you may have to change your camera settings!
jens.
PS: don't forget that you have fragmentation in your RTP stream; that means that besides the RTP header there is some fragmentation information. Before identifying a frame, you have to reassemble it.
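To illustrate the reassembly step, a hedged sketch for the common H.264 FU-A case (RFC 6184), in C++ for consistency with the rest of this page; other payload formats have different fragmentation rules, and buffer handling here is illustrative only:

#include <cstddef>
#include <cstdint>
#include <vector>

// Feed each RTP payload in sequence-number order; returns true when a
// complete NAL unit has been assembled into `nal`.
bool reassembleFuA(const uint8_t* payload, size_t len,
                   std::vector<uint8_t>& nal) {
    if (len < 2) return false;
    uint8_t fuIndicator = payload[0];
    uint8_t fuHeader    = payload[1];
    if ((fuIndicator & 0x1F) != 28) {           // not FU-A: already whole
        nal.assign(payload, payload + len);
        return true;
    }
    if (fuHeader & 0x80) {                      // Start bit: rebuild the
        nal.clear();                            // original NAL header from
        nal.push_back((fuIndicator & 0xE0) |    // F/NRI of the indicator and
                      (fuHeader & 0x1F));       // the type of the FU header
    }
    nal.insert(nal.end(), payload + 2, payload + len);
    return (fuHeader & 0x40) != 0;              // End bit: NAL complete
}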
It depends on the video media type. If you take H.264, for instance, you would look at the NAL unit header to check the NAL unit type.
The green patches can indeed be caused by not having received an I-frame first.
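Concretely, a hedged sketch of that NAL-unit-type check for H.264 over RTP (RFC 6184), in C++ though the same bit tests translate directly to C#:

#include <cstddef>
#include <cstdint>

// Type 5 is an IDR (I-frame) slice; type 28 is an FU-A fragment that wraps
// the real type. STAP-A aggregation (type 24) is not handled here.
bool isIdrFrame(const uint8_t* payload, size_t len) {
    if (len < 1) return false;
    uint8_t nalType = payload[0] & 0x1F;
    if (nalType == 5) return true;          // single NAL unit: IDR slice
    if (nalType == 28 && len >= 2) {        // FU-A fragmentation unit
        bool startBit = payload[1] & 0x80;  // only the first fragment counts
        return startBit && (payload[1] & 0x1F) == 5;
    }
    return false;  // P/B slices (type 1), SPS (7), PPS (8), etc.
}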
I am writing an application which is a kind of video streamer. The client receives the video stream on a UDP socket. As I receive the stream, I want to play it simultaneously. This is different from playing a local video file on your hard disk, where it can be as simple as launching the file with system("vlc filename"). Here, other issues are involved: there can be delays in receiving, and the player will have to wait for the incoming data. I have come to know that VLC can be used to play a video stream. Can you please elaborate the steps for playing the stream using VLC? I am implementing my application in C++.
EDIT: Can somebody give me some idea of a VLC API which can be used to stream a given video to a particular destination, receive that stream at the other end, and play it?
with regards,
Mawia
Well, you can always take a look at VideoLAN's own homepage.
Other than that, streaming is quite straightforward:
Decide on a video codec that supports streaming. (ok obvious and probably already done)
Choose appropriate packet size.
Choose appropriate video quality.
At the client side: pre-buffer at least 2 secs of video and audio.
Numbers 2 and 3 sound strange, but they are worth thinking about:
If you have a broadband connection, you can afford to pump big packets over to the client. Note: "packets" here means consistent units of data that the client needs to have in full before it can decode the next bit of video. If you send big packets, say 4 seconds of video, you risk lag while waiting for the complete data unit of, well, a full 4 seconds, whereas small 0.5-second packets would get you laggy but still recognizable and relatively fluent video on a bad connection.
The same goes for quality. Pixelated, artifact-ridden video is bad; stuttering video and desynced audio are worse. Rather, switch down to a lower-quality/higher-compression setting.
If your question is purely about the getting it done part, well, points 1 and 4 should do for you.
You might ask:
"If I want to do real time live video?"
All of the advice above still applies, but all of it has to be done smarter. First things first: you cannot do realtime over bad connections; that's just reality. If your connection is fat enough, you can get close to real time: just pump each image and a small sound sample out without much processing or any buffering at all. It is possible to get a good client experience from that, but such connections are highly unlikely. The trick here usually is to transmit a video quality slightly lower than the connection would allow in theory, and still wiggle caching and packet reordering in there... have fun. It is hard.
Unfortunately, the only API VLC really has is the command line, or the equivalent of the command line (you can start player instances, passing them essentially what you would put on the command line). You can use libVLC if you need multiple instances or callbacks, but it's still pretty opaque...
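That said, a minimal hedged sketch of the receiving side with libVLC 3.x; the MRL is a placeholder (udp://@:1234 means "listen on UDP port 1234" in VLC's syntax), and these are the 3.x signatures (libVLC 4 changes libvlc_media_new_location):

#include <vlc/vlc.h>
#include <chrono>
#include <thread>

int main() {
    libvlc_instance_t* vlc = libvlc_new(0, nullptr);
    libvlc_media_t* media =
        libvlc_media_new_location(vlc, "udp://@:1234");  // placeholder MRL
    libvlc_media_player_t* player =
        libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);  // the player holds its own reference

    libvlc_media_player_play(player);
    std::this_thread::sleep_for(std::chrono::seconds(60));  // play a while

    libvlc_media_player_stop(player);
    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}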