I'm using Amazon's sample code to upload an RTSP stream from an IP camera to Kinesis Video Streams. Code found here: https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp/blob/master/samples/kvs_gstreamer_sample.cpp
I would like to get the NTP time from the camera for each frame. My understanding is that the first step is reading the RTCP sender report to get the mapping between the camera's RTP and NTP clocks.
To do that, I've added a callback on receiving RTCP packets like so:
g_signal_connect_after(session, "on-receiving-rtcp", G_CALLBACK(on_rtcp_callback), data);
Then, in my callback function after getting the SR packet from the buffer, I try to get the two timestamps:
gst_rtcp_packet_sr_get_sender_info (packet, ssrc, ntptime, rtptime, packet_count, octet_count);
When comparing the 'ntptime' and 'rtptime' variables I get here with what I see in Wireshark, the RTP times match perfectly. However, the NTP time I get in my C++ code is very wrong: it shows a time from about a month ago, while the Wireshark packet shows an NTP time that appears correct.
Is there some setting causing gstreamer to overwrite the NTP time in my sender report packets, and if so, how do I disable that setting?
It turns out the NTP time provided by gst_rtcp_packet_sr_get_sender_info is not in a format I had seen before: it is the raw 64-bit NTP value (seconds since 1900 in the upper 32 bits, fractional seconds in the lower 32 bits). To convert it to a meaningful timestamp you have to use gst_rtcp_ntp_to_unix, which then gives you a Unix time that actually makes some sense.
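For reference, here is a minimal sketch (untested) of what such a handler could look like using the GStreamer RTCP buffer API; the name on_rtcp_callback comes from the code above, everything else is illustrative:

    #include <gst/gst.h>
    #include <gst/rtp/gstrtcpbuffer.h>

    /* Sketch of an "on-receiving-rtcp" handler: walk the compound RTCP buffer,
     * pull the sender info out of any SR packet and convert the raw 64-bit NTP
     * value to Unix time (nanoseconds). */
    static void on_rtcp_callback (GObject *session, GstBuffer *buffer, gpointer user_data)
    {
      GstRTCPBuffer rtcp = GST_RTCP_BUFFER_INIT;
      GstRTCPPacket packet;
      gboolean more;

      if (!gst_rtcp_buffer_map (buffer, GST_MAP_READ, &rtcp))
        return;

      more = gst_rtcp_buffer_get_first_packet (&rtcp, &packet);
      while (more) {
        if (gst_rtcp_packet_get_type (&packet) == GST_RTCP_TYPE_SR) {
          guint32 ssrc, rtptime, packet_count, octet_count;
          guint64 ntptime;

          gst_rtcp_packet_sr_get_sender_info (&packet, &ssrc, &ntptime, &rtptime,
              &packet_count, &octet_count);
          /* ntptime is in raw NTP format; convert to Unix nanoseconds. */
          g_print ("SR: ssrc=%u rtp=%u unix=%" G_GUINT64_FORMAT " ns\n",
              ssrc, rtptime, gst_rtcp_ntp_to_unix (ntptime));
        }
        more = gst_rtcp_packet_move_to_next (&packet);
      }
      gst_rtcp_buffer_unmap (&rtcp);
    }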
I'm trying to understand the difference between the RTP timestamp as it occurs in RTP data packets vs as it is used in RTCP Sender Report (SR) packets.
For the RTP timestamp in data packets I have established that:
They are not based on wall-clock time but represent more of a counter
They typically have a random offset chosen at the beginning of a session
For simple standard audio codecs they typically increment by 160 per packet (20 ms packets at an 8000 Hz sample rate: 1000 ms / 20 ms = 50 packets per second, and 50 × 160 samples = 8000 samples per second), and this increment also covers silent packets that are not actually sent (see the sketch below)
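A tiny illustration of that counter behaviour, assuming 20 ms G.711 packets on an 8000 Hz RTP clock (the initial offset is made up):

    #include <cstdint>
    #include <cstdio>

    int main () {
      uint32_t rtp_ts = 0x9f32a841;             // random offset chosen at session start
      const uint32_t samples_per_packet = 160;  // 20 ms at 8000 Hz
      for (int i = 0; i < 3; ++i) {
        printf ("packet %d: RTP timestamp %u\n", i, rtp_ts);
        rtp_ts += samples_per_packet;           // also advances for unsent silent packets
      }
      return 0;
    }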
For the RTP timestamp in RTCP sender report packets, I originally thought it was just a snapshot of the current RTP timestamp of the data packets, and that in conjunction with the NTP timestamp (which is typically wall-clock time) it could be used to calculate the wall-clock time of further incoming RTP packets, which is also what I understood from this analysis on the subject.
However, a sentence in RFC 3550 Section 6.4.1 makes me doubt that assumption:
Note that in most cases this timestamp will not be equal to the RTP timestamp in any adjacent data packet.
This ruins my assumption, because I assumed that the SR packet would contain an RTP timestamp found in a data packet just sent by the same source. Unfortunately the next sentences are pretty much meaningless to me (maybe this is a language barrier, but to me they read as unhelpful):
Rather, it MUST be calculated from the corresponding NTP timestamp using the relationship between the RTP timestamp counter and real time as maintained by periodically checking the wallclock time at a sampling instant.
Could you clarify for me how the RTP timestamp in an RTCP SR packet can be calculated?
The process of sending RTCP report packets is decoupled from sending the related RTP packet stream(s). By that I mean that they usually won't be sent at the same moment, so a sent RTP packet and an RTCP report packet will typically contain different RTP timestamp values.
As you know, a relationship exists between the NTP (wall-clock) time and the RTP timestamp. Since RTCP report packets contain both an NTP timestamp and an RTP timestamp, these packets can be used to learn how the two values relate on the sender's side. Any RTP packets received from the same sender will contain their own (typically different) RTP timestamps. Using the relationship learned from the received RTCP packets, those RTP timestamps can be used to calculate the wall-clock time at which each RTP packet was sent.
The answer to this stackoverflow question might also help you.
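As a rough sketch of that mapping (illustrative names, an assumed 8000 Hz clock rate, and the SR's NTP time already converted to seconds):

    #include <cstdint>
    #include <cstdio>

    /* Estimate the wall-clock send time of an RTP packet from the last SR:
     * the SR pairs one RTP timestamp with one NTP (wall-clock) time, and the
     * difference in RTP units divided by the clock rate gives the offset. */
    static double rtp_to_wallclock (double sr_ntp_seconds, uint32_t sr_rtp,
                                    uint32_t pkt_rtp, uint32_t clock_rate)
    {
      int32_t delta = (int32_t) (pkt_rtp - sr_rtp);  /* signed diff handles wrap-around */
      return sr_ntp_seconds + (double) delta / (double) clock_rate;
    }

    int main () {
      /* SR says RTP timestamp 16000 corresponds to wall clock 1000.0 s; a data
       * packet stamped 16800 was therefore sent about 0.1 s later at 8 kHz. */
      printf ("%.3f\n", rtp_to_wallclock (1000.0, 16000, 16800, 8000));  /* 1000.100 */
      return 0;
    }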
We are trying to use send scheduling on a ConnectX-6 Lx. If we set no timestamps on the packet buffers and manually send each packet at approximately the right time, everything works. However, if we set timestamps in the buffers, then the first 25 packets are sent and received at the expected times, but all subsequent calls to rte_eth_tx_burst return 0. If it's relevant, we are sending a single packet in each burst, with timestamps 125 µs apart.
We've tried setting the timestamps to low values and the packets are transmitted correctly; as expected, the tx_pp_timestamp_past_errors counter is incremented. We also set high values and this worked too, with tx_pp_timestamp_future_errors incrementing.
Any ideas where to start debugging this? I couldn't see any API that would give an error code when rte_eth_tx_burst fails.
We're using DPDK 21.08 with the 5.4.3.1 Mellanox driver on Ubuntu 20.04.
It looks like this was caused by not having enough TX descriptors: we were only specifying 64, and increasing that to 1024 fixes the problem.
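For anyone hitting the same thing, a minimal sketch of the fix (assuming port 0 and a single TX queue; names are illustrative): request a larger TX ring when setting up the queue, presumably because timestamped packets hold their descriptors until their scheduled transmit time.

    #include <rte_ethdev.h>

    /* Set up the TX queue with a larger descriptor ring (was 64, now 1024). */
    static int setup_tx_queue (uint16_t port_id)
    {
        uint16_t nb_txd = 1024;
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txconf;

        if (rte_eth_dev_info_get (port_id, &dev_info) != 0)
            return -1;

        /* Let the driver clamp/round the ring size to what it supports. */
        if (rte_eth_dev_adjust_nb_rx_tx_desc (port_id, NULL, &nb_txd) != 0)
            return -1;

        txconf = dev_info.default_txconf;
        return rte_eth_tx_queue_setup (port_id, 0 /* queue id */, nb_txd,
                                       rte_eth_dev_socket_id (port_id), &txconf);
    }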
I was wondering what the difference is between pkt.time and pkt[IP].time, since they give different times for the same packet.
I was also wondering how to interpret a packet time such as 1430123453.564733.
If anyone has an idea or knows where I can find such information it would be very helpful.
Thanks.
pkt.time gives you the capture timestamp as Unix epoch time, the value Wireshark shows in the frame layer of the packet: 1430123453.564733 means about 1430123453 seconds (plus a fraction) after 1970-01-01 00:00:00 UTC.
pkt[IP].time gives you the time attribute of the IP layer object. The IP header itself carries no timestamp on the wire, so this value is not read from the packet; it is just the time Scapy assigned when it created that layer object, which is why it differs from pkt.time.
I'm currently using the libs from FFmpeg to stream some MPEG-2 TS (H.264-encoded) video. The streaming is done via UDP multicast.
The issue I am having currently has two main parts. There is a long initial connection time before the video shows up (the stream also contains metadata, and that stream is detected by my media tool immediately).
Once the video gets going things are fine but it is always delayed by that initial connection time.
I am trying to get as near to LIVE streaming as possible.
Currently using the av_dict_set(&dict, "tune", "zerolatency", 0) and "profile" -> "baseline" options.
GOP size = 12;
At first I thought it was an I-frame issue, but the initial delay is there whether the GOP size is 12 or the default 250. Sometimes the video connects quickly, but it is immediately dropped, the delay occurs, then it starts back up and is good from that point on.
According to the documentation, the zerolatency option should be sending many I-frames to limit initial syncing delays.
I am starting to think it's a buffering issue: when I close the application and leave the media player up, it fast-forwards through the delay until it hits basically where the file stopped streaming.
So while I don't completely understand what was wrong, I at least fixed the problem I was having.
The issue came from using av_interleaved_write_frame() instead of the regular av_write_frame() (the latter works for live streaming) when writing out the video frames. I'll have to dig into the differences a bit more to fully understand it, but it's funny how you sometimes figure out the problem you are having on a total whim after bashing your face against it for a few days.
I can get pretty good live-ish video streaming with the "zerolatency" tune option set.
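For what it's worth, a rough sketch of the relevant pieces (variable names are illustrative, error handling omitted): set the encoder options mentioned above and write packets with av_write_frame() instead of av_interleaved_write_frame().

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>
    }

    /* Open the H.264 encoder with the low-latency options discussed above. */
    static int open_encoder (AVCodecContext *enc_ctx, const AVCodec *codec)
    {
        AVDictionary *dict = NULL;
        av_dict_set (&dict, "tune", "zerolatency", 0);   /* x264: no lookahead buffering */
        av_dict_set (&dict, "profile", "baseline", 0);   /* baseline profile: no B-frames */
        enc_ctx->gop_size = 12;                          /* short GOP for quicker sync */
        int ret = avcodec_open2 (enc_ctx, codec, &dict);
        av_dict_free (&dict);
        return ret;
    }

    /* av_interleaved_write_frame() buffers packets so it can interleave streams
     * by dts, which adds latency; av_write_frame() hands the packet straight to
     * the muxer, which is what made live streaming work here. */
    static int write_video_packet (AVFormatContext *out_ctx, AVPacket *pkt)
    {
        return av_write_frame (out_ctx, pkt);
    }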
I am implementing RTSP in C# using an Axis IP camera. Everything is working fine, but when I try to display the video, the first few frames have lots of green patches. I suspect the issue is that I am not sending the I-frame first to the client.
Hence, I want to know the algorithm required to detect an I-frame in an RTP packet.
When initiating an RTSP session, the server normally starts the RTP stream with configuration data followed by the first I-frame.
It is conceivable that your Axis camera is set to "always multicast". In this case the RTSP communication leads to an SDP description that tells the client all the network and streaming details necessary for receiving the multicast stream.
Since the multicast stream is always present, you will most probably receive some P- or B-frames first (depending on the GOP size).
You can detect these P/B-frames in your RTP client the same way you would detect the I-frames, as suggested by Ralf, by identifying them via the NAL unit type. Simply skip all frames in the RTP client until you receive the first I-frame.
Now you can forward all following frames to the decoder.
Or you have to change your camera settings!
jens.
PS: don't forget that you have fragmentation in your RTP stream; that means that besides the RTP header there is some fragmentation information. Before identifying a frame you have to reassemble it.
It depends on the video media type. If you take H.264, for instance, you would look at the NAL unit header to check the NAL unit type.
The green patches can indeed be caused by not having received an I-frame first.
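To make that concrete, here is a sketch (in C++; the question uses C#, but the byte-level logic is identical) of checking the NAL unit type in an H.264 RTP payload per RFC 6184, covering single NAL unit packets and FU-A fragments only:

    #include <cstdint>
    #include <cstddef>

    /* Return true if this RTP payload starts an IDR (I-) frame.
     * payload points just past the RTP header; len is the payload length. */
    bool rtp_payload_starts_idr (const uint8_t *payload, size_t len)
    {
      if (len < 1)
        return false;

      uint8_t nal_type = payload[0] & 0x1F;        // low 5 bits of the first payload byte

      if (nal_type == 5)                           // single NAL unit packet: IDR slice
        return true;

      if (nal_type == 28 && len >= 2) {            // FU-A fragmentation unit
        bool start_bit = (payload[1] & 0x80) != 0; // S bit: first fragment of the NAL
        uint8_t fragmented_type = payload[1] & 0x1F;
        return start_bit && fragmented_type == 5;
      }

      /* Other types (1 = non-IDR slice, 7 = SPS, 8 = PPS, 24 = STAP-A, ...) are
       * not IDR slices themselves; STAP-A would need its aggregated NAL units
       * inspected individually. */
      return false;
    }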