RTCP packet's cumulative number of packets lost is different from the NACK message's FCI description - rtp

In the first RTCP SR packet, the cumulative number of packets lost shows 2, but the NACK part reports 3 frames lost. I'm just curious about this; I thought these two values should be the same.

Related

RTP timestamp in data packets vs RTCP SR packets

I'm trying to understand the difference between the RTP timestamp as it occurs in RTP data packets vs as it is used in RTCP Sender Report (SR) packets.
For the RTP timestamp in data packets I have established that:
They are not based on wall-clock time but represent more of a counter
They typically have a random offset chosen at the beginning of a session
For simple standard audio codecs they typically increment by 160 (roughly speaking, 1000 ms / 20 ms = 50 packets per second × 160 samples = 8000 samples per second, i.e. an 8 kHz clock rate), and this increment continues even for silent packets that are not sent
For the RTP timestamp in RTCP sender report packets, I originally thought they were just a snapshot of the current RTP timestamp of the data packets and, in conjunction with the NTP timestamp (which is typically wall-clock), could be used to calculate the wall-clock time of further incoming RTP packets, which is also what I understood from this analysis on the subject.
However, a sentence in RFC 3550 Section 6.4.1 makes me wonder about that assumption:
Note that in most cases this timestamp will not be equal to the RTP timestamp in any adjacent data packet.
This ruins my assumption, because I assumed that the SR packet would contain an RTP timestamp that is found in a data packet that has just been sent by the same source. Unfortunately the next sentences are pretty much meaningless to me (maybe this is an English language barrier, but they sound like unhelpful nonsense to me):
Rather, it MUST be calculated from the corresponding NTP timestamp using the relationship between the RTP timestamp counter and real time as maintained by periodically checking the wallclock time at a sampling instant.
Could you clarify for me how the RTP timestamp in an RTCP SR packet can be calculated?
The process of sending RTCP report packets is separate from sending the related RTP packet stream(s). By that I mean that they usually won't be sent at the same time. Therefore a sent RTP packet and an RTCP report packet will typically contain different RTP timestamp values.
As you know a relationship exists between the NTP (wallclock) time and the RTP timestamp. Since RTCP report packets contain both an NTP timestamp and an RTP timestamp these packets can be used to learn how these values relate at the side of the sender of the RTCP packet. Any RTP packets received from the same sender will contain their own (typically different) RTP timestamp. Using the relationship learned from the received RTCP packets this RTP timestamp can be used to calculate the wallclock time of the moment the RTP packet was sent.
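For what it's worth, here is a minimal C++ sketch of that mapping in both directions, assuming an 8 kHz audio clock and ignoring 32-bit wrap-around and NTP-format conversion; the names and numbers are illustrative, not taken from any particular stack.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical clock rate (8000 Hz, e.g. G.711); real code would take it
// from the payload type / SDP.
constexpr double kClockRate = 8000.0;

// Sender side: the RTP timestamp to place in an SR, derived from the current
// wallclock time and a reference pair (ref_sec, rtp_ts_ref) captured at a
// sampling instant, as RFC 3550 6.4.1 describes.
uint32_t srRtpTimestamp(double now_sec, double ref_sec, uint32_t rtp_ts_ref) {
    return rtp_ts_ref + static_cast<uint32_t>((now_sec - ref_sec) * kClockRate);
}

// Receiver side: map the RTP timestamp of a received data packet to the
// sender's wallclock, using the (NTP, RTP) pair learned from the last SR.
// (Ignores 32-bit wrap-around for brevity.)
double rtpToWallclock(uint32_t rtp_ts, uint32_t sr_rtp_ts, double sr_ntp_sec) {
    int32_t delta_ticks = static_cast<int32_t>(rtp_ts - sr_rtp_ts);
    return sr_ntp_sec + delta_ticks / kClockRate;
}

int main() {
    // Example: the SR carried (NTP = 1000.0 s, RTP = 160000); a data packet
    // has RTP timestamp 160800, i.e. 800 ticks = 0.1 s later at 8 kHz.
    std::printf("sent at ~%.3f s\n", rtpToWallclock(160800, 160000, 1000.0));
    return 0;
}
```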
The answer to this stackoverflow question might also help you.

ConnectX-6 Lx scheduled sending only sending 25 packets

We are trying to use send scheduling on a ConnectX-6 Lx. If we set no timestamps on the packet buffers and manually send each packet at approximately the right time, everything works. However, if we set timestamps in the buffers, the first 25 packets are sent and received at the expected times, but all subsequent calls to rte_eth_tx_burst return 0. If it's relevant, we are sending a single packet in each burst, with timestamps 125 us apart.
We've tried setting the timestamps to low values, and the packets are transmitted correctly; as expected, the tx_pp_timestamp_past_errors counter is incremented. We also set high values and this worked too, with tx_pp_timestamp_future_errors incrementing.
Any ideas where to start debugging this? I couldn't see any API which would give an error code for rte_eth_tx_burst failing.
We're using DPDK 21.08 with the 5.4.3.1 Mellanox driver on Ubuntu 20.04.
It looks like this was caused by not having enough TX descriptors: we were only specifying 64, and increasing it to 1024 fixes the problem.
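For anyone hitting the same thing, a minimal sketch of the descriptor-count change, assuming the standard DPDK queue-setup path; the values are from our setup rather than a recommendation, and the send-scheduling configuration itself (tx_pp devargs, timestamp mbuf field) is unchanged and not shown here.

```cpp
#include <rte_ethdev.h>

/* Only the TX descriptor count changes: 64 starved the scheduling path,
 * so we ask for 1024 and let the PMD clamp it to what the NIC supports. */
static int setup_tx_queue(uint16_t port_id, uint16_t queue_id)
{
    uint16_t nb_rxd = 1024;
    uint16_t nb_txd = 1024;   /* was 64 before the fix */

    int ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd, &nb_txd);
    if (ret != 0)
        return ret;

    struct rte_eth_dev_info dev_info;
    ret = rte_eth_dev_info_get(port_id, &dev_info);
    if (ret != 0)
        return ret;

    return rte_eth_tx_queue_setup(port_id, queue_id, nb_txd,
                                  rte_eth_dev_socket_id(port_id),
                                  &dev_info.default_txconf);
}
```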

dropped frames over UDP

this is my first "question", I hope I do it right :)
I am experimenting with network programming, and in particular I want to broadcast data from one machine to more than 10 other devices using UDP over a wireless network. The data comes in packets of about 300 bytes, at about 30 frames per second, i.e. one every ~33 ms.
My implementation is based on the qt example: http://qt-project.org/doc/qt-4.8/network-broadcastreceiver.html
I am testing the application with just one client and experiencing quite a few dropped frames, and I'm not really sure why. Everything works fine if I use Ethernet cables. I hope someone here can help me find the reason.
I can spot dropped frames because the packets contain a timestamp: after I receive a datagram, I check the difference between its timestamp and the last one received; if this is greater than e.g. 50 ms, it means that I lost a packet on the way.
This happens quite often, even though I have a dedicated Wi-Fi network (not connected to the internet and with just 3 machines connected to a router I just bought). Most of the time I drop one or two packets, which would not be a problem, but sometimes the difference between the timestamps suggests that more than 30 packets are lost, which is not good for what I am trying to achieve.
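For reference, this is roughly the check described above, as a minimal sketch (the 50 ms threshold and ~33 ms frame interval come from the numbers in this post):

```cpp
#include <cstdint>
#include <cstdio>

// Minimal sketch of the timestamp check described above: each datagram
// carries a millisecond timestamp from the sender, frames are ~33 ms apart,
// so a gap well above that (e.g. > 50 ms) means at least one frame was lost.
// Initialise last_ts_ms to -1 before the first datagram arrives.
void checkGap(int64_t ts_ms, int64_t &last_ts_ms) {
    if (last_ts_ms >= 0 && ts_ms - last_ts_ms > 50) {
        std::printf("gap of %lld ms, roughly %lld frame(s) missing\n",
                    static_cast<long long>(ts_ms - last_ts_ms),
                    static_cast<long long>((ts_ms - last_ts_ms) / 33 - 1));
    }
    last_ts_ms = ts_ms;
}
```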
When I ping from one machine to the other, I get these values:
50 packets transmitted, 50 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.244/91.405/508.959/119.074 ms
pretty bad for a new router, in a dedicated network with just 3 clients, isn't it? The router is advertised as a very fast Wi-Fi router, with three times faster performance than 802.11n routers.
Compare it with the values I get from an older router, sitting in the same room, with some 10 machines connected to it, during office hours:
39 packets transmitted, 39 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.458/47.297/142.201/37.186 ms
Perhaps the router is defective?
One thing I cannot explain is that, if I ping while running my UDP client/server application, the statistics improve:
55 packets transmitted, 55 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.164/6.174/197.962/26.181 ms
I was wondering if anyone had tips on what to test, hints on how to achieve a "reliable" UDP connection between these machines over wi-fi. By reliable I mean that I would be ok dropping 2 consecutive packets, but not more.
Thanks.
Edit
It seems that the router (?) sends the packets in bursts. I am measuring the time that passes between receiving two datagrams on the client: this value is about 3 ms for a sequence of ~10 packets, and then around 300 ms for the next packet. I think my issue at the client is more related to this inconsistency in the intervals between frames than to the dropped frames themselves. I probably just need a queue and a delay of >300 ms with respect to the server.
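Something like this minimal queue sketch is what I have in mind (it assumes the sender timestamps and the local clock are on a comparable millisecond scale; real code would measure the offset from the first packet received):

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Minimal playout-buffer sketch: hold each datagram until a fixed delay after
// its sender timestamp has elapsed, so a burst of ~10 packets in 3 ms followed
// by a ~300 ms pause is smoothed back to one frame every ~33 ms for playout.
struct Frame {
    int64_t sender_ts_ms;              // timestamp carried in the datagram
    std::vector<uint8_t> payload;
};

class PlayoutBuffer {
public:
    explicit PlayoutBuffer(int64_t delay_ms) : delay_ms_(delay_ms) {}

    void push(Frame f) { queue_.push(std::move(f)); }

    // Pops a frame once its playout time has been reached, otherwise returns false.
    bool pop(Frame &out, int64_t now_ms) {
        if (queue_.empty() || now_ms < queue_.front().sender_ts_ms + delay_ms_)
            return false;
        out = std::move(queue_.front());
        queue_.pop();
        return true;
    }

private:
    int64_t delay_ms_;                 // e.g. > 300 ms to ride out the bursts
    std::queue<Frame> queue_;
};
```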
The first and easiest way to tackle any network-related problem is to capture the traffic in Wireshark.
Also check whether the packets are really being sent out from the broadcasting machine.
And based on your description, if packets are transmitted fine over Ethernet cables but not over Wi-Fi, it could be an issue with the UDP port too.

Marker Bit in RTP for Voice Samples for codecs like AMR and G729

I want to know the significance of the marker bit in RTP for voice packets, and whether there is any RFC that describes it.
I know that for video packets the marker bit means the last packet of the same image, and hence it is the last packet whose PTS timestamp corresponds to that image. But for voice packets, with a codec such as AMR-NB, G.711 a-law, or G.729, the marker bit is usually false in each RTP packet.
So, does the meaning of the marker bit change in this case of RTP packets?
For audio codecs, if you analyse the Wireshark traces for any codec, let's say AMR, you will have the following observations:
For voice packets, the marker bit indicates the beginning of a talkspurt. Beginnings of talkspurts are good opportunities to adjust the playout delay at the receiver to compensate for differences between the sender and receiver clock rates as well as changes in the network delay jitter. Packets during a talkspurt need to be played out continuously, while listeners generally are not sensitive to slight variations in the durations of a pause.
The marker bit is a hint; the beginning of a talkspurt can also be computed by comparing the difference in timestamps and sequence numbers between two packets, assuming the timestamp clock rate is known.
Packets may arrive out of order, so that the packet with the marker bit is received after the second packet in the talkspurt. As long as the playout delay is longer than this reordering, the receiver can still perform delay adaptation. If not, it simply has to wait for the next talkspurt.
Source: http://www.cs.columbia.edu/~hgs/rtp/faq.html#marker
The same thing can be read here too.
http://msdn.microsoft.com/en-us/library/dd944715(v=office.12).aspx
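As a small illustration of the timestamp/sequence-number comparison the FAQ mentions, here is a sketch assuming a narrowband codec with 160 samples per 20 ms packet (that constant is an assumption about the payload, not something taken from an RFC):

```cpp
#include <cstdint>

// With silence suppression, the RTP timestamp keeps advancing during a pause
// while the sequence number only advances for packets actually sent, so a
// timestamp jump larger than the packets in between account for marks the
// start of a new talkspurt (even if the marker bit itself was lost).
constexpr uint32_t kSamplesPerPacket = 160;   // 20 ms at 8 kHz (assumption)

bool startsTalkspurt(uint16_t seq, uint32_t ts,
                     uint16_t prev_seq, uint32_t prev_ts, bool marker) {
    if (marker) return true;                        // explicit hint
    uint16_t seq_delta = static_cast<uint16_t>(seq - prev_seq);
    uint32_t ts_delta  = ts - prev_ts;              // both wrap naturally
    return ts_delta > static_cast<uint32_t>(seq_delta) * kSamplesPerPacket;
}
```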
As per the RFC:
marker (M): 1 bit
The interpretation of the marker is defined by a profile. It is intended to allow significant events such as frame boundaries to be marked in the packet stream. A profile MAY define additional marker bits or specify that there is no marker bit by changing the number of bits in the payload type field.
My understanding is that for a voice packet the data required for a single frame (mostly 20 ms) is not so big, so it does not need to be split across more than one RTP packet.
So, for voice packets the marker bit means the start of a new talkspurt, and the timestamp should be considered from there.
When you look into video packets (like H.261, H.263, ...), a single frame requires multiple RTP packets. In that case the marker bit represents the end of a single frame, and after receiving it you can start parsing the whole frame.
This is also used for DTMF in the RFC 2833 case, where a single event is represented by multiple RTP packets.

TCP: How are the seq / ack numbers generated?

I am currently working on a program which sniffs TCP packets being sent and received to and from a particular address. What I am trying to accomplish is replying with custom-tailored packets to certain received packets. I've already got the parsing done. I can already generate valid Ethernet, IP, and, for the most part, TCP packets.
The only thing that I cannot figure out is how the seq / ack numbers are determined.
While this may be irrelevant to the problem, the program is written in C++ using WinPCap. I am asking for any tips, articles, or other resources that may help me.
When a TCP connection is established, each side generates a random number as its initial sequence number. It is a strongly random number: there are security problems if anybody on the internet can guess the sequence number, as they can easily forge packets to inject into the TCP stream.
Thereafter, for every byte transmitted the sequence number will increment by 1. The ACK field is the sequence number from the other side, sent back to acknowledge reception.
RFC 793, the original TCP protocol specification, can be of great help.
I have the same job to do.
Firstly, the initial seq# is generated randomly (0 to 4294967295).
Then the receiver counts the length of the data it received and sends an ACK of seq# + length = x to the sender. The next sequence number will then be x and the sender will send more data. Similarly, the receiver counts the length again, x + length = y, and sends the ACK as y, and so on... That's how the seq/ack numbers are generated...
If you want to see it in practice, try sniffing a packet in Wireshark, follow the TCP stream, and watch the scenario...
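To make the bookkeeping concrete, here is a worked example with made-up numbers (real ISNs are random; SYN and FIN each consume one sequence number, and every payload byte consumes one more):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t client_isn = 1000;          // chosen randomly in reality
    uint32_t server_isn = 5000;

    // Handshake: the SYN from each side consumes one sequence number.
    uint32_t client_seq = client_isn + 1;   // 1001
    uint32_t server_seq = server_isn + 1;   // 5001

    // Client sends 100 bytes of payload with seq=1001.
    uint32_t payload_len = 100;
    uint32_t server_ack = client_seq + payload_len;   // 1101: next byte expected
    client_seq += payload_len;

    // Server replies with 20 bytes, seq=5001, ack=1101.
    uint32_t reply_len = 20;
    uint32_t client_ack = server_seq + reply_len;     // 5021
    server_seq += reply_len;

    std::printf("client seq=%u ack=%u, server seq=%u ack=%u\n",
                client_seq, client_ack, server_seq, server_ack);
    return 0;
}
```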
If I understand you correctly - you're trying to mount a TCP SEQ prediction attack. If that's the case, you'll want to study the specifics of your target OS's Initial Sequence Number generator.
There were widely publicized vulnerabilities in pretty much all the major OSes with respect to their ISN generators being predictable. I haven't followed the fallout closely, but my understanding is that most vendors released patches to randomize their ISN increments.
It seems that the rest of the answers have explained pretty much everything about where to find detailed and official information about ACKs, namely the TCP RFC.
Here's a more practical and easily understood page that I found when I was doing similar implementations, which may also help: TCP Analysis - Section 2: Sequence & Acknowledgement Numbers
RFC 793 section 3.3 covers sequence numbers. Last time I wrote code at that level, I think we just kept a one-up counter for sequence numbers that persisted.
These values reference the expected offsets of the start of the payload for the packet relative to the initial sequence number for the connection.
Reference
Sequence number (32 bits) – has a dual role. If the SYN flag is set, then this is the initial sequence number. The sequence number of the actual first data byte will then be this sequence number plus 1. If the SYN flag is not set, then this is the sequence number of the first data byte.
Acknowledgement number (32 bits) – if the ACK flag is set, then the value of this field is the next byte that the receiver is expecting.
Numbers are randomly generated on both sides, then increased by the number of octets (bytes) sent.
The sequence numbers increment after a connection is established. The initial sequence number on a new connection is ideally chosen at random, but a lot of OSes use some semi-random algorithm. The RFCs are the best place to find out more: TCP RFC.