From RFC 3550:
If a receiver discovers that two other
sources are colliding, it MAY keep the packets from one and discard
the packets from the other when this can be detected by different
source transport addresses or CNAMEs. The two sources are expected
to resolve the collision so that the situation doesn't last.
In a unicast configuration with one receiver and two senders that communicate only with the receiver, how can SSRC collisions be detected by the senders?
One guess is that the receiver should periodically send all known CNAMEs to all known participants (the senders). Is that true? But in that case, how would the senders associate a received CNAME with a transport address?
Update:
As answered below, there are two separate RTP sessions with separate SSRC spaces, so no collision detection is needed.
The distinguishing feature of an RTP session is that each
maintains a full, separate space of SSRC identifiers
And:
The set of participants included in one RTP session
consists of those that can receive an SSRC identifier transmitted
by any one of the participants either in RTP as the SSRC or a CSRC
(also defined below) or in RTCP.
And there is even an example of the situation I have described:
For example, consider a three-
party conference implemented using unicast UDP with each
participant receiving from the other two on separate port pairs.
If each participant sends RTCP feedback about data received from
one other participant only back to that participant, then the
conference is composed of three separate point-to-point RTP
sessions.
As far as I understand, this rule applies only to multicasting and/or packet loops. With the setup you describe (two senders unicasting to one receiver), the senders don't know about each other and have no way to detect the collision. It is the receiver's task to deal with this issue. If the receiver is a media processor, it will likely act as an end party, reformat the stream, and resend the needed contents under its own SSRC.
A Goodbye (BYE) can be sent with a Reason set to the appropriate value.
See http://www.ietf.org/rfc/rfc3550.txt, section 6.6 "BYE: Goodbye RTCP Packet".
By convention I have seen the value "ssrc" used to indicate that the SSRC is changing.
Additionally, if an RTCP packet is received with a new SSRC, the SSRC of the RTP packets should probably change as well; this case can be handled when verifying the sequence number: if the SSRC has changed but the sequence number is still valid, the new SSRC is adopted.
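Concretely, a BYE with a reason could be serialized as below. This is a minimal sketch following the field layout in RFC 3550 section 6.6, not code from any library; make_bye is an invented helper name.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Serialize an RTCP BYE packet carrying one SSRC and a reason string.
std::vector<uint8_t> make_bye(uint32_t ssrc, const std::string& reason) {
    std::vector<uint8_t> pkt;
    // Payload after the 4-byte header: SSRC (4 bytes) + reason length (1 byte)
    // + reason text, zero-padded up to a 32-bit boundary.
    size_t payload = 4 + 1 + reason.size();
    size_t padded = (payload + 3) & ~size_t(3);
    uint16_t length = uint16_t(padded / 4);  // packet length in 32-bit words minus one
    pkt.push_back(0x81);                     // V=2, P=0, source count SC=1
    pkt.push_back(203);                      // PT=203 (BYE)
    pkt.push_back(uint8_t(length >> 8));     // length field, network byte order
    pkt.push_back(uint8_t(length & 0xff));
    pkt.push_back(uint8_t(ssrc >> 24));      // the SSRC that is leaving/changing
    pkt.push_back(uint8_t(ssrc >> 16));
    pkt.push_back(uint8_t(ssrc >> 8));
    pkt.push_back(uint8_t(ssrc));
    pkt.push_back(uint8_t(reason.size()));   // reason length, e.g. 4 for "ssrc"
    pkt.insert(pkt.end(), reason.begin(), reason.end());
    while (pkt.size() % 4 != 0) pkt.push_back(0);  // pad to 32-bit boundary
    return pkt;
}
```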
Related
I raised this question when reading the source code of muduo (a C++ network library).
If a client sends a large message that gets segmented by TCP, what happens on the server side? (Does the server know that the message was segmented?)
And is it necessary for the network library to wait for the whole message and not interrupt the upper layer?
When dealing with a stream protocol like TCP, you already have to reassemble received data into chunks of your own choosing. That's either a fixed number of bytes per chunk, or it's decided dynamically by parsing the data in terms of your application's protocol (e.g. HTTP).
You don't know when you receive a packet from the network layer that it has been segmented: you only know that you received some data. You may know (because you understand your own protocol) that you're expecting more data to finish the chunk, but you won't know whether there is any more data until you receive it. If you do receive it.
Conversely, a single TCP packet may well contain more than a single chunk of your application-layer data! Again, you need to be aware that there is no direct relationship between the two things.
You can, however, depend on the bytes being delivered to you in the same order in which they were sent, which is nice.
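As an illustration, assume a hypothetical protocol that prefixes each message with a 4-byte big-endian length; the reassembly could look like this (extract_messages is an invented name, not part of any library):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Append whatever recv() gave you to `pending`, then call this: it peels off
// every complete message and leaves any trailing partial message in `pending`.
std::vector<std::string> extract_messages(std::string& pending) {
    std::vector<std::string> out;
    while (pending.size() >= 4) {
        uint32_t len = (uint32_t(uint8_t(pending[0])) << 24)
                     | (uint32_t(uint8_t(pending[1])) << 16)
                     | (uint32_t(uint8_t(pending[2])) << 8)
                     |  uint32_t(uint8_t(pending[3]));
        if (pending.size() < 4 + size_t(len)) break;  // message not complete yet
        out.push_back(pending.substr(4, len));        // one full application message
        pending.erase(0, 4 + len);
    }
    return out;
}
```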
Simple analogy: a big ol' ship, carrying cargo. It may be carrying 40 cars, or it may be carrying just half the quantity of parts required to construct an airplane. Or it may be carrying both! You don't know until you read the shipping manifest and consult your own records on delivery. It's then your responsibility to unpack what you've received and do what you need to do with it.
And is it necessary for the network library to wait for the whole message and not interrupt the upper layer?
If the library wants to pass a full "message" to the upper layer, then usually yes. Some approaches will just block waiting for a full message, but that's not common nowadays. Asynchronous I/O is your friend.
(This was a generic answer, written with no knowledge of what muduo does specifically.)
I've been using Boost Asio sockets (UDP and TCP) to handle a custom protocol between my client and server programs. It had been working great, until I discovered that with TCP async_send/async_receive calls, data can arrive in combined chunks.
For example, if I make two send calls, each with its own packet, they can arrive combined at a single receive call. I had assumed that every send corresponds to one receive, but that's obviously wrong. It had nevertheless worked well for the longest time, until I ran the client on a different OS and found the issue.
So my question is: are there any guarantees about the completeness of the data on arrival for every receive call? (E.g. does an async_send of 128 bytes arrive in multiples of 128 bytes, or must the arrival always be treated as arbitrary, such as 1 byte arriving and then 127 bytes?)
More specifically, does this mean that data can arrive concatenated or partial for any send call, and I always have to handle the concatenated/partial data manually?
Is this true for both UDP and TCP asio sockets?
I searched around and couldn't find any documentation on this so I was wondering if anyone have any idea.
First, it's important to understand that Boost Asio's socket receive and send methods merely instruct the underlying network stack (for example, the Windows Sockets API) to receive or send data.
If you are sending data to the same computer, via a so-called loopback address, the operating system (if there is one) can just "give" it to the listening, i.e. receiving, program. That is the scenario where you would be luckiest: things might arrive in order and complete in all cases.
However, if you are addressing another computer, or simply when the operating system is in the mood, you will see different behaviour:
TCP was designed so that you get your data in the order you sent it. But the chunk or packet size it is sent in varies, even on the same connection; that is a key feature of TCP. Your OS or network adapter might also do some send or receive buffering before informing you. However, nothing will get lost.
So, in short, for TCP: you can make sure the data is complete by waiting for a certain marker in your data; async_read_until exists exactly for this case. Data from multiple send calls might arrive in one receive, or in many.
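For example, reading up to a newline delimiter could look like this sketch, where socket_ and buf_ are placeholder class members (buf_ being a boost::asio::streambuf):

```cpp
boost::asio::async_read_until(socket_, buf_, '\n',
    [this](boost::system::error_code ec, std::size_t /*bytes*/) {
        if (!ec) {
            std::istream is(&buf_);
            std::string line;
            std::getline(is, line);  // one complete '\n'-delimited message
            // buf_ may already hold bytes past the delimiter; Asio keeps
            // them around for the next async_read_until call
        }
    });
```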
UDP was designed for low latency, in contrast to TCP, but without TCP's ordering and completeness guarantees. When you send a UDP datagram (i.e. packet), the OS and network adapter will usually try to send it out ASAP. However, on the way to the other computer the internet might lose it, or deliver a datagram sent later before one sent earlier, and some datagrams may never arrive at all. But when you do receive a datagram, it is complete in itself.
So, in short, for UDP: data arrives in whole datagrams, but some datagrams might be missing or might arrive in a different order than they were sent. The data from one send maps to at most one receive, which might happen later or not at all.
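For instance, each completed Asio receive yields exactly one datagram. A sketch, with the port and names as placeholders:

```cpp
#include <array>
#include <boost/asio.hpp>

int main() {
    using boost::asio::ip::udp;
    boost::asio::io_context io;
    udp::socket sock(io, udp::endpoint(udp::v4(), 9000));
    udp::endpoint sender;
    std::array<char, 65507> buf;  // maximum possible UDP payload

    sock.async_receive_from(boost::asio::buffer(buf), sender,
        [&](boost::system::error_code ec, std::size_t n) {
            // On success, bytes [0, n) are exactly one complete datagram from
            // `sender`: never a partial one, never two datagrams merged.
        });
    io.run();
}
```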
So after some more testing, here's what I concluded: the answer is no. Boost Asio sockets do not have any magic that can enforce data completeness beyond what the TCP/UDP protocols themselves enforce.
Edit:
So here's more of my research:
For TCP, the connection acts like a data stream, so data may arrive partial or combined. The user application needs to handle deserialization of combined or partial data itself.
For UDP, because it is a datagram protocol, any packet that arrives is guaranteed to be independent and complete, so there is no need to handle partial or combined packets.
I am reimplementing an old network layer library, this time using Boost Asio. Our software talks over TCP/IP with a third-party application. Several messages behave very well on both sides, but there is one case I misunderstand:
The third party sends two messages (msg A and msg B) one right after the other (very short timing), but I receive only part of message A in TCP packet 1, and the end of message A plus the whole of message B in TCP packet 2 (I sniff with Wireshark).
I had not thought of this case. I am wondering whether it is common with TCP, and whether my layer should adapt to it, or whether I should ask the third party to check what they do on their side so that I receive the two messages in different packets.
Packets can be fragmented and arrive out-of-sequence. The TCP stack which receives them should buffer and reorder them, before presenting the data as an incoming stream to the application layer.
My problem is with message B, which I don't see because it comes after the end of message A in the same packet.
You can't rely on "messages" having a one-to-one mapping to "packets": to the application, TCP (not UDP) looks like a "streaming" protocol.
An application which sends via TCP needs another way to separate messages. Sometimes that's done by marking the end of each message. For example SMTP marks the end-of-message as follows:
The transmission of the body of the mail message is initiated with a
DATA command after which it is transmitted verbatim line by line and
is terminated with an end-of-data sequence. This sequence consists of
a new-line (<CR><LF>), a single full stop (period), followed by
another new-line. Since a message body can contain a line with just a
period as part of the text, the client sends two periods every time a
line starts with a period; correspondingly, the server replaces every
sequence of two periods at the beginning of a line with a single one.
This escaping method is called dot-stuffing.
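A rough illustration of the sender's side of dot-stuffing (a sketch, not taken from any SMTP library):

```cpp
#include <string>

// Prepend an extra '.' to every line that begins with '.', so that the lone
// "." terminator line stays unambiguous; the receiver does the reverse.
std::string dot_stuff(const std::string& body) {
    std::string out;
    bool at_line_start = true;
    for (char c : body) {
        if (at_line_start && c == '.')
            out += '.';                  // escape a leading period
        out += c;
        at_line_start = (c == '\n');
    }
    return out;
}
// the sender then appends the end-of-data sequence "\r\n.\r\n"
```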
Alternatively, the protocol might specify a prefix at the start of each message, which will indicate the message-length in bytes.
If you are coding the TCP stack itself, then you have access to the TCP and IP headers: the IP "Total Length" field, minus the IP and TCP header lengths (the latter given by the TCP "Data offset" field), tells you how much data each segment carries.
Yes, this is common. TCP is a streaming protocol, and your "logical" packet may be split across many "physical" packets, so the client is responsible for assembling the higher-level packets. Additionally, TCP guarantees proper ordering, so you don't have to worry about assembling out-of-order packets.
Your problem has nothing to do with TCP at all. Your problem is that you expected Asio to do the message parsing for you. It does not; you have to implement it yourself.
If your messages are all the same size, do an async read for that size.
If they are of different lengths, do an async read for your header size, analyze the header, and do an async read for the rest of the message according to the header (see the sketch after this list).
If your messages are of variable length, the size is unknown, but there is a defined end character or sequence, then you have to save the bytes remaining after that end sequence and prepend them to the next read.
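A sketch of the second case (header first, then body), assuming a hypothetical 4-byte big-endian length header; session, socket_, header_ (a std::array<std::uint8_t, 4>), body_ (a std::vector<std::uint8_t>), and handle_message are all placeholders:

```cpp
void session::read_next() {
    // async_read completes only when the whole 4-byte header has arrived
    boost::asio::async_read(socket_, boost::asio::buffer(header_),
        [this](boost::system::error_code ec, std::size_t) {
            if (ec) return;
            std::uint32_t len = (std::uint32_t(header_[0]) << 24)
                              | (std::uint32_t(header_[1]) << 16)
                              | (std::uint32_t(header_[2]) << 8)
                              |  std::uint32_t(header_[3]);
            body_.resize(len);
            // now read exactly `len` bytes of body
            boost::asio::async_read(socket_, boost::asio::buffer(body_),
                [this](boost::system::error_code ec, std::size_t) {
                    if (ec) return;
                    handle_message(body_);  // hypothetical callback
                    read_next();            // loop for the next message
                });
        });
}
```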
If I write a server, how can I implement the receive function to get all the data sent by a specific client if I don't know how that client sends the data?
I am using the TCP/IP protocol.
If you really have no protocol defined, then all you can do is accept groups of bytes from the client as they arrive. Without a defined protocol, there is no way to know that you have received "all the bytes" that the client sent, since there is always the possibility that a network failure occurred somewhere between the client and your server during transmission, causing the last part of the stream not to arrive at the server. In that case, you would get the usual end-of-stream indication from the TCP socket (e.g. recv() returning 0; with non-blocking sockets you would see EWOULDBLOCK only while the connection is idle, and still a return of 0 at end-of-stream), so you would know that you aren't going to receive any more data from the client (because the TCP connection is now disconnected)... but that isn't quite the same thing as knowing you have received all of the data the client meant for you to receive.
Depending on your application, that might be good enough. If not, then you'll have to work out a protocol and trust that your clients will abide by its rules. Having the client first send a header saying how many bytes it plans to send is a good approach; having it send some special "okay, that's all I meant to send" indicator is also possible (although if you do it that way, you have to watch out for false positives, in case the special indicator can appear by chance inside the data itself).
One call to send does not equal one call to recv. Either send a header so the receiver knows how much data to expect, or send some sort of sentinel value so the receiver knows when to stop reading.
It depends on how you want to design your protocol.
ASCII protocols usually use a special character to delimit the end of the data, while binary protocols usually send the length of the data first as a fixed-size integer (both sides know this size) and then the variable-length data follows.
You can combine the size with your data in one buffer and call send once. People usually use the first 2 bytes for the size of the data in a packet, like this:
|size N (2 bytes) | data (N bytes) |
In this case, you can carry up to 65535 bytes of custom data.
Since TCP does not preserve message boundaries, it doesn't matter how many times you call send. You have to call receive until you get the 2-byte size N, and then keep calling receive until you have the N bytes of data that were sent.
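A minimal sketch of both sides with plain Berkeley-style sockets (error handling trimmed; send_framed and recv_all are invented names):

```cpp
#include <cstdint>
#include <cstring>
#include <sys/socket.h>  // on Windows: winsock2.h, linking ws2_32
#include <arpa/inet.h>   // htons/ntohs

// sender: 2-byte size in network byte order, then the data, in one send()
void send_framed(int fd, const char* data, uint16_t len) {
    char pkt[2 + 65535];
    uint16_t n = htons(len);
    std::memcpy(pkt, &n, 2);
    std::memcpy(pkt + 2, data, len);
    send(fd, pkt, 2 + len, 0);     // may still arrive split at the receiver
}

// receiver: loop until exactly `want` bytes are in, since recv() can be short
bool recv_all(int fd, char* buf, size_t want) {
    size_t got = 0;
    while (got < want) {
        ssize_t r = recv(fd, buf + got, want - got, 0);
        if (r <= 0) return false;  // error or peer closed the connection
        got += size_t(r);
    }
    return true;
}
// usage: recv_all 2 bytes, ntohs() them to get N, then recv_all N bytes
```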
UPDATE: This is just a sample to show how to check message boundary in TCP. Security/Encryption is a whole different story and it deserves a new thread. That said, do not simply copy this design. :)
TCP is stream-based, so there is no concept of a "complete message": that is given by a higher-level protocol (e.g. HTTP), or you have to invent it yourself. If you were free to use UDP (which is datagram-based), one send() would correspond to one receive(), and there would be no framing to do.
The newer SCTP protocol also supports the concept of a message natively.
With TCP, to implement messages, you have to tell the receiver the size of the message. It can be the first few bytes (commonly 2, since that allows messages up to 64K -- but you have to be careful of byte order if you may be communicating between different systems), or it can be something more complicated. HTTP, for example, has a whole set of rules by which the receiver determines the length of the message. One of them is the Content-Length HTTP header, which contains a string representing the number of bytes in the body of the message. Header-only HTTP messages are simply delimited by a blank line. As you can see, there are no easy (or standard) answers.
TCP is a stream based protocol. As such there is no concept of length of data built into TCP in the same way as there is no concept of data length for keyboard input.
It is therefore up to the higher level protocol to specify the end of the message. This can be done by including the packet length in the protocol or specifying a special end-of-message byte sequence.
For example, HTTP headers are terminated by a double \r\n sequence, and the length of the message body can be obtained from the Content-Length header.
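Illustratively, with `buffer` holding the bytes received so far (parse_content_length is a hypothetical helper, not a real API):

```cpp
// The headers end at the first blank line; the body length comes from the
// Content-Length header parsed out of that header block.
std::string::size_type hdr_end = buffer.find("\r\n\r\n");
if (hdr_end != std::string::npos) {
    std::size_t body_len = parse_content_length(buffer.substr(0, hdr_end));
    bool complete = buffer.size() >= hdr_end + 4 + body_len;
    // once complete, the message occupies the first hdr_end + 4 + body_len bytes
}
```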
So I'm almost done with an assignment involving Win32 programming and sockets, but I have to generate and analyze some statistics about the transfers. The only part I'm having trouble with is figuring out the number of packets that were sent to the server by the client.
The data sent can be variable-length, so I can't just divide the total bytes received by a #define'd value.
We have to use asynchronous calls to do everything, so I've been trying to increment a counter with every FD_READ message I get for the server's socket. However, because I have to be able to accept a potentially large file size, I have to call recv/recvfrom with a buffer size around 64k. If I send a small packet (a-z), there are no problems. But if I send a string of 1024 characters 10x, the server reports 2 or 3 packets received, but 0% data loss in terms of bytes sent/received.
Any idea how to get the number of packets?
Thanks in advance :)
This really boils down to what you mean by 'packet.'
As you are probably aware, when a TCP/UDP message is sent on the wire, the data being sent is 'wrapped,' or prepended, with a corresponding TCP/UDP header. This is then 'wrapped' in an IP header, which is in turn 'wrapped' in an Ethernet frame. You can see this breakout if you use a sniffing package like Wireshark.
The point is this. When I hear the term 'packet,' I think of data at the IP level. IP data is truly packetized on the wire, so packet counts make sense when talking about IP. However, if you're using regular sockets to send and receive your data, the IP headers, as well as the TCP/UDP headers, are stripped off, i.e., you don't get this information from the socket. And without that information, it is impossible to determine the number of 'packets' (again, I'm thinking IP) that were transmitted.
You could do what others are suggesting by adding your own header with a length and a counter. This information will help you accurately size your receive buffers, but it won't help you determine the number of packets (again, IP...), especially if you're doing TCP.
If you want to accurately determine the number of packets using Winsock sockets, I would suggest creating a 'raw' socket as suggested here. This socket will collect all IP traffic seen by your local NIC. Use the IP and TCP/UDP headers to filter the data based on your client and server sockets, i.e., IP addresses and port numbers. This will give an accurate picture of how many IP packets were actually used to transmit your data.
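A heavily hedged sketch of that approach with Winsock (requires administrator rights; the bound address is a placeholder for your NIC's address):

```cpp
#include <winsock2.h>
#include <ws2tcpip.h>
#include <mstcpip.h>  // SIO_RCVALL, RCVALL_ON

// Sketch only: count IP packets by reading them whole from a raw socket.
void sniff_packets() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET raw = socket(AF_INET, SOCK_RAW, IPPROTO_IP);
    sockaddr_in local = {};
    local.sin_family = AF_INET;
    inet_pton(AF_INET, "192.168.1.10", &local.sin_addr);  // placeholder NIC address
    bind(raw, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    DWORD on = RCVALL_ON, returned = 0;
    WSAIoctl(raw, SIO_RCVALL, &on, sizeof(on), nullptr, 0, &returned,
             nullptr, nullptr);

    // Each recv() now yields one complete IP packet, headers included, so
    // counting calls (filtered by the addresses/ports in the headers) counts packets.
    char pkt[65536];
    int n = recv(raw, pkt, sizeof(pkt), 0);
    (void)n;
}
```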
Not a direct answer to your question but rather a suggestion for a different solution.
What if you send a length-descriptor in front of the data you want to transfer? That way you can already allocate the correct buffer size (not too much, not too little) on the client and also check if there were any losses when the transfer is over.
With TCP you should have no problem at all, because the protocol itself handles error-free transmission; otherwise you should get a meaningful error.
With UDP you could also split your transfer into fixed-size chunks, each with a proper sequence ID. You'd have to accumulate all incoming packets, sort them (UDP makes no guarantee about receive order), and paste the data back together (a possible header layout is sketched below).
On the other hand, you should consider whether it is really necessary to support UDP, as there is quite a bit of manual overhead if you want to make that protocol error-safe... (see the Wikipedia article on TCP for a list of the problems to work around)
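One possible (entirely made-up) chunk header for that scheme:

```cpp
#include <cstdint>

#pragma pack(push, 1)
struct ChunkHeader {
    std::uint32_t transfer_id;   // which transfer this chunk belongs to
    std::uint32_t seq;           // position of this chunk within the transfer
    std::uint32_t total_chunks;  // so the receiver knows when it has everything
    std::uint16_t payload_len;   // bytes of data following this header
};
#pragma pack(pop)
// The receiver stores chunks keyed by seq and reassembles once all
// total_chunks have arrived; remember to convert fields to network byte order.
```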
Do your packets have a fixed header, or are you allowed to define your own? If you can define your own, include a packet counter in the header, along with the length. You'll have to keep a running total that accounts for rollover in your counter, but this will ensure you're counting packets sent rather than packets received. For a simple assignment you probably won't encounter loss (with UDP, obviously), but if you did, a packet counter would make sure your statistics reflected the sent message accurately.