I'm currently trying to implement a (T)LV protocol on top of TCP. A very early version of this protocol sent one message per send-recv pair (i.e. send("message to transmit") followed by recv(...)). This is really bad bandwidth-wise, I guess because I'm sending really small packets.
So now I am trying to switch to an LV protocol, sending several messages at once, separated only by their respective lengths (I am now using Protocol Buffers to serialize my data).
I now have two questions:
In Python I send by doing:
data = gtMessage.SerializeToString()        # serialize once instead of twice
sock.sendall(struct.pack("<H", len(data)))  # 2-byte little-endian length prefix
sock.sendall(data)                          # sendall retries partial writes
If I now put this into a loop and send several of those messages, I'd end up with my old problem, as far as I understand. Can I somehow string together the strings to be sent?
In C++ I receive first the length of the message and then read the number of bytes indicated by the length field.
Is it better performance-wise to first read everything from TCP and then parse it, or can I read one message, parse it, and only then read the next bit from the wire?
Edit: So after doing some more research I'd rephrase the first question as:
Is
sock.send("somestring")
sock.send("somestring")
the same as
sock.send("somestring"+"somestring")
?
Doing two sends in a row may result in two actual packets going out, which is not so great. To fix this you can concatenate the two pieces yourself, or use writev (aka "gather write"), or TCP_CORK on the first send to prevent it from turning into a packet all by itself.
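For illustration, here is a minimal sketch of the gather-write option in C++ (POSIX sockets). The function name send_framed and the choice of a 2-byte network-order prefix are my own, not from the question:

#include <sys/types.h>  // ssize_t
#include <sys/uio.h>    // writev, iovec
#include <arpa/inet.h>  // htons
#include <cstdint>

// Send one [length][payload] frame with a single writev() call, so the
// prefix cannot go out as a tiny packet of its own.
ssize_t send_framed(int sock, const char* payload, uint16_t payload_len) {
    uint16_t prefix = htons(payload_len);   // 2-byte length, network byte order
    iovec parts[2];
    parts[0].iov_base = &prefix;
    parts[0].iov_len  = sizeof(prefix);
    parts[1].iov_base = const_cast<char*>(payload);
    parts[1].iov_len  = payload_len;
    // Like send(), writev() may report a short write; a production version
    // would loop until everything is written.
    return writev(sock, parts, 2);
}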
As for the receive side, you should receive a big block (as much as you can up to some reasonable limit, say a couple megabytes or something), and then parse it. Do not try to receive just one or two bytes for the size then do another receive after that--this is inefficient and you may still end up with "short reads" if the sent message was fragmented.
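Here is a sketch of that receive-big-then-parse loop, assuming the 2-byte little-endian length prefix from the Python snippet above; handle_message() is a placeholder for the actual Protocol Buffers parsing:

#include <sys/types.h>   // ssize_t
#include <sys/socket.h>  // recv
#include <cstdint>
#include <cstddef>
#include <vector>

void handle_message(const uint8_t* data, size_t len);  // placeholder parser

void read_loop(int sock) {
    std::vector<uint8_t> buf;   // bytes received but not yet parsed
    uint8_t chunk[65536];
    for (;;) {
        ssize_t n = recv(sock, chunk, sizeof(chunk), 0);
        if (n <= 0) break;                               // closed, or error
        buf.insert(buf.end(), chunk, chunk + n);
        size_t off = 0;
        while (buf.size() - off >= 2) {                  // prefix available?
            size_t len = buf[off] | (size_t(buf[off + 1]) << 8);  // "<H"
            if (buf.size() - off - 2 < len) break;       // body incomplete
            handle_message(buf.data() + off + 2, len);
            off += 2 + len;
        }
        buf.erase(buf.begin(), buf.begin() + off);       // keep partial tail
    }
}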
I'm parsing a file containing lots of TCP packets. The problem is that the messages get segmented, and I can't find any indication of when and where they do so. No flag or anything else indicates that the middle of the current packet may contain the beginning of the next one. The protocol above TCP is FIX (used in online trading), but I'd like my code to be able to work with any protocol (or at least to understand which protocol it is).
I'm writing the code in C++ and can't use any additional libraries.
So, how do I figure out what the protocol above TCP is and where it gets segmented?
You can't. TCP/IP is conceptually a stream, not a sequence of messages (the fact that it is ultimately implemented as a sequence of packets is irrelevant). When you write a sequence of bytes to a TCP/IP stream, that sequence is added to the stream; it is not treated as a message which should maintain its own identity. No notion of message begin/end is transmitted along with the stream, unless you do so yourself in your own protocol.
If you find this hard to believe, consider how it works for files: if you write a sequence of bytes to a file, that sequence does not somehow become a record that you can later identify and retrieve. If you want that kind of structure you have to add it yourself. The same is true for TCP/IP.
The transport packets used to implement TCP/IP have no relation to the data blocks you specify with your API calls; they are merely a way to implement the TCP/IP stream. For some use cases there may appear to be a mapping, but this is accidental.
The only way to split a TCP/IP stream back into separate messages is by using knowledge of the protocol running on top of TCP/IP. In your case this is FIX. I assume you know how that works; you can use that knowledge to correctly split the FIX data back into its original messages. A generic TCP/IP message splitter cannot be made.
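To make that concrete, here is a hedged sketch of FIX-specific framing, meant only to illustrate the principle (it is not a hardened FIX parser; a real one must validate the checksum and cope with malformed input). It relies on FIX's own structure: tag 9 (BodyLength) counts the bytes between the SOH delimiter after the 9= field and the checksum trailer, and the trailer "10=xxx<SOH>" is a fixed 7 bytes:

#include <cstddef>
#include <string>

// Returns the length of the complete FIX message starting at `start`,
// or 0 if the buffer does not yet hold a complete message.
// Note: std::stoul will throw on a corrupted length field.
size_t fix_message_length(const std::string& buf, size_t start) {
    const char SOH = '\x01';
    size_t tag9 = buf.find("\x01" "9=", start);   // locate BodyLength
    if (tag9 == std::string::npos) return 0;
    size_t value = tag9 + 3;                      // first digit of the length
    size_t end = buf.find(SOH, value);            // SOH ending the 9= field
    if (end == std::string::npos) return 0;
    size_t body_len = std::stoul(buf.substr(value, end - value));
    size_t total = (end + 1 - start) + body_len + 7;  // + "10=xxx<SOH>"
    return (buf.size() - start >= total) ? total : 0;
}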
As I see it, your problem is to separate the TCP messages. To solve it you can rely on the length of the payload (as in this answer) and the checksum. If the checksum is correct for the data of the specified length, then your message is correct; if not, you need to seek in the previous part for the start of the message, or drop that part of the data. At least this approach will help you find the point where the data was segmented.
For a more precise answer it would be better to see a small part of the data.
But your main problem is the segmentation of the packets. For better performance you should try to eliminate this problem (maybe by changing the network card, e.g. to an Intel one).
I am reimplementing an old network-layer library, but using Boost.Asio this time. Our software talks TCP/IP with a 3rd-party application. Several messages behave very well on both sides, but there is one case I misunderstand:
The 3rd party sends two messages (msg A and msg B) one right after the other (with very short timing between them), but I receive only part of message A in TCP packet 1, and the end of message A plus the whole of message B in TCP packet 2 (I sniffed this with Wireshark).
I had not thought of this case. I am wondering whether it is common with TCP and whether my layer should adapt to it, or whether I should tell the 3rd party to check what they do on their side so that I receive the two messages in different packets.
Packets can be fragmented and arrive out-of-sequence. The TCP stack which receives them should buffer and reorder them, before presenting the data as an incoming stream to the application layer.
My problem is with message B, which I don't see because it arrives after the end of message A in the same packet.
You can't rely on "messages" having a one-to-one mapping to "packets": to the application, TCP (not UDP) looks like a "streaming" protocol.
An application which sends via TCP needs another way to separate messages. Sometimes that's done by marking the end of each message. For example SMTP marks the end-of-message as follows:
The transmission of the body of the mail message is initiated with a
DATA command after which it is transmitted verbatim line by line and
is terminated with an end-of-data sequence. This sequence consists of
a new-line (CRLF), a single full stop (period), followed by
another new-line. Since a message body can contain a line with just a
period as part of the text, the client sends two periods every time a
line starts with a period; correspondingly, the server replaces every
sequence of two periods at the beginning of a line with a single one.
This escaping method is called dot-stuffing.
Alternatively, the protocol might specify a prefix at the start of each message, which will indicate the message-length in bytes.
If you're coding the TCP stack itself, then you'll have access to the TCP segment headers. Note, though, that the "Data offset" field only tells you where each segment's payload begins (the payload length comes from the enclosing IP header), and segments still need not correspond to application messages.
Yes, this is common. TCP/IP is a streaming protocol and your "logical" packet may be split across many "physical" packets, so the client is responsible for assembling the higher-level packets. Additionally, TCP/IP guarantees the proper ordering, so you don't have to worry about assembling out of order packets.
Your problem has got nothing to do with TCP at all. Your problem is that you expected Asio to do the message parsing for you. It does not; you have to implement it yourself.
If your messages are all the same size, do an async read for that size.
If they are of different lengths, do an async read for your header size, analyze the header, and then do an async read for the rest of the message according to the header (see the sketch after this list).
If your messages are of variable length and the size is unknown, but there is a defined end character or sequence, then you have to save the remaining bytes after that end sequence and append the next read to that remainder.
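A minimal sketch of the header-then-body case with Boost.Asio (the class and handler names are illustrative; error handling and session lifetime management, e.g. enable_shared_from_this, are elided for brevity):

#include <boost/asio.hpp>
#include <cstdint>
#include <vector>

using boost::asio::ip::tcp;

class Session {
public:
    explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { read_header(); }

private:
    void read_header() {
        // async_read completes only when the full 2-byte header is in.
        boost::asio::async_read(socket_,
            boost::asio::buffer(header_, sizeof(header_)),
            [this](boost::system::error_code ec, std::size_t) {
                if (ec) return;
                uint16_t len = (uint16_t(header_[0]) << 8) | header_[1];
                body_.resize(len);
                read_body();
            });
    }

    void read_body() {
        boost::asio::async_read(socket_,
            boost::asio::buffer(body_),
            [this](boost::system::error_code ec, std::size_t) {
                if (ec) return;
                // ...hand body_ to the message parser here...
                read_header();   // then wait for the next message
            });
    }

    tcp::socket socket_;
    uint8_t header_[2];
    std::vector<uint8_t> body_;
};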
If I write a server, how can I implement the receive function to get all the data sent by a specific client if I don't know how that client sends the data?
I am using the TCP/IP protocol.
If you really have no protocol defined, then all you can do is accept groups of bytes from the client as they arrive. Without a defined protocol, there is no way to know that you have received "all the bytes" that the client sent, since there is always the possibility that a network failure occurred somewhere between the client and your server during transmission, causing the last part of the stream not to arrive at the server. In that case, you would get the usual end-of-stream indication from the TCP socket (e.g. recv() returning 0, or an error such as ECONNRESET), so you would know that you aren't going to receive any more data from the client (because the TCP connection is now disconnected)... but that isn't quite the same thing as knowing you have received all of the data the client meant for you to receive.
Depending on your application, that might be good enough. If not, then you'll have to work out a protocol and trust that your clients will abide by its rules. Having the client first send a header saying how many bytes it plans to send is a good approach; having it send some special "okay, that's all I meant to send" indicator is also possible (although that way you have to watch out for false positives, since the special indicator could appear by chance inside the data itself).
One call to send() does not equal one call to recv(). Either send a header so the receiver knows how much data to expect, or send some sort of sentinel value so that the receiver knows when to stop reading.
It depends on how you want to design your protocol.
ASCII protocols usually use a special character to delimit the end of the data, while binary protocols usually send the length of the data first as a fixed-size integer (both sides know this size) and then the variable-length data follows.
You can combine the size and your data in one buffer and call send() once. People usually use the first 2 bytes for the size of the data in a packet, like this:
|size N (2 bytes) | data (N bytes) |
In this case, your custom data can be up to 65535 bytes long.
Since TCP does not preserve message boundaries, it doesn't matter how many times you call send(). On the other side you have to call receive until you get the 2-byte size N, then keep calling receive until you have the N bytes of data that were sent.
UPDATE: This is just a sample to show how to handle message boundaries in TCP. Security/encryption is a whole different story and deserves its own thread. That said, do not simply copy this design. :)
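A sketch of that single-buffer framing in C++ (POSIX sockets; I picked big-endian for the prefix, the two sides just need to agree):

#include <sys/types.h>   // ssize_t
#include <sys/socket.h>  // send
#include <cstdint>
#include <cstring>
#include <vector>

bool send_packet(int sock, const void* data, uint16_t n) {
    std::vector<uint8_t> frame(2 + n);
    frame[0] = uint8_t(n >> 8);     // size, high byte first
    frame[1] = uint8_t(n & 0xFF);
    std::memcpy(frame.data() + 2, data, n);
    size_t sent = 0;                // send() may write less than asked
    while (sent < frame.size()) {
        ssize_t r = send(sock, frame.data() + sent, frame.size() - sent, 0);
        if (r <= 0) return false;
        sent += size_t(r);
    }
    return true;
}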
TCP is stream-based, so there is no concept of a "complete message": it's given by a higher-level protocol (e.g. HTTP) or you'd have to invent it yourself. If you were free to use UDP (datagram-based), message boundaries would be preserved automatically: each call to send() would produce one datagram, and each call to receive() would return exactly one.
The newer SCTP protocol also supports the concept of a message natively.
With TCP, to implement messages, you have to tell the receiver the size of the message. It can be the first few bytes (commonly 2, since that allows messages up to 64K -- but you have to be careful of byte order if you may be communicating between different systems), or it can be something more complicated. HTTP, for example, has a whole set of rules by which the receiver determines the length of the message. One of them is the Content-Length HTTP header, which contains a string representing the number of bytes in the body of the message. Header-only HTTP messages are simply delimited by a blank line. As you can see, there are no easy (or standard) answers.
TCP is a stream based protocol. As such there is no concept of length of data built into TCP in the same way as there is no concept of data length for keyboard input.
It is therefore up to the higher level protocol to specify the end of the message. This can be done by including the packet length in the protocol or specifying a special end-of-message byte sequence.
For example, HTTP headers are terminated by a double \r\n sequence, and the length of the message body can be obtained from the Content-Length header.
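As a quick sketch of that framing rule (illustrative only; real HTTP parsing also has to handle chunked encoding, case-insensitive header names, and so on):

#include <cstddef>
#include <string>

// Returns the total message size (headers + body) once the headers are
// complete and carry a Content-Length, or 0 if more data is needed.
size_t http_message_size(const std::string& buf) {
    size_t hdr_end = buf.find("\r\n\r\n");          // blank line ends headers
    if (hdr_end == std::string::npos) return 0;
    size_t body_len = 0;
    size_t cl = buf.find("Content-Length:");
    if (cl != std::string::npos && cl < hdr_end)
        body_len = std::stoul(buf.substr(cl + 15)); // stoul skips the spaces
    return hdr_end + 4 + body_len;
}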
I have to send mesh data via TCP from one computer to another... These meshes can be rather large. I'm having a tough time deciding on the best way to send them over TCP, as I don't know much about network programming.
Here is my basic class structure that I need to fit into buffers to be sent via TCP:
#include <vector>

class Primitive;   // forward declarations so the snippet compiles as one unit
class Vertex;

class PrimitiveCollection
{
    std::vector<Primitive*> primitives;
};

enum PRIMTYPES { FAN, STRIP /* etc... */ };

class Primitive
{
    PRIMTYPES primType; // PRIMTYPES is just an enum with values for fan, strip, etc...
    unsigned int numVertices;
    std::vector<Vertex*> vertices;
};

class Vertex
{
    float X;
    float Y;
    float Z;
    float XNormal;
    float ZNormal;
};
I'm using the Boost library and their TCP stuff... it is fairly easy to use. You can just fill a buffer and send it off via TCP.
However, this buffer can of course only be so big, and I could have up to 2 megabytes of data to send.
So what would be the best way to get the above class structure into the buffers needed and sent over the network? I would also need to deserialize on the receiving end.
Any guidance in this would be much appreciated.
EDIT: I realize after reading this again that this is really a more general problem that is not specific to Boost... It's more a problem of chunking the data and sending it. However, I'm still interested to see whether Boost has anything that can abstract this away somewhat.
Have you tried it with Boost's TCP? I don't see why 2 MB would be an issue to transfer. I'm assuming we're talking about a LAN running at 100 Mbps or 1 Gbps, a computer with plenty of RAM, and no requirement for response times under 20 ms or so? If your goal is just to get all 2 MB from one computer to another, just send it; TCP will handle chunking it up for you.
I have a TCP latency checking tool that I wrote with Boost, that tries to send buffers of various sizes, I routinely check up to 20MB and those seem to get through without problems.
I guess what I'm trying to say is don't spend your time developing a solution unless you know you have a problem :-)
--------- Solution Implementation --------
Now that I've had a few minutes on my hands, I went through and made a quick implementation of what you were talking about: https://github.com/teeks99/data-chunker There are three big parts:
The serializer/deserializer: Boost has its own, but it's not much better than rolling your own, so I did.
Sender - Connects to the receiver over TCP and sends the data
Receiver - Waits for connections from the sender and unpacks the data it receives.
I've included the .exe(s) in the zip; run Sender.exe/Receiver.exe --help to see the options, or just look at main().
More detailed explanation:
Open two command prompts, and go to DataChunker\Debug in both of them.
Run Receiver.exe in one of them.
Run Sender.exe in the other one (possibly on a different computer, in which case add --remote-host=IP.ADD.RE.SS after the executable name; if you want to try sending more than once, add --num-sends=10 to send ten times).
Looking at the code, you can see what's going on: the receiver and sender ends of the TCP socket are created in the respective main() functions. The sender creates a new PrimitiveCollection and fills it with some example data, then serializes and sends it... the receiver deserializes the data into a new PrimitiveCollection, at which point the primitive collection could be used by someone else, but I just write to the console that it's done.
Edit: Moved the example to github.
Without anything fancy, from what I remember in my network class:
Send a message to the receiver asking what size data chunks it can handle
Take the minimum of that and your own sending capabilities, then reply saying what chunk size you'll be sending and how many chunks you'll be sending.
After you get that, just send each chunk. You'll want to wait for an "Ok" reply, so you know you're not wasting time sending to a client that's not there. This is also a good time for the client to send an "I'm canceling" message instead of "Ok".
Send until all packets have been acknowledged with an "Ok".
The data is transferred.
This works because TCP guarantees in-order delivery. UDP would require packet numbers (for ordering).
Compression is the same, except you're sending compressed data. (Data is data, it all depends on how you interpret it). Just make sure you communicate how the data is compressed :)
As for examples, all I could dig up was this page and this old question. I think what you're doing would work well in tandem with Boost.Serialization.
I would like to add one more point to consider: setting the TCP socket buffer size can increase socket performance to some extent.
There is a utility, iperf, that lets you test the speed of exchange over a TCP socket. I ran a few tests on Windows in a 100 Mbps LAN. With the default 8 KB TCP window size the speed is 89 Mbits/sec, and with a 64 KB TCP window size it is 94 Mbits/sec.
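For reference, here is a sketch of that knob on a POSIX socket (on Windows the same call takes a const char* option pointer); the 64 KB value just mirrors the experiment above, and the right size depends on your bandwidth-delay product:

#include <sys/socket.h>

bool set_socket_buffers(int sock, int bytes) {
    // The OS may round these values up or cap them.
    return setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) == 0
        && setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) == 0;
}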
In addition to how to chunk and deliver the data, another issue you should consider is platform differences. If the two computers are the same architecture, and the code running on both sides is the same version of the same compiler, then you should probably be able to just dump the raw memory structure across the network and have it work on the other side. If everything isn't the same, though, you can run into problems with endianness, structure padding, field alignment, etc.
In general, it's good to define a network format for the data separately from your in-memory representation. That format can be binary, in which case numeric values should be converted to standard forms (mainly, changing endianness to "network order", which is big-endian), or it can be textual. Many network protocols opt for text because it eliminates a lot of formatting issues and because it makes debugging easier. Personally, I really like JSON. It's not too verbose, there are good libraries available for every programming language, and it's really easy for humans to read and understand.
One of the key issues to consider when defining your network protocol is how the receiver knows when it has received all of the data. There are two basic approaches. First, you can send an explicit size at the beginning of the message; then the receiver knows to keep reading until it's gotten that many bytes. The other is to use some sort of end-of-message delimiter. The latter has the advantage that you don't have to know in advance how many bytes you're sending, but the disadvantage that you have to figure out how to make sure the end-of-message delimiter can't appear in the message.
Once you decide how the data should be structured as it's flowing across the network, then you should figure out a way to convert the internal representation to that format, ideally in a "streaming" way, so you can loop through your data structure, converting each piece of it to network format and writing it to the network socket.
On the receiving side, you just reverse the process, decoding the network format to the appropriate in-memory format.
My recommendation for your case is to use JSON. 2 MB is not a lot of data, so the overhead of generating and parsing won't be large, and you can easily represent your data structure directly in JSON. The resulting text will be self-delimiting, human-readable, easy to stream, and easy to parse back into memory on the destination side.
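As a sketch of how little code the JSON route needs, here is a hand-rolled streaming writer for the Vertex type from the question (simplified to a struct here; the output field names are my choice). Streaming each vertex as it is visited keeps memory usage flat:

#include <cstddef>
#include <ostream>
#include <vector>

struct Vertex { float X, Y, Z, XNormal, ZNormal; };

void write_vertices_json(std::ostream& out, const std::vector<Vertex>& vs) {
    out << "[";
    for (size_t i = 0; i < vs.size(); ++i) {
        const Vertex& v = vs[i];
        out << (i ? "," : "")
            << "{\"x\":" << v.X << ",\"y\":" << v.Y << ",\"z\":" << v.Z
            << ",\"xn\":" << v.XNormal << ",\"zn\":" << v.ZNormal << "}";
    }
    out << "]";
}

A real deserializer on the other side would parse this back field by field; with a JSON library of your choice that is only a few lines as well.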
Is there a good method on how to transfer a file from say... a client to a server?
Probably just images, but my professor was asking for any type of files.
I've looked around and am a little confused as to the general idea.
So if we have a large file, we can split that file into segments...? Then send each segment off to the server.
Should I also use a while loop to receive all the files/segments on the server side? Also, how will my server know that all the segments were received, given that it doesn't know beforehand how many segments there are?
I was looking on the Cplusplus website and found that there is something like a binary transfer mode for files...
Thanks for all the help =)
If you are using TCP:
You are right, there is no way to "know" how much data you will be receiving. This gives you a few options:
1) Before transmitting the image data, first send the number of bytes to be expected. So your first 4 bytes might be the 4-byte integer "4096". Then your client can read the first 4 bytes, "know" that it is expecting 4096 bytes, and then malloc(4096) so it has room for the rest. Then your server can send() 4096 bytes worth of image data.
When you do this, be aware that you might have to recv() multiple times; for one reason or another, you might not have received all 4096 bytes. So you will need to check the return value of recv() to make sure you have gotten everything (see the sketch after option 2).
2) If you are just sending one file, you could just have your receiver read until the server closes the connection, i.e. keep recv()ing from the socket until recv() returns 0. This is a bit harder: you will have to keep track of how much you have received, and if your buffer is full you will have to reallocate it. I don't recommend this method, but it would technically accomplish the task.
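Going back to the point in option 1 about partial reads, here is a minimal recv-until-complete loop (the function name is mine):

#include <sys/types.h>   // ssize_t
#include <sys/socket.h>  // recv
#include <cstddef>

bool recv_all(int sock, char* buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0) return false;   // 0 = peer closed, <0 = error
        got += size_t(n);
    }
    return true;
}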
If you are using UDP:
This means that you don't have reliable transfer, so packets might be dropped, and they might arrive out of order. If you are going to use UDP, you must fragment your data into little segments, and both the sender and receiver must agree on how large a segment is (100 bytes? 1000 bytes?).
Not only that, but you must also transmit a sequence number with each packet, i.e. label each packet #1, #2, etc. Your client must be able to tell if any packets are missing (if you receive packets 1, 2, and 4, you are missing #3) and to make sure they end up in order (if you receive 3, 2, then 1, the packets must still be saved to the file in the correct order: 1, 2, then 3).
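A sketch of what such a per-packet header might look like (the field layout and sizes are illustrative choices, not a standard):

#include <cstdint>

#pragma pack(push, 1)        // avoid padding so the wire layout is explicit
struct ChunkHeader {
    uint32_t seq;            // chunk number: 0, 1, 2, ...
    uint32_t total;          // total number of chunks in the file
    uint16_t payload_len;    // bytes of file data in this datagram
};
#pragma pack(pop)
// Each datagram is [ChunkHeader][payload_len bytes of file data]. The
// receiver writes the payload at offset seq * CHUNK_SIZE (the agreed fixed
// chunk size) and tracks which seq values have arrived, so it can detect
// and re-request missing chunks.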
So for your assignment, well, it will depend on what protocol you have to/are allowed to use.
If you use a UDP-based transfer protocol, you will have to break the file up into chunks for network transmission. You'll also have to reassemble them in the correct order on the receiving end and verify the results. If you use a TCP-based transfer protocol, all of this will be taken care of under the hood.
You should consult Beej's Guide to Network Programming for how best to send and receive data and use sockets in general. It explains most of the things about which you are asking.
There are many ways of transferring files. If you're transferring files in a lossless manner, then you're basically going to divide the file into chunks, tag each chunk with a sequence number, send the chunks to the other side, and reconstitute the file. Stream-oriented protocols are simpler, since lost packets will be retransmitted for you. If you're using an unreliable protocol, then you will need to retransmit missing packets and re-sequence chunks that arrive out of order.
If lossy transfer is acceptable (as when transferring video or online game data), use an unreliable protocol. Lossy transfer is simpler because you don't have to retransmit missing chunks; all you need to do is make sure the chunks are processed in the proper sequence.
Many protocols send a terminator packet to indicate the end of transmission. You could use this strategy if you don't want to send the number of chunks to the other side before transmission.