Handling TCP Streams - C++

Our server is, in effect, packet-based. It is an adaptation of an old serial-based system and has been added to, modified, and re-built over the years. Since TCP is a stream protocol and not a packet protocol, the packets sometimes get broken up. The ServerSocket is designed in such a way that when the client sends data, part of that data contains the size of our message, such as 55. Sometimes these packets are split into multiple pieces. They arrive in order, but since we do not know how the messages will be split, our server sometimes cannot identify the split message.
So, having given you the background information: what is the best method to rebuild the packets as they come in if they are split? We are using C++ Builder 5 (yes, I know, an old IDE, but it is all we can work with at the moment; it would be a LOT of work to re-design in .NET or newer technology).

TCP guarantees that the data will arrive in the same order it was sent.
That being said, you can just append all incoming data to a buffer. Then check whether your buffer contains one or more complete packets, remove them from the buffer, and keep any remaining data in the buffer for a future check.
This, of course, assumes that your packets have some header that indicates the size of the data that follows.
Let's consider packets with the following structure:
[LEN] X X X...
where LEN is the size of the data and each X is a byte.
If you receive:
4 X X X
[--1--]
the packet (message 1) is not complete, so you can leave it in the buffer. Then, when other data arrives, you just append it to the buffer:
4 X X X X 3 X X X
[---1---] [--2--]
You then have two complete messages that you can easily parse.
If you do it, don't forget to send any length field in a host-independent form (htons/ntohs and htonl/ntohl can help).
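Something like the following, for illustration (a minimal sketch only, assuming a one-byte length prefix as in the diagram above; onDataArrived and handleMessage are placeholder names, and the same pattern works with Winsock or BSD sockets):

// Minimal sketch: append whatever recv() returns to a growing buffer, then peel off
// every complete [LEN] X... message that the buffer contains.
#include <string>
#include <winsock2.h>                         // BSD sockets: <sys/socket.h> and an int descriptor

void handleMessage(const std::string& msg);   // placeholder, defined elsewhere

std::string rxBuffer;                         // persists across recv() calls

void onDataArrived(SOCKET s)
{
    char chunk[1024];
    int n = recv(s, chunk, sizeof(chunk), 0);
    if (n <= 0)
        return;                               // error or connection closed
    rxBuffer.append(chunk, n);                // 1. append whatever arrived

    // 2. extract every complete message currently sitting in the buffer
    while (!rxBuffer.empty())
    {
        unsigned char len = static_cast<unsigned char>(rxBuffer[0]);
        if (rxBuffer.size() < 1u + len)
            break;                            // message still incomplete, wait for more data
        handleMessage(rxBuffer.substr(1, len));
        rxBuffer.erase(0, 1u + len);          // 3. drop the consumed message, keep the rest
    }
}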

This is often accomplished by prefixing messages with a one or two-byte length value which, like you said, gives the length of the remaining data. If I've understood you correctly, you're sending this as plain text (i.e., '5', '5') and this might get split up. Since you don't know the length of a decimal number, it's somewhat ambiguous. If you absolutely need to go with plain text, perhaps you could encode the length as a 16-bit hex value, i.e.:
00ff <255 bytes data>
000a <10 bytes data>
This way, the length of the size header is fixed to 4 bytes and can be used as a minimum read length when receiving on the socket.
Edit: Perhaps I misunderstood -- if reading the length value isn't a problem, deal with splits by concatenating incoming data to a string, byte buffer, or whatever until its length is equal to the value you read in the beginning. TCP will take care of the rest.
Take extra precautions to make sure that you can't get stuck in a blocking read state should the client not send a complete message. For example, say you receive the length header, and start a loop that keeps reading through blocking recv() calls until the buffer is filled. If a malicious client intentionally stops sending data, your server might be locked until the client either disconnects, or starts sending.
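One way to implement that precaution, as a sketch only (waitReadable and the timeout are illustrative; the same idea works with Winsock or BSD sockets), is to poll with select() before each blocking recv():

// Sketch: wait at most timeoutSec seconds for data before each blocking recv(), so a
// client that stops sending mid-message cannot pin the server in recv() forever.
#include <winsock2.h>          // on POSIX use <sys/select.h> and <sys/socket.h>

// Returns true if the socket became readable within timeoutSec seconds.
bool waitReadable(SOCKET s, int timeoutSec)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(s, &readSet);
    timeval tv = { timeoutSec, 0 };
    // The first argument is ignored by Winsock; POSIX requires the highest fd + 1.
    return select(static_cast<int>(s) + 1, &readSet, NULL, NULL, &tv) == 1;
}

The receive loop can then give up on (or reset) a partially received message whenever waitReadable() returns false, instead of blocking indefinitely.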

I would have a function called readBytes or something that takes a buffer and a length parameter and reads until that many bytes have been read. You'll need to capture the number of bytes actually read and if it's less than the number you're expecting, advance your buffer pointer and read the rest. Keep looping until you've read them all.
Then call this function once for the header (containing the length), assuming that the header is a fixed length. Once you have the length of the actual data, call this function again.
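A sketch of such a readBytes helper (the SOCKET type and the exact signature are illustrative; adapt them to your own wrapper):

// Read exactly 'length' bytes into 'buffer', looping until they have all arrived.
// Returns false on a socket error or if the peer closes the connection early.
#include <winsock2.h>          // on POSIX use <sys/socket.h> and an int descriptor

bool readBytes(SOCKET s, char* buffer, int length)
{
    int total = 0;
    while (total < length)
    {
        int n = recv(s, buffer + total, length - total, 0);
        if (n <= 0)
            return false;      // n == 0: peer closed the connection; n < 0: error
        total += n;            // advance past the bytes we actually received
    }
    return true;
}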


How do I get the size of the msg_control buffer for recvmsg?

When using recvmsg, I use MSG_TRUNC and MSG_PEEK like so:
msgLen = recvmsg(fd, &hdr, MSG_PEEK | MSG_TRUNC);
This gives me the size of the buffer to allocate for the next message.
My question is: how do I get the size of the buffer I should allocate for the msg_control field inside the header?
Based on the documentation, you need to allocate the buffer for msg_control with the size msg_controllen. To know the size beforehand, you can call recvmsg(fd, &hdr, MSG_PEEK | MSG_TRUNC) as you did: MSG_PEEK won't remove the message, and MSG_TRUNC makes the call return the real size of the message even if the buffer is too small.
A few solutions:
Call recvmsg(fd, &hdr, MSG_PEEK | MSG_TRUNC), size the buffer in hdr based on the returned length, and call it again without the flags (sketched below).
Allocate a buffer big enough, if you know the size of your messages beforehand, and call recvmsg. If it returns -1, check errno; and check hdr.msg_flags to see whether the data or control data was truncated (MSG_TRUNC or MSG_CTRUNC).
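For the data buffer, the first option might look roughly like this (a sketch assuming Linux-style recvmsg behaviour on a datagram socket, where MSG_TRUNC makes the call return the real datagram length even if the supplied buffer is smaller; recvWholeDatagram is an illustrative name):

// Sketch: peek to learn the datagram size, allocate, then receive the datagram for real.
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <vector>

ssize_t recvWholeDatagram(int fd, std::vector<char>& out)
{
    char probe;                                    // tiny scratch buffer for the peek
    struct iovec iov = { &probe, 1 };
    struct msghdr hdr = {};
    hdr.msg_iov = &iov;
    hdr.msg_iovlen = 1;

    ssize_t realLen = recvmsg(fd, &hdr, MSG_PEEK | MSG_TRUNC);
    if (realLen < 0)
        return -1;

    out.resize(realLen);
    iov.iov_base = realLen ? &out[0] : &probe;     // now point at a buffer of the right size
    iov.iov_len  = realLen ? out.size() : 1;
    return recvmsg(fd, &hdr, 0);                   // this call actually consumes the datagram
}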
I cannot speak for platforms other than macOS (whose core is based on a FreeBSD kernel, so BSD systems may behave the same way), and the POSIX standard is not much help either, as it leaves pretty much all the details to be defined by the protocol. By default, though, recvmsg on macOS delivers no control data at all for a UDP socket: no matter what size you set the control buffer to on input, msg_controllen will always be 0 on output. If you wish to receive any control data, you first have to explicitly enable that for the socket.
E.g. if you want to know both addresses, source and destination address of a packet (msg_name only gives you the source address of a received packet), then you have to do this:
int yes = 1;
setsockopt(soc, IPPROTO_IP, IP_RECVDSTADDR, &yes, sizeof(yes));
And now you'll get the destination address for IPv4 sockets documented as
The msg_control field in the msghdr structure points to a buffer that
contains a cmsghdr structure followed by the IP address. The cmsghdr
fields have the following values:
cmsg_len = sizeof(struct in_addr)
cmsg_level = IPPROTO_IP
cmsg_type = IP_RECVDSTADDR
This means you need to provide at least 16 bytes of storage on my system: struct cmsghdr alone is 12 bytes there (three 32-bit fields) and an IPv4 address is another 4 bytes, which makes 16 bytes together. The correct way to compute this value is the CMSG_SPACE macro, which adds the (aligned) header size to the (aligned) payload size; on my system CMSG_SPACE(sizeof(struct in_addr)) evaluates to exactly 16.
As I know in advance which options I have enabled and which control data I will receive, I can exactly calculate the required space in advance.
For raw and other more obscure sockets, certain control data may always be included in the output by default, even if not explicitly enabled, but this control data will then always be equal in size and won't fluctuate from packet to packet as the packet payload size does. Thus once you know the correct size, you can rely upon the fact that it won't change, at least not without you enabling/disabling any options.
If your control data buffer was too small, the MSG_CTRUNC flag is always set in the output msg_flags (even if you passed no flags on input). In that case, increase the control data buffer size and try again (with the next packet, or with the same packet if you used MSG_PEEK as an input flag) until you manage to make the call without getting MSG_CTRUNC back. Then look at what the msg_controllen field says: on input it is the amount of buffer space available, but on output it contains the amount of buffer space that was actually used. That is the exact buffer size you need to receive the control data of all future packets on that socket, unless you change options that cause more or less control data to be sent, in which case you just have to detect the size again the same way as before.
For a more complete example, you may also have a look at:
https://stackoverflow.com/a/49308499/15809
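Putting the pieces above together, a sketch of reading that destination address (BSD/macOS-style IP_RECVDSTADDR assumed, enabled with the setsockopt call shown earlier; the function name is illustrative):

// Sketch: receive one UDP datagram together with its IPv4 destination address,
// which arrives as ancillary data once IP_RECVDSTADDR has been enabled.
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>
#include <string.h>

void receiveWithDstAddr(int soc)
{
    char payload[1500];
    struct iovec iov = { payload, sizeof(payload) };

    // Room for one cmsghdr carrying a struct in_addr; the union keeps the buffer aligned.
    union {
        char buf[CMSG_SPACE(sizeof(struct in_addr))];
        struct cmsghdr align;
    } control;

    struct msghdr hdr;
    memset(&hdr, 0, sizeof(hdr));
    hdr.msg_iov = &iov;
    hdr.msg_iovlen = 1;
    hdr.msg_control = control.buf;
    hdr.msg_controllen = sizeof(control.buf);

    if (recvmsg(soc, &hdr, 0) < 0 || (hdr.msg_flags & MSG_CTRUNC))
        return;                                     // error, or control buffer too small after all

    for (struct cmsghdr* c = CMSG_FIRSTHDR(&hdr); c != NULL; c = CMSG_NXTHDR(&hdr, c)) {
        if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_RECVDSTADDR) {
            struct in_addr dst;
            memcpy(&dst, CMSG_DATA(c), sizeof(dst));
            // dst now holds the destination IPv4 address of this datagram
        }
    }
}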
I am afraid you can't get that value from the Posix.1g sockets API. Not sure about all implementations, but not possible in Linux. As you may notice, no control flow is provided in ancillary data buffers, so you will need to implement it yourself in case you are sending a lot of info between processes. On the other hand, for common case uses, you already know what you are going to receive at compile time (but you probably already know this). If you need to implement you own control flow, take into account that, in Linux, ancillary data seems to behave like a stream socket.
However, you can get/set the buffer length of the worst case scenario in /proc/sys/net/core/optmem_max, see cmsg(3). So, I guess you could set it to a reasonable value and declare a buffer that big.

How to determine length of buffer at client side

I have a server sending a multi-dimensional character array
char buff1[][3] = { {0xff,0xfd,0x18}, {0xff,0xfd,0x1e}, {0xff,0xfd,21} };
In this case buff1 carries 3 messages (each having 3 characters). There could be multiple instances of buffers on the server side with a variable number of messages (note: each message will always have 3 characters), e.g.
char buff2[][3] = { {0xff,0xfd,0x20}, {0xff,0xfd,0x27} };
How should I store the size of these buffers on the client side when compiling the code?
The server should send information about the length (and any other structure) of the message as part of the message itself.
An easy way to do that is to send the number of bytes in the message first, then the bytes in the message. Often you also want to send the version of the protocol (so you can detect mismatches) and maybe even a message id header (so you can send more than one kind of message).
If blazing fast performance isn't the goal (and you are talking over a network interface, which tends to be slower than computers: parsing may be cheap enough that you don't care), using a higher level protocol or format is sometimes a good idea (json, xml, whatever). This also helps with debugging problems, because instead of debugging your custom protocol, you get to debug the higher level format.
Alternatively, you can send some sign that the sequence has terminated. If there is a value that is never a valid sequence element (such as 0,0,0), you could send that to say "no more data". Or you could send each element with a header saying if it is the last element, or the header could say that this element doesn't exist and the last element was the previous one.
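A sketch of the count-first variant for these fixed 3-byte records (POSIX sockets assumed; sendRecords is an illustrative name, and a real implementation would also loop over partial send() results):

// Server side: send the number of 3-byte records in network byte order, then the
// records themselves. The client reads the 4-byte count, converts it with ntohl,
// and then reads exactly count * 3 bytes (looping until they have all arrived).
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>     // htonl / ntohl
#include <stdint.h>

int sendRecords(int sock, const char (*records)[3], uint32_t count)
{
    uint32_t countNet = htonl(count);              // length prefix, host-independent
    if (send(sock, &countNet, sizeof(countNet), 0) != (ssize_t)sizeof(countNet))
        return -1;
    return (int)send(sock, records, count * 3, 0); // then the payload itself
}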

Winsock2 tcp/ip - some data packets are ignored probably due to null terminator from the previous packet

I wrote a simple client-server program. Network.h is a header file which uses Winsock2.h (TCP/IP mode) to create a socket, accept/connect in blocking mode, and send/recv in non-blocking mode. I made it so that the function string TNetwork::Recv(int size) returns the string "Nothing" if it gets a WSAEWOULDBLOCK error (no data has been received yet).
Here is my main function:
int main(){
    string Ans;
    TNetwork::StartUp(); //WSA start up, etc
    cin >> Ans;
    if (Ans == "0"){ // 0 --> server
        TNetwork::SetupAsServer(); //accept connection (in blocking mode!)
        while (true){
            TNetwork::Send("\nAss" + '\0'); //without null terminator, the client may read extra bytes, causing undefined behavior (?)
            TNetwork::Send("embly" + '\0');
            cin >> Ans;
        }
    }
    else{ // others --> regard Ans as IP address. e.g. I can type "127.0.0.1"
        TNetwork::SetupAsClient(Ans);
        string Rec;
        while (true){
            Rec = TNetwork::Recv(1000);
            if (Rec != "Nothing"){
                cout << Rec;
            }
        }
    }
    system("PAUSE");
}
Supposedly, the client should print "Assembly" when connected and whenever the server enters anything into its console window. Sometimes, though, the client only prints "\nAss" to the console, without the "embly".
To my understanding, TCP/IP ensures that all data is delivered, and in the correct order, so I guess what happens is that both packets arrive at the same time, which happens quite often over an unstable internet connection. And because of the null terminator, the client ignores the "embly", since the Recv() function stops reading when it hits a null terminator.
So, how can I ensure that the client will always read all data packets correctly?
Yes, the network stack will send the data in the correct order and doesn't care what termination type you use. This has to do with how you're receiving and processing the data stream (note: not packets, stream). If you receive all 11 bytes and print it to the screen, the print function will stop when it reaches the zero, but the rest of the data is still there.
Note: since it's a stream, what happens if you received only 10 bytes of data from the stream? You need to scan what you receive for the zero to know if you've received a full "zero-terminated string" if that's how you want to communicate your data.
EDIT: Also, I don't think "\nAss" + '\0' is doing what you think it is. Instead of adding a 0 character to the end of the string (which already has one, by the way), it's adding 0 to your string pointer.
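To illustrate the difference (a sketch only; whether a terminator actually goes out on the wire also depends on what TNetwork::Send accepts, which the question does not show):

// Pointer arithmetic versus real concatenation:
#include <string>
#include <cstring>

const char* wrong = "\nAss" + '\0';                 // pointer + 0: still points at "\nAss",
                                                    // nothing has been appended at all
std::string right = std::string("\nAss") + '\0';    // a 5-byte string whose last byte is NUL

// right.size() == 5, but strlen(right.c_str()) is still 4, so to actually transmit the
// terminator you must send right.data() with length right.size(), not a strlen()-based length.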
As #mark points out, TCP is all about streams, not packets. TCP takes care of ensuring that data is reliably transmitted from A to B and that the data is delivered to the consumer in the order in which it was transmitted. Yes, the data is packetized on the wire, but the TCP stack on the system takes those packets and builds the stream which it makes available to you through the recv() function. The TCP stack handles out-of-order data, missing data, and duplicated data such that by the time your application sees it, the stream is a mirror copy of what the sender sent.
To properly receive TCP data, you will typically need some kind of loop that reads data from the socket when it becomes available. The way I normally do this is to have a thread that is dedicated to servicing the socket. In the thread function is a loop that reads data from the socket when it becomes available and is idle otherwise. This loop reads data into a buffer of, say, 1 KB. Once the data is received from the socket into this buffer, the buffer is copied to another thread for processing. In the thread function for the processing thread is a loop that receives the 1 KB buffers from the socket thread and adds them to the back end of a master buffer of, say, 1 MB. The processing thread then processes the messages out of this master buffer and makes them available to the application.
For a simple demo application, two threads may be overkill. The two threads I've described could be certainly be combined into one, but for my application, it is more efficient to have two threads and take advantage of the multiple cores on my system. The point is, if you're going to have a front-end UI, there's not going to be a way around using at least one thread and still have the UI be responsive.
One other thing. There are two commonly-used mechanisms for protocol design. You're using one, namely, a marker (e.g., a null terminator, etc.) to signal the begin/end of a message. I don't prefer this mechanism mainly because the marker may actually need to be part of the message at some point. The other mechanism is to have a header on each message that tells, at a minimum, how long the message is. I prefer this mechanism and include in my headers a sync word and the message type as well. For example,
struct Header
{
    __int16 _sync;   // a hex pattern, e.g., 0xABCD
    __int16 _type;
    __int32 _length;
};
That's a total of 8 bytes. So when processing from the master buffer, I read the first 8 bytes, verify the sync word, and get the length. I determine if there are 'length' bytes available in the master buffer. If not, I have to wait until the socket thread provides me more data before checking again. If so, I extract 'length' bytes from the master buffer and pass that to an object created according to the specified type, which knows how to interpret that particular message. Then repeat.
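A rough sketch of that processing loop (using std::string as the master buffer for brevity; it assumes the Header above packs to exactly 8 bytes, and handleMessage stands in for the per-type message objects):

// Peel complete [Header][payload] messages off the front of the master buffer.
#include <string>
#include <cstring>

void handleMessage(__int16 type, const std::string& body);   // placeholder dispatcher

void processMasterBuffer(std::string& masterBuffer)
{
    const std::size_t headerSize = 8;                 // sizeof(Header) without padding
    while (masterBuffer.size() >= headerSize)
    {
        Header h;                                     // the struct shown above
        std::memcpy(&h, masterBuffer.data(), headerSize);
        if (static_cast<unsigned short>(h._sync) != 0xABCD)
        {
            masterBuffer.erase(0, 1);                 // lost sync: skip a byte and rescan
            continue;
        }
        // A real implementation should also sanity-check h._length here.
        if (masterBuffer.size() < headerSize + static_cast<std::size_t>(h._length))
            break;                                    // body not fully received yet

        handleMessage(h._type, masterBuffer.substr(headerSize, h._length));
        masterBuffer.erase(0, headerSize + h._length);   // remove the consumed message
    }
}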
As I mentioned, I use a master buffer of 1 MB or so. As messages are processed, it is important to remove them from the master buffer so there is additional space available for new data on the back end. This involves simply copying the unprocessed data, if any, to the beginning of the buffer. In cases where data comes in faster than you can process it, the master buffer may need the ability to resize itself to accommodate the additional data.
I hope that's not overwhelming. Start simple and add as you go.

Socket Communication with High Frequency

I need to send data to another process every 0.02s.
The Server code:
//set socket, bind, listen
while(1){
    sleep(0.02);
    echo(newsockfd);
}

void echo (int sock)
{
    int n;
    char buffer[256]="abc";
    n=send(sock,buffer,strlen(buffer),0);
    if (n < 0) error("ERROR Sending");
}
The Client code:
//connect
while(1)
{
    bzero(buffer,256);
    n = read(sock,buffer,255);
    printf("Recieved data:%s\n",buffer);
    if (n < 0)
        error("ERROR reading from socket");
}
The problem is that:
The client shows something like this:
Recieved data:abc
Recieved data:abcabcabc
Recieved data:abcabc
....
How does it happen? When I set sleep time:
...
sleep(2)
...
It would be ok:
Recieved data:abc
Recieved data:abc
Recieved data:abc
...
TCP sockets do not guarantee framing. When you send bytes over a TCP socket, those bytes will be received on the other end in the same order, but they will not necessarily be grouped the same way — they may be split up, or grouped together, or regrouped, in any way the operating system sees fit.
If you need framing, you will need to send some sort of packet header to indicate where each chunk of data starts and ends. This may take the form of either a delimiter (e.g, a \n or \0 to indicate where each chunk ends), or a length value (e.g, a number at the head of each chunk to denote how long it is).
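For the delimiter variant, a minimal sketch of the client-side reassembly (names are illustrative, and it assumes the sender terminates every message with '\n', which the code in the question does not do yet):

// Accumulate the byte stream and split it on '\n', so each "abc" is printed exactly once.
#include <string>
#include <cstdio>
#include <unistd.h>

std::string pending;                     // bytes received but not yet delimited

void readChunks(int sock)
{
    char buf[256];
    ssize_t n = read(sock, buf, sizeof(buf));
    if (n <= 0)
        return;                          // error or connection closed
    pending.append(buf, n);

    std::string::size_type nl;
    while ((nl = pending.find('\n')) != std::string::npos)
    {
        std::string chunk = pending.substr(0, nl);   // one complete message
        pending.erase(0, nl + 1);
        printf("Received data:%s\n", chunk.c_str());
    }
}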
Also, as other respondents have noted, sleep() takes an integer, so you're effectively not sleeping at all here.
sleep takes an unsigned int as its argument, so sleep(0.02) is actually sleep(0):
unsigned int sleep(unsigned int seconds);
Use usleep(20000) instead; it sleeps in microseconds, and 0.02 s is 20,000 microseconds:
int usleep(useconds_t usec);
The OS is at liberty to buffer the data (i.e., why send several small packets when it can send one full packet instead?).
Besides, sleep takes an unsigned integer.
The reason is that the OS is buffering data to be sent. It will buffer based on either size or time. In this case, you're not sending enough data, but you're sending it fast enough that the OS chooses to bulk it up before putting it on the wire.
When you add the sleep(2), that is long enough that the OS chooses to send a single "abc" before the next one comes in.
You need to understand that TCP is simply a byte stream. It has no concept of messages or sizes. You simply put bytes on the wire on one end and take them off on the other. If you want to do specific things, then you need to interpret the data special ways when you read it. Because of this, the correct solution is to create an actual protocol for this. That protocol could be as simple as "each 3 bytes is one message", or more complicated where you send a size prefix.
UDP may also be a good solution for you, depending on your other requirements.
sleep(0.02) is effectively sleep(0), because the argument is an unsigned int and the implicit conversion truncates it for you, so you have no sleep at all here. You could use sleep(2) to sleep for 2 seconds. Next, even if you did sleep, there is no guarantee that your messages will be sent in separate frames. If you need that, you should apply some sort of delimiter; I have seen a '\0' character used in some implementations.
TCP/IP stacks buffer up data until there's a decent amount of it, or until they decide that there's no more coming from the application and send what they've got anyway.
There are two things you will need to do. First, turn off Nagle's algorithm. Second, sort out some sort of framing mechanism.
Turning off Nagle's algorithm will cause the stack to send data immediately, rather than waiting on the off chance that you'll be wanting to send more. It actually leads to lower network efficiency because you're not filling up Ethernet frames, something to bear in mind on Gigabit Ethernet, where jumbo frames are needed to get the best throughput. But in your case timeliness is more important than throughput.
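Turning Nagle off is a single socket option (shown here for BSD/POSIX sockets; on Winsock the call is the same apart from a (const char*) cast on the option value):

// Sketch: disable Nagle's algorithm so small writes go onto the wire immediately.
#include <sys/socket.h>
#include <netinet/in.h>     // IPPROTO_TCP
#include <netinet/tcp.h>    // TCP_NODELAY

void disableNagle(int sock)
{
    int flag = 1;           // 1 = send small segments right away instead of coalescing
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}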
You can do your own framing by very simple means, e.g. by sending an integer first that says how long the rest of the message will be. At the reading end you would read the integer, and then read that number of bytes. For the next message you'd send another integer saying how long that message is, and so on.
That sort of thing is ok but not hugely robust. You could look at something like ASN.1 or Google Protocol buffers.
I've used Objective System's ASN.1 libraries and tools (they're not free) and they do a good job of looking after message integrity, framing, etc. They're good because they don't read data from a network connection one byte at a time so the efficiency and speed isn't too bad. Any extra data read is retained and included in the next message decode.
I've not used Google Protocol Buffers myself, but it's possible that they have similar characteristics, and there may be other similar serialisation mechanisms out there. I'd recommend avoiding XML serialisation for speed/efficiency reasons.

recv windows, one byte per call, what the?

#define BUF_LEN 1024
The code below only receives one byte when it's called, then immediately moves on.
output = new char[BUF_LEN];
bytes_recv = recv(cli, output, BUF_LEN, 0);
output[bytes_recv] = '\0';
Any idea how to make it receive more bytes?
EDIT: the client connecting is Telnet.
The thing to remember about networking is that you will be able to read as much data as has been received. Since your code is asking for 1024 bytes and you only read 1, then only 1 byte has been received.
Since you are using a telnet client, it sounds like you have it configured in character mode. In this mode, as soon as you type a character, it will be sent.
Try to reconfigure your telnet client in line mode. In line mode, the telnet client will wait until you hit return before it sends the entire line.
On my telnet client, in order to do that, I first type ctrl-] to get to the telnet prompt and then type "mode line" to configure telnet in line mode.
Update
On further thought, this is actually a very good problem to have.
In the real world, your data can get fragmented in unexpected ways. The client may make a single send() call of N bytes, but the data may not arrive in a single packet. If your code can handle bytes arriving one by one, then you know it will work no matter how the data arrives.
What you need to do is make sure that you accumulate your data across multiple receives. After your recv call returns, you should append the data to a buffer. Something like:
char *accumulate_buffer = new char[BUF_LEN];
size_t accumulate_buffer_len = 0;
...
bytes_recv = recv(fd,
                  accumulate_buffer + accumulate_buffer_len,
                  BUF_LEN - accumulate_buffer_len,
                  0);
if (bytes_recv > 0)
    accumulate_buffer_len += bytes_recv;
if (can_handle_data(accumulate_buffer, accumulate_buffer_len))
{
    handle_data(accumulate_buffer, accumulate_buffer_len);
    accumulate_buffer_len = 0;
}
This code keeps accumulating the recv into a buffer until there is enough data to handle. Once you handle the data, you reset the length to 0 and you start accumulating afresh.
First, in this line:
output[bytes_recv] = '\0';
you need to check if bytes_recv < 0 first before you do that because you might have an error. And the way your code currently works, you'll just randomly stomp on some random piece of memory (likely the byte just before the buffer).
Secondly, the fact you are null terminating your buffer indicates that you're expecting to receive ASCII text with no embedded null characters. Never assume that, you will be wrong at the worst possible time.
Lastly stream sockets have a model that's basically a very long piece of tape with lots of letters stamped on it. There is no promise that the tape is going to be moving at any particular speed. When you do a recv call you're saying "Please give me as many letters from the tape as you have so far, up to this many.". You may get as many as you ask for, you may get only 1. No promises. It doesn't matter how the other side spit bits of the tape out, the tape is going through an extremely complex bunch of gears and you just have no idea how many letters are going to be coming by at any given time.
If you care about certain groupings of characters, you have to put things in the stream (on the tape) saying where those units start and/or end. There are many ways of doing this. Telnet itself uses several different ones in different circumstances.
And on the receiving side, you have to look for those markers and put the sequences of characters you want to treat as a unit together yourself.
So, if you want to read a line, you have to read until you get a '\n'. If you try to read 1024 bytes at a time, you have to take into account that the '\n' might end up in the middle of your buffer and so your buffer may contain the line you want and part of the next line. It might even contain several lines. The only promise is that you won't get more characters than you asked for.
Force the sending side to send more bytes at a time by relying on Nagle's algorithm; then you will receive them in larger batches.