socket write in for loop mixes string buffers - c++

I call a function multiple times using a for loop like this:
for ( int con=0; con < this->controller_info.size(); con++ ) {
    try {
        this->pi.home_axis( this->controller_info.at(con).addr );
    }
    catch( std::out_of_range &e ) { ... }
}
where the home_axis() function is defined as:
long ServoInterface::home_axis( int addr ) {
    std::stringstream cmd;
    if ( addr > 0 ) cmd << addr << " ";
    cmd << "FRF";
    cmd << "\n";
    int bytes = this->controller.Write( cmd.str() );
    return NO_ERROR;
}
and the controller.Write() function is just a wrapper for the standard write(2) which writes the characters in the string to a socket file descriptor.
You can see that each time home_axis() is called it gets its own fresh std::stringstream cmd buffer. But what is happening is that, the first time the for loop executes, the host receiving the bytes written by home_axis() receives a single string, once:
1 FRF2 FRF
but if I print the number of bytes written, it prints 6, twice. So the writer is writing correctly, 6 bytes on two separate calls, but the host apparently receives them as a single buffer.
If I execute that for loop again, then the host receives (properly),
1 FRF
and then
2 FRF
handling the two received buffers each as they come in.
How can the std::stringstream cmd buffers be getting mixed like this?
There are no threads involved here.
In an effort to pick this apart a bit, if I insert just 1 µsec of delay in that for loop, i.e. usleep(1);, then it works properly. Also, if I call the home_axis() function manually, in equally rapid succession, without using a for loop, like this,
this->pi.home_axis( this->controller_info.at(0).addr );
this->pi.home_axis( this->controller_info.at(1).addr );
then that also works.
So I'm wondering if it's possible there is a compiler optimization going on?

This has nothing to do with the compiler at all.
TCP is a byte stream. It has no concept of message boundaries. There is no 1:1 relationship between writes and reads. You can write 2 messages of 6 bytes each, and the receiver may receive all 12 bytes at once, or 1 byte and then 11 bytes, or any combination in between. That is just the way TCP works. By default, it splits and coalesces outgoing data as it sees fit to optimize transmission.
What is important is that TCP guarantees the bytes will be delivered (unless the connection is lost), and it will deliver the bytes in the same order that they are written.
As such, the sender must indicate in the data itself where each message begins and ends. Either by sending a message's length before its content, or by separating each message with a unique delimiter (as you are).
On the receiving side, a single read may receive a partial message, or pieces of multiple messages, etc. It is the receiver's responsibility to buffer incoming bytes and extract only complete messages from that buffer as needed, regardless of however many reads it takes to complete them.
As you are delimiting your messages with a trailing \n, the receiver should buffer all bytes and extract only messages that have received their \n, leaving any incomplete message at the end of the buffer for subsequent reads to finish.
This way, message boundaries are preserved and handled correctly.
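For illustration, here is a minimal sketch of what that receive-side buffering can look like on a plain POSIX socket. The handle_message() callback and the 4096-byte chunk size are just placeholders for this example:

#include <string>
#include <unistd.h>   // read(2)

// Read one chunk and extract every complete '\n'-terminated message.
// Any trailing partial message stays in 'pending' for the next call.
void read_messages(int fd, std::string &pending,
                   void (*handle_message)(const std::string &))
{
    char chunk[4096];
    ssize_t n = read(fd, chunk, sizeof(chunk));
    if (n <= 0) return;                           // error or connection closed

    pending.append(chunk, static_cast<size_t>(n));

    std::string::size_type pos;
    while ((pos = pending.find('\n')) != std::string::npos) {
        handle_message(pending.substr(0, pos));   // one complete command, e.g. "1 FRF"
        pending.erase(0, pos + 1);                // drop it (and its '\n') from the buffer
    }
    // Whatever remains in 'pending' is an incomplete message; keep it for next time.
}

Called in a loop (or from a select/poll readiness handler), this receives "1 FRF" and "2 FRF" as two separate messages regardless of how TCP happened to segment or coalesce them on the wire.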

Related

How to tell if SSL_read has received and processed all the records from single message

Following is the dilemma:
SSL_read, on success, returns the number of bytes read. SSL_pending is used to tell whether the processed record has more data still to be read, which probably means the buffer provided is not large enough to contain the record.
SSL_read may return n > 0, but what if this happens when only the first record has been processed and the message is effectively spread across multiple records?
Question: I am using epoll to send/receive messages, which means I have to queue up an event when I expect more data. What check will ensure that all the records of a single message have been read and it's time to remove this event and queue up a response event that will write the response back to the client?
PS: This code hasn't been tested, so it may be incorrect. Its purpose is to share the idea I am trying to implement.
Following is the code snippet for the read:
//read whatever is available.
while (1)
{
    auto n = SSL_read(ssl_, ptr_ + tail_, sz_ - tail_);
    if (n <= 0)
    {
        int ssle = SSL_get_error(ssl_, n);
        auto old_ev = evt_.events;
        if (ssle == SSL_ERROR_WANT_READ)
        {
            //need more data to process, wait for epoll notification again
            evt_.events = EPOLLIN | EPOLLERR;
        }
        else if (ssle == SSL_ERROR_WANT_WRITE)
        {
            evt_.events = EPOLLOUT | EPOLLERR;
        }
        else
        {
            /* connection closed by peer, or
               some irrecoverable error */
            done_ = true;
            tail_ = 0; //invalidate the data
            break;
        }
        if (old_ev != evt_.events)
            if (epoll_ctl(epoll_fd_, EPOLL_CTL_MOD, socket_fd_, &evt_) < 0)
            {
                perror("handshake failed at EPOLL_CTL_MOD");
                SSL_free(ssl_);
                ssl_ = nullptr;
                return false;
            }
        break; //interest (re-)registered; wait for the next epoll notification
    }
    else //some data has been read
    {
        tail_ += n; //append to whatever was read before
        if (SSL_pending(ssl_) > 0)
            //buffer wasn't enough to hold the content. resize and reread
            resize();
        else
            break;
    }
}
SSL_read() returns the number of decrypted bytes returned in the caller's buffer, not the number of bytes received on the connection. This mimics the return value of recv() and read().
SSL_pending() returns the number of decrypted bytes that are still in the SSL's buffer and haven't been read by the caller yet. This would be equivalent to calling ioctl(FIONREAD) on a socket.
There is no way to know how many SSL/TLS records constitute an "application message", that is for the decrypted protocol data to dictate. The protocol needs to specify where a message ends and a new message begins. For instance, by including the message length in the message data. Or delimiting messages with terminators.
Either way, the SSL/TLS layer has no concept of "messages", only an arbitrary stream of bytes that it encrypts and decrypts as needed, and transmits in "records" of its choosing. Similar to how TCP breaks up an arbitrary stream of bytes into segments/packets on the wire, etc.
So, while your loop is reading arbitrary bytes from OpenSSL, it needs to process those bytes to detect separations between protocol messages, so it can then act accordingly per message.
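As a rough sketch of what that can look like in code (only a sketch: the 4-byte network-order length prefix and the handle_message() callback are assumptions made for this example, not anything OpenSSL dictates), the loop below drains whatever decrypted bytes OpenSSL currently holds, appends them to an application-level buffer, and then carves out complete messages:

#include <openssl/ssl.h>
#include <arpa/inet.h>   // ntohl
#include <cstdint>
#include <cstring>
#include <vector>

void handle_message(const unsigned char *data, uint32_t len);   // hypothetical application callback

// Drain every decrypted byte OpenSSL currently has into 'acc', then extract
// complete [length][payload] messages. Error/WANT_READ handling is omitted.
void drain_and_parse(SSL *ssl, std::vector<unsigned char> &acc)
{
    unsigned char tmp[4096];
    int n;
    do {
        n = SSL_read(ssl, tmp, sizeof(tmp));
        if (n > 0)
            acc.insert(acc.end(), tmp, tmp + n);
    } while (n > 0 && SSL_pending(ssl) > 0);    // keep reading while decrypted bytes remain buffered

    while (acc.size() >= 4) {
        uint32_t len;
        std::memcpy(&len, acc.data(), 4);
        len = ntohl(len);
        if (acc.size() < 4 + static_cast<size_t>(len))
            break;                              // payload not fully received yet; wait for more data
        handle_message(acc.data() + 4, len);
        acc.erase(acc.begin(), acc.begin() + 4 + len);
    }
}

The point is that the message-boundary check runs on the accumulated decrypted bytes, not on SSL_read() return values or record counts.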
What check will ensure that all the records have been read from single message and it's time to remove this event and queue up an response event that will write the response back to client?
I'd have hoped that your message has a header with the number of records in it. Otherwise the protocol you've got is probably unparseable.
What you'd need is to have a stateful parser that consumes all the available bytes and outputs records once they are complete. Such a parser needs to suspend its state once it reaches the last byte of decrypted input, and then must be called again when more data is available to be read. But in all cases if you can't predict ahead of time how much data is expected, you won't be able to tell when the message is finished - that is unless you're using a self-synchronizing protocol. Something like ATM headers would be a starting point. But such complication is unnecessary when all you need is just to properly delimit your data so that the packet parser can know exactly whether it's got all it needs or not.
That's the problem with sending messages: it's very easy to send stuff that can't be decoded by the receiver, since the sender is perfectly fine with losing data - it just doesn't care. But the receiver will certainly need to know how many bytes or records are expected - somehow. It can be told this a priori by sending headers that include byte counts or fixed-size record counts (it's the same size information, just in different units), or a posteriori by using unique record delimiters. For example, when sending printable text split into lines, such delimiters can be Unicode paragraph separators (U+2029).
It's very important to ensure that the record delimiters can't occur within the record data itself. Thus you need some sort of "stuffing" mechanism, where if a delimiter sequence appears in the payload, you can alter it so that it's no longer a valid delimiter. You also need an "unstuffing" mechanism so that such altered delimiter sequences can be detected and converted back to their original form, of course without being interpreted as a delimiter.
A very simple example of such a delimiting process is the octet-stuffed framing in the PPP protocol. It is a form of HDLC framing. The record separator is 0x7E. Whenever this byte is detected in the payload, it is escaped - replaced by a 0x7D 0x5E sequence. On the receiving end, the 0x7D is interpreted to mean "the following character has been XOR'd with 0x20". Thus, the receiver converts 0x7D 0x5E to 0x5E first (it removes the escape byte), and then XORs it with 0x20, yielding the original 0x7E.
Such framing is easy to implement but potentially has more overhead than framing with a longer delimiter sequence, or even a dynamic delimiter sequence whose form differs for each position within the stream. The latter could be used to prevent denial-of-service attacks, where the attacker maliciously provides a payload that incurs a large escaping overhead. A dynamic delimiter sequence - especially if unpredictable, e.g. by negotiating a new sequence for every connection - prevents such service degradation.
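A minimal sketch of that PPP-style octet stuffing, using the 0x7E frame delimiter and the 0x7D escape byte with the XOR-0x20 rule described above:

#include <cstdint>
#include <vector>

// Escape the payload and append the frame delimiter.
std::vector<uint8_t> stuff(const std::vector<uint8_t> &payload)
{
    std::vector<uint8_t> out;
    for (uint8_t b : payload) {
        if (b == 0x7E || b == 0x7D) {      // delimiter and escape byte must both be escaped
            out.push_back(0x7D);
            out.push_back(b ^ 0x20);
        } else {
            out.push_back(b);
        }
    }
    out.push_back(0x7E);                   // the frame delimiter itself goes out unescaped
    return out;
}

// Reverse the escaping for one received frame (without its trailing 0x7E).
std::vector<uint8_t> unstuff(const std::vector<uint8_t> &frame)
{
    std::vector<uint8_t> out;
    bool escaped = false;
    for (uint8_t b : frame) {
        if (escaped)        { out.push_back(b ^ 0x20); escaped = false; }
        else if (b == 0x7D) { escaped = true; }
        else                { out.push_back(b); }
    }
    return out;
}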

How send big string in winsock using c++ from client to server

I am writing a server-client application using Winsock in C++ to send a file line by line, and I have a problem sending a huge string. The line size is very large.
For getting the message from the client by the server I use the code below.
int result;
char message[200];
while (true)
{
    recv(newSd, (char*)&message, sizeof(message), 0);
    cout << "The Message from client: " << message << ";";
}
The above code works fine if I send a short message. But what I want is to send lines of unknown size from a file.
How can I send a big string of unknown size instead of using char message[200]?
TCP is a byte stream, it knows nothing about messages or lines or anything like that. When you send data over TCP, all it knows about is raw bytes, not what the bytes represent. It is your responsibility to implement a messaging protocol on top of TCP to delimit the data in some meaningful way so the receiver can know when the data is complete. There are two ways to do that:
send the data length before sending the actual data. The receiver reads the length first, then reads however many bytes the length says.
send a unique terminator after sending the data. Make sure the terminator never appears in the data. The receiver can then read until the terminator is received.
You are not handling either of those in your recv() code, so I suspect you are not handling them in your send() code either (which you did not show).
Since you are sending a text file, you can either:
send the file size, such as in a uint32_t or uint64_t (depending on how large the file is), then send the raw file bytes.
send each text line individually as-is, terminated by a CRLF or bare-LF line break after each line, and then send a final terminator after the last line.
You are also ignoring the return value of recv(), which tells you how many bytes were actually received. It can, and usually does, return fewer bytes than requested, so you must be prepared to call recv() multiple times, usually in a loop, to receive data completely. Same with send().
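To make that concrete, here is a minimal sketch of the length-prefix approach for Winsock, with helpers that loop until everything has been sent or received. The uint32_t network-byte-order prefix and the function names are just choices made for this example:

#include <winsock2.h>
#include <cstdint>
#include <string>

// send()/recv() may transfer fewer bytes than requested, so loop until done.
static bool send_all(SOCKET s, const char *buf, int len)
{
    while (len > 0) {
        int n = send(s, buf, len, 0);
        if (n <= 0) return false;
        buf += n; len -= n;
    }
    return true;
}

static bool recv_all(SOCKET s, char *buf, int len)
{
    while (len > 0) {
        int n = recv(s, buf, len, 0);
        if (n <= 0) return false;        // 0 = connection closed, SOCKET_ERROR = error
        buf += n; len -= n;
    }
    return true;
}

// A message is a 4-byte length in network byte order followed by that many bytes.
bool send_string(SOCKET s, const std::string &msg)
{
    uint32_t len = htonl(static_cast<uint32_t>(msg.size()));
    return send_all(s, reinterpret_cast<const char *>(&len), sizeof(len))
        && send_all(s, msg.data(), static_cast<int>(msg.size()));
}

bool recv_string(SOCKET s, std::string &msg)
{
    uint32_t len;
    if (!recv_all(s, reinterpret_cast<char *>(&len), sizeof(len))) return false;
    msg.resize(ntohl(len));
    return msg.empty() || recv_all(s, &msg[0], static_cast<int>(msg.size()));
}

The receiver's loop then becomes one recv_string() call per message instead of a fixed char message[200], and a line of any length arrives intact.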

boost asio find beginning of message in tcp based protocol

I want to implement a client for a sensor that sends data over tcp and uses the following protocol:
the message-header starts with the byte-sequence 0xAFFEC0C2 of type uint32
the header in total is 24 bytes long (including the start sequence) and contains the size in bytes of the message-body as a uint32
the message-body is sent directly after the header and is not terminated by a delimiter
Currently, I have the following code (assume a connected socket exists):
typedef unsigned char byte;
boost::system::error_code error;
boost::asio::streambuf buf;
std::string magic_word_s = {static_cast<char>(0xAF), static_cast<char>(0xFE),
                            static_cast<char>(0xC0), static_cast<char>(0xC2)};
ssize_t n = boost::asio::read_until(socket_, buf, magic_word_s, error);
if (error)
    std::cerr << boost::system::system_error(error).what() << std::endl;
buf.consume(n);
n = boost::asio::read(socket_, buf, boost::asio::transfer_exactly(20));
const byte *p = boost::asio::buffer_cast<const byte*>(buf.data());
uint32_t size_of_body = *((const uint32_t*)p);
unfortunately the documentation for read_until remarks:
After a successful read_until operation, the streambuf may contain additional data beyond the delimiter. An application will typically leave that data in the streambuf for a subsequent read_until operation to examine.
which means that I lose synchronization with the described protocol.
Is there an elegant way to solve this?
Well... as it says... you just "leave" it in the object, or temporarily store it in another, and handle the whole message (below called a 'packet') once it is complete.
I have a similar approach in one of my projects. I'll explain a little how I did it, that should give you a rough idea how you can handle the packets correctly.
In my read handler (callback) I keep checking whether the packet is complete. The meta-data information (the header, in your case) is temporarily stored in a map associated with the remote partner (map<RemoteAddress, InfoStructure>).
For example it can look like this:
4 byte identifier
4 byte message-length
n byte message
Handle incoming data: check whether the identifier + message-length have been received already, then keep checking whether the message data is completed by the received data.
Leave the rest of the next packet in the temporary buffer, erase the old data.
Continue handling when the next packet arrives, or check whether the data already received completes the next packet as well...
This approach may sound a little slow, but even with SSL I get 10 MB/s+ on a slow machine.
Without SSL much higher transfer-rates are possible.
With this approach, you may also take a look into read_some or its asynchronous version.
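For example, the completeness check for the "4-byte identifier / 4-byte message-length / n-byte message" layout above could look roughly like this, where buf is the per-connection accumulation buffer that each read handler appends to (a sketch only; byte order and the exact offsets depend on your sensor's protocol):

#include <cstdint>
#include <cstring>
#include <vector>

// Returns true and moves one whole packet out of the front of 'buf' when a
// complete packet is available; otherwise leaves 'buf' untouched.
bool try_extract_packet(std::vector<uint8_t> &buf, std::vector<uint8_t> &packet)
{
    if (buf.size() < 8)
        return false;                           // header not complete yet

    uint32_t body_len;
    std::memcpy(&body_len, buf.data() + 4, 4);  // adjust byte order here if needed

    const size_t total = 8 + static_cast<size_t>(body_len);
    if (buf.size() < total)
        return false;                           // body not complete yet; wait for more reads

    packet.assign(buf.begin(), buf.begin() + total);
    buf.erase(buf.begin(), buf.begin() + total);   // following bytes stay for the next packet
    return true;
}

Calling try_extract_packet() in a loop from the read handler keeps you synchronized with the protocol no matter how the data is split across reads.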

Winsock2 tcp/ip - some data packets are ignored probably due to null terminator from the previous packet

I wrote a simple client-server program. Network.h is a header file which uses Winsock2.h (TCP/IP mode) to create socket, accept/connect in blocking mode, send/recv in non-blocking mode. I made it so that the function string TNetwork::Recv(int size) will return the string "Nothing" if it gets WSAWOULDBLOCK error (no data is received yet)
Here is my main function:
int main(){
    string Ans;
    TNetwork::StartUp(); //WSA start up, etc
    cin >> Ans;
    if (Ans == "0"){ // 0 --> server
        TNetwork::SetupAsServer(); //accept connection (in blocking mode!)
        while (true){
            TNetwork::Send("\nAss" + '\0'); //without null terminator, the client may read extra bytes, causing undefined behavior (?)
            TNetwork::Send("embly" + '\0');
            cin >> Ans;
        }
    }
    else{ // others --> regard Ans as IP address. e.g. I can type "127.0.0.1"
        TNetwork::SetupAsClient(Ans);
        string Rec;
        while (true){
            Rec = TNetwork::Recv(1000);
            if (Rec != "Nothing"){
                cout << Rec;
            }
        }
    }
    system("PAUSE");
}
Supposedly, the client would print "Assembly" when connected and when the server enters anything into its console window. Sometimes, though, the client would only print out "\nAss" in the console without the "embly".
To my understanding, TCP/IP ensures all data is delivered, and in the correct order, so I guess what happens is that both packets arrive at the same time, which happens quite often over an unstable internet connection. And due to this null terminator, the client would ignore the "embly", since the Recv() function stopped reading when it hit a null terminator.
So, how can I ensure that the client will always read all data packets correctly?
Yes, the network stack will send the data in the correct order and doesn't care what termination type you use. This has to do with how you're receiving and processing the data stream (note: not packets, stream). If you receive all 11 bytes and print it to the screen, the print function will stop when it reaches the zero, but the rest of the data is still there.
Note: since it's a stream, what happens if you received only 10 bytes of data from the stream? You need to scan what you receive for the zero to know if you've received a full "zero-terminated string" if that's how you want to communicate your data.
EDIT: Also, I don't think "\nAss" + '\0' is doing what you think it is. Instead of adding a 0 character to the end of the string (which already has one, by the way), it's adding 0 to your string pointer.
As #mark points out, TCP is all about streams, not packets. TCP takes care of ensuring that data is reliably transmitted from A to B and that the data is delivered to the consumer in the order in which it was transmitted. Yes, the data is packetized on the wire, but the TCP stack on the system takes those packets and builds the stream which it makes available to you through the recv() function. The TCP stack handles out-of-order data, missing data, and duplicated data such that by the time your application sees it, the stream is a mirror copy of what the sender sent.
To properly receive TCP data, you will typically need some kind of loop that reads data from the socket when it becomes available. The way I normally do this is to have a thread that is dedicated to servicing the socket. In the thread function is a loop that reads data from the socket when it becomes available and is idle otherwise. This loop reads data into a buffer of, say, 1 KB. Once the data is received from the socket into this buffer, the buffer is copied to another thread for processing. In the thread function for the processing thread is a loop that receives the 1 KB buffers from the socket thread and adds them to the back end of a master buffer of, say, 1 MB. The processing thread then processes the messages out of this master buffer and makes them available to the application.
For a simple demo application, two threads may be overkill. The two threads I've described could certainly be combined into one, but for my application, it is more efficient to have two threads and take advantage of the multiple cores on my system. The point is, if you're going to have a front-end UI, there's no way around using at least one thread if you still want the UI to be responsive.
One other thing. There are two commonly-used mechanisms for protocol design. You're using one, namely, a marker (e.g., a null terminator, etc.) to signal the begin/end of a message. I don't prefer this mechanism mainly because the marker may actually need to be part of the message at some point. The other mechanism is to have a header on each message that tells, at a minimum, how long the message is. I prefer this mechanism and include in my headers a sync word and the message type as well. For example,
struct Header
{
    __int16 _sync;   // a hex pattern, e.g., 0xABCD
    __int16 _type;
    __int32 _length;
};
That's a total of 8 bytes. So when processing from the master buffer, I read the first 8 bytes, verify the sync word, and get the length. I determine if there are 'length' bytes available in the master buffer. If not, I have to wait until the socket thread provides me more data before checking again. If so, I extract 'length' bytes from the master buffer and pass that to an object created according to the specified type, which knows how to interpret that particular message. Then repeat.
As I mentioned, I use a master buffer of 1 MB or so. As messages are processed, it is important to remove them from the master buffer so there is additional space available for new data on the back end. This involves simply copying the unprocessed data, if any, to the beginning of the buffer. In cases where data comes in faster than you can process it, the master buffer may need the ability to resize itself to accommodate the additional data.
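A rough sketch of that master-buffer processing loop, with the Header repeated using fixed-width types; dispatch() stands in for whatever creates the per-type message object:

#include <cstdint>
#include <cstring>
#include <vector>

// Mirrors the Header struct from the answer above, with fixed-width types.
struct Header
{
    uint16_t _sync;     // e.g. 0xABCD
    uint16_t _type;
    uint32_t _length;   // payload bytes following the header
};

void dispatch(uint16_t type, const char *payload, uint32_t length);  // hypothetical per-type handler

// Pull complete messages out of the master buffer. Unprocessed bytes are kept
// at the front of the buffer for the next pass.
void process_master_buffer(std::vector<char> &master)
{
    size_t offset = 0;
    while (master.size() - offset >= sizeof(Header)) {
        Header hdr;
        std::memcpy(&hdr, master.data() + offset, sizeof(Header));

        if (hdr._sync != 0xABCD) {
            ++offset;                     // out of sync: slide forward to find the next sync word
            continue;
        }
        const size_t total = sizeof(Header) + hdr._length;
        if (master.size() - offset < total)
            break;                        // the rest of this message hasn't arrived yet

        dispatch(hdr._type, master.data() + offset + sizeof(Header), hdr._length);
        offset += total;
    }
    master.erase(master.begin(), master.begin() + offset);   // keep only unprocessed bytes
}

The same pattern works whether the socket thread delivers data 1 KB at a time or in larger chunks; the master buffer absorbs whatever fragmentation occurs.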
I hope that's not overwhelming. Start simple and add as you go.

recv windows, one byte per call, what the?

#define BUF_LEN 1024
The code below only receives one byte when it's called, then immediately moves on.
output = new char[BUF_LEN];
bytes_recv = recv(cli, output, BUF_LEN, 0);
output[bytes_recv] = '\0';
Any idea how to make it receive more bytes?
EDIT: the client connecting is Telnet.
The thing to remember about networking is that you will be able to read as much data as has been received. Since your code is asking for 1024 bytes and you only read 1, then only 1 byte has been received.
Since you are using a telnet client, it sounds like you have it configured in character mode. In this mode, as soon as you type a character, it will be sent.
Try to reconfigure your telnet client in line mode. In line mode, the telnet client will wait until you hit return before it sends the entire line.
On my telnet client, in order to do that, I first type ctrl-] to get to the telnet prompt and then type "mode line" to configure telnet in line mode.
Update
On further thought, this is actually a very good problem to have.
In the real world, your data can get fragmented in unexpected ways. The client may make a single send() call of N bytes, but the data may not arrive in a single packet. If your code can handle bytes arriving 1 by 1, then you know it will work no matter how the data arrives.
What you need to do is make sure that you accumulate your data across multiple receives. After your recv call returns, you should append the data to a buffer. Something like:
char *accumulate_buffer = new char[BUF_LEN];
size_t accumulate_buffer_len = 0;
...
bytes_recv = recv(fd,
                  accumulate_buffer + accumulate_buffer_len,
                  BUF_LEN - accumulate_buffer_len,
                  0);
if (bytes_recv > 0)
    accumulate_buffer_len += bytes_recv;
if (can_handle_data(accumulate_buffer, accumulate_buffer_len))
{
    handle_data(accumulate_buffer, accumulate_buffer_len);
    accumulate_buffer_len = 0;
}
This code keeps accumulating the recv into a buffer until there is enough data to handle. Once you handle the data, you reset the length to 0 and you start accumulating afresh.
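For instance, if each unit is a newline-terminated line (a telnet line in line mode), the two hypothetical helpers used above could look like this. can_handle_data() answers "does the buffer end on a line boundary?", which is what makes resetting the length to 0 afterwards safe:

#include <cstring>
#include <iostream>

// Does the buffer end on a line boundary? If so, every line in it is complete.
bool can_handle_data(const char *buf, size_t len)
{
    return len > 0 && buf[len - 1] == '\n';
}

// The buffer may hold several complete lines; process each one.
void handle_data(const char *buf, size_t len)
{
    const char *p = buf;
    const char *end = buf + len;
    while (p < end) {
        const char *nl = static_cast<const char *>(std::memchr(p, '\n', static_cast<size_t>(end - p)));
        std::cout.write(p, nl - p) << '\n';   // "process" here just means print the line
        p = nl + 1;                           // skip past the '\n' to the next line
    }
}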
First, in this line:
output[bytes_recv] = '\0';
you need to check whether bytes_recv < 0 before you do that, because you might have an error. And the way your code currently works, you'll just stomp on some random piece of memory (likely the byte just before the buffer).
Secondly, the fact you are null terminating your buffer indicates that you're expecting to receive ASCII text with no embedded null characters. Never assume that, you will be wrong at the worst possible time.
Lastly stream sockets have a model that's basically a very long piece of tape with lots of letters stamped on it. There is no promise that the tape is going to be moving at any particular speed. When you do a recv call you're saying "Please give me as many letters from the tape as you have so far, up to this many.". You may get as many as you ask for, you may get only 1. No promises. It doesn't matter how the other side spit bits of the tape out, the tape is going through an extremely complex bunch of gears and you just have no idea how many letters are going to be coming by at any given time.
If you care about certain groupings of characters, you have to put things in the stream (on the tape) saying where those units start and/or end. There are many ways of doing this. Telnet itself uses several different ones in different circumstances.
And on the receiving side, you have to look for those markers and put the sequences of characters you want to treat as a unit together yourself.
So, if you want to read a line, you have to read until you get a '\n'. If you try to read 1024 bytes at a time, you have to take into account that the '\n' might end up in the middle of your buffer and so your buffer may contain the line you want and part of the next line. It might even contain several lines. The only promise is that you won't get more characters than you asked for.
Force the sending side to send more bytes at a time (for example by relying on Nagle's algorithm to coalesce small writes), and you will receive them in larger chunks.