How can I send a message with many fields over a C++ gRPC stream?

Suppose the message that I need to send from server to client is like this:
message BatchReply {
  bytes data = 1;
  repeated int32 shape = 2;
  string dtype = 3;
  repeated int64 labels = 4;
}
Here shape/dtype are only small variables that can be represented in a few ints, while data/labels are large memory buffers that can take as much as 1 GB of memory.
I am trying to send this message with a stream:
service ImageService {
  rpc get_batch (BatchRequest) returns (stream BatchReply) {}
}
My question is that the examples I could find of sending a message through a stream are all about messages with only one field, such as:
service TransferFile {
  rpc Upload(stream Chunk) returns (Reply) {}
}

message Chunk {
  bytes buffer = 1; // here there is only one field, buffer; what if there were a field int val = 2; ?
}
What if there are two fields in the Chunk struct? Do I need to call set_val() each time I call set_buffer() during the same stream-feeding process?

You can just send a message with multiple fields over gRPC. That is the advantage of using protobuf.
I do not know whether the transport layer you are using can deal with a message as large as you are specifying. You could test that. If it does not work, you can use the TransferFile example you give.
Looking at Chunk, I get the impression that they are sending segments of the whole data. On the other side they then reconstruct the segments into the complete set. The type used for Chunk is bytes; those are just raw bytes and can represent anything you would like.
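For the direct approach, here is a minimal sketch of a server-side handler, assuming the ImageService definition above and its protoc-generated classes (the generated header name and the payload values are placeholders):

#include <string>
#include <grpcpp/grpcpp.h>
#include "image_service.grpc.pb.h"  // hypothetical generated header

class ImageServiceImpl final : public ImageService::Service {
  grpc::Status get_batch(grpc::ServerContext* context,
                         const BatchRequest* request,
                         grpc::ServerWriter<BatchReply>* writer) override {
    BatchReply reply;
    // Set every field on the same message; one Write() sends them all
    // together, so there is no need to stream the fields separately.
    reply.set_data(std::string(1024, '\0'));  // placeholder payload
    reply.add_shape(32);
    reply.add_shape(32);
    reply.set_dtype("uint8");
    reply.add_labels(7);
    writer->Write(reply);
    return grpc::Status::OK;
  }
};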
To send your BatchReply in chunks you can use the following steps (a sketch of the sending side follows below):
1. Set the data in a BatchReply object.
2. Serialize the object to a byte array.
3. Set a chunk of the byte array in a Chunk object, for example 100 bytes each time.
4. Send the Chunk object using the TransferFile interface.
5. Repeat from step 3 until you reach the end of the byte array.
On the reception side you concatenate the chunks into one array and deserialize the array back to a BatchReply object.
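Here is a minimal sketch of the sending side of these steps, assuming the TransferFile definitions above and their protoc-generated classes; the 64 KiB chunk size is an arbitrary choice:

#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#include "transfer_file.grpc.pb.h"  // hypothetical generated header

bool UploadBatchReply(TransferFile::Stub& stub, const BatchReply& batch) {
  std::string bytes;
  if (!batch.SerializeToString(&bytes)) return false;  // step 2: serialize

  grpc::ClientContext context;
  Reply reply;
  std::unique_ptr<grpc::ClientWriter<Chunk>> writer(stub.Upload(&context, &reply));

  constexpr size_t kChunkSize = 64 * 1024;
  for (size_t offset = 0; offset < bytes.size(); offset += kChunkSize) {
    Chunk chunk;                          // steps 3-4: fill and send one chunk
    chunk.set_buffer(bytes.substr(offset, kChunkSize));
    if (!writer->Write(chunk)) break;     // stream was closed by the server
  }
  writer->WritesDone();
  return writer->Finish().ok();           // step 5 ends when the loop does
}

On the receiving side you would append each chunk's buffer() to one std::string and call ParseFromString on the result to get the BatchReply back.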

Related

How to tell if SSL_read has received and processed all the records from a single message

Following is the dilemma:
SSL_read, on success, returns the number of bytes read. SSL_pending is used to tell whether the processed record has more data still to be read, which probably means the buffer provided was not large enough to contain the record.
SSL_read may return n > 0, but what if this happens when only the first record has been processed and the message is effectively a multi-record communication?
Question: I am using epoll to send/receive messages, which means I have to queue up an event in case I expect more data. What check will ensure that all the records have been read from a single message and it's time to remove this event and queue up a response event that will write the response back to the client?
PS: This code hasn't been tested, so it may be incorrect. The purpose of the code is to share the idea I am trying to implement.
Following is the code snippet for the read:
//read whatever is available.
while (1)
{
    auto n = SSL_read(ssl_, ptr_ + tail_, sz_ - tail_);
    if (n <= 0)
    {
        int ssle = SSL_get_error(ssl_, n);
        auto old_ev = evt_.events;
        if (ssle == SSL_ERROR_WANT_READ)
        {
            //need more data to process, wait for epoll notification again
            evt_.events = EPOLLIN | EPOLLERR;
        }
        else if (ssle == SSL_ERROR_WANT_WRITE)
        {
            evt_.events = EPOLLOUT | EPOLLERR;
        }
        else
        {
            /* connection closed by peer, or
               some irrecoverable error */
            done_ = true;
            tail_ = 0; //invalidate the data
            break;
        }
        if (old_ev != evt_.events)
            if (epoll_ctl(epoll_fd_, EPOLL_CTL_MOD, socket_fd_, &evt_) < 0)
            {
                perror("EPOLL_CTL_MOD failed during read");
                SSL_free(ssl_);
                ssl_ = nullptr;
                return false;
            }
        break; //wait for the next epoll notification instead of spinning
    }
    else //some data has been read
    {
        tail_ += n; //accumulate, since SSL_read appended at ptr_ + tail_
        if (SSL_pending(ssl_) > 0)
            //buffer wasn't enough to hold the content. resize and reread
            resize();
        else
            break;
    }
}
SSL_read() returns the number of decrypted bytes returned in the caller's buffer, not the number of bytes received on the connection. This mimics the return value of recv() and read().
SSL_pending() returns the number of decrypted bytes that are still in the SSL's buffer and haven't been read by the caller yet. This would be equivalent to calling ioctl(FIONREAD) on a socket.
There is no way to know how many SSL/TLS records constitute an "application message", that is for the decrypted protocol data to dictate. The protocol needs to specify where a message ends and a new message begins. For instance, by including the message length in the message data. Or delimiting messages with terminators.
Either way, the SSL/TLS layer has no concept of "messages", only an arbitrary stream of bytes that it encrypts and decrypts as needed, and transmits in "records" of its choosing. Similar to how TCP breaks up a stream of arbitrary bytes into segments carried in IP packets, etc.
So, while your loop is reading arbitrary bytes from OpenSSL, it needs to process those bytes to detect separations between protocol messages, so it can then act accordingly per message.
What check will ensure that all the records have been read from a single message and it's time to remove this event and queue up a response event that will write the response back to the client?
I'd have hoped that your message has a header with the number of records in it. Otherwise the protocol you've got is probably unparseable.
What you'd need is to have a stateful parser that consumes all the available bytes and outputs records once they are complete. Such a parser needs to suspend its state once it reaches the last byte of decrypted input, and then must be called again when more data is available to be read. But in all cases if you can't predict ahead of time how much data is expected, you won't be able to tell when the message is finished - that is unless you're using a self-synchronizing protocol. Something like ATM headers would be a starting point. But such complication is unnecessary when all you need is just to properly delimit your data so that the packet parser can know exactly whether it's got all it needs or not.
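As a minimal sketch of such a resumable parser, assuming the length header suggested above takes the form of a 4-byte big-endian byte count (all names here are illustrative):

#include <arpa/inet.h>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

struct MessageParser {
  std::string pending;  // decrypted bytes received so far but not yet consumed

  // Feed newly decrypted bytes; returns the complete messages found and
  // keeps any trailing partial message for the next call.
  std::vector<std::string> feed(const char* data, size_t n) {
    pending.append(data, n);
    std::vector<std::string> complete;
    while (pending.size() >= 4) {
      uint32_t len_be;
      std::memcpy(&len_be, pending.data(), 4);
      const size_t len = ntohl(len_be);
      if (pending.size() < 4 + len) break;  // incomplete: wait for the next SSL_read
      complete.push_back(pending.substr(4, len));
      pending.erase(0, 4 + len);
    }
    return complete;
  }
};

When feed() returns a non-empty result you have whole messages to act on; when it returns nothing, you keep the EPOLLIN event armed and wait for more data.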
That's the problem with sending messages: it's very easy to send stuff that can't be decoded by the receiver, since the sender is perfectly fine with losing data - it just doesn't care. But the receiver will certainly need to know how many bytes or records are expected - somehow. It can be told this a-priori by sending headers that include byte counts or fixed-size record counts (it's the same size information just in different units), or a posteriori by using unique record delimiters. For example, when sending printable text split into lines, such delimiters can be Unicode paragraph separators (U+2029).
It's very important to ensure that the record delimiters can't occur within the record data itself. Thus you need some sort of "stuffing" mechanism, where if a delimiter sequence appears in the payload, you can alter it so that it's not a valid delimiter anymore. You also need an "unstuffing" mechanism so that such altered delimiter sequences can be detected and converted back to their original form, of course without being interpreted as a delimiter.
A very simple example of such a delimiting process is the octet-stuffed framing in the PPP protocol, a form of HDLC framing. The record separator is 0x7E. Whenever this byte is detected in the payload, it is escaped: replaced by a 0x7D 0x5E sequence. On the receiving end, the 0x7D is interpreted to mean "the following character has been XOR'd with 0x20". Thus, the receiver converts 0x7D 0x5E to 0x5E first (it removes the escape byte), and then XORs it with 0x20, yielding the original 0x7E.
Such framing is easy to implement but potentially has more overhead than framing with a longer delimiter sequence, or even a dynamic delimiter sequence whose form differs for each position within the stream. This could be used to prevent denial-of-service attacks, where an attacker maliciously provides a payload that incurs a large escaping overhead. The dynamic delimiter sequence, especially if unpredictable, e.g. negotiated anew for every connection, prevents such service degradation.
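A minimal sketch of the octet stuffing and unstuffing described above, using PPP's 0x7E delimiter and 0x7D escape byte (the function names are illustrative):

#include <string>

// Escape delimiter and escape bytes in the payload, then append the frame delimiter.
std::string stuff(const std::string& payload) {
  std::string out;
  for (unsigned char c : payload) {
    if (c == 0x7E || c == 0x7D) {
      out.push_back(0x7D);      // escape byte
      out.push_back(c ^ 0x20);  // altered so it is no longer a valid delimiter
    } else {
      out.push_back(c);
    }
  }
  out.push_back(0x7E);          // record separator
  return out;
}

// Reverse the process for one frame (without its trailing 0x7E).
std::string unstuff(const std::string& frame) {
  std::string out;
  bool escaped = false;
  for (unsigned char c : frame) {
    if (escaped) {
      out.push_back(c ^ 0x20);  // undo the XOR with 0x20
      escaped = false;
    } else if (c == 0x7D) {
      escaped = true;           // next byte was escaped by the sender
    } else {
      out.push_back(c);
    }
  }
  return out;
}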

Is Arrow Streaming end-to-end copy-free

I am confused about Arrow Streaming. Many sources describing Arrow just paraphrase the following
The Arrow memory format supports zero-copy reads
and say that Arrow is zero-copy tool.
However, as far as I understand these paragraphs:
The primitive unit of serialized data in the columnar format is the “record batch”. Semantically, a record batch is an ordered collection of arrays, known as its fields, each having the same length as one another but potentially different data types. A record batch’s field names and types collectively form the batch’s schema.
In this section we define a protocol for serializing record batches into a stream of binary payloads and reconstructing record batches from these payloads without need for memory copying.
the description of the IPC Streaming Format, and, to my limited understanding, the source: data is serialized, and only deserialization is zero-copy.
In other words, systems that use Arrow Streaming actually copy the data on the way.
Is that correct?
As you said, Arrow is always zero-copy on the receiver side.
systems that use Arrow Streaming actually copy the data on the way.
It depends what you mean by "copy". Is data duplicated in-memory in the same process? No. The bytes have to be transported somehow from one virtual address space to another, whether you believe that technically constitutes a "copy" or not depends on your application (and point of view, perhaps).
Here is the actual C++ code (as of this writing) where the data is written by the sender into an OutputStream, which is a proxy for the receiver:
// Now write the buffers
for (size_t i = 0; i < payload.body_buffers.size(); ++i) {
  const std::shared_ptr<Buffer>& buffer = payload.body_buffers[i];
  int64_t size = 0;
  int64_t padding = 0;

  // The buffer might be null if we are handling zero row lengths.
  if (buffer) {
    size = buffer->size();
    padding = BitUtil::RoundUpToMultipleOf8(size) - size;
  }

  if (size > 0) {
    RETURN_NOT_OK(dst->Write(buffer));
  }

  if (padding > 0) {
    RETURN_NOT_OK(dst->Write(kPaddingBytes, padding));
  }
}
Nothing is being forcibly copied here. If dst is standing in front of a socket-like interface then the bytes go on the wire to the receiver immediately (or with buffering, or whatever the OutputStream is doing). If dst is a file handle then the bytes are written to the file, etc.

Serialize and deserialize the message using google protobuf in socket programming in C++

The message format to send to the server side is as below:
package test;

message Test {
  required int32 id = 1;
  required string name = 2;
}
Server.cpp does the encoding:
string buffer;
test::Test original;
original.set_id(0);
original.set_name("original");
original.AppendToString(&buffer);
send(acceptfd,buffer.c_str(), buffer.size(),0);
With this send function it will, I hope, send the data to the client; I am not getting any error from this particular code either.
But my concern is the following: how do I decode the above message with Google Protocol Buffers on the client side, so that I can see/print the message?
You should send more than just the protobuf message to be able to decode it on the client side.
A simple solution would be to send the value of buffer.size() over the socket as a 4-byte integer using network byte order, and then send the buffer itself.
The client should first read the buffer's size from the socket and convert it from network to host byte order. Let's denote the resulting value s. The client must then preallocate a buffer of size s and read s bytes from the socket into it. After that, just use MessageLite::ParseFromString to reconstruct your protobuf.
See here for more info on protobuf message methods.
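A minimal sketch of that client side, assuming a connected socket descriptor and the protoc-generated test.pb.h; recv_all is a hypothetical helper that loops until exactly n bytes have been read:

#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstdint>
#include <string>
#include "test.pb.h"

static bool recv_all(int fd, void* buf, size_t n) {
  char* p = static_cast<char*>(buf);
  while (n > 0) {
    ssize_t r = recv(fd, p, n, 0);
    if (r <= 0) return false;  // error or peer closed the connection
    p += r;
    n -= r;
  }
  return true;
}

bool read_message(int fd, test::Test* msg) {
  uint32_t len_be = 0;
  if (!recv_all(fd, &len_be, sizeof(len_be))) return false;
  const uint32_t s = ntohl(len_be);      // network to host byte order
  std::string buffer(s, '\0');
  if (!recv_all(fd, &buffer[0], s)) return false;
  return msg->ParseFromString(buffer);   // reconstruct the protobuf
}

The sender would correspondingly call htonl(buffer.size()) and send those 4 bytes before the serialized message itself.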
Also, this document discourages the usage of required:
You should be very careful about marking fields as required. If at some point you wish to stop writing or sending a required field, it will be problematic to change the field to an optional field: old readers will consider messages without this field to be incomplete and may reject or drop them unintentionally. You should consider writing application-specific custom validation routines for your buffers instead. Some engineers at Google have come to the conclusion that using required does more harm than good; they prefer to use only optional and repeated. However, this view is not universal.

Receiving TCP/IP data completely before processing it. How to know if all the data sent has been received?

Currently I am receiving data synchronously in the following manner:
boost::array<char, 2000> buf;
while (true)
{
    std::string dt;
    size_t len = connect_sock->receive(boost::asio::buffer(buf)); // reads at most buf's size (2000) bytes
    std::copy(buf.begin(), buf.begin() + len, std::back_inserter(dt));
    std::cout << dt;
}
My question is whether this method is efficient enough to receive data that exceeds the buffer size. Is there any way I could know exactly how much data is available, so that I could adjust the buffer size accordingly? (The reason for this is that my server sends a particular response to a request that needs to be processed only once the entire response has been stored in a string variable.)
If you are sending data using TCP, you have to take care of this at the application protocol level.
For example, you could prefix each request with a header that would include the number of bytes that make up the request. The receiver, having read and parsed the header, would know how many more bytes it would need to read to get the rest of the request. Then it could repeatedly call receive() until it gets the correct amount of data.
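A minimal sketch of that pattern with Boost.Asio's synchronous API, assuming a connected tcp::socket (boost::asio::read blocks until the buffer is full and throws on error; the names are illustrative):

#include <arpa/inet.h>
#include <boost/asio.hpp>
#include <cstdint>
#include <string>

std::string read_request(boost::asio::ip::tcp::socket& sock) {
  uint32_t len_be = 0;  // 4-byte length prefix in network byte order
  boost::asio::read(sock, boost::asio::buffer(&len_be, sizeof(len_be)));
  const uint32_t len = ntohl(len_be);
  std::string body(len, '\0');
  boost::asio::read(sock, boost::asio::buffer(&body[0], len));  // loops internally until len bytes arrive
  return body;
}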

Using Boost.Asio to get "the whole packet"

I have a TCP client connecting to my server which is sending raw data packets. How, using Boost.Asio, can I get the "whole" packet every time (asynchronously, of course)? Assume these packets can be any size up to the full size of my memory.
Basically, I want to avoid creating a statically sized buffer.
Typically, when you build a custom protocol on top of TCP/IP, you use a simple message format where the first 4 bytes are an unsigned integer containing the message length and the rest is the message data. If you have such a protocol then the reception loop is as simple as below (not sure of the exact ASIO notation, so it's just the idea):
for (;;) {
    uint32_t len = 0u;
    read(socket, &len, 4); // may need multiple reads in non-blocking mode
    len = ntohl(len);
    assert(len < my_max_len);
    std::vector<char> buf(len); // avoids leaking a raw new[] allocation
    read(socket, buf.data(), len); // may need multiple reads in non-blocking mode
    ...
}
Typically, when you do async IO, your protocol should support it.
One easy way is to prefix a byte array with its length at the logical level, and have the reading code buffer up until it has a full buffer ready for parsing.
If you don't do it, you will end up with this logic scattered all over the place (think about reading a null-terminated string, and what it means if you only get part of it every time select/poll returns).
TCP doesn't operate with packets. It provides you one contiguous stream. You can ask for the next N bytes, or for all the data received so far, but there is no "packet" boundary, no way to distinguish what is or is not a packet.