How to send byte data over gRPC with C/C++?

So I'm using gRPC to store data in a key-value store.
The protos look like this:
syntax = "proto3";
package keyvaluestore;
service KeyValueStore {
rpc AddUser(Credentials) returns (Response) {}
rpc Get(Request) returns (Response) {}
rpc Put(Request) returns (Response) {}
rpc Cput(Request) returns (Response) {}
rpc Delete(Request) returns (Response) {}
}
message Credentials {
string user = 1;
string passwd = 2;
}
message Request {
string user = 1;
string key = 2;
bytes val = 3;
bytes val2 = 4;
string addr = 5;
string command = 6;
}
message Response {
bytes val = 1;
uint32 nbytes = 2;
string message = 3;
}
Right now, the issue is that if we send over, say, an image as byte data (which can include null bytes), then when the server receives it in the Request object it treats it as a string, and it only reads up to the first null byte.
How we pack the Request object on the client side:
bool KeyValueStoreClient::Put(const string& user, const string& key, const char* val) {
    Request req;
    req.set_user(user);
    req.set_key(key);
    req.set_val(val);  // const char* overload: stops at the first '\0'
    ClientContext ctx;
    Response res;
    Status status = stub_->Put(&ctx, req, &res);
    return status.ok();
}
The server receives req->val() as a std::string rather than a char*:
Status KeyValueStoreServiceImpl::Put(ServerContext* ctx, const Request* req, Response* res) {
    // req->val() is a std::string
    return Status::OK;
}

In your method Put, the val argument is a const char*, not a std::string.
The Protobuf documentation for C++ generated code (i.e., the interface to your messages) says that when you set a value from a const char* (the void set_foo(const char* value) overload), the first \0 is treated as a terminator.
Tell it the size explicitly by using the void set_foo(const char* value, int size) overload instead.
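A minimal sketch of the size-aware client call; the nbytes parameter is an assumption (it is not in the original snippet) and would come from wherever the payload was read:
bool KeyValueStoreClient::Put(const string& user, const string& key,
                              const char* val, size_t nbytes) {
    Request req;
    req.set_user(user);
    req.set_key(key);
    req.set_val(val, nbytes);  // size-aware overload: copies all nbytes, embedded '\0' included
    ClientContext ctx;
    Response res;
    Status status = stub_->Put(&ctx, req, &res);
    return status.ok();
}
On the server side nothing needs to change: req->val() returns a std::string, which stores embedded null bytes just fine, as long as you use its size() instead of strlen()-style length logic.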

Related

Deserialize Json object from mqtt payload using ArduinoJson library

I'm trying to deserialize a JSON object using the ArduinoJson 6 library. The object is passed through an MQTT callback using the PubSubClient library. The payload contains the following example: "{\"action\":\"message\",\"amount\":503}", but I am unable to retrieve the amount value. Only zeros are returned when using the following:
void messageReceived(char *topic, byte *payload, unsigned int length)
{
    DynamicJsonDocument doc(1024);
    deserializeJson(doc, payload, length);
    const int results = doc["amount"];
    Serial.print(results);
}
This works and returns 503 as needed:
DynamicJsonDocument doc(1024);
char json[] = "{\"action\":\"message\",\"amount\":503}";
deserializeJson(doc, json);
const int results = doc["amount"];
Serial.print(results);
I see the results of the payload when I use the following method:
void messageReceived(char *topic, byte *payload, unsigned int length)
{
    for (unsigned int i = 0; i < length; i++)
    {
        Serial.print((char)payload[i]);
    }
}
What is preventing me from being able to parse out the amount value from the first method?
When programming in C++, you always need to be aware of the type of data you are dealing with. The payload is a byte array, which is not what deserializeJson(doc, payload, length); is expecting; see the function signature of deserializeJson().
void messageReceived(char *topic, byte *payload, unsigned int length)
{
    DynamicJsonDocument doc(128);
    deserializeJson(doc, (char*) payload, length);  // cast byte* to char* so the right overload is chosen
    Serial.print(doc["amount"]);
}
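One caveat worth knowing: when deserializeJson() is given a writable char*, ArduinoJson runs in zero-copy mode and stores pointers into the payload buffer, so the document is only valid as long as that buffer is. If the document must outlive the callback (PubSubClient reuses its buffer), pass the input as read-only so it gets copied:
deserializeJson(doc, (const char*)payload, length);  // const input forces ArduinoJson to copy the strings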
Update & resolution:
The first method in my original post worked fine once I fixed the data that was being sent from the Lambda function to the IoT side. (Something I didn't include in my original question because I didn't think it was relevant; it turns out it was.) Here is a snippet from the Lambda function that handles the data. The issue was that the data was being sent as a string and not parsed. Once I stringified the response and then parsed the output, it worked. Thank you @hcheung for the assistance and helpful info. Your suggestion works as well, but only after I fixed the Lambda function.
async function sendToIOT(response) {
    const data = JSON.stringify(response);
    const iotResponseParams = {
        topic: 'mythingname/subscribe',
        payload: JSON.parse(data)
    };
    return iotdata.publish(iotResponseParams).promise();
}

Solidity - ecrecover function is returning an incorrect address than the expected one

I am trying to sign a message and verify it later. But while verifying, the address returned by ecrecover is very odd and does not match any of the accounts I am using.
Solidity code:
library CryptoSuite {
    function splitSignature(bytes memory sig) internal pure returns (uint8 v, bytes32 r, bytes32 s) {
        require(sig.length == 65);
        assembly {
            // first 32 bytes (after the length prefix)
            r := mload(add(sig, 32))
            // next 32 bytes
            s := mload(add(sig, 64))
            // final byte (first byte of the next 32 bytes)
            v := byte(0, mload(add(sig, 96)))
        }
        return (v, r, s);
    }

    function recoverSigner(bytes32 message, bytes memory sig) internal pure returns (address) {
        (uint8 v, bytes32 r, bytes32 s) = splitSignature(sig);
        return ecrecover(message, v, r, s); // recovers the address of whoever signed this
    }
}
function isMatchingSignature(bytes32 message, uint id, address issuer) public view returns (bool) {
    Certificate memory cert = certificates[id];
    require(cert.issuer.id == issuer);
    address recoveredSigner = CryptoSuite.recoverSigner(message, cert.signature);
    return recoveredSigner == cert.issuer.id;
}
JavaScript code:
it('should verify that the certificate signature matches the issuer', async () => {
    const { inspector, manufacturerA } = this.defaultEntities;
    const vaccineBatchId = 0;
    const message = `Inspector(${inspector.id}) has certified vaccine batch #${vaccineBatchId} for Manufacturer (${manufacturerA.id}).`;
    //const message = `Inspector has certified vaccine batch for Manufacturer (${manufacturerA.id}).`;
    const certificate = await this.coldChainInstance.certificates.call(0);
    console.log(certificate);
    const signerMatches = await this.coldChainInstance.isMatchingSignature(
        this.web3.utils.keccak256(message),
        certificate.id,
        inspector.id,
        { from: this.owner }
    );
    console.log(signerMatches);
    assert.equal(signerMatches, true, "can't verify");
});

using a bytes field as proxy for arbitrary messages

Hello nano developers,
I'd like to realize the following proto:
message container {
  enum MessageType {
    TYPE_UNKNOWN = 0;
    evt_resultStatus = 1;
  }
  required MessageType mt = 1;
  optional bytes cmd_evt_transfer = 2;
}

message evt_resultStatus {
  required int32 operationMode = 1;
}
...
The dots denote that there are more messages with (multiple) fields of primitive datatypes to come. The enum will grow likewise; I just wanted to keep it short.
The container gets generated as:
typedef struct _container {
    container_MessageType mt;
    pb_callback_t cmd_evt_transfer;
} container;
evt_resultStatus is:
typedef struct _evt_resultStatus {
    int32_t operationMode;
} evt_resultStatus;
The field cmd_evt_transfer should act as a proxy for subsequent messages like evt_resultStatus that hold primitive datatypes.
evt_resultStatus shall be encoded into bytes and be placed into the cmd_evt_transfer field.
Then the container shall get encoded and the encoding result will be used for subsequent transfers.
The background for doing it this way is to shorten the proto definition and avoid the oneof construct. Unfortunately, syntax version 3 is not fully supported, so we cannot make use of Any fields.
The first question is: will this approach be possible?
What I've got so far is the encoding, including the callback, which seems to behave fine. But on the decoding side, the callback somehow gets skipped. I've read in issues here that this also happened when using oneof and bytes fields.
Can someone please clarify how to proceed with this?
The sample code I've got so far:
bool encode_msg_test(pb_byte_t* buffer, int32_t sval, size_t* sz, char* err) {
    evt_resultStatus rs = evt_resultStatus_init_zero;
    rs.operationMode = sval;
    /* NOTE: buffer is a pointer parameter, so sizeof(buffer) is the size of the
       pointer (4 or 8 bytes), not the capacity of the array it points to; the
       real buffer size should be passed in instead. */
    pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
    /* encode the container */
    container msg = container_init_zero;
    msg.mt = container_MessageType_evt_resultStatus;
    msg.cmd_evt_transfer.arg = &rs;
    msg.cmd_evt_transfer.funcs.encode = encode_cb;
    if (!pb_encode(&stream, container_fields, &msg)) {
        const char* local_err = PB_GET_ERROR(&stream);
        sprintf(err, "pb_encode error: %s", local_err);
        return false;
    }
    *sz = stream.bytes_written;
    return true;
}

bool encode_cb(pb_ostream_t *stream, const pb_field_t *field, void * const *arg) {
    evt_resultStatus* rs = (evt_resultStatus*)(*arg);
    // with the below in place a stream full error rises
    // if (!pb_encode_tag_for_field(stream, field)) {
    //     return false;
    // }
    if (!pb_encode(stream, evt_resultStatus_fields, rs)) {
        return false;
    }
    return true;
}

// buffer holds previously encoded data
bool decode_msg_test(pb_byte_t* buffer, int32_t* sval, size_t msg_len, char* err) {
    container msg = container_init_zero;
    evt_resultStatus res = evt_resultStatus_init_zero;
    msg.cmd_evt_transfer.arg = &res;
    msg.cmd_evt_transfer.funcs.decode = decode_cb;
    pb_istream_t stream = pb_istream_from_buffer(buffer, msg_len);
    if (!pb_decode(&stream, container_fields, &msg)) {
        const char* local_err = PB_GET_ERROR(&stream);
        sprintf(err, "pb_decode error: %s", local_err);
        return false;
    }
    *sval = res.operationMode;
    return true;
}

bool decode_cb(pb_istream_t *istream, const pb_field_t *field, void **arg) {
    evt_resultStatus* rs = (evt_resultStatus*)(*arg);
    if (!pb_decode(istream, evt_resultStatus_fields, rs)) {
        return false;
    }
    return true;
}
I feel I don't have a proper understanding of the encoding/decoding process. Is it correct to assume:
the first call of pb_encode (in encode_msg_test) takes care of the mt field
the second call of pb_encode (in encode_cb) handles the cmd_evt_transfer field
If I do:
bool encode_cb(pb_ostream_t *stream, const pb_field_t *field, void * const *arg) {
    evt_resultStatus* rs = (evt_resultStatus*)(*arg);
    if (!pb_encode_tag_for_field(stream, field)) {
        return false;
    }
    if (!pb_encode(stream, evt_resultStatus_fields, rs)) {
        return false;
    }
    return true;
}
then I get a stream full error on the call of pb_encode.
Why is that?
Yes, the approach is reasonable. Nanopb callbacks do not care what the actual data read or written by the callback is.
As for why your decode callback is not working, you'll need to post the code you are using for decoding.
(As an aside, the Any type does work in nanopb and is covered by a test case. But the type_url included in every Any message gives them quite a large overhead.)
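If I read the nanopb docs right, the usual idiom for a callback on a bytes field that carries an encoded sub-message is to write the field tag and then let pb_encode_submessage() emit the length prefix and payload; a bare pb_encode() inside the callback writes neither tag nor length prefix, so the container's decoder never sees field 2 and the decode callback is skipped. A sketch under that assumption:
bool encode_cb(pb_ostream_t *stream, const pb_field_t *field, void * const *arg) {
    const evt_resultStatus *rs = (const evt_resultStatus *)(*arg);
    // write the tag for cmd_evt_transfer (length-delimited wire type)
    if (!pb_encode_tag_for_field(stream, field))
        return false;
    // sizes the sub-message first, then writes the length prefix and the fields
    return pb_encode_submessage(stream, evt_resultStatus_fields, rs);
}
With the tag and length prefix in place, the existing decode_cb is handed a substream limited to the sub-message bytes, which is exactly what its pb_decode() call expects.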

How can I determine what type a (serialized and templatized) object received via socket is?

I'm using google protobuf to implement a simple Request/Response based protocol.
A peer can receive both Request and Response messages via a socket, serialized as strings of course.
I'm using my own C++ socket implementation, so I implemented operator>> as follows (the same goes for operator<<) to receive data from a socket object:
...
template<class M>
void operator>>(M& m) throw (socks::exception) {
    std::string str;
    if (!this->recv(str)) {
        throw socks::exception(">> failed to retrieve stream via socket");
    }
    if (!m.ParseFromString(str))
        throw socks::exception("failed to parse the received stream via socket");
}
So the template argument M can be either a Request or a Response object.
// some lines from req_res.proto
message Request {
  required action_t action = 1;
}
enum result_t {
  ...
}
message Response {
  required result_t result = 1;
  ...
}
How can I determine whether I received a Response or a Request using operator>> this way?
my_socket_object s;
...
for (;;) {
Request|Response r;
s >> r;
...
}
...
You can have one basic Message object and extend all other types used in your protocol from it:
message Message {
  extensions 100 to max;
}

message Request {
  extend Message {
    optional Request request = 100;
  }
  required action_t action = 1;
}

message Response {
  extend Message {
    optional Response response = 101;
  }
  required result_t result = 1;
}
This is a bit more elegant, self-contained, and easier to extend than the discriminator/union solution proposed in the other answer, IMHO.
You can take this technique even further and structure e.g. your Request/Response messages like this:
message Request {
  extend Message {
    optional Request request = 100;
  }
  extensions 100 to max;
}

message Action1 {
  extend Request {
    optional Action1 action1 = 100;
  }
  optional int32 param1 = 1;
  optional int32 param2 = 2;
}

message Action2 {
  extend Request {
    optional Action2 action2 = 101;
  }
  optional int32 param1 = 1;
  optional int32 param2 = 2;
}
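On the receiving side you would then parse the base Message and probe which extension is present, e.g. via the generated HasExtension()/GetExtension() accessors. A rough sketch, where s is the socket object from the question and handle_request/handle_response are hypothetical application handlers:
Message m;
s >> m;  // the templated operator>> from the question
if (m.HasExtension(Request::request)) {
    handle_request(m.GetExtension(Request::request));
} else if (m.HasExtension(Response::response)) {
    handle_response(m.GetExtension(Response::response));
}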
One possible approach is to put every possible message inside another top-level message as a kind of tagged union:
// enum value names must not clash with the message names Request/Response,
// so they are written in the usual upper-case style here
enum protocolmessage_t {
  REQUEST = 1;
  RESPONSE = 2;
}

message ProtocolMessage {
  required protocolmessage_t type = 1;
  optional Request request = 10;
  optional Response response = 11;
}
Then you provide this ProtocolMessage as the M parameter to the >> operator, check the type, and extract the corresponding optional field.
An alternative is to prefix every message with one byte identifying the type, switch on that byte, and then call your >> operator with the corresponding type.
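For the tagged-union variant, the receive loop could look like this rough sketch (handle_request/handle_response are hypothetical handlers, and the enum values are the renamed REQUEST/RESPONSE from above):
ProtocolMessage pm;
s >> pm;  // the templated operator>> from the question
switch (pm.type()) {
case REQUEST:
    handle_request(pm.request());
    break;
case RESPONSE:
    handle_response(pm.response());
    break;
}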

QuickFix C++: getting total size of FIX::Message passed to FIX::Application::fromApp()

I’m using QFIX_1_13_3 and I have a question regarding the C++ API.
Is FIX::Message::bodyLength() (with default arguments) the correct API to call within FIX::Application::fromApp() in order to obtain the total size of the incoming binary message FIX::Message? From here it looks like it is, but just wanted to confirm:
http://www.quickfixengine.org/quickfix/doc/html/_message_8h_source.html#l00062
int bodyLength( int beginStringField = FIELD::BeginString,
                int bodyLengthField = FIELD::BodyLength,
                int checkSumField = FIELD::CheckSum ) const
{
    return m_header.calculateLength(beginStringField, bodyLengthField, checkSumField)
           + calculateLength(beginStringField, bodyLengthField, checkSumField)
           + m_trailer.calculateLength(beginStringField, bodyLengthField, checkSumField);
}
What I intend to do is memcpy the entire received FIX::Message into a memory-mapped file:
void fromApp( const FIX::Message& message, const FIX::SessionID& sessionID ) {
    ...
    memcpy(persistFilePos, &message, message.bodyLength());
}
Does this make sense?
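Two caveats on the memcpy: &message is the address of the FIX::Message object itself (internal field maps, pointers, and so on), not the raw wire bytes, and bodyLength() is computed from the parsed fields rather than from the received buffer. A sketch of serializing first, assuming persistFilePos is a char* into the mapped file as in the question:
void fromApp( const FIX::Message& message, const FIX::SessionID& sessionID ) {
    // toString() rebuilds the complete FIX string, header and trailer included
    const std::string raw = message.toString();
    memcpy(persistFilePos, raw.data(), raw.size());
    persistFilePos += raw.size();  // advance the write cursor in the mapped file
}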