I'm working with Cap'n Proto and my understanding is that there is no need to do serialization myself, as the library already does it. So my question is: how do I access the serialized data and get its size, so that I can pass it as a byte array to another library?
// person.capnp
struct Person {
name @0 :Text;
age @1 :Int16;
}
// ...
::capnp::MallocMessageBuilder message;
Person::Builder person = message.initRoot<Person>();
person.setName("me");
person.setAge(20);
// at this point, how do I get some sort of handle to
// the serialized data of 'person' as well as its size?
I've seen the writePackedMessageToFd(fd, message); call, but didn't quite understand what was being passed and couldn't find any API docs on it. I also don't want to write to a file descriptor; I need the serialized data returned as a const void*.
Looking in Cap'n Proto's message.h, I found this function in the base class of MallocMessageBuilder, which says it gets the raw data making up the message:
kj::ArrayPtr<const kj::ArrayPtr<const word>> getSegmentsForOutput();
// Get the raw data that makes up the message.
But even then, I'm not sure how to get it as a const void*.
Thoughts?
::capnp::MallocMessageBuilder message;
is your binary message, and its size is
message.sizeInWords()
(size in bytes divided by 8).
This appears to be what's needed:
// ...
::capnp::MallocMessageBuilder message;
Person::Builder person = message.initRoot<Person>();
person.setName("me");
person.setAge(20);
kj::Array<capnp::word> dataArr = capnp::messageToFlatArray(message);
kj::ArrayPtr<kj::byte> bytes = dataArr.asBytes();
std::string data(bytes.begin(), bytes.end());
const void* dataPtr = data.c_str();
At this point, I have a const void* dataPtr, and its size via data.size().
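If the extra copy into a std::string isn't needed, here is a minimal sketch that hands the flat array's bytes to the other library directly; it assumes the consumer just takes a pointer and a byte count, and that the kj::Array outlives the call:
kj::Array<capnp::word> flat = capnp::messageToFlatArray(message);
kj::ArrayPtr<kj::byte> bytes = flat.asBytes();
const void* dataPtr = bytes.begin();    // valid only while 'flat' is alive
size_t dataSize = bytes.size();         // size in bytes
// otherLibrarySend(dataPtr, dataSize); // hypothetical consumer function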
Related
I'm working on making some Spotify API calls on an ESP32. I'm fairly new to C++, and while I seem to have got it working the way I wanted, I would like to know if it is the right way/best practice or if I was just lucky. The whole thing with chars and pointers is still quite confusing for me, no matter how much I read into it.
I'm calling the Spotify API, get a JSON response and parse it with the ArduinoJson library. The library returns all keys and values as const char*.
The library I use to display it on a screen takes const char* as well. I got it working before by converting it to String, returning the String from the getTitle() function and converting it back to display it on screen. After reading that Strings are inefficient and best avoided, I'm trying to cut out the conversion steps.
void getTitle()
{
// I cut out the HTTP request and stuff
DynamicJsonDocument doc(1024);
DeserializationError error = deserializeJson(doc, http.getStream());
JsonObject item = doc["item"];
title = item["name"]; //This is a const char*
}
const char* title = nullptr;
void loop(void) {
getTitle();
u8g2.clearBuffer();
u8g2.setDrawColor(1);
u8g2.setFont(u8g2_font_6x12_tf);
u8g2.drawStr(1, 10, title);
u8g2.sendBuffer();
}
Is it okay to do it like that?
This is not fine.
When seeing something like this, you should immediately become suspicious.
This is because in getTitle you are asking a local object (item, backed by the local DynamicJsonDocument) for a pointer, but you use that pointer later, when the object no longer exists.
That means your pointer might be meaningless by the time you need it: it might no longer reference your data but some arbitrary other bytes instead (or even lead to crashes).
This problem is independent of what exact library you use, and you can often find relevant, more specific information by searching your library documentation for "lifetime" or "object ownership".
FIX
Make sure that item (and also the DynamicJsonDocument, because the documentation tells you so!) still exists when you use the data, e.g. like this:
void setTitle(const char *title)
{
u8g2.clearBuffer();
u8g2.setDrawColor(1);
u8g2.setFont(u8g2_font_6x12_tf);
u8g2.drawStr(1, 10, title);
u8g2.sendBuffer();
}
void updateTitle()
{
DynamicJsonDocument doc(1024);
DeserializationError error = deserializeJson(doc, http.getStream());
JsonObject item = doc["item"];
setTitle(item["name"]);
}
See also: https://arduinojson.org/v6/how-to/reuse-a-json-document/#the-best-way-to-use-arduinojson
Edit: If you want to keep parsing/display update decoupled
You could keep the JSON document "alive" for when the parsed data is needed:
/* "static" visibility, so that other c/cpp files ("translation units") can't
 * mess with our JSON doc directly
*/
static DynamicJsonDocument doc(1024);
static const char *title;
void parseJson()
{
[...]
// super important to avoid leaking memory!!
doc.clear();
DeserializationError error = deserializeJson(doc, http.getStream());
// TODO: robustness/error handling (e.g. inbound JSON is missing "item")
title = doc["item"]["name"];
}
// may be nullptr when called before valid JSON was parsed
const char* getTitle()
{
return title;
}
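A possible usage sketch for the loop, assuming parseJson() also performs the HTTP request hidden behind [...] above; the null check guards against drawing before the first successful parse:
void loop(void) {
  parseJson();
  const char *currentTitle = getTitle();
  if (currentTitle != nullptr) {
    u8g2.clearBuffer();
    u8g2.setDrawColor(1);
    u8g2.setFont(u8g2_font_6x12_tf);
    u8g2.drawStr(1, 10, currentTitle);
    u8g2.sendBuffer();
  }
}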
Being new to C++, I am still struggling with pointers-to-pointers, and I am not sure if my method below is returning the decoded image bytes properly.
This method gets a base64-encoded image string from an API. The method has to follow this signature as it is part of legacy code that is not allowed to deviate from the way it was written originally, so the signature has to stay the same. Also, I have omitted async calls and continuations, exceptions, etc., for simplicity.
int __declspec(dllexport) GetInfoAndPicture(CString uid, char **image, long *imageSize)
{
CString request = "";
request.Format(url);
http_client httpClient(url);
http_request msg(methods::POST);
...
http_response httpResponse;
httpResponse = httpClient.request(msg).get(); //blocking
web::json::value jsonValue = httpResponse.extract_json().get();
if (jsonValue.has_string_field(L"img"))
{
web::json::value base64EncodedImageValue = jsonValue.at(L"img");
utility::string_t imageString = base64EncodedImageValue.as_string();
std::vector<unsigned char> imageBytes = utility::conversions::from_base64(imageString);
image = (char**)&imageBytes; //Is this the way to pass image bytes back?
*imageSize = imageBytes.size();
}
...
}
The caller calls this method like so:
char mUid[64];
char* mImage;
long mImageSize;
...
resultCode = GetInfoAndPicture(mUid, &mImage, &mImageSize);
//process image given its data and its size
I know what a pointer to a pointer is; my question is specific to this line:
image = (char**)&imageBytes;
Is this the correct way to return the image decoded from base64 into the calling code via the char** image formal parameter given the above method signature and method call?
I do get the error "Program .... File: minkernel\crts\ucrt\src\appcrt\convert\isctype.cpp ... "Expression c >= -1 && c <= 255"", which I believe is related to this line not correctly passing the data back.
Given the requirements, there isn't any way to avoid allocating more memory and copying the bytes. You cannot use the vector directly, because it is local to the GetInfoAndPicture function and will be destroyed when that function exits.
If I understand the API correctly, then this is what you need to do:
//*image = new char[imageBytes.size()]; //use this if caller calls delete[] to deallocate memory
*image = (char*)malloc(imageBytes.size()); //use this if caller calls free(image) to deallocate memory
std::copy(imageBytes.begin(), imageBytes.end(), *image);
*imageSize = imageBytes.size();
Maybe there is some way in your utility::conversions functions to decode directly into a character array instead of a vector, but only you would know about that.
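On the caller side, a minimal sketch of how the buffer would then be used and released (this assumes the malloc() variant above, and that a result code of 0 means success, which is an assumption not shown in the original code):
char* mImage = nullptr;
long mImageSize = 0;
resultCode = GetInfoAndPicture(mUid, &mImage, &mImageSize);
if (resultCode == 0 && mImage != nullptr) {
    // ... process mImage / mImageSize ...
    free(mImage);  // pairs with the malloc() inside the DLL
}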
The problem is with allocating (and freeing) memory for that image; who is responsible for that?
You can't (shouldn't) allocate memory in one module and free it in another.
Your two options are:
Allocate a large enough buffer on the caller side, and have the DLL fill it using utility::conversions::from_base64(). The issue here is: what is large enough? Some Win APIs provide an additional method to query the required size. That doesn't fit this scenario, as the DLL would either have to fetch the image a second time, or hold it (indefinitely) until you ask for it.
Allocate the required buffer in the DLL and return a pointer to it. You need to ensure that it won't be freed until the caller requests to free it (in a separate API).
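A rough sketch of what such a paired API could look like (the FreeImage name is illustrative, not part of the original code); the DLL both allocates and frees, so the caller never mixes allocators:
int  __declspec(dllexport) GetInfoAndPicture(CString uid, char **image, long *imageSize);
void __declspec(dllexport) FreeImage(char *image) { free(image); }  // pairs with the malloc() in GetInfoAndPicture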
I am receiving messages from a socket.
Each message is wrapped in a header (that is basically the size of the message) and a footer that is a CRC (a kind of code to check that the message is not corrupted).
So, the layout is something like:
size (2 bytes) | message (240 bytes) | crc (4 bytes)
I wrote an operator>> for it, which looks like this:
std::istream &operator>>(std::istream &stream, Message &msg) {
    std::int16_t size;
    stream.read(reinterpret_cast<char*>(&size), sizeof(size));
    // Not enough data received yet (payload + 4-byte crc)
    if (stream.rdbuf()->in_avail() < size + 4) {
        stream.setstate(std::ios_base::failbit);
        return stream;
    }
    stream.read(reinterpret_cast<char*>(&msg), size);
    std::int32_t gotCrc;
    stream.read(reinterpret_cast<char*>(&gotCrc), sizeof(gotCrc));
    // Data not received correctly
    if (gotCrc != computeCrc(msg)) {
        stream.setstate(std::ios_base::failbit);
    }
    return stream;
}
A message can arrive byte by byte, or all at once. We can even receive several messages in one go.
Basically, what I did is something like this:
struct MessageReceiver {
std::string totalDataReceived;
void messageArrived(std::string data) {
// We add the data to totaldataReceived
totalDataReceived += data;
std::stringbuf buf(totalDataReceived);
std::istream stream(&buf);
std::vector<Message> messages(
std::istream_iterator<Message>(stream),
std::istream_iterator<Message>{});
std::for_each(begin(messages), end(messages), processMessage);
// +4 for crc and + 2 for the size to remove
auto sizeToRemove = [](auto init, auto message) {return init + message.size + 4 + 2;};
// remove the proceed messages
totalDataReceived.erase(0, std::accumulate(begin(messages), end(messages), 0, sizeToRemove));
}
};
So basically, we receive data, we insert it into a total array of data received. We stream it, and if we got at least one message, we remove it from the buffer totalDataReceived.
However, I am not sure it is the right way to go. Indeed, this code does not work when I get a bad CRC... (the Message is not created, so we don't iterate over it). So each time, I am going to try to read the same message with a bad CRC again...
How can I do this? I cannot keep all the data in totalDataReceived, because I can receive a lot of messages over the execution lifetime.
Should I implement my own streambuf?
It sounds like what you want to create is a class which acts like a std::istream. Of course you can choose to create your own class, but I prefer to implement std::streambuf, for a couple of reasons.
First, people using your class will already know how to use it, since it acts the same as a std::istream if you inherit from std::streambuf and std::istream and implement them.
Second, you don't need to create extra methods or override operators; they are already available at the std::istream level.
What you have to do to implement a std::streambuf is to inherit from it, override underflow(), and set the get-area pointers using setg().
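A minimal sketch of such a streambuf, backed by a growable byte buffer (the class and method names here are illustrative, not from the original post):
#include <streambuf>
#include <vector>
class ReceiveBuf : public std::streambuf {
public:
    // Append freshly received bytes and refresh the get area.
    void append(const char* data, std::size_t n) {
        // Remember how much has already been consumed before the vector may reallocate.
        std::size_t consumed = gptr() ? static_cast<std::size_t>(gptr() - eback()) : 0;
        buffer_.insert(buffer_.end(), data, data + n);
        setg(buffer_.data(), buffer_.data() + consumed, buffer_.data() + buffer_.size());
    }
protected:
    int_type underflow() override {
        // No more bytes available until append() is called again.
        if (gptr() == egptr())
            return traits_type::eof();
        return traits_type::to_int_type(*gptr());
    }
private:
    std::vector<char> buffer_;
};
A std::istream constructed on top of this buffer can then be handed to your existing operator>>, and already-consumed bytes can be trimmed from buffer_ once a full message has been parsed.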
I would like to serialize/deserialize some structured data in order to send it over the network via a char* buffer.
More precisely, suppose I have a message of type struct Message.
struct Message {
Header header;
Address address;
size_t size; // size of data part
char* data;
} message;
In C, I would use something such as:
size = sizeof(Header) + sizeof(Address) + sizeof(size_t) + message.size;
memcpy(buffer, (char *) &message, size);
to serialize, and
Message *m = (Message *) buffer;
to deserialize.
What would be the "right" way to do it in C++? Is it better to define a class rather than a struct? Should I overload some operators? Are there alignment issues to consider?
EDIT: thanks for pointing out the "char *" problem. The provided C version is incorrect; the data section pointed to by the data field should be copied separately.
Actually there are many flavors:
You can let Boost do it for you: http://www.boost.org/doc/libs/1_52_0/libs/serialization/doc/tutorial.html
Overloading the stream operators << for serialization and >> for deserialization works well with file and string streams
You could specify a constructor Message (const char*) for constructing from a char*.
I am a fan of static methods for deserialization like:
struct Message {
    ...
    static bool deserialize(Message& dest, char* source);
};
since you could catch errors directly when deserializing.
And the version you proposed is OK, as long as the modifications mentioned in the comments are applied.
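For the hand-rolled approach, a minimal sketch of the field-by-field copy (this assumes Header and Address are trivially copyable, that buffer is large enough, and that the receiver takes ownership of the allocated payload; the function names are illustrative):
size_t serialize(const Message& m, char* buffer) {
    size_t offset = 0;
    memcpy(buffer + offset, &m.header, sizeof(Header));   offset += sizeof(Header);
    memcpy(buffer + offset, &m.address, sizeof(Address)); offset += sizeof(Address);
    memcpy(buffer + offset, &m.size, sizeof(size_t));     offset += sizeof(size_t);
    memcpy(buffer + offset, m.data, m.size);              offset += m.size;  // copy the payload, not the pointer
    return offset;  // number of bytes written
}
bool deserialize(Message& dest, const char* source) {
    size_t offset = 0;
    memcpy(&dest.header, source + offset, sizeof(Header));   offset += sizeof(Header);
    memcpy(&dest.address, source + offset, sizeof(Address)); offset += sizeof(Address);
    memcpy(&dest.size, source + offset, sizeof(size_t));     offset += sizeof(size_t);
    dest.data = new char[dest.size];                         // caller must delete[] this
    memcpy(dest.data, source + offset, dest.size);
    return true;
}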
Why not insert a virtual 'NetworkSerializable' class into your inheritance tree? A 'void NetSend(fd socket)' method would send the data (without exposing any private members), and an 'int(bufferClass buffer)' method could return -1 if no complete, valid message was deserialized, or, if a valid message has been assembled, the number of unused chars in 'buffer'.
That encapsulates all the assembly/disassembly protocol state vars and other gunge inside the class, where it belongs. It also allows message/s to be assembled from multiple stream input buffers.
I'm not a fan of static methods. Protocol state data associated with deserialization should be per-instance (thread-safety).
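A rough sketch of what that interface might look like (bufferClass is a stand-in for whatever buffer type is actually used, the socket is assumed to be a plain file descriptor, and the NetReceive name is invented here for the unnamed 'int(bufferClass buffer)' method):
#include <vector>
using bufferClass = std::vector<char>;  // placeholder buffer type
class NetworkSerializable {
public:
    virtual ~NetworkSerializable() = default;
    // Send this object over the given socket without exposing private data.
    virtual void NetSend(int socket) = 0;
    // Consume bytes from 'buffer'; return -1 if no complete, valid message was
    // deserialized, otherwise the number of unused chars left in 'buffer'.
    virtual int NetReceive(bufferClass& buffer) = 0;
};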
I have a WiFi listener registered as a callback (function pointer) with a fixed third-party interface. I used a static member function of my class to register the callback, and that static function calls a non-static member through a static_cast. The main problem is that I cannot touch the resulting char* buff with any members of my class, nor can I even change an int flag that is also a member of my class. Everything results in runtime access violations. What can I do? Please see some of my code below. Other problems are described after the code.
void *pt2Object;
TextWiFiCommunication::TextWiFiCommunication()
{
networkDeviceListen.rawCallback = ReceiveMessage_thunkB;
/* some other initializing */
}
int TextWiFiCommunication::ReceiveMessage_thunkB(int eventType, NETWORK_DEVICE *networkDevice)
{
if (eventType == TCP_CLIENT_DATA_READY)
static_cast<TextWiFiCommunication *>(pt2Object)->ReceiveMessageB(eventType,networkDevice);
return 1;
}
int TextWiFiCommunication::ReceiveMessageB(int eventType, NETWORK_DEVICE *networkDevice)
{
unsigned char outputBuffer[8];
// function from an API that reads the WiFi socket for incoming data
TCP_readData(networkDevice, (char *)outputBuffer, 0, 8);
std::string tempString((char *)outputBuffer);
tempString.erase(tempString.size()-8,8); // funny thing: the outputBuffer seems to be double in size and I have no idea why
if (tempString.compare("facereco") == 0)
cmdflag = 1;
return 1;
}
So I can't change the variable cmdflag without an access violation at runtime. I can't declare outputBuffer as a class member, because then nothing gets written to it, so I have to declare it within the function. I can't copy the outputBuffer to a string member of my class. The debugger shows me strlen.asm code, and I have no idea why. How can I get around this? I seem to be imprisoned in this function ReceiveMessageB.
Thanks in advance!
Some other bizarre issues: even though I request a buffer size of 8, when I take outputBuffer and initialize a string with it, the string has a size of 16.
You are likely getting an access violation because pt2Object does not point to a valid object but to garbage. When is pt2Object initialized? What does it point to?
For this to work, your code should look something like this:
...
TextWiFiCommunication twc;
pt2Object = reinterpret_cast<void*>(&twc);
...
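Alternatively, a common variant of this thunk pattern (assuming a single instance is enough for your use case) is to let the constructor register the instance itself, so pt2Object is guaranteed to be set before any callback fires:
TextWiFiCommunication::TextWiFiCommunication()
{
    pt2Object = this;  // let the static thunk find this instance
    networkDeviceListen.rawCallback = ReceiveMessage_thunkB;
    /* some other initializing */
}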
Regarding the string error, TCP_readData is not likely to null-terminate the character array you give it. A C-string ends at the first '\0' (null) character. When you convert the C-string to a std::string, the std::string copies bytes from the C-string pointer until it finds the null terminator. In your case, it happens to find it after 16 characters.
To read up to 8 characters from a TCP byte stream, the buffer should be 9 characters long and all its bytes should be initialized to '\0':
...
unsigned char outputBuffer[9] = { 0 };
// function from an API that reads the WiFi socket for incoming data
TCP_readData(networkDevice, (char *)outputBuffer, 0, 8);
std::string tempString((char *)outputBuffer);
...