protobuf for configuration files - c++

Related to using protobuf as a textual configuration file, I'd like to use protobuf for my configuration files.
I expect that protobuf lets me use a simple parser together with an exactly specified structure.
My configuration structure looks like:
//my.proto
package my_config;

message MyConfigItem {
  required string type = 1;
  required string name = 2;
  repeated string inputNames = 3;
  repeated string outputNames = 4;
}
And a bunch of different items in the config file, like:
MyConfigItem {
  type = "type1";
  name = "name1";
  inputNames = {"input1", "input2"};
}
What's the best way to organize that?

You may write the config file like:

type: "type1"
name: "name1"
inputNames: ["input1", "input2"]

So use ':' instead of '=', write repeated values as a bracketed list, and do not wrap the fields in an enclosing MyConfigItem { ... } block.
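Such a file can then be loaded with protobuf's text-format parser. A minimal sketch, assuming protoc has generated my_config.pb.h from my.proto and that the config lives in a file named my.cfg (both names are placeholders):

#include <fstream>
#include <iostream>
#include <sstream>
#include <google/protobuf/text_format.h>
#include "my_config.pb.h"  // generated with: protoc --cpp_out=. my.proto

int main() {
  std::ifstream in("my.cfg");
  std::stringstream buffer;
  buffer << in.rdbuf();  // slurp the whole file into a string

  my_config::MyConfigItem item;
  if (!google::protobuf::TextFormat::ParseFromString(buffer.str(), &item)) {
    std::cerr << "failed to parse config" << std::endl;
    return 1;
  }
  std::cout << "loaded item: " << item.name() << std::endl;
  return 0;
}

To keep several items in one file, one option is a wrapper message such as message MyConfig { repeated MyConfigItem items = 1; }, with each entry written as items { ... } in the text file.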


"Too many message types specified in schema definition" when saving Schema

I would like to create this protobuf schema in the Google Cloud Pub/Sub console.
I have a small local script where I successfully serialized and deserialized data with this schema using protobuf.
syntax = "proto2";
package mypackage;

message VideoImpression {
  optional string user_id = 1;
  optional string candidate_id = 2;
  optional int64 event_timestamp = 3;
}

message VideoImpressionsList {
  repeated VideoImpression video_impressions = 1;
}
When I save the schema I get this error:
Too many message types specified in schema definition.
I tried splitting the two messages into separate schema definitions, but then it would complain when saving, e.g., message VideoImpressionsList:
"VideoImpression" is not defined.
How can I make Pub/Sub accept my schema above with two message types defined?
Thanks for any help.
A Pub/Sub schema definition expects a single top-level message type. If you want to use one message type inside another, you should define one inside the other, like this:
syntax = "proto2";
package mypackage;

message VideoImpressionsList {
  message VideoImpression {
    optional string user_id = 1;
    optional string candidate_id = 2;
    optional int64 event_timestamp = 3;
  }
  repeated VideoImpression video_impressions = 1;
}
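Nesting only changes how the type is referenced from the generated code. In C++, for instance, the inner message becomes a nested class of the outer one; a minimal sketch, assuming the schema was compiled with protoc --cpp_out and the generated header is named schema.pb.h (a placeholder name):

#include <iostream>
#include "schema.pb.h"  // hypothetical name for the generated header

int main() {
  mypackage::VideoImpressionsList list;
  // The nested type is scoped inside the enclosing message's class.
  mypackage::VideoImpressionsList::VideoImpression* imp = list.add_video_impressions();
  imp->set_user_id("user-1");
  imp->set_candidate_id("candidate-1");
  imp->set_event_timestamp(1700000000);
  std::cout << list.video_impressions_size() << " impression(s)" << std::endl;
  return 0;
}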

Dealing with a dict using maps in Terraform

I'm looking for a way to define a list of SSH keys in a variables file so that I can retrieve them in the Terraform module code for my compute instance, like this:
metadata = {
  ssh-keys = join("\n", [for user, key in var.ssh_keys : "${user}:${key}"])
}
Here is the content of the variables file I wrote to achieve that:
variable "ssh_keys" {
  type = "map"
  default = {
    {
      user = "amary"
      key  = "${file("/Users/nixmind/.ssh/amary.pub")}"
    }
    {
      user = "nixmind"
      key  = "${file("/Users/nixmind/.ssh/nixmind.pub")}"
    }
  }
}
But I'm getting this error:

Error: Missing attribute value

  on variables.tf line 8, in variable "ssh_keys":

Expected an attribute value, introduced by an equals sign ("=").
I don't really get what I should do here.
There are a few different problems here. I'll talk about them one at a time.
The first is that your default expression is not using correct map syntax. Here's a corrected version:
variable "ssh_keys" {
  type = map(string)
  default = {
    amary   = file("/Users/nixmind/.ssh/amary.pub")
    nixmind = file("/Users/nixmind/.ssh/nixmind.pub")
  }
}
The second problem is that a variable default value cannot include function calls, so the calls to file above are invalid. There are a few different options here about how to deal with this, but if this is a variable in a root module then I expect it would be most convenient to have the variable be a map of filenames rather than a map of the contents of those files, and then the module itself can read the contents of those files in a later step:
variable "ssh_key_files" {
  type = map(string)
  default = {
    amary   = "/Users/nixmind/.ssh/amary.pub"
    nixmind = "/Users/nixmind/.ssh/nixmind.pub"
  }
}
Your for expression for building the list of "user:key" strings was correct with how you had the variable defined before, but with the adjustment I've made above to use filenames instead of contents we'll need an extra step to actually read the files:
locals {
  ssh_keys = { for u, fn in var.ssh_key_files : u => file(fn) }
}
We can then use local.ssh_keys to get the map from username to key needed for the metadata expression:
metadata = {
  ssh-keys = join("\n", [for user, key in local.ssh_keys : "${user}:${key}"])
}
If you do want this module to accept already-loaded SSH key data rather than filenames then that is possible but the variable will need to be required rather than having a default, because it'll be up to the calling module to load the files.
The definition without the default value will look like this:
variable "ssh_keys" {
  type = map(string)
}
Then your calling module, if there is one (that is, if this isn't a root module), can be the one to call file to load those in:
module "example" {
  source = "./modules/example"

  ssh_keys = {
    amary   = file("/Users/nixmind/.ssh/amary.pub")
    nixmind = file("/Users/nixmind/.ssh/nixmind.pub")
  }
}
The above is a reasonable interface for a shared module that will be called from another module like this, but it's not a convenient design for a root module because the "caller" in that case is the person or script running the terraform program, and so providing the data from those files would require reading them outside of Terraform and passing in the results.
My problem was semantic, not syntactic, because I can't use a map of maps the way I tried to. Instead I used a list and the error disappeared.
variable "ssh_keys" {
  type = list(object({
    user = string
    key  = string
  }))
  default = [
    {
      user = "amaret93"
      key  = "/Users/valerietchala/.ssh/amaret93.pub"
    },
    {
      user = "nixmind"
      key  = "/Users/valerietchala/.ssh/nixmind.pub"
    }
  ]
}
But Martin's answer above is an even better approach.

Creating a JSON string using a JSON library

I am using jsonc-libjson to create a JSON string like below.

{
  "author-details": {
    "name": "Joys of Programming",
    "Number of Posts": 10
  }
}
My code looks like this:

json_object *jobj = json_object_new_object();
json_object *jStr1 = json_object_new_string("Joys of Programming");
json_object *jstr2 = json_object_new_int(10);
json_object_object_add(jobj, "name", jStr1);
json_object_object_add(jobj, "Number of Posts", jstr2);
This gives me the JSON string:

{
  "name": "Joys of Programming",
  "Number of Posts": 10
}
How do I add the top part associated with author details?
To paraphrase an old advertisement, "libjson users would rather fight than switch."
At least I assume you must like fighting with the library. Using nlohmann's JSON library, you could use code like this:
nlohmann::json j {
  { "author-details", {
    { "name", "Joys of Programming" },
    { "Number of Posts", 10 }
  }}
};
At least to me, this seems somewhat simpler and more readable.
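For completeness, here is a compilable version of that construction; a minimal sketch, assuming the single-header nlohmann/json.hpp is on your include path:

#include <iostream>
#include <nlohmann/json.hpp>

int main() {
  nlohmann::json j {
    { "author-details", {
      { "name", "Joys of Programming" },
      { "Number of Posts", 10 }
    }}
  };
  std::cout << j.dump(4) << std::endl;  // pretty-print with a 4-space indent
  return 0;
}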
Parsing is about equally straightforward. For example, let's assume we had a file named somefile.json that contained the JSON data shown above. To read and parse it, we could do something like this:
#include <fstream>
#include <iostream>
#include <nlohmann/json.hpp>

nlohmann::json j;
std::ifstream in("somefile.json");
in >> j;  // Read the file and parse it into a json object

// Let's start by retrieving and printing the name.
std::cout << j["author-details"]["name"];
Or, let's assume we found a post, so we want to increment the count of posts. This is one place where things get... less tasteful: we can't increment the value as directly as we'd like; we have to obtain the value, add one, then assign the result (as we would in lesser languages that lack ++):
j["author-details"]["Number of Posts"] = j["author-details"]["Number of Posts"] + 1;
Then we want to write out the result. If we want it "dense" (e.g., we're going to transmit it over a network for some other machine to read it) we can just use <<:
somestream << j;
On the other hand, we might want to pretty-print it so a person can read it more easily. The library respects the width we set with setw, so to have it print out indented with 4-column tab stops, we can do:
somestream << std::setw(4) << j;
Create a new JSON object and add the one you already created as a child.
Just insert code like this after what you've already written:
json_object *root = json_object_new_object();
json_object_object_add(root, "author-details", jobj);  // "jobj" is the same object as in your original snippet
Based on the comment from Dominic, I was able to figure out the correct answer.
json_object *jobj = json_object_new_object();  // outer object
json_object *root = json_object_new_object();  // inner "author-details" object
json_object_object_add(jobj, "author-details", root);

json_object *jStr1 = json_object_new_string("Joys of Programming");
json_object *jstr2 = json_object_new_int(10);
json_object_object_add(root, "name", jStr1);
json_object_object_add(root, "Number of Posts", jstr2);
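To get the printable string out at the end (and to free the whole tree when you're done), json-c provides a stringifier and reference-counted cleanup; a short sketch of that final step:

#include <stdio.h>
#include <json-c/json.h>

/* ... after building jobj as above ... */
printf("%s\n", json_object_to_json_string_ext(jobj, JSON_C_TO_STRING_PRETTY));
json_object_put(jobj);  /* drops the last reference and frees jobj, root, and both values */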

Is it possible to restore the .proto file when a message uses package, imports, and field options?

My goal is to restore lost .proto files, written by someone else, from existing C++ protobuf messages. Using the Descriptor and EnumDescriptor, I was able to do the following:
const google::protobuf::EnumDescriptor* logOptionDesc =
    bgs::protocol::LogOption_descriptor();
std::string logOptionStr = logOptionDesc->DebugString();

bgs::protocol::EntityId entityId;
const google::protobuf::Descriptor* entityIdDesc = entityId.GetDescriptor();
std::string entityIdStr = entityIdDesc->DebugString();
The logOptionStr string I got looked something like this:
enum LogOption {
  HIDDEN = 1;
  HEX = 2;
}
and entityIdStr:
message EntityId {
  required fixed64 high = 1 [(.bgs.protocol.log) = HEX];
  required fixed64 low = 2 [(.bgs.protocol.log) = HEX];
}
Notice the EntityId message contains some field options. Without resolving this dependency I cannot generate a FileDescriptor that can help me restore the .proto files. I suspect the EntityId string should look something like the following:
import "LogOption.proto";

package bgs.protocol;

extend google.protobuf.FieldOptions {
  optional LogOptions log = HEX;
}

message EntityId {
  required fixed64 high = 1 [(.bgs.protocol.log) = HEX];
  required fixed64 low = 2 [(.bgs.protocol.log) = HEX];
}
Is it possible to restore the .proto files that require additional information such as package, field options and imports? What else do I need to do to restore the .proto files?
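One direction that may help here, assuming the generated C++ types are linked into your binary: a message's Descriptor knows which FileDescriptor it belongs to, and FileDescriptor::DebugString() reconstructs the entire file rather than a single message, including the package statement, imports, and extension definitions such as the (.bgs.protocol.log) option. A minimal sketch:

#include <iostream>
#include <google/protobuf/descriptor.h>

// Assumes the generated header that defines bgs::protocol::EntityId is included.
const google::protobuf::FileDescriptor* file =
    bgs::protocol::EntityId::descriptor()->file();
std::cout << file->DebugString();  // whole .proto file, not just one message

The imported files can be dumped the same way by walking file->dependency(i) for each of the file->dependency_count() imports.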

Error while compiling protocol buffer on Mac

I am following the tutorial for protocol buffers, and I keep running into different errors while compiling. My addressbook.proto file is in /Users/flexmaster411/protobuffer:
protoc -I=/Users/flexmaster411/protobuffer --python_out= /Users/flexmaster411/protobuffer/addressbook.proto /Users/flexmaster411/protobuffer
I keep getting the following error even though I have syntax = "proto3"; in my proto file:
[libprotobuf WARNING google/protobuf/compiler/parser.cc:471] No syntax specified for the proto file. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
I'm not sure whether I've set up the destination folders correctly and whether that's what's causing this. Any help appreciated. Here is my proto file:
syntax = "proto3";
package tutorial;

message Person {
  string name = 1;
  int32 id = 2;  // Unique ID number for this person.
  string email = 3;

  enum PhoneType {
    MOBILE = 0;
    HOME = 1;
    WORK = 2;
  }

  message PhoneNumber {
    string number = 1;
    PhoneType type = 2;
  }

  repeated PhoneNumber phones = 4;
}

// Our address book file is just one of these.
message AddressBook {
  repeated Person people = 1;
}
It looks like you reversed the order of the parameters. /Users/flexmaster411/protobuffer is your output directory, so it should appear with --python_out. Since you specified it second, protoc thinks you're telling it that /Users/flexmaster411/protobuffer is an input. So it's trying to open a directory and then parse it as a .proto file. Amusingly, read() on a directory returns no data, which protoc interprets as a perfectly valid .proto file that simply doesn't declare anything! But it then gives you a warning because this empty file doesn't have any syntax line.
I think what you meant to type is:
protoc -I=/Users/flexmaster411/protobuffer --python_out=/Users/flexmaster411/protobuffer /Users/flexmaster411/protobuffer/addressbook.proto