Sending data using Hashtable on Photon Cloud - c++

I am trying to send data using a Hashtable on Photon Cloud. I do receive the data with the correct eventCode, but the key-value pair comes back as some random number. My sending code looks like this:
void NetworkLogic::sendEvent(void)
{
    ExitGames::Common::Hashtable* table = new ExitGames::Common::Hashtable;
    table->put<int,int>(4, 21);
    const ExitGames::Common::Hashtable temp = (const ExitGames::Common::Hashtable)*table;//= new ExitGames::Common::Hashtable;
    mLoadBalancingClient.opRaiseEvent(false, temp, 100);
}
The receiving code looks like this:
void NetworkLogic::customEventAction(int playerNr, nByte eventCode, const ExitGames::Common::Hashtable& eventContent)
{
    // you do not receive your own events unless you specify yourself as one of the receivers explicitly, so you must start 2 clients to receive the events you have sent, as sendEvent() uses the default receivers of opRaiseEvent() (all players in the same room as the sender, except the sender itself)
    PhotonPeer_sendDebugOutput(&mLoadBalancingClient, DEBUG_LEVEL_ALL, L"");
    cout << ((int)(eventContent.getValue(4)));
}
What gets printed on the console is some random integer value, while it should be 21. What am I doing wrong here?
Edit:
In customEventAction(), when I used the following statements:
cout<<eventContent.getValue(4)->getType()<<endl;
cout<<"Event code = "<<eventCode<<endl;
I got the following output:
i
Event code = d
I searched and found that 'i' is the value of EG_INTEGER, which means the value I am sending is being received properly. I just haven't been able to convert it back to an int. And why is the event code coming out as 'd'?

eventContent.getValue(4) returns an Object.
You can't simply cast that Object to int; you have to access the int value inside it:
int myInt = 0;
if(eventContent.getValue(4))
    myInt = ExitGames::Common::ValueObject<int>(eventContent.getValue(4)).getDataCopy();
cout << myInt;
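As for the event code printing as 'd': nByte is a byte-sized character type, so cout streams it as a character rather than a number, and 100 is the ASCII code for 'd'. Casting it to a wider integer type shows the numeric value (a one-line sketch):
cout << "Event code = " << static_cast<int>(eventCode) << endl; // prints 100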

Related

qt - cannot add more than one struct onto a qlist

About:
P.S. I know this is a specific question, but I am at a loss and have no idea why this is happening.
The algorithm below should add a struct to a list. Each struct is a server_connection object, and the receiving list is contained within another struct named VPN_Server.
When adding structs to the list, only the very first struct added is kept; no further ones are added.
This is confirmed by the debugger window:
for each new IP, a new VPN_Server struct is created, and a new server_connection is created and push_back'ed onto the ServerInstancesList list. But inside the foreach loop, attempting to add the object is fruitless.
Problem:
When pushing a server_connection object onto a specific VPN_Server struct, the data ends up in the temporary foreach container but is never applied to the original list.
I have tried:
adding a custom addConnection() method to the VPN_Server struct:
void addConnection(server_connection s_con){
    ServerInstancesList.push_back(s_con);
}
Creating a temporary list, adding the server_connection, creating a new VPN_Server object, and setting that equal to the current foreach container.
None of these help.
Testing & in-depth description:
In the algorithm, my 1st and 3rd vpn_connection structs have the same IP. On the third iteration, the foreach loop is executed and the following occurs.
VPN_Server ser contains the 1st struct's info, i.e. an IP and one object in its QList<server_connection> called ServerInstancesList.
An object is added to ser via addConnection(s_con). Afterwards, the loop is terminated with the break. ser registered the added object, yet outside the foreach loop no new object was added, not even to any struct within the list.
It seems like it should be an easy fix, but I just cannot find it.
Help would be appreciated!
Code:
server_connection
struct server_connection{
    //Is a connection contained by IP
    QString cipher, protocol;
    int port;
    bool lzo_compression;
    server_connection(){}
    server_connection(QString _cipher, QString _protocol, int _port, bool _comp){
        cipher = _cipher;
        protocol = _protocol;
        port = _port;
        lzo_compression = _comp;
    }
};
VPN_Server
struct VPN_Server{
    //Holds IP as sort value and list of connection info
    QString VPN_IP;
    QList<server_connection> ServerInstancesList;
    VPN_Server(){
        ServerInstancesList = QList<server_connection>();
    }
    VPN_Server(QString ip, QList<server_connection> server_con_list){
        VPN_IP = ip;
        ServerInstancesList = server_con_list;
    }
    void addConnection(server_connection s_con){
        ServerInstancesList.push_back(s_con);
    }
};
Algorithm:
QList<VPN_Server> data_mananger::parseVPNConnections(QList<VPNConnection> l){
    //Init vars
    QList<VPN_Server> server_list = QList<VPN_Server>();
    VPNConnection v_con;
    bool bAdded = false;
    server_connection s_con;
    //process all vpn_connections; this is the raw form sent, containing as fields: ip, cipher, protocol, port, compression
    foreach (v_con, l) {
        //create server_connection - data sorted by ip
        s_con = server_connection(v_con.cipher, v_con.protocol, v_con.port, v_con.lzo_compression);
        //pass through existing data, checking for a matching ip
        foreach (VPN_Server ser, server_list) {
            if (ser.VPN_IP == v_con.ip) {
                //ip match found -> there already exists an IP with a server connection, adding another one with s_con
                ser.addConnection(s_con);
                bAdded = true;
                //break the current loop short, no longer searching for a matching ip
                break;
            }
        }
        //bAdded == false -> no matching IP has been found, thus creating a new VPN_Server
        if (!bAdded) {
            VPN_Server serv;
            //creating a new connection list and adding s_con to this list
            QList<server_connection> list = QList<server_connection>();
            list.push_back(s_con);
            //adding the VPN_Server to the list containing VPN_Servers
            serv = VPN_Server(v_con.ip, list);
            server_list.push_back(serv);
        }
        else
            //data has been added, resetting add flag
            bAdded = false;
    }
    return server_list;
}
Thanks to all who helped!
Please visit the link to the Qt forums below and give a thumbs up to VRonin for his solution.
Also kudos to @YuriyIvaskevych for helping me here on SO!
So, to reiterate (ba-dum tss) the doc page:
Qt automatically takes a copy of the container when it enters a foreach loop. If you modify the container as you are iterating, that won't affect the loop. [...] using a non-const reference for the variable does not allow you to modify the original container
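In other words (a tiny illustration using the structs from the question; the connection values are made up):
QList<VPN_Server> servers;
servers.push_back(VPN_Server("1.2.3.4", QList<server_connection>()));
foreach (VPN_Server ser, servers) {                                  // ser is a copy of an element of a copy of servers
    ser.addConnection(server_connection("AES", "udp", 1194, true)); // modifies the copy only
}
// servers.first().ServerInstancesList is still empty, exactly as in the question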
Solution (created by @VRonin from forum.qt.io):
for (auto serIter = server_list.begin(); serIter != server_list.end(); ++serIter) {
    if (serIter->VPN_IP == v_con.ip) {
        //ip match found -> there already exists an IP with a server connection, adding another one with s_con
        serIter->addConnection(s_con);
        bAdded = true;
        //break the current loop short, no longer searching for a matching ip
        break;
    }
}
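If you are on C++11, a range-based for with a non-const reference is an equivalent fix, since it iterates over the real container rather than Qt's foreach copy (a sketch with the same variables; note that a ranged-for over a non-const QList detaches it if it is shared, which is harmless here because server_list is local):
for (VPN_Server &ser : server_list) {
    if (ser.VPN_IP == v_con.ip) {
        ser.addConnection(s_con);
        bAdded = true;
        break;
    }
}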

How to get data from Photon eventContent dictionary

We receive this callback from the ExitGames Photon Realtime engine when an event is fired:
customEventAction(int playerNr,
                  nByte eventCode,
                  const ExitGames::Common::Object& eventContent)
If the object is a string, we use this code to extract it:
ExitGames::Common::JString str =
ExitGames::Common::ValueObject<ExitGames::Common::JString>(eventContent).getDataCopy();
However, the object being sent is a dictionary. It's being sent from the server using BroadcastEvent.
How do we get the data out of it?
We've tried this, but it doesn't make any sense:
ExitGames::Common::Dictionary<byte,ExitGames::Common::Object> pdic
= ExitGames::Common::ValueObject<ExitGames::Common::Dictionary<byte,ExitGames::Common::Object>>(eventContent).getDataCopy();
I've found code to get the data out of a Hashtable, but that doesn't work either.
Thanks,
Shaun
ExitGames::Common::Dictionary<nByte, ExitGames::Common::Object> dic = ExitGames::Common::ValueObject<ExitGames::Common::Dictionary<nByte, ExitGames::Common::Object> >(eventContent).getDataCopy();
is absolutely correct and works for me.
The cause of your problem must be inside another line.
When you replace the implementations of sendEvent() and customEventAction() in demo_loadBalancing inside one of the Photon C++ client SDKs with the following snippets, then that demo successfully sends and receives a Dictionary:
send:
void NetworkLogic::sendEvent(void)
{
    ExitGames::Common::ValueObject<ExitGames::Common::JString> obj(L"test");
    ExitGames::Common::Dictionary<nByte, ExitGames::Common::Object> dic;
    dic.put(1, obj);
    mLoadBalancingClient.opRaiseEvent(false, dic, 0);
}
receive:
void NetworkLogic::customEventAction(int /*playerNr*/, nByte /*eventCode*/, const ExitGames::Common::Object& eventContent)
{
    EGLOG(ExitGames::Common::DebugLevel::ALL, L"");
    ExitGames::Common::Dictionary<nByte, ExitGames::Common::Object> dic = ExitGames::Common::ValueObject<ExitGames::Common::Dictionary<nByte, ExitGames::Common::Object> >(eventContent).getDataCopy();
    const ExitGames::Common::Object* pObj = dic.getValue(1);
    ExitGames::Common::JString str = ExitGames::Common::ValueObject<ExitGames::Common::JString>(pObj).getDataCopy();
    mpOutputListener->write(L"received the following string as Dictionary value: " + str);
}
This gives me the following line of output on the receiving client:
received the following string as Dictionary value: test

Some Problems of Indy 10 IdHTTP Implementation

Regarding Indy 10's TIdHTTP: many things have been working perfectly, but a few things don't work so well, which is why, once again, I need your help.
The Download button works perfectly. I'm using the following code:
void __fastcall TForm1::DownloadClick(TObject *Sender)
{
    MyFile = SaveDialog->FileName;
    TFileStream* Fist = new TFileStream(MyFile, fmCreate | fmShareDenyNone);
    Download->Enabled = false;
    Urlz = Edit1->Text;
    Url->Caption = Urlz;
    try
    {
        IdHTTP->Get(Edit1->Text, Fist);
        IdHTTP->Connected();
        IdHTTP->Response->ResponseCode = 200;
        IdHTTP->ReadTimeout = 70000;
        IdHTTP->ConnectTimeout = 70000;
        IdHTTP->ReuseSocket;
        Fist->Position = 0;
    }
    __finally
    {
        delete Fist;
        Form1->Updated();
    }
}
However, a "Cancel Resume" button is still can't resume interrupted downloads. Meant, it is always sending back the entire file every time I call Get() though I've used IdHTTP->Request->Ranges property.
I use the following code:
void __fastcall TForm1::CancelResumeClick(TObject *Sender)
{
    MyFile = SaveDialog->FileName;
    TFileStream* TFist = new TFileStream(MyFile, fmCreate | fmShareDenyNone);
    if (IdHTTP->Connected() == true)
    {
        IdHTTP->Disconnect();
        CancelResume->Caption = "RESUME";
        IdHTTP->Response->AcceptRanges = "Bytes";
    }
    else
    {
        try {
            CancelResume->Caption = "CANCEL";
            // IdHTTP->Request->Ranges == "0-100";
            // IdHTTP->Request->Range = Format("bytes=%d-",ARRAYOFCONST((TFist->Position)));
            IdHTTP->Request->Ranges->Add()->StartPos = TFist->Position;
            IdHTTP->Get(Edit1->Text, TFist);
            IdHTTP->Request->Referer = Edit1->Text;
            IdHTTP->ConnectTimeout = 70000;
            IdHTTP->ReadTimeout = 70000;
        }
        __finally {
            delete TFist;
        }
    }
}
Meanwhile, the FormatBytes function, found here, only shows the size of the downloaded file; I'm still unable to determine the download/transfer speed.
I'm using the following code:
void __fastcall TForm1::IdHTTPWork(TObject *ASender, TWorkMode AWorkMode, __int64 AWorkCount)
{
    __int64 Romeo = 0;
    Romeo = IdHTTP->Response->ContentStream->Position;
    // Romeo = AWorkCount;
    Download->Caption = FormatBytes(Romeo) + " (" + IntToStr(Romeo) + " Bytes)";
    ForSpeed->Caption = FormatBytes(Romeo);
    ProgressBar->Position = AWorkCount;
    ProgressBar->Update();
    Form1->Updated();
}
Please advise and give an example. Any help would sure be appreciated!
In your DownloadClick() method:
Calling Connected() is useless, since you don't do anything with the result. Nor is there any guarantee that the connection will remain connected, as the server could send a Connection: close response header. I don't see anything in your code that is asking for HTTP keep-alives. Let TIdHTTP manage the connection for you.
You are forcing the Response->ResponseCode to 200. Don't do that. Respect the response code that the server actually sent. The fact that no exception was raised means the response was successful whether it is 200 or 206.
You are reading the ReuseSocket property value and ignoring it.
There is no need to reset the Fist->Position property to 0 before closing the file.
Now, with that said, your CancelResumeClick() method has many issues.
You are using the fmCreate flag when opening the file. If the file already exists, you will overwrite it from scratch, thus TFist->Position will ALWAYS be 0. Use fmOpenReadWrite instead so an existing file will open as-is. And then you have to seek to the end of the file to provide the correct Position to the Ranges header.
You are relying on the socket's Connected() state to make decisions. DO NOT do that. The connection may be gone after the previous response, or may have timed out and been closed before the new request is made. The file can still be resumed either way. HTTP is stateless. It does not matter if the socket remains open between requests, or is closed in between. Every request is self-contained. Use information provided in the previous response to govern the next request. Not the socket state.
You are modifying the value of the Response->AcceptRanges property, instead of using the value provided by the previous response. The server tells you whether the file supports resuming, so you have to remember that value, or query it again before attempting to resume the download.
When you actually call Get(), the server may or may not respect the requested Range, depending on whether the requested file supports byte ranges or not. If the server responds with a response code of 206, the requested range is accepted, and the server sends ONLY the requested bytes, so you need to APPEND them to your existing file. However, if the server responds with a response code of 200, the server is sending the entire file from scratch, so you need to REPLACE your existing file with the new bytes. You are not taking that into account.
In your IdHTTPWork() method, in order to calculate the download/transfer speed, you have to keep track of how many bytes are actually being transferred in between each event firing. When the event is fired, save the current AWorkCount and tick count, and then the next time the event is fired, you can compare the new AWorkCount and current ticks to know how much time has elapsed and how many bytes were transferred. From those values, you can calculate the speed, and even the estimated time remaining.
As for your progress bar, you can't use AWorkCount alone to calculate a new position. That only works if you set the progress bar's Max to AWorkCountMax in the OnWorkBegin event, and that value is not always known before a download begins. You need to take into account the size of the file being downloaded, whether it is being downloaded fresh or being resumed, how many bytes are being requested during a resume, etc. So there is a lot more work involved in displaying a progress bar for an HTTP download.
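Putting the last two points together, a minimal sketch of that bookkeeping might look like this. The FLastWorkCount/FLastTick form members and the 500 ms refresh interval are my own additions for illustration, not part of your code; FormatBytes is the function you already use:
// Hypothetical form members (add to TForm1):
//   __int64 FLastWorkCount;  // bytes reported at the previous OnWork event
//   DWORD   FLastTick;       // tick count at the previous OnWork event

void __fastcall TForm1::IdHTTPWorkBegin(TObject *ASender, TWorkMode AWorkMode, __int64 AWorkCountMax)
{
    FLastWorkCount = 0;
    FLastTick = GetTickCount();
    // AWorkCountMax is 0 when the server does not report a size up front
    ProgressBar->Position = 0;
    ProgressBar->Max = (AWorkCountMax > 0) ? (int)AWorkCountMax : 0;
}

void __fastcall TForm1::IdHTTPWork(TObject *ASender, TWorkMode AWorkMode, __int64 AWorkCount)
{
    DWORD tick = GetTickCount();
    DWORD elapsedMs = tick - FLastTick;
    if (elapsedMs >= 500) // refresh the speed display roughly twice per second
    {
        __int64 bytesSinceLast = AWorkCount - FLastWorkCount;
        double bytesPerSec = (bytesSinceLast * 1000.0) / elapsedMs;
        ForSpeed->Caption = FormatBytes((__int64)bytesPerSec) + "/s";
        FLastWorkCount = AWorkCount;
        FLastTick = tick;
    }
    Download->Caption = FormatBytes(AWorkCount) + " (" + IntToStr(AWorkCount) + " Bytes)";
    if (ProgressBar->Max > 0)
        ProgressBar->Position = (int)AWorkCount; // a resumed download would need its starting offset added here
}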
Now, to answer your two questions:
How to retrieve and save the download file to a disk by using its original name?
It is provided by the server in the filename parameter of the Content-Disposition header, and/or in the name parameter of the Content-Type header. If neither value is provided by the server, you can use the filename that is in the URL you are requesting. TIdHTTP has a URL property that provides the parsed version of the last requested URL.
However, since you are creating the file locally before sending your download request, you will have to create a local file using a temp filename, and then rename the local file after the download is complete. Otherwise, use TIdHTTP.Head() to determine the real filename (you can also use it to determine if resuming is supported) before creating the local file with that filename, then use TIdHTTP.Get() to download to that local file. Otherwise, download the file to memory using TMemoryStream instead of TFileStream, and then save with the desired filename when complete.
When I click http://get.videolan.org/vlc/2.2.1/win32/vlc-2.2.1-win32.exe, the server processes the request to its actual URL, http://mirror.vodien.com/videolan/vlc/2.2.1/win32/vlc-2.2.1-win32.exe. The problem is that IdHTTP will not automatically follow it through.
That is because VideoLan is not using an HTTP redirect to send clients to the real URL (TIdHTTP supports HTTP redirects). VideoLan is using an HTML redirect instead (TIdHTTP does not support HTML redirects). When a webbrowser downloads the first URL, a 5 second countdown timer is displayed before the real download then begins. As such, you will have to manually detect that the server is sending you an HTML page instead of the real file (look at the TIdHTTP.Response.ContentType property for that), parse the HTML to determine the real URL, and then download it. This also means that you cannot download the first URL directly into your target local file, otherwise you will corrupt it, especially during a resume. You have to cache the server's response first, either to a temp file or to memory, so you can analyze it before deciding how to act on it. It also means you have to remember the real URL for resuming, you cannot resume the download using the original countdown URL.
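A rough sketch of that detection step only (assuming, as suggested, that you first download into a TMemoryStream instead of straight into the target file; the HTML parsing itself is left out):
TMemoryStream *buf = new TMemoryStream;
try
{
    IdHTTP->Get(Urlz, buf);
    // TIdHTTP follows HTTP redirects, but not HTML <meta> redirects,
    // so check what the server actually returned:
    if (IdHTTP->Response->ContentType.LowerCase().Pos("text/html") != 0)
    {
        // This is the countdown page, not the file: parse the real download URL
        // out of buf (the <meta http-equiv="refresh"> tag) and Get() that URL
        // into the target TFileStream instead, remembering it for any resume.
    }
    else
    {
        // buf already holds the real file
        buf->SaveToFile(MyFile);
    }
}
__finally
{
    delete buf;
}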
Try something more like the following instead. It does not take into account everything mentioned above (particularly speed/progress tracking, HTML redirects, etc), but should get you a little closer:
void __fastcall TForm1::DownloadClick(TObject *Sender)
{
    Urlz = Edit1->Text;
    Url->Caption = Urlz;

    IdHTTP->Head(Urlz);

    String FileName = IdHTTP->Response->RawHeaders->Params["Content-Disposition"]["filename"];
    if (FileName.IsEmpty())
    {
        FileName = IdHTTP->Response->RawHeaders->Params["Content-Type"]["name"];
        if (FileName.IsEmpty())
            FileName = IdHTTP->URL->Document;
    }

    SaveDialog->FileName = FileName;
    if (!SaveDialog->Execute()) return;

    MyFile = SaveDialog->FileName;
    TFileStream* Fist = new TFileStream(MyFile, fmCreate | fmShareDenyWrite);
    try
    {
        try
        {
            Download->Enabled = false;
            Resume->Enabled = false;
            IdHTTP->Request->Clear();
            //...
            IdHTTP->ReadTimeout = 70000;
            IdHTTP->ConnectTimeout = 70000;
            IdHTTP->Get(Urlz, Fist);
        }
        __finally
        {
            delete Fist;
            Download->Enabled = true;
            Updated();
        }
    }
    catch (const EIdHTTPProtocolException &)
    {
        DeleteFile(MyFile);
        throw;
    }
}
void __fastcall TForm1::ResumeClick(TObject *Sender)
{
    TFileStream* Fist = new TFileStream(MyFile, fmOpenReadWrite | fmShareDenyWrite);
    try
    {
        Download->Enabled = false;
        Resume->Enabled = false;
        IdHTTP->Request->Clear();
        //...
        Fist->Seek(0, soEnd);
        IdHTTP->Request->Ranges->Add()->StartPos = Fist->Position;
        IdHTTP->Request->Referer = Edit1->Text;
        IdHTTP->ConnectTimeout = 70000;
        IdHTTP->ReadTimeout = 70000;
        IdHTTP->Get(Urlz, Fist);
    }
    __finally
    {
        delete Fist;
        Download->Enabled = true;
        Updated();
    }
}
void __fastcall TForm1::IdHTTPHeadersAvailable(TObject *Sender, TIdHeaderList *AHeaders, bool &VContinue)
{
    Resume->Enabled = ( ((IdHTTP->Response->ResponseCode == 200) || (IdHTTP->Response->ResponseCode == 206)) && TextIsSame(AHeaders->Values["Accept-Ranges"], "bytes") );
    if ((IdHTTP->Response->ContentStream) && (IdHTTP->Request->Ranges->Count > 0) && (IdHTTP->Response->ResponseCode == 200))
        IdHTTP->Response->ContentStream->Size = 0;
}
@Romeo:
Also, you can try the following function to determine the real download filename.
I've translated it to C++ based on RRUZ's function. So far so good; I'm using it in my simple IdHTTP download program, too.
Of course, this translation could still use improvements from Remy Lebeau, RRUZ, or any other master here.
String __fastcall GetRemoteFileName(const String URI)
{
    String result;
    try
    {
        TIdHTTP* HTTP = new TIdHTTP(NULL);
        try
        {
            HTTP->Head(URI);
            result = HTTP->Response->RawHeaders->Params["Content-Disposition"]["filename"];
            if (result.IsEmpty())
            {
                result = HTTP->Response->RawHeaders->Params["Content-Type"]["name"];
                if (result.IsEmpty())
                    result = HTTP->URL->Document;
            }
        }
        __finally
        {
            delete HTTP;
        }
    }
    catch (const Exception &ex)
    {
        ShowMessage(const_cast<Exception&>(ex).ToString());
    }
    return result;
}
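For example (a small usage sketch with the controls from the question), the result can pre-fill the save dialog before downloading:
SaveDialog->FileName = GetRemoteFileName(Edit1->Text);
if (SaveDialog->Execute())
    MyFile = SaveDialog->FileName;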

How to report invalid data while processing data with Google dataflow?

I am looking at the documentation and the provided examples to find out how I can report invalid data while processing data with Google's dataflow service.
Pipeline p = Pipeline.create(options);
p.apply(TextIO.Read.named("ReadMyFile").from(options.getInput()))
.apply(new SomeTransformation())
.apply(TextIO.Write.named("WriteMyFile").to(options.getOutput()));
p.run();
In addition to the actual input/output, I want to produce a second output file that contains records that are considered invalid (e.g. missing data, malformed data, values that were too high). I want to troubleshoot those records and process them separately.
Input: gs://.../input.csv
Output: gs://.../output.csv
List of invalid records: gs://.../invalid.csv
How can I redirect those invalid records into a separate output?
You can use PCollectionTuples to return multiple PCollections from a single transform. For example,
TupleTag<String> mainOutput = new TupleTag<>("main");
TupleTag<String> missingData = new TupleTag<>("missing");
TupleTag<String> badValues = new TupleTag<>("bad");
Pipeline p = Pipeline.create(options);
PCollectionTuple all = p
    .apply(TextIO.Read.named("ReadMyFile").from(options.getInput()))
    .apply(new SomeTransformation());
all.get(mainOutput)
    .apply(TextIO.Write.named("WriteMyFile").to(options.getOutput()));
all.get(missingData)
    .apply(TextIO.Write.named("WriteMissingData").to(...));
...
PCollectionTuples can either be built up directly out of existing PCollections, or emitted from ParDo operations with side outputs, e.g.
PCollectionTuple partitioned = input.apply(ParDo
    .of(new DoFn<String, String>() {
        public void processElement(ProcessContext c) {
            if (checkOK(c.element())) {
                // Shows up in partitioned.get(mainOutput).
                c.output(...);
            } else if (hasMissingData(c.element())) {
                // Shows up in partitioned.get(missingData).
                c.sideOutput(missingData, c.element());
            } else {
                // Shows up in partitioned.get(badValues).
                c.sideOutput(badValues, c.element());
            }
        }
    })
    .withOutputTags(mainOutput, TupleTagList.of(missingData).and(badValues)));
Note that in general the various side outputs need not have the same type, and data can be emitted any number of times to any number of side outputs (rather than the strict partitioning we have here).
Your SomeTransformation class could then look something like
class SomeTransformation extends PTransform<PCollection<String>,
                                            PCollectionTuple> {
    public PCollectionTuple apply(PCollection<String> input) {
        // Filter into good and bad data.
        PCollectionTuple partitioned = ...
        // Process the good data.
        PCollection<String> processed =
            partitioned.get(mainOutput)
                .apply(...)
                .apply(...)
                ...;
        // Repackage everything into a new output tuple.
        return PCollectionTuple.of(mainOutput, processed)
            .and(missingData, partitioned.get(missingData))
            .and(badValues, partitioned.get(badValues));
    }
}
Robert's suggestion of using sideOutputs is great, but note that this will only work if the bad data is identified by your ParDos. There currently isn't a way to identify bad records hit during initial decoding (where the error is hit in Coder.decode). We've got plans to work on that soon.

Protocol buffer polymorphism

I have a C++ program that sends out various events, e.g. StatusEvent and DetectionEvent with different proto message definitions, to a message service (currently ActiveMQ, via the activemq-cpp API). I want to write a message listener that receives these messages, parses them and writes them to cout, for debugging purposes. The listener has status_event_pb.h and detection_event_pb.h linked.
My question is: How can I parse the received event without knowing its type? I want to do something like (in pseudo code)
receive event
type = parseEventType(event);
if( type == events::StatusEventType) {
    events::StatusEvent se = parseEvent(event);
    // do stuff with se
}
else {
    // handle the case when the event is a DetectionEvent
}
I looked at this question but I'm not sure if extensions are the right way to go here. A short code snippet pointing the way will be much appreciated. Examples on protobuf are so rare!
Thanks!
It seems extensions are indeed the way to go but I've got one last point to clear up. Here's the proto definition that I have so far:
// A general event; can be thought of as the base Event class for other event types.
message Event {
  required int64 task_id = 1;
  required string module_name = 2; // module that sent the event
  extensions 100 to 199; // for different event types
}

// Extend the base Event with additional types of events.
extend Event {
  optional StatusEvent statusEvent = 100;
  optional DetectionEvent detectionEvent = 101;
}

// Contains one bounding box detected in a video frame,
// representing a region of interest.
message DetectionEvent {
  optional int64 frame = 2;
  optional int64 time = 4;
  optional string label = 6;
}

// Indicates a status change of the current module to other modules in the same service.
// In addition, parameter information to be used by other modules can
// be passed, e.g. the video frame dimensions.
message StatusEvent {
  enum EventType {
    MODULE_START = 1;
    MODULE_END = 2;
    MODULE_FATAL = 3;
  }
  required EventType type = 1;
  required string module_name = 2; // module that sent the event

  // Optional key-value pairs for data to be passed on.
  message Data {
    required string key = 1;
    required string value = 2;
  }
  repeated Data data = 3;
}
My problem now is (1) how to know which specific event the Event message contains, and (2) how to make sure that it contains only one such event (according to the definition, it can contain both a StatusEvent and a DetectionEvent).
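For what it's worth, proto2's generated C++ API exposes HasExtension()/GetExtension() for this. Below is a sketch of the receiving side, assuming the definitions above live in package events and rawBytes is an assumed std::string holding the serialized payload; the exact identifier names come from protoc's C++ naming of the extension fields (typically the lowercased field name):
events::Event event;
event.ParseFromString(rawBytes); // rawBytes: assumed serialized Event

if (event.HasExtension(events::statusevent))
{
    const events::StatusEvent& se = event.GetExtension(events::statusevent);
    // handle status event
}
else if (event.HasExtension(events::detectionevent))
{
    const events::DetectionEvent& de = event.GetExtension(events::detectionevent);
    // handle detection event
}
// Note: proto2 itself does not enforce "exactly one" extension being set;
// the sender has to guarantee that, or the receiver can treat "both set" as an error.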
I would not use Protocol Buffers for that, but that's perhaps a combination of little use and other habits.
Anyway, I think I would use an abstract class here, to ease general handling and to carry routing information. This class would not be defined using protobuf; it would contain a serialized protobuf message.
class Message
{
public:
    Type const& GetType() const;
    Origin const& GetOrigin() const;
    Destination const& GetDestination() const;
    // ... other information

    template <class T>
    void Decode(T& proto) const
    {
        proto.ParseFromIstream(&mContent); // perhaps a try/catch ?
    }

private:
    // ...
    mutable std::stringstream mContent; // mutable: parsing advances the stream position
};
With this structure, you have both general and specific handling at the tip of your fingers:
void receive(Message const& message)
{
    LOG("receive - " << message.GetType() << " from " << message.GetOrigin()
        << " to " << message.GetDestination());

    if (message.GetType() == "StatusEvent")
    {
        StatusEvent statusEvent;
        message.Decode(statusEvent);
        // do something
    }
    else if (message.GetType() == "DetectionEvent")
    {
        DetectionEvent detectionEvent;
        message.Decode(detectionEvent);
        // do something
    }
    else
    {
        LOG("receive - Unhandled type");
    }
}
Of course, it would be prettier if you used a std::unordered_map<Type, Handler> instead of a hardcoded if / else if / else chain, but the principle remains identical:
Encode the type of message sent in the header
Decode only the header upon reception and dispatch based on this type
Decode the protobuf message in a part of the code where the type is known statically
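A minimal sketch of that table-driven dispatch, assuming Type is (or converts to) std::string, as the string comparisons above suggest, and reusing the same LOG macro and Message class:
#include <functional>
#include <string>
#include <unordered_map>

using Handler = std::function<void(Message const&)>;

// one entry per event type; adding a new event means adding one entry here
std::unordered_map<std::string, Handler> const handlers =
{
    { "StatusEvent", [](Message const& m) {
          StatusEvent statusEvent;
          m.Decode(statusEvent);
          // do something
      } },
    { "DetectionEvent", [](Message const& m) {
          DetectionEvent detectionEvent;
          m.Decode(detectionEvent);
          // do something
      } },
};

void receive(Message const& message)
{
    auto const it = handlers.find(message.GetType());
    if (it != handlers.end())
        it->second(message);
    else
        LOG("receive - Unhandled type");
}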