C++ serialization library that supports partial serialization? [closed] - c++

Are there any good existing C++ serialization libraries that support partial serialization?
By "partial serialization" I mean that I might want to save the values of 3 specific members, and later be able to apply that saved copy to a different instance. I'd only update those 3 members and leave the others intact.
This would be useful for synchronizing data over a network. Say I have some object on a client and a server, and when a member changes on the server I want to send the client a message containing the updated value for that member and that member only. I don't want to send a copy of the whole object over the wire.
boost::serialization at a glance looks like it only supports all or nothing.
Edit: 3 years after originally writing this I look back at it and say to myself, 'wut?' boost::serialization lets you define which members you want saved, so it would support 'partial serialization' as I seem to have described it. Further, since C++ lacks reflection, serialization libraries require you to explicitly specify each member you're saving anyway, unless they come with some sort of external tooling to parse the source files or use a separate input file format to generate C++ code (e.g. what Protocol Buffers does). I think I must have been conceptually confused when I wrote this.
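For the record, here is a minimal sketch of that selective approach with boost::serialization. The Widget type and its members are made up for illustration; only the members listed in serialize() round-trip, so loading the archive into a different instance leaves z untouched:

#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <sstream>

struct Widget {
    int x = 0, y = 0, z = 0;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & x & y;  // z is deliberately not serialized
    }
};

int main() {
    Widget a;
    a.x = 1; a.y = 2; a.z = 3;

    std::stringstream ss;
    { boost::archive::text_oarchive oa(ss); oa << a; }  // saves x and y only

    Widget b;
    b.z = 99;  // a different instance with its own z
    { boost::archive::text_iarchive ia(ss); ia >> b; }
    // b.x == 1, b.y == 2, and b.z is still 99
}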

You're clearly not looking for serialization here.
Serialization is about saving an object and then recreating it from the stream of bytes. Think video game saves, or the session context for a webserver.
What you need here is messaging. Google's FlatBuffers is nice for that. Specify a message that contains every single field as optional; upon reception of the message, update your object with the fields that do exist and leave the others untouched.
The great thing with FlatBuffers is that it handles forward and backward compatibility nicely, as well as text and binary encoding (text being great for debugging and binary being better for pure performance), on top of a zero-cost parsing step.
And you can even decode the messages from another language (say Python or Ruby) if you save them somewhere and want to throw together an HTML GUI to inspect them!
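To make the "every field optional" idea concrete, here is a hedged plain-C++ sketch of the update pattern. In practice the generated FlatBuffers (or protobuf) accessors would play the role of the std::optional fields; the Player/PlayerUpdate names are made up:

#include <optional>
#include <string>

struct Player {                  // the object kept on both client and server
    std::string name;
    int hp = 100;
    float x = 0, y = 0;
};

struct PlayerUpdate {            // every field is optional
    std::optional<int> hp;
    std::optional<float> x, y;
};

void apply(const PlayerUpdate& u, Player& p) {
    if (u.hp) p.hp = *u.hp;      // only fields that were sent are applied
    if (u.x)  p.x  = *u.x;
    if (u.y)  p.y  = *u.y;
}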

Although I'm not familiar with them, you could also check out Google's Protocol Buffers.

Related

Better way to parse a config File in C++ [closed]

I am currently working on a parser project to convert files from one XML format to another XML format.
I finished my project, and there are quite a lot of parameters to know; most of them are paths to files, lists of parameters, etc.
I would like it to be easy to use, so I created a settings.txt containing lines like
someparameter = defaultvalue
This file is easy to modify. I considered the case where people want to parse files with the same parameters, so my application behaves like this:
Do you want to change parameters? (Y/N)
if (yes) {
    Do you want to change parameter 1? // If no value is entered, the default in [] stays
    [pathbydefault]
}
...
else { load from settings.txt }
To implement this, I use getline(), split on the first '=', and put the pairs into a std::map.
I think this is a bad choice. I searched and found several alternatives for C++: a list (really heavy for 10 parameters), a table (not easy to read; the code is opaque to other people).
What solution would you advise me to use?
Of course, I am not asking for an implementation, just for some solutions to consider.
Some info: I don't think it matters much, but I work on Unix-based systems (macOS or Linux), so I can't use Windows libraries. I saw a Windows solution but did not look into it further.
The rule of thumb with this stuff (IMHO) is to always use a good library to parse a standard data format, at least in any situation where your needs might continue to scale. For example, I tend to be fond of JSON because it's a simple format that is easy for humans to read and write, with quite a few good-quality library choices in C++.
This avoids having to write any bug-prone parsing logic yourself. It also makes it very easy to, e.g., write Python scripts that generate or verify your configuration files (since Python, like about every language, has very easy access to JSON).
In your code it is also a good idea to cleanly separate the data format from the std::map (or whatever data structure) you load it into. That way, the portion of the code that changes if you switch data formats stays contained.
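As a hedged sketch of what that looks like, assuming the nlohmann/json library and made-up key names; absent keys fall back to defaults, much like the prompt loop above:

#include <fstream>
#include <iostream>
#include <string>
#include <nlohmann/json.hpp>

int main() {
    std::ifstream in("settings.json");
    nlohmann::json cfg = nlohmann::json::parse(in);

    // value() returns the supplied default when the key is missing
    std::string inputPath = cfg.value("input_path", std::string("default.xml"));
    int verbosity = cfg.value("verbosity", 0);

    std::cout << inputPath << " " << verbosity << "\n";
}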
One future consideration would be using an XML transformation language to transition between an old XML file format and a new XML file format. I'm not sure if you're already doing this, so if you are, please disregard. If you're unfamiliar, the most common of these languages is XSLT (eXtensible Stylesheet Language Transformations).
Rather than having the end user feed a myriad of options/paths/values into an INI-style configuration file, the end user would ideally only supply XML files (which can easily be edited by hand) and XSLT template files to the XSLT processor. You could update your C++ application to focus less on obtaining every input from the end user and more on creating/choosing the right XML/XSLT template files.
As you mention working on Unix-based systems, you might want to consider this processor: http://xml.apache.org/xalan-c/

C++/Objective-C - how to analyse a big project (Unix way)? [closed]

Normally, to analyse big C projects, I prefer grep/GNU command-line tools, lint, and simple Python scripts. By "analysing" a C project I mean collecting code statistics and understanding the project's structure, its data structures, and its flow of execution - what function calls what, entry points in different modules, static members, threads, etc. But this does not work so well with object-oriented code.
Whenever I have a big C++ (or Objective-C) project containing a large number of source files and several directories, I would like to see its class diagram, data fields, methods, messages, instances, etc.
I am looking for the most Unix-way solution. Can you help me?
Doxygen is the closest I could find when I was searching last time. It is not the Unix way, but it is available for free on Linux/Windows/Mac. It generated decent graphs for me. Hope it helps.
http://www.doxygen.nl/
http://en.wikipedia.org/wiki/Doxygen
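For reference, a minimal Doxyfile sketch with the diagram-related options turned on (option names are Doxygen's own; the input path is a placeholder, and the graphs require Graphviz to be installed):

# Project sources; adjust the path to your tree
INPUT          = src/
RECURSIVE      = YES
# Document entities even without doc comments
EXTRACT_ALL    = YES
# Use Graphviz dot for class/collaboration/call graphs
HAVE_DOT       = YES
CALL_GRAPH     = YES
CALLER_GRAPH   = YES
UML_LOOK       = YES
GENERATE_HTML  = YES

Run doxygen with this file in the project root and browse the generated HTML.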
With message passing and dynamic dispatch going on, you are pretty much screwed. It doesn't even depend on the language; messaging is just as common in the C++ world. There is no tool that can analyze the code and tell you what the application flow will look like. In those cases, the whole data/execution flow may depend on configuration files, on how you hook up producers/consumers together, etc., and may change significantly. If you are lucky, there is some high-level documentation, maybe with pictures and a description of the overall ideas. Otherwise, the only option is to run it under a debugger for any given configuration and see what is going on in there, step by step. Isn't that the true UNIX way?
Your request is for a variety of views, some text-based, some structure-based.
You might consider Understand for C++, which does a mixture of these. I don't know if it handles Objective-C.
Our Source Code Search Engine (SCSE) is rather more limited, but provides a much faster way to "grep" than grep does. For large code bases this matters. It will handle multiple languages and dialects. We don't have an Objective C dialect, but I think our C or C++ front ends would actually work pretty well for this, since Objective C uses pretty much the same lexical syntax.

Architectural tips on building a shared resource for different processes [closed]

In the company I work at we're dealing with a huge problem: we have a system that consists of several units of processing. We made it this way so that each module has a specific functionality. The integration between these modules is done using a queue system (which is not fast, but we're working on it) and by replicating messages between the modules. The problem is that this generates a great deal of overhead, as four of these systems require the same kind of data, and maintaining consistency across the modules is painful.
Another requirement for the system is redundancy, so I was thinking of killing these two problems in one shot.
So I was thinking of using some kind of shared resource. I've looked at shared memory (which is great, but could lead to locking inconsistencies if a module crashes while holding a lock, leaving the program in an inconsistent state), and maybe doing some "raw copy" from the segment to another computer for redundancy.
So I began to search for alternatives and ideas. I found NoSQL, but I don't know if it would meet the speed I require.
I need something that is (ideally):
Memory-like fast
Able to provide redundancy (active-passive is OK, active-active is good)
I also think that shared memory is the way to go. To provide redundancy, let every process copy the data that is going to be changed into local/non-shared memory, and only copy it back to shared memory after the module has done its work. Make sure the "copy to shared memory" part is as small as possible and that nothing can go wrong while doing the copy. Some tricks you could use:
Prepare all data in local memory and use one memcpy operation to copy it to shared memory.
Use a single value to indicate that the written data is valid. This could be a boolean, or something like a version number that indicates the "version" of the data written in shared memory.
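A minimal sketch of that version-number trick, assuming POSIX shared memory; the Payload type and segment name are made up, error checks are omitted, and a production version needs the full seqlock fencing discipline:

#include <atomic>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct Payload { int values[64]; };

struct Shared {
    std::atomic<unsigned> version;  // even = stable, odd = write in progress
    Payload data;
};

int main() {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(Shared));
    auto* shared = static_cast<Shared*>(
        mmap(nullptr, sizeof(Shared), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    Payload local{};         // prepare everything in local, non-shared memory
    local.values[0] = 42;

    shared->version.fetch_add(1);                      // now odd: write begins
    std::memcpy(&shared->data, &local, sizeof(Payload)); // one copy to shared memory
    shared->version.fetch_add(1);                      // even again: data valid

    munmap(shared, sizeof(Shared));
    close(fd);
}

A reader snapshots the counter, copies the data out, and retries whenever the counter was odd or changed during the copy. Note that std::atomic only works across processes when it is lock-free for the chosen type.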

IDL-like parser that turns a document definition into powerful classes? [closed]

I am looking for an IDL-like (or whatever) translator which turns a DOM- or JSON-like document definition into classes which
are accessible from both C++ and Python, within the same application
expose document properties as ints, floats, strings, binary blobs and compounds: array, string dict (both nestable) (basically the JSON type feature set)
allow changes to be tracked to refresh views of an editing UI
provide a change history to enable undo/redo operations
can be serialized to and from JSON (can also be some kind of binary format)
allow keeping large data chunks on disk, with parts loaded only on demand
provide non-blocking thread-safe read/write access to exchange data with realtime threads
allow multiple editors in different processes (or even on different machines) to view and modify the document
The thing that comes closest so far is the Blender 2.5 DNA/RNA system, but it's not available as a separate library and is badly documented.
Most of all, I'm trying to make sure that such a library does not already exist, so I know my time is not wasted when I start to design and write such a thing. It's supposed to provide a solid foundation for writing editing-UI components.
ICE is the closest product I can think of. I don't know if you can do serialization to disk with ICE, but I can't think of a reason why you couldn't. The problem is that it costs $$$. I haven't personally negotiated a license with them, but ICE is the biggest player I know of in this domain.
Then you have Pyro for Python, which is Distributed Objects only.
There are Distributed Objects in Objective-C (not available for iPhone/iPad development, which sucks IMHO).
There are some C++ distributed-object libraries, but they're mostly dead and unusable (CORBA comes to mind).
I can tell you that there would be a lot of demand for this type of technology. I've been delving into some serialization and remote object stuff since off-the-shelf solutions can be very expensive.
As for open-source frameworks to help you develop in-house, I recommend boost::asio's strands for async thread-safe read/write and boost::serialization for serialization. I'm not terribly well-read in JSON tech but this looks like an interesting read.
I wish something freely available already existed for this networking/serialization glue that so many projects could benefit from.
SWIG doesn't meet all your requirements, but it does make interfacing C++ <-> Python a lot easier.

What's the best (easiest) way to transfer data in C/C++ [closed]

Currently I'm working on cross-platform C/C++ client/server software. I'm a very experienced developer when it comes to low-level socket development.
The problem with Berkeley sockets/Winsock is that you always have to write some kind of parser to get things right on the receiving side. I mean, you have to interpret the data and reassemble packets in order to receive messages correctly (packets often get sliced).
Keep in mind that the communication is going to be bidirectional. Are pure sockets the best way to transmit data nowadays? Would you recommend SOAP, web services, or another kind of encapsulation for this application?
I can highly recommend Google Protocol Buffers.
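One caveat: protobuf messages are not self-delimiting on a TCP stream, so you still need framing; a length prefix is the usual fix for the "packets get sliced" problem. A minimal sketch using POSIX sockets (the payload can come from any serializer, e.g. a protobuf message's SerializeToString()):

#include <arpa/inet.h>   // htonl/ntohl
#include <sys/socket.h>
#include <cstdint>
#include <string>

// send()/recv() may transfer fewer bytes than requested, so loop.
static bool send_all(int fd, const void* buf, size_t len) {
    const char* p = static_cast<const char*>(buf);
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0) return false;
        p += n; len -= n;
    }
    return true;
}

static bool recv_all(int fd, void* buf, size_t len) {
    char* p = static_cast<char*>(buf);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;
        p += n; len -= n;
    }
    return true;
}

// Frame: 4-byte big-endian length, then the payload bytes.
bool send_message(int fd, const std::string& payload) {
    uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
    return send_all(fd, &len, sizeof(len)) &&
           send_all(fd, payload.data(), payload.size());
}

bool recv_message(int fd, std::string& payload) {
    uint32_t len = 0;
    if (!recv_all(fd, &len, sizeof(len))) return false;
    payload.resize(ntohl(len));
    return payload.empty() || recv_all(fd, &payload[0], payload.size());
}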
These days many people use web services and SOAP. There are C++ packages available for this purpose. They will use sockets for you and handle all the data wrangling. If you are on Unix/Linux (give or take System V.4 network handles), your data will eventually travel over sockets anyway.
On Windows there are other choices if you only want to talk to other Windows boxes.
You could also look into CORBA, but it's no longer common practice.
In any data transfer there is going to be a need to serialize and deserialize the objects.
The first question to ask is whether you want a binary or a text format for the transfer. Binary formats have the distinct advantage that they are easy to parse, provided they are simple POD structures - you can just cast the bytes into a struct.
Text-based transfers are easier to debug, since you can just read the text, but you still have to parse them.
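A small sketch of the POD approach, with the usual caveats spelled out: it assumes both ends agree on endianness, padding, and field widths, and memcpy is used instead of a raw pointer cast to avoid alignment and strict-aliasing trouble (the message type is made up):

#include <cstdint>
#include <cstring>

// A simple POD wire struct; fixed-width fields keep the layout predictable.
struct PositionMsg {
    uint32_t object_id;
    float    x, y, z;
};

// Decode received bytes into the struct. memcpy rather than a pointer
// cast sidesteps alignment and strict-aliasing problems; it still
// assumes both ends share endianness and struct padding.
bool decode(const char* buf, size_t len, PositionMsg& out) {
    if (len < sizeof(PositionMsg)) return false;
    std::memcpy(&out, buf, sizeof(PositionMsg));
    return true;
}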
SOAP-based web services are simple XML packets normally sent over HTTP. Something will have to parse the HTTP and the XML. The ease of use is not intrinsic but depends on the tools at your disposal. If you have good tools, then by all means use it, but the same applies to any form of data exchange.
You can take a look at the Boost Serialization Library. It is a fairly complex library and does require you to write code indicating which members need to be serialized, but it has good support for both text (including XML) and binary serialization. It is also cross-platform.
I have used ZeroMQ (ZMQ) with great success. I highly recommend it, as it is a mid-level library which takes care of the socket-related overhead. It also supports binary packets/messages.
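A minimal sketch of the client side using ZeroMQ's C API; the endpoint is a placeholder and a REP server is assumed to be listening there. ZeroMQ delivers whole messages, so there is no manual reassembly of sliced packets:

#include <zmq.h>
#include <cstdio>

int main() {
    void* ctx = zmq_ctx_new();
    void* sock = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(sock, "tcp://localhost:5555");  // example endpoint

    const char msg[] = "hello";
    zmq_send(sock, msg, sizeof(msg), 0);        // one call sends one whole message

    char reply[256];
    int n = zmq_recv(sock, reply, sizeof(reply) - 1, 0);  // receives one whole message
    if (n >= 0) { reply[n] = '\0'; std::printf("%s\n", reply); }

    zmq_close(sock);
    zmq_ctx_destroy(ctx);
}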