I have two groups of messages flowing around my enterprise (over DDS, essentially). One group is raw system data and the other is complex visual data.
I have an application that can create publishers and subscribers for most of these messages.
How can I write an .idl file such that it can grab multiple system data instances, aggregate them (perhaps with a little math thrown in), and then publish them out as a single visual data message?
It is expected that this application will be recompiled with the addition of the code generated from the .idl.
What I'm looking for are examples of:
how to write an .idl to handle this conversion
how to expose the system message subscribers so they are usable by the .idl's generated logic
similarly, how to expose the visual publishers so they are accessible to the .idl's logic
Please help. Examples would be awesome, and specific links would be welcome too.
The Interface Definition Language (IDL) is a language that describes data types and interfaces. It is not a 'programming' language in the sense that it does not describe executable code, and therefore it does not provide a mechanism to operate on data. Specifically, it does not allow you to "grab system data ... and publish them out"; those tasks are part of the application.
[There are many compilers available to 'compile' IDL defined types and interfaces into standard programming languages. Any available DDS or CORBA implementation will probably include such an IDL compiler.]
So, to accomplish your goal, you'll need to do something like this:
define the desired data type[s] in IDL and compile that to the target programming language
write code to collect the system data in some arbitrary format
write code to assign the system data to the IDL specified data type[s]
write code to publish the data type[s] via a middleware (such as the Data Distribution Service (DDS))
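To make those steps concrete, here is a rough sketch in which everything is invented for illustration: the type names, topic names, and fields are made up, and the C++ fragment assumes the ISO C++ DDS API (dds::*), so your vendor's API may look different. The IDL itself only defines the data:

    // Illustrative only -- shape these after your real messages.
    struct SystemSample {
        string source_id;     // which node produced the sample
        double value;         // one raw measurement
    };

    struct VisualSummary {
        string group_id;
        double mean_value;    // the "little math" result
        long   sample_count;
    };

After running that through your vendor's IDL compiler and recompiling, the application code does the grabbing, aggregating, and publishing:

    // Sketch only; assumes the ISO C++ DDS API and the generated types above.
    #include <dds/dds.hpp>
    #include "SystemMessages.hpp"   // hypothetical name of the generated header

    void aggregate_and_publish(dds::domain::DomainParticipant& dp)
    {
        dds::topic::Topic<SystemSample>  in_topic(dp, "SystemData");
        dds::topic::Topic<VisualSummary> out_topic(dp, "VisualData");

        dds::sub::DataReader<SystemSample>  reader(dds::sub::Subscriber(dp), in_topic);
        dds::pub::DataWriter<VisualSummary> writer(dds::pub::Publisher(dp), out_topic);

        // Grab whatever system samples have arrived...
        double sum   = 0.0;
        long   count = 0;
        for (const auto& s : reader.take()) {
            if (s.info().valid()) {
                sum += s.data().value();
                ++count;
            }
        }

        // ...do a little math, and publish one visual message.
        if (count > 0) {
            VisualSummary out;
            out.group_id("all-nodes");
            out.mean_value(sum / count);
            out.sample_count(count);
            writer.write(out);
        }
    }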
I'm writing a C++ library for an existing networking protocol (one with a document specifying the exact packet layout). As there are a considerable number of packet definitions, rather than writing all the serialization/deserialization methods manually, are there any serialization libraries that are capable of targeting a specific, pre-existing packet layout?
I've been looking at things like Google Protobuf and Apache Thrift, but they seem to be focused on developing a server and client in tandem, where the packet layout does not matter as long as it is consistent across a single release of the software. I need to serialize to an existing specification, so I need to control the field ordering, length, endianness, etc. explicitly. Is there anything that can help make this less of a chore?
There is a library/toolset called PADS which should be ideal for this. See this SO answer here, the project home page here, and some GitHub-ish stuff here. There also seems to be some Haskell-related stuff here. I've just tried and succeeded in downloading PADS/C from the homepage (note that the download server's username and password are given at the bottom of their license agreement).
It's a bit like writing a Google Protocol Buffer schema, except you're specifying bits/bytes in an arbitrary data stream, which is what you have.
I tried to get PADS/ML downloaded from https://github.com/yitzhakm/PADS-ML working some time ago, but ran into a lot of trouble and ultimately failed.
As you're interested in C++, you might try the PADS/C library; C is about as close to C++ as you're going to get here.
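To see why this is a chore worth automating, here is roughly the hand-rolled C++ that a PADS-style description replaces. The packet layout is invented for the example: a 2-byte big-endian length, a 4-byte big-endian sequence number, then length bytes of payload.

    // Manual explicit-layout parsing -- the kind of code you'd otherwise
    // repeat for every packet type in the specification.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct Packet {
        uint16_t length;
        uint32_t seq;
        std::vector<uint8_t> payload;
    };

    static uint16_t read_be16(const uint8_t* p) {
        return uint16_t(p[0]) << 8 | p[1];
    }
    static uint32_t read_be32(const uint8_t* p) {
        return uint32_t(p[0]) << 24 | uint32_t(p[1]) << 16 |
               uint32_t(p[2]) << 8  | p[3];
    }

    bool parse_packet(const uint8_t* buf, size_t len, Packet& out)
    {
        if (len < 6) return false;                 // fixed header is 6 bytes
        out.length = read_be16(buf);
        out.seq    = read_be32(buf + 2);
        if (len < 6u + out.length) return false;   // truncated payload
        out.payload.assign(buf + 6, buf + 6 + out.length);
        return true;
    }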
For many days now I have been gathering a lot of information about Big Data, and especially about Thrift and HDFS/Hadoop.
I have many, many XML files which I want to store in an HDFS file system (and afterwards compute statistics etc. from the data in these files).
So I would like to serialize my XML files with Thrift (to validate the structure and to make the data durable).
Then store them in HDFS.
Is it possible (XML => Thrift => HDFS) without using the RPC service?
For the test, I would like to use a Linux VM (for HDFS) and the PHP language (for Thrift).
Thank you.
You can use the serialization part without the RPC part, yes. Look for "serializer" in the Thrift source tree; you should find some examples, if not for PHP then certainly for some other languages.
You have to do a little work on your own, because there is no such thing as "the" way to convert XML into Thrift structures. The steps are, roughly, as follows:
define the data structures to hold the XML data as Thrift IDL constructs
generate the desired code using the Thrift Compiler
add the serializer code as needed
put together some code that
reads each XML file
builds the Thrift structures from it
serializes the data and puts them into HDFS
Depending on the layout of your XML data and on the number of XML structures used, this may take some effort. It could be an idea to generate at least the IDL file programmatically with some other tool, and maybe even some of the other code needed. Thrift itself cannot support you with that, but it could be an option worth considering, depending on your current situation and the language and tools available.
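As an illustrative sketch (the LogEntry struct and its fields are invented; shape yours after your actual XML), the Thrift IDL from the first step could be as simple as:

    // Illustrative only -- model this on your real XML content.
    struct LogEntry {
      1: string timestamp,
      2: string level,
      3: string message,
    }

And the serialize-to-bytes part, shown here in C++ as one of the languages where the serializer is readily available; the PHP library's TMemoryBuffer/TBinaryProtocol pair follows the same pattern:

    // Sketch only; assumes a Thrift-generated LogEntry (header name will vary).
    #include <thrift/transport/TBufferTransports.h>
    #include <thrift/protocol/TBinaryProtocol.h>
    #include "gen-cpp/logdata_types.h"   // hypothetical generated header

    #include <memory>
    #include <string>

    using apache::thrift::transport::TMemoryBuffer;
    using apache::thrift::protocol::TBinaryProtocol;

    std::string serialize(const LogEntry& entry)
    {
        auto buffer = std::make_shared<TMemoryBuffer>();
        TBinaryProtocol protocol(buffer);
        entry.write(&protocol);              // write() is generated by Thrift
        return buffer->getBufferAsString();  // bytes, ready to be put into HDFS
    }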
A "checkResult" service deployed on a node machine is defined to return the result on the node to a cluster controller that sends the request.The result on node ,which is in the form of file, may vary drastically in length,as is often the case with daily log files.
At first, I thought it might be OK just to use a single string to pack the whole content of the file, so I defined
checkResult(inType *in, OutType *out)
where OutType* is char*. Then I realized that the string could be kilobytes long or even more, so I wonder whether it is proper to use a string here.
I googled a lot and could not find the maximum length permitted in WSDL (it may also conflict with the local max buffer length), and I did not find any information about transferring a file-type parameter either.
Using a struct type might be suggested, but for a file it could become deeply nested and difficult to parse when some of the elements inside may be nil or absent.
What would you do when you need to return a file-type result or a large amount of data from a web service?
P.S. The server and client are both in C.
When transferring a large amount of data in a (SOAP) web service request or response, it is generally better practice to use an attachment mechanism rather than including the data as part of the body. The attachment mechanisms to consider, in order from broadest to narrowest adoption, are probably:
Message Transmission Optimization Mechanism (MTOM) - The newest of these specifications (http://www.w3.org/TR/soap12-mtom/), which is supported in many of the mainstream languages.
SOAP with Attachments - This specification (http://www.w3.org/TR/SOAP-attachments) has been around for many years and is supported in several languages, but notably not by Microsoft.
Direct Internet Message Encapsulation (DIME) - This specification (http://bgp.potaroo.net/ietf/all-ids/draft-nielsen-dime-02.txt) was pushed by Microsoft, and support has been provided in multiple languages/frameworks, including Java and .NET.
Ideally, you would be able to work with a framework that gives you code stub generation directly from a WSDL indicating an MTOM-based web service.
The critical parts of such a WSDL document include:
MTOM policy declaration
Policy application in the binding
Placeholder for the reference to the attachment in the types (schema) section
If you are working contract-first and have a WSDL in hand, the example in section 1.2 of this site (http://www.w3.org/Submission/WS-MTOMPolicy/) shows the simple additions to be made to declare and apply the MTOM policy. Appendix I of the same site shows an example of a schema element which allows a web service client or server to identify a reference to the MTOM attachment.
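For reference, that schema placeholder usually boils down to a base64Binary element carrying the standard xmime:expectedContentTypes attribute; the element name below is invented:

    <xs:element name="fileContent" type="xs:base64Binary"
                xmlns:xmime="http://www.w3.org/2005/05/xmlmime"
                xmime:expectedContentTypes="application/octet-stream"/>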
I have not implemented a web service or client in C, but a brief scan of recently-updated packages revealed gSoap (http://www.cs.fsu.edu/~engelen/soap.html) as a possibility for helping in your endeavors.
Give those documents a look and see if they help to advance your project.
I am interested to learn about what libraries, tools, or frameworks there are for having a C++ program record data for later analysis and extraction. I provide a description of what I envision to give an idea of what I'm looking to do, but your suggestions need not fit it exactly.
I'd like to specify different record types for my program to record. For example, there might be a distinct record type for each type of message I get from a device, a record type for the results of major algorithms, a record type for each kind of operator input. Ideally the code changes for adding a new record type would be fairly minimal: Define a struct for the data to record, correlate it to a record type ID, and add the code to record instances to file.
After the main program runs, I'd like to run a data extraction tool that could give a summary of the data recorded and allow me to extract specific record types over a specified time period of the run. I could provide the exec to the tool and it would use some of the same hooks a debugger tool uses to figure out the names of the fields in the struct for use in the extraction report. It would be nice if the extraction report could be specified as .txt, .xml, .csv (for opening in Excel), or .hdf (for opening in Matlab).
This would be for Linux and GCC compiler. Ideally suggestions would be FOSS, but proprietary solutions are welcome too. Let me know!
What you described isn't anything special; it's just generic serialization and deserialization. If you want a specific library recommendation, you should describe what exactly you want to do with the recorded data.
For serialization support, look into Boost.Serialization and s11n.
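To give a flavor of the first option, here is a minimal Boost.Serialization sketch; the record type and file name are made up:

    // Each record type just needs a serialize() member; instances are
    // written to an archive over a file stream.
    #include <boost/archive/text_oarchive.hpp>
    #include <fstream>

    struct DeviceMessage {
        int    device_id;
        double reading;

        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/)
        {
            ar & device_id;
            ar & reading;
        }
    };

    int main()
    {
        std::ofstream ofs("run.log");
        boost::archive::text_oarchive oa(ofs);

        DeviceMessage msg{42, 3.14};
        oa << msg;   // one record written per operator<< call
    }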
I'm just learning C++, I've just started to mess around with Qt, and I am sitting here wondering how most applications save their data. Is there an industry standard? Do they store it in an XML file, a text file, SQLite? What about sensitive data that, say, accounting software would need to save? I'm just interested in learning what the best practices are for this.
Thanks
This question is way too broad. The only answer is it depends on the nature of the particular application and the data, and whether or not it is written in C++ has very little to do with it.
For example, user-configurable application settings are often stored in text files, but on Windows they are typically stored in the Registry. Accounting applications typically keep their data in a database of some sort.
There are many good ways to store application data (call it serialization).
Personally, I think for larger datasets, using an open format is much, much easier to debug. If you go with XML, for example, you store your data in an open form, so if you have file corruption issues (i.e. a client can't open your file for some reason), the problem is easier to find. If you have sensitive data in there, you can always encrypt it before writing it to file using key encryption. Microsoft, for instance, has gone from using a proprietary format to Open XML in their Office docs. They use the .*x extensions (.docx, .xlsx, etc.). It's really just a compressed folder with XML files.
Using binary serialization is, of course, the industry standard at the moment for most standalone applications. Most likely that is because of the application framework they are using (such as MFC, which is old). If you take a look at most of the serialization techniques in modern application frameworks, XML serialization is very well supported.
First you need to clarify what kind of data you would like to save.
If you just want to save some application settings, use QSettings to save your settings to an INI file or the registry.
If it is much more than just some application settings, go for XML files or SQL.
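A minimal sketch of the QSettings case (organization, application, and key names are illustrative):

    #include <QSettings>
    #include <QDebug>

    void saveAndLoadSettings()
    {
        QSettings settings("MyCompany", "MyApp");   // platform-native store
        // or force an INI file:
        // QSettings settings("myapp.ini", QSettings::IniFormat);

        settings.setValue("window/width", 800);

        int width = settings.value("window/width", 640).toInt();  // 640 = default
        qDebug() << "width =" << width;
    }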
There is no standard practice; however, if you need complex structured data, consider using an embedded database engine such as SQLite, Metakit, or Berkeley DB. XML files would also do the job and be human readable/writable. Preferences can use INI files or the Windows registry, and so on. In short, it really depends on your usage pattern.
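For the SQLite option, a minimal sketch using the C API looks like this (database, table, and key names are made up):

    #include <sqlite3.h>
    #include <cstdio>

    int main()
    {
        sqlite3* db = nullptr;
        if (sqlite3_open("appdata.db", &db) != SQLITE_OK) {
            std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        // Create a key/value table and upsert one setting.
        char* err = nullptr;
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS prefs(key TEXT PRIMARY KEY, value TEXT);"
            "INSERT OR REPLACE INTO prefs VALUES('theme', 'dark');",
            nullptr, nullptr, &err);
        if (err) { std::fprintf(stderr, "%s\n", err); sqlite3_free(err); }

        sqlite3_close(db);
        return 0;
    }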
This is a general question. Like many things, the right answer depends on your application and its needs.
Most desktop applications save end-user data to a file (think Word and Excel). The format is up to you, XML, binary, etc. And if you can serialize/deserialize objects to file it will probably make your life easier.
Internal application data such as configuration files or temporary data might be saved to an XML file or a lightweight, local database such as SQLite.
Often, "enterprise" applications used internally by a business will save their data to a back-end database such as SQL Server or Oracle. This is so all of the enterprise's data is saved to a single central location. And then it is available for reporting, etc.
For accounting software, you would need to consider the business domain and end users. For example, if the software is to be sold to large businesses you would probably use some form of a database to save data. Otherwise a binary file would be fine, perhaps with some form of encryption if you are really paranoid.
When you say "the best way", you have to define what you mean by "good".
The problem is that various requirements conflict with each other, so you can't satisfy all of them simultaneously.
For example, if one requirement is "concurrent multi-user access to the data" then this suggests using a database engine, but that conflicts with "as small as possible" and "minimize dependencies on 3rd-party software".
If a requirement is "portable data format" then this suggests XML, but that conflicts with "compact" and "indexed".
Do they store it in an XML file, a text file, SQLite?
Yes.
Also, binary files and relational databases.
Anything else?