I need to write an application that fetches element name/value pairs (time-series data) from any XML source, be it a file, a web server, or any other server. The application would consume the XML and pull out the values of interest. It has to be very, very fast (let's say 50,000 events/second or more), the XML documents would be huge, and the frequency of these documents could be high as well (for example, 2,500 files/min - more than 500 MB of XML data/file).
I just want to see how you experienced people think I should approach this. I am a novice who has only just started, but I can implement any solution you suggest, no matter how tough or easy.
Thank you very much.
If you use SAX parsing, your bottleneck is the I/O involved, not the XML string processing. And given your 500 MB number, I'd say you'll have to do SAX parsing instead of DOM parsing. So anything with a SAX-style interface should be just fine.
I'm a fan of Xerces, but I think you're going to have to try the candidates out to see which has the best performance for your application. Like Warren said, you will want to use SAX processing. Realistically, if you truly need that throughput, you should use a specialized XML appliance to do the processing.
I use libxml2 in our projects. It supports both SAX and DOM.
As Warren Young said, you should use SAX. You could give Expat a try.
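To give a feel for the event-driven style, here is a minimal Expat sketch; the file name ("events.xml") and the element/attribute names ("event", "value") are made up for illustration. Because the document is fed through a fixed-size buffer, memory use stays constant no matter how big the file is:

#include <expat.h>
#include <cstdio>
#include <cstring>

// Called by Expat for every start tag; attrs is a null-terminated
// array of attribute name/value pairs.
static void XMLCALL onStart(void* userData, const XML_Char* name,
                            const XML_Char** attrs) {
    if (std::strcmp(name, "event") == 0) {
        ++*static_cast<long*>(userData);
        for (int i = 0; attrs[i]; i += 2)
            if (std::strcmp(attrs[i], "value") == 0)
                std::printf("value=%s\n", attrs[i + 1]);
    }
}

int main() {
    XML_Parser p = XML_ParserCreate(nullptr);
    long count = 0;
    XML_SetUserData(p, &count);
    XML_SetStartElementHandler(p, onStart);

    std::FILE* f = std::fopen("events.xml", "rb");
    if (!f) return 1;
    char buf[64 * 1024];
    for (;;) {
        size_t n = std::fread(buf, 1, sizeof buf, f);
        // the last call (n == 0) tells Expat the document is complete
        if (XML_Parse(p, buf, static_cast<int>(n), n == 0) == XML_STATUS_ERROR)
            break;  // real code would report XML_GetErrorCode(p)
        if (n == 0) break;
    }
    std::fclose(f);
    XML_ParserFree(p);
    std::printf("events seen: %ld\n", count);
}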
Our program currently uses the libxml2 DOM API (xmlReadFile) to load an entire file into memory. Unfortunately, this breaks down on "large" XML files, as the basic memory consumption of libxml2 DOM is about 4-5 times the base file size.
It seems libxml2 offers two APIs for reading XML when I don't want to store the whole tree in memory: SAX2 and xmlReader.
I haven't dug into the APIs yet, but I'm wondering which one is preferable under which circumstances?
Note: All I need to do with the XML file is populate some C++ data structures with the data found in it. These will in turn be a lot smaller than the (very verbose) XML definition. At the moment, with xmlReadFile and the DOM API, the process takes about 100 MB of memory for a 20 MB XML file. The C++ data in memory for such a file is more like 5 MB -- so I could go from 1:4 to 4:1, which would already help a lot.
I follow this approach: if the processing is sparse (you need only an element here and there), xmlReader is better; if you need to process all elements, SAX is better. Although opinion comes into play as to whether you want to pull the parsing along yourself or have the parser push events at your code...
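To make the pull style concrete, here is a minimal xmlReader sketch; the file name ("big.xml") and the element/attribute names ("sample", "value") are hypothetical. The reader visits one node at a time and keeps only that node in memory:

#include <libxml/xmlreader.h>
#include <cstdio>

int main() {
    xmlTextReaderPtr reader = xmlReaderForFile("big.xml", nullptr, 0);
    if (reader == nullptr) return 1;

    while (xmlTextReaderRead(reader) == 1) {  // 1 = a node is available
        if (xmlTextReaderNodeType(reader) == XML_READER_TYPE_ELEMENT) {
            const xmlChar* name = xmlTextReaderConstName(reader);
            if (xmlStrEqual(name, BAD_CAST "sample")) {
                xmlChar* v = xmlTextReaderGetAttribute(reader, BAD_CAST "value");
                if (v) {
                    std::printf("%s\n", reinterpret_cast<char*>(v));
                    xmlFree(v);
                }
            }
        }
    }
    xmlFreeTextReader(reader);
    xmlCleanupParser();
}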
If you need to process large XML documents, then size becomes the primary consideration. As you saw with 20 MB -> 100 MB for DOM parsing, anything much larger than this can be prohibitively expensive, and SAX may be the only way to process it. For embedded or memory-constrained devices, SAX may be required even for small files.
If you want to start parsing before the file is complete, SAX is the way to go. If you are writing a browser, are streaming XML, or require responsiveness, then you will need to use SAX.
SAX is more of a pain. If you can get away with DOM parsing, that will usually lead to less and simpler code; for simple DOM queries you can avoid a state machine, for example (see the sketch below). If you only care about a handful of fields in the document, you could even avoid querying the DOM directly and query via XSLT instead.
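To illustrate that state-machine point: even extracting the text of one element with SAX means remembering where you are between callbacks. A small sketch using Expat as the event source (the "name" element is made up):

#include <expat.h>
#include <cstdio>
#include <cstring>
#include <string>

struct State { bool inName = false; std::string text; };

static void XMLCALL onStart(void* ud, const XML_Char* el, const XML_Char**) {
    if (std::strcmp(el, "name") == 0) static_cast<State*>(ud)->inName = true;
}
static void XMLCALL onText(void* ud, const XML_Char* s, int len) {
    State* st = static_cast<State*>(ud);
    if (st->inName) st->text.append(s, len);  // text may arrive in pieces
}
static void XMLCALL onEnd(void* ud, const XML_Char* el) {
    State* st = static_cast<State*>(ud);
    if (std::strcmp(el, "name") == 0) {
        std::printf("%s\n", st->text.c_str());
        st->inName = false;
        st->text.clear();
    }
}

int main() {
    const char xml[] = "<people><name>Ada</name><name>Grace</name></people>";
    XML_Parser p = XML_ParserCreate(nullptr);
    State st;
    XML_SetUserData(p, &st);
    XML_SetElementHandler(p, onStart, onEnd);
    XML_SetCharacterDataHandler(p, onText);
    XML_Parse(p, xml, sizeof xml - 1, 1);
    XML_ParserFree(p);
}

With a DOM parser the same extraction is a one-liner, which is exactly the trade-off described above.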
I have a large XML file (containing about a few million records) and need to get about 100 records out of it (based on an id or something like that).
I tried TinyXml and Xalan-C, but both of them use DOM, and therefore cause an out-of-memory issue.
Is there a C/C++ library that can do that without loading all the data into memory as a DOM?
How about Apache Xerces?
It's pretty damn mature and is optimized for performance (i.e., used via its SAX interface it won't read your complete files into memory!).
You need a SAX parser, like Xerces.
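A hedged sketch of what that looks like with the Xerces-C++ SAX2 API; the file name ("huge.xml") and element name ("record") are made up, and the id-matching logic is left as a comment:

#include <xercesc/sax2/SAX2XMLReader.hpp>
#include <xercesc/sax2/XMLReaderFactory.hpp>
#include <xercesc/sax2/DefaultHandler.hpp>
#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/util/XMLString.hpp>
#include <cstring>

using namespace xercesc;

class RecordFilter : public DefaultHandler {
public:
    void startElement(const XMLCh* uri, const XMLCh* localname,
                      const XMLCh* qname, const Attributes& attrs) override {
        char* name = XMLString::transcode(localname);
        if (std::strcmp(name, "record") == 0) {
            // inspect attrs here and keep only the ~100 ids you want
        }
        XMLString::release(&name);
    }
};

int main() {
    XMLPlatformUtils::Initialize();
    {
        SAX2XMLReader* parser = XMLReaderFactory::createXMLReader();
        RecordFilter handler;
        parser->setContentHandler(&handler);
        parser->setErrorHandler(&handler);
        parser->parse("huge.xml");  // streams the file; memory stays bounded
        delete parser;
    }
    XMLPlatformUtils::Terminate();
}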
The Saxon-EE XSLT processor can handle a subset of XSLT in streaming mode (that is, without building a tree in memory). For details see
http://www.saxonica.com/documentation/sourcedocs/streaming.xml
It's not C/C++, but you don't say whether that's a hard constraint.
I am trying to find an XML parser with XPath support that uses a small, or rather constant, amount of memory. I am trying to parse large XML files, almost 1 GB. I have been reading about XQilla, and it seems that it uses a very large amount of memory because it is DOM-based; correct me if I'm wrong.
Anyway, any ideas for such an XML parser for C++ and Linux?
If you can process the XML in essentially a single pass, a SAX parser would be a good idea. How about Apache Xerces C++?
Saxon-EE supports streaming of large XML documents using XSLT or XQuery (streaming is better supported in XSLT than in XQuery). Details at
Streaming of Large Documents
You might look at pugixml:
pugixml enables very fast, convenient and memory-efficient XML document processing. However, since pugixml has a DOM parser, it can't process XML documents that do not fit in memory; also, the parser is a non-validating one, so if you need DTD/Schema validation, the library is not for you.
However, it is explicitly not a streaming parser. I know streaming and XPath do not generally mix well (due to potential random-access requirements), although in .NET the ever-famous XPathReader seemed to bridge the gap for a popular subset of XPath :)
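For reference, this is roughly what the pugixml DOM + XPath combination looks like; the file name ("data.xml") and the query are hypothetical. The whole document is loaded up front, so it must fit in memory:

#include <pugixml.hpp>
#include <iostream>

int main() {
    pugi::xml_document doc;
    if (!doc.load_file("data.xml")) return 1;

    // select_nodes evaluates an XPath 1.0 expression over the loaded tree
    pugi::xpath_node_set hits = doc.select_nodes("//record[@id='42']");
    for (pugi::xpath_node hit : hits)
        std::cout << hit.node().child_value() << '\n';
}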
At my workplace, we have many, many different file formats for all kinds of purposes. Most, if not all, of these file formats are just written in plain text, with no consistency. I'm only a student working part-time, and I have no experience with using XML in production, but it seems to me that using XML would improve productivity, as we often need to parse, check, and compare these outputs.
So my questions are: given that I can only control one small application and its output (only the output - the inputs are formats that are used in other applications as well), is it worth trying to change the output to be XML-based? If so, what are the best-known ways to do that in C++ (i.e., XML parsers/writers, etc.)? Also, should I provide a plain-text output as well, to make it easy for the users (who are also programmers) to get used to XML? Should I provide a script to translate between XML and plain text? What are your experiences with this subject?
Thanks.
Don't just use XML because it's XML.
Use XML because:
other applications (that only accept XML) are going to read your output
you have a hierarchical data structure that lends itself perfectly to XML
you want to transform the data to other formats using XSL (e.g. to HTML)
EDIT:
A nice personal experience:
Customer: your application MUST be able to read XML.
Me: Er, OK, I will adapt my application so it can read XML.
Same customer (a few days later): your application MUST be able to read fixed width files, because we just realized our mainframe cannot generate XML.
Amir, to parse XML you can use TinyXML, which is incredibly easy to use and get started with. Check its documentation for a quick brief, and read the "what it does not do" section carefully. I've been using it for reading, and all I can say is that this tiny library does the job, very well.
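A minimal TinyXML reading sketch, assuming a hypothetical config.xml with <entry name="..."> children (TinyXML is DOM-based, so this suits small files):

#include <tinyxml.h>
#include <cstdio>

int main() {
    TiXmlDocument doc("config.xml");
    if (!doc.LoadFile()) return 1;

    TiXmlElement* root = doc.RootElement();
    for (TiXmlElement* e = root->FirstChildElement("entry");
         e != nullptr; e = e->NextSiblingElement("entry")) {
        const char* name = e->Attribute("name");  // null if the attribute is absent
        const char* text = e->GetText();          // null if the element is empty
        if (name)
            std::printf("%s = %s\n", name, text ? text : "");
    }
}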
As for writing - if your XML files aren't complex, you might build them manually with a string object. "Aren't complex" for me means that you're only going to store text at most.
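For instance, something this simple (the output format is made up) is often all the "writer" such files need - though anything containing user-supplied text would need proper escaping:

#include <iostream>
#include <sstream>

int main() {
    std::ostringstream xml;
    xml << "<?xml version=\"1.0\"?>\n"
        << "<results>\n"
        << "  <run id=\"" << 7 << "\" status=\"ok\"/>\n"
        << "</results>\n";
    std::cout << xml.str();  // or write it to a file
}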
For more complex XML reading/writing, you'd better check out Xerces, which is heavier than TinyXML. I haven't used it myself, but I've seen it in production and it does deliver.
You can try using the boost::property_tree class.
http://www.boost.org/doc/libs/1_43_0/doc/html/property_tree.html
http://www.boost.org/doc/libs/1_43_0/doc/html/boost_propertytree/tutorial.html
http://www.boost.org/doc/libs/1_43_0/doc/html/boost_propertytree/parsers.html#boost_propertytree.parsers.xml_parser
It's pretty easy to use, but the page does warn that it doesn't support the XML format completely. If you do use it, though, it gives you the freedom to easily use XML, INI, JSON, or INFO files without changing more than just the read_xml line.
If you want that ability, though, you should avoid XML attributes. To use an attribute, you have to look under the special <xmlattr> key, which won't transfer between file types (although you can manually create your own subnodes).
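A small sketch of both cases, assuming a hypothetical settings.xml:

#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/xml_parser.hpp>
#include <iostream>
#include <string>

int main() {
    boost::property_tree::ptree pt;
    boost::property_tree::read_xml("settings.xml", pt);

    // element text: <settings><server>example.org</server></settings>
    std::string host = pt.get<std::string>("settings.server");

    // attributes live under the special <xmlattr> node:
    // <settings><server port="8080">...</server></settings>
    int port = pt.get<int>("settings.server.<xmlattr>.port", 8080);

    std::cout << host << ':' << port << '\n';
}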
Using TinyXML is probably better, though. I've seen it used in a couple of projects I've worked on, but I don't have any experience with it myself.
Another approach to handling XML in your application is to use a data binding tool, such as CodeSynthesis XSD. Such a tool will generate C++ classes that hide all the gory details of parsing/serializing XML -- all that you see are objects corresponding to your XML vocabulary and functions that you can call to get/set the data, for example:
// parse person.xml into an in-memory object
Person p = person ("person.xml");
cout << p.name ();   // read the data
p.name ("John");     // modify it
p.age (30);
ofstream ofs ("person.xml");
person (ofs, p);     // serialize it back to XML
Here's what previous SO threads have said on the topic. Please add others you know of that are relevant:
What is the best open XML parser for C++?
What is XML good for and when should I be using it?
What are good alternative data formats to XML?
BTW, before you decide on an XML parser, you may want to make sure that it will actually be able to parse all XML documents instead of just the "simple" ones, as discussed in this article:
Are you using a real XML parser?
I have to parse an XML file and build an object representation based on it; once I have all this data, I create entries in various databases for these data objects. I then have to do a second pass over the XML for the values, since in the first pass all I can do is build the assets in the various databases; in the second pass I get the values for all the data and put them in the databases.
I have a feeling that this can be done in a single pass, but I just want to hear your opinions. As I am just a student who has started professional work, experienced people, please help.
Can someone who has ideas, or has done similar work, please shed some light on the topic, so that I can think over the feasibility and get a prototype going based on your suggestions?
Thanks a lot for your precious time, I honestly appreciate it.
You might be interested in learning the different techniques for building XML parsers, like DOM and SAX. As the SAX description says, the only thing that could require a second pass is XML validation, not building the tree.
Besides DOM and SAX parsing, you can use XQuery for querying data from XML files. It is fast, robust, and efficient.
Here is a link.
You can use the Qt XML module for DOM, SAX, and XQuery; by the way, it is open source.
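If Qt is already in the project, its QXmlStreamReader (part of QtCore in recent Qt versions) is arguably the easiest streaming option. A hedged sketch; the file name ("assets.xml") and element/attribute names ("asset", "id") are made up:

#include <QFile>
#include <QXmlStreamReader>
#include <QDebug>

int main() {
    QFile file("assets.xml");
    if (!file.open(QIODevice::ReadOnly)) return 1;

    QXmlStreamReader xml(&file);
    while (!xml.atEnd()) {
        xml.readNext();  // pull the next token; nothing else is kept in memory
        if (xml.isStartElement() && xml.name() == QLatin1String("asset"))
            qDebug() << xml.attributes().value("id").toString();
    }
    if (xml.hasError())
        qWarning() << xml.errorString();
}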
Another option is XML to C++ data binding; here is the link. You can generate C++ code directly from the schema definition. It is an elegant solution.
EDIT:
The latter one works at compile time.
You can also use the Apache-licensed http://xmlbeansxx.touk.pl/. It works under both Windows and Linux.
You could take a look at the somewhat simpler "pull" API called StAX instead of using SAX (which is push/event-based).
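(StAX itself is a Java API, but C++ has pull parsers in the same style: libxml2's xmlReader and Qt's QXmlStreamReader, both sketched earlier.)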