I'm wondering if someone can help me remove the XML declaration from a string containing an XML document. Any help would be appreciated. We're using MSXML 4.0, but I was having difficulties with it and ended up just doing a substring. I'm not very familiar with ATL and the other Microsoft SDKs. It works, but a little part of me died inside, and I would prefer to have this done in a less fragile manner.
Edit: Currently I am doing a substring at the first occurrence of a newline character. I was trying to tokenize or substring at the "?>" of the XML declaration, but I'm having issues getting the characters to match (using wcstok and substring). I tried "\?>" and "?>". The ideal solution would be to load the document into an XMLDocument object and just get the text of the message body.
Look up the XML specification, particularly the grammar for the prolog:
[22] prolog ::= XMLDecl? Misc* (doctypedecl Misc*)?
[23] XMLDecl ::= '<?xml' VersionInfo EncodingDecl? SDDecl? S? '?>'
So your hand-rolled code should be able to parse VersionInfo, EncodingDecl, and SDDecl, along with the XML declaration's start and end tokens. For more information on these individual items, see the specification.
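For example, a hand-rolled routine that anchors on the closing "?>" token (rather than the first newline) might look like this sketch; the function name is mine, and it deliberately leaves anything that isn't a leading declaration untouched:

```cpp
#include <cwctype>
#include <string>

// Strip a leading XML declaration ("<?xml ... ?>") by scanning for its
// closing "?>" token. A declaration, if present, must be the very first
// thing in the document, so we only look at the start of the string.
std::wstring stripXmlDecl(const std::wstring& doc) {
    if (doc.compare(0, 5, L"<?xml") != 0)
        return doc;                          // no declaration to remove
    std::wstring::size_type end = doc.find(L"?>");
    if (end == std::wstring::npos)
        return doc;                          // malformed; leave untouched
    std::wstring::size_type start = end + 2; // skip past "?>"
    while (start < doc.size() && std::iswspace(doc[start]))
        ++start;                             // and any trailing whitespace
    return doc.substr(start);
}
```

This avoids the fragility of splitting on a newline, which the declaration is not required to be followed by.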
However, my suggestion would be to use the right tool for the job: an XML toolkit/parser. (The difference between a parser and a toolkit is mainly that the toolkit supports advanced operations such as DTD validation, namespace handling, XPath, etc.)
MSXML4 is pretty old; MSXML6 is the latest. However, MSXML6 is pretty useless for anything but small XML files, so choose a parser depending on your input file size (if performance is important). There are freely available libraries like Xerces, RapidXML, and pugixml, which have much better performance.
Also, can you specify what difficulties you have faced with MSXML4?
I have virtually no knowledge of how to use Sourcegraph, but I do know what Sourcegraph and regex are, and how they are applied across platforms. I am trying to learn how to better search for strings, variables, etc. in Sourcegraph so I can solve coding issues at work. I am not a coder/programmer/engineer, but I have some general knowledge of programming in C and Python and of using query languages.
I have gone to Sourcegraph's instructional page about regex, but I honestly have a hard time understanding it.
Example:
I am trying to find "Delete %(folder_name)s and %(num_folders)s other folder from your ..." without the actual quotes and ellipses.
That is how I receive the string at work, but this apparently is not how it is represented in the source file in Sourcegraph.
If I copy and paste the above line into Sourcegraph, I get no results.
Here is how the source file actually looks in Sourcegraph:
"Delete \u201c%(folder_name)s\u201d and %(num_folders)s other folder from your ...", again without the actual quotes and ellipses.
I would have no idea that the \u201c and \u201d were there in the original code. Is there a way around this?
What I usually have to work with, and figure out how to find in Sourcegraph, are singular variables or strings:
%(num_folders)s
This is a problem because the fewer items I have to search for, the harder it is to hunt down their source. I don't know who the author/engineer is until I find the code in Sourcegraph and check the blame view (sadly, it's a little disorganized at my work).
Sorry if this doesn't make any sense. This is my very first Stack Overflow post.
I can't find the snippet you mentioned on sourcegraph.com, so I assume you are hosting Sourcegraph yourself.
In general, you could search for a term like Delete \u201c%(folder_name)s without turning on regular expressions to get literal matches. If you want to convert this into a regular expression, you would need to escape it like this:
Delete \\u201c%\(folder_name\)s
If %(folder_name) is meant to be a placeholder for any other expression, try this one instead:
Delete .*s and .*s other folder from your
https://regex101.com/ is my personal recommendation for learning more about how regular expressions work.
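To see the escaping at work, here is a minimal C++ sketch; the source line is made up, and std::regex is standing in for Sourcegraph's regex engine, which should treat this particular pattern the same way. Note how `\\` in the pattern matches the one literal backslash stored in the file, and `\(` / `\)` match literal parentheses:

```cpp
#include <regex>
#include <string>

// Returns true when `line` contains the escaped pattern from above.
// The pattern finds the literal text: Delete \u201c%(folder_name)s
bool matchesEscapedPattern(const std::string& line) {
    static const std::regex pattern(R"(Delete \\u201c%\(folder_name\)s)");
    return std::regex_search(line, pattern);
}
```

An unescaped `%(folder_name)s` would instead be parsed as a regex group followed by a quantifier-less `s`, which is why the literal search with regex turned off is the simpler starting point.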
Currently, I'm working on a feature that involves parsing XML that we receive from another product. I decided to run some tests against actual customer data, and it looks like the other product allows input from users that should be considered invalid. Anyway, I still have to try to figure out a way to parse it. We're using javax.xml.parsers.DocumentBuilder, and I'm getting an error on input that looks like the following.
<xml>
...
<description>Example:Description:<THIS-IS-PART-OF-DESCRIPTION></description>
...
</xml>
As you can tell, the description has what appears to be an invalid tag inside it (<THIS-IS-PART-OF-DESCRIPTION>). Now, this description tag is known to be a leaf tag and shouldn't have any nested tags inside it. Regardless, this is still an issue and yields an exception on DocumentBuilder.parse(...).
I know this is invalid XML, but it's predictably invalid. Any ideas on a way to parse such input?
That "XML" is worse than invalid – it's not well-formed; see Well Formed vs Valid XML.
An informal assessment of the predictability of the transgressions does not help. That textual data is not XML, and no conformant XML tools or libraries can help you process it.
Options, most desirable first:
Have the provider fix the problem on their end. Demand well-formed XML. (Technically the phrase well-formed XML is redundant but may be useful for emphasis.)
Use a tolerant markup parser to clean up the problem ahead of parsing as XML:
Standalone: xmlstarlet has robust recovery and repair capabilities (credit: RomanPerekhrest):
xmlstarlet fo -o -R -H -D bad.xml 2>/dev/null
Standalone and C/C++: HTML Tidy works with XML too. Taggle is a port of TagSoup to C++.
Python: Beautiful Soup is Python-based. See notes in the Differences between parsers section. See also answers to this question for more suggestions for dealing with not-well-formed markup in Python, including especially lxml's recover=True option.
See also this answer for how to use codecs.EncodedFile() to cleanup illegal characters.
Java: TagSoup and JSoup focus on HTML. FilterInputStream can be used for preprocessing cleanup.
.NET:
XmlReaderSettings.CheckCharacters can be disabled to get past illegal XML character problems.
@jdweng notes that XmlReaderSettings.ConformanceLevel can be set to ConformanceLevel.Fragment so that XmlReader can read XML well-formed parsed entities lacking a root element.
@jdweng also reports that XmlReader.ReadToFollowing() can sometimes be used to work around XML syntax issues, but note the rule-breaking warning in #3 below.
Microsoft.Language.Xml.XMLParser is said to be “error-tolerant”.
Go: Set Decoder.Strict to false as shown in this example by @chuckx.
PHP: See DOMDocument::$recover and libxml_use_internal_errors(true). See nice example here.
Ruby: Nokogiri supports “Gentle Well-Formedness”.
R: See htmlTreeParse() for fault-tolerant markup parsing in R.
Perl: See XML::Liberal, a "super liberal XML parser that parses broken XML."
Process the data as text, manually using a text editor or programmatically using character/string functions. Doing this programmatically can range from tricky to impossible, as what appears to be predictable often is not -- rule-breaking is rarely bound by rules.
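For the specific transgression in the question, the text-processing route can be as simple as escaping the one known-bad tag before handing the text to a conforming parser. A C++ sketch (the function name is mine; extend it to a list of tags as new transgressions are discovered):

```cpp
#include <string>

// Escape every occurrence of the one known-bad tag so that the result
// can be fed to a conforming XML parser. Searching resumes past the
// freshly inserted replacement, so the loop cannot re-match it.
std::string escapeKnownBadTag(std::string text) {
    const std::string bad  = "<THIS-IS-PART-OF-DESCRIPTION>";
    const std::string safe = "&lt;THIS-IS-PART-OF-DESCRIPTION&gt;";
    for (std::string::size_type pos = text.find(bad);
         pos != std::string::npos;
         pos = text.find(bad, pos + safe.size())) {
        text.replace(pos, bad.size(), safe);
    }
    return text;
}
```

This only works while the offending content really is predictable, which is exactly the assumption the warning above cautions against.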
For invalid-character errors, use regex to remove/replace invalid characters:
PHP: preg_replace('/[^\x{0009}\x{000a}\x{000d}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}]+/u', ' ', $s);
Ruby: string.tr("^\u{0009}\u{000a}\u{000d}\u{0020}-\u{D7FF}\u{E000}-\u{FFFD}", ' ')
JavaScript: inputStr.replace(/[^\x09\x0A\x0D\x20-\xFF\x85\xA0-\uD7FF\uE000-\uFDCF\uFDE0-\uFFFD]/gm, '')
For ampersands, use regex to replace matches with &amp; (credit: blhsin; demo):
&(?!(?:#\d+|#x[0-9a-f]+|\w+);)
Note that the above regular expressions won't take comments or CDATA sections into account.
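As a worked example of the ampersand fix, here is a C++ sketch using std::regex, whose default ECMAScript grammar supports the negative lookahead used above (the function name is mine):

```cpp
#include <regex>
#include <string>

// Replace bare ampersands with "&amp;" while leaving existing
// references (&amp;, &#163;, &#x41;, ...) untouched. icase covers the
// [0-9a-f] hex digits in either case.
std::string fixBareAmpersands(const std::string& text) {
    static const std::regex bareAmp(R"(&(?!(?:#\d+|#x[0-9a-f]+|\w+);))",
                                    std::regex::icase);
    return std::regex_replace(text, bareAmp, "&amp;");
}
```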
A standard XML parser will NEVER accept invalid XML, by design.
Your only option is to pre-process the input to remove the "predictably invalid" content, or wrap it in CDATA, prior to parsing it.
The accepted answer is good advice, and contains very useful links.
I'd like to add that this, and many other cases of not-well-formed and/or DTD-invalid XML, can be repaired using SGML, the ISO-standardized superset of HTML and XML. In your case, what works is to declare the bogus THIS-IS-PART-OF-DESCRIPTION element as an SGML empty element and then use e.g. the osx program (part of the OpenSP/OpenJade SGML package) to convert it to XML. For example, if you supply the following to osx
<!DOCTYPE xml [
<!ELEMENT xml - - ANY>
<!ELEMENT description - - ANY>
<!ELEMENT THIS-IS-PART-OF-DESCRIPTION - - EMPTY>
]>
<xml>
<description>blah blah
<THIS-IS-PART-OF-DESCRIPTION>
</description>
</xml>
it will output well-formed XML for further processing with the XML tools of your choice.
Note, however, that your example snippet has another problem: element names starting with the letters xml (in any capitalization, e.g. XML or Xml) are reserved in XML and won't be accepted by conforming XML parsers.
IMO these cases should be solved by using JSoup.
What follows is not really an answer for this specific case, but I found it on the web (thanks to inuyasha82 on Coderwall). This bit of code inspired me in another similar problem while dealing with malformed XML, so I share it here.
Please do not edit what is below, as it is presented as it appears on the original website.
To be valid, an XML document requires a unique root element declared in the document.
So, for example, valid XML is:
<root>
<element>...</element>
<element>...</element>
</root>
But if you have a document like:
<element>...</element>
<element>...</element>
<element>...</element>
<element>...</element>
This will be considered malformed XML, so many XML parsers just throw an exception complaining about the missing root element.
This example shows a solution to that problem, successfully parsing the malformed XML above.
Basically, what we will do is programmatically add a root element.
So, first of all, you have to open the resource that contains your "malformed" XML (e.g. a file):
File file = new File(pathtofile);
Then open a FileInputStream:
FileInputStream fis = new FileInputStream(file);
If we tried to parse this stream with any XML library at this point, it would raise the malformed-document exception.
Now we create a list of InputStream objects with three elements:
A ByteArrayInputStream containing the string <root>
Our FileInputStream
A ByteArrayInputStream containing the string </root>
So the code is:
List<InputStream> streams =
Arrays.asList(
new ByteArrayInputStream("<root>".getBytes()),
fis,
new ByteArrayInputStream("</root>".getBytes()));
Now, using a SequenceInputStream, we create a container for the list created above:
InputStream cntr =
new SequenceInputStream(Collections.enumeration(streams));
Now we can use any XML parser library on cntr, and it will be parsed without any problem (checked with the StAX library).
I'm working on a program that imports an XML file and scans for parser state (also element, line number, and content).
Right now, I have all the elements and line numbers stored in vectors of strings and integers.
My question is: how can I scan for an attribute such that, at the end, I am able to indicate the root, the children, and the direct children? I have tried using <map>, but I still have no clue.
Here are some examples:
Output will be
It would be easy if I used additional libraries, but I would like to use only the standard C/C++ library.
If you want an XML parser, use an XML parser. Simple as that. If you don't want to use an existing one, but only the tools that C++ gives you, you'd have to write your own XML parser. Doing that would be insane.
And if people tell you to, e.g., use regular expressions on XML: XML is not a regular language, and you'll need a lexer that understands XML, which is an XML parser. To see what happens when you try to analyze things like HTML with regexes, see this answer.
I have a file which contains an ABNF grammar with tags, as in this simplified example:
$name = Bertha {userID=013} | Bob {userID=429} | ( Ben | Benjamin ) {userID=265};
$greet = Hi | Hello | Greetings;
$S = $greet $name;
Now the task is to obtain the userID by parsing a given sentence for this grammar. For example, parsing the sentence
Greetings Bob
should give us the userID 429. The grammars have to be read in at runtime because they can change between runs.
My approach for now is the following:
parse the grammar into one or multiple trees, putting the tags at the leaves or nodes they belong to
parse the sentence with this/those tree(s) to construct a tree which generates the given sentence (I'm thinking about using Earley parsing for this)
use this tree to obtain the tags (unlike in the example, there will be multiple different tags in such a tree)
My question is: are there any software components that I can use, or at least modify, to solve this task? Especially steps 1 and 2 seem to be quite generic (1. reading an ABNF grammar into an internal C++ representation (e.g. trees); 2. an Earley-style algorithm working with the internal representation from 1.), and writing a complete, fault-proof ABNF parser for step 1 would be a really time-consuming task for me.
I know that VoiceXML grammars work like this, but I was unable to find a parser for them. Basically, all I could find were parser generators that generate C++ code for a single grammar, which is not practical for me because the grammars are not known at compile time.
Any ideas?
Back in 2001 I wrote a C++ library that generates a parser from rules specified at run time. It is available on SourceForge as the BuildParse project, under an LGPL license. I've used it in a couple of other projects, and I updated it to work with C++ as of 2009. If it doesn't matter whether the parser is fast, it might work for you or save you some work rolling your own.
Basically, you'd need a parser to parse your grammar into the data structures that BuildParse uses (you can use BuildParse for that as well) and then run the BuildParse parser generator to generate something that can recognize tokens.
I'm working with SOAP and XML to interact with some web-services.
I noticed that gSOAP-generated routines do not emit newlines, and they work correctly. I subsequently tried to write my own routines using libxml2, which indents the XML with newlines.
While all the web services that I tested were able to deal with the code generated by gSOAP, not all of them were able to deal with my hand-written code, and the triggering error was the presence of the newlines.
So my question is: are newlines forbidden in XML+SOAP? Do I have to write all the code on a single line? Or did I just face some broken services?
Newlines aren't forbidden. The reason gSOAP writes the messages on one line is that there is no need to structure the message (i.e., bring it into a human-readable form); doing so would be an additional cost with no benefit.
There must be some other reason. Compare your XML messages with those generated by gSOAP: is the content really the same, with the newlines the only difference?