An invalid XML character (Unicode: 0x1e) in ColdFusion XML output - coldfusion

I am attempting to output a query to a simple XML document. I have used this same code multiple times for other queries and it worked fine. It seems there is a bad character in the description field somewhere and XmlFormat() is not filtering it out. I have tried numerous reReplace() filters to no avail. I also tried Ben Nadel's technique found here: http://www.bennadel.com/blog/1155-Cleaning-High-Ascii-Values-For-Web-Safeness-In-ColdFusion.htm and everything resulted in the same error. I dumped the output and searched for bad characters, and I found nothing. This simple thing has turned into quite the mystery. I am using the code below.
<cfquery name="list" datasource="theDatasource">
SELECT ItemID, ItemCode, BrandName, description
FROM theTable
</cfquery>
<cfxml variable="outputXML">
<itemsBrand>
<cfoutput query="list">
<itemBrand>
<ItemID>#XmlFormat(ItemID)#</ItemID>
<ItemCode>#XmlFormat(ItemCode)#</ItemCode>
<BrandName>#XmlFormat(BrandName)#</BrandName>
<description>#XmlFormat(description)#</description>
</itemBrand>
</cfoutput>
</itemsBrand>
</cfxml>
<cffile action="write" file="#GetDirectoryFromPath(GetCurrentTemplatePath())#itemBrand.xml" output="#toString(outputXML)#"/>
This results in the error "An invalid XML character (Unicode: 0x1e) was found in the element content of the document." Any help would be appreciated.

It seems a bit odd it's barfing on a 0x1E, which is just a greater-than symbol. (edit: no it ain't; 0x1E is the ASCII record separator control character, which XML 1.0 simply does not allow in element content. AC).
It might be an idea to swap out the <cfxml> for <cfsavecontent> temporarily so you can build the string, then parse it to find out what's not right about it. That should give you more of a clue as to how to sort it out. But xmlFormat() is supposed to deal with wayward angle brackets.
One other thing: if all you're doing with this XML is serialising it and writing it to file, you don't actually need <cfxml> at all. Just continue to use <cfsavecontent>. If you want a string, just make a string.

Related

Duplicate line and replace string

I have an XML file that contains more than 10,000 items. Each item contains a line like this.
<g:id><![CDATA[FBM00101816_BLACK-L]]></g:id>
For each item I need to add another line below like this:
<sku><![CDATA[FBM00101816]]></sku>
So I need to duplicate each g:id line, replace the g:id with sku and trim the value to delete all characters after the underscore (including it). The final result would be like this:
<g:id><![CDATA[FBM00101816_BLACK-L]]></g:id>
<sku><![CDATA[FBM00101816]]></sku>
Any ideas how to accomplish this?
Thanks in advance.
In XSLT, it's
<xsl:template match="g:id">
<xsl:copy-of select="."/>
<sku><xsl:value-of select="substring-before(., '_')"/></sku>
</xsl:template>
Or using Saxon's Gizmo (https://www.saxonica.com/documentation11/index.html#!gizmo) it's
follow //g:id with <sku>{substring-before(., '_')}</sku>
Don't try to do this sort of thing in a text editor (or any other tool that doesn't involve a real XML parser) unless it's a one-off. Your code will be too sensitive to trivial variations in the way the source XML is written and will almost inevitably have bugs - which might not matter for a one-off, but do matter if it's going to be used repeatedly over a period of time.
Note also, the CDATA tags in your input (and output) are a waste of space. CDATA tags have no significance unless the element content includes special characters like < and &, which isn't the case in your examples.
Okay, so after commenting, I couldn't help myself. This seemed to do what you asked for.
find: <g:id><!\[CDATA\[([^_]+)(.*)\]\]></g:id>
replace: $0\n<sku><![CDATA[$1]]></sku>
I don't have BBEdit, but this is what it looked like in Textmate:

XML parsers fail on an XML file with &qt; [duplicate]

Currently, I'm working on a feature that involves parsing XML that we receive from another product. I decided to run some tests against some actual customer data, and it looks like the other product is allowing input from users that should be considered invalid. Anyways, I still have to try and figure out a way to parse it. We're using javax.xml.parsers.DocumentBuilder and I'm getting an error on input that looks like the following.
<xml>
...
<description>Example:Description:<THIS-IS-PART-OF-DESCRIPTION></description>
...
</xml>
As you can tell, the description has what appears to be an invalid tag inside of it (<THIS-IS-PART-OF-DESCRIPTION>). Now, this description tag is known to be a leaf tag and shouldn't have any nested tags inside of it. Regardless, this is still an issue and yields an exception on DocumentBuilder.parse(...)
I know this is invalid XML, but it's predictably invalid. Any ideas on a way to parse such input?
That "XML" is worse than invalid – it's not well-formed; see Well Formed vs Valid XML.
An informal assessment of the predictability of the transgressions does not help. That textual data is not XML. No conformant XML tools or libraries can help you process it.
Options, most desirable first:
Have the provider fix the problem on their end. Demand well-formed XML. (Technically the phrase well-formed XML is redundant but may be useful for emphasis.)
Use a tolerant markup parser to clean up the problem ahead of parsing as XML:
Standalone: xmlstarlet has robust recovery and repair capabilities (credit: RomanPerekhrest):
xmlstarlet fo -o -R -H -D bad.xml 2>/dev/null
Standalone and C/C++: HTML Tidy works with XML too. Taggle is a port of TagSoup to C++.
Python: Beautiful Soup is Python-based. See notes in the Differences between parsers section. See also answers to this question for more suggestions for dealing with not-well-formed markup in Python, including especially lxml's recover=True option. See also this answer for how to use codecs.EncodedFile() to clean up illegal characters.
Java: TagSoup and JSoup focus on HTML. FilterInputStream can be used for preprocessing cleanup.
.NET:
XmlReaderSettings.CheckCharacters can be disabled to get past illegal XML character problems.
@jdweng notes that XmlReaderSettings.ConformanceLevel can be set to ConformanceLevel.Fragment so that XmlReader can read XML Well-Formed Parsed Entities lacking a root element.
@jdweng also reports that XmlReader.ReadToFollowing() can sometimes be used to work around XML syntax issues, but note the rule-breaking warning in option 3 below.
Microsoft.Language.Xml.XMLParser is said to be “error-tolerant”.
Go: Set Decoder.Strict to false as shown in this example by @chuckx.
PHP: See DOMDocument::$recover and libxml_use_internal_errors(true). See nice example here.
Ruby: Nokogiri supports “Gentle Well-Formedness”.
R: See htmlTreeParse() for fault-tolerant markup parsing in R.
Perl: See XML::Liberal, a "super liberal XML parser that parses broken XML."
Process the data as text manually using a text editor or programmatically using character/string functions. Doing this programmatically can range from tricky to impossible, as what appears to be predictable often is not -- rule breaking is rarely bound by rules.
For invalid character errors, use regex to remove/replace invalid characters (a Java sketch applying the same cleanup appears after this answer):
PHP: preg_replace('/[^\x{0009}\x{000a}\x{000d}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}]+/u', ' ', $s);
Ruby: string.tr("^\u{0009}\u{000a}\u{000d}\u{0020}-\u{D7FF}\u{E000}-\u{FFFD}", ' ')
JavaScript: inputStr.replace(/[^\x09\x0A\x0D\x20-\xFF\x85\xA0-\uD7FF\uE000-\uFDCF\uFDE0-\uFFFD]/gm, '')
For stray ampersands, use regex to replace matches with &amp; (credit: blhsin, demo):
&(?!(?:#\d+|#x[0-9a-f]+|\w+);)
Note that the above regular expressions won't take comments or CDATA sections into account.
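To make the regex route concrete, here is a minimal Java sketch (Java fits, since the question uses DocumentBuilder) that applies the invalid-character class and the bare-ampersand pattern shown above before parsing. The class name and sample string are illustrative only, not part of the original answer.
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlCleanup {
    // Characters legal in XML 1.0 (BMP only): tab, LF, CR, 0x20-0xD7FF, 0xE000-0xFFFD.
    private static final String ILLEGAL_CHARS =
            "[^\\x09\\x0A\\x0D\\x20-\\uD7FF\\uE000-\\uFFFD]";
    // An ampersand that does not start a character or entity reference.
    private static final String BARE_AMPERSAND = "&(?!(?:#\\d+|#x[0-9a-f]+|\\w+);)";

    static String cleanup(String raw) {
        return raw.replaceAll(ILLEGAL_CHARS, " ")
                  .replaceAll(BARE_AMPERSAND, "&amp;");
    }

    public static void main(String[] args) throws Exception {
        // \u001e is the same record-separator character from the ColdFusion question above.
        String dirty = "<doc>bad \u001e character & a bare ampersand</doc>";
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new ByteArrayInputStream(
                cleanup(dirty).getBytes(StandardCharsets.UTF_8)));
        System.out.println(doc.getDocumentElement().getTextContent());
    }
}
As with the regexes themselves, this sketch makes no attempt to protect comments or CDATA sections.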
A standard XML parser will NEVER accept invalid XML, by design.
Your only option is to pre-process the input to remove the "predictably invalid" content, or wrap it in CDATA, prior to parsing it.
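As a small illustration of the "wrap it in CDATA" route, here is a hedged Java sketch that escapes the known bogus tag before parsing; the tag name comes from the question, everything else is illustrative.
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class CdataWrap {
    public static void main(String[] args) throws Exception {
        String raw = "<description>Example:Description:<THIS-IS-PART-OF-DESCRIPTION></description>";
        // Wrap the predictable bogus tag in CDATA so the parser sees character data, not markup.
        String repaired = raw.replace("<THIS-IS-PART-OF-DESCRIPTION>",
                "<![CDATA[<THIS-IS-PART-OF-DESCRIPTION>]]>");
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new ByteArrayInputStream(
                repaired.getBytes(StandardCharsets.UTF_8)));
        // Prints: Example:Description:<THIS-IS-PART-OF-DESCRIPTION>
        System.out.println(doc.getDocumentElement().getTextContent());
    }
}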
The accepted answer is good advice, and contains very useful links.
I'd like to add that this, and many other cases of not-well-formed and/or DTD-invalid XML, can be repaired using SGML, the ISO-standardized superset of HTML and XML. In your case, what works is to declare the bogus THIS-IS-PART-OF-DESCRIPTION element as an SGML empty element and then use e.g. the osx program (part of the OpenSP/OpenJade SGML package) to convert it to XML. For example, if you supply the following to osx
<!DOCTYPE xml [
<!ELEMENT xml - - ANY>
<!ELEMENT description - - ANY>
<!ELEMENT THIS-IS-PART-OF-DESCRIPTION - - EMPTY>
]>
<xml>
<description>blah blah
<THIS-IS-PART-OF-DESCRIPTION>
</description>
</xml>
it will output well-formed XML for further processing with the XML tools of your choice.
Note, however, that your example snippet has another problem in that element names starting with the letters xml or XML or Xml etc. are reserved in XML, and won't be accepted by conforming XML parsers.
IMO these cases should be solved by using JSoup.
Below is not really an answer for this specific case, but I found this on the web (thanks to inuyasha82 on Coderwall). This bit of code inspired me on another, similar problem while dealing with malformed XML, so I share it here.
Please do not edit what is below, as it is shown as it appears on the original website.
To be valid, the XML format requires a unique root element declared in the document.
So, for example, valid XML is:
<root>
<element>...</element>
<element>...</element>
</root>
But if you have a document like:
<element>...</element>
<element>...</element>
<element>...</element>
<element>...</element>
This will be considered malformed XML, and many XML parsers will just throw an exception complaining that there is no root element.
This example shows how to solve that problem and successfully parse the malformed XML above.
Basically, what we will do is add a root element programmatically.
So first of all, you have to open the resource that contains your "malformed" XML (i.e. a file):
File file = new File(pathtofile);
Then open a FileInputStream:
FileInputStream fis = new FileInputStream(file);
If we try to parse this stream with any XML library at this point, we will get the malformed-document exception.
Now we create a list of InputStream objects with three elements:
A ByteArrayInputStream that contains the string: <root>
Our FileInputStream
A ByteArrayInputStream that contains the string: </root>
So the code is:
List<InputStream> streams =
Arrays.asList(
new ByteArrayInputStream("<root>".getBytes()),
fis,
new ByteArrayInputStream("</root>".getBytes()));
Now using a SequenceInputStream, we create a container for the List created above:
InputStream cntr =
new SequenceInputStream(Collections.enumeration(streams));
Now we can use any XML parser library on cntr, and it will be parsed without any problem (checked with the StAX library).
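For reference, here is the same technique gathered into a single self-contained sketch; the file name is hypothetical, and DOM's DocumentBuilder stands in for whichever parser you actually use:
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class WrapWithRoot {
    public static void main(String[] args) throws Exception {
        // Hypothetical file containing sibling <element> nodes with no root element.
        File file = new File("elements-without-root.xml");

        try (FileInputStream fis = new FileInputStream(file)) {
            // Sandwich the file between a synthetic <root> start tag and end tag.
            List<InputStream> streams = Arrays.asList(
                    new ByteArrayInputStream("<root>".getBytes(StandardCharsets.UTF_8)),
                    fis,
                    new ByteArrayInputStream("</root>".getBytes(StandardCharsets.UTF_8)));

            InputStream cntr = new SequenceInputStream(Collections.enumeration(streams));

            DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = db.parse(cntr);

            System.out.println("Root: " + doc.getDocumentElement().getNodeName()
                    + ", child nodes: " + doc.getDocumentElement().getChildNodes().getLength());
        }
    }
}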

CFScript's writeDump Doesn't Always Work | cfdump -vs- writeDump

On ACF 10, I have a function with the following signature...
<cffunction name="...">
<cfargument ...>
<cfdump var="#arguments#" output="#expandPath('/files/logs/debug.txt')#">
<cfscript>
writeDump(var="#arguments#", output="#expandPath('/files/logs/debug.txt')#");
</cfscript>
</cffunction>
The first tag-based dump works, but the second script-based dump does not. I've had this problem before and I've definitely gotten the script-based dump to work before. It seems random. Sometimes it works, sometimes it doesn't. Once it starts working, it continues... but when it doesn't, I can't figure out why.
On the script based variant, I've also tried...
WriteDump(...)
removing the comma separating the two attributes (another working example is written this way, oddly enough)
removing the trailing semi-colon which apparently isn't always necessary per this... Why writedump function doesn't require semicolon in cfscript?
Anyone else having this problem and or know what the solution could be?
PS. Anywhere there's a "..." above, it's just to note that that bit is unimportant

An XML to CSV transformation, with complications

I swear I have looked at the existing threads! But I still need help.
I need to take some very messy XML and convert it to a very neat CSV file for upload to a website database.
I don't really need a finished solution, but I need help with understanding the process I should follow to solve my problem in XSLT. I won't ask you all to code for me, just tell me the elements and template structure I need. I would also love if the community could explain the logic behind the process, so that I can modify it as needed.
I have xml that has records in all orders and numbers:
<record-list>
<record>
<title>Title One</title>
<author>Author One</author>
<subject>
Subject One A
Subject One B
Subject One C
</subject>
<subject>Subject Two</subject>
<subject>Subject Three</subject>
<subject>Subject Four</subject>
</record>
<record>
<subject>Subject Five</subject>
<title>Title Two</title>
<useless-element>Extra Stuff One</useless-element>
</record>
<record>
<title>Title Three</title>
<subject>Subject Six</subject>
<author/>
</record>
</record-list>
So I have multiple numbers of repeated elements, some missing elements, some empty elements, elements out of order, and some elements with extra line breaks.
I need a CSV file which reads as below, or with a different number of subject repeats (see requirements below)
"Title","Subject","Subject","Subject","Author"
"Title One","Subject One A ; Subject One B ; Subject One C","Subject Two","Subject Three","Author One"
"Title Two", "Subject Five","","",""
"Title Three","Subject Six","","",""
Requirements for the final output
-The number of columns of any repeated elements either needs to match the record with the most repeats of that element, or the program needs to chop off any repeats past a certain number.
-Each new record needs a line break and no other line breaks can exist in the files (only as record delimiters).
-The elements each need to be in the same order for each record.
-Each element text needs quotes around it (to handle intrinsic commas).
-Missing or empty elements need blank, comma surrounded quotes.
-Extra elements can't be sent through to the output
What I have done:
I have figured out how to get rid of the extra line breaks within the elements using the translate function, although I would love a solution that lets me replace the line breaks with more than one character (right now I will have to run find-and-replace to change a placeholder character to a space-semicolon-space in my output). I can get the quotes, commas, and line breaks in the output with text elements and strip-whitespace.
However, I don't know how to straighten out the order of the elements, handle the element repeats, or put through only some elements while still using the element as the cue for the line-break.
Right now, I just need a solution that works, even if all sorts of manual manipulation or multiple style-sheets are required. I can even do a find and replace in a text editor, as long as the output is good. Please help with an XSLT solution, I don't even begin to know any other suitable programing languages (college matlab many years ago is not helping).
I think I need to run two transforms. I looked at the XSLT bible, Mangano's XSLT Cookbook, where he used two transforms for a similar problem. However, his solution is so generalized, I can't understand it. If I can't figure out how it works, I can't modify it for my needs. Sorry, but without a programming background, the explanations on this site and in the text are challenging at best. However, I think I am presenting a problem with some novel features, compared to others asked on this forum.
Any help, be it non-generalized code, or even just a suggested procedure for multiple runs through my processor would be wonderful. I have been struggling with this for over a week and have made very little progress.
Thanks
CAMc
I'd suggest having a look at A CSV to XML converter in XSLT 2.0. There's a lot of useful info on that page, including how to run it.

ColdFusion -- Do I need URLDecode with form POSTs? / URLDecode randomly removes one character

I'm using a WYSIWYG to allow users to format text. This is the error-causing text:
<p><span style="line-height: 115%">This text starts with a 'T'</span></p>
The error is that the 'T' in "This", or whatever the first letter happens to be, is randomly removed when using URLDecode and saving to the DB. Removing URLDecode on the server side seems to fix it without any negative side-effects (the DB contains the same information).
The documentation says that
Query strings in HTTP are always URL-encoded.
Is this really the case? If so, why doesn't removing URLDecode seem to mess everything up?
So two questions:
Why is URLDecode causing the first text character to be removed like this (it seems to only happen when the line-height property is present)?
Do I really need (or would I even want) to use URLDecode before putting POSTed data into the database?
Edit: I made a test page to echo back the decoded text, and URLDecode is definitely removing that character, but I have no idea why.
I believe decoding is done automatically when the form scope is populated. That's why characters after % (this character is used for encoding) are removed -- you are trying to decode the string a second time.
For security reasons, you might be interested in stripping script tags, or even cleaning up the HTML using a whitelist. Try searching CFLib.org for applicable functions.