Internet Explorer 9 and XSLT - xslt

I have some JavaScript code that, based on the browser in use, applies an XSL transformation to some XML it receives. This works in all browsers except IE9. Although the logic has a provision for IE (using transformNode instead of new XSLTProcessor()), it seems that IE9 no longer defines transformNode.
I've been searching for some time to see if this is a problem for others, without any luck, which is puzzling and makes me think I'm doing something terribly wrong.
Here's the code that works with IE7/8 (from jstree - although slightly modified for clarity):
var xm = document.createElement('xml');
var xs = document.createElement('xml');
xm.innerHTML = xml;
xs.innerHTML = xsl;
xm.transformNode(xs.XMLDocument);
All I could find regarding IE9 and XSLT is that it "has been changed to be more standards compliant". I believe that refers to how the transformations are performed, not so much to the API.
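A common workaround reported for IE9 is to feature-detect XSLTProcessor first and fall back to the MSXML ActiveX objects, which IE still exposes even after transformNode disappeared from DOM elements. The sketch below is illustrative only (the helper names and the choice of MSXML ProgIDs are assumptions, not code from the original post):

```javascript
// Hedged sketch of a cross-browser XSLT helper. Assumes browser
// globals (XSLTProcessor, DOMParser, ActiveXObject); helper names
// are illustrative, not from the original post.
function pickXsltEngine() {
  if (typeof XSLTProcessor !== "undefined") return "xsltprocessor"; // standards browsers
  if (typeof ActiveXObject !== "undefined") return "msxml";         // IE, including IE9
  return "none";
}

function transformXml(xml, xsl) {
  var engine = pickXsltEngine();
  if (engine === "xsltprocessor") {
    var parser = new DOMParser();
    var proc = new XSLTProcessor();
    proc.importStylesheet(parser.parseFromString(xsl, "text/xml"));
    var frag = proc.transformToFragment(parser.parseFromString(xml, "text/xml"), document);
    return new XMLSerializer().serializeToString(frag);
  }
  if (engine === "msxml") {
    // IE9 dropped transformNode from elements created with
    // document.createElement('xml'), but the MSXML ActiveX
    // DOMDocument still provides it.
    var xmlDoc = new ActiveXObject("Msxml2.DOMDocument");
    var xslDoc = new ActiveXObject("Msxml2.DOMDocument");
    xmlDoc.loadXML(xml);
    xslDoc.loadXML(xsl);
    return xmlDoc.transformNode(xslDoc);
  }
  throw new Error("No XSLT support in this environment");
}
```

In environments with neither API (e.g. Node.js) the helper simply throws, which also makes the branch selection easy to test outside a browser.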

From the author of jsTree (which uses XSLT transformations to render XML source data to the tree):
if (window.ActiveXObject) {
    var xslt = new ActiveXObject("Msxml2.XSLTemplate");
    var xmlDoc = new ActiveXObject("Msxml2.DOMDocument");
    var xslDoc = new ActiveXObject("Msxml2.FreeThreadedDOMDocument");
    xmlDoc.loadXML(xml);
    xslDoc.loadXML(xsl);
    xslt.stylesheet = xslDoc;
    var xslProc = xslt.createProcessor();
    xslProc.input = xmlDoc;
    xslProc.transform();
    callback.call(null, xslProc.output);
    return true;
}
http://code.google.com/p/jstree/issues/detail?id=907&q=IE9&colspec=ID%20Type%20Status%20Priority%20Owner%20Summary

Related

Exception handling in Saxonica URIResolver

I am using the Saxon EE version for XSLT transformation and throw an exception from a custom URIResolver class (given below). It works fine for xsl:include, but the same does not work for document().
Is there any way to stop the transformation by throwing the exception while resolving document()?
Is it possible to apply the URI resolver to document() during compilation itself (while generating the SEF)?
public class CustomURIResolver implements URIResolver {
    @Override
    public Source resolve(String href, String base) {
        String formatterOrlookUpKey = getKey(href);
        if (formatterMap.containsKey(formatterOrlookUpKey)) {
            return new StreamSource(new StringReader(formatterMap.get(formatterOrlookUpKey)));
        } else {
            throw new RuntimeException("did not find the lookup/formatter xsl " + href + " key:" + formatterOrlookUpKey);
        }
    }
}
XSLT compilation:
Processor processor = new Processor(true);
XsltCompiler compiler = processor.newXsltCompiler();
compiler.setJustInTimeCompilation(false);
compiler.setURIResolver(new CigURIResolver(formatterMap));
XsltExecutable stylesheet = compiler.compile(new StreamSource(new StringReader(xsl)));
stylesheet.export(destination);
Transformation:
Processor processor = new Processor(true);
XsltCompiler compiler = processor.newXsltCompiler();
compiler.setJustInTimeCompilation(true);
XsltExecutable stylesheet = compiler.compile(new StreamSource(new StringReader(sef)));
final StringWriter writer = new StringWriter();
Serializer out = processor.newSerializer(writer);
out.setOutputProperty(Serializer.Property.METHOD, "xml");
out.setOutputProperty(Serializer.Property.INDENT, "yes");
Xslt30Transformer trans = stylesheet.load30();
trans.setURIResolver(new CigURIResolver(formatterMap));
trans.setErrorListener(errorHandler);
trans.transform(new StreamSource(new StringReader(xml)), out);
Object obj = out.getOutputDestination();
I'm a little surprised by the observed effect, and would need a repro to investigate it. But I'm also a bit surprised that you're choosing to throw a RuntimeException, rather than a TransformerException which is what the URIResolver interface declares. If you want to explore this further please raise a support request with runnable code.
The rules for document() are a bit complex because of the XSLT 1.0 legacy of "recoverable errors": you might find that doc() behaves more predictably.
As regards compile-time resolution of doc() calls, Saxon does have an option to enable that, but it doesn't play well with SEF files: generally having external documents in a SEF file gets very messy, especially if for example you have several global variables bound to different parts of the same document.

Saxonica Generate SEF file from xslt and apply the same for transformation

I am trying to find the correct approach to save a SEF in memory and use it for transformation.
I found the two approaches below to generate a SEF file:
1. XsltPackage.save(File): works fine, but it needs to save the content to a file, which doesn't suit our requirement, as we need to store it in memory/a DB.
2. XsltExecutable.export(): it generates the file, but if I use the resulting .sef file for transformation, I get empty content as output (result).
I use xsl:include and document() in the XSLT and resolve them using a URI resolver.
I am using the logic below to generate and transform.
Note: I am using Saxon EE (trial version).
1. XsltExecutable.export()
public static String getCompiledXslt(String xsl, Map<String, String> formatterMap) throws SaxonApiException, IOException {
    try (ByteArrayOutputStream destination = new ByteArrayOutputStream()) {
        Processor processor = new Processor(true);
        XsltCompiler compiler = processor.newXsltCompiler();
        compiler.setURIResolver(new CigURIResolver(formatterMap));
        XsltExecutable stylesheet = compiler.compile(new StreamSource(new StringReader(xsl)));
        stylesheet.export(destination);
        return destination.toString();
    }
}
Use the same SEF for transformation:
Processor processor = new Processor(true);
XsltCompiler compiler = processor.newXsltCompiler();
if (formatterMap != null) {
    compiler.setURIResolver(new CigURIResolver(formatterMap));
}
XsltExecutable stylesheet = compiler.compile(new StreamSource(new StringReader(standardXsl)));
Serializer out = processor.newSerializer(new File("out4.xml"));
out.setOutputProperty(Serializer.Property.METHOD, "xml");
out.setOutputProperty(Serializer.Property.INDENT, "yes");
Xslt30Transformer trans = stylesheet.load30();
if (formatterMap != null) {
    trans.setURIResolver(new CigURIResolver(formatterMap));
}
trans.transform(new StreamSource(new StringReader(sourceXMl)), out);
System.out.println("Output written to out4.xml");
When I use the SEF generated by the export method above to transform, I get empty content. The same code works fine with a SEF generated from XsltPackage.save().
UPDATE: I solved the issue by setting the following property to false (by default it is true): compiler.setJustInTimeCompilation(false);
There's very little point (in fact, I would say there is no point) in saving a SEF file in memory. It's much better to keep and reuse the XsltExecutable or XsltPackage object rather than exporting it to a SEF structure and then reimporting it. The only reason for doing an export/import is if the exporter and importer don't share memory.
You can do it, however: I think the only thing you need to change is to close the destination stream after writing to it. Saxon tries to stick to the policy "anyone who creates a stream is responsible for closing it".

Regular Expression in put request [duplicate]

This question already has answers here:
Safely turning a JSON string into an object
(28 answers)
Closed 7 years ago.
I want to parse a JSON string in JavaScript. The response is something like
var response = '{"result":true,"count":1}';
How can I get the values result and count from this?
The standard way to parse JSON in JavaScript is JSON.parse().
The JSON API was introduced with ES5 (2009) and has since been implemented in >99% of browsers by market share, and in Node.js. Its usage is simple:
const json = '{ "fruit": "pineapple", "fingers": 10 }';
const obj = JSON.parse(json);
console.log(obj.fruit, obj.fingers);
The only time you won't be able to use JSON.parse() is if you are programming for an ancient browser, such as IE 7 (2006), IE 6 (2001), Firefox 3 (2008), Safari 3.x (2009), etc. Alternatively, you may be in an esoteric JavaScript environment that doesn't include the standard APIs. In these cases, use json2.js, the reference implementation of JSON written by Douglas Crockford, the inventor of JSON. That library will provide an implementation of JSON.parse().
When processing extremely large JSON files, JSON.parse() may choke because of its synchronous nature and design. To resolve this, the JSON website recommends third-party libraries such as Oboe.js and clarinet, which provide streaming JSON parsing.
jQuery once had a $.parseJSON() function, but it was deprecated with jQuery 3.0. In any case, for a long time, it was nothing more than a wrapper around JSON.parse().
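One detail worth noting alongside the answers here: JSON.parse() throws a SyntaxError on malformed input, so code that parses untrusted strings usually wraps the call in try/catch. A minimal sketch (the helper name safeParse is illustrative, not a standard API):

```javascript
// Wrap JSON.parse so malformed input yields a fallback value
// instead of an uncaught SyntaxError.
function safeParse(text, fallback) {
  try {
    return JSON.parse(text);
  } catch (e) {
    return fallback; // e is a SyntaxError for malformed JSON
  }
}

var ok = safeParse('{"result":true,"count":1}', null);  // parses normally
var bad = safeParse('{result:true}', null);             // unquoted key: invalid JSON, returns null
```

Here ok.count is 1 and bad is null, so callers can check for the fallback instead of handling the exception at every call site.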
WARNING!
This answer stems from an ancient era of JavaScript programming during which there was no built-in way to parse JSON. The advice given here is no longer applicable and probably dangerous. From a modern perspective, parsing JSON by involving jQuery or calling eval() is nonsense. Unless you need to support IE 7 or Firefox 3.0, the correct way to parse JSON is JSON.parse().
First of all, you have to make sure that the JSON code is valid.
After that, I would recommend using a JavaScript library such as jQuery or Prototype if you can because these things are handled well in those libraries.
On the other hand, if you don't want to use a library and you can vouch for the validity of the JSON object, I would simply wrap the string in an anonymous function and use the eval function.
This is not recommended if you are getting the JSON object from a source that isn't absolutely trusted, because the eval function allows renegade code, if you will.
Here is an example of using the eval function:
var strJSON = '{"result":true,"count":1}';
var objJSON = eval("(function(){return " + strJSON + ";})()");
alert(objJSON.result);
alert(objJSON.count);
If you control which browser is being used, or you are not worried about people with an older browser, you can always use the JSON.parse method.
This is really the ideal solution for the future.
If you are getting this from an outside site it might be helpful to use jQuery's getJSON. If it's a list you can iterate through it with $.each
$.getJSON(url, function (json) {
    alert(json.result);
    $.each(json.list, function (i, fb) {
        alert(fb.result);
    });
});
If you want to use JSON 3 for older browsers, you can load it conditionally with:
<script>
window.JSON ||
document.write('<script src="//cdnjs.cloudflare.com/ajax/libs/json3/3.2.4/json3.min.js"><\/scr'+'ipt>');
</script>
Now the standard window.JSON object is available to you no matter what browser a client is running.
The following example will make it clear:
let contactJSON = '{"name":"John Doe","age":"11"}';
let contact = JSON.parse(contactJSON);
console.log(contact.name + ", " + contact.age);
// Output: John Doe, 11
If you pass a string variable (a well-formed JSON string) to JSON.parse from an MVC @ViewBag that has &quot; entities as its quotes, you need to process it before calling JSON.parse(jsonstring):
var jsonstring = '@ViewBag.jsonstring';
jsonstring = jsonstring.replace(/&quot;/g, '"');
You can either use the eval function as in some other answers (don't forget the extra braces; you will know why when you dig deeper), or simply use the jQuery function parseJSON:
var response = '{"result":true , "count":1}';
var parsedJSON = $.parseJSON(response);
Or you can use the code below:
var response = '{"result":true , "count":1}';
var jsonObject = JSON.parse(response);
And you can access the fields using jsonObject.result and jsonObject.count.
Update:
If your output is undefined, then maybe your JSON string is in array format; in that case you need to access the JSON object's properties like this:
var response = '[{"result":true , "count":1}]'; // <~ Array with [] tag
var jsonObject = JSON.parse(response);
console.log(jsonObject[0].result); //Output true
console.log(jsonObject[0].count); //Output 1
The easiest way is using the parse() method:
var response = '{"a":true,"b":1}';
var JsonObject= JSON.parse(response);
This is an example of how to get values:
var myResponseResult = JsonObject.a;
var myResponseCount = JsonObject.b;
JSON.parse() converts any JSON string passed into the function into a JavaScript object.
For a better understanding, press F12 to open your browser's developer tools, and go to the console to enter the following commands:
var response = '{"result":true,"count":1}'; // Sample JSON object (string form)
JSON.parse(response); // Converts passed string to a JSON object.
Now run the command:
console.log(JSON.parse(response));
You'll get output as Object {result: true, count: 1}.
In order to use that object, you can assign it to a variable, say obj:
var obj = JSON.parse(response);
Now by using obj and the dot(.) operator you can access properties of the JSON Object.
Try to run the command
console.log(obj.result);
Without using a library, you can use eval; this is about the only time you should. It's still safer to use a library, though.
For example:
var response = '{"result":true , "count":1}';
var parsedJSON = eval('('+response+')');
var result=parsedJSON.result;
var count=parsedJSON.count;
alert('result:'+result+' count:'+count);
If you like
var response = '{"result":true,"count":1}';
var JsonObject= JSON.parse(response);
you can access the JSON elements from JsonObject using dot (.) notation:
JsonObject.result;
JsonObject.count;
I thought JSON.parse(myObject) would work, but depending on the browser it might be worth using eval('('+myObject+')'). The only issue I can recommend watching out for is multi-level lists in JSON.
An easy way to do it:
var data = '{"result":true,"count":1}';
var json = eval("[" +data+ "]")[0]; // ;)
If you use Dojo Toolkit:
require(["dojo/json"], function(JSON){
    JSON.parse('{"hello":"world"}', true);
});
As mentioned by numerous others, most browsers support JSON.parse and JSON.stringify.
Now, I'd also like to add that if you are using AngularJS (which I highly recommend), then it also provides the functionality that you require:
var myJson = '{"result": true, "count": 1}';
var obj = angular.fromJson(myJson);//equivalent to JSON.parse(myJson)
var backToJson = angular.toJson(obj);//equivalent to JSON.stringify(obj)
I just wanted to add the stuff about AngularJS to provide another option. NOTE that AngularJS doesn't officially support Internet Explorer 8 (and older versions, for that matter), though through experience most of the stuff seems to work pretty well.
If you use jQuery, it is simple:
var response = '{"result":true,"count":1}';
var obj = $.parseJSON(response);
alert(obj.result); //true
alert(obj.count); //1

Slow Apache FOP Transformation after Saxon XSLT Transformation

In a Java application I am using Saxon HE (9.9) for the XML-to-FO transformation. Afterwards I am using Apache FOP (2.3) to create the PDF file. The FOP transformation is slow compared to the execution time of both transformations run one after the other on the CLI (approx. 12 s vs. 2 s for the FOP part alone).
// XML->FO
Processor proc = new Processor(false);
ExtensionFunction highlightingImage = new OverlayImage();
proc.registerExtensionFunction(highlightingImage);
ExtensionFunction mergeImage = new PlanForLandRegisterMainPageImage();
proc.registerExtensionFunction(mergeImage);
ExtensionFunction rolImage = new RestrictionOnLandownershipImage();
proc.registerExtensionFunction(rolImage);
ExtensionFunction fixImage = new FixImage();
proc.registerExtensionFunction(fixImage);
ExtensionFunction decodeUrl = new URLDecoder();
proc.registerExtensionFunction(decodeUrl);
XsltCompiler comp = proc.newXsltCompiler();
XsltExecutable exp = comp.compile(new StreamSource(new File(xsltFileName)));
XdmNode source = proc.newDocumentBuilder().build(new StreamSource(new File(xmlFileName)));
Serializer outFo = proc.newSerializer(foFile);
XsltTransformer trans = exp.load();
trans.setInitialContextNode(source);
trans.setDestination(outFo);
trans.transform();
// FO->PDF
FopFactory fopFactory = FopFactory.newInstance(fopxconfFile);
OutputStream outPdf = new BufferedOutputStream(new FileOutputStream(pdfFile));
Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, outPdf);
TransformerFactory factory = TransformerFactory.newInstance();
Transformer transformer = factory.newTransformer();
Source src = new StreamSource(foFile);
Result res = new SAXResult(fop.getDefaultHandler());
transformer.transform(src, res);
So far I'm pretty sure that it does not depend on some file-handling issue with the produced FO file. The FO transformation is slow even if I transform a completely different FO file from the one produced with Saxon. Even the console output is different when the XML-to-FO transformation is not executed first:
Dec 25, 2018 1:54:47 AM org.apache.fop.apps.FOUserAgent processEvent
INFO: Rendered page #1.
Dec 25, 2018 1:54:47 AM org.apache.fop.apps.FOUserAgent processEvent
INFO: Rendered page #2.
This output is not printed to the console when the XML-to-FO transformation is executed beforehand.
Is there anything in the XML-to-FO transformation step that has to be closed?
What is the reason for this behaviour?
I think if you use Saxon's own API to set up a Processor and your extension functions, but then want to pipe the transformation's XSL-FO result directly to the Apache FOP processor, you can set up a SAXDestination directly:
XsltTransformer trans = exp.load();
trans.setInitialContextNode(source);
FopFactory fopFactory = FopFactory.newInstance(fopxconfFile);
OutputStream outPdf = new BufferedOutputStream(new FileOutputStream(pdfFile));
Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, outPdf);
trans.setDestination(new SAXDestination(fop.getDefaultHandler()));
trans.transform();
outPdf.close();
See http://svn.apache.org/viewvc/xmlgraphics/fop/trunk/fop/examples/embedding/java/embedding/ExampleXML2PDF.java?view=markup together with Saxon's http://saxonica.com/html/documentation/javadoc/net/sf/saxon/s9api/XsltTransformer.html#setDestination-net.sf.saxon.s9api.Destination-.

Aspose.Words for .NET: replace link contents while keeping (setting) its style

There is much info on the internet about how to change link contents with Aspose.Words for .NET. There is also enough info about setting the link style after insertion.
But I have a problem: I need to modify an existing link (from a template) while keeping (or just setting) its visual style (underlined blue text). By default, after the link change (see code below), its style is broken.
foreach (Field field in docTemplate.Range.Fields)
{
    if (field.Type == FieldType.FieldHyperlink)
    {
        var hyperlink = (FieldHyperlink)field;
        if (hyperlink.Result.Equals("<<[model.Id]>>"))
        {
            hyperlink.Address = model.IdUrl;
            hyperlink.Result = model.Id;
        }
    }
}
Does any solution exist for this case? I will appreciate any help.
I have tested your scenario with Aspose.Words for .NET 17.4 and am unable to notice any hyperlink style issue; it remains intact after modification. If you are using some old version of Aspose.Words for .NET, please upgrade to the latest version; hopefully it will resolve the issue.
However, if your issue persists, please share your complete code along with your input, output and expected documents. It will help us understand your issue exactly.
I'm Tilal, developer evangelist at Aspose.
Document doc = new Document("Hyperlink.docx");
// You may change the color of the Hyperlink style, if required.
//doc.Styles[StyleIdentifier.Hyperlink].Font.Color = Color.Blue;
//doc.Styles[StyleIdentifier.FollowedHyperlink].Font.Color = Color.Blue;
foreach (Field field in doc.Range.Fields)
{
    if (field.Type == FieldType.FieldHyperlink)
    {
        FieldHyperlink link = (FieldHyperlink)field;
        if (link.Result.Equals("aspose.com"))
        {
            link.Result = "google";
            link.Target = "www.google.com";
        }
    }
}
doc.Save("Hyperlink_174.docx");
Edit: If you want to modify a specific hyperlink, then use the following code snippet.
Document doc = new Document("E:/Data/Hyperlink.docx");
DocumentBuilder builder = new DocumentBuilder(doc);
foreach (Field field in doc.Range.Fields)
{
    if (field.Type == FieldType.FieldHyperlink)
    {
        FieldHyperlink link = (FieldHyperlink)field;
        if (link.Result.Equals("aspose.com"))
        {
            builder.MoveToField(link, false);
            builder.Font.ClearFormatting();
            // Specify font formatting for the hyperlink.
            builder.Font.Color = Color.Blue;
            builder.Font.Underline = Underline.Single;
            // Insert the link.
            builder.InsertHyperlink("google", "http://www.google.com", false);
            link.Remove();
        }
    }
}
doc.Save("UpdatedHyperlink.docx");