When transforming on the client side, the browser fetches all the includes every time the transformation is applied. I find this makes the application run slower.
Is there a way, with XSLT, to cache the includes so each file is not downloaded every time it is used again?
I am using a service worker now, but I was wondering if there is another way.
I would have expected the included files to be cached the way everything else is. However, even if they're cached, there's still a significant cost in compiling the stylesheet every time it's used. The way to avoid that is to redesign your site as a "single page application", so that following a link loads new content and transforms it using the existing, already-loaded stylesheet.
SaxonJS [disclaimer: my company's product] is designed very much on the basis that you will compile the stylesheet once and then use it repeatedly. You can even precompile the stylesheet on the server.
Is there an indicator or flag in gspread that indicates whether or not a change has been made to a sheet or worksheet? This appears to have been present as an attribute called updated before version 2.0, or maybe that served a different purpose?
You're looking for the Detect Changes guide in the Drive API.
For Google Drive apps that need to keep track of changes to files, the Changes collection provides an efficient way to detect changes to all files, including those that have been shared with a user. The collection works by providing the current state of each file, if and only if the file has changed since a given point in time.
There's a GitHub code demo for testing purposes.
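If it helps, here is a minimal sketch of that polling pattern against the Drive v3 REST endpoints. It assumes you already have an OAuth 2.0 access token with a Drive scope (the token value below is a placeholder); the JSON field names are the ones the Changes collection returns:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class DriveChangePoller
{
    static async Task Main()
    {
        // Placeholder: obtain a real OAuth 2.0 token with a Drive scope.
        string accessToken = "<OAUTH_ACCESS_TOKEN>";

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // 1. Get a page token representing the current state ("now").
        string startJson = await http.GetStringAsync(
            "https://www.googleapis.com/drive/v3/changes/startPageToken");
        string pageToken = JsonDocument.Parse(startJson)
            .RootElement.GetProperty("startPageToken").GetString();

        // 2. Later, ask for everything that changed since that token.
        string changesJson = await http.GetStringAsync(
            "https://www.googleapis.com/drive/v3/changes?pageToken=" + pageToken);
        var root = JsonDocument.Parse(changesJson).RootElement;

        // Each entry identifies a changed file; filter on your sheet's file ID.
        foreach (var change in root.GetProperty("changes").EnumerateArray())
            Console.WriteLine(change.GetProperty("fileId").GetString());

        // Persist "newStartPageToken" from the response and reuse it next poll.
    }
}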
We have a collection of VB.NET / IIS web services on some of our servers, and they have web.config files in the websites' root directories that they already read configuration from. We now need to add a new configuration value that will immediately be quite a bit longer than the others, and it only stands to grow. It's essentially a comma-separated value, and I want to keep it in a configuration file of some sort.
At first I started doing this with a text file, but there was a problem: the file's contents can change while web service threads and processes are running, so they would essentially need to re-read the file every time they access its values. I thought about some sort of caching, but unless the web services are completely restarted each time the file is updated, caching would keep updates to the file from taking effect immediately. Yet reading from a text file on every request is slow...
Then came the idea of putting that value in web.config, alongside the other configuration the services already use. When web.config is altered, the changes can be cached in code and also take effect immediately. However, web.config is, well, web.config; it's not just a trivial text file that the code reads from, and IIS treats it in a special manner.
I'm tempted to think any negative consequences of putting a comma-separated value in web.config would be outweighed by the benefits, compared to storing it in its own text file (or a database, which probably can't be used for this anyway), but I figured I'd better ask.
What are the implications of storing a possibly lengthy, comma-separated value in web.config instead of in its own little text file? Is either file a particularly good or bad idea? To me it seems like web.config would be easy to work with, without having to re-read the file over and over, but there's certainly more to it than the average user is aware of. Thanks!
I recommend using the Application Cache for this:
http://msdn.microsoft.com/en-us/library/vstudio/6hbbsfk6(v=vs.100).aspx
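For example, here is a minimal sketch in C# (your services are VB.NET, but the API is the same); the settings.txt file name and the cache key are placeholders. The CacheDependency evicts the entry the moment the file changes on disk, so you get cached reads plus immediate pickup of updates:

using System.IO;
using System.Web;
using System.Web.Caching;

public static class CsvSettings
{
    // Returns the cached values, re-reading the file only after it changes.
    public static string[] GetValues(HttpContext context)
    {
        const string cacheKey = "CsvSettingValues";   // placeholder key
        var values = (string[])context.Cache[cacheKey];
        if (values == null)
        {
            string path = context.Server.MapPath("~/settings.txt"); // placeholder file
            values = File.ReadAllText(path).Split(',');

            // The dependency evicts this cache entry as soon as the file
            // is modified, so the next request re-reads and re-caches it.
            context.Cache.Insert(cacheKey, values, new CacheDependency(path));
        }
        return values;
    }
}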
In our project we have an application that uses an external configuration file (say server.xml). We now need to design a setup-tool GUI in C++/Qt to read and edit such a configuration file, and it should be able to handle all the different versions of that file. The user will choose the file version and then proceed with the editing. Not much changes from one version to the next: maybe there is a new XML tag, or a tag with a different name or in a different position.
What's the best design approach for this? We are planning to go with a standard MVC design pattern, but how do we deal with all the different configuration versions without rewriting the same GUI code again and again?
Here is a sample config file:
<?xml version="1.0" encoding="utf-8"?>
<Server_configuration ver="11">
  <core>
    <enable-tms>true</enable-tms>
    <enable-gui-messages>true</enable-gui-messages>
    <waiting-for-config-timeout>10000</waiting-for-config-timeout>
    <remoting>
      <port>50000</port>
      <join-timeout>5000</join-timeout>
      <ismultithread>true</ismultithread>
      <maxconcurrentrequests>20</maxconcurrentrequests>
    </remoting>
  </core>
  <content>
    <ftp>
      <ip>192.168.0.227</ip>
      <port>21</port>
      <userid>******</userid>
      <passwd>******</passwd>
    </ftp>
    <library>
      <ip>192.168.0.227</ip>
      <port>50023</port>
    </library>
    <local>
      <asset-root>/assetroot</asset-root>
      <kdm-expiration-warning>172800000</kdm-expiration-warning>
    </local>
    <hula-store-daemon>
      <ip>127.0.0.1</ip>
      <port>5567</port>
    </hula-store-daemon>
  </content>
</Server_configuration>
This is by no means a drop-in solution, but here are some things to do and consider. Every situation will differ.
Have an explicit version identifier in your config files. Fingerprinting them is a real (error-prone) pain.
Consider having a tool that will update from version to version (see the sketch after this list). It will be easier than reading old versions and trying to apply them.
It may be easier to do every version step individually, but this can make the conversions less "lossless". A happy hybrid is to do minor updates from version to version but have "checkpoint" major upgrades that jump right to the latest (or the latest "checkpoint"). This is rather like incremental backups with full backup snapshots every once in a while.
Keep the user informed. A sysadmin won't be happy if you change his settings silently. You might want to make the process interactive, or put comments into the file for every added/moved/removed setting. I would also recommend keeping removed settings in some section of the file for user reference. (Put a note explaining why they are there as well.)
Back up the old file. Your script will crash, and it will eat data. Do something like renaming the current file to ${oldname}.old-${ver}~. Saving the settings in a different section of the file won't always be enough, and this will save your users a lot of heartache.
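As a minimal sketch of that update tool (in C# for brevity; the same shape works in Qt with QDomDocument), with hypothetical version numbers and migration steps: each entry migrates one version forward, and the tool walks the chain until the file reaches the target version.

using System;
using System.Collections.Generic;
using System.Xml.Linq;

static class ConfigMigrator
{
    // One delegate per single-version step; each mutates the document in place.
    // The steps below are hypothetical examples.
    static readonly Dictionary<int, Action<XDocument>> Steps =
        new Dictionary<int, Action<XDocument>>
    {
        [11] = doc => doc.Root.Element("core")
                         ?.Add(new XElement("enable-audit", "false")),
        [12] = doc => { /* e.g., rename or move a tag for v12 -> v13 */ },
    };

    public static void UpgradeTo(XDocument doc, int targetVersion)
    {
        int ver = (int)doc.Root.Attribute("ver");
        while (ver < targetVersion)
        {
            Steps[ver](doc);                          // apply the v -> v+1 step
            doc.Root.SetAttributeValue("ver", ++ver); // stamp the new version
        }
    }
}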
Versioning should always be designed to be as robust and as simple as possible. It is crucial to determine whether each version of your application must be compatible with each version of the setup tool (which is rare), or whether you can, for example, meet your needs if any newer setup tool works with any same-or-older application, but not vice versa.
One-way compatibility
One way to design for the latter is to add a version attribute to the XML file but keep it at the same fixed value forever, by only ever changing the structure and semantics of the XML file in backward-compatible ways. For example, adding an element is backward compatible as long as the setup tool can interpret its absence the same way both the old setup tool and the application would. It does not hurt that the new setup tool always writes an (equivalent) value to the new element, because two-way compatibility with the old application is not required.
Once the day comes when you cannot maintain backward compatibility on input, you just change the value of the version attribute and start special casing it in the setup tool.
If you validate the XML against an XSD, note that XSD can do one frequently useful thing for you: assign default attribute values. That way, your setup tool's source code may not even notice that the underlying document was missing a recently added attribute!
Two-way compatibility
Strict versioning is needed here. A schema definition (XSD, RELAX NG, ...) should be defined for each version of the XML file, and the file should be validated against it whenever it is read by the setup tool, written by the setup tool, or read by the application. The schema definition may be identical for several consecutive versions if only the interpretation of the same XML has changed, so when in doubt, always increase the version number.
Do what you can to educate everyone that they cannot just edit the latest schema and be done with it. Unreliable versioning is worse than no versioning.
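A sketch of that validation step (in C#, assuming a hypothetical one-.xsd-per-version naming convention): the tool picks the schema by the ver attribute and validates on every read; the write side is symmetric.

using System;
using System.Xml.Linq;
using System.Xml.Schema;

static class ConfigValidator
{
    // Load the file, then validate it against the schema matching its version.
    public static XDocument LoadValidated(string path)
    {
        var doc = XDocument.Load(path);
        string ver = (string)doc.Root.Attribute("ver");

        var schemas = new XmlSchemaSet();
        // Hypothetical naming convention: server-config-v11.xsd, -v12, ...
        schemas.Add(null, "server-config-v" + ver + ".xsd");

        // Throws on the first violation instead of silently continuing.
        doc.Validate(schemas, (sender, e) => throw e.Exception);
        return doc;
    }
}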
Advanced performance question here. Here's my scenario:
I have a database that contains thousands of XSLT documents, one for each page of a website; these translate XML into HTML. An ASP.NET web server (farm) loads the XSLT documents from the database and uses them to render HTML for each web request.
I've implemented the optimization of using XslCompiledTransform and caching it between database refreshes (every 30 minutes). I'm looking to notch performance up further by pre-compiling the XSLT to DLLs with xsltc.exe. This is supposed to eliminate all the Dynamic Method Invocations that XslCompiledTransform creates.
So, I have a separate server writing the XSLTs to files and running through them with xsltc.exe. It takes about 20 minutes, but that's OK. I then drop the DLLs onto each web server, which can load them dynamically on an as-needed basis. Here's the code I'm using to load an assembly into XslCompiledTransform:
// Read the precompiled stylesheet assembly and load it from a byte
// array (note: this bypasses the normal assembly binding context).
byte[] bytes = File.ReadAllBytes(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "XsltDlls\\" + fileName + ".dll"));
Assembly assembly = Assembly.Load(bytes);
// xsltc.exe names the generated type after the stylesheet.
Type type = assembly.GetType(fileName);
// Loading from a compiled type skips runtime XSLT compilation.
XslCompiledTransform compiledTransform = new XslCompiledTransform();
compiledTransform.Load(type);
Should I ReBase.exe the DLLs in the directory and/or NGEN.exe them? ReBase takes about 5 minutes, and NGEN.exe with /queue will take about 10 minutes, during which the CPU is hit hard, likely impacting the traffic-serving function of the web server. Given that I'm loading the assembly by reading bytes from it, will the native NGEN image even be referenced, or will the JIT fire up anyway?
Any/all insight into this will be MUCHLY appreciated!
malcolm
Wow!
Assembly.Load(string) does permit native images to be loaded. However, I suspect that the overload that takes a byte array may not use them. I can't find a reference for that, but some experimentation with the Assembly Binding Log Viewer on a test project might prove it either way.
You also have to make sure that your assemblies are strongly named for the native image to be used.
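If experimentation confirms that the byte-array overload skips native images, a hedged workaround is to load by identity instead, so the default load context (where NGEN images are picked up) is used. This assumes the DLLs are deployed somewhere the binder probes, such as the application's bin folder; the name, version, and public key token below are placeholders:

using System;
using System.Reflection;
using System.Xml.Xsl;

// Loading by display name goes through the default load context, which is
// what allows a native (NGEN) image of a strongly named assembly to be used.
Assembly assembly = Assembly.Load(
    "Page42Xslt, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef");
Type type = assembly.GetType("Page42Xslt"); // xsltc's generated type name

XslCompiledTransform compiledTransform = new XslCompiledTransform();
compiledTransform.Load(type);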
As for rebasing, this blog post suggests that it's not required on Vista-generation OSes or later.
I'm seeking C++ help in writing HTML code to a new tab in Firefox within an extension.
Our C++ code has been partially wrapped by an XPCOM wrapper and embedded within a Firefox extension thanks to the work of a consultant we have lost contact with, and still partially implemented by calling out to a standalone executable.
To get our output displayed from the standalone executable, the C++ code writes the output to a file and simply calls system("firefox file.html"), which brings the page up with a file:-based URI.
This no longer works in all situations, based on a report from a user running Vista. So it seems to be time to do it right and navigate the DOM, likely integrating the rest of the C++ code into the XPCOM-wrapped part. Or perhaps there's a right way to do it from the standalone executable using the DOM model?
The "current working directory" no longer seems to match the directory in which the extension installed the standalone executable; the path acquires a "VirtualStore" element.
We also generate parallel output in a different MIME type, VRML to be specific.
Any suggestions or examples for how to properly generate output into a Firefox browser pane under C++ programmatic control would be very much appreciated.
You could call Firefox with a fully specified file:/// URL, not a relative URL (file.html).
Or, if you want to ditch the separate executable, you could implement a protocol handler or a simpler about module (where ios.newChannel would be replaced by your own channel implementation that generates the data).
I'd say keeping the file-generation solution is OK and doesn't seem very bad, so I'd go with the first option, perhaps changing the generated file's location to a temporary folder and specifying it fully, both for the executable that generates it and for Firefox.