We have inherited a Silverlight application used in banks. It is a huge application with a single XAP file 6.5 MB in size.
Recently, one of the core banking applications updated its services to delete the entire browser cache from users' machines on a daily basis.
This impacts the Silverlight application directly: we cannot afford to download the 6.5 MB file every day. In the long term I know we need to break this monolith into smaller, manageable pieces and load them dynamically.
I wanted to check whether there are any short-term alternatives.
Can we have the Silverlight runtime load the XAP into a different directory?
Will making the application an out-of-browser application give us any additional flexibility in terms of where we load the XAP from?
Any other suggestions for a short-term solution would be helpful.
What is inside your XAP file? (Change the extension to .zip and check what is inside.)
Are you including images or sounds in your XAP file?
Are all the DLLs it contains actually necessary?
Short-term alternatives:
do some cleanup of your application (remove unused DLLs, images, code, ...)
re-zip your XAP file using a stronger compression tool to save some space
Some tools also exist to "minify" your XAP file (but I have never tried them).
Here is a link that helped me reduce my XAP size:
http://www.silverlightshow.net/items/My-XAP-file-is-5-Mb-size-is-that-bad.aspx
Edit to answer your comment:
I would suggest using isolated storage.
Quote from http://www.silverlightshow.net/items/My-XAP-file-is-5-Mb-size-is-that-bad.aspx :
Use Isolated Storage: keep in cache XAP files, DLLs, resources and application data. This won't enhance the first load of the application, but in subsequent downloads you can check whether the resource is already available in the client's local storage and avoid downloading it from the server. Three things to bear in mind: it's up to you to build a timestamp mechanism to invalidate the cached information; by default isolated storage is limited to 1 MB in size (by user request it can grow); and users can disable isolated storage for a given app. More info: Basics about Isolated Storage and Caching and sample.
Related links:
http://msdn.microsoft.com/en-us/magazine/dd434650.aspx
http://timheuer.com/blog/archive/2008/09/24/silverlight-isolated-storage-caching.aspx
I am loading .obj models using Emscripten's preload flag so that I can use them in WASM/WebGL from C++/OpenGL ES. Memory consumption goes over the limit when loading a 64 MB .obj; I am able to load smaller models, but from that size onward I crash. What is the correct way of loading large files so that I can access them in C++? I also tried the embed flag, but that doesn't work either.
For a large file, place it on the server as a static resource and use the Fetch API. You can let the browser cache it by using the EMSCRIPTEN_FETCH_PERSIST_FILE flag, which stores the data in HTML5 IndexedDB, designed for large data (such as 1 GB). See this question for the size limit.
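For illustration, here is a minimal sketch of that approach with Emscripten's Fetch API (compile and link with -sFETCH); the URL and callback names are placeholders, and the OBJ parsing is left as a stub:

#include <cstring>
#include <emscripten/fetch.h>

void onSuccess(emscripten_fetch_t *fetch) {
    // fetch->data / fetch->numBytes hold the downloaded file; parse the OBJ here.
    emscripten_fetch_close(fetch); // free the request
}

void onError(emscripten_fetch_t *fetch) {
    emscripten_fetch_close(fetch);
}

int main() {
    emscripten_fetch_attr_t attr;
    emscripten_fetch_attr_init(&attr);
    std::strcpy(attr.requestMethod, "GET");
    // LOAD_TO_MEMORY exposes the bytes to C++; PERSIST_FILE caches them in
    // IndexedDB so later runs read from browser storage instead of the network.
    attr.attributes = EMSCRIPTEN_FETCH_LOAD_TO_MEMORY | EMSCRIPTEN_FETCH_PERSIST_FILE;
    attr.onsuccess = onSuccess;
    attr.onerror = onError;
    emscripten_fetch(&attr, "assets/model.obj"); // hypothetical URL
    return 0;
}

The fetch is asynchronous, but by default the Emscripten runtime keeps running after main returns (-sEXIT_RUNTIME=0), so the callback can still fire.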
I have a web application developed using the Wt framework in C++. I have some files on local disk and I want to provide file-download functionality in my web application. Any help with this would be appreciated. Thank you!
Two methods exist to provide access to static files:
Specifying the docroot location (= the folder containing the static files) as a command-line argument. See the Wt: Library overview for more information. This approach is mostly used for images, CSS files, ...
Using a Wt::WResource (or, for example, the specialised Wt::WFileResource). The advantage of this approach is that you can limit access to those (static) resources to specific users only; it can also serve resources generated on the fly. A minimal sketch follows below.
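As a hedged illustration of the second approach in Wt 4, here is a sketch; the file path, file name, and class name are placeholders, not from the original question:

#include <memory>
#include <Wt/WApplication.h>
#include <Wt/WAnchor.h>
#include <Wt/WEnvironment.h>
#include <Wt/WFileResource.h>
#include <Wt/WLink.h>

class DownloadApp : public Wt::WApplication {
public:
    explicit DownloadApp(const Wt::WEnvironment &env) : Wt::WApplication(env) {
        // Expose a file on local disk through a resource owned by this session.
        auto file = std::make_shared<Wt::WFileResource>(
            "application/pdf", "/var/data/report.pdf");
        file->suggestFileName("report.pdf"); // name offered to the browser

        // Offer the resource through a plain download link.
        root()->addWidget(
            std::make_unique<Wt::WAnchor>(Wt::WLink(file), "Download report"));
    }
};

int main(int argc, char **argv) {
    return Wt::WRun(argc, argv, [](const Wt::WEnvironment &env) {
        return std::make_unique<DownloadApp>(env);
    });
}

Because the resource is created inside the application, you can check the session's login state before constructing it, which is how the per-user access restriction mentioned above would be enforced.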
I am looking for a very general answer about the feasibility of the idea, not a specific implementation.
If you want to serve small variations of the same media file to different people (say, an ePub or music file), is it possible to serve most of the file to everybody but individualized small portions to each recipient for watermarking, using something like Amazon Web Services?
If yes, would it be possible to create a Dropbox-like file-hosting service with these individualized media files, where all users "see" mostly the same physically stored file but with tiny parts served individually? If, say, 1000 users had the same 10 MB MP3 file with different watermarks, on a server that would amount to 10 GB. But if the same 1000 users were served the same file except for a tiny 10 kB individually watermarked portion, it would amount to only about 20 MB in total.
An EPUB is a single file and must be served/downloaded as such, not in pieces. Why don't you implement simple server-side logic to customize the necessary components, build the EPUB from the common assets and the customized ones, and then let users download that?
The answer is, of course, yes, it can be done, using an EC2 instance, or any other machine that can run a web server, for that matter. The problem is that each type of media file has a different level of complexity when it comes to customizing it: from the simplest case, where the file contains a string of bytes at a known position that can simply be overwritten with your watermark data, to a more complex format that would have to be fully or partially disassembled and repackaged every time a download is requested.
The bottom line is that for any format I can think of, the server would spend some amount of CPU resources, possibly a significant amount, crunching the data and preparing/reassembling the file for download. The ultimate solution would be very format-specific and, as a side note, really has nothing to do with AWS other than the fact that you can host web servers in EC2.
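To make that simplest case concrete, here is a hedged C++ sketch: it copies the shared master file once and then overwrites a reserved region with a per-user tag. The offset, region size, and function name are hypothetical; a real format would dictate where (and whether) such a region exists:

#include <fstream>
#include <string>

// Hypothetical reserved region inside the media file.
constexpr std::streamoff kWatermarkOffset = 1024;
constexpr std::size_t kWatermarkSize = 64;

bool writeWatermarkedCopy(const std::string &masterPath,
                          const std::string &outPath,
                          std::string userTag) {
    std::ifstream in(masterPath, std::ios::binary);
    std::ofstream out(outPath, std::ios::binary);
    if (!in || !out)
        return false;

    out << in.rdbuf();                    // copy the shared bytes once
    out.seekp(kWatermarkOffset);          // then patch the per-user region
    userTag.resize(kWatermarkSize, '\0'); // pad/truncate to the reserved size
    out.write(userTag.data(), static_cast<std::streamsize>(kWatermarkSize));
    return static_cast<bool>(out);
}

In a real service you would stream the shared bytes and the patched region directly into the HTTP response rather than writing a file per user, which is exactly the storage saving the question is after.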
I'm building a server that implements the TFTP protocol. My homework requires the creation of a cache of recently requested files. What I would like to understand is the design; consider this example:
The server requests a file from the cache
The cache doesn't have the file
The cache reads the file from the file system
The cache serves the file to the server
Should the cache read from the file system, or should the server read from the cache and, if the file is missing, read it from the file system and put it in the cache?
From a complexity standpoint, I would definitely recommend that the cache reads the file from the file system. Your implementation will be much cleaner that way.
Digging deeper: you're touching on the single responsibility principle. Ideally, you want components of your system to do one thing and do it well. What you are trying to avoid is a God object that does everything, as this stops your code from being scalable and reusable. Now let's take a look at the two options you presented:
Option 1: Server reads file system & saves to cache.
In this instance, the server is the center of the universe. The cache serves as nothing more than a memory pool for the server, and without the server, it has little purpose. The server must know everything about both the file system and the cache, and enhancing the system requires that the server be changed.
This description alone makes it obvious why it breaks the single responsibility principle: any change to any component of the system demands a change to the server, and that is bad.
Option 2: Cache reads file system.
In this option, the cache serves as a complete abstraction between the server and the file system. The server does not need to know how the cache works; for that matter, there could even be multiple levels of caching! No matter how it works, the server doesn't have to care. The server uses the cache for one purpose: retrieving files.
The division also goes both ways. The cache does not need to know how it is being used; it merely exists to make accessing the file system faster. This allows it to be swapped out for something better at a later date (if you decide to do so), and also means it can be reused in other projects you create.
Your code will be much easier to write and much cleaner if you go with option 2.
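As a minimal sketch of option 2, here is what the cache-owns-the-file-system shape could look like; the names (FileCache, get) and the unbounded map are illustrative only, and a real homework solution would add an eviction policy:

#include <fstream>
#include <optional>
#include <sstream>
#include <string>
#include <unordered_map>

class FileCache {
public:
    // Returns the file contents, reading from disk on a cache miss.
    std::optional<std::string> get(const std::string &path) {
        auto it = entries_.find(path);
        if (it != entries_.end())
            return it->second; // cache hit

        std::ifstream in(path, std::ios::binary);
        if (!in)
            return std::nullopt; // file does not exist

        std::ostringstream buf;
        buf << in.rdbuf(); // cache miss: load from disk and remember it
        return entries_.emplace(path, buf.str()).first->second;
    }

private:
    std::unordered_map<std::string, std::string> entries_;
};

The server side then stays trivial: it asks the cache for a file and sends the bytes, never touching the file system itself.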
Advanced performance question here. Here's my scenario:
I have a database that contains thousands of XSLT documents, one for each page of a website; they translate XML into HTML. An ASP.NET web server (farm) loads the XSLT documents from the database and uses them to render HTML for each web request.
I've implemented the optimization of using XslCompiledTransform and caching it between database refreshes (every 30 minutes). I'm looking to notch performance up further by pre-compiling the XSLT to DLLs with xsltc.exe, which is supposed to eliminate all the dynamic method invocations that XslCompiledTransform creates.
So I have a separate server writing the XSLTs to files and running through them with xsltc.exe. It takes about 20 minutes, but that's OK. I then drop the DLLs onto each web server, which can then load them dynamically on an as-needed basis. Here's the code I'm using to load an assembly into XslCompiledTransform:
// Read the pre-compiled stylesheet assembly and load it from its raw bytes.
byte[] bytes = File.ReadAllBytes(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "XsltDlls\\" + fileName + ".dll"));
Assembly assembly = Assembly.Load(bytes);
// xsltc.exe names the generated type after the stylesheet (matching fileName here).
Type type = assembly.GetType(fileName);
// Bind the transform to the pre-compiled type instead of compiling XSLT at runtime.
XslCompiledTransform compiledTransform = new XslCompiledTransform();
compiledTransform.Load(type);
Should I ReBase.exe the DLLs in the directory and/or NGEN.exe them? ReBase takes about 5 minutes, and NGEN.exe with /queue will take about 10 minutes, during which the CPU is hit hard, likely impacting the web server's ability to serve traffic. Given that I'm loading the assembly by reading its bytes, will the native NGEN image even be referenced, or will the JIT fire up anyway?
Any and all insight into this will be much appreciated!
malcolm
Wow!
Assembly.Load(string) does permit native images to be loaded. However, I suspect that the overload that takes a byte array may not use them. I can't find a reference for that, but some experimentation with the Assembly Binding Log Viewer on a test project might prove it either way.
You also have to make sure that your assemblies are strongly named for the native image to be used.
As for rebasing, this blog suggests that it's not required on Vista-generation OSes or later.