Load a large .obj file into WASM OpenGL - C++

I am loading .obj models with Emscripten's --preload-file flag so that I can use them in WASM/WebGL from C++/OpenGL ES. Memory consumption goes over the limit when loading a 64 MB .obj: smaller models load fine, but from that size onward the page crashes. What is the correct way of loading large files so that I can access them from C++? I also tried the --embed-file option, but that doesn't work either.

For a large file, serve it from the server as a static resource and download it at runtime with Emscripten's Fetch API. You can let the browser cache it by setting the EMSCRIPTEN_FETCH_PERSIST_FILE flag, which stores the downloaded file in HTML5 IndexedDB, a store designed for large data (on the order of 1 GB). See this question for the size limit.
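A minimal sketch of that approach (the URL /models/large.obj and the parsing step are placeholders for your own; the program must be linked with -sFETCH):

    #include <emscripten/fetch.h>
    #include <cstdio>
    #include <cstring>

    static void onSuccess(emscripten_fetch_t *fetch) {
        printf("Loaded %llu bytes from %s\n", fetch->numBytes, fetch->url);
        // fetch->data holds the raw .obj bytes: parse them here and
        // upload the resulting vertex buffers to OpenGL ES.
        emscripten_fetch_close(fetch);  // frees the download buffer
    }

    static void onError(emscripten_fetch_t *fetch) {
        printf("Fetching %s failed, HTTP status %d\n", fetch->url, fetch->status);
        emscripten_fetch_close(fetch);
    }

    int main() {
        emscripten_fetch_attr_t attr;
        emscripten_fetch_attr_init(&attr);
        strcpy(attr.requestMethod, "GET");
        // LOAD_TO_MEMORY exposes the bytes via fetch->data;
        // PERSIST_FILE caches them in IndexedDB so subsequent runs
        // load locally instead of re-downloading 64 MB.
        attr.attributes = EMSCRIPTEN_FETCH_LOAD_TO_MEMORY |
                          EMSCRIPTEN_FETCH_PERSIST_FILE;
        attr.onsuccess = onSuccess;
        attr.onerror = onError;
        emscripten_fetch(&attr, "/models/large.obj");  // placeholder URL
        return 0;  // fetch completes asynchronously; the runtime stays alive
    }

Link with something like em++ main.cpp -sFETCH -o index.html (older toolchains spell it -s FETCH=1). The key difference from --preload-file is that the model never has to fit into the preloaded virtual filesystem blob up front.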

Related

Insert base64 strings in Dexie.js

I am building an ionic 3 app and I want to set up an upload based on the ImagePicker Cordova plugin.
I use Dexie to persist some data, and I wonder if persisting whole base64 strings would be alright. Or is it too heavy?
I want to persist the images chosen with the image picker so that, when an upload is suspended or stopped, I can restart it later.
Is anybody using another way of persisting base64 images?
Thank you
It depends on the size of the images. Unless images are larger than 10 megabytes, I think you are safe. There is no direct limit on document size in IndexedDB except the quota you are given for the whole database instance, which varies per platform and can be extended on modern platforms using navigator.storage.persist(). Do not index the property containing the large string, though, since that would hurt performance badly and could eventually trigger obscure bugs.
If you target modern platforms (Chromium, Firefox and Safari 10.1+), you don't need to convert the images to base64 at all. Instead you can store the binary data directly in a property of type Uint8Array.

Extract data from large Excel files

I'm using Pentaho Data Integration to create a transformation from xlsx files to MySQL, but I can't import data from large files with the Excel 2007 xlsx (Apache POI Streaming) input. It gives me out-of-memory errors.
Did you try this option?
Advanced settings -> Generation mode -> Less memory consumed for large excel (Event mode)
(You need to check "Read excel2007 file format" first.)
I would recommend increasing the JVM memory allocation before running the transformation. By default, Pentaho Data Integration (aka Kettle) ships with a low memory allocation, which causes issues when running ETLs involving large files. Modify the -Xmx value in spoon.bat so that it specifies a larger upper memory limit; for example, changing -Xmx512m to -Xmx4096m raises the limit to 4 GB.
If you are using Spoon on Windows, edit spoon.bat at the line shown below.
if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xmx512m" "-XX:MaxPermSize=256m"
If you are using Kitchen or Pan, edit pan.bat or kitchen.bat accordingly. On Linux, change the corresponding .sh files.

How to optimize loading of a large XAP file

We have inherited a Silverlight application used in banks. This is a huge Silverlight application with a single XAP file of 6.5 MB.
Recently, one of the core banking applications updated its services to delete the entire browser cache from the user's machine on a daily basis.
This impacts the Silverlight application directly. We cannot afford to download the 6.5 MB file every day. In the long term I know we need to break this monolith into smaller, manageable pieces and load them dynamically.
I wanted to check if there are any short-term alternatives.
Can we have the Silverlight runtime load the XAP into a different directory?
Will making the application an out-of-browser application give us any additional flexibility in terms of where we load the XAP from?
Any other suggestions that could give us a short-term solution would be helpful.
What is inside your XAP file? (Change the extension to .zip and check what is inside.)
Are you including images or sounds inside your XAP file?
Are all the DLLs actually necessary?
Short-term alternatives are:
do some cleanup of your application (remove unused DLLs, images, code, ...)
rezip your XAP file using a more powerful compression tool to save some space
Also, some tools exist to "minify" your XAP file (but I have never tried them).
Here is a link that helped me reduce my XAP size:
http://www.silverlightshow.net/items/My-XAP-file-is-5-Mb-size-is-that-bad.aspx
Edit, to answer your comment:
I would suggest using Isolated Storage.
Quote from http://www.silverlightshow.net/items/My-XAP-file-is-5-Mb-size-is-that-bad.aspx :
Use Isolated Storage: keep in cache XAP files, DLL's, resources and application data. This won't enhance the first load of the application, but in subsequent downloads you can check whether the resource is already available on the client local storage and avoid downloading it from the server. Three things to bear in mind: It's up to you building a timestamp mechanism to invalidate the cached information, by default isolated storage is limited to 1 Mb size (by user request it can grow), and users can disable isolated storage on a given app. More info: Basics about Isolated Storage and Caching and sample.
Related links :
http://msdn.microsoft.com/en-us/magazine/dd434650.aspx
http://timheuer.com/blog/archive/2008/09/24/silverlight-isolated-storage-caching.aspx

boost property tree performance

I am planning to use Boost Property Tree for our application: http://www.boost.org/doc/libs/1_41_0/doc/html/property_tree.html. Now I wonder: every time we call pt.get("debug.level", 0), does it read the whole file again, or is the value served from an internal cache? Is there any performance evaluation of this library? Does it read the whole file into memory and serve the data from there? Can anybody share their experience using this library?
The library works well. You load the file into memory, operate on the property tree (query, update, whatever), and then write it out again when you finish.
We have used it for some JSON files large enough to run out of address space when loading them on a 32-bit machine using a boost::property_tree with std::string. Replacing std::string with a caching string class worked fine.
For most applications where you're really just looking at configuration files it will be fine.
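To make the load-once model concrete, here is a minimal sketch (the file name config.ini and its contents are hypothetical). The file is parsed exactly once; every get() afterwards is an in-memory lookup and never touches the disk:

    #include <iostream>
    #include <boost/property_tree/ptree.hpp>
    #include <boost/property_tree/ini_parser.hpp>

    int main() {
        boost::property_tree::ptree pt;

        // The whole file is parsed here, once; the tree lives in memory.
        boost::property_tree::ini_parser::read_ini("config.ini", pt);

        // get() walks the in-memory tree; the second argument is the
        // default returned if the key is missing.
        int level = pt.get("debug.level", 0);
        std::cout << "debug level: " << level << "\n";

        // Updates are also in-memory; persist them explicitly.
        pt.put("debug.level", level + 1);
        boost::property_tree::ini_parser::write_ini("config.ini", pt);
        return 0;
    }

So the cost of repeated get() calls is tree traversal, not file I/O; the only I/O is the explicit read_ini/write_ini.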

Exporting *.png sequence from *.fla with C++

I need an animation in my program. My designer draws the animation in Flash and provides me with a *.fla file. All I need is to grab 30-40 PNGs from this file and store them in my internal storage.
Is it possible to grab resources from a *.fla with C++? Perhaps some Adobe OLE objects can help?
Please advise.
Thanks in advance.
If I asked an artist to make me an icon, I wouldn't expect to need to write code to convert a .3DS model into a usable icon format.
You can save yourself a lot of time and hassle by having your designer use File->Export and give you PNGs of the layers and frames instead of a .FLA file if that's the format you require for your implementation.
If that's not possible for some reason, you can probably find a Flash decompiler with a command-line interface that you could launch from your program to extract assets as part of your loading sequence. That is generally frowned upon, though, because it is not the intended use of the proprietary .swf/.fla formats, any more than you should design applications to extract source code from a binary executable.
Assuming
you are using CS5, and
the assets used internally in the FLA are already PNGs, as you want them to be,
then simply have the FLA saved as an XFL file, and you will be able to grab them from the LIBRARY folder (but then why not just get the designer to mail you the PNGs?).
So if for some reason you can only get access to the FLA and not the designer, you can do it programmatically: rename the .fla to .zip, extract it, and you have the XFL format; a sketch of that follows below.
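A minimal sketch of that extraction using the miniz zip library (assumptions: a CS5+ zip-based .fla, bitmaps stored under LIBRARY/ with their original .png names, and miniz vendored into the project):

    #include <cstring>
    #include <string>
    #include "miniz.h"  // single-file zip reader, assumed available

    // Extract every PNG stored in the XFL LIBRARY folder of a zip-based
    // .fla. Returns the number of files extracted, or -1 on failure.
    int extractLibraryPngs(const char *flaPath, const std::string &outDir) {
        mz_zip_archive zip;
        std::memset(&zip, 0, sizeof(zip));
        if (!mz_zip_reader_init_file(&zip, flaPath, 0))
            return -1;  // not a zip: probably a pre-CS5 binary .fla

        int extracted = 0;
        mz_uint count = mz_zip_reader_get_num_files(&zip);
        for (mz_uint i = 0; i < count; ++i) {
            mz_zip_archive_file_stat st;
            if (!mz_zip_reader_file_stat(&zip, i, &st))
                continue;
            std::string name = st.m_filename;
            bool inLibrary = name.rfind("LIBRARY/", 0) == 0;
            bool isPng = name.size() > 4 &&
                         name.compare(name.size() - 4, 4, ".png") == 0;
            if (inLibrary && isPng) {
                // Flatten the archive path into the output directory.
                std::string out = outDir + "/" + name.substr(name.rfind('/') + 1);
                if (mz_zip_reader_extract_to_file(&zip, i, out.c_str(), 0))
                    ++extracted;
            }
        }
        mz_zip_reader_end(&zip);
        return extracted;
    }

Note that the rename to .zip is only needed for manual inspection; a zip reader like miniz opens the .fla directly by path.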