I am loading .obj models using Emscripten's preload flag (--preload-file) so that I can use them in WASM/WebGL from C++/OpenGL ES. Memory consumption goes over the limit when loading a 64 MB .obj: I can load smaller models, but from that size onward the program crashes. What is the correct way of loading large files so that I can access them in C++? I also tried the embed option (--embed-file), but that doesn't work either.
For a large file, place it on the server as a static resource and download it with the Fetch API. You can let the browser cache it by setting the EMSCRIPTEN_FETCH_PERSIST_FILE flag; this stores the data in HTML5 IndexedDB, which is designed for large amounts of data (on the order of 1 GB). See this question for the size limit.
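A minimal sketch of that approach from C++ (the URL model.obj is just a placeholder, and you need to link with -s FETCH=1 for the Fetch API to be available):

#include <emscripten/fetch.h>
#include <cstdio>
#include <cstring>

void onSuccess(emscripten_fetch_t *fetch) {
    // fetch->data / fetch->numBytes hold the downloaded .obj contents.
    std::printf("Downloaded %llu bytes from %s\n",
                (unsigned long long)fetch->numBytes, fetch->url);
    // ... parse the OBJ data and upload it to WebGL here ...
    emscripten_fetch_close(fetch);  // frees the download buffer
}

void onError(emscripten_fetch_t *fetch) {
    std::printf("Download of %s failed, HTTP status %d\n", fetch->url, fetch->status);
    emscripten_fetch_close(fetch);
}

int main() {
    emscripten_fetch_attr_t attr;
    emscripten_fetch_attr_init(&attr);
    std::strcpy(attr.requestMethod, "GET");
    // Load into memory and keep a persistent copy in IndexedDB for later runs.
    attr.attributes = EMSCRIPTEN_FETCH_LOAD_TO_MEMORY | EMSCRIPTEN_FETCH_PERSIST_FILE;
    attr.onsuccess = onSuccess;
    attr.onerror = onError;
    emscripten_fetch(&attr, "model.obj");  // asynchronous by default
    return 0;
}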
I am a transmission planning engineer trying to automate running PSSE 100 times or more in one go through a Python script. My script already runs PSSE, changes loads, reruns PSSE, and writes a bus-based summary report to a *.csv file. What I really want to do is select the first active-power load in a PSSE case and increase it by 1 MW, then run PSSE and write the results to a CSV file, then change the selected load back to its original value and move on to the next active load, repeating until I have done the same for all load buses.
This will let me calculate transmission loss factors for the entire network in one go.
Thanks
#dsmtlk, if you're experienced in Python, you can readily find the information you need in the PSSE API manual located in your PSSE program folder (mine is in C:\Program Files (x86)\PTI\PSSE33\DOCS). The API routines for getting bus data are in section 8.6. The routine for changing load data, psspy.load_data_4(), is in section 2.21.
If you're new to Python, here are a couple of links I found helpful when I first started:
https://docs.python.org/2/tutorial/
http://www.tutorialspoint.com/python/
I'm using Pentaho Data Integration to create a transformation from xlsx files to MySQL, but I can't import data from large files with the Excel 2007 xlsx (Apache POI Streaming) input. It gives me out-of-memory errors.
Did you try this option?
Advanced settings -> Generation mode -> Less memory consumed for large excel (Event mode)
(You need to check "Read excel2007 file format" first.)
I would recommend increasing the JVM memory allocation before running the transformation. By default, Pentaho Data Integration (aka Kettle) comes with a low memory allocation, which causes issues when running ETLs that involve large files. You need to modify the -Xmx value so that it specifies a larger upper memory limit in spoon.bat.
If you are using Spoon on Windows, edit spoon.bat at the line shown below.
if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xmx512m" "-XX:MaxPermSize=256m"
If you are using Kitchen or Pan, edit pan.bat or kitchen.bat accordingly. If you are on Linux, change the corresponding .sh files.
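For example, to raise the limit to 2 GB (the value here is just an illustration; pick whatever suits your machine and file sizes), the line would become:
if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xmx2048m" "-XX:MaxPermSize=256m"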
I would like to know if it is possible to build a Tcl script debugger using the Tcl library API and/or Tcl internal interfaces (I mean, whether they contain sufficient data to do so). I've noticed that existing Tcl debuggers instrument the Tcl scripts and work with this additional layer. My idea was to use Tcl_CreateObjTrace to trace every evaluated command and use it as a point to retrieve the call stack, locals, etc. The problem is that not all of the information seems to be accessible from the API at the time of evaluation. For example, I would like to know which line is currently being evaluated, but the Interp has such info only for top-level evaluations (iPtr->cmdFramePtr->line is empty for procedures' bodies). Has anyone tried such an approach? Does it make any sense? Maybe I should look into hashed entries in the Interp? Any clues and opinions would be appreciated (ideally for Tcl 8.5).
Your best bet for a non-intrusive debugging system might be to use an execution step trace (called for each command invoked during the execution of the command to which the trace is attached) together with info frame to actually get the information. Here's a simple version, attached to source so that you can watch an entire script:
proc traceinfo args {
    puts [dict get [info frame -2] cmd]
}
trace add execution source enterstep traceinfo
source yourscript.tcl
Be prepared for plenty of output. The dictionary returned by info frame can have all sorts of relevant entries, such as the line number of the command and the source file it came from; the cmd entry is the unsubstituted source for the command called (if you want the substituted version, see the relevant arguments to the trace callback, traceinfo above).
I am writing a program that produces a formatted file for the user, but it doesn't only produce the formatted file; it does more.
I want to distribute a single binary to the end user, and when the user runs the program, it will generate the XML file with the appropriate data.
To achieve this, I want to put the file contents into a char array variable that is compiled into the code. When the user runs the program, I will write out that char array to generate the XML file for the user.
char* buffers = "a xml format file contents, \
this represent many block text \
from a file,...";
I have two questions.
Q1. Do you have any other ideas for how to compile my file contents into the binary, i.e., distribute it as one binary file?
Q2. Is this even a good idea, as described above?
What you describe is by far the norm for C/C++. For large amounts of text data, or for arbitrary binary data (or indeed any data you can store in a file, e.g. a zip file), you can write the data to a file and link it into your program directly.
An example may be found on sites like this one.
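As a rough sketch of that idea, you can turn the file into a C array with a tool such as xxd -i and compile the generated header in (the file names here are just placeholders, and the generated array is shortened):

#include <cstdio>

// Hypothetical build step:  xxd -i template.xml > template_xml.h
// The generated header looks roughly like this (truncated here):
unsigned char template_xml[] = { 0x3c, 0x3f, 0x78, 0x6d, 0x6c };  // "<?xml"...
unsigned int template_xml_len = 5;

int main() {
    // Write the embedded bytes back out as the user's XML file.
    std::FILE *out = std::fopen("output.xml", "wb");
    if (out) {
        std::fwrite(template_xml, 1, template_xml_len, out);
        std::fclose(out);
    }
    return 0;
}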
I'd recommend using another file to contain the data rather than putting the data into the binary, unless you have your own reasons. I don't know of other portable ways to put strings into a binary file, but your solution seems OK.
However, note that when using \ at the end of a line to form strings spanning multiple lines, the indentation must be taken care of, because the continuation lines are concatenated starting from the beginning of the next line:
char* buffers = "a xml format file contents, \
this represent many block text \
from a file,...";
Or you can use another form:
char *buffers =
    "a xml format file contents, "
    "this represent many block text "
    "from a file,...";
My answer probably provides a lot of redundant information for the topic starter, but here is what I'm aware of:
Embedding in source code: the plain C/C++ solution. It is a bad idea because each time you want to change your content, you will need to:
recompile
relink
This is acceptable only if your content changes very rarely or never, or if build time is not an issue (e.g. if your app is small).
Embedding in the binary: a few more flexible ways of embedding content in executables exist, but none of them is cross-platform (you haven't stated your target platform):
Windows: resource files. With most IDEs this is very simple.
Linux: objcopy (see the sketch after this list).
macOS: application bundles. Even simpler than on Windows.
You will not need to recompile your C++ file(s), only re-link.
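On Linux, a rough sketch of the objcopy route might look like this (the file names, symbol names, and exact objcopy flags are assumptions for a file called template.xml and an x86-64 target):

// Hypothetical build step, something like:
//   objcopy -I binary -O elf64-x86-64 -B i386:x86-64 template.xml template_xml.o
// objcopy generates start/end symbols derived from the input file name.
#include <cstddef>
#include <cstdio>

extern "C" const char _binary_template_xml_start[];
extern "C" const char _binary_template_xml_end[];

int main() {
    std::size_t size = _binary_template_xml_end - _binary_template_xml_start;
    std::fwrite(_binary_template_xml_start, 1, size, stdout);  // or write to a file
    return 0;
}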
Application virtualization: there are special utilities that wrap all of your application's resources into a single executable and run it somewhat like a virtual machine.
I'm only aware of such utilities for Windows (ThinApp, BoxedApp), but there are probably similar tools for other OSes too, or even cross-platform ones.
Consider distributing your application in some form of installer: when the installer starts, it creates all the resources and unpacks the executable. This is similar to generating the whole thing from the main executable. It can be a large and complex package or even a simple self-extracting archive.
Of course, the choice depends on what kind of application you are creating, who your target audience is, how you will ship the package to end users, etc. If it is a game targeting children, it is not the same as a Unix console utility for C++ coders =)
It depends. If you are writing a small Unix-style utility with no prospect of internationalization, then it's probably fine. You don't want to bloat the distribution with a file no one would ever touch anyway.
But in general it is bad practice, because eventually someone might want to modify this data, and he or she would have to rebuild the whole thing just to fix a typo or make some other small change.
The decision is really up to you.
If you just want to keep your distribution in one piece, you might also find this thread interesting: Store data in executable
Why don't you distribute your application with an additional configuration file, i.e. package your application executable and the config file together?
If you do want to make it a single file, try embedding your config file into the executable as a resource.
I see this more as an OS issue than a C/C++ issue. You can add the text to the resource part of your binary/program. In Windows programs, HTML, graphics, and even movie files are often compiled into resources that become part of the final binary.
That is handy for possible future translation into another language, and you can modify the resource part of the binary without recompiling the code.
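A hedged sketch of that approach on Windows (IDR_XMLDATA and the file names are hypothetical; the identifier would normally live in resource.h, and the .rc file would contain a line such as IDR_XMLDATA RCDATA "template.xml"):

#include <windows.h>
#include <cstdio>

#define IDR_XMLDATA 101  // normally defined in resource.h

int main() {
    // Locate the raw-data resource that the resource compiler linked into the EXE.
    HRSRC res = FindResource(nullptr, MAKEINTRESOURCE(IDR_XMLDATA), RT_RCDATA);
    if (!res) return 1;
    HGLOBAL handle = LoadResource(nullptr, res);
    if (!handle) return 1;
    const void *data = LockResource(handle);
    DWORD size = SizeofResource(nullptr, res);
    // Write the embedded XML back out for the user.
    std::FILE *out = std::fopen("output.xml", "wb");
    if (out) {
        std::fwrite(data, 1, size, out);
        std::fclose(out);
    }
    return 0;
}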