I'm building, in C++, what can best be described as an app that graphically shows the evolution in time of a system of partial differential equations (PDEs from now on).
A list of important constraints:
The user provides the PDE system by writing it in a simplified programming language in a txt file.
The user then launches the app. As interpreting on the fly is not viable (see below), the PDE txt must be processed here, but there are no tight time limits: the user can tolerate anything from ten seconds up to a minute of loading.
When the app is ready, the user can and will ask it to update the system potentially hundreds of times per second, which is by far the biggest performance bottleneck in the app. Imagine 10-100 intertwined PDEs, with 1k-10k spatial locations interacting with each other.
No third-party software may be installed on the user's machine. I can't ask the user to compile code or anything like that; anything more complex than editing that txt and launching the app is off the table.
(Lesser constraint, which can be dropped) It would be cool to be able to edit the txt and reload it, through a proper UI, while the app is running.
As you can see, I need to parse this txt (easily done) and use its contents in the tightest (performance-wise) loop of the app. So I can't interpret it on the fly, as that would have nightmarish performance. Even if I store a parsed tree of the txt, walking the tree every iteration would still have a lot of overhead.
I know this can be done with good enough performance, because I have seen it working. So how is it done? Can you dynamically create something that has execution time comparable to a statically compiled function?
Any link to materials would be appreciated.
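To illustrate what I mean by the tree having overhead, here is the kind of flattened alternative I have in mind (just a sketch of the general idea; the op set and the fixed-size stack are my own simplifications, not what the app I saw necessarily did):

```cpp
#include <cmath>
#include <vector>

// One possible middle ground between tree-walking and a full JIT: flatten the
// parsed expression tree into a linear postfix program for a tiny stack
// machine. A real JIT (e.g. LLVM's ORC) goes further and emits native code.
enum class Op { PushConst, PushVar, Add, Mul, Sin };

struct Instr {
    Op op;
    double constant = 0.0;  // used by PushConst
    int var = 0;            // index into the variable array, used by PushVar
};

// Evaluate one flattened expression at one spatial location.
double eval(const std::vector<Instr>& prog, const double* vars) {
    double stack[64];       // fixed-size stack; assumes shallow expressions
    int top = -1;
    for (const Instr& in : prog) {
        switch (in.op) {
            case Op::PushConst: stack[++top] = in.constant;     break;
            case Op::PushVar:   stack[++top] = vars[in.var];    break;
            case Op::Add: --top; stack[top] += stack[top + 1];  break;
            case Op::Mul: --top; stack[top] *= stack[top + 1];  break;
            case Op::Sin: stack[top] = std::sin(stack[top]);    break;
        }
    }
    return stack[0];
}
```

A flat array like this touches memory sequentially and dispatches with one switch per op, so it is usually much faster than recursive tree-walking, though still slower than truly compiled code. Would something like this be enough, or is a real JIT the only way?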
I have made an application in C++ and would like to know how to go about implementing a usage statistics system so that I may gather some data regarding how users use the program.
E.g. IP address, number of hours spent in the application, and OS used.
In theory I know I can code this myself if I must, but I was wondering if there is a framework available to make this easier. Unfortunately I was unable to find anything on Google.
Though there is no framework of that kind, you can reduce the work needed to retrieve all this information by using some approaches and techniques, which I have tried to describe below. Please, anybody, feel free to correct me.
Let's summarise what groups of information we need to complete the task:
1. User Environment Information. I suggest you look at Microsoft's WMI infrastructure, in particular the WMI classes: Desktop, File System, Networking, etc. Using these classes in your application can help you retrieve almost every kind of system information (a query sketch follows this list). If that is not enough for you, see #2.
2. Application and System Performance. By this I mean overall system performance, processor count, processes running in the OS, etc. To retrieve these data you can use the NtQuerySystemInformation function. With its help you get access to detailed SystemProcessInformation and SystemProcessorPerformanceInformation (info about each processor) records, and much more.
3. User Related Information. It's hard to find a framework for such things, so I suggest you simply start writing code, keeping your requirements in mind:
counting how many times each button was pressed, each text field was changed, etc.
measuring the delay between consecutive actions in predefined sequences (for example, if you have a settings GUI form and expect the user to fill in all the required text fields quickly, delay measurements can tell you whether the user acted as expected or stalled after TextBox2 for five minutes).
anything else that might interest you.
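For item 1, a minimal WMI query from C++ looks roughly like this (error handling stripped; the class and property names are standard WMI ones, the rest is the usual COM boilerplate):

```cpp
#define _WIN32_DCOM
#include <comdef.h>
#include <Wbemidl.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator* loc = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, reinterpret_cast<LPVOID*>(&loc));

    IWbemServices* svc = nullptr;
    loc->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr, nullptr,
                       0, nullptr, nullptr, &svc);
    CoSetProxyBlanket(svc, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                      nullptr, EOAC_NONE);

    // Ask WMI for the OS name; swap in desktop, networking, etc. classes as needed.
    IEnumWbemClassObject* rows = nullptr;
    svc->ExecQuery(_bstr_t(L"WQL"),
                   _bstr_t(L"SELECT Caption FROM Win32_OperatingSystem"),
                   WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                   nullptr, &rows);

    IWbemClassObject* row = nullptr;
    ULONG returned = 0;
    while (rows->Next(WBEM_INFINITE, 1, &row, &returned) == S_OK) {
        VARIANT v;
        row->Get(L"Caption", 0, &v, nullptr, nullptr);
        std::wcout << v.bstrVal << std::endl;  // e.g. the OS edition string
        VariantClear(&v);
        row->Release();
    }

    rows->Release(); svc->Release(); loc->Release();
    CoUninitialize();
}
```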
So, how could you implement the last item's (User Related Information) requirements? Personally, I'd do something like the following (some of these may seem very hard to implement, or rather pointless):
- creating a kind of base Counter class and deriving some controls from it (buttons, edits, etc.); see the sketch below.
- using Windows hooks for the mouse or keyboard while getting a child handle (to recognize a control, for example).
- using a Callback class, which can do all the "dirty" work (counting, measuring, performing additional actions).
You could store all this information in a text file, an SQLite database, or wherever you prefer.
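For the Counter idea, something as simple as this can be a starting point (my own rough sketch; TrackedButton is a made-up stand-in for a real toolkit control):

```cpp
#include <map>
#include <string>

// Base class that accumulates named event counts.
class Counter {
public:
    void bump(const std::string& event) { ++counts_[event]; }
    const std::map<std::string, int>& counts() const { return counts_; }
private:
    std::map<std::string, int> counts_;
};

// Hypothetical control; in a real app it would also derive from your
// toolkit's button class and forward its click handler here.
class TrackedButton : public Counter {
public:
    void onClick() {
        bump("click");  // record the interaction...
        // ...then run the normal click handling.
    }
};
```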
I would recommend taking a look at DeskMetrics. This StackOverflow post summarizes the issue.
Building your own framework could take you months of development (apart from maintenance). With something like Trackerbird Software Analytics you can integrate a DLL with your app and start tracking within 30 minutes, and you get all the cool real-time visualizations.
Disclaimer: I am affiliated with the company.
I hope the title is chosen well enough to ask this question.
Feel free to edit if not and please accept my apologies.
I am currently laying out an application that interacts with the web.
Explanation of the basic flow of the program:
The user enters a UserID into my program, which is then used to access multiple XML files over the web:
http://example.org/user/userid/?xml=1
This file contains several IDs of products the user owns in a DRM system. This list is then used to access stats and information about the user's interaction with the product:
http://example.org/user/appid/stats/?xml=1
This also contains links to various images which are specific to that application. Those may change at any time and need to be downloaded for display in the app.
This is where the horror starts, at least for me :D.
1.) How do I store that information on the user's PC?
I thought about using a directory for the UserID, then subfolders with the AppID to cache images and the XML files, loading them on demand. I also thought about using a zip file with the same structure.
Or would one rather use a local DB like SQLite for that?
The average number of applications might be around 100-300, with stats and images per app ranging from roughly 5 to 700.
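For illustration, the folder layout I have in mind would be built like this (C++17 std::filesystem; the names are placeholders):

```cpp
#include <filesystem>
#include <string>

// e.g. cache/<userid>/<appid>/stats.xml
std::filesystem::path cachePath(const std::string& userId,
                                const std::string& appId,
                                const std::string& fileName) {
    return std::filesystem::path("cache") / userId / appId / fileName;
}
```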
2.) When should I refresh the content?
The bad thing is that the website this data is downloaded from (or rather the XMLs) does not contain any timestamp of when it was last refreshed/changed. So I would need to hash all the files and compare them at the moment the user accesses the data, which can take an indefinite amount of time because it is web-based. Okay, there are timeouts, but I would need to block access to the content until the data is either downloaded and processed or the timeout occurs. In either case, the application would be inaccessible for a short, or maybe even long, time, and I want to avoid that. I could let the user refresh manually when he needs it, but I hoped there are better methods.
Especially with the numbers of apps and files mentioned above.
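The hash-and-compare step I mean would be something like this (FNV-1a over the file bytes; just a sketch, reading byte by byte for brevity):

```cpp
#include <cstdint>
#include <fstream>
#include <string>

// 64-bit FNV-1a over a file's bytes; compare against the hash stored when the
// file was cached to decide whether it changed.
uint64_t fnv1aFile(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    uint64_t h = 14695981039346656037ULL;  // FNV offset basis
    char c;
    while (in.get(c)) {
        h ^= static_cast<unsigned char>(c);
        h *= 1099511628211ULL;             // FNV prime
    }
    return h;
}
```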
Thanks for reading and all of that and please feel free to ask if I forgot to explain something.
It's probably worth using a DB since it saves you messing around with file formats for structured data. Remember to delete and rebuild it from time to time (or make sure old stuff is thoroughly removed and compact it from time to time, but it's probably easier to start again, since it's just a cache).
If the web service gives you no clues about when to reload, then you'll just have to decide for yourself, but do be sure to check the HTTP headers for any caching instructions as well as the XML data[*]. Decide on a reasonable staleness for data (the amount of time a user spends staring at the results is an absolute minimum, since they'll see results that stale no matter what you do). Whenever you download anything, record the date/time you downloaded it. Flush old data from the cache.
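A sketch of that record-the-download-time idea, using the SQLite C API (the cache table and its columns are my own invention):

```cpp
#include <sqlite3.h>
#include <ctime>
#include <string>

// True if the cached copy of `url` is younger than maxAgeSeconds. Assumes a
// table: CREATE TABLE cache(url TEXT PRIMARY KEY, fetched_at INTEGER, body BLOB);
bool isFresh(sqlite3* db, const std::string& url, long long maxAgeSeconds) {
    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db, "SELECT fetched_at FROM cache WHERE url = ?1",
                       -1, &stmt, nullptr);
    sqlite3_bind_text(stmt, 1, url.c_str(), -1, SQLITE_TRANSIENT);

    bool fresh = false;
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        long long fetchedAt = sqlite3_column_int64(stmt, 0);
        fresh = (std::time(nullptr) - fetchedAt) < maxAgeSeconds;
    }
    sqlite3_finalize(stmt);
    return fresh;  // also false when the URL was never cached
}
```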
To prevent long delays refreshing data, you could:
visually indicate that the data is stale, but display it anyway and replace it once you've refreshed.
allow staler data when the user has a lot of stuff visible than when they're looking at just a small amount. That way you'll "do nothing" while waiting for a small amount of stuff, but not while waiting for a large amount of stuff.
run a background task that does nothing other than expiring old stuff out of the cache and reloading it. The main app always displays the best available, however old that is.
Or some combination of tactics.
[*] Come to think of it, if the web server is providing reasonable caching instructions, then it might be simplest to forget about any sort of storage or caching in your app. Just grab the XML files and display them, but grab them via a caching web proxy that you've integrated into your app. I don't know which proxies make this easy - you can compile Squid yourself (of course), but I don't know whether you can link it into another app without modifying it yourself.
I currently have a GUI single-threaded application in C++ and Qt. It takes a good 1 minute to load (read from disk) and ~5 seconds to close (saving settings, finalize connections, ...).
What can I do to make my application appear to be faster?
My first thought was to have a server component of the app that does all the work while the GUI component is only for displaying. The communication would be done via socket, pipe or memory map. That seems like overkill (in terms of development effort) since my application is only used by a handful of people.
The first step is to start profiling. Use an actual, low-overhead profiling tool (e.g., on Linux, you could use OProfile), not guesswork. What is your app doing in that one minute it takes to start up? Can any of that work be deferred until later, or perhaps skipped entirely?
For example, if you're loading, say, a list of document templates, you could defer that until the user tells you to create a new document. If you're scanning the system for a list of fonts, load a cached list from last startup and use that until you finish updating the font list in a separate thread. These are just examples - use a profiler to figure out where the time's actually going, and then attack the code starting with the largest time figures.
In any case, some of the more effective approaches to keep in mind:
Skip work until needed. If you're doing initialization for some feature that's used infrequently, skip it until that feature is actually used.
Defer work until after startup. You can take care of a lot of things on a separate thread while the UI is responsive. If you are collecting information that changes infrequently but is needed immediately, consider caching the value from a previous run, then updating it in the background, as in the sketch below.
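A sketch of that cache-then-refresh pattern with std::async (the font-list functions are placeholders for whatever your profiler says is slow):

```cpp
#include <chrono>
#include <future>
#include <string>
#include <vector>

// Placeholder implementations; the real ones would hit the disk.
std::vector<std::string> loadCachedFontList() { return {"cached"}; }
std::vector<std::string> scanSystemFonts()    { return {"fresh"};  }

int main() {
    // Show something immediately from last run's cache...
    std::vector<std::string> fonts = loadCachedFontList();

    // ...while the real (slow) scan runs on another thread.
    auto fresh = std::async(std::launch::async, scanSystemFonts);

    // In the UI loop, poll until the fresh data is ready, then swap it in
    // and rewrite the cache for the next startup.
    if (fresh.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        fonts = fresh.get();
}
```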
For your shutdown time, hide your GUI instantly, and then spend those five seconds shutting down in the background. As long as the user doesn't notice the work, it might as well be instantaneous.
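In Qt that can be as simple as this (a sketch; saveEverything() stands in for your real shutdown work):

```cpp
#include <QCloseEvent>
#include <QMainWindow>

class MainWindow : public QMainWindow {
protected:
    void closeEvent(QCloseEvent* event) override {
        hide();            // the window vanishes immediately
        saveEverything();  // the slow part runs while nothing is visible
        event->accept();   // now really close
    }
private:
    void saveEverything() { /* save settings, finalize connections, ... */ }
};
```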
You could employ the standard trick of showing something interesting while you load, much as many games nowadays show a tip or two while they are loading.
It looks to me like you're only guessing at where all this time is being burned. "Read from disk" would not be high on my list of candidates. Learn more about what's really going on.
Use a decent profiler.
Profiling is a given, of course.
Most likely, you'll find I/O is substantial - reading in your startup files. As bdonlan notes, deferring work is a standard technique. Google 'lazy evaluation'.
You can also consider caching data that does not change. Save a cache in a faster format, such as binary. This is most useful if you happen to have a large static data set read into something like an array.
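For example (a sketch, with no versioning or error handling; fine for a cache you can always rebuild from the original source):

```cpp
#include <cstddef>
#include <fstream>
#include <vector>

// Dump a large static array as raw binary: element count, then the data.
void saveCache(const std::vector<double>& data, const char* path) {
    std::ofstream out(path, std::ios::binary);
    std::size_t n = data.size();
    out.write(reinterpret_cast<const char*>(&n), sizeof n);
    out.write(reinterpret_cast<const char*>(data.data()), n * sizeof(double));
}

std::vector<double> loadCache(const char* path) {
    std::ifstream in(path, std::ios::binary);
    std::size_t n = 0;
    in.read(reinterpret_cast<char*>(&n), sizeof n);
    std::vector<double> data(n);
    in.read(reinterpret_cast<char*>(data.data()), n * sizeof(double));
    return data;  // empty when the cache file is missing
}
```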
I currently have a Django site, and it's kind of slow, so I want to understand what's going on. How can I profile it so as to differentiate between:
effect of the network
effect of the hosting I'm using
effect of the javascript
effect of the server-side execution (Python code) and SQL access.
any other effect I am not considering due to the massive headache I happen to have tonight.
Of course, for some of them I can use Firebug, but some effects are correlated (e.g. JavaScript could appear slow because it's actually doing slow network access).
Thanks
Client side:
Check with Firebug which page components take long to load, and how long the browser needs to render the page after loading is completed. If everything loads fast but rendering takes its time, then your HTML/CSS/JS is probably the problem; otherwise it's server side.
Server side (I assume you're on some Unix-like server):
Check the web server with some small static content (a small GIF or a little HTML page), using ApacheBench (ab, part of the Apache web server package) or httperf. The server should be able to answer at least 100 requests per second (of course this depends heavily on the size of your test content, web server type, hardware and other stuff, so don't take that 100 too seriously). If that looks good,
test Django with ab or httperf on a "static view" (one that doesn't use a database object). If that's slow, it's a hint that you need more CPU power; check CPU utilization on the server with top. If that's OK, the problem might be in the way the web server executes the Python code.
If serving semi-static content is OK, your problem might be the database, or you might be I/O-bound. Database problems are a wide field; here is some general advice:
Check I/O throughput with iostat. If you see lots of writes, then you need a better disk subsystem, a faster RAID, SSD hard drives... or you should optimize your application to write less.
If it's lots of reads, the host might not have enough RAM dedicated to the file system buffer, or your database queries might not be optimized.
If I/O looks OK, then the database might not be suited to your workload or not be correctly configured. Logging slow queries and monitoring database activity, locks, etc. might give you some idea.
If you let us know what hardware/software you use, I might be able to give more detailed advice.
Edit/PS: Forgot one thing: of course, your app might have a bad design and do lots of unnecessary/inefficient things...
Take a look at the Django debug toolbar - that'll help you with the server side code (e.g. what database queries ran and how long they took); and is generally a great resource for Django development.
The other, non-Django-specific bits you could profile with YSlow.
There are various tools, but problems like this are not hard to find because they are big.
You have a problem, and when you remove it, you will experience a speedup. Suppose that speedup is some factor, like 2x. That means the program is spending 50% of its time waiting for the slow part. What I do is just stop it a few times and see what it's waiting for. In this case, I would see the problem 50% of the times I stop it.
First I would do this on the client side. If I see that the 50% is spent waiting for the server, then I would try stopping it on the server side. Then if I see it is waiting for SQL queries, I could look at those.
What I'm almost certain to find out is that more work is being requested than is actually needed. It is not usually something esoteric like a "hotspot" or an "algorithm". It is usually something dumb, like doing multiple queries when one would have been sufficient, so as to avoid having to write the code to save the result from the first query.
Here's an example.
First things first; make sure you know which pages are slow. You might be surprised. I recommend django_dumpslow.
SQLite is a great little database, but I am having an issue with it on Windows. It can take up to 50 seconds to perform a query on a 100MB database the first time the application is launched. Subsequent loads take 10% of that time.
After some discussions on the SQLite mailing list, I was told:
"The bug is in Windows. It aggressively pre-caches big database files
-- reads in big chunks of the files -- to make it look as if programs
like Outlook are better than they really are. Unfortunately although
this speeds up some programs it makes others act jerky because they
have no control over how much is read when they ask for just a few
bytes of file."
This problem is compounded because there is no way to get progress information while all this is happening from SQLite, so my users think something is broken. (I could display a dummy progress report, but that is really cheesy for a sharp tool.)
I believe there is a way to turn the pre-caching off globally, but is there some way around this programmatically?
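The best programmatic workaround I can think of (just a sketch of an idea, not something from the mailing list) is to warm the OS file cache myself before opening the database, reading in fixed chunks so I can show real progress instead of a dummy bar:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Sequentially read the database file once so Windows caches it on our
// schedule, reporting progress after every chunk.
void warmFileCache(const char* path, void (*progress)(double fractionDone)) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return;
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);

    std::vector<char> buf(1 << 20);  // 1 MB chunks
    long done = 0;
    std::size_t got;
    while ((got = std::fread(buf.data(), 1, buf.size(), f)) > 0) {
        done += static_cast<long>(got);
        progress(size > 0 ? static_cast<double>(done) / size : 1.0);
    }
    std::fclose(f);
}
```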
I don't know how to fix the caching problem, but 50 seconds sounds extreme. If the query itself takes 10% of that, that means 45 seconds to load a 100 MB file. Even if Windows does read in the entire file in one go, that shouldn't take more than a couple of seconds given normal hard drive speeds.
Is the file very fragmented or something?
It sounds to me like there's more than just precaching at play here.
I'm having the same problem with my first query, too. The problem returns after the database hasn't been queried for a long time, so it seems to be a memory-caching problem. My software runs 24/7, and every once in a while the user performs the SELECT query. I am also performing the query on a database of the same size.