MS Chart controls for automated report/metrics generation to image, Web, or WinForms?

Are there reasons to go one way or the other?
System.Web.UI.DataVisualization vs. System.Windows.Forms.DataVisualization
I'm building a process for a build server, where it will simply output a chart to an image based on SQL data rows fetched at run time. I don't have anything vested in either direction, and have experience in both worlds, but none in MS Charting.
I assume there are good reasons to go either way.
What are the things to consider about going either direction?

We went the WinForms route because it took a long time for our data-intensive charts to be generated on the fly. With a service you can have the charts generated slightly ahead of time (a 5-minute cycle for us), and as a result the webpage loads instantly. It also means that our DB is queried at most once every five minutes, whereas if the charts were generated per request, each person who loads the web chart would hit the database individually, resulting in a lot more load on the DB.

Related

Problem loading a large amount of GeoJSON data into a Leaflet map

I'm doing some tests with an HTML map in conjunction with Leaflet. Server side, I have a Ruby Sinatra app serving JSON markers fetched from a MySQL table. What are the best practices for working with 2k-5k (and potentially more) markers?
Load all the markers up front and then delegate everything to Leaflet.markercluster.
Load the markers every time the map viewport changes: send the southWest & northEast points to the server, do the clipping server side, and then sync the client-side marker buffer with the server-fetched entries (what I'm doing right now).
A mix of the two above approaches.
Thanks,
Luca
A few months have passed since I originally posted the question and I made it through!
As @Brett DeWoody correctly noted, the right approach depends strictly on the number of DOM elements on the screen (I'm referring mainly to markers). The faster the device (CPU especially), the more you can afford. Since the app I was developing targets both desktop and tablet devices, CPU was a relevant factor, just like the marker density of different geo-areas.
I decided to separate database querying/fetching from map representation/display. Basically, the user adjusts controls/inputs to filter the whole dataset; the matching records are then fetched, and Leaflet.markercluster does the job of displaying them. Whenever a filter is modified, the cycle starts over. Users can choose the zoom level at which clustering kicks in, depending on their CPU power.
In my particular scenario this turned out to be the best approach (verified with console.time). Viewport-based optimization only proved worthwhile in lower marker-density areas (a pity).
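For reference, here is a rough sketch of the server side of that cycle. The real app is Ruby Sinatra on MySQL; this sketch uses Python/Flask and SQLite purely for illustration, and the table and column names are made up:

from flask import Flask, jsonify, request
import sqlite3

app = Flask(__name__)

@app.route('/markers')
def markers():
    # User-driven filters arrive as query-string parameters.
    category = request.args.get('category', '%')
    db = sqlite3.connect('markers.db')
    cur = db.execute(
        "SELECT id, lat, lng, title FROM markers WHERE category LIKE ?",
        (category,))
    features = [
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [lng, lat]},
         "properties": {"id": mid, "title": title}}
        for (mid, lat, lng, title) in cur.fetchall()]
    # The client feeds this straight into L.geoJSON / Leaflet.markercluster.
    return jsonify({"type": "FeatureCollection", "features": features})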
Hope it may be helpful.
Cheers,
Luca
Try the options and optimize when you see issues, rather than optimizing early. You can probably get away with just Leaflet.markercluster unless your markers have lots of data attached to them.

Django app performing slowly (even when cached)

Launching my second-ever Django site.
I've had problems in the past with Django's ORM (basically, the SQL it was generating just wasn't what I wanted and even using things like select_related() I couldn't wrangle it into what it should've been) -- I ended up just writing all my DB queries by hand in my views and using this function, taken from the Django docs, to turn the cursor's responses into usable dictionaries:
def dictfetchall(cursor, returnMultiDictAnyway=False):
    "Returns all rows from a cursor as a dict"
    desc = cursor.description
    rows = [
        dict(zip([col[0] for col in desc], row))
        for row in cursor.fetchall()
    ]
    # Collapse a single-row result to a plain dict unless told otherwise.
    if len(rows) == 1 and not returnMultiDictAnyway:
        return rows[0]
    return rows
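For context, a typical call site in one of my views looks something like this (the SQL and table name here are placeholders, not my real query):

from django.db import connection

cursor = connection.cursor()
cursor.execute("SELECT id, title, rating FROM reviews_album ORDER BY id DESC LIMIT 10")
albums = dictfetchall(cursor, returnMultiDictAnyway=True)  # always a list of dicts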
I'm almost ready to launch my site but I'm finding pretty huge performance problems on the two different webservers I've tried hosting the app with.
Locally, it doesn't run blazingly fast, but I generally put this down to my machine being a little slow in general. I don't have the numbers to hand (I'll add them later on), but the SQL times aren't crazily high and I've made the effort to optimise MySQL (adding missing indexes, etc.).
Here's the app, running on two different webhosts (using bit.ly to avoid Google spidering these URLs, sorry!):
http://bit.ly/10iEWYt (hosted on Dreamhost, using Passenger WSGI)
http://bit.ly/UZ9adS (hosted on WebFaction, also using WSGI)
At the moment I have DEBUG = False on both of those hosts (so there shouldn't be a loading penalty) and a file-based cache of 15 minutes for each one. On the Dreamhost one I have an experimental cron job hitting the homepage every 15 minutes in an effort to see if this keeps the Python server alive -- this doesn't seem to have done much.
If you try those links you should see how long it takes for the server to respond as you click around, even including the cache (try going from the homepage to another page then back home).
I've tried this profiling middleware, but I'm not really sure how to interpret the results (I can add them to this post later on when I'm home) -- in any case, the functions/lines it pointed to were all inside Django's own code, so I struggled to relate them to my own views, etc.
Is it likely that the dictfetchall() method above could be an issue here? I use that to work with the results of every DB query on the site (~5-10 per page, most on the homepage). I do have a few included templates but nothing too crazy. I have a context processor for common things like showing album reviews, which I use all over the place. I'm stumped about what else could be causing this slowness.
Thanks, hope this is enough info to be helpful.
EDIT: okay, here's a profiling trace of the site homepage: http://pastebin.com/raw.php?i=c7kHNXAZ -- struggling to interpret it, to be honest.
Also, I looked at the Debug Toolbar stats: 8 SQL queries in 246ms (I'm currently looking at optimising these further), but a total render time of 3235ms (locally). This is what's confusing me.
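Given that the Debug Toolbar says only ~250ms of that is SQL, the remaining ~3 seconds is presumably template rendering or Python-level work. One crude way I could check this (a sketch only, not my actual view; build_homepage_context() is a stand-in for my existing dictfetchall()-based queries):

import time

from django.http import HttpResponse
from django.template.loader import render_to_string

def homepage(request):  # stand-in for the real view
    t0 = time.time()
    context = build_homepage_context(request)  # the existing hand-written queries
    t1 = time.time()
    html = render_to_string('homepage.html', context)
    t2 = time.time()
    print('queries: %.0f ms, template render: %.0f ms'
          % ((t1 - t0) * 1000, (t2 - t1) * 1000))
    return HttpResponse(html)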

Windows Phone 7 - Best Practices for Speeding up Data Fetch

I have a Windows Phone 7 app that (currently) calls an OData service to get data, and throws the data into a listbox. It is horribly slow right now. The first thing I can think of is that OData returns way more data than I actually need.
What are some suggestions/best practices for speeding up the fetching of data in a Windows Phone 7 app? Anything I could be doing in the app to speed up the retrieval of data and putting into in front of the user faster?
Sounds like you've already got some clues about what to chase.
Some basic things I'd try are:
Make your HTTP requests as small as possible - if possible, only fetch the entities and fields you absolutely need.
Consider using multiple HTTP requests to fetch the data incrementally instead of fetching everything in one go (this can, of course, actually make the app slower, but generally makes the app feel faster)
For large text transfers, make sure that the content is being zipped for transfer (this should happen at the HTTP level)
Be careful that the XAML rendering the data isn't too bloated - large XAML structure repeated in a list can cause slowness.
When optimising, never assume you know where the speed problem is - always measure first!
Be careful when inserting images into a list - the MS MarketPlace app often seems to stutter on my phone - and I think this is caused by the image fetch and render process.
In addition to Stuart's great list, also consider the format of the data that's sent.
Check out this blog post by Rob Tiffany. It discusses performance based on data formats. It was written specifically with WCF in mind but the points still apply.
As an extension to Stuart's list:
In fact there are three areas - communication, parsing, and UI. Measure them separately:
Do just the communication, with the processing switched off.
Measure parsing of a fixed OData-formatted string.
Believe it or not, it can also be the UI.
For example, bad usage of a ProgressBar can result in a dramatic decrease in processing speed. (In general you should not use any UI animations, as explained here.)
Also, make sure that the UI processing does not block the data communication.

How to convert concurrent users into hits per second?

The SRS for the system I'm currently working on includes the following non-functional requirement: "the SuD shall be scalable to 200 concurrent users". How can I convert this statement into a more measurable characteristic: "hits per second"?
Assuming you're talking about a web application (based on your desire to estimate "hits" per second), you have to work on a number of assumptions.
- How long will a user spend between interactions? For typical content pages, that might be 10 seconds; for interactive web apps, perhaps only 5 seconds.
- Divide the number of users by the "think time" to get hits per second - 200 concurrent users with a think time of 10 seconds gives you 20 page requests per second on average.
- Then multiply by a "peak multiplier" - most web sites are relatively silent during the night, but really busy around 7PM. So your average number needs to take account of that - typically, I recommend a peak of between 4 and 10 times.
This gives you the peak page requests per second - this is usually the limiting factor for web applications (though by no means always - streaming video, for instance, is often constrained by bandwidth).
If you really want to know "hits", you then need to work through the following:
- How many assets are on your page? Images, stylesheets, JavaScript files, etc. - a "hit" typically refers to any kind of request, not just the HTML page (or ASPX or PHP or whatever). Most modern web apps include dozens of assets.
- How cacheable are your pages and/or assets? Most images, CSS, JS files etc. should be set to cacheable by the browser.
Multiply the page requests per second by the number of non-cacheable assets. If you want to be super precise, also add the number of visitors multiplied by the number of cacheable assets, since each visitor fetches those once with a cold cache.
All of this usually means you have to make lots and lots of assumptions - so the final number is an indicator at best. For scalability measurements, I usually spend more time trying to understand the bottlenecks in the system and observing the system under load.
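To make that arithmetic concrete, here is the whole estimate spelled out; every number below is an assumption you would replace with your own:

concurrent_users = 200
think_time_s = 10           # seconds between interactions per user
peak_multiplier = 5         # busy-hour factor (typically 4-10)
non_cacheable_assets = 8    # per-page requests that bypass the browser cache

page_requests_per_s = concurrent_users / float(think_time_s)       # 20
peak_pages_per_s = page_requests_per_s * peak_multiplier           # 100
peak_hits_per_s = peak_pages_per_s * (1 + non_cacheable_assets)    # 900 (page + assets)

print(peak_pages_per_s, peak_hits_per_s)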
Well, that's impossible to answer without knowing anything about your app or what it does. You need to figure out how many hits per second one user is likely to make when using the app, and multiply by 200.
Incidentally, hits/second is not the only metric you need to be concerned with. With 200 concurrent users, how much memory overhead will that be? How much disk access, and how many open file handles? How many DB reads/writes? How much bandwidth (does the app involve streaming media)? Can it all be handled by one machine? Etc., etc.

Windows Pre-Caching SQLite problem

SQLite is a great little database, but I am having an issue with it on Windows. It can take up to 50 seconds to perform a query on a 100MB database the first time the application is launched. Subsequent loads take 10% of that time.
After some discussions on the SQLite mailing list, I am told
"The bug is in Windows. It aggressively pre-caches big database files
-- reads in big chunks of the files -- to make it look as if programs
like Outlook are better than they really are. Unfortunately although
this speeds up some programs it makes others act jerky because they
have no control over how much is read when they ask for just a few
bytes of file."
This problem is compounded because there is no way to get progress information from SQLite while all this is happening, so my users think something is broken. (I could display a dummy progress report, but that is really cheesy for a sharp tool.)
I believe there is a way to turn the pre-caching off globally, but is there some way around this programmatically?
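To give an idea of what I mean by "programmatically": one workaround I can imagine (a rough sketch in Python purely for illustration, and an assumption on my part rather than a known fix) is to warm the file cache myself before opening the database, which at least lets me show a real progress report:

import os
import sqlite3

def warm_file_cache(path, chunk_mb=4, progress=None):
    # Read the whole file once so the OS caches it, reporting progress as we go.
    size = os.path.getsize(path)
    done = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_mb * 1024 * 1024)
            if not chunk:
                break
            done += len(chunk)
            if progress:
                progress(done, size)

def report(done, total):
    print('%d%% read' % (100 * done // total))

warm_file_cache('app.db', progress=report)
conn = sqlite3.connect('app.db')  # the first query should now hit a warm file cache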
I don't know how to fix the caching problem, but 50 seconds sounds extreme. If the query itself takes 10% of that, that means about 45 seconds to load a 100 MB file. Even if Windows does read in the entire file in one go, that shouldn't take more than a couple of seconds given normal hard drive speeds.
Is the file very fragmented or something?
It sounds to me like there's more than just precaching at play here.
I too am having the same problem with my first query. The problem returns after not querying the database for a long time, so it seems to be a memory-caching problem. My software runs 24/7, and every once in a while the user performs the SELECT query. I am also performing the query on a database of about the same size.