I'm using Ember.js 2.3.0, Ember Data 2.3.3 and jQuery 2.1.4, and I'm having an issue when trying to transition away from a page with an HTML table component on it. The table component is populated with data from this Ember Data call:
this.store.query('log', {
  filter: {
    created_at_from: moment().subtract(1, 'month').format('DD/MM/YYYY'),
    created_at_to: moment().format('DD/MM/YYYY'),
    object: 'IsoApplication'
  }
});
which, judging by the developer tools "Timeline" and "Network" tabs, resolves in good time. The table has no special jQuery events attached to it; it is simply a plain HTML table made dynamic with Ember.
However, when clicking a "link-to" helper that transitions away from the page containing the component, there is a huge 10-second delay before the destination page renders. During this time nothing seems to happen, and drilling down in the developer tools "Timeline" shows that the delay occurs while the "removeFromListeners" function is being called (see attached timeline screenshot). No asynchronous work is happening at this point, so no loader appears, but tearing down the table seems to be the problem: if I remove the table component from the template, the issue disappears and the page transitions almost instantly.
I have attached the timeline screenshot to see if anyone can help to pinpoint the possible cause of this problem.
Any suggestions would be welcome.
Thanks.
OK, after significantly reducing the number of rows in my table I discovered that the issue went away. runspired provided a detailed and very helpful explanation of why my issue occurred, which I have posted below to inform others who may have similar issues:
there's a number of things going on here
I've given a few talks on this (including two podcasts on Ember Weekend (episodes 38 and 39))
I wish I had my chicago-ember talk on it recorded but I don't
but basically, it comes down to three things as to why this is slow
and none of it is actually because "dom is slow", that argument masks the truth of the matter
1.) don't render the universe, render the scene
easiest analogy is take any video game you can think of, especially one in which you can "pivot" the camera view quickly
that pivot is a lot like scroll
you really only want to render what's visible at the moment, and prepare content that's closer to the camera's view
but it makes no sense to prepare content that's far out of view
this is a lesson that every native SDK and game SDK has baked in, but which no JS framework has yet realized
when people say DOM is slow, it's really because they are trying to render far more things than they ought to
2.) is a bug in most browsers
memory allocated for DOM is always retained by the tab that allocated it
this allocation persists beyond the lifecycle of your page
it's a true leak that hopefully Chrome etc. will fix soon
or basically, the allocation heap size made for DOM can only ever grow
you can reuse allocated memory, but you can't shrink it
this means that if you generate a huge amount of DOM nodes, even once you tear them down there's a high memory overhead for that tab
which causes performance issues in other areas because less memory is available for JS allocations etc.
this problem is bigger on mobile, especially android, than on desktop. But it can affect desktop too.
Something going hand in hand with 1 and 2
is there's a definite limit to how many DOM nodes you can render before the browser starts chugging
I've found that limit to be roughly 40k DOM nodes, varies from as low as 20k on older android devices to > 100k on Safari
but 40k seems to be a limit which if held to as a target works well for all platforms
if you have 700-2000 table rows, and each row has maybe 3 TDs, and each TD has just one link or span or other element, and then in that you have some text
6,300 - 18,000 nodes
or basically, you're dealing with half of your max budget with that setup
just as the content of the rows
so 7002 - 20,002 nodes counting the rows
and it's likely that 9 nodes per row is a very low estimate
if we double that
you're exceeding that max
and 18 nodes sounds more likely
3.) GC events stop the world
even though the memory allocated by the tab is retained
the allocation IS cleaned up
(so it can be reused for more DOM later)
so you're releasing the memory for at LEAST your max DOM size (likely larger) all at once
that's a lot for the browser to clean up on
it's also likely a lot of arrays and array observers are being torn down in this time.
runspired is also the author of
https://github.com/runspired/smoke-and-mirrors which is a project that:
focuses on improving initial and re-render performance in high-stress situations by providing components and primitives for performant lists and svelte renders to match a core belief: Don't render the universe, render the scene.
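At its core, "render the scene" reduces to a small windowing calculation. Here is a minimal sketch of that idea (the function name and parameters are my own illustrative assumptions, not smoke-and-mirrors' actual API): given the scroll offset, only the rows near the viewport, plus a small buffer, are worth putting in the DOM.

```javascript
// Compute which rows of a fixed-height list should exist in the DOM:
// the rows intersecting the viewport, plus `buffer` extra rows on each
// side so content is ready just before it scrolls into view.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, buffer) {
  var first = Math.max(0, Math.floor(scrollTop / rowHeight) - buffer);
  var last = Math.min(totalRows - 1,
                      Math.ceil((scrollTop + viewportHeight) / rowHeight) + buffer);
  return { first: first, last: last };
}

// With 2000 rows of 40px each in a 600px viewport, only ~15 visible rows
// (plus the buffer) ever exist in the DOM, regardless of total list size.
var range = visibleRange(800, 600, 40, 2000, 5);
// range.first === 15, range.last === 40
```

With fixed-height rows this keeps the live DOM at a few dozen rows no matter how long the list is, which stays far under the ~40k-node budget described above and makes teardown on transition cheap.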
Related
I'm doing some tests with an HTML map in conjunction with Leaflet. Server side I have a Ruby Sinatra app serving JSON markers fetched from a MySQL table. What are the best practices for working with 2k-5k and potentially more markers?
1. Load all the markers up front and then delegate everything to Leaflet.markercluster.
2. Load the markers every time the map viewport changes, sending the southWest & northEast points to the server, doing the clipping server side and then syncing the client-side marker buffer with the server-fetched entries (what I'm doing right now).
3. A mix of the two approaches above.
Thanks,
Luca
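The viewport-driven variant (option 2) hinges on turning the map bounds into a server request. A minimal sketch follows, with the caveat that the query parameter names and the /markers route are assumptions rather than a fixed API:

```javascript
// Build the clipping request from the viewport corners. southWest/northEast
// are plain {lat, lng} objects, matching what Leaflet's LatLng exposes.
function viewportQuery(southWest, northEast) {
  return '/markers?swLat=' + southWest.lat + '&swLng=' + southWest.lng +
         '&neLat=' + northEast.lat + '&neLng=' + northEast.lng;
}

// In Leaflet this would typically be wired to the map's 'moveend' event:
//   map.on('moveend', function () {
//     var b = map.getBounds();
//     fetch(viewportQuery(b.getSouthWest(), b.getNorthEast()))
//       .then(function (res) { return res.json(); })
//       .then(syncMarkerBuffer); // client-side buffer sync, app-specific
//   });
var url = viewportQuery({ lat: 45.0, lng: 9.0 }, { lat: 45.5, lng: 9.5 });
```

Debouncing the 'moveend' handler (or skipping requests whose bounds are contained in an already-fetched area) keeps the server traffic reasonable while panning.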
A few months have passed since I originally posted the question and I made it through!
As Brett DeWoody correctly noted, the right approach is strictly tied to the number of DOM elements on the screen (I'm referring mainly to markers). The faster the device (CPU especially), the more elements it can handle. Since the app I was developing targets both desktop and tablet devices, CPU was a relevant factor, as was the marker density of different geo-areas.
I decided to separate database querying/fetching from map representation/displaying. Basically, the user adjusts controls/inputs to filter the whole dataset; afterwards, records are fetched and Leaflet.markercluster does the job of representation. When a filter is modified, the cycle starts over. Users can choose the map zoom level of clusterization depending on their CPU power.
In my particular scenario, the approach described above proved best (verified by console.time). I found that viewport optimization worked well only for lower marker-density areas (a pity).
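The filter -> fetch -> cluster cycle can be sketched like this. The record shape and the in-memory applyFilter are simplified stand-ins (in the real app the filtering runs server side); the clustering calls in the comments are Leaflet.markercluster's actual bulk methods:

```javascript
// Stand-in for the server-side filter step: keep only the records that
// match the user's current control/input values. Field names are invented
// for illustration.
function applyFilter(records, filter) {
  return records.filter(function (r) {
    return (!filter.category || r.category === filter.category) &&
           (!filter.minScore || r.score >= filter.minScore);
  });
}

// Each time the user changes a filter control, the cycle restarts:
//   var matches = applyFilter(allRecords, currentFilter);
//   clusterGroup.clearLayers();              // drop the old markers
//   clusterGroup.addLayers(matches.map(toMarker)); // bulk-add the new set
var sample = [
  { category: 'cafe', score: 5 },
  { category: 'museum', score: 2 }
];
var matches = applyFilter(sample, { category: 'cafe' });
// matches.length === 1
```

Using addLayers (plural) rather than adding markers one by one matters here: markercluster batches the clustering work, which is much cheaper for thousands of markers.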
Hope it may be helpful.
Cheers,
Luca
Try the options and optimize when you see issues rather than optimizing early. You can probably get away with just Leaflet.markercluster unless your markers have lots of data attached to them.
I can't wrap my head around properly creating a responsive design using Foundation 5 when dealing with grid systems.
Is it imperative that every element have its width set in columns using the grid system?
To be more clear, does every element on the page require a number of columns for each width (small, medium, large) for the site to be considered truly responsive? Or is it sufficient to set widths in % and ems, with simple explicit media queries, to achieve that goal?
I don't know anything about Foundation 5, but in principle, no, you do not need a grid system at all to be responsive. As you rightly suspected, media queries used to rearrange the elements you want to move for different screen sizes will make any page responsive. And yes, set things in em units or percentages - especially the media query breakpoints.
Grids are of use if your page content naturally needs to be displayed in a grid-like fashion.
By the way, you would probably have had an answer much earlier if you had tagged it correctly. Responsive design makes it a CSS question, so it needs a CSS tag! I just happened to see this because I was searching for something else.
I am debugging a performance problem in an app. I drilled down to what appears to be a clue: Ember.RenderBuffer.string() took over 4 seconds for one of the elements. Drilling further into the code, the culprit was the function setInnerHTMLWithoutFix: it searches the template HTML for script tags injected by Metamorph and then replaces them all one by one, but to do this it has to traverse the DOM for each of the matches, and there were over 400 matches! We must have a large view, but I'd love to hear whether anyone has encountered this before and/or any pointers to fixing or working around the problem.
You can now use the Ember Inspector for Chrome, which includes a render-performance feature that lets you see and evaluate any rendering bottleneck.
Lately I have been building a mobile phone application using SproutCore 2.0 and now Ember.js.
This works great on my iPhone (3GS), though it stutters on many Android devices.
The simplest thing, like switching between main menu items and thus loading a different view, feels anything but native.
Currently it makes use of template views which are appended and removed in a statechart. Each main menu item has a main state in which the corresponding views are appended (and removed on exit).
This method seems optimal to me, but it does not perform optimally, so I am wondering whether an alternative approach, like appending all views on load (deferred) and toggling their visibility, would improve performance (1). It would make the DOM larger and thus operations on the DOM slower.
What is an optimal Ember.js code structure for building a mobile application, and what considerations need to be taken into account when building for mobile devices (2)?
It really depends on your application. In general, limiting the number of DOM elements and the number of DOM changes helps performance. Deciding whether to leave elements in the DOM and hide/show them, or to create/destroy elements on demand depends on the usage patterns of the app. If you expect the user to move quickly back and forth between views, hide/show can make sense.
I have a Windows Phone 7 app that (currently) calls an OData service to get data and throws the data into a listbox. It is horribly slow right now. The first thing I can think of is that OData returns way more data than I actually need.
What are some suggestions/best practices for speeding up the fetching of data in a Windows Phone 7 app? Is there anything I could be doing in the app to speed up the retrieval of data and put it in front of the user faster?
Sounds like you've already got some clues about what to chase.
Some basic things I'd try are:
Make your HTTP requests as small as possible - if possible, only fetch the entities and fields you absolutely need.
Consider using multiple HTTP requests to fetch the data incrementally instead of fetching everything in one go (this can, of course, actually make the app slower, but generally makes the app feel faster)
For large text transfers, make sure that the content is being zipped for transfer (this should happen at the HTTP level)
Be careful that the XAML rendering the data isn't too bloated - large XAML structure repeated in a list can cause slowness.
When optimising, never assume you know where the speed problem is - always measure first!
Be careful when inserting images into a list - the MS MarketPlace app often seems to stutter on my phone - and I think this is caused by the image fetch and render process.
In addition to Stuart's great list, also consider the format of the data that's sent.
Check out this blog post by Rob Tiffany. It discusses performance based on data formats. It was written specifically with WCF in mind but the points still apply.
As an extension to Stuart's list:
In fact there are three areas - communication, parsing, UI. Measure them separately:
Do just the communication with the processing switched off.
Measure parsing of a fixed OData-formatted string.
Whether you believe it or not, it can also be the UI.
For example, bad usage of a ProgressBar can result in a dramatic decrease in processing speed. (In general you should not use any UI animations, as explained here.)
Also, make sure that the UI processing does not block the data communication.