How should I evaluate different data store components for Ember?

Just in my reading over the last few days I've found at least three different extensions offering data store support for Ember: ember-data, sproutcore-datastore, ember-ezdata, and I think I might be missing one.
This range of options gives rise to several questions.
Obviously ember-data is the "official" extension, but it's also pretty heavily fenced with qualifications ("This isn't ready for production") from the core team.
How should I compare and evaluate these options?
In the SproutCore 1.x series, development was usually done with fixtures, and the data source wired in later. Can any of these options support that sort of workflow? Can I load some production data this way (might change with release versions, but not user-editable) and other data from my back-end data source?
ETA: Here's a related question.

Personally, I'm expecting big things from ember-data, but it does seem to have a little way to go to be "production ready".
When I started using Ember, the ember-data project had just begun, so I decided to create a simple persistence layer of my own. I ended up with ember-rest, which is a pretty thin layer over jQuery.ajax(). You can see it in use in this Rails example. By the way, you can load JSON data directly into ember-rest without hooking into a backend.
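If it helps, here's roughly what declaring and seeding a resource looks like. This is a sketch from memory of the ember-rest examples, so check the ember-rest README for the authoritative API; the resource and property names below are illustrative:

App.Contact = Ember.Resource.extend({
  resourceUrl: '/contacts',
  resourceName: 'contact',
  resourceProperties: ['first_name', 'last_name']
});

// Loading local JSON without a backend: deserialize() copies the
// properties of a plain object straight onto the resource.
var contact = App.Contact.create();
contact.deserialize({ first_name: 'Ada', last_name: 'Lovelace' });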
I'm under the impression that sproutcore-datastore is no longer maintained. I've never tried ember-ezdata.
Another worthwhile option to check out is ember-resource.
I hope this is enough to get you started.

Related

Using Breeze.js with Ember.js

This is more of an exploration question.
We have been using breeze.js in our projects for a while and have been quite satisfied with it; it ties nicely into concepts we already know and use (change tracking, metadata, backend agnostic...)
Now we are evaluating the possibility of moving our front-end from the backbone/marionette/knockout stack to ember.js (which seems to be more of a complete framework than backbone's "pick-only-what-you-need" approach). We are exploring the possibility of keeping breeze.js as our data management layer on the client side.
But how can models returned from the server (server stack: node.js + mongoDB) be integrated with the models defined by ember.js?
A typical ember model definition could be:
var Person = Ember.Object.extend({
  Property1: 'foo',
  Property2: 'bar'
});
where you can see the call to Ember.Object.extend.
Breeze.js, upon receiving data from the server, attaches an entityAspect to each entity (and, if Knockout is present, wraps properties in observables). How could the returned objects be "transformed" into ember.js objects?
Maybe the way we approach this problem and the way we think about this integration is totally wrong, or maybe we should reject the "ember-breeze" combo entirely. From Google searches we have not found cases where breeze is used with ember.js, nor any guidance. That is why we come to the stackoverflow community: is it possible? What are the possible pitfalls to look out for?
There does not seem to be much in the way of existing solutions. The Breeze.js User Voice has some 2-year-old issues that never got much traction; support from that library does not seem to be imminent.
I would recommend exploring Ember-Data as your data management layer. It supports many (if not all) of the concepts you mentioned, and it's the default recommended data library for Ember applications.
That said, as far as trying to make Breeze.js work, you could probably take the objects returned from Breeze and turn them into Ember.Objects using Ember.Object.create, but you'd have to build a layer between Breeze and Ember that essentially serializes between Breeze objects and Ember objects in both directions.
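For illustration, here is a minimal sketch of what such a layer might look like, using Breeze's metadata API (entityType.dataProperties plus getProperty/setProperty) to shuttle values. The function names are hypothetical, and this assumes Breeze's "backingStore" model library (plain properties, no Knockout observables):

// Copy a Breeze entity's data properties onto a new Ember.Object.
function breezeToEmber(entity) {
  var attrs = {};
  entity.entityType.dataProperties.forEach(function (prop) {
    attrs[prop.name] = entity.getProperty(prop.name);
  });
  return Ember.Object.create(attrs);
}

// And the reverse: push Ember-side changes back onto the Breeze
// entity so its change tracking notices them.
function emberToBreeze(emberObj, entity) {
  entity.entityType.dataProperties.forEach(function (prop) {
    entity.setProperty(prop.name, emberObj.get(prop.name));
  });
}

Keeping the two object graphs in sync afterwards (observers on the Ember side, entityAspect.propertyChanged subscriptions on the Breeze side) is where most of the real work would be.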

How does Ember Data manage large amounts of records?

I have been working with Ember Data and I'm trying to understand some concepts. I have a fairly data-intensive app; my back-end has endpoints that return a lot of records.
So, basically, I have routes that do something like this.store.findAll('places'), which can return thousands of places, each with several text-heavy fields such as services or description.
This is only one of the resources, there are a few more that handle that amount of data as well.
My main concern is that the app hits some kind of limit or becomes unresponsive. So my question is: how does Ember Data manage large amounts of records? Is there any best practice for handling those kinds of scenarios?
How does Ember Data manage large amounts of records?
The same way it handles a small number of records. It's not going to do anything special for performance if you try to load or fetch a large number of records; you need to handle that yourself.
Is there any best practice for handling those kinds of scenarios?
Unfortunately, no. Pagination of some sort is really the only way to accomplish this. But as you can see in this thread, there's quite a bit of discussion about the "best" way to do it. There are adapters and plugins made to handle this scenario, as well as server-side boilerplate designed to make it easy. But there really is no canonical way of doing pagination with Ember Data.
In my opinion, the best way to handle large amounts of data is to design a query endpoint and implement it on your server, handling everything yourself. This will be the most tailored to your application and the easiest to understand. If it sounds complicated, that's because it is. Data set segmentation/pagination is not a simple problem to solve; you will definitely run into issues along the way. That's why there's no agreed-upon best practice yet.
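For concreteness, here's a sketch of the client side of that approach (globals style for brevity; the page/per_page parameter names are whatever your server's query endpoint defines, and store.query is the standard Ember Data method for passing them through):

App.PlacesRoute = Ember.Route.extend({
  model: function () {
    // Fetch only the first page up front; later pages can be loaded
    // on demand (infinite scroll, pagination controls, etc.).
    return this.store.query('place', { page: 1, per_page: 50 });
  }
});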
Update: Javier Cadiz mentioned the JSON API in the comments, so I thought I would address it. The JSON API does seem to be the new de facto standard for Ember Data, and it does specify a pagination method. However, the JSON API is fairly new and isn't widely adopted yet. I believe it wasn't until very recently that Ember Data switched to the JSON API adapter as its default. Using this pagination would most likely require you to conform to the entire API, not just the pagination aspect. (Although you can always steal certain ideas from it.) Because of that, I'm not sure I'd call it a best practice just yet.
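For reference, JSON API expresses pagination through top-level links in the response, along these lines (the URLs and the page[number]/page[size] strategy are the spec's example, not a requirement):

{
  "data": [ ... ],
  "links": {
    "prev": "/places?page[number]=1&page[size]=50",
    "next": "/places?page[number]=3&page[size]=50"
  }
}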
Bottom line: the JSON API way of pagination may be the way of the future, but it's not currently very popular. (Although that's just my opinion based on what I see and read; there's no telling how many people are using it privately.)

SQLite, iCloud, and maybe Core Data—which to use for storing files and sharing them with all of the user's devices?

I've been tasked with porting personal face recognition software to iOS and Mac OS X as well as helping keep the basic SDK and much of the software as cross-platform as possible. One of the things one of my associates and I want to do is store data on the user's face in an SQL database (probably SQLite). We would also like to allow users to put their data on iCloud so they don't have to train each of their devices separately to recognize them. What's bugging me is how to do both these tasks, and I'm confronted with enough choices to feel overwhelmed. (I am still new to some of the technologies involved.)
For implementing SQL, I could embed SQLite directly in my program and write code for it, or I could use Core Data and have it talk to SQLite for me. (The database is not meant to be shared, so this is OK. And SQL is not fun.) However, Core Data is anything but portable (not to mention not intended for a model encoded as C++ objects), while writing directly against SQLite would mean we could reuse more code on other platforms.
Things get messier when factoring in iCloud, which has something like five or six possible ways of integrating it with a program. The only method I have definitively ruled out so far is iCloud key-value storage. (At the very least, there's a good chance a user would get into trouble with the 1 MB limit, and it is clearly not intended for anything as complex as I'm dealing with.) Core Data can integrate with iCloud through UIManagedDocument or NSPersistentStore, but, again, that means less in the way of reusable code. I can use SQLite together with UIDocument or NSDocument, but what I am trying to do seems to be not quite what these objects were intended for. The files I am dealing with are essentially large preference files, not meant for end-users to interact with directly; UIDocument and NSDocument seem to be meant for user-viewable and -editable files. And then there are iCloud Drive and CloudKit, which are still in beta. (On the other hand, these two are due to be released fairly soon. Considering that iOS users tend to upgrade to the latest version of the system software quickly, arguments about using either of these based on how many devices they will be able to run on should quickly become weak and obsolete.)
Can anyone recommend which way is best suited for my purposes? Thanks in advance.
Aaron Solomon Adelman
First off, you don't want to try and share the SQLite file directly. That's extremely likely to corrupt the file, because SQLite wasn't built with that kind of use in mind.
However:
You could use SQLite for local on-device storage only and use a separate API to send data back and forth. Apple's CloudKit would probably be a good choice if you can require iOS 8+. Numerous third party solutions exist (for example, Parse). You'll have to write your own code to translate between SQLite and the network API. Your SQLite schema and your data files would be portable to other platforms, and maybe some of the code if you use SQLite's direct API instead of an Objective-C wrapper (and I highly recommend using either FMDB or PLDatabase if you use SQLite).
Core Data does have built-in support for iCloud, which probably makes it a viable option. Your comment that "...I could use Core Data and have it talk to SQLite for me" suggests you might have somewhat misunderstood Core Data. Core Data is not a SQLite wrapper; it presents a completely different API and uses its own schema. You can't really take a Core Data persistent store file and use it on other platforms unless you want to spend some time reverse-engineering the schema. Also, using Core Data with iCloud does not require the use of UIManagedDocument, though it does still require a lot of other Core Data-specific classes.
If you want to be able to sync data across multiple devices which are not all Apple devices then you need a third party API. None of Apple's cloud APIs will be useful here. There are many providers that can help out with this. For local data storage, either SQLite or Core Data would work, but you should look at the third party services and see what storage option(s) they support, then try to work with them.
The best approach depends on your needs. If you expect to copy data files from an iOS app to other platforms, SQLite is good, though you'll still have a lot of platform-specific code, so the savings are smaller than they first appear. If you don't plan to move data files around like that, Core Data is probably easier to deal with.

Router Basics in 1.0.0-Pre.4 - what is the right way to write a router in current release?

I hate to ask such a newbie and vague question, but I imagine there must be others out there whose brains are also about to explode. I see related questions, but none that directly addresses my confusion.
I've just been introduced to Ember.js and I'm trying to learn the basics of the Router, but I can't find two sources that agree on how this is done. I suspect that I'm jumping in during an unstable transition. I'm using the latest 1.0.0-Pre.4 release.
The best I can figure, Router is the new mechanism, and possibly replaces StateManager - yes? Yet the classes listed under 1.0.0-Pre.4 API on the web site don't even list a Router object, nor does the guide make mention of it... yet, I get no complaints from javascript when I use sample code that extends Em.Router.
Ok cool, however it then barfs on the Router member "transitionTo" which is present in many of the demo projects, but is unrecognized in the current release.
So, I guess what I'm asking is not so much a direct question, as I am looking for a grounding point in a sea of contradictory information.
If starting out with Ember.js as it is RIGHT NOW (1.0.0-pre.4), with no history to contend with, what routing mechanism should I be looking at, and is there any tutorial or simple sample app that demonstrates and runs against this version of the library? Can you confirm my suspicion that the documentation is out-of-date in regard to routing?
Ember.js is a lot to learn, and if I ever hope to figure it out, I need to know what to ignore and what to embrace.
Thank you.
The best I can figure, Router is the new mechanism, and possibly replaces StateManager - yes?
Yes, Router is the new mechanism. It does not replace StateManager per se: early versions of the Ember Router were based on StateManager, and the new one (1.0.0-pre.4) is not, but StateManager is still an important part of the ember library. Many of ember's core components (models, views) are built on StateManager.
Yet the classes listed under 1.0.0-Pre.4 API on the web site don't even list a Router object, nor does the guide make mention of it... yet, I get no complaints from javascript when I use sample code that extends Em.Router.
The Router does not have API docs yet. I imagine these are in the works. When in doubt about a fast-moving open source project I always have a look at the tests. Ember has a really solid test suite, and in the case of routing you can learn a lot by reading through the integration tests here: routing/basic_test.js
Ok cool, however it then barfs on the Router member "transitionTo" which is present in many of the demo projects, but is unrecognized in the current release.
Sounds like those demo projects are out of date.
Can you confirm my suspicion that the documentation is out-of-date in regard to routing?
Re: the official docs I think both the API and Guides can be considered current, but be aware that not every ember feature has API docs so far. For sure there are many out-of-date sources floating around. Trek has been working to compile a list of out-of-date sources so that we can reach out to authors for a refresh. Here on Stack Overflow, anything related to the old router should now be tagged https://stackoverflow.com/questions/tagged/ember-old-router.
If starting out with Ember.js as it is RIGHT NOW (1.0.0-pre.4), with no history to contend with, what routing mechanism should I be looking at, and is there any tutorial or simple sample app that demonstrates and runs against this version of the library?
The ember team has been putting a lot of effort over the past few months into the Ember.js Guides - AFAIK they are all up to date WRT 1.0.0-pre.4 and are becoming more solid every day. They include a lot of detail about the new Router - see Ember.js - Routing for the most up-to-date information.
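To give you a grounding point, a minimal 1.0.0-pre.4 router looks roughly like this (a sketch from memory; the route names are made up, and the guides above are authoritative):

App = Ember.Application.create();

// The pre.4 mapping API: this.route for leaf routes,
// this.resource for routes with children.
App.Router.map(function () {
  this.route('about');              // /about
  this.resource('posts', function () {
    this.route('new');              // /posts/new
  });
});

App.PostsRoute = Ember.Route.extend({
  model: function () {
    return App.Post.find();         // assumes ember-data is present
  }
});

Note that transitionTo now lives on routes (this.transitionTo('posts') from within a route) rather than on a standalone router instance, which is why older demo code barfs.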
As for tutorials, there are several new ones that are worth a look. Check out this SO post for a few recommendations: Could someone point me to an ember.js project that uses the latest routing system? Bonus points if it uses ember-data as well
tip: build your own version of ember from the master branch - they've fixed a few bugs :)

Optimisation tips when migrating data into Sitecore CMS

I am currently faced with the task of importing around 200K items from a custom CMS implementation into Sitecore. I have created a simple import page which connects to an external SQL database using Entity Framework and I have created all the required data templates.
During a test import of about 5K items I realized that I needed to find a way to make the import run a lot faster, so I set about finding some information on optimizing Sitecore for this purpose. I have concluded that there is not much specific information out there, so I'd like to share what I've found and open the floor for others to contribute further optimizations. My aim is to create some kind of maintenance mode for Sitecore that can be used when importing large volumes of data.
The most useful information I found was in Mark Cassidy's blog post http://intothecore.cassidy.dk/2009/04/migrating-data-into-sitecore.html. At the bottom of that post he provides a few tips for when you are running an import.
If migrating large quantities of data, try and disable as many Sitecore event handlers and whatever else you can get away with.
Use BulkUpdateContext()
Don't forget your target language
If you can, make the fields shared and unversioned. This should help migration execution speed.
The first thing I noticed on this list was the BulkUpdateContext class, as I had never heard of it. I quickly understood why, as a search on the SDN forum and in the PDF documentation returned no hits. So imagine my surprise when I actually tested it out and found that it improves item creation/deletion speed at least tenfold!
The next thing I looked at was the first point, where he basically suggests creating a version of web.config that only has the bare essentials needed to perform the import. So far I have removed all events related to creating, saving and deleting items and versions. I have also removed the history engine and system index declarations from the master database element in web.config, as well as any custom events, schedules and search configurations. I expect there are many other things I could remove or disable to increase performance. Pipelines? Schedules?
What optimization tips do you have?
Incidentally, BulkUpdateContext() is a very misleading name - as it really improves item creation speed, not item updating speed. But as you also point out, it improves your import speed massively :-)
Since I wrote that post, I've added a few new things to my normal routines when doing imports.
Regularly shrink your databases. They tend to grow large and bulky. To do this, first go to Sitecore Control Panel -> Database and select "Clean Up Database". After this, do a regular ShrinkDB on your SQL server.
Disable indexes, especially if importing into the "master" database. For reference, see http://intothecore.cassidy.dk/2010/09/disabling-lucene-indexes.html
Try not to import into "master", however... you will usually find that imports into "web" are a lot faster, mostly because that database isn't (by default) connected to the HistoryManager or other gadgets
And if you're really adventurous, there's one more thing you could try that I'd been considering trying out myself but never got around to. It might work, but I can't guarantee that it will :-)
Try removing all your field types from App_Config/FieldTypes.config. The theory here is that this should essentially disable all of Sitecore's special handling of the content of these fields (like updating the LinkDatabase and so on). You would need to manually trigger a rebuild of the LinkDatabase when done with the import, but that's a relatively small price to pay
Hope this helps a bit :-)
I'm guessing you've already hit this, but putting the code inside a SecurityDisabler() block may speed things up also.
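In case it's useful, here's the rough shape of an import loop combining those two. This is a sketch, not production code: the content path, template location and field names are made up, and sourceRecords stands in for whatever your Entity Framework query returns. SecurityDisabler and BulkUpdateContext are the actual Sitecore classes, both disposable:

using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.SecurityModel;

// Inside your import routine:
using (new SecurityDisabler())      // skip per-item security checks
using (new BulkUpdateContext())     // suppress per-item events/index updates
{
    Database master = Sitecore.Configuration.Factory.GetDatabase("master");
    Item parent = master.GetItem("/sitecore/content/Imported");                        // hypothetical path
    TemplateItem template = master.GetItem("/sitecore/templates/Custom/ImportedItem"); // hypothetical template

    foreach (var record in sourceRecords)
    {
        Item item = parent.Add(record.Name, template); // name must be a valid item name
        item.Editing.BeginEdit();
        item["Title"] = record.Title;                  // field names are examples
        item.Editing.EndEdit();
    }
}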
I'd be a lot more worried about how Sitecore performs with this much data... assuming you only do the import once, who cares how long that process takes. Is this going to be a regular occurrence?