Difference between Ember objects and Ember Data ones - ember.js

What's the difference between Ember Objects and the ones from Ember Data? I know that I should use Ember Data models when there is some data on the server, but when and where should I use either of them?

Note: this is rather long, biased, and represents my own opinion on the matter. It may not be the answer you're looking for.
Ember.Object is what you could call the most basic object type in Ember. It has the essential features you are likely to use in a modern application, such as computed properties and observers, and together with the runtime it also supports bindings, filtering, and so on. I would call it the general-purpose object: it can be extended to create other types and combined with mixins to further enhance its usage. It has a great but finite number of features, and I wouldn't call it backend-friendly, but only because I know of DS.Model and its features.
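To make that concrete, here is a minimal sketch of a plain Ember.Object with a computed property and an observer (classic, pre-1.0-era syntax; the `App.Person` name is hypothetical):

```javascript
// A plain Ember.Object: computed properties and observers,
// no store or persistence involved (names are hypothetical).
App.Person = Ember.Object.extend({
  firstName: null,
  lastName: null,

  // recomputed lazily whenever firstName or lastName changes
  fullName: function() {
    return this.get('firstName') + ' ' + this.get('lastName');
  }.property('firstName', 'lastName'),

  // runs whenever firstName changes
  firstNameDidChange: function() {
    console.log('firstName changed to ' + this.get('firstName'));
  }.observes('firstName')
});

var person = App.Person.create({ firstName: 'John', lastName: 'Doe' });
person.get('fullName');          // "John Doe"
person.set('firstName', 'Jane'); // observer fires; fullName is invalidated
```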
Ember-Data's DS.Model heavily extends Object to provide features that make sense when working with backend data, in most cases in a RESTful environment. Much like an object backed by an ORM (e.g. .NET's EntityFramework or Ruby's ActiveRecord), it provides a set of features so that objects of that type can be managed through a data store (DS.Store). Beyond what is already present in Object, it adds state management (isDirty, isNew, isError, etc.), the ability to commit and roll back an object in a store (and subsequently against the backend API), relationships/associations, and so on.
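For flavor, a sketch of what a model definition looked like in the Ember-Data of that era (attribute and relationship names are hypothetical; the string-based class references reflect pre-1.0 syntax):

```javascript
// A DS.Model sketch: attributes, relationships, and store-managed
// state flags come for free (names hypothetical, pre-1.0-era syntax).
App.Post = DS.Model.extend({
  title:    DS.attr('string'),
  body:     DS.attr('string'),
  comments: DS.hasMany('App.Comment')
});

App.Comment = DS.Model.extend({
  text: DS.attr('string'),
  post: DS.belongsTo('App.Post')
});

// Every record carries lifecycle state maintained by the store:
// post.get('isNew'), post.get('isDirty'), post.get('isError'), ...
```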
If you are using Ember-Data at all, you should use DS.Model, since it was intended to be used with the store, and the model type is what drives sideloading, associations, pluralization, and the whole AJAX request/response workflow. In fact, one of the advantages of a Model backed by a Store is exactly this: the framework does the heavy lifting of building the AJAX request to the correct RESTful resource, handling the response, and sideloading the JSON payload into objects of the correct type, while handing you a promise so that you can transition views/routes while the data is being requested, processed, and materialized.
It also gives you a great deal of convenience within the store-backed object itself (e.g. record.deleteRecord(); store.commit()), and at the end of the day it makes us more productive, letting us build applications a lot faster.
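A sketch of that day-to-day workflow, using the API of the Ember-Data version current at the time (in later versions commit() became save(), and finders moved onto the store):

```javascript
// Day-to-day store workflow; a sketch using era-specific API.
// `store` denotes the app's DS.Store instance.
var post = App.Post.find(1);    // returns at once; the record is
                                // materialized when the AJAX call resolves

post.set('title', 'New title'); // the record becomes dirty
store.commit();                 // persists all pending changes (PUT)

post.deleteRecord();            // marks the record as deleted
store.commit();                 // issues the DELETE request
```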
With that said, there is criticism of this approach, because many developers don't like, or don't feel comfortable, resorting to what people call technomagic; in other words, they don't want to use it because they feel they are not 100% in control of what happens under the hood. I can see where these people are coming from, but in my personal opinion Ember-Data does nothing but help me be more productive, and the only thing it asks in return is that I am consistent with my code and follow certain conventions; I'm happy with that.
Back to Object: if you are not using Ember-Data, you should use the Object type for your models. This means you will have to do all of these tasks manually (generally not a big deal): create the AJAX requests yourself, handle the responses, load the response data into your objects, and maintain the whole communication workflow between your client app and your API. The advantage is that you will be 100% in control, at the cost of a little more effort, as described here by Robin Ward. You will still be able to use the routing API and most of the great features that make Ember what it is.
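In that spirit, a minimal sketch of a hand-rolled model (the endpoint and payload shape are assumptions; it follows the general shape of the approach linked above):

```javascript
// Hand-rolled model without Ember-Data: plain Ember.Objects plus
// manual AJAX (endpoint and payload shape are assumptions).
App.Post = Ember.Object.extend({});

App.Post.reopenClass({
  findAll: function() {
    return $.getJSON('/api/posts').then(function(payload) {
      // load the response data into plain Ember objects
      return payload.posts.map(function(json) {
        return App.Post.create(json);
      });
    });
  }
});

// The router's model hook happily accepts the resulting promise:
App.PostsRoute = Ember.Route.extend({
  model: function() {
    return App.Post.findAll();
  }
});
```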
So the question of when and where to use each of these types really depends on what architecture you have on your backend and what level of flexibility you have around that.
This is not something that has a definite answer for everybody, but it can be dealt with by answering a few questions that assess the long-term viability of using Ember-Data:
Does my API return JSON in the same format defined by Ember conventions?
If that's true only partially, can my team and I simply define mappings on a per-model basis to make everything conform to the conventions? (There is a sketch of this after the list.)
If not, can I change my backend API to conform to these conventions?
If not, where can I find an adapter that is specific for my backend technology?
I can't find one; would it be feasible to write my own adapter?
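To give a flavor of the mapping question above: in later versions of Ember-Data, a per-model serializer can map non-conforming payload keys onto your attribute names. A minimal sketch (model and key names are hypothetical):

```javascript
// Sketch: the API returns underscored keys (first_name) but the
// model uses camelCase (firstName). A per-model serializer maps
// them (Ember-Data 1.x-style API; names hypothetical).
App.PersonSerializer = DS.RESTSerializer.extend({
  attrs: {
    firstName: 'first_name',
    lastName:  'last_name'
  }
});
```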
After answering these questions, take into account your development iterations and life cycle; think about what it would take to maintain either approach in the long run; and also consider what path other people in the community have taken when deciding on their architecture and/or development strategy.
At the end of the day, you have to understand what these objects bring in terms of features and whether or not you will need them to build your application. IMHO, Ember-Data is the way to go in most cases, and it can only get better as we approach (possibly RC3 and then) Ember 1.0 final, which is likely to include Ember-Data as part of the package.
I hope this helps.

Related

Modeling frontend and backend in a use case diagram

I am trying to make a use case diagram for my project. The backend is going to be built with Django REST framework and the frontend with React. My question is: how can I model this situation correctly? Should I model the frontend and represent the backend as an actor, or the opposite, given that I am thinking of making a mobile application as a second frontend?
The right answer here is the Business Analyst's Standard Answer No. 1: it depends.
The question is - what do you want to model and why. Then - what is the correct tool (diagram) to do it.
The goal of a Use Case diagram is to show what functionalities a system is going to offer. The system can be treated as a whole, in which case you show the functionalities without depicting how the system is internally organised. This is the most common scenario and most probably the best way to use a Use Case diagram in your case, but it does not show the fact of having FE and BE; note that this type of diagram isn't really suited to do so, so keep reading.
You may also treat e.g. the BE as the system itself (this can make sense especially when you're building a headless API and really separating BE from FE, even more so when your BE and FE teams are totally separate). In that case the FE becomes an actor (just like, e.g., another system that interacts with your BE). Obviously the FE can be treated the same way (i.e. considered the system, with the BE as an actor), though there is usually less reason to do so.
Now, having said that: if you want to depict the distinction between BE and FE, you should consider other types of diagrams. Keep in mind that a Use Case diagram is a dynamic diagram, while the internal structure of a system is static, so it should obviously be one of the static diagrams instead. The one dedicated to showing the internal structure of a system is the Component diagram, and it would most likely serve best the purpose of indicating the existence of FE and BE (potentially with a further level of detail, e.g. existing microservices).
If, on the other hand, you would like to show the specific technology in use, a Deployment diagram might be your best shot. It allows you to show the actual runtime environments, artifacts, and their technologies.
Keep in mind: trying to use one type of diagram, or even worse one single diagram, to show everything is usually a bad idea and a mistake often made by newbies. Be smarter than that.
Use cases are about a set of behaviors with an observable result that is of value to the actors. They should not care about the internals of a system:
UseCases define the offered Behaviors of the subject without reference to its internal structure.
Therefore, you should in principle not care about the distinction between front-end and back-end, but focus on actor goals with the system.
The only situation where you'd care for the back-end in a use-case diagram, is the case where the front-end would be an independent application that is of value on its own, but can interact with actors that represent external independent systems. (More here)

UML modeling without deep knowledge of underlying libraries

For my next project I would like to try UML modeling. There are several reasons, mainly documentation, plus breaking ground for development to avoid re-coding everything over and over again.
I've tried it several times in the past, but I had the feeling that without deep knowledge of the libraries my work would depend on, it's not a trivial task: at the very beginning I don't know what kind of member variables and functions I will need.
Usually I would code to get familiar with the libraries and API my app was interfacing with, and I would reach a state where the work was almost done, or let's say 50% ready, at which point it made no sense to start modeling anything.
Am I right that you really need to understand the background well, or are there ways/techniques to overcome this?
Another point: do you build up the model from bottom to top or from top to bottom, or does it depend on the use case?
Thank you for any recommendations on how to proceed.
If I understand well, your main challenge is to get an understanding of the libraries and API that you are using.
If you intend to create a UML diagram to reverse-engineer the library and understand it, you might lose your time: you'd be able to make a meaningful model only once you've understood how the pieces fit together. And for this discovery and knowledge acquisition, you already use the most effective approach:
Usually I would code to get familiar with the libraries and API my app was interfacing with.
Now, if the library or the API is delivered with a UML model, that's another story: an existing design model (not all the details of the implementation, but the core elements of the design, and the interaction scenarios that are difficult to grasp from the code) could help you understand faster how the library works, which will help you move faster through the exploratory phase.
It's also a different story when you are reverse-engineering an undocumented app: there you have no tutorial, and it's difficult to write code that uses the existing elements in a meaningful way. There it can make sense to document the system post-mortem. But again, do not lose yourself in a detailed implementation model with all the details: focus on the core elements, whose understanding will really matter to your maintenance fellows.
The three main purposes of making UML class models when developing an app are:
Describing the entity types of the app's problem domain for analyzing and better understanding the requirements for the app in a conceptual (domain) model.
Designing the schema of the app's underlying database (this is typically an RDB schema defined with a bunch of CREATE TABLE statements).
Designing the model classes of the data model of your app, which will be coded, e.g., as Java Entity classes or C# classes with EF annotations.
For 1 and 2, you may take a look at my book An introduction to information modeling and databases, while for 3 you may check out a book on model-based development, e.g. for Java Backend Apps or JavaScript Frontend Apps.
If your goal is to model the dependencies of your app, that may indeed be another purpose. However, as argued by @Christophe, reverse-engineering a library is itself a big project that may easily consume more time than you have for developing your app.

Ember.js or Backbone.js for Restful backend [closed]

I already know that Ember.js is a more heavyweight approach in contrast to Backbone.js. I have read a lot of articles about both.
I am asking myself which framework works more easily as a frontend for a Rails REST backend. For Backbone.js I saw different approaches to calling a REST backend. For Ember it seems that I have to include some more libraries like 'data' or 'resources'. Why are there two libraries for this?
So what's the better choice? There aren't a lot of examples of connecting the frontend with the backend either. What's a good working example of a backend REST call like this:
URI: ../restapi/topics
GET
auth credentials: admin/secret
format: json
Contrary to popular opinion, Ember.js isn't a 'more heavyweight approach' than Backbone.js. They're different kinds of tools that target totally different end products. Ember's sweet spot is applications where the user keeps the application open for long periods of time, perhaps all day, and where interactions with the application's views or underlying data trigger deep changes in the view hierarchy. Ember is larger than Backbone, but thanks to Expires and Cache-Control headers this only matters on the first load. After two days of daily use, that extra 30k will be overshadowed by data transfers; sooner if your content involves images.
Backbone is ideal for applications with a small number of states, where the view hierarchy remains relatively flat and where the user tends to access the app infrequently or for shorter periods of time. Backbone's code gets to remain short and sweet because it assumes that the data backing the DOM will be thrown away and that both will be garbage collected: https://github.com/documentcloud/backbone/issues/231#issuecomment-4452400 Backbone's smaller size also makes it better suited to brief interactions.
The apps people write in both frameworks reflect these uses: Ember.js apps include Square's web dashboard, Zendesk (at least the agent/ticketing interface), and Groupon's scheduler: all applications a user might spend all day working in.
Backbone apps focus more on brief or casual interactions that are often just small sections of a larger static page: Airbnb, Khan Academy, Foursquare's map and lists.
You can use Backbone to make the kinds of applications that Ember targets (e.g. Rdio) by a) increasing the amount of application code you're responsible for to avoid problems like memory leaks or zombie events (I don't personally recommend this approach), or b) adding 3rd-party libraries like backbone.marionette or Coccyx. There are many of these libraries that all try to provide similar overlapping functionality, and you'll probably end up assembling your own custom framework that is bigger and requires more glue code than if you'd just used Ember.
Ultimately the question of "which to use" has two answers.
First, "Which should I use, generally, in my career": Both, just like you'll end up learning any tools specific to work you'll want to do in the future. You'd never ask "Backbone or D3?"; "Backbone or Ember" is an equally silly question.
Second, "Which should I use, specifically, on my next project": Depends on the project. Both will communicate with a Rails server with equal ease. If your next project involves a mix of pages generated by the server with so-called "islands of richness" provided by JavaScript use Backbone. If your next project pushes all the interaction into the browser environment, use Ember.
To give a brief, simplified answer: for a RESTful backend, at the moment, you should use Backbone.
To give a more complex answer: It really depends on what you're doing. As others have said, Ember is designed for different things, and will appeal to a different set of people. My short answer is based on your inclusion of the RESTful requirement.
At the moment, Ember-Data (which seems to be the default persistence mechanism within Ember) is far from production ready. What this means is that it has quite a few bugs and, crucially, doesn't support nested URIs (/posts/2/comments/4556 for example). If REST is your requirement, then you'll have to work around this for the time being if you choose Ember (i.e. you'll either have to hack it in, wait, implement something like Ember-Data from scratch yourself, or use not-very-RESTful URIs). Ember-Data is not strictly part of Ember, so this is entirely possible.
The main differences between the two, aside from size, are basically:
Ember tries to do as much as possible for you, so that you don't have to write as much code. It is very hierarchical and, if your app is also very hierarchical, it will likely be a good fit. Because it does so much for you, it can be difficult to figure out where bugs are coming from and to reason about why unexpected behaviour is happening (there is a lot of "magic"). If your app fits naturally into the type of app Ember expects you to be building, though, this likely won't be a problem.
Backbone tries to do as little as possible for you, so that you can reason about what is going on and build an architecture that fits your app (rather than building an app that fits the architecture of the framework you're using). It's a lot easier to get started with but, unless you're careful, you can end up with a mess very quickly. It doesn't do things like computed properties or auto-unbinding events, and leaves them up to you, so you will need to implement a lot yourself (or at least pick libraries that do it for you), although that is rather the whole point.
Update: It appears that, as of recently, Ember does now support nested URIs, so I suppose the question comes down to how much magic you like and whether Ember is a good fit, architecturally, for your app.
I think your question will soon be closed :) There are a few points of contention between the two frameworks.
Basically Backbone does not do a lot of things, and that's why I love it: you will have to code a lot, but you will code in the right place. Ember does a lot of things, so you'd better watch what it is doing.
Server discussion is one of the few things that Backbone does, and it does a great job with it. So I would start with Backbone and then give a try to Ember if you are not totally satisfied.
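To make that concrete for the call described in the question, here is a minimal Backbone sketch (how the credentials are sent is an assumption; here they go as an HTTP basic auth header):

```javascript
// Fetching GET ../restapi/topics as JSON with basic auth.
// The auth strategy is an assumption; adjust to your backend.
var Topic = Backbone.Model.extend({});

var Topics = Backbone.Collection.extend({
  model: Topic,
  url: '/restapi/topics'
});

var topics = new Topics();
topics.fetch({
  // fetch() forwards its options to jQuery.ajax, so headers work here
  headers: { 'Authorization': 'Basic ' + btoa('admin:secret') },
  success: function(collection) {
    console.log('loaded ' + collection.length + ' topics');
  }
});
```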
You can also listen to this podcast, where Jeremy Ashkenas, creator of Backbone, and Yehuda Katz, member of the Ember core team, have a nice discussion.

SOA: Is it preferable to implement a service instead of just writing service-ready code, when no external access is needed?

I'm working on the initial architecture for a solution for which an SOA approach was recommended by a previous consultant. From reading the Erl book(s) and from previous work with services (and good design patterns in general), I can see the benefits of such an approach. However, this particular group does not currently have any traditional need for web services: there are no external consumers and no integration with other applications.
What I'm wondering is, are there any advantages to going with web services strictly to stick to SOA, that we couldn't get from just implementing objects that are "service ready"?
To explain, an example. Let's say you implement the entity "Person" as a service. You have to implement:
1. Business object/logic
2. Translator to service data structure
3. Translator from service data structure
4. WSDL
5. Service data structure (XML/JSON/etc)
6. Assertions
Now, on the other hand, if you don't go with a service, you only have to implement #1 and make sure the other code accesses it through a loose reference (using dependency injection, a wrapper, etc.). Then, if it later becomes apparent that a service is needed, you can simply point that reference at the #2/#3 logic above via a wrapper object (so no caller objects need updating), and implement the same number of objects without any penalty to the amount of development you have to do: no extra objects or code have to be created compared with doing it all up front.
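A minimal sketch of that "service ready" shape (all names are hypothetical; the point is only that callers hold a loose reference they never have to change):

```javascript
// "Service ready" business logic: callers depend on an injected
// repository, never on a concrete implementation (names hypothetical).
function PersonService(repository) {
  this.repository = repository;
}
PersonService.prototype.rename = function(id, newName) {
  var person = this.repository.find(id); // business logic (#1) lives here
  person.name = newName;
  this.repository.save(person);
};

// Today: an in-process implementation, no web service involved.
var localRepository = {
  find: function(id)     { /* read via the local data access layer */ },
  save: function(person) { /* write via the local data access layer */ }
};

// Later, if a service becomes necessary, only the injected object
// changes: a wrapper that does the #2/#3 translation to the wire format.
var remoteRepository = {
  find: function(id)     { /* GET /people/:id, map payload -> object */ },
  save: function(person) { /* map object -> payload, PUT /people/:id */ }
};

var service = new PersonService(localRepository); // or remoteRepository
```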
So, if the amount of work that has to be done is the same whether the service is implemented initially or as-needed, and there is no current need for external access through a service, is there any reason to initially implement it as a service just to stick to SOA?
Generally speaking, you'd do better to wait.
You could design and implement a web service that is simply a technical facade exposing the underlying functionality. The question is: would you just do a straight one-for-one 'reflection' of that underlying functionality? If yes, did you design that underlying thing in such a way that it's fit for external callers? Does the API make sense, does it expose members that should be private, etc.?
Another factor to consider is: do you really know what the callers of the service want or need? The risk you run in building a service is that (as you're basically only guessing) you might need to re-write it when the first customers/callers come along. This could result in all sorts of work, including test cases, backwards compatibility if it drives change down to the lower levels, and so on.
Having said that, the advantage of putting something out there is that it might help spark use of the service and get people thinking; a more agile, principled approach.
If your application is an isolated client-type application (a UI that connects to a service just to get data out of the database), implementing an SOA-like architecture is usually overkill.
Nevertheless, there can be security, maintainability, or serviceability aspects where using web services is a must, e.g. some clients need access to the data outside the firewall, or you prefer to separate your business logic/data access from the UI and put it on one server so that you don't need to re-deploy the app every time some business rule changes.
Enterprise applications require many components interacting with each other and many developers working on them. In this type of scenario, an SOA-type architecture is the way to go.
The main reason to adopt SOA is to reduce the dependencies.
Enterprise applications usually depend on a lot of external components (logic or data), and you don't want to integrate these components by sharing assemblies.
Imagine that you share a component that implements some specific calculation: would you deploy this component to all the dependent applications? What happens when you want to change the calculation logic? Would you ask all teams to upgrade their references, recompile, and redeploy their apps?
I recently posted on my blog a story where the former architect had also chosen not to use web services and thought that sharing assemblies was fine. The result was chaos. Read more here.
As I mentioned, it depends on your requirements. If it's a monolithic application and you're sure you'll never integrate this app and never reuse the business logic/data access, a 2-tier application (UI/DB) is good enough.
Nevertheless, this is an architectural decision, and like most architectural decisions it's costly to change. Of course you can still factor in a web service model later on, but it's not as easy as you might think. Refactoring an existing app to add a service layer is usually a difficult task to accomplish, even with a good interface-based design. Examples of things that could go wrong: data structures that are not serializable, circular references in properties, constructor overloading, dependencies on internal behaviors…

Common information model for SOA systems

We are looking at the possibility of implementing a Common Information Model for data across several systems in a SOA architecture.
Many of these services will be consumed by a composite UI, we therefore see a benefit in having common data types.
What we are wondering is whether this is a feasible approach, or whether we should just map to common types in the client.
This question is framed pretty broadly, so my answer is going to remain pretty broad as well.
The key consideration here would seem to be location independence - though you're working with several applications, they're all going to share certain sorts of data (though not, as far as I can see from your question, actual data). An obvious use case for this is authentication and authorization data.
If you have determined that the common data is truly cooked enough to isolate in the fashion you're describing then I think it makes perfect sense to layer it off into a service. I think the perfect example of this is Windows Identity Framework. It takes something that we as architects have always treated as data and turns it into a service.
What you lose with location independence is a little of the efficiency you would otherwise have in making batched calls to the same server, though SOA applications lose this efficiency early in their design, in my experience. But the efficiency you gain from "patternizing" a section of your apps generally outweighs that enormously.
Having a common information model doesn't imply common data types or common classes. Simply defining the relationships between, for instance, Customer, Order, OrderItem and Product goes a great distance toward common business logic and the ability to have different services and applications be able to interoperate in an SOA environment.
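To illustrate: the shared artifact can be as lightweight as a declarative description of those relationships, from which each service derives its own concrete types (a hypothetical sketch):

```javascript
// A common information model as data, not as shared classes.
// Each service can generate or validate its own types from it.
var commonModel = {
  Customer:  { hasMany: ['Order'] },
  Order:     { belongsTo: ['Customer'], hasMany: ['OrderItem'] },
  OrderItem: { belongsTo: ['Order', 'Product'] },
  Product:   { hasMany: ['OrderItem'] }
};
```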
You might consider having an actual common model in some modeling language. From this, concrete data types and classes could be generated for particular circumstances. One might use UML for this, but I personally prefer NORMA, an Object-Role Modeling tool. It works at the conceptual level, so it creates models that are independent of the data-store technology.
NORMA runs as an add-in to Visual Studio Standard edition or above, and out of the box generates artifacts for several databases, as well as LINQ to SQL classes and even PHP web services, all from the same model. It is extensible, so you can generate your own artifacts from the model. And of course, the model is represented as XML, so you can do whatever you like with it.