EMBER.JS: Define models if I only need to read?

Is there any utility in defining models in Ember if I am only going to read from the server?

Yes, there is. Modelling your frontend data to mirror the backend is generally helpful.
The plus points I can think of are:
Hassle-free handling of relationships.
Caching of results in the store saves on server calls.
Writing back to the server becomes easy if you later need it.
Selective data loading with the { async: true } option on relationships, so you load only what you need (see the sketch below).
Check out this link http://emberjs.com/blog/2014/03/18/the-road-to-ember-data-1-0.html for more information.
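For illustration, here is a minimal sketch of what such read-only models could look like with the Ember Data of that era (globals-style app; the model and attribute names are made up, not taken from the question):

    App.Author = DS.Model.extend({
      name: DS.attr('string'),
      // async: true defers loading the posts until the relationship is accessed
      posts: DS.hasMany('post', { async: true })
    });

    App.Post = DS.Model.extend({
      title: DS.attr('string'),
      body: DS.attr('string'),
      author: DS.belongsTo('author', { async: true })
    });

    // Reading stays simple: related posts are only fetched on demand.
    // this.store.find('author', 1).then(function (author) {
    //   return author.get('posts');
    // });

Even if you never call save(), the store still gives you identity-map caching and relationship traversal for free.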

Related

How to display a large list of data using ReactJS as the frontend and Django REST Framework as the backend

I have a large list of ingredients required for cooking, more than 3000 items.
I am using Django REST Framework as the backend and ReactJS as the frontend.
Each item in the list has a name, id, measurement unit, density in kg/l, and cost per measurement unit.
In Django I have created an API endpoint to supply the data in JSON format.
I want to display the data in a table with a search filter at the top, showing at most 300 results at a time.
Can someone guide me on how to achieve this? Should I fetch the whole list at once or use pagination from Django? Should I use a separate API for search, or do it with ReactJS on the frontend?
Presently I don't need any authorization in Django; the data is for local use only.
3000 records is a lot to send down to the client in one chunk. It is easier to develop against, but it doesn't scale well and is likely to create a measurable load time. If you're OK with that, and you don't expect your data set to grow, then perhaps the pragmatic approach is to keep it as a big list... but it goes against best practices.
Either way, you likely don't want to show a giant 3k-row list to the user, so the UI will have some sort of pagination mechanism. That might be "next"/"previous" pages, or it might be infinite scrolling. Either way, the data abstraction should treat it as paged data.
Paging API
Assuming you decide to make your API support paging, you should use the backend framework's paging capabilities, like Django's Paginator or Django REST Framework's built-in pagination classes, or some other abstraction for your REST API. There are lots of resources out there.
Search
The moment you decide to paginate the API, you are committing to handling search (filtering) on the backend as well. The only way you can manage client-side filtering is if the client has access to all the data. Since that isn't the case, you'll need to include a search filter in your data request. Searching and pagination aren't mutually exclusive, so make sure your API supports both at the same time. A common way to handle this:
http://yoursite.com/api/ingredients?page=5&page_size=100&search=carrot
Client
On the React side, you can build your own UI (it is easy to do), or you can use a component which abstracts this for you, like react-js-pagination or react-paginate. The client component shouldn't really care if the API is paged or not. Instead, it just notifies you when to display different records and the rest is up to you.
If you decide to keep everything in one big REST call, then you still need to slice the data out of your array to display. If you paginate your API, then you can keep a client-side cache of the pages you've already received. If you don't have the data, make the REST call to get it and populate an array with the result. That way, if a user goes forwards and then backwards, you aren't re-fetching. A rough sketch of that cache follows.
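For illustration only, here is one way the client-side cache could look, using the example URL above (the endpoint, parameter names, and renderTable callback are assumptions, and a fetch/Promise-capable browser is assumed):

    // Hypothetical cache of pages keyed by search term, page number and page size.
    var pageCache = {};

    function fetchIngredients(search, page, pageSize) {
      var key = search + '|' + page + '|' + pageSize;
      if (pageCache[key]) {
        // Already fetched: reuse it so paging back and forth is instant.
        return Promise.resolve(pageCache[key]);
      }
      var url = '/api/ingredients?page=' + page +
                '&page_size=' + pageSize +
                '&search=' + encodeURIComponent(search);
      return fetch(url)
        .then(function (response) { return response.json(); })
        .then(function (data) {
          pageCache[key] = data;
          return data;
        });
    }

    // Usage (renderTable is a placeholder for your own rendering code):
    // fetchIngredients('carrot', 5, 100).then(renderTable);

If the search term changes you simply start asking for page 1 again; stale entries can be cleared or left alone, depending on how fresh the data needs to be.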
Conclusion
I hope this helps a bit. Enjoy :)

Extend Ember RESTAdapter to work with CouchDB

I am using CouchDB for basically my entire back-end, and Ember for basically my entire front-end, and I need to find a way to make the json data compatible between the two, especially regarding the 'named root' convention (here are the ember json expectations and the couch api). I'm not using Ruby or any other libraries, so I'm quite sure this couch adapter isn't available to me.
I read here that I'll need to extend the extract method to get this to work properly, but I'm freaked out by this whole thing and am not sure where to start. What's the relationship between RESTAdapter and JSONSerializer? I'm not sure how this all fits together and I'm terrified of wasting time and possibly screwing something up. And is there maybe an easier way to do this?
Forgive my noobiness.
There's a CouchDB adapter that does seem to be up to date (last updated 2 days ago), at https://github.com/roundscope/ember-couchdb-kit. As it says, "Inspired by pangratz/ember-couchdb-adapter and contains many fixes and newbie features."
While the installation is easy within a Rails project, it's still fine outside of a Rails project. Just include everything from the dist directory in this order:
ember-couchdb-kit
registry
document-adapter
attachment-adapter
revs-adapter
changes-feed
I read here that I'll need to extend the extract method to get this to work properly
The SO post (and the couch adapter) you referenced are out-of-date. Ember Data changed a lot in the past few weeks, so lots of the old answers out there could be misleading.
I'm freaked out by this whole thing and am not sure where to start.
The good news is that it has become somewhat easier to do the kind of thing you are trying to do. While writing a custom adapter and serializer used to be an advanced topic, it is now fairly straightforward.
At present the best resources are:
the guides
the ember-data transition doc
discussion on migrating to the beta
What's the relationship between RESTAdapter and JSONSerializer?
An adapter is an object that receives requests from a store and translates them into the appropriate action to take against your persistence layer. The RESTAdapter is an adapter that knows how to talk to a RESTful HTTP server by transmitting JSON via XHR.
A serializer is responsible for serializing and deserializing a group of records. The JSONSerializer is just a serializer that knows how to read and write JSON.
With the new Ember Data it should be pretty straightforward to extend/configure the RESTAdapter and JSONSerializer to speak to a CouchDB backend. Have a look at rest-adapter-and-serializer-configuration for some examples. A very rough sketch of the direction is below.
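As a sketch only (not a working CouchDB adapter): CouchDB documents use _id/_rev instead of id, and list responses from _all_docs or a view with include_docs=true arrive as { rows: [{ doc: {...} }, ...] } rather than under a named root. The host, database name, and naive pluralization below are all assumptions:

    App.ApplicationAdapter = DS.RESTAdapter.extend({
      host: 'http://localhost:5984',
      namespace: 'mydb' // assumed database name
    });

    App.ApplicationSerializer = DS.RESTSerializer.extend({
      // CouchDB's primary key is _id, not id.
      primaryKey: '_id',

      // Re-shape a CouchDB list response into the named-root form
      // ({ posts: [...] }) that the RESTSerializer expects.
      extractArray: function (store, type, payload) {
        var root = type.typeKey + 's'; // naive pluralization, for the sketch only
        var wrapped = {};
        wrapped[root] = payload.rows.map(function (row) { return row.doc; });
        return this._super(store, type, wrapped);
      }
    });

You would still need to handle _rev on writes and single-document responses (extractSingle), but the shape of the solution is the same: keep the RESTAdapter for the HTTP plumbing and teach the serializer about CouchDB's payloads.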

Long polling with EmberJS / Ember-Data?

I have set up a very basic first application where I can add and remove names from a list, which are then added to or removed from a database through a RESTful API, using Ember-Data with the default REST Adapter.
I'd like to implement some form of polling/long-polling so my interface remains up-to-date.
So for example, let's say I open my 'list' in two tabs and delete a few names in one tab - I'd like the changes to then (eventually) show up in the other tab.
How can this be done easily with Ember?
What you want to do is really a job for WebSockets, which would allow you to push changes to your models from the server to the Ember app whenever they happen. This type of approach can easily take care of keeping things in sync between tabs. I would recommend checking out Socket.io, which has a great client-side JS library and many server-side libraries. By default it will try to use WebSockets, which are better than long-polling, but it will degrade to long-polling if it needs to. This might force you to change a bunch of your application set-up, but I would consider this the "right" way to go.
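To make that concrete, here is a rough sketch of the Ember side, assuming a Socket.io server that emits 'personAdded' and 'personRemoved' events (the server URL, event names, model name, and payload shape are all assumptions):

    var socket = io.connect('http://localhost:3000'); // assumed server URL

    App.ApplicationRoute = Ember.Route.extend({
      activate: function () {
        var store = this.store;

        // The server pushed a new or updated record: merge it into the store.
        socket.on('personAdded', function (payload) {
          store.push('person', payload);
        });

        // The server pushed a deletion: drop the record if it is loaded.
        socket.on('personRemoved', function (payload) {
          var person = store.getById('person', payload.id);
          if (person) {
            person.unloadRecord();
          }
        });
      }
    });

Because the store is updated in place, any template bound to those records (in any tab with its own socket connection) re-renders automatically.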

Retaining state between Django views

As a little backstory, I'm working on an application which pipes KML to googleearth based on packet data from a mesh network. Example:
UDP Packet ---> Django ORM to place organized data in DB ---> Django view to read the DB and return a KML representation of the packet data (gps, connections, etc) to Google Earth.
The problem here is that the DB rows tell a story, and doing a query, or a series of queries, isn't enough to "paint a picture" of this mesh network. I need to retain some internal Python structures and classes to maintain a "state" of the network between requests/responses.
Here is where I need help. Currently, to retain this "state", I use Django's low-level cache API to store a class with an unlimited timeout. Every request, I retrieve that class from the cache, add to its structures, and save it back to the cache. This seems to be working, and pretty well actually, but it doesn't feel right.
Maybe I should ditch Django and extend Python's BaseHTTP class to handle my requests/responses?
Maybe I should create a separate application to retain the "state" and Django pipes it request data through a socket?
I just feel like I'm misusing Django and being unsafe with crucial data. Any help?
I know this is unconventional and a little crazy.
(Note: I'm currently using Django's ORM outside of a Django instance for the UDP socket listener, so I am aware I can use Django's environment outside of an instance.)
Maybe I should ditch Django and extend Python's BaseHTTP class to handle my requests/responses?
Ditching Django for Python's BaseHTTP won't change the fact that HTTP is a stateless protocol and you want to add state to it. You are correct that storing state in the cache is somewhat volatile depending on the cache backend. It's possible you could switch this to the session instead of the cache.
Maybe I should create a separate application to retain the "state" and Django pipes it request data through a socket?
Yes this seems like a viable option. Again HTTP is stateless so if you want state you need to persist it somewhere and the DB is another place you could store this.
This really sounds like the kind of storage problem Redis and MongoDB are made to efficiently handle. You should be able to find a suitable data structure to keep track of your packet data and matching support for creating cheap, atomic updates to boot.

Tracing user requests by logging their actions to the DB in Django

I want to trace users' actions on my website by logging their requests to the database as plain text in Django.
I am considering writing a custom decorator and placing it on every view that I want to trace.
However, I have some trouble with my design.
First of all, is such a logging mechanism reasonable, or will it cause performance problems because the log table will grow rapidly?
Secondly, how should my log table be designed?
I want to keep the keywords if the user calls the search view, or the item's id if the user calls the item detail view.
Besides, users' IP addresses should be kept, but how can I separate users if they connect via a single IP address, as in many companies?
I am glad to explain in detail if you think my question is unclear.
Thanks
I wouldn't do that. If this is a production service then you've got a proper web server running in front of it, right? Apache, or nginx, or something. That can do logging, and can do it well, and can write logs in a form that won't bloat your database, and there's a wealth of analytical tools for log analysis.
You are going to have to duplicate a lot of that functionality in your decorator, such as when you want to switch it on or off, or change the log level. The only thing you'll get by doing it all in django is the possibility of ultra-fine control, such as only logging views of blog posts with id numbers greater than X or something. But generally you'd not want that level of detail, and you'd log everything and do any stripping at the analysis phase. You've not given any reason currently why you need to do it from Django.
If you really want it in a RDBMS, reading an apache log file into Postgres or MySQL or one of those expensive ones is fairly trivial.
One thing you should keep in mind is that SQL databases don't offer very good write performance (compared with reads), so if you are experiencing heavy load you should probably look for a better in-memory solution (e.g. a key-value store like Redis).
But keep in mind, especially if you use a non-SQL solution, that you should know what you want to do with the collected data (just display something like a 'log', or do more in-depth searching/querying on it).
If you want to identify different users from the same IP address, you should probably look for a cookie-based solution (if you are using Django's session framework, sessions are by default identified through a cookie, so you could simply use sessions). Another solution could be doing the logging asynchronously via JavaScript after the page has loaded in the browser, which could give you more ways of identifying the user and avoids extra load when generating the page; a small sketch of that follows.
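As a small sketch of that last idea only, assuming a hypothetical /log/ endpoint on the Django side that records the POSTed event against the current session (the endpoint and field names are made up, and the view is assumed to be exempt from CSRF protection or supplied with a CSRF token):

    // Fire-and-forget logging after the page has loaded.
    // The session cookie is sent automatically, so the Django view can
    // attach the event to request.session instead of relying on the IP.
    window.addEventListener('load', function () {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/log/');
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify({
        path: window.location.pathname,
        query: window.location.search,   // e.g. search keywords
        referrer: document.referrer
      }));
    });

This keeps the logging work off the request/response cycle of the page itself; the Django view behind /log/ can then write to the database, Redis, or wherever you decide to keep the trail.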