Updating front-end data as the backend does analysis - Django

I've been self-studying web design and want to implement something, but I'm really not sure how to accomplish it, or even whether I can.
The only frontend I have dealt with is Angular 4, and the only backend I have dealt with is Django REST Framework. I have managed to build user models in DRF, get the frontend to authenticate the user with JSON Web Tokens, and make different kinds of GET and POST requests.
What I want to do is have a button on the front end. When the button is clicked, it sends a GET request that runs a text-mining algorithm which produces a list. The algorithm may take some time to complete, maybe in the range of 20-30 seconds, but I don't want the user to wait that long just to get back a single response containing the fully compiled list.
Is it possible to, say, create a table in Angular and have the backend send another response with more data every couple of seconds, with the frontend appending the new results to that table? Something like:
00.00s | button clicked -> GET request
01.00s | DRF starts the analysis
05.00s | DRF returns the first estimated 10% of the overall list
09.00s | DRF finds 10% more, returns an estimated 20% of the overall list
and so on until the algorithm has finished. The list will be very small, probably around 20 strings of about 15 words each.
I already tried sending multiple responses from Django in a for loop, but the Angular front end just receives the first one and then stops listening.

No, that's not possible. Each request gets exactly one response, never multiple.
You have two options:
- Just start your algorithm with an endpoint like /start, and check the state at an interval on an endpoint like /state (see the sketch after this list)
- Read about WebSockets, or try Firebase (or AngularFire). These provide two-way communication
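A minimal sketch of the polling option in DRF terms, assuming a single-process development server. The job bookkeeping, the text_mining_algorithm generator, and the view names are made up for illustration; in production you would normally hand the work to a task queue such as Celery and keep progress in the database or a cache:

```python
# views.py -- /start kicks off the work, /state/<job_id> is polled for progress
import threading
import uuid

from rest_framework.decorators import api_view
from rest_framework.response import Response

JOBS = {}  # job_id -> {"done": bool, "results": [...]}


def run_text_mining(job_id):
    # Placeholder for the real algorithm: append partial results as they appear.
    for partial in text_mining_algorithm():  # hypothetical generator
        JOBS[job_id]["results"].append(partial)
    JOBS[job_id]["done"] = True


@api_view(["POST"])
def start(request):
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"done": False, "results": []}
    threading.Thread(target=run_text_mining, args=(job_id,), daemon=True).start()
    return Response({"job_id": job_id})


@api_view(["GET"])
def state(request, job_id):
    job = JOBS.get(job_id)
    if job is None:
        return Response({"error": "unknown job"}, status=404)
    # Return whatever has been produced so far, plus a completion flag.
    return Response({"done": job["done"], "results": job["results"]})
```

The Angular side would call /start once, then hit /state/<job_id> every couple of seconds and re-render its table from results until done is true.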

Related

Business logic and RESTful API design

Let's assume we have a simple API allowing clients to fetch a list of items of a specific type:
GET /items/foo
GET /items/bar
GET /items/blah
A response is a list of items of the requested type; each entry has a unique ID.
The client will usually display these items in a table/grid/etc.
Now we must implement a pinning feature in the client, so another API allows pinning/unpinning items based on their ID and their type. I was discussing with my colleagues possible ways to inform the client about which items are pinned and which are not.
An option was to have another API GET /pinning/{type} to return the list of all the pinned items of a specified type.
Another solution was to use a similar API GET /pinning/{type} to return the list of the IDs of all the pinned items. Let the client sort it out.
The first solution was accepted. The argument was that the backend is responsible for business logic and that the client shouldn't be involved in it, so the client should just display the data it receives from the server. This argument didn't sell it for me. I think the server should, in this case, provide the data that allows the client to perform additional presentation logic.
Which solution is better? Or what other solutions are possible?
If the server only returned item IDs from GET /pinning/{type}, the client would have to repeatedly call something like GET /items/{itemId} in order to obtain data it can display in the UI, right? That in turn would just increase the load on the server. If the ID alone is enough, you can probably get away with the proposed solution. Since both the client and the server seem to be under the same umbrella (as in, your company is also the API consumer), you have enough information to make that decision.
Even if it were a public API with lots of clients, I would still go down the route of returning full items instead of just item IDs, probably in a paged manner, for performance reasons.
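The thread is framework-agnostic, but to make the accepted option concrete, here is a rough Django REST Framework sketch of GET /pinning/{type} returning full, paged items; the model, serializer, and "pinned" relation names are assumptions:

```python
# urls.py would map something like path("pinning/<str:type>/", PinnedItemList.as_view())
from rest_framework.generics import ListAPIView
from rest_framework.pagination import PageNumberPagination

from .models import Item                 # hypothetical model
from .serializers import ItemSerializer  # hypothetical serializer


class PinnedItemList(ListAPIView):
    """GET /pinning/{type}: return the full pinned items (paged), not just their IDs."""

    serializer_class = ItemSerializer
    pagination_class = PageNumberPagination

    def get_queryset(self):
        return Item.objects.filter(
            type=self.kwargs["type"],
            pinned_by=self.request.user,  # however "pinned" is modelled on your side
        )
```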

Django: Execute async Model manipulation tasks after getting a request

One part of my application requires fetching around 200 objects of a model (Customer) and updating certain fields on every request.
On each request it first fetches a Customer object and obtains a list of grouped customers, each with certain values, from an outside API. This list of grouped customers has around 200 entries (each of type Customer). I want to update all of those Customers' fields with the new values by fetching all 200 of them, updating them, and saving. Fetching 200 objects on every single request, waiting for the updates to finish, and only then returning a response seems rather silly and slow. Is there any way to do this updating asynchronously? For example, could I return the response right after fetching the first Customer and then have an async function handle the updating?
Is this possible, or do I have to wait for all 200 objects to finish updating?
As the first Google result for "django asynchronous tasks" would have shown you, the canonical way of doing this is to use Celery.
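A rough sketch of what that could look like, assuming a working Celery setup; Customer comes from the question, but fetch_grouped_customers, the field names, and the task name are placeholders for your own code:

```python
# tasks.py
from celery import shared_task

from .models import Customer


@shared_task
def update_customers(customer_id):
    # Placeholder for your call to the outside API.
    grouped = fetch_grouped_customers(customer_id)
    for entry in grouped:  # ~200 entries
        Customer.objects.filter(pk=entry["id"]).update(value=entry["value"])


# views.py (inside the request handler): queue the work and return the response immediately.
# update_customers.delay(customer.pk)
```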

Best Practices to update multiple records with a single server request

I have a User model which hasMany phones. The UI for the user allows adding/deleting/updating phones on a single form.
When the user submits the form, all changes to the phone list are sent to the server in a single request.
I have extended App.UserSerializer with a custom serializeHasMany to include all the phone details in the single request.
The real problem is to sync the store state after the request is complete.
Basically I need to solve these two problems:
Remove deleted records from the store. I could not find any method that just removes a record from the store.
Update the new records with the IDs generated by the server (or just remove the new records from the store and the hasMany array, since the response creates duplicates of the added records).
Are there any best practices or workarounds for this kind of scenario?
Thank you.
I think the best practice for now is just sticking to regular REST. In your case this will mean a few extra requests (really though, how many phones can a user have?), but it will spare you a lot of effort in handling things manually.
Ember may support bulk updates in the future (https://github.com/emberjs/data/blob/master/TRANSITION.md, "We plan to support batch saving with a single HTTP request through a dedicated API in the future.")

Ember choking upon encountering large data sets

Looking for a solution to an issue caused by large data sets forcing Ember to lock up the browser while it tries to process the data.
For pagination, I'm using tchak's handy pagination mixin to paginate approximately 13,000+ objects being loaded from a backend API.
The Ember Data objects contain an ID, one text attribute and several number attributes.
The problem is it takes close to a minute before the browser finishes processing the data, rendering the browser unusable in the meantime. Firefox even goes as far as to issue a warning that a script is using up all browser resources and suggests that script be terminated.
I've written my own pagination mixin that requests objects by range, e.g. items 10-25, and it generally works well except for one serious limitation: sorting. To sort the data, I need to make additional requests to the backend and reload objects even if some of them have already been loaded.
I would love to be able to load all of the content upfront to simplify the process of sorting without doing additional requests to the backend API. I'm looking for guidance on how to tackle this issue but I'm open to an entirely alternative approach.
If nothing else, is it possible to reduce the resource footprint Ember places on the browser as it tries to load all 13k objects into the ArrayController?
I'm using Ember 1.0.0-pre2 with the latest Ember Data (currently at Revision 10).
On the backend is Rails 3.2.8.
Update: I sidestepped the issue by loading the data into an ArrayController property other than content. This brought load times down from over a minute to just a few seconds. I then slice the requested number of items and load those into content. This works well for any number of items, at the cost of not being able to easily sort the data.
I suggest you take a look at Ember Table. The demo shows a table with 500 000 records and works very fast. Digging around the source code might help.
Can't you query a view from your DB that handles the sorting? Pass the sort conditions in the query string, e.g. ?sortBy=name&sortAsc=true.
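The backend in the question is Rails, so purely to illustrate the shape of such an endpoint, here is a rough Django-style sketch of letting the database do the sorting and slicing; the model, field, and parameter names are assumptions:

```python
from django.http import JsonResponse

from .models import Record  # hypothetical model


def records(request):
    sort_by = request.GET.get("sortBy", "id")
    ascending = request.GET.get("sortAsc", "true") == "true"
    offset = int(request.GET.get("offset", 0))
    limit = int(request.GET.get("limit", 25))

    qs = Record.objects.order_by(sort_by if ascending else f"-{sort_by}")
    # The database does the sort and the slice; only one page crosses the wire.
    page = qs.values("id", "name", "value")[offset:offset + limit]
    return JsonResponse({"records": list(page)})
```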

Calling next results from WebService

I'm developing a WP7 app that fetches the last 20 results from a web service, and I wonder how to fetch the next 20 when the user scrolls to the end of the ListBox.
I have found some topics on how to recognize when the user reaches the end of the list, but I'm struggling with how to call the web service again and ask for the next entries.
EDIT:
So okay, here is the thing. In my API I have two options:
- fetch some number of results (like 10, 20, 30) and then show them all in the list
- ask the API to give me, say, 3 pages with 20 records on each page
Thinking about the second option: I could display just 1 of the 3 pages and then, when the user scrolls down, show another page (already stored on the phone), but that makes no sense, because the user would download all the records even if he doesn't want to see more than the top 5...
The only idea left is to fetch the next results on demand, but I don't know how to call the web service again at that point.
Your problem seems more web-service related than Windows Phone related: if you are getting data from a web service, the provider should ideally give you documentation on how to fetch the next/previous records or entries.
Here are two links from the Twitter API which give you some idea of how to fetch data in pages.
Getting the home_timeline data
Working with Timelines
Here is another link which gives an idea of how to implement paging in a Silverlight application (I am not sure how compatible this method is with a WP app).
If this doesn't answer your question, update it with some additional details, such as which URL you are using to fetch the first 20 records.
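The client in the question is WP7/C#, so this Python sketch only illustrates the general "ask for the next page" pattern the links above describe; the URL and parameter names are made up:

```python
import requests

BASE_URL = "https://example.com/api/records"  # hypothetical endpoint
PAGE_SIZE = 20


def fetch_page(page_number):
    """Fetch one page of results; call again with page_number + 1 when the
    user scrolls to the end of the list."""
    resp = requests.get(BASE_URL, params={"page": page_number, "per_page": PAGE_SIZE})
    resp.raise_for_status()
    return resp.json()


items = fetch_page(1)        # first screen
# ...user hits the bottom of the ListBox...
items += fetch_page(2)       # append the next 20 records
```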