I'm building the API service for an iOS app that wants to "pull down to refresh" some records (a.k.a. streams), but I'm not sure how to implement the pagination-like feature.
I was thinking of querying with an offset and limit (slice start and end in Python), but I suspect this is not the right approach.
Does anyone have an idea how this should be done?
My RESTful API is built on Django.
Thanks in advance.
If you only want to return the latest records, the app should query your API with a last_updated timestamp.
Based on that you can filter your queryset to the records that have been added since the user last refreshed their timeline.
If no timestamp is set, you return all records (or a subset of them, ordered by creation date).
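A minimal sketch of such a view, assuming a hypothetical Stream model with a created DateTimeField (the model, query parameter, and response shape are illustrative, not a fixed API):

    from django.http import JsonResponse
    from django.utils.dateparse import parse_datetime

    from myapp.models import Stream  # hypothetical app/model

    def streams(request):
        qs = Stream.objects.order_by('-created')
        last_updated = request.GET.get('last_updated')
        if last_updated:
            # Only records created since the client's last refresh.
            qs = qs.filter(created__gt=parse_datetime(last_updated))
        data = [{'id': s.id, 'created': s.created.isoformat()} for s in qs[:100]]
        return JsonResponse({'streams': data})

The client then remembers the newest created value it has seen and sends it back as last_updated on the next pull-to-refresh.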
I am developing an application in which I am explicitly using memcache with Google App Engine's NDB library. I want something like this:
1) Get 100 records from datastore and put them in memcache.
2) Now whenever user wants these records I would get these records from memcache instead of datastore.
3) I would invalidate the memcache if there is a new record in datastore and then populate the memcache with 101 records.
One approach I'm considering is to compare the number of records in memcache and the datastore, and update memcache if they differ.
But according to the NDB documentation, a count can only be obtained by retrieving all the records, which defeats the purpose, since the datastore query isn't avoided that way.
Any help, anyone? Or is there a different approach I could take?
Thanks in advance.
Rather than relying on counts, you could give each record a creation timestamp and keep the most recent timestamp in memcache. Then, to see if there are new records, you just need to check whether any timestamps are newer than that; assuming you have an index on that field, that is a very quick query.
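A minimal sketch of that idea, assuming an NDB model with an indexed created timestamp (the model and memcache keys here are hypothetical):

    from google.appengine.api import memcache
    from google.appengine.ext import ndb

    class Record(ndb.Model):  # hypothetical model
        created = ndb.DateTimeProperty(auto_now_add=True)

    def get_records():
        records = memcache.get('records')
        last_ts = memcache.get('records_last_ts')
        if records is not None and last_ts is not None:
            # Cheap freshness check: is any record newer than the cached timestamp?
            newer = Record.query(Record.created > last_ts).get(keys_only=True)
            if newer is None:
                return records
        # Cache miss or stale: reload the 100 newest records from the datastore.
        records = Record.query().order(-Record.created).fetch(100)
        memcache.set('records', records)
        if records:
            memcache.set('records_last_ts', records[0].created)
        return records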
I currently have an Ember app that's trying to retrieve data from a 3rd-party API using Ember Data. The URLs are in the format /user_id/date/food, which retrieves the user's consumed food on the given day. I want to retrieve the list of foods the user consumed in a date range (2015-06-07 to 2015-08-10).
I tried to use Ember.query and filter out the unnecessary data, but the API doesn't have an endpoint which would return all of the consumed foods.
Currently I'm supporting the single day query using queryRecord and passing the day in.
If /user_id/date/food is the only endpoint the 3rd-party API provides, the only option you have is to make one request for every day in the range you want to cover. You might want to use Ember.RSVP.all to wait until all requests have resolved and then concatenate the results.
But this is not really recommended, because a large range means many requests and would be very inefficient.
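To make the fan-out concrete, here is the same idea sketched in Python rather than Ember (the endpoint URL and response shape are hypothetical; in Ember you would build one promise per day and combine them with Ember.RSVP.all):

    from concurrent.futures import ThreadPoolExecutor
    from datetime import date, timedelta

    import requests

    def foods_in_range(user_id, start, end):
        # One request per day in the range, run concurrently.
        days = [start + timedelta(days=i) for i in range((end - start).days + 1)]
        urls = ['https://api.example.com/%s/%s/food' % (user_id, d.isoformat())
                for d in days]
        with ThreadPoolExecutor(max_workers=8) as pool:
            responses = pool.map(requests.get, urls)
        foods = []
        for resp in responses:
            foods.extend(resp.json())  # assumes each day returns a JSON list
        return foods

    # e.g. foods_in_range('42', date(2015, 6, 7), date(2015, 8, 10))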
With FQL being phased out I need to achieve the same functions via the Graph API.
My application checks for new posts, comments and replies on a company page every X seconds.
I use one FQL query to get new comments and replies:
SELECT post_id, time, fromid, text, id FROM comment WHERE time > (lastcheck) AND post_id IN (SELECT post_id FROM stream WHERE source_id = (PageID) LIMIT 1000) ORDER BY time DESC
This appears to work well; I can add a comment to a 5-month-old post and it picks it up.
How can the same be achieved with the Graph API?
I think that if what you have works, then you do not need to change it. Contrary to what Facebook wants you to do (use the Graph API), not every query can be translated to it. FQL is alive and kicking, and is used heavily in both the Facebook website and the mobile apps.
I've started a Django project that will include an analytics app. I want that app to use either CouchDB or MongoDB for storing data.
The initial idea (since the client is already using Google Analytics) was to grab data from GA once a day/week/month and store it locally as values in a database. That would ultimately build a database of entries, one entry per user per month, with summed values like
{"date": "11.2011", "clicks": 21, "pageviews": 40, "n": n},
For premium users there could be one entry per user per week or even per day.
The question would be:
grab analytics from GA and store pre-summed entries for clicks, visits, etc.,
or
store raw clicks and other values locally and do the sums for display once a month?
Lukasz, unless Google Analytics has really relaxed their privacy levels, you're not going to be able to access user-level records (but check out the answer here: Django saving the whole request for statistics, whats available?)
Right, old question but I've just finished the project so I'll just write what I did.
Since I didn't need concurrency and wanted the faster approach, I found that MongoDB was the better fit.
The final document schema that I've used is
{'date': '11.2009', 'pageviews': 40, 'clicks': 13, 'otherdata': 'that i can use as filters'}
The scope of my local analytics is monthly, so I create one entry in MongoDB per user per month and update it daily, storing only summaries and averages of the data.
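A minimal sketch of that daily upsert with pymongo (the database/collection names and increment values are hypothetical; the fields follow the schema above):

    from pymongo import MongoClient

    client = MongoClient('localhost', 27017)
    stats = client.analytics.monthly_stats  # hypothetical db/collection

    def record_day(user_id, month, pageviews, clicks):
        # One document per user per month, incremented each day;
        # upsert=True creates the month's entry on the first update.
        stats.update_one(
            {'user_id': user_id, 'date': month},  # e.g. '11.2009'
            {'$inc': {'pageviews': pageviews, 'clicks': clicks}},
            upsert=True,
        )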
What else? Regarding Jamie's answer: the system is using GA events, so I've got access to all the data I need.
Hope someone finds it interesting.
Cheers, and thanks for the ideas!
I have a cronjob that runs every hour and parses 150,000+ records. Each record is summarized individually in MySQL tables. I use two web services to retrieve the user information:
User demographic (ip, country, city etc.)
Phone information (whether it's a landline or a cell phone, and if a cell phone, which carrier)
Every time I process a record I check whether I already have its information, and if not I call these web services. After tracing my code I found that each of these calls takes 2 to 4 seconds, which makes my cronjob very slow, and I can't compile statistics on time.
Is there a way to make these web service faster?
Thanks
Simple: get the data locally and use Melissa Data:
for ip: http://w10.melissadata.com/dqt/websmart/ip-locator.htm
for phone: http://www.melissadata.com/fonedata.html
You can also cache the results using memcache or APC, which will make things faster, since you won't have to request the data from the API or the database every time.
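A cache-aside sketch of that idea (using pymemcache; lookup_phone_info and the key scheme are hypothetical stand-ins for the slow web-service call):

    import json

    from pymemcache.client.base import Client

    cache = Client(('localhost', 11211))

    def lookup_phone_info(number):
        # Placeholder for the real 2-4 s web-service call.
        raise NotImplementedError

    def phone_info(number):
        key = 'phone:%s' % number
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)          # served from cache
        info = lookup_phone_info(number)       # slow path, taken once per number
        cache.set(key, json.dumps(info), expire=30 * 24 * 3600)  # keep 30 days
        return info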
A couple of ideas... if the same users are returning, caching the data in another table would be very helpful... you would only look it up once and have it for returning users. Upon re-reading the question it looks like you are doing that.
Another option would be to spawn new threads when you need to do the look-ups. This could be a new thread for each request, or, if that is not feasible, you could have n service threads ready to do the look-ups and update the results.
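A minimal sketch of the worker-pool variant in Python (fetch_user_info is a hypothetical stand-in for the two web-service calls):

    from concurrent.futures import ThreadPoolExecutor

    def fetch_user_info(record):
        # Placeholder for the demographic and phone web-service calls.
        raise NotImplementedError

    def enrich_records(records, workers=20):
        # n service threads doing the look-ups concurrently instead of serially.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(fetch_user_info, records))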