RealmSwift limiting and fetching last 30 records into tableview - swift3

I'm new to RealmSwift and I'm building a chat-like application in Swift 3.0, using Realm as the backend database. Inserting chat messages into Realm works fine, but the problem is fetching the records:
let newChat = uiRealm.objects(Chats.self)
    .filter("(from_id == \(signUser!.user_id) OR from_id == \(selectedList.user_id)) AND (to_id == \(signUser!.user_id) OR to_id == \(selectedList.user_id))")
    .sorted(byProperty: "id", ascending: true)
I don't know how to limit the conversation to the last 30 records. The code above just fetches the records from the Chats table, filtered to messages between the signed-in user and the selected user. Also, when I list all the records for a conversation (more than 150 messages or so), scrolling the table view stutters or hangs for a while. So please give me some idea of how to limit the fetch to the last 30 records and stop the table view from hanging. Thanks in advance.

As I wrote in the Realm documentation, because Realm Results objects are lazily loaded, it doesn't matter if you query for all of the objects and then only access the ones you need.
If you want to line this up with a table view, you could create an auxiliary method that maps the last 30 results to indexes 0-29, which can then be passed straight to the table view's data source:
func chat(atIndex index: Int) -> Chats {
    // Map rows 0...29 onto the last 30 results (or onto all of them if there are fewer than 30)
    let mappedIndex = max(0, newChat.count - 30) + index
    return newChat[mappedIndex]
}
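For example, the data source methods could use that helper like this (just a sketch; the "ChatCell" identifier and the message property on Chats are assumptions, not taken from your code):
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    // Never show more than the last 30 messages
    return min(newChat.count, 30)
}

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "ChatCell", for: indexPath)
    let chatMessage = chat(atIndex: indexPath.row)
    cell.textLabel?.text = chatMessage.message // assumes Chats has a `message` property
    return cell
}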
If you've already successfully queried and started accessing these objects (i.e. the query itself didn't hang), I'm not sure why the table view would hang after the fact. You could try running the Time Profiler in Instruments to track down exactly what's causing the main thread to be blocked.

Related

Shopify API to get All Records for Customers, Orders & Products (Django)

I searched for how to get all customers one by one, but after some study I understood the whole way to solve it.
The Shopify API returns at most 250 records per request (using limit), so to get all the data you need to paginate and synchronize using since_id, as described below.
shop_url = "https://%s:%s#%s.myshopify.com/admin/api/%s/" % (API_KEY, PASSWORD, SHOP_NAME, API_VERSION)
endpoint = 'customers.json?limit=250&fields=id,email&since_id=0'
r = requests.get(shop_url + endpoint)
Step 1: Start the extraction with since_id=0 and store the results in your database:
customers.json?limit=250&fields=id,email&since_id=0
Step 2: For the next request, change the since_id value to the last id from the previous extraction (suppose the last id is 5103249850543), and put whatever fields you need to check in fields:
customers.json?limit=250&fields=COLUMN_YOUNEED_FOR_CHK&since_id=5103249850543
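Putting the two steps together, a pagination loop might look roughly like this (a sketch; fetch_all_customers is only an illustrative name, and it assumes the shop_url built above):
def fetch_all_customers(shop_url):
    # Page through all customers, 250 at a time, using since_id
    since_id = 0
    customers = []
    while True:
        endpoint = 'customers.json?limit=250&fields=id,email&since_id=%s' % since_id
        batch = requests.get(shop_url + endpoint).json().get('customers', [])
        if not batch:
            break  # no more records to fetch
        customers.extend(batch)
        since_id = batch[-1]['id']  # continue after the last id received
    return customers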

Which model notifications should I listen to in order to calculate the sum of an IG column?

I am using Apex 18.2. I have a page with an interactive grid that has a column "Total", whose sum should be recalculated by looping over the model whenever it might change: for example, when a new row is created, a row is deleted, a row column's value changes, etc. I am subscribing to the model to accomplish this. But there are many model notifications one could listen to, and I only want to listen to the ones that would affect the sum of the Total column, to avoid looping through the model unnecessarily. Could you tell me which notifications those are?
https://docs.oracle.com/en/database/oracle/application-express/18.2/aexjs/model.html
The best way to learn about this is to explore. Add the following to your page's Execute when Page Loads attribute:
var model = apex.region('REGION_ID').widget().interactiveGrid("getCurrentView").model;
model.subscribe({
    onChange: function(changeType, change) {
        console.log(changeType, change);
    }
});
Then work with your IG and note the changeType values logged - those are the notification names that are listed in the doc.
Note that there are rows on the server, rows in the model, and rows displayed in the DOM - the numbers may or may not be different so keep that in mind for aggregate functions that need to work with "all" of the rows.
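Once you know which notifications matter, a recalculation sketch could look something like this (the TOTAL column name and the notification list are placeholders; substitute the changeType values you actually saw logged, and keep the caveat above about which rows are in the model in mind):
var model = apex.region('REGION_ID').widget().interactiveGrid("getCurrentView").model;

function recalcTotal() {
    var sum = 0;
    model.forEach(function(record) {
        var value = parseFloat(model.getValue(record, "TOTAL"));
        if (!isNaN(value)) {
            sum += value;
        }
    });
    console.log("Sum of TOTAL:", sum); // or write it to a page item instead
}

model.subscribe({
    onChange: function(changeType, change) {
        // example guesses - use the notification names you observed in the console
        if (["set", "insert", "delete", "refresh"].indexOf(changeType) >= 0) {
            recalcTotal();
        }
    }
});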

Better way to query and join multiple querysets other than using a forloop in django?

I have a model Item:
Item:
batch_no
batch_no can be anything from 1 to 20. And there are 1000s of items in the database.
Now I need to get the first 4 items for each batch_no.
I know how to do it by querying and appending in a for loop:
batches = Item.objects.values('batch_no').exclude(batch_no__isnull=True).distinct()
blist = []
for batch in batches:
    bitems = Item.objects.filter(batch_no=batch['batch_no'])[:4]
    blist.append(bitems)
return blist
Is there a better way than this, ideally in a single query?
I'm new to Django.
Here is a similar question. The short answer is:
from django.db.models import F, Window
from django.db.models.functions import Rank

# Rank the rows within each batch_no (ordered here by primary key) and keep the first 4 per batch
Item.objects.annotate(rank=Window(
    expression=Rank(),
    order_by=F('pk').asc(),
    partition_by=[F('batch_no')])
).filter(rank__lte=4)
Check out the Rank window function in Django 2.0. It allows you to grab the top N elements within a group, as in the example here.
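If you still want one list per batch (like blist in your loop), you can regroup the single ranked queryset in Python. A sketch, assuming Item has the default integer primary key and your database supports window functions (note that some Django versions don't allow filtering directly on a window annotation, in which case a subquery workaround is needed):
from itertools import groupby
from operator import attrgetter

from django.db.models import F, Window
from django.db.models.functions import Rank

# Keep the first 4 items per batch, ordered consistently, then regroup into one list per batch_no
ranked = (Item.objects
          .annotate(rank=Window(expression=Rank(),
                                order_by=F('pk').asc(),
                                partition_by=[F('batch_no')]))
          .filter(rank__lte=4)
          .order_by('batch_no', 'pk'))

blist = [list(items) for _, items in groupby(ranked, key=attrgetter('batch_no'))]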

Select distinct count cloudant/couchdb

I am starting a project using Cloudant.
It's a simple system for logging, so I can track the usage of my apps.
My documents look like this:
{
  app: 'name of the app',
  type: 'page view | login | etc..',
  owner: 'email_of_the_user',
  device: 'iphone | android | etc..',
  date: 'yyyy-mm-dd'
}
I've tried some map/reduce views and faceted searches, but so far I couldn't get the result I want.
I want to count the number of distinct documents grouped by owner, date (yyyy-mm-dd), and app.
For example, if the same user logs into an app twice or 20 times on the same date, he should be counted only once. I want to count how many unique users used an app each day, no matter what the type of the log or the device used was.
If it were SQL, assuming that each key of the document is a column, I would query something like this:
SELECT app, date, COUNT(DISTINCT owner) FROM logs GROUP BY app, date
and the result would be something like:
'App1', '2015-06-01', 200
'App1', '2015-06-02', 232
'App2', '2015-06-01', 142
'App2', '2015-06-02', 120
How can I get the same result using Cloudant/CouchDB?
You can do this using design documents, as Cesar mentioned. A concrete example would be to create a view where your map function emits the field you want to group on, such as:
function(doc) {
  emit(doc.owner, 1);
}
Then, you select your desired reduce function (such as _count). When viewing this on Cloudant dashboard, make sure you select Reduce as part of the query options. When accessing the view via URL you need to pass the appropriate parameters (reduce=true&group=true).
The documentation on Views here is pretty thorough: https://docs.cloudant.com/creating_views.html
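For example, with a hypothetical design document named stats and a view named by_owner in a logs database, the request would look something like:
GET https://<account>.cloudant.com/logs/_design/stats/_view/by_owner?reduce=true&group=true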
For what you need, there is a feature in Cloudant/CouchDB called design documents. You can check the documentation for this feature for details, or this guide:
http://guide.couchdb.org/draft/design.html
Cloudant documentation:
https://docs.cloudant.com/design_documents.html
Design documents are similar to views in the SQL world.
Regards,
We were able to do this in our project using the Cloudant Java API...
https://github.com/cloudant/java-cloudant
You should be able to get this sort of result by creating a view that has a map function like this...
function(doc) {
  emit([doc.app, doc.date, doc.owner], 1);
}
The reduce function should look like this:
function(keys, values, rereduce) {
  // Both the initial reduce and the rereduce just sum the partial counts,
  // so this is equivalent to the built-in _sum reduce
  return sum(values);
}
Then we used the following query to get the data we wanted.
Database db = ....
db.view(viewName)
  .startKey(startKeys)
  .endKey(endKeys)
  .group(true)
  .includeDocs(false)
  .query(castClass)
We supplied the view name and some start and end keys (since we emitted a compound key and we needed to supply a filter) and then used the group method to get the data back as you need it.
Revised..
With this new emit key in the map function you should get results like this:
{[
  {[app1, 2015,06,28, john@somewhere.net], 12},  <- john visited 12 times on that day...
  {[app1, 2015,06,29, john@somewhere.net], 10},
  {[app1, 2015,06,28, ann@somewhere.net], 1}
]}
If you use good start and end keys, the amount of records you're querying will stay small and the number of records you get back is the unique visitors you are seeking. Note that in this scenario you are getting back a bit more than you want, but it does work.

Optimization of bulk update/insert

I'm writing a web application that is going to show player statistics for an online game, using Django 1.6 and PostgreSQL 9.1. I've created a script, run with django-extensions' "runscript", that fetches all players who are online and inserts or updates them in my table. The script is executed 4 times per hour using cron. I need to either insert or update because a player may already be in the table (and should then be updated) or not be in the table yet.
On to my problem: there are around 25,000 players online at peak hours and I'm not really sure how I should optimize this (minimize disk I/O). This is what I've done so far:
from datetime import date
from django.db import transaction

@transaction.commit_manually
def run():
    for fetched_player in FetchPlayers():
        defaults = {
            'level': fetched_player['level'],
            'server': fetched_player['server'],
            'last_seen': date.today(),
        }
        player, created = Player.objects.get_or_create(name=fetched_player['name'], defaults=defaults)
        if not created:
            player.level = fetched_player['level']
            if player.server != fetched_player['server']:
                # I save this info to another table
                player.server = fetched_player['server']
            player.last_seen = date.today()
            player.save()
    transaction.commit()
Would it be (considerably) faster to bypass Django and access the database using psycopg2 or similar? Would Django be confused when 'someone else' is modifying the database? Note that Django only reads the database; all writes are done by this script.
What about bulk-fetching players from the database (using either Django or psycopg2), updating those that were found, and then inserting the players that were not found, if that's even possible? The query would get huge: 'SELECT * FROM player WHERE name = name[0] OR name = name[1] OR ... OR name = name[25000]'. :)
If you want to reduce the number of queries, here is what I suggest:
Call update() directly for each player; it returns the number of rows updated. If the count is 0 (meaning the player is new), put the player's data in a temporary list. When you are done with all the fetched players, use bulk_create() to insert all the new players with one SQL statement (see the sketch below).
Assuming you have M+N players (M new, N updated), the number of queries is:
Before: (M+N) selects + M inserts + N updates
After: (M+N) updates + 1 bulk insert.
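A minimal sketch of that approach, assuming the same Player fields and FetchPlayers() as in your script:
from datetime import date
from django.db import transaction

@transaction.atomic
def run():
    new_players = []
    for fetched_player in FetchPlayers():
        # One UPDATE per existing player; update() returns the number of rows matched
        updated = Player.objects.filter(name=fetched_player['name']).update(
            level=fetched_player['level'],
            server=fetched_player['server'],
            last_seen=date.today(),
        )
        if updated == 0:
            # Player is new; queue it for a single bulk INSERT at the end
            new_players.append(Player(
                name=fetched_player['name'],
                level=fetched_player['level'],
                server=fetched_player['server'],
                last_seen=date.today(),
            ))
    Player.objects.bulk_create(new_players)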