How to jump to a specific page using a Query Cursor? - python-2.7

I am developing my website in Python (webapp2) with the Google Datastore as the back end.
I have added query cursors for pagination and they work well, but they only provide next and previous navigation. The question is: how do I jump to a specific page? Say I am on page 1 and I want to jump to page 3 - how can I manage that with a query cursor?
I have also visited the links below but didn't find a solution:
https://www.the-swamp.info/blog/pagination-google-app-engine/
https://cloud.google.com/appengine/docs/standard/python/datastore/query-cursors

You can't. Datastore doesn't know the number of results found by your query.
You can, however, use some tricks to simulate full pagination.
For example, one technique is to generate the cursors for a certain number of pages in a loop, ending with the one for the "last" page you need.
Like in the following sketch:
# assuming an ndb model called MyModel and PAGE_SIZE results per page
results, next_cursor, more = MyModel.query().fetch_page(PAGE_SIZE)
for p in xrange(5):
    # generate the cursor for the next page by feeding in
    # the cursor returned for the previous page
    results, next_cursor, more = MyModel.query().fetch_page(
        PAGE_SIZE, start_cursor=next_cursor)
As you don't actually access the results, the performance is somewhat acceptable (depending on your model complexity, query complexity and such).
You can tune this for your data.
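If it helps, here is a minimal runnable sketch of that idea as a helper. MyModel, PAGE_SIZE and the keys_only optimization are my assumptions, not part of the original answer:

from google.appengine.ext import ndb

PAGE_SIZE = 10  # hypothetical page size

class MyModel(ndb.Model):  # hypothetical model
    name = ndb.StringProperty()

def cursor_for_page(page):
    # walk forward one page at a time; keys_only keeps each hop cheap
    cursor, more = None, True
    for _ in xrange(page - 1):
        if not more:
            break  # fewer pages exist than requested
        _, cursor, more = MyModel.query().fetch_page(
            PAGE_SIZE, start_cursor=cursor, keys_only=True)
    return cursor

# jump straight to page 3
results, next_cursor, more = MyModel.query().fetch_page(
    PAGE_SIZE, start_cursor=cursor_for_page(3))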


Kibana: can I store "Time" as a variable and run a consecutive search?

I want to automate a few searches in one; here are the steps:
Search in Kibana for this ID: "b2c729b5-6440-4829-8562-abd81991e2a0", which returns a bunch of logs. From these logs I need to take the first and the last timestamp.
I would now like to store these two values, FROM: September 3rd 2019, 21:28:22.155 and TO: September 3rd 2019, 21:28:23.524, in two variables.
Run a second search in Kibana for the word "fail" between these two points in time.
How can I automate the whole process without copy/paste and running a second query?
EDIT:
SHORT STORY LONG: I work in a company that produces software for autonomous vehicles.
SCENARIO: A booking is rejected and we need to understand why.
WHERE THE PROBLEM IS: I need to monitor just a few seconds of logs on 3 different machines. Each log is completely separate; there is no relation between the logs, so I cannot write a single query in Discover. I need to run 3 separate queries.
EXAMPLE:
A booking was rejected, so I open Chrome and search on "elk-prod.myhost.com" for the BookingID "b2c729b5-6440-4829-8562-abd81991e2a0", and I get a dozen logs returned within a range of 2 seconds (FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524).
Now I need to know what was happening on the car, so I open a new Chrome tab and search on "elk-prod.myhost.com" for the CarID "Tesla-45-OU" on the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
Now I need to know why the server which calculates the matching rejected the booking, so I open a new Chrome tab and search for the word CalculationMatrix, again on the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
CONCLUSION: I want to stop opening Chrome tabs by hand and automate the whole thing. I have no idea around what time the booking was made, so I first need to search for the BookingID "b2c729b5-6440-4829-8562-abd81991e2a0", then store the timestamps of the first and last log, and run a second and third query based on those timestamps.
There is no relation between the 3 logs I search for, so there is no way to filter from Discover; I need to automate 3 different queries.
Here is how I would do it. First of all, from what I understand, you have three different indexes:
one for "bookings"
one for "cars"
one for "matchings"
First, in Discover, I would create three Saved Searches, one per index pattern. Then in Visualize, I would create a Vertical bar chart on the bookings saved search (bucket the X-axis with a date_histogram on the timestamp field and leave the rest as is). You'll get a nice histogram of all your booking events bucketed by time.
Finally, I would create a dashboard and add the vertical bar chart + those three saved searches inside it.
When done, the way I would search according to the process you've described above is as follows:
Search for the booking ID b2c729b5-6440-4829-8562-abd81991e2a0 in the top filter bar. In the bar chart histogram (bookings), you will see all documents related to the selected booking. On that chart, you can select the exact period from when the very first booking document happened to the very last. This will adapt the main time picker at the top, and the start/end time will be "remembered" by Kibana.
Remove the booking ID from the top filter (since we now know the time range and Kibana stores it). Search for Tesla-45-OU in the top filter bar. The bar histogram + the booking saved search + the matchings saved search will be empty, but you'll have data inside the second list, the one for cars. Find whatever you need to find in there and go to the next step.
Remove the car ID from the top filter and search for ComputationMatrix. Now the third saved search is going to show you whatever documents you need to see within that time range.
I'm lacking realistic data to try this out, but I definitely think this is possible as I've laid out above, probably with some adaptations.
Kibana does work like this (any order is ok):
Select time filter: https://www.elastic.co/guide/en/kibana/current/set-time-filter.html
Add additional search criteria, for example: field s is b2c729b5-6440-4829-8562-abd81991e2a0.
Add additional search criteria, for example: field x is Fail.
Additionally, you can view surrounding documents: https://www.elastic.co/guide/en/kibana/current/document-context.html#document-context
This is how Kibana works.
You can prepare some filters beforehand, save them, and then use them if you want to somehow automate the discovery process.
You can do that in the Discover tab in Kibana using the New/Save/Open options.
Edit:
I do not think you can achieve what you need in Kibana. As I mentioned earlier, one option is to change the data that is coming into Elasticsearch so you can search for it via Discover in Kibana. Another option could be building, for example, a Java application that uses Elasticsearch - then you can write an algorithm that returns the data you want. But I think that is a big overhead, and I recommend checking the data first.
Edit: To clarify - you can create an external Java application, let's say a Spring Boot application, that uses Elasticsearch - all the data you need is inside Elasticsearch.
But with this option you will not use Kibana at all.
You can export the result to CSV or whatever you want in the code.
A Spring Boot application can ask Elasticsearch for whatever it needs, and then it would be easy to store these time variables inside the Java code.
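For illustration only, here is the same idea as a minimal Python sketch rather than Java/Spring Boot; the host URL, the index name logs-*, and the field names message and @timestamp are all assumptions:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://elk-prod.myhost.com:9200")  # hypothetical host

# step 1: find the first and last timestamp for the booking ID
resp = es.search(index="logs-*", body={
    "size": 0,
    "query": {"match": {"message": "b2c729b5-6440-4829-8562-abd81991e2a0"}},
    "aggs": {
        "first": {"min": {"field": "@timestamp"}},
        "last": {"max": {"field": "@timestamp"}},
    },
})
start = resp["aggregations"]["first"]["value_as_string"]
end = resp["aggregations"]["last"]["value_as_string"]

# steps 2 and 3: rerun any search inside that exact window
fails = es.search(index="logs-*", body={
    "query": {"bool": {
        "must": [{"match": {"message": "fail"}}],
        "filter": [{"range": {"@timestamp": {"gte": start, "lte": end}}}],
    }},
})
for hit in fails["hits"]["hits"]:
    print(hit["_source"])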
EDIT: After the OP edited the question to change it dramatically:
@FrancescoMantovani Well, the edited version is very different from what you first posted here: "How can I automate the whole process without copy/paste and running a second query?" and search for the word fail in a single shot. In the accepted answer you are still using three filters, one at a time, so it is not one search but three.
What's more, if you used one index and sent data from multiple hosts via Filebeat, you wouldn't even have to create this dashboard. Then you could select the exact period from when the very first document happened to the very last for a given filter, then remove that filter and add the next one you need - it's as simple as that. Before, you were writing about one query,
How can I automate the whole process without copy/paste and
running a second query?
not three. And you don't need to open a new Chrome tab each time you want to change a filter; just organize the data, for example by using Filebeat as mentioned before.
There is no relation between the 3 logs
From what you wrote, the relation does exist, and it is time.
If the data is in, for example, three different indices (because the documents don't share much data), you can do it like this:
Go to Discover, select index 1, search, and select the time range you need. When you change the index, the time range stays the one you selected; you only need to change the filter and you will get what you need. You can switch indices easily in Discover.

Ember JS : Lazy load Model data

I'm trying to render Ember model data which has more than 1000 records. This takes more than 2 minutes to finish rendering.
So I have a plan to optimize it: I want to load only the first 100 records initially. Once the user reaches the end, I need to load the next 100 records.
How can I do that?
Retrieving pages of data
The concept is paging, and the details depend on how your backend handles paging. But in the generic case it looks something like:
let result = this.store.query('post', {
  limit: 10,
  offset: 0
});
once processed by the backend would result in a query to a relational database like:
SELECT * FROM post LIMIT 10 OFFSET 0;
So, you will need to keep track of the current page you are showing. Each time you want to fetch a new page, you simply set your offset to page * limit, where page is the current page index. So the next query, when page = 1, would be:
let result = this.store.query('post', {
  limit: 10,
  offset: 10 // 1 * 10
});
It's probably a good idea for your backend to return the total result count, which you can access via some kind of metadata key on your JSON responses (or however you want, since it depends on the way your backend speaks collections). This way you know when to stop trying to fetch the next page.
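As a rough sketch of the backend half (written here in Python with sqlite3 purely for illustration; the table name, function name and response shape are made up):

import sqlite3

def list_posts(conn, limit=10, offset=0):
    # translate the query params into a LIMIT/OFFSET query
    rows = conn.execute(
        "SELECT * FROM post ORDER BY id LIMIT ? OFFSET ?",
        (limit, offset)).fetchall()
    # include the total so the client knows when to stop paging
    total = conn.execute("SELECT COUNT(*) FROM post").fetchone()[0]
    return {"posts": rows, "meta": {"total": total}}

# e.g. the second page with a page size of 10
payload = list_posts(sqlite3.connect("app.db"), limit=10, offset=10)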
UX around retrieval
You will need to choose whether you want simple paging, which supplies next and previous buttons that the user clicks to retrieve the next / previous page. It's probably best UX to manage the page with query params, so that the browser's forward/back buttons move between pages and refreshing does not lose the page. You should also disable / enable the previous and next buttons when there are no pages to fetch in either direction.
Pros
easy to develop
user never loses their place
since only ever showing one page at a time, no memory / performance concerns
Cons
users have to click buttons and may choose not to / not realize how
Addons
ember-cli-pagination
The alternative would be using infinite scrolling (a la the Facebook news feed). You must pay attention to the scroll position to know when to fetch the next page (which requires math around the sizes of the current items). Alternatively, you evaluate whether item n - 2, or something like that, is in the viewport and then prefetch the next page.
Pros
users never think about paging and continue seeing new content with ease
Cons
horrible for returning to one's spot on page change / refresh
horrible for searching
requires a great deal of attention to the amount of data you are showing or else you will run into overconsumption of memory and performance degradation.
more difficult to write since you must handle scrolling events, viewport detection, etc.
Addons
ember-in-viewport
vertical-collection
ember-infinity

Overcoming querying limitations in Couchbase

We recently made a shift from relational (MySQL) to NoSQL (Couchbase). Basically it's a back end for a social mobile game. We were facing a lot of problems scaling our back end to handle the increasing number of users. When using MySQL, loading a user took a lot of time, as there were a lot of joins between multiple tables. We saw a huge improvement after moving to Couchbase, especially when loading data, as most of it is kept in a single document.
On the downside, Couchbase also seems to have a lot of limitations as far as querying is concerned. Couchbase's alternative to SQL queries is views. While we managed to handle most of our queries using map-reduce, we are really having a hard time figuring out how to handle time-based queries, e.g. we need to filter users based on a timestamp attribute. We only need a user in the view if their time is less than the current time:
if(user.time < new Date().getTime() / 1000)
What happens is that once a user's time is set to some future time, it gets excluded from this view, which is the desired behavior, but it never gets added back to the view unless we update it - a document only gets re-indexed in a view when it is updated.
Our solution right now is to load the first x user documents and then check the time in our application. Sorting is done on the user.time attribute, so we get those users whose time is less than or near the current time. But I am not sure this is actually going to work in a live environment. Ideally we would like to avoid this type of check at the application level.
Also, there are times, e.g. matchmaking, when we need to check multiple time-based attributes. Our current strategy doesn't work in such cases, and we frequently get documents from the view which do not pass these checks when done in the application. I would really appreciate it if someone who has already tackled similar problems could share their experiences. Thanks in advance.
Update:
We tried using range queries, which work for only one key. Like I said, in most cases we have multiple time-based keys, meaning multiple ranges, which does not work.
If you use new Date().getTime() inside a view function, you'll always get the time at which that view was indexed - just as you said, "it never gets added back to view unless we update it".
There are two ways:
Bad way (don't do this in production). Query views with the stale=false param. That will cause the view to update before it returns results. But view indexing is a slow process, especially if you have more than a million records.
Good way. Use range requests. You just need to emit your date in the map function, as the key or as part of a compound key, and use a range request. You can see one example here or here (also, if you want to use DateTime in Couchbase, this example will be more useful). Or just look at my example below:
I.e. you will have docs like:
doc = {
  "id": 1,
  "type": "doctype",
  "timestamp": 123456, // document update or creation time
  "data": "lalala"
}
For those docs, the map function will look like:
map = function (doc, meta) {
  if (doc.type === "doctype") {
    emit(doc.timestamp, null);
  }
}
And now to get recently "updated" docs you need to query this view with params:
startKey="dateTimeNowFromApp"
endKey="{}"
descending=true
Note that startKey and endKey are swapped, because I used descending order. Here is also a link to the documentation about the key types that Couchbase supports.
I've also found a link to a question that can help.
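To make that concrete, here is how such a range request might be issued against Couchbase's view REST API; a minimal Python sketch where the host, bucket, design document and view names are assumptions (note that with descending=true the range runs from the highest key down, so the endkey here is the lowest timestamp):

import json
import time
import requests

# hypothetical bucket/design-doc/view names; port 8092 serves the view API
url = "http://localhost:8092/default/_design/users/_view/by_timestamp"
params = {
    "startkey": json.dumps(int(time.time())),  # "now", as emitted by the map
    "endkey": json.dumps(0),                   # down to the oldest timestamp
    "descending": "true",
}
rows = requests.get(url, params=params).json()["rows"]
for row in rows:
    print(row["id"], row["key"])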

Solr + Haystack searching

I am trying to implement a search engine for a new app.
The app allows people to rate items (+1 or -1), giving the items a positive or negative score.
When people search for items, I'd like to take into account their rating and to order the results accordingly. If the item is a match, it should show up. But if it's a match with a high score it should be boosted up the results a bit.
A really good match should win over a fairly good match with a high score, so it needs to be weighted along with the rest of it (i.e. I boosted my titles a bit).
I'm not stuck on Solr by any means; I've only just started playing with it today.
With Solr, you can maintain a field on the document which holds the difference between the total +1s and the -1s.
Solr allows you to boost on field values using function queries, so you can query with a boost on the difference field, letting documents with a better difference score over the others.
On the indexing front, as this difference changes quite often, the respective document needs to be updated every time. Solr does not allow updating a single field in place, so you need to handle the incremental updates of the difference field by reindexing the document.
If that would be a concern to you, you can try using an ExternalFileField.
This allows keeping certain per-document fields, such as ranking or popularity, external to the index in a separate file.
The file can be updated and the index committed to reflect the changes.
The field can also be used with function queries to boost the results as needed; however, it has a lot of limitations.
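For example, here is a sketch of such a function-query boost via pysolr; the core URL and the rating_diff field name are made up, and bf adds the field's value into the (e)dismax score:

import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/items")  # hypothetical core

# a strong text match still wins, but a high vote difference nudges
# documents up the result list
results = solr.search("blah", **{
    "defType": "edismax",
    "qf": "title^2 content",  # boost titles a bit, as mentioned above
    "bf": "rating_diff",      # additive function-query boost on the field
})
for doc in results:
    print(doc["id"], doc.get("title"))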
You can order your results by a field that stores the ranking:
sqs.filter(content='blah').order_by('rating')

Django pagination of large image thumbnails

I have a couple hundred image thumbnails, 15k each. I want to display 20 or so on each page.
Would django.core.paginator suffice for the pagination of these pages? I.e., will it return only those images displayed on the current page? (And if not, what would be a good way to do this?) Thank you.
It depends, because there is one big limitation in the RDBMS (which affects all databases, including MySQL, Postgres, etc.).
django.core.paginator takes a QuerySet, which represents any kind of SQL query, and adds a LIMIT clause to get just a couple of entries from the database. This approach works well for many kinds of applications, but it can become a serious problem if you have a lot of entries. The particular problem is that whenever you access the 800th page, the database will actually fetch 801*20 entries and then drop the first 800*20 entries again to return the last twenty.
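For reference, a minimal sketch of that usage; the Image model and template names are assumptions:

from django.core.paginator import Paginator
from django.shortcuts import render

def thumbnails(request, page_num):
    qs = Image.objects.order_by("id")  # hypothetical model
    paginator = Paginator(qs, 20)
    page = paginator.page(page_num)  # runs LIMIT 20 OFFSET (page_num-1)*20
    # page.object_list contains only the 20 thumbnails for this page
    return render(request, "thumbs.html", {"page": page})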
Unfortunately, there is no easy way to solve this problem. In a lot of cases, a next/prev button might be enough, so you can write your own pagination which operates on after-keys instead of page numbers. For example, if the last entry currently displayed to the user has the key "D", you show a next button which links to /next?after=D and then use a SQL query like SELECT * FROM objects WHERE key > 'D' ORDER BY key LIMIT 20. The advantage of this approach is that you can add an index on objects.key, which speeds things up significantly.
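The same idea sketched with the Django ORM, keyed on the primary key (again assuming a hypothetical Image model):

def next_page(after=None, size=20):
    # "after" is the pk of the last item on the previous page
    qs = Image.objects.order_by("pk")
    if after is not None:
        qs = qs.filter(pk__gt=after)  # indexed seek instead of OFFSET
    return list(qs[:size])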
The other approach requires that you add an additional, indexed (!) column page_num to your table. Then you can perform SQL queries like SELECT * FROM objects WHERE page_num=800 ORDER BY key. With that approach, you can still access all pages randomly, but you have to maintain the page_num column. This might be easy if data is mostly appended at the end, and more complicated if you want to efficiently delete/insert elements in the middle.
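With such a maintained, indexed page_num column, any page is then a single indexed lookup; again a hypothetical sketch:

# e.g. page 800, fetched without scanning the preceding 799 pages
page_800 = list(Image.objects.filter(page_num=800).order_by("pk"))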
So, I would start with django.core.paginator because it's just about one line of code. But keep an eye on the response times of your paginated views and on the slow-query log of your database. If your database server can't handle the load anymore, you will have to choose one of the techniques mentioned above. Choose solution 2 if random page access is a requirement and solution 1 otherwise (because it's much simpler).
PS: And yes, django.core.paginator will work correctly. :)