OK, so I am trying to search Amazon through the API for movie or TV titles. I simply want to get back a list that includes whether the title is available, its price, and whether it is available on Prime.
The query I have works, with some issues.
By specifying a SearchIndex of "Video", it returns results for ordering the physical discs. I only want streaming results. I have looked, and streaming is not a valid search index. Is there any way to do this?
Basically, I just want to know whether a title is available for streaming and, if it is, whether it is also available on Prime.
I have the following problem. In Amplify Studio I see 10k datapoints:
But if I take a closer look at the corresponding database, I see this:
I have over 200k datapoints, but Amplify Studio shows only 10k. Why is that?
When I try this code in my frontend:
let p = await DataStore.query(Datapoint, Predicates.ALL, {
  limit: 1000000000,
});
console.log(p.length);
I get 10000 back, the same number as in Amplify Studio.
One other question: what's the best way to store large numbers of datapoints? I need them for chart visualization.
A DynamoDB Query or Scan request does not return all items in one huge list. Instead, it returns just a single "page" of results, whose size defaults to 1MB. A client library like Amplify could call these Query or Scan requests repeatedly to collect all pages into one huge array in memory, but that doesn't make much sense once the result set grows very large. So applications usually want to iterate over the results, not collect them into one huge array in memory.
So most DynamoDB APIs provide a pagination interface for the Query and Scan operations, which gives you one page of results and a way to get the next page. Alternatively, some APIs can give you an iterator over the results and do this pagination internally. I'm not familiar with Amplify, so I don't know which API to recommend. But an API that returns all results as one big array must have its limits, and apparently you've found them.
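The pagination loop described above can be sketched as follows. Here `fetch_page` is a hypothetical stand-in for a single DynamoDB Query/Scan request (it mimics the LastEvaluatedKey cursor pattern over an in-memory list), not a real AWS call:

```python
# Sketch of the pagination loop most DynamoDB clients run under the hood.
# fetch_page is a stand-in for one Query/Scan call: it returns a single
# page of items plus a LastEvaluatedKey-style cursor (None on the last page).

def fetch_page(data, cursor, page_size):
    """Hypothetical stand-in for a single DynamoDB Query/Scan request."""
    start = cursor or 0
    page = data[start:start + page_size]
    next_cursor = start + page_size if start + page_size < len(data) else None
    return page, next_cursor

def iterate_all(data, page_size=3):
    """Yield every item by following the cursor page by page."""
    cursor = None
    while True:
        page, cursor = fetch_page(data, cursor, page_size)
        yield from page
        if cursor is None:
            break

items = list(iterate_all(list(range(10)), page_size=3))
print(len(items))  # prints 10 (all four pages collected)
```

A real client does the same thing, except the cursor it passes back is the `LastEvaluatedKey` from the previous response, sent as `ExclusiveStartKey` on the next request.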
So I am running Python 3.7.1, and I am trying to make a program that pulls out only the customers that use an American Express card and displays only their name and email.
I have part of the code working: it pulls all the customer data for customers that use the same card type, but it pulls up multiples of the same name and email along with all the other information. I just can't figure out how to eliminate the duplicates and display only the name and email. Below I will show a picture of my code and a screenshot of the output for reference.
My code so far
Output (notice the duplicates of Mary and Hunter)
Assuming your file isn't extremely long, consider using Python's set data structure to filter out duplicates. You can check for membership within the set via the in operator (e.g. x in s) and you can add new elements to the set via the add() method (e.g. s.add(x)). At a high level, you want to amend your code to check whether the element is already in your set (in which case you don't need to print it again), and if it is not in the set, add it to ensure you don't print it again.
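A minimal sketch of that set-based approach, assuming the records have already been parsed into `(name, email, card_type)` tuples; the sample data below is made up for illustration:

```python
# Deduplicate with a set: print each (name, email) pair at most once,
# and only for American Express customers. The sample rows are invented.

customers = [
    ("Mary", "mary@example.com", "American Express"),
    ("Hunter", "hunter@example.com", "American Express"),
    ("Mary", "mary@example.com", "American Express"),      # duplicate row
    ("Alice", "alice@example.com", "Visa"),                # wrong card type
    ("Hunter", "hunter@example.com", "American Express"),  # duplicate row
]

seen = set()
for name, email, card_type in customers:
    if card_type != "American Express":
        continue
    key = (name, email)
    if key in seen:   # already printed, skip the duplicate
        continue
    seen.add(key)
    print(name, email)
```

This prints Mary and Hunter once each; the Visa row and the repeated rows are skipped.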
Using the Graph API or FQL, is there a way to efficiently find a user's first post or status? As in, the first one they ever made?
The slow way, I assume, would be to paginate through the feed, but for users like me who joined in 2005 or earlier, that would take a very long time with a huge amount of API calls.
From what I have found, we cannot obtain the date the user registered with Facebook for a good starting point, and we cannot sort by date ascending (not outside of the single page of data returned) to get the oldest post on top.
Is there any reasonable way to do this?
You can use Facebook Query Language (FQL) to get the first post. For example:
SELECT message, time FROM status WHERE uid = me() ORDER BY time ASC LIMIT 1
I think the public API is limited in how deep it is allowed to query. Facebook probably put in these constraints for performance and cost reasons; maybe they've changed it since. When I tried to page backwards through a person's stream about 4 months ago, there seemed to be a limit on how far back I could go, either a time limit or a limit on the number of posts. If you know roughly when your user first posted, getting to it should be fairly quick using the since/until timestamps in your queries.
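The since/until idea can be sketched like this. `fetch_posts` is a hypothetical stand-in for a Graph API `/me/feed` call with `since`/`until` parameters, and the timestamps are invented; the point is that a narrow window around the suspected first-post date avoids paging the entire stream:

```python
# Sketch: if you roughly know when the user first posted, request a narrow
# since/until window starting there and take the earliest post in it.
# fetch_posts stands in for GET /me/feed?since=...&until=... (newest first).

posts = [1104537600, 1104624000, 1262304000, 1420070400]  # unix times, ascending

def fetch_posts(since, until):
    """Hypothetical stand-in for a feed request; returns newest-first."""
    window = [t for t in posts if since <= t < until]
    return sorted(window, reverse=True)

# Narrow window around the suspected first-post date (early 2005 here):
window = fetch_posts(since=1100000000, until=1110000000)
first_post = min(window)
print(first_post)  # prints 1104537600
```

If the first window comes back empty, widen or shift it and try again; each probe costs one API call instead of one call per page of the whole history.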
I am trying to implement a search engine for a new app.
The app allows people to rate items (+1 or -1), giving each item a positive or negative score.
When people search for items, I'd like to take their rating into account and order the results accordingly. If an item is a match, it should show up; but a match with a high score should be boosted up the results a bit.
A really good match should still win over a fairly good match with a high score, so the rating needs to be weighted along with the rest of the scoring (e.g. I boosted my titles a bit).
I'm not stuck on Solr by any means; I only just started playing with it today.
With Solr, you can maintain a field on each document that holds the difference between the total +1 and -1 votes.
Solr allows you to boost on field values using function queries, so you can query with a boost on the difference field, and documents with a better difference will score higher than others.
On the indexing side, since this difference changes quite often, the document needs to be reindexed every time it changes. Solr does not allow updating a single field in place, so you need to handle the incremental updates of the difference field yourself.
If that is a concern, you can try using ExternalFileField. It lets you keep certain per-document values, such as ranking or popularity, external to the index in a separate file. The file can be updated and the index committed to reflect the changes. The field can also be used with function queries to boost results as needed, though it has quite a few limitations.
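As a toy illustration of what an additive boost function does (this is not Solr code; the relevance scores and weight below are invented), combining a text-match score with a small boost derived from the vote difference reorders the results without letting popularity drown out relevance:

```python
# Toy model of an additive boost: final score = text relevance plus a
# scaled vote difference. The relevance numbers are made up for illustration.

docs = [
    {"title": "really good match", "relevance": 5.0, "difference": 2},
    {"title": "fairly good match, popular", "relevance": 3.0, "difference": 40},
    {"title": "weak match, very popular", "relevance": 1.0, "difference": 100},
]

WEIGHT = 0.02  # keep the boost small so relevance still dominates

for doc in docs:
    doc["score"] = doc["relevance"] + WEIGHT * doc["difference"]

ranked = sorted(docs, key=lambda d: d["score"], reverse=True)
for doc in ranked:
    print(f'{doc["score"]:.2f}  {doc["title"]}')
```

With this weight, the really good match (5.04) still beats the popular but fairly good match (3.80), which in turn beats the weak, very popular one (3.00); tuning that weight is exactly the trade-off described above.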
You can order your results by a field that stores the rating:
sqs.filter(content='blah').order_by('rating')
I took a look through the API documentation, and since the ItemSearch operation requires the Keywords parameter, I do not think it will be possible, but I just want to confirm.
Should I be looking at a different operation?
Use an XML parser.
Amazon has the bestsellers feed at:
http://www.amazon.ca/rss/bestsellers/books/ref=pd_ts_rss_link
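If you go the feed route, the RSS can be parsed with Python's standard library. The snippet below runs against an inline sample in the same general shape as an RSS feed (the titles are invented); fetching the live URL is left out:

```python
# Minimal sketch of pulling item titles out of an RSS feed with the
# standard library. sample_rss stands in for the live bestsellers feed;
# swap in urllib.request to fetch the real URL.

import xml.etree.ElementTree as ET

sample_rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Bestsellers in Books</title>
    <item><title>#1: Example Book One</title></item>
    <item><title>#2: Example Book Two</title></item>
  </channel>
</rss>"""

root = ET.fromstring(sample_rss)
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # prints ['#1: Example Book One', '#2: Example Book Two']
```

Each `<item>` in the channel is one product; `findtext("title")` pulls out its display title.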
I haven't found a way to obtain the top 100, but the TopSellers response group can be used with the BrowseNodeLookup operation to return the top 10 items in a given BrowseNodeId (the BrowseNodeId identifies a product category).
If you are interested in a set of categories, retrieving the top 10 items from each category might be an acceptable compromise.