I had a few questions related to the changes to the Google Maps/Routes/Places APIs that happened a couple of weeks ago.
My app calls places.PlacesService.GetAutocompletePredictions and, at the moment, in the Google Cloud Platform console, I can see my "request" usage. However, the billing has switched to per-character, and I could not find my usage by that metric. Any idea how to view it?
Is there a way to see my usage for the past few months?
How does this per-character billing work? Is it the number of characters of the prediction that I pick, or the number of characters predicted at the end of my pre-filled phrase? Google doesn't seem to want to make their pricing particularly clear.
I want to write an Alexa skill that would read a list of items out to me and let me interrupt when I wanted and have the backend know where I was in the list that was interrupted.
For example:
Me: Find me a news story about pigs.
Alexa: I found 4 news stories about pigs. The first is titled 'James the pig goes to Mexico', the second is titled 'Pig Escapes Local Farm' [I interrupt]
Me: Tell me about that.
Alexa: The article is by James Watson, is dated today, and reads, "Johnny the Potbelly Pig found a hole in the fence and..."
I can't find anything to indicate that my code can know where an interruption occurs. Am I missing it?
I believe you are correct: the ASK does not provide any way to know when you were interrupted. However, since this all happens in real time, you could estimate it by measuring the amount of time that passes between sending the first ASK 'tell' (i.e. where you call context.success( response )) and receiving the "Tell me about that" intent.
Note that the time it takes to read text in en-US could be different than for en-GB, so you'll have to do separate calibrations. Also, you might have to add some pauses into your speech text to improve accuracy, since there will of course be some variability in the results due to processing times.
If you are using a service like AWS Lambda or Google App Engine, which can add extra latency when no warm instances are available, you will probably need to take that into account as well.
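The timing approach above can be sketched roughly as follows. Everything here is illustrative: the ASK gives you none of these values, so `charsPerSecond` is a rate you'd calibrate yourself per locale, and `elapsedMs` is the gap you measure between sending the response and receiving the follow-up intent.

```javascript
// Estimate which list item Alexa was reading when the user interrupted.
// `itemTexts` are the spoken strings in order; `charsPerSecond` is a rate
// calibrated per locale (e.g. by timing known phrases); `elapsedMs` is the
// time between sending the 'tell' response and receiving the new intent.
function guessInterruptedItem(itemTexts, charsPerSecond, elapsedMs) {
  const elapsedSec = elapsedMs / 1000;
  let spoken = 0;
  for (let i = 0; i < itemTexts.length; i++) {
    spoken += itemTexts[i].length / charsPerSecond;
    if (spoken > elapsedSec) return i; // still reading item i when cut off
  }
  return itemTexts.length - 1; // list finished before the interruption
}
```

Character count per second is a crude proxy for speech duration, which is why per-locale calibration (and some deliberate pauses in the speech text) matters.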
The problem is to collect the map coordinates of some given locations and display them on the site I'm creating. I have heard that this is called scraping. Can anyone tell me how to do this?
Google has really tight controls on what they let you get to -- there's an API, but it's limited to on the order of 1,000 queries/day, depending on what information you need. If you try to get around the API, they have incredibly clever algorithms that shut you out quickly -- when I attempted to scrape Google search results, I found that after a few minutes they were able to block me as soon as I hopped onto new Tor endpoints.
The people at Google are extremely good software developers, and if they don't want you to have some info, you can't get it.
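For coordinates specifically, the supported route is the Geocoding API rather than scraping. A minimal sketch of handling its JSON response, assuming the documented results[0].geometry.location shape (the sample below is heavily abbreviated -- real responses carry many more fields):

```javascript
// Pull lat/lng out of a Geocoding API-style JSON response.
function extractCoordinates(geocodeResponse) {
  if (geocodeResponse.status !== 'OK' || geocodeResponse.results.length === 0) {
    return null; // no match found for the queried address
  }
  const { lat, lng } = geocodeResponse.results[0].geometry.location;
  return { lat, lng };
}

// Abbreviated example of the response shape returned by
// https://maps.googleapis.com/maps/api/geocode/json?address=...&key=...
const sample = {
  status: 'OK',
  results: [{ geometry: { location: { lat: 40.714224, lng: -73.961452 } } }],
};
```

Staying on the official API also keeps you inside the usage limits instead of fighting the anti-scraping systems.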
I'm writing an app to collect Facebook posts matching a certain search term, and I'm trying to fetch only new or updated posts from the graph.facebook.com/search endpoint. I've concluded from debugging that this particular endpoint uses time-based pagination (since, until), so here's my process:
fetch new posts using the most recent 'since' time (default to now - 5 mins at start)
update my 'since' time to the most recent created_time or updated_time from the list of returned posts
sleep X seconds, repeat
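The cursor update in step 2 looks roughly like this (a sketch assuming Unix timestamps; `nextSince` is my own bookkeeping, not part of the Graph API):

```javascript
// Advance the 'since' cursor to the newest created_time/updated_time seen
// in the latest batch of posts, never moving it backwards.
function nextSince(posts, currentSince) {
  let since = currentSince;
  for (const post of posts) {
    const t = Math.max(post.created_time || 0, post.updated_time || 0);
    if (t > since) since = t;
  }
  return since;
}
```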
However, I can't even see my own newly created posts. I do get some results, but they seem random in terms of why they match my search and not my own. For testing purposes, I'm using a user-level access token generated using the FB developer tools, so I should definitely not have any permissions issues restricting me from seeing my own content.
What gives?
Edit: More testing reveals that I can randomly receive SOME of my own posts, but there appears to be no rhyme or reason why one post shows up and the others don't. For example, I just posted 3 posts and received the second one via my app. The first and third are nowhere to be found.
I think what you are seeing here is an artefact of the consistency model Facebook is using. You can see another example of this when you look at your feed from two different devices. If I look at my feed on my smartphone and then check it on my PC, sometimes I see the same items, and sometimes there are items I saw on one device that I didn't see on the other. This is because Facebook uses eventual consistency.
In simple terms this means that given enough time, all data clusters will be consistent, but this is not guaranteed at any given time point. The bad news is: there is not much you can do about this. It's just a fact-of-life when working with very large distributed systems (and Facebook is one of the largest in the world). At this scale it is just not practical, where technology is today, to keep all copies of the data completely in sync at all times. What I think you are seeing is two requests serviced by two clusters which are currently not 100% in sync.
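One practical mitigation (my own suggestion, not anything Facebook documents): poll with an overlapping time window, so a post that a lagging replica missed on one poll can still surface on a later one, and deduplicate by post id on your side:

```javascript
// Merge a freshly fetched batch into the set of posts already seen,
// returning only the genuinely new ones. Overlapping windows mean the
// same post can arrive twice; the id set filters the repeats out.
function mergeNewPosts(seenIds, fetchedPosts) {
  const fresh = [];
  for (const post of fetchedPosts) {
    if (!seenIds.has(post.id)) {
      seenIds.add(post.id);
      fresh.push(post);
    }
  }
  return fresh;
}
```

This doesn't make the endpoint consistent, but it turns "missed forever" into "arrives a poll or two late" for posts that eventually propagate.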
Here is an interesting read on the subject.
And here is something from Facebook. You can skip to the Consistency section of the page (although I would recommend reading the entire post -- it is a very interesting overview of Facebook's architecture).
I'm having a problem with Advanced segments in Google Mobile App Analytics.
A condition has been set up to include all screens that match the regex "/01-12-2013/" -- but it's also showing me screens that do not contain this string. For example, I'm getting a screen name containing "/11-11-2013/", which I would have expected to be filtered out.
The segment also seems to return different results depending on which tab I'm in within Google Mobile App Analytics, if that helps at all. In "Audience Overview" it's returning 48.02% of all screen views; in "Behavior Overview" it's returning 71.51%.
Here are some screenshots to illustrate the problem.
This is going to sound a bit ridiculous, but after you create an advanced segment, give it an hour or two before relying on the data it provides. I have yet to find a solid answer as to why, but across a wide range of sites over the past year or two I've seen similar issues: when I create an advanced segment to filter specific pages, invariably the initial results still show irrelevant pages I specifically filtered out. The only thing I've been able to attribute this to is some sort of "lag", on Google's side, in updating the Analytics data/property/view/segment. In almost every case, I've simply waited an hour or two after creating an advanced segment, and by then the filtered data usually displays correctly.
An interesting thing I've also seen was that a third party reporting platform I use, that has a Google Analytics integration, actually displayed the correct advanced segmented data BEFORE it showed up properly in Analytics. Strange.
I have seen many instances where the Likes, Comments, and Shares reported by the Graph API endpoint for a particular post do not match the stats reported by Insights. For example:
https://graph.facebook.com/PAGEID_POSTID reports:
likes 1000, comments 450, shares 300
https://graph.facebook.com/PAGEID_POSTID/insights/post_stories_by_action_type reports:
likes 725, comments 375, shares 200
Or something like that. I do realize that sometimes, insights will show higher numbers because Insights tracks stories that are generated off of a shared copy of the original post (very cool), but these are cases where Insights reports numbers lower than what the Graph API endpoint shows.
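For reference, the comparison I'm doing amounts to this (field names follow the counts quoted above; both payloads are assumed to be already fetched and parsed):

```javascript
// For one post, report the metrics where Insights comes in lower than
// the plain Graph API endpoint -- the unexpected direction described above.
function insightsShortfall(graphCounts, insightsCounts) {
  const shortfall = {};
  for (const key of ['likes', 'comments', 'shares']) {
    const diff = (graphCounts[key] || 0) - (insightsCounts[key] || 0);
    if (diff > 0) shortfall[key] = diff; // Insights under-reports this metric
  }
  return shortfall;
}
```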
Is this because after 28 days, Facebook Insights stops updating for individual post stats?
Any ideas?
I've filed a bug here: https://developers.facebook.com/bugs/400094966692945
The unfortunate reality is that Facebook Insights is horribly buggy, and in general fixing API bugs does not seem to be much of a priority for Facebook.
If you visit your Insights page frequently you will see this. Days where the count of app removals spikes by 1000%, followed two days later by user installs spiking 1000%. Days where there are zero comments on your app's posts where you have recorded thousands in your database. The list goes on...
Even when they "fix" Insights bugs, they don't always fix the history, leaving the inaccurate data in place and only correcting it on a go-forward basis.
This is the kiss of death for any reporting or analytics system. Once you've seen over and over, from your own independent data, that the reporting system has bugs, you start to question everything it shows you: is this reality or is it a bug? How can I make any decisions based on this data if I cannot trust it? Folks with enterprise experience know this, but the kids who run Facebook...not so much.