How to retrieve a list of contacts that belong to a specific group without limit error - google-people-api

I am unable to read contacts of a group.
I use the contactGroups.list method (https://people.googleapis.com/v1/contactGroups) to read all groups.
Then I read all the contacts via the resource names supplied for the given group, using the people.get method (https://people.googleapis.com/v1/resourceName).
This works, but since a request is required for every contact, I immediately get the error:
Quota exceeded for quota metric 'Read requests' and limit 'Read requests per minute per user' of service 'people.googleapis.com' for consumer.
The limit is 75 requests / 60s / user.
Is there another way?

You could wrap your request in a try/except and retry the call after two seconds if an HttpError with status 429 (rate limit reached) is raised.
I'm not sure which language you are using, so here is the idea in Python:
def my_call(request):
    try:
        return request.execute()
    except HttpError as error:
        if error.resp.status == 429:
            time.sleep(2)
            return my_call(request)
        raise
Here request is what the people.get method builds, and HttpError can be imported from the googleapiclient library, I believe.
Note that it would probably be more efficient (API-call-wise) to use the people.connections.list method and retrieve the contacts from the response instead.
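If reducing the call count is the goal, here is one possible sketch of a two-step approach (assuming `service` is an authenticated People API client; the group resource name and person fields are placeholders): one contactGroups.get call returns the group's member resource names, and people.getBatchGet then fetches up to 200 contacts per request instead of one people.get call each.

```python
# Sketch: fetch a group's contacts in a handful of calls instead of one per
# contact. Assumes `service` was built with
# googleapiclient.discovery.build('people', 'v1', ...).
def get_group_members(service, group_resource_name):
    # One contactGroups.get call returns the member resource names.
    group = service.contactGroups().get(
        resourceName=group_resource_name,
        maxMembers=1000,
    ).execute()
    names = sorted(group.get('memberResourceNames', []))

    # people.getBatchGet accepts up to 200 resource names per request,
    # replacing one people.get call per contact.
    contacts = []
    for i in range(0, len(names), 200):
        batch = service.people().getBatchGet(
            resourceNames=names[i:i + 200],
            personFields='names,emailAddresses',
        ).execute()
        contacts.extend(r.get('person', {}) for r in batch.get('responses', []))
    return contacts
```

That turns N+1 requests into roughly 1 + N/200, which should stay well under the 75 requests/minute quota for typical group sizes.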


Updating Sales Order line items in Dynamics business central

I have a sales order created using the API for Business Central. The sales order has a single line item, and I want to update the quantity of that line item. The following is what I have tried so far.
Endpoint: https://api.businesscentral.dynamics.com/v1.0/domain.com/api/v1.0/companies(company_id)/salesOrders(sales_order_id)/salesOrderLines(sales_order_line_id)
where the sales order line id is of the form e86d3aa1-f2f8-ea11-aa61-0022481e3b8c-10000
as described in this document. When a PATCH request is made, I get the following exception:
')' or ',' expected at position 9 in
'(sale-order-line-item-id)'.
The exception stated above was also occurring when I was simply trying to get the line item, but that was fixed when I changed the URL to the form:
Endpoint:
https://api.businesscentral.dynamics.com/v1.0/domain.com/api/v1.0/companies(b4a4beb2-2d42-40dc-9229-5b5c371be4e3)/salesOrders(e86d3aa1-f2f8-ea11-aa61-0022481e3b8c)/salesOrderLines?filter=sequence eq 10000
This endpoint returns the correct response when I try to get the line item by issuing a
GET request. However, when I issue a PATCH request using the same endpoint, with a simple request body, e.g.
{"quantity" : 2.0}
it throws the exception:
'PATCH' requests for 'salesOrderLines' of EdmType 'Collection' are not
allowed within Dynamics 365 Business Central OData web services.
I am also specifying the If-Match header along with the request, containing the ETag value for the line item, but to no avail; the same exception occurs. Am I missing something? Any help will be appreciated.
For those who may visit this question later: after much trial and error through Postman, I finally figured out the problem. In my case the If-Match header, which is basically the ETag for the line item, was fine. The problem was with the API URL, specifically the way the line item id is specified. It has to be wrapped in single quotes, so the URL for the API call becomes:
https://api.businesscentral.dynamics.com/v1.0/domain.com/api/v1.0/companies(company_id)/salesOrders(sales_order_id)/salesOrderLines('sales_order_line_id')
You will note that we are not specifying company_id and sales_order_id in single quotes. The reason is that both of those parameters are of type GUID, whereas sales_order_line_id is of type string, as per the metadata document.
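A minimal sketch of the working call using only the Python standard library (the base URL, ids, token, and etag are placeholders, and both functions are hypothetical helpers, not part of any SDK): GUID keys stay bare, the string line-item id gets single quotes, and If-Match carries the line's @odata.etag.

```python
import json
import urllib.request

def sales_order_line_url(base, company_id, sales_order_id, line_id):
    # GUID keys (company, sales order) are bare; the line-item id is a
    # string key and must be wrapped in single quotes.
    return (f"{base}/companies({company_id})"
            f"/salesOrders({sales_order_id})"
            f"/salesOrderLines('{line_id}')")

def update_line_quantity(url, etag, token, qty):
    # If-Match carries the line item's current @odata.etag; Business Central
    # rejects the PATCH without a matching value.
    req = urllib.request.Request(
        url,
        data=json.dumps({"quantity": qty}).encode(),
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "If-Match": etag,
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```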
I am getting the error below:
{
    "error": {
        "code": "BadRequest_NotFound",
        "message": "Bad Request - Error in query syntax."
    }
}

Instagram Graph API - Fetch media insights metric when a user switched from personal to business account

I'm looking for a way to fetch Media Insights metrics in Instagram Graph API (https://developers.facebook.com/docs/instagram-api/reference/media/insights) with a nested query based on the userId, even when a client switched from a Personal to a Business account.
I use this nested query to fetch all the data I need: https://graph.facebook.com/v3.2/{userId}?fields=followers_count,media{media_type,caption,timestamp,like_count,insights.metric(reach, impressions)} (the insights.metric(reach, impressions) part causes the error; it works, however, for an account that has always been a Business one).
However, because some media linked to the userId were posted before the user switched to a Business account, instead of returning the data only for the media posted after, the API returns this error:
{
    "error": {
        "message": "Invalid parameter",
        "type": "OAuthException",
        "code": 100,
        "error_data": {
            "blame_field_specs": [
                [
                    ""
                ]
            ]
        },
        "error_subcode": 2108006,
        "is_transient": false,
        "error_user_title": "Media Posted Before Business Account Conversion",
        "error_user_msg": "The media was posted before the most recent time that the user's account was converted to a business account from a personal account.",
        "fbtrace_id": "Gs85pUz14JC"
    }
}
Is there a way to know, through the API, which media were created before and after the account switched from Personal to Business? Or is there a way to fetch the date on which the account was switched?
The only way I currently see to get the data I need is to use the /media edge and query insights for each media item until I get an error. That would give me approximately the date I need. However, this is not optimal at all, since we are rate-limited to 200 calls per user per hour.
I have the same problem.
For now, I switch between queries (falling back to the second if the first returns the error):
"userId"?fields=id,media.limit(100){insights.metric(reach, impressions)}
"userId"?fields=id,media.limit(100)
In the fallback case I show the user all insights as zero.
I don't know if there's a better alternative, such as identifying the time of the conversion to a Business account and fetching the posts within that DateTime range.
I got the same problem and solved it like this:
Use the nested query just like you did, including insights.metric.
If the error appears, do another call without insights.metric, to at least get all the other data.
For most accounts it works with no additional API call. For the rest, I just cannot get the insights and have to live with it, I guess, until Facebook/IG fixes the issue.
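The two-call fallback above can be sketched like this (`call_graph_api` is a hypothetical helper that performs the HTTP GET and returns the parsed JSON, including any top-level "error" object):

```python
# Try the nested query with insights first; on the media-before-conversion
# error (subcode 2108006), retry without insights.
FIELDS_WITH_INSIGHTS = "id,media.limit(100){insights.metric(reach,impressions)}"
FIELDS_WITHOUT_INSIGHTS = "id,media.limit(100)"

def fetch_media(call_graph_api, user_id):
    data = call_graph_api(user_id, FIELDS_WITH_INSIGHTS)
    if data.get("error", {}).get("error_subcode") == 2108006:
        # Media predates the business-account conversion: retry without
        # insights so at least the other fields come back.
        data = call_graph_api(user_id, FIELDS_WITHOUT_INSIGHTS)
    return data
```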
I got the same problem and solved it like this:
Step 1: Convert your Instagram account to a Professional account.
Step 2: Then, as the error suggests, publish a new post on Instagram and get its Post-ID.
Step 3: Then try a request using that Post-ID:
{Post-ID}?fields=comments_count,like_count,timestamp,insights.metric(reach,impressions)
curl -i -X GET "https://graph.facebook.com/v12.0/{Post-ID}?fields=comments_count%2Clike_count%2Ctimestamp%2Cinsights.metric(reach%2Cimpressions)&access_token={access_token}"
For more, see the insights documentation.
Here is the relevant logic from a script that can handle this error while still doing a full import. It works by reducing the requested limit to 1 once the error is encountered. It will keep requesting insights until it encounters the error again, then removes insights from the fields and returns to the requested limit.
limit = 50
error_2108006 = False
metrics = 'insights.metric%28impressions%29%2C'  # Must be URL-encoded for replacement
url = '/PAGE_ID/media?fields=%sid,caption,media_url,media_type&limit=%s' % (metrics, limit)

# While we have more pages
while True:
    # Make your API call to Instagram
    posts = get_posts_from_instagram(url)

    # Check for error 2108006
    if posts == 2108006:
        # First time getting this error: keep trying to get insights, but one by one
        if error_2108006 is False:
            error_2108006 = True
            url = url.replace('limit={}'.format(limit), 'limit=1')
            continue
        # Not the first time. Strip out insights and return to the desired limit.
        url = url.replace(metrics, '')
        url = url.replace('limit=1', 'limit={}'.format(limit))
        continue

    # Do something with the data
    for post in posts:
        continue

    # If there are more pages, fetch the next URL
    if 'paging' in posts and 'next' in posts['paging']:
        url = posts['paging']['next']
        continue

    # Done
    break

How to get rid of 'WARNING Stripped prohibited headers from URLFetch request'

Situation:
When requesting Google BigQuery from a Python handler, this warning shows up half a dozen times:
WARNING 2017-04-28 10:01:55,450 urlfetch_stub.py:550] Stripped prohibited headers from URLFetch request: ['content-length']
Then it either raises a deadline exception or successfully terminates the request:
HTTPException: Deadline exceeded while waiting for HTTP response
(The success / deadline-exception ratio is approximately 2 / 1.)
Here is how I do the request from the python handler:
import uuid
from google.cloud import bigquery
client = bigquery.Client()
q = client.run_async_query(str(uuid.uuid4()), "SELECT * FROM my_table")
q.use_legacy_sql = False
q.allowLargeResults = True
q.begin()
wait_for_job(q)
res = q.results()
Question: How do I prevent the warning from occurring?
Problem: The warning returned by the query is caught as a failure by GAE, which makes the handler resend the query, and so on until success or a confirmed deadline exception (source). This is problematic, as it multiplies the time needed to execute a request.
Extra information:
In my current Python handler I run three different requests one after the other. The number of warning messages is the same for all three requests, although it changes over time.
e.g.:
request 1: 3 warnings; requests 2 and 3 will then each show the warning three times.
wait 5 min
request 1: 6 warnings; requests 2 and 3 will then each show the warning six times.
This warning appears with the new BigQuery client.
This code works:
from oauth2client.contrib.appengine import AppAssertionCredentials
from apiclient.discovery import build
import httplib2
credentials = AppAssertionCredentials('https://www.googleapis.com/auth/sqlservice.admin')
http = httplib2.Http()
http = credentials.authorize(http)
service = build('bigquery', 'v2', http=http)
query = "SELECT * FROM [" + table + "]"
body = {'query': query, 'timeoutMs': 2000}
result = service.jobs().query(projectId=PROJECT_ID, body=body).execute()
So, it seems that GAE strips/modifies specific headers of all outbound HTTP requests by default (see link here). Additionally, after digging into the core HTTP helpers within the google-cloud-python client library, it looks like the default behavior is to add the content-length header (see code here).
While none of this is ideal in your case, I don't think it should be a dealbreaker. Adding a try...except clause should help alleviate the pain, although these errors will continue to show up in the logs. Additionally, you could insert a print(type(e)) on exception e to find out more about the exception from your logs. See the conversation here.
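A minimal sketch of that suggestion (`run_query` stands in for the run_async_query / wait_for_job sequence above, and the retry count is an arbitrary choice, not something the library prescribes):

```python
# Retry the query a few times, logging the concrete exception class
# (e.g. HTTPException: Deadline exceeded) so it shows up clearly in the logs.
def run_with_retry(run_query, attempts=3):
    for attempt in range(attempts):
        try:
            return run_query()
        except Exception as e:
            # Surface the exception type before retrying, as suggested above.
            print(type(e), e)
            if attempt == attempts - 1:
                raise
```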

Retrieve error code and message from Google Api Client in Python

I can't find a way to retrieve the HTTP error code and the error message from a call to a Google API using the Google API Client (in Python).
I know the call can raise an HttpError (if it's not a network problem), but that's all I know.
Hope you can help.
This is how to access the error details on HttpError in the Google API Client (Python):
import json
...
except HttpError as e:
    error_reason = json.loads(e.content)['error']['errors'][0]['message']
...
Actually, I found out that e.resp.status is where the HTTP error code is stored (e being the caught exception). I still don't know how to isolate the error message.
The HttpError class has a built-in _get_reason() method that calculates the reason for the error from the response content: _get_reason() source code.
The method's name is a bit misleading to me, because I needed the value of the actual reason key from the response, not the message key that _get_reason() returns. But the method does exactly what Valentin asked for.
Example of a response, decoded from bytes to JSON:
{
    "error": {
        "errors": [{
            "domain": "usageLimits",
            "reason": "dailyLimitExceeded",
            "message": "Daily Limit Exceeded. The quota will be reset at midnight Pacific Time (PT). You may monitor your quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/youtube.googleapis.com/quotas?project=908279247468"
        }],
        "code": 403,
        "message": "Daily Limit Exceeded. The quota will be reset at midnight Pacific Time (PT). You may monitor your quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/youtube.googleapis.com/quotas?project=908279247468"
    }
}
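Given a response body shaped like the one above, the code, reason, and message can be pulled out of e.content with the standard library alone (a sketch; `parse_http_error` is a hypothetical helper, not part of the client):

```python
import json

# e.content on an HttpError is a bytes payload shaped like the JSON above;
# this extracts the top-level code plus the first error's reason and message.
def parse_http_error(content):
    error = json.loads(content)["error"]
    first = error["errors"][0]
    return error["code"], first["reason"], first["message"]
```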
I found out HttpError has a status_code property. You can find it here.
from googleapiclient.errors import HttpError
...
except HttpError as e:
    print(e.status_code)
...
You can use error_details or reason from HttpError in the Google API Client (Python):
...
except HttpError as e:
    error_reason = e.reason
    error_details = e.error_details  # list of dicts
...

What does "annotate" mean in a request rate limiter

I'm using Django Ratelimit to limit the rate my views can be called by an IP.
But I don't know what the parameter block means, documented here.
When I set it to True, I get a 403 when my rate limit is exceeded.
But I don't understand what happens when it is set to False. The doc says:
block – False Whether to block the request instead of annotating.
My question is: what does "annotate" mean in this context?
As you say, the decorator raises a Ratelimited exception when block=True. This returns a 403 Permission Denied response to the user.
If block=False, no exception is raised. However, a boolean limited has been set on the request object. In your view, you can check for this 'annotation' using getattr, and handle it however you like.
was_limited = getattr(request, 'limited', False)
if was_limited:
    return HttpResponse("You have been rate limited")
So if you use block=False, it's up to you to check the value request.limited, and handle it properly.
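The annotation pattern can be illustrated without Django at all (a sketch with a stand-in request object; in a real view, django-ratelimit sets request.limited for you when block=False and the rate is exceeded):

```python
# Stand-in for Django's HttpRequest; the decorator would normally set
# `limited` on it when an over-limit request comes in with block=False.
class FakeRequest:
    pass

def my_view(request):
    was_limited = getattr(request, 'limited', False)
    if was_limited:
        # Degrade gracefully instead of the hard 403 you get with block=True.
        return "429: slow down"
    return "200: normal response"

req = FakeRequest()
req.limited = True  # what the decorator does on an over-limit request
```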