Google's docs for people.connections.list say the following:
The first page of a full sync request has an additional quota. If the quota is exceeded, a 429 error will be returned. This quota is fixed and can not be increased.
Does this mean that after you do the first full sync, you're not supposed to trigger another full sync for a while or you may get blocked by Google with status 429, or does it mean something else?
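For what it's worth, the pattern the docs seem to point at is: run the full sync once, keep the nextSyncToken from the last page, and use incremental syncs afterwards, so the extra first-page quota is rarely touched. A minimal sketch with the People API Java client (the field selection is an assumption on my part, and an authorized PeopleService is taken as given):

import com.google.api.services.people.v1.PeopleService;
import com.google.api.services.people.v1.model.ListConnectionsResponse;

public class ContactsSync {

    // Full sync once: page through all connections while requesting a sync
    // token, then return the token so later runs can sync incrementally.
    public static String fullSync(PeopleService people) throws Exception {
        String pageToken = null;
        String syncToken = null;
        do {
            ListConnectionsResponse resp = people.people().connections()
                    .list("people/me")
                    .setPersonFields("names,emailAddresses") // assumed fields
                    .setRequestSyncToken(true)
                    .setPageToken(pageToken)
                    .execute();
            // ... process resp.getConnections() here ...
            pageToken = resp.getNextPageToken();
            if (resp.getNextSyncToken() != null) {
                syncToken = resp.getNextSyncToken(); // arrives on the last page
            }
        } while (pageToken != null);
        return syncToken;
    }

    // Later runs: pass the stored token via setSyncToken(token) instead of
    // setRequestSyncToken(true); only changed contacts come back.
}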
I am trying to sync 1 million records to ES, and I am doing it using the bulk API in batches of 2k.
But after inserting around 25k-32k documents, Elasticsearch throws the following exception.
Unable to parse response body: org.elasticsearch.ElasticsearchStatusException
ElasticsearchStatusException[Unable to parse response body]; nested: ResponseException[method [POST], host [**********], URI [/_bulk?timeout=1m], status line [HTTP/1.1 403 Request throttled due to too many requests]
403 Request throttled due to too many requests /_bulk]; nested: ResponseException[method [POST], host [************], URI [/_bulk?timeout=1m], status line [HTTP/1.1 403 Request throttled due to too many requests]
403 Request throttled due to too many requests /_bulk];
I am using AWS Elasticsearch.
I think I need to implement a wait strategy to handle it, something like checking the ES status and only calling bulk insert when ES reports it is okay.
But I am not sure how to implement it. Does ES offer anything pre-built for this?
Or is there a better way to handle it?
Thanks in advance.
Update:
I am using AWS Elasticsearch version 6.8.
Thanks #dravit for including my previous SO answer in the comment. After following the comments, it seems the OP wants to improve the performance of bulk indexing and wants exponential backoff, which I don't think Elasticsearch provides out of the box.
I see that you are putting a fixed pause of 1 second after every request, which will not work in all cases, and if you have a large number of batches and documents to index, it will certainly take a lot of time. There are a few more suggestions from my side to improve performance, followed by a retry sketch after the error-handling quote below.
Follow my tips to improve the reindex speed in Elasticsearch, see which of the things listed there apply to you, and check by what factor doing them improves speed.
Find a batching strategy which best suits your environment. I am not sure, but this article from #spinscale, who is a developer of the Java high-level REST client, might help, or you can ask a question on https://discuss.elastic.co/. I remember he shared a very good batching strategy in one of his webinars but couldn't find the link to it.
Watch various ES metrics apart from the bulk threadpool and queue size, and if your ES still has capacity, see whether you can increase the queue size and the rate at which you send requests to ES.
Check the error handling guide here:
If you receive persistent 403 Request throttled due to too many requests or 429 Too Many Requests errors, consider scaling vertically. Amazon Elasticsearch Service throttles requests if the payload would cause memory usage to exceed the maximum size of the Java heap.
Scale your application vertically or increase the delay between requests.
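On the wait strategy itself: since the throttling surfaces as a ResponseException with HTTP status 403 or 429 nested inside the ElasticsearchStatusException, one option is a small wrapper around client.bulk that sleeps and retries with an exponentially growing delay. A minimal sketch, assuming the Java high-level REST client; the helper name and the delay/cap values are my own choices, tune them for your cluster:

import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestHighLevelClient;

public class BulkRetryHelper {

    // Retries a bulk request with exponential backoff when AWS ES throttles
    // the request (HTTP 403 or 429). Not something the client ships with.
    public static BulkResponse bulkWithBackoff(RestHighLevelClient client,
                                               BulkRequest request,
                                               int maxRetries) throws Exception {
        long delayMs = 1000; // initial delay of 1 second
        for (int attempt = 0; ; attempt++) {
            try {
                return client.bulk(request, RequestOptions.DEFAULT);
            } catch (Exception e) {
                if (!isThrottled(e) || attempt >= maxRetries) {
                    throw e; // real failure, or retries exhausted
                }
                Thread.sleep(delayMs);
                delayMs = Math.min(delayMs * 2, 60_000); // double, cap at 60s
            }
        }
    }

    // Walks the cause chain looking for a ResponseException with 403/429.
    private static boolean isThrottled(Throwable t) {
        while (t != null) {
            if (t instanceof ResponseException) {
                int status = ((ResponseException) t).getResponse()
                        .getStatusLine().getStatusCode();
                return status == 403 || status == 429;
            }
            t = t.getCause();
        }
        return false;
    }
}

If throttling keeps recurring even with backoff, the quote above suggests the payloads are pressing on the Java heap, so shrinking the batch size below 2k may help more than waiting longer.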
I am making API requests to the Google Cloud Platform to create disks and get status code 200. But when I then check whether the disk is ready, I get "error":{"code":404,"reason":"notFound","domain":"global"}. When I check the Google Cloud logs, I see the error code below for the request: "status": { "code": 8, "message": "RATE_LIMIT_EXCEEDED" }. Can anyone suggest possible solutions for this, i.e. which exact quota limit should be increased? I have tried a retry mechanism with a pause of about 3 seconds; with that I was able to reduce the probability, but the real issue is still there.
You can request an increase in the quota allocation using the GCP Console -> IAM & Admin -> Quotas. Find the Compute Engine quota that is showing up as exceeded and click on it to drill down to the specific operation types. I believe you were hitting the limit "Operation read requests", i.e. you may have hit an operation read request limit.
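One way to cut down those reads: the disk insert call returns a zone operation, so instead of immediately GETting the disk (which 404s until creation finishes and burns read quota on every retry), poll the operation with backoff until it reports DONE. A rough sketch over the REST API using Java 11's HttpClient; the operation URL and token handling are assumptions, and a real version should parse the JSON instead of string-matching:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DiskOperationPoller {

    // Polls the zone operation returned by the disk insert until it is DONE,
    // backing off between polls to stay under the read-request rate limit.
    // operationUrl: the selfLink from the insert response (assumed available).
    // accessToken: a valid OAuth2 access token (setup not shown).
    public static void waitForDone(String operationUrl, String accessToken)
            throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        long delayMs = 1000; // first poll after 1 second
        while (true) {
            HttpRequest req = HttpRequest.newBuilder(URI.create(operationUrl))
                    .header("Authorization", "Bearer " + accessToken)
                    .GET()
                    .build();
            HttpResponse<String> resp =
                    http.send(req, HttpResponse.BodyHandlers.ofString());
            // Crude check; parse the JSON properly in real code.
            if (resp.statusCode() == 200 && resp.body().contains("\"DONE\"")) {
                return; // the disk is now safe to read
            }
            Thread.sleep(delayMs);
            delayMs = Math.min(delayMs * 2, 30_000); // exponential backoff, cap 30s
        }
    }
}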
The error Throttling failure: Maximum SigV2 SMTP sending rate exceeded. suddenly started to appear in our .NET application, though we were not exceeding any quota (14 mails per second or 50,000 per day) according to our AWS sending statistics.
I can see many similar issues about Throttling – Maximum sending rate exceeded on Stack Overflow, but I'm confused by the SigV2 in my error message.
Searching other resources like this one gave me the idea that this issue started happening around October 20, 2020, and that there is no exact answer as to why. The only solution I can see is to migrate from the SigV2 signing process to the new method.
The question is: why did this happen, and can this issue be solved without changes in the application code?
This is not a service quota issue. The issue you are facing is due to excessive authentication requests from your side. It will not occur if you renew your SMTP credentials in the SES console so that they use SigV4.
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html
As you can see, your daily rate was exceeded.
But you can open a case and ask to increase your maximum rate.
I work for a Student Information System and we're using the Admin SDK Directory API to create school districts' Google org unit structures from within our software.
POST https://www.googleapis.com/admin/directory/v1/customer/customerId/orgunits
When generating these API requests we're consistently receiving dailyLimitExceeded errors, even though the district's quota has not been reached.
This error can be bypassed by ignoring it and implementing an exponential back-off routine, but I believe it is behaving much more like the quotaExceeded error is intended to behave, rather than dailyLimitExceeded, in that the request succeeds afterward on the first retry.
In detail, the test I just ran successfully completed 9 of these API calls and then I received this response on the 10th:
Google.Apis.Requests.RequestError
Quota limit exceeded for the day. [403]
Errors [Message[Quota limit exceeded for the day.] Location[ - ] Reason[dailyLimitExceeded] Domain[usageLimits]
From the start of the batch of API calls, it took about 10 seconds to reach the point where the error occurred.
Thanks for your help!
What I would suggest is to slow down your API requests. Don't make something like 10 requests in 1 second; give it some space between requests. You are correct to implement exponential backoff (see the sketch below). Also, if you can, use other accounts as well to make requests.
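To make the back-off concrete, here is a small generic helper with random jitter added, so parallel workers don't all retry at the same instant. The helper name is mine, and a real version should inspect the 403's reason (dailyLimitExceeded/quotaExceeded) before retrying rather than retrying every exception as this sketch does:

import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

public class Backoff {

    // Runs call(), retrying with exponential backoff plus jitter on failure.
    // Simplified: retries any exception; filter for quota errors in real code.
    public static <T> T withBackoff(Callable<T> call, int maxRetries)
            throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt >= maxRetries) {
                    throw e;
                }
                // 1s, 2s, 4s, ... plus up to 1s of random jitter.
                long sleepMs = (1L << attempt) * 1000
                        + ThreadLocalRandom.current().nextLong(1000);
                Thread.sleep(sleepMs);
            }
        }
    }
}

Each orgunits insert would then be wrapped as withBackoff(() -> insertOrgUnit(...), 5), which also naturally spaces the requests out.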
I'm getting this error when trying to activate the Gmail API in my project from https://console.cloud.google.com/apis/api/gmail-json.googleapis.com/overview:
Spanner encountered a persistent error: generic::RESOURCE_EXHAUSTED:
Quota exceeded: chubby!mdb/apiserving-spanner in alloc
cloud_flex:us_global over user limit for [FLASH_SPACE].
What should I do to fix that?
(My final goal is to update user email signatures for my domain.)
Based on Standard Error Responses, "quotaExceeded" indicates that the limit for this view has been reached or exceeded. The recommended action for such an error is to retry using exponential back-off, and you need to wait for at least one in-progress request for this view (profile) to complete.
For more information, you may also go through Usage Limits and Implementing Server-Side Authorization.
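Once the API is enabled, the signature update itself is a settings call. A sketch with the Gmail API Java client, assuming an authorized Gmail service with the gmail.settings.basic scope; the alias address is a placeholder:

import com.google.api.services.gmail.Gmail;
import com.google.api.services.gmail.model.SendAs;

public class SignatureUpdater {

    // Patches the signature on one send-as alias. "me" targets the
    // authorized user; the alias address here is a placeholder.
    public static void updateSignature(Gmail gmail) throws Exception {
        SendAs sendAs = new SendAs().setSignature("Kind regards,<br>Jane Doe");
        gmail.users().settings().sendAs()
                .patch("me", "jane.doe@example.com", sendAs)
                .execute();
    }
}

To do this for a whole domain, you would impersonate each user through a service account with domain-wide delegation, and wrap each call in the same exponential back-off recommended above.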