I'm getting this error when trying to activate the Gmail API in my project from https://console.cloud.google.com/apis/api/gmail-json.googleapis.com/overview:
Spanner encountered a persistent error: generic::RESOURCE_EXHAUSTED:
Quota exceeded: chubby!mdb/apiserving-spanner in alloc
cloud_flex:us_global over user limit for [FLASH_SPACE].
What should I do to fix that?
(My final goal is to update the user email signatures for my domain.)
Based on the Standard Error Responses documentation, "quotaExceeded" indicates that the per-view limit has been reached or exceeded. The recommended action for this error is to retry using exponential back-off; you also need to wait for at least one in-progress request for this view (profile) to complete.
For more information, you may also go through Usage Limits and Implementing Server-Side Authorization.
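Since the end goal is updating user signatures, here is a minimal sketch of what retry with exponential back-off can look like around the Gmail API sendAs settings call. It assumes a service account with domain-wide delegation and the google-api-python-client library; the file name, retry counts, and jitter are illustrative, not prescribed by the docs above.
```python
import random
import time

from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

SCOPES = ["https://www.googleapis.com/auth/gmail.settings.basic"]

def gmail_service(user_email):
    # Assumes a service account key with domain-wide delegation to the user.
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES, subject=user_email)
    return build("gmail", "v1", credentials=creds)

def update_signature_with_backoff(user_email, signature_html, max_retries=5):
    service = gmail_service(user_email)
    for attempt in range(max_retries):
        try:
            return service.users().settings().sendAs().patch(
                userId="me",
                sendAsEmail=user_email,
                body={"signature": signature_html},
            ).execute()
        except HttpError as err:
            # Retry only on rate-limit / transient errors; re-raise anything else.
            if err.resp.status not in (403, 429, 500, 503):
                raise
            # Exponential back-off with jitter: ~1s, 2s, 4s, 8s, ...
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError("Gave up after %d retries" % max_retries)
```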
We have a pubsub subscription set up to pass requests to a Google Cloud Function.
Both the cloud function and the subscription to it are set to "Retry on Failure" (both with exponential back-off policies fwiw).
The Google Cloud Function is limited to 40 concurrent instances.
When the subscription queue is larger than the available instances, the expected behaviour is that delivery will fail and be retried later.
What seems to be happening is the logs are filled with messages saying:
{
"textPayload": "The request was aborted because there was no available instance.",
"insertId": "6109fbbb0007ec4aaa3855a9",
...
}
And the subscription messages are just dropped and not retried.
Is this the expected behaviour? It seems crazy to me, but if so, what architecture should you put in place to catch these dropped messages?
Edit: These issues started showing up in our logs on July 5, 2021 and can't be found in logs before that date. Before that, the pubsub/GCF combo worked as expected.
The error you are encountering is a known issue, and updates can be tracked through this Issue Tracker. You can also STAR the issue to receive automatic updates and give it traction by referring to this link. The tracker also discusses workarounds to mitigate the request aborts. Since you have already implemented retries with exponential backoff, please take a look at the other solutions provided here.
If your concern is with Google Cloud Functions scalability, or these errors in general require further investigation, please reach out to GCP support if you have a support plan. Otherwise, please open an issue in the issue tracker.
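For the "catch these dropped messages" part of the question, one general-purpose Pub/Sub pattern (a standard feature, not something the tracker specifically prescribes) is to attach a dead-letter topic, so messages that exhaust their delivery attempts land somewhere inspectable instead of disappearing. A rough sketch with the Python client; the project, topic, and subscription names are placeholders:
```python
from google.cloud import pubsub_v1

project = "my-project"  # placeholder project id

subscription_path = f"projects/{project}/subscriptions/gcf-subscription"  # placeholder
topic_path = f"projects/{project}/topics/gcf-topic"                       # placeholder
dead_letter_topic = f"projects/{project}/topics/gcf-dead-letter"          # placeholder

subscriber = pubsub_v1.SubscriberClient()
with subscriber:
    subscriber.create_subscription(
        request={
            "name": subscription_path,
            "topic": topic_path,
            # After this many failed delivery attempts, the message is
            # forwarded to the dead-letter topic instead of being dropped.
            # (The Pub/Sub service account needs publish permission on it.)
            "dead_letter_policy": {
                "dead_letter_topic": dead_letter_topic,
                "max_delivery_attempts": 10,
            },
            # Exponential back-off between redelivery attempts.
            "retry_policy": {
                "minimum_backoff": {"seconds": 10},
                "maximum_backoff": {"seconds": 600},
            },
        }
    )
```
You can then hang a plain pull subscription (or another function) off the dead-letter topic to inspect and replay anything that failed while no instances were available.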
I am making API requests to the Google Cloud Platform to create disks and get status code 200. But when I then check whether the disk is ready, I get "error":{"code":404,"reason":"notFound","domain":"global"}. When I check the Google Cloud logs, I see the following error code for the request: "status": { "code": 8, "message": "RATE_LIMIT_EXCEEDED" }. Can anyone suggest possible solutions for this, i.e. which exact quota limit should be increased? I have tried a retry mechanism with a pause of about 3 seconds included; with that I was able to reduce the probability, but the real issue is still there.
You can request an increase in the quota allocation using the GCP Console -> IAM & Admin -> Quotas. Find the Compute Engine quota that is showing up as exceeded and click on it to drill down to the specific operation types. I believe you were hitting the "Operation read requests" limit.
You may have hit an operation read request limit.
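If the limit being hit really is "Operation read requests", polling the disk-insert operation with exponential back-off (rather than a fixed ~3 s pause) usually reduces how many reads you burn. A rough sketch with the Python API client; the project/zone values are placeholders and the back-off numbers are just a starting point:
```python
import time

from googleapiclient.discovery import build

compute = build("compute", "v1")  # assumes Application Default Credentials

def wait_for_operation(project, zone, operation_name,
                       initial_delay=2.0, max_delay=60.0):
    """Poll a zonal operation with exponential back-off instead of a fixed
    pause, so fewer 'Operation read requests' are consumed per disk."""
    delay = initial_delay
    while True:
        op = compute.zoneOperations().get(
            project=project, zone=zone, operation=operation_name).execute()
        if op.get("status") == "DONE":
            if "error" in op:
                raise RuntimeError(op["error"])
            return op
        time.sleep(delay)
        delay = min(delay * 2, max_delay)

# Usage sketch: create a disk, wait for the operation, only then read the disk.
# result = compute.disks().insert(project="my-project", zone="us-central1-a",
#                                 body={"name": "disk-1", "sizeGb": "100"}).execute()
# wait_for_operation("my-project", "us-central1-a", result["name"])
```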
Three days ago we received an alert from the Facebook developers page informing us that one of our apps had reached 100% of the hourly rate limit. Our application had a bug that caused the increase in calls to the APIs, which we fixed yesterday afternoon. Since we deployed the fix, the API calls graph ("Application Level Rate Limiting") shows that we don't reach the limit, but the calls to the Facebook APIs are still failing. We want to know whether there is a period of time before access to the APIs recovers once we stop hitting that limit.
Here you can see a screenshot of the alert: [alert screenshot]
In the response headers of one of the calls, we receive this error:
Status code: 403
Header name: WWW-Authenticate
Header value: OAuth "Facebook Platform" "invalid_request" "(#4) Application request limit reached
You can see the header here
You are not the only one right now:
https://developers.facebook.com/support/bugs/169774397034403/
But I suppose it should be gone after a day or a few hours. In my experience, sometimes I can make a few calls and then it shuts me off again, even though our application is not that API-call intensive.
This is the response from Facebook:
Dear all,
We checked with our rate limiting team who confirmed that several
improvements were made to help you troubleshoot rate limit related
error messages. For example, we've fixed an existing graph and added a
new one in the app dashboard to give you more info about the status of
your app.
If you continue to receive error code #4 in your request, we'd
appreciate it if you can create a new bug report because this thread
is getting rather long. We'll be happy to analyze each individual case
for you if you can provide the following info:
your app id, the entire error message including the trace id, and a screenshot of the graphs on your app dashboard
Thanks for your patience while we looked into this.
Xiao
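While waiting for a throttled app to recover, one defensive measure on your side (not something Facebook's reply above asks for) is to watch the X-App-Usage header that recent Graph API versions return and back off before you reach 100% again. A small sketch, assuming the requests library; the endpoint, API version, token, threshold, and sleep time are all placeholders:
```python
import json
import time

import requests

GRAPH_URL = "https://graph.facebook.com/v2.10/me"  # placeholder endpoint/version
ACCESS_TOKEN = "..."                               # placeholder token

def call_graph_api(params, usage_threshold=90):
    resp = requests.get(GRAPH_URL, params={"access_token": ACCESS_TOKEN, **params})
    # X-App-Usage reports call_count / total_time / total_cputime as
    # percentages of the app-level hourly budget (when the header is present).
    usage = json.loads(resp.headers.get("x-app-usage", "{}"))
    if usage.get("call_count", 0) >= usage_threshold:
        # Getting close to the limit: pause before issuing more calls.
        time.sleep(60)
    if resp.status_code == 403:
        # Error #4: application request limit reached; caller should retry later.
        raise RuntimeError(resp.json().get("error", {}).get("message", "rate limited"))
    return resp.json()
```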
We have 2 React Native apps that use AWS Cognito for authentication. We use the library react-native-aws-cognito-js in our code. The apps were working fine until these last 2 days; they are now experiencing intermittent "Internal Server Error" responses.
How can I find more information about this error? Any tool can help us pinpoint the cause?
Update
From CloudTrail, each API call has a "CreateNetworkInterface" event. Many of these API calls have the error code "Client.NetworkInterfaceLimitExceeded". What is the cause of this, and what is the solution?
According to this AWS doc (in Chinese), CloudWatch will not write to the log when the error is due to insufficient IP addresses/ENIs. That explains the increase in the error count but the absence of logs in CloudWatch.
Update 2
We found a scheduled Lambda job which may have exhausted the IP addresses. We stopped the batch job, but we still can't have many users log in to the server due to the "Client.NetworkInterfaceLimitExceeded" error. I realized that there are many "CreateNetworkInterface" events and few "DeleteNetworkInterface" events. How can I "clean up / reset" all network interfaces in the VPC?
Short answer: CloudTrail.
Long answer with a suggestion:
Assuming your application code is fine, the most likely cause of your 500 error is Cognito's default limits (e.g., number of calls per user): https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html.
AWS suggests using CloudTrail for logging API calls.
However, to prove that the limits are the cause, I would suggest adding some logging around the API call yourself; then, in development, hit your app/API with a high number of calls, and most likely you will see the 500 error caused by the limits.
You could do the following in the terminal:
for i in `seq 1 1000`; do curl --cookie SecureCookie=TokenValueFromAWS http://localhost:desirablePort/SecuredPath; done
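For the "Update 2" part of the question (cleaning up leftover network interfaces after the Lambda job exhausted the subnet's IPs), a boto3 sketch along these lines can delete ENIs that are in the "available" (unattached) state. This is not part of the answer above, the region and VPC id are placeholders, and you should double-check which ENIs are safe to remove before running anything like it:
```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

def delete_available_enis(vpc_id):
    """Delete unattached ('available') network interfaces in a VPC.
    ENIs still attached to Lambda/EC2 are skipped because the status
    filter only matches unattached ones."""
    paginator = ec2.get_paginator("describe_network_interfaces")
    pages = paginator.paginate(Filters=[
        {"Name": "vpc-id", "Values": [vpc_id]},
        {"Name": "status", "Values": ["available"]},
    ])
    for page in pages:
        for eni in page["NetworkInterfaces"]:
            print("Deleting", eni["NetworkInterfaceId"])
            ec2.delete_network_interface(
                NetworkInterfaceId=eni["NetworkInterfaceId"])

# delete_available_enis("vpc-0123456789abcdef0")  # placeholder VPC id
```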
I work for a Student Information System, and we're using the Admin SDK Directory API to create school districts' Google org unit structures from within our software.
POST https://www.googleapis.com/admin/directory/v1/customer/customerId/orgunits
When generating these API requests, we're consistently receiving dailyLimitExceeded errors even when the district's quota has not been reached.
This error can be bypassed by ignoring it and implementing an exponential back-off routine, but I believe it is behaving much more like the quotaExceeded error is intended to behave than like dailyLimitExceeded, in that the request succeeds on the first retry afterward.
In detail, the test I just ran successfully completed 9 of these API calls and then I received this response on the 10th:
Google.Apis.Requests.RequestError
Quota limit exceeded for the day. [403]
Errors [Message[Quota limit exceeded for the day.] Location[ - ] Reason[dailyLimitExceeded] Domain[usageLimits]
From the start of the batch of API calls it took about 10 seconds to get to the point where the error occurred.
Thanks for your help!
What I would suggest is slowing down your API requests. Don't make something like 10 requests in 1 second; leave some space between requests. You are correct to implement exponential backoff. Also, if you can, use other accounts as well to make the requests.
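A tiny sketch of what "leave some space between requests" can look like around the orgunits insert call, combined with back-off on 403s. The directory_service object (built elsewhere with the Admin SDK Directory API client), the pacing interval, and the retry count are assumptions, not part of the answer above:
```python
import time

from googleapiclient.errors import HttpError

def create_org_units(directory_service, customer_id, org_units,
                     pause_seconds=1.0, max_retries=5):
    """Insert org units one at a time with a fixed pause between calls,
    falling back to exponential back-off when a 403 quota error appears."""
    for body in org_units:
        for attempt in range(max_retries):
            try:
                directory_service.orgunits().insert(
                    customerId=customer_id, body=body).execute()
                break
            except HttpError as err:
                if err.resp.status != 403:
                    raise
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
        else:
            raise RuntimeError("Org unit insert kept failing: %r" % body)
        time.sleep(pause_seconds)  # space out even the successful calls
```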