How to validate the RefreshToken programmatically in Google? - google-cloud-platform

We are using the Google Ads API and want to validate the refresh token programmatically, because calling the API with an incorrect or expired refresh token takes a very long time to throw an exception (roughly 60 minutes or more) and ends in a 504 TIMEOUT. There is also a limit on the number of refresh tokens we can create: at most 50 at a time, and creating a 51st refresh token expires the oldest one. That makes it fairly likely we will run into this issue, so we want to know whether there is an API we can use to validate the token and take appropriate action, instead of calling the Google Ads API directly and hitting the timeout.
We also raised this requirement on the Google Ads forum and were advised to contact GCP support. Link to the question: https://groups.google.com/g/adwords-api/c/tqOdXsnL5NI
We tried calling listAccessibleCustomers, expecting to get an invalid-token exception within milliseconds or a few seconds so we could log it and notify our customers of the error. Instead, the call hung for almost 61 minutes and then a 504 TIMEOUT occurred.

You really need to post your code. You said you tried calling the listaccessiblecustomers service, but how? Are you using the client libraries? If so, what language are you even using?
You need to put in a bit of effort if you need some help. Remember, we can't see what you see on the screen in front of you.
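For what it's worth, as far as I know there is no dedicated "validate this refresh token" endpoint, but a common workaround is to exchange the refresh token for an access token at Google's standard OAuth 2.0 token endpoint before touching the Ads API: a revoked or malformed refresh token fails fast with an invalid_grant error instead of hanging. A minimal sketch in Python, assuming the requests library and your OAuth client ID and secret (the function name and the 10-second timeout are illustrative):

import requests

TOKEN_URL = "https://oauth2.googleapis.com/token"  # Google's OAuth 2.0 token endpoint

def refresh_token_is_valid(client_id, client_secret, refresh_token):
    # Try to exchange the refresh token for an access token.
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "refresh_token",
            "client_id": client_id,
            "client_secret": client_secret,
            "refresh_token": refresh_token,
        },
        timeout=10,  # fail fast instead of waiting on a long server-side timeout
    )
    if resp.status_code == 200:
        return True  # Google issued an access token, so the refresh token works
    if resp.status_code == 400 and resp.json().get("error") == "invalid_grant":
        return False  # token is expired, revoked, or simply wrong
    resp.raise_for_status()  # anything else (network issue, 5xx) is a different problem

If this returns False, you can log the error and notify the customer immediately, and only call listAccessibleCustomers once the token is known to be good.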

Related

How to change failure message for Alexa?

I want to change the default failure message in Alexa: "Sorry, I'm having trouble accessing your {} skill right now."
You cannot change that prompt, but you can write your skill to avoid it as much as possible. The error happens when Alexa is not able to get a valid response from your skill endpoint. There can be multiple reasons for that, as outlined below:
1. Your endpoint is giving an invalid response
This can be due to errors or exceptions in your endpoint code. Make sure errors and exceptions don't occur, and when they do, that there is code to catch them and return a valid response to Alexa with an error message of your choice (see the sketch after this list).
2. Your endpoint availability
Make sure your endpoint is available at all times. This is pretty much guaranteed if you are using a Lambda endpoint, but if you host your own web service endpoint, you must put measures in place to keep it available for Alexa to communicate with.
3. Your endpoint response time
Make sure your endpoint returns its response within the time window Alexa allows (about 8 seconds). Also, if you are using Lambda functions, make sure they are configured with a reasonable execution time so they don't hit their own timeout.
If you cover the exception/error/availability scenarios well then you can avoid the default error message as much as possible.
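To make point 1 concrete, here is a minimal sketch of a Lambda-hosted handler in Python using the raw response JSON (handle_request is a hypothetical stand-in for your real skill logic) that catches any exception and still returns a valid response, so Alexa speaks your message instead of the default failure prompt:

def handle_request(event):
    # Hypothetical placeholder for the real skill logic (intent routing, backend calls).
    intent = event.get("request", {}).get("intent", {}).get("name", "unknown")
    return "You invoked the {} intent.".format(intent)

def lambda_handler(event, context):
    try:
        speech = handle_request(event)
    except Exception:
        # Return a valid response with our own apology instead of letting the
        # exception bubble up and trigger Alexa's default failure prompt.
        speech = "Sorry, something went wrong on our side. Please try again."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }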

Facebook API calls rate limit reached

Three days ago we received an alert from the Facebook developers page informing us that one of our apps had reached 100% of the hourly rate limit. Our application had a bug that caused the increase in API calls, which we fixed yesterday afternoon. Since we deployed the fix, the API-calls graph ("Application Level Rate Limiting") shows that we no longer reach the limit, but calls to the Facebook APIs are still failing. We want to know whether there is a recovery period before access to the APIs is restored once we stay under the limit.
Here you can see a screenshot of the alert:
[alert screenshot]
In the response headers of one of the calls, we receive this error:
Status code: 403
Header name: WWW-Authenticate
Header value: OAuth "Facebook Platform" "invalid_request" "(#4) Application request limit reached"
You can see the header here
You are not the only one right now:
https://developers.facebook.com/support/bugs/169774397034403/
But I suppose it should be gone after a day or a few hours. In my experience, sometimes I can make a few calls and then it shuts me off again, even though our application is not that API-call intensive.
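While waiting for the limit to reset, it also helps to detect error code 4 in your own code and back off instead of hammering the API. A rough sketch in Python (the requests library, the /me endpoint, and the starting delay are illustrative choices, not something Facebook prescribes):

import time
import requests

GRAPH_URL = "https://graph.facebook.com/me"  # any Graph API call, used here as an example

def call_with_backoff(access_token, max_attempts=5):
    # Call the Graph API and back off when error code 4 (app-level rate limit) comes back.
    delay = 60  # the limit is per rolling hour, so very short retries rarely help
    for attempt in range(max_attempts):
        resp = requests.get(GRAPH_URL, params={"access_token": access_token})
        if resp.status_code == 200:
            return resp.json()
        error = resp.json().get("error", {})
        if error.get("code") == 4:  # "(#4) Application request limit reached"
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts
            continue
        resp.raise_for_status()  # some other error: surface it
    raise RuntimeError("Still rate limited after {} attempts".format(max_attempts))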
This is the response from Facebook:
Dear all,
We checked with our rate limiting team who confirmed that several
improvements were made to help you troubleshoot rate limit related
error messages. For example, we've fixed an existing graph and added a
new one in the app dashboard to give you more info about the status of
your app.
If you continue to receive error code #4 in your request, we'd
appreciate it if you can create a new bug report because this thread
is getting rather long. We'll be happy to analyze each individual case
for you if you can provide the following info:
- your app id
- the entire error message, including the trace id
- a screenshot of the graphs on your app dashboard
Thanks for your patience while we looked into this.
Xiao

When using the Admin SDK directory API to insert Org Units a dailyLimitExceeded error is returned even though that quota has not been reached

I work for a Student Information System, and we're using the Admin SDK Directory API to create school districts' Google Org Unit structures from within our software.
POST https://www.googleapis.com/admin/directory/v1/customer/customerId/orgunits
When generating these API requests, we consistently receive dailyLimitExceeded errors even though the district's quota has not been reached.
The error can be bypassed by ignoring it and implementing an exponential back-off routine, but the behavior looks much more like how the quotaExceeded error is intended to act than dailyLimitExceeded, in that the request succeeds on the first retry.
In detail, the test I just ran successfully completed 9 of these API calls and then I received this response on the 10th:
Google.Apis.Requests.RequestError
Quota limit exceeded for the day. [403]
Errors [Message[Quota limit exceeded for the day.] Location[ - ] Reason[dailyLimitExceeded] Domain[usageLimits]
From the start of the batch of API calls it took about 10 seconds to get to the point where the error occurred.
Thanks for your help!
What I would suggest is to slow down your API requests: don't make ten requests in one second; leave some space between them. You are correct to implement exponential backoff (a sketch follows below). Also, if you can, use other accounts to make requests as well.
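A sketch of what that can look like with the Google API Python client (the service-account file, delegated admin address, and org-unit body are hypothetical; the retry loop is the standard exponential-backoff pattern, not an official recipe):

import random
import time
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

SCOPES = ["https://www.googleapis.com/auth/admin.directory.orgunit"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")  # hypothetical delegated super-admin
service = build("admin", "directory_v1", credentials=creds)

def insert_org_unit(body, max_retries=5):
    # Insert an org unit, retrying 403 quota-style errors with exponential backoff plus jitter.
    for attempt in range(max_retries):
        try:
            return service.orgunits().insert(customerId="my_customer", body=body).execute()
        except HttpError as err:
            if err.resp.status == 403 and attempt < max_retries - 1:
                time.sleep(2 ** attempt + random.random())  # 1s, 2s, 4s, ... plus jitter
                continue
            raise

insert_org_unit({"name": "Grade 1", "parentOrgUnitPath": "/"})

Spacing the inserts out with even a short fixed pause between calls tends to avoid the error in the first place.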

How AWS Cognito User Pool defends against bruteforce attacks

I am going to use the AWS Cognito User Pool product as the user directory for an application and have several questions:
Does Amazon throttle requests to a Cognito User Pool, and if so, what rate of calls triggers throttling?
How does Cognito defend against brute-force attacks on login/password?
After a couple of hours of searching I found these two exceptions in the source code:
TooManyFailedAttemptsException: This exception gets thrown when the user has made too many failed attempts for a given action (e.g., sign in). HTTP Status Code: 400
TooManyRequestsException: This exception gets thrown when the user has made too many requests for a given operation. HTTP Status Code: 400
Also, I tried to log in with wrong credentials to test the limits and got NotAuthorizedException: Password attempts exceeded after the 5th attempt.
In a similar scenario, I tried to brute-force the forgot-password flow, but after 10 failed attempts I got LimitExceededException: Attempt limit exceeded, please try after some time.
I think that is how they do it (a small test sketch follows below).
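For reference, this is roughly how such a test can be reproduced with boto3 against your own test user pool (the region, app-client id, and user name are placeholders, and USER_PASSWORD_AUTH must be enabled on the app client):

import boto3
from botocore.exceptions import ClientError

client = boto3.client("cognito-idp", region_name="us-east-1")  # region is an assumption

def try_sign_in(client_id, username, password):
    # Attempt a USER_PASSWORD_AUTH sign-in and report how Cognito reacts.
    try:
        client.initiate_auth(
            ClientId=client_id,
            AuthFlow="USER_PASSWORD_AUTH",
            AuthParameters={"USERNAME": username, "PASSWORD": password},
        )
        return "signed in"
    except ClientError as err:
        # After several wrong passwords the message changes from
        # "Incorrect username or password." to "Password attempts exceeded".
        return "{}: {}".format(err.response["Error"]["Code"], err.response["Error"]["Message"])

for attempt in range(1, 8):
    print(attempt, try_sign_in("your-app-client-id", "test-user", "wrong-password"))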
Yes, Cognito User Pools protects against brute-force attacks using various security mechanisms. Throttling is one of those mechanisms. We do not share the limits, as they vary dynamically.
This contains the latest documentation on the lockout policies for Cognito.
https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-authentication-flow.html
We allow five failed sign-in attempts. After that we start temporary lockouts with exponentially increasing times starting at 1 second and doubling after each failed attempt up to about 15 minutes. Attempts during a temporary lockout period are ignored. After the temporary lockout period, if the next attempt fails, a new temporary lockout starts with twice the duration as the last. Waiting about 15 minutes without any attempts will also reset the temporary lockout. Please note that this behavior is subject to change.
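Purely as an illustration of that schedule (the exact numbers are undocumented and, as stated above, subject to change), the lockout durations would grow roughly like this:

# Approximate lockout schedule: starts at 1 second, doubles per further failure,
# capped at roughly 15 minutes. Illustrative only; not an official formula.
lockout = 1  # seconds
for failed_attempt in range(6, 16):  # failures after the initial five allowed attempts
    print("failure #{}: locked out for ~{} s".format(failed_attempt, lockout))
    lockout = min(lockout * 2, 15 * 60)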
Rather than (or in addition to) focusing on brute-forcing the login endpoint, I think the forgot-password flow deserves some attention.
The forgot-password email contains a 6-digit code that can be used to set a new password.
This code is valid for 1 hour (see the User Pools code-validity resource quotas).
In my tests I could make 5 attempts to set a new password within an hour for a single user before throttling kicked in (LimitExceededException: Attempt limit exceeded, please try after some time).
Now, if I do the math correctly, there are 1,000,000 possible values for a code (from my tests I never saw codes starting with 0, so there may be fewer). You get 5 attempts per hour to guess the code, so each hour you have a 5/1,000,000 × 100 = 0.0005% chance of resetting the password without knowing the code.
Is this a small chance? It seems so.
Considering a large-scale attack brute-forcing multiple users concurrently with retries, should I sleep well at night? I don't know!
To solve the issue once and for all, why can't Cognito use longer codes that are harder to guess (I want to sleep well at night)? Maybe it has something to do with the fact that the same code mechanism is used in text messages. I wish there were an official comment.
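For anyone who wants to repeat the throttling measurement against their own test pool, the probing can be done with boto3's confirm_forgot_password (the client id, user name, and guessed codes below are placeholders; run forgot_password for the user first so a reset is actually pending):

import boto3

client = boto3.client("cognito-idp", region_name="us-east-1")  # region is an assumption

def try_reset_code(client_id, username, guessed_code, new_password):
    # Try one 6-digit code guess against confirm_forgot_password.
    try:
        client.confirm_forgot_password(
            ClientId=client_id,
            Username=username,
            ConfirmationCode=guessed_code,
            Password=new_password,
        )
        return "code accepted"
    except client.exceptions.CodeMismatchException:
        return "wrong code"
    except client.exceptions.LimitExceededException:
        # The throttle described above: roughly 5 attempts, then
        # "Attempt limit exceeded, please try after some time."
        return "throttled"

for guess in ("000001", "123456", "654321", "111111", "222222", "333333"):
    print(guess, try_reset_code("your-app-client-id", "test-user", guess, "N3w-Passw0rd!"))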

Uploading multiple photos returns OAuthException for some of them

I'm uploading multiple photos to the user's application album using a Graph API Batch Request.
For some mysterious reason, I get a 400 Error code for some photos, but not for others, in the same batch.
See the result here : http://pastie.org/3635995
Running the same batch with the same photos doesn't fail on the same photos.
Sometimes I get 1 error, sometimes 10.
Any suggestions?
Here is a dump of the batch I send: http://pastie.org/3636047
From what I've seen, it's just the nature of the Facebook API. If your calls are mission critical, you will need to set up a "retry queue" for the API calls that fail. Give each call three chances to fail before you finally give up and log it as errored. Also, be sure to leave a few seconds or minutes between retry attempts. A rough sketch of that retry queue follows.
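Here is one way to implement it in Python (the batch-item format follows the documented Graph batch request shape; the helper name, the three-try limit, and the delay are illustrative choices):

import json
import time
import requests

GRAPH_URL = "https://graph.facebook.com/"  # batch requests are POSTed to the Graph root

def upload_with_retries(access_token, batch_items, max_tries=3, delay_seconds=10):
    # POST a Graph API batch and re-queue failed items, giving each up to max_tries.
    pending = [{"item": item, "tries": 0} for item in batch_items]
    gave_up = []
    while pending:
        payload = json.dumps([entry["item"] for entry in pending])
        resp = requests.post(GRAPH_URL, data={"access_token": access_token, "batch": payload})
        results = resp.json()
        still_failing = []
        for entry, result in zip(pending, results):
            if result and result.get("code") == 200:
                continue  # this upload succeeded
            entry["tries"] += 1
            (still_failing if entry["tries"] < max_tries else gave_up).append(entry)
        pending = still_failing
        if pending:
            time.sleep(delay_seconds)  # give Facebook a moment before retrying
    return [entry["item"] for entry in gave_up]  # log these as errored

Each batch item here is the usual Graph batch dict, e.g. {"method": "POST", "relative_url": "me/photos", "body": "url=https://example.com/p1.jpg"}.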