I'm trying to change the Text messaging preferences in the AWS SNS service.
I'm getting this error:
Invalid parameter: (Service: AmazonSNS; Status Code: 400; Error Code: InvalidParameter; Request ID: 2681ed63-5c47-5bb6-a4c3-beb27367210a)
I have 4 clients that use AWS, and I get this error in every client's account.
Or, as in my case: I was using SNS in my account for the first time and tried to set USD 5 as the monthly spending limit. The maximum allowed when you set it up for the first time is USD 1; beyond that, you have to ask AWS Support to increase the limit.
After 7 days of talking with AWS Support, I got the solution.
Believe it or not, it's simple: the "Default sender ID" field must be at most 11 alphanumeric characters.
Why not just say that in the error message? ;)
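For reference, the same preferences can be set programmatically. Here is a minimal boto3 sketch (the region, sender ID, and limit are placeholders, not values from the original post); SetSMSAttributes should reject a DefaultSenderID longer than 11 characters with a similar InvalidParameter error.

    import boto3

    sns = boto3.client("sns", region_name="us-east-1")  # region is a placeholder

    # Text messaging preferences, equivalent to the console form
    sns.set_sms_attributes(
        attributes={
            "DefaultSenderID": "MyCompanyID",   # max 11 alphanumeric characters, no spaces
            "MonthlySpendLimit": "1",           # new accounts are capped at 1 USD until Support raises it
            "DefaultSMSType": "Transactional",
        }
    )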
My company is trying to sync a folder from WorkDocs to S3. I am currently following this AWS guide: https://aws.amazon.com/es/blogs/storage/auto-sync-files-from-amazon-workdocs-to-amazon-s3/. I am stuck at step 7, setting up the WorkDocs notification, where I receive these two errors:
An error occurred (InternalFailure) when calling the CreateNotificationSubscription operation (reached max retries: 2): None
An error occurred (ThrottlingException) when calling the CreateNotificationSubscription operation (reached max retries: 2): Rate exceeded
I have the logs from the Cloud Shell in case those are needed.
Any help would be much appreciated
These errors normally occur when you are trying to use an invalid topic ARN.
Make sure that you are using the correct ARN (e.g. use the ARN of the topic rather than of the subscription, and check for typos in the ARN).
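If it helps, step 7 of that guide can also be done with boto3 instead of the CLI. A rough sketch, assuming the same parameters as the guide (directory ID, region, and ARN below are placeholders):

    import boto3

    workdocs = boto3.client("workdocs", region_name="us-east-1")  # region is a placeholder

    # Endpoint must be the topic ARN (arn:aws:sns:<region>:<account-id>:<topic-name>),
    # not the ARN of a subscription to that topic.
    workdocs.create_notification_subscription(
        OrganizationId="d-1234567890",  # your WorkDocs directory ID (placeholder)
        Protocol="SQS",                 # protocol used in the linked guide
        SubscriptionType="ALL",
        Endpoint="arn:aws:sns:us-east-1:123456789012:workdocs-sync-topic",  # placeholder
    )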
I use SNS to send confirmation codes for signing up with Cognito.
Initially, it all worked great, with a $10 spending limit in us-east-1 (N. Virginia).
After some card issues, my spending limit was reduced to $1, which had already been reached.
After I requested a spending limit increase, Amazon increased my Amazon SNS spending limit in us-east-2 (Ohio). My issue is that Cognito now tries to send messages through the Virginia region instead of the Ohio one, resulting in failed attempts.
I would like to switch the SNS region Cognito uses, or maybe disable us-east-1, to fix this issue.
I'd appreciate any info on the matter.
Thanks in advance.
https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-email-phone-verification.html
SMS messages from Amazon Cognito user pools are routed through Amazon SNS in the same region unless noted in the following table.
There is no way to change this internal mapping. The easiest fix is to increase the SNS spending limit in the us-east-1 region. I am not sure why you got the increase in us-east-2; maybe you could explain that further.
Another option is to use this new feature:
https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-sms-sender.html
It is a new Lambda trigger which is not yet available in the console but can be added via the CLI. You could use the AWS SDK inside that trigger to send the codes via SNS in the region of your choice. I have not done this myself, but it should satisfy your use case.
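A rough sketch of such a trigger in Python with boto3 (the region, message text, and the decrypt_code helper are assumptions; Cognito delivers the code encrypted with your KMS key, and decrypting it with the AWS Encryption SDK is not shown here):

    import boto3

    # Publish through SNS in the region where you actually have SMS quota
    sns = boto3.client("sns", region_name="us-east-2")

    def handler(event, context):
        # Cognito passes the target phone number and an encrypted code
        phone_number = event["request"]["userAttributes"]["phone_number"]
        code = decrypt_code(event["request"]["code"])  # hypothetical helper; needs the AWS Encryption SDK + your KMS key

        sns.publish(
            PhoneNumber=phone_number,
            Message=f"Your verification code is {code}",
        )
        return event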
Earlier this month we enabled Stackdriver Monitoring in three of our projects on GCP.
Recently we found that the Stackdriver API metrics show an error rate of around 85%:
On the graphs, the error code is 429:
I've checked the quotas, and everything seems fine:
The next metrics graph shows which method is causing the errors:
Using the "Errors by credential" graph, I found that the failing API requests are made by our GKE service account. We have a custom service account for our GKE instances, and as far as we know it has all the permissions required for monitoring:
roles/logging.logWriter
roles/monitoring.metricWriter
roles/stackdriver.resourceMetadata.writer (as noted in this issue)
Also, the stackdriver-metadata-agent pods in the GKE cluster log a related error every minute:
stackdriver-metadata-agent-cluster-level-d6556b55-2bkbc metadata-agent I0203 15:03:16.911940 1 binarylog.go:265] rpc: flushed binary log to ""
stackdriver-metadata-agent-cluster-level-d6556b55-2bkbc metadata-agent W0203 15:03:56.495034 1 kubernetes.go:118] Failed to publish resource metadata: rpc error: code = ResourceExhausted desc = Resource has been exhausted (e.g. check quota).
stackdriver-metadata-agent-cluster-level-d6556b55-2bkbc metadata-agent I0203 15:04:16.912272 1 binarylog.go:265] rpc: flushed binary log to ""
stackdriver-metadata-agent-cluster-level-d6556b55-2bkbc metadata-agent W0203 15:04:56.657831 1 kubernetes.go:118] Failed to publish resource metadata: rpc error: code = ResourceExhausted desc = Resource has been exhausted (e.g. check quota).
Aside from that, I haven't found any other logs related to the issue, and I cannot figure out what is making two requests per second to the Stackdriver API and receiving 429 errors.
I should add that all of the above is true for all three projects.
Can someone suggest how we can solve this issue?
Is this still a quota issue? If so, why do the quota request metrics look fine and the "Quota exceeded errors count" metric contain no data?
Are we missing any permissions on our GKE service account?
What else can be related?
Thanks in advance.
This is known behavior: containers and pods tend to publish metadata updates very frequently, and that hits the rate limits. There is no performance or functionality impact from this behavior, apart from the noisy logs.
It is also possible to apply a logs exclusion to avoid having these entries posted to Stackdriver Logging.
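If you go the exclusion route, here is a sketch assuming the google-cloud-logging v2 client library (the filter string is an assumption based on the log lines above; the same exclusion can also be created in the Logs Router UI):

    from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
    from google.cloud.logging_v2.types import LogExclusion

    client = ConfigServiceV2Client()

    exclusion = LogExclusion(
        name="exclude-metadata-agent-resource-exhausted",
        description="Drop noisy 'Failed to publish resource metadata' warnings",
        # Filter is a guess based on the log lines above; adjust to your resource labels
        filter=(
            'resource.type="k8s_container" '
            'AND resource.labels.container_name="metadata-agent" '
            'AND textPayload:"Failed to publish resource metadata"'
        ),
    )

    client.create_exclusion(parent="projects/YOUR_PROJECT_ID", exclusion=exclusion)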
I'm trying to send SMS via Amazon SNS, but since yesterday they have been failing; I see a delivery rate of 0% in the AWS Console. As I understand it, I would have to pay $29 upfront to report a failing service.
Is there a way to find the reason for the failed SMS deliveries via CloudWatch, or is there another way to raise this with Amazon?
Since the messages stopped coming completely yesterday, you might have reached the SNS SMS spending limit for your account. By default it is 1 USD per AWS account per region. You can check the delivery status logs, which might say 'No Quota Left for Account'. You should get your limit raised with a limit increase case, for which you do not have to pay 29 dollars.
To prevent this in the future, i.e. to get notified before you reach the actual SMS spending limit so it can be increased ahead of time and you do not run into delivery issues, you can create a CloudWatch alarm on the SMSMonthToDateSpentUSD metric.
Please have a look at this video: https://www.youtube.com/watch?v=5-HdLf_lizI
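A minimal boto3 sketch of that alarm (the threshold, region, and notification topic are placeholders); SMSMonthToDateSpentUSD lives in the AWS/SNS namespace:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is a placeholder

    cloudwatch.put_metric_alarm(
        AlarmName="sms-spend-approaching-limit",
        Namespace="AWS/SNS",
        MetricName="SMSMonthToDateSpentUSD",
        Statistic="Maximum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=0.8,                      # e.g. 80% of a 1 USD monthly limit
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:alert-me"],  # placeholder topic for notifications
    )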
Everything was working yesterday and I'm still just testing, so my usage shouldn't be high to begin with, but I keep receiving these errors today:
{
    Message = "We currently do not have sufficient capacity in the region you requested. Our system will be working on provisioning additional capacity. You can avoid getting this error by temporarily reducing your request rate.";
    Type = Service;
}
What does this error message mean, and should I be concerned that something like this will happen when I go into production? This is a serious error for me because my users have to log in through calls to API Gateway (backed by AWS Lambda).
This kind of error should not last long, as it immediately triggers an AWS provisioning request.
If you are concerned about your API Gateway availability, consider creating a redundant Lambda function in another region and switching to it whenever this error occurs; a sketch of that idea follows below. Keep in mind that calling a Lambda in a remote region can introduce significant latency.
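As a rough illustration of the multi-region fallback idea (the function name, regions, and error handling are assumptions, not from the original post):

    import json
    import boto3
    from botocore.exceptions import ClientError

    PRIMARY = boto3.client("lambda", region_name="us-east-1")    # placeholder regions
    FALLBACK = boto3.client("lambda", region_name="us-west-2")

    def invoke_with_fallback(payload):
        try:
            resp = PRIMARY.invoke(FunctionName="login-handler", Payload=json.dumps(payload))
        except ClientError:
            # Capacity/throttling errors from the primary region fall through here;
            # expect extra latency when crossing regions.
            resp = FALLBACK.invoke(FunctionName="login-handler", Payload=json.dumps(payload))
        return json.loads(resp["Payload"].read())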
Another suggestion: review the AWS limits for the API Gateway and Lambda services in your account. If your requests exceed those limits, raise a ticket with AWS to extend them.
Amazon API Gateway Limits
Resource                        Default Limit
Maximum APIs per AWS account    60
Maximum resources per API       300
Maximum labels per API          10
Increasing these limits is a free service in AWS.
Refer: Amazon API Gateway Limits
AWS Lambda posted an event on the Service Health Dashboard, so please follow that for further details on this specific issue.
Unfortunately, if you want to return a custom status code when Lambda errors in this way, you have to write a mapping template and attach it to every integration response where you use a Lambda integration.
We recognize that this is suboptimal and is work most customers would prefer API Gateway just handle for them. With that in mind, we already have a high-priority item on our backlog to make it easier to pass through status codes from the Lambda integration. I cannot, however, commit to a timeframe for when this will be available.
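As a sketch of what such a mapping looks like when attached with boto3 (all IDs are placeholders, and a matching 503 method response must already exist on the method): the selection pattern is a regex matched against the Lambda error message, and the template rewrites the response body.

    import boto3

    apigw = boto3.client("apigateway", region_name="us-east-1")  # region is a placeholder

    apigw.put_integration_response(
        restApiId="a1b2c3d4e5",          # placeholder API id
        resourceId="abc123",             # placeholder resource id
        httpMethod="POST",
        statusCode="503",
        selectionPattern=".*sufficient capacity.*",   # regex over the Lambda errorMessage
        responseTemplates={
            "application/json": '{"message": "Temporarily over capacity, please retry."}'
        },
    )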