My company is trying to sync a folder from WorkDocs to S3. I am currently following this AWS guide: https://aws.amazon.com/es/blogs/storage/auto-sync-files-from-amazon-workdocs-to-amazon-s3/. I am stuck at step 7, "Setting up the WorkDocs notification", where I receive these two errors:
An error occurred (InternalFailure) when calling the CreateNotificationSubscription operation (reached max retries: 2): None
An error occurred (ThrottlingException) when calling the CreateNotificationSubscription operation (reached max retries: 2): Rate exceeded
I have the logs from CloudShell in case those are needed.
Any help would be much appreciated.
These errors normally occur when you are trying to use an invalid topic ARN.
Make sure that you are using the correct ARN (e.g. use the ARN of the topic itself rather than the subscription, and check the ARN for typos).
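If the ARN checks out and you want to script the call, here is a minimal boto3 sketch of step 7. The directory (organization) ID and topic ARN are placeholders, and the protocol/endpoint combination follows the blog's command, so treat the details as assumptions if your setup differs:

    import boto3

    workdocs = boto3.client("workdocs", region_name="us-east-1")  # assumption: your WorkDocs region

    # Placeholders: use your own directory ID and the ARN of the SNS *topic*
    # (not a subscription ARN).
    response = workdocs.create_notification_subscription(
        OrganizationId="d-1234567890",                            # hypothetical directory ID
        Endpoint="arn:aws:sns:us-east-1:111122223333:wd-events",  # hypothetical topic ARN
        Protocol="SQS",          # the blog's step 7 passes an SNS topic ARN with this protocol value
        SubscriptionType="ALL",
    )
    print(response["Subscription"]["SubscriptionId"])

If the ThrottlingException persists, waiting a bit between attempts can help, since the CLI gives up after only two retries.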
There are many documents that explain how to resolve this error. I have checked many of them and tried their suggestions; however, following them has not resolved the issue for me.
The error I get is:
An error occurred (ValidationError) when calling the PutLifecycleHook operation: Unable to publish test message to notification target arn:aws:sqs:xxxxx:XXXXX:kubeeventsqueue.fifo using IAM role arn:aws:iam::XXXXXXXXX:role/kubeautoscaling. Please check your target and role configuration and try to put lifecycle hook again.
The command I am using is:
aws autoscaling put-lifecycle-hook --lifecycle-hook-name terminate --auto-scaling-group-name mygroupname --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING --role-arn arn:aws:iam::XXXXXX:role/kubeautoscaling --notification-target-arn arn:aws:sqs:xxxxx:XXXXXXX:kubeeventsqueue.fifo
Note that I have replaced XXXXX for the actual ids above.
The role concerned (arn:aws:iam::XXXXXX:role/kubeautoscaling) has a trust relationship with autoscaling.amazonaws.com. It also has the "AutoScalingNotificationAccessRole" policy attached to it.
While testing, I also tried adding an "Allow everybody" permission for all SQS actions (SQS:*). (I removed it after testing, though.)
I have also tried creating the SQS queue first and then configuring --notification-target-arn, without any success.
Any help on this would be very helpful.
It appears that you are using an Amazon SQS FIFO (first-in-first-out) queue.
From Configuring Notifications for Amazon EC2 Auto Scaling Lifecycle Hooks - Receive Notification Using Amazon SQS:
FIFO queues are not compatible with lifecycle hooks.
I don't know whether this is the cause of your current error, but it would prohibit your desired configuration from working.
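If that is the cause, switching to a standard (non-FIFO) queue should unblock you. A minimal boto3 sketch using the question's placeholder names:

    import boto3

    sqs = boto3.client("sqs")
    autoscaling = boto3.client("autoscaling")

    # Create a *standard* queue: no ".fifo" suffix and no FifoQueue attribute.
    queue_url = sqs.create_queue(QueueName="kubeeventsqueue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # The same hook as in the question, pointed at the standard queue.
    autoscaling.put_lifecycle_hook(
        LifecycleHookName="terminate",
        AutoScalingGroupName="mygroupname",
        LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
        RoleARN="arn:aws:iam::XXXXXX:role/kubeautoscaling",  # placeholder from the question
        NotificationTargetARN=queue_arn,
    )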
Yes, FIFO queues are definitely not supported by lifecycle hooks. I wasted a lot of time fiddling with permissions and queue configuration only to finally find that FIFO is not supported. It would be nice if this were much more prominent in the documentation, because 1) it's not at all obvious or intuitive and 2) the error message received suggests it's permissions or something. How about explicitly stating "FIFO queues are not supported" instead of "Failed to send test message..."? RIDICULOUS!
I'm trying to change Text messaging preferences in the AWS SNS service.
I'm getting this error:
Invalid parameter: (Service: AmazonSNS; Status Code: 400; Error Code: InvalidParameter; Request ID: 2681ed63-5c47-5bb6-a4c3-beb27367210a)
I have 4 clients that use AWS, and I get this error in every client's account.
Or, as in my case: I was starting to use SNS in my account for the first time and tried to set USD 5 as the upper limit. The maximum allowed when you first set it up is USD 1; you then have to ask AWS Support to increase this limit.
After 7 days of talking with AWS Support, I got the solution.
Believe it or not, it's simple: the "Default sender ID" input must be at most 11 characters.
Why not just say that in the error message? ;)
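For anyone setting these preferences from code rather than the console, here is a hedged boto3 sketch of the same settings (the sender ID shown is a hypothetical value within the 11-character limit):

    import boto3

    sns = boto3.client("sns")

    # SetSMSAttributes is the API behind the "Text messaging preferences" page.
    sns.set_sms_attributes(
        attributes={
            "DefaultSenderID": "MyCompany",  # alphanumeric, at most 11 characters
            "MonthlySpendLimit": "1",        # USD; asking for more than your account's cap fails
        }
    )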
I am trying to generate a credential report. I get the following error:
aws iam generate-credential-report
An error occurred (Throttling) when calling the GenerateCredentialReport operation (reached max retries: 4): Rate exceeded
Also, from the boto3 API, I am not getting the report.
Is there any way to set the retry limit?
I opened a support case with AWS about it, here is their response:
Thank you for contacting AWS about your GetCredentialReport issue.
According to our IAM team, we have observed an increase in the call
volume of the IAM GenerateCredentialReport API. In order to avoid any
impact that increase in call volume might have on the service and our
customers, we blocked that API. Callers will receive LimitExceeded
exception. We are actively investigating a solution that will lead to
unblocking the API.
The API seems to be working now. This is the latest response from AWS Support regarding the issue:
"We have deployed a fix to the GenerareCredentialReport API issue
which will protect the IAM service from elevated latencies and error
rates. We are going to ramp up the traffic to the API over the next
few days. In the meanwhile, clients calling the API might receive
“LimitExceed Exception”. In this case, we recommend that the clients
retry with exponential back off."
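To follow that advice without hand-rolling a retry loop, boto3's built-in retry behavior can be tuned. A minimal sketch (standard-mode retries back off exponentially on throttling errors such as this one):

    import boto3
    from botocore.config import Config

    # Raise the retry ceiling; "standard" mode retries throttling errors
    # with exponential backoff.
    iam = boto3.client("iam", config=Config(retries={"max_attempts": 10, "mode": "standard"}))

    iam.generate_credential_report()      # kicks off report generation
    report = iam.get_credential_report()  # may need polling until the report is ready
    print(report["Content"].decode("utf-8"))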
I'm trying to use the Data Pipeline service to export data from DynamoDB to S3, but I keep getting the following error.
Unable to create resource for #EmrClusterForBackup_ due to:
The supplied ami version is invalid.
(Service: AmazonElasticMapReduce; Status Code: 400; Error Code: ValidationException; Request ID: xxx)
I have tried changing the AMI version to many different values from their documentation, including ones in the 4.x.x and 5.x.x series, but I keep getting the same error. I see there have been a few threads about this, but none have an answer.
Hoping to find someone who's been able to solve this.
Check whether the region of your EMR cluster is the same as the region of the pipeline.
There is a bug: by default it is set to the region of the DynamoDB table (which might be different). A sketch of pinning it yourself follows.
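If you manage the pipeline definition programmatically, you can pin the cluster's region explicitly. A hedged boto3 sketch, assuming a hypothetical pipeline ID; the "region" field belongs to Data Pipeline's EmrCluster object, and the other fields are illustrative:

    import boto3

    datapipeline = boto3.client("datapipeline", region_name="us-east-1")  # assumption

    # Pin the EMR cluster's region so it matches the pipeline and the DynamoDB table.
    emr_cluster = {
        "id": "EmrClusterForBackup",
        "name": "EmrClusterForBackup",
        "fields": [
            {"key": "type", "stringValue": "EmrCluster"},
            {"key": "releaseLabel", "stringValue": "emr-5.23.0"},  # illustrative EMR release
            {"key": "region", "stringValue": "us-east-1"},
        ],
    }

    datapipeline.put_pipeline_definition(
        pipelineId="df-0123456789ABCDEF",  # hypothetical pipeline ID
        pipelineObjects=[emr_cluster],     # plus the rest of your pipeline objects
    )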
I am getting the below error from AWS EMR. I have submitted a job from the CLI, and the job status is pending.
A client error (ThrottlingException) occurred when calling the ListSteps operation: Rate exceeded
How can I see all the active jobs in an EMR cluster, and how can I kill them from the CLI and also from the AWS console?
Regards
sanjeeb
AWS APIs are rate limited. According to the AWS docs, the recommended approach to dealing with a throttling response is to implement exponential backoff in your retry logic: when you get a ThrottlingException, catch it, sleep for some time (say, half a second), and then retry, increasing the wait after each failed attempt. A minimal sketch of that pattern is below.
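This sketch lists the active (pending/running) steps with backoff and cancels the ones that have not started; the cluster ID is a placeholder:

    import time
    import boto3
    from botocore.exceptions import ClientError

    emr = boto3.client("emr")

    def call_with_backoff(fn, max_retries=5, **kwargs):
        # Retry fn on ThrottlingException, doubling the sleep each time.
        delay = 0.5
        for _ in range(max_retries):
            try:
                return fn(**kwargs)
            except ClientError as err:
                if err.response["Error"]["Code"] != "ThrottlingException":
                    raise
                time.sleep(delay)
                delay *= 2
        return fn(**kwargs)  # final attempt; let any error propagate

    # List the active (pending/running) steps of the cluster.
    steps = call_with_backoff(
        emr.list_steps, ClusterId="j-XXXXXXXX", StepStates=["PENDING", "RUNNING"]
    )["Steps"]

    # CancelSteps only acts on steps that have not started running.
    pending_ids = [s["Id"] for s in steps if s["Status"]["State"] == "PENDING"]
    if pending_ids:
        call_with_backoff(emr.cancel_steps, ClusterId="j-XXXXXXXX", StepIds=pending_ids)

From the CLI, the equivalents are aws emr list-steps and aws emr cancel-steps; in the console, the cluster's Steps tab offers the same view.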