AWS Glue Trigger stuck in Deactivating state

I have a trigger in AWS Glue that is stuck in the DEACTIVATING state. While in this state, you cannot enable it. I have tried deleting it (both in the console and via the CLI) but I get the following error message:
An error occurred (InvalidInputException) when calling the DeleteTrigger operation: The operation is not allowed when trigger is in state: DEACTIVATING
Has anyone come across this before? Is there a way to force delete? I cannot see anything in the documentation.
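For reference, here is roughly the call that is failing, as a minimal boto3 sketch (the trigger name is a placeholder):

# Minimal boto3 sketch of the failing call; the trigger name is a placeholder.
import boto3

glue = boto3.client("glue")

state = glue.get_trigger(Name="my-trigger")["Trigger"]["State"]
print(state)  # prints DEACTIVATING

# This raises InvalidInputException while the trigger is in DEACTIVATING:
glue.delete_trigger(Name="my-trigger")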

Related

AWS DescribeExecution for Step Function not returning error/cause

I have a step function that is failing, and I can see its error and cause in the AWS console, as well as through the CLI by calling aws stepfunctions describe-execution --execution-arn <arn>.
However, if I create a lambda that invokes StepFunction.DescribeExecution(), the result shows everything except for error and cause.
I can't for the life of me figure out why the Step Functions SDK isn't returning them. My guess is possibly an IAM role issue, but I did grant the Lambda in question states:DescribeExecution.
TL;DR: Calling DescribeExecution with the AWS CLI correctly returns the error and cause for a failed Step Functions execution. Calling that same method from another Lambda does not.
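For reference, a minimal sketch of the kind of call the Lambda makes, shown here with boto3 and a placeholder execution ARN:

# Minimal sketch of the Lambda-side call (boto3; the execution ARN is a placeholder).
import boto3

sfn = boto3.client("stepfunctions")

resp = sfn.describe_execution(
    executionArn="arn:aws:states:us-east-1:123456789012:execution:my-state-machine:my-execution"
)
print(resp["status"])      # FAILED
print(resp.get("error"))   # expected the error here, but it is missing
print(resp.get("cause"))   # expected the cause here, but it is missing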

Amazon WorkDocs to S3: problem with subscription following guide

My company is trying to sync a folder from WorkDocs to S3. I am currently following an AWS guide from here: https://aws.amazon.com/es/blogs/storage/auto-sync-files-from-amazon-workdocs-to-amazon-s3/. I am facing an issue at point 7, "Setting up the WorkDocs notification", where I receive these two errors:
An error occurred (InternalFailure) when calling the CreateNotificationSubscription operation (reached max retries: 2): None
An error occurred (ThrottlingException) when calling the CreateNotificationSubscription operation (reached max retries: 2): Rate exceeded
I have the logs from CloudShell in case those are needed.
Any help would be much appreciated.
These errors normally occur when you are trying to use an invalid topic ARN.
Make sure that you are using the correct ARN (e.g. use the ARN of the topic instead of the subscription, and check for typos in the ARN).
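As a sanity check, this is roughly what the call looks like with boto3; the organization ID, ARN, and protocol below mirror the guide's setup and are placeholders, not values from your account:

# Minimal sketch (boto3); organization ID and ARN are placeholders.
# The important part is that Endpoint is the ARN of the topic itself,
# not the ARN of a subscription to it.
import boto3

workdocs = boto3.client("workdocs", region_name="us-east-1")  # region assumed

resp = workdocs.create_notification_subscription(
    OrganizationId="d-1234567890",  # your WorkDocs directory/organization ID
    Endpoint="arn:aws:sns:us-east-1:123456789012:workdocs-to-s3-topic",
    Protocol="SQS",                 # protocol value used in the guide
    SubscriptionType="ALL",
)
print(resp["Subscription"]["SubscriptionId"])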

PutLifecycleHook operation: Unable to publish test message to notification target (FIFO)

There are many documents that explain how to resolve this error. I checked many of them and tried their suggestions; however, following them did not resolve the issue for me.
The error I get is:
An error occurred (ValidationError) when calling the PutLifecycleHook operation: Unable to publish test message to notification target arn:aws:sqs:xxxxx:XXXXX:kubeeventsqueue.fifo using IAM role arn:aws:iam::XXXXXXXXX:role/kubeautoscaling. Please check your target and role configuration and try to put lifecycle hook again.
The command I am using is:
aws autoscaling put-lifecycle-hook --lifecycle-hook-name terminate --auto-scaling-group-name mygroupname --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING --role-arn arn:aws:iam::XXXXXX:role/kubeautoscaling --notification-target-arn arn:aws:sqs:xxxxx:XXXXXXX:kubeeventsqueue.fifo
Note that I have replaced the actual IDs with XXXXX above.
The role concerned (arn:aws:iam::XXXXXX:role/kubeautoscaling) has a trust relationship with autoscaling.amazonaws.com. It also has the "AutoScalingNotificationAccessRole" policy attached to it.
While testing, I also tried adding an "Allow everybody" permission for all SQS actions (SQS:*). (I removed it after testing, though.)
I have also tried creating the SQS queue first and then configuring --notification-target-arn, without any success.
Any help on this would be much appreciated.
It appears that you are using an Amazon SQS FIFO (first-in-first-out) queue.
From Configuring Notifications for Amazon EC2 Auto Scaling Lifecycle Hooks - Receive Notification Using Amazon SQS:
FIFO queues are not compatible with lifecycle hooks.
I don't know whether this is the cause of your current error, but it would prohibit your desired configuration from working.
Yes, FIFO queues are definitely not supported by lifecycle hooks. I wasted a lot of time fiddling with permissions and queue configuration only to finally find that FIFO is not supported. It would be nice if this were much more prominent in the documentation, because 1) it's not at all obvious or intuitive, and 2) the error message received suggests it's a permissions problem or something similar. How about explicitly stating "FIFO queues are not supported" instead of "Failed to send test message..."? Ridiculous!
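For anyone hitting the same wall, here is a minimal boto3 sketch of the same hook pointed at a standard (non-FIFO) queue; names, ARNs, and region are placeholders:

# Minimal boto3 sketch: the same lifecycle hook, but targeting a standard
# (non-FIFO) queue. Names, ARNs, and region are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="mygroupname",
    LifecycleHookName="terminate",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    RoleARN="arn:aws:iam::123456789012:role/kubeautoscaling",
    # Standard queue, not kubeeventsqueue.fifo -- FIFO queues are not
    # supported as lifecycle hook notification targets.
    NotificationTargetARN="arn:aws:sqs:us-east-1:123456789012:kubeeventsqueue",
    HeartbeatTimeout=300,
)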

An error occurred (InvalidParameterException) when calling the PutSubscriptionFilter operation

Trying to put CloudWatch Logs into Kinesis Data Firehose.
I followed this guide:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample
I got this error:
An error occurred (InvalidParameterException) when calling the PutSubscriptionFilter operation: Could not deliver test message to specified Firehose stream. Check if the given Firehose stream is in ACTIVE state.
aws logs put-subscription-filter --log-group-name "xxxx" --filter-name "xxx" --filter-pattern "{$.httpMethod = GET}" --destination-arn "arn:aws:firehose:us-east-1:12345567:deliverystream/xxxxx" --role-arn "arn:aws:iam::12344566:role/xxxxx"
You need to update the trust policy of your IAM role so that it gives permissions to the logs.amazonaws.com service principal to assume it, otherwise CloudWatch Logs won't be able to assume your role to publish events to your Kinesis stream. (Obviously you also need to double-check the permissions on your role to make sure it has permissions to read from your Log Group and write to your Kinesis Stream.)
It would be nice if they added this to the error message to help point people in the right direction...
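For example, here is a minimal boto3 sketch of updating the role's trust policy so that CloudWatch Logs can assume it; the role name is a placeholder:

# Minimal sketch: allow the logs.amazonaws.com service principal to assume
# the role passed to --role-arn. The role name is a placeholder.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "logs.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.update_assume_role_policy(
    RoleName="CWLtoFirehoseRole",  # hypothetical role name
    PolicyDocument=json.dumps(trust_policy),
)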
The most likely problem that causes this error is a permissions issue. i.e. something wrong in the definition of the IAM role you passed to --role-arn. You may want to double check that the role and its permissions were set up properly as described in the doc.
I was getting a similar error when subscribing to a CloudWatch log group and publishing to a Kinesis stream. CDK was not defining the dependency needed for the SubscriptionFilter to be created after the Policy that allows the filtered events to be published to Kinesis. This is reported in this GitHub CDK issue:
https://github.com/aws/aws-cdk/issues/21827
I ended up using the workaround implemented by github user AlexStasko: https://github.com/AlexStasko/aws-subscription-filter-issue/blob/main/lib/app-stack.ts
If your Firehose stream is in ACTIVE status and you can send to the log stream, then the remaining issue is only the policy.
I got a similar issue when following the tutorial. The confusing part here is the Kinesis part versus the Firehose part, which are easy to mix up. You need to recheck your ~/PermissionsForCWL.json, in particular this part:
....
"Action": ["firehose:*"],   // easy to confuse with "kinesis:*", as I did
"Resource": ["arn:aws:firehose:region:123456789012:*"]
....
When I did the tutorial you mentioned, it was defaulting to a different region, so I had to pass --region with my region. It wasn't until I did all the steps in the correct region that it worked.
For me, I think this issue was occurring due to the time it takes for the IAM data plane to settle after new roles are created via regional IAM endpoints in regions that are geographically far away from us-east-1.
I have a custom Lambda CloudFormation resource that auto-subscribes all existing and future log groups to a Firehose via a subscription filter. The IAM role for CloudWatch Logs gets deployed, then very quickly the Lambda function tries to subscribe the log groups, and on occasion this error would happen.
I added a time.sleep(30) to my code (this code only runs once, at stack creation, so it's not going to hurt anything to wait 30 seconds).
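Roughly, the workaround looked like the sketch below; the log group, stream, and role names are placeholders, and the real code loops over log groups:

# Sketch of the workaround: wait for IAM to settle, then create the
# subscription filter. Names and ARNs are placeholders.
import time
import boto3

logs = boto3.client("logs")

# Give the IAM data plane time to propagate the newly created role.
time.sleep(30)

logs.put_subscription_filter(
    logGroupName="/aws/lambda/some-log-group",
    filterName="firehose-subscription",
    filterPattern="",  # empty pattern = forward everything
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/my-stream",
    roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",
)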

Error while submitting aws emr job from command line

I am getting the below error from AWS EMR. I have submitted a job from the CLI. The job status is pending.
A client error (ThrottlingException) occurred when calling the ListSteps operation: Rate exceeded
How can I see all the active jobs in the EMR cluster, and how can I kill them from the CLI and also from the AWS console?
Regards
sanjeeb
AWS APIs are rate limited. According to the AWS docs, the recommended approach to dealing with a throttling response is to implement exponential backoff in your retry logic: when you get a ThrottlingException, catch it, sleep for a short time (say, half a second), and then retry, increasing the delay on each subsequent attempt.
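For example, here is a minimal boto3 sketch of exponential backoff around ListSteps; the cluster ID is a placeholder:

# Minimal sketch: retry ListSteps with exponential backoff on ThrottlingException.
# The cluster ID is a placeholder.
import time
import boto3
from botocore.exceptions import ClientError

emr = boto3.client("emr")

def list_steps_with_backoff(cluster_id, max_retries=5):
    delay = 0.5  # start at half a second
    for attempt in range(max_retries):
        try:
            return emr.list_steps(ClusterId=cluster_id,
                                  StepStates=["PENDING", "RUNNING"])
        except ClientError as err:
            if err.response["Error"]["Code"] != "ThrottlingException":
                raise
            time.sleep(delay)
            delay *= 2  # back off exponentially
    raise RuntimeError("ListSteps still throttled after retries")

steps = list_steps_with_backoff("j-XXXXXXXXXXXXX")
for step in steps["Steps"]:
    print(step["Name"], step["Status"]["State"])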