Rate exceeded (Service: AWSSimpleSystemsManagement) - amazon-web-services

When I deploy a Serverless Framework project to an AWS CloudFormation stack, I get this error message:
Rate exceeded (Service: AWSSimpleSystemManagement; Status Code: 400; Error Code: ThrottlingException; Request ID: ....)
Do you have any idea how to resolve it?

I'm not sure what exactly is returning this to you, but it seems you are deploying too fast or querying AWSSimpleSystemsManagement (SSM) too quickly, since you are getting a throttling exception.
Double-check your code for bugs (maybe you are performing an action N times instead of once). If you genuinely need to interact with SSM at this rate, you can probably get the request limit increased via a ticket to AWS.
If that's not the case, open a support ticket with them anyway.
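As a stopgap, you can also retry throttled SSM calls with exponential backoff on the client side. Below is a minimal sketch using the AWS SDK for Java v1; the GetParameter call, the parameter name, and the retry limits are illustrative assumptions, not something from the question:

    import com.amazonaws.AmazonServiceException;
    import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
    import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
    import com.amazonaws.services.simplesystemsmanagement.model.GetParameterRequest;
    import com.amazonaws.services.simplesystemsmanagement.model.GetParameterResult;

    public class SsmWithBackoff {
        // Retries a GetParameter call when SSM throttles it, doubling the
        // delay on each attempt. The limits here are examples only.
        public static GetParameterResult getParameterWithBackoff(
                AWSSimpleSystemsManagement ssm, String name) throws InterruptedException {
            long delayMs = 200;
            final int maxRetries = 6;
            for (int attempt = 0; ; attempt++) {
                try {
                    return ssm.getParameter(new GetParameterRequest().withName(name));
                } catch (AmazonServiceException e) {
                    if (!"ThrottlingException".equals(e.getErrorCode()) || attempt >= maxRetries) {
                        throw e; // not a throttle, or out of retries
                    }
                    Thread.sleep(delayMs);
                    delayMs *= 2; // exponential backoff
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.defaultClient();
            System.out.println(getParameterWithBackoff(ssm, "/my/example/param")
                    .getParameter().getValue());
        }
    }

The SDK also has a built-in retry policy you can tune instead, but an explicit loop like this makes the backoff behavior visible and testable.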

Related

Elasticsearch 403 Request throttled due to too many requests /_bulk

I am trying to sync 1 million records to ES using the bulk API in batches of 2k.
But after inserting around 25k-32k documents, Elasticsearch throws the following exception:
Unable to parse response body: org.elasticsearch.ElasticsearchStatusException
ElasticsearchStatusException[Unable to parse response body]; nested: ResponseException[method [POST], host [**********], URI [/_bulk?timeout=1m], status line [HTTP/1.1 403 Request throttled due to too many requests]
403 Request throttled due to too many requests /_bulk]; nested: ResponseException[method [POST], host [************], URI [/_bulk?timeout=1m], status line [HTTP/1.1 403 Request throttled due to too many requests]
403 Request throttled due to too many requests /_bulk];
I am using AWS Elasticsearch.
I think I need to implement a wait strategy to handle this, something like repeatedly checking the ES status and calling bulk insert only when ES is healthy.
But I'm not sure how to implement it. Does ES offer anything pre-built for this?
Or is there a better way to handle it?
Thanks in advance.
Update:
I am using AWS Elasticsearch version 6.8
Thanks @dravit for including my previous SO answer in the comments. After following the comments, it seems the OP wants to improve the performance of bulk indexing and wants exponential backoff, which I don't think Elasticsearch provides out of the box.
I see that you are putting a fixed pause of 1 second after every batch, which will not work in all cases, and if you have a large number of batches and documents to be indexed it will certainly take a lot of time. A few more suggestions from my side to improve performance:
1. Follow my tips to improve the reindex speed in Elasticsearch, and see which of the things listed there apply to you and by what factor doing them improves speed.
2. Find a batching strategy that best suits your environment. I am not sure, but this article from @spinscale, who is the developer of the Java high level REST client, might help, or you can ask a question on https://discuss.elastic.co/. I remember he shared a very good batching strategy in one of his webinars but couldn't find the link to it.
3. Watch various ES metrics apart from the bulk threadpool and queue size, and see whether your ES still has capacity; if so, you can increase the queue size and the rate at which you send requests to ES.
4. Check the error handling guide here:
If you receive persistent 403 Request throttled due to too many requests or 429 Too Many Requests errors, consider scaling vertically. Amazon Elasticsearch Service throttles requests if the payload would cause memory usage to exceed the maximum size of the Java heap.
Scale your Elasticsearch domain vertically or increase the delay between requests (a simple backoff sketch follows below).
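For the "increase the delay between requests" part, a client-side exponential backoff around each bulk call is the usual approach. A minimal sketch, assuming the Java high level REST client; the retry cap and the choice to retry only on 403/429 are my assumptions based on the errors quoted above:

    import java.io.IOException;
    import org.elasticsearch.ElasticsearchStatusException;
    import org.elasticsearch.action.bulk.BulkRequest;
    import org.elasticsearch.action.bulk.BulkResponse;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestHighLevelClient;

    public class BulkWithBackoff {
        // Sends one bulk request, retrying with exponentially growing pauses
        // when AWS ES throttles it (403) or rejects it (429).
        public static BulkResponse bulkWithBackoff(RestHighLevelClient client, BulkRequest request)
                throws IOException, InterruptedException {
            long delayMs = 1000;
            final int maxRetries = 5;
            for (int attempt = 0; ; attempt++) {
                try {
                    return client.bulk(request, RequestOptions.DEFAULT);
                } catch (ElasticsearchStatusException e) {
                    int status = e.status().getStatus();
                    if ((status != 403 && status != 429) || attempt >= maxRetries) {
                        throw e; // not throttling, or out of retries
                    }
                    Thread.sleep(delayMs);
                    delayMs *= 2; // back off harder each time
                }
            }
        }
    }

Unlike a fixed 1-second pause, this only slows down when the cluster actually pushes back, so healthy batches run at full speed.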

400 Bad Request error upon getObjectMetadata request on AWS S3 after several hours of proper work

aws-java-sdk-s3 v.1.11.106
My app runs in an ECS cluster on an EC2 instance.
The logic behind the app is pretty straightforward: once every 5 minutes, a cron job launches and makes a getObjectMetadata request in order to obtain the eTag of a specific object.
Everything works fine for hours. But after several hours (in the most recent case, 7 hours), my app starts getting the following error again and again:
Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 675FC4678EC190E5)
My guess is that something expires (maybe some token), but I can't find what it could be.
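One cause consistent with that guess on ECS/EC2 is a client built from a one-time snapshot of temporary instance credentials, which stop working once the session token expires. This is only a hypothesis, not a confirmed diagnosis; a minimal sketch that lets the SDK manage and refresh the credentials itself (bucket, key, and region are hypothetical placeholders):

    import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class ETagPoller {
        public static void main(String[] args) {
            // Let the default provider chain manage (and refresh) the
            // container/instance-profile credentials instead of caching a
            // static AWSCredentials snapshot at startup.
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withCredentials(new DefaultAWSCredentialsProviderChain())
                    .withRegion("us-east-1") // example region
                    .build();
            String eTag = s3.getObjectMetadata("my-example-bucket", "my/example/key").getETag();
            System.out.println(eTag);
        }
    }
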

AWS SimpleWorkflow Request Entity Too Large

I have a workflow that contains a bunch of activities. I store each activity's response in an S3 bucket.
I pass the S3 key as an input to each activity. Inside the activity, I have a method that retrieves the data from S3 and performs some operation. But my last activity failed and threw this error:
Caused by: com.amazonaws.AmazonServiceException: Request entity too large (Service: AmazonSimpleWorkflow; Status Code: 413; Error Code: Request entity too large; Request ID: null)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:820)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:439)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:245)
at com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient.invoke(AmazonSimpleWorkflowClient.java:3173)
at com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient.respondActivityTaskFailed(AmazonSimpleWorkflowClient.java:2878)
at com.amazonaws.services.simpleworkflow.flow.worker.SynchronousActivityTaskPoller.respondActivityTaskFailed(SynchronousActivityTaskPoller.java:255)
at com.amazonaws.services.simpleworkflow.flow.worker.SynchronousActivityTaskPoller.respondActivityTaskFailedWithRetry(SynchronousActivityTaskPoller.java:246)
at com.amazonaws.services.simpleworkflow.flow.worker.SynchronousActivityTaskPoller.execute(SynchronousActivityTaskPoller.java:208)
at com.amazonaws.services.simpleworkflow.flow.worker.ActivityTaskPoller$1.run(ActivityTaskPoller.java:97)
... 3 more
I know AWS SWF has some limits on data size, but I am only passing an S3 key to the activity. Inside the activity, it reads from S3 and processes the data. I am not sure why I am getting this error. If anyone knows, please help! Thanks a lot!
Your activity failed, as a respondActivityTaskFailed SWF API call is visible in the stack trace. So my guess is that the exception message plus stack trace exceeded the maximum size allowed by the SWF service.
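If you control the failure-reporting path, one mitigation is to truncate the reason and details before they are sent to SWF. A minimal sketch; the 256 and 32768 character caps match SWF's documented limits for these fields, while the helper class itself is hypothetical:

    import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflow;
    import com.amazonaws.services.simpleworkflow.model.RespondActivityTaskFailedRequest;

    public class SafeFailureReporter {
        // SWF caps reason at 256 characters and details at 32768 characters;
        // longer strings are rejected, so cut them down before sending.
        private static String truncate(String s, int max) {
            return (s == null || s.length() <= max) ? s : s.substring(0, max);
        }

        public static void reportFailure(AmazonSimpleWorkflow swf, String taskToken,
                                         Throwable t, String stackTraceText) {
            swf.respondActivityTaskFailed(new RespondActivityTaskFailedRequest()
                    .withTaskToken(taskToken)
                    .withReason(truncate(String.valueOf(t.getMessage()), 256))
                    .withDetails(truncate(stackTraceText, 32768)));
        }
    }
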

Throttling while registering activities in Simple Workflow

We have started to experience failures when our processes start up, during the registration of activities. The problem is happening in GenericActivityWorker.registerActivityTypes.
The exception generated is:
Caused by: AmazonServiceException: Status Code: 400, AWS Service: AmazonSimpleWorkflow, AWS Request ID: 78726c24-47ee-11e3-8b49-534d57dc0b7f, AWS Error Code: ThrottlingException, AWS Error Message: Rate exceeded
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:350)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:202)
at com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient.invoke(AmazonSimpleWorkflowClient.java:3061)
at com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient.registerActivityType(AmazonSimpleWorkflowClient.java:2231)
at com.amazonaws.services.simpleworkflow.flow.worker.GenericActivityWorker.registerActivityType(GenericActivityWorker.java:153)
at com.amazonaws.services.simpleworkflow.flow.worker.GenericActivityWorker.registerActivityTypes(GenericActivityWorker.java:118)
at com.amazonaws.services.simpleworkflow.flow.worker.GenericActivityWorker.registerTypesToPoll(GenericActivityWorker.java:105)
at com.amazonaws.services.simpleworkflow.flow.worker.GenericWorker.start(GenericWorker.java:367)
at com.amazonaws.services.simpleworkflow.flow.ActivityWorker.start(ActivityWorker.java:248)
at com.fluid.retail.workflows.DefaultWorkflowHost.start(DefaultWorkflowHost.java:226)
... 5 more
The ActivityWorker in question has 5 activity implementation classes associated with it, and I think this throttling occurs because the internal Flow Framework code loops over the activity types to register them without any delay between calls.
Because this code is internal to the framework, we can't add any sleep() calls to avoid being throttled.
Any ideas would be appreciated.
Are you sure this is happening while registering your activities? Or is it happening while scheduling your activities?
You would get this issue if you run a workflow that schedules too many activities too quickly. At this point you have 2 options:
1. Try to make the activities sequential, so each one waits on the previous one.
2. Contact AWS to increase your account's rate limit.
If the failure really is at registration time, retrying the worker start with a backoff is another workaround, as sketched below.
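A minimal sketch of that workaround, assuming the Flow Framework ActivityWorker from the stack trace and that the ThrottlingException surfaces as an AmazonServiceException (it may arrive wrapped, so unwrap the cause as needed in real code):

    import com.amazonaws.AmazonServiceException;
    import com.amazonaws.services.simpleworkflow.flow.ActivityWorker;

    public class ThrottledStart {
        // Retries worker.start() (which registers the activity types) with an
        // exponentially growing pause whenever SWF answers "Rate exceeded".
        public static void startWithBackoff(ActivityWorker worker) throws InterruptedException {
            long delayMs = 1000;
            for (int attempt = 0; attempt < 6; attempt++) {
                try {
                    worker.start();
                    return;
                } catch (AmazonServiceException e) {
                    if (!"ThrottlingException".equals(e.getErrorCode())) {
                        throw e; // some other failure: don't retry
                    }
                    Thread.sleep(delayMs);
                    delayMs *= 2;
                }
            }
            throw new IllegalStateException("activity registration still throttled after retries");
        }
    }
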

How to remove RequestTimeTooSkewed check from Amazon?

I have a Java 7 "agent" program running on several client machines (mostly Windows XP). My agent uploads client files to Amazon S3, and I often get this error:
RequestTimeTooSkewed
I know this is because the difference between the client computer's system time and Amazon's is too large. Here's my problem: I can't control the client's computer (system) time! So I don't want Amazon to care about time differences.
I heard about JetS3t, but I'm hoping not to have to resort to yet another tool (the agent footprint must remain small).
Any ideas how to remove this check and get rid of this pesky error?
Error detail:
Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 59C9614D15006F23, AWS Error Code: RequestTimeTooSkewed, AWS Error Message: The difference between the request time and the current time is too large., S3 Extended Request ID: v1pGBm3ed2J9dZ3sG/3aDrG3DUGSlt3Ac+9nduK2slih2wyaAnc1n5Jrt5TkRzlV
The error is coming from the S3 service, not from the client, so there really isn't anything you can do other than correct the clock on the client. That check is done on the service side to help detect and prevent replay attacks, so it's an important part of the overall security of the service.
Trying a different client-side SDK won't help.