Build failed with Lex - amazon-web-services

While building Lex I'm getting the following error:
The number of intents and slots exceeds the permissible value for bot
"test_dev". A bot can have a maximum of 310 intents and slots. This
bot has 63 intents and 253 slots.
However, as far as I can see in the official AWS documentation, there is a limit of 1000 intents per account and 100 slots per account.
Is there a limit that is associated with each bot as well?

I have added a ticket to AWS and got the below response from them:
Usually, an error of this type occurs when the sum of the number of
slots and intents for a Lex bot has exceeded the value of 310. Looking
at the observed error, I see that you have 253 slots and 63 intents,
when these numbers are combined they sum up to 316 and they exceed the
maximum limit of 310. Therefore, the limit of 310 is breached and
hence the error is observed.
To address your question regarding limits for Lex bots, yes, there are
limits that are applicable to bots only as discussed in {1}. I
understand your concern about the limit of 1000 intents per account,
however, this limit applies at account level and not on a Lex bot
level. With that said, I am happy to assist you with submitting a
limit request for an increase on the number of intents and slots for
your bot. For the request to be fulfilled, I would appreciate if you
could provide me with the following information:
A use case stating the reasons for the limit request.
Please confirm the number of slots and number of intents you wish these values to be increased to.
Please confirm that the limit increase is for the Lex bot ‘test’ in the eu-west-1 region.
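For anyone hitting the same wall: a quick way to see where a bot stands against that combined 310 limit is to count its intents and slots through the Lex Model Building API. Below is a minimal boto3 sketch; the bot name, version and region are placeholders for your own values.

```python
import boto3

# Amazon Lex (V1) Model Building Service client
lex = boto3.client("lex-models", region_name="eu-west-1")

BOT_NAME = "test_dev"     # placeholder: your bot name
BOT_VERSION = "$LATEST"   # placeholder: the version you are building

bot = lex.get_bot(name=BOT_NAME, versionOrAlias=BOT_VERSION)

intent_count = len(bot.get("intents", []))
slot_count = 0
for ref in bot.get("intents", []):
    intent = lex.get_intent(name=ref["intentName"], version=ref["intentVersion"])
    slot_count += len(intent.get("slots", []))

print(f"intents={intent_count} slots={slot_count} total={intent_count + slot_count}")
# The build fails once intents + slots exceeds 310 for the bot.
```

If the total is over 310, either consolidate intents/slots or request the per-bot limit increase described above.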

Related

AWS Elasticsearch publishing wrong total request metric

We have an AWS Elasticsearch cluster setup. However, our Error rate alarm goes off at regular intervals. The way we are trying to calculate our error rate is:
((sum(4xx) + sum(5xx))/sum(ElasticsearchRequests)) * 100
However, if you look at the screenshot below, at 7:15 4xx was 4, however ElasticsearchRequests value is only 2. Based on the metrics info on AWS Elasticsearch documentation page, ElasticsearchRequests should be total number of requests, so it should clearly be greater than or equal to 4xx.
Can someone please help me understand what I am doing wrong here?
AWS definitions of these metrics are:
OpenSearchRequests (previously ElasticsearchRequests): The number of requests made to the OpenSearch cluster. Relevant statistics: Sum
2xx, 3xx, 4xx, 5xx: The number of requests to the domain that resulted in the given HTTP response code (2xx, 3xx, 4xx, 5xx). Relevant statistics: Sum
Please note the different terms used for the subjects of the metrics: cluster vs domain
To my understanding, OpenSearchRequests only counts requests that actually reach the underlying OpenSearch/Elasticsearch cluster, so some of the 4xx requests might not be counted (e.g. 403 errors), hence the difference in metrics.
Also, AWS only recommends comparing 5xx to OpenSearchRequests:
5xx alarms >= 10% of OpenSearchRequests: One or more data nodes might be overloaded, or requests are failing to complete within the idle timeout period. Consider switching to larger instance types or adding more nodes to the cluster. Confirm that you're following best practices for shard and cluster architecture.
I know this was posted a while back but I've additionally struggled with this issue and maybe I can add a few pointers.
First off, make sure your metrics are properly configured. For instance, some responses (4xx, for example) can take up to 5 minutes to register, while OpenSearchRequests is refreshed every minute. This makes for a very wonky graph that will definitely throw off your error rate.
In the picture above, I send a request that returns 400 every 5 seconds and a request that returns 200 every 0.5 seconds. The period in this case is 1 minute, so on average the error rate should be around 10%. As you can see by the green line, the requests sent are summed up every minute, whereas the 4xx are summed up every 5 minutes and are 0 in between, which makes for an error-rate spike every 5 minutes (since the OpenSearch requests are not multiplied by 5).
In the next image, the period is set to 5 minutes. Notice how this time the error rate is around 10 percent.
When I look at your graph, I see metrics that look like they are based on different periods.
The second pointer I can add is to make sure to account for when no data is coming in. The behavior of the alarm may vary based on how you define the "treat missing data" parameter. In some cases, if no data comes in, your expression might keep the alarm in the ALARM state when in fact no new data is arriving. Some metrics return no value when no requests are made, while others return 0. In the former case, you can use the FILL(metric, value) function to specify what to return when no value is reported. Also experiment with what happens to your error rate when you divide by zero.
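To make that concrete, here is one way the error rate from the question could be wired up as a metric-math alarm with boto3, using FILL so that sparse 4xx/5xx datapoints count as 0 and an empty request count doesn't divide by zero. The domain name, account id and thresholds are placeholders, not values from the original setup.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholders: your OpenSearch domain name and AWS account id
DIMENSIONS = [
    {"Name": "DomainName", "Value": "my-domain"},
    {"Name": "ClientId", "Value": "123456789012"},
]

def metric(metric_id, name):
    return {
        "Id": metric_id,
        "MetricStat": {
            "Metric": {"Namespace": "AWS/ES", "MetricName": name, "Dimensions": DIMENSIONS},
            "Period": 300,   # same period for every metric in the expression
            "Stat": "Sum",
        },
        "ReturnData": False,
    }

cloudwatch.put_metric_alarm(
    AlarmName="opensearch-error-rate",
    EvaluationPeriods=1,
    Threshold=10.0,                          # alarm above a 10% error rate (illustrative)
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    Metrics=[
        metric("m4xx", "4xx"),
        metric("m5xx", "5xx"),
        metric("req", "OpenSearchRequests"),
        {
            "Id": "error_rate",
            # FILL() turns missing datapoints into a chosen value, so sparse 4xx/5xx
            # series count as 0 and an empty request count doesn't divide by zero.
            "Expression": "100 * (FILL(m4xx, 0) + FILL(m5xx, 0)) / FILL(req, 1)",
            "ReturnData": True,
        },
    ],
)
```

Keeping a single period (300 s here) for every metric in the expression is what avoids the mismatched-period spikes described above.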
Hope this message helps clarify a bit.

On WSO2, the rate limit works successfully but the quota does not

When setting throttling limits for our API, it appears that the Rate Limit works successfully but the Quota does not.
We created a subscription that limits to 10 requests/second, and when running tests, we obtain a 429 response upon sending an 11th query in one second, which is exactly what we want and expect.
However, the filter also has a Quota of 100 requests/minute, yet we are able to run over 100 requests (have tested up to 300 queries and still gotten entirely 200 response codes) in the span of a minute without getting throttled.
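For reproducing this, a small load script that tallies response codes makes the behaviour easy to see. Here is a rough sketch with Python's requests library; the gateway URL and access token are placeholders, and the timing only approximates staying under the 10 req/s rate limit while going well past the 100 req/min quota.

```python
import time
from collections import Counter

import requests  # pip install requests

URL = "https://gateway.example.com/myapi/1.0/resource"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <access-token>"}      # placeholder token

counts = Counter()
start = time.time()
for _ in range(300):
    resp = requests.get(URL, headers=HEADERS)
    counts[resp.status_code] += 1
    time.sleep(0.2)   # a few req/s: under the 10 req/s rate limit, over 100 req/min

print(f"elapsed={time.time() - start:.0f}s", dict(counts))
```

With a working quota policy you would expect roughly the first 100 requests in each minute to return 200 and the rest 429 until the window resets.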

AWS WAF How to rate limit path by IP below the minimum of 2000 requests/minute

I have a path (mysite.com/myapiendpoint for sake of example) that is both resource intensive to service, and very prone to bot abuse. I need to rate limit access to that specific path to something like 10 requests per minute per client IP address. How can this be done?
I'm hosting off an EC2 instance with CloudFront and AWS WAF in front. I have the standard "Rate Based Rule" enabled, but its 2,000 requests per minute per IP address minimum is absolutely unusable for my application.
I was considering using API Gateway for this, and have used it in the past, but its rate limiting as I understand it is not based on IP address, so bots would simply use up the limit and legitimate users would constantly be denied usage of the endpoint.
My site does not use sessions of any sort, so I don't think I could do any sort of rate limiting in the server itself. Also please bear in mind my site is a one-man-operation and I'm somewhat new to AWS :)
How can I limit the usage per IP to something like 10 requests per minute, preferably in WAF?
[Edit]
After more research I'm wondering if I could enable header forwarding to the origin (running node/express) and use a rate-limiter package. Is this a viable solution?
I don't know if this is still useful to you - but I just got a tip from AWS support. If you add the rate limit rule multiple times, it effectively reduces the number of requests each time. Basically what happens is each time you add the rule, it counts an extra request for each IP. So say an IP makes a single request. If you have 2 rate limit rules applied, the request is counted twice. So basically, instead of 2000 requests, the IP only has to make 1000 before it gets blocked. If you add 3 rules, it will count each request 3 times - so the IP will be blocked at 667 requests.
The other thing they clarified is that the "window" is 5 minutes, but if the total is breached anywhere in that window, the IP will be blocked. I thought the WAF would only evaluate the requests after a 5-minute period. So, for example, say you have a single rule for 2000 requests in 5 minutes, and an IP makes 2000 requests in the 1st minute, then only 10 requests over the next 4 minutes. I initially understood that the IP would only be blocked after minute 5 (because WAF evaluates a 5-minute window). But apparently, if the IP exceeds the limit anywhere in that window, it will be blocked immediately. So if that IP makes 2000 requests in minute 1, it will actually be blocked for minutes 2, 3, 4 and 5, but will be allowed again from minute 6 onward.
This clarified a lot for me. Having said that, I haven't tested this yet. I assume the AWS support techie knows what he's talking about - but definitely worth testing first.
AWS have now finally released an update which allows the rate limit to go as low as 100 requests every 5 minutes.
Announcement post: https://aws.amazon.com/about-aws/whats-new/2019/08/lower-threshold-for-aws-waf-rate-based-rules/
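With that lower threshold, a path-scoped rate-based rule can now be expressed directly in WAF. Here is a rough sketch using the newer WAFv2 API via boto3; the ACL name, rule name and path are placeholders, and note this still only gets you to 100 requests per 5 minutes per IP, not 10 per minute. CLOUDFRONT-scoped ACLs must be managed from us-east-1.

```python
import boto3

# CLOUDFRONT-scoped web ACLs are created/updated in us-east-1
waf = boto3.client("wafv2", region_name="us-east-1")

rate_limit_rule = {
    "Name": "limit-myapiendpoint",            # placeholder rule name
    "Priority": 0,
    "Action": {"Block": {}},
    "Statement": {
        "RateBasedStatement": {
            "Limit": 100,                      # minimum allowed; counted over a 5-minute window
            "AggregateKeyType": "IP",
            # Only count requests for the expensive path
            "ScopeDownStatement": {
                "ByteMatchStatement": {
                    "SearchString": b"/myapiendpoint",   # placeholder path
                    "FieldToMatch": {"UriPath": {}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "STARTS_WITH",
                }
            },
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "limit-myapiendpoint",
    },
}

waf.create_web_acl(
    Name="my-web-acl",                        # placeholder ACL name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[rate_limit_rule],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "my-web-acl",
    },
)
```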
Adding the rule twice will not work, because WAF rate-based rules count requests on a CloudWatch-logs basis: both rules will count 2000 requests separately, so it would not work for you.
You can use the AWS WAF automation CloudFront template and choose the Lambda/Athena log parser; this way, the request count is performed on an S3-logs basis, and you will also be able to block SQL injection, XSS, and bad-bot requests.

What is the maximum number of event source mappings per AWS Lambda?

I cannot find the maximum number of event sources that can trigger one Lambda listed anywhere in the AWS documentation.
I have a Lambda which will be triggered by an indefinitely growing number of S3 buckets. Obviously this will only work if the number of buckets never exceeds the maximum number of triggers. Is there a maximum? If so, what is it, and can it be increased?
I just ran into a limit. I added 60 CloudWatch triggers to a Lambda function and when I tried adding one more trigger, I got an error saying:
"The final policy size (20643) is bigger than the limit (20480). (Service: AWSLambda; Status Code: 400; Error Code: PolicyLengthExceededException;"
The response to ListEventSourceMappings (http://docs.aws.amazon.com/lambda/latest/dg/API_ListEventSourceMappings.html) is paginated, and since I can find no info on the Lambda limits page, I bet there's no limit (or at least there's some huge number you don't have to practically worry about).
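If you want to count how many event source mappings a function actually has, that paginated API can be walked with a boto3 paginator; the function name below is a placeholder:

```python
import boto3

lam = boto3.client("lambda")

# Walk the paginated ListEventSourceMappings response and count the mappings.
paginator = lam.get_paginator("list_event_source_mappings")
count = sum(
    len(page["EventSourceMappings"])
    for page in paginator.paginate(FunctionName="my-function")  # placeholder
)
print(f"event source mappings: {count}")
```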
This is still very high in search results, which led me here. I was able to create 6500 event source mappings for an MQ Event Source. The Web Console maxes out at 1000 displayed event source mappings. I did not verify that all 6500 event source mappings worked.

Facebook Graph API rate limiting

We want to collect some metrics about our clients' public Facebook pages (~1-5K users) on a daily (or weekly) basis.
I'm talking about 3-5 typical metrics: "likes", "fan posts", etc.
I understand that according to the "Rate Limiting on the Graph API" documentation [1] it's possible to have 200 calls per 1 hour.
For now we don't have any public FB application that could help us increase this limit. We will create one to generate an application token, but I doubt it will have a lot of users.
Does anybody know whether we will have problems with rate limit exceptions while invoking the Graph API more than 200 times per 60 minutes?
I guess our expected rate is 5-10K calls per 60 min (once a day).
Phrase from the documentation [1] "Rate limiting in the FB Graph API is encountered only in rare circumstances" gives me hope that it won't be a problem.
Thank you!
[1] https://developers.facebook.com/docs/graph-api/advanced/rate-limiting
You won't have any problems initially. Facebook does not necessarily block apps immediately for being over the limits.
As per their documentation
If your app is making enough calls to be considered for rate limiting by our system, we return an X-App-Usage HTTP header
So, if you don't get any X-App-Usage header, then your app hasn't been considered "worthy" of throttling by their automated systems yet.
So it would be best to check for this header while making your API requests. Once you start receiving this header, reduce the frequency of your API calls or back off for a while.
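A simple way to act on that is to read the X-App-Usage header on every Graph API response and back off as the reported usage climbs. Here is a rough sketch with Python's requests library; the token, page id, fields and threshold are all placeholders rather than anything prescribed by Facebook.

```python
import json
import time

import requests  # pip install requests

ACCESS_TOKEN = "<app-or-page-access-token>"   # placeholder
PAGE_ID = "<page-id>"                         # placeholder

def get_page_metrics(page_id):
    resp = requests.get(
        f"https://graph.facebook.com/{page_id}",
        params={"fields": "fan_count", "access_token": ACCESS_TOKEN},
    )
    # X-App-Usage is only sent once the app is busy enough to be considered for
    # throttling; it is a JSON blob of percentages, e.g.
    # {"call_count": 28, "total_time": 25, "total_cputime": 25}
    usage = json.loads(resp.headers.get("x-app-usage", "{}"))
    if usage and max(usage.values()) > 80:    # illustrative threshold
        time.sleep(60)                        # back off before the hard limit is hit
    return resp.json()

print(get_page_metrics(PAGE_ID))
```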