How to define different quotas for different endpoints in AWS API Gateway - amazon-web-services

Quotas are associated with "Usage Plans", but a "Usage Plan" applies to all endpoints in a stage.
Is there any way to define different quotas for different endpoints in my API using AWS API Gateway?
Example for one user with an API key:
POST myapi.domain.com/v1/articles --> 100 requests/month
GET myapi.domain.com/v1/articles --> 1,000 requests/month
POST myapi.domain.com/v1/comments --> 1,000 requests/month
GET myapi.domain.com/v1/comments --> no limit
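As far as I can tell, a single usage plan cannot attach a different quota to each method: the quota applies per API key across the whole plan, and the per-method settings a usage plan supports cover throttling (rate and burst), not quotas. If issuing separate keys under separate plans is acceptable, a plan-level monthly quota can be sketched with the AWS CLI as follows (the API id, stage name, plan id, and key id are all placeholders):

```shell
# Create a usage plan whose quota is 100 requests per month
# (abc123 / prod are a placeholder REST API id and stage name)
aws apigateway create-usage-plan \
  --name "articles-post-plan" \
  --quota limit=100,period=MONTH \
  --api-stages apiId=abc123,stage=prod

# Attach an existing API key to that plan
# (u1plan / k1key are placeholder usage-plan and key ids)
aws apigateway create-usage-plan-key \
  --usage-plan-id u1plan \
  --key-id k1key \
  --key-type API_KEY
```

One plan per quota tier would then be needed, with each consumer holding a key in the plan that matches their allowance.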

Related

AWS Api Gateway maximum resource limit per api

What is the hard limit for resources per REST API in API Gateway? As per the AWS docs https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#api-gateway-execution-service-limits-table, the default quota is 300 resources per API, which can be increased on request.
My use case is that I have multiple versions of the APIs which I am trying to add to a single REST API in API Gateway. Is there a hard limit at AWS beyond which they won't increase it?

AWS API Gateway - Monitor specific endpoints

I have created an API Gateway in AWS with two resources (endpoints). Let's say /foo and /bar. Each endpoint has a POST method.
I want to monitor how many times each endpoint got invoked - /foo and /bar in our example. And the metrics to show the 2xx, 4xx, and 5xx responses.
I know API Gateway has a generic "API Calls" metric that shows the total invocations of the API. But how do I monitor how many times each endpoint was called?
You can filter API Gateway metrics for the API method with the specified API name, stage, resource, and method.
API Gateway will not send these metrics unless you have explicitly enabled detailed CloudWatch metrics. You can do this in the console by selecting Enable Detailed CloudWatch Metrics under a stage's Logs/Tracing tab. Alternatively, you can call the update-stage AWS CLI command to set the metricsEnabled property to true.
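The CLI route can be sketched like this, using a patch operation with the method-settings wildcard path (the API id and stage name are placeholders):

```shell
# Enable detailed CloudWatch metrics for every resource and method
# in a stage (abc123 / prod are a placeholder API id and stage name)
aws apigateway update-stage \
  --rest-api-id abc123 \
  --stage-name prod \
  --patch-operations op=replace,path=/*/*/metrics/enabled,value=true
```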
Documentation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-metrics-and-dimensions.html#api-gateway-metricdimensions
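Once detailed metrics are on, per-method counts can be pulled by filtering on the Resource and Method dimensions; the 4XXError and 5XXError metrics work the same way. A sketch for POST /foo over one day (the API name, stage, and time range are placeholders):

```shell
# Hourly invocation count for POST /foo in the prod stage
# of an API named my-api (all values are placeholders)
aws cloudwatch get-metric-statistics \
  --namespace AWS/ApiGateway \
  --metric-name Count \
  --dimensions Name=ApiName,Value=my-api Name=Stage,Value=prod \
               Name=Resource,Value=/foo Name=Method,Value=POST \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-02T00:00:00Z \
  --period 3600 \
  --statistics Sum
```

Swapping the metric name for 4XXError or 5XXError gives the error-class breakdown per endpoint.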

Maximum number of Resources for this API has been reached. Please contact AWS if you need additional Resources in AWS Rest API Gateway

I have a Swagger JSON which I want to import into REST API Gateway, but when I import it I get the message Maximum number of Resources for this API has been reached. Please contact AWS if you need additional Resources. What should I do, and where can I request additional resources for API Gateway?
As per the AWS docs, the default limit for resources per API is 300. The error message suggests that you are exceeding that limit.
Since the Resources per API limit can be increased (some limits can't), you have to request an increase from AWS. The Increase account service quotas tutorial at AWS explains how to do it.
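The same request can be made from the CLI via Service Quotas; a sketch, where QUOTA_CODE is a placeholder you replace with the code returned by the first command:

```shell
# Look up the quota code for the "Resources per API" limit
aws service-quotas list-service-quotas \
  --service-code apigateway \
  --query "Quotas[?contains(QuotaName,'Resources')]"

# Request an increase (QUOTA_CODE is a placeholder; use the
# QuotaCode value returned by the query above)
aws service-quotas request-service-quota-increase \
  --service-code apigateway \
  --quota-code QUOTA_CODE \
  --desired-value 600
```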

AWS API Gateway limits of Routes and Integrations

AWS API Gateway has a limit of 300 routes, which can be increased by contacting AWS, but it also has a limit of 300 integrations that cannot be increased.
My understanding is that every route must have an Integration defined as one of these options:
Lambda Function
HTTP
Mock
AWS Service
VPC Link
Not having an integration set for a route is not an option.
Given that, what's the point of requesting more than 300 routes if I can't have more than 300 integrations?
I must be misinterpreting what "Integration" means in this context, what else could it mean?
I can confirm that increasing the limit on routes allowed me to create more than 300 endpoints with integrations mapping to the same Lambda function.
I didn't try more than 300 different Lambdas, but for the purposes of this question this at least confirms that multiple mappings to the same Lambda do not count toward the limit of 300 "Integrations".
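This matches how HTTP APIs model the two objects: an integration is created once and many routes can point their target at it, so the route count and the integration count grow independently. A sketch with the apigatewayv2 CLI (the API id, region, account id, and function name are placeholders):

```shell
# Create a single Lambda proxy integration and capture its id
# (abc123 and the Lambda ARN are placeholders)
INTEGRATION_ID=$(aws apigatewayv2 create-integration \
  --api-id abc123 \
  --integration-type AWS_PROXY \
  --integration-uri arn:aws:lambda:us-east-1:111122223333:function:my-handler \
  --payload-format-version 2.0 \
  --query IntegrationId --output text)

# Reuse the same integration from multiple routes
aws apigatewayv2 create-route --api-id abc123 \
  --route-key "GET /articles" --target "integrations/$INTEGRATION_ID"
aws apigatewayv2 create-route --api-id abc123 \
  --route-key "POST /articles" --target "integrations/$INTEGRATION_ID"
```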

Throttling data not published to all active nodes

I have two physical nodes of API Manager configured as explained in this guide: two all-in-one boxes (all components on one machine) behind a load balancer.
https://docs.wso2.com/display/AM220/Configuring+an+Active-Active+Deployment#ConfiguringanActive-ActiveDeployment-Step4-ConfigurethePublisherwiththeGateway
Following is my configuration for the data publisher in api-manager.xml
<DataPublisher>
    <Enabled>true</Enabled>
    <ReceiverUrlGroup>{tcp://172.31.31.93:9611},{tcp://172.31.16.52:9611}</ReceiverUrlGroup>
    <AuthUrlGroup>{ssl://172.31.31.93:9711},{ssl://172.31.16.52:9711}</AuthUrlGroup>
    <DataPublisherPool>
        <MaxIdle>1000</MaxIdle>
        <InitIdleCapacity>200</InitIdleCapacity>
    </DataPublisherPool>
    <DataPublisherThreadPool>
        <CorePoolSize>200</CorePoolSize>
        <MaxmimumPoolSize>1000</MaxmimumPoolSize>
        <KeepAliveTime>200</KeepAliveTime>
    </DataPublisherThreadPool>
</DataPublisher>
I have set up an application with an application-level throttling policy of 10 requests per minute. When I put both instances behind a load balancer and fire requests, as long as both nodes are up I am allowed to make 20 requests per minute. However, if I bring one node down, it only allows me 10 requests per minute. I suspect that traffic data is not being published by each gateway to the traffic manager.
What configuration might I have missed that I need to check to enable that? The only thing that deviates from the documented configuration in my case is that I don't point both publishers to a single gateway node. So both nodes have the following APIGateway config in api-manager.xml:
<APIGateway>
    <!-- The environments to which an API will be published -->
    <Environments>
        <!-- Environments can be of different types. Allowed values are 'hybrid', 'production' and 'sandbox'.
             An API deployed on a 'production' type gateway will only support production keys
             An API deployed on a 'sandbox' type gateway will only support sandbox keys
             An API deployed on a 'hybrid' type gateway will support both production and sandbox keys. -->
        <!-- api-console element specifies whether the environment should be listed in API Console or not -->
        <Environment type="hybrid" api-console="true">
            <Name>Production and Sandbox</Name>
            <Description>This is a hybrid gateway that handles both production and sandbox token traffic.</Description>
            <!-- Server URL of the API gateway -->
            <ServerURL>https://localhost:${mgt.transport.https.port}${carbon.context}services/</ServerURL>
            <!-- Admin username for the API gateway. -->
            <Username>${admin.username}</Username>
            <!-- Admin password for the API gateway.-->
            <Password>${admin.password}</Password>
            <!-- Endpoint URLs for the APIs hosted in this API gateway.-->
            <GatewayEndpoint>http://${carbon.local.ip}:${http.nio.port},https://${carbon.local.ip}:${https.nio.port}</GatewayEndpoint>
        </Environment>
    </Environments>
</APIGateway>
Any help will be appreciated. My goal is to ensure that, irrespective of which gateway node a request is routed to, throttling is applied consistently across nodes as defined in the subscription.
Currently, it seems as if each node applies throttling individually.
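The symptom (each node allowing the full 10 requests per minute on its own) does suggest throttling events are only being counted locally. One quick thing to verify, using the node IPs from the ReceiverUrlGroup above, is that each box can actually reach the other node's data receiver (9611) and auth (9711) ports; a firewall or security-group rule blocking them would silently break cross-node publishing. A simple connectivity sketch, to be run from each node:

```shell
# Check reachability of both traffic-manager receivers from this node
# (IPs/ports are taken from the ReceiverUrlGroup/AuthUrlGroup config)
for host in 172.31.31.93 172.31.16.52; do
  for port in 9611 9711; do
    nc -vz -w 3 "$host" "$port"
  done
done
```

If connectivity is fine, the gateway logs on each node are the next place to look for data-publisher errors when requests come in.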