I’m building a service and I’m planning to charge a fixed price for each Lambda invocation.
How can I count requests per client if every client calls the same Lambda function? I’m planning to pass a client ID.
You can use API Gateway Usage Plans for your requirement.
After you create, test, and deploy your APIs, you can use API Gateway usage plans to make them available as product offerings for your customers. You can configure usage plans and API keys to allow customers to access selected APIs at agreed-upon request rates and quotas that meet their business requirements and budget constraints. If desired, you can set default method-level throttling limits for an API or set throttling limits for individual API methods.
A usage plan specifies who can access one or more deployed API stages and methods—and also how much and how fast they can access them. The plan uses API keys to identify API clients and meters access to the associated API stages for each key. It also lets you configure throttling limits and quota limits that are enforced on individual client API keys.
Read these docs for a more detailed explanation.
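If you set this up with the SDK rather than the console, a minimal boto3 sketch could look like the following (the REST API id "a1b2c3", the stage name "prod", and the limits are placeholders, not values from your setup):

```python
import boto3

# Rough sketch: create a usage plan, one API key per client, and link them.
# The REST API id ("a1b2c3") and stage name ("prod") are placeholders.
apigw = boto3.client("apigateway")

plan = apigw.create_usage_plan(
    name="per-client-billing",
    apiStages=[{"apiId": "a1b2c3", "stage": "prod"}],
    throttle={"rateLimit": 10.0, "burstLimit": 20},
    quota={"limit": 100000, "period": "MONTH"},
)

# One API key per client; the key identifies the client on every request.
key = apigw.create_api_key(name="client-acme", enabled=True)

# Attach the key to the plan so that client's calls are metered and throttled.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```

Clients then send their key in the x-api-key header, and API Gateway counts and enforces the quota per key.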
You can use API Gateway: https://aws.amazon.com/api-gateway/
"Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services."
It provides you with usage statistics as well as options such as limiting the number of requests per API key, etc.
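If you go the usage plan route, the per-key request counts are also available through the SDK, which is handy for the per-call billing use case. A rough boto3 sketch (the usage plan id and key id are placeholders):

```python
import boto3

apigw = boto3.client("apigateway")

# Read the metered request counts for one client's API key from a usage plan.
usage = apigw.get_usage(
    usagePlanId="u4v5w6",   # placeholder usage plan id
    keyId="k7l8m9",         # placeholder API key id
    startDate="2024-01-01",
    endDate="2024-01-31",
)

# usage["items"] maps each key id to a list of daily [used, remaining] pairs.
daily = usage["items"].get("k7l8m9", [])
total_requests = sum(used for used, _remaining in daily)
print(f"Billable requests this period: {total_requests}")
```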
Related
On every request, the API backend can get the API key from the gateway, but if we need to perform operations based on the usage plan, then we also need the usage plan information. One use case is providing limited services for the FREE plan.
We can determine the usage plan based on the provided API key with the help of the AWS SDK, but it will add additional latency to the API responses. Is there a better way to achieve this, perhaps through the gateway settings?
If the usage plan information can be passed as a header or query string to the backend, that would be great!
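A rough sketch of the SDK lookup described above, with a small in-memory cache so the extra control-plane call mostly only costs latency on a cold start (the single-plan-per-key assumption, the plan name "FREE", and the helper names are mine):

```python
import boto3

apigw = boto3.client("apigateway")

# Module-level cache: in Lambda this survives warm invocations, so the
# extra lookup only adds latency on a cache miss.
_plan_cache = {}

def usage_plan_for_key(api_key_id):
    """Return the usage plan name (e.g. "FREE") attached to an API key id."""
    if api_key_id not in _plan_cache:
        plans = apigw.get_usage_plans(keyId=api_key_id)
        items = plans.get("items", [])
        # Assumes each key is attached to at most one usage plan.
        _plan_cache[api_key_id] = items[0]["name"] if items else None
    return _plan_cache[api_key_id]

def handler(event, context):
    # With Lambda proxy integration, the key id arrives in the request context.
    key_id = event["requestContext"]["identity"].get("apiKeyId")
    plan_name = usage_plan_for_key(key_id) if key_id else None
    if plan_name == "FREE":
        pass  # serve the limited feature set here
    return {"statusCode": 200, "body": plan_name or "no plan"}
```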
For learning purposes, I have developed a front-end application in Angular with AWS as back-end.
As this is a learning project, I must prevent any possible cost explosion, particularly where API Gateway calls are concerned.
At the moment, I have a single public GET endpoint for providing the information to the public homepage in the front-end.
I need to attach a usage plan to this endpoint for limiting the maximum number of calls to this endpoint. For example, max 10000 calls/week.
I already tried with an API-KEY:
Created the Usage Plan with "Quota: 10,000 requests per week"
Created the API KEY connected to the Usage Plan
Connected the API KEY to the authentication method of the endpoint
It works, but this way I need to hard-code the API key in the front-end.
I know that hard-coding sensitive information in the front-end is bad practice, but I thought that in this case the API key is only needed to attach a usage plan, not to secure private information or operations. So I'm not sure whether it is acceptable here or not.
Is this solution safe, or could the same API KEY be used for other malicious purposes?
Are there any other solutions?
To add to the other answer, API Keys are not Authorization Tokens.
API Keys associated with Usage Plans are sent to the gateway on the x-api-key header (by default).
Tokens used by authorizers live on the standard Authorization header.
Use API Keys to set custom throttling and quotas for a particular caller, but you still need an Authorizer on any secure endpoints.
When using a custom authorizer, you can optionally have the authorizer return an API key that is then applied to the usage plan, which saves you from having to distribute keys to clients in addition to their secrets.
APIG Key Source Docs
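For that last point, here is a rough sketch of a TOKEN-type Lambda authorizer that returns the key, so the key never has to be shipped to the client. It assumes the API's "API key source" is set to AUTHORIZER; validate_token() and api_key_for() are hypothetical helpers you would supply:

```python
# Sketch of a custom (TOKEN) Lambda authorizer that also returns the caller's
# API key. Assumes the API's "API key source" is set to AUTHORIZER.
# validate_token() and api_key_for() are hypothetical helpers.

def handler(event, context):
    claims = validate_token(event["authorizationToken"])  # e.g. verify a JWT

    return {
        "principalId": claims["sub"],
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
        # API Gateway uses this value as the API key for usage plan metering
        # and throttling, instead of reading the x-api-key header.
        "usageIdentifierKey": api_key_for(claims["sub"]),
    }
```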
As the documentation says, generally you wouldn't want to rely on a hardcoded API key for authentication.
In addition, based on the information you provided, the usage plan limits are tied to whoever is using that API key. You could also set up throttling on the account or on the stage itself.
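For the stage-level throttling idea, a rough boto3 sketch (the API id, stage name, and limits are placeholders):

```python
import boto3

# Cap the default method throttling for a whole stage so a runaway caller
# can't drive up costs. "a1b2c3" and "prod" are placeholders.
apigw = boto3.client("apigateway")

apigw.update_stage(
    restApiId="a1b2c3",
    stageName="prod",
    patchOperations=[
        # "/*/*" applies the limits to every resource and method in the stage.
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "10"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "20"},
    ],
)
```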
Finally, if it's possible, you could set up security group rules on your server or network ACLs on the VPC that serves your front end so that not everyone can access it.
Other ideas are to shut down the API gateway when you aren't using it and also rotate the API key frequently.
Obviously none of these would be enough if you were hosting sensitive information or doing anything remotely important. But as long as the only information on the client side is the API key used to access the API Gateway, and the API Gateway itself doesn't interact with anything sensitive, that's probably enough for a learning project.
But you should do all of this at your own risk as for all you know I'm the person who's going to try to hack you ;)
I am working on an API layer which serves requests from a backend database.
So the requirements are:
Repopulate the whole table without downtime for the API service: a main requirement is that we should be able to re-populate the tables (2 to 3 tables of structured, CSV-like data) in the backend database periodically (bi-weekly or monthly) without the API service going down.
Low latency globally, on the order of hundreds of milliseconds
Scalability in terms of requests per second
Rate limiting of clients
The ability to switch to previous versions of the backend tables in case of issues
My question is about which kinds of AWS database I can use, and which other AWS components can achieve the above goals.
If you want a secure, low latency global API, then I would go with an edge optimized API Gateway API.
The documentation regarding the maximum requests per second is here: API GW limits.
You can rate limit clients using API GW. Also, you can have different stages in API GW that correspond to different aliases in lambda. Lambda would be your serverless compute layer that would handle your API GW requests and in turn query your database. Using versioning and aliasing in lambda would allow you to switch to different database tables. Given that you are planning to use csv like data, you could go with RDS and use the Aurora engine, which is compliant with MySQL and PostgreSQL, and is an extremely cost effective option.
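As a rough sketch of the versioning/aliasing part (the function name, alias name, and version number are placeholders, and it assumes the API GW stage integrates with the alias ARN rather than a fixed version):

```python
import boto3

lam = boto3.client("lambda")

# Publish the code that reads from the freshly repopulated tables as a new
# immutable version, then repoint the "live" alias at it. Traffic through the
# API GW stage that targets the alias switches over without downtime.
new_version = lam.publish_version(
    FunctionName="api-backend",
    Description="reads from the repopulated tables",
)["Version"]

lam.update_alias(
    FunctionName="api-backend",
    Name="live",
    FunctionVersion=new_version,
)

# Rolling back to the previous data set is just repointing the alias, e.g.:
# lam.update_alias(FunctionName="api-backend", Name="live", FunctionVersion="7")
```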
As additional information, you should use Lambda proxy integration between your API GW APIs and your Lambda functions. This allows you to enable Identity and Access Management (IAM) for your APIs.
Documentation on Lambda proxy integration: Lambda proxy integration
Here is some documentation on Lambda: AWS Lambda versioning and aliases
Here is some documentation on RDS Aurora: AWS RDS Aurora
We are looking at utilising AWS API Gateway for better management of APIs. However, at an enterprise level, what is the best practice? Should a common API gateway be used by all app teams (in that case, we might need an administrator for the common gateway, which adds overhead), or should each app team build and administer its own API gateway and API calls?
Hope to have someone share their experiences.
I have used AWS API Gateway for different web/mobile application projects. Let me try to answer your questions one by one.
Limitations Based Design
API Gateway comes with limitations, and you can base your design decisions on them.
For example, there is a soft limit on "Resources per API": it is set at 300 and can be increased up to a maximum of 500. This means that if more than 500 resources are needed in the future, a new API gateway has to be created.
So, it's better to logically segregate the APIs and have different API gateways depending on the purpose.
The throttle limit per region across REST APIs, WebSocket APIs, and WebSocket callback APIs is a soft limit of 10,000 requests per second (RPS), with additional burst capacity provided by the token bucket algorithm using a maximum bucket capacity of 5,000 requests.
So the API gateway needs to be designed based on the expected traffic.
There are many such limitations: https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Features Based Design
API Gateway uses the OpenAPI standard and provides import and export features (JSON/YAML). So if a new API gateway is created from an application's Swagger/OpenAPI file, it's better not to mix it with other applications.
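For example, each team can pull its own API definition out of the gateway and keep it under version control independently; a rough boto3 sketch (the API id and stage name are placeholders):

```python
import boto3

apigw = boto3.client("apigateway")

# Export one API's definition as OpenAPI 3.0 JSON so the owning team can
# version and manage it on its own. "a1b2c3" and "prod" are placeholders.
export = apigw.get_export(
    restApiId="a1b2c3",
    stageName="prod",
    exportType="oas30",          # "swagger" for OpenAPI 2.0
    accepts="application/json",  # "application/yaml" is also supported
)

with open("api-definition.json", "wb") as f:
    f.write(export["body"].read())
```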
There are many features, such as 1) API caching, 2) throttling, 3) Web Application Firewall, and 4) client certificates, which cannot be common to all APIs in an enterprise. So again, it's better to have separate APIs based on the requirements.
AWS API Gateway offers different logging mechanisms, and each API gateway implementation will need a tailored approach.
SDK generation comes in very handy for mobile development, and again there is no point in bundling all APIs into one SDK and handing out access that way.
So my suggestion is to use multiple API gateways, for an enterprise based on specific needs.
How can I monitor my Amazon API Gateway APIs per API key?
Currently it shows data for all API keys, but I want to display API calls, 5xx errors, 4xx errors, etc. for a particular API key.
If you're looking at monitoring the API at the X-Api-Key header level, it looks like this is currently not possible. I'm guessing you'd have to do it yourself at the application layer, which should be relatively easy if you're using Lambda. Your question brings up another question: does it really make sense to monitor individual API keys when errors are associated with a particular API deployment/version?
If you'd like to monitor per-user usage, you need to use IAM credentials with your API and CloudTrail to monitor requests made with specific credentials. You can find more info on the API Gateway CloudTrail integration page.
API Gateway doesn't (yet) offer first-class support for API Key metrics. As #kixorz mentioned, you could implement this in the application layer for the time being (for example using Lambda and CloudWatch).
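A rough sketch of that application-layer approach, publishing custom CloudWatch metrics dimensioned by API key from the Lambda behind the API (the namespace and dimension names are arbitrary choices of mine):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_call(event, status_code):
    # With Lambda proxy integration, the caller's API key id is available in
    # the request context; fall back to "anonymous" if no key was sent.
    key_id = event["requestContext"]["identity"].get("apiKeyId") or "anonymous"

    metrics = [{"MetricName": "Calls", "Value": 1}]
    if status_code >= 500:
        metrics.append({"MetricName": "5xxErrors", "Value": 1})
    elif status_code >= 400:
        metrics.append({"MetricName": "4xxErrors", "Value": 1})

    # One data point per metric, all dimensioned by the API key id, so each
    # key gets its own series to graph or alarm on in CloudWatch.
    cloudwatch.put_metric_data(
        Namespace="MyApi/PerKey",
        MetricData=[
            {
                **m,
                "Unit": "Count",
                "Dimensions": [{"Name": "ApiKeyId", "Value": key_id}],
            }
            for m in metrics
        ],
    )
```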