Usage-based billing for hosting a REST API - amazon-web-services

I currently host my website on a combination of Amazon S3 and CloudFront. These services have usage-based billing: when no users are visiting my website, I am paying next to nothing.
Now I want to create a simple REST API where users can invite other users. I thought about using Node.js or Sinatra, but to host that I would need to run at least one EC2 instance, which costs roughly $120 a year. I know both Heroku and AWS have free tier options, but I am explicitly looking for usage-based billing.
Is there a service that allows usage-based billing (e.g. by number of requests) for a custom REST API?

Well, AWS's API Gateway provides the REST API part, with billing ...
Low-Cost and Efficient
With Amazon API Gateway, you pay only for calls made to your APIs and data transfer out. There are no minimum fees or upfront commitments.
but you'll still have to point it at a backend service. EC2 would incur the costs you mention, but if your 'action' is simple, you may be able to use AWS Lambda. It too is priced based on the number of requests and actual compute time.
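To make the Lambda option concrete, here is a minimal sketch (in Python with hypothetical names; the asker mentioned Node.js or Sinatra, but the shape is the same) of what an "invite" action behind an API Gateway Lambda proxy integration could look like:

import json

def lambda_handler(event, context):
    # Hypothetical "invite" action behind an API Gateway Lambda proxy
    # integration: API Gateway hands the HTTP request in as `event` and
    # maps the returned dict back onto an HTTP response. You are billed
    # per request and per unit of compute time, so an idle API costs
    # essentially nothing.
    body = json.loads(event.get("body") or "{}")
    invitee = body.get("email")

    if not invitee:
        return {"statusCode": 400,
                "body": json.dumps({"error": "email is required"})}

    # ... persist the invitation here (e.g. DynamoDB, also usage-billed) ...

    return {"statusCode": 201,
            "body": json.dumps({"invited": invitee})}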

Related

Is there an AWS API for getting current request token bucket sizes and refill rates for an AWS service?

I am developing a software application that calls AWS APIs using the Go SDK (https://aws.amazon.com/sdk-for-go/). My software service builds a real-time topology of the cloud environment and so depends on calling these APIs for each service in the cloud account as often as possible.
I am looking for information on how many of the following example API calls, shown here using their AWS CLI equivalents, I can issue per second:
$ aws iam list-users
$ aws iam list-roles
$ aws ec2 describe-instances
...
These API calls are rate-limited by what AWS calls request token bucket sizes and refill rates (see https://docs.aws.amazon.com/AWSEC2/latest/APIReference/throttling.html#throttling-limits-cost-based as an example for these rates for EC2). The cloud customer, of course, can increase some of these rates and bucket sizes.
My question is: is there an API for those rates for all services, so that my application can know the current bucket sizes and refill rates and avoid getting throttled or running afoul of my customers' contracts? In other words, I want the EC2 request token bucket sizes delivered as JSON to my software application (and for all the other AWS services too). Is this possible?
To the best of my knowledge, and as of the writing of this response, AWS's token bucket implementation is very difficult to get a handle on.
Many services do not publish these numbers (maximum bucket size and bucket refill rate) in their documentation, unlike GCP's API throttling, which is simple and easy to write quick rules around (https://cloud.google.com/compute/docs/api-rate-limits), or Azure's, which is also well documented (https://learn.microsoft.com/en-us/graph/throttling and https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling).
Also, there is no API on AWS that exposes the throttling bucket sizes and refill rates.
As a result, your application's handling of AWS API throttling has to be application-specific and tuned to your particular needs.
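Given that, one practical approach (just a sketch, not anything AWS publishes) is to lean on the SDK's built-in backoff and keep your own call volume down; boto3, for example, exposes an adaptive retry mode, and the Go SDK offers similar retry configuration:

import boto3
from botocore.config import Config

# "adaptive" retry mode client-side rate limits and retries throttled
# calls with exponential backoff; the numbers here are illustrative,
# not AWS-published bucket sizes.
config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

ec2 = boto3.client("ec2", config=config)

# Prefer paginators over many individual calls to spend fewer tokens.
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"])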

How to charge users by usage?

I'm building a service and I'm planning to charge a fixed price for each Lambda call.
How do I count requests per client if the Lambda function being called is the same? I'm planning to pass a client ID.
You can use API Gateway Usage Plans for your requirement.
After you create, test, and deploy your APIs, you can use API Gateway usage plans to make them available as product offerings for your customers. You can configure usage plans and API keys to allow customers to access selected APIs at agreed-upon request rates and quotas that meet their business requirements and budget constraints. If desired, you can set default method-level throttling limits for an API or set throttling limits for individual API methods.
A usage plan specifies who can access one or more deployed API stages and methods—and also how much and how fast they can access them. The plan uses API keys to identify API clients and meters access to the associated API stages for each key. It also lets you configure throttling limits and quota limits that are enforced on individual client API keys.
Read these docs for a more detailed explanation.
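For illustration, a rough boto3 sketch of that setup (the API ID, stage name and limits are placeholders, not values from the question):

import boto3

apigw = boto3.client("apigateway")

# Placeholders: your deployed API's ID and stage.
REST_API_ID = "abc123"
STAGE_NAME = "prod"

# One API key per client lets API Gateway meter requests per client.
key = apigw.create_api_key(name="client-42", enabled=True)

plan = apigw.create_usage_plan(
    name="per-request-billing",
    apiStages=[{"apiId": REST_API_ID, "stage": STAGE_NAME}],
    throttle={"rateLimit": 10.0, "burstLimit": 20},   # requests/second and burst
    quota={"limit": 100000, "period": "MONTH"},       # hard cap per key
)

# Attach the client's key to the plan.
apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"],
                            keyType="API_KEY")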
You can use API Gateway: https://aws.amazon.com/api-gateway/
"Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services."
It provides you with statistics about usage, as well as options such as limiting the number of requests per API key, etc.
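If you then want to bill each client on top of those numbers, the per-key counts can be pulled back out; a small boto3 sketch with a placeholder usage plan ID:

import boto3

apigw = boto3.client("apigateway")

usage = apigw.get_usage(usagePlanId="plan-123",     # placeholder ID
                        startDate="2024-01-01",
                        endDate="2024-01-31")

# "items" maps each API key ID to per-day [used, remaining] pairs,
# which you can roll up into a per-client invoice.
for key_id, daily in usage["items"].items():
    total_requests = sum(used for used, _remaining in daily)
    print(key_id, total_requests)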

Do we pay separately for the CloudFront distribution that is created for an API Gateway edge-optimised custom domain?

When creating an edge-optimised custom domain in API Gateway, a CloudFront distribution is created over which we do not seem to have any control.
I have been looking at the pricing model and there's no mention of pricing for the CloudFront service, so I'm guessing there isn't any. But if we are going to be paying for the distribution, it is going to be very expensive for us. Is there documentation somewhere that talks about this? I just want to confirm before I decide on it. Please help me out.
You will only pay the API Gateway price that is listed on the pricing page.
When you create an edge-optimised API Gateway endpoint, the CloudFront distribution is not created in your account; it is created in an AWS-owned account that hosts the API Gateway service. So whilst your API is distributed behind a CloudFront distribution, that distribution is not within your account, which is why you only pay API Gateway pricing.

AWS serverless architecture – Why should I use API Gateway?

Here is my use case:
Static React frontend hosted on S3
Python backend on Lambda conducting long-running data analysis
Postgres database on RDS
Backend and frontend communicate exclusively with JSON
Occasionally the backend creates and stores PowerPoint files in an S3 bucket and then serves them up by sending the S3 link to the frontend
Convince me that it is worthwhile going through all the headaches of setting up API Gateway to connect the frontend and backend rather than invoking Lambda directly from the frontend!
Especially given the 29-second timeout, which is not long enough for my app, meaning I would need to implement asynchronous processing and add a whole other layer of AWS architecture (messaging, queuing and polling with SNS and SQS), which increases cost, time and the potential for problems. I understand there are some security concerns; is there no way to securely invoke a Lambda function?
You are talking about invoking a Lambda function directly from JavaScript running on a client machine.
I believe the only way to do that would be embedding the AWS SDK for JavaScript in your React frontend. See:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/browser-invoke-lambda-function-example.html
There are several security concerns with this, only some of which can be mitigated.
First off, you will need to hardcode AWS credentials into your frontend for the world to see. The access those credentials have can be limited in scope, but be very careful to get this right, or you'll otherwise be paying for someone's cryptomining operation.
Assuming you only want certain people to upload files to a storage service you are paying for, you will need some form of authentication and authorisation. API Gateway doesn't really do authentication, but it can do authorisation, albeit by connecting to other AWS services like Cognito or Lambda (custom authorizers). Without API Gateway, you'll have to build this into your backend Lambda yourself. That's absolutely doable, and probably not much more effort than using a custom authorizer from API Gateway.
The main issue with connecting to Lambda directly is that Lambda can scale rapidly, which is a problem if someone hits you with a denial-of-service attack. Lambda is cheap, but running 1000 concurrent instances 24 hours a day is going to add up.
API Gateway allows you to rate-limit per second/minute/hour/etc.; Lambda only allows you to limit the number of concurrent instances at any given time. So if you were to set that limit to 1, an attacker could still cause that one instance to run 24 hours a day.
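To make that contrast concrete, a hedged boto3 sketch of the two knobs being compared (function name, API ID and limits are placeholders):

import boto3

# Lambda's only direct limit: reserved concurrency (instances at once).
# An attacker can still keep that one instance busy around the clock.
boto3.client("lambda").put_function_concurrency(
    FunctionName="analysis-backend",                  # placeholder
    ReservedConcurrentExecutions=1,
)

# API Gateway caps the request *rate* per stage (or per method / per
# usage-plan key), which is the better denial-of-service guard.
boto3.client("apigateway").update_stage(
    restApiId="abc123",                               # placeholder
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "10"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "20"},
    ],
)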

Bronze tier does not restrict to one request per minute in WSO2 API Manager

I have published an API with tier availability as Bronze.
When I subscribe to that API as a different user in the store, Bronze is the only tier available to subscribe to. After subscribing, when I try accessing the API, I am able to hit it many times in a minute without any restriction. Why does it not restrict me to 1 request per minute?
Thanks
Are you using your local install of WSO2 API Manager or the API Cloud service? In API Cloud, the tiers are all preconfigured and work flawlessly as far as I can tell.
For your local API Manager setup, the couple of things I would look at are:
Check whether the APIs that you set up require authorization. If you set them up as public, without an authorization key requirement, then I think there is no tier enforcement either, because your tier cannot be verified without authorization.
Check your tiers.xml to ensure that the throttling level for the tier is properly set up: https://docs.wso2.com/display/AM170/Managing+Throttling+Tiers