I’m trying to pick up API gateway.
What are the different use cases for ALB vs API Gateway? I'll be honest and say that I am not completely familiar with the concept of an API. I know we can invoke it from the backend, and I've heard the analogy that it's like a restaurant menu. Beyond that, though, I am not clear what the difference is. Is it that applications using an ALB have a user interface, but those using API Gateway do not? I tried reading through the documentation on the features, but got even more lost. As such, I am hoping someone can share the use cases for both services so that I can visualise what each is used for and the reason for the features. Thanks!
API GW is focused on APIs and has some additional options - e.g. API definition via Swagger/OpenAPI, execution of a Lambda function, converting the call into an EventBridge event, support for authenticators (IAM, Cognito), multiple deployment stages, etc.
The load balancer listens on a port and that's about it.
Q: In what cases would you require these API GW features as opposed to just using an ALB?
A: One obvious benefit is a serverless or "low code" setup. Let's say you want an API which processes requests asynchronously.
It is possible to create an API endpoint which queues all incoming requests to an SQS queue with a few AWS CLI commands and no programming (provided the underlying resources already exist):
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-aws-services.html
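For illustration, here is a rough boto3 sketch of that kind of HTTP API to SQS integration; the queue URL and IAM role ARN are placeholders for resources you would have created beforehand:

```python
import boto3

apigw = boto3.client("apigatewayv2")

# Assumed to exist already: an SQS queue and an IAM role that API Gateway can assume
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue"   # placeholder
CREDENTIALS_ARN = "arn:aws:iam::123456789012:role/apigw-sqs-send-role"    # placeholder

# 1. Create the HTTP API
api = apigw.create_api(Name="queueing-api", ProtocolType="HTTP")

# 2. First-class SQS integration: no Lambda, no application code
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="AWS_PROXY",
    IntegrationSubtype="SQS-SendMessage",
    PayloadFormatVersion="1.0",
    CredentialsArn=CREDENTIALS_ARN,
    RequestParameters={
        "QueueUrl": QUEUE_URL,
        "MessageBody": "$request.body",   # forward the raw request body to the queue
    },
)

# 3. Route POST /enqueue to the integration and auto-deploy a default stage
apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="POST /enqueue",
    Target=f"integrations/{integration['IntegrationId']}",
)
apigw.create_stage(ApiId=api["ApiId"], StageName="$default", AutoDeploy=True)
```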
Each API endpoint can be served by a different AWS resource including EC2/ALB.
ALB is well suited for things like traditional monolithic applications (LAMP, Java application servers etc.)
Related
Imagine a use case where multiple REST APIs (pure API with no UI) are deployed using Cloud Run or Cloud Functions. Each API:
is unique and for a specific outside client, which will use it for various unknown purposes.
needs to be secured so that it can only be used by the corresponding client.
The APIs need to be available via the same custom domain, e.g.:
api.example.com/client_1
api.example.com/client_2
The clients are potentially many, so any solution must be possible to automate programmatically.
Is there a good approach for achieving this using GCP? I understand that an API Gateway along with Load Balancing can be used to achieve availability via a custom domain. However I’m not sure how to best tackle the security part. Using API keys seems like an option, but IIUC each key would have access to all APIs encapsulated by the Gateway in that case. Also I’m not sure if the API keys can be created programmatically in a straightforward manner using one of the GCP client libraries.
Are there better options I’m missing?
Using API Gateway for your use case might not be the best option, the reason being the following section of the GCP documentation:
Custom domain names are not supported for API Gateway. If you want to customize the domain name, you have to create a load balancer to use your custom domain name and then direct requests to the gateway.dev domain of your deployed API.
This might in turn increase the costs for your application.
So, I would suggest creating your REST APIs (with Node.js, for example) and deploying them on Cloud Run. Cloud Run supports mapping custom domain names.
NOTE: Domain mapping is still not supported in every region, so you might want to be thoughtful about that with respect to latency.
Coming to the part about securing your APIs, you can do the following:
You can create API keys and configure your API to accept these keys via a header or query parameter.
To create your application's API key you can follow the Google documentation:
https://support.google.com/googleapi/answer/6158862?hl=en
https://medium.com/swlh/secure-apis-in-cloud-run-cloud-functions-and-app-engine-using-cloud-endpoints-espv2-beta-b51b1c213aea
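On the "can keys be created programmatically" point: assuming the google-cloud-api-keys client library, a rough sketch of per-client key creation could look like this (project ID and the naming scheme are placeholders):

```python
from google.cloud import api_keys_v2


def create_client_key(project_id: str, client_name: str) -> str:
    """Create an API key for one client and return the key string."""
    client = api_keys_v2.ApiKeysClient()

    key = api_keys_v2.Key()
    key.display_name = f"key-for-{client_name}"  # placeholder naming scheme

    request = api_keys_v2.CreateKeyRequest(
        parent=f"projects/{project_id}/locations/global",
        key=key,
    )
    # create_key returns a long-running operation; .result() waits for the created key
    created = client.create_key(request=request).result()
    return created.key_string


# Usage (placeholder project ID):
# print(create_client_key("my-gcp-project", "client_1"))
```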
You can create multiple APIs using the same domain, even without load balancers or complex coding, by using OpenAPI. This document outlines the procedure for creating multiple APIs using subdomains in GCP. There are multiple ways to apply authentication to your OpenAPI spec; follow this documentation for enabling authentication in OpenAPI. Hope this helps.
I have a private S3 bucket with lots of small files. I'd like to expose the contents of the bucket (only read-only access) using AWS API Gateway as a proxy. Both S3 bucket and AWS API Gateway belong to the same AWS account and are in the same VPC and Availability Zone.
AWS API Gateway comes in two types: HTTP API and REST API. The configuration options of the REST API are more advanced; additionally, the REST API supports many more AWS service integrations than the HTTP API. In fact, the use case I described above is fully covered in one of the documentation tabs for the REST API. However, the REST API has one huge disadvantage: it's about 70% more expensive than the HTTP API. The price comes with more configuration options, but for now I need only one, integration with the S3 service, which is why I believe that type of service is not well suited for my use case. I started searching for whether the HTTP API can be integrated with S3, and so far I haven't found any way to achieve it.
I tried creating/editing the service-linked roles associated with the HTTP API Gateway instance, but those roles can't be edited (read-only access). As of now, I don't have any idea where I should search next, or whether my goal is even achievable using the HTTP API.
I am a fan of AWS's HTTP APIs.
I work daily with an API that serves a very similar purpose. The way I have done it is by using AWS Lambda functions integrated with the API's paths.
What works for me is this:
Define your API paths, and integrate them with AWS Lambda functions.
Have your integrated Lambda function return a signed URL for any objects you want to provide access to through API calls.
There are several different ways to pass the name of the object(s) you want to the Lambda function servicing the API call.
This is the short answer. I plan to give a longer answer at a later time. But this has worked for me.
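For reference, a minimal sketch of the kind of handler I mean, assuming the bucket name comes from an environment variable and the object key arrives as a path parameter (names are illustrative):

```python
import json
import os

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ["BUCKET_NAME"]  # placeholder env var configured on the function


def handler(event, context):
    # With an HTTP API route like GET /objects/{key}, the key arrives in pathParameters
    key = (event.get("pathParameters") or {}).get("key")
    if not key:
        return {"statusCode": 400, "body": json.dumps({"error": "missing object key"})}

    # The pre-signed URL gives the caller temporary read access to the private object
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # seconds
    )
    return {"statusCode": 200, "body": json.dumps({"url": url})}
```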
I am very new to AWS Lambda and am struggling to understand its functionality based on the many examples I found online (plus reading endless documentation). I understand that the primary goal of using such a service is to implement a serverless architecture that is cost-efficient (and probably effort-efficient) by letting Lambda and API Gateway take on the role of managing your server (so serverless does not mean you don't use a server; the architecture just takes care of the server for you). I organized my research into two general approaches developers take to deploy a Flask web application to Lambda:
Deploy the entire application to Lambda using Zappa, with the Zappa configuration (a JSON file) handling the API Gateway setup and authentication.
Deploy only the function, i.e. the parsing black box that transforms user input into the form the backend endpoint expects (and back again) -> grab a proxy URL from API Gateway configured as a Lambda proxy -> have a separate application program that uses that URL
(and then there's 3, which does not use API Gateway and invokes the Lambda function from the application itself, but I really want to get hands-on experience using API Gateway)
Here are the questions I have for each of the above two approaches:
For 1, I don't understand how Lambda is calling the functions in the Flask application. According to my understanding, Lambda only calls functions that take the parameters event and context. Or are the URL calls (URLs formulated by API Gateway) actually the events calling the separate functions in the Flask application, thus enabling Lambda to function as a 'serverless' environment? This doesn't make sense to me because event, in most of the examples I analyzed, is the user input data. That means some of the functions in the application have no event and some do, which would mean Lambda somehow magically figures out what to do with the different function calls?
I also know that Lambda does have limited capacity, so is this the best way? It seems to be the standard way of deploying a web application on Lambda.
For 2, I understand the steps leading to incorporating the API Gateway URLs in the Flask application. The Flask application will therefore use the URL to access the Lambda function and will have HTTP endpoints for user access. HOWEVER, that means that if I have the Flask application on my local computer, the application will only be hosted while I am running it on my computer; I would like it to have persistent public access (hopefully). I read about AWS Cloud9; would this be a good solution? Where should I deploy the application itself to optimize this architecture, without using services that take away the architecture's serverless-ness (like an EC2 instance), or maybe on S3, where I would be putting my frontend HTML files and hosting a website? Also, going back to 1 (sorry, I am trying to organize my ideas in a coherent way, and it's not working too well), will the application run consistently as long as I leave the API Gateway endpoint open?
I don't know what the best practice is for deploying a Flask application using AWS Lambda and API Gateway, but based on my findings, the above two are the most frequently used. It would be really helpful if you could answer my questions so I can actually start playing with AWS Lambda! Thank you! (+I did read all the Amazon documentation, and these are the final remaining questions I have before I start experimenting :))
Zappa has its own code to handle requests and make them compatible with the Flask format. Keep in mind that you aren't really using Flask as intended in either case when using Lambda. Lambdas are invoked only when calls are made, whereas Flask usually keeps a server running that listens for requests; here, the continuously running part is handled by API Gateway. Zappa essentially creates a single ANY-method proxy route on API Gateway; that request is passed to your Lambda handler, which interprets it and uses it to invoke your Flask view function.
If you are building API Gateway + Lambda you don't need to use Flask. It would be much easier to simply write a function that is invoked with the parameters that API Gateway passes to the Lambda handler. The front-end application you can host on S3 (if it is static or an Angular app).
I would say the best practice here would be to not use Flask and go with the API Gateway + Lambda option. This lets you put custom security and checks on your APIs, as well as making the application a lot more stable, since every request gets its own Lambda invocation.
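To illustrate the "skip Flask" option: with a Lambda proxy integration, API Gateway hands your handler a single event describing the whole HTTP request, and your code just branches on it. A minimal sketch (the route and field values are illustrative):

```python
import json


def handler(event, context):
    # REST API Lambda proxy integration: method, path, query string and body
    # all arrive inside the single `event` dict
    method = event.get("httpMethod")
    path = event.get("path")
    params = event.get("queryStringParameters") or {}

    if method == "GET" and path == "/hello":
        name = params.get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello {name}"}),
        }

    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
```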
We are looking at utilising AWS API Gateway for better management of APIs. However, at an enterprise level, what would be the best practice? Should a common API gateway be used by all app teams (in which case we might need an administrator for this common gateway, which adds overhead), or should each app team build their own API gateway and administer their own API calls?
Hope to have someone share their experiences.
I have used AWS API Gateway for different web/mobile application projects. Let me try to answer your questions one by one.
Limitations Based Design
API Gateway comes with limitations, and some design decisions follow directly from them.
For example, there is a soft limit on "Resources per API", set at 300, which can be increased up to a maximum of 500. This means that if more than 500 resources are ever needed, a new API gateway has to be created.
So it's better to logically segregate the APIs and have different API gateways depending on the purpose.
The throttle limit per region across REST APIs, WebSocket APIs, and WebSocket callback APIs is a soft limit of 10,000 requests per second (RPS), with additional burst capacity provided by the token bucket algorithm, using a maximum bucket capacity of 5,000 requests.
So the API gateway also needs to be designed with expected traffic in mind.
There are many such limitations: https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Features Based Design
API Gateway uses the OpenAPI standard and facilitates XML/JSON import and export. So if a new API gateway is created from an application's Swagger file, it's better not to mix it with other applications.
There are also features like (1) API caching, (2) throttling, (3) Web Application Firewall, and (4) client certificates, which cannot be common for all APIs in an enterprise. So again, it's better to have separate APIs based on the requirements.
AWS API Gateway is engineered with different logging mechanisms and each API gateway implementation will need a tailored approach.
SDK generation comes in very handy for mobile development, and again there is no point in bundling all APIs into one SDK and providing access that way.
So my suggestion is to use multiple API gateways in an enterprise, based on specific needs.
I have a few different services (generated by the Serverless Framework) that need to communicate between each other. The data is sensitive and requires authentication.
My current strategy is to create an API key for each service and communicate between services using a JSON Web Token like the one below.
fM61kaav8l3y_aLC/3ZZF7nlQGyYJsZVpLLiux5d84UnAoHOqLPu4dw3W7MiGwPiyN
What are some other options for communicating between services? Are there any downsides to this approach? To reiterate, the request needs to be authenticated and appropriately handle sensitive data.
Do you need sync or async communication?
A good approach would be to use events, because AWS Lambda is designed as an event-based system. So you could use SNS or SQS to decouple your services.
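For example, an asynchronous, decoupled call can be as simple as publishing an event to SNS (the topic ARN and payload below are placeholders); subscribed queues or functions then process it on their own schedule:

```python
import json

import boto3

sns = boto3.client("sns")

# Publish a domain event; subscribers (SQS queues, Lambdas) receive it asynchronously
sns.publish(
    TopicArn="arn:aws:sns:eu-west-1:123456789012:order-events",  # placeholder topic
    Message=json.dumps({"type": "ORDER_CREATED", "orderId": "1234"}),
)
```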
If you just want to make calls from one service to another, you could invoke the Lambda function directly via the AWS SDK (see the docs). That way you would not add an API Gateway endpoint and your Lambdas would stay private.
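A direct Lambda-to-Lambda call via the SDK could look roughly like this (function names and payloads are placeholders); the IAM permissions on the caller's role then act as the authentication:

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Synchronous call: wait for the other service's reply
response = lambda_client.invoke(
    FunctionName="orders-service-getOrder",          # placeholder function name
    InvocationType="RequestResponse",
    Payload=json.dumps({"orderId": "1234"}),
)
result = json.loads(response["Payload"].read())

# Asynchronous call: fire-and-forget background work
lambda_client.invoke(
    FunctionName="billing-service-processInvoice",   # placeholder function name
    InvocationType="Event",
    Payload=json.dumps({"invoiceId": "5678"}),
)
```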
To better answer your question, you should give a short overview of your application and an example of an inter-service call you would make.
As I understand it, you intend to make the various functions in a given service private. In doing so, each service will likely have a serverless.yml file that resembles the following:
[Image: serverless.yml setup for API keys with a Serverless Framework REST API]
While this is a suitable approach, it is less desirable than using **Custom Authorizers**.
Custom Authorizers allow you to run an AWS Lambda Function before your targeted AWS Lambda Function. This is useful for Microservice Architectures or when you simply want to do some Authorization before running your business logic.
If you are familiar with the onEnter function when using ReactRouter, the logic among Custom Authorizers is similar.
Regarding implementation, since different services are used to deploy the various functions, deploy the authorizer function to AWS and note the ARN of that Lambda function. Then follow these links to see the appropriate setup for the custom authorizer.
[Images: serverless.yml configuration for using custom authorizers when the authorizer is not part of the service but is already deployed as a Lambda]
The GitHub project aws-node-auth0-custom-authorizers-api/frontend is a good example of how to implement Custom Authorizers when the authorizer function is in the same service as the private function. Note that your situation differs slightly, but you should expect their authorizer function logic to be similar; only the project structure should differ.
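As a rough idea of what the authorizer Lambda itself does, here is a sketch of a TOKEN-type custom authorizer; the shared-secret comparison is a placeholder for whatever real token validation (e.g. JWT verification) you would use:

```python
import os


def authorizer_handler(event, context):
    # For a TOKEN authorizer, API Gateway passes the Authorization header value
    # as `authorizationToken` and the called method's ARN as `methodArn`
    token = event.get("authorizationToken", "")
    expected = os.environ.get("SHARED_SECRET", "")  # placeholder secret

    effect = "Allow" if token == f"Bearer {expected}" else "Deny"

    # Return an IAM policy that allows or denies invoking the target method
    return {
        "principalId": "calling-service",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```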