API Gateway Endpoints to Microservices - amazon-web-services

We are building a cloud-native architecture with a React frontend on S3, API Gateway, microservices on Lambda functions, and the necessary AWS services. API Gateway handles the cross-cutting concerns and acts as a BFF.
My question is about the mapping between API Gateway and the microservices. There are 3 microservices (let's say order, customer, payment) built with Java Spring.
Do I need to create equivalent APIs (HTTPS proxies) in API Gateway to connect to the microservices?
If so, do I need to create 3 API resources, each with 2 endpoint methods (let's say GET, POST), in API Gateway to map all 3 microservices? So 3 APIs with 6 methods in total?
If so, do I need to create a microservice per API method (GET, POST), so 6 microservices in total?
Each microservice is itself also a REST service, which again means a list of resources and methods. In that case, 3 microservices, each with (3 resources x 2 methods) = 6 methods?
So I could visualize a one-to-many mapping between API methods in API Gateway and microservice methods, something like 6 x 6 combinations? I agree a microservice should not be action-based, but a few CRUD actions are planned for now.
For cross-cutting concerns (auth, logging, etc.), should we define separate API resources in API Gateway? No microservices would be required in that case.
I have read a lot about API Gateway and microservices, but I could not get a clear understanding of the best practices for creating API endpoints when integrating with microservices. Please shed some light. Thanks.
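For illustration only, here is a minimal sketch (AWS CDK in TypeScript; the service names and backend URLs are made-up placeholders, not from the question) of one common shape for this mapping: one REST API, one resource per microservice, and a greedy {proxy+} ANY method that forwards every sub-path and verb to the corresponding Spring service over an HTTP proxy integration.

    // Hypothetical sketch: one REST API with an HTTP proxy resource per microservice.
    // Backend base URLs are placeholders.
    import { Stack, StackProps } from 'aws-cdk-lib';
    import * as apigateway from 'aws-cdk-lib/aws-apigateway';
    import { Construct } from 'constructs';

    export class BffGatewayStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        const api = new apigateway.RestApi(this, 'BffApi');

        // One resource per microservice; {proxy+} forwards any sub-path and method,
        // so the gateway does not need a separate method per backend endpoint.
        const services: Record<string, string> = {
          orders: 'https://orders.internal.example.com',
          customers: 'https://customers.internal.example.com',
          payments: 'https://payments.internal.example.com',
        };

        for (const [name, baseUrl] of Object.entries(services)) {
          const proxy = api.root.addResource(name).addResource('{proxy+}');
          proxy.addMethod(
            'ANY',
            new apigateway.HttpIntegration(`${baseUrl}/{proxy}`, {
              proxy: true,
              httpMethod: 'ANY',
              options: {
                requestParameters: {
                  'integration.request.path.proxy': 'method.request.path.proxy',
                },
              },
            }),
            { requestParameters: { 'method.request.path.proxy': true } }
          );
        }
      }
    }

With a greedy proxy like this, the resources-times-methods multiplication largely disappears: each backend keeps its own REST resource/method structure, and the gateway only needs one proxy mapping per microservice.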

Related

How to build an AWS serverless Apollo Federation API which is constructed from subgraphs in separate microservices?

The Task:
Imagine a large-ish company or product that has many microservices run by separate teams. Each microservice exists in a separate repo. You want to build a single unified Apollo GraphQL API which collates all the subgraphs from the separate microservice APIs. And you want to build it using Serverless technologies. The Unified API should be authenticated using Cognito but the underlying subgraph APIs shouldn't be exposed to the public internet.
Ideal scenario: AWS AppSync would support federation natively.
In-reality scenario: Since this can't be done easily, we have to run the Apollo Federation server in a Lambda fronted by API Gateway. The Federation server's setup knows where the subgraph endpoints are.
Question: How do we construct the subgraphs? Using AppSync, further Lambda servers for each microservice, or something else? What techniques have people used to deploy Apollo Federation within AWS?
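For concreteness, a minimal sketch of that "Federation server in a Lambda fronted by API Gateway" setup, assuming Apollo Server 4 with the @as-integrations/aws-lambda package and placeholder subgraph names/URLs:

    // Apollo Federation gateway running in a Lambda behind API Gateway.
    // Subgraph names/URLs are placeholders; auth pass-through is omitted.
    import { ApolloServer } from '@apollo/server';
    import { ApolloGateway, IntrospectAndCompose } from '@apollo/gateway';
    import {
      startServerAndCreateLambdaHandler,
      handlers,
    } from '@as-integrations/aws-lambda';

    const gateway = new ApolloGateway({
      supergraphSdl: new IntrospectAndCompose({
        subgraphs: [
          { name: 'orders', url: 'https://orders.internal.example.com/graphql' },
          { name: 'customers', url: 'https://customers.internal.example.com/graphql' },
        ],
      }),
    });

    const server = new ApolloServer({ gateway });

    // API Gateway (HTTP API, payload format 2.0) invokes this handler directly.
    export const handler = startServerAndCreateLambdaHandler(
      server,
      handlers.createAPIGatewayProxyEventV2RequestHandler()
    );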
Design Considerations:
Resource-Based Policies:
What I would prefer is for API Gateway to be authenticated with Cognito and the AppSync subgraphs to grant full access to the Lambda. However, because AppSync doesn't support resource-based policies, this isn't possible.
API-Key:
I can use API-Key auth on the subgraphs, but since AppSync has publicly accessible endpoints, this feels like a security risk.
Cognito:
A possibility: I would need to pass the Cognito auth through from API Gateway to the Lambda, then to the subgraphs. Feels icky.
Lambda Authorization:
Add Lambda auth for the subgraphs and use the request context(?) to determine that the request came from inside. A hack to stand in for resource-based policies (a rough sketch follows these design considerations).
Out to Internet and Back Subgraph:
AppSync provides a public URL for the endpoint, and composing the federated graph pulls the schemas to build the supergraph schema. This feels like internal services going out to the internet and then back in. The best solution would be internal IP addresses/URLs, hosting all the subgraphs within a private VPC.
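As a rough sketch of the Lambda-authorization workaround above (the shared-secret approach and the environment variable name are assumptions, not something AppSync prescribes), the subgraph's Lambda authorizer can accept a request only when the federation gateway presents a value that nothing on the public internet should know:

    // Hypothetical AppSync Lambda authorizer: allow a call only when the federation
    // gateway sends a shared secret as the authorization token.
    interface AuthorizerEvent {
      // Trimmed to the fields used here; AppSync sends more.
      authorizationToken: string;
      requestContext: { apiId: string; accountId: string; queryString: string };
    }

    interface AuthorizerResult {
      isAuthorized: boolean;
      ttlOverride?: number;
    }

    export const handler = async (event: AuthorizerEvent): Promise<AuthorizerResult> => {
      const expected = process.env.INTERNAL_GATEWAY_SECRET; // assumed env var
      return {
        isAuthorized: Boolean(expected) && event.authorizationToken === expected,
        ttlOverride: 300, // cache the decision for 5 minutes
      };
    };

It is still effectively an API key in disguise, which is why it reads as a hack compared to real resource-based policies.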
Conclusion:
Building a secure federated graph feels hacky with serverless technologies. It feels like I should avoid AppSync altogether and use subgraphs (private API Gateways in a private VPC, powered by Lambdas) feeding into a public API Gateway authenticated by Cognito.
Interested in thoughts.

How to use ECS and Lambda microservices within the same API?

I am trying to set up a microservice architecture on AWS, where each microservice is a REST API.
Some of the services run on ECS using Fargate, and some run as sets of Lambdas.
I am trying to have each API route resolve to the correct service, whether it is an ECS-based or Lambda-based service.
I can see how it would be possible using only ECS services (with an Application Load Balancer and listeners) or using only Lambdas (with an API Gateway). But I just can't seem to figure out how to mix the two together.
I have been searching relentlessly all week and I cannot find any decent documentation or an example of how to implement something similar to this.
There appears to be a limit on the number of routes for an ALB or API Gateway. If I have several Lambda-based services, there will need to be a declared path for each Lambda function, and they will use up the path limit very quickly.
Should there be an intermediary step between each service and the API Gateway? For instance, each Lambda service could have its own API Gateway which 'groups' those functions together. That would mean a nested set of API Gateways that the parent API Gateway routes to. This doesn't feel correct, though.
Any help in the right direction would be appreciated.
Thanks
Your AWS account's limit on API Gateway REST and WebSocket routes/resources can be increased with a request to AWS Support.
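One way to picture the mixing, as a sketch only (AWS CDK in TypeScript, assuming a recent aws-cdk-lib where the HTTP API constructs are stable; the service names are placeholders): a single HTTP API can route some paths to Lambda integrations and other paths to an ALB listener that fronts the ECS services, with a greedy proxy per service keeping the route count low.

    // Hypothetical sketch: one HTTP API mixing Lambda and ALB (ECS) integrations.
    import * as apigwv2 from 'aws-cdk-lib/aws-apigatewayv2';
    import {
      HttpLambdaIntegration,
      HttpAlbIntegration,
    } from 'aws-cdk-lib/aws-apigatewayv2-integrations';
    import * as lambda from 'aws-cdk-lib/aws-lambda';
    import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

    // Provided elsewhere in the stack; declared here to keep the sketch short.
    declare const ordersFunction: lambda.IFunction;               // Lambda-based service
    declare const customersListener: elbv2.IApplicationListener;  // ALB in front of ECS

    export function addRoutes(httpApi: apigwv2.HttpApi): void {
      // Greedy {proxy+} routes mean one route per service, not one per endpoint.
      httpApi.addRoutes({
        path: '/orders/{proxy+}',
        methods: [apigwv2.HttpMethod.ANY],
        integration: new HttpLambdaIntegration('OrdersIntegration', ordersFunction),
      });

      httpApi.addRoutes({
        path: '/customers/{proxy+}',
        methods: [apigwv2.HttpMethod.ANY],
        integration: new HttpAlbIntegration('CustomersIntegration', customersListener),
      });
    }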

API Gateway - How does Deploy API work?

After creating an API in API Gateway with two API names and integrating it with a Lambda function,
the AWS documentation recommends deploying this API.
1) What does deploying an API mean? How is creating the API in API Gateway different from deploying it?
2) Does the deploy-API option internally create a CloudFormation template that creates a stack and deploys it?
1) What does deploying an API mean? How is creating the API in API Gateway different from deploying it?
Let's say you have created your API; now, how do you make it public so that it can be used?
That's where deployment comes in. Once you are done defining your API, deploy it to make it callable by your users. When you deploy, you get an invoke URL from API Gateway which can be accessed by everyone.
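As a small illustration (a sketch with the AWS SDK for JavaScript v3; the API ID, region and stage name are placeholders), deploying is just creating a deployment that binds a stage to the current state of the API, after which the stage's invoke URL is callable:

    // Hypothetical sketch: deploy an existing REST API to a "dev" stage.
    import {
      APIGatewayClient,
      CreateDeploymentCommand,
    } from '@aws-sdk/client-api-gateway';

    const client = new APIGatewayClient({ region: 'us-east-1' });

    export async function deployApi(restApiId: string): Promise<string> {
      // Snapshot the API's current resources/methods and attach them to a stage.
      await client.send(new CreateDeploymentCommand({ restApiId, stageName: 'dev' }));
      // The stage is then reachable at its invoke URL.
      return `https://${restApiId}.execute-api.us-east-1.amazonaws.com/dev`;
    }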
2) Does the deploy-API option internally create a CloudFormation template that creates a stack and deploys it?
No. As you said, you have integrated a Lambda function with your API, so API Gateway will simply forward all the calls to your Lambda function, which is serverless.
An API Gateway is a proxy that manages the endpoints. It acts as the single entry point into a system, allowing multiple APIs or microservices to act cohesively and provide a uniform experience to the user.
The most important role the API gateway plays is ensuring reliable processing of every API call. In addition, the API gateway provides the ability to design API specs, helps provide enterprise-grade security, and lets you manage APIs centrally.
An API Gateway is a server that is the single entry point into the system. It is similar to the Facade pattern from object-oriented design. The API Gateway encapsulates the internal system architecture and provides an API that is tailored to each client. It might have other responsibilities such as authentication, monitoring, load balancing, caching, request shaping and static response handling.
https://learn.microsoft.com/en-us/azure/architecture/microservices/design/gateway
https://microservices.io/patterns/apigateway.html
Deploying a REST API in Amazon API Gateway:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-tutorials.html
https://auth0.com/docs/integrations/aws-api-gateway/custom-authorizers/part-1
https://auth0.com/docs/integrations/aws-api-gateway/custom-authorizers/part-2
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-deploy-api.html

API Gateway Best Practices

We are looking at utilising AWS API Gateway for better management of APIs. However, at an enterprise level, what would be the best practice? Should there be one common API Gateway used by all app teams (in which case we might need an administrator for this common API Gateway, which adds overhead), or should each app team build their own API Gateway and administer their own API calls?
Hope to have someone share their experiences.
I have used AWS API Gateway for different web/mobile application projects. Let me try to answer your questions one by one.
Limitations Based Design
API Gateway comes with limits, and some of the design answers follow from these limits.
For example, there is a soft limit on "Resources per API", set at 300, which can be increased up to a maximum of 500. This means that if more than 500 resources are ever needed, a new API Gateway has to be created.
So it's better to logically segregate the APIs and have different API Gateways depending on the purpose.
The throttle limit per region across REST APIs, WebSocket APIs, and WebSocket callback APIs is a soft limit of 10,000 requests per second (RPS), with additional burst capacity provided by the token bucket algorithm, using a maximum bucket capacity of 5,000 requests.
So the API Gateway layout also needs to be designed around expected traffic.
There are many such limits: https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Features Based Design
API Gateway uses the OpenAPI (Swagger) standard and facilitates definition import and export. So if a new API Gateway is created from an application's Swagger file, it's better not to mix it with other applications.
There are many features, like 1) API caching, 2) throttling, 3) Web Application Firewall, and 4) client certificates, which cannot be common to all APIs in an enterprise. So again, it's better to have separate API Gateways based on the requirements.
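For instance, a sketch of per-stage settings (AWS CDK in TypeScript; the numbers are placeholders) of the kind that typically differ between applications, which is part of why a single shared gateway is awkward:

    // Hypothetical sketch: per-stage cache and throttling settings for one team's API.
    import * as apigateway from 'aws-cdk-lib/aws-apigateway';
    import { Construct } from 'constructs';

    export function createTeamApi(scope: Construct): apigateway.RestApi {
      return new apigateway.RestApi(scope, 'TeamApi', {
        deployOptions: {
          stageName: 'prod',
          cacheClusterEnabled: true,   // dedicated cache cluster for this API only
          cachingEnabled: true,        // response caching on its methods
          throttlingRateLimit: 500,    // steady-state requests per second
          throttlingBurstLimit: 200,   // burst capacity
        },
      });
    }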
AWS API Gateway offers different logging mechanisms, and each API Gateway implementation will need a tailored approach.
SDK generation comes in very handy for mobile development, and again there is no point in bundling all APIs into one SDK and handing out access that way.
So my suggestion is to use multiple API Gateways in an enterprise, based on specific needs.

May I call AWS Lambda directly, service to service, in an API architecture?

This is the first time I am using AWS Lambda in an API architecture,
because I'm trying to implement serverless.
Let's say I have three microservices, all of them hosted on AWS Lambda.
And I use AWS API Gateway as the router. I have also implemented JSON Web Tokens (JWT) in API Gateway.
These are the public URLs that the frontend will use:
API Routing URL - https://mydomain.co/v1/lambda-service1
    Lambda REAL URL - https://cr7z0dds42.execute-api.ap-southeast-amazonaws.com/DEV/
API Routing URL - http://mydomain.co/v1/lambda-service2
    Lambda REAL URL - https://cr7z0ddgg2.execute-api.ap-southeast-amazonaws.com/DEV/
API Routing URL - http://mydomain.co/v1/lambda-service3
    Lambda REAL URL - https://cgf7z0ddgg2.execute-api.ap-southeast-amazonaws.com/DEV/
Basically, if I am the client/frontend and I want to get data from API number 1 using the TOKEN, I will use the API routing URL.
But there are some cases where API number 1 needs to call service number 2 before returning to the client/frontend.
Currently, what I do from service number 1 is call service number 2 directly via its Lambda REAL URL, not its API routing URL, and without using the TOKEN.
Is this justified?
If I got your query correctly, maybe the points below might assist.
Justification mainly depends on the scale you expect for the app you are developing, and also on the organisation/architecture policies you might want to enforce.
Using tokens for all API calls and keeping things consistent will make it easier when a wider team of developers works on the project down the track. It also makes troubleshooting easier, as the scope of each Lambda function's inputs/outputs and logic is clear and consistent.
Another thought is around cost: API calls do cost you at scale, so that should be taken into consideration during the architecture as well. But (in my opinion) the savings from traffic not hitting API Gateway, when the call is made directly from Lambda to internal resources, might be negligible compared with the value of consistency in the app call flow.
Anyway, some thoughts to consider.
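As an illustration of the trade-off being discussed (a sketch only; the function name, payload shape and region handling are hypothetical), service number 1 can invoke service number 2's Lambda directly with the AWS SDK instead of calling back out through a public URL, and it can forward the caller's token if the team decides to keep auth consistent end to end:

    // Hypothetical sketch: service 1 invoking service 2's Lambda directly,
    // bypassing API Gateway. IAM must allow lambda:InvokeFunction on the target.
    import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

    const lambdaClient = new LambdaClient({});

    export async function callService2(callerToken: string): Promise<unknown> {
      const response = await lambdaClient.send(
        new InvokeCommand({
          FunctionName: 'lambda-service2', // placeholder function name
          Payload: Buffer.from(
            JSON.stringify({
              action: 'getCustomer',        // placeholder action
              authorization: callerToken,   // forward the JWT so service 2 can verify it
            })
          ),
        })
      );
      return JSON.parse(Buffer.from(response.Payload ?? []).toString('utf8'));
    }

This keeps the hop off API Gateway (no extra gateway cost, no public round trip) while still letting the downstream service apply the same token checks if consistency matters more than the small saving.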