May I call service-to-service AWS Lambda directly in an API architecture?

This is the first time I have used AWS Lambda in an API architecture, because I'm trying to implement a serverless design.
Let's say I have three microservices, all of them hosted on AWS Lambda.
I use AWS API Gateway as the router, and I have also implemented JSON Web Tokens (JWT) in API Gateway.
These are the public URLs that the frontend will use.
Routing URL - https://mydomain.co/v1/lambda-service1
    Lambda real URL - https://cr7z0dds42.execute-api.ap-southeast-amazonaws.com/DEV/
Routing URL - http://mydomain.co/v1/lambda-service2
    Lambda real URL - https://cr7z0ddgg2.execute-api.ap-southeast-amazonaws.com/DEV/
Routing URL - http://mydomain.co/v1/lambda-service3
    Lambda real URL - https://cgf7z0ddgg2.execute-api.ap-southeast-amazonaws.com/DEV/
Basically, if I am the client/frontend and want to fetch data from service number 1 using the TOKEN, I will use the routing URL.
But in some cases service number 1 needs to call service number 2 before returning to the client/frontend.
Currently, what I do is have service number 1 call service number 2 directly via its Lambda real URL, not the routing URL, and without using a TOKEN.
Is this justified?

If I understood your query correctly, the points below might help.
Whether it is justified depends mainly on the scale you expect the app to reach and on the organisation/architecture policies you may want to enforce.
Using tokens for all API calls and keeping that consistent will make things easier when a wider team of developers works on the project down the track. It also makes troubleshooting easier, as the scope of each Lambda function (its inputs and outputs) stays clear and consistent.
Another thought is cost: API calls do cost you at scale, so that should be taken into consideration during architecture as well. But in my opinion the savings from traffic not hitting API Gateway (where the call is made directly from Lambda to internal resources) are likely negligible compared with the value of a consistent call flow.
Anyway, some thoughts to consider.
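If you do go the consistent route, here is a minimal sketch of service number 1 forwarding the caller's token to service number 2 through the routed URL. This assumes Python on Lambda; the routed URL is taken from the question, while the payload fields and response handling are made-up placeholders.

```python
import json
import urllib.request

# Routed URL from the question (goes through API Gateway and its JWT check),
# rather than the "real" execute-api URL.
SERVICE2_URL = "https://mydomain.co/v1/lambda-service2"

def lambda_handler(event, context):
    # Reuse the caller's JWT so service 2 authorises the request the same way service 1 did.
    headers = event.get("headers") or {}
    token = headers.get("Authorization") or headers.get("authorization") or ""

    req = urllib.request.Request(
        SERVICE2_URL,
        data=json.dumps({"example": "payload"}).encode("utf-8"),  # hypothetical payload
        headers={"Content-Type": "application/json", "Authorization": token},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        service2_result = json.loads(resp.read())

    return {"statusCode": 200, "body": json.dumps(service2_result)}
```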

Related

AWS HTTP API Gateway as a proxy to private S3 bucket

I have a private S3 bucket with lots of small files. I'd like to expose the contents of the bucket (only read-only access) using AWS API Gateway as a proxy. Both S3 bucket and AWS API Gateway belong to the same AWS account and are in the same VPC and Availability Zone.
AWS API Gateway comes in two types: HTTP API and REST API. The configuration options of the REST API are more advanced, and the REST API supports many more AWS service integrations than the HTTP API. In fact, the use case I described above is fully covered in one of the documentation tabs for the REST API. However, the REST API has one huge disadvantage: it's about 70% more expensive than the HTTP API. The price comes with more configuration options, but for now I need only one, integration with the S3 service, which is why I believe the REST API is not well suited to my use case. I started searching for whether the HTTP API can be integrated with S3, and so far I haven't found any way to achieve it.
I tried creating/editing the service-linked roles associated with the HTTP API Gateway instance, but those roles can't be edited (read-only access only). For now, I have no idea where to search next, or whether my goal is even achievable using the HTTP API.
I am a fan of AWS's HTTP APIs.
I work daily with an API that serves a very similar purpose. The way I have done it is by using AWS Lambda functions integrated with the API's paths.
What works for me is this:
Define your API paths, and integrate them with AWS Lambda functions.
Have your integrated Lambda function return a signed URL for any objects you want to provide access to through API calls.
There are several different ways to pass the name of the object(s) you want to the Lambda function servicing the API call.
This is the short answer. I plan to give a longer answer at a later time. But this has worked for me.
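For step 2, here is a minimal sketch of the Lambda side, assuming boto3 and a path parameter named key; the bucket name is a placeholder.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-private-bucket"  # placeholder bucket name

def lambda_handler(event, context):
    # e.g. GET /files/{key} with a path parameter named "key" (illustrative)
    key = event["pathParameters"]["key"]

    # Signed URL granting temporary read access to the private object.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # valid for 5 minutes
    )
    return {"statusCode": 200, "body": json.dumps({"url": url})}
```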

Attaching a usage plan to a public API Gateway endpoint

For learning purposes, I have developed a front-end application in Angular with AWS as back-end.
As this is a didactic project, I must prevent any possible cost explosion, especially where API Gateway calls are concerned.
At the moment, I have a single public GET endpoint for providing the information to the public homepage in the front-end.
I need to attach a usage plan to this endpoint for limiting the maximum number of calls to this endpoint. For example, max 10000 calls/week.
I already tried with an API-KEY:
Created the Usage Plan with "Quota: 10,000 requests per week"
Created the API KEY connected to the Usage Plan
Connected the API KEY to the authentication method of the endpoint
It works, but in this way I need to hard code the API KEY on the front-end.
I know that hard coding sensitive information on the front-end is a bad practice, but I thought that in this case the API KEY is needed only for connecting a Usage Plan, not for securing private information or operations. So I'm not sure if in this case it should be acceptable or not.
Is this solution safe, or could the same API KEY be used for other malicious purposes?
Are there any other solutions?
To add to the other answer, API Keys are not Authorization Tokens.
API Keys associated with Usage Plans are sent to the gateway on the x-api-key header (by default).
Tokens used by authorizers live on the standard Authorization header.
Use API Keys to set custom throttling and quotas for a particular caller, but you still need an Authorizer on any secure endpoints.
When using a custom authorizer, you can optionally have the authorizer return an API key to be applied to the usage plan, which saves you from having to distribute keys to clients in addition to their secrets.
APIG Key Source Docs
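As a rough illustration of that option, here is a minimal sketch of a Lambda (TOKEN) authorizer returning a key via usageIdentifierKey, assuming the API's key source is set to AUTHORIZER. The validation and key lookup are placeholders, not real helpers.

```python
def validate_jwt(token: str):
    # Placeholder: in a real authorizer you would verify the token's signature
    # and look up which caller (and which API key) it belongs to.
    if not token:
        raise Exception("Unauthorized")
    return "example-user", "example-api-key-value"

def lambda_handler(event, context):
    token = (event.get("authorizationToken") or "").removeprefix("Bearer ")
    principal_id, api_key = validate_jwt(token)

    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
        # With the key source set to AUTHORIZER, API Gateway uses this value as the
        # API key for usage-plan throttling and quotas.
        "usageIdentifierKey": api_key,
    }
```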
As the documentation says, generally you wouldn't want to rely on a hardcoded API key for authentication.
In addition, based on the information you provided, the usage plan limits are tied to use of that specific API key. So you could also set up throttling on the account or on the stage itself.
Finally, if possible, you could set up security group rules on your server, or access control lists on the VPC serving your front end, so that not everyone can access it.
Other ideas are to shut down the API gateway when you aren't using it and also rotate the API key frequently.
Obviously none of these would be enough if you were hosting sensitive information or doing anything remotely important. But as long as the only information on the client side is that API key for reaching the API Gateway, and the API Gateway itself doesn't interact with anything sensitive, that's probably enough for a learning project.
But you should do all of this at your own risk as for all you know I'm the person who's going to try to hack you ;)
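For the stage-level throttling mentioned above, here is a hedged sketch using boto3; the REST API id and stage name are placeholders, and the patch paths follow the update_stage convention for default method throttling.

```python
import boto3

apigw = boto3.client("apigateway")

# Default throttling applied to every method in the stage.
apigw.update_stage(
    restApiId="abc123",   # placeholder REST API id
    stageName="prod",     # placeholder stage name
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "10"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "20"},
    ],
)
```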

Flask web application deployment via AWS Lambda

I am very new to AWS Lambda and am struggling to understand its functionality based on the many examples I found online (plus reading endless documentation). I understand the primary goal of using such a service is to implement a serverless architecture that is cost- and probably effort-efficient, by letting Lambda and API Gateway take on the role of managing your server (so serverless does not mean you don't use a server; the architecture just takes care of it for you). I organized my research into two general approaches developers take to deploy a Flask web application to Lambda:
Deploy the entire application to Lambda using Zappa, where the Zappa configuration (a JSON file) handles the API Gateway authentication.
Deploy only the function, the parsing black box that transforms user input into the form the backend endpoint expects (and back again) -> grab a proxy URL from API Gateway configured for Lambda proxy integration -> have a separate application program that uses that URL.
(and then there's a third option, which does not use API Gateway and invokes the Lambda function from the application itself, but I really want to get hands-on experience with API Gateway)
Here are the questions I have for each of the above two approaches:
For 1, I don't understand how Lambda calls the functions in the Flask application. As I understand it, Lambda only calls functions that take the parameters event and context. Or are the URL calls (URLs formulated by API Gateway) actually the events that call the separate functions in the Flask application, thus enabling Lambda to act as a 'serverless' environment? This doesn't make sense to me, because event, in most of the examples I analyzed, is the user input data. That would mean some of the functions in the application take an event and some don't, so does Lambda somehow magically figure out what to do with the different function calls?
I also know that Lambda does have limited capacity, so is this the best way? It seems to be the standard way of deploying a web application on Lambda.
For 2, I understand the steps leading to incorporating the API Gateway URLs in the Flask application. The Flask application will therefore use the URL to access the Lambda function and expose HTTP endpoints for user access. HOWEVER, that means that if I have the Flask application on my local computer, it will only be hosted while I am running it there, and I would like it to have persistent public access (hopefully). I read about AWS Cloud9; would this be a good solution? Where should I deploy the application itself to optimize this architecture, without using services that take away the architecture's serverless-ness (like an EC2 instance, or maybe S3, where I will be putting my frontend HTML files and hosting a website)? Also, going back to 1 (sorry, I am trying to organize my ideas in a coherent way, and it's not working too well), will the application run consistently as long as I leave the API Gateway endpoint open?
I don't know what the best practice is for deploying a Flask application using AWS Lambda and API Gateway, but based on my findings the above two approaches are the most frequently used. It would be really helpful if you could answer my questions so I can actually start playing with AWS Lambda! Thank you! (+I did read all the Amazon documentation, and these are the final remaining questions I have before I start experimenting :))
Zappa has its own code to handle requests and make them compatible with the Flask format. Keep in mind that you aren't really using Flask as intended in either case when using Lambda. Lambdas are invoked only when calls are made; Flask normally keeps running, listening for requests. Here, the continuously running part is handled by API Gateway. Zappa essentially creates a single ANY method on API Gateway; that request is passed to your Lambda handler, which interprets it and uses it to invoke your Flask function.
If you are building API Gateway + Lambda, you don't need to use Flask. It is much easier to simply write a function that works with the parameters API Gateway passes to the Lambda handler. You can host the front-end application on S3 (if it is static or an Angular build).
I would say the best practice here is to skip Flask and go with the API Gateway + Lambda option. This lets you put custom security and checks on your APIs, as well as making the application a lot more stable, since every request gets its own Lambda invocation.
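As a rough sketch of that option, here is a plain handler working directly with an API Gateway Lambda proxy event, no Flask involved; the field names and responses are illustrative only.

```python
import json

def lambda_handler(event, context):
    # With a proxy integration, API Gateway hands you the whole HTTP request as `event`.
    method = event.get("httpMethod")
    params = event.get("queryStringParameters") or {}
    body = json.loads(event["body"]) if event.get("body") else {}

    if method == "GET":
        result = {"message": f"hello {params.get('name', 'world')}"}
    else:
        result = {"received": body}

    # Return the response shape API Gateway expects from a proxy integration.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```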

Use Application Load Balancer (ALB) to trigger Lambda functions, how to add a bit of authentication at the ALB level?

I have a React app which calls API Gateway, which in turn triggers my Lambda functions. Now, to save cost given, let's say, potentially tens of millions of requests to API Gateway, I did some research and am looking at potentially using an ALB to invoke my Lambdas rather than API GW. My API GW setup is simply a Lambda proxy integration.
My question is: with API GW I can add API keys and custom authorizers etc., but with an ALB, how do I add a bit of authentication at the ALB layer, say, allow invocation of my Lambda functions only from a client that I trust? Note that my client is a static React app with no server behind it! I don't need anything too fancy; I just want to reject requests other than those from my trusted request origins. Inside the Lambda, to cover the browser, I will just add CORS response headers. But at the ALB level, how do I achieve what I need?
Looking forward to getting some light shed on this!
Thanks
Is using AWS Cognito or CloudFront an option? We did that with an enterprise application which uses OIDC (OAuth 2.0). It implements just authentication for now.
Have a look at these links:
https://aws.amazon.com/de/blogs/aws/built-in-authentication-in-alb/
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html
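As a hedged sketch of the ALB built-in authentication described in those links, here is what adding an authenticate-cognito action in front of a Lambda target group might look like with boto3; all ARNs, the user pool domain, the path pattern and the rule priority are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",  # placeholder
    Priority=10,  # placeholder
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[
        {
            # Step 1: the ALB redirects unauthenticated users to the Cognito hosted UI.
            "Type": "authenticate-cognito",
            "Order": 1,
            "AuthenticateCognitoConfig": {
                "UserPoolArn": "arn:aws:cognito-idp:...:userpool/...",  # placeholder
                "UserPoolClientId": "example-client-id",                # placeholder
                "UserPoolDomain": "example-domain",                     # placeholder
                "OnUnauthenticatedRequest": "authenticate",
            },
        },
        {
            # Step 2: authenticated requests are forwarded to the Lambda target group.
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/my-lambda-tg/...",  # placeholder
        },
    ],
)
```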

AWS serverless architecture – Why should I use API Gateway?

Here is my use case:
Static react frontend hosted on s3
Python backend on Lambda conducting long-running data analysis
Postgres database on rds
Backend and frontend communicate exclusively with JSON
Occasionally backend creates and stores powerpoint files in s3 bucket and then serves them up by sending s3 link to frontend
Convince me that it is worthwhile going through all the headaches of setting up API Gateway to connect the frontend and backend rather than invoking Lambda directly from the frontend!
Especially given the 29-second timeout, which is not long enough for my app, meaning I need to implement asynchronous processing and add a whole other layer of AWS architecture (messaging, queuing and polling with SNS and SQS), which increases cost, time and the potential for problems. I understand there are some security concerns; is there no way to securely invoke a Lambda function?
You are talking about invoking a lambda directly from JavaScript running on a client machine.
I believe the only way to do that would be to embed the AWS SDK for JavaScript in your React frontend. See:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/browser-invoke-lambda-function-example.html
There are several security concerns with this, only some of which can be mitigated.
First off, you will need to hardcode AWS credentials into your frontend for the world to see. The access those credentials have can be limited in scope, but be very careful to get this right, or you'll be paying for someone's cryptomining operation.
Assuming you only want certain people to upload files to a storage service you are paying for, you will need some form of authentication and authorisation. API Gateway doesn't really do authentication, but it can do authorisation, albeit by connecting to other AWS services like Cognito or Lambda (custom authorizers). You'll have to build this into your backend Lambda yourself. Absolutely doable and probably not much more effort than using a custom authorizer from the API Gateway.
The main issue with connecting to Lambda direct is that Lambda has the ability to scale rapidly, which can be an issue if someone tries to hit you with a denial of service attack. Lambda is cheap, but running 1000 concurrent instances 24 hours a day is going to add up.
API Gateway lets you rate-limit per second/minute/hour/etc.; Lambda only lets you limit the number of concurrent instances at any given time. So if you were to set that limit at 1, an attacker could cause that 1 instance to run for 24 hours a day.
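To make the Lambda-side knob concrete, here is a minimal sketch of reserved concurrency with boto3; the function name is a placeholder, and note this caps how many copies run at once, not how long or how often they run.

```python
import boto3

# Cap the function at a fixed number of concurrent executions
# (the only Lambda-side limit discussed above).
boto3.client("lambda").put_function_concurrency(
    FunctionName="my-backend-function",  # placeholder
    ReservedConcurrentExecutions=1,      # the "limit of 1" scenario from the answer
)
# Per-second/minute rate limits, by contrast, live on the API Gateway side
# (stage throttling or usage plans), not on the Lambda function itself.
```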