Usually when I deploy my app to AWS Lambda it is still one self-contained app that resolves requests internally.
So no matter which request I send, the full Lambda starts and handles it.
I know that some people take full advantage of Lambda and deploy each "endpoint" as an individual Lambda function, like this:
How do you run this locally? It seems like you would have to run as many Node apps as you have endpoints.
Are there any frameworks/methods which help with such architecture?
Maybe something would be able to "extract" all endpoints from a node app and build lambdas for each of them?
Related
I'm quite new to AWS and not able to understand one thing. I've installed the AWS CLI, and now I'll start using AWS to code. But all the tutorials online show creating instances, deployments, etc. Is there any way I can run it locally? Because it's an enterprise network and I don't want to cause any unreasonable charges.
P.S. Is there anything similar to how we work with a Node application? I build the app using VS Code, run it locally, and test. Once all is good, I deploy to production.
You don't need the Serverless Framework or SAM to run it locally.
The function is normal Node.js code.
For the Lambda Function, you normally export the function handler, like:
import { Callback, Context } from 'aws-lambda';
export function handler(event: any, context: Context, callback: Callback): void {}
You can just import this file inside another file or test case and run it passing the event, the context, and the callback as parameters.
For the Context and Callback, you can use the @types/aws-lambda package to see their definitions.
For the event, you'll need to craft a JSON object in the format that the Lambda function accepts.
In @types/aws-lambda you will find all the event types used by the AWS platform.
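The same local-invocation pattern works in any Lambda runtime. Here is a sketch in Python (the handler, the route, and the event fields are illustrative, not taken from the answer above): you simply call the exported handler with a hand-crafted API-Gateway-style event, with no AWS services involved.

```python
import json

# A hypothetical Lambda handler (names are illustrative).
def handler(event, context):
    payload = json.loads(event["body"])
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {payload['name']}"}),
    }

# Locally, invoke the handler directly with a crafted
# API-Gateway-style event; context can be None or a simple stub.
fake_event = {
    "httpMethod": "POST",
    "path": "/greet",
    "body": json.dumps({"name": "Ada"}),
}
response = handler(fake_event, None)
```

The same file can be imported from a test case and exercised with as many crafted events as you need.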
You can use Serverless framework or AWS SAM (serverless application model) to create your Lambda functions. In that case, you have the option to run your Lambda function locally.
AWS SAM is an official AWS framework for the creation of serverless services (Lambda, DynamoDB, etc.), and the Serverless Framework is a third-party solution, although a pretty popular one.
There are probably other solutions that can help you run a Lambda function locally, but since you're talking about an enterprise setting, it would be expected that you use one of these two solutions to create your serverless infrastructure.
I'm writing because I'm quite a novice with AWS... I have only worked with EC2 instances for simple tasks before...
I am currently looking for an AWS service for receiving data via REST API calls to external services.
So far I have used EC2, where I deployed my (Python) library that made the calls and stored the data in S3.
What more efficient ways does AWS offer for this? Some SaaS?
I know there are still more details needed to choose a good service, but I would like to know where I can start looking.
Many thanks in advance :)
I make API requests using AWS Lambda. Specifically, I deploy code that makes the requests, writes the response to a file, and pushes the response object (file) to AWS S3.
You'll need a relative/absolute path to push the files wherever you want to ingest them. By default the Lambda environment's working directory is /var/task, but that path is read-only; write your files to /tmp/ instead.
You can automate the ingestion by setting a CloudWatch rule to trigger your function. Sometimes I chain Lambda functions when I need to loop requests with changing parameters instead of packing all requests into a single function. For example:
I leave the base request (parameterized) in one function and expose the function through an API Gateway endpoint.
I create a second function to call the base function once for each value I need by using the Event object (which is the JSON body of a regular request). This data will replace parameters within the base function.
I automate the second function.
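A sketch of that chaining pattern in Python. The function names and event shapes are assumptions; in the real setup the base function sits behind an API Gateway endpoint and the second function would call it via boto3's Lambda client or plain HTTP, so here the invoker is injected to keep the sketch runnable locally:

```python
import json

# Base function: parameterized request logic (exposed through
# API Gateway in the real setup; simplified to a pure function here).
def base_handler(event, context):
    params = event.get("queryStringParameters") or {}
    date = params.get("date", "unknown")
    return {"statusCode": 200, "body": json.dumps({"fetched_for": date})}

# Second function: calls the base function once per value, passing the
# changing parameter through the event. `invoke` is injected; in
# production it would wrap boto3.client("lambda").invoke(...) or an
# HTTP call to the API Gateway endpoint.
def fanout_handler(event, context, invoke=None):
    invoke = invoke or (lambda payload: base_handler(payload, None))
    results = []
    for date in event["dates"]:
        results.append(invoke({"queryStringParameters": {"date": date}}))
    return results

results = fanout_handler({"dates": ["20190814", "20190815"]}, None)
```

The second function is then the one you schedule, while the base function stays a single parameterized request.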
Tip:
Lambda sometimes reuses the same execution environment across invocations. So if you're continuously running these for testing, the environment may still contain files from past calls that you don't want; I usually start my functions with a clean-up step that clears the filesystem before making new requests.
Using Python 3.8 as the runtime, I use the requests module to send the request, write the file, and use boto3 to push the response object to an S3 bucket.
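Putting those pieces together, a hedged sketch of such a function. The bucket name, paths, and event shape are placeholders; the fetch callable stands in for requests.get(...).text and the S3 client for boto3.client("s3"), both injected so the sketch runs locally without credentials:

```python
import json
import os
import tempfile

def clean_dir(tmp_dir):
    # Clean-up step: a warm Lambda container may still hold files
    # from a previous invocation, so start from an empty directory.
    for name in os.listdir(tmp_dir):
        path = os.path.join(tmp_dir, name)
        if os.path.isfile(path):
            os.remove(path)

def handler(event, context, fetch, s3_client, tmp_dir="/tmp", bucket="my-bucket"):
    clean_dir(tmp_dir)
    body = fetch(event["url"])                      # requests.get(url).text in production
    local_path = os.path.join(tmp_dir, "response.json")
    with open(local_path, "w") as f:
        f.write(body)
    s3_client.upload_file(local_path, bucket, "responses/response.json")
    return {"statusCode": 200}

# Local dry run with stubs instead of the network and S3:
class FakeS3:
    def __init__(self):
        self.uploaded = []
    def upload_file(self, path, bucket, key):
        self.uploaded.append((path, bucket, key))

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "stale.json"), "w") as f:
    f.write("leftover from a previous warm invocation")

fake_s3 = FakeS3()
result = handler({"url": "https://example.com/data"}, None,
                 fetch=lambda url: json.dumps({"ok": True}),
                 s3_client=fake_s3, tmp_dir=tmp)
```

In the deployed function you would drop the injection defaults and call requests and boto3 directly.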
To invoke an external service you need some compute resource to run your client. Under compute resources in AWS we understand EC2, ECS (Docker containers), or Lambda (serverless; my favorite).
Since your code already ran on EC2, you know the resource needs outbound network access: for EC2/ECS that means a subnet with a route to the internet (a public subnet or a NAT gateway); a Lambda function not attached to a VPC has outbound internet access by default.
Given a REST API, outside of my AWS environment, which can be queried for json data:
https://someExternalApi.com/?date=20190814
How can I set up a serverless job in AWS to hit the external endpoint periodically and store the results in S3?
I know that I can launch an EC2 instance and just set up a cron job. But I am looking for a serverless solution, which seems more idiomatic.
Thank you in advance for your consideration and response.
Yes, you absolutely can do this, and probably in several different ways!
The pieces I would use would be:
CloudWatch Event using a cron-like schedule, which then triggers...
A Lambda function (with the right IAM permissions) that calls the API using, e.g., Python's requests library or an equivalent HTTP library, and then uses the AWS SDK to write the results to an S3 bucket of your choice; and...
An S3 bucket ready to receive!
This should be all you need to achieve what you want.
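For the date=YYYYMMDD query parameter, the scheduled event that CloudWatch passes to the Lambda carries an ISO-8601 `time` field, which the function can turn into the date string the external API expects. A small sketch (the URL is the example endpoint from the question; the event shape follows CloudWatch's scheduled-event format):

```python
from datetime import datetime

API_BASE = "https://someExternalApi.com/"  # example endpoint from the question

def build_url(event):
    # CloudWatch scheduled events include a `time` field such as
    # "2019-08-14T12:00:00Z"; format it as the yyyymmdd value the API wants.
    ts = datetime.strptime(event["time"], "%Y-%m-%dT%H:%M:%SZ")
    return f"{API_BASE}?date={ts.strftime('%Y%m%d')}"

url = build_url({"time": "2019-08-14T12:00:00Z"})
```

The handler would then fetch that URL and write the JSON response to the S3 bucket.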
I'm going to skip the implementation details, as they are largely outside the scope of your question. As such, I'm going to assume your function is already written and targets Node.js.
AWS can do this on its own, but to make it simpler, I'd recommend using Serverless. We're going to assume you're using this.
Assuming you're entirely new to serverless, the first thing you'll need to do is to create a handler:
serverless create --template "aws-nodejs" --path my-service
This creates a service based on the aws-nodejs template on the provided path. In there, you will find serverless.yml (the configuration for your function) and handler.js (the code itself).
Assuming your function is exported as crawlSomeExternalApi on the handler export (module.exports.crawlSomeExternalApi = () => {...}), the functions entry in your serverless.yml would look like this if you wanted to invoke it every 3 hours:
functions:
  crawl:
    handler: handler.crawlSomeExternalApi
    events:
      - schedule: rate(3 hours)
That's it! All you need now is to deploy it through serverless deploy -v
Under the hood, this creates a CloudWatch schedule entry for your function. An example can be found in the documentation.
The first thing you need is a Lambda function. Implement your logic (hitting the API and writing data to S3, or whatever) inside it. Next, you need a schedule to periodically trigger that function. A schedule expression can trigger an event periodically using either a cron expression or a rate expression. The Lambda function you created should be configured as the target of this CloudWatch rule.
The resulting flow will be, CloudWatch invokes the lambda function whenever there's a trigger (depending on your CloudWatch rule). Lambda then performs your logic.
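For an every-3-hours schedule, the two expression forms would look like this (the cron line is my equivalent of the rate expression, using AWS's six-field cron syntax):

```
rate(3 hours)
cron(0 */3 * * ? *)
```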
I am creating a serverless infrastructure with multiple functions. So far I have managed to publish a new function on AWS lambda using the aws-sam-cli.
One of the last functions is my firebase listener which is supposed to trigger certain aws lambda functions.
Initially, I thought to create a new function and add the listener as follows:
import firebase_admin
from firebase_admin import storage

cred = firebase_admin.credentials.Certificate(cert_json)  # cert_json: service-account credentials
app = firebase_admin.initialize_app(cred, config)         # config: database/storage settings
bucket = storage.bucket(app=app)

def listener(event):
    # react to changes under the watched node
    ...

node_to_listen = '/alerts/'
firebase_admin.db.reference(node_to_listen).listen(listener)
However, the issue is that AWS Lambda seems designed not to run functions continuously but only to be triggered by events. This is true as well for the Firebase listen() function, which means we get a chicken-and-egg problem: who triggers whom?
How can I therefore publish the firebase listener function and where? Should it be deployed somewhere else (e.g. Heroku?) in order to continuously listen and send the event requests to aws lambda? Or is there a way to connect those two?
There's no way to keep an active listener in any Functions-as-a-Service environment that I know of. The whole purpose of such environments is to run (short) workloads in response to events. You are actually trying to trigger an event by keeping a listener, which simply doesn't fit the FaaS model.
The two solutions I can see:
Implement your listener on an environment that keeps an active process.
Implement your listener on a FaaS environment that can itself listen to Firebase Realtime Database events. The only environment that can currently do so is Cloud Functions, which has the Firebase Realtime Database as an event source. You'd then trigger your Lambda function from Cloud Functions.
The second solution is the only one that really feels fully serverless, but it seems a bit weird to trigger Amazon Lambda from Google Cloud Functions.
There is work under way to allow interop between FaaS providers. But I'm not sure of the current status (link to spec/working group welcome), nor if your scenario would be covered in there.
To reduce the cost on instances, we were looking for options.
AWS lambda seems to be a good option for us.
It's still at the preliminary stage of searching for available alternatives.
My concern is that if we switch some of our applications to Lambda, we will be confined to AWS environments only, and in the future that might become a constraint in a scenario we can't predict at the moment.
So my question is: is there a way we can still use Lambda outside an AWS environment?
Thanks!
AWS Lambda functions are basically containers whose lifecycle is managed by Amazon.
When you use Lambda, there are several best practices you can follow to avoid full lock-in. One of the recommended practices is to separate the business logic from the Lambda handler. When you separate them, the handler works only as a controller that points to the executing code.
/handler.js
/lib
/create-items
/list-items
For example, if you design a web application API this way with Node.js in Lambda, you can later move the business logic to an Express server by moving the handler code into Express routes.
As you can see, moving an application from Lambda to another environment will still require additional effort; proper design can only reduce it.
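The same separation can be sketched in Python (module and function names here are illustrative, not from the answer above): the handler only translates the Lambda event and delegates, so the business-logic function could later sit behind a Flask or Express route unchanged.

```python
import json

# lib/create_items.py equivalent: pure business logic,
# no Lambda-specific types anywhere.
def create_item(name):
    return {"id": 1, "name": name}

# handler.py equivalent: a thin controller that parses the event,
# delegates, and translates the result back into a Lambda response.
def handler(event, context):
    payload = json.loads(event["body"])
    item = create_item(payload["name"])
    return {"statusCode": 201, "body": json.dumps(item)}

response = handler({"body": json.dumps({"name": "widget"})}, None)
```

Porting then means rewriting only the thin handler layer, not create_item itself.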
As far as I know,
it's an AWS Lambda function, so it is supposed to be deployed on AWS only, because AWS provides the needed runtime environment.
The AWS site lists a couple of options...
https://docs.aws.amazon.com/lambda/latest/dg/deploying-lambda-apps.html