AWS Multiple Lambda Functions in one zip

I'm using CloudFormation to create Lambda resources, and I have several Python scripts as well as one JS script in a lambda-resources folder. Would it be okay to pass the same file location for every Lambda function and just specify unique handlers? When my stack is created, it looks like only one Lambda function gets created.

Yes, this is definitely one way to accomplish what you're looking to do.
You'll need to create a zipped version of your lambda-resources folder and upload it directly to the Lambda service or to S3, then reference that same archive as the code location for each Lambda function, giving each one its own handler.
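As a sketch, assuming the zip has been uploaded to S3 (the bucket, key, and role names below are placeholders), the template would declare one AWS::Lambda::Function resource per handler, all pointing at the same archive:

Resources:
  # Both functions share one zip; only the Handler differs.
  FirstFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Handler: first_script.lambda_handler    # module.function inside the zip
      Role: !GetAtt LambdaExecutionRole.Arn   # role assumed to be defined elsewhere
      Code:
        S3Bucket: my-deploy-bucket
        S3Key: lambda-resources.zip
  SecondFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: nodejs18.x
      Handler: second_script.handler          # the js file's exported handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        S3Bucket: my-deploy-bucket
        S3Key: lambda-resources.zip

If only one function shows up after deployment, it is likely because only one AWS::Lambda::Function resource was declared; sharing the code location does not merge the functions into one.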

Related

mounting /tmp to a container based lambda from inside of calling lambda

I have an automation task that uses a lambda which calls two other lambdas. The first invoked lambda, lambda 1, fetches some data, processes it, and writes it to /tmp. The other lambda, lambda 2, is meant to read the file written to /tmp before uploading it to another location outside of AWS. Lambda 2 is based on a Docker image. Is it possible to mount /tmp from the runtime of the lambda calling lambda 2 so that lambda 2 can read the file written by lambda 1?
If this is not possible, is the only alternative to use either an EFS file system or to pass the data directly into lambda 2's payload as a string? These files are not too large, so I am thinking of passing the string into the payload directly as the alternative option.
Different Lambda functions don't share the same disk: each function (and each concurrent execution environment) gets its own private /tmp, so lambda 2 can never see a file written by lambda 1. The best way to share state in this use case would be with something like S3.
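A minimal sketch of that hand-off in Python (bucket and function names are placeholders, and fetch_and_process / upload_elsewhere are hypothetical helpers):

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-shared-bucket"  # placeholder: a bucket both functions can read/write

def lambda1_handler(event, context):
    # Lambda 1: process the data, then stage it in S3 instead of /tmp.
    data = fetch_and_process(event)  # hypothetical helper
    key = f"staging/{context.aws_request_id}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(data))
    # Hand lambda 2 the object's location, not the data itself.
    boto3.client("lambda").invoke(
        FunctionName="lambda2",  # placeholder function name
        InvocationType="Event",
        Payload=json.dumps({"bucket": BUCKET, "key": key}),
    )

def lambda2_handler(event, context):
    # Lambda 2: read the staged object; lambda 1's /tmp is not visible here.
    body = s3.get_object(Bucket=event["bucket"], Key=event["key"])["Body"].read()
    upload_elsewhere(body)  # hypothetical helper for the external upload

For small files, passing the data directly in the payload (as the question suggests) also works, as long as it stays under the invocation payload size limit (256 KB for asynchronous invocations).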

Can we import our locally created lambda function to AWS console?

I have been working on AWS Lambda. For some reason, I had to create a Lambda function locally. I just want to know: can we import a local Lambda function into the AWS Lambda console? If yes, please elaborate on how I can achieve this.
It is pretty easy:
1. Write your Lambda function with a lambda_handler.
2. Create a requirements.txt.
3. Install all the requirements into the same folder.
4. Package it as a zip file.
5. Go to AWS Lambda --> Create function --> fill in all the details --> Create function.
6. Under the Code section: Upload from --> .zip file --> select your file --> Upload --> Save.
7. Modify the handler to match your module and function name (see the sketch below).
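For reference, a minimal handler such a package might contain (all names here are illustrative):

# lambda_function.py
def lambda_handler(event, context):
    # With this file and function name, the handler setting is
    # "lambda_function.lambda_handler".
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}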
Yes you can. Assume you create the Lambda function using the Java runtime API (com.amazonaws.services.lambda.runtime.RequestHandler) and you create a fat JAR that contains all of the required dependencies. You can deploy the Lambda function (the JAR) using the AWS Management Console, as described in the AWS tutorial Creating an AWS Lambda function that detects images with Personal Protective Equipment.

Trigger AWS Lambda in Java for the newly uploaded file

I am working on a requirement where I want to trigger an AWS Lambda function written in Java when a file is uploaded to an S3 bucket. The condition is that the function should pick up the latest file in the bucket. Right now, I have a Lambda function which picks up a specified file (with an already-specified file name). But as per the requirement, the file name can be anything (e.g. web-log-). Is there any way to do that?
Since Lambda functions have access to the event object, can I use it to find out the most recently uploaded file?
You could check out the AWS Lambda S3 tutorials, which should show how the uploaded object is passed in as event data. The example code contains a line which should point you in the right direction:
event.Records[0].s3.object.key
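For instance, a Python handler would read it like this (a sketch; the Java RequestHandler receives the same structure through its S3Event parameter):

import urllib.parse

def lambda_handler(event, context):
    # Each notification record names the bucket and the newly created object,
    # so the function always acts on whichever file was just uploaded,
    # regardless of its name.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # Keys arrive URL-encoded (e.g. spaces become '+'), so decode before use.
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    print(f"New object: s3://{bucket}/{key}")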

How to Specify Additional Parameter to AWS Lambda Function Triggered by S3

I am creating an AWS Lambda function that is triggered for each PUT on an S3 bucket. A separate Java application creates the S3 bucket, sets up the trigger to the Lambda on PUT, and PUTs a set of files into the bucket. The Lambda function executes a compiled binary, passing it a script which acts on the new S3 object.
All of this is working fine.
My problem is that I have a set of close to 100 different scripts, and am regularly developing new scripts. The ZIP for the Lambda contains all the scripts. Scripts correspond to different types of files, so when I run the Java application, I want to specify WHICH script in the Lambda function to use. I'm trying to avoid having to create a new Lambda for each script, since each one effectively does the exact same thing but for the name of the script.
When you INVOKE a Lambda, you can put parameters into the context. But my Lambda is triggered, so most of what I react to is in the event. I can't figure out how to communicate this simple parameter to the Lambda efficiently as I set up the S3 bucket and the event trigger.
How can I do this?
You can't have S3 post extra parameters to your Lambda function. What you can do is create a DynamoDB table that maps S3 buckets to scripts, or S3 prefixes to scripts, or something of the sort. Then your Lambda function can look up that mapping before executing your script.
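A sketch of that lookup in Python (the table name "script-mappings" and the run_script helper are illustrative inventions):

import boto3

table = boto3.resource("dynamodb").Table("script-mappings")  # hypothetical table

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    key = record["object"]["key"]
    prefix = key.split("/")[0]  # e.g. map on the object's top-level prefix
    # Find which script is registered for objects under this prefix.
    item = table.get_item(Key={"prefix": prefix}).get("Item")
    if item:
        run_script(item["script"], record)  # hypothetical helper that runs the binary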
It is not possible to specify custom parameters that are passed to the AWS Lambda function. The function is triggered by Amazon S3, which passes only its standard event information (bucket, key, and so on).
However, when creating the object in Amazon S3 you could attach object metadata. The Lambda function could then retrieve the metadata after it has been notified of the event.
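A sketch of the metadata approach in Python (the metadata key "script" and the run_script helper are illustrative):

import boto3

s3 = boto3.client("s3")

# Uploader side (the Java application would do the equivalent), attaching the
# script name when the object is PUT:
#   s3.put_object(Bucket=bucket, Key=key, Body=data,
#                 Metadata={"script": "parse-web-logs.sh"})

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    # The S3 event itself does not carry user metadata, so fetch it explicitly.
    head = s3.head_object(Bucket=bucket, Key=key)
    script = head["Metadata"].get("script")
    if script:
        run_script(script, bucket, key)  # hypothetical helper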
An alternate approach would be to subscribe several Lambda functions to the S3 bucket. The functions could look at the event and decide whether or not to process the event.
For example, if you had pictures and text files being stored, you could create one Lambda function for pictures and another for text files. Both functions would be triggered upon object creation. Each function would look at the file extension (or, if necessary, look within the object itself). If it is a filetype that it handles, it can process the object; if not, the function can simply exit, as in the sketch below. This type of check is very quick, and since Lambda only charges per 100 ms, the cost would be close to irrelevant.
The benefit of this approach is that you could keep your libraries separate from each other, rather than making one large Lambda package.
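The early-exit check each function would start with might look like this (the extension set is illustrative):

import os

HANDLED_EXTENSIONS = {".jpg", ".png"}  # this function only handles pictures

def lambda_handler(event, context):
    key = event["Records"][0]["s3"]["object"]["key"]
    if os.path.splitext(key)[1].lower() not in HANDLED_EXTENSIONS:
        return  # not our filetype; exit and let the other function handle it
    process_picture(key)  # hypothetical processing helper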

Where libraries are stored for AWS Lambda Functions?

I'm a newbie with AWS Lambda functions. I used an AWS CLI script to create a Lambda function in Node.js. This script has a config file called config.json. After creating the function, I'm able to see the code in the Lambda AWS console, and here comes my doubt. The code has this line:
var config = require('./config.json');
So where is this "./config.json" file actually stored? Will I be able to edit the contents of config.json after deployment of the Lambda function?
Thanks in advance.
So, where is this ./config.json file actually stored?
It should be stored in the same directory as your Lambda handler function. They should be bundled in a zip file and deployed to AWS. If you didn't deploy it that way then that file doesn't currently exist.
If your Lambda function consists of multiple files you will have to bundle your files and deploy it to AWS as a zip file.
You cannot edit the source of external libraries/files via the AWS Lambda web console. You can only edit the source of the Lambda function handler via the web console.
Your files are placed into the directory specified in the environment variable LAMBDA_TASK_ROOT. You can read this in Node.js as process.env.LAMBDA_TASK_ROOT.
The code you deploy, including the config.json file, is read-only, but if you do wish to modify files on the server, you may do so under /tmp. Keep in mind that those changes will only be valid for that single container, for its lifecycle (4m30s to 4hrs). Lambda will auto-scale up and down between 0 and 100 containers, by default.
Global variables are also retained across invocations, so if you read config.json into a global variable, then modify that variable, those changes will persist throughout the lifecycle of the underlying container(s). This can be useful for caching information across invocations, for instance.
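The question concerns Node.js, but the same ideas sketch out identically in any runtime; here is the equivalent in Python (the config file name matches the question, everything else is illustrative):

import json
import os

# Module-level code runs once per container (cold start). The deployment
# package lives under LAMBDA_TASK_ROOT and is read-only.
TASK_ROOT = os.environ.get("LAMBDA_TASK_ROOT", ".")
with open(os.path.join(TASK_ROOT, "config.json")) as f:
    config = json.load(f)  # global: cached for the container's lifetime

def lambda_handler(event, context):
    # Mutating the global persists across invocations on this container only.
    config["last_request"] = context.aws_request_id
    # /tmp is the only writable path if a file must be written.
    with open("/tmp/scratch.txt", "w") as f:
        f.write("per-container temporary data")
    return config["last_request"]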