I'm a newbie to AWS Lambda. I used an AWS CLI script to create a Lambda function in Node.js. This script has a config file called config.json. After creating the function, I can see the code in the AWS Lambda console, and here comes my doubt. The code has this line:
var config = require('./config.json');
So, where this "./config.json" file is actually stored. Could I be able to edit the contents of config.json after deployment of lambda function?
Thanks in advance.
So, where is this ./config.json file actually stored?
It should be stored in the same directory as your Lambda handler function. They should be bundled into a zip file and deployed to AWS together. If you didn't deploy it that way, then that file doesn't currently exist.
If your Lambda function consists of multiple files, you will have to bundle them and deploy them to AWS as a single zip file.
You cannot edit the source of external libraries/files via the AWS Lambda web console. You can only edit the source of the Lambda function handler via the web console.
Your files are placed into the directory specified in the environment variable LAMBDA_TASK_ROOT. You can read this from Node.js as process.env.LAMBDA_TASK_ROOT.
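For example, a minimal Node.js sketch (the handler shape is just illustrative) that resolves config.json against that directory:

const path = require('path');
// LAMBDA_TASK_ROOT is the directory your deployment package was extracted into
const taskRoot = process.env.LAMBDA_TASK_ROOT;
const config = require(path.join(taskRoot, 'config.json'));
exports.handler = async (event) => {
    console.log('Running from', taskRoot, 'with config', config);
    return config;
};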
The code you deploy, including the config.json file, is read-only, but if you do wish to modify files on the server, you may do so underneath /tmp. Mind, those changes will only be valid for that single container, for its lifecycle (4m30s - 4hrs). Lambda will auto-scale up and down between 0 and 100 containers, by default.
Global variables are also retained across invocations, so if you read config.json into a global variable, then modify that variable, those changes will persist throughout the lifecycle of the underlying container(s). This can be useful for caching information across invocations, for instance.
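A rough Node.js sketch of both points (the file names here are assumptions for illustration):

const fs = require('fs');
// Module-level state survives across invocations on the same (warm) container
let cachedConfig = null;
exports.handler = async (event) => {
    if (!cachedConfig) {
        // The deployed package is read-only, so read config.json once and cache it
        cachedConfig = JSON.parse(fs.readFileSync('./config.json', 'utf8'));
    }
    // In-memory changes persist only for this container's lifetime
    cachedConfig.lastInvocation = new Date().toISOString();
    // /tmp is the only writable location inside the container
    fs.writeFileSync('/tmp/config-snapshot.json', JSON.stringify(cachedConfig));
    return cachedConfig;
};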
I was wondering what's the best way (if it is possible at all) to access local files from a Lambda function.
Basically I want to get a .txt file that I have at C:/Users/User/Desktop (or any directory) and put that file inside an S3 bucket.
I have been able to put information into a bucket from a Lambda function (hardcoded info), but I'm struggling to get info from the host machine to AWS. From what I have seen, the way to go is to use AWS IoT Greengrass, but after hours of trying to make it work, things are not looking good.
Is AWS IoT Greengrass the only option, or is there a simpler way of accessing local files?
It is not at all possible for a Lambda function to access local files directly; something running on your local machine is going to need to serve those files TO Lambda, since Lambda can't retrieve them.
Without knowing anything about the problem you are trying to solve, I would certainly start with the thought of pushing those files to S3, and then letting Lambda do its thing.
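For instance, a small Node.js script run on the local machine itself could push the file up (the bucket name and paths below are placeholders), and an S3 event could then trigger the Lambda function:

// Runs on your local machine, not in Lambda
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
s3.upload({
    Bucket: 'my-target-bucket',   // placeholder bucket name
    Key: 'uploads/myfile.txt',    // placeholder key
    Body: fs.createReadStream('C:/Users/User/Desktop/myfile.txt')
}).promise()
    .then(() => console.log('Uploaded; Lambda can take it from here'))
    .catch(console.error);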
Objective:
Whenever an object is stored in the bucket, trigger a batch job (AWS Batch) and pass the uploaded file's URL as an environment variable
Situation:
I currently have everything set up: I've got the S3 bucket, with CloudWatch triggering Batch jobs, but I am unable to get the full file URL or to set environment variables.
I have followed the following tutorial: https://docs.aws.amazon.com/batch/latest/userguide/batch-cwe-target.html "To create an AWS Batch target that uses the input transformer".
The job is created and processed in AWS Batch, and under the job details, I can see the parameters received are:
S3bucket: mybucket
S3key: view-0001/custom/2019-08-07T09:40:04.989384.json
But the environment variables have not changed, and the file URL does not contain all the other parameters such as access and expiration tokens.
I have also not found any information about what other variables can be used in the input transformer. If anyone has a link to a manual, it would be welcome.
Also, in the AWS CLI documentation, it is possible to set the environment variables when submitting a job, so I guess it should be possible here as well? https://docs.aws.amazon.com/cli/latest/reference/batch/submit-job.html
So the question is: how do I submit a job with the file URL as an environment variable?
You could accomplish this by triggering a Lambda function off the bucket, generating a pre-signed URL in that Lambda function, and starting a Batch job from it.
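A hedged Node.js sketch of that approach (the job queue, job definition, and environment variable names are assumptions, not values from your setup):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const batch = new AWS.Batch();
exports.handler = async (event) => {
    // Bucket and key from the S3 event that triggered this function
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    // Pre-signed URL, valid for one hour, carrying the auth/expiration query string
    const fileUrl = s3.getSignedUrl('getObject', { Bucket: bucket, Key: key, Expires: 3600 });
    // Pass the URL to the Batch job as an environment variable
    return batch.submitJob({
        jobName: 'process-upload-' + Date.now(),
        jobQueue: 'my-job-queue',               // placeholder
        jobDefinition: 'my-job-definition',     // placeholder
        containerOverrides: {
            environment: [{ name: 'FILE_URL', value: fileUrl }]
        }
    }).promise();
};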
However, a better approach would be to simply access the file within the Batch job using the bucket and key. You could use the AWS SDK for your language or simply use the awscli. For example, you could download the file:
aws s3 cp s3://$BUCKET/$KEY /tmp/file.json
On the other hand, if you need a pre-signed URL outside of the Batch job, you could generate one with the AWS SDK or awscli:
aws s3 presign s3://$BUCKET/$KEY
With either of these approaches to accessing the file within the Batch job, you will need to configure the instance role of your Batch compute environment with IAM access to your S3 bucket.
I have a key that is shared among different services, and it is currently stored in an S3 bucket inside a text file.
My goal is to read that value and pass it to my Lambda service through CloudFormation.
For an EC2 instance it was easy, because I could download the file and read it, which was easily achievable by putting the scripts inside my CloudFormation JSON file. But I have no idea how to do it for my Lambdas!
I tried to put my credentials in the GitLab pipeline, but because of the access permissions it doesn't let GitLab pass them on, so my best and least expensive option right now is to do it in CloudFormation.
The easiest method would be to have the Lambda function read the information from Amazon S3.
The only way to get CloudFormation to "read" some information from Amazon S3 would be to create a Custom Resource, which involves writing an AWS Lambda function. However, since you already have a Lambda function, it would be easier to simply have that function read the object.
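A minimal Node.js sketch of that read, with the bucket and key as placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
let sharedKey = null; // cached across warm invocations
exports.handler = async (event) => {
    if (!sharedKey) {
        const obj = await s3.getObject({
            Bucket: 'my-config-bucket',  // placeholder
            Key: 'shared-key.txt'        // placeholder
        }).promise();
        sharedKey = obj.Body.toString('utf8').trim();
    }
    // ... use sharedKey here ...
};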
It's worth mentioning that, rather than storing such information in Amazon S3, you could use the AWS Systems Manager Parameter Store, which is a great place to store configuration information. Your various applications can then use Parameter Store to store and retrieve the configuration. CloudFormation can also access the Parameter Store.
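For example, a Node.js Lambda could fetch it at runtime like this (the parameter name is an assumption):

const AWS = require('aws-sdk');
const ssm = new AWS.SSM();
exports.handler = async (event) => {
    const result = await ssm.getParameter({
        Name: '/myapp/shared-key',   // placeholder parameter name
        WithDecryption: true         // needed for SecureString parameters
    }).promise();
    const sharedKey = result.Parameter.Value;
    // ... use sharedKey here ...
};

On the CloudFormation side, a template parameter of type AWS::SSM::Parameter::Value<String> can pull the same value in at deploy time.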
I'm using CloudFormation to create Lambda resources, and I have several Python scripts as well as one JS script in a lambda-resources folder. Would it be okay to pass the same file location for every Lambda function and just specify unique handlers? When my Lambda functions are created, it looks like only one Lambda function gets created.
Yes, this is definitely one way to accomplish what you're looking to do.
You'll need to create a zipped version of your lambda-resources folder and upload it via the Lambda service or even to S3, then reference it as the file location for each Lambda function.
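For example, a single index.js in that shared bundle could export one handler per function, and each Lambda function would then point its Handler at index.createUser, index.getUser, and so on (the names are made up for illustration):

// index.js - one shared deployment package, several entry points
exports.createUser = async (event) => {
    // ... create-user endpoint logic ...
    return { statusCode: 201, body: 'created' };
};
exports.getUser = async (event) => {
    // ... get-user endpoint logic ...
    return { statusCode: 200, body: 'user' };
};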
The benefits of this feature are not clear to me (I didn't find any good documentation):
Is it just faster in the case where you reuse the same zip for many Lambda functions, because you upload it only once and just give the S3 link URL to each Lambda function?
If you use an S3 link, will all your Lambda functions be updated with the latest code automatically when you re-upload the zip file? In other words, is the zip file on S3 a "reference" that is resolved on each invocation of a Lambda function?
Thank you.
EDIT:
I have been asked "Why do you want the same code for multiple Lambda functions anyway?"
Because I use AWS Lambda with AWS API Gateway, I have one project containing all my handlers, which are the actual "endpoints" of my RESTful API.
EDIT #2:
I confirm that uploading a modified version of the zip file to S3 doesn't change the existing Lambda functions' code.
If an AWS guy reads this message, it would be great to have a kind of batch update feature that updates a set of selected Lambda functions from one zip file on S3 in one click (or even an "automatic update" feature that detects when the file has been updated ;-))
Let's say you have 50 handlers in one project and you modify something global that impacts all of them; currently you have to go through all your Lambda functions and update the zip file manually...
The code is imported from the zip to Lambda. It is exactly the same as uploading the zip file through the Lambda console or API. However, if your Lambda function is big (they say >10MB), they recommend uploading to S3 and then using the S3 import functionality because that is more stable than directly uploading from the Lambda page. Other than that, there is no benefit.
So for question 1: no. Why do you want the same code for multiple Lambda functions anyway?
Question 2: If you overwrite the zip you will not update the Lambda function code.
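To pick up a re-uploaded zip, you have to explicitly point the function at it again, e.g. via UpdateFunctionCode; a minimal Node.js sketch (function, bucket, and key names below are placeholders):

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();
lambda.updateFunctionCode({
    FunctionName: 'my-function',     // placeholder
    S3Bucket: 'my-deploy-bucket',    // placeholder
    S3Key: 'yourzipfile.zip'
}).promise()
    .then(res => console.log('Deployed version', res.Version))
    .catch(console.error);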
To add to other people's use cases, having the ability to update a Lambda function from S3 is extremely useful within an automated deployment / CI process.
The instructions under New Deployment Options for AWS Lambda include a simple Lambda function that can be used to copy a ZIP file from S3 to Lambda itself, as well as instructions for triggering its execution when a new file is uploaded.
As an example of how easy this can make development and deployment, my current workflow is:
I update my Node lambda application on my local machine, and git commit it to a remote repository.
A Jenkins instance picks up the commit, pulls down the appropriate files, adds them into a ZIP file and uploads this to an S3 bucket.
The LambdaDeployment function then automatically deploys this new version for me, without me needing to even leave my development environment (a rough sketch of such a function is included below).
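A rough Node.js sketch of such a deployment function (the target function name is a placeholder; the actual LambdaDeployment function from the article linked above is more complete):

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();
// Triggered by the S3 put event for the freshly uploaded zip
exports.handler = async (event) => {
    const record = event.Records[0];
    return lambda.updateFunctionCode({
        FunctionName: 'my-app-function',  // placeholder: the function to redeploy
        S3Bucket: record.s3.bucket.name,
        S3Key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '))
    }).promise();
};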
To answer what I think is the essence of your question, AWS allows you to use S3 as the origin for your Lambda zip file because sometimes uploading large files via your browser can time out. Also, storing your code on S3 allows you to store it centrally, rather than on your computer, and I'm sure there is a CodeCommit tie-in there as well.
Using the S3 method of uploading your code to Lambda also allows you to upload larger files (AWS has a 10MB limit when uploading via web browser).
#!/bin/bash
cd /your/workspace
# Zip up the new code, freshening the archive and excluding git metadata, binaries and old zips
zip -FSr yourzipfile.zip . -x *.git* *bin/\* *.zip
# Update the Lambda function code directly from the local zip file
aws lambda update-function-code --function-name arn:aws:lambda:us-west-2:YOURID:function:YOURFUNCTIONNAME --zip-file file://yourzipfile.zip
# Push the new zip to the S3 bucket used as the CloudFormation lambda:codeuri source (bucket name is a placeholder)
aws s3 cp yourzipfile.zip s3://YOURBUCKET/yourzipfile.zip
This depends on the AWS CLI being installed and an AWS profile being set up:
aws --profile yourProfileName configure