AWS Lambda functions have an option to load their code from a file in S3. I have a successfully running Lambda function whose code is taken as a zip file from an S3 bucket. However, any time you want to update this code you need to either edit it inline in the Lambda console, or upload a new zip file to S3 and then go into the Lambda function and manually re-import the file from S3. Is there any way to link the Lambda function to a file in S3 so that it automatically updates its function code when you update the code file (or zip file) in S3?
Lambda doesn't actually reference the S3 code when it runs, only when it sets up the function: it effectively takes a copy of the code in your bucket and then runs that copy. So while there isn't a direct way to get the Lambda function to automatically run the latest code in your bucket, you can write a small script that updates the function code using SDK methods. I don't know which language you prefer, but for example you could write a script that calls the AWS CLI's update-function-code command. See https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html
Updates a Lambda function's code.
The function's code is locked when you publish a version. You can't
modify the code of a published version, only the unpublished version.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
Synopsis
update-function-code
  --function-name <value>
  [--zip-file <value>]
  [--s3-bucket <value>]
  [--s3-key <value>]
  [--s3-object-version <value>]
  [--publish | --no-publish]
  [--dry-run | --no-dry-run]
  [--revision-id <value>]
  [--cli-input-json <value>]
  [--generate-cli-skeleton <value>]
You could do similar things with Python or PowerShell as well, for example using boto3's update_function_code:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.update_function_code
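A minimal sketch with boto3 (the function name, bucket, and key below are placeholders, not values from your setup):

import boto3

client = boto3.client("lambda")

# Point the function at the freshly uploaded zip in S3.
client.update_function_code(
    FunctionName="my-function",   # placeholder: your Lambda function's name
    S3Bucket="my-bucket",         # placeholder: bucket holding the new zip
    S3Key="code.zip",             # placeholder: key of the updated zip file
    Publish=True,                 # optionally publish a new version as well
)

You could run this from a CI job, a scheduled task, or anything else that fires after the zip in S3 changes.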
You can set up an AWS CodeDeploy pipeline to get your code built and deployed on every commit to your code repository (GitHub, Bitbucket, etc.).
CodeDeploy is a deployment service that automates application
deployments to Amazon EC2 instances, on-premises instances, serverless
Lambda functions, or Amazon ECS services.
Also, I wanted to add: if you want a more unattended route for deploying your updated code to the Lambda, use this flow in your CodePipeline:
Source -> CodeBuild (npm install, zipping, etc.) -> S3 upload (sourcecode.zip in an S3 bucket) -> CodeBuild (another build just for aws lambda update-function-code)
Make sure the role for the last stage has both the s3:GetObject and lambda:UpdateFunctionCode permissions attached to it.
I was able to follow this example [1] and let my EC2 instance read from S3.
In order to write to the same bucket, I thought changing line 57 [2] from grant_read() to grant_read_write() should work.
...
# Userdata executes script from S3
instance.user_data.add_execute_file_command(
    file_path=local_path
)
# asset.grant_read(instance.role)
asset.grant_read_write(instance.role)
...
Yet the documented [3] function cannot be accessed, according to the error message.
>> 57: Pyright: Cannot access member "grant_read_write" for type "Asset"
What am I missing?
[1] https://github.com/aws-samples/aws-cdk-examples/tree/master/python/ec2/instance
[2] https://github.com/aws-samples/aws-cdk-examples/blob/master/python/ec2/instance/app.py#L57
[3] https://docs.aws.amazon.com/cdk/latest/guide/permissions.html#permissions_grants
This is the documentation for Asset:
An asset represents a local file or directory, which is automatically
uploaded to S3 and then can be referenced within a CDK application.
The method grant_read_write isn't provided, as it is pointless. The documentation you've linked doesn't apply here.
An asset is just a zip file that will be uploaded to the bootstrapped CDK S3 bucket and then referenced by CloudFormation when deploying.
If you have a script you want to put into an S3 bucket, you don't want to use any form of asset, because that is a zip file. You would be better off using a boto3 call to upload it once the bucket already exists, or making it part of a CodePipeline that creates the bucket with CDK and then uploads the script in the next stage.
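A rough sketch of that boto3 upload (the file name, bucket, and key are placeholders I'm assuming, not anything from your stack):

import boto3

s3 = boto3.client("s3")

# Upload the script to an already-existing bucket so the instance can fetch it.
s3.upload_file(
    Filename="configure.sh",        # placeholder: your local script
    Bucket="my-existing-bucket",    # placeholder: bucket created outside the asset system
    Key="scripts/configure.sh",     # placeholder: where the instance will read it from
)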
The grant_read_write method is for aws_cdk.aws_s3.Bucket constructs in this case.
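For instance, a minimal sketch (assuming it runs inside the same stack as the linked example, so self and instance already exist):

from aws_cdk import aws_s3 as s3

# A Bucket construct does expose grant_read_write, unlike an Asset.
bucket = s3.Bucket(self, "ScriptBucket")
bucket.grant_read_write(instance.role)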
Details - I have a CircleCI job that makes a zip of my Lambda code and uploads it to S3 (we just keep updating the version of the same S3 object, e.g. code.zip; we don't change the name).
Now I have AWS CDK code where I define the body of my Lambda and make use of the S3 zip file via Code.fromBucket: https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-lambda.Code.html#static-fromwbrbucketbucket-key-objectversion
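Roughly, that definition looks like this (a simplified sketch in Python; the bucket name, runtime, and handler are placeholders, not my real values):

from aws_cdk import aws_lambda as _lambda, aws_s3 as s3

code_bucket = s3.Bucket.from_bucket_name(self, "CodeBucket", "my-artifact-bucket")

fn = _lambda.Function(
    self, "MyFunction",
    runtime=_lambda.Runtime.PYTHON_3_8,   # placeholder runtime
    handler="app.handler",                # placeholder handler
    # Code is taken from code.zip in S3; object_version can pin a specific version.
    code=_lambda.Code.from_bucket(code_bucket, "code.zip"),
)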
Issue - Now I want an automated deployment, so that whenever a new version of code.zip is uploaded to S3, all my Lambdas that use it are automatically updated with the latest code.
Please suggest!
I can think of 2 solutions for this.
Have a step after you upload the latest code to S3 that updates your Lambda function, like below:
aws lambda update-function-code \
    --function-name your_function_name \
    --s3-bucket your_bucket_name \
    --s3-key your_code.zip
Create another Lambda function and set it up with an S3 ObjectCreated event (or whatever event suits you); you can even filter by .zip.
In that Lambda function, which will be triggered by the S3 upload, you can again use the same AWS CLI command (or the equivalent SDK call) to update your target Lambda function.
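A minimal sketch of that trigger function in Python (TARGET_FUNCTION_NAME is an assumed environment variable; adjust the names to your setup):

import os
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # Fired by the S3 ObjectCreated event; each record describes one uploaded object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if not key.endswith(".zip"):
            continue  # ignore anything that isn't a code package
        # Point the target function at the freshly uploaded zip.
        lambda_client.update_function_code(
            FunctionName=os.environ["TARGET_FUNCTION_NAME"],
            S3Bucket=bucket,
            S3Key=key,
        )

The role of this trigger function needs lambda:UpdateFunctionCode (and S3 read) permissions, similar to the CLI option above.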
I am sure there are multiple ways AWS Lambda can be versioned/published, but I am trying to do it in a particular way and need some help.
I have a dotnet core Lambda project called "MyTTL".
Now in the GitLab YAML script I have code which pushes the Lambda package to an S3 bucket, like below (pseudo-script).
GITLAB SCRIPT
variables:
  OUTPUT_FILE_PATH: '$CI_PROJECT_DIR/bin/Release/netcoreapp3.1/MyTTL.zip'

script:
  - dotnet lambda package
  - aws s3 cp $OUTPUT_FILE_PATH s3://$S3_BUCKET/
Now the above script works fine and uploads MyTTL.zip to the S3 bucket.
Now in Terraform I have the below script to reference that Lambda:
resource "aws_lambda_function" "lambda" {
s3_bucket = "My S3 BUCKET"
s3_key = "protected/sample/${var.artifact_version}.zip"
source_code_hash = "${filebase64sha256("${var.artifact_version}.zip")}"
}
As you can see I want to pass a version (artifact_version) to this module, so that I can tell which Lambda version a particular client is running on.
Question - I am not sure how to make sure that every dotnet lambda package run creates a new zip version, so that old Terraform configurations can still point to the old version of the Lambda code and I can move Terraform to a new Lambda version for different clients at will.
Manual (lame) solution - I make the code change in my dotnet core project, let the GitLab script publish it to S3, then download the zip, rename it to the version I want, upload it to S3 again, and reference it in Terraform later.
variables:
  OUTPUT_FILE_PATH: '$CI_PROJECT_DIR/bin/Release/netcoreapp3.1/MyTTL.zip'

script:
  - dotnet lambda package
  - aws s3 cp $OUTPUT_FILE_PATH s3://$S3_BUCKET/MyTTL${CI_COMMIT_SHORT_SHA}.zip
Now you have different versions of your Lambda project, named with the hash of your commit, so you no longer need to download and rename anything; just change the hash in the name. That hash will always be unique for every commit.
I am using Terraform to create all the infra (CodePipeline, Lambda, buckets) on AWS.
Currently, I've created a pipeline that builds the source zip file and puts it on an S3 bucket, but the Lambda still keeps using the older source, so I update the URL manually in the AWS console and it works.
Now I want to automate the flow, but the available solutions are:
AWS SAM + CFT
CodeBuild stage to update the source using the AWS CLI
Create a Lambda that updates the source
CodeDeploy + AWS SAM + CFT
I am not willing to use CFT at all, since all of our code is in Terraform and CFT would require me to create new Lambdas instead of using the old ones.
Is there any other simpler way to update the Lambda source through CodePipeline?
The preferred way to deploy a Lambda via CodePipeline is using a CloudFormation deploy action [1]. Since you are not looking to use CloudFormation, the next option could be to run your terraform plan/apply commands from within a CodeBuild job that is part of the pipeline. You will need to give the CodeBuild role the permissions required for resource creation (or export the credentials in environment variables for Terraform to use via the method in [2]) and install the Terraform binary in the install phase of the buildspec.
Ref:
[1] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
[2] How to retrieve Secret Manager data in buildspec.yaml
I was following this guide to deploy the AWS Serverless Image Handler. I used the given template, and I was able to successfully deploy it.
However, I want to customize the code slightly for my specific needs. I tried two different approaches, but neither of them worked.
Approach #1
I downloaded the .zip source code from the Lambda console, unarchived it, made the changes, and deployed it via S3 (because it was over 50 MB, I couldn't upload it directly from my machine).
However, this resulted in the following error: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-central-1'
Approach #2
Then I tried following the guide from their site: Customizing Lambda Thumbor Package
The first problem is that they recommend Amazon Linux for the listed operations, which I don't have, and the instructions for installing it are rather complex.
At the end of the process, they say to use the command aws s3 cp . s3://mybucket-[region_name]/serverless-image-handler/v1.0/ --recursive --exclude "*" --include "*.zip". However, this results in the error upload failed: Unable to locate credentials.
To fix that, I tried running aws configure, but then I got the following error: ./serverless-image-handler-ui.zip to s3://my-bucket-eu-central-1/serverless-image-handler/v10.0/serverless-image-handler-ui.zip An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist. I suspect it's confused by the name of my bucket, which uses the same - separator that separates the bucket name from the region in the command aws s3 cp . s3://mybucket-[region_name]/serverless-image-handler/v1.0/ ...
I just want a simple way to upload my customized code. How do I do that?