I am using Terraform to create all the infrastructure (CodePipeline, Lambda, buckets) on AWS.
Currently, I've created a pipeline that builds the source zip file and puts it in an S3 bucket, but the Lambda keeps using the older source. So, I update the URL manually in the AWS console and it works.
Now I want to automate the flow, but the available solutions are:
AWS SAM + CFT
CodeBuild stage to update the source using the AWS CLI
Create a Lambda that updates the source
CodeDeploy + AWS SAM + CFT
I am not willing to use CFT at all, since all of our code is in Terraform and CFT would require me to create new Lambdas instead of using the existing ones.
Is there any other, simpler way to update the Lambda source through CodePipeline?
The preferred way to deploy a Lambda via CodePipeline is using a CloudFormation deploy action [1]. Since you are not looking to use CloudFormation, the next option could be to run your terraform plan/apply commands from within a CodeBuild job that is part of the pipeline. You will need to give the CodeBuild role the permissions required for resource creation (or export the credentials in environment variables for Terraform to use, via the method in [2]) and install the Terraform binary in the install phase of the buildspec; a minimal sketch follows the references.
Ref:
[1] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
[2] How to retrieve Secrets Manager data in buildspec.yaml
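A minimal buildspec sketch of that approach (the Terraform version, flags, and project layout here are assumptions, not a definitive setup):

version: 0.2
phases:
  install:
    commands:
      # Install the Terraform binary (pin whichever version you actually use).
      - curl -sLo terraform.zip https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
      - unzip terraform.zip -d /usr/local/bin
  build:
    commands:
      # The CodeBuild service role supplies the AWS credentials here,
      # so no explicit `aws configure` step is needed.
      - terraform init -input=false
      - terraform plan -input=false -out=tfplan
      - terraform apply -input=false tfplan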
Related
I'm fairly new to AWS and the CDK, but I have been working on a project that deploys to AWS via a pipeline, using YAML for the cf-template and later a Node script to run cdk deploy on a set of stack files written in TypeScript.
In the cf-template YAML where the cdk-toolkit is defined, there is a bucket resource with name X. After the toolkit has been created/updated in the pipeline, the cdk deploy command is executed to deploy some stacks and workers, which should live in bucket X. They aren't automatically being uploaded there, however, so I've tried using the --parameters flag to specify X as below:
cdk deploy --toolkit-stack-name my-toolkit --parameters uploadBucketName=X --ci --require-approval never
When I do this I get the following error in the pipeline for the first stack that gets deployed:
Parameters: [uploadBucketName] do not exist in the template
I assumed this meant that the MyFirstStack.ts file was missing a parameter definition, as suggested by the AWS documentation, but it's not clear to me why this is necessary, or how it's supposed to be used when it's the cdk deploy command that provides the value for this parameter. I tried adding it per the docs:
import { CfnParameter } from "@aws-cdk/core"; // "aws-cdk-lib" on CDK v2

const uploadBucketName = new CfnParameter(this, "uploadBucketName", {
  type: "String",
  description: "The name of the Amazon S3 bucket where uploaded files will be stored.",
});
but I'm not sure this is really the right thing to do, and besides, it doesn't work: I still get the same error.
Does anyone have any ideas where I'm going wrong?
If a CodeBuild project runs on a custom image that has the awscli preinstalled, but not configured for that AWS account, would it still be possible to run aws * in that project's buildspec without updating its AWS credentials there first?
In other words, are these credentials made available by CodeBuild (e.g. by providing this information in automatically picked-up environment variables), or, if I am using a custom image, is it up to me to take care of that explicitly, with aws * only expected to work out of the box in a buildspec on CodeBuild-managed images?
(I mean the configuration/credentials for the account and role the CodeBuild project in question operates under.)
When you attach an IAM service role to your AWS CodeBuild project, you don't need to configure the AWS CLI. The IAM service role is part of the environment configuration, and this role is assumed whenever you try to access resources in AWS. The same goes for a custom image for AWS CodeBuild.
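A quick way to verify this from a custom image is a buildspec step like the sketch below. CodeBuild injects temporary credentials for the service role through the container credentials endpoint (AWS_CONTAINER_CREDENTIALS_RELATIVE_URI), which the AWS CLI picks up automatically:

version: 0.2
phases:
  build:
    commands:
      # Prints the account and assumed-role ARN without any prior
      # credential configuration - works on managed and custom images alike.
      - aws sts get-caller-identity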
Details - I have a CircleCI job that makes a zip of my Lambda code and uploads it to S3 (we just keep updating the version of the same S3 object, e.g. code.zip; we don't change the name).
Now I have AWS CDK code where I define the body of my Lambda, making use of the S3 zip file as described at https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-lambda.Code.html#static-fromwbrbucketbucket-key-objectversion.
Issue - I want an automated deployment so that whenever a new version of the code.zip file gets uploaded to S3, all my Lambdas using it are automatically updated with the latest code.
Please suggest!
I can think of 2 solutions for this:
Have a step after you upload the latest code to S3 that updates your Lambda function, like below:
aws lambda update-function-code \
    --function-name your_function_name \
    --s3-bucket your_bucket_name \
    --s3-key your_code.zip
Create another Lambda function, and create an S3 "object created" event (or whatever event suits you; you can even filter on .zip).
In that Lambda function, which will be triggered by the S3 upload, you can call the same UpdateFunctionCode API to update your target Lambda function; a minimal sketch follows.
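A sketch of that trigger function on the Node.js runtime (AWS SDK for JavaScript v2; the target function name is a placeholder you would replace or read from an environment variable):

import * as AWS from "aws-sdk";
import { S3Event } from "aws-lambda"; // type definitions from @types/aws-lambda

const lambda = new AWS.Lambda();

// Fired by the bucket's s3:ObjectCreated:* notification (filtered to .zip);
// repoints the target function's code at the newly uploaded object.
export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    await lambda
      .updateFunctionCode({
        FunctionName: "your_function_name", // placeholder
        S3Bucket: record.s3.bucket.name,
        S3Key: record.s3.object.key, // note: keys with special characters arrive URL-encoded
      })
      .promise();
  }
};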
I am using CloudFormation with SAM to deploy a stack which contains:
S3 Bucket
Cognito
AWS::Serverless::Api
AWS::Serverless::Function (authorizers + microservices, Type: Api and endpoints of the API Gateway)
Log Groups
To deploy my stack, I first run aws cloudformation package to package the Lambdas and then run aws cloudformation deploy to deploy the generated template. This is working.
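For reference, that flow is roughly the following (bucket and stack names are placeholders):

aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket your-artifact-bucket \
    --output-template-file packaged.yaml
aws cloudformation deploy \
    --template-file packaged.yaml \
    --stack-name your-stack \
    --capabilities CAPABILITY_IAM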
My goal now is to be able to update a single microservice without deploying the entire stack (not building the authorizers and other microservices), similar to serverless deploy function in the Serverless Framework. Preferably this would be one reusable template that uses a macro or just replaces text in the file.
The problems I am facing with this:
Running aws lambda update-function-code requires the Lambda to be redeployed.
To redeploy the Lambda, I have to declare AWS::Serverless::Function. For the function to be part of the API Gateway, AWS::Serverless::Api must be declared as well.
Declaring AWS::Serverless::Api requires all the other functions to be defined, or they will be removed from the API Gateway.
I feel stuck here and have not found other options for achieving my goal.
Since you're using SAM, I'd recommend deploying and updating your application with the SAM CLI commands.
You can run
sam build
sam package
sam deploy
When you run sam deploy the first time, it deploys your application; all subsequent sam deploy commands update your existing CloudFormation stack with only the resources that actually need updating.
If you opt to keep using the standard CloudFormation CLI commands, you could use the aws cloudformation update-stack command so that you're not re-deploying an entire new stack.
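For example (stack name and template path are placeholders; CAPABILITY_AUTO_EXPAND is needed here because the SAM transform has to be expanded during the update):

aws cloudformation update-stack \
    --stack-name your-stack \
    --template-body file://packaged.yaml \
    --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND

CloudFormation then computes the diff and touches only the resources whose definitions changed.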
I am sure there are multiple ways an AWS Lambda can be versioned/published, but I am trying to do it in a particular way and need some help.
I have a .NET Core Lambda project, "MyTTL".
Now, in the GitLab YML script, I have code which pushes the Lambda zip to an S3 bucket, like below (pseudo-script).
GITLAB SCRIPT
variables:
  OUTPUT_FILE_PATH: '$CI_PROJECT_DIR/bin/Release/netcoreapp3.1/MyTTL.zip'
script:
  - dotnet lambda package
  - aws s3 cp $OUTPUT_FILE_PATH s3://$S3_BUCKET/
The above script works fine and uploads MyTTL.zip to the S3 bucket.
Now, in Terraform, I have the below script referencing that Lambda:
resource "aws_lambda_function" "lambda" {
s3_bucket = "My S3 BUCKET"
s3_key = "protected/sample/${var.artifact_version}.zip"
source_code_hash = "${filebase64sha256("${var.artifact_version}.zip")}"
}
As you can see, I want to pass a version (artifact_version) to this module, so that I can tell which Lambda version a particular client is running.
Question - I am not sure how to make sure that every dotnet lambda package creates a new zip version, so that old Terraform can still point to the old version of the Lambda code while I point Terraform at a new version of the Lambda for different clients at will.
Manual lame solution - I make the code change in my .NET Core project, let the GitLab script publish it to S3, then download the zip, rename it to the version I want, upload it to S3 again, and reference it in Terraform later.
variables:
  OUTPUT_FILE_PATH: '$CI_PROJECT_DIR/bin/Release/netcoreapp3.1/MyTTL.zip'
script:
  - dotnet lambda package
  - aws s3 cp $OUTPUT_FILE_PATH s3://$S3_BUCKET/MyTTL${CI_COMMIT_SHORT_SHA}.zip
Now you have different versions of your Lambda project, each tagged with the hash of its commit, and you no longer need to download and rename anything; just change the hash in the name. That hash will be unique for every commit.
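On the Terraform side, the referenced key can then be pinned per client with a variable, along the lines of this sketch (variable and resource names are assumptions based on the question):

variable "artifact_version" {
  description = "Short commit SHA of the build this client should run"
  type        = string
}

resource "aws_lambda_function" "lambda" {
  # function_name, role, handler, runtime, etc. omitted as in the question
  s3_bucket = "My S3 BUCKET"
  s3_key    = "MyTTL${var.artifact_version}.zip"
}

Bumping a client to a new build is then just a change of artifact_version followed by terraform apply.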