Is there a way to use an SSM Parameter to provide the path for an S3 deployment in CodePipeline - amazon-web-services

So, I've got a simple CodePipeline setup that uses CodeBuild to put some artifacts together and then provisions a CloudFormation Stack.
One of the resources created is an S3 bucket for storing static files. It also creates an SSM parameter with the name of the bucket.
Currently, to deploy to the bucket I'm using an S3 deploy stage to unpack the initial set of files. Unfortunately, I can only figure out how to set the bucket name manually. This works fine if the stack is already provisioned, but fails if the stack is created from scratch (with a new bucket name assigned).
Is there a way I can use the SSM parameter as part of this stage that I'm not seeing?

I agree with @shariqmaws. You can store the value in SSM Parameter Store and pull it into your build environment as follows:
env:
  parameter-store:
    key: value
Once that's done, you can use that variable as follows:
aws s3 sync your-s3-files/ "s3://${key}"

If I understood you correctly, you wish to dynamically change the Bucket name of the S3 Deploy Action in CodePipeline. Currently this is not possible as this is part of the pipeline configuration [1].
What you can do instead is take matters into your own hands: replace the S3 Deploy action with a CodeBuild action and sync the files to the S3 bucket yourself. You can read in the Parameter Store value using this syntax in the buildspec [2]:
env:
  parameter-store:
    key: "value"
... or call the aws ssm get-parameter CLI command [3] on demand. A sketch of this CodeBuild approach follows the references below.
[1] https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#w498aac41c11c11c31b3
[2] https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax-link3
[3] https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameter.html
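As a sketch of that CodeBuild-based replacement, assuming the pipeline itself is defined with the AWS CDK in TypeScript (if it isn't, the same buildspec can live in a buildspec.yml file instead); the parameter name /myapp/static-bucket-name, the project name, and the static/ directory are placeholders:

import * as codebuild from 'aws-cdk-lib/aws-codebuild';

// Inside the stack that defines the pipeline. The project replaces the S3 Deploy
// action and resolves the bucket name from Parameter Store at build time.
const deployProject = new codebuild.PipelineProject(this, 'DeployStaticFiles', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    env: {
      // Maps the SSM parameter onto the BUCKET_NAME environment variable.
      'parameter-store': {
        BUCKET_NAME: '/myapp/static-bucket-name',
      },
    },
    phases: {
      build: {
        commands: ['aws s3 sync static/ "s3://${BUCKET_NAME}"'],
      },
    },
  }),
});
// The project's role also needs ssm:GetParameters on the parameter and write access to the bucket.

The project is then wired into the pipeline with a CodeBuild action in place of the current S3 deploy action.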

Related

AWS CDK accessing parameters when deploying stacks on the pipeline via yaml, typescript and nodejs

I'm fairly new to AWS and to using the CDK, but I've been working on a project which deploys to AWS via a pipeline: YAML for the CloudFormation template, and later a Node script that runs cdk deploy on a set of stack files written in TypeScript.
In the CloudFormation YAML where the cdk-toolkit is defined there's a bucket resource with name X. After the toolkit has been created/updated in the pipeline, the cdk deploy command is executed to deploy some stacks and workers, which should live in bucket X. They aren't automatically uploaded there, however, so I've tried using the --parameters flag to specify X, as below.
cdk deploy --toolkit-stack-name my-toolkit --parameters uploadBucketName=X --ci --require-approval never
When I do this I get the following error in the pipeline for the first stack that gets deployed:
Parameters: [uploadBucketName] do not exist in the template
I assumed this meant that the parameter definition was missing from the MyFirstStack.ts file, as suggested by the AWS documentation, but it's not clear to me why this is necessary or how it's supposed to be used when it's the cdk deploy command that provides the value for this parameter. I tried adding it per the docs:
const uploadBucketName = new CfnParameter(this, "uploadBucketName", {
  type: "String",
  description: "The name of the Amazon S3 bucket where uploaded files will be stored.",
});
but I'm not sure this is really the right thing to do, and in any case it doesn't work: I still get the same error.
Does anyone have any ideas where I'm going wrong?

Working with S3 buckets having no BucketName in AWS Lambda

Because of the global uniqueness requirement for S3 bucket names, using the optional BucketName property in the AWS::S3::Bucket resource is problematic. Essentially, if I insist on using BucketName, I need some way to attach a GUID to it.
I can avoid this pain if I omit the BucketName property entirely, so CloudFormation reliably generates a unique name for me.
However, I face another problem: how do I work with this random bucket name in AWS Lambda/SAM/serverless.com/other code? I understand that CloudFormation templates can export the name, but how do I pass it to the Lambda code?
Is there a standard/recommended way of working with CloudFormation exports in AWS Lambda? The problem is not unique to S3 - e.g., AWS Amplify uses randomly generated DynamoDB table names, too.
If your Lambda is created through CloudFormation, you can pass the bucket name to it using environment variables (the Environment key in SAM and CloudFormation). You can refer to the bucket name with !Ref if the bucket is in the same template, or with cross-stack references (Fn::ImportValue) if it lives in a different stack. If you use cross-stack references, you won't be able to modify or delete the output value in the original stack until you remove all references to it. If you use Ref, the Lambda will also be updated whenever the bucket name changes.
If your Lambda isn't created through CloudFormation, you can use SSM Parameter Store as mentioned by Ervin in his comment. You can create an SSM parameter and read its value in your Lambda code.
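For the Parameter Store route, a minimal sketch of reading the value from Lambda code, assuming a Node.js runtime with the AWS SDK for JavaScript v3; the parameter name /myapp/bucket-name and the BUCKET_NAME_PARAM environment variable are placeholders:

import { SSMClient, GetParameterCommand } from '@aws-sdk/client-ssm';

const ssm = new SSMClient({});
// The parameter name is passed in via an environment variable, with a hard-coded fallback.
const PARAM_NAME = process.env.BUCKET_NAME_PARAM ?? '/myapp/bucket-name';

export const handler = async () => {
  // Requires ssm:GetParameter permission on this parameter in the function's role.
  const result = await ssm.send(new GetParameterCommand({ Name: PARAM_NAME }));
  const bucketName = result.Parameter?.Value;
  // ... use bucketName with the S3 client ...
  return { bucketName };
};

In practice you may want to resolve the value once outside the handler (or cache it) so that every invocation doesn't hit Parameter Store.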

Automate Lambda deployments using zip-bundled code in an S3 bucket

Details: I have a CircleCI job that zips my Lambda code and uploads it to S3 (we just keep updating the version of the same S3 object, e.g. code.zip; we don't change the name).
Now I have AWS CDK code where I define my Lambda and point it at the S3 zip file using Code.fromBucket: https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-lambda.Code.html#static-fromwbrbucketbucket-key-objectversion
Issue: I want an automated deployment such that whenever a new version of code.zip is uploaded to S3, all Lambdas using it are updated with the latest code automatically.
Any suggestions?
I can think of two solutions for this:
Have a step, after you upload the latest code to S3, that updates your Lambda function like below:
aws lambda update-function-code \
  --function-name your_function_name \
  --s3-bucket your_bucket_name \
  --s3-key your_code.zip
Create another Lambda function and set up an S3 event trigger on object creation (or whatever event suits you); you can even filter it to .zip keys.
In that Lambda function, triggered by the S3 upload, you can again use the same AWS CLI command (or the equivalent API call) to update your Lambda function.
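A minimal sketch of that second option, assuming a Node.js Lambda with the AWS SDK for JavaScript v3 and the @types/aws-lambda typings; the target function name is a placeholder passed in through an environment variable:

import { LambdaClient, UpdateFunctionCodeCommand } from '@aws-sdk/client-lambda';
import type { S3Event } from 'aws-lambda';

const lambda = new LambdaClient({});
const TARGET_FUNCTION = process.env.TARGET_FUNCTION_NAME ?? 'your_function_name';

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    if (!key.endsWith('.zip')) continue; // only react to zip bundles
    // Requires lambda:UpdateFunctionCode permission on the target function.
    await lambda.send(new UpdateFunctionCodeCommand({
      FunctionName: TARGET_FUNCTION,
      S3Bucket: bucket,
      S3Key: key,
    }));
  }
};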

AWS CDK update/add lifecycle to existing S3 bucket using custom source

I created an S3 bucket but now I would like to update/add lifecycle policies to it using CDK.
Currently I can import this bucket to a new stack file.
const testBucket = s3.Bucket.fromBucketAttributes(this, 'TestBucket', {
  bucketArn: 'arn:aws:s3:::xxxxxx',
});
How can I use AwsCustomResource to update/add lifecycle policies? For example, for prefix = long, I want those objects to expire in 7 days, and for prefix = short, I want them to expire in 3 days.
Or is there a general way of updating an existing S3 bucket in a new stack with CDK?
The best option for this is probably to add the bucket to the stack using resource importing. Don't confuse a custom resource with a CDK construct: a custom resource involves deploying a Lambda function that CloudFormation invokes whenever the custom resource appears in your stack, whereas a CDK construct is used to generate a CloudFormation template. A construct lets you combine different resources into a single piece of code and make some logical decisions based on the input values.
Steps to import:
Make sure your current CDK code matches what is deployed. You cannot import resources when there are other pending changes.
Add the S3 bucket to your CDK code as if you were creating it for the first time (a sketch follows these steps). Make sure that all the settings are the same as what is currently deployed. This is important because the import process doesn't validate that what you have configured matches what you are importing.
Run cdk synth to generate the CloudFormation template that includes the S3 bucket.
In the CloudFormation console locate the stack that represents the CDK stack you are working with.
Select Import resources into stack from the Stack actions menu.
Follow the prompts. When asked for the template, select Upload a template file and choose the template created by the cdk synth command (hint: it will be in cdk.out). You will be prompted to enter the bucket name; this should be the bucket you want to add to this stack.
Once that is done you can modify the bucket as if it were created by your stack.
One thing to note: the CDK includes some metadata in the CloudFormation template. This value must be the same as what is currently deployed or it will be seen as a change and you won't be able to perform the import. You can copy the value from what is currently deployed and manually edit the template created by cdk synth to match it.
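For the step where the bucket is added to the CDK code, a minimal sketch of what the definition might look like at import time, assuming TypeScript and aws-cdk-lib (the bucket name 'xxxxxx' is the placeholder from the question and must match the existing bucket):

import { RemovalPolicy } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

// Inside the stack class. The settings must match the bucket as it exists today;
// CloudFormation resource import also requires a DeletionPolicy, hence RETAIN.
const testBucket = new s3.Bucket(this, 'TestBucket', {
  bucketName: 'xxxxxx',
  removalPolicy: RemovalPolicy.RETAIN,
});

Once the import has gone through, the lifecycle rules from the question can be added in a later deployment, either through the lifecycleRules property or with addLifecycleRule as in the next answer.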
You need to add the lifecycle rules to the bucket construct itself. Note that addLifecycleRule only exists on an s3.Bucket the stack owns; a bucket imported with fromBucketAttributes (an IBucket) doesn't support it, so this only works once the bucket is defined in (or imported into) your stack (Duration comes from aws-cdk-lib):
const testBucket = new s3.Bucket(this, 'TestBucket', { bucketName: 'xxxxxx' });
testBucket.addLifecycleRule({ prefix: 'short', expiration: Duration.days(3), enabled: true });
testBucket.addLifecycleRule({ prefix: 'long', expiration: Duration.days(7), enabled: true });

AWS CodePipeline: update Lambda function source using an S3 object

I am using Terraform to create all the infrastructure (CodePipeline, Lambda, buckets) on AWS.
Currently, I've created a pipeline that builds the source zip file and puts it in an S3 bucket, but the Lambda still keeps using the older source. So I update the URL manually in the AWS console and it works.
Now I want to automate the flow, but the available solutions are:
AWS SAM + CFT
Codebuild Stage to update the source using AWS CLI
Create a lambda that updates the source
Code Deploy + AWS SAM + CFT
I am not willing to use CloudFormation templates (CFT) at all, since all of our code is in Terraform and CFT would require me to create new Lambdas instead of using the existing ones.
Is there any other, simpler way to update the Lambda source through CodePipeline?
The preferred way to deploy a Lambda via CodePipeline is to use a CloudFormation deploy action [1]. Since you are not looking to use CloudFormation, the next option is to run your terraform plan/apply commands from within a CodeBuild job that is part of the pipeline. You will need to give the CodeBuild role the permissions required for resource creation (or export credentials as environment variables for Terraform to use via the method in [2]) and install the Terraform binary in the install phase of the buildspec.
Ref:
[1] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
[2] How to retrieve Secret Manager data in buildspec.yaml