How to specify file path in env var in CloudFormation template? - amazon-web-services

I am using a CloudFormation template that uses a Docker image to provision a stack on AWS. This template requires an environment variable that points to the path of a PEM file, not to its contents; the Docker image reads the contents from that file path. I tried uploading the contents of the PEM file to AWS Secrets Manager and passing the ARN in the environment variable; however, the image's startup check for the file's existence (a shell -e test) does not find anything at the ARN. I also made sure to give the execution role the "ssm:GetParameters", "secretsmanager:GetSecretValue", and "kms:Decrypt" permissions and to list the ARN under the resources they apply to.
Why is it not able to find the file? Should I be uploading to S3 instead? Or do I need to do something more in CloudFormation that copies the file over into the context?
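One workaround, sketched below under the assumption that the container's entrypoint can be extended and that boto3 is available in the image (the variable names PEM_SECRET_ARN and PEM_FILE_PATH are illustrative), is to fetch the secret at startup and write it to a real file, so the environment variable can point at that path instead of at an ARN:

# fetch_pem.py -- hypothetical startup helper run before the main process
import os
import boto3

def write_pem_from_secret():
    # PEM_SECRET_ARN holds the Secrets Manager ARN; PEM_FILE_PATH is the path
    # the image's "-e" existence check expects (both names are illustrative).
    secrets = boto3.client("secretsmanager")
    resp = secrets.get_secret_value(SecretId=os.environ["PEM_SECRET_ARN"])
    path = os.environ.get("PEM_FILE_PATH", "/etc/certs/key.pem")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(resp["SecretString"])
    os.chmod(path, 0o600)

if __name__ == "__main__":
    write_pem_from_secret()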

Related

Working on Video on demand CloudFormation template, need to customize the template

I need to modify the Video on Demand on AWS CloudFormation template.
When I deploy the main template without any modifications and upload a video to the source S3 bucket, the folders created in the destination S3 bucket are named with the GUID of the corresponding DynamoDB item.
In my case, the requirement is that those folders in the destination S3 bucket should be created with meaningful names.
To resolve this, where exactly do I need to modify the template?
Steps for modifying a CloudFormation stack can be found at:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-get-template.html
In general, Stacks support passed parameters of the form:
aws cloudformation create-stack --stack-name mystack \
    --template-body file://mypath/mystack.json \
    --parameters ParameterKey=KeyPairName,ParameterValue=MyKeyName
So one of the passed parameters could be the desired output filename.
Regarding file naming within MediaConvert, the service supports a set of time and date variables for naming output files, which can be found at: https://docs.aws.amazon.com/mediaconvert/latest/ug/list-of-settings-variables-with-examples.html
Alternatively, you could rename the files after output using a Lambda Function triggered by S3 File Events. This would allow you to generate or retrieve a name conducive to your workflows, as well as set Tags and other object metadata.
Examples of this workflow are documented here: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
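A minimal sketch of such a handler, assuming Python and that a meaningful name can be derived from the incoming object key (derive_meaningful_key below is a placeholder; for the Video on Demand workflow it might instead look up the DynamoDB item that the GUID refers to):

import urllib.parse
import boto3

s3 = boto3.client("s3")

def derive_meaningful_key(key):
    # Placeholder mapping: replace the leading GUID folder with a readable prefix.
    return "renamed/" + key.split("/", 1)[-1]

def lambda_handler(event, context):
    # Triggered by ObjectCreated events on the destination bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        new_key = derive_meaningful_key(key)
        # S3 has no rename; copy to the new key, then delete the original.
        s3.copy_object(Bucket=bucket, CopySource={"Bucket": bucket, "Key": key}, Key=new_key)
        s3.delete_object(Bucket=bucket, Key=key)

In practice you would scope the trigger (for example with a key prefix filter) so that the copied object does not re-trigger the function.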
I hope this information helps you tune your workflow. Good luck!

Granting write access to S3 in python cdk

I was able to follow this example [1] and let my EC2 instance read from S3.
In order to write to the same bucket, I thought changing line 57 [2] from grant_read() to grant_read_write() should work.
...
# Userdata executes script from S3
instance.user_data.add_execute_file_command(
    file_path=local_path
)
# asset.grant_read(instance.role)
asset.grant_read_write(instance.role)
...
Yet the documented [3] function cannot be accessed, according to the error message.
>> 57: Pyright: Cannot access member "grant_read_write" for type "Asset"
What am I missing?
[1] https://github.com/aws-samples/aws-cdk-examples/tree/master/python/ec2/instance
[2] https://github.com/aws-samples/aws-cdk-examples/blob/master/python/ec2/instance/app.py#L57
[3] https://docs.aws.amazon.com/cdk/latest/guide/permissions.html#permissions_grants
This is the documentation for Asset:
An asset represents a local file or directory, which is automatically
uploaded to S3 and then can be referenced within a CDK application.
The method grant_read_write isn't provided, as it is pointless. The documentation you've linked doesn't apply here.
An asset is just a zip file that will be uploaded to the bootstrapped CDK S3 bucket and then referenced by CloudFormation when deploying.
If you have a script you want to put into an S3 bucket, you don't want to use any form of asset, because an asset is a zip file. You would be better suited using a boto3 command to upload it once the bucket already exists, or making it part of a CodePipeline that creates the bucket with CDK and then uploads the script in the next stage.
grant_read_write is for aws_cdk.aws_s3.Bucket constructs in this case.
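A rough sketch of that, assuming it runs inside the same Stack class as the linked example (so self and instance are in scope) and that the bucket is defined in the same CDK app:

from aws_cdk import aws_s3 as s3

# A bucket defined in the same stack; grant_read_write() exists on Bucket,
# unlike on Asset.
bucket = s3.Bucket(self, "DataBucket")
bucket.grant_read_write(instance.role)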

AWS Cloudformation: No change set if the CF's Referenced S3 files are changed

My CloudFormation template contains a Step Functions state machine whose definition is placed at an S3 location.
Now if there is any change in the definition, i.e. a change only in the S3 file, updating the stack fails.
Solution I tried
I have declared a parameter in my template (say BuildVersion); every change in the S3 file leads to a new build version, and I am sending this build version through the parameters:
aws cloudformation --region "xyz" update-stack --stack-name "my stack name" --timeout-in-minutes 15 --template-body "CF file path" --parameters "parameter-file path"
This is what my parameter file looks like; before calling the command above, I update the BuildVersion value in it:
[
  {"ParameterKey": "EnvironmentName", "ParameterValue": "dev"},
  {"ParameterKey": "BuildVersion", "ParameterValue": "0.0"}
]
But this is not solving the problem and the update command still fails. If I do the same thing in the AWS console, i.e. update the parameter and click Update, it works fine.
Any suggestions will be highly appreciated.
When you update your stack, CloudFormation will not look into the file referenced on S3 to see if anything has changed. It will simply see that the file location has not changed and conclude that there is no change in the template itself.
Since your question is not really complete, I'll have to answer based on some assumptions. You could enable versioning on the S3 bucket that holds your definition and pass in the S3 version identifier as a parameter to your stack to use as a property of your StepFunction S3 Location declaration. If you do this, you can upload a new version of the state machine declaration to S3 and use that version identifier as a parameter for updating your CloudFormation stack.
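Roughly, that looks like this in the template (a sketch; the bucket, key, and role names are illustrative, the role is assumed to be defined elsewhere in the template, and versioning must be enabled on the definition bucket):

Parameters:
  DefinitionVersion:
    Type: String    # S3 object version id of the uploaded definition

Resources:
  MyStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: !GetAtt StateMachineRole.Arn
      DefinitionS3Location:
        Bucket: my-definitions-bucket
        Key: state-machine.asl.json
        Version: !Ref DefinitionVersion

Passing a new DefinitionVersion value on each update then gives CloudFormation a real change to apply.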
You could automate the entire process of updating the CloudFormation stack by using a CodePipeline or a Lambda trigger on S3 to automatically update the CloudFormation stack when a new StateMachine definition is uploaded.
The issue with the BuildVersion approach was that this parameter was not participating in the creation of any AWS resource, so updating it through the CLI did not affect anything in the change set.
I added this parameter to the Tags section of the state machine resource and it worked for me.
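For reference, that workaround looks roughly like this (a sketch; the tag key is arbitrary):

  MyStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      ...
      Tags:
        - Key: BuildVersion
          Value: !Ref BuildVersion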

Getting an error "File does not exist in artifact [SourceArtifact]" when working on codepipeline

I'm using S3 as the source when creating the CodePipeline in the "Add source stage" step.
During the "Add deploy stage" step I'm including the "Object URL" of the file as the artifact name, but when I try to create the pipeline it fails with the error "File does not exist in artifact [SourceArtifact]", even though the file is available in S3.
CloudFormation deployment actions in CodePipeline expect source artifacts to be in .zip format. The reference to the file within the artifact is the path to the template within the zip file.
Per AWS documentation:
When you use Amazon Simple Storage Service (Amazon S3) as the source repository, CodePipeline requires you to zip your source files before uploading them to an S3 bucket. The zipped file is a CodePipeline artifact that can contain an AWS CloudFormation template, a template configuration file, or both.
Therefore, to correctly reference and process a CloudFormation template, follow these steps:
Add your CloudFormation script (i.e. cf.yaml) to a .zip file (i.e. cf.zip)
Upload your zip file to S3
Set the .zip file as the path to the source S3 artifact (i.e. cf.zip)
Reference the source artifact in your deployment stage, but for the filename, reference the text file within the zip (i.e. cf.yaml)
Execute the pipeline
See Edit the artifact and upload it to an S3 Bucket
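For example, steps 1 and 2 can be done from the command line (the bucket name is illustrative):

zip cf.zip cf.yaml
aws s3 cp cf.zip s3://my-pipeline-source-bucket/cf.zip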

Is there a way to use an SSM Parameter to provide the path for an S3 deployment in CodePipeline

So, I've got a simple CodePipeline setup that uses CodeBuild to put some artifacts together and then provisions a CloudFormation Stack.
One of the resources created is an S3 bucket for storing static files. It also creates an SSM parameter with the name of the bucket.
Currently, to deploy to the bucket I'm using an S3 stage to unpack the initial set of files. Unfortunately I can only figure out how to set the bucket manually. This works ok if the stack is already provisioned but fails if the stack is created from fresh (with a new bucket name assigned).
Is there a way I can use the SSM parameter as part of this stage that I'm not seeing?
I agree with #shariqmaws. You can save the environment variable in SSM parameter store and can use that variable as follows:
env:
  parameter-store:
    key: value
Once that's done, you can use that variable as follows:
aws s3 sync your-s3-files/ "s3://${key}"
If I understood you correctly, you wish to dynamically change the Bucket name of the S3 Deploy Action in CodePipeline. Currently this is not possible as this is part of the pipeline configuration [1].
What you can do instead is take matters in your own hands, replace the S3 Deploy action with a CodeBuild action and sync the files to the S3 bucket yourself. You can read in the parameter store value using this syntax in buildspec [2]:
env:
  parameter-store:
    key: "value"
... or use the 'aws ssm' cli 'get-parameter' command [3] on-demand.
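For example, a build command could look roughly like this (the parameter name and paths are illustrative):

BUCKET_NAME=$(aws ssm get-parameter --name /my-app/static-bucket-name --query "Parameter.Value" --output text)
aws s3 sync ./static "s3://${BUCKET_NAME}/"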
[1] https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#w498aac41c11c11c31b3
[2] https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax-link3
[3] https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameter.html