AWS CDK: push image to ECR

I am trying to do something that seems fairly logical and straightforward.
I am using the AWS CDK to provision an ECR repo:
repository = ecr.Repository(
    self,
    id="Repo",
    repository_name=ecr_repo_name,
    removal_policy=core.RemovalPolicy.DESTROY
)
I then have a Dockerfile at the root of my project, and I am trying to push the image it builds to the same ECR repo as part of the deployment.
I do this in the same service code with:
assets = DockerImageAsset(
    self,
    "S3_text_image",
    directory=str(Path(__file__).parent.parent),
    repository_name=ecr_repo_name
)
The deployment goes ahead fine and the ECR repo is created, but the image is pushed to the default location, aws-cdk/assets.
How do I make the deployment push my image to the exact ECR repo I want it to live in?

AWS CDK deprecated the repositoryName property on DockerImageAsset. There are a few issues on GitHub referencing the problem. See this comment from one of the developers:
At the moment the CDK comes with 2 asset systems:
The legacy one (currently still the default), where you get to specify a repositoryName per asset, and the CLI will create and push to whatever ECR repository you name.
The new one (will become the default in the future), where a single ECR repository will be created by doing cdk bootstrap and all images will be pushed into it. The CLI will not create the repository any more, it must already exist. IIRC this was done to limit the permissions required for deployments. #eladb, can you help me remember why we chose to do it this way?
There is a request for a new construct that will allow you to deploy to a custom ECR repository at (aws-ecr-assets) ecr-deployment #12597.
Use Case
I would like to use this feature to completely deploy my local image source code to ECR for me, using an ECR repo that I have previously created in my CDK app or, more importantly, outside the app using an ARN. The biggest problem is that the image cannot be completely abstracted into the assets repo because of auditing and semantic versioning.
There is also a third party solution at https://github.com/wchaws/cdk-ecr-deployment if you do not want to wait for the CDK team to implement the new construct.
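For illustration, here is a minimal sketch of how that third-party construct can copy a DockerImageAsset into a specific repository. It assumes the cdk-ecr-deployment package is installed, that the code runs inside the same stack as in the question (self, ecr_repo_name), and uses CDK v1-style imports to match the question; the :latest tag is an arbitrary choice.
from pathlib import Path

from aws_cdk import core, aws_ecr as ecr
from aws_cdk.aws_ecr_assets import DockerImageAsset
import cdk_ecr_deployment as ecrdeploy

repository = ecr.Repository(
    self,
    id="Repo",
    repository_name=ecr_repo_name,
    removal_policy=core.RemovalPolicy.DESTROY
)

# Build the image as a CDK asset (it still lands in the CDK assets repo first)
assets = DockerImageAsset(
    self,
    "S3_text_image",
    directory=str(Path(__file__).parent.parent),
)

# Then copy it from the assets repo into the repository created above
ecrdeploy.ECRDeployment(
    self,
    "DeployImage",
    src=ecrdeploy.DockerImageName(assets.image_uri),
    dest=ecrdeploy.DockerImageName(f"{repository.repository_uri}:latest"),
)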

Related

AWS CDK accessing parameters when deploying stacks on the pipeline via yaml, typescript and nodejs

I'm fairly new to AWS and the CDK, but I have been working on a project which deploys to AWS via a pipeline, using YAML for the CloudFormation template and later a Node script to run cdk deploy on a set of stack files written in TypeScript.
In the CloudFormation template YAML where the cdk-toolkit is defined, there's a bucket resource with name X. After the toolkit has been created/updated in the pipeline, the cdk deploy command is executed to deploy some stacks and workers, which should live in bucket X. They aren't automatically being uploaded there, however, so I've tried using the --parameters flag to specify X as below.
cdk deploy --toolkit-stack-name my-toolkit --parameters uploadBucketName=X --ci --require-approval never
When I do this I get the following error in the pipeline for the first stack that gets deployed:
Parameters: [uploadBucketName] do not exist in the template
I assumed this meant that the MyFirstStack.ts file was missing a parameter definition, as suggested by the AWS documentation, but it's not clear to me why this is necessary or how it's supposed to be used when it's the cdk deploy command which provides a value for this parameter. I tried adding it per the docs:
const uploadBucketName = new CfnParameter(this, "uploadBucketName", {
  type: "String",
  description: "The name of the Amazon S3 bucket where uploaded files will be stored.",
});
but I'm not sure if this is really the right thing to do, and it doesn't work anyway: I still get the same error.
Does anyone have any ideas where I'm going wrong?
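For reference, a CfnParameter only ends up in a stack's template when it is declared in that stack, so the declaration has to live in the stack that cdk deploy is actually deploying. A minimal CDK v1-style TypeScript sketch under that assumption (the bucket construct and names are purely illustrative):
import * as cdk from "@aws-cdk/core";
import * as s3 from "@aws-cdk/aws-s3";

export class MyFirstStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Declared inside this stack, so it appears in this stack's template
    // and can be supplied with `cdk deploy --parameters uploadBucketName=X`.
    const uploadBucketName = new cdk.CfnParameter(this, "uploadBucketName", {
      type: "String",
      description: "The name of the Amazon S3 bucket where uploaded files will be stored.",
    });

    // Illustrative use of the parameter's value
    new s3.Bucket(this, "UploadBucket", {
      bucketName: uploadBucketName.valueAsString,
    });
  }
}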

How to build CloudFormation template from multiple yml

I'm trying to get this repo going: https://github.com/mydatastack/google-analytics-to-s3.
A link is provided to launch the AWS CloudFormation stack, but it no longer works because the S3 bucket containing the template is no longer active.
I have 2 questions about getting the data pipeline running:
My first question is: what is 631216aef6ab2824fc63572d1d3d5e6c.template, and can I create it from the 3 .yml files in the CloudFormation folder?
I've tried to create a template through CloudFormation Designer from collector-ga.yml, but it fails. I think it's because the Resources within the yml aren't available when creating a template just from collector-ga. I've also tried uploading the repo to S3 and creating a template from there, but that was also unsuccessful.
How can I launch the stack from the repo? I've found very little information online, so an explanation or a pointer to some relevant resources would be appreciated.
This repository doesn't use "standard" CloudFormation resources; it uses AWS SAM. You'll have to install the SAM CLI tool and use that to deploy the CloudFormation stack. If you run sam deploy --guided, it will help you with the setup of the necessary S3 bucket etc. on your AWS account. SAM will upload the necessary files, resolve the internal local links between the templates by updating them with the S3 URLs, and construct a packaged.yml template which it will use to deploy the stack.
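A rough sketch of that flow, assuming the SAM CLI is installed and run from the directory containing the SAM template (adjust the template path to the repo's actual layout):
# From the directory containing the SAM template
sam build                 # resolve and build the resources defined in the template
sam deploy --guided       # prompts for stack name, region and an S3 bucket, then deploys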
Also, check out the AWS SAM user guide for more information.

Terraform deployment of Docker Containers to aws ecr

I am having issues deploying my Docker images to AWS ECR as part of a Terraform deployment, and I am trying to think through the best long-term strategy.
At the moment I have a Terraform remote backend in S3 and DynamoDB in, let's call it, my 'master' account. I then have dev/test etc. environments in separate accounts. The Terraform deployment is currently run off my local machine (a Mac) and uses the 'master' account and its credentials, which in turn assume a role in the target deployment account to create the resources, as per:
provider "aws" { // tell terraform which SDK it needs to load
alias = "target"
region = var.region
assume_role {
role_arn = "arn:aws:iam::${var.deployment_account}:role/${var.provider_env_deployment_role_name}"
}
}
I am creating a number of ECS services with Fargate deployments. The container images are built in separate repos by GitHub Actions and saved as GitHub Packages. These package names and versions are deployed after the creation of the ECR repo and service (maybe that's not ideal, thinking about it), and this is where the problems arise.
The process is to pull the image from GitHub Packages, retag it and upload it to the ECR repo using multiple executions of a null_resource local-exec. This works fine standalone but has problems as part of the Terraform process. I think the reason is that the other resources use the above provider to get permissions, but as null_resource does not accept a provider it cannot get permissions this way. So I have been passing the AWS credential values into the shell. I'm not convinced this is really secure, but that's currently moot as it isn't working either. I get this error:
Error saving credentials: error storing credentials - err: exit status 1, out: `error storing credentials - err: exit status 1, out: `The specified item already exists in the keychain.``
Part of me thinks this is the wrong approach and that, as I migrate to deploying via a GitHub Action, I can separate the infrastructure deployment via Terraform from what is really the application deployment, and just use GitHub secrets to reset the credential values and then run the script.
Alternatively, maybe the keychain issue just goes away and my process will work fine? Securely??
That's fine for this scenario, but it isn't really a generic approach for all my use cases.
I am shortly going to start deploying multiple AWS Lambda functions with Docker containers. I haven't done it before, but it looks like the process is going to be: create the ECR repo, deploy the container, deploy the Lambda function. This really implies that the container deployment should be integral to the Terraform deployment, which loops back to my issue with the local-exec.
I found GitHub Actions to deploy to ECR, which would imply splitting the deployment into multiple files, but that seems inelegant and potentially brittle.
Maybe there is a simple solution, but given where I am trying to go with this, what is my best approach?
I know this isn't a complete answer, but you should be pulling your AWS creds from environment variables. I don't really understand whether you need credentials for different accounts, but if you do, then swap them during the course of your action. See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html. Terraform should pick these up and automatically use them for AWS access.
Instead of those hard-coded access key/secret access keys, I'd suggest making use of GitHub/AWS's ability to assume a role through temporary credentials with OIDC: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services
You'd likely only define one initial role that you'd authenticate into, and from there assume into the other accounts you're deploying into.
These assume-role credentials are only good for an hour and do not have the operational overhead of having to rotate them.
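For illustration, a minimal fragment of a GitHub Actions job using the official aws-actions/configure-aws-credentials action with OIDC might look like this (the role ARN, region and the final Terraform step are placeholders to adapt):
permissions:
  id-token: write   # required so the job can request an OIDC token
  contents: read

steps:
  - uses: actions/checkout@v3

  # Exchanges the GitHub OIDC token for temporary AWS credentials;
  # no long-lived access keys need to be stored as secrets.
  - uses: aws-actions/configure-aws-credentials@v2
    with:
      role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
      aws-region: eu-west-1

  - run: terraform init && terraform apply -auto-approve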
As suggested by Kevin Buchs' answer...
My primary issue was related to deploying from a Mac and the use of the keychain. As this was not on the critical path, I went around it and set up a GitHub Action.
The Action loaded environment variables from GitHub secrets for my 'master' AWS account credentials.
AWS_ACCESS_KEY_ID: ${{ secrets.NK_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.NK_AWS_SECRET_ACCESS_KEY }}
I also loaded the target account's credentials into environment variables in the same way, BUT with the prefix TF_VAR_.
TF_VAR_DEVELOP_AWS_ACCESS_KEY_ID: ${{ secrets.DEVELOP_AWS_ACCESS_KEY_ID }}
TF_VAR_DEVELOP_AWS_SECRET_ACCESS_KEY: ${{ secrets.DEVELOP_AWS_SECRET_ACCESS_KEY }}
I then declare Terraform variables, which will be automatically populated from the environment variables.
variable "DEVELOP_AWS_ACCESS_KEY_ID" {
description = "access key for the dev account"
type = string
}
variable "DEVELOP_AWS_SECRET_ACCESS_KEY" {
description = "secret access key for the dev account"
type = string
}
And then I run a shell script with a local-exec, passing the variables as arguments:
resource "null_resource" "image-upload-to-importcsv-ecr" {
provisioner "local-exec" {
command = "./ecr-push.sh ${var.DEVELOP_AWS_ACCESS_KEY_ID} ${var.DEVELOP_AWS_SECRET_ACCESS_KEY} "
}
}
Within the script I can then use these arguments to set the credentials, e.g.
AWS_ACCESS=$1
AWS_SECRET=$2
.....
export AWS_ACCESS_KEY_ID=${AWS_ACCESS}
export AWS_SECRET_ACCESS_KEY=${AWS_SECRET}
and the script now has credentials to do whatever.
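The answer doesn't include the script itself; a minimal sketch of what such an ecr-push.sh might do (the account ID, region, image and repo names are placeholders, not the author's actual values):
#!/bin/bash
set -euo pipefail

AWS_ACCESS=$1
AWS_SECRET=$2

export AWS_ACCESS_KEY_ID=${AWS_ACCESS}
export AWS_SECRET_ACCESS_KEY=${AWS_SECRET}

REGION=eu-west-1                 # placeholder
ACCOUNT_ID=123456789012          # placeholder
ECR_REPO=${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/my-service

# Log docker in to the target account's ECR registry
aws ecr get-login-password --region ${REGION} | \
  docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com

# Pull from GitHub Packages, retag and push to ECR
docker pull ghcr.io/my-org/my-service:1.0.0
docker tag ghcr.io/my-org/my-service:1.0.0 ${ECR_REPO}:1.0.0
docker push ${ECR_REPO}:1.0.0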

Code commit repository cloning issue in Terraform

I am trying to create a repository in AWS CodeCommit. I am able to create the repository, but I am not able to clone GitHub code into this newly created repository.
This is the code I am using to create the repository:
resource "aws_codecommit_repository" "test" {
repository_name = "MyTestRepository"
description = "This is the Sample App Repository"
}
I also want to clone a GitHub repo into this new CodeCommit repo.
Here is the Terraform documentation page for the CodeCommit repository resource: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codecommit_repository
What you are trying to do is essentially "migrate" a GitHub repo to CodeCommit. This is documented in the AWS docs and basically involves cloning the source git repo locally and pushing it to CodeCommit.
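The migration boils down to something like this (a sketch; the GitHub URL and region are placeholders, with the CodeCommit repo name taken from the question):
# Make a bare mirror clone of the source GitHub repo
git clone --mirror https://github.com/my-org/my-repo.git my-repo-mirror
cd my-repo-mirror

# Push all branches and tags to the new CodeCommit repo
git push https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyTestRepository --all
git push https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyTestRepository --tags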
To "seed" the initial code automatically, CloudFormation has a construct Code in resource AWS::CodeCommit::Repository 2 which can copy initial code from an S3 bucket, unfortunately such an option does not seem to exist in Terraform's aws_codecommit_repository resource 3.

Clone CodeCommit from CodeBuild

Can you create a CodeBuild project which initially clones from one CodeCommit repo in the region, and then pushes the contents to a repo in another region?
I want to do this without using HTTPS credentials. I have a CodeBuild project which uses a role with CodeCommitPowerUser access, but the clone command still doesn't work.
It seems the region is used to compute the credentials: https://github.com/aws/aws-cli/blob/develop/awscli/customizations/codecommit.py#L147
Credentials generated for one region may not be used for a repository in other regions.