CodeCommit repository cloning issue in Terraform - amazon-web-services

I am trying to create a repository in AWS CodeCommit. I am able to create the repository, but I am not able to clone GitHub code into this newly created repository.
This is the code I am using to create the repository:
resource "aws_codecommit_repository" "test" {
repository_name = "MyTestRepository"
description = "This is the Sample App Repository"
}
I also want to clone a GitHub repo into this new CodeCommit repo.
Here is the Terraform documentation page for the CodeCommit repository resource: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codecommit_repository

What you are trying to do is essentially "migrate" a GitHub repo to CodeCommit. This is documented in the AWS docs [1] and basically involves locally cloning the source Git repo and pushing it to CodeCommit.
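The AWS-documented flow is essentially a mirror clone followed by a push; a minimal sketch, assuming the CodeCommit repository already exists and credentials for both remotes are configured (both URLs are placeholders):

# Mirror-clone the source repo (all branches and tags, no working tree),
# then push everything to the new CodeCommit repository.
git clone --mirror https://github.com/my-org/my-repo.git my-repo-mirror
cd my-repo-mirror
git push https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyTestRepository --all
git push https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyTestRepository --tags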
To "seed" the initial code automatically, CloudFormation has a construct Code in resource AWS::CodeCommit::Repository 2 which can copy initial code from an S3 bucket, unfortunately such an option does not seem to exist in Terraform's aws_codecommit_repository resource 3.

Related

CodeBuild and CodePipeline error in AWS

I am trying to create an AWS CodePipeline that triggers whenever there is a commit, pulls the files from my GitHub repo, and then builds and deploys to my ECS using CodeBuild.
I managed to create a CodeBuild project that takes the files, builds a Docker image, and tags and pushes it to ECR, and it works perfectly fine.
BUT - when I try to use this CodeBuild project (which definitely works OK manually) in my CodePipeline, I receive an error: CLIENT_ERROR: AccessDenied: Access Denied status code: 403, request id: MRKXFJDHM0ZJF1F6, host id: C6ds+Gg//r7hxFtBuwwpOPfPPcLbywL5AEWkXixCqfdNbjuFOo4zKEqRx6immShnCNK4VgIyJTs= for primary source and source version arn:aws:s3:::codepipeline-us-east-1-805870671912/segev/SourceArti/Qm4QUD8
I understand it has some connection with the S3 bucket, but I cannot make sense of this error. The policies/roles are fine, I guess.
Any idea why building manually works OK, but I get this error when the pipeline triggers the build?
Make sure the role associated with your CodePipeline has read and write permissions to your artifact S3 bucket, which from the error I can tell is arn:aws:s3:::codepipeline-us-east-1-805870671912.
Check the docs about artifacts in CodePipeline:
https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome-introducing-artifacts.html
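As a hedged example, granting that access as an inline policy could look like the following (ROLE_NAME is a placeholder for the service role in question; the bucket ARN is taken from the error above):

# Attach a minimal artifact-bucket policy to the service role.
aws iam put-role-policy \
  --role-name ROLE_NAME \
  --policy-name ArtifactBucketAccess \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::codepipeline-us-east-1-805870671912",
        "arn:aws:s3:::codepipeline-us-east-1-805870671912/*"
      ]
    }]
  }'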

AWS CDK push image to ECR

I am trying to do something that seems fairly logical and straightforward.
I am using the AWS CDK to provision an ECR repo:
repository = ecr.Repository(
    self,
    id="Repo",
    repository_name=ecr_repo_name,
    removal_policy=core.RemovalPolicy.DESTROY
)
I then have a Dockerfile, which lives at the root of my project, whose image I am trying to push to the same ECR repo during the deployment.
I do this in the same service code with:
assets = DockerImageAsset(
    self,
    "S3_text_image",
    directory=str(Path(__file__).parent.parent),
    repository_name=ecr_repo_name
)
The deployment goes ahead fine and the ECR repo is created, but the image is pushed to the default location aws-cdk/assets.
How do I make the deployment push my image to the exact ECR repo I want it to live in?
AWS CDK deprecated the repositoryName property on DockerImageAsset. There are a few issues on GitHub referencing the problem. See this comment from one of the developers:
At the moment the CDK comes with 2 asset systems:
The legacy one (currently still the default), where you get to specify a repositoryName per asset, and the CLI will create and push to whatever ECR repository you name.
The new one (will become the default in the future), where a single ECR repository will be created by doing cdk bootstrap and all images will be pushed into it. The CLI will not create the repository any more, it must already exist. IIRC this was done to limit the permissions required for deployments. #eladb, can you help me remember why we chose to do it this way?
There is a request for a new construct that will allow you to deploy to a custom ECR repository at (aws-ecr-assets) ecr-deployment #12597.
Use Case
I would like to use this feature to completely deploy my local image source code to ECR for me using an ECR repo that I have previously created in my CDK app or more importantly outside the app using an arn. The biggest problem is that the image cannot be completely abstracted into the assets repo because of auditing and semantic versioning.
There is also a third-party solution at https://github.com/wchaws/cdk-ecr-deployment if you do not want to wait for the CDK team to implement the new construct.
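Going by that project's README, a hedged sketch of how it could tie the two snippets above together in Python (verify the API names against the current release):

import cdk_ecr_deployment as ecrdeploy

# Copy the image that DockerImageAsset pushed to the default assets
# repository into the ECR repository created earlier in the stack.
ecrdeploy.ECRDeployment(
    self,
    "DeployDockerImage",
    src=ecrdeploy.DockerImageName(assets.image_uri),
    dest=ecrdeploy.DockerImageName(f"{repository.repository_uri}:latest"),
)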

Terraform infrastructure runs locally, building and deploying it on AWS CodePipeline gives error

I have created my AWS infrastructure using Terraform. The infrastructure includes Elastic Beanstalk apps, an Application Load Balancer, S3, DynamoDB, VPC subnets, and VPC endpoints.
The infrastructure deploys fine locally using the Terraform commands shown below:
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -auto-approve -var-file="terraform.tfvars"
The terraform.tfvars file contains variables like region, instance type, access key, etc.
I want to automate the build and deploy process of this Terraform infrastructure using AWS CodePipeline.
How can I achieve this task? What steps should I follow? Where should I save the terraform.tfvars file? What permissions should the CodeBuild role have? What about the manual auto-approve step?
MY APPROACH: The entire CodeCommit/GitHub, CodeBuild, CodeDeploy (i.e. CodePipeline) process is carried out through the AWS console. I started with GitHub as the source, and it is working (the GitHub repo contains my Terraform code for building the AWS infrastructure). Then for CodeBuild, I need to specify the environment variables and the buildspec.yml file, and this is the problem: locally I had a terraform.tfvars to do the job, but here I need to do it in the buildspec.yml file.
QUESTIONS: How do I specify my terraform.tfvars credentials in the buildspec.yml file, and which environment variables should I set? I also know we need to attach a role to the CodeBuild project, but how do I specify its permissions effectively? And how do I store the Terraform state in S3?
How can I achieve this task?
Use CodeCommit to store your Terraform code, CodeBuild to run terraform plan, terraform apply, etc., and CodePipeline to connect CodeCommit with CodeBuild.
What steps to follow?
There are many tutorials on the internet. Check this as an example:
https://medium.com/faun/terraform-deployments-with-aws-codepipeline-342074248843
Where to save the terraform.tfvars file?
Ideally, you should create one terraform.tfvars file for the development environment, like terraform.tfvars.dev, and another one for the production environment, like terraform.tfvars.prod. In your CodeBuild environment, choose the file using environment variables, as sketched below.
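A minimal buildspec.yml sketch of that idea, assuming a TF_ENV environment variable ("dev" or "prod") is set on the CodeBuild project and Terraform is available in the build image:

version: 0.2
phases:
  build:
    commands:
      # TF_ENV is an assumed CodeBuild environment variable that selects
      # terraform.tfvars.dev or terraform.tfvars.prod.
      - terraform init
      - terraform plan -var-file="terraform.tfvars.${TF_ENV}"

Note that you should not need access keys in the tfvars file at all inside CodeBuild; the AWS provider picks up the build role's credentials automatically.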
What roles to specify in the specific CodeBuild role?
Your CodeBuild role needs permissions to create, list, delete, and update resources. For any given service Terraform manages, that is almost everything.
What about the manual process of auto-approve?
Usually, you run terraform plan in one CodeBuild environment to show what changes will be made, and after a manual approval, you execute terraform apply -auto-approve in another CodeBuild environment. Check the tutorial above; it shows how to set this up.
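On the remaining question about state: the usual pattern is Terraform's s3 backend, configured at init time. A hedged sketch, with placeholder names, assuming an empty backend "s3" {} block already exists in the configuration:

# Bucket, key, region and lock-table names are placeholders.
terraform init \
  -backend-config="bucket=my-tf-state-bucket" \
  -backend-config="key=infra/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="dynamodb_table=my-tf-locks"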

Clone CodeCommit from CodeBuild

Can you create a CodeBuild project which first clones from one CodeCommit repo in the region, and then pushes the contents to a repo in another region?
I want to do it without using HTTPS Git credentials. I have a CodeBuild project which uses a role with CodeCommitPowerUser access, but the clone command still doesn't work.
It seems the region is used to compute the credentials: https://github.com/aws/aws-cli/blob/develop/awscli/customizations/codecommit.py#L147
Credentials generated for one region may not be used for a repository in other regions.
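That suggests the fix: let the credential helper derive credentials per remote URL (so each region gets its own signature) instead of reusing static HTTPS Git credentials. A hedged sketch, with repo names and regions as placeholders; the CodeBuild role still needs CodeCommit permissions in both regions:

# The helper signs each request against the region in the remote URL.
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Mirror-clone from one region and push to the other.
git clone --mirror https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo my-repo.git
cd my-repo.git
git push --mirror https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/my-repo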

Trying to have a backup of CodeCommit repos in an S3 bucket of another AWS account

I am currently working on a task which should take a backup of all AWS CodeCommit repositories (around 60 repositories at the moment) and place them in an S3 bucket located in another AWS account.
I have googled to find out the possibilities around this but found nothing that suits my requirements.
1.) Considered using CodePipeline:
We can configure AWS CodePipeline to use a branch in an AWS CodeCommit repository as the source stage for our code. That way, when you make changes to your selected branch in CodePipeline, an archive of the repository at the tip of that branch will be delivered to your CodePipeline bucket.
But I had to reject this option, as it applies only to a particular branch of a repository, whereas I want a backup of all 60 repositories at once.
2.) Considered doing it using a simple git command which clones the repositories, places the cloned content into a folder, and sends it to the S3 bucket in the other account.
I had to reject this because it complicates my process whenever a new git repository is created: I would need to manually go to the AWS account and get the URL of that repo in order to clone it.
So, I want to know if there is a good option to automatically back up CodeCommit repositories to an S3 bucket in a different AWS account. If something in any of the repos changes, it should automatically be picked up and moved to S3.
Here is how I would solve it.
Steps:
1. Create a CodeCommit trigger under the AWS CodeCommit repository.
2. Listen on EC2 with Jenkins, a Node/Express app, or any HTTP app.
3. Get all the latest commits from the repo.
4. aws s3 sync . s3://bucketname
This is the fastest backup I can think of.
For automatically picking up newly created repositories, you can use list-repositories:
http://docs.aws.amazon.com/cli/latest/reference/codecommit/list-repositories.html
If a repo does not already exist locally, clone it; otherwise, update the existing one, as sketched below.
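A hedged sketch of that loop (the region and bucket name are placeholders, and it assumes the CodeCommit credential helper is configured for git):

# Clone any repo we do not have yet, otherwise fetch updates, then sync.
for repo in $(aws codecommit list-repositories \
                --query 'repositories[].repositoryName' --output text); do
  if [ -d "$repo" ]; then
    git -C "$repo" fetch --all
  else
    git clone "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/$repo" "$repo"
  fi
done
aws s3 sync . s3://bucketname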
You can also export each repo to a single file (e.g. with git bundle) and back up that file with versioning enabled on S3, though this will increase the backup time every time it runs.
I fully appreciate this thread is old, but I've just been tinkering with an EventBridge trigger for a commit on any repo. This event pattern works for me; the target can then be set:
{
  "source": ["aws.codecommit"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["codecommit.amazonaws.com"],
    "eventName": ["GitPush"],
    "requestParameters": {
      "references": {
        "commit": [{
          "exists": true
        }]
      }
    }
  }
}
From there, iterate over all the repos to clone them, then (in my case) git bundle each one and move the bundle to the archive location...
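For completeness, a hedged sketch of that bundle-and-archive step for a single repo (repo name, region, and bucket are placeholders):

# Mirror-clone, bundle every ref into one file, and ship it to S3.
git clone --mirror https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo my-repo.git
git -C my-repo.git bundle create ../my-repo.bundle --all
aws s3 cp my-repo.bundle s3://archive-bucket/codecommit/my-repo.bundle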