How to rename AWS CodePipeline object

I use AWS CodePipeline linked with an S3 bucket to deploy applications to some Elastic Beanstalk environments. It is a very simple pipeline with only two stages: one Source (S3 bucket -> war file) and one Deploy (EB reference).
Say I have an application called "app.war". When I deploy it manually to EB in AWS, an incremental number is appended to my application name (app-1.war, app-2.war, ...) based on how many times I have deployed an application with the same name.
I want to achieve this with CodePipeline. Is there something I can do? Some stage I have to configure? Variables?
I would like to rename the "code-pipeline-123abc..." name of my war file to something more specific to my application, as with a manual deploy.

Related

Can awscli be used in AWS Codebuild buildspec running on a custom image?

If a CodeBuild project runs on a custom image that has the awscli preinstalled, but not configured for that AWS account, would it still be possible to run aws * in that project's buildspec without updating its AWS credentials there first?
In other words, are these credentials made available by CodeBuild (e.g. via automatically picked-up environment variables), or, if I am using a custom image, is it up to me to take care of that explicitly, with aws * only expected to work out of the box in a buildspec on CodeBuild-managed images?
(I mean configuration/credentials for the account and role the CodeBuild project in question operates in.)
When you attach an IAM service role to your AWS CodeBuild project, you don't need to configure the AWS CLI. The IAM service role is part of the environment configuration, and this role is assumed whenever you try to access resources in AWS. The same applies to a custom image for AWS CodeBuild.
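As a minimal buildspec sketch (the bucket name is a hypothetical placeholder): CodeBuild exposes the service role's temporary credentials inside the build container, and the AWS CLI picks them up through its normal credential provider chain, so no aws configure step is needed even on a custom image.
version: 0.2
phases:
  build:
    commands:
      # No `aws configure` needed: the CLI resolves the CodeBuild
      # service role's temporary credentials automatically.
      - aws sts get-caller-identity
      - aws s3 ls s3://my-example-bucket  # hypothetical bucket name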

aws cdk push image to ecr

I am trying to do something that seems fairly logical and straightforward.
I am using the AWS CDK to provision an ECR repo:
repository = ecr.Repository(
    self,
    id="Repo",
    repository_name=ecr_repo_name,
    removal_policy=core.RemovalPolicy.DESTROY,
)
I then have a Dockerfile, which lives at the root of my project, that I am trying to push to the same ECR repo during the deployment.
I do this in the same service code with:
assets = DockerImageAsset(
    self,
    "S3_text_image",
    directory=str(Path(__file__).parent.parent),
    repository_name=ecr_repo_name,
)
The deployment goes ahead fine and the ECR repo is created, but the image is pushed to the default aws-cdk/assets location.
How do I make the deployment send my image to the exact ECR repo I want it to live in?
AWS CDK deprecated the repositoryName property on DockerImageAsset. There are a few issues on GitHub referencing the problem. See this comment from one of the developers:
At the moment the CDK comes with 2 asset systems:
The legacy one (currently still the default), where you get to specify a repositoryName per asset, and the CLI will create and push to whatever ECR repository you name.
The new one (will become the default in the future), where a single ECR repository will be created by doing cdk bootstrap and all images will be pushed into it. The CLI will not create the repository any more, it must already exist. IIRC this was done to limit the permissions required for deployments. #eladb, can you help me remember why we chose to do it this way?
There is a request for a new construct that will allow you to deploy to a custom ECR repository at (aws-ecr-assets) ecr-deployment #12597.
Use Case
I would like to use this feature to completely deploy my local image source code to ECR for me, using an ECR repo that I have previously created in my CDK app or, more importantly, outside the app using an ARN. The biggest problem is that the image cannot be completely abstracted into the assets repo because of auditing and semantic versioning.
There is also a third-party solution at https://github.com/wchaws/cdk-ecr-deployment if you do not want to wait for the CDK team to implement the new construct.
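As a sketch of that third-party approach (the construct and module names follow the cdk-ecr-deployment README and should be treated as assumptions), the asset image can be copied from the CDK-managed assets repo into the repository created earlier in the stack:
from pathlib import Path

from aws_cdk.aws_ecr_assets import DockerImageAsset
from cdk_ecr_deployment import DockerImageName, ECRDeployment

# Inside the stack: build the image as a regular CDK asset, without the
# deprecated repository_name (it first lands in the cdk-managed assets repo)...
asset = DockerImageAsset(
    self,
    "S3_text_image",
    directory=str(Path(__file__).parent.parent),
)

# ...then copy it into the repository created earlier in the stack.
ECRDeployment(
    self,
    "DeployImage",
    src=DockerImageName(asset.image_uri),
    dest=DockerImageName(f"{repository.repository_uri}:latest"),
)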

terraform infrastructure runs locally, building and deploying it on aws codepipeline gives error

I have created my AWS infrastructure using Terraform. The infrastructure includes Elastic Beanstalk apps, an application load balancer, S3, DynamoDB, VPC subnets and VPC endpoints.
The infrastructure runs locally using the Terraform commands shown below:
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -auto-approve -var-file="terraform.tfvars"
The terraform.tfvars contains variables like region, instance type, access key, etc.
I want to automate the build and deploy process of this Terraform infrastructure using AWS CodePipeline.
How can I achieve this task? What steps should I follow? Where do I save the terraform.tfvars file? What roles do I specify in the CodeBuild role? What about the manual process of auto-approve?
MY APPROACH: The entire codecommit/github, codebuild, codedeploy (i.e. CodePipeline) process is carried out through the AWS console. I started with GitHub as the source, and it is working (the GitHub repo includes my Terraform code for building the AWS infrastructure). Then, for CodeBuild, I need to specify the env variables and the buildspec.yml file; this is the problem. Locally I had a terraform.tfvars to do the job, but here I need to do it in the buildspec.yml file.
QUESTIONS: I am unaware how to specify my terraform.tfvars credentials in the buildspec.yml file, and what env variables should I specify? I also know we need to specify roles in the CodeBuild project, but how do I specify them effectively? And how do I store the Terraform state in S3?
How can I achieve this task?
Use CodeCommit to store your Terraform code, CodeBuild to run terraform plan, terraform apply, etc., and CodePipeline to connect CodeCommit with CodeBuild.
What steps to follow?
There are many tutorials on the internet. Check this as an example:
https://medium.com/faun/terraform-deployments-with-aws-codepipeline-342074248843
Where to save the terraform.tfvars file?
Ideally, you should create one terraform.tfvars for the development environment, like terraform.tfvars.dev, and another one for the production environment, like terraform.tfvars.prod. In your CodeBuild environment, choose the file using environment variables (see the buildspec sketch at the end of this answer).
What roles to specify in the specific CodeBuild role?
Your CodeBuild role needs permissions to create, list, update and delete resources. Basically, within a given service, that's almost everything.
What about the manual process of auto-approve?
Usually, you run terraform plan in one CodeBuild environment to show what the changes to your environment are, and after a manual approval you execute terraform apply -auto-approve in another CodeBuild environment. Check the tutorial above; it shows how to create this.
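A minimal buildspec.yml sketch for the plan stage, assuming a backend "s3" {} block is declared in the Terraform code, a TF_ENV environment variable is set on the CodeBuild project, and the bucket/key names are hypothetical placeholders:
version: 0.2
phases:
  install:
    commands:
      # Install the Terraform binary (version is an example).
      - wget -q https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_linux_amd64.zip
      - unzip terraform_1.0.11_linux_amd64.zip -d /usr/local/bin/
  build:
    commands:
      # State lives in S3 instead of the build container.
      - terraform init -backend-config="bucket=my-tf-state-bucket" -backend-config="key=app/terraform.tfstate" -backend-config="region=us-east-1"
      # Pick the tfvars file for the target environment (TF_ENV=dev or prod).
      - terraform plan -var-file="terraform.tfvars.${TF_ENV}" -out=tfplan
artifacts:
  files:
    - tfplan
Since the CodeBuild service role supplies AWS credentials, the access key entries from the local terraform.tfvars can be dropped in the pipeline.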

AWS, serverless SAM template - publishing nested applications

I have defined AWS Serverless nested applications within my root SAM template by using the Location property pointing to my local file system (as advised here - Defining a Nested Application from the Local File System).
Packaging and deploying work perfectly fine - the applications run on AWS just fine - everything works, except publish.
I cannot find a way to publish my root application to the Serverless Application Repository so that it would also (somehow) contain all the nested applications (inside?).
sam publish \
--template packaged.yaml \
--region us-east-1
returns
Error: Invalid Serverless Application Specification document. Number of errors found: XX. Errors: Resource with id [YYYYYYYYYYY] is invalid. Location property must be an Application Location Object referencing a valid AWS Serverless Application Repository application.
All my Location properties after packaging look something like:
https://s3.eu-east-1.amazonaws.com/my-storage/34ct54v6547b56756n7.template
Does it mean I still need to package and/or publish all of the nested applications first to have their CodeUri properties defined as AWS S3 URLs, and then somehow change their Location references in the root packaged template?
Did anyone try that, maybe?
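Judging by the error text, each nested application would need to already exist in the Serverless Application Repository, so that its Location becomes an Application Location Object instead of an S3 URL. A sketch of what that reference looks like in the root template (the ApplicationId ARN and version are placeholders):
MyNestedApp:
  Type: AWS::Serverless::Application
  Properties:
    Location:
      # Placeholder ARN and version for an app already published to the SAR.
      ApplicationId: arn:aws:serverlessrepo:us-east-1:123456789012:applications/my-nested-app
      SemanticVersion: 1.0.0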

aws codepipeline update lambda function source using s3 object

I am using Terraform to create all the infra (CodePipeline, Lambda, buckets) on AWS.
Currently, I've created a pipeline that builds the source zip file and puts it on an S3 bucket, but the Lambda still keeps using the older source. So I update the URL manually in the AWS console and it works.
Now I want to automate the flow, but the available solutions are:
AWS SAM + CFT
Codebuild Stage to update the source using AWS CLI
Create a lambda that updates the source
Code Deploy + AWS SAM + CFT
I am not willing to use CFT at all, since all of our code is in Terraform and CFT requires me to create new Lambdas instead of using the old ones.
Is there any other, simpler way to update the Lambda source through CodePipeline?
The preferred way to deploy a Lambda via CodePipeline is using a CloudFormation deploy action [1]. Since you are not looking to use CloudFormation, the next option is to run your terraform plan/apply commands from within a CodeBuild job that is part of the pipeline. You will need to give the CodeBuild role the permissions required for resource creation (or export the credentials in environment variables for TF to use via the method in [2]) and install the TF binary within the install phase of the buildspec.
Ref:
[1] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
[2] How to retrieve Secret Manager data in buildspec.yaml
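If you prefer the question's plain AWS CLI option instead, a minimal buildspec sketch for a CodeBuild stage that repoints the existing function at the newly uploaded object (the function, bucket and key names are hypothetical placeholders):
version: 0.2
phases:
  build:
    commands:
      # Point the existing function at the zip the pipeline just uploaded.
      # Function name, bucket and key are placeholders.
      - aws lambda update-function-code --function-name my-function --s3-bucket my-artifact-bucket --s3-key source.zip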