I am not quite clear on the best practices for using CDK to deploy private GitHub repos to AWS. I understand that CDK should create a pipeline and that the pipeline should invoke CodeDeploy to deploy the assets, but beyond that the details are murky.
I also want to understand whether, for this use case, it would make sense to have a separate CDK repo responsible for the infrastructure of my project's entire backend, or whether it would make more sense to include CDK code in each individual component repo. As I will be using a microservice/cell-based approach for building out components, the overhead of adding CDK configuration to each component might be substantial.
You can think of CDK as a compiler that takes a given language and 'compiles' it down to CloudFormation templates. Those templates are then uploaded to CloudFormation by the CDK framework and run for you when you execute the deploy command.
So. To answer your question more directly: if you can figure out how to do it in CloudFormation, you can do it. That may involve spinning up a CodeBuild project and running a script that executes some API calls, or prepares a package for an EC2 instance that then uses that package in the next step, or any number of things.
But remember that CDK synths its CloudFormation template all at once, and it is only creating the template. It does not run any scripts that may be part of your CodeBuild projects, and it does not 'wait' for certain things to complete - because it isn't doing anything like that. If you have a sequence of events that needs to occur, you want CodePipeline to orchestrate those events for you - but you can certainly set up your CodePipeline with CDK!
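For illustration, here is a minimal sketch (TypeScript, using CDK Pipelines) of a pipeline that watches a private GitHub repo through a CodeStar Connection - the repo name, branch, connection ARN and build commands are placeholders, not something taken from your project:

import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, 'Pipeline', {
      synth: new ShellStep('Synth', {
        // Private GitHub repo, accessed via a CodeStar Connection you create once in the console
        input: CodePipelineSource.connection('my-org/my-service', 'main', {
          connectionArn: 'arn:aws:codestar-connections:eu-west-1:111111111111:connection/xxxx',
        }),
        commands: ['npm ci', 'npm run build', 'npx cdk synth'],
      }),
    });
    // Application stages (dev, prod, ...) would then be added with pipeline.addStage(...)
  }
}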
As for overhead: maybe at first. But trust me when I say it becomes very quick and easy to generate a CDK stack for a given microservice once you have some experience, and it's super handy. Being able to spin up an ad-hoc, on-demand testing environment is super useful. Being able to deploy individual stacks on demand and make quick changes with just a line of code is handy as all get out. Having a single source of code for both your prod and development environments, so that a change made once is automatically reflected in each environment on its next deployment, is super handy.
CDK is a very powerful tool - but it is a very low-level one. It creates the template that will create your resources. That's it. If your resources need to do something after being created for something else to happen, you have to make use of other services to orchestrate that (CodePipeline, Step Functions, CloudWatch Events, etc.)
Related
I have inherited a small AWS project, and the infra is built in CDK. I am relatively new to CDK.
I have a Bitbucket pipeline that deploys to our preprod environment fine. Since it feels reliable, I am now productionising it.
I explained in a prior question that the project has no context for the production VPCs and subnets. I was advised there that I can get AWS to generate the context file; I have not had much luck with that, so for now I have hand-generated it.
For safety I have made the deployment command a no-execute one:
cdk deploy --stage=$STAGE --region=eu-west-1 --no-execute --require-approval never
In production I get this error with the prod creds:
current credentials could not be used to assume 'arn:aws:iam::$CDK_DEFAULT_ACCOUNT:role/cdk-xxxxxxxx-lookup-role-$CDK_DEFAULT_ACCOUNT-eu-west-1', but are for the right account. Proceeding anyway.
Bundling asset VoucherSupportStack/VoucherImporterFunction/Code/Stage...
I then get:
❌ VoucherSupportStack failed: Error: VoucherSupportStack: SSM parameter /cdk-bootstrap/xxxxxxxx/version not found. Has the environment been bootstrapped? Please run 'cdk bootstrap' (see https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html)
I am minded to run cdk bootstrap in a production pipeline, on a once-off basis, as I think this is all it needs. We have very little CDK knowledge amongst my team, so I am a bit stuck on obtaining the appropriate reassurances - is this safe to run on a production AWS account?
As I understand it, it will just create a harmless "stack" that does nothing (unless we start using cdk deploy ...).
Yes, you need to bootstrap every environment (account/region) that you deploy to, including your production environment(s).
It is definitely safe to do - it's what CDK expects.
You can scope the execution role down if you need (the default policy is AdministratorAccess).
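For example (account ID and policy ARN below are placeholders), the execution policy can be set at bootstrap time with the --cloudformation-execution-policies flag:

cdk bootstrap aws://111111111111/eu-west-1 --cloudformation-execution-policies arn:aws:iam::111111111111:policy/MyScopedDeployPolicy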
Ideally, though, your pipeline shouldn't be performing lookups during synth - the recommended approach is to run cdk synth once with your production credentials, which will perform the lookups and populate the cdk.context.json file. You then commit this file to VCS, and your pipeline will use these cached values instead of performing the lookups every time.
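For context, a lookup in CDK code looks roughly like the sketch below (the VPC name and account are made up); fromLookup only resolves when the stack has a concrete env, and its result is what ends up cached in cdk.context.json:

import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

export class VoucherSupportStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    // env must be concrete (account + region) for fromLookup to resolve
    super(scope, id, { ...props, env: { account: '111111111111', region: 'eu-west-1' } });

    // Performed at synth time; the answer is written to cdk.context.json and reused on later synths
    const vpc = ec2.Vpc.fromLookup(this, 'ProdVpc', { vpcName: 'prod-vpc' });
  }
}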
Generally yes, but here is some extension to gshpychka's answer:
You don't have to bootstrap your production environment if you are deploying your application with AWS Service Catalog. The setup in our project looks like the following:
Resources account - for pipelines, secrets, ...
Development account - bootstrapped, the dev pipeline deploys directly to this account
Integration Account and Production Account - not bootstrapped, we are provisioning the releases and the release candidates through the AWS Service Catalog.
Service Catalog provides the cool functionality to provision and also update applications in a friendly way. There are stable CDK L2 constructs for building your product stacks.
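As a rough, unofficial sketch of what that looks like (all names are invented): the application is defined as a ProductStack, published as a CloudFormationProduct version, and attached to a Portfolio that is then shared with the target accounts:

import { Stack } from 'aws-cdk-lib';
import * as servicecatalog from 'aws-cdk-lib/aws-servicecatalog';
import { Construct } from 'constructs';

// The application itself, defined like any other stack
class MyServiceProduct extends servicecatalog.ProductStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // ...define the application's resources here...
  }
}

export class ReleaseStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const product = new servicecatalog.CloudFormationProduct(this, 'MyServiceReleases', {
      productName: 'my-service',
      owner: 'platform-team',
      productVersions: [{
        productVersionName: '1.0.0',
        cloudFormationTemplate: servicecatalog.CloudFormationTemplate.fromProductStack(
          new MyServiceProduct(this, 'MyServiceV1')),
      }],
    });

    const portfolio = new servicecatalog.Portfolio(this, 'Portfolio', {
      displayName: 'Backend releases',
      providerName: 'platform-team',
    });
    portfolio.addProduct(product);
  }
}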
Of course, this approach has its advantages and disadvantages. I would recommend using it if you want to have full control over when you want to deploy or update your application. It is also worth using this approach if you are developing an application that will be installed on a client account.
We're (mostly happily ;)) using the AWS CDK to deploy our application stack to multiple environments (e.g. production, centralized dev, individual dev).
Now we want to increase the security by applying the least privilege principle to the deployment role. As the CDK code already has all the information about which services it will touch, is there a best practice as to how to generate the role definition?
Obviously it can't be a part of the stack as it is needed to deploy the stack.
Is there any mechanism built into the CDK for this (e.g. the CloudFrontDistribution construct is used, so the deployment role needs permission to create, update and delete CloudFront distributions - ideally, once the distribution is mapped, scoped to only do that to that one distribution)?
Any best practices on how to achieve that?
No. Sadly there isn't currently (2022-Q3) a way to have the CDK code also produce an IAM policy that would grant you access to run that template and nothing more.
However, everything is there to do it, and thanks to aspects it could probably be done relatively easily if you wanted to put in the legwork. I know many people in the community would love to have this.
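If you wanted to attempt it, an aspect can walk the construct tree and collect every CfnResource type, which gets you most of the way to a first policy draft. A rough sketch, not an official feature:

import { App, Aspects, CfnResource, IAspect } from 'aws-cdk-lib';
import { IConstruct } from 'constructs';

// Collects the CloudFormation resource types used in the app so you can map them to IAM actions afterwards
class ResourceTypeCollector implements IAspect {
  public readonly types = new Set<string>();
  visit(node: IConstruct): void {
    if (node instanceof CfnResource) {
      this.types.add(node.cfnResourceType); // e.g. 'AWS::CloudFront::Distribution'
    }
  }
}

const app = new App();
// ...instantiate your stacks here...
const collector = new ResourceTypeCollector();
Aspects.of(app).add(collector);
app.synth(); // aspects run during synthesis
console.log([...collector.types]); // feed this list into your policy-generation step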
You run into a chicken-and-egg problem here. (We encounter a similar issue with Secrets Manager and initializing secrets.) Pretty much the only solution I've found that works is a first-time setup script that uses an SDK or the CLI to run the necessary commands for that first-time setup. You can then reference those resources from there on.
However, it also depends on what roles you're talking about. cdk deploy pretty much needs access to any given resource you may be setting up - but you can limit it through users. Your root-admin setup script, kept in a secret lockbox, can set up a single power user that can then be used for initial cdk deploys. You can set up additional user groups that have the ability to deploy CDK, or have that initial setup create a role that cdk deploy can assume.
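Purely as an illustration of that first-time setup script (the role name, account ID and attached policy are placeholders, not a recommendation), something along these lines with the AWS SDK would do it:

import { IAMClient, CreateRoleCommand, AttachRolePolicyCommand } from '@aws-sdk/client-iam';

// One-time bootstrap: create a deployment role that cdk deploy can assume later.
// Run once with your locked-away admin credentials, then never again.
async function createDeployRole() {
  const iam = new IAMClient({ region: 'eu-west-1' });

  await iam.send(new CreateRoleCommand({
    RoleName: 'cdk-deployer',
    AssumeRolePolicyDocument: JSON.stringify({
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Principal: { AWS: 'arn:aws:iam::111111111111:root' }, // who may assume it
        Action: 'sts:AssumeRole',
      }],
    }),
  }));

  // Broad for the first deploys; tighten once you know what the stacks actually need
  await iam.send(new AttachRolePolicyCommand({
    RoleName: 'cdk-deployer',
    PolicyArn: 'arn:aws:iam::aws:policy/PowerUserAccess',
  }));
}

createDeployRole().catch(console.error);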
I am new to Terraform and building a CI setup. When I want to create a CodePipeline that is connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually inside the AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes - you use the aws_codepipeline resource, which will create a new pipeline in AWS when you run terraform apply.
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources into Terraform with terraform import.
I see you submitted this eight months ago, so I am pretty sure you have your answer, but for anyone searching who comes across this question, here are my thoughts on it.
As most of you have researched, Terraform is infrastructure as code (IaC). As IaC, it needs to be executed somewhere. This means you either execute it locally or inside a pipeline. A pipeline typically consists of Docker containers that emulate a local environment and run commands for you to deploy your code. There is more to it than that, but the premise of understanding how Terraform runs remains the same.
So, to the magic question: Terraform is code, and if you intend to use a pipeline - Jenkins, AWS, GitLab, and more - then you need a code repository to put all your code into. In this case, a code repository where you can store your Terraform code so a pipeline can consume it when deploying. There are other reasons why you should use a code repository, but your question is directed at Terraform and its usage with the pipeline.
Now the magnificent argument - the chicken or the egg - when to create your pipeline and how to do it. To your original question: you could do both. You could store all your Terraform code in a repository (which I recommend), clone it down, and run Terraform locally to create your pipeline. This would be ideal for saving time and leveraging automation. Newbies: you will have to research Terraform state files, which are something you need to back up in some form or shape once the pipeline is deployed for you.
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can easily configure it to hook your pipeline into GitHub to run jobs.
In both scenarios, you must set up Terraform and AWS locally on your machine or within the pipeline to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS on your local machine. For you newbies using a pipeline, you can leverage some of the pipeline links to get you started. Remember one thing: within AWS CodePipeline, you have to use IAM roles and not access keys. That will make more sense once you have gone through the first link. Please also go to YouTube and search for Terraform for beginners in AWS; various videos can provide a lot more substance to help you get started.
I have created infrastructure with Terraform and now I need to set up CI... I'm thinking of using Terraform for that too. Is it possible to extract a certain part of the tf code and place it in the pipeline so that it updates the ECS tasks while ignoring the rest of the infrastructure?
As ydaetskcoR suggested in a comment, if you really want to run parts of your Terraform configuration independently of the rest, you're better off splitting it up.
What I'd suggest is several Terraform projects, grouped the way you'd organize responsibility and releases (e.g. each project might have its own, the VPC might be on its own, some shared infrastructure in its own), and then use Terraform remote state to connect them all.
Is it good practice for CloudFormation deployment to be done via CI/CD? I am currently considering the safety and performance aspects.
If someone accidentally removed a DB, for example, CloudFormation will just remove it... There could be code reviews to prevent this... but I'm just wondering if it's a good practice.
With a serverless application there may be no choice? Otherwise it's too manual to deploy everything.
Another observation is performance: CloudFormation is rarely changed, but it will need to run anyway if it's part of the CI/CD process. Is there any way to speed this up?
Definitely.
You cannot achieve CI/CD in the true sense until you do that.
Consider a scenario where, for a particular release, you added a messaging queue (AWS SQS). If you haven't integrated your CloudFormation with your CI/CD, then the code that reads from / writes to SQS goes into your environment but will fail to do either operation, for the simple fact that the queue does not exist - the CloudFormation change that would have created the SQS did not execute. So eventually you end up with a half-baked environment.
To avoid this pitfall, it is highly recommended that you execute your CloudFormation as part of your CI/CD.
Regarding your concern 'If someone accidentally removed a DB for example, CloudFormation will just remove it': this can happen even with application code. For example, a developer puts in some test code to clean up the database, forgets to remove it, and that code gets executed in the production environment. Ideally this would not happen because of the guard rails of manual testing, automated testing and JUnits. In the same spirit, treat CloudFormation as any other code (in fact CloudFormation is best described as Infrastructure as Code) that should be tested thoroughly. For details on unit testing CloudFormation, see Is there a way to unit test AWS Cloudformation template.
Yes, absolutely. If you treat Infrastructure as Software (IaS), then you should be able to implement modern CI/CD software practices like syntax checking, unit testing, functional testing, verification, automated testing and deployment, etc. on your CloudFormation templates as well.
AWS provides a best practices solution here:
https://aws.amazon.com/answers/devops/aws-cloudformation-validation-pipeline/
The solution provides this introduction:
"Many Amazon Web Services (AWS) customers use AWS CloudFormation to manage their infrastructure as code and to help deploy AWS resources in a controlled and predictable way. DevOps teams are commonly tasked with validating AWS CloudFormation templates before launch to ensure they follow industry best practices and satisfy company-specific business and governance requirements. These teams often leverage AWS Developer Tools, which is a set of services designed to help DevOps professionals follow continuous integration and continuous delivery (CI/CD) practices and create their own pipelines to automatically build, validate, and deploy code."