We're (mostly happily ;)) using the AWS CDK to deploy our application stack to multiple environments (e.g. production, centralized dev, individual dev).
Now we want to increase the security by applying the least privilege principle to the deployment role. As the CDK code already has all the information about which services it will touch, is there a best practice as to how to generate the role definition?
Obviously it can't be a part of the stack as it is needed to deploy the stack.
Is there any mechanism built into the CDK for this? (E.g. the CloudFrontDistribution construct is used, so the deployment role needs permission to create, update and delete CloudFront distributions - possibly even scoped so that it can only do that to that one distribution.)
Are there any best practices for how to achieve that?
No. Sadly there isn't currently (2022-Q3) a way to have the CDK code also produce an IAM policy that would grant you access to deploy that template and nothing more.
However, everything is there to do it, and thanks to aspects it could probably be done relatively easily if you wanted to put in the leg work. I know many people in the community would love to have this.
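For example, a minimal sketch of what such an Aspect could look like, just to show the idea - it only collects the CloudFormation resource types present in a stack; mapping those types (and specific resources) to least-privilege IAM actions would be the actual legwork:

```typescript
import { CfnResource, IAspect } from 'aws-cdk-lib';
import { IConstruct } from 'constructs';

// Collects every CloudFormation resource type used in the construct tree it
// visits, e.g. 'AWS::CloudFront::Distribution'. Turning these types into a
// least-privilege deployment policy is still up to you.
class ResourceTypeCollector implements IAspect {
  public readonly types = new Set<string>();

  public visit(node: IConstruct): void {
    if (CfnResource.isCfnResource(node)) {
      this.types.add(node.cfnResourceType);
    }
  }
}

// Hypothetical usage (Aspects is also exported from 'aws-cdk-lib'):
// const collector = new ResourceTypeCollector();
// Aspects.of(stack).add(collector);
// app.synth(); // aspects run during synthesis
// console.log([...collector.types].sort());
```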
You run into a chicken-and-egg problem here (we encounter a similar issue with Secrets Manager and initializing secrets). Pretty much the only solution I've found that works is a first-time setup script that uses an SDK or the CLI to run the necessary commands for that first-time setup. From then on you can reference what it created.
However, it also depends on which roles you're talking about. cdk deploy pretty much needs access to any given resource you may be setting up - but you can limit it through users. Your root-admin setup script (kept in a secret lockbox) can set up a single power user, which can then be used for initial cdk deploys. You can set up additional user groups that have the ability to run cdk deploy, or have that initial setup create a role that cdk deploy can assume.
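For illustration, a hedged sketch of that kind of one-time setup script using the AWS SDK for JavaScript - the role name, region and the broad PowerUserAccess policy are placeholders you would tighten later:

```typescript
import { IAMClient, CreateRoleCommand, AttachRolePolicyCommand } from '@aws-sdk/client-iam';

// One-time setup, run with the locked-away admin credentials: creates a role
// that later cdk deploys can assume, so day-to-day deploys never need admin.
const iam = new IAMClient({ region: 'eu-west-1' });

async function createDeployRole(accountId: string): Promise<void> {
  await iam.send(new CreateRoleCommand({
    RoleName: 'cdk-deploy-role', // hypothetical name
    AssumeRolePolicyDocument: JSON.stringify({
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Principal: { AWS: `arn:aws:iam::${accountId}:root` },
        Action: 'sts:AssumeRole',
      }],
    }),
  }));

  // Start broad; tighten once you know which services your stacks touch.
  await iam.send(new AttachRolePolicyCommand({
    RoleName: 'cdk-deploy-role',
    PolicyArn: 'arn:aws:iam::aws:policy/PowerUserAccess',
  }));
}

// createDeployRole('123456789012').catch(console.error);
```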
Related
I have inherited a small AWS project, and the infra is built in CDK. I am relatively new to CDK.
I have a Bitbucket pipeline that deploys to our preprod environment fine. Since it feels reliable, I am now productionising it.
I explained in a prior question that there is no context in the project for the production VPCs and subnets. I was advised there that I can get AWS to generate the context file; I have not had much luck with that, so for now I have hand-generated it.
For safety I have made the deployment command a no-execute one:
cdk deploy --stage=$STAGE --region=eu-west-1 --no-execute --require-approval never
In production I get this error with the prod creds:
current credentials could not be used to assume 'arn:aws:iam::$CDK_DEFAULT_ACCOUNT:role/cdk-xxxxxxxx-lookup-role-$CDK_DEFAULT_ACCOUNT-eu-west-1', but are for the right account. Proceeding anyway.
Bundling asset VoucherSupportStack/VoucherImporterFunction/Code/Stage...
I then get:
❌ VoucherSupportStack failed: Error: VoucherSupportStack: SSM parameter /cdk-bootstrap/xxxxxxxx/version not found. Has the environment been bootstrapped? Please run 'cdk bootstrap' (see https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html)
I am minded to run cdk bootstrap in the production pipeline on a one-off basis, as I think this is all it needs. We have very little CDK knowledge in my team, so I am a bit stuck on obtaining the appropriate reassurances - is this safe to run on a production AWS account?
As I understand it, it will just create a harmless "stack" that does nothing (unless we start using cdk deploy ...).
Yes, you need to bootstrap every environment (account/region) that you deploy to, including your production environment(s).
It is definitely safe to do - it's what CDK expects.
You can scope the execution role down if you need to (the default policy is AdministratorAccess), e.g. by passing --cloudformation-execution-policies to cdk bootstrap.
Note that your pipeline ideally shouldn't be performing lookups during synth at all - the recommended way is to run cdk synth once with your production credentials, which will perform the lookups and populate the cdk.context.json file. You would then commit this file to VCS, and your pipeline will use these cached values instead of performing the lookups every time.
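For reference, it is lookup constructs like ec2.Vpc.fromLookup that trigger these context lookups and write the results into cdk.context.json; a minimal sketch (stack name and VPC id are illustrative):

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

// Illustrative stack: the fromLookup call below is what performs the context
// lookup. Running `cdk synth` once with production credentials resolves it and
// caches the result in cdk.context.json, which you then commit to VCS.
export class LookupExampleStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    // Lookups need a concrete environment, e.g.
    // { env: { account: '123456789012', region: 'eu-west-1' } } in props.
    super(scope, id, props);

    const vpc = ec2.Vpc.fromLookup(this, 'ProdVpc', {
      vpcId: 'vpc-0123456789abcdef0', // placeholder VPC id
    });
    // ...use `vpc` for your resources...
  }
}
```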
Generally yes, but here is some extension to gshpychka's answer:
You don't have to bootstrap your production environment if you are deploying your application with AWS Service Catalog. The setup in our project looks like the following:
Resources account - for pipelines, secrets, ...
Development account - bootstrapped, the dev pipeline deploys directly to this account
Integration Account and Production Account - not bootstrapped, we are provisioning the releases and the release candidates through the AWS Service Catalog.
Service Catalog provides nice functionality to provision and also update applications in a user-friendly way. There are stable CDK L2 constructs for building your product stacks.
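To make that concrete, a rough sketch of those L2 constructs - the product name, owner and stack contents are purely illustrative:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as servicecatalog from 'aws-cdk-lib/aws-servicecatalog';
import { Construct } from 'constructs';

// The application is wrapped in a ProductStack and published as a product
// version; target accounts then provision it through Service Catalog without
// having to be CDK-bootstrapped themselves.
class MyAppProduct extends servicecatalog.ProductStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // ...define the application's resources here...
  }
}

export class CatalogStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new servicecatalog.CloudFormationProduct(this, 'MyAppCfnProduct', {
      productName: 'my-app',   // illustrative
      owner: 'platform-team',  // illustrative
      productVersions: [{
        productVersionName: 'v1',
        cloudFormationTemplate: servicecatalog.CloudFormationTemplate.fromProductStack(
          new MyAppProduct(this, 'MyAppProduct'),
        ),
      }],
    });
  }
}
```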
Of course, this approach has its advantages and disadvantages. I would recommend using it if you want to have full control over when you want to deploy or update your application. It is also worth using this approach if you are developing an application that will be installed on a client account.
I am not quite clear on the best practices related to using CDK to deploy private Github repos to AWS. I understand that a pipeline should be created by CDK and the pipeline should invoke CodeDeploy to deploy the assets, but beyond that the details are murky.
I also want to understand if for this use case it would make sense to have a separate CDK repo which is responsible for the infrastructure for the entire backend of my project, or if it would make more sense to have CDK code included in each individual component repo. As I will be utilizing a microservice/cell based approach for building out components, the overhead required in adding CDK configuration for each component might be substantial.
You can think of the CDK as a compiler that takes a given language and 'compiles' it down to CloudFormation templates. Those templates are then uploaded to CloudFormation by the CDK framework and run for you when you execute the deploy command.
So, to answer your question more directly: if you can figure out how to do it in CloudFormation, you can do it. That may involve spinning up a CodeBuild project and running a script that executes some API calls, or prepares a package for an EC2 server that then uses that package in the next step, or any number of things.
But remember that the CDK synths its CloudFormation template all at once, and it is only creating the template. It does not run any scripts that may be part of your CodeBuild projects, and it does not 'wait' for certain things to complete - because it isn't doing anything like that. If you have a sequence of events that need to occur, you want to use CodePipeline to orchestrate those events for you - and you can certainly set up your CodePipeline with the CDK!
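For example, a hedged sketch of orchestrating that with CDK Pipelines (repository, branch and connection ARN are placeholders):

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';
import { Construct } from 'constructs';

// Repository, branch and connection ARN are placeholders.
export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, 'Pipeline', {
      synth: new ShellStep('Synth', {
        input: CodePipelineSource.connection('my-org/my-service', 'main', {
          connectionArn: 'arn:aws:codestar-connections:eu-west-1:123456789012:connection/placeholder',
        }),
        commands: ['npm ci', 'npm run build', 'npx cdk synth'],
      }),
    });

    // Each deployable unit (a Stage wrapping one or more stacks) is added as
    // its own pipeline stage, e.g.:
    // pipeline.addStage(new MyServiceStage(this, 'Prod', {
    //   env: { account: '123456789012', region: 'eu-west-1' },
    // }));
  }
}
```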
As for overhead: maybe at first. But trust me when I say it becomes very quick and easy to generate a CDK stack for a given microservice with experience, and it's super handy. Being able to spin up an ad hoc, on-demand testing environment is super useful. Being able to deploy individual stacks on demand and make quick changes with just a line of code is handy as all get out. Having a single source of code for both your prod and development environments, where you make a change once and it is automatically reflected in each environment on its next deployment, is super handy.
The CDK is a very powerful tool - but it is a very low-level one. It creates the template that will create your resources. That's it. If your resources need to do something after being created in order for something else to happen, you have to use other services to orchestrate that (CodePipeline, Step Functions, CloudWatch Events, etc.).
I'm considering using AppConfig, but am struggling to understand how configurations would be used in a scenario where the Test and Staging deployments are in different accounts.
Having two completely different AppConfig setups in these two accounts seems counterproductive, since it would make it difficult to promote configurations between the different deployments.
I could alternatively have one AppConfig setup and call it from my application, but that would require cross-account access, using a different role I presume, since there is no access to AppConfig via an ARN or resource-based policies.
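For illustration, what I imagine that cross-account call would roughly look like (role ARN and AppConfig identifiers are placeholders, and the shared account would also need a role whose trust policy allows my application's account to assume it):

```typescript
import { STSClient, AssumeRoleCommand } from '@aws-sdk/client-sts';
import {
  AppConfigDataClient,
  StartConfigurationSessionCommand,
  GetLatestConfigurationCommand,
} from '@aws-sdk/client-appconfigdata';

// Assume a role in the account that hosts AppConfig, then read a configuration
// with the temporary credentials. All identifiers below are placeholders.
async function fetchSharedConfig(): Promise<string> {
  const sts = new STSClient({ region: 'eu-west-1' });
  const assumed = await sts.send(new AssumeRoleCommand({
    RoleArn: 'arn:aws:iam::111111111111:role/appconfig-reader', // placeholder
    RoleSessionName: 'appconfig-cross-account',
  }));

  const appconfig = new AppConfigDataClient({
    region: 'eu-west-1',
    credentials: {
      accessKeyId: assumed.Credentials!.AccessKeyId!,
      secretAccessKey: assumed.Credentials!.SecretAccessKey!,
      sessionToken: assumed.Credentials!.SessionToken,
    },
  });

  const session = await appconfig.send(new StartConfigurationSessionCommand({
    ApplicationIdentifier: 'my-app',          // placeholder
    EnvironmentIdentifier: 'staging',         // placeholder
    ConfigurationProfileIdentifier: 'flags',  // placeholder
  }));

  const result = await appconfig.send(new GetLatestConfigurationCommand({
    ConfigurationToken: session.InitialConfigurationToken,
  }));

  return new TextDecoder().decode(result.Configuration);
}
```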
So how would I access AppConfig across multiple accounts?
Stack Sets
Some services do have native multi-account support through the console, but if that fails you can always use StackSets. If you can manage to package your AppConfig setup nicely into a CloudFormation template, you can deploy a stack set to an organizational unit (OU), which will deploy a stack to every account in that OU.
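As a rough sketch, the AppConfig setup could be packaged via CDK's L1 constructs (all names and the flag content are illustrative); the synthesized template is what you would then roll out with StackSets:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as appconfig from 'aws-cdk-lib/aws-appconfig';
import { Construct } from 'constructs';

// All names and the flag content are illustrative.
export class AppConfigStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const app = new appconfig.CfnApplication(this, 'App', { name: 'my-app' });

    new appconfig.CfnEnvironment(this, 'Env', {
      applicationId: app.ref,
      name: 'test', // 'staging', 'prod', ... in the other accounts
    });

    const profile = new appconfig.CfnConfigurationProfile(this, 'Profile', {
      applicationId: app.ref,
      name: 'feature-flags',
      locationUri: 'hosted',
    });

    new appconfig.CfnHostedConfigurationVersion(this, 'ConfigVersion', {
      applicationId: app.ref,
      configurationProfileId: profile.ref,
      contentType: 'application/json',
      content: JSON.stringify({ newCheckout: false }),
    });
  }
}
```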
This may or may not fit your use case, depending on your requirements. The typical use case for StackSets is to enforce compliance and uniformity across accounts - that the VPC setup is consistent, that logging is enabled, and so on. It isn't necessarily meant for deploying an application into different accounts; not to say that this isn't a good idea, it just depends.
CI/CD - Preferred (IMO)
What I believe most people do is have a CI/CD account in AWS, or a separate CI/CD tool outside of AWS, with a pipeline (CodePipeline in AWS) that has each of these accounts as a separate stage. In your pipeline, you would have environment variables for each account if needed, and make the CLI/API calls to AWS that you are currently making manually. IMO this would be the most maintainable approach most of the time, for the following reasons:
You can easily have differences between the environments (conditions in CloudFormation are very hard to maintain, IMO).
If there is a problem in one stack it is not such an issue, whereas in a stack set one stack can affect the others.
You generally have more granularity and control than you would with only CloudFormation and StackSets, although with a bit of effort you can technically do everything with CloudFormation.
Service Catalog
Another alternative is to use AWS Service Catalog with auto-update of provisioned products; there is an example of this here. But again, this was for a slightly different use case: independent IT teams in an organization consuming IT products made available to the company.
AppConfig should be environment-specific, and CloudFormation could be one of the solutions to tackle the complexity of deployment.
I have created a stack in which we create a Lambda function, execute some code via the SDK, access S3, write to DynamoDB and do some other things. The problem now is that we are trying to deploy to a different account/region that we have never deployed to before, and we are running into a lot of permission issues. Some of them my team has already seen and properly documented, but in other cases other teams may have hit those errors and we do not have that context. We are fixing them one by one as they appear, but it is painful. My question is: is there a way to describe/analyze the policies that the role I assume needs in order to deploy that stack before provisioning, or to figure out which permissions my resources need - or is it basically a matter of going through all the permissions one by one?
I'd really like something like this to exist, but I do not foresee a reliable one being developed anytime soon. However, since I've been down that road myself, I would suggest something a bit more manageable.
An AWS CloudFormation service role allows you to pass CloudFormation a role with greater permissions than the ones given to a normal user. In a nutshell, you first create a role with fairly large or even administrative permissions. Then you allow normal users to perform the iam:PassRole action on that resource (the role). Lastly, when you deploy a CloudFormation stack, make sure you specify the role you created as the "service role" in the stack options.
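A hedged CDK sketch of that pattern - the group name is made up, and in practice you would narrow the service role's policy once you know what the stack actually needs:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

// A broad role that CloudFormation itself assumes, plus permission for a
// (made-up) deployers group to pass that role to CloudFormation.
export class CfnServiceRoleStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const serviceRole = new iam.Role(this, 'CfnServiceRole', {
      assumedBy: new iam.ServicePrincipal('cloudformation.amazonaws.com'),
      managedPolicies: [
        // Start broad, then narrow once you know what the stack needs.
        iam.ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess'),
      ],
    });

    const deployers = new iam.Group(this, 'Deployers', { groupName: 'cdk-deployers' });
    deployers.addToPolicy(new iam.PolicyStatement({
      actions: ['iam:PassRole'],
      resources: [serviceRole.roleArn],
    }));
  }
}
```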
From a security standpoint there are pros and cons both to using a service role and to giving a lot of different permissions to normal users. You have to assess for yourself whether it's a risk you can manage.
I am trying to programmatically recreate a bunch of AWS resources that were created/configured manually via AWS consoles.
The AWS consoles do a lot for you.
For example, you can create a Lambda function with an Api-Gateway trigger in about 10 seconds using the AWS console.
The console is doing a lot of magic under the covers, defining and configuring resources such as policies, stages, permissions, models, etc.
In theory, CloudTrail is supposed to allow me to see what exactly is happening under the covers, but it seems to be silent in this case (i.e. Lambda function with Api-Gateway trigger).
I can play hide-and-seek and do extensive dumps using the CLI to list stages, policies, export API definitions, etc., and look for the differences, but is there an easier way - like some way to trace the REST calls the console makes when it does all its magic?
Note: CloudFormer could have helped, but it is only half-finished software (hey, Amazon!) and only covers about a third of the resources I have defined. Does embracing CloudFormation imply not using these great time-saving consoles?
CloudFormation and other infrastructure-as-code services are there to lessen the clicks you make while using the AWS console (or any other cloud console) to manage your resources.
They come in handy when you have to spin up resources that will have almost the same configuration and software stack.
If you use CloudFormation you will be able to define the policies according to your needs, which OS image to use, which stack to install, and so on; it gives you minute control over your resources.
I suggest that if you have to deploy these resources multiple times, you create a CloudFormation template and use it.
So, rather than finding a way to recreate code from your current infrastructure, I would suggest creating a CloudFormation template and using it for future needs.
If you need something easier than your current flow, this is it, as you just have to write your required configuration once.
HashiCorp Terraform is also a good alternative to AWS CloudFormation. You can use Terraforming to export your current infrastructure into Terraform-readable files.