Can AppConfig be used for cross-account deployments?

I'm considering using AppConfig, but am struggling to understand how configurations would be used in a scenario where the Test and Staging deployments are in different accounts.
Having two completely different AppConfig setups in those two accounts seems counterproductive, since it would make it difficult to promote configurations between deployments.
I could alternatively have one AppConfig setup and call it from my application, but that would require cross-account access, using a different role I presume, since AppConfig cannot be accessed via an ARN or resource-based policies.
So how would I access AppConfig across multiple accounts?

Stack Sets
Some services do have native multi-account support through the console, but if that fails you can always use StackSets. If you can package your AppConfig setup into a CloudFormation template, you can deploy a set of stacks to an Organizational Unit, which will deploy to all accounts in that OU.
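If you go this route, the AppConfig resources package up fairly naturally. A minimal sketch of such a template (the AppConfig resource types are real; the names and values are placeholders):

```yaml
# Illustrative template: an AppConfig application, environment, and
# hosted configuration profile that StackSets could roll out to every
# account in an OU. Names here are placeholders.
Resources:
  MyApp:
    Type: AWS::AppConfig::Application
    Properties:
      Name: my-app
  MyEnv:
    Type: AWS::AppConfig::Environment
    Properties:
      ApplicationId: !Ref MyApp
      Name: default
  MyProfile:
    Type: AWS::AppConfig::ConfigurationProfile
    Properties:
      ApplicationId: !Ref MyApp
      Name: feature-flags
      LocationUri: hosted
```

Each account in the OU then gets its own copy of these resources, so the application in each account reads AppConfig locally without cross-account calls.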
This may or may not fit your use case, depending on your requirements. The typical use case for StackSets is to enforce compliance and uniformity across accounts: that the VPC setup is consistent, that logging is enabled, and so on. It isn't necessarily meant for deploying an application into different accounts (not to say that this is a bad idea; it just depends).
CI/CD - Preferred (IMO)
What I believe most people do is have a CI/CD account in AWS, or a separate CI/CD tool outside of AWS, with a pipeline (CodePipeline in AWS) that has each of these accounts as a separate stage. In your pipeline, you would have environment variables for each account if needed, and make the CLI/API calls to AWS that you are currently doing manually. IMO this is the most maintainable approach most of the time, for the following reasons:
You can easily have differences between the environments (conditions in CloudFormation are very hard to maintain, IMO).
A problem in one environment's stack is not such an issue, whereas in a stack set one failing stack may affect the others.
You generally have more granularity and control than you would with only CloudFormation and StackSets, although with a bit of effort you can technically do everything with CloudFormation.
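For the pipeline route, a deploy stage per account boils down to CloudFormation actions that assume a role in each target account. A rough sketch of the relevant CodePipeline fragment (account IDs, role names, and artifact names are placeholders):

```yaml
# Illustrative CodePipeline fragment: one deploy stage per target account.
# The action-level RoleArn is assumed by the pipeline to act in the target
# account; the Configuration RoleArn is the CloudFormation service role there.
Stages:
  - Name: DeployTest
    Actions:
      - Name: Deploy
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: CloudFormation
          Version: "1"
        InputArtifacts:
          - Name: BuildOutput
        Configuration:
          ActionMode: CREATE_UPDATE
          StackName: my-app
          TemplatePath: BuildOutput::template.yml
          RoleArn: arn:aws:iam::111111111111:role/cfn-deploy
        RoleArn: arn:aws:iam::111111111111:role/pipeline-cross-account
  # A DeployStaging / DeployProd stage looks the same, with the staging or
  # production account IDs and roles substituted in.
```

Because each stage uses its own cross-account role, differences between environments live in the pipeline definition rather than in CloudFormation conditions.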
Service Catalog
Another alternative is to use the AWS Service Catalog with auto-update of provisioned products; there is an example of this here. But again, this was for a slightly different use case: independent IT teams in an organization consuming IT products available to the company.

AppConfig should be environment-specific, and CloudFormation could be one solution to tackle the complexity of deployment.

Related

Separating Dev Prod Environments In AWS

In my scenario, I want to separate the production environment from our development environments.
We'd like to only have our production systems on one AWS account and all other systems and services on another.
I'd like to split/separate for billing purposes. If I do add more monitoring services many charge by the number of running instances. I have considerably more running instances than I need to monitor though so I'd like the separation. This also would make managing permissions in the future a lot easier I believe (e.g. security hub scores wouldn't be affected by LMS instances).
I'd like to split out all public facing assets to a separate AWS account. So RDS, all EC2 instances relating to prod-webserver (instances, target group, AMI, scaling, VPC, etc.), S3 cloudfront.abc.com bucket, jenkins, OpenVPN, all Seoul assets.
Perhaps I could achieve the goal with Organizations or Control Tower as well. Could anyone please advise what would be best in my scenario? Is there a better alternative?
The fact that you want to split for billing purposes means you should use separate AWS accounts. While you could split some billing by tags within a single account, it's much easier to use multiple accounts to split the billing.
The typical split is Production / Testing / Development.
You can join the accounts together by using AWS Organizations, which gives some overall security controls.
Separating workloads and environments is considered a best practice in AWS according to the AWS Well-Architected Framework. Nowadays Control Tower (which builds upon AWS Organizations) is the standard for building multi-account setups in AWS.
Regarding multi-account setups, I recommend reading the Organizing Your AWS Environment Using Multiple Accounts whitepaper.
Also have a look at the open-source AWS Quickstart superwerker which sets up a well-architected AWS landing zone using AWS Control Tower, Security Hub, GuardDuty, and more.
AWS provides a lot of information about this topic, e.g. a very detailed whitepaper about Organizing Your AWS Environment, in which they say:
"Using multiple AWS accounts to help isolate and manage your business applications and data can help you optimize across most of the AWS Well-Architected Framework pillars, including operational excellence, security, reliability, and cost optimization."
With separate accounts, you logically separate all resources (unless you explicitly allow cross-account access) and therefore ensure independence between, e.g., the development environment and the production environment.
You should also take a look at Organizational Units (OUs)
The following benefits of using OUs helped shape the Recommended OUs and accounts and Patterns for organizing your AWS accounts.
Group similar accounts based on function
Apply common policies
Share common resources
Provision and manage common resources
Control Tower is a tool that allows you to manage all your AWS accounts in one place. You can apply policies to every account or OU, or prohibit regions. You can use the Account Factory to create new accounts based on blueprints.
But you still need to collect a lot of knowledge about these tools and best practices, because that's all they are: best practices and recommendations you can use to get started and build a good foundation. They're nothing you can fully rely on, because you may have individual factors, so understanding those factors and their consequences is very important.

CDK deployment and least privilege principle

We're (mostly happily ;)) using the AWS CDK to deploy our application stack to multiple environments (e.g. production, centralized dev, individual dev).
Now we want to increase the security by applying the least privilege principle to the deployment role. As the CDK code already has all the information about which services it will touch, is there a best practice as to how to generate the role definition?
Obviously it can't be a part of the stack as it is needed to deploy the stack.
Is there any mechanism built into the CDK? (E.g. the construct CloudFrontDistribution is used, thus the deployment role needs permission to create, update, and delete CloudFront distributions; possibly even, after the distribution exists, scoped down to only that one distribution.)
Any best practices as how to achieve that?
No. Sadly there isn't currently (2022-Q3) a way to have the CDK code also produce an IAM policy that would grant you access to deploy that template and nothing more.
However, everything is there to do it, and thanks to aspects it could probably be done relatively easily if you wanted to put in the leg work. I know many people in the community would love to have this.
You run into a chicken-and-egg problem here. (We encounter a similar issue with Secrets Manager and initializing secrets.) Pretty much the only solution I've found that works is a first-time setup script that uses an SDK or the CLI to run the necessary commands; beyond that point, you can reference what it created.
However, it also depends on what roles you're talking about. cdk deploy pretty much needs access to any resource you may be setting up, but you can limit it through users. Your root-admin setup script (kept in a secret lockbox) can set up a single power user that can then be used for initial cdk deploys. You can set up additional user groups that have the ability to deploy with the CDK, or have that initial setup create a role that cdk deploy can assume.
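As a rough illustration of the "put in the leg work" idea, you can walk a synthesized template from cdk.out and map each resource type to the deploy-time actions it implies. The mapping below is a tiny hypothetical sample, not a complete or authoritative action list, and real resource ARNs would need scoping:

```python
# Sketch: derive a starter deploy-role policy from a synthesized
# CloudFormation template by mapping resource types to CRUD actions.
# SERVICE_ACTIONS is illustrative only; a real version would need to be
# fleshed out per project (or generated from the CloudFormation registry).

SERVICE_ACTIONS = {
    "AWS::CloudFront::Distribution": [
        "cloudfront:CreateDistribution",
        "cloudfront:UpdateDistribution",
        "cloudfront:DeleteDistribution",
    ],
    "AWS::S3::Bucket": [
        "s3:CreateBucket",
        "s3:PutBucketPolicy",
        "s3:DeleteBucket",
    ],
}

def deployment_policy(template: dict) -> dict:
    """Build a policy document from the resource types in a template."""
    actions = sorted({
        action
        for resource in template.get("Resources", {}).values()
        for action in SERVICE_ACTIONS.get(resource.get("Type"), [])
    })
    return {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": actions, "Resource": "*"}],
    }

# Example: a template with one distribution and one bucket.
template = {
    "Resources": {
        "Dist": {"Type": "AWS::CloudFront::Distribution"},
        "Assets": {"Type": "AWS::S3::Bucket"},
    }
}
print(deployment_policy(template)["Statement"][0]["Action"])
```

A CDK aspect could collect the same information during synthesis instead of parsing cdk.out afterwards; either way, the hard part is maintaining an accurate type-to-action mapping.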

What is the recommended way to separate environments within AWS API Gateway?

What approaches can be taken in order to separate environments when using AWS API gateway?
For example, I realize I could simply create a unique account per environment. However, I also would like to leverage the developer portal and don't want to duplicate my efforts any more than I have to.
Based on my little experience with AWS, I'd imagine there are two approaches:
Create unique instance within a unique account
Create a single account and use stage variables
Let's assume for example that I have 3 environments:
DEV
STAGE
PROD
(with the option of having a PREPROD environment perhaps)
Perhaps creating a unique account per env?
I want to know the recommended best practice for separating environments for ENTERPRISE applications. Any insight is appreciated.
Within a single API Gateway API, you can create different versions of your API configuration using Stages. Stages can be dev, beta, prod, etc., each holding a snapshot of a different configuration of your API during development.
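As a sketch (resource names and variable values are placeholders), two stages of the same API can carry different stage variables, e.g. to point each stage at a different Lambda alias or backend:

```yaml
# Illustrative CloudFormation fragment: two stages of one API, each with
# its own stage variables. Integrations can reference the variable as
# ${stageVariables.lambdaAlias} to route dev and prod traffic differently.
Resources:
  DevStage:
    Type: AWS::ApiGateway::Stage
    Properties:
      RestApiId: !Ref MyApi
      StageName: dev
      DeploymentId: !Ref DevDeployment
      Variables:
        lambdaAlias: dev
  ProdStage:
    Type: AWS::ApiGateway::Stage
    Properties:
      RestApiId: !Ref MyApi
      StageName: prod
      DeploymentId: !Ref ProdDeployment
      Variables:
        lambdaAlias: prod
```

This keeps one API definition (and one developer portal) while still isolating the configuration each environment sees.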
You may also want to look at Canary Deployments.

Mixing Terraform and Serverless Framework

It's more of an open question and I'm just hoping for any opinions and suggestions. I have AWS in mind but it probably can relate also to other cloud providers.
I'd like to provision IaaC solution that will be easily maintainable and cover all the requirements of modern serverless architecture. Terraform is a great tool for defining the infrastructure, has many official resources and stable support from the community. I really like its syntax and the whole concept of modules. However, it's quite bad for working with Lambdas. It also raises another question: should code change be deployed using the same flow as infrastructure change? Where to draw the line between code and infrastructure?
On the other hand, the Serverless Framework allows for super easy development and deployment of Lambdas. It's strongly opinionated when it comes to the usage of resources, but it comes with so many out-of-the-box features that it's worth it. It shouldn't really be used for defining the whole infrastructure.
My current approach is to define any shared resources using Terraform and any domain-related resources using Serverless. Here I have another issue that is related to my previous questions: deployment dependency. The simple scenario: Lambda.1 adds users to Cognito (shared resource) which has Lambda.2 as a trigger. I have to create a custom solution for managing the deployment order (Lambda.2 has to be deployed first, etc.). It's possible to hook up the Serverless Framework deployment into Terraform but then again: should the code deployment be mixed with infrastructure deployment?
It is totally possible to mix the two and I have had to do so a few times. How this looks actually ends up being simpler than it seems.
First off, if you think about whatever you do with the Serverless Framework as developing microservices (without the associated infrastructure management burden), that takes it one step in the right direction. Then, you can decide that everything required to make a microservice work internally is defined within that microservice as part of the service's configuration in the serverless.yml, whether that be DynamoDB tables, Auth0 integrations, Kinesis streams, SQS, SNS, IAM permissions allocated to functions, etc. Keep all of that defined as part of the microservice. Terraform not required.
Now think about what that and other microservices might need to interact with more broadly. These aren't critical for a service's internal operation but are critical for integration into the rest of the organisation's infrastructure. This includes things like the deployment IAM roles used by the Serverless Framework services to deploy into CloudFormation, relational databases that have to be shared amongst multiple services, networking elements (VPCs, security groups, etc.), monolithic clusters like ElasticSearch and Redis ... all of these elements are great candidates for definition outside of the Serverless Framework, and they work really well with Terraform.
Any service would then be able to connect to these Terraform-defined resources as needed, unlike a hard association such as a Lambda function triggered off an API Gateway endpoint.
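One wiring pattern that fits this split (the parameter names here are assumptions, not a framework convention): have Terraform publish the identifiers of the shared resources to SSM Parameter Store, and let each Serverless service resolve them at deploy time with the framework's ${ssm:...} variable syntax:

```yaml
# Illustrative serverless.yml fragment: the service reads identifiers of
# Terraform-managed shared resources (VPC pieces, a Cognito user pool)
# from SSM Parameter Store instead of defining them itself.
# All parameter names below are placeholders.
service: users

provider:
  name: aws
  runtime: nodejs18.x
  vpc:
    securityGroupIds:
      - ${ssm:/shared/lambda-sg-id}
    subnetIds:
      - ${ssm:/shared/private-subnet-a}
      - ${ssm:/shared/private-subnet-b}

functions:
  addUser:
    handler: handler.addUser
    environment:
      USER_POOL_ID: ${ssm:/shared/cognito-user-pool-id}
```

This keeps the deployment-order dependency explicit and one-directional: Terraform runs first and publishes parameters; the Serverless services only read them.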
Hope that helps

What's the drawback of using the same AWS account for different environments with different VPCs?

What are the drawbacks of deploying 3 environments (DEV, QA, and Production) under the same AWS account, in different VPCs?
To me it makes sense, if the same team will need to manage 3 different environments.
I've heard people say that one should use separate accounts for development and production, but does that mean using completely different environments, each with its own console login link?
Please advise. Thanks!!
You can make both ideas work (single account with multiple environments, or multiple accounts with one environment per account) and both have advantages and disadvantages.
If you run multiple environments in the same account:
your AWS account limits are more easily reached
a runaway dev script could impact production's ability to scale up
loss of credentials endangers all of your environments
developers could accidentally damage production
I think it's also simpler to separate production costs from other costs if you use multiple accounts and consolidated billing.
Setting up cross-account access is simple, if you need it.
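Cross-account access boils down to a role in one account whose trust policy names the other account. A sketch in CloudFormation terms (the account ID and role names are placeholders):

```yaml
# Illustrative: a role in the production account that principals in a
# separate dev/CI account (111111111111, placeholder) may assume.
# Attach permission policies to this role to control what they can do.
CrossAccountRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: cross-account-deploy
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::111111111111:root
          Action: sts:AssumeRole
```

Callers in the trusted account then use sts:AssumeRole (and an allow-assume policy on their own identity) to obtain temporary credentials in the production account.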
Generally, it is recommended to separate the production environment from the rest. For this, you can create a separate AWS account for production deployment. The main reason is isolating the production account from the rest, both for security and for more managed control over it.
The problem with having one AWS account for multiple stages (Dev, QA, and Production) is that it is difficult to completely isolate the environments using IAM permissions alone. Even if it's the same team, separating the production account from the rest allows them to build confidence in using the other accounts (Dev and QA) without any hesitation. It also reduces production issues caused by mistakes (especially when the application uses many AWS services).
To centralize the billing and reduce the management complexities of multiple AWS accounts, you can use AWS organizations.
Rather than completely unrelated AWS accounts, consider AWS Organizations.
Please read here: https://aws.amazon.com/organizations/ .
Yes, each member account will have its own console login link.
If you use different VPCs in the same account for separate dev/qa/prod environments, you need to deal with different names for S3 buckets and DynamoDB tables, as these services don't support VPC segregation.
[Bonus]: AWS Organizations itself is free; you only pay for the resources used in each account. :)