AWS CDK multiple Apps

Would it be possible to have two CDK Apps in the same project, something like this:

```python
from aws_cdk import core

from stack1 import Stack1
from stack2 import Stack2

app1 = core.App()
Stack1(app1, "CDK1")
app1.synth()

app2 = core.App()
Stack2(app2, "CDK2")
app2.synth()
```

And deploy them? Synchronously/asynchronously?
Would it be possible to reference some resources from one app in the other one?

Yes, you can have multiple applications in a CDK project, but there are some serious caveats:

- A CDK process can only synth/deploy one app at a time.
- They cannot be defined in the same file.
- They cannot directly reference each other's resources.
To put this in perspective, each app is functionally isolated from the others; it is roughly equivalent to having two separate CDK projects that happen to share the same codebase, so the use cases for this are limited.
The only way for them to share resources is either to extract them into an additional common app that must be deployed first, or to store the ARN of the shared resource somewhere (e.g., Parameter Store) and load it at run time. You cannot assume the resource exists, as one of the apps may not have been deployed yet, and if you import the resource into your stack directly, you've defeated the whole point of splitting them apart.
That is to say, this is OK:

```python
# stack1's Lambda
import boto3
from ssm_parameter_store import SSMParameterStore

store = SSMParameterStore(prefix='/Prod')
sns = boto3.client('sns')

def handler(event, context):
    sns_arn = store['stack2.sns']
    if not sns_arn:
        # Doesn't matter
        return
    try:
        sns.publish(TopicArn=sns_arn, Message='something')
    except Exception:
        # Doesn't matter
        pass
```
But if it's critical to stack1 that a resource from stack2 exists, or you want to import a stack2 resource into stack1, then you either need a third split containing all the common resources (e.g., common-resources.app.py), or there's no point splitting them.
We do this a lot in our projects, with one app creating a CodePipeline that automatically deploys the other app. However, we only do this because we prefer the pipeline lives next to the code it is deploying and it would be equally valid to extract it into an entirely new project.
If you want to do this, you need to define each app in its own file:
app1.py:

```python
from aws_cdk import core
from stack1 import Stack1

app1 = core.App()
Stack1(app1, "CDK1")
app1.synth()
```

app2.py:

```python
from aws_cdk import core
from stack2 import Stack2

app2 = core.App()
Stack2(app2, "CDK2")
app2.synth()
```
You then deploy this by running in parallel or sequentially:
```sh
cdk deploy --app "python app1.py"
cdk deploy --app "python app2.py"
```

Having re-read your question, the short answer is no. In testing this, I found that CDK would only create the second app defined.
You can, however, deploy multiple-stack applications:
https://docs.aws.amazon.com/cdk/latest/guide/stack_how_to_create_multiple_stacks.html
It's also possible to reference resources from one stack in another, by using core.CfnOutput and core.Fn.importValue:
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.core/CfnOutput.html
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.core/Fn.html
Under the hood, this uses CloudFormation's ability to export outputs and import them in other stacks. Effectively, your multiple-stack CDK app will create multiple CloudFormation stacks linked by those exports and imports.
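For illustration, here's a minimal sketch of that pattern in Python (CDK v1); the topic, stack, and export names are all placeholders:

```python
from aws_cdk import core
from aws_cdk import aws_sns as sns

class ProducerStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        topic = sns.Topic(self, "Topic")
        # Export the topic ARN so other stacks can import it.
        core.CfnOutput(self, "TopicArnOutput",
                       value=topic.topic_arn,
                       export_name="SharedTopicArn")

class ConsumerStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # Resolved by CloudFormation at deploy time from the export above;
        # pass topic_arn to whatever constructs need it.
        topic_arn = core.Fn.import_value("SharedTopicArn")

app = core.App()
ProducerStack(app, "Producer")
ConsumerStack(app, "Consumer")
app.synth()
```

Note that exports create a hard dependency: CloudFormation will not let you delete or change an export while another stack still imports it.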
In terms of deployments, CDK creates a CloudFormation change set and deploys it, so all changes will be deployed on cdk deploy. From your perspective, it'll be synchronous, but there may be some asynchronous API calls happening under the hood through CloudFormation.

Based on your comments (mainly that you're seeking to do parallel deployments from your local machine), you don't need multiple apps.
Just open a new terminal shell and start deploying again, using stack names to define which stack you are currently deploying:
**Shell 1**

```sh
cdk deploy StackName1
```

> Open a new terminal window

**Shell 2**

```sh
cdk deploy OtherStackName
```
Both deployments will run simultaneously. They have no interaction with each other, so if they depend on each other's resources being deployed in a certain order, this is simply a recipe for disaster.
But if all you are looking for is speed of deployment, then yeah, this will do the trick just fine.
If this is a common action, however, you'd be best advised to set up a CodePipeline with one stage containing two CloudFormation deploy actions that deploy your stacks from the synthed templates (or two CodeBuild projects that do the same thing by running cdk deploy).
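A rough sketch of that setup in Python (CDK v1) follows. Everything here is an assumption for illustration: the CodeCommit repo name, the stack names, and the premise that the synthed templates are present in the source artifact:

```python
from aws_cdk import core
from aws_cdk import aws_codecommit as codecommit
from aws_cdk import aws_codepipeline as codepipeline
from aws_cdk import aws_codepipeline_actions as cpactions

class DeployPipelineStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # Hypothetical repo containing the synthed templates.
        repo = codecommit.Repository.from_repository_name(self, "Repo", "my-repo")
        source_output = codepipeline.Artifact()

        pipeline = codepipeline.Pipeline(self, "Pipeline")
        pipeline.add_stage(stage_name="Source", actions=[
            cpactions.CodeCommitSourceAction(
                action_name="Source", repository=repo, output=source_output),
        ])
        # Two deploy actions in the same stage run in parallel.
        pipeline.add_stage(stage_name="Deploy", actions=[
            cpactions.CloudFormationCreateUpdateStackAction(
                action_name="DeployStack1",
                stack_name="StackName1",
                template_path=source_output.at_path("StackName1.template.json"),
                admin_permissions=True),
            cpactions.CloudFormationCreateUpdateStackAction(
                action_name="DeployStack2",
                stack_name="OtherStackName",
                template_path=source_output.at_path("OtherStackName.template.json"),
                admin_permissions=True),
        ])
```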

Yes, you can do pretty much exactly what you gave as an example in your question: have two apps and synthesize them into two separate folders. You do that by overriding the outdir prop for each app; otherwise they would overwrite each other's compiled files. See the more complete example at the end.
A few caveats though!
As of the time of this writing, this is most likely unsupported. The docs for the outdir property say:

> You should never need to set this value.
> This property is intended for internal and testing use.

So take it or leave it at your own risk :)
Calling cdk synth on this project will indeed create the two folders with the right files, but the command fails with ENOENT: no such file or directory, open 'cdk.out/manifest.json'. The default cdk.out folder is created too, it's just empty. So I guess the CDK team doesn't account for anyone using this approach. I don't know the CDK internals well enough to be 100% sure, but from a brief glance at the compiled templates the output looks OK and should probably work.
You are limited in what you can share between the apps. Note that when you instantiate a stack, the first argument is the app it belongs to, so for the second app you need a separate stack instantiation.
You can deploy each app separately with the --app flag, e.g. cdk deploy --app cdk.out.dev
Full example here:
```ts
#!/usr/bin/env node
import "source-map-support/register";
import * as cdk from "aws-cdk-lib";
import { EventInfrastructureStack } from "../lib/stacks/event-infrastructure-stack";

const devApp = new cdk.App({
  outdir: "cdk.out.dev",
});
new EventInfrastructureStack(devApp, "EventInfrastructureStack", {
  env: {
    account: "account1",
    region: "eu-west-1",
  },
});

const prodApp = new cdk.App({
  outdir: "cdk.out.prod",
});
new EventInfrastructureStack(prodApp, "EventInfrastructureStack", {
  env: {
    account: "account2",
    region: "eu-west-1",
  },
});

devApp.synth();
prodApp.synth();
```
Now, you didn't tell us what you were trying to achieve. My goal when first looking into this was to have a separate app for each environment. CDK offers the Stage construct for this purpose, docs here.
> An abstract application modeling unit consisting of Stacks that should be deployed together.
>
> You can then instantiate (stage) multiple times to model multiple copies of your application which should be deployed to different environments.
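In Python, a minimal sketch of that looks something like this, reusing the EventInfrastructureStack from the example above (the module path is hypothetical, and this assumes a CDK v2 project):

```python
import aws_cdk as cdk
from constructs import Construct

# Hypothetical Python port of the stack from the example above.
from stacks.event_infrastructure_stack import EventInfrastructureStack

class MyAppStage(cdk.Stage):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # Every stack added here is deployed together as one unit.
        EventInfrastructureStack(self, "EventInfrastructureStack")

app = cdk.App()
# One copy of the application per environment, in a single app.
MyAppStage(app, "Dev", env=cdk.Environment(account="account1", region="eu-west-1"))
MyAppStage(app, "Prod", env=cdk.Environment(account="account2", region="eu-west-1"))
app.synth()
```

Each stage can then be deployed on its own, e.g. cdk deploy "Dev/*".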
Maybe that's what you were really looking for?

Related

How to provide custom backend code in AWS so that a Lambda function can import it

Imagine a basic Lambda function like this:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""This is an example Lambda Function which imports MyFancyClass from the
backend.
"""
from typing import Any, Dict

from project_name.module import MyFancyClass


def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
    """Standard Lambda event handler."""
    del context
    class_instance = MyFancyClass('some initial value')
    result = class_instance.special_method(event)
    return result
```
I could not find an easy explanation of how to set things up in AWS so that the Lambda function can import Python modules from a separate backend code repo.
I read about Lambda Layers, but deploying all dependencies as a ZIP file every time I make a change in the backend does not seem like the smooth process I envision.
I'd much rather believe there must be a way to set up a CI/CD pipeline on AWS where the code repo is managed in AWS CodeCommit. With that, you push changes to the remote backend repo, it gets updated, and finally, when the Lambda function is executed, it accesses the up-to-date backend code.
But maybe zipping the backend code and deploying it as a Lambda layer really is the only way. And if so, I would like to know how this can be combined with a CI/CD pipeline + repo in the most convenient way.
You have to package the dependencies either into the Lambda deployment zip file, or into a Lambda Layer zip file, and deploy that to AWS Lambda. An AWS Lambda function cannot pull dependencies dynamically at run time, it has to have those dependencies included as part of the deployment, or as a deployed Layer that it depends on.
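For example, a minimal packaging step might look like the following Python sketch; the layer name and file layout are assumptions for illustration, not something from the question:

```python
# build_layer.py - package third-party dependencies into the folder
# layout (python/...) that AWS Lambda expects inside a Layer zip.
import shutil
import subprocess

subprocess.run(
    ["pip", "install", "-r", "requirements.txt", "--target", "layer/python"],
    check=True,
)
shutil.make_archive("layer", "zip", root_dir="layer")
# The resulting layer.zip can then be published, e.g.:
#   aws lambda publish-layer-version --layer-name my-backend \
#       --zip-file fileb://layer.zip
```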
You need to configure your CI/CD platform to build and deploy a new version of your Lambda deployment (or Lambda layer deployment) whenever you push changes to the source code repository.
> But maybe, zipping the backend code and deploying it as a Lambda layer is really the only way.

Yes, that is the only way.

> And if so, I would like to know how this can be combined with a CI/CD-pipeline + repo in the most convenient way.

That last question is extremely broad. In general, you would have a CI/CD pipeline that is triggered by updates to your source code repository, builds a new version of your Lambda function or Lambda layer, and then deploys that update to AWS Lambda. If you need specific help with a specific step in that process, ask it as a separate question.

How to achieve multiple GCS backends in Terraform

Within our team, we each have our own dev project, and then we have a test and a prod environment.
We are currently in the process of migrating from Deployment Manager and the gcloud CLI to Terraform. However, we haven't been able to figure out a way to create isolated backends within the GCS backend. We have noticed that the remote backend supports setting a dedicated workspace, but we haven't been able to set up something similar within GCS.
Is it possible to state that Terraform resource A will have a configurable backend that we can adjust per project, or is the equivalent possible with workspaces?
So that we can use either tfvars and vars parameters to switch between projects?
As it stands, every time we attempt to make the backend configurable through vars, terraform init fails with:
Error: Variables not allowed
How does one go about creating isolated backends for each project?
Or, if that isn't possible, how can we guarantee that with multiple projects a shared backend state will not collide, causing the state to be incorrect?
Your backend, meaning your backend bucket, must be known when you run your terraform init command.
If you don't want to use workspaces, you have to customize the backend value before running the init. We use make to achieve this: depending on the environment, make creates a backend.tf file with the correct backend name and then runs the init command.
EDIT 1
We have this piece of shell script, which creates the backend file before triggering the terraform command (it's our Makefile that does this):
```sh
cat > $TF_export_dir/backend.tf << EOF
terraform {
  backend "gcs" {
    bucket = "$TF_subsidiary-$TF_environment-$TF_deployed_application_code-gcs-tfstatebackend"
    prefix = "terraform/state"
  }
}
EOF
```
Of course, the bucket name pattern depends on our project. The $TF_environment variable is the most important part: according to the env var that is set, a different bucket is reached.

How to deploy to different environments (dev, uat, prod) using CDK pipelines?

When I commit to the develop branch, it must deploy the code to a specific environment (dev). Similarly, when I push to the uat branch, it must deploy to the uat environment. How do I achieve this functionality in an AWS CDK pipeline?
There is the Stage construct, which can deploy to multiple regions, but I need to define something like: if pushed to this branch, then deploy to this environment, and likewise for the other branches.
The best approach depends on a few factors including whether your stack is environment agnostic or not (i.e. whether it needs to look up resources from within a given account.)
For simply switching between different accounts and regions, the CDK team has a decent writeup here which recommends a small wrapper script for each environment that injects the configuration by way of CDK_DEPLOY_ACCOUNT and CDK_DEPLOY_REGION environment variables.
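On the app side, that pattern looks roughly like the following sketch, based on that writeup and reusing the Stack1 example from earlier (assuming Stack1 forwards **kwargs to core.Stack):

```python
import os

from aws_cdk import core
from stack1 import Stack1

app = core.App()
Stack1(app, "CDK1",
    env=core.Environment(
        # Fall back to the CLI's default account/region when the
        # CDK_DEPLOY_* variables are not set by a wrapper script.
        account=os.environ.get("CDK_DEPLOY_ACCOUNT", os.environ.get("CDK_DEFAULT_ACCOUNT")),
        region=os.environ.get("CDK_DEPLOY_REGION", os.environ.get("CDK_DEFAULT_REGION")),
    ))
app.synth()
```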
If you want to provide other synth-time context, you can do so via the context API, which allows you to provide configuration 'in six different ways':

1. Automatically from the current AWS account.
2. Through the --context option to the cdk command.
3. In the project's cdk.context.json file.
4. In the project's cdk.json file.
5. In the context key of your ~/.cdk.json file.
6. In your AWS CDK app using the construct.node.setContext method.
My team uses the inline context args to define an environment name, and from the environment name, it reads a json config file that defines many environment-dependent parameters.
```sh
cdk deploy --context env=Dev
```
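In Python, that pattern might look like this sketch; the per-environment config path is hypothetical:

```python
import json

from aws_cdk import core

app = core.App()
# Read the environment name passed via `cdk deploy --context env=Dev`.
env_name = app.node.try_get_context("env") or "Dev"
# Hypothetical per-environment config file, e.g. config/Dev.json.
with open(f"config/{env_name}.json") as f:
    config = json.load(f)
```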
We let the environment name determine the branch name and set it accordingly on the Branch property of the GitHubSourceAction (C# code):
```csharp
string env = (string)this.Node.TryGetContext("env");

var pipeline = new CdkPipeline(this, "My-Pipeline", new CdkPipelineProps()
{
    SourceAction = new GitHubSourceAction(new GitHubSourceActionProps()
    {
        // Deploy the branch matching the environment name;
        // other required props omitted for brevity.
        Branch = env
    })
});
```

Can we use AWS-CDK within another application?

The documentation on AWS-CDK has examples of setting it up as a standalone application with support in multiple languages.
I have the following questions regarding the same:
Is it possible to use it within a separate app (written in .NET Core or Angular) like a library?
By above I mean being able to instantiate the construct classes within my app's services and create stacks in my AWS account.
If yes, how does it affect the deployment process? Will invoking the synth() function generate the CloudFormation templates as expected?
Apologies if my question is vague. I am just getting started with this and am willing to provide necessary details if needed.
I appreciate any help in this regard. Thank you.
I've tried using the CDK as a library, but had a few issues and ended up calling it from another app via a CLI call.
I was using TypeScript, and basically what I did was call the synth method on the App construct:
```ts
import * as cdk from '@aws-cdk/core';

const app = new cdk.App();
// ... do something
const cf = app.synth(); // here you get the cloud assembly
// you can inspect the results here, e.g. cf.stacks or cf.directory
```
One issue I found was getting errors during synth, as they were not properly bubbled up.
I couldn't find a way to deal with the assets either...
In summary, I didn't explore it much further than that, but I think the CDK might need more development before you can use all its features when importing it as a library.

Importing existing resources with multiple accounts

We have four AWS accounts used to define different environments: dev, sqe, stg, prd. We're only now adopting CloudFormation, and I'd like to import an existing resource into a stack. As we roll this out, each environment will get the new stack, and I'm wondering if there's an easier way to import the resource in each environment than to initially go through the console to import the resource while adding the stack (it would be nice if we could just deploy via our deployment system).
What I was hoping for was something I could specify in the stack definition itself (e.g., "here's a bucket that already exists, take ownership"), but I'm not finding anything. Currently it seems like the easiest route would be to create an empty stack in each environment which imports the resource and then just deploy as normal.
Also, what happens when/if an update fails and a stack gets stuck in ROLLBACK_COMPLETE? Do I have to go through this again after deleting the stack?
What you have described sounds exactly like you're after a Continuous Integration / Continuous Deployment (CI/CD) pipeline. Instead of trying to import existing resources into your accounts, you're better off designing the CloudFormation templates and then deploying them to each environment through CodePipeline. This will also provide a clean separation between the accounts, instead of importing stg resources into prd.
A fantastic example and quickstart is the serverless-cicd-for-enterprise which should serve as a good starting point for you.
You can't get stuck in ROLLBACK_COMPLETE, as that is the last action a failed change set executes. It means the stack tried to update, couldn't, and has reverted to the last successful deployment. If this is the first deployment (no successful deployments), you will need to delete the stack and try again. However, if you have had a successful deployment, you can run a stack update.