How to deploy to different environments (dev, uat, prod) using CDK Pipelines?

When I commit to the develop branch, the pipeline must deploy the code to the corresponding environment (dev). Similarly, when I push to the uat branch, it must deploy to the uat environment. How do I achieve this in an AWS CDK pipeline?
A stage can be deployed to multiple regions, but I need to define that a push to a given branch deploys to the matching environment.

The best approach depends on a few factors, including whether your stack is environment-agnostic or not (i.e., whether it needs to look up resources within a given account).
For simply switching between different accounts and regions, the CDK team has a decent writeup here which recommends a small wrapper script for each environment that injects the configuration by way of the CDK_DEPLOY_ACCOUNT and CDK_DEPLOY_REGION environment variables.
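For illustration, here is a minimal sketch of such an app entry point (Python; the stack and module names are made up) that honors the wrapper script's variables and falls back to the CLI defaults:
#!/usr/bin/env python3
import os

from aws_cdk import core

from my_stack import MyStack  # hypothetical stack module

app = core.App()
MyStack(app, "MyStack", env=core.Environment(
    # Prefer the wrapper script's CDK_DEPLOY_* variables; fall back to the
    # values the CDK CLI resolves from your credentials/config.
    account=os.environ.get("CDK_DEPLOY_ACCOUNT", os.environ.get("CDK_DEFAULT_ACCOUNT")),
    region=os.environ.get("CDK_DEPLOY_REGION", os.environ.get("CDK_DEFAULT_REGION")),
))
app.synth()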
If you want to provide other synth-time context, you can do so via the context API, which allows you to provide configuration 'in six different ways':
Automatically from the current AWS account.
Through the --context option to the cdk command.
In the project's cdk.context.json file.
In the project's cdk.json file.
In the context key of your ~/.cdk.json file.
In your AWS CDK app using the construct.node.setContext method.

My team uses an inline context argument to define an environment name, and from the environment name it reads a JSON config file that defines the environment-dependent parameters:
cdk deploy --context env=Dev
We let the environment name determine the branch name and set it accordingly on the Branch property of the GitHubSourceAction (C# code):
string env = (string)this.Node.TryGetContext("env");
var pipeline = new CdkPipeline(this, "My-Pipeline", new CdkPipelineProps()
{
    SourceAction = new GitHubSourceAction(new GitHubSourceActionProps()
    {
        Branch = env
        // Other required props (ActionName, Owner, Repo, OauthToken, Output) omitted for brevity
    })
});
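A rough Python equivalent of that pattern (hypothetical; the config/Dev.json layout is illustrative) would be:
import json

from aws_cdk import core

app = core.App()
env_name = app.node.try_get_context("env") or "Dev"
# Load the per-environment parameters, e.g. config/Dev.json
with open(f"config/{env_name}.json") as f:
    config = json.load(f)
# config now drives environment-dependent values, e.g. the source branch
branch = config.get("branch", env_name)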

Related

How to achieve multiple GCS backends in Terraform

Within our team, we all have our own dev project, and then we have a test and a prod environment.
We are currently migrating from Deployment Manager and the gcloud CLI to Terraform. However, we haven't been able to figure out a way to create isolated backends within the GCS backend. We have noticed that the remote backend supports setting a dedicated workspace, but we haven't been able to set up something similar with GCS.
Is it possible to state that Terraform resource A will have a configurable backend that we can adjust per project, or is the equivalent possible with workspaces? So that we can use tfvars and vars parameters to switch between projects?
As it stands, every time we attempt to make the backend configurable through vars, terraform init fails with
Error: Variables not allowed
How does one go about creating isolated backends for each project?
Or, if that isn't possible, how can we guarantee that with multiple projects a shared backend state will not collide, causing the state to be incorrect?
Your backend, meaning your backend bucket, must be known when you run your terraform init command.
If you don't want to use workspaces, you have to customize the backend value before running init. We use make to achieve this: depending on the environment, make creates a backend.tf file with the correct backend name and then runs the init command.
EDIT 1
We have this piece of script (sh) which creates the backend file before triggering the terraform command (it's our Makefile that does this):
cat > $TF_export_dir/backend.tf << EOF
terraform {
  backend "gcs" {
    bucket = "$TF_subsidiary-$TF_environment-$TF_deployed_application_code-gcs-tfstatebackend"
    prefix = "terraform/state"
  }
}
EOF
Of course, the bucket name pattern depends on our project. The $TF_environment variable is the most important one: depending on the environment that is set, a different bucket is reached.

AWS CDK multiple Apps

Would it be possible to have two CDK Apps in the same project, something like this:
from aws_cdk import core
from stack1 import Stack1
from stack2 import Stack2
app1 = core.App()
Stack1(app1, "CDK1")
app1.synth()
app2 = core.App()
Stack2(app2, "CDK2")
app2.synth()
And deploy them? Synchronously/Asynchronously?
Would it be possible to reference some resources from one app in the other one?
Yes you can have multiple applications in a CDK project, but there are some serious caveats.
A CDK process can only synth/deploy one app at a time.
They cannot be defined in the same file.
They cannot directly reference each other's resources.
To put this in perspective: each app is functionally isolated from the others, roughly equivalent to having two separate CDK projects that happen to share the same codebase, so the use cases for this are limited.
The only way for them to share resources is either to extract them into an additional common app that must be deployed first, or to store the ARN of the resource somewhere (e.g., Parameter Store) and load it at run time. You cannot assume that the resource will exist, as the other app may not have been deployed yet, and if you import the resource into your stack directly, you've defeated the whole point of splitting them apart.
That is to say, this is ok:
stack1.lambda:
import boto3
from ssm_parameter_store import SSMParameterStore

sns = boto3.client('sns')
store = SSMParameterStore(prefix='/Prod')

def handler(event, context):
    sns_arn = store['stack2.sns']
    if not sns_arn:
        # The other app may not be deployed yet; doesn't matter
        return
    try:
        sns.publish(TopicArn=sns_arn, Message='something')
    except Exception:
        # Doesn't matter
        pass
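For completeness, a sketch of the producing side (CDK v1 Python; the names are hypothetical): stack2 writes its topic ARN to Parameter Store so the other app can look it up at run time.
from aws_cdk import core
from aws_cdk import aws_sns as sns
from aws_cdk import aws_ssm as ssm

class Stack2(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        topic = sns.Topic(self, "Topic")
        # Publish the ARN under the same prefix/key the consumer reads
        ssm.StringParameter(self, "TopicArnParam",
                            parameter_name="/Prod/stack2.sns",
                            string_value=topic.topic_arn)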
But if it's critical to stack1 that a resource from stack2 exists, or you want to import a stack2 resource into stack1 directly, then you either need to do a third split of all the common resources (common-resources.app.py), or there's no point splitting them.
We do this a lot in our projects, with one app creating a CodePipeline that automatically deploys the other app. However, we only do this because we prefer the pipeline to live next to the code it is deploying; it would be equally valid to extract it into an entirely new project.
If you want to do this, you need to split the apps into separate files:
app1.py:
from aws_cdk import core
from stack1 import Stack1
app1 = core.App()
Stack1(app1, "CDK1")
app1.synth()
app2.py:
from aws_cdk import core
from stack2 import Stack2
app2 = core.App()
Stack2(app2, "CDK2")
app2.synth()
You then deploy this by running in parallel or sequentially:
cdk deploy --app "python app1.py"
cdk deploy --app "python app2.py"
Having re-read your question, the short answer is no. In testing this, I found that CDK would only create the second app defined.
You can, however, deploy multiple-stack applications:
https://docs.aws.amazon.com/cdk/latest/guide/stack_how_to_create_multiple_stacks.html
It's also possible to reference resources from one stack in another, by using core.CfnOutput and core.Fn.importValue:
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.core/CfnOutput.html
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.core/Fn.html
Under the hood, this uses CloudFormation's ability to export stack outputs and import them in other stacks. Effectively, your multiple-stack CDK app will create separate CloudFormation stacks linked by these exports (not nested stacks).
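A minimal sketch of that mechanism (CDK v1 Python; the export name and value are placeholders):
from aws_cdk import core

class ProducerStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Export a value; CloudFormation publishes it as a stack export
        core.CfnOutput(self, "SharedValueOutput",
                       value="some-resource-arn",  # e.g. queue.queue_arn
                       export_name="SharedValue")

class ConsumerStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Import the exported value in the other stack
        shared_value = core.Fn.import_value("SharedValue")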
In terms of deployments, CDK creates a CloudFormation change set and deploys it, so all changes will be deployed on cdk deploy. From your perspective, it'll be synchronous, but there may be some asynchronous API calls happening under the hood through CloudFormation.
Based on your comments (mainly seeking to do parallel deployments from your local machine), you don't need multiple apps.
Just open a new terminal shell and start deploying again, using stack names to define which stack you are currently deploying:
**Shell 1**
cdk deploy StackName1
> Open a new terminal window
**Shell 2**
cdk deploy OtherStackName
and they will both run simultaneously. They have no interaction with each other, so if they depend on each other's resources being deployed in a certain order, this is simply a recipe for disaster.
But if all you are looking for is speed of deployment, then yeah, this will do the trick just fine.
If this is a common action, however, you'd be best advised to set up a CodePipeline with one stage having two CodeDeploy actions to deploy your stacks from the synthed templates (or two CodeBuild actions doing the same thing with cdk deploy).
Yes, you can do pretty much the exact thing you gave as an example in your question: have two apps and synthesize them into two separate folders. You do that by overriding the outdir prop for each app; otherwise they would overwrite each other's compiled files. See the more complete example at the end.
A few caveats though!
As of the time of this writing, this is most likely unsupported. In the docs of the outdir property it says:
You should never need to set this value.
This property is intended for internal and testing use.
So take it or leave it at your own risk :)
Calling cdk synth on this project will indeed create the two folders with the right files, but the command fails with ENOENT: no such file or directory, open 'cdk.out/manifest.json'. The mentioned cdk.out folder is created too; it's just empty. So I guess the CDK team doesn't account for anyone using this approach. I don't know the CDK internals well enough to be 100% sure, but from a brief glance at the compiled templates, the output looks OK and should probably work.
You are limited in what you can share between the apps. Note that when you instantiate a stack, the first argument is an app; therefore, the second app needs its own stack instantiation.
You can deploy each app separately with the --app flag, e.g. cdk deploy --app cdk.out.dev
Full example here:
#!/usr/bin/env node
import "source-map-support/register";
import * as cdk from "aws-cdk-lib";
import { EventInfrastructureStack } from "../lib/stacks/event-infrastructure-stack";

const devApp = new cdk.App({
  outdir: "cdk.out.dev",
});
new EventInfrastructureStack(devApp, "EventInfrastructureStack", {
  env: {
    account: "account1",
    region: "eu-west-1",
  },
});

const prodApp = new cdk.App({
  outdir: "cdk.out.prod",
});
new EventInfrastructureStack(prodApp, "EventInfrastructureStack", {
  env: {
    account: "account2",
    region: "eu-west-1",
  },
});

devApp.synth();
prodApp.synth();
Now, you didn't tell us what you were trying to achieve. My goal when first looking into this was to have a separate app for each environment. CDK offers the Stage construct for this purpose, docs here:
An abstract application modeling unit consisting of Stacks that should
be deployed together.
You can then instantiate the stage multiple times to model multiple
copies of your application which should be deployed to different
environments.
Maybe that's what you were really looking for?
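If so, a minimal sketch of the Stage approach (CDK v2 Python; the account IDs and the stack module are placeholders) would be:
from aws_cdk import App, Environment, Stage
from constructs import Construct

from lib.stacks.event_infrastructure_stack import EventInfrastructureStack  # hypothetical module

class AppStage(Stage):
    def __init__(self, scope: Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # All stacks defined here are deployed together as one unit
        EventInfrastructureStack(self, "EventInfrastructureStack")

app = App()
# One copy of the application per environment
AppStage(app, "Dev", env=Environment(account="account1", region="eu-west-1"))
AppStage(app, "Prod", env=Environment(account="account2", region="eu-west-1"))
app.synth()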

In AWS CodePipeline do we have an option to provide a parameter at run time

I have an Angular project, and I want to have a single pipeline that builds for uat, develop and production. I know that in CodeBuild we can provide an environment variable, but if this is hardcoded, I need to edit the CodeBuild project each time.
Like in Jenkins, is there any option that asks for a parameter to inject into CodeBuild?
You cannot pass a variable "from outside" to CodePipeline, for example passing a variable 'Environment' like dev, uat, etc. when starting a pipeline. The StartPipelineExecution API has no such provision.
Instead, actions within the pipeline can generate and pass variables to subsequent actions. This is useful for, say, a CodeBuild action generating a comment that is later consumed by a Manual approval action. Please see the following links for the Variables feature in CodePipeline:
https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-variables.html
https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-variables.html

How can you add environment variables in a Jenkins Multibranch Pipeline

I am working on a Django project that I have integrated with a Jenkins Multibranch Pipeline. I cannot inject environment variables via the Multibranch Pipeline options, even after installing the Environment Injector plugin.
I have environment variables like DB_PASSWORD that must be included in the envvars.
Any insight will be highly appreciated.
Since you require secrets, the best-practice approach is to use the withCredentials plugin, which can load credentials stored in the Jenkins credential store and expose them as environment variables to code executed within its block/closure. Using withCredentials does not expose them in the Jenkins logs:
withCredentials([[$class: 'UsernamePasswordMultiBinding',
                  credentialsId: 'DB_Creds',
                  usernameVariable: 'DB_USERNAME',
                  passwordVariable: 'DB_PASSWORD']]) {
    // do stuff
}
For non-sensitive env vars, use withEnv:
withEnv(["AWS_REGION=${params.AWS_REGION}"]) {
    // do stuff
}
If for whatever reason you want env vars set across your entire pipeline (not entirely recommended, but sometimes necessary):
env.MY_VAR='var'
echo("My Env var = ${env.MY_VAR}")

AWS serverless projects cannot be shared via Git?

My s-function.json needs this variable: "customRole": "${myLambdaRole}".
BUT if somebody else gets my Serverless project via git clone, they don't get the _meta folder.
Now they call serverless project init with the same stage and region. That creates the _meta folder, BUT it does NOT populate s-variables-common.json with the output variables from s-resources-cf.json.
Now they try to deploy with serverless dash deploy, and that errors with
Serverless: WARNING: This variable is not defined: myLambdaRole
Unfortunately, even calling serverless resources deploy will not fix the problem, because it says
Serverless: Deploying resources to stage "dev" in region "us-east-1" via Cloudformation (~3 minutes)...
Serverless: No resource updates are to be performed.
and s-variables-common.json is still not populated with the necessary output variables.
What that means, basically, is that it is impossible to work as a team on the same stage in the same region with the same resources when sharing the project via Git.
Since we don't want to check the _meta folder into Git, I would suggest that a serverless project init call should make sure that all the output variables are properly fetched and populated in s-variables-common.json.
This is pretty important; how do you guys share projects via Git?
There is a plugin called "meta sync" that should solve your problem:
https://github.com/serverless/serverless-meta-sync