I have a CDK project where we wrap all our resources in the Pipeline construct. Before adding the pipeline, we could run cdk diff locally to view changes to the resources we were deploying. Now that we use the Pipeline construct, running diffs locally only shows changes to the pipeline construct. Is there any way, other than fiddling with the pipeline construct, to view diffs of the application resources rather than the pipeline?
Option 1: diff the Application resources AND the Pipeline resources
The stack specifier ** will return differences for all stacks in the hierarchy, not just the Pipeline itself:
cdk diff '**' -a 'ts-node ./bin/app-pipeline.ts'
Option 2: diff the Application resources only
To exclude differences in your Pipeline stack entirely, first nest the "application" stacks in a new Construct subclass. See the CDK docs' MyService construct example, where MyService wraps three child "application" stacks:
MyService # Construct
ControlPlane # Stack
DataPlane # Stack
Monitoring # Stack
Then use MyService in two contexts, your pipeline Stage and App:
# app-pipeline.ts
MyPipeline # Pipeline
MyStage # Stage
MyService # Construct
# app.ts
App # App
MyService # Construct
Running cdk diff --app 'ts-node ./bin/app.ts' on the App will generate the differences in ControlPlane, DataPlane and Monitoring, not the pipeline itself. These are the same Application differences that will be deployed in the pipeline.
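As a rough sketch of the pattern (the three stack classes here are empty placeholders standing in for your real application stacks):
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';

// Empty placeholder stacks standing in for the real application stacks.
class ControlPlane extends cdk.Stack {}
class DataPlane extends cdk.Stack {}
class Monitoring extends cdk.Stack {}

// MyService groups the application stacks under a single reusable construct,
// so the same set of stacks can hang off an App or off a pipeline Stage.
export class MyService extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    new ControlPlane(this, 'ControlPlane');
    new DataPlane(this, 'DataPlane');
    new Monitoring(this, 'Monitoring');
  }
}

// bin/app.ts -- the standalone chain used for local diffs
const app = new cdk.App();
new MyService(app, 'MyService');
In the pipeline app, the same MyService is instantiated under a cdk.Stage instead of directly under the App.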
Related
I am trying to create an AWS CodePipeline using AWS CDK in python
cdk version = 2.29.0
import aws_cdk as cdk
from aws_cdk.pipelines import CodePipeline, CodePipelineSource, ShellStep
from aws_cdk import (
    aws_codecommit,
    pipelines,
    aws_codepipeline_actions,
    aws_codepipeline,
    aws_codebuild as codebuild,
    aws_iam as iam
)
from my_pipeline.my_pipeline_app_stage import MyPipelineAppStage
from constructs import Construct
class MyStacksStage(cdk.Stage):
    def __init__(self, scope, id, *, env=None, outdir=None):
        super().__init__(scope, id, env=env, outdir=outdir)
        self.stack1 = cdk.Stack(self, "stack1")

class MyPipelineStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, branch, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        repository = aws_codecommit.Repository.from_repository_name(
            self, "cdk_pipeline", repository_name="repository-name")
        pipeline_source = CodePipelineSource.code_commit(repository, "master")
        pipeline = CodePipeline(self, "Pipeline",
            self_mutation=False,
            pipeline_name="cdk_pipeline",
            synth=ShellStep("Synth",
                input=pipeline_source,
                commands=["npm install -g aws-cdk",
                          "python -m pip install -r requirements.txt",
                          "cdk synth"]
            ),
        )
        # "prod" is the stage instance wrapping the placeholder stack
        prod = MyStacksStage(self, "prod")
        pipeline.add_stage(prod,
            post=[pipelines.ShellStep("stack2_post",
                commands=["ls"])])
I am creating the pipeline using aws_cdk.pipelines.CodePipeline.
What I want is just to create a step to run a script in CodeBuild, but to add a stage I need to create a class stage that contains at least a stack.
The way I am doing it right now is by creating the MyStacksStage class and adding a cdk.Stack to it as an attribute.
When I add the stage, I then pass the ShellStep in the post parameter to be able to run a shell command.
It's my first time working with AWS CodePipeline, and I would like to know if there is another way to create a stage that runs shell commands in pre or post without having to create a stack.
CDK Pipelines is for deploying CDK apps. If you just want a pipeline that doesn't deploy any CloudFormation stacks defined with CDK and instead runs arbitrary shell commands in CodeBuild, you don't need CDK Pipelines at all: just create a plain codepipeline.Pipeline and add a CodeBuildAction to it.
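As a rough sketch of that approach (shown in TypeScript; the Python classes mirror it one-to-one, and the repository name and branch are placeholders):
import * as cdk from 'aws-cdk-lib';
import * as codecommit from 'aws-cdk-lib/aws-codecommit';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as actions from 'aws-cdk-lib/aws-codepipeline-actions';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'ScriptPipelineStack');

// Source: a CodeCommit repository and branch, as in the question.
const repo = codecommit.Repository.fromRepositoryName(stack, 'Repo', 'repository-name');
const sourceOutput = new codepipeline.Artifact();

// A plain CodePipeline: one source stage plus one CodeBuild stage that
// only runs shell commands -- no CDK stages, no application stacks.
new codepipeline.Pipeline(stack, 'Pipeline', {
  stages: [
    {
      stageName: 'Source',
      actions: [
        new actions.CodeCommitSourceAction({
          actionName: 'Source',
          repository: repo,
          branch: 'master',
          output: sourceOutput,
        }),
      ],
    },
    {
      stageName: 'RunScript',
      actions: [
        new actions.CodeBuildAction({
          actionName: 'RunScript',
          input: sourceOutput,
          project: new codebuild.PipelineProject(stack, 'RunScriptProject', {
            buildSpec: codebuild.BuildSpec.fromObject({
              version: '0.2',
              phases: { build: { commands: ['ls'] } },
            }),
          }),
        }),
      ],
    },
  ],
});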
I need to be able to watch for my TypeScript lambda function changes within my CDK app. I'm using SAM to locally invoke the API and do not want to deploy to the cloud each time changes happen. So using something such as SAM Accelerate, for example, is not an option.
Currently, I must run cdk build and sam local start-api manually each time I change a single line in my function code, and it painfully takes a long time to start.
Any solutions or workarounds for this?
You need a TypeScript watch feature with a hook to run arbitrary post-compile commands.* TypeScript's tsc --watch can't do it (open issue), but the tsc-watch package can:
tsc-watch --onSuccess "./start-api.sh"
tsc-watch will call start-api.sh after each successful compile, synthing a SAM-friendly version of the template and starting the local testing API:
# start-api.sh
STACK_NAME=MyStack
npx cdk synth $STACK_NAME -a 'ts-node ./bin/app.ts' --no-staging --no-validation --quiet --output cdk.local
sam local start-api --template cdk.local/$STACK_NAME.template.json
* cdk watch (an alias of cdk deploy --watch) won't work in your case, because you don't want to deploy on each change.
I'm building an application with AWS CDK that uses CodePipeline. So there are essentially two stacks, one sets up the code pipeline and the other sets up the application (and it's triggered by the pipeline).
I'm working out of what is built in https://cdkworkshop.com/ so in my project I have a file cdk.json that has an entry app pointing to a specific TypeScript file (example4-be is the application name):
{
"app": "npx ts-node --prefer-ts-exts bin/example4-be.ts",
This file builds the CodePipeline stack:
#!/usr/bin/env node
import * as cdk from "aws-cdk-lib"
import {PipelineStack} from "../lib/pipeline-stack"
const app = new cdk.App()
new PipelineStack(app, "Example4BePipeline")
So when I try to use sam to run the application locally, it fails, saying there are no Lambda functions. I believe it's because it's building the CodePipeline stack and not the application stack. If I change example4-be.ts to this:
#!/usr/bin/env node
import * as cdk from "aws-cdk-lib"
import {Example4BeStack} from "../lib/example4-be-stack";
const app = new cdk.App()
new Example4BeStack(app, "Example4BePipeline")
it works. Example4BeStack is the application stack. But obviously if I commit this, the CodePipeline will stop working.
How can I have both things working at the same time?
The commands I run to have sam run the application locally are:
cdk synth --no-staging | out-file template.yaml -encoding utf8
sam local start-api
Create two cdk.App chains in your codebase, one for the pipeline and one for standalone development/testing with sam local or cdk deploy. Your "application" stacks will be part of both chains. Here's a simplified example of the pattern I use:
Pipeline deploy (app-pipeline.ts): ApiStack and DatabaseStack are children of a cdk.Stage, grandchildren of the PipelineStack, and great-grandchildren of a cdk.App.
Development deploys (app.ts): ApiStack and DatabaseStack are children of a cdk.App. Use with sam local and cdk deploy for dev and testing.
bin/
app.ts # calls makeAppStacks to add the stacks; runs frequently during development
app-pipeline.ts # adds the PipelineStack to an App
lib/
ApiStack.ts
DatabaseStack.ts
PipelineStack.ts # adds DeployStage to the pipeline
DeployStage.ts # subclasses cdk.Stage; calls makeAppStacks.ts to add the stacks
makeAppStacks.ts # adds the Api and Db stacks to either an App or a Stage
A makeAppStacks wrapper function instantiates the actual stacks.
// makeAppStacks.ts
export const makeAppStacks = (scope: cdk.App | DeployStage, appName: string, account: string, region: string): void => {
const {table} = new DatabaseStack(scope, 'MyDb', ...)
new ApiStack(scope, 'MyApi', {table, ...})
};
makeAppStacks gets called in two places. DeployStage.ts and app.ts are generic and rarely change:
// DeployStage.ts
export class DeployStage extends cdk.Stage {
constructor(scope: Construct, id: string, props: DeployStageProps) {
super(scope, id, props);
makeAppStacks(this, props.appName, props.env.account, props.env.region);
}
}
// app.ts
const app = new cdk.App();
const account = process.env.AWS_ACCOUNT;
makeAppStacks(app, 'MyApp', account, 'us-east-1');
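For completeness, the pipeline chain itself is ordinary CDK Pipelines boilerplate. A rough sketch of PipelineStack.ts plus app-pipeline.ts, assuming a CodeStar connection source (the repository string and connection ARN are placeholders):
// lib/PipelineStack.ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';
import { DeployStage } from './DeployStage';

export class PipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, 'Pipeline', {
      synth: new ShellStep('Synth', {
        // Placeholder source -- swap in your own repository, branch, and connection.
        input: CodePipelineSource.connection('my-org/my-repo', 'main', {
          connectionArn: 'arn:aws:codestar-connections:us-east-1:111111111111:connection/placeholder',
        }),
        commands: [
          'npm ci',
          'npm run build',
          'npx cdk synth --app "ts-node ./bin/app-pipeline.ts"',
        ],
      }),
    });

    // DeployStage calls makeAppStacks, so the pipeline deploys the same
    // Api and Database stacks as the standalone app.ts chain.
    pipeline.addStage(new DeployStage(this, 'Deploy', {
      appName: 'MyApp',
      env: { account: this.account, region: this.region },
    }));
  }
}

// bin/app-pipeline.ts
const app = new cdk.App();
new PipelineStack(app, 'MyAppPipeline', {
  env: { account: process.env.AWS_ACCOUNT, region: 'us-east-1' },
});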
Add some scripts for convenience:
"scripts": {
"---- app (sandbox env) ----": "",
"deploy-sandbox:cdk": "AWS_ACCOUNT=<Sandbox Acct> npx cdk deploy '*' --app 'ts-node ./bin/app.ts' --profile sandbox --outputs-file cdk.outputs.json",
"deploy-sandbox": "build && test && deploy-sandbox:cdk",
"destroy-sandbox": ...,
"synth-sandbox": ...,
"---- app-pipeline (pipeline env) ----": "",
"deploy-pipeline:cdk": "npx cdk deploy '*' --app 'ts-node ./bin/app-pipeline.ts' --profile pipeline",
"deploy-pipeline": "build && deploy-pipeline:cdk",
}
I have a set of Terraform files, and in particular one variables.tf file which holds my variables such as the AWS access key, AWS access token, etc. I now want to automate the resource creation on AWS using GitLab CI/CD.
My plan is the following:
Write a .gitlab-ci.yml file
Have the terraform calls in the .gitlab-ci.yml file
I know that I can have secret environment variables in GitLab, but I'm not sure how I can push those variables into my Terraform variables.tf file which looks like this now!
# AWS Config
variable "aws_access_key" {
default = "YOUR_ADMIN_ACCESS_KEY"
}
variable "aws_secret_key" {
default = "YOUR_ADMIN_SECRET_KEY"
}
variable "aws_region" {
default = "us-west-2"
}
In my .gitlab-ci.yml, I have access to the secrets like this:
- 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
- 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
- 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'
How can I pipe them into my Terraform scripts? Any ideas? I would need to read the secrets from GitLab's environment and pass them on to the Terraform scripts.
Which executor are you using for your GitLab runners?
You don't necessarily need to use the Docker executor but can use a runner installed on a bare-metal machine or in a VM.
If you install the gettext package on the respective machine/VM as well you can use the same method as I described in Referencing gitlab secrets in Terraform for the Docker executor.
Another possibility could be that you set
job:
stage: ...
variables:
TF_VAR_SECRET1: ${GITLAB_SECRET}
or
job:
stage: ...
script:
- export TF_VAR_SECRET1=${GITLAB_SECRET}
in your CI job configuration and interpolate these. Please see Getting an Environment Variable in Terraform configuration? as well
Bear in mind that Terraform requires a TF_VAR_ prefix on environment variables (the part after the prefix must match your Terraform variable name, and it is case-sensitive). So you actually need something like this in .gitlab-ci.yml:
- 'TF_VAR_AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
- 'TF_VAR_AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
- 'TF_VAR_AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'
This also means you could simply name the variables in the GitLab pipeline with the TF_VAR_ prefix in the first place and skip this extra mapping step.
I see you actually did discover this per your comment. I'm still posting this answer since I missed your comment the first time, and it would have saved me an hour of work.
I created and built a new CDK project:
mkdir myproj
cd myproj
cdk init --language typescript
npm run build
If I try to run the resulting javascript, I see the following:
PS C:\repos\myproj> node .\bin\myproj.js
CloudExecutable/1.0
Usage:
C:\repos\myproj\bin\myproj.js REQUEST
REQUEST is a JSON-encoded request object.
What is the right way to run my app?
You don't need to run your CDK programs directly; use the CDK Toolkit instead.
To synthesize an AWS CloudFormation template from your app:
cdk synth --app "node .\bin\myproj.js"
To avoid re-typing the --app switch every time, you can set up a cdk.json file with:
{ "app": "node .\bin\myproj.js" }
Note: A default cdk.json is created by cdk init, so you should already see it under C:\repos\myproj.
You can also use the toolkit to deploy your app into an AWS environment:
cdk deploy
Or list all the stacks in your app:
cdk ls
The CDK application expects a request to be provided as a positional CLI argument when you're using the low-level API (aka running the app directly), for example:
node .\bin\myproj.js '{"type":"list"}'
It can also be passed as a Base64-encoded blob instead (which can make quoting the JSON less painful in a number of cases); the Base64 needs to be prefixed with base64: in this case.
node .\bin\myproj.js base64:eyAidHlwZSI6ICJsaXN0IiB9Cg==
To determine which APIs are available and what arguments they expect, you can refer to the @aws-cdk/cx-api specification.