dependency cannot cross stage boundaries - amazon-web-services

I have an ApplicationStack that creates an S3 bucket:
export class ApplicationStack extends Cdk.Stack {
  public readonly websiteBucket: S3.Bucket;

  constructor(scope: Construct, id: string, props: ApplicationStackProps) {
    super(scope, id, props);

    // Amazon S3 bucket to host the store website artifact
    this.websiteBucket = new S3.Bucket(this, "eCommerceWebsite", {
      bucketName: `${props.websiteDomain}-${account}-${region}`,
      websiteIndexDocument: "index.html",
      websiteErrorDocument: "error.html",
      removalPolicy: Cdk.RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
      accessControl: S3.BucketAccessControl.PRIVATE,
      encryption: S3.BucketEncryption.S3_MANAGED,
      publicReadAccess: false,
      blockPublicAccess: S3.BlockPublicAccess.BLOCK_ALL,
    });

    // Create a dummy export.
    // https://www.endoflineblog.com/cdk-tips-03-how-to-unblock-cross-stack-references
    this.exportValue(this.websiteBucket.bucketArn);
    ...
    ...
    ...
  }
}
I also define an ApplicationStage that contains the ApplicationStack above:
export class ApplicationStage extends Cdk.Stage {
  public readonly websiteBucket: S3.Bucket;

  constructor(scope: Construct, id: string, props: ApplicationStageProps) {
    super(scope, id, props);

    const applicationStack = new ApplicationStack(this, `eCommerceDatabaseStack-${props.stageName}`, {
      stageName: props.stageName,
      websiteDomain: props.websiteDomain,
    });
    this.websiteBucket = applicationStack.websiteBucket;
  }

  public getWebsiteBucket() {
    return this.websiteBucket;
  }
}
In my pipeline stack, I want to create an application stage for each stage that needs to deploy the website artifact to its corresponding S3 bucket. This is a cross-account CI/CD pipeline, and I have 3 separate AWS accounts (Alpha, Gamma, Prod) for this website. Whenever I ship code, the pipeline should deploy the new artifact to Alpha, then Gamma, then Prod, so that alpha.ecommerce.com, gamma.ecommerce.com, and ecommerce.com are updated in that order. The problem happens when referencing the S3 bucket in the S3DeployAction below:
export class CodePipelineStack extends CDK.Stack {
  constructor(scope: CDK.App, id: string, props: CodePipelineStackProps) {
    super(scope, id, props);
    ...
    ...
    // Here the pipelineStageInfoList contains Gamma and Prod information.
    pipelineStageInfoList.forEach((pipelineStage: PipelineStageInfo) => {
      const applicationStage = new ApplicationStage(this, pipelineStage.stageName, {
        stageName: pipelineStage.stageName,
        pipelineName: props.pipelineName,
        websiteDomain: props.websiteDomain,
        env: {
          account: pipelineStage.awsAccount,
          region: pipelineStage.awsRegion,
        },
      });
      const stage = pipeline.addStage(applicationStage);
      // Here is what went wrong. It is trying to deploy the S3Bucket for that stage.
      stage.addAction(
        new codepipeline_actions.S3DeployAction({
          actionName: "Deploy-Website",
          input: outputWebsite,
          bucket: applicationStage.getWebsiteBucket(),
        })
      );
    });
  }
  ...
  ...
  ...
}
Running cdk synthesize gives the error below:
/Users/yangliu/Projects/eCommerce/eCommerceWebsitePipelineCdk/node_modules/aws-cdk-lib/core/lib/deps.ts:39
throw new Error(`You cannot add a dependency from '${source.node.path}' (in ${describeStage(sourceStage)}) to '${target.node.path}' (in ${describeStage(targetStage)}): dependency cannot cross stage boundaries`);
^
Error: You cannot add a dependency from 'eCommerceWebsitePipelineCdk-CodePipeline-Stack' (in the App) to 'eCommerceWebsitePipelineCdk-CodePipeline-Stack/ALPHA/eCommerceDatabaseStack-ALPHA' (in Stage 'eCommerceWebsitePipelineCdk-CodePipeline-Stack/ALPHA'): dependency cannot cross stage boundaries
I think this means that I am not passing the S3 bucket reference in the right way here.
How can I fix it?
Update with my solution 2022-09-06
Based on matthew-bonig's advice, I was able to get this to work.
I have a separate stack, deployed to each account, that creates the S3 bucket and its required CloudFront distribution. My pipeline stack then just focuses on tracking my GitHub repository and deploying the new artifact to the S3 buckets whenever a new commit is pushed.
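For reference, here is a rough sketch of that split (the WebsiteBucketStack name and props are illustrative assumptions, not my exact code):

// website-bucket-stack.ts - deployed separately into each account (Alpha, Gamma, Prod)
import * as Cdk from "aws-cdk-lib";
import * as S3 from "aws-cdk-lib/aws-s3";
import { Construct } from "constructs";

export interface WebsiteBucketStackProps extends Cdk.StackProps {
  websiteDomain: string;
}

export class WebsiteBucketStack extends Cdk.Stack {
  public readonly websiteBucket: S3.Bucket;

  constructor(scope: Construct, id: string, props: WebsiteBucketStackProps) {
    super(scope, id, props);
    // Deterministic bucket name, so the pipeline stack can find it without a CDK cross-stage reference.
    this.websiteBucket = new S3.Bucket(this, "eCommerceWebsite", {
      bucketName: `${props.websiteDomain}-${this.account}-${this.region}`,
      removalPolicy: Cdk.RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
    });
  }
}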

This usually occurs because your pipeline is running in a different account/region than your stacks created from the pipelineStageInfoList.
If they aren't in the same account/region, then the simplest route is to manually set the S3 bucket names via a property on your 'InfoList' and forgo using references like the one you're trying to use. So you'd have to deploy everything first, then come back with an update afterwards that sets those values.
If they are, then you can try to set the pipeline stack's account/region directly, like you do with the other stacks, and that might help.
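A hedged sketch of the first option, assuming a bucketName property is added to each PipelineStageInfo entry (the field name is illustrative), and reusing pipeline, outputWebsite and pipelineStageInfoList from the question:

pipelineStageInfoList.forEach((pipelineStage: PipelineStageInfo) => {
  const stage = pipeline.addStage(new ApplicationStage(this, pipelineStage.stageName, {
    stageName: pipelineStage.stageName,
    pipelineName: props.pipelineName,
    websiteDomain: props.websiteDomain,
    env: { account: pipelineStage.awsAccount, region: pipelineStage.awsRegion },
  }));

  // Import the already-deployed bucket by its concrete name instead of referencing the stage's construct.
  const websiteBucket = S3.Bucket.fromBucketName(
    this,
    `WebsiteBucket-${pipelineStage.stageName}`,
    pipelineStage.bucketName // assumed new property holding the pre-created bucket's name
  );

  stage.addAction(
    new codepipeline_actions.S3DeployAction({
      actionName: "Deploy-Website",
      input: outputWebsite,
      bucket: websiteBucket,
    })
  );
});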

Related

AWS CDK and AppSync: Invalid principal in policy: "SERVICE":"appsync"

I'm trying to follow tutorials and use AWS's CDK CLI to deploy a stack using AppSync and some resource creations fail with errors like the following showing under events for the Stack in the CloudFormation console:
Invalid principal in policy: "SERVICE":"appsync" (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: 8d98f07c-d717-4dfe-af96-14f2d72d993f; Proxy: null)
I suspect that when cleaning up things on my personal developer account I deleted something I shouldn't have, but due to limited AWS experience I don't know what to recreate. I suspect it's an IAM policy, but I don't know the exact settings to use.
I'm trying on a new clean project created using cdk init sample-app --language=typescript. Running cdk deploy immediately after the above command works fine.
I originally tried using cdk-appsync-transformer to create GraphQL endpoints to a DynamoDB table and encountered the error.
I tried re-running cdk bootstrap after deleting the CDKToolkit CloudFormation stack, but it's not fixing this problem.
To rule out that it's due to something with the 3rd party library, I tried using AWS's own AppSync Construct Library instead and even following the example there I encountered the same error (although on creation of different resource types).
Reproduction steps
Create a new folder.
In the new folder run cdk init sample-app --language=typescript.
Install the AWS AppSync Construct Library: npm i @aws-cdk/aws-appsync-alpha@2.58.1-alpha.0 --save.
As per AWS's docs:
Create lib/schema.graphql with the following:
type demo {
  id: String!
  version: String!
}

type Query {
  getDemos: [ demo! ]
}

input DemoInput {
  version: String!
}

type Mutation {
  addDemo(input: DemoInput!): demo
}
Update the lib/<projectName>-stack.ts file to be essentially like the following:
import * as appsync from '@aws-cdk/aws-appsync-alpha';
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as sns from 'aws-cdk-lib/aws-sns';
import * as subs from 'aws-cdk-lib/aws-sns-subscriptions';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { Construct } from 'constructs';
import * as path from 'path';
export class CdkTest3Stack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const queue = new sqs.Queue(this, 'CdkTest3Queue', {
      visibilityTimeout: Duration.seconds(300)
    });

    const topic = new sns.Topic(this, 'CdkTest3Topic');
    topic.addSubscription(new subs.SqsSubscription(queue));

    const api = new appsync.GraphqlApi(this, 'Api', {
      name: 'demo',
      schema: appsync.SchemaFile.fromAsset(path.join(__dirname, 'schema.graphql')),
      authorizationConfig: {
        defaultAuthorization: {
          authorizationType: appsync.AuthorizationType.IAM,
        },
      },
      xrayEnabled: true,
    });

    const demoTable = new dynamodb.Table(this, 'DemoTable', {
      partitionKey: {
        name: 'id',
        type: dynamodb.AttributeType.STRING,
      },
    });

    const demoDS = api.addDynamoDbDataSource('demoDataSource', demoTable);

    // Resolver for the Query "getDemos" that scans the DynamoDb table and returns the entire list.
    // Resolver Mapping Template Reference:
    // https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-dynamodb.html
    demoDS.createResolver('QueryGetDemosResolver', {
      typeName: 'Query',
      fieldName: 'getDemos',
      requestMappingTemplate: appsync.MappingTemplate.dynamoDbScanTable(),
      responseMappingTemplate: appsync.MappingTemplate.dynamoDbResultList(),
    });

    // Resolver for the Mutation "addDemo" that puts the item into the DynamoDb table.
    demoDS.createResolver('MutationAddDemoResolver', {
      typeName: 'Mutation',
      fieldName: 'addDemo',
      requestMappingTemplate: appsync.MappingTemplate.dynamoDbPutItem(
        appsync.PrimaryKey.partition('id').auto(),
        appsync.Values.projecting('input'),
      ),
      responseMappingTemplate: appsync.MappingTemplate.dynamoDbResultItem(),
    });

    // To enable DynamoDB read consistency with the `MappingTemplate`:
    demoDS.createResolver('QueryGetDemosConsistentResolver', {
      typeName: 'Query',
      fieldName: 'getDemosConsistent',
      requestMappingTemplate: appsync.MappingTemplate.dynamoDbScanTable(true),
      responseMappingTemplate: appsync.MappingTemplate.dynamoDbResultList(),
    });
  }
}
Run cdk deploy.

How can I reference a codebuild project from a different region in CDK?

I am using CDK to deploy a codepipeline. My design is to deploy codebuild projects to different regions and reference them in one single pipeline in one region.
I build the codebuild project as:
export class CodebuildCdkStack extends cdk.Stack {
  region: string;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
  }

  createCodebuildProject = () => {
    return new codebuild.PipelineProject(this, name, {
      ...
    });
  };
}

const project1 = new CodebuildCdkStack(this, 'project1', { env: { region: 'ap-southeast-1', account } });
const project2 = new CodebuildCdkStack(this, 'project2', { env: { region: 'ap-southeast-2', account } });

const actions1 = new actions.CodeBuildAction({
  ...
  project: codebuild.PipelineProject.fromProjectArn(this, `project1`, project1.projectArn),
});
const actions2 = new actions.CodeBuildAction({
  ...
  project: codebuild.PipelineProject.fromProjectArn(this, `project2`, project2.projectArn),
});
When I run cdk deploy, I got this error:
The 'account' property must be a concrete value (action: 'project1')
After some debugging, I found that the value project1.projectArn is not an ARN string; instead, it is a Token object. So how can I reference the CodeBuild project in the correct way?
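One way around the Token is to give each project a fixed projectName and write out its ARN as a concrete string in the pipeline stack. A hedged sketch (the project name 'project1-build' and the sourceOutput artifact are assumptions; account is the same variable used in the snippet above):

import * as codebuild from '@aws-cdk/aws-codebuild';
import * as actions from '@aws-cdk/aws-codepipeline-actions';

// In CodebuildCdkStack, create the project with an explicit name, e.g. projectName: 'project1-build'.
// In the pipeline stack, rebuild the ARN from that known name, region and account:
const project1Arn = `arn:aws:codebuild:ap-southeast-1:${account}:project/project1-build`;
const importedProject1 = codebuild.Project.fromProjectArn(this, 'ImportedProject1', project1Arn);

const action1 = new actions.CodeBuildAction({
  actionName: 'Build-ap-southeast-1',
  project: importedProject1,
  input: sourceOutput, // assumed artifact from the pipeline's source stage
});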

CDK Pipelines: Use Stack output in `post` step of stage

I have a pipeline stack with a stage. This stage contains multiple stacks. One of the stacks creates a Step Function. Now I would like to trigger that step function in the post of the stage (I created InvokeStepFunctionStep as a custom ICodePipelineActionFactory implementation for this).
This is from my pipeline stack code:
// TODO make this dynamic
const stepFunctionArn = "arn:aws:states:<FULL_ARN_OMITTED>";

pipeline.addStage(stage, {
  post: [new InvokeStepFunctionStep('step-function-invoke', {
    stateMachine: sfn.StateMachine.fromStateMachineArn(this, 'StepFunctionfromArn', stepFunctionArn),
    stateMachineInput: StateMachineInput.literal(stepFunctionsInput)
  })]
});
Obviously the hard-coded ARN is bad. I tried getting the ARN of the step function as a variable from the stage's stack. However, this fails with
dependency cannot cross stage boundaries
I also tried using a CfnOutput for the ARN but when I try to use it via Fn.ImportValue the UpdatePipelineStep fails in CloudFormation with
No export named EdgePackagingStateMachineArn found
What is the recommended way to pass this information dynamically?
You could try using the CfnOutput.importValue property to reference the CfnOutput value, which works for me. See below:
Service stack:
export class XxxStack extends Stack {
  public readonly s3BucketName: CfnOutput;

  constructor(scope: Construct, id: string, props?: StackProps) {
    ...
    this.s3BucketName = new CfnOutput(this, 's3BucketName', {
      exportName: `${this.stackName}-s3BucketName`,
      value: s3Bucket.bucketName,
    });
  }
}
Stage class:
import { CfnOutput, Construct, Stage, StageProps } from '@aws-cdk/core';

export class CdkPipelineStage extends Stage {
  public readonly s3BucketName: CfnOutput;

  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);

    const service = new XxxStack(this, 'xxx', {
      ...
    });
    this.s3BucketName = service.s3BucketName;
  }
}
Pipeline stack:
import { CdkPipeline, SimpleSynthAction } from '@aws-cdk/pipelines';

const pipeline = new CdkPipeline(this, 'Pipeline', {...});

const preprod = new CdkPipelineStage(this, 'Staging', {
  env: { account: PREPROD_ACCOUNT, region: PIPELINE_REGION },
});

// put validations for the stages
const preprodStage = pipeline.addApplicationStage(preprod);
preprodStage.addActions(
  new ShellScriptAction({
    actionName: 'TestService',
    additionalArtifacts: [sourceArtifact],
    rolePolicyStatements: [
      new PolicyStatement({
        effect: Effect.ALLOW,
        actions: ['s3:getObject'],
        resources: [
          `arn:aws:s3:::${preprod.s3BucketName.importValue}/*`,
          `arn:aws:s3:::${preprod.s3BucketName.importValue}`,
        ],
      }),
    ],
    useOutputs: {
      ENV_S3_BUCKET_NAME: pipeline.stackOutput(preprod.s3BucketName),
    },
    ...
  }),
);
Note: my CDK version is
$ cdk --version
1.121.0 (build 026cb8f)
And I can confirm that CfnOutput.importValue also works for CDK version 1.139.0, and CDK version 2.8.0
Option 1: easy, not optimal.
Specify a name for your Step Function and pass it to both the stack that creates it and your invocation step. Build the ARN from the name.
This option isn't great because specifying physical names for CloudFormation resources has its disadvantages - mainly the inability to introduce any subsequent change that requires resource replacement, which may likely be necessary for a step function.
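A minimal sketch of Option 1, assuming aws-cdk-lib (v2) imports and an illustrative state machine name:

import { ArnFormat, Stack } from 'aws-cdk-lib';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';

// In the stack inside the stage: give the state machine a fixed physical name.
const stateMachine = new sfn.StateMachine(this, 'EdgePackagingStateMachine', {
  stateMachineName: 'edge-packaging-state-machine', // assumed name, shared with the pipeline stack
  definition: new sfn.Pass(this, 'Placeholder'),    // your real definition goes here
});

// In the pipeline stack: rebuild the ARN from the same name plus the stage's account/region.
const stepFunctionArn = Stack.of(this).formatArn({
  service: 'states',
  resource: 'stateMachine',
  resourceName: 'edge-packaging-state-machine',
  arnFormat: ArnFormat.COLON_RESOURCE_NAME,
  account: stageAccount, // account the stage deploys to (assumed variable)
  region: stageRegion,   // region the stage deploys to (assumed variable)
});
// ...then pass it to sfn.StateMachine.fromStateMachineArn(...) as in the question.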
Option 2: more convoluted, but might be better long-term.
Create an SSM parameter with the step function's ARN from within the stack that creates the step function, then read the SSM parameter in your invocation step.
This will also require specifying a physical name for a resource - the SSM parameter - but you are not likely to require resource replacement for it, so it is less of an issue.
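And a sketch of Option 2 (the parameter name is an illustrative assumption):

import * as ssm from 'aws-cdk-lib/aws-ssm';

// In the stack that creates the step function: publish its ARN under a well-known parameter name.
new ssm.StringParameter(this, 'EdgePackagingStateMachineArnParam', {
  parameterName: '/my-app/edge-packaging-state-machine-arn',
  stringValue: stateMachine.stateMachineArn,
});

// In the invocation step, read the parameter at pipeline run time (e.g. an SSM GetParameter call
// from the step's Lambda/CodeBuild action), or at synth time via
// ssm.StringParameter.valueFromLookup(this, '/my-app/edge-packaging-state-machine-arn')
// if the pipeline stack targets the same account and region as the parameter.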

Adding Logic To Check If Infra is In Account and Deploying If Not AWS-CDK

The title may be a bit vague, so let me clarify. I am currently trying to enable AWS Config rules, and in order to do this the account must have an AWS Config configuration recorder and delivery channel. The issue is that when an account already has these enabled, trying to deploy them will error out your entire stack. I am trying to figure out a way to add logic that would essentially check whether the configuration recorder or delivery channel is already there and, if so, skip over it and deploy just the rules, and vice versa. Here is the code:
export class fullConfigStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const globalConfigRole = new iam.Role(this, 'globalConfigRole', {
      assumedBy: new iam.ServicePrincipal('config.amazonaws.com'), // required
    });
    globalConfigRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSConfigRoleForOrganizations'));
    globalConfigRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('ReadOnlyAccess'));

    const globalConfigRecorder = new config.CfnConfigurationRecorder(this, 'globalConfigRecorder', {
      roleArn: globalConfigRole.roleArn,
      name: 'globalConfigRecorder',
      recordingGroup: {
        allSupported: true,
        includeGlobalResourceTypes: true
      }
    });

    const globalConfigBucket = new s3.Bucket(this, 'globalConfigBucket', {
      accessControl: s3.BucketAccessControl.LOG_DELIVERY_WRITE
    });

    const cisConfigDeliveryChannel = new config.CfnDeliveryChannel(this, 'cisConfigDeliveryChannel', {
      s3BucketName: globalConfigBucket.bucketName,
      configSnapshotDeliveryProperties: {
        deliveryFrequency: 'TwentyFour_Hours'
      }
    });

    const generalConfigRole = new iam.Role(this, 'generalConfigRole', {
      assumedBy: new iam.ServicePrincipal('config.amazonaws.com')
    });

    const cloudTrailEnabledRule = new ManagedRule(this, 'cloudTrailEnabledRule', {
      identifier: 'CLOUD_TRAIL_ENABLED'
    });
So to clarify again: I want to add some if/else logic around cisConfigDeliveryChannel and globalConfigRecorder so as not to error out the entire stack! If there is another way to solve this that I'm not seeing, please let me know!
In your AWS CloudFormation template, you can create a Lambda-backed custom resource with a function that checks whether your resources exist or not. This Lambda function then returns an identifier for CloudFormation to determine if the resources need to be created.
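A rough sketch of what such a custom resource could look like in CDK (shown with aws-cdk-lib v2 import paths; the handler code, asset path, and runtime choice are assumptions):

import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as cr from 'aws-cdk-lib/custom-resources';

// Lambda that calls config:DescribeConfigurationRecorders / DescribeDeliveryChannels and only
// creates them when they are missing (handler code not shown; it must return a stable PhysicalResourceId).
const configCheckFn = new lambda.Function(this, 'ConfigCheckFn', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda/config-check'),
});

const provider = new cr.Provider(this, 'ConfigCheckProvider', {
  onEventHandler: configCheckFn,
});

const configCheck = new cdk.CustomResource(this, 'ConfigCheck', {
  serviceToken: provider.serviceToken,
});

// Make the Config rules wait until the recorder/delivery channel check has run.
cloudTrailEnabledRule.node.addDependency(configCheck);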

CloudFormation Cross-Region Reference

When you are running multiple CloudFormation stacks within the same region, you can share references across stacks using CloudFormation Outputs.
However, outputs cannot be used for cross-region references, as that documentation highlights:
You can't create cross-stack references across regions. You can use the intrinsic function Fn::ImportValue to import only values that have been exported within the same region.
How do you reference values across regions in CloudFormation?
For an example to follow: I have a Route 53 hosted zone deployed in us-east-1. However, I have a backend in us-west-2 for which I want to create a DNS-validated ACM certificate, which requires a reference to the hosted zone in order to create the appropriate CNAME record to prove ownership.
How would I go about referencing that hosted zone id created in us-east-1 from within us-west-2?
The easiest way I have found of doing this is writing the reference you want to share (i.e. your hosted zone id in this case) to the Systems Manager Parameter Store and then referencing that value in your "child" stack in the separate region using a custom resource.
Fortunately, this is incredibly easy if your templates are created using Cloud Development Kit (CDK).
For the custom resource to read from SSM, you can use something like this:
// ssm-parameter-reader.ts
import { Construct } from '@aws-cdk/core';
import { AwsCustomResource, AwsSdkCall } from '@aws-cdk/custom-resources';

interface SSMParameterReaderProps {
  parameterName: string;
  region: string;
}

export class SSMParameterReader extends AwsCustomResource {
  constructor(scope: Construct, name: string, props: SSMParameterReaderProps) {
    const { parameterName, region } = props;

    const ssmAwsSdkCall: AwsSdkCall = {
      service: 'SSM',
      action: 'getParameter',
      parameters: {
        Name: parameterName
      },
      region,
      physicalResourceId: Date.now().toString() // Update physical id to always fetch the latest version
    };

    super(scope, name, { onUpdate: ssmAwsSdkCall });
  }

  public getParameterValue(): string {
    return this.getData('Parameter.Value').toString();
  }
}
To write the hosted zone id to parameter store, you can simply do this:
// route53.ts (deployed in us-east-1)
import { PublicHostedZone } from '@aws-cdk/aws-route53';
import { StringParameter } from '@aws-cdk/aws-ssm';

export const ROUTE_53_HOSTED_ZONE_ID_SSM_PARAM = 'ROUTE_53_HOSTED_ZONE_ID_SSM_PARAM';

/**
 * Other Logic
 */

const hostedZone = new PublicHostedZone(this, 'WebsiteHostedZone', { zoneName: 'example.com' });

new StringParameter(this, 'Route53HostedZoneIdSSMParam', {
  parameterName: ROUTE_53_HOSTED_ZONE_ID_SSM_PARAM,
  description: 'The Route 53 hosted zone id for this account',
  stringValue: hostedZone.hostedZoneId
});
Lastly, you can read that value from the parameter store in that region using the custom resource we just created and use that to create a certificate in us-west-2.
// acm.ts (deployed in us-west-2)
import { DnsValidatedCertificate } from '@aws-cdk/aws-certificatemanager';
import { PublicHostedZone } from '@aws-cdk/aws-route53';
import { ROUTE_53_HOSTED_ZONE_ID_SSM_PARAM } from './route53';
import { SSMParameterReader } from './ssm-parameter-reader';

/**
 * Other Logic
 */

const hostedZoneIdReader = new SSMParameterReader(this, 'Route53HostedZoneIdReader', {
  parameterName: ROUTE_53_HOSTED_ZONE_ID_SSM_PARAM,
  region: 'us-east-1'
});
const hostedZoneId: string = hostedZoneIdReader.getParameterValue();

const hostedZone = PublicHostedZone.fromPublicHostedZoneId(this, 'Route53HostedZone', hostedZoneId);
const certificate = new DnsValidatedCertificate(this, 'ApiGatewayCertificate', { domainName: 'pdx.example.com', hostedZone });
The CDK library has been updated; the code above needs to be changed to the following:
import { Construct } from '@aws-cdk/core';
import { AwsCustomResource, AwsSdkCall } from '@aws-cdk/custom-resources';
import iam = require('@aws-cdk/aws-iam');

interface SSMParameterReaderProps {
  parameterName: string;
  region: string;
}

export class SSMParameterReader extends AwsCustomResource {
  constructor(scope: Construct, name: string, props: SSMParameterReaderProps) {
    const { parameterName, region } = props;

    const ssmAwsSdkCall: AwsSdkCall = {
      service: 'SSM',
      action: 'getParameter',
      parameters: {
        Name: parameterName
      },
      region,
      physicalResourceId: { id: Date.now().toString() } // Update physical id to always fetch the latest version
    };

    super(scope, name, {
      onUpdate: ssmAwsSdkCall,
      policy: {
        statements: [new iam.PolicyStatement({
          resources: ['*'],
          actions: ['ssm:GetParameter'],
          effect: iam.Effect.ALLOW,
        })]
      }
    });
  }

  public getParameterValue(): string {
    return this.getResponseField('Parameter.Value').toString();
  }
}
CDK 2.x
There is a new Stack property called crossRegionReferences which you can enable to add cross region references. It's as simple as this:
const stack = new Stack(app, 'Stack', {
  crossRegionReferences: true,
});
Under the hood, this does something similar to the above answers by using custom resources and Systems Manager. From the CDK docs:
crossRegionReferences?
Enable this flag to allow native cross region stack references.
Enabling this will create a CloudFormation custom resource in both the producing stack and consuming stack in order to perform the export/import
This feature is currently experimental
More details from the CDK core package README:
You can enable the Stack property crossRegionReferences
in order to access resources in a different stack and region. With this feature flag
enabled it is possible to do something like creating a CloudFront distribution in us-east-2 and
an ACM certificate in us-east-1.
When the AWS CDK determines that the resource is in a different stack and is in a different
region, it will "export" the value by creating a custom resource in the producing stack which
creates SSM Parameters in the consuming region for each exported value. The parameters will be
created with the name '/cdk/exports/${consumingStackName}/${export-name}'.
In order to "import" the exports into the consuming stack a SSM Dynamic reference
is used to reference the SSM parameter which was created.
In order to mimic strong references, a Custom Resource is also created in the consuming
stack which marks the SSM parameters as being "imported". When a parameter has been successfully
imported, the producing stack cannot update the value.
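For example, a sketch of what this enables (stack names, domains, and the origin are illustrative):

import * as cdk from 'aws-cdk-lib';
import * as acm from 'aws-cdk-lib/aws-certificatemanager';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

const app = new cdk.App();

// Producing stack in us-east-1: the export is handled by the generated custom resource + SSM parameters.
const certStack = new cdk.Stack(app, 'CertStack', {
  env: { region: 'us-east-1' },
  crossRegionReferences: true,
});
const certificate = new acm.Certificate(certStack, 'SiteCert', {
  domainName: 'example.com',
  validation: acm.CertificateValidation.fromDns(),
});

// Consuming stack in another region can reference the certificate directly.
const siteStack = new cdk.Stack(app, 'SiteStack', {
  env: { region: 'us-east-2' },
  crossRegionReferences: true,
});
new cloudfront.Distribution(siteStack, 'SiteDistribution', {
  defaultBehavior: { origin: new origins.HttpOrigin('origin.example.com') },
  domainNames: ['example.com'],
  certificate, // cross-region reference handled by the crossRegionReferences flag
});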
CDK 1.x
If you are on CDK 1.x, continue using the workaround that others have shared.
Update 2023-01-16, with CDK v2 version 2.56.0 in a projen-generated project (hence respecting eslint rules and best practices for formatting, etc.):
import {
  aws_iam as iam,
  custom_resources as cr,
} from 'aws-cdk-lib';
import { Construct } from 'constructs';

interface SSMParameterReaderProps {
  parameterName: string;
  region: string;
}

export class SSMParameterReader extends cr.AwsCustomResource {
  constructor(scope: Construct, name: string, props: SSMParameterReaderProps) {
    const { parameterName, region } = props;

    const ssmAwsSdkCall: cr.AwsSdkCall = {
      service: 'SSM',
      action: 'getParameter',
      parameters: {
        Name: parameterName,
      },
      region,
      physicalResourceId: { id: Date.now().toString() }, // Update physical id to always fetch the latest version
    };

    super(scope, name, {
      onUpdate: ssmAwsSdkCall,
      policy: {
        statements: [
          new iam.PolicyStatement({
            resources: ['*'],
            actions: ['ssm:GetParameter'],
            effect: iam.Effect.ALLOW,
          }),
        ],
      },
    });
  }

  public getParameterValue(): string {
    return this.getResponseField('Parameter.Value').toString();
  }
}