Error in AWS CDK V2 construct for AWS ECR

I have written code to create an ECR repository and set a few of its properties. Even though I am passing the repository name as the string 'testing' through an interface, my code goes down the else branch and creates the repository name as undefined plus a date.
Second issue: can you also help me find the problem with the principal in the permission policy? I am getting an error about props.accountIds.map, even though I am passing an array to accountIds.
import * as ecr from 'aws-cdk-lib/aws-ecr';
import { Duration, RemovalPolicy, Stack } from 'aws-cdk-lib';
import { Repository, RepositoryEncryption, TagMutability } from 'aws-cdk-lib/aws-ecr';
import {AWSAccountDetails} from '../lib/utils/definition';
import * as cdk from 'aws-cdk-lib';
export class ecrStack extends cdk.Stack {
constructor(scope: cdk.App, id: string, props: any ){
super(scope, id);
const repository = this.createEcr(props);
this.createAdditionalProperty(repository,props);
}
//Method to check and create the AWS ECR REPO
private createEcr( props: AWSAccountDetails): any {
let imageTagMutability : ecr.TagMutability = ecr.TagMutability.IMMUTABLE;
let imageScanOnPush : Boolean =true;
let encryption : ecr.RepositoryEncryption =ecr.RepositoryEncryption.KMS;
if ( props.imageTagMutability in ecr.TagMutability ) {
imageTagMutability =props.imageTagMutability;
}
if (typeof props.imageScanOnPush !== 'boolean'){
imageScanOnPush =props.imageScanOnPush;
}
if (typeof props.encryption !== 'undefined'){
encryption =props.encryption;
}
if (!props.repositoryName) {
throw Error('No repository name provided');
}
let repository = ecr.Repository.fromRepositoryName(this, 'ecrRepo', props.repositoryName);
if (!repository.repositoryArn) {
// Repository does not exist, create a new one with the original name
repository=new ecr.Repository(this, props.repositoryName, {
repositoryName: props.repositoryName,
imageTagMutability: props.imageTagMutability,
encryption: RepositoryEncryption.KMS,
imageScanOnPush: props.imageScanOnPush,
removalPolicy: RemovalPolicy.DESTROY
});
} else {
const modifiedRepositoryName = `${props.repositoryName}-${Date.now()}`;
repository= new ecr.Repository(this, modifiedRepositoryName, {
repositoryName: modifiedRepositoryName,
imageTagMutability: props.imageTagMutability,
encryption: RepositoryEncryption.KMS,
imageScanOnPush: props.imageScanOnPush,
removalPolicy: RemovalPolicy.DESTROY
});
}
return repository;
}
//Method to add the lifecycle policy,Tags and create aws account permissions.
private createAdditionalProperty(repository: any, props:AWSAccountDetails) {
let AgeOfImage :number =180;
if (typeof props.ImageAge !== 'undefined'){
repository.addLifecycleRule({
rulePriority: 1,
maxImageAge:Duration.days(AgeOfImage)
});
} else {
repository.addLifecycleRule({
rulePriority: 1,
maxImageAge:Duration.days(props.ImageAge)
});
}
//Tags
const Tags:{[key:string]:string}={
Name: props.repositoryName,
}
//Permission to external aws account to grant permission for ECR pull and push
// const policy = new iam.PolicyDocument();
//policy.addStatements(new iam.PolicyStatement({
// actions: ['ecr:*'],
//actions: ['ecr:BatchCheckLayerAvailability', 'ecr:GetDownloadUrlForLayer', 'ecr:BatchGetImage', 'ecr:PutImage']
// resources: [repository.repositoryArn],
// principals: props.accountIds.map(id => new iam.AccountPrincipal(id))
// }));
}
addLifecycleRule(arg0: { rulePriority: number; maxImageAge: Duration; }) {
throw new Error('Method not implemented.');
}
}
The interface file:
import * as ecr from 'aws-cdk-lib/aws-ecr';
import { ecrStack } from '../ecrstack-stack';
export interface AWSAccountDetails {
ImageCount: any;
readonly repositoryName :'abcd'; /* Repo Name */
readonly ImageAge:110; //Number of days before image is deleted.i.e 90. need to change to imageAge
readonly imageTagMutability : ecr.TagMutability.IMMUTABLE; /* If the Repo should enable Tag Immutability or not; Default setting is Enabled */
readonly imageScanOnPush : true; /* If the Repo should enable ScanonPush or not ; Default setting is Enabled */
readonly encryption : 'KMS'; /* If the Repo should KMS or not ; Default setting is Enabled for AWS managed KMS Key*/
readonly accountIds : string //Account number to grant access to pull and push.
readonly encruptionproperty: 'KMS';
}

I had to pass the props as an object and then export it to the main stack. This solved the issue.
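A minimal sketch of that wiring is below. It assumes the interface fields are widened to ordinary types (string, number, boolean, ecr.TagMutability, ecr.RepositoryEncryption, string[]) instead of the literal values shown above, so that props.accountIds.map(...) works; the values and file paths are only illustrative, not the real configuration.
import * as cdk from 'aws-cdk-lib';
import * as ecr from 'aws-cdk-lib/aws-ecr';
import { ecrStack } from '../lib/ecrstack-stack';
import { AWSAccountDetails } from '../lib/utils/definition';
// Illustrative props object: repositoryName is defined when createEcr() runs,
// and accountIds is a real array, so accountIds.map(...) can build the principals.
const props: AWSAccountDetails = {
repositoryName: 'testing',
ImageAge: 110,
imageTagMutability: ecr.TagMutability.IMMUTABLE,
imageScanOnPush: true,
encryption: ecr.RepositoryEncryption.KMS,
accountIds: ['111111111111', '222222222222'],
};
const app = new cdk.App();
new ecrStack(app, 'EcrStack', props);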

Related

Got Error "Either integrationSubtype` or `integrationUri` must be specified" when try to integration Fargate service with API gateway

I'm trying to create a public API that will be integrated with a Fargate service that already exists in a private subnet.
I got the error below when running cdk synthesize --profile=PandaService-Alpha.
/Users/yangliu/Projects/Panda/PandaApi/node_modules/@aws-cdk/aws-apigatewayv2-alpha/lib/http/integration.ts:249
throw new Error('Either `integrationSubtype` or `integrationUri` must be specified.');
^
Error: Either `integrationSubtype` or `integrationUri` must be specified.
at new HttpIntegration (/Users/yangliu/Projects/Panda/PandaApi/node_modules/@aws-cdk/aws-apigatewayv2-alpha/lib/http/integration.ts:249:13)
at HttpAlbIntegration._bindToRoute (/Users/yangliu/Projects/Panda/PandaApi/node_modules/@aws-cdk/aws-apigatewayv2-alpha/lib/http/integration.ts:317:26)
at new HttpRoute (/Users/yangliu/Projects/Panda/PandaApi/node_modules/@aws-cdk/aws-apigatewayv2-alpha/lib/http/route.ts:191:38)
at /Users/yangliu/Projects/Panda/PandaApi/node_modules/@aws-cdk/aws-apigatewayv2-alpha/lib/http/api.ts:458:14
at Array.map (<anonymous>)
at HttpApi.addRoutes (/Users/yangliu/Projects/Panda/PandaApi/node_modules/@aws-cdk/aws-apigatewayv2-alpha/lib/http/api.ts:455:20)
at ApigatewayStack.addApiRoutes (/Users/yangliu/Projects/Panda/PandaApi/lib/apigateway-stack.ts:110:22)
at new ApigatewayStack (/Users/yangliu/Projects/Panda/PandaApi/lib/apigateway-stack.ts:101:10)
at /Users/yangliu/Projects/Panda/PandaApi/bin/app.ts:17:3
The error is thrown in the addApiRoutes method in the code below.
Code:
import * as CDK from "aws-cdk-lib";
import * as CertificateManager from "aws-cdk-lib/aws-certificatemanager";
import * as Route53 from "aws-cdk-lib/aws-route53";
import * as ApiGatewayV2Alpha from "@aws-cdk/aws-apigatewayv2-alpha";
import * as ApiGatewayV2IntegrationsAlpha from "@aws-cdk/aws-apigatewayv2-integrations-alpha";
import * as ELBv2 from "aws-cdk-lib/aws-elasticloadbalancingv2";
import { Construct } from "constructs";
import { StageInfo } from "../config/stage-config";
import * as EC2 from "aws-cdk-lib/aws-ec2";
export interface ApigatewayStackProps extends CDK.StackProps {
readonly packageName: string;
readonly stageInfo: StageInfo;
}
export class ApigatewayStack extends CDK.Stack {
private readonly coreVpc: EC2.IVpc;
// Prefix for CDK construct ID
private readonly constructIdPrefix: string;
private readonly pandaApi: ApiGatewayV2Alpha.HttpApi;
constructor(scope: Construct, id: string, props: ApigatewayStackProps) {
super(scope, id, props);
this.coreVpc = EC2.Vpc.fromLookup(
this,
`${props.stageInfo.stageName}VpcLookupId`,
{
vpcName: "CoreVpc",
}
);
this.constructIdPrefix = `${props.packageName}-${props.stageInfo.stageName}`;
const hostedZone: Route53.IHostedZone = Route53.HostedZone.fromLookup(
this,
`${this.constructIdPrefix}-HostedZoneLookup`,
{
domainName: props.stageInfo.domainName,
}
);
const domainCertificate = new CertificateManager.Certificate(
this,
`${this.constructIdPrefix}-pandaApiCertificate`,
{
domainName: props.stageInfo.domainName,
validation:
CertificateManager.CertificateValidation.fromDns(hostedZone),
}
);
const customDomainName = new ApiGatewayV2Alpha.DomainName(
this,
`${this.constructIdPrefix}-ApiGatewayDomainName`,
{
certificate: domainCertificate,
domainName: props.stageInfo.domainName,
}
);
this.pandaApi = new ApiGatewayV2Alpha.HttpApi(
this,
`${this.constructIdPrefix}-pandaApi`,
{
defaultDomainMapping: {
domainName: customDomainName,
//mappingKey: props.pipelineStageInfo.stageName
},
corsPreflight: {
allowOrigins: ["*"],
allowHeaders: ["*"],
allowMethods: [
ApiGatewayV2Alpha.CorsHttpMethod.OPTIONS,
ApiGatewayV2Alpha.CorsHttpMethod.GET,
ApiGatewayV2Alpha.CorsHttpMethod.POST,
ApiGatewayV2Alpha.CorsHttpMethod.PUT,
],
maxAge: CDK.Duration.hours(6),
},
//createDefaultStage: false,
// only allow use custom domain
disableExecuteApiEndpoint: true
}
);
this.addApiRoutes(props);
}
/**
* Add API routes for multiple services.
*/
private addApiRoutes(props: ApigatewayStackProps) {
const PandaServiceIntegration : ApiGatewayV2IntegrationsAlpha.HttpAlbIntegration =
this.generatePandaServiceIntegration(props);
this.pandaApi.addRoutes({
path: "/products",
methods: [ApiGatewayV2Alpha.HttpMethod.ANY],
integration: PandaServiceIntegration,
});
this.pandaApi.addRoutes({
path: "/store-categories",
methods: [ApiGatewayV2Alpha.HttpMethod.ANY],
integration: PandaServiceIntegration,
});
this.pandaApi.addRoutes({
path: "/stores",
methods: [ApiGatewayV2Alpha.HttpMethod.ANY],
integration: PandaServiceIntegration,
});
}
/**
*
* @returns HttpAlbIntegration for PandaService.
*/
private generatePandaServiceIntegration(props: ApigatewayStackProps) {
const vpcLink = new ApiGatewayV2Alpha.VpcLink(
this,
`${this.constructIdPrefix}-VpcLink`,
{
vpc: this.coreVpc,
subnets: {
subnetType: EC2.SubnetType.PRIVATE_ISOLATED,
},
}
);
const PandaServiceAlbSecurityGroup = EC2.SecurityGroup.fromLookupByName(
this,
`${this.constructIdPrefix}-PandaServiceAlbSecurityGroupLookup`,
"PandaServiceAlbSecurityGroup",
this.coreVpc
);
const PandaServiceAlbListener : ELBv2.IApplicationListener =
ELBv2.ApplicationListener.fromApplicationListenerAttributes(this, `${this.constructIdPrefix}-PandaServiceAlbListenerLookUp`, {
listenerArn: props.stageInfo.PandaServiceAlbArn,
securityGroup: PandaServiceAlbSecurityGroup,
});
const PandaServiceIntegration: ApiGatewayV2IntegrationsAlpha.HttpAlbIntegration =
new ApiGatewayV2IntegrationsAlpha.HttpAlbIntegration(
`${this.constructIdPrefix}-PandaServiceIntegration`,
PandaServiceAlbListener ,
{
method: ApiGatewayV2Alpha.HttpMethod.ANY,
vpcLink: vpcLink,
secureServerName: props.stageInfo.domainName,
parameterMapping: new ApiGatewayV2Alpha.ParameterMapping()
}
);
return PandaServiceIntegration;
}
}
As Otavio pointed out, my props.stageInfo.PandaServiceAlbArn was an empty string; after updating it with the actual ARN string, the problem was resolved.
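The fix was simply supplying the real ARN. Separately, a small synth-time guard like the sketch below (my own addition, not part of the original code) would surface a missing ARN with a clearer message; it is meant to live in apigateway-stack.ts, where ApigatewayStackProps is already in scope, and generatePandaServiceIntegration would call it first:
// Fail fast if the stage configuration is missing the ALB listener ARN,
// instead of letting HttpAlbIntegration throw a generic error later.
function assertAlbArnPresent(props: ApigatewayStackProps): void {
if (!props.stageInfo.PandaServiceAlbArn) {
throw new Error(
`stageInfo.PandaServiceAlbArn is empty for stage ${props.stageInfo.stageName}; ` +
"HttpAlbIntegration needs a valid ALB listener ARN."
);
}
}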

Trouble with Snapshot tests in CDK

I am trying to do some snapshot testing on my CDK stack, but the snapshot is not being generated.
This is my stack:
export interface SNSStackProps extends cdk.StackProps {
assumedRole: string
}
export class SNSStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props: AssumedRole) {
super(scope, id, props)
const topicName = "TopicName"
const topic = new sns.Topic(this, topicName, {
displayName: "Topic Name",
fifo: true,
topicName: topicName,
contentBasedDeduplication: true
})
const assumedRole = iam.Role.fromRoleArn(
this,
"AssumedRole",
props.assumedRole
)
topic.grantPublish(assumedRole.grantPrincipal)
}
}
This is my snapshot test
test("Creates an SNS topic ", () => {
const stack = new Stack()
new SNSStack.SNSStack(stack, "SNSStack", {
env: {
account: "test_account",
region: "test_region"
},
assumedRoleArn: "arn:aws:iam::1111111:role/testRole"
})
expect(SynthUtils.toCloudFormation(stack)).toMatchSnapshot()
})
This generates a snapshot with an empty object like this
exports[`dlq creates an alarm 1`] = `Object {}`;
Why is the object empty in the snapshot? And how do I get the Object in the snapshot to populate with the resources in my stack?
You should create the stack by using an App and not another stack. You can then easily synthesize the App and extract the CloudFormation template as JSON, which you can use for your snapshot. Below is an example of how I've used it to create the stack and retrieve the CloudFormation template.
const app = new App();
new ApiStack(app, 'api-stack', params);
return app.synth({ force: true }).getStackByName('api-stack').template;
It is possible that once you've got the stack reference, you can still use the SynthUtils approach to get the CloudFormation template.
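Putting that together for the SNS stack in the question, a snapshot test along the following lines should produce a non-empty template. This is a sketch: it assumes Jest, CDK v1 imports to match the question's code, and an import path for SNSStack that you will need to adjust.
import { App } from "@aws-cdk/core";
import * as SNSStack from "../lib/sns-stack"; // path is an assumption
test("Creates an SNS topic", () => {
// Create the stack under a real App so synthesis produces a full template.
const app = new App();
new SNSStack.SNSStack(app, "SNSStack", {
env: { account: "test_account", region: "test_region" },
assumedRole: "arn:aws:iam::1111111:role/testRole", // matches SNSStackProps.assumedRole
});
// Pull the synthesized CloudFormation template for the named stack and snapshot it.
const template = app.synth({ force: true }).getStackByName("SNSStack").template;
expect(template).toMatchSnapshot();
});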

File upload to Amazon S3 from Salesforce LWC (without apex)

I have tried to create an LWC component whose job is to upload a file to an Amazon S3 bucket. I have configured the AWS bucket and tested it by uploading a file with Postman, which works, but I could not upload a file from the LWC component. I was getting this error.
I am following this tutorial.
I have configured CSP Trusted Sites and CORS in Salesforce (images below).
Here is my code:
import { LightningElement, track, wire } from "lwc";
import { getRecord } from "lightning/uiRecordApi";
import { loadScript } from "lightning/platformResourceLoader";
import AWS_SDK from "@salesforce/resourceUrl/awsjssdk";
import getAWSCredential from '@salesforce/apex/CRM_AWSUtility.getAWSCredential';
export default class FileUploadComponentLWC extends LightningElement {
/*========= Start - variable declaration =========*/
s3; //store AWS S3 object
isAwsSdkInitialized = false; //flag to check if AWS SDK initialized
@track awsSettngRecordId; //store record id of custom metadata type where AWS configurations are stored
selectedFilesToUpload; //store selected file
@track showSpinner = false; //used for when to show spinner
@track fileName; //to display the selected file name
/*========= End - variable declaration =========*/
//Called after every render of the component. This lifecycle hook is specific to Lightning Web Components,
//it isn’t from the HTML custom elements specification.
renderedCallback() {
if (this.isAwsSdkInitialized) {
return;
}
Promise.all([loadScript(this, AWS_SDK)])
.then(() => {
//For demo, hard coded the Record Id. It can dynamically be passed the record id based upon use cases
// this.awsSettngRecordId = "m012v000000FMQJ";
})
.catch(error => {
console.error("error -> " + error);
});
}
//Using wire service getting AWS configuration from Custom Metadata type based upon record id passed
@wire(getAWSCredential)
awsConfigData({ error, data }) {
if (data) {
console.log('data: ',data)
let awsS3MetadataConf = {};
let currentData = data[0]
//console.log("AWS Conf ====> " + JSON.stringify(currentData));
awsS3MetadataConf = {
s3bucketName: currentData.Bucket_Name__c,
awsAccessKeyId: currentData.Access_Key__c,
awsSecretAccessKey: currentData.Secret_Key__c,
s3RegionName: 'us-east-1'
};
this.initializeAwsSdk(awsS3MetadataConf); //Initializing AWS SDK based upon configuration data
} else if (error) {
console.error("error ====> " + JSON.stringify(error));
}
}
//Initializing AWS SDK
initializeAwsSdk(confData) {
const AWS = window.AWS;
AWS.config.update({
accessKeyId: confData.awsAccessKeyId, //Assigning access key id
secretAccessKey: confData.awsSecretAccessKey //Assigning secret access key
});
AWS.config.region = confData.s3RegionName; //Assigning region of S3 bucket
this.s3 = new AWS.S3({
apiVersion: "2006-03-01",
params: {
Bucket: confData.s3bucketName //Assigning S3 bucket name
}
});
console.log('S3: ',this.s3)
this.isAwsSdkInitialized = true;
}
//get the file name from user's selection
handleSelectedFiles(event) {
if (event.target.files.length > 0) {
this.selectedFilesToUpload = event.target.files[0];
this.fileName = event.target.files[0].name;
console.log("fileName ====> " + this.fileName);
}
}
//file upload to AWS S3 bucket
uploadToAWS() {
if (this.selectedFilesToUpload) {
console.log('uploadToAWS...')
this.showSpinner = true;
let objKey = this.selectedFilesToUpload.name
.replace(/\s+/g, "_") //each space character is being replaced with _
.toLowerCase();
console.log('objKey: ',objKey);
//starting file upload
this.s3.putObject(
{
Key: objKey,
ContentType: this.selectedFilesToUpload.type,
Body: this.selectedFilesToUpload,
ACL: "public-read"
},
err => {
if (err) {
this.showSpinner = false;
console.error(err);
} else {
this.showSpinner = false;
console.log("Success");
this.listS3Objects();
}
}
);
}
this.showSpinner = false;
console.log('uploadToAWS Finish...')
}
//listing all stored documents from S3 bucket
listS3Objects() {
console.log("AWS -> " + JSON.stringify(this.s3));
this.s3.listObjects((err, data) => {
if (err) {
console.log("Error listS3Objects", err);
} else {
console.log("Success listS3Objects", data);
}
});
}
}
Please, someone help. Thank you in advance.
Problem solved. We found the problem in our AWS configuration.
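For reference, the piece of AWS-side configuration that most commonly breaks direct browser uploads like this is the bucket's CORS policy. A typical shape is sketched below as an illustration only (an assumption, not necessarily the exact setting that was wrong here; the Salesforce origin is a placeholder):
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "HEAD"],
"AllowedOrigins": ["https://your-org.lightning.force.com"],
"ExposeHeaders": ["ETag"]
}
]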

CDK to enable DNS resolution for VPCPeering

I have VPC peering to connect a Lambda in one AWS account to an RDS instance in another AWS account. This works fine, but it requires the VPC peering connection to have the DNS resolution option enabled.
By default, DNS resolution is set to:
DNS resolution from accepter VPC to private IP: Disabled.
This can be done via the AWS console and the CLI, but I am not able to achieve the same using AWS CDK.
https://docs.aws.amazon.com/vpc/latest/peering/modify-peering-connections.html
The CfnVPCPeeringConnection does not seem to have this option.
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ec2.CfnVPCPeeringConnection.html
Is there any other way of achieving this via CDK ?
const cfnVPCPeeringConnection :CfnVPCPeeringConnection =
new CfnVPCPeeringConnection(
stack,
"vpcPeeringId",
{
peerVpcId : "<vpcId of acceptor account>",
vpcId : "<reference of the Id>",
peerOwnerId : "<aws acc number>",
peerRegion : "<region>",
peerRoleArn : "<arn created in the acceptor account>",
}
);
//update route tables
rdsConnectorVpc.isolatedSubnets.forEach(({ routeTable: { routeTableId } }, index) => {
new CfnRoute(this.parentStack, 'PrivateSubnetPeeringConnectionRoute' + index, {
destinationCidrBlock: '<CIDR>',
routeTableId,
vpcPeeringConnectionId: cfnVPCPeeringConnection.ref,
})
});
You can use a CustomResource Construct in AWS CDK to achieve it:
import * as cdk from "#aws-cdk/core";
import ec2 = require("#aws-cdk/aws-ec2");
import iam = require("#aws-cdk/aws-iam");
import { AwsCustomResource, AwsCustomResourcePolicy, AwsSdkCall, PhysicalResourceId } from "#aws-cdk/custom-resources";
import { RetentionDays } from "#aws-cdk/aws-logs";
export interface AllowVPCPeeringDNSResolutionProps {
vpcPeering: ec2.CfnVPCPeeringConnection,
}
export class AllowVPCPeeringDNSResolution extends cdk.Construct {
constructor(scope: cdk.Construct, id: string, props: AllowVPCPeeringDNSResolutionProps) {
super(scope, id);
const onCreate:AwsSdkCall = {
service: "EC2",
action: "modifyVpcPeeringConnectionOptions",
parameters: {
VpcPeeringConnectionId: props.vpcPeering.ref,
AccepterPeeringConnectionOptions: {
AllowDnsResolutionFromRemoteVpc: true,
},
RequesterPeeringConnectionOptions: {
AllowDnsResolutionFromRemoteVpc: true
}
},
physicalResourceId: PhysicalResourceId.of(`allowVPCPeeringDNSResolution:${props.vpcPeering.ref}`)
};
const onUpdate = onCreate;
const onDelete:AwsSdkCall = {
service: "EC2",
action: "modifyVpcPeeringConnectionOptions",
parameters: {
VpcPeeringConnectionId: props.vpcPeering.ref,
AccepterPeeringConnectionOptions: {
AllowDnsResolutionFromRemoteVpc: false,
},
RequesterPeeringConnectionOptions: {
AllowDnsResolutionFromRemoteVpc: false
}
},
};
const customResource = new AwsCustomResource(this, "allow-peering-dns-resolution", {
policy: AwsCustomResourcePolicy.fromStatements([
new iam.PolicyStatement({
effect: iam.Effect.ALLOW,
resources: ["*"],
actions: [
"ec2:ModifyVpcPeeringConnectionOptions",
]
}),
]),
logRetention: RetentionDays.ONE_DAY,
onCreate,
onUpdate,
onDelete,
});
customResource.node.addDependency(props.vpcPeering);
}
}
and use it like this:
[...]
const peerConnection = new ec2.CfnVPCPeeringConnection(this, "peerConnection", {
vpcId: destinationVPC.vpcId,
peerVpcId: lambdaVPCToDestinationVPC.vpcId,
});
new AllowVPCPeeringDNSResolution(this, "peerConnectionDNSResolution", {
vpcPeering: peerConnection,
});
[...]
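Applied to the peering connection from the question, the wiring would look roughly like this (a sketch; the construct ID is illustrative, and stack is the same scope used when creating cfnVPCPeeringConnection):
new AllowVPCPeeringDNSResolution(stack, "vpcPeeringDnsResolution", {
vpcPeering: cfnVPCPeeringConnection,
});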

Update asset files using Tokens with aws-cdk

I have created this stack:
export class InfrastructureStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const bucket = new s3.Bucket(this, "My Hello Website", {
websiteIndexDocument: 'index.html',
websiteErrorDocument: 'error.html',
publicReadAccess: true,
removalPolicy: cdk.RemovalPolicy.DESTROY
});
const api = new apigateway.RestApi(this, "My Endpoint", {
restApiName: "My rest API name",
description: "Some cool description"
});
const myLambda = new lambda.Function(this, 'My Backend', {
runtime: lambda.Runtime.NODEJS_8_10,
handler: 'index.handler',
code: lambda.Code.fromAsset(path.join(__dirname, 'code'))
});
const apiToLambda = new apigateway.LambdaIntegration(myLambda)
api.root.addMethod('GET', apiToLambda);
updateWebsiteUrl.newUrl(api.url);
}
}
The last line of code calls my function to update an asset that will be deployed to S3 as a website, injecting the API URL that is created during deployment. It is just a plain Node.js script that replaces the file's PLACEHOLDER with api.url.
Of course, at synthesis time the CDK does not know what the final address of the REST endpoint will be, because that is only resolved at deploy time, so it updates my URL with something like:
'https://${Token[TOKEN.26]}.execute-api.${Token[AWS::Region.4]}.${Token[AWS::URLSuffix.1]}/${Token[TOKEN.32]}/;'
Is there any way that I can update this after integrating the Lambda with the API endpoint, i.e. after deploying them?
I would like to use the @aws-cdk/aws-s3-deployment module to deploy the code to the newly created bucket, all in the same stack, so one cdk deploy will update everything I need.
To avoid confusion, my updateWebsiteUrl is:
export function newUrl(newUrl: string): void {
const scriptPath = path.join(__dirname, '/../../front/');
const scriptName = 'script.js';
fs.readFile(scriptPath + scriptName, (err, buf) => {
let scriptContent : string = buf.toString();
let newScript = scriptContent.replace('URL_PLACEHOLDER', newUrl);
fs.writeFile(scriptPath + 'newScript.js', newScript, () => {
console.log('done writing');
});
});
}
And my script is just simple:
const url = URL_PLACEHOLDER;
function foo() {
let req = new XMLHttpRequest();
req.open('GET', url , false);
req.send(null);
if (req.status == 200) {
replaceContent(req.response);
}
}
function replaceContent(content) {
document.getElementById('content').innerHTML = content;
}
I ran into the same issue today and managed to find a solution for it.
The C# code I am using in my CDK program is the following:
// This will at runtime be just a token which refers to the actual JSON in the format {'api':{'baseUrl':'https://your-url'}}
var configJson = stack.ToJsonString(new Dictionary<string, object>
{
["api"] = new Dictionary<string, object>
{
["baseUrl"] = api.Url
}
});
var configFile = new AwsCustomResource(this, "config-file", new AwsCustomResourceProps
{
OnUpdate = new AwsSdkCall
{
Service = "S3",
Action = "putObject",
Parameters = new Dictionary<string, string>
{
["Bucket"] = bucket.BucketName,
["Key"] = "config.json",
["Body"] = configJson,
["ContentType"] = "application /json",
["CacheControl"] = "max -age=0, no-cache, no-store, must-revalidate"
},
PhysicalResourceId = PhysicalResourceId.Of("config"),
},
Policy = AwsCustomResourcePolicy.FromStatements(
new[]
{
new PolicyStatement(new PolicyStatementProps
{
Actions = new[] { "s3:PutObject" },
Resources= new [] { bucket.ArnForObjects("config.json") }
})
})
});
You will need to install the following package to have the types available: https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html
It is basically part of the solution you can find in the answer to the question "AWS CDK passing API Gateway URL to static site in same Stack", or at this GitHub repository: https://github.com/jogold/cloudstructs/blob/master/src/static-website/index.ts#L134
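If you prefer to stay in TypeScript, as in the question, a rough equivalent of the C# snippet above would look like the sketch below. It assumes CDK v1-style imports to match the rest of the question, and that bucket and api are the constructs created earlier in InfrastructureStack.
import * as iam from '@aws-cdk/aws-iam';
import { AwsCustomResource, AwsCustomResourcePolicy, PhysicalResourceId } from '@aws-cdk/custom-resources';
// Inside the InfrastructureStack constructor, after bucket and api are created.
// toJsonString() keeps the api.url token intact until it is resolved at deploy time.
const configJson = this.toJsonString({ api: { baseUrl: api.url } });
new AwsCustomResource(this, 'ConfigFile', {
onUpdate: {
service: 'S3',
action: 'putObject',
parameters: {
Bucket: bucket.bucketName,
Key: 'config.json',
Body: configJson,
ContentType: 'application/json',
CacheControl: 'max-age=0, no-cache, no-store, must-revalidate',
},
physicalResourceId: PhysicalResourceId.of('config'),
},
policy: AwsCustomResourcePolicy.fromStatements([
new iam.PolicyStatement({
actions: ['s3:PutObject'],
resources: [bucket.arnForObjects('config.json')],
}),
]),
});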