I have an S3 bucket that a manifest file is uploaded to on creation in CDK.
This manifest file is then used by a dataset in QuickSight. But my CDK deployment fails because QuickSight can't find the manifest file in S3. So I want to add a dependsOn for the QuickSight resource.
const quicksightBucket = new s3.Bucket(this, "userS3Bucket", {
  bucketName: "quicksight-bucket-user",
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
  versioned: true,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
})

const bucketDeployment = new s3deploy.BucketDeployment(
  this,
  "bucketDeployment",
  {
    destinationBucket: quicksightBucket,
    sources: [
      s3deploy.Source.asset("/Users/user/Downloads/housing"),
    ],
  }
)

const quicksightDatasource = new quicksight.CfnDataSource(
  this,
  "quicksight-datasource",
  {
    name: "quicksightdatasource",
    awsAccountId: "123123",
    dataSourceId: "7217623409123897423687",
    type: "S3",
    dataSourceParameters: {
      s3Parameters: {
        manifestFileLocation: {
          bucket: quicksightBucket.bucketName,
          key: "manifest.json",
        },
      },
    },
  }
)

quicksightDatasource.addDependsOn(bucketDeployment)
I'm getting an error like: Argument of type 'Bucket' is not assignable to parameter of type 'CfnResource'.
To add a dependency on the Bucket itself:
quicksightDatasource.addDependency(
  quicksightBucket.node.defaultChild as s3.CfnBucket
);
That's probably not what you want, though. It ensures the bucket exists before the QuickSight resource is created, but it doesn't ensure your manifest.json data is in the bucket. To do that, instead add a dependency on the Custom Resource that the s3deploy.BucketDeployment deploys:
quicksightDatasource.addDependency(
  bucketDeployment.node.tryFindChild("CustomResource")?.node.defaultChild as CfnCustomResource
);
You need to reference the deployed bucket, which is resolved only when the deployment is complete. Note that deployedBucket is an IBucket, not a CfnResource, so passing it to addDependency won't compile; instead, reference it where the data source needs the bucket name, which creates the dependency implicitly:

manifestFileLocation: {
  bucket: bucketDeployment.deployedBucket.bucketName,
  key: "manifest.json",
},
If you want to reference the destination bucket in another construct and make sure the bucket deployment has happened before the next operation is started, pass the other construct a reference to deployment.deployedBucket.
Source: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3_deployment.BucketDeployment.html#deployedbucket
Related
I am trying to set up a CloudFront distribution with an S3 bucket as the origin. I have added a policy to the bucket, created an Origin Access Control, and assigned it to the bucket, but when I try to deploy I get the error "Invalid request provided: Illegal configuration: The origin type and OAC origin type differ." Here's my code:
// S3 bucket
export class S3Bucket extends Bucket {
  constructor(scope: Construct) {
    super(scope, S3_BUCKET_NAME, {
      websiteIndexDocument: 'index.html',
      blockPublicAccess: BlockPublicAccess.BLOCK_ALL
    });
  }
};

// CloudFront Distribution
export class CloudFrontDistribution extends cloudfront.Distribution {
  constructor(scope: Construct, bucket: Bucket) {
    super(scope, CLOUD_FRONT_DISTRIBUTION_NAME, {
      defaultBehavior: {
        origin: new S3Origin(bucket),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
        compress: true
      }
    });

    const oac = new cloudfront.CfnOriginAccessControl(this, 'MyOriginAccessControl', {
      originAccessControlConfig: {
        name: 'MyOriginAccessControl',
        originAccessControlOriginType: 's3',
        signingBehavior: 'always',
        signingProtocol: 'sigv4'
      }
    });

    const allowOriginAccessIdentityPolicy = new PolicyStatement({
      actions: ['s3:GetObject'],
      principals: [new ServicePrincipal(this.distributionId)],
      effect: Effect.ALLOW,
      resources: [oac.attrId]
    });

    const allowCloudFrontReadOnlyPolicy = new PolicyStatement({
      actions: ['s3:GetObject'],
      principals: [new ServicePrincipal('cloudfront.amazonaws.com')],
      effect: Effect.ALLOW,
      conditions: {
        'StringEquals': {
          "AWS:SourceArn": this.distributionId
        }
      }
    });

    bucket.addToResourcePolicy(allowCloudFrontReadOnlyPolicy)
    bucket.addToResourcePolicy(allowOriginAccessIdentityPolicy)

    const cfnDistribution = this.node.defaultChild as cloudfront.CfnDistribution
    cfnDistribution.addPropertyOverride(
      'DistributionConfig.Origins.0.OriginAccessControlId',
      oac.getAtt('Id')
    )
  };
};
In the console I can see that the wrong origin name is set: the bucket website endpoint, which doesn't allow adding an Origin Access Control.
When I change it to the S3 REST API address, the OAC appears.
How do I change this in CDK?
I've figured this out. I read the CDK source code, and it turns out that when you create your bucket with
websiteIndexDocument: 'index.html'
CDK enables website hosting for the bucket automatically under the hood. It also uses the bucket's website endpoint as the origin domain (which was my problem). If website hosting is not enabled, CDK creates an Origin Access Identity, creates a policy allowing access to the bucket only from that OAI, and uses the bucket's regional domain name as the origin's domain. The solution was to remove
websiteIndexDocument: 'index.html'
and block public access to the bucket:
blockPublicAccess: BlockPublicAccess.BLOCK_ALL
My goal is to enable logging for a regional WebACL via AWS CDK. This seems to be possible via CloudFormation, and the appropriate constructs exist in CDK. But when I use the following code to create a Log Group and link it in a LoggingConfiguration ...
const webAclLogGroup = new LogGroup(scope, "awsWafLogs", {
  logGroupName: `aws-waf-logs`
});

// Create logging configuration with log group as destination
new CfnLoggingConfiguration(scope, "webAclLoggingConfiguration", {
  logDestinationConfigs: webAclLogGroup.logGroupArn, // Arn of LogGroup
  resourceArn: aclArn // Arn of Acl
});
... I get an exception during cdk deploy, stating that the string in LogDestinationConfigs is not a correct ARN (some parts of the ARN in the log messages have been removed):
Resource handler returned message: "Error reason: The ARN isn't valid. A valid ARN begins with arn: and includes other information separated by colons or slashes., field: LOG_DESTINATION, parameter: arn:aws:logs:xxx:xxx:xxx-awswaflogsF99ED1BA-PAeH9Lt2Y3fi:* (Service: Wafv2, Status Code: 400, Request ID: xxx, Extended Request ID: null)"
I cannot see an error in the generated Cloud Formation code after cdk synth:
"webAclLoggingConfiguration": {
"id": "webAclLoggingConfiguration",
"path": "xxx/xxx/webAclLoggingConfiguration",
"attributes": {
"aws:cdk:cloudformation:type": "AWS::WAFv2::LoggingConfiguration",
"aws:cdk:cloudformation:props": {
"logDestinationConfigs": [
{
"Fn::GetAtt": [
{
"Ref": "awsWafLogs58D3FD01"
},
"Arn"
]
}
],
"resourceArn": {
"Fn::GetAtt": [
"webACL",
"Arn"
]
}
}
},
"constructInfo": {
"fqn": "aws-cdk-lib.aws_wafv2.CfnLoggingConfiguration",
"version": "2.37.1"
}
},
I'm using CDK with TypeScript; the CDK version is currently set to 2.37.1, but it also did not work with 2.16.0.
WAF has particular requirements for the naming and format of logging destination configs, as described and shown in its docs.
Specifically, the ARN of the Log Group cannot end in :* which unfortunately is the return value for a Log Group ARN in Cloudformation.
A workaround would be to construct the required ARN format manually like this, which will omit the :* suffix. Also note that logDestinationConfigs takes a List of Strings, though only with exactly 1 element in it.
const webAclLogGroup = new LogGroup(scope, "awsWafLogs", {
  logGroupName: `aws-waf-logs`
});

// Create logging configuration with log group as destination
new CfnLoggingConfiguration(scope, "webAclLoggingConfiguration", {
  logDestinationConfigs: [
    // Construct the different ARN format from the logGroupName
    Stack.of(this).formatArn({
      arnFormat: ArnFormat.COLON_RESOURCE_NAME,
      service: "logs",
      resource: "log-group",
      resourceName: webAclLogGroup.logGroupName,
    })
  ],
  resourceArn: aclArn // Arn of Acl
});
PS: I work for AWS on the CDK team.
I would love to be able to update an existing Lambda function via AWS CDK. I need to update the environment variable configuration. From what I can see this is not possible; is there something workable to make this happen?
I am using code like this to import the lambda:
const importedLambdaFromArn = lambda.Function.fromFunctionAttributes(
  this,
  'external-lambda-from-arn',
  {
    functionArn: 'my-arn',
    role: importedRole,
  }
);
For now, I have to manually alter a cloudformation template. Updating directly in cdk would be much nicer.
Yes, it is possible, although you should read @Allan_Chua's answer before actually doing it. Lambda's UpdateFunctionConfiguration API can modify a deployed function's environment variables. The CDK's AwsCustomResource construct lets us call that API during stack deployment.*
Let's say you want to set TABLE_NAME on a previously deployed lambda to the value of a DynamoDB table's name:
// MyStack.ts
const existingFunc = lambda.Function.fromFunctionArn(this, 'ImportedFunction', arn);

const table = new dynamo.Table(this, 'DemoTable', {
  partitionKey: { name: 'id', type: dynamo.AttributeType.STRING },
});

new cr.AwsCustomResource(this, 'UpdateEnvVar', {
  onCreate: {
    service: 'Lambda',
    action: 'updateFunctionConfiguration',
    parameters: {
      FunctionName: existingFunc.functionArn,
      Environment: {
        Variables: {
          TABLE_NAME: table.tableName,
        },
      },
    },
    physicalResourceId: cr.PhysicalResourceId.of('DemoTable'),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: [existingFunc.functionArn],
  }),
});
Under the hood, the custom resource creates a lambda that makes the UpdateFunctionConfiguration call using the JS SDK when the stack is created. There are also onUpdate and onDelete cases to handle.
* Again, whether this is a good idea or not depends on the use case. You could always call UpdateFunctionConfiguration without the CDK.
The main purpose of CDK is to enable AWS customers to automatically provision resources. If we're attempting to update settings of pre-existing resources that are managed by other CloudFormation stacks, it is better to update the variable in its parent CloudFormation template instead of CDK. This provides the following advantages:
There's a single source of truth for what the variable should look like.
There's no tug of war between the CDK and the CloudFormation template whenever an update is pushed from either source.
Otherwise, since this is a compute layer, just get rid of the Lambda function from CloudFormation and go full CDK altogether!
Hope this advice helps
If you are using AWS Amplify, the accepted answer will not work. Instead, you can do this by exporting a CloudFormation output from your custom resource stack and then referencing that output via an input parameter in the other stack.
With CDK
new CfnOutput(this, 'MyOutput', { value: 'MyValue' });
With CloudFormation Template
"Outputs": {
"MyOutput": {
"Value": "MyValue"
}
}
Add an input parameter to the cloudformation-template.json of the resource you want to reference your output value in:
"Parameters": {
"myInput": {
"Type": "String",
"Description": "A custom input"
},
}
Create a parameters.json file that passes the output to the input parameter:
{
  "myInput": {
    "Fn::GetAtt": ["customResource", "Outputs.MyOutput"]
  }
}
Finally, reference that input in your stack:
"Resources": {
"LambdaFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Environment": {
"Variables": {
"myEnvVar": {
"Ref": "myInput"
},
}
},
}
}
}
I have created an S3 bucket with CDK:
const test_bucket = new s3.Bucket(this, 'assets-bucket-id', {
  bucketName: 'assets-bucket-name',
  cors: [
    {
      allowedHeaders: ["*"],
      allowedMethods: [
        s3.HttpMethods.POST,
        s3.HttpMethods.PUT,
        s3.HttpMethods.GET,
      ],
      allowedOrigins: ["*"],
      exposedHeaders: [
        'x-amz-server-side-encryption',
        'x-amz-request-id',
        'x-amz-id-2',
        'ETag'
      ],
    }
  ],
})
However, I want to add protected, public, and private folders, since I'm using the bucket for Cognito uploads and those folders are required: https://docs.amplify.aws/lib/storage/configureaccess/q/platform/js/
Is there any way I can use the CDK S3 module to achieve that?
Thanks
As @Jarmod clarified in the comments, even though it's possible to use a Lambda or some scripts to automate the creation of folders upon resource creation by CDK (CDK has no native way of doing it at the time being), it's not needed for my use case.
The respective folders will be created upon successful upload.
I tested this by configuring the desired 'level' (e.g. 'protected', see docs.amplify.aws/lib/storage/configureaccess/q/platform/js), and a folder for that level is created automatically upon successful upload.
This AWS CloudFormation document suggests that it is possible to create an 'AWS::SSM::Document' resource with a DocumentType of 'Package'. However, the 'Content' required to achieve this remains a mystery.
Is it possible to create a Document of type 'Package' via CloudFormation, and if so, what is the equivalent of this valid CLI command written as a CloudFormation template (preferably in YAML)?
aws ssm create-document --name my-package --content "file://manifest.json" --attachments Key="SourceUrl",Values="s3://my-s3-bucket" --document-type Package
Failed attempt. The content used is an inline version of the manifest.json that was provided when using the CLI option. There doesn't seem to be an option to specify an attachments source when using CloudFormation:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Document:
    Type: AWS::SSM::Document
    Properties:
      Name: 'my-package'
      Content: !Sub |
        {
          "schemaVersion": "2.0",
          "version": "Auto-Generated-1579701261956",
          "packages": {
            "windows": {
              "_any": {
                "x86_64": {
                  "file": "my-file.zip"
                }
              }
            }
          },
          "files": {
            "my-file.zip": {
              "checksums": {
                "sha256": "sha...."
              }
            }
          }
        }
      DocumentType: Package
CloudFormation Error
AttachmentSource not provided in the input request. (Service: AmazonSSM; Status Code: 400; Error Code: InvalidParameterValueException;
Yes, this is possible! I've successfully created a resource with DocumentType: Package, and the package shows up in the SSM console under Distributor Packages after the stack succeeds.
Your YAML is almost there, but you also need to include the Attachments property that is now available.
Here is a working example:
AWSTemplateFormatVersion: "2010-09-09"
Description: Sample to create a Package type Document
Parameters:
  S3BucketName:
    Type: "String"
    Default: "my-sample-bucket-for-package-files"
    Description: "The name of the S3 bucket."
Resources:
  CrowdStrikePackage:
    Type: AWS::SSM::Document
    Properties:
      Attachments:
        - Key: "SourceUrl"
          Values:
            - !Sub "s3://${S3BucketName}"
      Content: !Sub |
        {
          "schemaVersion": "2.0",
          "version": "1.0",
          "packages": {
            "windows": {
              "_any": {
                "_any": {
                  "file": "YourZipFileName.zip"
                }
              }
            }
          },
          "files": {
            "YourZipFileName.zip": {
              "checksums": {
                "sha256": "7981B430E8E7C45FA1404FE6FDAB8C3A21BBCF60E8860E5668395FC427CE7070"
              }
            }
          }
        }
      DocumentFormat: "JSON"
      DocumentType: "Package"
      Name: "YourPackageNameGoesHere"
      TargetType: "/AWS::EC2::Instance"
Note: for the Attachments property, you must use the SourceUrl key when using DocumentType: Package. The creation process appends a "/" to this S3 bucket URL and concatenates it with each file name listed in the manifest (the Content property) when it creates the package.
It seems there is no direct way to create an SSM Document with an attachment via CloudFormation (CFN). As a workaround, you can use a Lambda-backed custom resource: a Lambda calls the SDK API to create the SSM Document, and a Custom Resource in CFN invokes that Lambda.
Some notes on implementing this solution:
How to invoke a Lambda from CFN: Is it possible to trigger a lambda on creation from CloudFormation template
Sample of a Lambda sending the response format (when using a Custom Resource in CFN): https://github.com/stelligent/cloudformation-custom-resources
To deploy the Lambda with best practices and easily upload the attachment and Document content from local, you should use sam deploy instead of CFN create-stack.
You can return information about the newly created resource from the Lambda to CFN by adding the resource details to the data JSON in the response the Lambda sends back; CFN can then use it with !GetAtt CustomResrc.Attribute. You can find more detail here.
There are some drawbacks to this solution:
It adds complexity to the original solution, as you have to create resources for the Lambda execution (an S3 bucket to deploy the Lambda, a role for Lambda execution that can assume the SSM actions, the SSM content file - or you have to use a 'long' inline content). It won't be a one-call CFN create-stack anymore. However, you can put everything into the SAM template because, at the end of the day, it's just a CFN template.
When deleting the CFN stack, you have to implement the Lambda's RequestType == Delete branch to clean up your resources.
PS: If you don't have to work strictly on CFN, then you can try with Terraform: https://www.terraform.io/docs/providers/aws/r/ssm_document.html