How to add S3 BucketPolicy with AWS CDK? - amazon-web-services

I want to translate this piece of CloudFormation into CDK:
Type: AWS::S3::BucketPolicy
Properties:
  Bucket:
    Ref: S3BucketImageUploadBuffer
  PolicyDocument:
    Version: "2012-10-17"
    Statement:
      - Action:
          - s3:PutObject
          - s3:PutObjectAcl
        Effect: Allow
        Resource:
          - ...
Looking at the documentation here, I don't see a way to provide the policy document itself.

This is an example from a working CDK stack:
artifactBucket.addToResourcePolicy(
  new PolicyStatement({
    resources: [
      this.pipeline.artifactBucket.arnForObjects("*"),
      this.pipeline.artifactBucket.bucketArn,
    ],
    actions: ["s3:List*", "s3:Get*"],
    principals: [new ArnPrincipal(this.deploymentRole.roleArn)],
  })
);

Building on @Thomas Wagner's answer, this is how I did it. I was trying to limit access to the bucket to a given IP range:
import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';
import * as s3Deployment from '@aws-cdk/aws-s3-deployment';
import * as iam from '@aws-cdk/aws-iam';

export class StaticSiteStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Bucket where the frontend site goes.
    const mySiteBucket = new s3.Bucket(this, 'mySiteBucket', {
      websiteIndexDocument: "index.html"
    });

    let ipLimitPolicy = new iam.PolicyStatement({
      actions: ['s3:Get*', 's3:List*'],
      resources: [mySiteBucket.arnForObjects('*')],
      principals: [new iam.AnyPrincipal()]
    });
    ipLimitPolicy.addCondition('IpAddress', {
      "aws:SourceIp": ['1.2.3.4/22']
    });

    // Allow connections from my CIDR
    mySiteBucket.addToResourcePolicy(ipLimitPolicy);

    // Deploy assets
    const mySiteDeploy = new s3Deployment.BucketDeployment(this, 'deployAdminSite', {
      sources: [s3Deployment.Source.asset("./mysite")],
      destinationBucket: mySiteBucket
    });
  }
}
I was able to use the bucket's arnForObjects() and iam.AnyPrincipal() helper functions rather than specifying ARNs or principals directly.
The assets I want to deploy to the bucket are kept in a directory called mysite at the root of my project and are referenced via a call to s3Deployment.BucketDeployment. This can be any directory your build process has access to, of course.

The CDK does this a little differently. I believe you are supposed to use bucket.addToResourcePolicy, as documented here.
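For the policy in the question, that looks roughly like the sketch below. The construct IDs and the ArnPrincipal ARN are placeholders, since the original statement elides its Resource and does not show a Principal:
import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as s3 from '@aws-cdk/aws-s3';

export class ImageUploadBufferStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const imageUploadBuffer = new s3.Bucket(this, 'S3BucketImageUploadBuffer');

    // Rough equivalent of the PolicyDocument statement from the question.
    imageUploadBuffer.addToResourcePolicy(
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['s3:PutObject', 's3:PutObjectAcl'],
        resources: [imageUploadBuffer.arnForObjects('*')],
        // Placeholder principal: the original snippet shows no Principal, so
        // supply whichever principal should be allowed to upload.
        principals: [new iam.ArnPrincipal('arn:aws:iam::123456789012:role/uploader')],
      })
    );
  }
}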

As per the original question, the answer from @thomas-wagner is the way to go.
If anyone comes here looking for how to create the bucket policy for a CloudFront Distribution without creating a dependency on a bucket, you need to use the L1 construct CfnBucketPolicy (rough C# example below):
IOriginAccessIdentity originAccessIdentity = new OriginAccessIdentity(this, "origin-access-identity", new OriginAccessIdentityProps
{
    Comment = "Origin Access Identity",
});

PolicyStatement bucketAccessPolicy = new PolicyStatement(new PolicyStatementProps
{
    Effect = Effect.ALLOW,
    Principals = new[]
    {
        originAccessIdentity.GrantPrincipal
    },
    Actions = new[]
    {
        "s3:GetObject",
    },
    Resources = new[]
    {
        Props.OriginBucket.ArnForObjects("*"),
    }
});

_ = new CfnBucketPolicy(this, $"bucket-policy", new CfnBucketPolicyProps
{
    Bucket = Props.OriginBucket.BucketName,
    PolicyDocument = new PolicyDocument(new PolicyDocumentProps
    {
        Statements = new[]
        {
            bucketAccessPolicy,
        },
    }),
});
Where Props.OriginBucket is an instance of IBucket (just a bucket).
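For anyone wanting the same thing in TypeScript, a roughly equivalent sketch (originBucket stands in for Props.OriginBucket; the scoped @aws-cdk import paths match the ones used elsewhere on this page and are an assumption about your CDK version):
import * as cdk from '@aws-cdk/core';
import * as cloudfront from '@aws-cdk/aws-cloudfront';
import * as iam from '@aws-cdk/aws-iam';
import * as s3 from '@aws-cdk/aws-s3';

// `originBucket` stands in for Props.OriginBucket from the C# example.
export function addOriginBucketPolicy(scope: cdk.Construct, originBucket: s3.IBucket): void {
  const originAccessIdentity = new cloudfront.OriginAccessIdentity(scope, 'origin-access-identity', {
    comment: 'Origin Access Identity',
  });

  const bucketAccessPolicy = new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    principals: [originAccessIdentity.grantPrincipal],
    actions: ['s3:GetObject'],
    resources: [originBucket.arnForObjects('*')],
  });

  // L1 CfnBucketPolicy: references the bucket by name only, so no dependency
  // on an L2 Bucket construct is introduced.
  new s3.CfnBucketPolicy(scope, 'bucket-policy', {
    bucket: originBucket.bucketName,
    policyDocument: new iam.PolicyDocument({
      statements: [bucketAccessPolicy],
    }),
  });
}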

Related

AWS CDK grant decrypt permission to Kinesis Data Stream's AWS managed CMK

I'm provisioning a Kinesis Data Stream encrypted with the AWS managed KMS key, plus a delivery stream that reads from it. The problem is how to add a decrypt policy for that managed key to the delivery stream role. The code is shown below; fetching the key with the 'aws/kinesis' alias doesn't work unless I can add a dependency on the 'kinesisStream' resource, but there is no 'addDependsOn' method on the IKey interface. How can I ensure that the stream (and its managed KMS key) is created before I try to fetch that key?
const kinesisStream = new kinesis.Stream(this, 'kinesisStream', {
  streamName: `my-stream`,
  shardCount: 1,
  encryption: kinesis.StreamEncryption.MANAGED,
  retentionPeriod: cdk.Duration.days(1),
});

const kinesisStreamRole = new iam.Role(this, 'kinesisStreamRole', {
  assumedBy: new iam.ServicePrincipal('firehose.amazonaws.com'),
});

// How to add a dependency on the kinesisStream resource to ensure it's created
// before trying to fetch the KMS key using 'fromLookup'?
// Currently getting:
// [Error at /my-stack] Could not find any key with alias named aws/kinesis
const managedKinesisKmsKey = kms.Key.fromLookup(this, 'managedKinesisKmsKey', {
  aliasName: 'aws/kinesis',
});

const managedKinesisKmsKeyPolicy = new iam.Policy(this, 'managedKinesisKmsKeyPolicy', {
  roles: [kinesisStreamRole],
  statements: [
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      resources: [managedKinesisKmsKey.keyArn],
      actions: ['kms:Decrypt'],
    }),
  ],
});
You can use the key alias to grant access to this AWS managed key. The alias for the Kinesis service-specific AWS managed key is "aws/kinesis".
AWS developer guide for using aliases to control access to KMS keys: https://docs.aws.amazon.com/kms/latest/developerguide/alias-authorization.html
Working solution
const kinesisStream = new kinesis.Stream(this, 'kinesisStream', {
  streamName: `my-stream`,
  shardCount: 1,
  encryption: kinesis.StreamEncryption.MANAGED,
  retentionPeriod: cdk.Duration.days(1),
});

const kinesisStreamRole = new iam.Role(this, 'kinesisStreamRole', {
  assumedBy: new iam.ServicePrincipal('firehose.amazonaws.com'),
});

const managedKinesisKmsKeyPolicy = new iam.Policy(this, 'managedKinesisKmsKeyPolicy', {
  roles: [kinesisStreamRole],
  statements: [
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      resources: ['*'],
      actions: ['kms:Decrypt'],
      conditions: {
        StringLike: {
          'kms:RequestAlias': 'aws/kinesis',
        },
      },
    }),
  ],
});

Unable to configure CloudFront distribution with S3 origin and Origin Access Control using AWS CDK

I am trying to set up a CloudFront distribution with an S3 bucket as the origin. I have added a policy to the bucket, created an Origin Access Control, and assigned it to the bucket, but when I try to deploy I get the error "Invalid request provided: Illegal configuration: The origin type and OAC origin type differ." Here's my code:
// S3 bucket
export class S3Bucket extends Bucket {
  constructor(scope: Construct) {
    super(scope, S3_BUCKET_NAME, {
      websiteIndexDocument: 'index.html',
      blockPublicAccess: BlockPublicAccess.BLOCK_ALL
    });
  }
}

// CloudFront Distribution
export class CloudFrontDistribution extends cloudfront.Distribution {
  constructor(scope: Construct, bucket: Bucket) {
    super(scope, CLOUD_FRONT_DISTRIBUTION_NAME, {
      defaultBehavior: {
        origin: new S3Origin(bucket),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
        compress: true
      }
    });

    const oac = new cloudfront.CfnOriginAccessControl(this, 'MyOriginAccessControl', {
      originAccessControlConfig: {
        name: 'MyOriginAccessControl',
        originAccessControlOriginType: 's3',
        signingBehavior: 'always',
        signingProtocol: 'sigv4'
      }
    });

    const allowOriginAccessIdentityPolicy = new PolicyStatement({
      actions: ['s3:GetObject'],
      principals: [new ServicePrincipal(this.distributionId)],
      effect: Effect.ALLOW,
      resources: [oac.attrId]
    });

    const allowCloudFrontReadOnlyPolicy = new PolicyStatement({
      actions: ['s3:GetObject'],
      principals: [new ServicePrincipal('cloudfront.amazonaws.com')],
      effect: Effect.ALLOW,
      conditions: {
        'StringEquals': {
          "AWS:SourceArn": this.distributionId
        }
      }
    });

    bucket.addToResourcePolicy(allowCloudFrontReadOnlyPolicy);
    bucket.addToResourcePolicy(allowOriginAccessIdentityPolicy);

    const cfnDistribution = this.node.defaultChild as cloudfront.CfnDistribution;
    cfnDistribution.addPropertyOverride(
      'DistributionConfig.Origins.0.OriginAccessControlId',
      oac.getAtt('Id')
    );
  }
}
In the console I can see that the wrong origin domain is set: it is the bucket website endpoint, which does not allow attaching an Origin Access Control.
When I change it to the S3 REST API endpoint, the OAC appears.
How do I change this in CDK?
I've figured this out. I read the CDK source code, and it looks like when you create your bucket with
websiteIndexDocument: 'index.html'
the CDK enables website hosting for the bucket automatically under the hood. It also uses the bucket's website endpoint as the origin domain (which was my problem). If website hosting is not enabled, it instead creates an Origin Access Identity, adds a bucket policy that allows access only from that OAI, and uses the bucket's regional domain name as the origin's domain. The solution was to remove
websiteIndexDocument: 'index.html'
and block public access to the bucket with
blockPublicAccess: BlockPublicAccess.BLOCK_ALL
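In code, the corrected bucket class from the question looks roughly like this sketch (the aws-cdk-lib import paths assume CDK v2, and S3_BUCKET_NAME is a placeholder for the question's constant):
import { Construct } from 'constructs';
import { BlockPublicAccess, Bucket } from 'aws-cdk-lib/aws-s3';

const S3_BUCKET_NAME = 'S3Bucket'; // placeholder for the same constant used in the question

// Same bucket class as in the question, minus website hosting: S3Origin then uses
// the bucket's regional (REST API) domain name, so the OriginAccessControl override applies.
export class S3Bucket extends Bucket {
  constructor(scope: Construct) {
    super(scope, S3_BUCKET_NAME, {
      // websiteIndexDocument removed - no website hosting for this bucket
      blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
    });
  }
}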

Delete code when syncing GitHub source code to S3 with CDK

I am trying to use CodePipeline in the AWS CDK to automatically deploy code from a GitHub source to an S3 bucket. The code is as follows:
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';
import * as s3 from '@aws-cdk/aws-s3';
import { Construct, Stack, StackProps } from '@aws-cdk/core';

export class S3PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props: StackProps = {}) {
    super(scope, id, props);

    const dagsBucket = s3.Bucket.fromBucketName(this, 'my-bucket', `test-bucket`);

    const pipeline = new codepipeline.Pipeline(this, 'my-s3-pipeline', {
      pipelineName: 'MyS3Pipeline',
    });

    const sourceOutput = new codepipeline.Artifact();
    const sourceAction = new codepipeline_actions.CodeStarConnectionsSourceAction({
      actionName: 'Source',
      owner: '***',
      repo: '***',
      connectionArn: 'arn:aws:***',
      output: sourceOutput,
      branch: 'master',
    });

    const deployAction = new codepipeline_actions.S3DeployAction({
      actionName: 'S3Deploy',
      bucket: dagsBucket,
      input: sourceOutput,
    });

    pipeline.addStage({
      stageName: 'Source',
      actions: [sourceAction],
    });

    pipeline.addStage({
      stageName: 'Deploy',
      actions: [deployAction],
    });
  }
}
This code works, but the S3 bucket only adds or changes files when the source changes in GitHub; nothing is ever deleted from the bucket when files are deleted from the source.
I also found a note about this in the AWS docs.
Another possible solution is s3deploy.BucketDeployment, but again it doesn't support a Git source; it can only take a source from a local directory or another S3 bucket.
So does anybody know the right way to sync GitHub and an S3 bucket so that adds, changes, and deletes from the source are all reflected?

How to limit SSM based on user starting the command

I have an EC2 instance that I connect to using AWS Systems Manager. The instance has the AmazonSSMManagedInstanceCore policy attached to its role, and I am able to use ssm start-session from the CLI.
Without adding permissions to the users themselves, am I able to limit which users are allowed to initiate a session to the instances?
I have tried adding a second policy to the instances that denies ssm:StartSession (which works when I apply it with no condition) with a condition containing aws:userid or aws:ssmmessages:session-id, but neither of these blocked access.
I am using federated users in this account.
Below is an example of the most recent policy, which attempts to block access for one specific email address but not others (and does not work).
const myPolicy = new ManagedPolicy(this, "sendAndBlockPolicy", {
  statements: [
    new PolicyStatement({
      sid: "AllowSendCommand",
      effect: Effect.ALLOW,
      resources: [`arn:aws:ec2:${Aws.REGION}:${Aws.ACCOUNT_ID}:*`],
      actions: ["ssm:SendCommand"],
    }),
    new PolicyStatement({
      sid: "blockUsers",
      effect: Effect.DENY,
      resources: ["*"],
      actions: ["ssm:*", "ssmmessages:*", "ec2messages:*"],
      conditions: {
        StringLike: {
          "aws:ssmmessages:session-id":
            "ABCDEFGHIJKLMNOPQRSTUV:me@email.com",
        },
      },
    }),
  ],
});

const managedSSMPolicy = ManagedPolicy.fromAwsManagedPolicyName(
  "AmazonSSMManagedInstanceCore",
);

const role = new Role(this, 'ec2Role', {
  assumedBy: new ServicePrincipal('ec2.amazonaws.com'),
  managedPolicies: [managedSSMPolicy, myPolicy],
});

AWS Textract StartDocumentAnalysis function not publishing a message to the SNS Topic

I am working with AWS Textract and I want to analyze a multipage document, so I have to use the async API. I first called the startDocumentAnalysis function and got a JobId back, but it is also supposed to publish a message to an SNS topic, and I have a function set to trigger when that topic receives a message.
These are my serverless file and handler file.
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: { "Fn::Join": ["", ["arn:aws:s3:::${self:custom.secrets.IMAGE_BUCKET_NAME}", "/*" ] ] }
    - Effect: "Allow"
      Action:
        - "sts:AssumeRole"
        - "SNS:Publish"
        - "lambda:InvokeFunction"
        - "textract:DetectDocumentText"
        - "textract:AnalyzeDocument"
        - "textract:StartDocumentAnalysis"
        - "textract:GetDocumentAnalysis"
      Resource: "*"

custom:
  secrets: ${file(secrets.${opt:stage, self:provider.stage}.yml)}

functions:
  routes:
    handler: src/functions/routes/handler.run
    events:
      - s3:
          bucket: ${self:custom.secrets.IMAGE_BUCKET_NAME}
          event: s3:ObjectCreated:*
  textract:
    handler: src/functions/routes/handler.detectTextAnalysis
    events:
      - sns: "TextractTopic"

resources:
  Resources:
    TextractTopic:
      Type: AWS::SNS::Topic
      Properties:
        DisplayName: "Start Textract API Response"
        TopicName: TextractResponseTopic
Handler.js
const AWS = require('aws-sdk');
const textract = new AWS.Textract();

module.exports.run = async (event) => {
  const uploadedBucket = event.Records[0].s3.bucket.name;
  const uploadedObject = event.Records[0].s3.object.key;

  const params = {
    DocumentLocation: {
      S3Object: {
        Bucket: uploadedBucket,
        Name: uploadedObject
      }
    },
    FeatureTypes: [
      "TABLES",
      "FORMS"
    ],
    NotificationChannel: {
      RoleArn: 'arn:aws:iam::<account-id>:role/qvalia-ocr-solution-dev-us-east-1-lambdaRole',
      SNSTopicArn: 'arn:aws:sns:us-east-1:<account-id>:TextractTopic'
    }
  };

  const textractOutput = await new Promise((resolve, reject) => {
    textract.startDocumentAnalysis(params, function (err, data) {
      if (err) reject(err);
      else resolve(data);
    });
  });
};
I manually published an SNS message to the topic and it fires the textract Lambda, which currently has this:
module.exports.detectTextAnalysis = async (event) => {
  console.log('SNS Topic isssss Generated');
  console.log(event.Records[0].Sns.Message);
};
What is my mistake, and why is startDocumentAnalysis not publishing a message and triggering the Lambda?
Note: I haven't used startDocumentTextDetection before calling startDocumentAnalysis, though it should not be necessary to call it first.
Make sure the role you are using has the following in its trust relationships:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "textract.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The SNS topic name must begin with AmazonTextract.
In the end your ARN should look like this:
arn:aws:sns:us-east-2:111111111111:AmazonTextract
I was able to get this working directly via the Serverless Framework by adding a Lambda execution role resource to my serverless.yml file:
resources:
  Resources:
    IamRoleLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
                  - textract.amazonaws.com
              Action: sts:AssumeRole
And then I just used the same role generated by Serverless (for the lambda function) as the notification channel role parameter when starting the Textract document analysis:
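That snippet isn't shown here, but it looks roughly like the sketch below. TEXTRACT_ROLE_ARN, TEXTRACT_TOPIC_ARN, and the bucket/object names are assumed placeholders, with the role ARN being that of the IamRoleLambdaExecution role from the resources block above:
import { Textract } from 'aws-sdk';

const textract = new Textract();

export const startAnalysis = async (): Promise<void> => {
  await textract
    .startDocumentAnalysis({
      DocumentLocation: {
        // Placeholder bucket and object names.
        S3Object: { Bucket: 'my-image-bucket', Name: 'my-document.pdf' },
      },
      FeatureTypes: ['TABLES', 'FORMS'],
      NotificationChannel: {
        // The Lambda execution role generated by Serverless, which now also
        // trusts textract.amazonaws.com, passed in via an environment variable.
        RoleArn: process.env.TEXTRACT_ROLE_ARN!,
        SNSTopicArn: process.env.TEXTRACT_TOPIC_ARN!,
      },
    })
    .promise();
};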
Thanks to this post for pointing me in the right direction!
For anyone using the CDK in TypeScript, you will need to add Lambda as a ServicePrincipal as usual to the Lambda Execution Role. Next, access the assumeRolePolicy of the execution role and call the addStatements method.
The basic execution role without any additional statement (add those later)
this.executionRole = new iam.Role(this, 'ExecutionRole', {
  assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
Next, add Textract as an additional ServicePrincipal
this.executionRole.assumeRolePolicy?.addStatements(
  new PolicyStatement({
    principals: [
      new ServicePrincipal('textract.amazonaws.com'),
    ],
    actions: ['sts:AssumeRole'],
  })
);
Also, ensure the execution role has full permissions on the target SNS topic (note that the topic is created already and accessed via the fromTopicArn method):
const stmtSNSOps = new PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: [
    "SNS:*"
  ],
  resources: [
    this.textractJobStatusTopic.topicArn
  ]
});
Add the policy statement to a global policy (within the active stack)
this.standardPolicy = new iam.Policy(this, 'Policy', {
  statements: [
    ...
    stmtSNSOps,
    ...
  ]
});
Finally, attach the policy to the execution role
this.executionRole.attachInlinePolicy(this.standardPolicy);
If your bucket is encrypted, you should also grant KMS permissions; otherwise it won't work.
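A minimal sketch of that, assuming the bucket's key is available as bucketKey (a hypothetical kms.IKey reference) and the execution role is the one shown above:
// `bucketKey` is an assumed reference to the bucket's KMS key, e.g. obtained via kms.Key.fromKeyArn(...).
const bucketKey = kms.Key.fromKeyArn(this, 'BucketKey', 'arn:aws:kms:us-east-1:111111111111:key/<key-id>');

// Allow the Lambda execution role to decrypt objects in the KMS-encrypted bucket.
bucketKey.grantDecrypt(this.executionRole);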