Does anyone know if I can use a wildcard to give access to everything within an S3 bucket, instead of adding every location explicitly like I am currently doing?
const policyDoc = new PolicyDocument({
  statements: [
    new PolicyStatement({
      sid: 'Grant role to read/write to S3 bucket',
      resources: [
        `${this.attrArn}`,
        `${this.attrArn}/*`,
        `${this.attrArn}/emailstore`,
        `${this.attrArn}/emailstore/*`,
        `${this.attrArn}/attachments`,
        `${this.attrArn}/attachments/*`
      ],
      actions: ['s3:*'],
      effect: Effect.ALLOW,
      principals: props.allowedArnPrincipals
    })
  ]
});
You should be able to use:
resources: [
  `${this.attrArn}`,
  `${this.attrArn}/*`
],
The first entry gives permission for actions on the bucket itself (e.g. s3:ListBucket), while the /* entry gives permission for actions on the objects inside the bucket (e.g. s3:GetObject).
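If you are working with the higher-level Bucket construct (rather than the L1 attrArn shown above), you can also let CDK build those two ARNs for you. A minimal sketch, assuming a bucket and a role defined elsewhere in the stack (the names here are illustrative, not part of the original question):
import { Bucket } from 'aws-cdk-lib/aws-s3';
import { Role } from 'aws-cdk-lib/aws-iam';

declare const bucket: Bucket;  // assumed to exist elsewhere
declare const role: Role;      // assumed to exist elsewhere

// grantReadWrite adds both the bucket ARN and bucketArn + '/*'
// to the generated policy, covering the bucket and every object key in it.
bucket.grantReadWrite(role);

// The same two ARNs, if you prefer to list the resources yourself:
const resources = [bucket.bucketArn, bucket.arnForObjects('*')];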
I have created an S3 bucket with CDK:
const test_bucket = new s3.Bucket(this, 'assets-bucket-id', {
  bucketName: 'assets-bucket-name',
  cors: [
    {
      allowedHeaders: ["*"],
      allowedMethods: [
        s3.HttpMethods.POST,
        s3.HttpMethods.PUT,
        s3.HttpMethods.GET,
      ],
      allowedOrigins: ["*"],
      exposedHeaders: [
        'x-amz-server-side-encryption',
        'x-amz-request-id',
        'x-amz-id-2',
        'ETag'
      ],
    }
  ],
})
However, I want to add protected, public, and private folders, since I am using the bucket for Cognito uploads and those folders are required: https://docs.amplify.aws/lib/storage/configureaccess/q/platform/js/
Is there any way I can use the CDK S3 module to achieve that?
Thanks.
As Jarmod clarified in the comments, even though it is possible to use Lambda or some scripts to automate the creation of folders when CDK creates the resource (CDK has no native way of doing it at the time being), it is not needed for my use case.
The respective folders will be created upon successful upload.
I tested this by configuring the desired 'level' (e.g. 'protected'), as described in docs.amplify.aws/lib/storage/configureaccess/q/platform/js, and a folder named after the level is created automatically upon successful upload.
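For anyone who does want the empty prefixes to exist before the first upload, one option is a small custom resource that writes a zero-byte placeholder key at deploy time. This is only a sketch, assuming CDK v2 (aws-cdk-lib) and the test_bucket from the question; the construct id and key are illustrative:
import * as cr from 'aws-cdk-lib/custom-resources';

// Writes an empty 'protected/' key so the prefix shows up as a folder in the console.
new cr.AwsCustomResource(this, 'CreateProtectedFolder', {
  onCreate: {
    service: 'S3',
    action: 'putObject',
    parameters: {
      Bucket: test_bucket.bucketName,
      Key: 'protected/',
    },
    physicalResourceId: cr.PhysicalResourceId.of('protected-folder'),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: [test_bucket.arnForObjects('*')],
  }),
});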
let servicePrincipal: any = new iam.ServicePrincipal("lambda.amazonaws.com");

let policyDoc = new iam.PolicyDocument({
  statements: [
    new iam.PolicyStatement({
      actions: ["sts:AssumeRole"],
      principals: [servicePrincipal],
      effect: iam.Effect.ALLOW,
      resources: ["arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"],
      sid: ""
    })
  ],
});

let accessRole: any = new iam.Role(this, 'git-access-role', {
  assumedBy: servicePrincipal,
  inlinePolicies: { policyDoc }
});
I'm creating a CDK Lambda with a role that has AWSLambdaBasicExecutionRole, but I get an error saying:
A PolicyStatement used in an identity-based policy cannot specify any IAM principals
I'm not quite sure what this means. What should I do?
Looks like you're trying to generate the assume-role (trust) policy with policyDoc. The assumedBy: servicePrincipal line already generates the trust policy automatically. Also note that AWSLambdaBasicExecutionRole is a managed policy, not something you can list as a resource in a statement. If all you want to do is attach the Lambda basic execution policy to the role, then it should look like this:
const accessRole = new iam.Role(this, 'git-access-role', {
  assumedBy: new iam.ServicePrincipal("lambda.amazonaws.com"),
  managedPolicies: [iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSLambdaBasicExecutionRole')]
});
If the Lambda needs access to git, as the construct id of the role seems to indicate, you can add those permissions as inline policies. But this code creates a role that is assumable by a Lambda function and has the most basic permissions a Lambda needs to run.
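As a rough sketch of what adding such permissions could look like (the CodeCommit action and repository ARN here are assumptions for illustration, not part of the original answer):
// Grant the role pull access to a (hypothetical) CodeCommit repository.
accessRole.addToPolicy(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: ['codecommit:GitPull'],
  resources: ['arn:aws:codecommit:us-east-1:123456789012:my-repo'],
}));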
I have an EC2 instance that I am connecting to using AWS Systems Manager; the instance has a role with AmazonSSMManagedInstanceCore attached, and I am able to use aws ssm start-session from the CLI.
Without adding permissions to the users themselves, am I able to limit which users are allowed to initiate a session to the EC2 instances?
I have tried adding a second policy to the EC2 instances that blocks access to ssm:StartSession (which works when I apply it with no condition), using a condition containing aws:userid and aws:ssmmessages:session-id, but neither of these blocked access.
I am using federated users in this account.
Below is an example of the most recent policy, attempting to block access for that specific user's email but not others (which does not work).
const myPolicy = new ManagedPolicy(this, "sendAndBlockPoicy", {
  statements: [
    new PolicyStatement({
      sid: "AllowSendCommand",
      effect: Effect.ALLOW,
      resources: [`arn:aws:ec2:${Aws.REGION}:${Aws.ACCOUNT_ID}:*`],
      actions: ["ssm:SendCommand"],
    }),
    new PolicyStatement({
      sid: "blockUsers",
      effect: Effect.DENY,
      resources: ["*"],
      actions: ["ssm:*", "ssmmessages:*", "ec2messages:*"],
      conditions: {
        StringLike: {
          "aws:ssmmessages:session-id":
            "ABCDEFGHIJKLMNOPQRSTUV:me#email.com",
        },
      },
    }),
  ],
});

const managedSSMPolicy = ManagedPolicy.fromAwsManagedPolicyName(
  "AmazonSSMManagedInstanceCore",
);

const role = new Role(this, 'ec2Role', {
  assumedBy: new ServicePrincipal('ec2.amazonaws.com'),
  managedPolicies: [managedSSMPolicy, myPolicy],
});
I would like to create an S3 bucket that is configured to work as a website, and I would like to restrict access to the S3 website to requests coming from inside a particular VPC only.
I am using CloudFormation to set up the bucket and the bucket policy.
The bucket's CloudFormation resource has WebsiteConfiguration enabled and AccessControl set to PublicRead:
ContentStorageBucket:
  Type: AWS::S3::Bucket
  Properties:
    AccessControl: PublicRead
    BucketName: "bucket-name"
    WebsiteConfiguration:
      IndexDocument: index.html
      ErrorDocument: error.html
The bucket policy includes two statements: one grants full access to the bucket from the office IP, and the other grants access through a VPC endpoint. The code is as follows:
ContentStorageBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref ContentStorageBucket
    PolicyDocument:
      Id: BucketPolicy
      Version: '2012-10-17'
      Statement:
        - Sid: FullAccessFromParticularIP
          Action:
            - s3:*
          Effect: "Allow"
          Resource:
            - !GetAtt [ ContentStorageBucket, Arn ]
            - Fn::Join:
                - '/'
                - - !GetAtt [ ContentStorageBucket, Arn ]
                  - '*'
          Principal: "*"
          Condition:
            IpAddress:
              aws:SourceIp: "x.x.x.x"
        - Sid: FullAccessFromInsideVpcEndpoint
          Action:
            - s3:*
          Effect: "Allow"
          Resource:
            - !GetAtt [ ContentStorageBucket, Arn ]
            - Fn::Join:
                - '/'
                - - !GetAtt [ ContentStorageBucket, Arn ]
                  - '*'
          Principal: "*"
          Condition:
            StringEquals:
              aws:sourceVpce: "vpce-xxxx"
To test the above policy conditions, I have done the following:
I've added a file called json.json to the S3 bucket;
I've created an EC2 instance and placed it inside the VPC referenced in the bucket policy;
I've made a curl request to the file endpoint http://bucket-name.s3-website-us-east-1.amazonaws.com/json.json from the whitelisted IP address, and the request succeeds;
I've made a curl request to the file endpoint from inside the EC2 instance (placed in the VPC), and the request fails with a 403 Access Denied.
Notes:
I have made sure that the EC2 instance is in the correct VPC.
The aws:sourceVpce condition is not using the VPC ID; it is using the Endpoint ID of the corresponding VPC endpoint.
I have also used aws:sourceVpc with the VPC ID, instead of aws:sourceVpce with the endpoint ID, but this produced the same result as the one mentioned above.
Given this, I currently am not sure how to proceed in further debugging this. Do you have any suggestions about what might be the problem? Please let me know if the question is not clear or anything needs clarification. Thank you for your help!
In order for resources to use the VPC endpoint for S3, the VPC router must point all traffic destined for S3 to the VPC endpoint. Rather than maintaining the list of S3-specific CIDR blocks on your own, AWS lets you use prefix lists, which are a first-class resource in AWS.
To find the prefix list for S3, run the following command, using the region of your VPC (your output should match mine, since this should be the same region-wide across all accounts, but it's best to check).
aws ec2 describe-prefix-lists --region us-east-1
I get the following output:
{
    "PrefixLists": [
        {
            "Cidrs": [
                "54.231.0.0/17",
                "52.216.0.0/15"
            ],
            "PrefixListId": "pl-63a5400a",
            "PrefixListName": "com.amazonaws.us-east-1.s3"
        },
        {
            "Cidrs": [
                "52.94.0.0/22",
                "52.119.224.0/20"
            ],
            "PrefixListId": "pl-02cd2c6b",
            "PrefixListName": "com.amazonaws.us-east-1.dynamodb"
        }
    ]
}
For com.amazonaws.us-east-1.s3, the prefix list id is pl-63a5400a, so you can then create a route in whichever route table services the subnet in question. The Destination should be the prefix list (pl-63a5400a), and the target should be the VPC endpoint ID (vpce-XXXXXXXX), which you can find with aws ec2 describe-vpc-endpoints.
This is trivial from the console. I don't remember how to do it from the command line; I think you have to send a --cli-input-json with something like the below, but I haven't tested it, so this is left as an exercise for the reader.
{
    "DestinationPrefixListId": "pl-63a5400a",
    "GatewayId": "vpce-12345678",
    "RouteTableId": "rtb-90123456"
}
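As an alternative to crafting the route yourself, the S3 gateway endpoint can manage the prefix-list routes for you: associating route tables with the endpoint (for example with aws ec2 modify-vpc-endpoint --add-route-table-ids, or by defining the endpoint against the VPC in CDK) adds the pl-xxxx route automatically. A minimal CDK sketch, assuming an existing vpc construct (the construct id is illustrative):
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Creates a gateway endpoint for S3 and installs the prefix-list route
// in the route tables of the selected subnets automatically.
vpc.addGatewayEndpoint('S3Endpoint', {
  service: ec2.GatewayVpcEndpointAwsService.S3,
});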
I've been surprised to find out that file deletion was not replicated in an S3 bucket Cross-Region Replication situation, running this simple test:
simplest configuration of a CRR
upload a new file
check it is replicated
delete the file (not a version of the file)
So I checked the documentation and I found this statement:
If you delete an object from the source bucket, the following occurs:
If you make a DELETE request without specifying an object version ID, Amazon S3 adds a delete marker. Amazon S3 deals with the delete marker as follows:
If you are using the latest version of the replication configuration, that is, you specify the Filter element in a replication configuration rule, Amazon S3 does not replicate the delete marker.
If you don't specify the Filter element, Amazon S3 assumes the replication configuration is a prior version, V1. In the earlier version, Amazon S3 handled replication of delete markers differently. For more information, see Backward Compatibility.
The latter link to backward compatibility tells me that:
When you delete an object from your source bucket without specifying an object version ID, Amazon S3 adds a delete marker. If you use V1 of the replication configuration XML, Amazon S3 replicates delete markers that resulted from user actions.[...]
In V2, Amazon S3 doesn't replicate delete markers and therefore you must set the DeleteMarkerReplication element to Disabled.
So if I sum this up:
CRR configuration is considered v1 if there is no Filter
with CRR configuration V1, file deletion (the delete marker) is replicated; with V2 it is not
Well, this is my configuration:
{
    "ReplicationConfiguration": {
        "Role": "arn:aws:iam::271226720751:role/service-role/s3crr_role_for_mybucket_to_myreplica",
        "Rules": [
            {
                "ID": "first replication rule",
                "Status": "Enabled",
                "Destination": {
                    "Bucket": "arn:aws:s3:::myreplica"
                }
            }
        ]
    }
}
And deletion is not replicated. So it makes me think that my configuration is still considered V2 (even if I have no filter).
So, can someone confirm this presumption?
And could someone tell me what this statement really means:
In V2, Amazon S3 doesn't replicate delete markers and therefore you must set the DeleteMarkerReplication element to Disabled
There are two different configurations for replicating delete markers: V1 and V2.
Currently, when you enable S3 Replication (CRR or SRR) from the console, the V2 configuration is enabled by default. However, if your use case requires you to delete replicated objects whenever they are deleted from the source bucket, you need the V1 configuration.
Here is the difference between V1 and V2:
V1 configuration
The delete marker is replicated (V1 configuration). A subsequent GET request to the deleted object in both the source and the destination bucket does not return the object.
V2 configuration
The delete marker is not replicated (V2 configuration). A subsequent GET request to the deleted object returns the object only in the destination bucket.
To enable the V1 configuration (to replicate delete markers), use the configuration below. Keep in mind that certain replication features, such as tag-based filtering and Replication Time Control (RTC), are only available in V2 configurations.
{
    "Role": " IAM-role-ARN ",
    "Rules": [
        {
            "ID": "Replication V1 Rule",
            "Prefix": "",
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::<destination-bucket>"
            }
        }
    ]
}
Here is the blog that describes this behavior in detail:
https://aws.amazon.com/blogs/storage/managing-delete-marker-replication-in-amazon-s3/
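If you manage the source bucket with CDK, the same V1-style rule can be expressed on the L1 CfnBucket. A sketch only, on the assumption that the L2 Bucket construct does not expose replication directly; the bucket names and role ARN are placeholders:
import * as s3 from 'aws-cdk-lib/aws-s3';

// V1-style rule: Prefix is set and no Filter/DeleteMarkerReplication elements are used.
new s3.CfnBucket(this, 'SourceBucket', {
  versioningConfiguration: { status: 'Enabled' },
  replicationConfiguration: {
    role: 'arn:aws:iam::123456789012:role/replication-role',
    rules: [
      {
        id: 'Replication V1 Rule',
        prefix: '',
        status: 'Enabled',
        destination: { bucket: 'arn:aws:s3:::destination-bucket' },
      },
    ],
  },
});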
I have seen exactly the same behaviour. I was unable to create a v1 situation to get DeleteMarker replication to occur.
The issue comes from AWS documentation that is still not clear.
To use DeleteMarkerReplication, you need V1 of the configuration. To let AWS know that you want V1, you need to specify a Prefix element in your configuration, and no DeleteMarkerReplication element, so your first try was almost correct.
{
    "ReplicationConfiguration": {
        "Role": "arn:aws:iam::271226720751:role/service-role/s3crr_role_for_mybucket_to_myreplica",
        "Rules": [
            {
                "ID": "first replication rule",
                "Prefix": "",
                "Status": "Enabled",
                "Destination": {
                    "Bucket": "arn:aws:s3:::myreplica"
                }
            }
        ]
    }
}
And of course you need the s3:ReplicateDelete permission in your policy.
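A quick sketch of what granting that permission could look like in CDK terms (the replicationRole variable and destination bucket ARN are assumptions for illustration):
// Allow the replication role to replicate delete markers into the destination bucket.
replicationRole.addToPolicy(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: ['s3:ReplicateDelete'],
  resources: ['arn:aws:s3:::myreplica/*'],
}));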
I believe I've figured this out. It looks like whether the Delete Markers are replicated or not depends on the permissions in the Replication Role.
If your replication role has the permission s3:ReplicateDelete on the destination, then Delete Markers will be replicated. If it does not have that permission, they are not.
Below is the CloudFormation YAML for my replication role, with the ReplicateDelete permission commented out as an example. With this setup it does not replicate Delete Markers; uncomment the permission and it will. Note the permissions are based on what AWS actually creates if you set up the replication via the console (they differ slightly from those in the documentation).
ReplicaRole:
  Type: AWS::IAM::Role
  Properties:
    #Path: "/service-role/"
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - s3.amazonaws.com
          Action:
            - sts:AssumeRole
    Policies:
      - PolicyName: "replication-policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Resource:
                - !Sub "arn:aws:s3:::${LiveBucketName}"
                - !Sub "arn:aws:s3:::${LiveBucketName}/*"
              Action:
                - s3:Get*
                - s3:ListBucket
            - Effect: Allow
              Resource: !Sub "arn:aws:s3:::${LiveBucketName}-replica/*"
              Action:
                - s3:ReplicateObject
                - s3:ReplicateTags
                - s3:GetObjectVersionTagging
                #- s3:ReplicateDelete
Adding a comment as an answer because I cannot comment on #john-eikenberry's answer. I have tested the answer suggested by John (Action "s3:ReplicateDelete"), but it is not working.
Edit: A failed attempt:
I have also tried to put the bucket replication configuration with delete marker replication enabled, but it failed. The error message is:
An error occurred (MalformedXML) when calling the PutBucketReplication operation: The XML you provided was not well-formed or did not validate against our published schema
Experiment details:
Existing replication configuration:
aws s3api get-bucket-replication --bucket my-source-bucket > my-source-bucket.json
{
    "Role": "arn:aws:iam::account-number:role/s3-cross-region-replication-role",
    "Rules": [
        {
            "ID": " s3-cross-region-replication-role",
            "Priority": 1,
            "Filter": {},
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::my-destination-bucket"
            },
            "DeleteMarkerReplication": {
                "Status": "Disabled"
            }
        }
    ]
}
The updated configuration (my-source-bucket-updated.json) and the command used to apply it:
aws s3api put-bucket-replication --bucket my-source-bucket --replication-configuration file://my-source-bucket-updated.json
{
    "Role": "arn:aws:iam::account-number:role/s3-cross-region-replication-role",
    "Rules": [
        {
            "ID": " s3-cross-region-replication-role",
            "Priority": 1,
            "Filter": {},
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::my-destination-bucket"
            },
            "DeleteMarkerReplication": {
                "Status": "Enabled"
            }
        }
    ]
}