I'm trying to add an IAM user for using SageMaker. I attached the AmazonSageMakerFullAccess policy, but when I log in as this user I can see all of the S3 buckets in the root account and download files from them.
The SageMaker documentation states:
When attaching the AmazonSageMakerFullAccess policy to a role, you must do one of the following to allow Amazon SageMaker to access your S3 bucket:
Include the string "SageMaker" or "sagemaker" in the name of the bucket where you store training data, or the model artifacts resulting from model training, or both.
Include the string "SageMaker" or "sagemaker" in the object name of the training data object(s).
Tag the S3 object with "sagemaker=true". The key and value are case sensitive. For more information, see Object Tagging in the Amazon Simple Storage Service Developer Guide.
Add a bucket policy that allows access for the execution role. For more information, see Using Bucket Policies and User Policies in the Amazon Simple Storage Service Developer Guide.
This seems to be inaccurate: the user can access S3 buckets lacking "sagemaker" in the name. How do I limit the access?
The full policy is below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sagemaker:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"cloudwatch:PutMetricData",
"cloudwatch:PutMetricAlarm",
"cloudwatch:DescribeAlarms",
"cloudwatch:DeleteAlarms",
"ec2:CreateNetworkInterface",
"ec2:CreateNetworkInterfacePermission",
"ec2:DeleteNetworkInterface",
"ec2:DeleteNetworkInterfacePermission",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeVpcs",
"ec2:DescribeDhcpOptions",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"application-autoscaling:DeleteScalingPolicy",
"application-autoscaling:DeleteScheduledAction",
"application-autoscaling:DeregisterScalableTarget",
"application-autoscaling:DescribeScalableTargets",
"application-autoscaling:DescribeScalingActivities",
"application-autoscaling:DescribeScalingPolicies",
"application-autoscaling:DescribeScheduledActions",
"application-autoscaling:PutScalingPolicy",
"application-autoscaling:PutScheduledAction",
"application-autoscaling:RegisterScalableTarget",
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogStreams",
"logs:GetLogEvents",
"logs:PutLogEvents"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::*SageMaker*",
"arn:aws:s3:::*Sagemaker*",
"arn:aws:s3:::*sagemaker*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:GetBucketLocation",
"s3:ListBucket",
"s3:ListAllMyBuckets"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "*",
"Condition": {
"StringEqualsIgnoreCase": {
"s3:ExistingObjectTag/SageMaker": "true"
}
}
},
{
"Action": "iam:CreateServiceLinkedRole",
"Effect": "Allow",
"Resource": "arn:aws:iam::*:role/aws-service-role/sagemaker.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_SageMakerEndpoint",
"Condition": {
"StringLike": {
"iam:AWSServiceName": "sagemaker.application-autoscaling.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:PassedToService": "sagemaker.amazonaws.com"
}
}
}
]
}
It looks like the SageMaker notebook wizard has you create a role with limited S3 access. If I attach that alongside the default AmazonSageMakerFullAccess policy, the user is properly restricted.
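Not an official policy, just a minimal sketch of an explicit Deny you could attach to the user alongside AmazonSageMakerFullAccess, assuming every bucket the user should touch has "sagemaker" (in some casing) in its name. An explicit Deny overrides the managed policy's Allows, including the tag-based s3:GetObject statement; s3:ListAllMyBuckets is left alone so the console bucket list still loads, but the contents of non-matching buckets become inaccessible:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyObjectAccessOutsideSageMakerBuckets",
      "Effect": "Deny",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "NotResource": [
        "arn:aws:s3:::*sagemaker*",
        "arn:aws:s3:::*sagemaker*/*",
        "arn:aws:s3:::*SageMaker*",
        "arn:aws:s3:::*SageMaker*/*",
        "arn:aws:s3:::*Sagemaker*",
        "arn:aws:s3:::*Sagemaker*/*"
      ]
    }
  ]
}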
Related
I have two S3 buckets in two different regions on two different accounts. I want to use an S3 replication rule to replicate all files (including existing ones) from bucket-a to bucket-b.
bucket-a is an existing bucket with objects in it already, bucket-b is a new, empty bucket.
I created a replication rule and ran the batch operation job to replicate existing objects. After the job finished, 63% of the objects had failed to replicate with the errors DstPutObjectNotPermitted or DstMultipartUploadNotPermitted and no further information (these come from the CSV file that gets generated after job completion); nothing comes up on Google for these errors. The remaining objects were replicated as expected.
Here's my configuration:
bucket-a has versioning enabled and is encrypted with a default AWS-managed KMS key. ACLs are enabled, and this is the bucket policy:
{
"Version": "2008-10-17",
"Id": "NoBucketDelete",
"Statement": [
{
"Sid": "NoBucketDeleteStatement",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:DeleteBucket",
"Resource": "arn:aws:s3:::bucket-a"
},
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket-a/*",
"arn:aws:s3:::bucket-a"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
]
}
bucket-b also has versioning and ACLs enabled, and is encrypted with a customer-managed key.
The bucket policy is:
{
"Version": "2012-10-17",
"Id": "Policy1644945280205",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket-b/*",
"arn:aws:s3:::bucket-b"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
},
{
"Sid": "Stmt1644945277847",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:role/bucket-replication-role"
},
"Action": [
"s3:ReplicateObject",
"s3:ReplicateTags",
"s3:ObjectOwnerOverrideToBucketOwner",
"s3:ReplicateDelete"
],
"Resource": "arn:aws:s3:::bucket-b/*"
}
]
}
...and the KMS key policy is
{
"Version": "2012-10-17",
"Id": "key-consolepolicy-3",
"Statement": [
{
"Sid": "Allow access through S3 for all principals in the account that are authorized to use S3",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"kms:CallerAccount": "12345",
"kms:ViaService": "s3.us-west-2.amazonaws.com"
}
}
},
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:root"
},
"Action": "kms:*",
"Resource": "*"
},
{
"Sid": "Allow access for Key Administrators",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::12345:user/root",
"arn:aws:iam::12345:user/user"
]
},
"Action": [
"kms:Create*",
"kms:Describe*",
"kms:Enable*",
"kms:List*",
"kms:Put*",
"kms:Update*",
"kms:Revoke*",
"kms:Disable*",
"kms:Get*",
"kms:Delete*",
"kms:TagResource",
"kms:UntagResource",
"kms:ScheduleKeyDeletion",
"kms:CancelKeyDeletion"
],
"Resource": "*"
},
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:user/user"
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
},
{
"Sid": "Allow attachment of persistent resources",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:user/user"
},
"Action": [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": "*",
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": "true"
}
}
}
]
}
I have a role in account-a, bucket-replication-role, with a trust relationship allowing S3 to assume the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
and an attached policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ReplicateDelete"
],
"Resource": "arn:aws:s3:::bucket-b/*"
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:kms:us-east-1:12345:key/[account-a-kms-key-id]"
]
},
{
"Effect": "Allow",
"Action": [
"kms:GenerateDataKey",
"kms:Encrypt"
],
"Resource": [
"arn:aws:kms:us-west-2:12345:key/[account-b-kms-key-id]"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ObjectOwnerOverrideToBucketOwner"
],
"Resource": "arn:aws:s3:::bucket-b/*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetReplicationConfiguration",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::bucket-a"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObjectVersionForReplication",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersionTagging"
],
"Resource": [
"arn:aws:s3:::bucket-a/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ReplicateObject",
"s3:ReplicateTags"
],
"Resource": "arn:aws:s3:::bucket-b/*"
}
]
}
Here is my replication rule, on bucket-a (screenshots omitted; a sketch of an equivalent configuration follows). The above role is attached as well during creation, and the batch operation is the default one prompted on replication rule creation.
The files are just small PNGs, JSONs, HTML files, etc.; nothing unusual in there. You can see the replication status FAILED in the object information.
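Since the screenshots are not included, here is a rough sketch of what an equivalent replication configuration might look like if applied via the CLI; the rule ID, account ids, and key id are placeholders, not my exact settings:

aws s3api put-bucket-replication --bucket bucket-a --replication-configuration file://replication.json

where replication.json is roughly:

{
  "Role": "arn:aws:iam::12345:role/bucket-replication-role",
  "Rules": [
    {
      "ID": "replicate-all",
      "Priority": 1,
      "Status": "Enabled",
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "SourceSelectionCriteria": {
        "SseKmsEncryptedObjects": { "Status": "Enabled" }
      },
      "Destination": {
        "Bucket": "arn:aws:s3:::bucket-b",
        "Account": "12345",
        "AccessControlTranslation": { "Owner": "Destination" },
        "EncryptionConfiguration": {
          "ReplicaKmsKeyID": "arn:aws:kms:us-west-2:12345:key/[account-b-kms-key-id]"
        }
      }
    }
  ]
}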
Most of my policy rules came from this AWS support page: https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-replication/
Update
I added the following policy statement to the account-b KMS key:
{
"Sid": "AllowS3ReplicationSourceRoleToUseTheKey",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:role/bucket-replication-role"
},
"Action": ["kms:GenerateDataKey", "kms:Encrypt"],
"Resource": "*"
}
and the DstPutObjectNotPermitted errors have gone away; now it's just the DstMultipartUploadNotPermitted errors I'm seeing.
Update 2
I tried to recreate the issue with two new buckets and could not reproduce it, so I assume it is something to do with how some of the older files in bucket-a are stored.
This required some help from AWS Support; these were the relevant points of their response:
"DstMultipartUploadNotPermitted" status code indicates that the source objects are multipart uploads and the permissions required for their replication haven't been granted in the resource policies. Note that if a source object is uploaded using multipart upload to the source bucket, then the IAM replication role will also upload the replica object to destination bucket using multipart upload.
I would like to inform you that some extra permissions are to be granted for allowing multipart uploads in an S3 bucket. The list of permissions required for the IAM replication role to perform multipart uploads when KMS encryption is involved are listed below.
s3:PutObject on resource "arn:aws:s3:::DESTINATION-BUCKET/*"
kms:Decrypt and kms:GenerateDataKey on resource "arn:aws:kms:REGION:DESTINATION-ACCOUNT-ID:key/KEY-ID"
...as well as
ensure that the destination bucket policy is granting the "s3:PutObject" permission on resource "arn:aws:s3:::bucket-b/*" to the IAM replication role "arn:aws:iam::12345:role/bucket-replication-role".
...and finally
I would also request you to please grant "kms:Decrypt", and "kms:GenerateDataKey" permissions on the destination KMS key to the IAM replication role "arn:aws:iam::12345:role/bucket-replication-role" in the destination KMS key policy.
After adding all these additional permissions, everything worked as expected.
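For reference, a sketch of what those additions look like, using the same placeholder names as above. Added to the replication role's policy:

{
  "Effect": "Allow",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::bucket-b/*"
},
{
  "Effect": "Allow",
  "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
  "Resource": "arn:aws:kms:us-west-2:12345:key/[account-b-kms-key-id]"
}

Added to the bucket-b bucket policy, alongside the existing replication statement:

{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::12345:role/bucket-replication-role" },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::bucket-b/*"
}

And the statement added to the destination KMS key policy in the first update, widened to include kms:Decrypt:

{
  "Sid": "AllowS3ReplicationSourceRoleToUseTheKey",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::12345:role/bucket-replication-role" },
  "Action": ["kms:Decrypt", "kms:GenerateDataKey", "kms:Encrypt"],
  "Resource": "*"
}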
We need to configure an IAM policy in Amazon Web Services to allow someone to use the CLI (command line interface) to download objects from certain folders/prefixes but not others. For example, if our bucket were called "companybucket", we have three folders/prefixes:
companybucket/apples
companybucket/oranges
companybucket/bananas
We need the external user to be able to download objects in the "apples" and "bananas" folders but not "oranges".
So far, we have created the following IAM policy to get started with "apples":
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetLifecycleConfiguration",
"s3:GetBucketTagging",
"s3:GetInventoryConfiguration",
"s3:GetObjectVersionTagging",
"s3:ListBucketVersions",
"s3:GetBucketLogging",
"s3:ListBucket",
"s3:GetAccelerateConfiguration",
"s3:GetBucketPolicy",
"s3:GetObjectVersionTorrent",
"s3:GetObjectAcl",
"s3:GetEncryptionConfiguration",
"s3:GetBucketObjectLockConfiguration",
"s3:GetBucketRequestPayment",
"s3:GetAccessPointPolicyStatus",
"s3:GetObjectVersionAcl",
"s3:GetObjectTagging",
"s3:GetMetricsConfiguration",
"s3:GetBucketPublicAccessBlock",
"s3:GetBucketPolicyStatus",
"s3:ListBucketMultipartUploads",
"s3:GetObjectRetention",
"s3:GetBucketWebsite",
"s3:GetBucketVersioning",
"s3:GetBucketAcl",
"s3:GetObjectLegalHold",
"s3:GetBucketNotification",
"s3:GetReplicationConfiguration",
"s3:ListMultipartUploadParts",
"s3:GetObject",
"s3:GetObjectTorrent",
"s3:DescribeJob",
"s3:GetBucketCORS",
"s3:GetAnalyticsConfiguration",
"s3:GetObjectVersionForReplication",
"s3:GetBucketLocation",
"s3:GetAccessPointPolicy",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::companybucket/apples",
"arn:aws:s3:::companybucket/apples/*"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"s3:GetAccessPoint",
"s3:GetAccountPublicAccessBlock",
"s3:ListAllMyBuckets",
"s3:ListAccessPoints",
"s3:ListJobs",
"s3:HeadBucket"
],
"Resource": "*"
}
]
}
Then, in the AWS console UI website, we went to IAM (Identity & Access Management) >> Users >> chose the user in question >> Permissions tab >> directly attached the policy to the user.
Then, the user executes the following command in CLI:
aws s3 sync s3://companybucket/apples MYFILES
The following error appears: "An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied"
What are we doing wrong? Any help would be greatly appreciated, thanks.
The following approach has been successful for us in allowing an AWS user set up in IAM to download files/objects from some (but not all) folders in a bucket.
FIRST, set up the following policy in IAM.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowStatement01",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::companybucket"
],
"Condition": {
"StringEquals": {
"s3:prefix": [
"",
"apples",
"bananas"
]
}
}
},
{
"Sid": "AllowStatement02",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::companybucket"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"apples/*",
"bananas/*"
]
}
}
},
{
"Sid": "AllowStatement03",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::companybucket/apples/*",
"arn:aws:s3:::companybucket/bananas/*"
]
}
]
}
SECOND, attach the newly created policy directly to the user in IAM.
THIRD, tell the user to set up a root-level local folder called MYFILES and then enter the following command in CLI (command line interface):
aws s3 sync s3://companybucket/apples MYFILES
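With this in place, a quick check from the user's CLI session (local folder names are just examples) should show the allowed prefixes syncing and the denied one failing:

aws s3 sync s3://companybucket/apples MYFILES/apples
aws s3 sync s3://companybucket/bananas MYFILES/bananas
aws s3 sync s3://companybucket/oranges MYFILES/oranges

The first two succeed; the third fails with AccessDenied on ListObjectsV2, since neither ListBucket condition matches the oranges prefix.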
I recently ran into a problem with IAM policies while using CodeBuild, and I am trying to understand the difference between the following two policies and whether there are any security implications of using version 2 over version 1.
Version 1 doesn't work, so I decided to go with version 2. But why does version 2 work while version 1 doesn't?
Version 1 only gives access to the CodePipeline resource and allows reading and writing S3 bucket objects.
However, version 2 gives access to all S3 buckets, doesn't it? Would this be considered a security loophole?
Version 1
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": [
"arn:aws:logs:ap-southeast-1:682905754632:log-group:/aws/codebuild/Backend-API-Build",
"arn:aws:logs:ap-southeast-1:682905754632:log-group:/aws/codebuild/Backend-API-Build:*"
],
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
},
{
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::codepipeline-ap-southeast-1-*"
],
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion"
]
}
]
}
Version 2
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": [
"arn:aws:logs:ap-southeast-1:682905754632:log-group:/aws/codebuild/Backend-API-Build",
"arn:aws:logs:ap-southeast-1:682905754632:log-group:/aws/codebuild/Backend-API-Build:*"
],
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
},
{
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::codepipeline-ap-southeast-1-*"
],
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion"
]
},
{
"Sid": "S3AccessPolicy",
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:GetObject",
"s3:List*",
"s3:PutObject"
],
"Resource": "*"
}
]
}
I have replicated the scenario by granting restricted access to a specific S3 bucket.
Block 1: Allow the required Amazon S3 console permissions. Here I have granted CodePipeline the ability to list all the buckets in the AWS account.
Block 2: Allow listing objects in root folders. Here my S3 bucket name is "aws-codestar-us-east-1-493865049436-larvel-test-pipe".
But I am surprised: when I followed the steps from creating the CodePipeline through creating the build from the same pipeline console, I got the same policy as your version 1 and it executed as well. However, as a next step, I gave specific permission to a bucket in S3, as in the policy below, and it worked. So in your version 2, rather than granting permission on all resources with "Resource": "*", you can restrict the permission to one specific bucket, as described in the sample policy below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": [
"arn:aws:logs:us-east-1:493865049436:log-group:/aws/codebuild/larvel-test1",
"arn:aws:logs:us-east-1:493865049436:log-group:/aws/codebuild/larvel-test1:*"
],
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
},
{
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::codepipeline-us-east-1-*"
],
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion"
]
},
{
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::aws-codestar-us-east-1-493865049436-larvel-test-pipe/*"
],
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion"
]
}
]
}
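As a side note, if you are unsure which bucket your pipeline actually stores artifacts in (and therefore why version 1's codepipeline-ap-southeast-1-* pattern might not match), you can look it up; the pipeline name here is a placeholder:

aws codepipeline get-pipeline --name my-pipeline --query 'pipeline.artifactStore.location' --output text

Granting the S3 permissions on that one bucket, as in the policy above, avoids the blanket "Resource": "*" of version 2.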
I'm using boto3 to create a presigned POST URL for S3.

import boto3

s3 = boto3.client('s3')
post = s3.generate_presigned_post(
    Bucket=bucket_name,
    Key=f"{userid}.{suffix}"
)
The aws_access_key_id that is being used is incorrect; one way of supplying the right one is through environment variables. Any idea how I can enforce using the AWS access key defined for the user I've created in IAM?
Update 1
Policy attached to the IAM role that executes the lambda
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"xray:PutTraceSegments",
"xray:PutTelemetryRecords"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:AttachNetworkInterface",
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:DescribeInstances",
"ec2:DescribeNetworkInterfaces",
"ec2:DetachNetworkInterface",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ResetNetworkInterfaceAttribute"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": [
"kinesis:*"
],
"Resource": "arn:aws:kinesis:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"sns:*"
],
"Resource": "arn:aws:sns:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"sqs:*"
],
"Resource": "arn:aws:sqs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": "arn:aws:dynamodb:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"route53:*"
],
"Resource": "*"
}
]
}
Update 2
(Screenshots omitted: the role in IAM, and the role configuration in the Lambda console.)
You should not use an IAM user for any sort of execution permissions in Lambda. Rather, use an IAM role with appropriate policies attached, and attach that role to your Lambda function.
Also note that once the role is configured, the access key id and secret will automatically be set as environment variables by Lambda, and boto3 will retrieve them without any additional configuration.
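As a sanity check (a sketch only; the bucket and key below are placeholders), you can log which principal boto3 is actually running as inside the function. With no access keys hard-coded or set in configuration, it should be the execution role:

import boto3

# boto3 falls back to the credentials Lambda injects for the execution role
sts = boto3.client('sts')
print(sts.get_caller_identity()['Arn'])  # expect the execution role's ARN

s3 = boto3.client('s3')
post = s3.generate_presigned_post(Bucket='my-bucket', Key='example.txt')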
I want to restrict access to a single folder in an S3 bucket.
I have written an IAM policy for this, but somehow I am not able to upload/sync files to this folder. Here, "bucket" is the bucket name and "folder" is the folder I want to give access to.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUserToSeeBucketListInTheConsole",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::*"
]
},
{
"Sid": "AllowRootAndHomeListingOfBucket",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bucket"
],
"Condition": {
"StringEquals": {
"s3:prefix": [
""
],
"s3:delimiter": [
"/"
]
}
}
},
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:HeadObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bucket"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"folder/*"
]
}
}
}
]
}
Please suggest where I am wrong.
This restrictive IAM policy grants only list and upload access to a particular prefix in a particular bucket. It is also intended to allow multipart uploads.
References:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListBucket",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::mybucket",
"Condition": {
"StringLike": {
"s3:prefix": "my/prefix/is/this/*"
}
}
},
{
"Sid": "UploadObject",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::mybucket/my/prefix/is/this/*"
]
}
]
}
Note that specifying the s3:ListBucket resource compactly as "arn:aws:s3:::mybucket/my/prefix/is/this/*" didn't work: ListBucket is a bucket-level action, so its resource must be the bucket ARN, with the prefix expressed in the condition.
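For example (file name is a placeholder), a large upload through the CLI exercises the multipart permissions automatically, since the CLI switches to multipart uploads above a size threshold:

aws s3 cp ./bigfile.bin s3://mybucket/my/prefix/is/this/bigfile.bin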
Since you have asked where you are wrong:
1> In AllowListingOfUserFolder, you have used the bucket as the resource but included object-level operations; the object actions need the object ARN ("arn:aws:s3:::bucket/folder/*"), and "s3:prefix" will not work with object-level APIs.
Please refer to the sample policies listed here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-policies-s3.html#iam-policy-ex1
https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListObjectsInBucket",
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::bucket-name"]
},
{
"Sid": "AllObjectActions",
"Effect": "Allow",
"Action": "s3:*Object",
"Resource": ["arn:aws:s3:::bucket-name/*"]
}
]
}
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html
I believe for the scenario you're describing AWS recommends bucket policies. AWS IAM should be used to secure AWS resources such as S3 itself, whereas bucket policies can be used to secure individual S3 buckets and objects.
Check out this AWS blog post on the subject:
https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/