I have an SFTP server set up in AWS Transfer Family, tied to an S3 bucket. When the user uploads to it without an IP restriction, it works. However, when the IP restriction is added, we get a Permission denied error. (Some information has been de-identified for privacy reasons.)
sftp> put file.pdf
Uploading file.pdf to /file.pdf
remote open("/file.pdf"): Permission denied
sftp>
Policy without IP restriction:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey",
"kms:DescribeKey",
"kms:ReEncrypt*"
],
"Resource": "arn:aws:kms:us-east-1:XXXXX:key/XXXXX"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObjectVersion",
"s3:ListBucket",
"s3:DeleteObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::data/*",
"arn:aws:s3:::data"
]
}
]
}
File upload successful. CloudWatch logs without IP restriction:
2022-08-03T18:49:56.945-05:00 username.XXXX CONNECTED SourceIP=X.X.X.X User=username HomeDir=LOGICAL Client=SSH-2.0-OpenSSH_7.4 Role=arn:aws:iam::XXXX:role/TransferBucketRW Kex=ecdh-sha2-nistp256 Ciphers=chacha20-poly1305@openssh.com,chacha20-poly1305@openssh.com
2022-08-03T18:50:26.134-05:00 username.XXXX OPEN Path=/data/uploads/file.pdf Mode=CREATE|TRUNCATE|WRITE
2022-08-03T18:50:26.240-05:00 username.XXXX CLOSE Path=/data/uploads/file.pdf BytesIn=347971
Policy with IP restriction:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey",
"kms:DescribeKey",
"kms:ReEncrypt*"
],
"Resource": "arn:aws:kms:us-east-1:XXXX:key/XXXX"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObjectVersion",
"s3:ListBucket",
"s3:DeleteObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::data/*",
"arn:aws:s3:::data"
],
"Condition": {
"Bool": {
"aws:ViaAWSService": "false"
},
"IpAddress": {
"aws:SourceIp": "X.X.X.X/32"
}
}
}
]
}
File upload failed. CloudWatch logs with IP restriction:
2022-08-03T18:59:14.498-05:00 username.XXXX CONNECTED SourceIP=X.X.X.X User=username HomeDir=LOGICAL Client=SSH-2.0-OpenSSH_7.4 Role=arn:aws:iam::XXXXX:role/TransferBucketRW Kex=ecdh-sha2-nistp256 Ciphers=chacha20-poly1305@openssh.com,chacha20-poly1305@openssh.com
2022-08-03T18:59:39.323-05:00 username.XXXX ERROR Message="Access Denied" Operation=OPEN Path=/data/uploads/file.pdf Mode=CREATE|TRUNCATE|WRITE RequestID=P9S2XW6FNMAW9T4T S3ExtendedRequestID=Omk8mugElCEwQpv1zXQtflAk8kEnky2/LrsetgW03js4g64ZI2XCjp6i8zgQvDZBf+hAp8ZdLS0=
2022-08-03T18:59:39.323-05:00 username.XXXX ERROR Message="Access denied"
I can confirm that the IP in the policy matches the SourceIP seen in the CloudWatch logs. I'm wondering if I need to add this IP restriction to the trust policy of the role in addition to this IAM policy.
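For context, a sketch of what that would look like; the same IP condition added to the role's trust policy (hypothetical, shown only to illustrate the option being asked about, not a confirmed fix):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "transfer.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"IpAddress": {
"aws:SourceIp": "X.X.X.X/32"
}
}
}
]
}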
Related
I'm attempting to set up permissions for a user account on AWS Transfer Family with the SFTP protocol. I have a use case where a user should be able to add a file to a directory but not list the files in it.
When I tweak the IAM role to deny s3:ListBucket for a specific subdirectory, the put operation fails as well. In theory, S3 allows putting an object without the ability to list the prefixes; AWS Transfer Family, however, seems to implicitly use the list-bucket operation before a put. Has anyone managed to deny the listing ability while still being able to upload?
IAM policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:ListBucket",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::<my-bucket>"
],
"Sid": "AllowListDirectories",
"Condition": {
"StringLike": {
"s3:prefix": [
"data/partner_2/*"
]
}
}
},
{
"Sid": "DenyMkdir",
"Action": [
"s3:PutObject"
],
"Effect": "Deny",
"Resource": "arn:aws:s3:::<my-bucket>/*/"
},
{
"Sid": "DenyListFilesInSubDirectory",
"Action": [
"s3:ListBucket"
],
"Effect": "Deny",
"Resource": "arn:aws:s3:::<my-bucket>",
"Condition": {
"StringLike": {
"s3:prefix": [
"data/partner_2/data/incoming/*"
]
}
}
},
{
"Effect": "AllowReadWirteInSubDirectory",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectTagging",
"s3:PutObjectVersionAcl",
"s3:PutObjectVersionTagging"
],
"Resource": "arn:aws:s3:::<my-bucket>/data/partner_2/data/incoming/*"
},
{
"Effect": "AllowOnlyReadInADifferentDirectory",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::<my-bucket>/data/partner_2/data/outgoing/*"
}
]
}
The output from SFTP client:
sftp> cd data/incoming
sftp> ls
Couldn't read directory: Permission denied
sftp> put /Users/foo/Downloads/test.log
Uploading /Users/foo/Downloads/test.log to /data/incoming/test.log
remote open("/data/incoming/test.log"): Permission denied
sftp> get test-one.txt
Fetching /data/incoming/test-one.txt to test-one.txt
sftp> exit
Since you will have to allow the upload to your S3 bucket through SFTP, this answer doesn't quite meet your requirements. If the SFTP requirement weren't there, you might be able to provide pre-signed URLs to the client to upload files securely.
I couldn't find an exact solution; however, a workaround could be to allow list+upload permission on a directory in your bucket that is specific to the client/user. There is a helpful video on this, and a corresponding Medium article.
Basically, the IAM policy attached to the user will have the following permissions on a specific folder, while you block all public access to your bucket.
{
"Version": "2023-02-16",
"Statement": [
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::<my-bucket>"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"restricted-folder-user-1/*",
"restricted-folder-user-1"
]
}
}
},
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::<arn of restricted-folder-user-1>*"
}
]
}
I have two S3 buckets in two different regions on two different accounts. I want to use an S3 replication rule to replicate all files (including existing ones) from bucket-a to bucket-b.
bucket-a is an existing bucket with objects already in it; bucket-b is a new, empty bucket.
I created a replication rule and ran the batch operations job to replicate the existing objects. After the job finished, 63% of the objects had failed to replicate, with the errors DstPutObjectNotPermitted or DstMultipartUploadNotPermitted and no further information (these come from the CSV report generated after job completion). Nothing comes up on Google for these errors. The remaining objects were replicated as expected.
Here's my configuration:
bucket-a has versioning enabled and is encrypted with a default AWS-managed KMS key. ACLs are enabled, and this is the bucket policy:
{
"Version": "2008-10-17",
"Id": "NoBucketDelete",
"Statement": [
{
"Sid": "NoBucketDeleteStatement",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:DeleteBucket",
"Resource": "arn:aws:s3:::bucket-a"
},
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket-a/*",
"arn:aws:s3:::bucket-a"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
]
}
bucket-b also has versioning and ACLs enabled, and is encrypted with a customer-managed key.
The bucket policy is:
{
"Version": "2012-10-17",
"Id": "Policy1644945280205",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket-b/*",
"arn:aws:s3:::bucket-b"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
},
{
"Sid": "Stmt1644945277847",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:role/bucket-replication-role"
},
"Action": [
"s3:ReplicateObject",
"s3:ReplicateTags",
"s3:ObjectOwnerOverrideToBucketOwner",
"s3:ReplicateDelete"
],
"Resource": "arn:aws:s3:::bucket-b/*"
}
]
}
...and the KMS key policy is
{
"Version": "2012-10-17",
"Id": "key-consolepolicy-3",
"Statement": [
{
"Sid": "Allow access through S3 for all principals in the account that are authorized to use S3",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"kms:CallerAccount": "12345",
"kms:ViaService": "s3.us-west-2.amazonaws.com"
}
}
},
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:root"
},
"Action": "kms:*",
"Resource": "*"
},
{
"Sid": "Allow access for Key Administrators",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::12345:user/root",
"arn:aws:iam::12345:user/user"
]
},
"Action": [
"kms:Create*",
"kms:Describe*",
"kms:Enable*",
"kms:List*",
"kms:Put*",
"kms:Update*",
"kms:Revoke*",
"kms:Disable*",
"kms:Get*",
"kms:Delete*",
"kms:TagResource",
"kms:UntagResource",
"kms:ScheduleKeyDeletion",
"kms:CancelKeyDeletion"
],
"Resource": "*"
},
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:user/user"
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
},
{
"Sid": "Allow attachment of persistent resources",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:user/user"
},
"Action": [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": "*",
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": "true"
}
}
}
]
}
I have a role in account-a, bucket-replication-role, with a trust relationship allowing S3 to assume the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
and an attached policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ReplicateDelete"
],
"Resource": "arn:aws:s3:::bucket-b/*"
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:kms:us-east-1:12345:key/[account-a-kms-key-id]"
]
},
{
"Effect": "Allow",
"Action": [
"kms:GenerateDataKey",
"kms:Encrypt"
],
"Resource": [
"arn:aws:kms:us-west-2:12345:key/[account-b-kms-key-id]"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ObjectOwnerOverrideToBucketOwner"
],
"Resource": "arn:aws:s3:::bucket-b/*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetReplicationConfiguration",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::bucket-a"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObjectVersionForReplication",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersionTagging"
],
"Resource": [
"arn:aws:s3:::bucket-a/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ReplicateObject",
"s3:ReplicateTags"
],
"Resource": "arn:aws:s3:::bucket-b/*"
}
]
}
Here is my replication rule on bucket-a: the role above is attached during creation, and the batch operation is the default one prompted when the replication rule is created.
The files are just small PNGs, JSONs, HTML files, etc., nothing weird in there. You can see the replication status FAILED in the object information.
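Incidentally, the per-object replication status can also be checked from the CLI via HeadObject (a sketch; the key name is just an example, assuming credentials for the account that owns bucket-a):
aws s3api head-object --bucket bucket-a --key some/path/example.png
The response includes a "ReplicationStatus" field, e.g. "ReplicationStatus": "FAILED".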
Most of my policy rules came from this AWS support page: https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-replication/
Update
I added the following statement to the account-b KMS key policy:
{
"Sid": "AllowS3ReplicationSourceRoleToUseTheKey",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345:role/bucket-replication-role"
},
"Action": ["kms:GenerateDataKey", "kms:Encrypt"],
"Resource": "*"
}
and the DstPutObjectNotPermitted errors have gone away; now it's just the DstMultipartUploadNotPermitted errors I'm seeing.
Update 2
I tried to recreate the issue with two new buckets and could not reproduce it, so I assume it has something to do with how some of the older files in bucket-a are stored.
This required some help from AWS Support; these were the relevant points of their response:
"DstMultipartUploadNotPermitted" status code indicates that the source objects are multipart uploads and the permissions required for their replication haven't been granted in the resource policies. Note that if a source object is uploaded using multipart upload to the source bucket, then the IAM replication role will also upload the replica object to destination bucket using multipart upload.
I would like to inform you that some extra permissions are to be granted for allowing multipart uploads in an S3 bucket. The list of permissions required for the IAM replication role to perform multipart uploads when KMS encryption is involved are listed below.
s3:PutObject on resource "arn:aws:s3:::DESTINATION-BUCKET/*"
kms:Decrypt and kms:GenerateDataKey on resource "arn:aws:kms:REGION:DESTINATION-ACCOUNT-ID:key/KEY-ID"
...as well as
ensure that the destination bucket policy is granting the "s3:PutObject" permission on resource "arn:aws:s3:::bucket-b/*" to the IAM replication role "arn:aws:iam::12345:role/bucket-replication-role".
...and finally
I would also request you to please grant "kms:Decrypt", and "kms:GenerateDataKey" permissions on the destination KMS key to the IAM replication role "arn:aws:iam::12345:role/bucket-replication-role" in the destination KMS key policy.
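Concretely, the statements added to the replication role policy ended up looking roughly like this (a sketch based on the support guidance above, reusing the placeholder ARNs from the question; the destination bucket policy and destination KMS key policy need the matching s3:PutObject and kms:Decrypt/kms:GenerateDataKey grants for the role as well):
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": "arn:aws:s3:::bucket-b/*"
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": "arn:aws:kms:us-west-2:12345:key/[account-b-kms-key-id]"
}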
After adding all these additional permissions, everything worked as expected.
I have a bucket policy that works properly to restrict access to a bucket to certain IPs, but I actually want to restrict listing the bucket itself to only certain IPs.
I got it working for everything except the listing. I can deny listing the bucket contents, uploading, and downloading, but I can't deny a simple command like Get-S3Bucket -BucketName "bucket".
My current policy:
{
"Version": "2012-10-17",
"Id": "bucket-policy",
"Statement": [
{
"Sid": "IPDeny",
"Effect": "Deny",
"Principal": "*",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::bucketName/*",
"arn:aws:s3:::bucketName"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": ["1.2.3.4.5","6.7.8.9"]
}
}
}
]
}
The issue was specific to the AWS Tools for PowerShell. I learned that Get-S3Bucket actually runs an "s3 collection", which I believe includes the following:
s3:ListBucket,
s3:ListAllMyBuckets,
s3:HeadBucket
So the issue for me wasn't the bucket policy, it was the IAM user policy.
I solved it via the user policy as follows:
{
"Sid": "listBuckets",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:ListAllMyBuckets",
"s3:HeadBucket"
],
"Resource": "*",
"Condition": {
"IpAddress": {
"aws:SourceIp": ["1.2.3.4","5.6.7.8"]
}
}
}
On Elastic Beanstalk, under the Logs section, when I access the tab I immediately get an error: An error occurred retrieving logs: Access Denied.
If I click to request the last 100 lines of logs, I get another error in the EB events:
Failed retrieveEnvironmentInfo activity. Reason: Access Denied
In the events log I get two errors:
ERROR Failed retrieveEnvironmentInfo activity. Reason: Access Denied
INFO [Instance: i-0aa53b9c5f88fe09b] Successfully finished tailing 36 log(s)
INFO Pulled logs for environment instances.
ERROR Service:Amazon S3, Message:Access Denied
My role policy at the moment allows these operations:
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:*",
"cloudformation:*",
"cloudwatch:*",
"dynamodb:*",
"ec2:Describe*",
"ec2:Get*",
"ec2messages:Get*",
"elasticbeanstalk:*",
"iam:*",
"kms:ListAliases",
"lambda:Get*",
"lambda:List*",
"logs:Describe*",
"logs:FilterLogEvents",
"logs:Get*",
"logs:List*",
"logs:ListTagsLogGroup",
"logs:TestMetricFilter",
"sdb:Get*",
"s3:Get*",
"s3:List*",
"ses:*",
"sns:*",
"sqs:*"
],
"Resource": "*"
},
{
"Effect": "Deny",
"Action": [
"cloudformation:DeleteStack",
"dynamodb:DeleteTable",
"elasticbeanstalk:DeleteEnvironment*",
"elasticbeanstalk:DeleteApplication",
"iam:Create*",
"iam:Delete*",
"iam:Remove*",
"s3:DeleteBucket",
"sqs:DeleteQueue"
],
"Resource": "*"
}
]
I also have my EB policy:
{
"Action": [
"autoscaling:Describe*",
"autoscaling:SuspendProcesses",
"autoscaling:ResumeProcesses",
"cloudwatch:*",
"cloudformation:List*",
"cloudformation:Describe*",
"cloudformation:Get*",
"elasticbeanstalk:*",
"elasticfilesystem:Describe*",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"health:Describe*",
"health:Get*",
"health:List*",
"lambda:UpdateFunctionCode",
"lambda:CreateAlias",
"logs:*",
"s3:Get*",
"s3:List*",
"s3:Head*",
"s3:Put*",
"s3:DeleteObject"
],
"Effect": "Allow",
"Resource": "*"
So, when you use Elastic Beanstalk and try to see the logs, does it use the user's role policy or the service policy to check permissions? It seems pretty weird.
I was having a similar issue and was able to solve it by adding the following to my policy.
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::elasticbeanstalk-*"
}
Not quite sure of everything that is done in the elasticbeanstalk S3 bucket, but this covered it. Here's my full policy that allowed me to pull Beanstalk logs.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"elasticbeanstalk:List*",
"elasticbeanstalk:Describe*",
"elasticbeanstalk:Describe*",
"elasticbeanstalk:Request*",
"elasticbeanstalk:Retrieve*",
"ec2:Describe*",
"ec2:Get*",
"cloudformation:Describe*",
"cloudformation:List*",
"cloudformation:Get*",
"autoscaling:Describe*",
"elasticloadbalancing:Describe*",
"s3:Head*",
"s3:List*",
"s3:Get*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::elasticbeanstalk-*"
}
]
}
Using "s3:*" is too permissive. In addition to the other elastic beanstalk permissions, I found that these s3 permission were sufficient to be able to pull logs.
It not the minimum set of s3 permission that can be used but it's certainly more secure than "s3:*".
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*",
"s3:HeadBucket",
"s3:HeadObject",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::elasticbeanstalk-*",
"arn:aws:s3:::elasticbeanstalk-*/*"
]
}
I have been trying to move all the objects in the folder bucketA/product/pic/ up one level within the same bucket, to bucketA/pic/.
I can sync files between my local host and S3 with
s3cmd sync /script/ s3://bucketA/
as well as put an object:
s3cmd put zip.sh s3://bucketA/
But I'm getting an Access Denied error when syncing files within the same bucket:
[root]s3cmd sync s3://bucketA/product/pic s3://bucketA/pic/
WARNING: Empty object name on S3 found, ignoring.
Summary: 441 source files to copy, 0 files at destination to delete
ERROR: S3 error: Access Denied
Is it possible to change the locations of the objects in a folder within the same bucket?
Here's my IAM policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt123456",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": [
"arn:aws:s3:::*"
]
},
{
"Sid": "Stmt123457",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::bucketA",
"arn:aws:s3:::bucketA/*"
]
}
]
}
Here's my bucket policy which is set to prevent hotlinking:
{
"Version": "2012-10-17",
"Id": "HTTP referrer policy",
"Statement": [
{
"Sid": "Allow in my domains",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucketA/*",
"Condition": {
"StringLike": {
"aws:Referer": [
"https://mylocalhostip/*",
"http://mylocalhostip/*"
]
}
}
},
{
"Sid": "Deny access if referer is not my sites",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucketA/*",
"Condition": {
"StringNotLike": {
"aws:Referer": [
"http://mylocalhostip/*",
"https://mylocalhostip/*"
]
}
}
}
]
}
These days, it is recommended to use the AWS Command-Line Interface (CLI).
The AWS CLI includes a sync command. See: https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
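For example, assuming the bucket and prefixes from the question, a within-bucket copy (or move) could look like this; note that a server-side copy also needs read access (s3:GetObject) to the source objects:
aws s3 sync s3://bucketA/product/pic/ s3://bucketA/pic/
or, to move rather than copy:
aws s3 mv s3://bucketA/product/pic/ s3://bucketA/pic/ --recursive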