I'm trying to allow my AWS Cognito users to upload files to their own folder inside my S3 bucket, and to read them back. But no one should be able to upload files to any other user's folder, nor read anything from any other folder.
Therefore, I'm creating each user's folder using their Cognito username and putting their files in it. But I just found that usernames are unique only within the User Pool in which they are created, so I want to include both the pool id and the username in the Resource path.
I have found the variable for the username (${aws:username}), but haven't been able to locate anything for the pool id (the ${USER_POOL_ID_VARIABLE} placeholder below). Can someone help me with this, and also check whether the policy I have created below is okay for my purpose?
(Alternatively, I'm okay with any variable that is globally unique and could be used instead of creating a two-level hierarchy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::my.bucket/${USER_POOL_ID_VARIABLE}/${aws:username}/*"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::my.bucket/${USER_POOL_ID_VARIABLE}/${aws:username}/*"
      ]
    }
  ]
}
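A possible avenue, offered as an assumption rather than a confirmed answer: there appears to be no IAM policy variable for the User Pool id, but if authentication flows through a Cognito identity pool, the variable ${cognito-identity.amazonaws.com:sub} resolves to the federated identity id, which is unique across pools, so it could serve as the globally unique value mentioned above. Note also that ${aws:username} only resolves for IAM users, not for Cognito-federated sessions. A minimal boto3 sketch, with hypothetical pool ids:

import boto3

# Sketch assuming sign-in flows through a Cognito identity pool federated
# with the user pool. All ids below are hypothetical placeholders.
REGION = "us-east-1"
IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"
PROVIDER = "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE"

def s3_for_user(id_token):
    """Exchange a User Pool ID token for per-identity AWS credentials."""
    ci = boto3.client("cognito-identity", region_name=REGION)
    identity_id = ci.get_id(
        IdentityPoolId=IDENTITY_POOL_ID,
        Logins={PROVIDER: id_token},
    )["IdentityId"]
    creds = ci.get_credentials_for_identity(
        IdentityId=identity_id,
        Logins={PROVIDER: id_token},
    )["Credentials"]
    # The authenticated role's policy can then scope access with a Resource like
    # arn:aws:s3:::my.bucket/${cognito-identity.amazonaws.com:sub}/*
    s3 = boto3.client(
        "s3",
        region_name=REGION,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretKey"],  # cognito-identity uses "SecretKey"
        aws_session_token=creds["SessionToken"],
    )
    return s3, identity_id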
I can use this policy to upload files to my bucket as user A, who is in group Z. A different user, B, is also in group Z and therefore has the same policy. However, I cannot read the file when logged in to the AWS Management Console as B. I'm especially mystified because, according to the Policy Simulator, this policy (plus the admin access user B has) should fully enable B to see the file in question.
Instead, user B only gets Access Denied.
Help? I feel like I'm missing something very simple here.
My complete (if redacted) group policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*",
        "arn:aws:s3:::my-bucket"
      ]
    }
  ]
}
My auto-generated bucket policy is:
{
  "Version": "2012-10-17",
  "Id": "S3-Console-Auto-Gen-Policy-1645709424074",
  "Statement": [
    {
      "Sid": "S3PolicyStmt-DO-NOT-MODIFY-1645709423946",
      "Effect": "Allow",
      "Principal": {
        "Service": "logging.s3.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
My bucket has Block Public Access turned on.
The problem here is that even though you are logged into the AWS console, your authentication does not extend to downloading an S3 object via a simple HTTP GET of the object at https://mybucket.s3.region.amazonaws.com/myfile.png, as would happen if you pasted that URL into a new tab.
Instead, you can generate and use an S3 pre-signed URL to download the object. A pre-signed URL is time-limited and is signed with your secret key so it includes all the auth needed to download the object.
You can use the S3 console, awscli, or any AWS SDK to generate a pre-signed S3 URL, for example:
aws s3 presign s3://mybucket/myfile.png
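The same thing as a sketch with the Python SDK (boto3), reusing the bucket and key from the command above:

import boto3

s3 = boto3.client("s3")

# Build a time-limited URL that embeds the signature needed to GET the object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "myfile.png"},
    ExpiresIn=3600,  # seconds until the URL stops working
)
print(url)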
Note that pre-signed URLs behave somewhat like a bearer token. Whoever has the pre-signed URL can use it to download the object, until it expires.
You can also download individual objects, or even entire buckets of objects, using the awscli (with appropriate authentication).
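For example, copying one object or mirroring the whole bucket to a local folder (paths reuse the hypothetical names above):

aws s3 cp s3://mybucket/myfile.png myfile.png
aws s3 sync s3://mybucket ./mybucket-backup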
I am trying to do a simple task wherein I must be able to download, via SFTP, a file in S3 that has a specific tag. I am trying to achieve this by adding a condition to the SFTP IAM policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket_name"
      ]
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket_name/*",
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/key": "value"
        }
      }
    }
  ]
}
But when I use this policy with the SFTP role, WinSCP throws a permission-denied error when I try to log in to the server. I am able to log in only if I remove the Condition block from the policy. If anyone knows how to do this, please guide me. Thanks in advance.
It is not possible to restrict GetObject based on object tags.
IAM checks whether the user is entitled to make the request before it looks at the objects in Amazon S3 themselves. The tags are not available during this process.
Putting the objects in folders and restricting access by folder should meet your requirements.
In AWS S3, I have one bucket named "Environments", and under it I have four folders named "sandbox", "staging", "prod1" and "prod2" respectively. The permission of the whole bucket is public.
Now I want to prevent an AWS user named "developer" from writing anything into the "prod1" and "prod2" folders, while still letting them view those folders.
Kindly help me out with this.
Create the policy below and attach it to the developer user:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::Environments",
        "arn:aws:s3:::Environments/sandbox/*",
        "arn:aws:s3:::Environments/staging/*"
      ]
    }
  ]
}
This policy gives the developer user full permissions on the sandbox and staging folders while restricting the other folders.
Is it possible to dynamically create access/secret keys for each user in my system that will allow them to read/write/list objects in a bucket under a specific prefix?
For example: my bucket is userDataBucket.
A user with id 77 is logged in. I would like to send the client an access key/secret pair so he can work with this directory:
s3://userDataBucket/users/77/
He must work only with this directory and must not have access to any other directory.
Is such a thing possible?
Note: you should review the IAM limits on users and policies. These limits cap how far this approach scales; for example, you can only have a maximum of 5000 IAM users, 250 roles, and 100 groups. Consider using temporary credentials if you need a scalable solution.
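A minimal sketch of that temporary-credentials approach with boto3, reusing the bucket and user id from the question (the federated user name is hypothetical): a backend holding one set of long-term IAM credentials hands each user short-lived keys scoped to their own prefix:

import json
import boto3

def scoped_credentials(user_id):
    """Return short-lived keys limited to users/<user_id>/ in userDataBucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::userDataBucket",
                "Condition": {"StringLike": {"s3:prefix": [f"users/{user_id}/*"]}},
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::userDataBucket/users/{user_id}/*",
            },
        ],
    }
    sts = boto3.client("sts")
    resp = sts.get_federation_token(
        Name=f"user-{user_id}",     # shows up in CloudTrail
        Policy=json.dumps(policy),  # intersected with the caller's own permissions
        DurationSeconds=3600,
    )
    return resp["Credentials"]      # AccessKeyId, SecretAccessKey, SessionToken

creds = scoped_credentials(77)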
Here's an IAM policy allowing access to the /users/${aws:username} directory in the userDataBucket bucket. The policy uses a variable, so whichever user is authenticated has access to the directory matching their username. You could attach it to a group and use it as a global strategy if the plan is to use IAM users.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingWithinUserDirectory",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::userDataBucket"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["users/${aws:username}/*"]
        }
      }
    },
    {
      "Sid": "AllowAllActionsWithinUserDirectory",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::userDataBucket/users/${aws:username}/*"]
    }
  ]
}
If you want to explicitly identify each user by an auto-increment id, an explicit policy would look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingWithinUserDirectory",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::userDataBucket"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["users/77/*"]
        }
      }
    },
    {
      "Sid": "AllowAllActionsWithinUserDirectory",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::userDataBucket/users/77/*"]
    }
  ]
}
The AWS SDKs offer methods to automate the creation of policy documents and the ability to attach them to users and groups. Some example methods from the Java SDK:
AmazonIdentityManagementClient.createPolicy()
AmazonIdentityManagementClient.attachUserPolicy()
AmazonIdentityManagementClient.attachGroupPolicy()
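The equivalent calls in Python's boto3, as a sketch (the policy and user names are hypothetical):

import json
import boto3

iam = boto3.client("iam")

# Create a managed policy from a per-user document like the one above.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:*"],
        "Resource": ["arn:aws:s3:::userDataBucket/users/77/*"],
    }],
}
created = iam.create_policy(
    PolicyName="user-77-s3-access",
    PolicyDocument=json.dumps(policy_doc),
)

# Attach it to an individual user, or to a group instead.
iam.attach_user_policy(UserName="user-77", PolicyArn=created["Policy"]["Arn"])
# iam.attach_group_policy(GroupName="s3-users", PolicyArn=created["Policy"]["Arn"])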
I am using Amazon S3 to archive my clients' documents within a single bucket, with a series of folders to distinguish each client, as such:
MyBucket/0000001/..
MyBucket/0000002/..
MyBucket/0000003/..
My clients are now looking for a way to independently back up their files to their local machines. I'd like to create a set of permissions at a given folder level to view/download only the files within a specific folder.
I'm looking to do this outside the scope of my application. By this I mean I'd like to create the permissions in the S3 browser and tell my clients to use some third-party app to connect to their area. Does anybody know if this is possible? I'm opposed to writing a module to automate this at present, as there simply isn't a big enough demand.
You can use IAM policies in conjunction with bucket policies to manage such access.
Each individual client would need their own IAM profile, and you would set up policies to limit object access to only those accounts.
Here is the AWS documentation:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingIAMPolicies.html
I would particularly point out Example 1 in that document, which does exactly what you want.
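As a rough sketch of that setup with boto3 (the user and policy names are hypothetical; MyBucket and the zero-padded client folders come from the question): create one IAM user per client with an inline policy restricted to their folder, then give the client the resulting keys for their third-party app:

import json
import boto3

iam = boto3.client("iam")

def create_client_user(client_id):
    """Create an IAM user limited to MyBucket/<client_id>/ and return its keys."""
    user = f"client-{client_id}"
    iam.create_user(UserName=user)
    iam.put_user_policy(
        UserName=user,
        PolicyName="client-folder-access",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": "s3:ListBucket",
                    "Resource": "arn:aws:s3:::MyBucket",
                    "Condition": {"StringLike": {"s3:prefix": [f"{client_id}/*"]}},
                },
                {
                    "Effect": "Allow",
                    "Action": "s3:GetObject",
                    "Resource": f"arn:aws:s3:::MyBucket/{client_id}/*",
                },
            ],
        }),
    )
    # Long-term keys the client can plug into a third-party S3 browser.
    return iam.create_access_key(UserName=user)["AccessKey"]

key = create_client_user("0000001")
print(key["AccessKeyId"], key["SecretAccessKey"])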
Please refer to the following policy to restrict a user to uploading or listing objects only in specific folders. I have created a policy that allows listing only the contents of folder1 and folder2, and allows uploading objects to folder1 while denying uploads to every other folder of the bucket.
The policy does the following:
1. Shows the bucket list in the console
2. Lists objects and folders only within the allowed folders
3. Uploads files only to folder1
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToSeeBucketListInTheConsole",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Sid": "AllowListingOfBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucketname"
    },
    {
      "Sid": "DenyListingOutsideFolder1And2",
      "Action": [
        "s3:*"
      ],
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::bucketname"
      ],
      "Condition": {
        "StringNotLike": {
          "s3:prefix": [
            "folder1/*",
            "folder2/*"
          ]
        },
        "StringLike": {
          "s3:prefix": "*"
        }
      }
    },
    {
      "Sid": "AllowPutObjectToFolder1",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucketname/folder1/*"
    },
    {
      "Sid": "DenyPutObjectOutsideFolder1",
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "NotResource": "arn:aws:s3:::bucketname/folder1/*"
    }
  ]
}