I'm trying to set up a PutObject-only policy on my bucket, as follows:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt####",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl"
],
"Resource": [
"arn:aws:s3:::my-bucket/*"
]
}
]
}
However, when I try to upload a file through the AWS SDK, I receive a 403 response from AWS.
I'm absolutely sure I'm using the correct access key of the IAM user that has this policy attached.
Does anyone know why S3 rejects requests under this policy when it shouldn't?
Edit:
After hours of trials, I came across a weird behaviour which I would like to have explained.
If I add s3:ListBucket to the above policy, it works fine. Without it, it returns a 403. Why does Amazon force me to include the ListBucket action when I don't want to grant it?
Thanks
The best way to troubleshoot this is to give your policy the following action and resources:
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
This will confirm you're using the correct access key. If the request goes through, you're most likely calling unauthorized actions (e.g. s3:ListBucket) with your original policy. You can use CloudTrail to find which unauthorized actions are being called.
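If the broad policy works, you can also narrow things down from the SDK side. As a minimal sketch (the bucket name, key, and file are placeholders): boto3's low-level put_object maps directly to s3:PutObject, whereas higher-level helpers such as aws s3 sync may issue additional calls like ListBucket, which a PutObject-only policy denies.
import boto3

# Minimal sketch: a direct PutObject call, which needs only s3:PutObject on the object ARN.
# "my-bucket" and the key/file names are placeholders.
s3 = boto3.client("s3")
with open("report.txt", "rb") as f:
    s3.put_object(Bucket="my-bucket", Key="uploads/report.txt", Body=f)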
My objective is that one user's buckets should not be visible to other users:
s3:ListAllMyBuckets
Returns a list of all buckets owned by the authenticated sender of the request. To use this operation, you must have the s3:ListAllMyBuckets permission.
This is my policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:ListAllMyBuckets"
],
"Resource": [
"arn:aws:s3:::*"
]
}
]
}
s3:ListAllMyBuckets is not working and I don't know why.
If I have misunderstood something, please let me know.
The solution below works, but I need to know why s3:ListAllMyBuckets is not working, or whether I have misunderstood something.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::prefix*"
]
}
]
}
There's no concept of a 'bucket owner' in MinIO as there is in AWS S3. The s3:ListAllMyBuckets permission effectively grants access to the ListBuckets API operation, so any user who holds it will see every bucket.
For what you want, there are a few patterns you can explore:
Using a per-user prefix and configuring the resource with the ${aws:username} policy variable, e.g. "arn:aws:s3:::${aws:username}*" (see the sketch below)
Creating a bucket per-user and creating a corresponding policy for that user only granting access to that bucket
MinIO follows S3's deny-by-default model, so as long as you explicitly state which resources a user has access to, everything else is denied automatically.
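As a rough sketch of the first pattern, assuming users authenticate through MinIO's built-in IDP (so the ${aws:username} variable is resolved) and each user's buckets are named with the username as a prefix; the broad s3:* action here is just for illustration:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}*",
"arn:aws:s3:::${aws:username}*/*"
]
}
]
}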
I am trying to simulate an IAM policy I want to attach to a user so I can restrict their access to two buckets, one for file upload and one for file download.
The policy simulator tells me that the following policy does not work, and I cannot figure out why; it seems to have to do with the wildcards.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GetObject",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket-*-report-output/*.csv"
]
},
{
"Sid": "PutObjects",
"Effect": "Allow",
"Action": "s3:PutObject",
"Resource": [
"arn:aws:s3:::mybucket-*-report-input/*.csv"
]
}
]
}
The policy simulator says the following policy does work however:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GetObject",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket-*-report-output"
]
},
{
"Sid": "PutObjects",
"Effect": "Allow",
"Action": "s3:PutObject",
"Resource": [
"arn:aws:s3:::mybucket-*-report-input"
]
}
]
}
There must be something I am missing about how to structure the policy. I want to restrict access to the buckets in the policy for the operations mentioned, but I also want to ensure that the user can only upload and retrieve files with a .csv extension.
Below is a screenshot of the simulator:
Your policy is 100% correct - the IAM Policy Simulator is showing wrong results for some absurd reason.
I can also reproduce your problem using the above policy, and the results are all over the place: sometimes both actions allowed, both denied, only one allowed, etc.
The simulator seems to have an issue with the double wildcard, and sometimes the HTTP response it returns shows the wrong resource ARN being evaluated (in the network tab I occasionally see both ARNs set to the output bucket instead of just one; caching, perhaps?).
It's not limited to PutObject either; the double wildcard gives me loads of conflicting results even for other actions, such as s3:RestoreObject.
Regardless, I'm not sure what the issue is, but your policy is correct - ignore the IAM Policy Simulator in this case.
If you have access to AWS Support, I would create a support ticket there or post this same question as a potential bug on the AWS forums.
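As a sanity check outside the simulator, you could attach the policy to a test user and exercise it against a real bucket. A rough sketch, assuming a test bucket named mybucket-test-report-input exists and the configured credentials belong to a user with only this policy attached:
import boto3
from botocore.exceptions import ClientError

# Try PutObject with a .csv key (should be allowed) and a .txt key (should be denied with 403).
s3 = boto3.client("s3")
for key in ["sample.csv", "sample.txt"]:
    try:
        s3.put_object(Bucket="mybucket-test-report-input", Key=key, Body=b"test")
        print("ALLOWED:", key)
    except ClientError as e:
        print("DENIED:", key, e.response["Error"]["Code"])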
Evidence of a conflicting result, even though I have exactly recreated your scenario:
I am trying to import a disk image into AWS for launching EC2 instances. I followed the guide and fulfilled all the prerequisites as stated. However, I am faced with an error that I've been trying (unsuccessfully) to debug:
An error occurred (InvalidParameter) when calling the ImportImage operation: The service role vmimport provided does not exist or does not have sufficient permissions
However, when I check the permissions of the vmimport role, it has all the necessary permissions for EC2 and S3! My AWS CLI user also has full privileges to EC2 and S3. I've tried many different solutions to this problem, including 1. making the S3 bucket public, and 2. adding an access policy so that my AWS CLI user has permission to access the bucket. Everything I have tried still returns this exact same error message.
I'm thinking there might be a region problem? I'm using us-east-2 in my AWS CLI user configuration and as the S3 bucket's region. Is there something else I have not considered?
P.S. I'm trying to import an OVA-format VM image.
Here is my trust policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "vmie.amazonaws.com" },
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals":{
"sts:Externaloid": "vmimport"
}
}
}
]
}
and my role's policy:
{
"Version":"2012-10-17",
"Statement":[
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::MY-IMPORT-BUCKET",
"arn:aws:s3:::MY-IMPORT-BUCKET/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject",
"s3:GetBucketAcl"
],
"Resource": [
"arn:aws:s3:::MY-EXPORT-BUCKET",
"arn:aws:s3:::MY-EXPORT-BUCKET/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
],
"Resource": "*"
}
]
}
And finally the containers.json
[
{
"Description": "My Special OVA",
"Format": "ova",
"Url": "s3://MY-IMPORT-BUCKET/VM.ova"
}
]
UPDATE: After investigating the problem further, I found that the vmimport role's last access was "Not accessed", i.e. never, meaning the role is not even being attempted. So this error really is saying that it can't find (or use) the service role. Nothing in the final command, nor in containers.json, suggests that vmimport is going to be used; I thought that was the purpose of allowing vmie.amazonaws.com to take control. Clearly it isn't assuming the role, so I need to investigate this and STS.
The problem is in your (my) trust policy JSON file. Notice that the condition for assuming the role requires sts:Externalid to equal vmimport, but the attribute being checked is misspelled as sts:Externaloid. Because of that extra "o", the condition never matches, so vmie.amazonaws.com can never assume the role. Remove the extra "o" from the trust policy and try again; the policy then works.
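For reference, the corrected condition block (only the key name changes) looks like this:
"Condition": {
"StringEquals": {
"sts:Externalid": "vmimport"
}
}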
I had exactly the same scenario; you need to create the vmimport role as described here (AWS docs):
https://docs.aws.amazon.com/vm-import/latest/userguide/required-permissions.html
I am trying to do a simple task where I must be able to download a file from S3 that has a specific tag, via SFTP. I am trying to achieve this by adding a condition to the SFTP IAM policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bucket_name"
]
},
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::bucket_name/*",
"Condition": {
"StringEquals": {
"s3:ExistingObjectTag/key": "value"
}
}
}
]
}
But when I use this policy with the SFTP role, WinSCP throws a permission denied error when I try to log in to the server. I am able to log in only if I remove the Condition part of the policy. If anyone knows how to do this, please guide me. Thanks in advance.
It is not possible to restrict GetObject based on object tags.
IAM checks whether the user is entitled to make the request before it looks at the objects in Amazon S3 themselves. The tags are not available during this process.
Putting the objects in folders and restricting access by folder should meet your requirements.
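A minimal sketch of that folder-based approach, with a hypothetical allowed/ prefix (the prefix name is a placeholder, not from the question) replacing the tag condition in the second statement:
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::bucket_name/allowed/*"
}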
Using s3cmd, configured with my root credentials (access key and secret key), whenever I try to download something from a bucket using sync or get, I receive this strange permission error for my root account:
WARNING: Remote file S3Error: 403 (Forbidden):
The owner is another user I created using the IAM console, but am I correct in expecting that the root user should always have full and unrestricted access?
Also, using the aws-cli I get an unknown error:
A client error (Unknown) occurred when calling the GetObject operation: Unknown
Also, I thought I had to add a bucket policy to allow root access (as strange as it sounds), so as a first step I added anonymous access with this policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::myBucket/*"
]
}
]
}
But the errors are still the same as above. The owner of the bucket is also the root user (the one trying to access is the same as the owner). What am I misunderstanding here? How can I restore the root user's access to my own bucket, which was created by one of my own IAM users?
For any of the S3 read operations to work, you need to allow not just the objects themselves but also ListBucket on the bucket(s), plus ListAllMyBuckets and GetBucketLocation. My consolidated version:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::myBucket/*",
"arn:aws:s3:::myBucket"
]
}
]
}
See more examples at AWS IAM Documentation
It is always a good idea to recheck the storage status, and whether the bucket has a lifecycle rule, because in that case the object could have been transitioned to Glacier. In my case I tried to access a Glacier object using s3cmd commands, and I received uninformative and irrelevant permission errors. It would be a good idea to add better warning/error messages to future versions of s3cmd.
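As a quick way to check for that case, here is a small sketch (bucket and key names are placeholders): HeadObject reports the storage class and any ongoing restore, so you can tell whether a plain GET is expected to fail.
import boto3

# Inspect the object's storage class; StorageClass is omitted for STANDARD objects,
# and Restore indicates whether a Glacier restore is in progress or completed.
s3 = boto3.client("s3")
resp = s3.head_object(Bucket="myBucket", Key="path/to/object")
print(resp.get("StorageClass", "STANDARD"), resp.get("Restore"))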