How do I grant a customer read/write access to a specific S3 bucket in my AWS account without giving them access to any other buckets or resources?
They should be able to access this bucket from a PowerShell script running on an EC2 instance of theirs.
I found this policy:
{
    "Version": "2012-10-17",
    "Id": "PolicyForBucketX",
    "Statement": [
        {
            "Sid": "AllowCustomerRWAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:root"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::bucket-x/*"
        }
    ]
}
How do I give a customer's AWS account access to a specific S3 bucket in my account?
With this, they should be able to access S3 from PowerShell using their access keys. However, they might not be using hard-coded access keys to reach S3; they might be using STS with an instance role on the EC2 instance to access their S3 resources.
Would this still work? Would they then have to add my bucket-x to their instance role's permissions?
Is there a better way? I might or might not have the IDs of their AWS resources.
With a bucket policy and an IAM policy (for either a user or a role) you can restrict users and resources as required.
I agree with Maurice here, as the extent of the restriction depends heavily on what you specifically want to do.
You can also use CloudFront and restrict access to your bucket objects for users not managed by IAM.
In general, think of access as a two-part task. On the resource side, you grant permissions on the resource; in this case you are doing that for a specific bucket (the resource) for a cross-account principal. That part is done.
Now, the identity that will access it also needs permissions, granted by that account's administrator in the same way, i.e. grant the user/role permission to:
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
If they would like to use an instance that has the AWS Tools for PowerShell installed, they can create an instance profile/role that has the above permissions, and they will be able to run the commands and access your bucket. That's the right way to do it.
Regardless of how they access the instance, when they make the API call from the instance to your bucket, AWS will first check whether the caller (which could be the instance profile or a role they assumed) has permission for these actions (the customer's setup). It will then check whether the resource allows these actions (your setup).
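For illustration, a minimal sketch of the customer-side identity policy they would attach to their instance role might look like the following; the bucket name bucket-x matches the bucket policy above, while the role itself and any object paths are up to them:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::bucket-x/*"
        }
    ]
}
With that in place, a PowerShell script on their instance should pick up the instance-profile credentials automatically (no hard-coded keys). A hedged example with hypothetical file and key names, using the AWS Tools for PowerShell:
# Upload a local file to the cross-account bucket; credentials come from the instance profile
Write-S3Object -BucketName bucket-x -File .\report.csv -Key reports/report.csv
# Download it back to verify access
Read-S3Object -BucketName bucket-x -Key reports/report.csv -File .\report-copy.csv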
I am logged in to AWS with admin privileges. I am trying to make a bucket publicly readable and writable, and I have deselected the Block Public Access options. I also followed the linked duplicate and tried to update the bucket policy, but I am getting an access denied error. All of those answers are from 2018.
The link you referred to still works fine!
Along with the "Block Public Access" settings, you should paste the following bucket policy under "Bucket Policy":
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::your-bucket-arn-here/*"
            ]
        }
    ]
}
This grants both read and write operations on your bucket.
Remember to replace the ARN that the policy's "Resource" key points to with your own bucket's Amazon Resource Name (ARN).
You can find your bucket's ARN just above the bucket policy editor.
You can also use the AWS Policy Generator for further access grants.
I hope this helps.
If your bucket policy uses the following, then at least the policy side is set up to allow public read and write. Be aware that this access is anonymous, so anyone can read from or write to the bucket, and you remain responsible for it.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/*"
            ]
        }
    ]
}
Be aware that if ACLs are involved, you would also need to grant the s3:PutObjectAcl permission. In addition, the ACL would need to grant the public read/write, which would be counterintuitive given that you are already using a bucket policy to do this.
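As a sketch, the same public policy with the ACL permission added would look like this (bucketname is still a placeholder and the Sid is arbitrary):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPermWithAcl",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/*"
            ]
        }
    ]
}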
If you're getting access denied when updating the bucket policy, your user is prohibited from performing that action.
There are a few reasons why this could occur:
Your IAM user does not have the required IAM policy permissions.
Your account is part of an AWS organisation that uses an SCP (Service Control Policy) to prohibit applying a bucket policy.
Your IAM user has been configured with an IAM permissions boundary that prohibits this access.
If you do not have the ability to modify your permissions, or your account was given to you via a service, you will need to communicate with them.
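As an illustration of the SCP case, a deny like the following attached at the organization level would block bucket policy changes even for an account administrator. This is a hypothetical sketch, not something taken from the question:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBucketPolicyChanges",
            "Effect": "Deny",
            "Action": [
                "s3:PutBucketPolicy",
                "s3:DeleteBucketPolicy"
            ],
            "Resource": "*"
        }
    ]
}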
I have an application where I am using Cognito to authenticate users and give them temporary access to the AWS console, but each user is able to see all the other buckets. I want a user to only be able to see or access buckets created by them.
Currently, I have attached the S3FullAccess policy to Cognito users. Can anyone suggest which policy I should attach instead?
From my research, there are policies that can restrict or allow a particular user, but my users are dynamic, so I cannot hard-code the values. There are also policies for allowing/restricting access to particular buckets, but I want only the user who creates a bucket to be able to access it, not other users.
This is something I found:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket-name",
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "",
                        "home/",
                        "home/${aws:userid}/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucket-name/home/${aws:userid}",
                "arn:aws:s3:::bucket-name/home/${aws:userid}/*"
            ]
        }
    ]
}
But this lists all buckets, and the only accessible bucket is the one hard-coded in the policy above. For a new user I want nothing to show, and as they create buckets, only those buckets should show.
This is not going to be easy; you will need to create your own policy and enforce some conventions. You have three options.
But first, if each user just needs their own S3 space, look at S3 prefixes [here](https://aws.amazon.com/blogs/mobile/understanding-amazon-cognito-authentication-part-3-roles-and-policies/). You can also do this on the S3 bucket resource itself. I have a template for doing this in GitLab.
Now back to answering your question.
Now back to answering your question.
Option 1: They would need to set a tag when they create the bucket, with an "owner" tag equal to their identity. I struck this one out because, despite tag conditions being listed for IAM policies, I'm pretty sure this doesn't work with S3 buckets.
Option 2: The prefix of the bucket name is equal to their identity.
Then you can use policy variables and tags in the IAM policy. Read here.
Note that Cognito users are web-federated identities, so the aws:username variable is not available to you. Use the aws:userid variable; its value will be role-id:caller-specified-role-name, where role-id is the unique ID of the role and caller-specified-role-name is set by the RoleSessionName parameter passed in the AssumeRoleWithWebIdentity request. A sketch of such a policy follows below.
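A rough, hedged sketch of Option 2, assuming bucket names are prefixed with the caller's identity. Note the caveat that the federated aws:userid contains a colon, which is not a legal bucket-name character, so in practice you would key the naming convention off the RoleSessionName portion or another sanitized identifier:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::${aws:userid}*",
                "arn:aws:s3:::${aws:userid}*/*"
            ]
        }
    ]
}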
Option 3: Pass an IAM access policy when the role is assumed.
I cannot find a link to the how-to at the moment, but here is a detailed description from the IAM FAQ:
Q: How do I control what a federated user is allowed to do when signed in to the console?
When you request temporary security credentials for your federated user using the AssumeRole API, you can optionally include an access policy with the request. The federated user's privileges are the intersection of the permissions granted by the access policy passed with the request and the access policy attached to the IAM role that was assumed. The access policy passed with the request cannot elevate the privileges associated with the IAM role being assumed.
When you request temporary security credentials for your federated user using the GetFederationToken API, you must provide an access control policy with the request. The federated user's privileges are the intersection of the permissions granted by the access policy passed with the request and the access policy attached to the IAM user that was used to make the request. The access policy passed with the request cannot elevate the privileges associated with the IAM user used to make the request. These federated user permissions apply to both API access and actions taken within the AWS Management Console.
The nice thing about this approach is that you can create the access policy programmatically.
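A hedged sketch of that idea using the AWS Tools for PowerShell. The role ARN, session name, and bucket prefix are hypothetical, and this assumes the Use-STSRole cmdlet accepts a -Policy parameter for the inline session policy (mirroring the AssumeRole API's Policy parameter):
# Build a per-user session policy that scopes the assumed role down to one prefix
$userId = "alice"   # hypothetical application-level identity
$sessionPolicy = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket-name/home/$userId/*"
    }
  ]
}
"@
# The resulting credentials carry the intersection of the role's policy and this session policy
$creds = Use-STSRole -RoleArn "arn:aws:iam::123456789012:role/app-user-role" -RoleSessionName $userId -Policy $sessionPolicy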
I need to set up an S3 bucket so my EC2 instances can store image files in it. The EC2 instances need read/write permissions. I do not want to make the S3 bucket publicly available; I only want the EC2 instances to have access to it.
The other gotcha is that my EC2 instances are managed by OpsWorks, and many different instances may be fired up depending on load/usage. If I were to restrict by IP, I may not always know which IPs the EC2 instances have. Can I restrict by VPC?
Do I have to enable static website hosting on my S3 bucket?
Do I need to make all files in the bucket public as well for this to work?
You do not need to make the bucket or the files publicly readable. The bucket and its contents can be kept private.
Don't restrict access to the bucket based on IP address; instead, restrict it based on the IAM role the EC2 instance is using.
Create an IAM EC2 Instance role for your EC2 instances.
Run your EC2 instances using that role.
Give this IAM role a policy to access the S3 bucket.
For example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_bucket",
                "arn:aws:s3:::my_bucket/*"
            ]
        }
    ]
}
If you want to restrict access to the bucket itself, try an S3 bucket policy.
For example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": ["arn:aws:iam::111122223333:role/my-ec2-role"]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_bucket",
                "arn:aws:s3:::my_bucket/*"
            ]
        }
    ]
}
Additional information: http://blogs.aws.amazon.com/security/post/TxPOJBY6FE360K/IAM-policies-and-Bucket-Policies-and-ACLs-Oh-My-Controlling-Access-to-S3-Resourc
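On the "Can I restrict by VPC?" part of the question: if the instances reach S3 through a VPC endpoint, you can add a condition on the endpoint ID to the bucket policy. A hedged sketch, with a made-up endpoint ID:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideMyVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_bucket",
                "arn:aws:s3:::my_bucket/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpce": "vpce-1a2b3c4d"
                }
            }
        }
    ]
}
Be careful with a blanket deny like this: it also blocks any access that does not come through the endpoint, including your own console and administrative access.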
This can be done very simply.
Follow these steps:
Open EC2 in the AWS console.
Select the instance and navigate to Actions.
Select Instance Settings and then Attach/Replace IAM Role.
Create a new role and attach the S3FullAccess policy to that role.
When this is done, connect to the instance; the rest can be done with the following CLI command:
aws s3 cp filelocation/filename s3://bucketname
Please note: the file location refers to the local path, and bucketname is the name of your bucket.
Also note: this works when your instance and S3 bucket are in the same account.
Cheers.
An IAM role is the solution for you.
You need to create a role with S3 access permissions; if the EC2 instance was started without any role, you have to rebuild it with that role assigned.
Refer: Allowing AWS OpsWorks to Act on Your Behalf
I'm developing a mobile application and I want to upload/get/delete a file in an AWS S3 bucket.
But I'm very concerned about security.
S3 bucket: it should not be public, and only authorized IAM users with the right permissions should be able to access my bucket.
So I need help configuring the permissions of my S3 bucket and creating an IAM user.
That is not how you authorize access for mobile applications. Yes, you could create an IAM user, generate an access key and secret access key, store those keys in the application code, and configure the right permissions for the IAM user. Then you don't even need to configure a bucket policy: by default, a bucket is private, and only IAM users in your account with the appropriate permissions are able to access it. If you allow an IAM user to access a specific S3 bucket and later want to block that access, you would need to configure an explicit deny in the bucket policy to override it.
But the above approach is against every security good practice. What you really want to do is create an IAM role that allows access to the bucket and assume that role from within the application. You can set up Cognito with web identity federation (or some other web federation provider) for your users and ask the STS service to generate short-lived credentials using the sts:AssumeRoleWithWebIdentity call.
As for the IAM permissions, you will need to allow s3:PutObject, s3:GetObject, and s3:DeleteObject, so the policy can look something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "<arn-of-your-bucket>/*"
        }
    ]
}
You can be even more granular and allow Cognito users to access only "their" folder inside the bucket if you need to.
As for the role, you just need to attach the above policy to it and configure a trust relationship between the role and the web identity provider (as mentioned above, this can be Cognito or any OpenID Connect provider).
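For the per-user folder case, a sketch using the Cognito identity ID as the folder name; the bucket name is a placeholder, and this assumes Cognito identity federation so the cognito-identity.amazonaws.com:sub policy variable is available:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::your-bucket-name/${cognito-identity.amazonaws.com:sub}/*"
        }
    ]
}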
Is it possible to expose an Amazon S3 bucket (shared via ACL settings) to users set up with the new Amazon IAM API under a different account?
I'm able to create a working IAM policy when the users and objects belong to a single account. But it seems this no longer works when two different accounts are involved, despite account 2 being able to access account 1's bucket directly.
A sample policy is:
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::test1234.doom",
                "arn:aws:s3:::test.doom"
            ],
            "Condition": {}
        }
    ]
}
In this case the IAM user is able to list the test.doom bucket (owned by the same AWS account) but not the test1234.doom bucket (owned by a different AWS account). This is despite one account having the correct ACL permissions to access the other account's bucket.
It looks like this can't be done.
http://aws.amazon.com/iam/faqs/#Will_users_be_able_to_access_data_controlled_by_AWS_Accounts_other_than_the_account_under_which_they_are_defined
Although it looks like in the future they might be allowed to create data under another account.
http://aws.amazon.com/iam/faqs/#Will_users_be_able_to_create_data_under_AWS_Accounts_other_than_the_account_under_which_they_are_defined