Restricting a Role's S3 Write Privileges - amazon-web-services

I would like to restrict my role from writing to unauthorized buckets in different accounts. For example, I will have role A in account A. S3 bucket B is created in account B and has a bucket policy allowing role A to write into it. I need a policy on role A/account A to prevent role A from being able to write into bucket B.
Is this possible?

Probably the easiest (from an IAM Policy perspective!) way to achieve what you are looking for, while minimizing the risk of overlooking something and introducing a potential security problem, is to use Access Points.
You can create Access Points and associate them with your buckets. Then, instead of trying to interact with the bucket directly, you interact with the Access Point.
The reason this can help you is that there's an IAM Policy Condition Key available to test the Account ID that owns an Access Point. What you need to do, then, is simply add a statement to your IAM Role's Policy that will "Effect": "Deny" all S3 actions, on all resources, when the request matches a condition that tests "StringNotEquals": { "s3:DataAccessPointAccount": "YOUR_ACCT_NUMBER" }.
Note that you won't be able to access any S3 resources without going through an Access Point. This will increase your initial setup complexity (and the complexity of creating new buckets, since you'll now also need to create and associate an access point with each one). It will also make interacting with S3 more complex, since you'll always need to go through the Access Point.
Those are trade-offs you'll need to accept if you want to implement a solution like this. But it'll achieve your goal: it'll be impossible for this IAM Role to access S3 buckets outside of your account.
Here's what the Policy would look like (tailor it to your specific needs):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "denyWithoutAccessPoint",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "s3:DataAccessPointAccount": "YOUR_AWS_ACCT_ID"
        }
      }
    }
  ]
}
Keep in mind that you'll also need to Allow any operations that you need.
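For example, here's a minimal sketch of such a complementary Allow, again keyed off the access-point account (the actions listed are only placeholders; grant just what you actually need):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allowThroughOwnAccessPoints",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "s3:DataAccessPointAccount": "YOUR_AWS_ACCT_ID"
        }
      }
    }
  ]
}
A request that goes straight to a bucket has no s3:DataAccessPointAccount key in its context, so it isn't allowed by this statement and still matches the StringNotEquals in the Deny above.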
If this trade-off (more complexity interacting with S3 for a completely straightforward IAM Policy) doesn't work for you, you'll need something different.
Keep in mind that there's always a trade-off.
Alternative 1
One alternative possibility is what @Marcin described: explicitly deny access to these buckets in the other accounts.
However, the trade-off here is that you'll never know the full list of S3 buckets, owned by other AWS accounts, that grant access to your IAM Role.
So you can only deny access to the buckets you know about.
In a threat model in which the attacker wants to exfiltrate data from your account, they could create a new bucket that you don't know about, grant access to the IAM Role through a Bucket Policy on that new bucket, and then somehow make the role write into that newly created bucket.
Benefit: no changes to how your applications use S3.
Disadvantage: a possible attack scenario, since you can't know the entire list of buckets in other accounts that allow access to your role (i.e., you'd be blocking access "reactively", only after something bad could already have happened).
Alternative 2
Another alternative is for you to instead create an IAM Policy that explicitly denies access to all buckets NOT enumerated in the policy.
To implement this "negated list", you use NotResource, rather than the more common Resource policy element.
Here's what the policy would look like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "denyOutsideListedBuckets",
      "Effect": "Deny",
      "Action": "s3:*",
      "NotResource": [
        "arn:aws:s3:::my-bucket-1",
        "arn:aws:s3:::my-bucket-1/*",
        "arn:aws:s3:::my-bucket-2",
        "arn:aws:s3:::my-bucket-2/*"
      ]
    }
  ]
}
Again, like in the other sample policy in this answer, remember that you'll still need to explicitly allow actions.
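As a sketch (using the same placeholder bucket names as above), the matching Allow could enumerate the same buckets with just the actions you need:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allowListedBuckets",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket-1",
        "arn:aws:s3:::my-bucket-1/*",
        "arn:aws:s3:::my-bucket-2",
        "arn:aws:s3:::my-bucket-2/*"
      ]
    }
  ]
}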
Benefit: you don't need to change the way you interact with S3, and you don't have to know the names of all buckets in other accounts that allow access to your IAM Role.
Disadvantage: you need to maintain this growing list of buckets. Also, keep in mind that there's a maximum policy size that you may eventually hit, making this solution limited in scale (although it can grow quite a lot).

By default, buckets are private. But if you already have a bucket policy allowing a role in Account A to write to it and don't want to modify that bucket policy, you could add an explicit deny to the role.
The deny would prohibit s3:PutObject on the bucket's objects. This works because an explicit deny in any applicable policy overrides an allow.
An example of such a policy, which could be added to the role in Account A, is the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "deny-puts-to-bucket-in-acc-b",
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::<bucket-from-account-B>/*"
      ]
    }
  ]
}
This only denies PutObject. You may want to consider other actions as well, such as PutObjectAcl, actions on the bucket itself, and more.
Nevertheless, the above policy should be a good starting point to tailor to your specific requirements.
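For instance, here's a hedged sketch of a broader deny that also covers ACL changes, deletes, and multipart uploads (the action list is illustrative, not exhaustive):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "deny-writes-to-bucket-in-acc-b",
      "Effect": "Deny",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-from-account-B>",
        "arn:aws:s3:::<bucket-from-account-B>/*"
      ]
    }
  ]
}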

Related

Giving customer AWS access to my AWS's specific s3 bucket?

How do I grant a customer read/write access to a specific S3 bucket in my AWS account without giving them access to any other buckets or resources?
They should be able to access this bucket from a powershell script in some ec2 instance of theirs.
I found this policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyForBucketX",
  "Statement": [
    {
      "Sid": "AllowCustomerRWAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
      },
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::bucket-x/*"
    }
  ]
}
With this, they might be able to access S3 via their access key in PowerShell. However, they might not be using a hardcoded access key for S3; they might be using STS with an instance role on the EC2 instance to access their S3 resources.
Would this still work? Would they then have to add my bucket-x to their instance role's permissions?
Is there a better way? I might or might not have details of their AWS resource IDs.
With a bucket policy and an IAM policy (for either a user or a role) you can restrict users/resources based on your requirements.
I agree with Maurice here, as the extent of the restriction heavily depends on what you specifically want to do.
You can also use CloudFront and restrict access to your bucket objects for users not managed by IAM.
In general you should think of access as a two-part task. On the side of the resource, you grant permissions on the resource; in this case you are doing that for a specific bucket (the resource) to a cross-account principal. You're done.
Now, the identity that will access it also needs permissions, given to them by their account administrator (root) in the same way, i.e., grant the user/role the permissions to:
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
If they would like to use an instance which has AWS PowerShell installed, they can create an instance profile / role that has the above permissions, and they will be able to run the commands and access your bucket. That's the right way to do it.
Regardless of how they access the instance, when they make the API call from the instance to your bucket, AWS will first check whether the caller (which could be the instance profile or a role they assumed) has permissions for these actions (the customer's setup). It will then check whether the resource allows these actions (your setup).
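For reference, here's a minimal sketch of the identity-side policy the customer could attach to their instance role (bucket-x and the action list come from the question; adjust as needed):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRWOnBucketX",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::bucket-x/*"
    }
  ]
}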

Significance of resource in Resource Based Policy

I am trying to understand resource-based policies in IAM.
I understand that they are attached to a resource like S3, KMS, Secrets Manager, etc.
My question is: what is the significance of Resource in a resource-based policy?
For example, a permission policy for AWS Secrets Manager (https://aws.amazon.com/premiumsupport/knowledge-center/secrets-manager-resource-policy/):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:*",
      "Principal": {"AWS": "arn:aws:iam::123456789999:user/Mary"},
      "Resource": "*"
    }
  ]
}
Here the Resource is *, or the resource can be the ARN of the secret. (Is there any other value allowed in this case?) For S3 I can think of the root bucket or other prefixes.
So my question is: what is the use case for Resource here? Please let me know if I am reading it wrong.
Thanks in advance.
Looking in the User Guide, you can see:
Resource: which secrets they can access. See Secrets Manager resources.
The wildcard character (*) has different meaning depending on what you attach the policy to:
In a policy attached to a secret, * means the policy applies to this secret.
In a policy attached to an identity, * means the policy applies to all resources, including secrets, in the account.
So in the case where the policy is attached to the secret, specifying the secret's ARN effectively means the same thing as *; it's when you attach the policy to an identity that the Resource element becomes more useful. Then you can give different identities different action permissions on various secrets.
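As a sketch of that identity-side usage (the region, account ID, secret names, and random suffixes are placeholders), you might allow reading only two specific secrets rather than every secret in the account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadSpecificSecrets",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": [
        "arn:aws:secretsmanager:us-east-1:123456789999:secret:app/db-credentials-AbCdEf",
        "arn:aws:secretsmanager:us-east-1:123456789999:secret:app/api-key-GhIjKl"
      ]
    }
  ]
}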
Resource is the resource that the policy refers to. It allows for more fine grained control over policies.
Take an example-
You host several DynamoDB tables, each of which has multiple indexes. You want to grant users in group A access to some of the tables, along with their indexes.
You want to give users in group B access to a single table, but none of the indexes.
And you want to give users in group C access to a single table, along with all 3 of its indexes.
When you specify the resource in the policy for group A:
"Resource": ["arn::<table-a-arn>/", "arn::<table-b-arn>/", "arn::<table-b-arn>/index/gsi1"]
The resource in the policy for group B:
"Resource": "arn::<table-c-arn>/"
And for group C:
"Resource": ["arn::<table-a-arn>/", "arn::<table-a-arn>/index/*"]
Another use case is for explicit denies. An explicit deny always overrides an allow. If you grant full access to EC2 in an account with a policy that has EC2 permissions and "Resource": "*", but there is a single instance that you want to shield from the entity to which you are applying the policy, you would also add a deny statement to the policy with "Resource": "<some-super-private-instance>".
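For example, here's a hedged sketch of that pattern (the region, account ID, and instance ID are placeholders; note that only EC2 actions that support instance-level resources are affected by the Deny):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEc2Broadly",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    },
    {
      "Sid": "DenyPrivateInstance",
      "Effect": "Deny",
      "Action": "ec2:*",
      "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/i-0abc123def4567890"
    }
  ]
}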

What is the purpose of 'resource' in an AWS resource policy?

As per the title, what is the purpose of having the Resource field when defining a resource policy, given that the resource policy is already going to be applied to a particular resource?
For example, in this AWS tutorial, the following policy is defined and attached to a queue. What is the purpose of the Resource field?
{
  "Version": "2008-10-17",
  "Id": "example-ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "SQS:SendMessage"
      ],
      "Resource": "arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:bucket-name" }
      }
    }
  ]
}
S3 is a good example of where you need to include the Resource element in the policy. Let's say you want to have an upload location on an S3 bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Upload",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::examplebucket/uploads/*"]
    }
  ]
}
In these cases you really don't want the Resource to default to the whole bucket, as that could accidentally grant global access. It is better to make sure the user clearly understands what access is being allowed or denied.
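To make that concrete, here's a sketch of what the same statement would effectively grant if the Resource silently defaulted to the whole bucket: every object, not just the uploads/ prefix, would become world-writable:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadDefaultedResource",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}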
But why make it required for resource policies where it isn't needed, like SQS? For this, let's dive into how resource policies are used.
You can grant access to a resource in two ways:
Identity-based policies for IAM principals (users and roles).
Resource-based policies.
The important part to understand is how resource policies are used. Resource policies are actually used by IAM in the policy evaluation logic for authorization. To put it another way, resources are not responsible for the actual authorization; that is left to IAM (Identity and Access Management).
Since IAM requires that every policy statement have a Resource or NotResource element, the service would need to add the resource when sending the policy to IAM if it were missing. So let us look at the implications, from a design perspective, of having the service add the resource when it is missing:
The service would no longer just need to verify that the policy is correct.
If the resource were missing from the statement, the service would need to update the policy before sending it to IAM.
There would now be the potential for two different versions of a resource policy: the one the user created for editing, and the one sent to IAM.
It increases the potential for user error and accidentally opening up access by attaching a policy to the wrong resource. If we modify the policy statement in the question and drop the Resource and Condition elements, we have a pretty open policy. This could easily be attached to the wrong resource, especially from the CLI or Terraform.
{
  "Sid": "example-statement-ID",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": [
    "*"
  ]
}
Note: I answered this from a general design perspective based on my understanding of how AWS implements access management. How AWS implemented the system might be a little different, but I doubt it, because policy evaluation logic really needs to be optimized for performance, so it's better to do that in one service, IAM, instead of in each service.
Hope that helps.
Extra reading if you are interested in the details of the Policy Evaluation Logic.
You can deny access 6 ways:
Identity policies
Resource policies
Organizational policies (SCPs), if your account is part of an organization
IAM permissions boundaries, if set
Session policies, if used
Implicitly, if there was no allow policy
Here is the complete IAM policy evaluation logic workflow.
There is a policy as you defined.
The resource the policy is applied to: A (I don't know where you will apply this).
The resource named in the policy: B, arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE.
Once you apply the policy to some service, such as an EC2 instance (that is A), the instance can only do SQS:SendMessage against resource B. A and B are totally different.
If you want to restrict resource A so that it can't access other resources and can only access the defined resources, then you have to define a resource such as B in the policy.
Your policy is only valid for that resource B, which is not the resource you applied it to (A).

My image in s3 bucket is accessible to public which is not supposed to be

I am facing a very strange issue, which is either my lack of knowledge or a bug in AWS S3:
I created an S3 bucket which is not accessible to anyone in public, and then I put an image in it. When I try to access that image, it is definitely not visible to everyone, which is good. (So both my bucket and image have no public access.)
Then I added the following bucket policy to it:
{
  "Version": "2012-10-17",
  "Id": "Policy1506624486110",
  "Statement": [
    {
      "Sid": "Stmt1506624421375",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucketname/*"
    }
  ]
}
At this point, based on my understanding, all AWS resources can access this image, but not anyone else in the public. Strangely, I see that anyone in the public, any stranger, can access this image. Can anyone explain what that bucket policy magically does that makes it available to the public?
You're explicitly making your bucket public.
To grant permission to everyone, also referred as anonymous access, you set the wildcard, "*", as the Principal value. For example, if you configure your bucket as a website, you want all the objects in the bucket to be publicly accessible. The following are equivalent:
"Principal":"*"
"Principal":{"AWS":"*"}
http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-bucket-user-policy-specifying-principal-intro.html
The option of using either "AWS" as the key (in a JSON object) or the bare scalar string "*" is presumably for historical reasons, one being an older or newer form than the other, but that doesn't appear to be documented. The object key refers to an authority type, with other documented values including "CanonicalUser", "Federated", and "Service".
There are very few valid use cases for using "*" in a policy, unless additional condition tests in the policy are used to narrow the policy's scope.
Note also that the * is not a true wildcard, here. It's only a placeholder for "everyone." You can't use it in a principal to match a portion of an ARN. For example, "AWS": [ "arn:aws:iam:account-id:user/*" ] does not mean all IAM users in the specified account.
The best practice recommendation is not to use bucket policies when the desired action can be accomplished with user or role policies.
You should be specific about the principal. You can give multiple ARNs instead of "*". Use the bucket policy generator to generate the policy and specify which ARNs you want in the Principal. It would be worth reading the link below:
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
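For example, here's a sketch that scopes the principal to a single IAM user in a specific account instead of everyone (the account ID and user name are placeholders; the bucket name is reused from the question):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSpecificPrincipalRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/some-user"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucketname/*"
    }
  ]
}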

One IAM Role across multiple AWS accounts

For security reasons, we have a pre-prod and a prod AWS account. We're now beginning to use IAM Roles for S3 access to js/css files through django-storage / boto.
While this is working correctly on a per-account basis, a need has now arisen where the QA instance needs to access one S3 bucket on the prod account.
Is there a way to have one IAM role that can grant access to both the pre-prod and prod S3 buckets? As I'm writing this it seems impossible, but it never hurts to ask!
Here's the AWS doc on this: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
Essentially, you have to delegate permissions to one account from the other account using the Principal block of your bucket policy, and then set up your IAM user in the second account as normal.
Example bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account-ID>:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>"
      ]
    }
  ]
}
This works well for read-only access, but there can be issues with write access. Primarily, the account writing the object will still be the owner of that object. When dealing with Write permissions, you'll usually want to make sure the account owning the bucket still has the ability to access objects written by the other account, which requires the object to be written with a particular header: x-amz-grant-full-control
You can set up your bucket policy so that the bucket will not accept cross-account objects that do not supply this header. There's an example of that at the bottom of this page: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html (under "Granting Cross-Account Permissions to Upload Objects While Ensuring the Bucket Owner Has Full Control")
This makes use of a conditional Deny clause in the bucket policy, like so:
{
  "Sid": "112",
  "Effect": "Deny",
  "Principal": {"AWS": "1111111111"},
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::examplebucket/*",
  "Condition": {
    "StringNotEquals": {"s3:x-amz-grant-full-control": ["emailAddress=xyz@amazon.com"]}
  }
}
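A commonly seen variant of this conditional deny keys off the canned ACL header rather than the explicit grant header; here's a sketch (the account ID and bucket name are placeholders):
{
  "Sid": "DenyWithoutBucketOwnerFullControl",
  "Effect": "Deny",
  "Principal": {"AWS": "1111111111"},
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::examplebucket/*",
  "Condition": {
    "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
  }
}
With either form, writers in the other account must send the corresponding header on every PutObject, or the upload is denied.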
I generally avoid cross-account object writes, myself...they are quite fiddly to set up.