AWS IAM user-based policy vs resource-based policy vs both

Let's assume a user-based IAM policy, i.e. one that can be attached to a user, group, or role.
Let's say one that gives full access to a DynamoDB table:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "dynamodb:*",
    "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
  }
}
Based on this policy, any user who somehow ends up with that policy attached to them (by assuming a role, or directly, for example) gets full access to that DynamoDB table.
Question 1: Is it worth having a resource-based policy on the other end, i.e. on the DynamoDB table, to complement the user-based policy?
Example:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:user/bob"},
    "Action": "dynamodb:*",
    "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
  }
}
The motivation here is that the previous policy might end up being attached to someone by accident, and using the resource-based one would ensure that only user Bob is ever granted these permissions.
Question 2: Would using only the stricter resource-based policy perhaps be preferable?
Question 3: In general, are there any best practices / patterns for picking between user-based vs resource-based policies (for the services that support resource-based policies that is)?

Answer 0: DynamoDB does not support resource-based policies.
The console GUI makes it look as if it does, but the API has no operation for it.
And the documentation is clear: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/access-control-overview.html#access-control-manage-access-resource-based
Answer 1: Do not use IAM and resource policies on the same resource
The challenge with access control is maintaining it over the long run:
Permissions for new hires must be set up correctly and swiftly (they want to work!)
Permissions for leavers must be removed swiftly (for whatever reason)
Someone has to regularly review the permissions and approve them
All three tasks above are much easier if there is only a single place to look. And use "Effect": "Deny" if you want to restrict access (see the sketch below).
Any "accidental assignment" would be caught by the review.
Answer 1b:
Of course it depends on the use case (e.g. a four-eyes principle can demand it). And some permissions cannot be set in IAM (e.g. "Everyone") and must be set on the resource. Also, if you destroy and recreate the resource, the resource-based permissions disappear with it.
Answer 2: IAM policy is easier to manage
If the situation allows both an IAM policy and a resource policy, they have the same grammar and can be made equally strict, at least in your case. All else being equal, IAM policies are much easier to manage.
Answer 3: Best practice
Unfortunately, I am not aware of a best practice issued by AWS, apart from "least privilege" of course. I suggest you go with whatever best practice you follow for maintainability of permissions outside of AWS.

It depends on whether you are making a request within the same AWS account or a cross-account request.
Within the same AWS account, meaning your user belongs to the AWS account that owns the resource (S3, SQS, SNS, etc.), either an identity-based policy (user, group, role) OR a resource-based policy (SQS, SNS, S3, API Gateway) is sufficient. Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow
However, when you are delegating access across AWS accounts, it can vary. For API Gateway, for example, you need an explicit allow from both the identity-based policy and the resource-based policy.
Source: API Gateway Authorization Flow
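To illustrate the resource-based half of that (a hedged sketch; the account IDs, region, and API ID are placeholders), an API Gateway resource policy granting another account invoke access might look like this, while the caller's identity-based policy in account 111122223333 would additionally need to allow execute-api:Invoke:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "HypotheticalCrossAccountInvoke",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-west-2:123456789012:my-api-id/*"
    }
  ]
}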

The answer to all of your questions is: it depends.
Both IAM policies and resource policies are equally important; it depends on the use case.
Say you want to grant permissions to AWS managed services, such as allowing CloudFront to read an S3 bucket; it's better to use resource policies.
But for uploading or changing content, it's better to go via IAM policies.
In simple terms: it's better to use IAM policies when granting access from users, external systems, or user-managed instances, and resource policies when granting access between AWS managed services.
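For example, a bucket policy letting CloudFront read a bucket might look like the following sketch (assuming a distribution using origin access control; the bucket name and distribution ARN are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontReadExample",
      "Effect": "Allow",
      "Principal": {"Service": "cloudfront.amazonaws.com"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EXAMPLEID"
        }
      }
    }
  ]
}
The SourceArn condition pins the grant to one specific distribution rather than to the CloudFront service as a whole.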

Related

How to give access of s3 bucket residing in Account A to different iam users from multiple aws accounts?

I am working on an AWS SAM project and I have a requirement to give access to my S3 bucket to multiple IAM users from unknown AWS accounts, but I can't make the bucket publicly accessible. I want to secure my bucket while still letting any IAM user from any AWS account access its contents. Is this possible?
Below is the policy I tried, and it worked perfectly.
{
  "Version": "2012-10-17",
  "Id": "Policy1616828964582",
  "Statement": [
    {
      "Sid": "Stmt1616828940658",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/STS_Role_demo"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::new-demo-bkt/*"
    }
  ]
}
The above policy is for one user, but I want any user from other AWS accounts to access my contents without making the bucket and objects public, so how can I achieve this?
This might be possible using a set of Conditions on the incoming requests.
I can think of two options:
You create an IAM role that your SAM application uses even when running in other accounts
You create S3 bucket policies that allow unknown users access
If you decide to look into S3 bucket policies, I suggest using an S3 Access Point to better manage access policies.
Access points are named network endpoints that are attached to buckets that you can use to perform S3 object operations, such as GetObject and PutObject. Each access point has distinct permissions and network controls that S3 applies for any request that is made through that access point. Each access point enforces a customized access point policy that works in conjunction with the bucket policy that is attached to the underlying bucket.
You can use a combination of S3 Conditions to restrict access. For example, your SAM application could include specific condition keys when making S3 requests, and the bucket policy then allows access based on those conditions.
You can also apply global IAM conditions to S3 policies.
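A minimal sketch of such a condition-based bucket policy (the aws:UserAgent value my-sam-app/1.0 is a made-up string that your application would have to send with every request):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "HypotheticalConditionalRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::new-demo-bkt/*",
      "Condition": {
        "StringEquals": {"aws:UserAgent": "my-sam-app/1.0"}
      }
    }
  ]
}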
This isn't great security though; malicious actors might be able to figure out the headers and spoof requests to your bucket. As noted for some condition keys, such as aws:UserAgent:
This key should be used carefully. Since the aws:UserAgent value is provided by the caller in an HTTP header, unauthorized parties can use modified or custom browsers to provide any aws:UserAgent value that they choose. As a result, aws:UserAgent should not be used to prevent unauthorized parties from making direct AWS requests. You can use it to allow only specific client applications, and only after testing your policy.

AWS - Granting access to all resources with specific tags

I'm trying to create an IAM group that can fully access all resources that have certain tags.
For example, if an S3 bucket and an EC2 instance are tagged env:qa, the group project-qa should have full access to them.
So far I've tried the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/env": "qa"
        }
      }
    }
  ]
}
I created an account to test this, but when browsing buckets I was immediately told that I lack permissions for the s3:ListAllMyBuckets action, which I assumed would be covered by "Action": "*".
Yes, the ListAllMyBuckets action is covered by the *, but its "resource" is not tagged, because there is no actual resource behind it; therefore you are not allowed to perform that operation. You either have ListAllMyBuckets on * or you don't; there is no way to restrict it per bucket, because you are not listing a bucket, you are listing all buckets, and this "all buckets" has no tag. It does not really have anything; it does not really exist. See https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html: the resource types entry for ListAllMyBuckets is empty; there is no actual resource to interact with.
Listing objects in a bucket can work based on tags, but listing all the buckets in the first place cannot. The same thing happens in a lot of places: listing does not respect the permissions of the resources being listed. Look at it this way: you may not be allowed to do anything with a bucket (you cannot browse it or configure it), but you are still allowed to see it in the list.
The only way around this is to add these special permissions one by one, as you discover that you are missing them.
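A sketch of what that might end up looking like (the tag-scoped statement from the question, plus explicitly added list/describe actions; ec2:Describe* is just one example of actions that do not support resource-level restrictions):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TagScopedFullAccess",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/env": "qa"
        }
      }
    },
    {
      "Sid": "ActionsWithoutTaggableResources",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}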
Note that there are actual AWS resources that are not taggable and therefore you will not succeed with that strategy at all.
Be very careful with such a broad allow-everything policy: you are granting access to destroy resources (for example, route53:DeleteHostedZone), and, as #luk2302 pointed out, many actions do not follow the tag-based condition at all.
This was a hard-learned lesson for me. Keep the document AWS services that work with IAM handy while writing policies.
It gives you an idea of which services support which kinds of restrictions, such as:
Resource-level permissions
Resource-based policies
Authorization based on tags
Temporary credentials
Service-linked roles
To be frank, if I were you, I would selectively allow only the actions I want, since in IAM all actions are denied by default.
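For instance, here is a hedged sketch of such a selective policy (the actions chosen are illustrative; ec2:StartInstances and ec2:StopInstances support the resource-tag condition, while ec2:DescribeInstances has to be allowed unconditionally):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TagScopedStartStop",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/env": "qa"
        }
      }
    },
    {
      "Sid": "UnconditionalDescribe",
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}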

Terraform: what does AssumeRole: Service: ec2 do?

What exactly does this AWS role do?
The most relevant bits seem to be:
"Action": "sts:AssumeRole", and
"Service": "ec2.amazonaws.com"
The full role is here:
resource "aws_iam_role" "test_role" {
name = "test_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
From: https://www.terraform.io/docs/providers/aws/r/iam_role.html
To understand the meaning of this it is necessary to understand some details of how IAM Roles work.
An IAM role is similar to a user in its structure, but rather than it being accessed by a fixed set of credentials it is instead used by assuming the role, which means to request and obtain temporary API credentials that allow taking action with the privileges that are granted to the role.
The sts:AssumeRole action is the means by which such temporary credentials are obtained. To use it, a user or application calls this API using some already-obtained credentials, such as a user's fixed access key, and it returns (if permitted) a new set of credentials to act as the role. This is the mechanism by which AWS services can call into other AWS services on your behalf, by which IAM Instance Profiles work in EC2, and by which a user can temporarily switch access level or accounts within the AWS console.
The assume role policy determines which principals (users, other roles, AWS services) are permitted to call sts:AssumeRole for this role. In this example, the EC2 service itself is given access, which means that EC2 is able to take actions on your behalf using this role.
This role resource alone is not useful, since it doesn't have any IAM policies associated and thus does not grant any access. Thus an aws_iam_role resource will always be accompanied by at least one other resource to specify its access permissions. There are several ways to do this:
Use aws_iam_role_policy to attach a policy directly to the role. In this case, the policy will describe a set of AWS actions the role is permitted to execute, and optionally other constraints (a sketch of such a policy document follows this list).
Use aws_iam_policy to create a standalone policy, and then use aws_iam_policy_attachment to associate that policy with one or more roles, users, and groups. This approach is useful if you wish to attach a single policy to multiple roles and/or users.
Use service-specific mechanisms to attach policies at the service level. This is a different way to approach the problem, where rather than attaching the policy to the role, it is instead attached to the object whose access is being controlled. The mechanism for doing this varies by service, but for example the policy attribute on aws_s3_bucket sets bucket-specific policies; the Principal element in the policy document can be used to specify which principals (e.g. roles) can take certain actions.
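As an example of the first approach, the policy document supplied to aws_iam_role_policy might look like the following sketch (hypothetical; the bucket name is a placeholder). Note how it differs from the assume role policy above: this one says what the role can do, while the assume role policy says who may become the role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IllustrativeBucketRead",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/*"
      ]
    }
  ]
}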
IAM is a flexible system that supports several different approaches to access control. Which approach is right for you will depend largely on how your organization approaches security and access control concerns: managing policies from the role perspective, with aws_iam_role_policy and aws_iam_policy_attachment, is usually appropriate for organizations that have a centralized security team that oversees access throughout an account, while service-specific policies delegate the access control decisions to the person or team responsible for each separate object. Both approaches can be combined, as part of a defense in depth strategy, such as using role- and user-level policies for "border" access controls (controlling access from outside) and service-level policies for internal access controls (controlling interactions between objects within your account).
More details on roles can be found in the AWS IAM guide IAM Roles. See also Access Management, which covers the general concepts of access control within IAM.

How should I set up my bucket policy so I can deploy to S3?

I've been working on this a long time and I am getting nowhere.
I created a user and it gave me
AWSAccessKeyId
AWSSecretKey
I created a bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObjectAcl",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::abc9876/*"
    }
  ]
}
Now when I use a gulp program to upload to the bucket I see this:
[20:53:58] Starting 'deploy'...
[20:53:58] Finished 'deploy' after 25 ms
[20:53:58] [cache] app.js
Process terminated with code 0.
To me it looks like it should have worked, but when I go to the console I cannot see anything in my bucket.
Can someone tell me if my bucket policy looks correct and give me some suggestions on what I could do to test the uploading? Could I, for example, test this from the command line?
There are multiple ways to manage access control on S3. These different mechanisms can be used simultaneously, and the authorization of a request will be the result of the interaction of all the rules in all these mechanisms. Things can get confusing!
Let's try to make things easier to understand. You have:
IAM policies - these are policies you define for specific Users or Groups (or Roles, but let's not get into that...).
S3 bucket policies - these are policies that you define at the bucket level.
S3 ACLs (access control lists) - these are rules that you define both at the bucket level and the object level. This is the "permissions" area mentioned in a comment on another answer.
Whenever you send a request to S3, e.g. downloading an object, the request will be processed by an authorization system. This system will calculate the union of all the policies/rules described above, and then will follow a process that can be simplified as follows:
If there is any rule explicitly denying the request, it's denied. Period.
Otherwise, if there is any rule explicitly allowing the request, it's allowed. Period.
Otherwise, the request is denied.
Let's say you have all these mechanisms in place. For a request to be accepted, there must not be any rule denying it, and there must be at least one rule allowing it.
Making your policies easier to understand...
My suggestion to you is to simplify your policies. Choose one access control mechanism and stick to it.
In your specific situation, from your very brief description, I feel that using IAM policies could be a good idea. You can use either an IAM user policy (that you define and attach specifically to your IAM user) or an IAM group policy (that you define and attach to a group your IAM user belongs to). Let's forget about IAM roles; that is a whole different story.
Then delete your ACLs and bucket policies. Your requests should then be allowed.
As an additional hint, make sure the software you are using to upload objects to S3 is actually using those two API calls: PutObject and PutObjectAcl. Keep in mind that S3 supports multipart upload through a different set of API calls. If your tool does multipart uploads under the hood, be sure to allow those API calls as well (many tools will, including the AWS CLI, and many SDKs have a higher-level S3 API that will do that too)!
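To make that concrete, an IAM user policy for this upload scenario might look like the following sketch (the bucket name is taken from the question; the multipart-related actions are included on the assumption that your tool may use multipart uploads):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadWithMultipartExample",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::abc9876/*"
    }
  ]
}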
For more information on this matter, I'd suggest the following post from the AWS Security Blog:
IAM policies and Bucket Policies and ACLs! Oh My! (Controlling Access to S3 Resources)
You don't need to define "Principal": "*", since you have already created an IAM user.
The bucket policy looks fine; if there were a problem with access, it would have given you an appropriate error.
Just make sure your key name is correct when calling the AWS APIs; the key name uniquely identifies the object within a bucket.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html

IAM access to EC2 REST API?

I'm new to AWS. My client uses AWS to host his EC2 instances. Right now, we are trying to get me API access. Obviously, I need my authentication details to do this.
He set me up an IAM identity under his account, so I can login to the AWS web console and configure EC2 instances. I cannot, however, for the life of me, figure out where my API access keys are displayed. I don't have permissions to view 'My Account', which is where I imagine they'd be displayed.
So, what I'm asking, is how can he grant me API access through his account? How can I access the AWS API using my IAM identity?
Michael - sqlbot's answer is correct (+1), but not entirely complete given the comparatively recent but highly useful addition of Variables in AWS Access Control Policies:
Today we're extending the AWS access policy language to include support for variables. Policy variables make it easier to create and manage general policies that include individualized access control.
This enables implementation of an 'IAM Credentials Self Management' group policy, which would usually be assigned to the most basic IAM group like the common 'Users'.
Please note that the following solution still needs to be implemented by the AWS account owner (or an IAM user with permissions to manage IAM itself), but this needs to be done once only to enable credentials self management by other users going forward.
Official Solution
A corresponding example is included in the introductory blog post (and was later also available as Allow a user to manage his or her own security credentials in the IAM documentation; update: that example has since vanished again, presumably because it is applicable only via custom solutions using the API and was thus confusing):
Variable substitution also simplifies allowing users to manage their own credentials. If you have many users, you may find it impractical to create individual policies that allow users to create and rotate their own credentials. With variable substitution, this becomes trivial to implement as a group policy. The following policy permits any IAM user to perform any of the key and certificate related actions on their own credentials. [emphasis mine]
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iam:*AccessKey*", "iam:*SigningCertificate*"],
      "Resource": ["arn:aws:iam::123456789012:user/${aws:username}"]
    }
  ]
}
The resource scope arn:aws:iam::123456789012:user/${aws:username} ensures that every user is effectively only granted access to his own credentials.
Please note that this solution still has usability flaws depending on how AWS resources are accessed by your users, i.e. via API, CLI, or the AWS Management Console (the latter requires additional permissions for example).
Also, the various * characters are wildcards, so iam:*AccessKey* addresses all IAM actions containing AccessKey (see IAM Policy Elements Reference for details).
Extended Variation
Disclaimer: The correct configuration of IAM policies affecting IAM access in particular is obviously delicate, so please make your own judgement concerning the security impact of the following solution!
Here's a more explicit and slightly extended variation, which includes AWS Multi-Factor Authentication (MFA) device self management and a few usability enhancements to ease using the AWS Management Console:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iam:CreateAccessKey",
        "iam:DeactivateMFADevice",
        "iam:DeleteAccessKey",
        "iam:DeleteSigningCertificate",
        "iam:EnableMFADevice",
        "iam:GetLoginProfile",
        "iam:GetUser",
        "iam:ListAccessKeys",
        "iam:ListGroupsForUser",
        "iam:ListMFADevices",
        "iam:ListSigningCertificates",
        "iam:ListUsers",
        "iam:ResyncMFADevice",
        "iam:UpdateAccessKey",
        "iam:UpdateLoginProfile",
        "iam:UpdateSigningCertificate",
        "iam:UploadSigningCertificate"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:iam::123456789012:user/${aws:username}"
      ]
    },
    {
      "Action": [
        "iam:CreateVirtualMFADevice",
        "iam:DeleteVirtualMFADevice",
        "iam:ListVirtualMFADevices"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:iam::123456789012:mfa/${aws:username}"
    }
  ]
}
"You" can't, but:
In IAM, under Users, after he selects your user, he needs to click Security Credentials > Manage Access Keys, and then choose "Create Access Key" to create an API key and its associated secret for your IAM user. On the next screen, there's a message:
Your access key has been created successfully.
This is the last time these User security credentials will be available for download.
You can manage and recreate these credentials any time.
Where "manage" means "deactivate or delete," and "recreate" means "start over with a new one." The IAM admin can subsequently see the keys, but not the associated secrets.
From that screen, and only from that screen, and only right then, the IAM admin can view both the key and the secret associated with it, or download them to a CSV file. Subsequently, anyone with appropriate privileges can see the keys for a user within IAM, but the secret can never be viewed again after this one chance (and it would be pretty preposterous if it could).
So, your client needs to go into IAM, under the user he created for you, and create an API key/secret pair, save the key and secret, and forward that information to you via an appropriately-secure channel... if he created it but didn't save the associated secret, he should delete the key and create a new one associated with your username.
If you don't have your own AWS account, you should sign up for one so you can go into the console with full permissions as yourself and understand the flow... it might make more sense than my description.