AWS SQS Policy not restricting root user

I created some SQS queues as the root user. Now that I want to restrict access via policies, it does not seem to work, even with a test policy like this:
{
  "Version": "2008-10-17",
  "Id": "PolicyDenyTest",
  "Statement": [
    {
      "Sid": "DenyIt",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage",
        "sqs:SendMessage"
      ],
      "Resource": "arn:aws:sqs:us-west-2:xxxxxxxxxx:TST"
    }
  ]
}
I can still send/receive/delete messages in the queue from my local machine. Are policies only effective when the queues are created by an IAM user?

The credentials of the account owner allow full access to all resources in the account. You cannot use IAM policies to explicitly deny the root user access to resources. You can only use an AWS Organizations service control policy (SCP) to limit the permissions of the root user. Because of this, we recommend that you create an IAM user with administrator permissions to use for everyday AWS tasks and lock away the access keys for the root user.
https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html
The root user is an all-powerful identity that can be used to recover access even if you mistakenly deny all access to all your resources. This is a deliberate, well-thought-out design decision, explained in the linked doc.
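As an illustration, here is a minimal sketch using boto3 (the queue URL, account ID, and profile name are hypothetical) of attaching the deny policy and observing that it blocks IAM principals but not the root user:

import json
import boto3

QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/TST"  # hypothetical

deny_policy = {
    "Version": "2008-10-17",
    "Id": "PolicyDenyTest",
    "Statement": [{
        "Sid": "DenyIt",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["sqs:DeleteMessage", "sqs:ReceiveMessage", "sqs:SendMessage"],
        "Resource": "arn:aws:sqs:us-west-2:123456789012:TST",
    }],
}

# Attach the queue policy.
boto3.client("sqs").set_queue_attributes(
    QueueUrl=QUEUE_URL, Attributes={"Policy": json.dumps(deny_policy)}
)

# With IAM-user credentials this should now raise AccessDenied; with root
# credentials the same call still succeeds, as described above.
iam_session = boto3.Session(profile_name="iam-test-user")  # hypothetical profile
iam_session.client("sqs").send_message(QueueUrl=QUEUE_URL, MessageBody="ping")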

Related

Bucket policy to prevent bucket delete

I am looking for a bucket policy that allows only the root account user and the bucket creator to delete the bucket, something like the policy below. How can I restrict deletion to only the bucket creator and root?
{
  "Version": "2012-10-17",
  "Id": "PutObjBucketPolicy",
  "Statement": [
    {
      "Sid": "Prevent bucket delete",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxx:root"
      },
      "Action": "s3:DeleteBucket",
      "Resource": "arn:aws:s3:::test-bucket-s3"
    },
    {
      "Sid": "Prevent bucket delete",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteBucket",
      "Resource": "arn:aws:s3:::test-bucket-s3"
    }
  ]
}
A Deny always beats an Allow. Therefore, with this policy, nobody would be allowed to delete the bucket. (I assume, however, that the root user would be able to do so, since it exists outside of IAM.)
There is no need to assign permissions to the root user, since it can always do anything.
Also, there is no concept of a "bucket creator". The bucket belongs to the account, not to a user.
Therefore:
Remove the Allow section (it does nothing)
Test whether the policy prevents non-root users from deleting it
Test whether the policy still permits the root user to delete it
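A quick way to run those two tests is a sketch like the following, assuming boto3 and two hypothetical named profiles: a non-root IAM user and the root credentials.

import boto3
from botocore.exceptions import ClientError

def try_delete(profile: str) -> None:
    s3 = boto3.Session(profile_name=profile).client("s3")
    try:
        s3.delete_bucket(Bucket="test-bucket-s3")
        print(f"{profile}: bucket deleted")
    except ClientError as err:
        print(f"{profile}: {err.response['Error']['Code']}")

try_delete("iam-user")  # expected: AccessDenied, because of the Deny statement
try_delete("root")      # the answer above assumes root still succeeds here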
There are two different types of permissions in S3:
Resource-based policies
User policies
Bucket policies and access control lists (ACLs) are resource-based; they are attached to the bucket. If all users are in the same AWS account, you can consider a user policy, which is attached to a user or role. If you are dealing with multiple AWS accounts, bucket policies or ACLs are the better fit.
The main difference is that bucket policies let you grant or deny access and apply to all objects in the bucket, while ACLs grant basic read or write permissions and cannot include conditional checks.
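To make the distinction concrete, here is a sketch using boto3 of attaching each kind of policy; the bucket, user name, and policy contents are hypothetical.

import json
import boto3

# Resource-based: a bucket policy is attached to the bucket itself, so it
# needs a Principal and can cover principals from other AWS accounts.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::test-bucket-s3",
    }],
}
boto3.client("s3").put_bucket_policy(
    Bucket="test-bucket-s3", Policy=json.dumps(bucket_policy)
)

# Identity-based: a user policy is attached to an IAM user (or role) in the
# same account and has no Principal element.
user_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::test-bucket-s3",
    }],
}
boto3.client("iam").put_user_policy(
    UserName="alice", PolicyName="S3Access", PolicyDocument=json.dumps(user_policy)
)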


How to get AWS Glue crawler to assume a role in another AWS account to get data from that account's S3 bucket?

There are some CSV data files I need to get from S3 buckets in a series of AWS accounts belonging to a third party. The owner of those accounts has created a role in each account that grants me access to the files. Using the AWS web console (logged in to my own account), I can switch to each role and get the files: one at a time, I switch to the role for an account, get that account's files, then move on to the next account, and so on.
I'd like to automate this process.
It looks like AWS Glue can do this, but I'm having trouble with the permissions.
What I need is to set up permissions so that an AWS Glue crawler can switch to the right role (belonging to each of the other AWS accounts) and get the data files from those accounts' S3 buckets.
Is this possible and if so how can I set it up? (e.g. what IAM roles/permissions are needed?) I'd prefer to limit changes to my own account if possible rather than having to ask the other account owner to make changes on their side.
If it's not possible with Glue, is there some other easy way to do it with a different AWS service?
Thanks!
(I've had a series of tries but I keep getting it wrong - my attempts are so far from being right that there's no point in me posting the details here).
Yes, you can automate your scenario with Glue by following these steps:
Create an IAM role in your AWS account. This role's name must start with AWSGlueServiceRole, but you can append whatever you want. Add a trust relationship for Glue, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "glue.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Attach two IAM policies to your IAM role: the AWS managed policy named AWSGlueServiceRole, and a custom policy that provides the access needed to all the target cross-account S3 buckets, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket1",
        "arn:aws:s3:::examplebucket2",
        "arn:aws:s3:::examplebucket3"
      ]
    },
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::examplebucket1/*",
        "arn:aws:s3:::examplebucket2/*",
        "arn:aws:s3:::examplebucket3/*"
      ]
    }
  ]
}
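If you prefer to script the role creation and policy attachment, here is a minimal sketch using boto3; the role name AWSGlueServiceRoleDefault matches the bucket policy below, and the policy bodies are abbreviated to one example bucket.

import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "glue.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
s3_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "BucketAccess", "Effect": "Allow",
         "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
         "Resource": ["arn:aws:s3:::examplebucket1"]},
        {"Sid": "ObjectAccess", "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": ["arn:aws:s3:::examplebucket1/*"]},
    ],
}

# Create the role with the Glue trust relationship.
iam.create_role(
    RoleName="AWSGlueServiceRoleDefault",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# Attach the AWS managed policy plus the custom cross-account S3 policy.
iam.attach_role_policy(
    RoleName="AWSGlueServiceRoleDefault",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole",
)
iam.put_role_policy(
    RoleName="AWSGlueServiceRoleDefault",
    PolicyName="CrossAccountS3Read",
    PolicyDocument=json.dumps(s3_read_policy),
)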
Add an S3 bucket policy to each target bucket that allows your IAM role the same S3 access that you granted it in your account, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::examplebucket1"
    },
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket1/*"
    }
  ]
}
Finally, create Glue crawlers and jobs in your account (in the same regions as the target cross-account S3 buckets) that will ETL the data from the cross-account S3 buckets to your account.
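A sketch of the crawler creation with boto3; the crawler and database names are hypothetical, and the region should match the target bucket's.

import boto3

glue = boto3.client("glue", region_name="us-west-2")  # match the bucket's region
glue.create_crawler(
    Name="cross-account-csv-crawler",   # hypothetical
    Role="AWSGlueServiceRoleDefault",   # the role created above
    DatabaseName="cross_account_csv",   # hypothetical Glue database
    Targets={"S3Targets": [{"Path": "s3://examplebucket1/"}]},
)
glue.start_crawler(Name="cross-account-csv-crawler")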
Using the AWS CLI, you can create a named profile for each of the roles you want to switch to, then refer to them from the CLI. You can chain these calls, referencing the named profile for each role, and include them in a script to automate the process (see the sketch after the quote below).
From Switching to an IAM Role (AWS Command Line Interface)
A role specifies a set of permissions that you can use to access AWS resources that you need. In that sense, it is similar to a user in AWS Identity and Access Management (IAM). When you sign in as a user, you get a specific set of permissions. However, you don't sign in to a role, but once signed in as a user you can switch to a role. This temporarily sets aside your original user permissions and instead gives you the permissions assigned to the role. The role can be in your own account or any other AWS account. For more information about roles, their benefits, and how to create and configure them, see IAM Roles, and Creating IAM Roles.
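A sketch of that approach, with the profile definitions shown as comments; the profile names, role ARNs, and object keys are hypothetical, and boto3 resolves such profiles the same way the CLI does:

# Assuming ~/.aws/config defines one named profile per third-party role, e.g.:
#
#   [profile partner-a]
#   role_arn = arn:aws:iam::111111111111:role/DataAccessRole
#   source_profile = default
#
# the CLI (and boto3) will call sts:AssumeRole automatically whenever the
# profile is used, so fetching each account's files becomes a simple loop:
import boto3

for profile in ["partner-a", "partner-b"]:  # hypothetical profile names
    s3 = boto3.Session(profile_name=profile).client("s3")
    s3.download_file("examplebucket1", "data/file.csv", f"{profile}-file.csv")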
You can achieve this with AWS Lambda and CloudWatch Events rules.
You can create a Lambda function with a role attached to it; let's call this role Role A. Depending on the number of accounts, you can either create one function per account, with one CloudWatch rule to trigger all the functions, or create a single function for all the accounts (be mindful of AWS Lambda's limits).
Creating Role A
Create an IAM role (Role A) with the following policy, allowing it to assume the roles given to you by the other accounts containing the data:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1509358389000",
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": [
        "<role ARN from data account 1>",
        "<role ARN from data account 2>"
      ]
    }
  ]
}
The Resource list should contain all the IAM role ARNs from the accounts containing the data; if you have one function per account, you can opt to have separate roles instead.
Also, you will need to make sure that the trust relationship policy documents of those roles in the data accounts allow your account (and therefore Role A) to assume them.
Attach Role A to the Lambda functions you will be running (you can use the Serverless Framework for development).
Now your Lambda function has Role A attached to it, and Role A has sts:AssumeRole permission on the roles created in the other accounts.
Assuming you have created one function per account: in your Lambda's code you will first have to use STS to switch to the other account's role, obtain temporary credentials, and pass these to the S3 client before fetching the required data.
If you have created one function for all the accounts, you can keep the role ARNs in an array and iterate over it; again, be aware of AWS Lambda's limits when doing this.
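A minimal sketch of such a handler with boto3; the role ARNs, bucket, and key are hypothetical:

import boto3

DATA_ROLE_ARNS = [
    "arn:aws:iam::111111111111:role/DataAccessRole",
    "arn:aws:iam::222222222222:role/DataAccessRole",
]

def handler(event, context):
    sts = boto3.client("sts")
    for arn in DATA_ROLE_ARNS:
        # Switch to the data account's role and obtain temporary credentials.
        creds = sts.assume_role(
            RoleArn=arn, RoleSessionName="fetch-data"
        )["Credentials"]
        # Build an S3 client from the temporary credentials.
        s3 = boto3.client(
            "s3",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        s3.download_file("examplebucket1", "data/file.csv", "/tmp/file.csv")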

Why does this S3 policy not allow me to download files?

This is the policy I have:
{
  "Version": "2012-10-17",
  "Id": "Policy1477084949492",
  "Statement": [
    {
      "Sid": "Stmt1477084932198",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::__redacted__"
    },
    {
      "Sid": "Stmt1477084947291",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::__redacted__/*"
    }
  ]
}
I am able to view the files in the bucket via aws s3 ls but am not able to download.
My understanding is that these permissions should give full access to any AWS identity.
Question: is there some reason that is not the case here?
Your policy works for me when I test it in my account.
In IAM, a deny overrides an allow, and I suspect that you have a conflicting policy somewhere. Check all user policies, and all groups that the user is a member of, for conflicting policies.
You don't explicitly say you are doing this, but just to cover all bases: if you are running the s3 get on an instance with an IAM role associated with it, check that the IAM role's permissions are appropriate.
Depending on what you are actually doing, this could explain your situation. If you are using an EC2 instance with an IAM role, it will use that IAM role for permissions by default, not your IAM user's permissions. If you run aws configure and explicitly configure it with an IAM-user-issued key and secret, then it will use the IAM user's policies.
Best practice says that if you are performing work on an EC2 instance, where possible and where your use case allows for it, you should not be using keys and secrets on the host but should use an EC2 IAM role.
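A quick sketch to see which identity your credentials actually resolve to, i.e. whose policies are being evaluated (the example ARNs are hypothetical):

import boto3

print(boto3.client("sts").get_caller_identity()["Arn"])
# e.g. arn:aws:iam::123456789012:user/alice            -> IAM user policies apply
# e.g. arn:aws:sts::123456789012:assumed-role/MyRole/i-0abc
#                                                      -> the instance role applies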
Additional Reading:
IAM Policy Evaluation Logic
http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html

One IAM Role across multiple AWS accounts

For security reasons, we have a pre-prod and a prod AWS account. We're now beginning to use IAM Roles for S3 access to js/css files through django-storage / boto.
While this is working correctly on a per-account basis, a need has now arisen where the QA instance needs to access one S3 bucket on the prod account.
Is there a way to have one IAM role that can grant access to both the pre-prod and prod S3 buckets? As I'm writing this it seems impossible, but it never hurts to ask!
Here's the AWS doc on this: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
Essentially, you have to delegate permissions to one account from the other account using the Principal block of your Bucket's IAM policy, and then set up your IAM user in the second account as normal.
Example bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account-ID>:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>"
      ]
    }
  ]
}
This works well for read-only access, but there can be issues with write access. Primarily, the account writing the object will still be the owner of that object. When dealing with Write permissions, you'll usually want to make sure the account owning the bucket still has the ability to access objects written by the other account, which requires the object to be written with a particular header: x-amz-grant-full-control
You can set up your bucket policy so that the bucket will not accept cross-account objects that do not supply this header. There's an example of that at the bottom of this page: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html (under "Granting Cross-Account Permissions to Upload Objects While Ensuring the Bucket Owner Has Full Control")
This makes use of a conditional Deny clause in the bucket policy, like so:
{
  "Sid": "112",
  "Effect": "Deny",
  "Principal": {
    "AWS": "1111111111"
  },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::examplebucket/*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-grant-full-control": [
        "emailAddress=xyz@amazon.com"
      ]
    }
  }
}
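A sketch of a compliant cross-account write with boto3; the bucket, key, and canonical user ID value are placeholders:

import boto3

boto3.client("s3").put_object(
    Bucket="examplebucket",
    Key="shared/report.csv",  # hypothetical key
    Body=b"some,csv,data\n",
    # Supplies the x-amz-grant-full-control header the bucket policy requires,
    # so the bucket owner keeps full control of the written object.
    GrantFullControl="id=<bucket-owner-canonical-user-id>",
)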
I generally avoid cross-account object writes, myself...they are quite fiddly to set up.