Can I grant access to objects owned by another account in AWS S3 using bucket policies?

I want to control access to an object that was created by another AWS account. Can I do that with bucket policies?
In other words, do bucket policies apply to objects that are owned by another account?
I do not have two AWS accounts, so I cannot test this case myself.

No.
Granting access to objects can only be done from the account that owns the bucket/object.
If you think about it, this makes sense -- you would not want me granting access to your objects. Only the account that owns the bucket/object can do this.

Yes, you can, by creating a bucket policy. A sample policy is below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AccountB-ID:root"
            },
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::examplebucket"
            ]
        }
    ]
}
You can do this through the AWS console, the CLI, or the API. Sample CLI calls, run under a profile for Account B:
aws s3 ls s3://examplebucket --profile AccountBadmin
aws s3api get-bucket-location --bucket examplebucket --profile AccountBadmin
See the AWS documentation on cross-account bucket policies for more details.
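If you prefer to script it, here is a minimal boto3 sketch of the same setup. It assumes you have admin profiles for both accounts configured locally; the profile names and the AccountB-ID placeholder are illustrative, not part of the original answer.

import json
import boto3

# Account A (the bucket owner) attaches the cross-account bucket policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Example permissions",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::AccountB-ID:root"},  # Account B's ID
        "Action": ["s3:GetBucketLocation", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::examplebucket"],
    }],
}
account_a = boto3.Session(profile_name="AccountAadmin")  # assumed profile name
account_a.client("s3").put_bucket_policy(
    Bucket="examplebucket", Policy=json.dumps(policy)
)

# Account B can now list the bucket, mirroring the CLI calls above.
account_b = boto3.Session(profile_name="AccountBadmin")
s3_b = account_b.client("s3")
print(s3_b.get_bucket_location(Bucket="examplebucket"))
for obj in s3_b.list_objects_v2(Bucket="examplebucket").get("Contents", []):
    print(obj["Key"])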

Related

Why some AWS IAM actions require all resources

When creating an IAM policy, if I choose ListAllMyBuckets, the resource becomes "*" rather than "arn:aws:s3:::*". I do not understand this. How can ListAllMyBuckets require resources not in S3? Doesn't "Bucket" mean an S3 bucket?
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
Since s3:ListAllMyBuckets only applies to S3 and no other AWS resource, "*" is shorthand for "arn:aws:s3:::*". In this case both statements are exactly the same.
The actions in IAM policies map directly to AWS API calls. For example, the AWS S3 API has a ListAllMyBuckets call that serves to, surprise: list all the buckets in your account. There is no point in giving permission to "list all your buckets" while at the same time disallowing the listing of some of them.
The same happens with the DescribeInstances API for EC2. If you need to add limitations to your policies, you must use Conditions (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html), but they are not available in all cases.
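To make the Conditions point concrete, here is a small boto3 sketch that attaches an inline policy whose Condition narrows otherwise-broad EC2 describe permissions to one region. The user name and policy name are hypothetical; aws:RequestedRegion is a real global condition key.

import json
import boto3

# Broad action, narrowed by a Condition rather than by Resource ARNs.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:Describe*",
        "Resource": "*",
        "Condition": {"StringEquals": {"aws:RequestedRegion": "us-east-1"}},
    }],
}
boto3.client("iam").put_user_policy(
    UserName="example-user",             # hypothetical user
    PolicyName="DescribeOnlyInUsEast1",  # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)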

AWS S3 bucket access control

In AWS, I (joe.doe#accountXYZ) created an S3 bucket, so I am the bucket's owner.
I want to configure this S3 bucket based on IAM roles, so that only some IAM roles, such as [role_xyz, role_abc, role_cde], can read the bucket.
From the AWS console, it seems that I cannot configure this.
Can anyone tell me whether it is possible to do that?
========
I understand that from the IAM role side you can configure a policy for this S3 resource. But my question is about the S3 resource side: whether I can define an access policy there based on IAM roles.
It appears that your requirement is to permit certain specific roles access to a particular Amazon S3 bucket.
There are two ways to do this:
Option 1: Add permissions to the Role
This is the preferred option. You can add a policy to the IAM Role that grants access to the bucket. It would look similar to:
{
    "Id": "Policy1",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::mybucket",
                "arn:aws:s3:::mybucket/*"
            ]
        }
    ]
}
This is a good method because you just add the policy to the desired Role(s), without having to touch the actual buckets.
Option 2: Add a Bucket Policy
This involves putting the permissions on the bucket, which grants access to a specific role. This is less desirable because you would have to put the policy on every bucket and refer to every Role.
It would look something like:
{
    "Id": "Policy1",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::mybucket",
                "arn:aws:s3:::mybucket/*"
            ],
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/my-role"
            }
        }
    ]
}
Please note that these policies grant s3:* permissions on the bucket, which might be too broad for your purposes. It is always best to grant only the specific, required permissions rather than granting all permissions.
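If you go with Option 1 and have several roles to cover, a short boto3 sketch like this (role names taken from the question, inline policy name hypothetical) attaches the same policy to each role:

import json
import boto3

policy = {
    "Id": "Policy1",
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Statement1",
        "Effect": "Allow",
        "Action": "s3:*",  # consider narrowing this, per the note above
        "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"],
    }],
}
iam = boto3.client("iam")
for role_name in ["role_xyz", "role_abc", "role_cde"]:
    iam.put_role_policy(
        RoleName=role_name,
        PolicyName="mybucket-access",  # hypothetical inline policy name
        PolicyDocument=json.dumps(policy),
    )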

Cross account S3 access through the CloudFormation CLI

I am trying to create a CloudFormation Stack using the AWS CLI by running the following command:
aws cloudformation create-stack --debug --stack-name ${stackName} --template-url ${s3TemplatePath} --parameters '${parameters}' --region eu-west-1
The template resides in an S3 bucket in another account; let's call this account 456. The bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123:root"
                ]
            },
            "Action": [
                "s3:*"
            ],
            "Resource": "arn:aws:s3:::cloudformation.template.eberry.digital/*"
        }
    ]
}
("Action: * " is for debugging).
Now for a twist. I am logged into account 456 and I run
aws sts assume-role --role-arn arn:aws:iam::123:role/delegate-access-to-infrastructure-account-role --role-session-name jenkins
and then set the correct environment variables to access 123. The policy attached to the role that I assume allows Administrator access while I debug -- which still doesn't work.
aws s3api list-buckets
then displays the buckets in account 123.
To summarize:
Specifying a template in an S3 bucket owned by account 456, in the CloudFormation console, while logged into account 123: works.
Specifying a template in an S3 bucket owned by account 123, using the CLI: works.
Specifying a template in an S3 bucket owned by account 456, using the CLI: doesn't work.
The error:
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
I don't understand what I am doing wrong and would be thankful for any ideas. In the meantime I will upload the template to all accounts that will use it.
Amazon S3 provides cross-account access through the use of bucket policies. These are IAM resource policies (which are applied to resources—in this case an S3 bucket—rather than IAM principals: users, groups, or roles). You can read more about how Amazon S3 authorises access in the Amazon S3 Developer Guide.
I was a little confused about which account is which, so instead I'll just say that you need this bucket policy when you want to deploy a template in a bucket owned by one AWS account as a stack in a different AWS account. For example, the template is in a bucket owned by AWS account 111111111111 and you want to use that template to deploy a stack in AWS account 222222222222. In this case, you'll need to be logged in to account 222222222222 and specify that account as the principal in the bucket policy.
The following is an example bucket policy that provides access to another AWS account; I use this on my own CloudFormation templates bucket.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWS_ACCOUNT_ID_WITHOUT_HYPHENS:root"
            },
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::S3_BUCKET_NAME",
                "arn:aws:s3:::S3_BUCKET_NAME/*"
            ]
        }
    ]
}
You'll need to use the 12-digit account identifier for the AWS account you want to provide access to, and the name of the S3 bucket (you can probably use "Resource": "*", but I haven't tested this).
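For completeness, here is a hedged boto3 sketch of the asker's flow once the bucket policy is in place: assume the cross-account role, then create the stack from the template URL. The stack name and template URL are placeholders; the role ARN, session name, and region come from the question.

import boto3

# Assume the role that delegates access into account 123.
creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::123:role/delegate-access-to-infrastructure-account-role",
    RoleSessionName="jenkins",
)["Credentials"]

# Create the stack in account 123 using the temporary credentials.
cfn = boto3.client(
    "cloudformation",
    region_name="eu-west-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
cfn.create_stack(
    StackName="my-stack",  # placeholder
    TemplateURL="https://s3-eu-west-1.amazonaws.com/BUCKET/TEMPLATE.yaml",  # placeholder
)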

How to get AWS Glue crawler to assume a role in another AWS account to get data from that account's S3 bucket?

There are some CSV data files I need to get from S3 buckets in a series of AWS accounts belonging to a third party. The owner of those accounts has created a role in each of them which grants me access to the files; I can use the AWS web console (logged in to my own account) to switch to each role and get the files. One at a time, I switch to the role for each account, get that account's files, then move on to the next account, and so on.
I'd like to automate this process.
It looks like AWS Glue can do this, but I'm having trouble with the permissions.
What I need is to set up permissions so that an AWS Glue crawler can switch to the right role (belonging to each of the other AWS accounts) and get the data files from those accounts' S3 buckets.
Is this possible and if so how can I set it up? (e.g. what IAM roles/permissions are needed?) I'd prefer to limit changes to my own account if possible rather than having to ask the other account owner to make changes on their side.
If it's not possible with Glue, is there some other easy way to do it with a different AWS service?
Thanks!
(I've had a series of tries but I keep getting it wrong - my attempts are so far from being right that there's no point in me posting the details here).
Yes, you can automate your scenario with Glue by following these steps:
Create an IAM role in your AWS account. This role's name must start with AWSGlueServiceRole (you can append whatever you want after that). Add a trust relationship for Glue, such as:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "glue.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Attach two IAM policies to your IAM role: the AWS managed policy named AWSGlueServiceRole, and a custom policy that provides the access needed to all of the target cross-account S3 buckets, such as:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketAccess",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::examplebucket1",
                "arn:aws:s3:::examplebucket2",
                "arn:aws:s3:::examplebucket3"
            ]
        },
        {
            "Sid": "ObjectAccess",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::examplebucket1/*",
                "arn:aws:s3:::examplebucket2/*",
                "arn:aws:s3:::examplebucket3/*"
            ]
        }
    ]
}
Add an S3 bucket policy to each target bucket that allows your IAM role the same S3 access you granted it in your own account, such as:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::examplebucket1"
        },
        {
            "Sid": "ObjectAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket1/*"
        }
    ]
}
Finally, create Glue crawlers and jobs in your account (in the same regions as the target cross-account S3 buckets) that will ETL the data from the cross-account buckets into your account.
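As a rough illustration of that last step, this boto3 sketch (crawler name, database name, and region are placeholders) creates and starts a crawler that uses the role from step 1 against one of the example buckets:

import boto3

glue = boto3.client("glue", region_name="us-east-1")  # same region as the bucket
glue.create_crawler(
    Name="cross-account-csv-crawler",      # hypothetical name
    Role="AWSGlueServiceRoleDefault",      # the role created in step 1
    DatabaseName="cross_account_data",     # hypothetical Glue database
    Targets={"S3Targets": [{"Path": "s3://examplebucket1/"}]},
)
glue.start_crawler(Name="cross-account-csv-crawler")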
Using the AWS CLI, you can create a named profile for each of the roles you want to switch to, then refer to those profiles from the CLI. You can chain these calls, referencing the named profile for each role, and include them in a script to automate the process (see the sketch after the quoted documentation below).
From Switching to an IAM Role (AWS Command Line Interface)
A role specifies a set of permissions that you can use to access AWS resources that you need. In that sense, it is similar to a user in AWS Identity and Access Management (IAM). When you sign in as a user, you get a specific set of permissions. However, you don't sign in to a role, but once signed in as a user you can switch to a role. This temporarily sets aside your original user permissions and instead gives you the permissions assigned to the role. The role can be in your own account or any other AWS account. For more information about roles, their benefits, and how to create and configure them, see IAM Roles, and Creating IAM Roles.
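As a sketch of that approach in script form (profile and bucket names are placeholders, and each profile is assumed to be configured in ~/.aws/config with role_arn and source_profile entries), you can loop over the profiles with boto3:

import boto3

# One named profile per third-party role, mapped to that account's bucket.
accounts = {
    "thirdparty-account1": "examplebucket1",
    "thirdparty-account2": "examplebucket2",
}
for profile, bucket in accounts.items():
    s3 = boto3.Session(profile_name=profile).client("s3")
    for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
        if obj["Key"].endswith(".csv"):
            # Download each CSV locally, flattening any prefix into the name.
            s3.download_file(bucket, obj["Key"], obj["Key"].replace("/", "_"))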
You can achieve this with AWS Lambda and CloudWatch Events rules.
Create a Lambda function with a role attached to it; let's call this role Role A. Depending on the number of accounts, you can either create one function per account, with one CloudWatch rule to trigger all the functions, or create one function for all the accounts (be mindful of the limits of AWS Lambda).
Creating Role A
Create an IAM role (Role A) with the following policy, allowing it to assume the roles given to you by the other accounts containing the data:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1509358389000",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "<IAM role ARN from data account 1>",
                "<IAM role ARN from data account 2>"
            ]
        }
    ]
}
List all of the IAM role ARNs from the accounts containing the data here; if you have one function per account, you can instead give each function its own role.
You will also need to make sure the trust relationships are in place: the roles in the data accounts must trust your account (or Role A) in their trust relationship policy documents, and Role A itself must trust lambda.amazonaws.com so that Lambda can use it.
Attach Role A to the Lambda functions you will be running. You can use the Serverless Framework for development.
Now your Lambda function has Role A attached to it, and Role A has sts:AssumeRole permission over the roles created in the other accounts.
Assuming you have created one function per account: in your Lambda's code, first use STS to switch to the other account's role and obtain temporary credentials, then pass those credentials to the S3 client before fetching the required data.
If you have created one function for all the accounts, you can keep the role ARNs in an array and iterate over it (again, be aware of the limits of AWS Lambda when doing this); a sketch of that variant follows.
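Here is a minimal sketch of that single-function variant. The role ARNs and bucket names are placeholders for the values the other account owners give you; everything else is plain STS and S3.

import boto3

# Placeholder (role ARN, bucket) pairs for the data accounts.
ROLES_AND_BUCKETS = [
    ("arn:aws:iam::111111111111:role/shared-data-role", "examplebucket1"),
    ("arn:aws:iam::222222222222:role/shared-data-role", "examplebucket2"),
]

def handler(event, context):
    sts = boto3.client("sts")
    for role_arn, bucket in ROLES_AND_BUCKETS:
        # Switch to the data account's role and get temporary credentials.
        creds = sts.assume_role(
            RoleArn=role_arn, RoleSessionName="csv-fetch"
        )["Credentials"]
        s3 = boto3.client(
            "s3",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        # Fetch the CSV files into Lambda's writable /tmp space.
        for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
            if obj["Key"].endswith(".csv"):
                s3.download_file(bucket, obj["Key"], "/tmp/" + obj["Key"].split("/")[-1])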

Amazon S3 Bucket Policy: How to lock down access to only your EC2 Instances

I am looking to lock down an S3 bucket for security purposes - I'm storing deployment images in the bucket.
What I want to do is create a bucket policy that supports anonymous downloads over http only from EC2 instances in my account.
Is there a way to do this?
An example of a policy that I'm trying to use (S3 won't let me apply it):
{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::[my bucket name]",
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:ec2:us-east-1:[my account id]:instance/*"
                }
            }
        }
    ]
}
Just to clarify how this is normally done: you create an IAM policy, attach it to a new or existing role, and decorate the EC2 instance with the role. You can also provide access through bucket policies, but that is less precise.
Details below:
S3 buckets are deny-by-default for everyone except the owner. So create your bucket and upload the data. You can verify with a browser that the files are not accessible by trying https://s3.amazonaws.com/MyBucketName/file.ext. This should come back with error code "Access Denied" in the XML. If you get an error code of "NoSuchBucket", you have the URL wrong.
Create an IAM policy based on arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess, which grants the read-only S3 actions with the "Resource" key set to a wildcard. You just modify that resource to be the ARN of your bucket. You have to cover both the bucket and its contents, so it becomes: "Resource": ["arn:aws:s3:::MyBucketName", "arn:aws:s3:::MyBucketName/*"]
Now that you have a policy, you want to decorate your instances with an IAM role that automatically grants them this policy, all without any authentication keys having to be on the instance. So go to Roles, create a new role, make it an Amazon EC2 role, attach the policy you just created, and your role is ready.
Finally, create your instance and add the IAM role you just created. If the machine already has its own role, you will have to merge the two roles into a new one for the machine. If the machine is already running, it won't pick up the new role until you restart it.
Now you should be good to go: the machine has the rights to access the S3 bucket, and you can use the following command to copy files to your instance. Note that you have to specify the region:
aws s3 cp --region us-east-1 s3://MyBucketName/MyFileName.tgz /home/ubuntu
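The same works from code running on the instance; boto3 discovers the role's temporary credentials through the instance metadata service, so nothing is hard-coded (bucket, key, and destination path here mirror the CLI example):

import boto3

# No keys anywhere: the attached IAM role supplies credentials automatically.
s3 = boto3.client("s3", region_name="us-east-1")
s3.download_file("MyBucketName", "MyFileName.tgz", "/home/ubuntu/MyFileName.tgz")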
Please note: "security through obscurity" is only a thing in the movies. Either something is provably secure, or it is insecure.
I used something like:
{
    "Version": "2012-10-17",
    "Id": "Allow only My VPC",
    "Statement": [
        {
            "Sid": "Allow only My VPC",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::{BUCKET_NAME}",
                "arn:aws:s3:::{BUCKET_NAME}/*"
            ],
            "Condition": {
                "StringLike": {
                    "aws:sourceVpc": "{VPC_ID}"
                }
            }
        }
    ]
}
(To match requests coming through a specific VPC endpoint instead, use "aws:sourceVpce": "{VPCe_ENDPOINT}" in the Condition. Keys inside one condition block are ANDed together, so use one key or the other, not both.)