Unlike ARNs for most other AWS services, an S3 resource ARN does not contain an AWS account number.
A few sample ARNs from other services:
-- Elastic Beanstalk application version --
arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/My App/MyEnvironment
-- IAM user name --
arn:aws:iam::123456789012:user/David
-- Amazon RDS instance used for tagging --
arn:aws:rds:eu-west-1:123456789012:db:mysql-db
An S3 bucket ARN, on the other hand, looks like this:
arn:aws:s3:::my_corporate_bucket/exampleobject.png
S3 Bucket ARNs do not require an account number or region since bucket names are unique across all accounts/regions.
The question is "Why does the S3 bucket ARN not contain the AWS account number?", and the answer is that S3 was the first AWS service to be launched and many things have changed since then. S3 simply hasn't yet added the account number to its bucket ARNs. We don't know why that is; it could be technically challenging, or it may just not be a priority for the service team.
One way to validate that the bucket you are uploading objects to actually belongs to you, and thus avoid accidentally leaking data into someone else's bucket, is to use the recently released bucket owner condition:
https://aws.amazon.com/about-aws/whats-new/2020/09/amazon-s3-bucket-owner-condition-helps-validate-correct-bucket-ownership
https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-owner-condition.html
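As a minimal sketch (assuming boto3, and using a placeholder bucket name and account ID), the condition is passed as the ExpectedBucketOwner parameter on S3 API calls; the request is rejected if the bucket is not owned by that account:

import boto3

s3 = boto3.client("s3")

# Placeholder bucket and account ID: the upload fails with an access error
# if "my-corporate-bucket" is not owned by account 111122223333.
s3.put_object(
    Bucket="my-corporate-bucket",
    Key="exampleobject.png",
    Body=b"example payload",
    ExpectedBucketOwner="111122223333",
)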
Another way (where supported) is to use S3 Access Points: https://aws.amazon.com/s3/features/access-points/
The problem with this, however, is that it is still not possible to write a policy that restricts actions to only buckets in my account. The risk is that some user in my account may leak data out by pushing it to another account's bucket.
Related
I have one AWS S3 and Redshift question:
A company uses two AWS accounts for accessing various AWS services. The analytics team has just configured an Amazon S3 bucket in AWS account A for writing data from the Amazon Redshift cluster provisioned in AWS account B. The team has noticed that the files created in the S3 bucket using UNLOAD command from the Redshift cluster are not accessible to the bucket owner user of the AWS account A that created the S3 bucket.
What could be the reason for this denial of permission for resources belonging to the same AWS account?
I tried to reproduce the scenario from the question, but I can't.
I don't understand S3 Object Ownership and Bucket Ownership.
You are not the only person confused by Amazon S3 object ownership. When writing files from one AWS Account to a bucket owned by a different AWS Account, it is possible for the 'ownership' of the objects to remain with the 'sending' account. This causes all sorts of problems.
Fortunately, AWS has introduced a new S3 feature, Object Ownership (the "Edit Object Ownership" setting on a bucket), that resolves all of these issues:
By setting "ACLs disabled" for an S3 Bucket, objects will always be owned by the AWS Account that owns the bucket.
So, you should configure this option on the S3 bucket in AWS account A (the bucket owner's account) and it should all work nicely.
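As a minimal sketch with boto3 (the bucket name is a placeholder for the bucket in account A), BucketOwnerEnforced is the "ACLs disabled" option:

import boto3

s3 = boto3.client("s3")

# Disable ACLs on the bucket so that every new object is owned by the bucket
# owner, regardless of which account uploads it.
s3.put_bucket_ownership_controls(
    Bucket="analytics-bucket-account-a",  # placeholder name for the bucket in account A
    OwnershipControls={
        "Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]
    },
)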
The problem is that the bucket owner in account A does not have access to files that were uploaded by account B. Usually that is solved by specifying the ACL parameter when uploading files (--acl bucket-owner-full-control). Since the upload is done via Redshift, you need to tell Redshift to assume a role in account A for the UNLOAD command, so the files are written by account A and their ownership stays with account A. Check the following page for more examples of configuring cross-account LOAD/UNLOAD: https://aws.amazon.com/premiumsupport/knowledge-center/redshift-s3-cross-account/
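As a hedged sketch of the role-chaining approach (cluster identifier, database, role ARNs, table, and bucket are all placeholders), the UNLOAD statement can chain a role in account B to a role in account A so the objects end up owned by account A:

import boto3

redshift_data = boto3.client("redshift-data")

# Placeholder identifiers: the Redshift cluster lives in account B (222222222222),
# the chained role lives in account A (111111111111) and can write to the bucket,
# and the bucket "analytics-bucket-account-a" belongs to account A.
unload_sql = """
UNLOAD ('SELECT * FROM sales')
TO 's3://analytics-bucket-account-a/sales/'
IAM_ROLE 'arn:aws:iam::222222222222:role/RedshiftUnloadRole,arn:aws:iam::111111111111:role/BucketWriteRole'
FORMAT AS PARQUET;
"""

redshift_data.execute_statement(
    ClusterIdentifier="redshift-cluster-account-b",
    Database="analytics",
    DbUser="awsuser",
    Sql=unload_sql,
)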
Is there an easy way to see the effective access permissions for a specific bucket? To be more specific about the environment: access to buckets is granted through identity policies, and there are more than 170 IAM roles and users and 1000+ policies (not all of them attached to an IAM role or user). I need to see who has the s3:GetObject, s3:PutObject and s3:DeleteObject permissions on a specific bucket. Is there some tool that can give me that kind of report? I could write a script that goes through all roles and the policies attached to them, pulls out the statements that reference the specific bucket, and then cross-references the Allows and Denies, but I'm sure there is a smarter way of doing this.
I am not aware of any better way than the one you described. You can export your IAM settings (unless you already have them in CloudFormation or CDK scripts) as described at https://aws.amazon.com/blogs/security/a-simple-way-to-export-your-iam-settings/.
Then you can scan (manually or programmatically) for the policies of interest and see which users or roles they are attached to.
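If you do go the scripting route, here is a rough sketch with boto3 (the bucket name and the set of actions are assumptions); it only flags candidate statements that mention the bucket and one of the target actions, it does not fully evaluate policy logic such as wildcards or Deny precedence:

import boto3

iam = boto3.client("iam")

BUCKET = "my-corporate-bucket"  # placeholder bucket name
TARGET_ACTIONS = {"s3:getobject", "s3:putobject", "s3:deleteobject"}

def as_list(value):
    # Policy elements may be a single string or a list of strings.
    return value if isinstance(value, list) else [value]

# Walk customer-managed policies that are attached to at least one user/group/role.
paginator = iam.get_paginator("list_policies")
for page in paginator.paginate(Scope="Local", OnlyAttached=True):
    for policy in page["Policies"]:
        version = iam.get_policy_version(
            PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
        )
        document = version["PolicyVersion"]["Document"]  # boto3 decodes this to a dict
        for statement in as_list(document.get("Statement", [])):
            actions = {a.lower() for a in as_list(statement.get("Action", []))}
            resources = as_list(statement.get("Resource", []))
            if actions & TARGET_ACTIONS and any(BUCKET in r for r in resources):
                entities = iam.list_entities_for_policy(PolicyArn=policy["Arn"])
                print(
                    policy["PolicyName"],
                    statement.get("Effect"),
                    [r["RoleName"] for r in entities["PolicyRoles"]],
                    [u["UserName"] for u in entities["PolicyUsers"]],
                )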
From Using Access Analyzer for S3 - Amazon Simple Storage Service:
Access Analyzer for S3 alerts you to S3 buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization. For each public or shared bucket, you receive findings into the source and level of public or shared access. For example, Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket access control list (ACL), a bucket policy, or an access point policy. Armed with this knowledge, you can take immediate and precise corrective action to restore your bucket access to what you intended.
I have the credentials (access key and secret) for an S3 bucket that belongs to a different AWS account. Now I want to create a Database Migration Service task with this S3 bucket from the other account as the source endpoint. Does anybody have an idea what steps I need to take to use this S3 bucket for a migration task?
Regards Gerrit
You don't need the credentials of the other account. You need two (2) things for a resource in one account to use a bucket in an external account.
1. You need to make sure the DMS service access role you specify in the source endpoint has the S3 IAM permissions to read from that bucket. Take a look at Prerequisites When Using Amazon S3 as a Source for AWS DMS.
2. You need to make sure the bucket in the external account allows your account (specifically, the DMS service access role) to access it. This is accomplished with a bucket policy.
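As a minimal sketch of that bucket policy (account ID, role name, and bucket name are placeholders), applied by the bucket-owning account with boto3:

import json
import boto3

s3 = boto3.client("s3")

# Placeholder values: the DMS service access role lives in the account running DMS
# (111122223333), while the bucket "source-data-bucket" belongs to the other account.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDmsRoleToReadSourceData",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/dms-s3-source-access-role"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::source-data-bucket",
                "arn:aws:s3:::source-data-bucket/*",
            ],
        }
    ],
}

s3.put_bucket_policy(Bucket="source-data-bucket", Policy=json.dumps(bucket_policy))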
From Example 1 (How Amazon S3 Authorizes a Request for a Bucket Operation):
In the bucket context, Amazon S3 reviews the bucket policy to determine if the requester has permission to perform the operation. Amazon S3 authorizes the request.
What is the need for evaluating the bucket policy when the requester is the bucket owner (a request made using root credentials)?
If I'm the bucket owner, shouldn't it implicitly mean that I'll always have the permissions to perform any operation on the bucket?
Say company XYZ follows an SOP where every developer who needs an S3 bucket has to send a request to the infra guy sitting in his cabin, looking at monitors.
But should that infra guy have access to the devs' content in their buckets...???
Another example...
If you have rented your house out to me, it doesn't mean you have the right to look into my wardrobe ;)
It's actually possible to store objects in a bucket that are not accessible by the bucket owner. This often happens when one account stores objects in another account's bucket. (I know, sounds weird, doesn't it!)
That's why it's a good idea to use the bucket-owner-full-control ACL when copying files.
See: S3 Bucket Owner Access
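As a minimal sketch with boto3 (bucket names and key are placeholders, and the destination bucket is assumed to still have ACLs enabled), the ACL can be set on the copy request:

import boto3

s3 = boto3.client("s3")

# Copy an object into another account's bucket while granting the bucket owner
# full control, so the destination account can actually read the object.
s3.copy_object(
    CopySource={"Bucket": "my-source-bucket", "Key": "reports/2024.csv"},
    Bucket="other-account-bucket",
    Key="reports/2024.csv",
    ACL="bucket-owner-full-control",
)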
I want to allow a certain AWS account read permission to one of my S3 objects (a file) via a URL.
Is it possible to grant permissions to another AWS account using its AWS account ID? (The account ID is the only information I have about that account.)
Yes, you can do this. You want to use the Principal element.
You can find examples here.
(I know links are generally frowned upon, but AWS technologies change at such a rapid pace that actual examples may be obsolete within days or months)
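As a minimal sketch of such a policy applied with boto3 (the account ID, bucket, and object key are placeholders), the Principal is the other account's ID, which is the only information you need:

import json
import boto3

s3 = boto3.client("s3")

# Grant read access on a single object to every principal in account 444455556666.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOfOneObject",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-corporate-bucket/exampleobject.png",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-corporate-bucket", Policy=json.dumps(policy))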