S3 cross-account data transfer - amazon-web-services

Looking into sharing data from our S3 bucket with an external partner. The plan is to set up an AWS role in our account and share it with the partner; from their system, they would assume that role and access the bucket. The data in our S3 bucket is encrypted at rest.
Say the external vendor, after assuming the role, copies the data from our S3 bucket to their staging environment: how do we ensure that the data in transit is also encrypted?
Our S3 data uses the default SSE-S3 (AES-256) encryption.

You should do a couple of things here:
use cross-account roles to allow them to get temporary credentials (a trust-policy sketch follows this list)
use an S3 bucket policy that blocks access over insecure channels using aws:SecureTransport (see below)
Note: this will not stop them from doing a couple of things that you probably want to avoid:
retrieving your data from outside the AWS Region, which leads to egress charges for you
copying the data elsewhere in an insecure way after they download it
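On the first point, the role's trust policy is what allows the partner's account to assume it. A minimal sketch, assuming the partner's account ID is 111122223333 (a placeholder) and adding an sts:ExternalId condition as a common safeguard against confused-deputy problems; the ExternalId value is also a placeholder:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPartnerAssumeRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "agreed-external-id"
        }
      }
    }
  ]
}
The role's identity policy would then grant s3:GetObject on the bucket, and the partner calls AssumeRole against this role to obtain temporary credentials.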
Example S3 bucket policy:
{
  "Version": "2012-10-17",
  "Id": "id1234",
  "Statement": [
    {
      "Sid": "sid1234",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}

Related

How can I find what external S3 buckets (AWS-owned) are being accessed?

I'm using WorkSpaces Web (not WorkSpaces!) with an S3 VPC endpoint. I would like to restrict S3 access via the S3 endpoint policy to only the buckets required by WorkSpaces Web. I cannot find any documentation that answers this, and AWS support does not seem to know what these buckets are. How can I find out what buckets the service is talking to? I see the requests in VPC flow logs, but that obviously doesn't show which URL or bucket it is trying to talk to. I have tried the same policy used for WorkSpaces (below), but it was not correct (or possibly not enough). I have confirmed that s3:GetObject is the only action needed.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "Access-to-specific-bucket-only",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::aws-windows-downloads-us-east-1/*",
        "arn:aws:s3:::amazon-ssm-us-east-1/*",
        "arn:aws:s3:::amazon-ssm-packages-us-east-1/*",
        "arn:aws:s3:::us-east-1-birdwatcher-prod/*",
        "arn:aws:s3:::aws-ssm-distributor-file-us-east-1/*",
        "arn:aws:s3:::aws-ssm-document-attachments-us-east-1/*",
        "arn:aws:s3:::patch-baseline-snapshot-us-east-1/*",
        "arn:aws:s3:::amazonlinux.*.amazonaws.com/*",
        "arn:aws:s3:::repo.*.amazonaws.com/*",
        "arn:aws:s3:::packages.*.amazonaws.com/*"
      ]
    }
  ]
}

How to allow AWS Textract access to a protected S3 bucket

I have a bucket policy that allows access only from a VPC:
{
  "Version": "2012-10-17",
  "Id": "aksdhjfaksdhf",
  "Statement": [
    {
      "Sid": "Access-only-from-a-specific-VPC",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::zzzz",
        "arn:aws:s3:::zzzz/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpc": "vpc-xxxx"
        }
      }
    }
  ]
}
I'd like to allow traffic coming from AWS Textract to this bucket as well. I've tried various methods, but because of the absolute precedence of an explicit Deny (which I require), I cannot make it work.
Is there a different policy formulation, or a different method altogether, to restrict access to this S3 bucket exclusively to traffic from the VPC AND from the Textract service?
This will not be possible.
In general, it's a good idea to avoid Deny policies, since they override any Allow policy and are notoriously hard to configure correctly.
One option would be to remove the Deny and be very careful about who is granted Allow access to the bucket.
However, if this is too hard (e.g. Admins are given access to all buckets by default), then a common practice is to move sensitive data to an S3 bucket in a different AWS Account and only grant cross-account access to specific users.
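As a rough sketch of that second option, the bucket policy in the other account would name the specific cross-account principals; the account ID, user name, and bucket name below are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSpecificCrossAccountUser",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/sensitive-data-reader"
      },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::sensitive-bucket",
        "arn:aws:s3:::sensitive-bucket/*"
      ]
    }
  ]
}
Note that for cross-account access the user also needs a matching Allow in an IAM policy in their own account; both sides must permit the request.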

How to give access to one specific bucket?

I have an Amazon AWS account and I'm using Amazon S3.
I'd like to give specific people access to an Amazon S3 bucket.
Here's what I'd like to do :
Amazon AWS: Access limited to my account
Amazon S3: Access limited to my account
Bucket "website-photos": Access authorized to 3 people that will be able to read and write in the bucket through AWS management console.
Files in the bucket "website-photos": Public can read them.
How can I setup this config?
Just create an IAM policy and attach to the users you want to give access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucket-name"]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Action": "s3:*Object",
      "Resource": ["arn:aws:s3:::bucket-name/*"]
    }
  ]
}
See: Amazon S3: Allows Read and Write Access to Objects in an S3 Bucket - AWS Identity and Access Management
The general approach is:
If you want something to be "public" (accessible by anyone), then use a Bucket Policy (a public-read sketch follows this list)
If you want to assign permissions only to a specific IAM User, then attach a policy to the IAM User
If you want to assign permissions only to a group of IAM Users, then create an IAM Group, attach a policy to it, and assign the group to the desired IAM Users
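For the "public can read them" requirement, the bucket policy would be roughly the following sketch, using the bucket name from the question (on newer accounts, S3 Block Public Access must also be relaxed on the bucket before a public policy takes effect):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::website-photos/*"
    }
  ]
}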

Restrict Amazon S3 access to single HTTPS host

I want to proxy an Amazon S3 bucket through our reverse proxy (Nginx).
For higher security, I want to forbid read access to the bucket to anything except the HTTPS host on which I run the proxy.
Is there a way to configure Amazon S3 for this task?
Please provide the configuration.
I considered adding a password to the S3 bucket name, but that is not a solution, because we also need signed uploads to the bucket, so the bucket name will be publicly available.
If your reverse proxy has a Public IP address, then you would add this policy to the Amazon S3 bucket:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "54.240.143.22/32"
        }
      }
    }
  ]
}
This grants permission to GetObject when the request comes from the specified IP address. Amazon S3 is private by default, so this grants access only in that particular situation. You will also want to grant access to IAM Users/Groups (via IAM, not a Bucket Policy) so that bucket content can be updated.
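That update access could be a small identity policy attached to the relevant IAM Users or Group. A sketch, assuming that uploading and deleting objects is what "updated" means here, and reusing the bucket name from the policy above:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowContentUpdates",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}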
See: Bucket Policy Examples - Amazon Simple Storage Service

Only allow EC2 instance to access static website on S3

I have a static website hosted on S3, and I have set all files to be public.
Also, I have an EC2 instance with nginx that acts as a reverse proxy and can access the static website, so S3 plays the role of the origin.
What I would like to do now is set all files on S3 to be private, so that the website can only be accessed by traffic coming from the nginx (EC2).
So far I have tried the following. I created a new IAM role and attached it to the EC2 instance, with
Policies Granting Permission: AmazonS3ReadOnlyAccess
and I have rebooted the EC2 instance.
I then created a policy in my S3 bucket console > Permissions > Bucket Policy:
{
  "Version": "xxxxx",
  "Id": "xxxxxxx",
  "Statement": [
    {
      "Sid": "xxxxxxx",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::XXX-bucket/*"
    }
  ]
}
As the principal, I set the ARN I got when I created the role for the EC2 instance.
"Principal": {
  "AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
},
However, this does not work; any help is appreciated.
If the Amazon EC2 instance with nginx is merely making generic web requests to Amazon S3, then the question becomes how to identify requests coming from nginx as 'permitted', while rejecting all other requests.
One method is to use a VPC Endpoint for S3, which allows direct communication from a VPC to Amazon S3 (rather than going out an Internet Gateway).
A bucket policy can then restrict access to the bucket such that it can only be accessed via that endpoint.
Here is a bucket policy from Example Bucket Policies for VPC Endpoints for Amazon S3:
The following is an example of an S3 bucket policy that allows access to a specific bucket, examplebucket, only from the VPC endpoint with the ID vpce-1a2b3c4d. The policy uses the aws:sourceVpce condition key to restrict access to the specified VPC endpoint.
{
  "Version": "2012-10-17",
  "Id": "Policy",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      },
      "Principal": "*"
    }
  ]
}
So, the complete design would be:
Object ACL: Private only (remove any current public permissions)
Bucket Policy: As above
IAM Role: Not needed
Route Table configured for VPC Endpoint
Permissions in Amazon S3 can be granted in several ways:
Directly on an object (known as an Access Control List or ACL)
Via a Bucket Policy (which applies to the whole bucket, or a directory)
To an IAM User/Group/Role
If any one of the above grants access, then the object can be accessed; the permissions are additive.
Your scenario requires the following configuration:
The ACL on each object should not permit public access
There should be no Bucket Policy
You should assign permissions in the Policy attached to the IAM Role
Whenever you have permissions relating to a User/Group/Role, it is better to assign the permission in IAM rather than on the Bucket. Use Bucket Policies for general access to all users.
The policy on the Role would be:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
This policy is applied directly to the IAM Role, so there is no need for a Principal element.
Please note that this policy only allows GetObject -- it does not permit listing of buckets, uploading objects, etc.
You also mention that "I have set all files to be public". If you did this by making each individual object publicly readable, then anyone will still be able to access the objects. There are two ways to prevent this: either remove the permissions from each object, or create a Bucket Policy with a Deny statement that blocks access while still permitting the Role to get access (sketched below).
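Such a Deny statement might look roughly like the following, using the aws:PrincipalArn condition key to exempt the role; treat this as an untested sketch, with the account ID, role name, and bucket name taken from the placeholders above:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllButRole",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
        }
      }
    }
  ]
}
Because the condition key is absent for anonymous requests, StringNotEquals evaluates to true and the Deny still applies to the public.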
That's starting to get a bit tricky and hard to maintain, so I'd recommend removing the permissions from each object. This can be done via the management console by editing the permissions on each object, or by using the AWS Command-Line Interface (CLI) with a command like:
aws s3 cp s3://my-bucket s3://my-bucket --recursive --acl private
This copies the files in-place but changes the access settings.
(I'm not 100% sure whether to use --acl private or --acl bucket-owner-full-control, so play around a bit.)