Generate S3 Presigned URL in Cross-Account Bucket

I have 2 AWS accounts: prod and dev
In prod, I've provided cross-account access to a bucket, a, to the dev account via an ACL set in the S3 console. I've granted all permissions to the dev account on this bucket.
At this point I can list, add, and remove objects in bucket a under the dev account credentials. I figured I should be able to create presigned URLs as well; however, the URLs I generate always return AccessDenied when opened.
Is there some permission I’ve not included, or something I’m misunderstanding?
I assumed that providing READ access to objects in this bucket from the dev account would allow me to generate presigned URLs that would let my frontend app download the files.
I assumed that providing WRITE access to objects in this bucket from the dev account would allow me to generate presigned URLs that would let my frontend app upload the files.
I've tried another approach as well, using this link to create STS credentials in dev to assume a role I've defined in prod that grants full S3 access to bucket a. Similar results: full ability to list, download, and add objects in the bucket, but my presigned URL still shows a page that says AccessDenied... leading me to believe I'm just not granting the proper permission, but I can't seem to find the docs to tell me which one.
Thanks in advance
EDIT
Policy I'm using
{
    "Sid": "VisualEditor1",
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::a"
}
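For reference, here is a minimal sketch of the presigning call being attempted, assuming boto3 and placeholder names for the prod role and object key. Note that a presigned URL is only as good as the credentials that signed it: the signing identity needs object-level permissions (arn:aws:s3:::a/*), not just permissions on the bucket ARN itself.

import boto3

# Sketch of the assume-role approach described above; the role ARN and key are placeholders
sts = boto3.client('sts')  # called with dev-account credentials
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::PROD_ACCOUNT_ID:role/S3AccessToBucketA',  # hypothetical role
    RoleSessionName='presign-demo'
)['Credentials']

s3 = boto3.client(
    's3',
    aws_access_key_id=assumed['AccessKeyId'],
    aws_secret_access_key=assumed['SecretAccessKey'],
    aws_session_token=assumed['SessionToken']
)

# Presigned download URL for an object in bucket a, valid for one hour
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'a', 'Key': 'some-object.txt'},  # placeholder key
    ExpiresIn=3600
)
print(url)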

Related

Simplest way to setup bucket-per-client in Amazon S3

I want some clients to be able to programmatically upload files for me. Amazon S3 obviously sounds great in terms of availability and durability. But setup seems like such a complicated (and hence, error-prone) step for this: AFAIK I can't avoid creating users, groups, roles, policies...
Is there something simpler that would allow me to create a bucket with a token, so that I can simply give that token to the client w/o wasting time and w/o the risk of clicking something wrong that could lead to problems or security holes?
P.S. I won't need to do that for thousands of clients, just a few.
You can build an automated system to create as many S3 buckets as you need. You can use Terraform to interact with your cloud provider's API and perform these tasks programmatically, and then build other things on top of it (for example, a frontend/backend to create these buckets from your browser, and of course a function that emails the client all the access details for their bucket).
In this repo you can see an example: https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/tree/master/examples/s3-replication
Your first consideration should be how these clients interact with Amazon S3. This will then impact how you allow them to access Amazon S3.
There are several options:
Option 1: Provide IAM User credentials
Normally, IAM credentials should only be given to staff in your own company. However, if you have a small number of well-known clients, you could create an IAM User for each of them.
You can then assign permissions that allow them to access a specific bucket, or a path within a shared bucket, and they can use the AWS CLI to upload/download files, or programmatically via an AWS SDK.
You would need to give them an Access Key + Secret Key to access their S3 storage.
Rather than using separate buckets, you could grant access to a path within a shared bucket with a relatively simple Bucket Policy that grants access based on their IAM Username. See: IAM policy elements: Variables and tags - AWS Identity and Access Management
Option 2: Provide temporary credentials
If they are programmatically accessing AWS, then the clients could:
Programmatically authenticate against your back-end application
The back-end application uses the AWS Security Token Service (STS) to generate temporary credentials and returns them to the client
The client then uses those credentials in the same way as Option 1
The difference with this option is that the clients authenticate to your own back-end rather than using IAM User credentials.
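As a rough illustration of Option 2, the back-end could mint short-lived credentials with STS along these lines. This is a sketch only: the role ARN is a hypothetical role scoped to the client's prefix, and AssumeRole is just one of several ways to issue temporary credentials.

import boto3

sts = boto3.client('sts')

# Hypothetical role limited to this client's prefix in the shared bucket
response = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/client-a-s3-access',
    RoleSessionName='client-a',
    DurationSeconds=900  # shortest allowed lifetime
)

# Return these three values to the client; they expire automatically
credentials = response['Credentials']
print(credentials['AccessKeyId'], credentials['SecretAccessKey'], credentials['SessionToken'])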
Option 3: Pre-signed URLs
Instead of providing credentials to your clients, your back-end app can generate Amazon S3 pre-signed URLs, which are time-limited URLs that provide temporary access to upload or download private objects in Amazon S3.
This allows the back-end to totally control which objects the users can upload or download. For example, think of a photo-sharing application that keeps photos private. When a user wants to view one of their photos, the app can generate a pre-signed URL that grants access to a private object without having to provide AWS credentials.
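A minimal sketch of what generating such a URL might look like with boto3, assuming a hypothetical bucket and key for the photo-sharing example:

import boto3

s3 = boto3.client('s3')

# Hypothetical bucket/key; the back-end decides whether the user may see this photo first
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-private-photos', 'Key': 'users/alice/photo.jpg'},
    ExpiresIn=300  # the link stops working after 5 minutes
)
# Embed url as a link or href in the page returned to the user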
Bottom line
The simplest option is to create an IAM user for each client (Option 1) and provide them credentials. They could then use the AWS Command-Line Interface (CLI) or a program you provide to interact with S3.
If you consider this to be too complex, then you might want to use services like box.com or even Microsoft OneDrive, which provide a more friendly interface on top of storage services.
Thanks to John Rotenstein's great detailed answer with various options, I think I found the simplest available option for my case. It is based on IAM policy elements: Variables and tags and involves creating a single bucket with separate "home folders" in it (one per client). No need for user groups or roles.
Here is a step-by-step guide:
Create a common bucket (let's call it bucket-shared-with-clients).
Create a single (universal) policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::bucket-shared-with-clients"],
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::bucket-shared-with-clients/${aws:username}/*"]
        }
    ]
}
Create IAM user accounts, one per client. The users need Programmatic access enabled; in the Permissions view, simply go for the Attach existing policies directly option (a boto3 sketch for scripting this step is shown below).
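If you prefer to script the per-client user setup rather than clicking through the console, a rough boto3 sketch might look like this; the client name and the ARN of the universal policy created above are placeholders.

import boto3

iam = boto3.client('iam')

# Hypothetical client name and the ARN of the universal policy from step 2
username = 'client-a'
policy_arn = 'arn:aws:iam::111122223333:policy/bucket-shared-with-clients-access'

iam.create_user(UserName=username)
iam.attach_user_policy(UserName=username, PolicyArn=policy_arn)

# Programmatic access: create the access key pair to hand to the client
access_key = iam.create_access_key(UserName=username)['AccessKey']
print(access_key['AccessKeyId'], access_key['SecretAccessKey'])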
Here is an example Python client that uploads a file:
import boto3
ACCESS_KEY = 'YourAccessKeyComesHere'
SECRET_KEY = 'YourSecretKeyComesHere'
USERNAME = 'client-a' # The username is also used as a "home folder", as can be seen below
FILE_TO_UPLOAD = 'some-file.json'
session = boto3.Session(aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
s3 = session.resource('s3')
bucket = s3.Bucket('bucket-shared-with-clients')
key = f'{USERNAME}/{FILE_TO_UPLOAD}' # If USERNAME didn't match the client's IAM User, we would get an AccessDenied error
bucket.upload_file(FILE_TO_UPLOAD, key)

How to give access to an S3 bucket residing in Account A to different IAM users from multiple AWS accounts?

I am working on an AWS SAM project and I have a requirement to give access to my S3 bucket to multiple IAM users from unknown AWS accounts, but I can't make the bucket publicly accessible. I want to secure my bucket while still letting any IAM user from any AWS account access its contents. Is this possible?
Below is the policy I tried, and it works perfectly.
{
    "Version": "2012-10-17",
    "Id": "Policy1616828964582",
    "Statement": [
        {
            "Sid": "Stmt1616828940658",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/STS_Role_demo"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::new-demo-bkt/*"
        }
    ]
}
The above policy is for one user, but I want any user from any other AWS account to be able to access my contents without making the bucket and objects public, so how can I achieve this?
This might be possible using a set of Conditions on the incoming requests.
I can think of two options:
You create an IAM role that your SAM application uses even when running in other accounts
You create S3 bucket policies that allow unknown users access
If you decide to look into S3 bucket policies, I suggest using an S3 Access Point to better manage access policies.
Access points are named network endpoints that are attached to buckets that you can use to perform S3 object operations, such as GetObject and PutObject. Each access point has distinct permissions and network controls that S3 applies for any request that is made through that access point. Each access point enforces a customized access point policy that works in conjunction with the bucket policy that is attached to the underlying bucket.
You can use a combination of S3 Conditions to restrict access. For example, your SAM application could include specific condition keys when making S3 requests, and the bucket policy then allows access based on those conditions.
You can also apply global IAM conditions to S3 policies.
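As a rough sketch of the condition-based approach (not an endorsement of it as strong security, as explained next), a bucket policy keyed on a request header such as aws:UserAgent could be attached with boto3 along these lines. The bucket name and header value are placeholders, and S3 Block Public Access would have to permit a policy with a wildcard principal.

import boto3
import json

s3 = boto3.client('s3')

# Placeholder bucket and header value; anyone who learns the value can spoof it
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::new-demo-bkt/*",
            "Condition": {"StringEquals": {"aws:UserAgent": "my-sam-app/1.0"}}
        }
    ]
}

s3.put_bucket_policy(Bucket='new-demo-bkt', Policy=json.dumps(policy))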
This isn't great security though: malicious actors might be able to figure out the headers and spoof requests to your bucket. As noted in the documentation for some condition keys, such as aws:UserAgent:
This key should be used carefully. Since the aws:UserAgent value is provided by the caller in an HTTP header, unauthorized parties can use modified or custom browsers to provide any aws:UserAgent value that they choose. As a result, aws:UserAgent should not be used to prevent unauthorized parties from making direct AWS requests. You can use it to allow only specific client applications, and only after testing your policy.

Allow access to S3 Bucket from all EC2 instances of specific Account

Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
I would like to provide data that should be very simple for clients to download to their instances. Ideally, automatically via the post_install script option of AWS ParallelCluster.
However, it seems like this requires a lot of setup, as is described in this tutorial by AWS:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-instance-access-bucket/
This is not feasible for me. Clients should not have to create IAM roles.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
export AWS_ACCESS_KEY_ID=<key-id>
export AWS_SECRET_ACCESS_KEY=<secret-key>
aws s3 cp s3://<bucket> . --recursive
Unfortunately, this is also not ideal, as I would like to provide ready-to-use AWS ParallelCluster post_install scripts. These scripts should automatically download the required data on cluster startup.
Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
Yes. It's a 2 step process. In summary:
1) On your side, the bucket must trust the account id of the other accounts that will access it, and you must decide which calls you will allow.
2) On the other accounts that will access the bucket, the instances must be authorised to run AWS API calls on your bucket using IAM policies.
In more detail:
Step 1: let's work through this and break it down.
On your bucket, you'll need to configure a bucket policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "111",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT_ID_TO_TRUST:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
        }
    ]
}
You can find more examples of bucket policies in the AWS documentation here.
WARNING 1: "arn:aws:iam::ACCOUNT_ID:root" will trust everything that has permissions to connect to your bucket on the other AWS account. This shouldn't be a problem for what you're trying to do, but it's best you completely understand how this policy works to prevent any accidents.
WARNING 2: Do not grant s3:* - you will need to scope the permissions down to actions such as s3:GetObject etc. There is a website to help you generate these policies here. s3:* includes delete permissions which, if used incorrectly, could result in nasty surprises.
Now, once that's done, great work - that's things on your end covered.
Step 2: The other accounts that want to read the data will have to assign an instance role to the EC2 instances they launch, and that role will need a policy attached to it granting access to your bucket. Those instances can then run AWS CLI commands on your bucket, provided your bucket policy authorises the call on your side and the instance policy authorises the call on their side.
The policy that needs to be attached to the instance role should look something like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
}
]
}
Keep in mind that just because this policy grants s3:*, it doesn't mean they can do anything on your bucket unless you also have s3:* in your bucket policy. Actions allowed by this policy will be limited to whatever you've scoped the permissions to in your bucket policy.
This is not feasible for me. Clients should not have to create IAM roles.
If they have an AWS account, it's up to them how they choose to access the bucket; as long as you define a bucket policy that trusts their account, the rest is on them. They can create an EC2 instance role and grant it permissions to your bucket, or an IAM User and grant it access to your bucket. It doesn't matter.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
If the code will run on an EC2 instance, it's bad practice to use access keys; the instance should use an EC2 instance role instead.
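As a rough boto3 equivalent of the snippet above, run on an instance that has the role attached, no keys appear in the script; the bucket name is the same placeholder used in the policies, and the object key is hypothetical.

import boto3

# No access keys in the script: credentials come from the attached instance role
s3 = boto3.resource('s3')
bucket = s3.Bucket('YOUR_BUCKET_NAME_HERE')  # same placeholder as in the policies above

# Download a known object (hypothetical key) into the local working directory
bucket.download_file('data/input.csv', 'input.csv')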
Ideally, automatically via CloudFormation on instance startup.
I think you mean via instance userdata, which you can define through CloudFormation.
You say "Clients should not have to create IAM roles". This is perfectly correct.
I presume that you are creating the instances for use by the clients. If so, then you should create an IAM Role that has access to the desired bucket.
Then, when you create an Amazon EC2 instance for your clients, associate the IAM Role to the instance. Your clients will then be able to use the AWS Command-Line Interface (CLI) to access the S3 bucket (list, upload, download, or whatever permissions you put into the IAM Role).
If you want the data to be automatically downloaded when you first create their instance, then you can add User Data script that will execute when the instance starts. This can download the files from S3 to the instance.

S3 - Revoking "full_control" permission from owned object

While writing an S3 server implementation, I ran into a question I can't really find an answer to anywhere.
For example, I'm the bucket owner, as well as the owner of the uploaded object.
If I revoke the "full_control" permission from the object owner (myself), will I still be able to access and modify that object?
What's the expected behaviour in following example:
s3cmd setacl --acl-grant full_control:ownerID s3://bucket/object
s3cmd setacl --acl-revoke full_control:ownerID s3://bucket/object
s3cmd setacl --acl-grant read:ownerID s3://bucket/object
Thanks
So there's the official answer from AWS support:
The short answer for that question would be yes, the bucket/object owner has permission to read and update the bucket/object ACL, provided that there is no bucket policy attached that explicitly removes these permissions from the owner. For example, the following policy would prevent the owner from doing anything on the bucket, including changing the bucket's ACL:
{
    "Id": "Policy1531126735810",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Example bucket policy",
            "Action": "s3:*",
            "Effect": "Deny",
            "Resource": "arn:aws:s3:::<bucket>",
            "Principal": "*"
        }
    ]
}
However, as root (bucket owner) you'd still have permission to delete that policy, which would then restore your permissions as bucket owner to update the ACL.
By default, all S3 resources (buckets, objects and subresources) are private; only the resource owner, which is the AWS account that created it, can access the resource[1]. As the resource owner (AWS account), you can optionally grant permission to other users by attaching an access policy to the users.
Example: let's say you created an IAM user called -S3User1- and gave it permission to create buckets in S3 and update their ACLs. The user in question then goes ahead and creates a bucket and names it "s3user1-bucket". After that, he goes further and removes the List objects, Write objects, Read bucket permissions and Write bucket permissions from the root account in the ACL section. At this point, if you log in as root and attempt to read the objects in that bucket, an "Access Denied" error will be thrown. However, as root you'll be able to go to the "Permissions" section of the bucket and add these permissions back.
These days it is recommended to use the official AWS Command-Line Interface (CLI) rather than s3cmd.
You should typically avoid using object-level permissions to control access. It is best to make them all "bucket-owner full control" and then use Bucket Policies to grant access to the bucket or a path.
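For instance, an uploader writing into a bucket it does not own might set the bucket-owner-full-control canned ACL on each upload, so the bucket owner can then manage access purely with bucket policies. A sketch with hypothetical bucket and key names:

import boto3

s3 = boto3.client('s3')

# Hypothetical bucket and key; the canned ACL hands full control of the
# object to the bucket owner at upload time
s3.put_object(
    Bucket='some-shared-bucket',
    Key='reports/2023/summary.csv',
    Body=b'example data',
    ACL='bucket-owner-full-control'
)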
If you wish to provide per-object access, it is recommended to use Amazon S3 pre-signed URLs, which give time-limited access to a private object. Once the time expires, the URL no longer works. Your application would be responsible for determining whether a user is permitted to access an object, and then generating the pre-signed URL (e.g. as a link or href on an HTML page).

S3 giving someone permission to read and write

I've created an S3 bucket which contains a large number of images. I'm now trying to create a bucket policy which fits my needs. First of all, I want everybody to have read permission so they can see the images. However, I also want to give a specific website permission to upload and delete images. This website is not stored on an Amazon server. How can I achieve this? So far I've created a bucket policy which enables everybody to see the images:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/*"
        }
    ]
}
You can delegate access to your bucket. To do this, the other server will need AWS credentials.
If the other server were an EC2 instance that you owned then you could do this easily by launching it with an IAM role. If the other server were an EC2 instance that someone else owned, then you could delegate access to them by allowing them to assume an appropriate IAM role in your account. But for a non-EC2 server, as seems to be the case here, you will have to provide AWS credentials in some other fashion.
One way to do this is by adding an IAM user with a policy allowing s3:PutObject and s3:DeleteObject on resource "arn:aws:s3:::examplebucket/*", and then give the other server those credentials.
A better way would be to create an IAM role that has the same policy and then have the other server assume that role. The upside is that the credentials must be rotated periodically so if they are leaked then the window of exposure is smaller. To assume a role, however, the other server will still need to authenticate so will need some base IAM user credentials (unless you have some way to get credentials via identity federation). You could add a base IAM user who has permissions to assume the aforementioned role (but has no other permissions) and supply the base IAM user credentials to the other server. When using AssumeRole in this fashion you should require an external ID. You may also be able to restrict the entity assuming this role to the specific IP address(es) of the other server using a policy condition (not 100% sure if this is possible).
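A minimal sketch of that flow, assuming hypothetical names for the base user's keys, the role, and the shared external ID:

import boto3

# Base IAM user whose only permission is sts:AssumeRole on the role below
base_session = boto3.Session(
    aws_access_key_id='BASE_USER_ACCESS_KEY',      # placeholder
    aws_secret_access_key='BASE_USER_SECRET_KEY'   # placeholder
)

credentials = base_session.client('sts').assume_role(
    RoleArn='arn:aws:iam::111122223333:role/examplebucket-writer',  # hypothetical role
    RoleSessionName='website-uploader',
    ExternalId='shared-external-id'  # agreed with the bucket owner out of band
)['Credentials']

s3 = boto3.client(
    's3',
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken']
)

# The temporary credentials now cover s3:PutObject / s3:DeleteObject on the bucket
s3.put_object(Bucket='examplebucket', Key='images/new-image.jpg', Body=b'...')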
The Bucket Policy will work nicely to give everybody read-only access.
To give specific permissions to an application:
Create an IAM User for the application (this also creates access credentials)
Assign a policy to the IAM User that gives the desired permissions (very similar to a Bucket Policy)
The application then makes API calls to Amazon S3 using the supplied access credentials
See also: Amazon S3 Developer Guide