Django Storage S3 Bucket Access with IAM Role

I have an EC2 instance with an IAM role attached. That role has full S3 access. The AWS CLI works perfectly, and so does the curl check against the metadata URL that returns the temporary access and secret keys.
I have also read that when the access and secret keys are missing from the settings module, boto will automatically fetch the temporary keys from the metadata URL.
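(For context, a minimal settings sketch for this kind of setup, assuming a recent django-storages with its S3 backend; the bucket name is a placeholder, and the access keys are deliberately omitted so boto falls back to the instance metadata credentials.)
# settings.py -- sketch assuming django-storages' S3 backend.
# No AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY here: boto should fall back
# to the temporary credentials served by the EC2 instance metadata service.
INSTALLED_APPS = [
    # ... the usual Django apps ...
    "storages",
]

AWS_STORAGE_BUCKET_NAME = "my-bucket"  # placeholder
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
STATIC_URL = "https://%s.s3.amazonaws.com/" % AWS_STORAGE_BUCKET_NAME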
However, I cannot access the CSS/JS files stored in the bucket from the browser. When I add a bucket policy allowing a Principal of *, everything works.
I tried the following policy:
{
    "Version": "2012-10-17",
    "Id": "PolicyNUM",
    "Statement": [
        {
            "Sid": "StmtNUM",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account-id:role/my-role"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
But all CSS/JS requests are still getting 403s. What can I change to make this work?

Requests from your browser don't include the required authorization headers, which boto is adding for you elsewhere. Without them, the bucket policy cannot match your role as the principal and is correctly denying the request.
Add another statement that allows a Principal of * to read everything under /public, for instance.
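A boto3 sketch of appending such a statement to the existing bucket policy (the bucket name and Sid are placeholders, and a bucket policy is assumed to already exist):
# Append a public-read statement for the /public prefix to the bucket policy.
import json
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

policy = json.loads(s3.get_bucket_policy(Bucket=bucket)["Policy"])
policy["Statement"].append({
    "Sid": "PublicReadForPublicPrefix",  # hypothetical Sid
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::%s/public/*" % bucket,
})
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))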

The reason is that AWS is setting the Content-Type of your files to binary/octet-stream.
Check this solution to handle it.
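A hedged boto3 sketch of the usual fix, setting the Content-Type explicitly at upload time (the file, bucket, and key names are placeholders):
# Upload a stylesheet with an explicit Content-Type so S3 does not
# default to binary/octet-stream.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "static/css/site.css",   # local file (placeholder)
    "my-bucket",             # bucket (placeholder)
    "static/css/site.css",   # key (placeholder)
    ExtraArgs={"ContentType": "text/css"},
)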

Related

Grant access to Amazon S3 bucket only to one IAM User

I want a bucket that only one IAM user can access via the AWS Console: list its contents and open the object files inside it.
So I created the IAM user, the bucket itself, and then added a bucket policy as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::0000000:user/dave"
            },
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::testbucket1234"
        },
        {
            "Sid": "statement2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::0000000:user/dave"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::testbucket1234/*"
        }
    ]
}
And also an inline policy attached to my user's group, as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:*Object",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::testbucket1234/*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
Now I can list my buckets, access the desired bucket, and list its contents (so far so good). The problem is when I try to open a file object inside the bucket: I get an "Access Denied" error. If I make the object public, I can access it, but then I can also access it using other IAM accounts, and that is not the intention. I want the bucket to be listable and its objects accessible only by this specific IAM account. What am I doing wrong? How can I reach this goal? Thanks in advance.
By default, no IAM User can access any bucket. It is only by granting permissions to users that they can access resources.
However, many people tend to grant Amazon S3 permissions for all buckets, at least for Administrators. This then makes it difficult to remove permissions so that a bucket can only be accessed by one user. While it can be done with Deny policies, such policies are difficult to craft correctly.
For situations where specific data should only be accessed by one user, or a specific group of users (eg HR staff), I would recommend that you create a separate AWS Account and only grant permission to specific IAM Users or IAM Groups via a Bucket Policy (which works fine cross-account). This way, any generic policies that grant access to "all buckets" will not apply to buckets in this separate account.
Update: Accessing private objects
Expanding on what is mentioned in the comments below, a private object in Amazon S3 can be accessed by an authorized user. However, when accessing the object, it is necessary to identify who is accessing the object and their identity must be proved. This can be done in one of several ways:
In the Amazon S3 management console, use the Open command (in the Actions menu). This will open the object using a pre-signed URL that authorizes the access based upon the user who logged into the console. The same method is used for the Download option.
Using the AWS Command-Line Interface (CLI), you can download objects. The AWS CLI needs to be pre-configured with your IAM security credentials to prove your identity.
Programs using an AWS SDK can access S3 objects using their IAM security credentials. In fact, the AWS CLI is simply a Python program that uses the AWS SDK.
If you want to access the object via a URL, an application can generate an Amazon S3 pre-signed URL. This URL includes the user's identity and a security signature that grants access to a private object for a limited period (eg 5 minutes). This method is commonly used when web applications want to grant access to a private object, such as a document or photo. The S3 management console actually uses this method when a user selects Actions/Open, so that the user can view a private object in their browser.
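For illustration, a minimal boto3 sketch of that last option (the bucket, key, and expiry are placeholders):
# Generate a pre-signed URL for a private object, valid for 5 minutes.
# The caller's own IAM credentials are used to sign the URL.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "testbucket1234", "Key": "docs/report.pdf"},
    ExpiresIn=300,  # seconds
)
print(url)  # shareable link that stops working after 5 minutes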

S3 Bucket access denied, even for Administrator

First, I have full access to all my S3 buckets (I have administrator permissions).
After playing with my S3 bucket policy, I cannot view or edit anything in my bucket, and I get an "Access Denied" error message.
It sounds like you have added a Deny rule on a Bucket Policy, which is overriding your Admin permissions. (Yes, it is possible to block access even for Administrators!)
In such a situation:
Log on as the "root" login (the one using an email address)
Delete the Bucket Policy
Fortunately, the account's "root" user always has full permissions. This is also why it should be used infrequently and access should be well-protected (eg using Multi-Factor Authentication).
Make sure you have S3 full access in the IAM role policies. You also need to set the Access Control List and bucket policy to allow public access.
Use a bucket policy like the one below:
{
    "Version": "2012-10-17",
    "Id": "Policy159838074858",
    "Statement": [
        {
            "Sid": "S3access",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
Here I granted only read and write access to my S3 bucket in the Action section; if you need create and delete access, add those actions there.
You can try:
aws s3api delete-bucket-policy --bucket s3-bucket-name
Otherwise, sign in with root access and modify the policy.

Only allow EC2 instance to access static website on S3

I have a static website hosted on S3, I have set all files to be public.
Also, I have an EC2 instance with nginx that acts as a reverse proxy and can access the static website, so S3 plays the role of the origin.
What I would like to do now is set all files on S3 to be private, so that the website can only be accessed by traffic coming from the nginx (EC2).
So far I have tried the following. I created a new role and attached it to the EC2 instance with
Policies Granting Permission: AmazonS3ReadOnlyAccess
And have rebooted the EC2 instance.
I then created a policy in my S3 bucket console > Permissions > Bucket Policy
{
    "Version": "xxxxx",
    "Id": "xxxxxxx",
    "Statement": [
        {
            "Sid": "xxxxxxx",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::XXX-bucket/*"
        }
    ]
}
As Principal I have set the ARN I got when I created the role for the EC2 instance:
"Principal": {
    "AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
},
However, this does not work; any help is appreciated.
If the Amazon EC2 instance with nginx is merely making generic web requests to Amazon S3, then the question becomes how to identify requests coming from nginx as 'permitted', while rejecting all other requests.
One method is to use a VPC Endpoint for S3, which allows direct communication from a VPC to Amazon S3 (rather than going out an Internet Gateway).
A bucket policy can then restrict access to the bucket such that it can only be accessed via that endpoint.
Here is a bucket policy from Example Bucket Policies for VPC Endpoints for Amazon S3:
The following is an example of an S3 bucket policy that allows access to a specific bucket, examplebucket, only from the VPC endpoint with the ID vpce-1a2b3c4d. The policy uses the aws:sourceVpce condition key to restrict access to the specified VPC endpoint.
{
    "Version": "2012-10-17",
    "Id": "Policy",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPCE-only",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::examplebucket",
                "arn:aws:s3:::examplebucket/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpce": "vpce-1a2b3c4d"
                }
            },
            "Principal": "*"
        }
    ]
}
So, the complete design would be:
Object ACL: Private only (remove any current public permissions)
Bucket Policy: As above
IAM Role: Not needed
Route Table configured for VPC Endpoint
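A boto3 sketch of creating that Gateway endpoint and attaching it to a route table (all IDs and the region are placeholders):
# Create a Gateway VPC endpoint for S3; its ID is what goes into the
# aws:sourceVpce condition of the bucket policy above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
resp = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",            # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
)
print(resp["VpcEndpoint"]["VpcEndpointId"])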
Permissions in Amazon S3 can be granted in several ways:
Directly on an object (known as an Access Control List or ACL)
Via a Bucket Policy (which applies to the whole bucket, or a directory)
To an IAM User/Group/Role
If any of the above grants access, then the object can be accessed.
Your scenario requires the following configuration:
The ACL on each object should not permit public access
There should be no Bucket Policy
You should assign permissions in the Policy attached to the IAM Role
Whenever you have permissions relating to a User/Group/Role, it is better to assign the permission in IAM rather than on the Bucket. Use Bucket Policies for general access to all users.
The policy on the Role would be:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}
This policy is directly applied to the IAM Role, so there is no need for a principal field.
Please note that this policy only allows GetObject -- it does not permit listing of buckets, uploading objects, etc.
You also mention that "I have set all files to be public". If you did this by making each individual object publicly readable, then anyone will still be able to access the objects. There are two ways to prevent this -- either remove the permissions from each object, or create a Bucket Policy with a Deny statement that stops access, but still permits the Role to get access.
That's starting to get a bit tricky and hard to maintain, so I'd recommend removing the permissions from each object. This can be done via the management console by editing the permissions on each object, or by using the AWS Command-Line Interface (CLI) with a command like:
aws s3 cp s3://my-bucket s3://my-bucket --recursive --acl private
This copies the files in-place but changes the access settings.
(I'm not 100% sure whether to use --acl private or --acl bucket-owner-full-control, so play around a bit.)
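An alternative sketch using boto3 to rewrite only the ACLs, leaving the object data untouched (the bucket name is a placeholder):
# Mark every object in the bucket private without re-copying the data.
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        s3.put_object_acl(Bucket=bucket, Key=obj["Key"], ACL="private")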

How to lockdown S3 bucket to specific users and IAM role(s)

In our environment, all IAM user accounts are assigned a customer-managed policy that grants read-only access to a lot of AWS services. Here's what I want to do:
Migrate a sql server 2012 express database from on-prem to a RDS instance
Limit access to the S3 bucket containing the database files
Here are the requirements according to AWS:
An S3 bucket to store the .bak database file
A role with access to the bucket
SQLSERVER_BACKUP_RESTORE option attached to RDS instance
So far, I've done the following:
Created a bucket under the name "test-bucket" (and uploaded the .bak file here)
Created a role under the name "rds-s3-role"
Created a policy under the name "rds-s3-policy" with these settings:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::test-bucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectMetaData",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::test-bucket/*"
        }
    ]
}
Assigned the policy to the role
Gave the AssumeRole permissions to the RDS service to assume the role created above
Created a new option group in RDS with the SQLSERVER_BACKUP_RESTORE option and linked it to my RDS instance
With no restrictions on my S3 bucket, I can perform the restore just fine; however, I can't find a solid way of restricting access to the bucket without hindering the RDS service from doing the restore.
In terms of my attempts to restrict access to the S3 bucket, I found a few posts online recommending using an explicit Deny statement to deny access to all types of principals and grant access based on some conditional statements.
Here's the contents of my bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1486769843194",
    "Statement": [
        {
            "Sid": "Stmt1486769841856",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::test-bucket",
                "arn:aws:s3:::test-bucket/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userid": [
                        "<root_id>",
                        "<user1_userid>",
                        "<user2_userid>",
                        "<user3_userid>",
                        "<role_roleid>:*"
                    ]
                }
            }
        }
    ]
}
I can confirm the bucket policy does restrict access to only the IAM users I specified, but I am not sure how it treats IAM roles. I used the :* syntax above per a document I found on the AWS forums, where the author stated that ":*" is a catch-all for every principal that assumes the specified role.
The only problem is that, with this bucket policy in place, when I attempt the database restore I get an access denied error. Has anyone ever done something like this? I've been going at it all day and haven't been able to find a working solution.
The following, admittedly, is guesswork... but reading between the lines of the somewhat difficult to navigate IAM documentation and elsewhere, and taking into account the way I originally interpreted it (incorrectly), I suspect that you are using the role's name rather than the role's ID in the policy.
Role IDs look similar to AWSAccessKeyIds except that they begin with AROA....
For the given role, find RoleId in the output from this:
$ aws iam get-role --role-name ROLE-NAME
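Or the equivalent lookup with boto3 (the role name is a placeholder):
# Fetch the AROA... role ID referenced by aws:userid conditions.
import boto3

iam = boto3.client("iam")
role_id = iam.get_role(RoleName="rds-s3-role")["Role"]["RoleId"]
print('"%s:*"' % role_id)  # the value to list under aws:userid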
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
Use caution when creating a broad Deny policy. You can end up denying s3:PutBucketPolicy to yourself, which leaves you in a situation where your policy prevents you from changing the policy... in which case, your only recourse is presumably to persuade AWS support to remove the bucket policy. A safer configuration would be to use this to deny only the object-level permissions.

S3 IAM Policy to access other account

We need to create an IAM user that is allowed to access buckets in our clients' S3 accounts (provided that they have allowed us access to those buckets as well).
We have created an IAM user in our account with the following inline policy:
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:ListBucketMultipartUploads",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
In addition to this, we will request that our clients use the following policy and apply it to their relevant bucket:
{
    "Version": "2008-10-17",
    "Id": "Policy1416999097026",
    "Statement": [
        {
            "Sid": "Stmt1416998971331",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::229569340673:user/our-iam-user"
            },
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::client-bucket-name/*"
        },
        {
            "Sid": "Stmt1416999025675",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::229569340673:user/our-iam-user"
            },
            "Action": [
                "s3:ListBucketMultipartUploads",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::client-bucket-name"
        }
    ]
}
While this all seems to work fine, the one major issue we have discovered is that our own internal inline policy seems to give our-iam-user full access to all of our own internal buckets as well.
Have we misconfigured something, or are we missing something else obvious here?
According to AWS support, this is not the right way to approach the problem:
https://forums.aws.amazon.com/message.jspa?messageID=618606
I am copying the answer from them here.
AWS:
The policy you're using with your IAM user grants access to any Amazon S3 bucket. In this case this will include any S3 bucket in your account and any bucket in any other account, where the account owner has granted your user access. You'll want to be more specific with the policy of your IAM user. For example, the following policy will limit your IAM user access to a single bucket.
You can also grant access to an array of buckets, if the user requires access to more than one.
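(The policy itself did not make it into the quote; the sketch below illustrates the single-bucket whitelist pattern being described, reusing the actions from the original user policy with a placeholder bucket name.)
# Whitelist sketch: same actions as the inline user policy, but scoped
# to one known bucket instead of "arn:aws:s3:::*".
import json

whitelist_policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:ListBucketMultipartUploads",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::client-bucket-name",  # placeholder
                "arn:aws:s3:::client-bucket-name/*"
            ]
        }
    ]
}
print(json.dumps(whitelist_policy, indent=2))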
Me
Unfortunately, we don't know beforehand all of our client's bucket names when we create the inline policy. As we get more and more clients to our service, it would be impractical to keep adding new client bucket names to the inline policy.
I guess another option is to create a new AWS account used solely for the above purpose - i.e. this account will not itself own anything, and will only ever be used for uploading to client buckets.
Is this acceptable, or are there any other alternatives options open to us?
AWS
Having a separate AWS account would provide clear security boundaries. Keep in mind that if you ever create a bucket in that other account, the user would inherit access to any bucket if you grant access to "arn:aws:s3:::*".
Another approach would be to use blacklisting (note: whitelisting, as suggested above, is a better practice).
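(Again, the example policy is not reproduced in the quote; the following is a sketch of the blacklist pattern being described, with placeholder bucket names.)
# Blacklist sketch: a broad Allow, followed by an explicit Deny for the
# internal buckets. The Deny overrides the Allow for those buckets.
import json

blacklist_policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::internal-bucket-1",    # placeholders
                "arn:aws:s3:::internal-bucket-1/*",
                "arn:aws:s3:::internal-bucket-2",
                "arn:aws:s3:::internal-bucket-2/*"
            ]
        }
    ]
}
print(json.dumps(blacklist_policy, indent=2))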
As you can see, the 2nd statement explicitly denies access to an array of buckets. This will override the allow in the first statement. The disadvantage here is that, by default, the user will inherit access to any new bucket. Therefore, you'd need to be diligent about adding new buckets to the blacklist. Either approach will require you to maintain changes to the policy. Therefore, I recommend my previous policy (aka whitelisting), where you only grant access to the S3 buckets that the user requires.
Conclusion
For our purposes, the whitelisting/blacklisting approach is not acceptable because we don't know in advance all the buckets that our clients will supply. In the end, we went the route of creating a new AWS account with a single user, and that account owns no S3 buckets of its own.
The policy you grant to your internal user gives this user access to every S3 bucket for the APIs listed (the first policy in your question). This is unnecessary, as your clients' bucket policies will grant your user the privileges required to access their buckets.
To solve your problem, remove the user policy, or explicitly list your clients' buckets in the allowed [Resource] instead of using "*".