AWS S3 – allow user to only access his files

I would like to add an image upload possibility for my users.
So far I've followed a simple YouTube tutorial and created a new bucket with the following Bucket policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1578265217545",
  "Statement": [
    {
      "Sid": "statement-1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/images/*"
    }
  ]
}
And the following CORS policy:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "PUT",
      "POST",
      "DELETE",
      "HEAD"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
I've also created an IAM user, and attached the following policy to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "statement1",
      "Effect": "Allow",
      "Action": [
        "s3:Put*",
        "s3:Get*",
        "s3:Delete*"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
I got my access and secret keys, which I successfully used to upload and delete files.
I have a strong feeling that the above policies are not really secure at the moment (e.g. I'm planning to make the CORS policy stricter by only allowing the bucket to be accessed from a certain domain).
My main question now is: how can I make sure that if user A uploads his image, no other user (unless allowed) can access it?

I think this would be possible if each user of the application had an IAM user account in AWS; you could then restrict access to the images using the corresponding IAM user. But I believe this is probably not the case.
Something better would be, instead of accessing the images directly on AWS, to access the images via your application. You could have a table storing each image's path in the bucket, its owner(s), and a flag indicating whether the image can be accessed publicly.
Then when you need a specific image, you would make a request to your application, which would check whether the user making the request is the owner of the image; if yes, the application would download the image from AWS using the S3 SDK and send it over to the user.
This approach decouples AWS from your end users, and your app becomes responsible for managing who can access what. Given that every request to AWS passes through your app, there is less risk of compromising the AWS infrastructure in place.
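For illustration only, a minimal sketch of that flow using Flask and boto3 (not from the original answer; get_image_record and current_user_id are hypothetical placeholders for your own table lookup and session logic):

import boto3
from flask import Flask, abort, send_file

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "my-bucket"  # assumption: the bucket from the question

def get_image_record(image_id):
    # Hypothetical: look the image up in your table of
    # (path, owner, public) rows; hard-coded here for illustration.
    return {"path": "images/" + image_id, "owner": 42, "public": False}

def current_user_id():
    # Hypothetical: resolve the authenticated user from the session.
    return 42

@app.route("/images/<image_id>")
def get_image(image_id):
    record = get_image_record(image_id)
    if record is None:
        abort(404)
    # Owners always get access; everyone else only if the image is public.
    if not record["public"] and record["owner"] != current_user_id():
        abort(403)
    # The app, not the end user, holds the AWS credentials.
    obj = s3.get_object(Bucket=BUCKET, Key=record["path"])
    return send_file(obj["Body"], mimetype=obj.get("ContentType", "application/octet-stream"))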

Object tagging and attribute-based access control could be used for conditional access to different objects.
Use case: application not supporting individual IAM users:
Objects are assigned an ownerID tag with an id value,
Users are assigned a uuid, or their profile has a tag with some kind of id value, and
The API function used to fetch objects compares the object tag and the user id/tag, and retrieves only objects with matching values.
Use case: application supporting AWS IAM users / SSO users:
Objects are assigned a tag with an appropriate value (id, department, etc.),
AWS users are assigned a tag with an appropriate value (id, department, etc.),
An IAM role and an access control policy are created to allow conditional access depending on tag values; a sketch of such a policy follows the link below.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html
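For illustration, a minimal sketch of a policy for the second use case, attached to that IAM role. It assumes (my example values, not from the answer) that objects carry a department tag and principals carry a matching department principal tag:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetWhenTagsMatch",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/department": "${aws:PrincipalTag/department}"
        }
      }
    }
  ]
}

Here s3:ExistingObjectTag/<key> tests the tag on the object being fetched, and the ${aws:PrincipalTag/department} variable resolves to the tag on the calling user or role.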

Related

How do I create an IAM policy to only allow access to select RDS instances?

I want to create an IAM policy that only allows access to the development and staging RDS instances I have running. This policy will be attached to a user group so that all its users can only read / write to the development and staging instances and not view any details or connect to the production instance.
I have created a test user that is a part of the above mentioned user group for testing out this policy, but it's allowing me to view / alter all db instances I have in RDS right now, including the production instance.
Below is the JSON for the IAM policy.
Any help would be greatly appreciated!
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "rds:DescribeDBProxyTargetGroups",
        "rds:StartDBCluster",
        "rds:RestoreDBInstanceFromS3",
        "rds:ResetDBParameterGroup",
        "rds:DescribeGlobalClusters",
        "rds:ModifyDBProxyEndpoint",
        "rds:PurchaseReservedDBInstancesOffering",
        "rds:CreateDBSubnetGroup",
        "rds:ModifyCustomDBEngineVersion",
        "rds:DescribeDBProxyTargets",
        "rds:ModifyDBParameterGroup",
        "rds:DownloadDBLogFilePortion",
        "rds:AddRoleToDBCluster",
        "rds:DescribeReservedDBInstances",
        "rds:CreateDBSnapshot",
        "rds:CreateEventSubscription",
        "rds:DescribeDBClusterBacktracks",
        "rds:FailoverDBCluster",
        "rds:AddRoleToDBInstance",
        "rds:ModifyDBProxy",
        "rds:CreateDBInstance",
        "rds:DescribeDBInstances",
        "rds:DescribeDBProxies",
        "rds:ModifyActivityStream",
        "rds:DescribeDBProxyEndpoints",
        "rds:StartDBInstanceAutomatedBackupsReplication",
        "rds:ModifyEventSubscription",
        "rds:DescribeDBSnapshotAttributes",
        "rds:ModifyDBProxyTargetGroup",
        "rds:RebootDBCluster",
        "rds:ModifyDBSnapshot",
        "rds:ListTagsForResource",
        "rds:CreateDBCluster",
        "rds:ApplyPendingMaintenanceAction",
        "rds:BacktrackDBCluster",
        "rds:RemoveRoleFromDBInstance",
        "rds:ModifyDBSubnetGroup",
        "rds:FailoverGlobalCluster",
        "rds:DescribeDBInstanceAutomatedBackups",
        "rds:RemoveRoleFromDBCluster",
        "rds:CreateGlobalCluster",
        "rds:DeregisterDBProxyTargets",
        "rds:CreateOptionGroup",
        "rds:CreateDBProxyEndpoint",
        "rds:AddSourceIdentifierToSubscription",
        "rds:CopyDBParameterGroup",
        "rds:ModifyDBClusterParameterGroup",
        "rds:ModifyDBInstance",
        "rds:RegisterDBProxyTargets",
        "rds:ModifyDBClusterSnapshotAttribute",
        "rds:CopyDBClusterParameterGroup",
        "rds:CreateDBClusterEndpoint",
        "rds:StopDBCluster",
        "rds:CreateDBParameterGroup",
        "rds:DescribeDBSnapshots",
        "rds:DescribeDBSecurityGroups",
        "rds:RemoveFromGlobalCluster",
        "rds:PromoteReadReplica",
        "rds:StartDBInstance",
        "rds:StopActivityStream",
        "rds:RestoreDBClusterFromS3",
        "rds:DescribeValidDBInstanceModifications",
        "rds:RestoreDBInstanceFromDBSnapshot",
        "rds:ModifyDBClusterEndpoint",
        "rds:ModifyDBCluster",
        "rds:CreateDBClusterSnapshot",
        "rds:CreateDBClusterParameterGroup",
        "rds:ModifyDBSnapshotAttribute",
        "rds:PromoteReadReplicaDBCluster",
        "rds:DescribeOptionGroups",
        "rds:ModifyOptionGroup",
        "rds:RestoreDBClusterFromSnapshot",
        "rds:DescribeDBSubnetGroups",
        "rds:StartActivityStream",
        "rds:DescribePendingMaintenanceActions",
        "rds:DescribeDBParameterGroups",
        "rds:StopDBInstanceAutomatedBackupsReplication",
        "rds:RemoveSourceIdentifierFromSubscription",
        "rds:RevokeDBSecurityGroupIngress",
        "rds:DescribeDBParameters",
        "rds:ModifyCurrentDBClusterCapacity",
        "rds:ResetDBClusterParameterGroup",
        "rds:RestoreDBClusterToPointInTime",
        "rds:CreateCustomDBEngineVersion",
        "rds:DescribeDBClusterSnapshotAttributes",
        "rds:DescribeDBClusterParameters",
        "rds:DescribeEventSubscriptions",
        "rds:CopyDBSnapshot",
        "rds:CopyDBClusterSnapshot",
        "rds:DescribeDBLogFiles",
        "rds:StopDBInstance",
        "rds:CopyOptionGroup",
        "rds:SwitchoverReadReplica",
        "rds:CreateDBSecurityGroup",
        "rds:RebootDBInstance",
        "rds:ModifyGlobalCluster",
        "rds:DescribeDBClusterSnapshots",
        "rds:DescribeOptionGroupOptions",
        "rds:DownloadCompleteDBLogFile",
        "rds:DescribeDBClusterEndpoints",
        "rds:CreateDBInstanceReadReplica",
        "rds:DescribeDBClusters",
        "rds:DescribeDBClusterParameterGroups",
        "rds:RestoreDBInstanceToPointInTime"
      ],
      "Resource": [
        "arn:aws:rds:us-east-2:<ACCOUNT_NUMBER>:db:development",
        "arn:aws:rds:us-east-2:<ACCOUNT_NUMBER>:db:staging"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "rds:DescribeDBInstances",
        "rds:DescribeDBClusters"
      ],
      "Resource": "arn:aws:rds:*:<ACCOUNT_NUMBER>:db:*"
    }
  ]
}
There are two different elements to consider:
Ability to 'use' the database
Ability to 'manage' the database
You have said that users "are authenticating via database credentials", so this access is controlled totally within the database and is unrelated to any IAM policies.
Typically, businesses use different AWS Accounts to separate Production from other environments. This avoids accidents and ensures that services are deployed and maintained in a reproducible manner (rather than via changes from random people). If you are keeping both Dev & Prod in the same AWS Account, then you will need to be very careful about how permissions are granted.
It is not possible to limit the listing of databases. A user either has permission to list ALL databases, or NONE of the databases.
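The 'manage' element, by contrast, can be scoped: most actions that modify a specific instance do accept resource-level permissions, so one option is an explicit Deny on the production instance. A rough sketch (my example, with an assumed ARN following the pattern in the question):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyProductionChanges",
      "Effect": "Deny",
      "Action": [
        "rds:Modify*",
        "rds:Delete*",
        "rds:Reboot*",
        "rds:Stop*",
        "rds:Start*"
      ],
      "Resource": "arn:aws:rds:us-east-2:<ACCOUNT_NUMBER>:db:production"
    }
  ]
}

An explicit Deny wins over any Allow, so this statement can sit alongside the broad policy above.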

AWS policy interpretation

I am trying to understand the need for the condition "home/${aws:userid}/*". This condition feels like it is already satisfied by the resource "arn:aws:s3:::bucket-name/home/${aws:userid}/*".
When we have allowed all s3 operations for that user in the third statement, why do we need to allow s3:ListBucket specifically for that user?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucket-name",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "",
            "home/",
            "home/${aws:userid}/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-name/home/${aws:userid}",
        "arn:aws:s3:::bucket-name/home/${aws:userid}/*"
      ]
    }
  ]
}
ref https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_federated-home-directory-console.html
s3:ListBucket is a bucket-level permission, while arn:aws:s3:::bucket-name/home/${aws:userid}/* is a wildcard object ARN, not a bucket ARN.
An attempt to grant s3:ListBucket will not match any Resource that isn't a bucket ARN, so the s3:* grant -- which only includes object ARNs -- does not actually allow object listings.
So this example policy does not contain any redundancy.
If this implementation still seems a bit counter-intuitive or perhaps convoluted, it does become somewhat clearer if you consider how the S3 API works on the wire. The ListObjects API action (as well as the newer ListObjectsV2) is submitted against the bucket -- there's no path in the request... or, more precisely, the path in the HTTP request is always¹ /... and the query string contains prefix= and the object key prefix where the requested listing is to be anchored.
While there's no compulsory correlation between the way the underlying API works and the way IAM policies work, it does make sense that the s3:prefix condition key is what's used to control use of the prefix parameter to ListObjects, instead of an object-level ARN, and that the bucket -- not an object key or wildcard pattern -- is the resource being accessed.
¹ always / except when it's /${bucket} as required by the old deprecated path-style URLs that are finally being phased out after a false start or two, at least for new buckets. The resource as expressed in the path component of the request URI is always the bucket itself, not the bucket plus a key prefix.
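For example, listing a user's home directory from the CLI issues exactly this kind of request (an illustrative command; bucket-name and the id value are placeholders):

aws s3api list-objects-v2 --bucket bucket-name --prefix "home/AIDAEXAMPLEID/"

On the wire this becomes GET /?list-type=2&prefix=home%2FAIDAEXAMPLEID%2F against the bucket's endpoint, which is why the bucket ARN plus the s3:prefix condition key -- and not an object ARN -- is what authorizes the listing.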
The resources are different. In the third statement, the user can only access bucket-name/home/${aws:userid}. This means that when the user goes into the S3 console and clicks the bucket bucket-name, they will get access denied. So the user won't be able to list the bucket content, and will not see that there is a home folder there. They will also not see that in bucket-name/home there is a folder with their username.
Thus, to overcome this issue, there is the second statement, which allows listing the content of bucket-name and then bucket-name/home. This way users can navigate easily in the S3 console to get to their actual home folder.
Without the second statement, users would have to type the url of their home folder into the browser to go directly to it, which is not very user friendly.

AWS IAM grant user read access to specific VPC only

I have tried to limit access to a VPC without success. Maybe approaching the issue from the other side is a better idea, but I can't get that to work either.
I have tried:
Limit by tags as shown here:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/client": "<client>"
        }
      }
    }
  ]
}
Limit by VPC as suggested here:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1508450090000",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*"
      ],
      "Resource": [
        "arn:aws:ec2:<region>:<account>:subnet/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:Vpc": "arn:aws:ec2:<region>:<account>:vpc/<vpc_id>"
        }
      }
    }
  ]
}
Both policies result in not even listing any instances.
This seems to be a very obvious and commonly needed policy to me.
Any help is appreciated.
According to the documentation: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_IAM.html#readonlyvpciam
The following policy grants users permission to list your VPCs and their components. They can't create, update, or delete them.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "ec2:DescribeInternetGateways",
        "ec2:DescribeEgressOnlyInternetGateways",
        "ec2:DescribeVpcEndpoints",
        "ec2:DescribeNatGateways",
        "ec2:DescribeCustomerGateways",
        "ec2:DescribeVpnGateways",
        "ec2:DescribeVpnConnections",
        "ec2:DescribeRouteTables",
        "ec2:DescribeAddresses",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeNetworkAcls",
        "ec2:DescribeDhcpOptions",
        "ec2:DescribeTags",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
Further, if you have multiple VPCs that you do not want them to even see, perhaps you should consider creating a sub-account with only the portion of your network that they should have visibility across:
Set up Consolidated Billing
As a first step, log into your AWS account and click the "Sign up for Consolidated Billing" button.
Create a new account
From a non-logged-in browser, you will then want to sign up for AWS again.
Give this new account the appropriate name for your client. Note the email address you signed up with.
Link the accounts
In your main account, head back to Consolidated Billing and click the Send a Request button. Provide the email address for your new sub-account.
You should receive an email to the email address for your new sub-account. Copy the activation link and paste it into your browser logged in to the sub-account.
Your accounts are now linked!
Create your client's VPC and enable the services that the client requires.
Next, you can create the VPC & services the client requires, and restrict their access via the policy above.
You cannot restrict Describe* calls in the manner you want.
Calls that create resources can be restricted (eg give permission to launch an instance in a particular VPC), but calls that list resources cannot be restricted.
If you require the ability to prevent certain users from listing resources, then you'll either need to build your own front-end that filters the information before presenting it to users, or use multiple AWS accounts since they are fully isolated from each other.
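For contrast, here is a sketch (my example; region, account, and VPC id are placeholders) of the create-side restriction mentioned above: ec2:RunInstances supports resource-level permissions, so you can constrain launches to subnets of one VPC even though you can't filter what users see:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LaunchOnlyIntoThisVpc",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:<region>:<account>:subnet/*",
      "Condition": {
        "ArnEquals": {
          "ec2:Vpc": "arn:aws:ec2:<region>:<account>:vpc/<vpc_id>"
        }
      }
    },
    {
      "Sid": "OtherResourcesNeededForLaunch",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:<region>:<account>:instance/*",
        "arn:aws:ec2:<region>:<account>:network-interface/*",
        "arn:aws:ec2:<region>:<account>:volume/*",
        "arn:aws:ec2:<region>:<account>:security-group/*",
        "arn:aws:ec2:<region>::image/ami-*"
      ]
    }
  ]
}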

S3 Principal Bucket Policy Permissions

I have a cloudformation template up in an S3 bucket (the url follows the pattern but is not exactly equal to: https://s3.amazonaws.com/bucket-name/cloudform.yaml). I need to be able to access it from CLI for a bash script. I'd prefer that everybody in an organization (all in this single account) has access to this template but other people outside of the organization don't have access to the template. A bucket policy I've tried looks like:
{
  "Version": "2012-10-17",
  "Id": "Policy11111111",
  "Statement": [
    {
      "Sid": "Stmt111111111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::7777777777:root"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
With this policy, I and a couple other people in my office are unable to access the url. Even when I'm logged in with the root account I'm getting Access Denied.
Also, this change (only setting Principal to *) makes the bucket accessible to anybody:
{
  "Version": "2012-10-17",
  "Id": "Policy11111111",
  "Statement": [
    {
      "Sid": "Stmt111111111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
Obviously the signs point to my Principal field being misconfigured. 777777777 is the replacement for the Account ID I see under the My Account page.
So, do I need to worry about this on the IAM front? Considering that I am logged in as the root user, I'd guess I should have access to this as long as I put in a bucket policy. Any help would be much appreciated.
Short and sweet:
The bucket policy doesn't allow you to do what you want because of a wildcard limitation of the Principal element. Your best bet is to create an IAM group and put all IAM users into that group if they need access.
Long version:
Just to make it clear, any request to https://s3.amazonaws.com/bucket-name/cloudform.yaml MUST be signed and have the necessary authentication parameters or the request will be rejected with Access Denied. The only exception is if the bucket policy or the bucket ACL allows public access, but it doesn't sound like this is what you want.
When you say "everybody in an organization (all in this single account)" I assume you mean IAM users under the account who are accessing the file from the AWS console, or IAM users who are using some other code or tool (e.g. AWS CLI) to access the file.
So it sounds like what you want is the ability to specify the Principal as
"Principal": {
  "AWS": "arn:aws:iam::777777777777:user/*"
}
since that is what the pattern would be for any IAM user under the 777777777777 account id. Unfortunately this is not allowed because no wildcard is allowed in the Principal unless you use the catch-all wildcard "*". In other words "*" is allowed, but either "prefix*" or "*suffix" is not. (I wish AWS documented this better.)
You could specify every IAM user you have in the bucket policy like so:
"Principal": {
"AWS": [
"arn:aws:iam::777777777777:user/alice",
"arn:aws:iam::777777777777:user/bob",
"arn:aws:iam::777777777777:user/carl",
...
"arn:aws:iam::777777777777:user/zed",
}
But you probably don't want to update the policy for every new user.
It would be easiest to create an IAM group that grants access to that file. Then you would add all IAM users to that group. If you add new users then you'll have to manually add them to that group, so it is not as convenient as what you originally wanted.
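For illustration, the policy attached to that group could be as small as this (a sketch; bucket-name stands in for your actual bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTemplateRead",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/cloudform.yaml"
    }
  ]
}

Group members then fetch the template with a signed request, e.g. aws s3 cp s3://bucket-name/cloudform.yaml . from the CLI, which fits the bash-script use case.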

S3 Bucket Policy - GET Implicitly Allowed

When using the following bucket policy, I see that it restricts PUT access as expected - however GET is allowed on the created object, even though there is nothing which should allow this operation.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPut",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<BUCKET>/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<IP ADDRESS>"
          ]
        }
      }
    }
  ]
}
I am able to PUT files to <BUCKET> from <IP ADDRESS> using curl as follows:
curl https://<BUCKET>.s3-<REGION>.amazonaws.com/ --upload-file test.txt
The file uploads successfully, and appears in the S3 console. I am now for some reason able to GET the file from anywhere on the internet.
curl https://<BUCKET>.s3-<REGION>.amazonaws.com/test.txt -XGET
This only applies to files uploaded using the above method. When uploading a file in the S3 web console, I am not able to use curl to GET it (access denied). So I assume that it is an object-level permission issue. Though I don't understand why the bucket policy would not implicitly deny this access.
When looking at the object level permissions in the console, the only differences between a file uploaded through the console (method 1), and one uploaded from the allowed <IP ADDRESS> (method 2) are that the file in method 2 does not have an 'Owner', Permissions, or Metadata - while the method 1 file has all of these.
Furthermore - when attempting to GET the objects using a Lambda script (boto3 download_file()) which assumes a role with full access to the bucket, it fails for objects uploaded with method 2. Though it succeeds for objects uploaded with method 1.
Issue Summary
To summarise the issue:
you have a policy that permits anonymous upload of objects from a given source IP address
those objects are then not readable by your authenticated users (specifically an IAM role assumed by your Lambda function)
those objects ARE readable from ANY IP by unauthenticated users
Other observations:
the unauthenticated user is unable to delete the object
The desired outcome is:
objects can be uploaded by an unauthenticated user from a known IP address
objects are not then downloadable by unauthenticated users from any IP address
objects are retrievable by an authenticated IAM user
Root Cause
Here is what's happening:
Anonymous user uploads the object
The Anonymous user becomes the object owner
Verifiable by retrieving the object acl (do a GET request for the object with query string ?acl) - you will receive:
<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>65a011a29cdf8ec533ec3d1ccaae921c</ID>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>65a011a29cdf8ec533ec3d1ccaae921c</ID>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
The Owner ID is the universal id of the anonymous user - I have seen the same id referenced in some AWS forum discussions.
Being the object owner has the following impact:
Anonymous user has FULL_CONTROL (see acl above)
Anonymous user is unable to Delete - this appears to be an AWS blanket rule that cannot be changed - the anonymous user is never allowed to delete anything, even if they have FULL_CONTROL
Anonymous user is, however, able to PUT an empty object over the top of the existing object, as a result of FULL_CONTROL
When a bucket contains an object owned by a user who is not part of the bucket's account:
Bucket owner has no permission on the object (not referenced in acl)
Bucket owner is not able to read the object
Bucket owner is able to see the object in a bucket list operation due to bucket acl
Bucket owner is able to delete the object - this is a blanket rule that cannot be changed - as the person paying the bill, you always reserve the right to delete the object - even if you can't read it
Resolution
There is a way to achieve your desired outcome - unfortunately you have to reference the arn of the specific IAM entity (user, role, or group) that you want to be able to read the object in the bucket policy.
The key elements of the solution are:
Require the anonymous user to grant the bucket owner full access
This ensures the bucket owner and the owner account's IAM users aren't denied access by the object acl
Explicitly deny all non-PUT access to all users who aren't your nominated user/role
This ensures anonymous users can't read the object
Sample policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allow-anonymous-put",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<BUCKETNAME>/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "<IPADDRESS>"
        },
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Sid": "deny-not-my-user-everything-else",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::<ACCOUNTNUMBER>:role/<ROLENAME>"
      },
      "NotAction": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::<BUCKETNAME>/*"
    }
  ]
}
The key to the second statement is the use of NotPrincipal and NotAction.
I've tested this locally, but only with a regular IAM user granted access, not with a Lambda function assuming a role - but the principle should hold. Good luck!
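One practical consequence worth noting: with the s3:x-amz-acl condition in place, the anonymous upload must now send the matching ACL header, otherwise the PUT no longer matches the Allow statement. Adapting the curl command from the question:

curl https://<BUCKET>.s3-<REGION>.amazonaws.com/ --upload-file test.txt -H "x-amz-acl: bucket-owner-full-control"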
The following articles were helpful in understanding what was going on - they each present a scenario similar, but not quite the same as yours, but the methods they used to tackle their scenarios led the way:
http://jayendrapatil.com/aws-s3-permisions/
http://prettyplease.me/anonymous-s3-upload-with-full-owner-control/
https://gist.github.com/jareware/d7a817a08e9eae51a7ea