SOS AWS S3 Bucket Policy - amazon-web-services

I am trying to restrict access to my AWS S3 bucket so that only a few domains, one IP address, and AWS Lambda functions have access to it.
This is what I have written, but it is not working :-(
{
  "Version": "2012-10-17",
  "Id": "httpRefererPolicy",
  "Statement": [
    {
      "Sid": "AllowRequestsReferred",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:GetObjectAcl"],
      "Resource": "arn:aws:s3:::example/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://www.example.com/*",
            "https://example.com/*",
            "https://example.herokuapp.com/*",
            "https://dfgdsfgdfg.cloudfront.net/*"
          ]
        },
        "IpAddress": {
          "aws:SourceIp": "219.77.225.296"
        }
      }
    },
    {
      "Sid": "DenyRequestsReferred",
      "Effect": "Deny",
      "NotPrincipal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": ["s3:GetObject", "s3:GetObjectAcl"],
      "Resource": "arn:aws:s3:::example/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://www.example.com/*",
            "https://example.com/*",
            "https://example.herokuapp.com/*",
            "https://dfgdsfgdfg.cloudfront.net/*"
          ]
        },
        "NotIpAddress": {
          "aws:SourceIp": "219.77.225.296"
        }
      }
    }
  ]
}
What have I written wrong?

Your policy says:
ALLOW GetObject access from an (invalid) IP address if the request was referred from certain websites.
DENY GetObject access if the request is not from Lambda and is not an (invalid) IP address and was not referred from certain websites.
So, the first thing is that IpAddress needs to be in CIDR notation, so you should use:
"aws:SourceIp": "219.77.225.296/32"
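As a quick sanity check, Python's standard ipaddress module confirms both points: the address from the question has an out-of-range octet (296), and a /32 block matches exactly one host. The "corrected" address below is purely illustrative:

```python
import ipaddress

def valid_source_ip(value: str) -> bool:
    """Return True if the value is a valid IPv4 address or CIDR block."""
    try:
        ipaddress.ip_network(value, strict=False)
        return True
    except ValueError:
        return False

# The octet 296 is out of range, so the address from the question is invalid.
print(valid_source_ip("219.77.225.296/32"))   # False
print(valid_source_ip("219.77.225.196/32"))   # True

# A /32 block matches exactly one address.
print(ipaddress.ip_network("219.77.225.196/32").num_addresses)  # 1
```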
Second, there is nothing in this policy that is granting access to the Lambda function (since it is not on the IP address in the ALLOW statement). Also, your method of granting access looks unlikely to work. I would recommend granting access to the IAM Role being used by the Lambda function.
I would suggest you only create ALLOW statements, and give access to each source independently. If you want to grant access based on referer OR IpAddress OR Lambda, you'd need:
ALLOW based on referer
ALLOW based on IpAddress
ALLOW based on Lambda (You'll need to do this by permitting access from the IAM Role used by the Lambda function)
Only use DENY if you need to override a permission that was previously granted via ALLOW. It is best to avoid DENY if possible, to keep things easier to understand.
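As a sketch of that structure, the following Python snippet assembles the three independent Allow statements. The bucket ARN, referer list, IP address, and Lambda role ARN are all placeholders to substitute with your own values:

```python
import json

BUCKET_ARN = "arn:aws:s3:::example/*"                           # placeholder bucket
LAMBDA_ROLE = "arn:aws:iam::123456789012:role/my-lambda-role"   # hypothetical role ARN
ACTIONS = ["s3:GetObject", "s3:GetObjectAcl"]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # 1. Allow based on referer
            "Sid": "AllowByReferer",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ACTIONS,
            "Resource": BUCKET_ARN,
            "Condition": {"StringLike": {"aws:Referer": ["https://example.com/*"]}},
        },
        {   # 2. Allow based on source IP (CIDR notation)
            "Sid": "AllowByIp",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ACTIONS,
            "Resource": BUCKET_ARN,
            "Condition": {"IpAddress": {"aws:SourceIp": "219.77.225.196/32"}},
        },
        {   # 3. Allow the IAM role used by the Lambda function
            "Sid": "AllowLambdaRole",
            "Effect": "Allow",
            "Principal": {"AWS": LAMBDA_ROLE},
            "Action": ACTIONS,
            "Resource": BUCKET_ARN,
        },
    ],
}

# Every statement is an independent Allow; there is no Deny to untangle.
print(json.dumps(policy, indent=2))
```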

Related

How can I add IP restrictions to s3 bucket(in the bucket Policy) already having a User restriction

I have a few S3 buckets to which I have given access only to a specific IAM user. I did it by setting the following in each bucket policy:
"Effect": "Deny",
"NotPrincipal": { "AWS": "<My_IAM_User>" }
I'm able to access the buckets only as that IAM user, so the policy works as expected, but I also want to restrict bucket access to a specific IP: the address of the EC2 instance my server runs on. The condition I've used is:
"Condition": {
  "NotIpAddress": {
    "aws:SourceIp": [
      "<My_EC2_Server_IP_Address>"
    ]
  }
}
I was expecting the above policy to allow only my EC2 server to access the S3 bucket objects, but if I make a call from any other IP (e.g. running the server on my local machine and trying to access the buckets), it still responds with valid objects from the bucket.
The above policy does NOT seem to block requests to the bucket made from other, random IP addresses.
My entire bucket policy looks like :
{
  "Version": "<Version_value>",
  "Statement": [
    {
      "Sid": "<Sid_value>",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "<My_IAM_User>"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<My_Bucket_name>/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "<My_EC2_Server_IP_Address>"
        }
      }
    }
  ]
}
My References :
1. https://aws.amazon.com/premiumsupport/knowledge-center/block-s3-traffic-vpc-ip/
2. https://medium.com/@devopslearning/100-days-of-devops-day-11-restricting-s3-bucket-access-to-specific-ip-addresses-a46c659b30e2
If your intention is to deny all AWS credentials except a given IAM user, and to deny all IP addresses other than a given IP, then I would write that policy as two independent Deny statements.
Something like this:
{
  "Version": "<Version_value>",
  "Statement": [
    {
      "Sid": "deny1",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "<My_IAM_User>"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<My_Bucket_name>/*"
    },
    {
      "Sid": "deny2",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<My_Bucket_name>/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "<My_EC2_Server_IP_Address>"
        }
      }
    }
  ]
}
Be careful with the IP address condition: unless you are using an Elastic IP, your EC2 instance's public IP can change, e.g. if you stop and then restart the instance.
Also note: you should not be using IAM user credentials on an EC2 instance. Instead, you should be using IAM roles.

Grant Access to AWS S3 bucket from specific IP without credentials

I do not want to make my S3 bucket publicly accessible, but I expect it to be accessible from my local organization network without the AWS CLI or any credentials. How can I achieve this?
I tried a bucket policy with the principal set to * and the source IP set to the public IP of the organization network.
If the intention is to grant anonymous access to a particular CIDR range, while also permitting IAM policies to grant additional access to specific people (e.g. administrators), then this would not be appropriate.
If you were to follow the initial example laid out in the AWS documentation, you'd end up with a policy that probably looks similar to this:
{
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "x.x.x.x/xx"
        }
      }
    }
  ]
}
What you're going to find, after banging your head on the table a few times, is that this policy does not work. There does not appear to be an implied deny rule for S3 buckets (similar to how IAM access policies are set up).
By default, accounts are restricted from accessing S3 unless they have been given access via a policy.
However, S3 is designed by default to allow access from any IP address. So to block IPs you have to specify explicit denies in the policy instead of allows.
Once you learn this, the policy is easy to adjust. You just flip the policy around from "allow access from only my IP address" to "deny access from everywhere that is NOT my IP address":
{
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "x.x.x.x/xx"
        }
      }
    }
  ]
}
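The same "flip" can be expressed mechanically. Here is a small Python sketch (names and the example CIDR are illustrative) that rewrites an Allow-from-IP statement into the equivalent Deny-from-everywhere-else form:

```python
import copy

def flip_to_deny(statement: dict) -> dict:
    """Rewrite an Allow+IpAddress statement as the equivalent Deny+NotIpAddress."""
    flipped = copy.deepcopy(statement)
    flipped["Effect"] = "Deny"
    # Move the source-IP match from IpAddress to NotIpAddress.
    source_ip = flipped["Condition"].pop("IpAddress")
    flipped["Condition"]["NotIpAddress"] = source_ip
    return flipped

allow = {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::examplebucket/*",
    "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
}

deny = flip_to_deny(allow)
print(deny["Effect"], list(deny["Condition"]))  # Deny ['NotIpAddress']
```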
Hope this helps!
Yes, that is the correct way to do it.
From Bucket Policy Examples - Amazon Simple Storage Service:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "54.240.143.0/24"}
      }
    }
  ]
}

AWS Bucket policy for a reverse proxy

I am trying to use an S3 bucket as a simple web host, but I want to put it behind a reverse proxy capable of layering some required security controls.
I have IP addresses associated with the reverse proxy that I would like to restrict S3 web access to. When I apply an IP-based restriction in the bucket policy, though, it seems to make administrative interaction in the account extremely difficult or blocks it outright.
I would like to not disrupt access from within the account via the console/IAM users/federated roles, but enable HTTP access to the S3 site for just the IPs associated with the reverse proxy.
The AWS documentation on what is required to enable web access shows that I need this policy statement, so I have included it as a starting point:
{
  "Id": "S3_Policy",
  "Statement": [
    {
      "Sid": "AllowWebAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
Then I want to restrict the web traffic to a particular set of IPs so I have added this statement.
{
  "Id": "S3_Policy",
  "Statement": [
    {
      "Sid": "AllowWebAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    },
    {
      "Sid": "DenyNonProxyWebAccess",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "99.99.99.1/32",
            "99.99.99.2/32",
            "99.99.99.3/32",
            "99.99.99.4/32",
            "99.99.99.5/32"
          ]
        }
      }
    }
  ]
}
This Deny statement has the unintended consequence of blocking my ability to access the bucket from inside my account with IAM users or assumed federated roles, so I have added an explicit Allow for those principals. I would like to just place a blanket Allow for "the account" if possible. That leaves me with the following policy, and it just doesn't seem to work how I would like it to: I can't seem to either manage the bucket as my users or access the web content from the proxy.
{
  "Id": "S3_Policy",
  "Statement": [
    {
      "Sid": "AllowWebAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    },
    {
      "Sid": "DenyNonProxyWebAccess",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "99.99.99.1/32",
            "99.99.99.2/32",
            "99.99.99.3/32",
            "99.99.99.4/32",
            "99.99.99.5/32"
          ]
        }
      }
    },
    {
      "Sid": "AllowAccountUsersAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::999999999999:user/user#place",
          "arn:aws:iam::999999999999:user/user2#place",
          "999999999999"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket"
      ]
    }
  ]
}
Is there a way to have an S3 bucket be a static web host restricted to only select IP ranges for web access, without disrupting the ability to manage the bucket itself from the account?
There are multiple ways that access can be granted to resources in an Amazon S3 bucket:
IAM permissions
Bucket Policy
Pre-signed URL
If an access request meets any of the above, it will be granted access (although a Deny might override it).
IAM permissions are used to assign permissions to a user or group. For example, if you want to have access to the bucket, you can create a policy and assign it to yourself as an IAM user. If you wish all of your administrators to access the bucket, then put them in an IAM group and assign the policy to the group. All access made this way needs to be done with AWS credentials (no anonymous access).
A Bucket Policy is typically used to grant anonymous access (no credentials required), but can include restrictions such as IP address ranges, SSL-only, and time-of-day. This is the way you would grant access to your reverse proxy, since it is not sending credentials as part of its requests.
A Pre-signed URL can be generated by applications to grant temporary access to a specific object. The URL includes a calculated signature that authenticates the access. This is typically used when generating links on HTML pages (eg to link to private images).
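To illustrate the idea only, here is a minimal Python signer modeled on S3's legacy query-string (V2-style) authentication; in practice you would call your SDK's presigned-URL helper, which uses SigV4, rather than roll your own. All credentials below are dummies:

```python
import base64
import hashlib
import hmac
import urllib.parse

def presign_get(bucket: str, key: str, access_key: str, secret_key: str,
                expires: int) -> str:
    """Sketch of a legacy (V2-style) presigned GET URL: the signature covers
    the method, the expiry time, and the object path, so the link is
    self-authenticating and stops working after the expiry timestamp."""
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote_plus(base64.b64encode(digest).decode())
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}&Signature={signature}")

url = presign_get("examplebucket", "private/image.png",
                  "AKIAEXAMPLE", "dummy-secret", 1767225600)
print(url)
```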
Your situation
So, firstly, you should grant access to yourself and your administrators, using a policy similar to:
{
  "Id": "S3_Policy",
  "Statement": [
    {
      "Sid": "AllowWebAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
Note that there is no Principal because it applies to whatever users/groups have been assigned this policy.
Next, you wish to grant access to your reverse proxies. This can be done via a Bucket Policy:
{
  "Id": "S3_Policy",
  "Statement": [
    {
      "Sid": "AllowProxyWebAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "99.99.99.1/32",
            "99.99.99.2/32",
            "99.99.99.3/32",
            "99.99.99.4/32",
            "99.99.99.5/32"
          ]
        }
      }
    }
  ]
}
This policy is permitting (Allow) access to the specified bucket, but only if the request is coming from one of the stated IP addresses.
Granting access via Allow is always preferable to denying access because Deny always overrides Allow. Therefore, use Deny sparingly since once something is denied, it cannot then be Allowed (eg if a Deny blocks administrative access, you cannot then Allow the access). Deny is mostly used where you definitely want to block something (eg a known bad actor).
VPC Endpoint
A final option worth considering is the use of a VPC Endpoint for S3. This allows direct communication between a VPC and S3, without having to go via an Internet Gateway. This is excellent for situations where resources in a private subnet wish to communicate with S3 without using a NAT Gateway.
Additional policies can be added to a VPC Endpoint to define which resources can access the VPC Endpoint (eg your range of Reverse Proxies). Bucket Policies can specifically refer to VPC Endpoints, permitting requests from that access method. For example, you could configure a bucket policy that permits access only from a specific VPC -- this is useful for separating Dev/Test/Prod access to buckets.
However, it probably isn't suitable for your given use-case because it would force all S3 traffic to go via the VPC Endpoint, even outside of your reverse proxies. This might not be desired behavior for your architecture.
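For reference, a bucket-policy statement keyed to a VPC Endpoint typically matches on the aws:sourceVpce condition key. A sketch in Python (the endpoint ID and bucket name are placeholders):

```python
import json

# Hypothetical VPC endpoint ID; substitute your own (visible in the VPC console).
VPCE_ID = "vpce-0123456789abcdef0"

statement = {
    "Sid": "AllowOnlyFromVpcEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::mybucket/*",
    # aws:sourceVpce matches the VPC endpoint the request arrived through,
    # so this denies any request that did not come via the named endpoint.
    "Condition": {"StringNotEquals": {"aws:sourceVpce": VPCE_ID}},
}
print(json.dumps(statement, indent=2))
```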
Bottom line: IAM policies grant access to users. Bucket Policies grant anonymous access.
You certainly do not "need" the first policy you have listed, and in fact you should rarely ever use that policy because it grants complete access to the bucket.

AWS S3 Bucket Policy Limit GETs to a set of IPs?

I currently have a bucket named mets-logos. It has this bucket policy currently, which allows GetObjects from anyone.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mets-logos/*"
    }
  ]
}
I wish to only allow GetObject from a whitelist of IPs. Here is what I tried, but it does not work (outside IPs can still get objects):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mets-logos/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "100.77.17.80/32",
            "100.77.26.210/32"
          ]
        }
      }
    }
  ]
}
Side question: If my bucket policy is correct, do I need to wait for AWS to reflect this change, or should it be reflected immediately?
Try adding a Deny, with exceptions for your whitelisted addresses, like this:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mets-logos/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "100.77.17.80/32",
            "100.77.26.210/32"
          ]
        }
      }
    }
  ]
}
This explicitly denies GetObject from every IP address except the two addresses you are whitelisting. Note that a JSON object cannot contain duplicate keys, so both whitelisted addresses go into a single NotIpAddress array rather than into repeated NotIpAddress entries.
I can see how this would be useful if you are accessing S3 under IAM credentials but you want to further control access at the bucket level. The Deny in this policy will override existing IAM user policies.
To answer your side question, policy changes take effect immediately.
Access to S3 buckets is governed by both the S3 bucket policy and the IAM access policies that are attached to the principals accessing the bucket.
So it's possible that an IAM access policy may "overrule" an S3 bucket policy.
Your S3 bucket policy says "allow get if the IP is such-and-such". But there's nothing in the bucket policy that's saying "don't allow anyone else".
If your IAM user/role that's accessing the bucket allows s3:GetObject on your bucket (or *), then that policy lets them access the bucket.
If the IAM user/role does not have an explicit "allow" for "s3:GetObject" (or "s3:*"), then your policy would work.
To restrict users/roles that would otherwise be permitted access to the bucket to the allowed IP addresses, you need to change your policy to deny anyone NOT coming from an allowed IP address. An explicit Deny in the bucket policy overrules any Allow in the IAM user/role's policy.
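That evaluation order can be modeled in a few lines. The following is a deliberately simplified toy (real IAM evaluation has more steps) showing why a matching explicit Deny beats a matching Allow:

```python
def evaluate(statements, context):
    """Toy model of IAM evaluation: a matching explicit Deny wins outright,
    then any matching Allow, otherwise the implicit default is Deny."""
    decision = "ImplicitDeny"
    for s in statements:
        if s["Effect"] == "Deny" and s["matches"](context):
            return "ExplicitDeny"          # a matching Deny ends evaluation
        if s["Effect"] == "Allow" and s["matches"](context):
            decision = "Allow"
    return decision

# IAM policy attached to the user: Allow s3:GetObject on everything.
iam_allow = {"Effect": "Allow", "matches": lambda ctx: True}
# Bucket policy: Deny any request not from a whitelisted IP.
bucket_deny = {"Effect": "Deny",
               "matches": lambda ctx: ctx["ip"] not in ("100.77.17.80",
                                                        "100.77.26.210")}

print(evaluate([iam_allow, bucket_deny], {"ip": "100.77.17.80"}))  # Allow
print(evaluate([iam_allow, bucket_deny], {"ip": "203.0.113.9"}))   # ExplicitDeny
```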
Try this policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllButFromAllowedIp",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mets-logos/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "100.77.17.80/32",
            "100.77.26.210/32"
          ]
        }
      }
    }
  ]
}

Permissions to access ElasticSearch from Lambda?

I'm trying to use Elasticsearch for data storage for a Lambda function connected to Alexa Skills Kit. The Lambda works alright without Elasticsearch but ES provides much-needed fuzzy matching.
The only way I've been able to access it from Lambda is by enabling Elasticsearch global access but that's a really bad idea. I've also been able to access from my computer via open access policy or IP address policy. Is there a way to do read-only access via Lambda and read-write via IP?
On the IAM side I granted my Lambda role AmazonESReadOnlyAccess. On the ES side I tried this, but it only worked for the IP address:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::NUMBER:root",
          "arn:aws:iam::NUMBER:role/lambda_basic_execution"
        ]
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "MY IP"
        }
      }
    }
  ]
}
This forum post asks the same question but went unanswered.
The only way I know of to do this is to use a resource-based policy or an IAM-based policy on your ES domain. This would restrict access to a particular IAM user or role. However, to make this work you also need to sign your requests to ES using SigV4.
There are libraries that will do this signing for you, for example this one extends the popular Python requests library to sign ElasticSearch requests via SigV4. I believe similar libraries exist for other languages.
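For a feel of what those libraries do under the hood, here is the SigV4 signing-key derivation as AWS documents it, in stdlib-only Python. The derived key is then used to HMAC the string-to-sign; the credential below is the well-known dummy from the AWS docs:

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str,
                      service: str) -> bytes:
    """Derive the SigV4 signing key via the chained HMAC over
    date (YYYYMMDD), region, service, and the literal 'aws4_request'."""
    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                        "20150830", "us-east-1", "es")
print(key.hex())
```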
Now it's possible from your code with elasticsearch.js. Before you try it, you must install the http-aws-es module:
const AWS = require('aws-sdk');
const httpAwsEs = require('http-aws-es');
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({
  host: 'YOUR_ES_HOST',
  connectionClass: httpAwsEs,
  amazonES: {
    region: 'YOUR_ES_REGION',
    credentials: new AWS.EnvironmentCredentials('AWS')
  }
});

// client.search({...})
Of course, before using it, configure access to the Elasticsearch domain:
For external (outside AWS) access to your Elasticsearch cluster, you want to create the cluster with an IP-based access policy. Something like the below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<<IP/CIDR>>"
          ]
        }
      },
      "Resource": "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
    }
  ]
}
For your Lambda function, create the role that the Lambda function will assume with the below policy snippet.
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:DescribeElasticsearchDomain",
    "es:DescribeElasticsearchDomains",
    "es:DescribeElasticsearchDomainConfig",
    "es:ESHttpPost",
    "es:ESHttpPut"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
  ]
},
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:ESHttpGet"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_all/_settings",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_cluster/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_mapping/<<TYPE>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/*/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_stats"
  ]
}
I think you could more easily condense the above two policy statements into the following:
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:DescribeElasticsearchDomain",
    "es:DescribeElasticsearchDomains",
    "es:DescribeElasticsearchDomainConfig",
    "es:ESHttpPost",
    "es:ESHttpGet",
    "es:ESHttpPut"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
  ]
}
I managed to piece the above together from the following sources:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
How to access Kibana from Amazon elasticsearch service?
https://forums.aws.amazon.com/thread.jspa?threadID=217149
You need to go to the access policy of your Elasticsearch domain and provide the ARN of the Lambda role that should be allowed to connect:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html#es-aws-integrations-s3-lambda-es-authorizations
AWS Lambda runs on shared, public AWS infrastructure, so simply adding a whitelist of IP addresses to the Elasticsearch access policy will not work. One way to do this is to give the Lambda execution role appropriate permissions on the Elasticsearch domain. Make sure that the Lambda execution role has permissions on the ES domain, and that the ES domain access policy has a statement which allows this Lambda role ARN to perform the appropriate actions. Once this is done, all you have to do is sign your requests via SigV4 when accessing the ES endpoint. Hope that helps!
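As a sketch of what that domain access-policy statement could look like (account ID, region, domain name, and role name are all placeholders to substitute with your own):

```python
import json

ACCOUNT_ID = "123456789012"  # placeholder account
DOMAIN_ARN = f"arn:aws:es:us-east-1:{ACCOUNT_ID}:domain/my-domain"
LAMBDA_ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/lambda_basic_execution"

access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow the Lambda execution role to call the ES HTTP endpoints;
            # requests from the function still have to be signed with SigV4.
            "Effect": "Allow",
            "Principal": {"AWS": LAMBDA_ROLE_ARN},
            "Action": ["es:ESHttpGet", "es:ESHttpPost", "es:ESHttpPut"],
            "Resource": DOMAIN_ARN + "/*",
        }
    ],
}
print(json.dumps(access_policy, indent=2))
```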