AWS IAM Policy - allow from IP Addresses AND allow Firehose - amazon-web-services

I'm trying to set up an ES instance that allows access from a couple of IP addresses, in addition to allowing a Kinesis Firehose IAM role to deliver data to the instance.
I'm having trouble combining the two policies though. Each one works on its own. With just the IP address policy in place, I can view ES from Kibana, but I can't deliver data with Firehose. Likewise with only the Firehose policy, I can deliver data but not query ES.
Can someone help me see my error in constructing this access policy?
Here's the policy on the ES instance:
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::iiiiiiiiiiii:role/firehose_delivery_role"
},
"Action": "es:*",
"Resource": "arn:aws:es:us-west-2:iiiiiiiiiiii:domain/es-test/*"
},
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "arn:aws:es:us-west-2:iiiiiiiiiiii:domain/es-test/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"xxx.xxx.xx.xxx",
"yyy.yy.y.yyy"
]
}
}
}
]

Add the following prior to the Statement: "Version": "2012-10-17",
For your source IPs, have you specified a subnet mask like /32 or /24? It's required per http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Conditions_IPAddress
Add a unique "Sid" to the first statement; you already have one on the second statement. The documentation says it's optional, but I have a working policy very close to yours apart from these differences.

Related

Is aws:SourceVpc condition key present in the request context when interacting with S3 over web console?

I have a Bucket Policy (listed below) that is supposed to prevent access to an S3 bucket when it is accessed from anywhere other than a specific VPC. I launched an EC2 instance in the VPC, tested, and confirmed that S3 access works fine. However, when I access the same S3 bucket via the web console, I get an 'Error - Access Denied' message.
Does this mean that the aws:SourceVpc condition key is present in the request context when interacting with S3 over the web console as well?
My assumption is that it is present in the request context; otherwise, the policy statement would have failed such that its "Effect" would not apply, because there is no "IfExists" added to StringNotEquals (the variant I tried is sketched after the policy below). I'm asking because I could not find this information in the AWS documentation. Even after adding "IfExists" to StringNotEquals, the results are the same - can someone confirm?
{
"Version": "2012-10-17",
"Id": "Policy1589385141624",
"Statement": [
{
"Sid": "Access-to-specific-VPC-only",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::abhxy12bst3",
"arn:aws:s3:::abhxy12bst3/*"
],
"Condition": {
"StringNotEquals": {
"aws:sourceVpc": "vpc-0xy915sdfedb5667"
}
}
}
]
}
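For reference, the "IfExists" variant mentioned above only changes the condition operator; a sketch of what was tried (which, per the question, gave the same result):
"Condition": {
  "StringNotEqualsIfExists": {
    "aws:sourceVpc": "vpc-0xy915sdfedb5667"
  }
}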
Yes, you are right. I tested the following bucket policy, and operations from the AWS S3 console are denied:
{
"Version": "2012-10-17",
"Id": "Policy1589385141624",
"Statement": [
{
"Sid": "Access-to-specific-VPC-only",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::abhxy12bst3",
"arn:aws:s3:::abhxy12bst3/*"
],
"Condition": {
"StringLike": {
"aws:sourceVpc": "vpc-30*"
}
}
}
]
}
It means there is definitely some VPC ID present in the request. It might be the same for each account, or it could be different.
This will apply to all requests interacting with S3. The console just provides a GUI on top of the AWS API.

Grant Access to AWS S3 bucket from specific IP without credentials

I do not want to make my S3 bucket publicly accessible, but I expect it to be accessible from my local organization's network without the AWS CLI or any credentials. How can I achieve this?
I tried a bucket policy with the principal set to * and the source IP set to the public IP of the organization's network.
If the intention is to grant anonymous access to a particular CIDR range while also permitting IAM policies to grant additional access to specific people (e.g. administrators), then the explicit-Deny approach described below would not be appropriate.
If you were to follow the initial example laid out in the AWS documentation, you'd end up with a policy that probably looks similar to this:
{
"Id": "S3PolicyId1",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "x.x.x.x/xx"
}
}
}
]
}
What you're going to find, after banging your head on the table a few times, is that this policy does not work. There does not appear to be an implied deny rule with S3 buckets (similar to how IAM access policies are set up).
By default, accounts are restricted from accessing S3 unless they have been given access via a policy. However, S3 is designed by default to allow access from any IP address, so to block IPs you have to specify explicit denies in the policy instead of allows.
Once you learn this, the policy is easy to adjust: you just flip the policy around from allowing access only from my IP address to denying access from everywhere that is NOT my IP address.
{
"Id": "S3PolicyId1",
"Statement": [
{
"Sid": "IPDeny",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": "xxx.x.x/xx"
}
}
}
]
}
Hope this helps!
Yes, that is the correct way to do it.
From Bucket Policy Examples - Amazon Simple Storage Service:
{
"Version": "2012-10-17",
"Id": "S3PolicyId1",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "54.240.143.0/24"}
}
}
]
}

How to Give Amazon SES Permission to Write to Your Amazon S3 Bucket

I want my SES (AWS) to be able to receive emails, so I followed this tutorial:
http://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-getting-started-receipt-rule.html
At the last step - creating the rule - it fails with the following error:
Could not write to bucket: "email-receiving"
I googled and found information (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-permissions.html) suggesting how to fix the issue.
However, when adding my policy statement, I get an error: "This policy contains the following error: Has prohibited field Principal. For more information about the IAM policy grammar, see AWS IAM Policies."
My policy statement is:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GiveSESPermissionToWriteEmail",
"Effect": "Allow",
"Principal": {
"Service": [
"ses.amazonaws.com"
]
},
"Action": [
"s3:PutObject"
],
"Resource": "arn:aws:s3:::mybulketname/*",
"Condition": {
"StringEquals": {
"aws:Referer": "my12accountId"
}
}
}
]
}
If I take off
"Principal": {
"Service": [
"ses.amazonaws.com"
]
}
then the "Validate policy" check passes.
Thanks
Go to your bucket -> Permissions -> Bucket Policy and add:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSESPuts",
"Effect": "Allow",
"Principal": {
"Service": "ses.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::BUCKEN_NAME/*",
"Condition": {
"StringEquals": {
"aws:Referer": "YOUR ID"
}
}
}
]
}
Read more here https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-permissions.html
To find your AWS account ID number on the AWS Management Console, choose Support on the navigation bar on the upper-right, and then choose Support Center. Your currently signed-in account ID appears in the upper-right corner below the Support menu.
Read more here https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html
I followed this advice but was still having the issue. After much debugging, I realized that SES was failing to write because I had the bucket's default server-side encryption set to "AWS-KMS".
I did a five-minute Google search and couldn't find this incompatibility documented anywhere.
You can work around this by updating your default encryption setting on the target bucket to either "AES-256" or "None".
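For example (a sketch, not from the original answer), the server-side encryption configuration you would pass to the S3 PutBucketEncryption API (aws s3api put-bucket-encryption) to switch the bucket default back to AES-256 looks roughly like this:
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}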
This problem has been resolved.
Create the policy on the bucket you want to grant SES permission to, not in IAM.
Note: I continued to have this error even after correctly specifying permissions. If you are working cross-region (e.g. SES is in N. Virginia and the S3 bucket is in Africa), then you either need to specify the bucket name with the region or just create the bucket in the same region.
I had the same problem; if I simply delete the "Condition", the policy passes and the "RuleSet" is OK:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GiveSESPermissionToWriteEmail",
"Effect": "Allow",
"Principal": {
"Service": "ses.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::mybulketname/*"
}
]
}

Is it possible to combine IP-based and IAM-based access control policies?

I have an AWS ElasticSearch cluster to which I want to restrict access.
Ideally I want to use both IAM access (to allow our other service components to contact the cluster) and IP-based access (to allow ad-hoc testing via Sense from within our local network), but when I just tried to add both options to the access control policy it didn't allow access from the listed IP addresses.
Is it possible to combine the two policy styles like this? I assumed from "Statement": [] in the autogenerated JSON that it would be.
An anonymised MWE:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "PLACEHOLDER",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"PLACEHOLDER",
"PLACEHOLDER",
"PLACEHOLDER"
]
}
}
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "PLACEHOLDER"
},
"Action": "es:*",
"Resource": "PLACEHOLDER"
}
]
}
The IpAddress section had previously worked, but I have no easy way to test whether the IAM section works, as we lost access to ad hoc testing when trying it (as expected - the ad hoc testing is not done from inside the AWS account).

Permissions to access ElasticSearch from Lambda?

I'm trying to use Elasticsearch for data storage for a Lambda function connected to Alexa Skills Kit. The Lambda works alright without Elasticsearch but ES provides much-needed fuzzy matching.
The only way I've been able to access it from Lambda is by enabling Elasticsearch global access, but that's a really bad idea. I've also been able to access it from my computer via an open access policy or an IP address policy. Is there a way to do read-only access via Lambda and read-write via IP?
On the IAM side I granted my Lambda role AmazonESReadOnlyAccess. On the ES side I tried this, but it only worked for the IP address:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::NUMBER:root",
"arn:aws:iam::NUMBER:role/lambda_basic_execution"
]
},
"Action": "es:*",
"Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*"
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "MY IP"
}
}
}
]
}
This forum post asks the same question but went unanswered.
The only way I know of to do this is to use a resource-based policy or an IAM-based policy on your ES domain. This would restrict access to a particular IAM user or role. However, to make this work you also need to sign your requests to ES using SigV4.
There are libraries that will do this signing for you, for example this one extends the popular Python requests library to sign ElasticSearch requests via SigV4. I believe similar libraries exist for other languages.
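As a rough sketch of the resource-based domain policy described above (reusing the NUMBER/NAME placeholders from the question, and limiting the Lambda role to read-only HTTP GETs; requests from the Lambda still have to be SigV4-signed as that role):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::NUMBER:role/lambda_basic_execution"
      },
      "Action": "es:ESHttpGet",
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*"
    }
  ]
}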
Now it's possible from your code with elasticsearch.js. Before you try it, you must install the http-aws-es module.
const AWS = require('aws-sdk');
const httpAwsEs = require('http-aws-es');
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({
    host: 'YOUR_ES_HOST',
    // connection class that signs every request with SigV4
    connectionClass: httpAwsEs,
    amazonES: {
        region: 'YOUR_ES_REGION',
        // picks up credentials from the environment (in Lambda, the execution role's)
        credentials: new AWS.EnvironmentCredentials('AWS')
    }
});

// client.search({...})
Of course, before using it, configure access to the Elasticsearch domain:
For external (outside AWS) access to your Elasticsearch cluster, you want to create the cluster with an IP-based access policy. Something like the below:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"<<IP/CIDR>>"
]
}
},
"Resource": "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
}
]
}
For your Lambda function, create the role that the Lambda function will assume with the below policy snippet.
{
"Sid": "",
"Effect": "Allow",
"Action": [
"es:DescribeElasticsearchDomain",
"es:DescribeElasticsearchDomains",
"es:DescribeElasticsearchDomainConfig",
"es:ESHttpPost",
"es:ESHttpPut"
],
"Resource": [
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
]
},
{
"Sid": "",
"Effect": "Allow",
"Action": [
"es:ESHttpGet"
],
"Resource": [
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_all/_settings",
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_cluster/stats",
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_mapping/<<TYPE>>",
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes",
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/stats",
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/*/stats",
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_stats",
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_stats"
]
}
I think you could more easily condense the above two policy statements into the following:
{
"Sid": "",
"Effect": "Allow",
"Action": [
"es:DescribeElasticsearchDomain",
"es:DescribeElasticsearchDomains",
"es:DescribeElasticsearchDomainConfig",
"es:ESHttpPost",
"es:ESHttpGet",
"es:ESHttpPut"
],
"Resource": [
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
"arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
]
}
I managed to piece the above together from the following sources:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
How to access Kibana from Amazon elasticsearch service?
https://forums.aws.amazon.com/thread.jspa?threadID=217149
You need to go to the access policy of Lambda and provide the AWS ARN to connect
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html#es-aws-integrations-s3-lambda-es-authorizations
AWS Lambda runs on public EC2 instances, so simply adding a whitelist of IP addresses to the Elasticsearch access policy will not work. One way to do this is to give the Lambda execution role appropriate permissions to the Elasticsearch domain. Make sure that the Lambda execution role has permissions to the ES domain and that the ES domain access policy has a statement which allows this Lambda role ARN to perform the appropriate actions. Once this is done, all you have to do is sign your requests via SigV4 when accessing the ES endpoint. Hope that helps!