Kibana specific permissions for AWS ElasticSearch domain

We've stumbled across an issue that has been bothering us for some time, and we would really appreciate your help.
Our problem lies with permissions for the Elasticsearch service in combination with Kibana. We have an ES domain with WRITE/READ permissions only from our Lambdas and READ permissions only from the IPs of our company offices. We use Kibana (the AWS-hosted one, not local) for visualizing our data, and the problem is that we need to allow POST actions on the ES domain for Kibana to work; otherwise we just get permission errors. But that also means we need to open POST requests to our company IPs, which is something we don't want. Our question is how we can grant only Kibana that sort of permission. Our goal is READ/WRITE access only for our Lambdas, READ access only for our company IPs, and whatever permissions Kibana needs in order to work, without interfering with the rest of the permissions.
This is the access policy we have on the ES domain:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<ROLE_USED_BY_OUR_LAMBDA>"
      },
      "Action": "es:*",
      "Resource": "<ELASTICSEARCH_DOMAIN>/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "es:ESHttpGet",
        "es:ESHttpHead",
        "es:Describe*",
        "es:List*"
      ],
      "Resource": "<ELASTICSEARCH_DOMAIN>/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<COMPANY_IP_1>",
            "<COMPANY_IP_2>",
            "<COMPANY_IP_3>"
          ]
        }
      }
    }
  ]
}

Related

Grant Access to AWS S3 bucket from specific IP without credentials

I do not want to make my S3 bucket publicly accessible, but I expect it to be accessible from my local organization network without the AWS CLI or any credentials. How can I achieve this?
I tried a bucket policy with the principal as * and the source IP set to the public IP of the organization network.
If the intention is to grant anonymous access to a particular CIDR range, while also permitting IAM policies to grant additional access to specific people (eg Administrators), then this would not be appropriate.
If you follow the initial example laid out by the AWS documentation, you'll end up with a policy that probably looks similar to this:
{
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "x.x.x.x/xx"
        }
      }
    }
  ]
}
What you're going to find, after banging your head on the table a few times, is that this policy does not work. There does not appear to be an implied deny rule with S3 buckets (similar to how IAM access policies are set up).
By default, accounts are restricted from accessing S3 unless they have been given access via a policy. However, S3 is designed by default to allow access from any IP address. So to block IPs you have to specify explicit denies in the policy instead of allows.
Once you learn this, the policy is easy to adjust: just flip it around from allowing access only from my IP address to denying access from everywhere that is NOT my IP address.
{
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "xxx.x.x/xx"
        }
      }
    }
  ]
}
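Under the hood, `aws:SourceIp` with `NotIpAddress` behaves like a CIDR containment check. A rough sketch of that matching logic (IPv4 only; the function names are my own, not an AWS API):

```javascript
// Minimal sketch of CIDR containment, as used by aws:SourceIp conditions (IPv4 only).
function ipToInt(ip) {
  // "10.0.0.1" -> 32-bit unsigned integer
  return ip.split('.').reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

function inCidr(ip, cidr) {
  const [base, bitsStr] = cidr.split('/');
  const bits = Number(bitsStr);
  // Mask keeping the top `bits` bits; /0 matches everything.
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

// "NotIpAddress" semantics: the Deny fires when the source IP
// falls outside every listed range.
function denyApplies(sourceIp, allowedCidrs) {
  return !allowedCidrs.some((cidr) => inCidr(sourceIp, cidr));
}
```

So a request from inside any listed CIDR escapes the Deny, and everything else is blocked.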
Hope this helps!
Yes, that is the correct way to do it.
From Bucket Policy Examples - Amazon Simple Storage Service:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "54.240.143.0/24"
        }
      }
    }
  ]
}

AWS Bucket policy for a reverse proxy

I am trying to use an s3 bucket as a simple web host, but want to put it behind a reverse proxy capable of layering some required security controls.
I have IP addresses associated with the reverse proxy that I would like to restrict the S3 web access to. When I apply the IP-based restriction in the bucket policy, though, it seems to make administrative interaction in the account extremely difficult, or to block it entirely.
I would like to not disrupt access from within the account via the console/IAM users/federated roles, but to enable HTTP access to the S3 site for just the IPs associated with the reverse proxy.
The AWS documentation on what is required to enable web access shows that I need this policy statement, so I have included it to start with.
{
  "Id": "S3_Policy",
  "Statement": [
    {
      "Sid": "AllowWebAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
Then I want to restrict the web traffic to a particular set of IPs so I have added this statement.
{
  "Id": "S3_Policy",
  "Statement": [
    {
      "Sid": "AllowWebAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    },
    {
      "Sid": "DenyNonProxyWebAccess",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "99.99.99.1/32",
            "99.99.99.2/32",
            "99.99.99.3/32",
            "99.99.99.4/32",
            "99.99.99.5/32"
          ]
        }
      }
    }
  ]
}
This deny policy has the unintended consequence of blocking my ability to access it from inside my account with IAM users or assumed federated roles, so I have added an explicit allow for those resources. I would like to just place a blanket allow for "the account" if possible. That leaves me with this policy, and it just doesn't seem to work how I would like it to. I can't seem to either manage it as my users or access the web content from the proxy.
{
  "Id": "S3_Policy",
  "Statement": [
    {
      "Sid": "AllowWebAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    },
    {
      "Sid": "DenyNonProxyWebAccess",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "99.99.99.1/32",
            "99.99.99.2/32",
            "99.99.99.3/32",
            "99.99.99.4/32",
            "99.99.99.5/32"
          ]
        }
      }
    },
    {
      "Sid": "AllowAccountUsersAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::999999999999:user/user#place",
          "arn:aws:iam::999999999999:user/user2#place",
          "999999999999"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket"
      ]
    }
  ]
}
Is there a way to have an S3 bucket be a static web host restricted to only select IP ranges for web access, without disrupting the ability to manage the bucket itself from the account?
There are multiple ways that access can be granted to resources in an Amazon S3 bucket:
IAM permissions
Bucket Policy
Pre-signed URL
If an access request meets any of the above, it will be granted access (although a Deny might override it).
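The deny-overrides behavior can be illustrated with a toy model of the evaluation logic (my own simplification for illustration, not the actual AWS implementation):

```javascript
// Toy model of IAM policy evaluation: an explicit Deny overrides any Allow,
// and with no matching statement the request falls through to an implicit deny.
function evaluate(statements, request) {
  let allowed = false;
  for (const stmt of statements) {
    if (!stmt.matches(request)) continue;
    if (stmt.effect === 'Deny') return 'Deny'; // explicit Deny short-circuits
    if (stmt.effect === 'Allow') allowed = true;
  }
  return allowed ? 'Allow' : 'ImplicitDeny';
}

// Hypothetical statements mirroring the reverse-proxy scenario:
const statements = [
  { effect: 'Allow', matches: (r) => r.action === 's3:GetObject' },
  { effect: 'Deny', matches: (r) => !r.fromProxy },
];
```

With these statements, a `s3:GetObject` request from outside the proxy is denied even though an Allow statement matches it, which is exactly the trap the questions above keep running into.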
IAM Permissions are used to assign permissions to a User or Group. For example, if you want to have access to the bucket, you can create a policy and assign it to yourself as an IAM user. If you wish all of your administrators to access the bucket, then put them in an IAM Group and assign the policy to the group. All access made this way needs to be done with AWS credentials (no anonymous access).
A Bucket Policy is typically used to grant anonymous access (no credentials required), but can include restrictions such as IP address ranges, SSL-only, and time-of-day. This is the way you would grant access to your reverse proxy, since it is not sending credentials as part of its requests.
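For example, the SSL-only restriction is typically expressed with the `aws:SecureTransport` condition key; a sketch, with a placeholder bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
```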
A Pre-signed URL can be generated by applications to grant temporary access to a specific object. The URL includes a calculated signature that authenticates the access. This is typically used when generating links on HTML pages (eg to link to private images).
Your situation
So, firstly, you should grant access to yourself and your administrators, using a policy similar to:
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "AllowWebAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
}
]
}
Note that there is no Principal because it applies to whatever users/groups have been assigned this policy.
Next, you wish to grant access to your reverse proxies. This can be done via a Bucket Policy:
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "DenyNonProxyWebAccess",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::mybucket"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"99.99.99.1/32",
"99.99.99.2/32",
"99.99.99.3/32",
"99.99.99.4/32",
"99.99.99.5/32"
]
}
}
}
]
}
This policy is permitting (Allow) access to the specified bucket, but only if the request is coming from one of the stated IP addresses.
Granting access via Allow is always preferable to denying access because Deny always overrides Allow. Therefore, use Deny sparingly since once something is denied, it cannot then be Allowed (eg if a Deny blocks administrative access, you cannot then Allow the access). Deny is mostly used where you definitely want to block something (eg a known bad actor).
VPC Endpoint
A final option worth considering is the use of a VPC Endpoint for S3. This allows direct communication between a VPC and S3, without having to go via an Internet Gateway. This is excellent for situations where resources in a Private Subnet wish to communicate with S3 without using a NAT Gateway.
Additional policies can be added to a VPC Endpoint to define which resources can access the VPC Endpoint (eg your range of Reverse Proxies). Bucket Policies can specifically refer to VPC Endpoints, permitting requests from that access method. For example, you could configure a bucket policy that permits access only from a specific VPC -- this is useful for separating Dev/Test/Prod access to buckets.
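For illustration, a bucket policy statement keyed to a VPC Endpoint could look like this, using the `aws:SourceVpce` condition key (the endpoint ID and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowFromVpcEndpointOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpce": "vpce-EXAMPLE"
        }
      }
    }
  ]
}
```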
However, it probably isn't suitable for your given use-case because it would force all S3 traffic to go via the VPC Endpoint, even outside of your reverse proxies. This might not be desired behavior for your architecture.
Bottom line: IAM policies grant access to users. Bucket Policies grant anonymous access.
You certainly do not "need" the first policy you have listed, and in fact you should rarely ever use that policy because it grants complete access to the bucket.

AWS IAM Policy - allow from IP Addresses AND allow Firehose

I'm trying to set up an ES instance that allows access from a couple of IP addresses, in addition to allowing a Kinesis Firehose IAM role to deliver data to the instance.
I'm having trouble combining the two policies though. Each one works on its own. With just the IP address policy in place, I can view ES from Kibana, but I can't deliver data with Firehose. Likewise with only the Firehose policy, I can deliver data but not query ES.
Can someone help me see my error in constructing this access policy?
Here's the policy on the ES instance:
"Statement": [
  {
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::iiiiiiiiiiii:role/firehose_delivery_role"
    },
    "Action": "es:*",
    "Resource": "arn:aws:es:us-west-2:iiiiiiiiiiii:domain/es-test/*"
  },
  {
    "Sid": "1",
    "Effect": "Allow",
    "Principal": {
      "AWS": "*"
    },
    "Action": "es:*",
    "Resource": "arn:aws:es:us-west-2:iiiiiiiiiiii:domain/es-test/*",
    "Condition": {
      "IpAddress": {
        "aws:SourceIp": [
          "xxx.xxx.xx.xxx",
          "yyy.yy.y.yyy"
        ]
      }
    }
  }
]
Add the following prior to the Statement: "Version": "2012-10-17",
For your source IPs, have you specified a subnet mask like /32 or /24? It's required per http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Conditions_IPAddress
Add a unique "Sid" to the first statement; you have one for the 2nd statement. The documentation says it's optional; however, I have a working policy very close to yours, except for these differences.

Is it possible to combine IP-based and IAM-based access control policies?

I have an AWS ElasticSearch cluster to which I want to restrict access.
Ideally I want to use both IAM access (to allow our other service components to contact the cluster) and IP-based access (to allow ad-hoc testing via Sense from within our local network), but when I just tried to add both options to the access control policy it didn't allow access from the listed IP addresses.
Is it possible to combine the two policy styles like this? I assumed from "Statement": [] in the autogenerated JSON that it would be.
An anonymised MWE:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "PLACEHOLDER",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "PLACEHOLDER",
            "PLACEHOLDER",
            "PLACEHOLDER"
          ]
        }
      }
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "PLACEHOLDER"
      },
      "Action": "es:*",
      "Resource": "PLACEHOLDER"
    }
  ]
}
The IpAddress section had previously worked, but I have no easy way to test whether the IAM section worked, as we lost access to ad hoc testing when trying it (as expected, since the ad hoc testing is not done from inside the AWS account).

Permissions to access ElasticSearch from Lambda?

I'm trying to use Elasticsearch for data storage for a Lambda function connected to Alexa Skills Kit. The Lambda works alright without Elasticsearch but ES provides much-needed fuzzy matching.
The only way I've been able to access it from Lambda is by enabling Elasticsearch global access but that's a really bad idea. I've also been able to access from my computer via open access policy or IP address policy. Is there a way to do read-only access via Lambda and read-write via IP?
On IAM I granted my Lambda role AmazonESReadOnlyAccess. On the ES side I tried this but it only worked for IP address:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::NUMBER:root",
          "arn:aws:iam::NUMBER:role/lambda_basic_execution"
        ]
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "MY IP"
        }
      }
    }
  ]
}
This forum post asks the same question but went unanswered.
The only way I know of to do this is to use a resource-based policy or an IAM-based policy on your ES domain. This would restrict access to a particular IAM user or role. However, to make this work you also need to sign your requests to ES using SigV4.
There are libraries that will do this signing for you, for example this one extends the popular Python requests library to sign ElasticSearch requests via SigV4. I believe similar libraries exist for other languages.
Now it's possible from your code with elasticsearch.js. Before you try it, you must install the http-aws-es module.
const AWS = require('aws-sdk');
const httpAwsEs = require('http-aws-es');
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({
  host: 'YOUR_ES_HOST',
  connectionClass: httpAwsEs,
  amazonES: {
    region: 'YOUR_ES_REGION',
    credentials: new AWS.EnvironmentCredentials('AWS')
  }
});

// client.search({...})
Of course, before using it, configure access to elasticsearch domain:
For external (outside AWS) access to your Elasticsearch cluster, you want to create the cluster with an IP-based access policy. Something like the below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<<IP/CIDR>>"
          ]
        }
      },
      "Resource": "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
    }
  ]
}
For your Lambda function, create the role that the Lambda function will assume with the below policy snippet.
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:DescribeElasticsearchDomain",
    "es:DescribeElasticsearchDomains",
    "es:DescribeElasticsearchDomainConfig",
    "es:ESHttpPost",
    "es:ESHttpPut"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
  ]
},
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:ESHttpGet"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_all/_settings",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_cluster/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_mapping/<<TYPE>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/*/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_stats"
  ]
}
I think you could more easily condense the above two policy statements into the following:
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:DescribeElasticsearchDomain",
    "es:DescribeElasticsearchDomains",
    "es:DescribeElasticsearchDomainConfig",
    "es:ESHttpPost",
    "es:ESHttpGet",
    "es:ESHttpPut"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
  ]
}
I managed to piece the above together from the following sources:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
How to access Kibana from Amazon elasticsearch service?
https://forums.aws.amazon.com/thread.jspa?threadID=217149
You need to go to the access policy of the ES domain and provide the ARN of your Lambda's role in order to connect:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html#es-aws-integrations-s3-lambda-es-authorizations
AWS Lambda runs on public EC2 instances, so simply adding a whitelist of IP addresses to the Elasticsearch access policy will not work. One way to do this is to give the Lambda execution role appropriate permissions to the Elasticsearch domain. Make sure that the Lambda execution role has permissions to the ES domain, and that the ES domain access policy has a statement which allows this Lambda role ARN to perform the appropriate actions. Once this is done, all you have to do is sign your requests via SigV4 when accessing the ES endpoint. Hope that helps!