Permissions to access ElasticSearch from Lambda?

I'm trying to use Elasticsearch as data storage for a Lambda function connected to the Alexa Skills Kit. The Lambda works fine without Elasticsearch, but ES provides much-needed fuzzy matching.
The only way I've been able to access it from Lambda is by enabling global access to Elasticsearch, but that's a really bad idea. I've also been able to access it from my computer via an open access policy or an IP-address policy. Is there a way to allow read-only access from Lambda and read-write access by IP?
In IAM, I granted my Lambda role AmazonESReadOnlyAccess. On the ES side I tried the policy below, but it only worked for the IP address:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::NUMBER:root",
          "arn:aws:iam::NUMBER:role/lambda_basic_execution"
        ]
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "MY IP"
        }
      }
    }
  ]
}
This forum post asks the same question but went unanswered.

The only way I know of to do this is to use a resource-based policy or an IAM-based policy on your ES domain. This would restrict access to a particular IAM user or role. However, to make this work you also need to sign your requests to ES using SigV4.
There are libraries that will do this signing for you; for example, this one extends the popular Python requests library to sign Elasticsearch requests via SigV4. I believe similar libraries exist for other languages.
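For instance, in Node.js the aws4 package plays a similar role. A minimal sketch, assuming a placeholder domain endpoint and index; only the aws4.sign() call itself is the library's real API, everything else is illustrative:

// Sketch: sign a request to an ES domain with SigV4 using the 'aws4' npm package.
// Host, path, and region are placeholders for your own domain.
const aws4 = require('aws4');
const https = require('https');

const opts = {
  host: 'my-domain.us-east-1.es.amazonaws.com',
  path: '/my-index/_search',
  service: 'es',
  region: 'us-east-1',
  method: 'GET'
};

// aws4 picks up credentials from the environment by default
// (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN).
aws4.sign(opts);

https.request(opts, res => res.pipe(process.stdout)).end();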

Now it's possible to do this from your code with elasticsearch.js. Before you try it, you must install the http-aws-es module.
const AWS = require('aws-sdk');
const httpAwsEs = require('http-aws-es');
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({
  host: 'YOUR_ES_HOST',
  // http-aws-es signs every request with SigV4
  connectionClass: httpAwsEs,
  amazonES: {
    region: 'YOUR_ES_REGION',
    // In Lambda, the execution role's credentials are exposed as environment variables
    credentials: new AWS.EnvironmentCredentials('AWS')
  }
});

// client.search({...})
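For example, a fuzzy match query of the kind the question calls for might look like this (a sketch; the index and field names are hypothetical):

// Hypothetical index and field names; adjust to your own mapping.
client.search({
  index: 'skills',
  body: {
    query: {
      match: {
        name: {
          query: 'alexa',
          fuzziness: 'AUTO'
        }
      }
    }
  }
}).then(resp => console.log(resp.hits.hits))
  .catch(err => console.error(err));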
Of course, before using it, you need to configure access to the Elasticsearch domain (see the policies below).

For external (outside AWS) access to your Elasticsearch cluster, you want to create the cluster with an IP-based access policy. Something like the below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<<IP/CIDR>>"
          ]
        }
      },
      "Resource": "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
    }
  ]
}
For your Lambda function, create the role that the Lambda function will assume with the below policy snippet.
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:DescribeElasticsearchDomain",
    "es:DescribeElasticsearchDomains",
    "es:DescribeElasticsearchDomainConfig",
    "es:ESHttpPost",
    "es:ESHttpPut"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
  ]
},
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:ESHttpGet"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_all/_settings",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_cluster/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_mapping/<<TYPE>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/*/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_stats"
  ]
}
I think you could more easily condense the above two policy statements into the following:
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:DescribeElasticsearchDomain",
    "es:DescribeElasticsearchDomains",
    "es:DescribeElasticsearchDomainConfig",
    "es:ESHttpPost",
    "es:ESHttpGet",
    "es:ESHttpPut"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
  ]
}
I managed to piece the above together from the following sources:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
How to access Kibana from Amazon elasticsearch service?
https://forums.aws.amazon.com/thread.jspa?threadID=217149

You need to go to the access policy of your Elasticsearch domain and add the AWS ARN of the Lambda role to allow it to connect:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html#es-aws-integrations-s3-lambda-es-authorizations

AWS Lambda runs on public EC2 instances, so simply whitelisting IP addresses in the Elasticsearch access policy will not work. One way to do this is to give the Lambda execution role appropriate permissions to the Elasticsearch domain: make sure the execution role has permissions to the ES domain, and that the ES domain access policy has a statement allowing that Lambda role ARN to perform the appropriate actions. Once this is done, all you have to do is sign your requests via SigV4 when accessing the ES endpoint. Hope that helps!
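Concretely, the domain access policy statement for the Lambda role could look something like the sketch below (role name, account ID, region, and domain are placeholders):

{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::ACCOUNT_ID:role/lambda_basic_execution"
  },
  "Action": [
    "es:ESHttpGet",
    "es:ESHttpPost",
    "es:ESHttpPut"
  ],
  "Resource": "arn:aws:es:REGION:ACCOUNT_ID:domain/DOMAIN_NAME/*"
}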

Related

Why is aws lambda getting "AccessDeniedExceptionKMS" Error Message?

I just deployed a Lambda (using Terraform from a GitLab runner) to a new AWS account. This pipeline deploys a Lambda to another (dev/test) account without issues, but when I try to deploy to my prod account, I get the following error:
I'm homing in on the statement, "The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access."
I have confirmed that the encryption config for the env vars is set to use the default aws/lambda key instead of a customer master key. That seems to contradict the language of the error, which refers to a customer master key...?
The role assumed by the lambda does have a policy which includes two kms actions:
"Sid": "AWSKeyManagementService",
"Action": [
"kms:Decrypt",
"kms:DescribeKey"
]
By process of elimination, I wonder if the issue is a gap in the resource-based policy on the KMS key. Looking in the KMS console, under AWS managed keys, I find the aws/lambda key has the following key policy:
{
  "Version": "2012-10-17",
  "Id": "auto-awslambda",
  "Statement": [
    {
      "Sid": "Allow access through AWS Lambda for all principals in the account that are authorized to use AWS Lambda",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:CreateGrant",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "lambda.us-east-1.amazonaws.com",
          "kms:CallerAccount": "REDACTED" # <-- account where the Lambda is deployed
        }
      }
    },
    {
      "Sid": "Allow direct access to key metadata to the account",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::REDACTED:root" # <-- account where the Lambda is deployed
      },
      "Action": [
        "kms:Describe*",
        "kms:Get*",
        "kms:List*",
        "kms:RevokeGrant"
      ],
      "Resource": "*"
    }
  ]
}
This is very puzzling. Any pointers appreciated!
This was solved by simply deleting the lambda and then re-running my pipeline to re-deploy it. All I can conclude is that something was corrupted in the first deployment.
Deleting and redeploying the Lambda also sorted it out for me, after many other attempts to fix this.

Is aws:SourceVpc condition key present in the request context when interacting with S3 over web console?

I have a Bucket Policy (listed below) that is supposed to prevent access to an S3 bucket when accessed from anywhere other than a specific VPC. I launched an EC2 instance in the VPC, tested and confirmed that S3 access works fine. Now, when I access the same S3 bucket over web console, I get 'Error - Access Denied' message.
Does this mean that aws:SourceVpc condition key is present in the request context when interacting with S3 over web console as well?
My assumption is that it is present in the request context; otherwise the policy statement would have failed such that the statement's "Effect" would not apply, because there is no "IfExists" added to StringNotEquals. I'm asking this because I could not find this information in the AWS documentation. Even after adding "IfExists" to StringNotEquals, the results are the same. Can someone confirm?
{
  "Version": "2012-10-17",
  "Id": "Policy1589385141624",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::abhxy12bst3",
        "arn:aws:s3:::abhxy12bst3/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpc": "vpc-0xy915sdfedb5667"
        }
      }
    }
  ]
}
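(For reference, the IfExists variant mentioned above only swaps the condition operator, as in this sketch:)

"Condition": {
  "StringNotEqualsIfExists": {
    "aws:sourceVpc": "vpc-0xy915sdfedb5667"
  }
}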
Yes, you are right. I tested the following bucket policy, and operations from the AWS S3 console are denied.
{
  "Version": "2012-10-17",
  "Id": "Policy1589385141624",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::abhxy12bst3",
        "arn:aws:s3:::abhxy12bst3/*"
      ],
      "Condition": {
        "StringLike": {
          "aws:sourceVpc": "vpc-30*"
        }
      }
    }
  ]
}
This means there is definitely some VPC ID present in the request. It might be the same for each account, or it could be different.
This will apply to all requests interacting with S3. The console just provides a GUI on top of the AWS API.

Kibana specific permissions for AWS ElasticSearch domain

We've stumbled across an issue that is bothering us for some time and we would really appreciate your help.
Our problem lies with permissions for the Elasticsearch service in combination with Kibana. We have an ES domain with WRITE/READ permissions only from our Lambdas and READ permissions only from the IPs of our company offices. We use Kibana (on AWS, not local) for visualizing our data, and the problem is that we need to allow POST actions on the ES domain in order for Kibana to work; otherwise we just get permission errors. But that also means we need to open POST requests to our company IPs, which is something we don't want. Our question is how we can grant only Kibana that sort of permission. Our goal is to have READ/WRITE access only for our Lambdas, READ permissions only for our company IPs, and the permissions Kibana needs in order to work, without interfering with the rest of the permissions.
This is the access policy we have on the ES domain:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<ROLE_USED_BY_OUR_LAMBDA>"
      },
      "Action": "es:*",
      "Resource": "<ELASTICSEARCH_DOMAIN>/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "es:ESHttpGet",
        "es:ESHttpHead",
        "es:Describe*",
        "es:List*"
      ],
      "Resource": "<ELASTICSEARCH_DOMAIN>/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<COMPANY_IP_1>",
            "<COMPANY_IP_2>",
            "<COMPANY_IP_3>"
          ]
        }
      }
    }
  ]
}

AWS IAM Policy - allow from IP Addresses AND allow Firehose

I'm trying to set up an ES instance that allows access from a couple of IP addresses, in addition to allowing a Kinesis Firehose IAM role to deliver data to the instance.
I'm having trouble combining the two policies though. Each one works on its own. With just the IP address policy in place, I can view ES from Kibana, but I can't deliver data with Firehose. Likewise with only the Firehose policy, I can deliver data but not query ES.
Can someone help me see my error in constructing this access policy?
Here's the policy on the ES instance:
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::iiiiiiiiiiii:role/firehose_delivery_role"
},
"Action": "es:*",
"Resource": "arn:aws:es:us-west-2:iiiiiiiiiiii:domain/es-test/*"
},
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "arn:aws:es:us-west-2:iiiiiiiiiiii:domain/es-test/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"xxx.xxx.xx.xxx",
"yyy.yy.y.yyy"
]
}
}
}
]
Add the following prior to the Statement: "Version": "2012-10-17",
For your source IPs, have you specified a subnet mask like /32 or /24? It's required, per http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Conditions_IPAddress
Add a unique "Sid" to the first statement; you already have one for the second. The documentation says it's optional, but I have a working policy very close to yours except for these differences.
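Putting those three fixes together, the policy would look something like this sketch (placeholder account ID and IPs retained from the question; the /32 masks are an assumption for single hosts):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::iiiiiiiiiiii:role/firehose_delivery_role"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-west-2:iiiiiiiiiiii:domain/es-test/*"
    },
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-west-2:iiiiiiiiiiii:domain/es-test/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "xxx.xxx.xx.xxx/32",
            "yyy.yy.y.yyy/32"
          ]
        }
      }
    }
  ]
}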

Is it possible to combine IP-based and IAM-based access control policies?

I have an AWS ElasticSearch cluster to which I want to restrict access.
Ideally I want to use both IAM access (to allow our other service components to contact the cluster) and IP-based access (to allow ad-hoc testing via Sense from within our local network), but when I just tried to add both options to the access control policy it didn't allow access from the listed IP addresses.
Is it possible to combine the two policy styles like this? I assumed from "Statement": [] in the autogenerated JSON that it would be.
An anonymised MWE:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "PLACEHOLDER",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "PLACEHOLDER",
            "PLACEHOLDER",
            "PLACEHOLDER"
          ]
        }
      }
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "PLACEHOLDER"
      },
      "Action": "es:*",
      "Resource": "PLACEHOLDER"
    }
  ]
}
The IpAddress section had previously worked, but I have no easy way to test whether the IAM section worked, as we lost ad hoc access when trying it (as expected; the ad hoc testing is not done from inside the AWS account).