Why is AWS Lambda getting an "AccessDeniedException" KMS error message? - amazon-web-services

I just deployed a Lambda function (using Terraform from a GitLab runner) to a new AWS account. This pipeline deploys the same Lambda to another (dev/test) account without issues, but when I try to deploy to my prod account, I get a KMS AccessDeniedException.
I'm homing in on the statement, "The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access."
I have confirmed that the encryption config for the env vars is set to use the default aws/lambda key rather than a customer master key. That seems to contradict the language of the error, which refers to a customer master key...?
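A quick way to double-check which key the function is actually configured with is to read its configuration through the API. A minimal boto3 sketch (the function name and region are placeholders; KMSKeyArn is only returned when a customer managed key is configured):

import boto3

# Placeholder function name; replace with the Lambda deployed by the pipeline.
lambda_client = boto3.client("lambda", region_name="us-east-1")
cfg = lambda_client.get_function_configuration(FunctionName="my-prod-function")

# KMSKeyArn is absent when the function uses the AWS managed aws/lambda key.
print("KMSKeyArn:", cfg.get("KMSKeyArn", "<default aws/lambda key>"))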
The role assumed by the Lambda does have a policy which includes two KMS actions:
"Sid": "AWSKeyManagementService",
"Action": [
"kms:Decrypt",
"kms:DescribeKey"
]
By process of elimination, I wonder if the issue is with the resource-based policy on the KMS key itself. Looking in the KMS console under AWS managed keys, I find the aws/lambda key has the following key policy:
{
    "Version": "2012-10-17",
    "Id": "auto-awslambda",
    "Statement": [
        {
            "Sid": "Allow access through AWS Lambda for all principals in the account that are authorized to use AWS Lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:CreateGrant",
                "kms:DescribeKey"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "kms:ViaService": "lambda.us-east-1.amazonaws.com",
                    "kms:CallerAccount": "REDACTED" #<-- account where the Lambda is deployed
                }
            }
        },
        {
            "Sid": "Allow direct access to key metadata to the account",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::REDACTED:root" #<-- account where the Lambda is deployed
            },
            "Action": [
                "kms:Describe*",
                "kms:Get*",
                "kms:List*",
                "kms:RevokeGrant"
            ],
            "Resource": "*"
        }
    ]
}
This is very puzzling. Any pointers appreciated!
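For reference, the key policy above can also be pulled programmatically rather than through the console; a minimal boto3 sketch (assumes the AWS managed alias alias/aws/lambda exists in the account and region):

import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Resolve the AWS managed alias to its key ID, then fetch the key policy.
aliases = kms.list_aliases()["Aliases"]
key_id = next(a["TargetKeyId"] for a in aliases if a["AliasName"] == "alias/aws/lambda")

# AWS managed keys have a single key policy named "default".
policy = kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"]
print(policy)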

This was solved by simply deleting the lambda and then re-running my pipeline to re-deploy it. All I can conclude is that something was corrupted in the first deployment.

Deleting and redeploying the Lambda also sorted it out for me, after many other attempts to fix this.

Related

Getting "Insufficient permissions to list objects" error with S3 bucket policy

I set up a bucket policy to allow two external users, arn:aws:iam::123456789012:user/user1 and arn:aws:iam::123456789012:user/user2, to access everything under a particular path in our S3 bucket, s3://my-bucket-name/path/. But the users are getting the following error when trying to access the path in the AWS console:
Insufficient permissions to list objects
After you or your AWS administrator have updated your permissions to allow the s3:ListBucket action, refresh the page. Learn more about identity and access management in Amazon S3.
Here's the policy document. What am I missing here?
{
    "Version": "2012-10-17",
    "Id": "allowAccessToBucketPath",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123456789012:user/user1",
                    "arn:aws:iam::123456789012:user/user2"
                ]
            },
            "Action": [
                "s3:PutObject",
                "s3:List*",
                "s3:Get*"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket-name/path/*",
                "arn:aws:s3:::my-bucket-name/path"
            ]
        },
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123456789012:user/user1",
                    "arn:aws:iam::123456789012:user/user2"
                ]
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket-name",
            "Condition": {
                "StringLike": {
                    "s3:prefix": "path/*"
                }
            }
        }
    ]
}
I would check if you have any ACLs enabled for your bucket. In your bucket settings, check if Object Ownership is set to "ACLs enabled", in which case I would suggest you change it to "ACLs disabled".
If that doesn't work, I would suggest using the IAM Access Analyzer to help troubleshoot -- if the Access Analyzer says that your policy does in fact allow the access you want, then that would indicate that this policy is correctly defined, and you have other configurations on your bucket preventing the access.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-analyzer.html
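If you prefer to check the Object Ownership setting without the console, a minimal boto3 sketch (the bucket name is a placeholder; "BucketOwnerEnforced" corresponds to "ACLs disabled"):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-bucket-name"  # placeholder

try:
    rules = s3.get_bucket_ownership_controls(Bucket=bucket)["OwnershipControls"]["Rules"]
    # "BucketOwnerEnforced" means ACLs are disabled; other values mean ACLs still apply.
    print([r["ObjectOwnership"] for r in rules])
except ClientError as err:
    print("No ownership controls configured:", err)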

Attribute Based Access Control issue for AWS Lambda with IAM policy

I am trying to follow this article for Secrets Manager and tried applying attribute-based access control (ABAC) for AWS Lambda using the following user-role-policy linkage:
Create an IAM user
Assign a role to this IAM user
The role is assigned an ABAC policy for Lambda.
Currently my ABAC policy for Lambda usage for different users in a project is as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LambdaPolicyForProject",
            "Effect": "Allow",
            "Action": [
                "cloudformation:DescribeStacks",
                "cloudformation:ListStackResources",
                "cloudwatch:GetMetricData",
                "cloudwatch:ListMetrics",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcs",
                "kms:ListAliases",
                "iam:GetPolicy",
                "iam:GetPolicyVersion",
                "iam:GetRole",
                "iam:GetRolePolicy",
                "iam:ListAttachedRolePolicies",
                "iam:ListRolePolicies",
                "iam:ListRoles",
                "logs:DescribeLogGroups",
                "lambda:Get*",
                "lambda:List*",
                "states:DescribeStateMachine",
                "states:ListStateMachines",
                "tag:GetResources",
                "xray:GetTraceSummaries",
                "xray:BatchGetTraces"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/accessproject": "${aws:PrincipalTag/accessproject}",
                    "aws:ResourceTag/accessteam": "${aws:PrincipalTag/accessteam}",
                    "aws:ResourceTag/costcenter": "${aws:PrincipalTag/costcenter}"
                }
            }
        }
    ]
}
This does not work for a user even when the costcenter, accessteam, and accessproject tags are identical on both the IAM user and the Lambda.
However, it works when I remove the condition in the above policy (which shows the IAM user is able to access Lambda through this policy).
Can I know what I am missing from the tutorial above? I did cross-check all tags for the Lambda, policies, and IAM users, and they are the same as per the docs.
The issue seems to be in the Actions you defined. According to the tutorial you followed:
[...] see Actions, Resources, and Condition Keys for AWS Secrets Manager. That page shows that actions performed on the Secret resource type support the secretsmanager:ResourceTag/tag-key condition key. Some Secrets Manager actions don't support that resource type, including GetRandomPassword and ListSecrets.
Have a look at actions, resources, and condition keys for AWS services, and for each service make sure the action supports the aws:ResourceTag/${TagKey} condition. I didn't go through all the permissions, but already the CloudWatch actions GetMetricData and ListMetrics do not support the aws:ResourceTag/${TagKey} condition. The same goes for ec2:DescribeSecurityGroups, ec2:DescribeSubnets, ec2:DescribeVpcs, and probably a few more.
You must create additional statements to allow those actions, e.g.:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LambdaPolicyForProject",
            "Effect": "Allow",
            "Action": [
                "cloudformation:DescribeStacks",
                "cloudformation:ListStackResources",
                "kms:ListAliases",
                "iam:GetPolicy",
                "iam:GetPolicyVersion",
                "iam:GetRole",
                "iam:GetRolePolicy",
                "iam:ListAttachedRolePolicies",
                "iam:ListRolePolicies",
                "iam:ListRoles",
                "logs:DescribeLogGroups",
                "lambda:Get*",
                "lambda:List*",
                "states:DescribeStateMachine",
                "states:ListStateMachines",
                "tag:GetResources",
                "xray:GetTraceSummaries",
                "xray:BatchGetTraces"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/accessproject": "${aws:PrincipalTag/accessproject}",
                    "aws:ResourceTag/accessteam": "${aws:PrincipalTag/accessteam}",
                    "aws:ResourceTag/costcenter": "${aws:PrincipalTag/costcenter}"
                }
            }
        },
        {
            "Sid": "LambdaPolicyForProjectNoTags",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:ListMetrics",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcs"
            ],
            "Resource": "*"
        }
    ]
}
Once you have a working policy, please familiarize yourself with the IAM best practices, as the use of wildcard resource access should be avoided whenever possible (principle of granting least privilege).
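One way to see which of those actions actually honor the tag condition for a given principal is the IAM policy simulator; a minimal boto3 sketch (the user ARN, action list, and tag value are placeholders):

import boto3

iam = boto3.client("iam")

# Placeholder principal: the IAM user or role carrying the ABAC policy.
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:user/project-dev",
    ActionNames=["lambda:GetFunction", "cloudwatch:ListMetrics", "ec2:DescribeVpcs"],
    ContextEntries=[{
        "ContextKeyName": "aws:ResourceTag/accessproject",
        "ContextKeyType": "string",
        "ContextKeyValues": ["myproject"],  # placeholder tag value
    }],
)

for r in result["EvaluationResults"]:
    # EvalDecision is "allowed", "implicitDeny", or "explicitDeny" per action.
    print(r["EvalActionName"], r["EvalDecision"])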

IAM permission for EC2 Data Lifecycle Manager is not working

I have created an IAM user in my AWS account. The IAM user requires permission to access Amazon Data Lifecycle Manager. I have given the following permissions to the IAM user:
AmazonEC2FullAccess,
AWSDataLifecycleManagerServiceRole
and AWSDataLifecycleManagerServiceRoleForAMIManagement.
But when I try to access Amazon Data Lifecycle Manager with this IAM user account, I get the following statement on the Lifecycle Manager page:
It is taking a bit longer than usual to fetch your data.
(The page keeps on loading for a long period of time.)
This message doesn't appear when I access the same page with the same IAM user, but this time with AdministratorAccess.
Can somebody please let me know what's going wrong here? I want to grant limited permissions for my IAM user to manage my AWS resources.
The policies that you mentioned do not include permissions to access Data Lifecycle Manager.
This is a separate service that is not included in EC2 (which is why AmazonEC2FullAccess does not give you permissions). Additionally, AWSDataLifecycleManagerServiceRole and AWSDataLifecycleManagerServiceRoleForAMIManagement are managed policies that allow AWS Data Lifecycle Manager itself to take actions on AWS resources, so these policies should not be applied to IAM users.
You need to create a custom IAM policy with the proper permissions. For read-only access you can use this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DataLifecycleManagerRead",
            "Effect": "Allow",
            "Action": [
                "dlm:Get*",
                "dlm:List*"
            ],
            "Resource": "*"
        }
    ]
}
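To confirm the read-only policy from the IAM user's side, a minimal boto3 sketch (the profile name dlm-reader and the region are placeholders for wherever that user's credentials live):

import boto3

# Placeholder profile holding the IAM user's credentials.
session = boto3.Session(profile_name="dlm-reader")
dlm = session.client("dlm", region_name="us-east-1")

# Succeeds with dlm:Get*/dlm:List*; an AccessDeniedException here means
# the custom policy is not attached (or has not propagated yet).
policies = dlm.get_lifecycle_policies()
for p in policies["Policies"]:
    print(p["PolicyId"], p["State"], p.get("Description", ""))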
UPDATE
To create lifecycle policies through the web console, some additional permissions are required, because the console shows extra information to help during the creation process. So, in order to have enough permissions to create policies via the console, use this (some of these permissions are referenced in the documentation, but that list seems to be incomplete):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dlm:*",
                "iam:GetRole",
                "ec2:DescribeTags",
                "iam:ListRoles",
                "iam:PassRole",
                "iam:CreateRole",
                "iam:AttachRolePolicy"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:CreateSnapshots",
                "ec2:DeleteSnapshot",
                "ec2:DescribeInstances",
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:EnableFastSnapshotRestores",
                "ec2:DescribeFastSnapshotRestores",
                "ec2:DisableFastSnapshotRestores",
                "ec2:CopySnapshot",
                "ec2:ModifySnapshotAttribute",
                "ec2:DescribeSnapshotAttribute"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags"
            ],
            "Resource": "arn:aws:ec2:*::snapshot/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "events:PutRule",
                "events:DeleteRule",
                "events:DescribeRule",
                "events:EnableRule",
                "events:DisableRule",
                "events:ListTargetsByRule",
                "events:PutTargets",
                "events:RemoveTargets"
            ],
            "Resource": "arn:aws:events:*:*:rule/AwsDataLifecycleRule.managed-cwe.*"
        }
    ]
}
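For comparison, creating a lifecycle policy through the API mainly exercises the dlm:* and iam:PassRole permissions from the first statement above; a minimal boto3 sketch (the execution role ARN, tag, and schedule values are placeholders):

import boto3

dlm = boto3.client("dlm", region_name="us-east-1")

# Placeholder execution role: the role DLM itself assumes to create snapshots.
response = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots for tagged volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],  # placeholder tag
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
        }],
    },
)
print(response["PolicyId"])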

Error connecting to AWS Transfer (SFTP service) via Filezilla [duplicate]

I am having trouble connecting to AWS Transfer for SFTP. I successfully set up a server and tried to connect using WinSCP.
I set up an IAM role with a trust relationship as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "transfer.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
I paired this with a scope-down policy, as described in the documentation, using a home bucket of homebucket and a home directory of homedir:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListHomeDir",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketAcl"
            ],
            "Resource": "arn:aws:s3:::${transfer:HomeBucket}"
        },
        {
            "Sid": "AWSTransferRequirements",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": "*"
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObjectVersion",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectTagging",
                "s3:PutObjectTagging",
                "s3:PutObjectAcl",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
        }
    ]
}
I was able to authenticate using an ssh key, but when it came to actually reading/writing files I just kept getting opaque errors like "Error looking up homedir" and failed "readdir". This all smells very much like problems with my IAM policy but I haven't been able to figure it out.
We had similar issues getting the scope-down policy to work with our users on AWS Transfer. The solution that worked for us was creating two different kinds of policies:
A policy to attach to the role, which has general rights on the whole bucket.
A scope-down policy to apply to the user, which makes use of the transfer service variables like ${transfer:UserName}.
We concluded that maybe only the extra attached policy is able to resolve the transfer service variables. We are not sure if this is correct or if this is the best solution, because it opens up the risk that forgetting to attach the scope-down policy creates a kind of "admin" user. So I'd be glad to get input to further lock this down a little bit.
Here are our two policies we use:
General policy to attach to IAM role
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfUserFolder",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::my-s3-bucket"
            ]
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObjectVersion",
                "s3:DeleteObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::my-s3-bucket/*"
        }
    ]
}
Scope down policy to apply to transfer user
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfUserFolder",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::${transfer:HomeBucket}"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "${transfer:UserName}/*",
                        "${transfer:UserName}"
                    ]
                }
            }
        },
        {
            "Sid": "AWSTransferRequirements",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": "*"
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObjectVersion",
                "s3:DeleteObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
        }
    ]
}
I had a similar problem but with a different error behavior. I managed to log in successfully, but then the connection was almost immediately closed.
I did the following things:
Make sure that the IAM role that allows bucket access also contains KMS access if your bucket is encrypted.
Make sure that the trust relationship is also part of that role.
Make sure that the server itself has a CloudWatch role, also with a trust relationship to transfer.amazonaws.com! This was the solution for me. I don't get why this is needed, but without the trust relationship in the CloudWatch role, my connection gets closed.
I hope that helps.
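To check that last point without the console, a minimal boto3 sketch that reads the server's logging role and prints its trust policy (the server ID is a placeholder):

import boto3

transfer = boto3.client("transfer")
iam = boto3.client("iam")

# Placeholder server ID from the AWS Transfer console.
server = transfer.describe_server(ServerId="s-0123456789abcdef0")["Server"]
logging_role_arn = server.get("LoggingRole")
print("LoggingRole:", logging_role_arn)

if logging_role_arn:
    role_name = logging_role_arn.split("/")[-1]
    trust = iam.get_role(RoleName=role_name)["Role"]["AssumeRolePolicyDocument"]
    # The trust policy must allow transfer.amazonaws.com to assume the role.
    print(trust)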
The bucket policy for the IAM user role can look like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<your bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<your bucket>/*"
            ]
        }
    ]
}
Finally, also add a Trust Relationship as shown above for the user IAM role.
If you can connect to your SFTP server but then get a readdir error when trying to list contents, e.g. with the command "ls", then that's a sign that you have no bucket permission. If your connection gets closed right away, it seems to be a trust relationship issue or a KMS issue.
According to the somewhat cryptic documentation, @limfinity was correct. To scope down access you need a general role/policy combination granting access to see the bucket. This role gets applied to the SFTP user you create. In addition, you need a custom policy which grants CRUD rights only to the user's bucket. The custom policy is also applied to the SFTP user.
From page 24 of this doc... https://docs.aws.amazon.com/transfer/latest/userguide/sftp.ug.pdf#page=28&zoom=100,0,776
To create a scope-down policy, use the following policy variables in your IAM policy:
• ${transfer:HomeBucket}
• ${transfer:HomeDirectory}
• ${transfer:HomeFolder}
• ${transfer:UserName}
Note
You can't use the variables listed preceding as policy variables in an IAM role definition. You create these variables in an IAM policy and supply them directly when setting up your user. Also, you can't use the ${aws:Username} variable in this scope-down policy. This variable refers to an IAM user name and not the user name required by AWS SFTP.
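In other words, the policy containing those transfer:* variables is supplied directly on the SFTP user rather than baked into the role; a minimal boto3 sketch of doing that with create_user (server ID, user name, role ARN, home directory, and SSH key are placeholders):

import json
import boto3

transfer = boto3.client("transfer")

# Scope-down policy using the transfer:* variables; passed inline on the user.
scope_down = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListHomeDir",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
            "Condition": {"StringLike": {"s3:prefix": ["${transfer:UserName}/*", "${transfer:UserName}"]}},
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*",
        },
    ],
}

transfer.create_user(
    ServerId="s-0123456789abcdef0",                    # placeholder
    UserName="alice",                                  # placeholder
    Role="arn:aws:iam::123456789012:role/sftp-access", # role carrying the general bucket policy
    HomeDirectory="/my-s3-bucket/alice",               # placeholder bucket/prefix
    Policy=json.dumps(scope_down),
    SshPublicKeyBody="ssh-rsa AAAA... user@example",   # placeholder key
)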
Can't comment, sorry if I'm posting incorrectly.
Careful with AWS's default policy!
This solution did work for me in that I was able to use scope-down policies for SFTP users as expected. However, there's a catch:
{
    "Sid": "AWSTransferRequirements",
    "Effect": "Allow",
    "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
    ],
    "Resource": "*"
},
This section of the policy will enable SFTP users using this policy to change directory to root and list all of your account's buckets. They won't have access to read or write, but they can discover stuff which is probably unnecessary. I can confirm that changing the above to:
{
    "Sid": "AWSTransferRequirements",
    "Effect": "Allow",
    "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
    ],
    "Resource": "${transfer:HomeBucket}"
},
... appears to prevent SFTP users from listing buckets. However, they can still cd into buckets if they happen to know which ones exist; again, they don't have read/write access, but this is still unnecessary access. I'm probably missing something to prevent this in my policy.
Proper jailing appears to be a backlog topic: https://forums.aws.amazon.com/thread.jspa?threadID=297509&tstart=0
We were using the updated version of SFTP with username and password and had to spend quite some time figuring out all the details. For the new version, the scope-down policy needs to be specified as a 'Policy' key within Secrets Manager. This is very important for the whole flow to work.
We have documented the full setup on our site here - https://coderise.io/sftp-on-aws-with-username-and-password/
Hope that helps!

Permissions to access ElasticSearch from Lambda?

I'm trying to use Elasticsearch for data storage for a Lambda function connected to Alexa Skills Kit. The Lambda works alright without Elasticsearch but ES provides much-needed fuzzy matching.
The only way I've been able to access it from Lambda is by enabling Elasticsearch global access, but that's a really bad idea. I've also been able to access it from my computer via an open access policy or an IP address policy. Is there a way to do read-only access via Lambda and read-write via IP?
On IAM I granted my Lambda role AmazonESReadOnlyAccess. On the ES side I tried this but it only worked for IP address:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::NUMBER:root",
                    "arn:aws:iam::NUMBER:role/lambda_basic_execution"
                ]
            },
            "Action": "es:*",
            "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*"
        },
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "es:*",
            "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "MY IP"
                }
            }
        }
    ]
}
This forum post asks the same question but went unanswered.
The only way I know of to do this is to use a resource-based policy or an IAM-based policy on your ES domain. This would restrict access to a particular IAM user or role. However, to make this work you also need to sign your requests to ES using SigV4.
There are libraries that will do this signing for you, for example this one extends the popular Python requests library to sign ElasticSearch requests via SigV4. I believe similar libraries exist for other languages.
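As an illustration, a minimal sketch using the requests-aws4auth package to sign requests with the caller's current credentials (the domain endpoint and index name are placeholders):

import boto3
import requests
from requests_aws4auth import AWS4Auth

# Pick up credentials from the environment / Lambda execution role.
session = boto3.Session()
creds = session.get_credentials()
awsauth = AWS4Auth(
    creds.access_key,
    creds.secret_key,
    session.region_name or "us-east-1",
    "es",                       # service name for Amazon Elasticsearch Service
    session_token=creds.token,  # needed for temporary (role) credentials
)

# Placeholder domain endpoint and index.
endpoint = "https://search-mydomain-abc123.us-east-1.es.amazonaws.com"
resp = requests.get(f"{endpoint}/my-index/_search", auth=awsauth, params={"q": "alexa"})
print(resp.status_code, resp.json())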
Now it's possible from your code with elasticsearch.js. Before you try it, you must install the http-aws-es module.
const AWS = require('aws-sdk');
const httpAwsEs = require('http-aws-es');
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({
    host: 'YOUR_ES_HOST',
    connectionClass: httpAwsEs,
    amazonES: {
        region: 'YOUR_ES_REGION',
        credentials: new AWS.EnvironmentCredentials('AWS')
    }
});

// client.search({...})
Of course, before using it, configure access to elasticsearch domain:
For external (outside AWS) access to your Elasticsearch cluster, you want to create the cluster with an IP-based access policy. Something like the below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "es:*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "<<IP/CIDR>>"
                    ]
                }
            },
            "Resource": "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
        }
    ]
}
For your Lambda function, create the role that the Lambda function will assume with the below policy snippet.
{
    "Sid": "",
    "Effect": "Allow",
    "Action": [
        "es:DescribeElasticsearchDomain",
        "es:DescribeElasticsearchDomains",
        "es:DescribeElasticsearchDomainConfig",
        "es:ESHttpPost",
        "es:ESHttpPut"
    ],
    "Resource": [
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
    ]
},
{
    "Sid": "",
    "Effect": "Allow",
    "Action": [
        "es:ESHttpGet"
    ],
    "Resource": [
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_all/_settings",
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_cluster/stats",
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_mapping/<<TYPE>>",
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes",
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/stats",
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/*/stats",
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_stats",
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_stats"
    ]
}
I think you could more easily condense the above two policy statements into the following:
{
    "Sid": "",
    "Effect": "Allow",
    "Action": [
        "es:DescribeElasticsearchDomain",
        "es:DescribeElasticsearchDomains",
        "es:DescribeElasticsearchDomainConfig",
        "es:ESHttpPost",
        "es:ESHttpGet",
        "es:ESHttpPut"
    ],
    "Resource": [
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
        "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
    ]
}
I managed to piece the above together from the following sources:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
How to access Kibana from Amazon elasticsearch service?
https://forums.aws.amazon.com/thread.jspa?threadID=217149
You need to go to the access policy of Lambda and provide the AWS ARN to connect
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html#es-aws-integrations-s3-lambda-es-authorizations
AWS Lambda runs on public EC2 instances, so simply adding a whitelist of IP addresses to the Elasticsearch access policy will not work. One way to do this is to give the Lambda execution role appropriate permissions to the Elasticsearch domain. Make sure that the Lambda execution role has permissions to the ES domain, and that the ES domain access policy has a statement which allows this Lambda role ARN to perform the appropriate actions. Once this is done, all you would have to do is sign your requests via SigV4 while accessing the ES endpoint. Hope that helps!