Unsettled Promises in AWS Lambda from SSM Parameters

I am using the Serverless Framework to deploy and program my AWS Lambda function. Now that the function is ready for production, I need to remove the sensitive keys from the code, so I decided to use AWS Systems Manager (SSM Parameter Store) to provide these keys in a secure manner. On deployment, however, I receive the following error message related to the use of these keys. I thought it might be something related to the IAM role that I manually associated with the Lambda, but I'm not sure what would be off with it.
Error:
Serverless Information ----------------------------------
##########################################################################################
# 47555: 0 of 2 promises have settled
# 47555: 2 unsettled promises:
# 47555: ssm:mg-production-domain~true waited on by: undefined
# 47555: ssm:mg-production-api-key~true waited on by: undefined
# This can result from latent connections but may represent a cyclic variable dependency
##########################################################################################
YAML:
provider:
  name: aws
  runtime: nodejs10.x
  stage: dev
  region: us-east-1
  environment:
    MG_PRODUCTION_DOMAIN: ${ssm:mg-production-domain~true}
    MG_PRODUCTION_API_KEY: ${ssm:mg-production-api-key~true}
Here is the IAM role policy I added to the Lambda, although I believe there is probably a better way to do this by adding the IAM role via the YAML file (a sketch of that approach follows the policy below):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ssm:DescribeParameters",
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "ssm:GetParameters",
      "Resource": "arn:aws:ssm:us-east-1:*account-id*:parameter/*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": "ssm:GetParameter",
      "Resource": "arn:aws:ssm:us-east-1:*account-id*:parameter/*"
    }
  ]
}
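For illustration, here is a minimal sketch of how the same permissions could be granted directly from serverless.yml via iamRoleStatements (Serverless Framework v1/v2 syntax), so the framework creates and attaches the execution role itself; the parameter ARNs are assumptions based on the parameter names above:

provider:
  name: aws
  runtime: nodejs10.x
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
        - ssm:GetParameters
      # Assumed ARNs, matching the ${ssm:...} references above
      Resource:
        - arn:aws:ssm:us-east-1:*:parameter/mg-production-domain
        - arn:aws:ssm:us-east-1:*:parameter/mg-production-api-key
    # ~true decrypts a SecureString, so kms:Decrypt on the key that
    # encrypted the parameters (aws/ssm by default) may also be needed
    - Effect: Allow
      Action: kms:Decrypt
      Resource: "*"

Note that ${ssm:...~true} variables are resolved at deploy time using the deploying user's credentials, not the Lambda's role, so the execution role only needs these permissions if the function also reads the parameters at runtime.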

Related

Why is AWS Lambda getting an "AccessDeniedExceptionKMS" error message?

I just deployed a Lambda (using Terraform from a GitLab runner) to a new AWS account. This pipeline deploys a Lambda to another (dev/test) account without issues, but when I try to deploy to my prod account, I get the following error:
I'm homing in on the statement, "The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access."
I have confirmed that the encryption config for the env vars is set to use the default aws/lambda key instead of a customer master key. That seems to contradict the language of the error, which refers to a customer master key...?
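One way to double-check which key the environment variables actually use is to inspect the function configuration; a sketch using the AWS CLI, with a placeholder function name:

aws lambda get-function-configuration --function-name my-function --query KMSKeyArn

A null result means the default aws/lambda key is in use, since KMSKeyArn is only populated for customer managed keys.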
The role assumed by the Lambda does have a policy that includes two KMS actions:
"Sid": "AWSKeyManagementService",
"Action": [
"kms:Decrypt",
"kms:DescribeKey"
]
By process of elimination, I wonder if the issue is something missing from the resource-based policy on the KMS key. Looking at the KMS keys, under "AWS managed", I find the aws/lambda key has the following key policy:
{
  "Version": "2012-10-17",
  "Id": "auto-awslambda",
  "Statement": [
    {
      "Sid": "Allow access through AWS Lambda for all principals in the account that are authorized to use AWS Lambda",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:CreateGrant",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "lambda.us-east-1.amazonaws.com",
          "kms:CallerAccount": "REDACTED" # <-- account where the Lambda is deployed
        }
      }
    },
    {
      "Sid": "Allow direct access to key metadata to the account",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::REDACTED:root" # <-- account where the Lambda is deployed
      },
      "Action": [
        "kms:Describe*",
        "kms:Get*",
        "kms:List*",
        "kms:RevokeGrant"
      ],
      "Resource": "*"
    }
  ]
}
This is very puzzling. Any pointers appreciated!
This was solved by simply deleting the Lambda and then re-running my pipeline to re-deploy it. All I can conclude is that something was corrupted in the first deployment.
Deleting and redeploying the Lambda also sorted it out for me, after many other attempts to fix this.

AWS Tutorial Code - Lambda Access Denied, incorrect server location showing in error message

I am very new to AWS and have only just started learning it. I am following AWS's full-stack tutorial; however, when I test module 4, my Lambda function is not authorized to perform dynamodb:PutItem. In the error message, I can see the ARN has us-east-1 in it, yet the ARN I passed into the JSON for the IAM policy is eu-west-2. I have set everything up on eu-west-2 servers.
Here is the JSON used in the IAM policy. I have replaced my ID with xxxxx, but it is the same as what's listed in the table details on the DynamoDB dashboard.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-2:xxxxxxxxx:table/HelloWorldDatabase/*"
    }
  ]
}
Is there anything I should be checking elsewhere that could be wrong?
EDIT:
Having changed the JSON based on the comments, it now looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListAndDescribe",
      "Effect": "Allow",
      "Action": [
        "dynamodb:List*",
        "dynamodb:DescribeReservedCapacity*",
        "dynamodb:DescribeLimits",
        "dynamodb:DescribeTimeToLive"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SpecificTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGet*",
        "dynamodb:DescribeStream",
        "dynamodb:DescribeTable",
        "dynamodb:Get*",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:BatchWrite*",
        "dynamodb:CreateTable",
        "dynamodb:Delete*",
        "dynamodb:Update*",
        "dynamodb:PutItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/HelloWorldDatabase"
    }
  ]
}
This is the full stack trace I am now getting:
Requested resource not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: S62KLPBAGKNLA66SSI77RC1AC7VV4KQNSO5AEMVJF66Q9ASUAAJG)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1799)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1383)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1359)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:796)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:5110)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:5077)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executePutItem(AmazonDynamoDBClient.java:2721)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.putItem(AmazonDynamoDBClient.java:2687)
at com.amazonaws.services.dynamodbv2.document.internal.PutItemImpl.doPutItem(PutItemImpl.java:85)
at com.amazonaws.services.dynamodbv2.document.internal.PutItemImpl.putItem(PutItemImpl.java:63)
at com.amazonaws.services.dynamodbv2.document.Table.putItem(Table.java:168)
at com.example.app.SavePersonHandler.persistData(SavePersonHandler.java:38)
at com.example.app.SavePersonHandler.handleRequest(SavePersonHandler.java:27)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
From DynamoDB, these are the table details:
Region EU (London)
Amazon Resource Name (ARN) arn:aws:dynamodb:eu-west-2:xxxxxxxxx:table/HelloWorldDatabase
The problem is the region name, in the third module step named "Create a WebApp With Amplify Console". Quoting from that step:
In a new browser window, log into the Amplify Console. NOTE: We will be using the Oregon (us-west-2) region for this tutorial.
Please use the "Amazon DynamoDB: Allows access to a specific table" policy example. The policy below shows how you might allow full access to the HelloWorldDatabase DynamoDB table; it grants the permissions necessary to complete this action from the AWS API or AWS CLI only.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListAndDescribe",
      "Effect": "Allow",
      "Action": [
        "dynamodb:List*",
        "dynamodb:DescribeReservedCapacity*",
        "dynamodb:DescribeLimits",
        "dynamodb:DescribeTimeToLive"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SpecificTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGet*",
        "dynamodb:DescribeStream",
        "dynamodb:DescribeTable",
        "dynamodb:Get*",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:BatchWrite*",
        "dynamodb:CreateTable",
        "dynamodb:Delete*",
        "dynamodb:Update*",
        "dynamodb:PutItem"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-2:xxxxxx:table/HelloWorldDatabase"
    }
  ]
}
If you want to learn how to build Lambda functions that interact with AWS services, such as Amazon DynamoDB, you can use the Lambda Java runtime API. This gives you full control over exactly what you want the Lambda function to perform.
To interact with the AWS Services, you have to use an IAM role (as discussed in this tutorial). For example, to use DynamoDB, the IAM role has to have a policy that lets it use Amazon DynamoDB.
All of these concepts are covered in this API development tutorial. In addition, this tutorial shows you how to schedule the Lambda function using scheduled events:
Creating scheduled events to invoke Lambda functions
I met the same issue. To solve it, use only the us-east-1 region throughout the tutorial; the JAR file seems to hard-code the server address.
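The tutorial ships a prebuilt JAR, but if you write the handler yourself (as the Java runtime API answer above suggests), the code-level fix is to pin the SDK client to the table's region instead of relying on the runtime's default. A minimal sketch in Python for illustration, with an assumed item shape:

import boto3

# Create the client in the table's region explicitly (eu-west-2 here)
# instead of relying on the Lambda runtime's default region.
dynamodb = boto3.resource("dynamodb", region_name="eu-west-2")
table = dynamodb.Table("HelloWorldDatabase")

def handler(event, context):
    # Assumed item shape, for illustration only
    table.put_item(Item={"id": event["id"], "name": event["name"]})
    return {"statusCode": 200}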

AWS SageMaker + AWS Lambda

I am trying to use AWS SageMaker, following the documentation. I successfully loaded the data, then trained and deployed the model.
My next step is to use AWS Lambda and connect it to this SageMaker endpoint.
I saw that I need to give the Lambda IAM execution role permission to invoke a model endpoint.
I added this to the IAM policy JSON, and it now looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:us-east-1:<my-account>:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:us-east-1:<my-account>:log-group:/aws/lambda/test-sagemaker:*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "sagemaker:InvokeEndpoint",
      "Resource": "*"
    }
  ]
}
The problem is that even with a role that has permission to invoke the SageMaker endpoint, my Lambda function didn't see it:
An error occurred (ValidationError) when calling the InvokeEndpoint operation: Endpoint xgboost-2020-10-02-12-15-36-097 of account <my-account> not found.: ValidationError
I found the error myself. The problem was different regions: for training and deploying the model I used us-east-2, but for the Lambda I used us-east-1. Creating everything in the same region fixed this issue!
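For anyone wiring this up, a minimal sketch of the Lambda handler in Python, assuming a Python runtime and using the endpoint name from the error above; pinning region_name rules out the cross-region mistake:

import boto3

# Pin the client to the region where the endpoint lives (us-east-2 here);
# otherwise boto3 uses the Lambda's own region and the endpoint is "not found".
runtime = boto3.client("sagemaker-runtime", region_name="us-east-2")

def handler(event, context):
    response = runtime.invoke_endpoint(
        EndpointName="xgboost-2020-10-02-12-15-36-097",
        ContentType="text/csv",  # assumed input format for the XGBoost model
        Body=event["body"],
    )
    return response["Body"].read().decode("utf-8")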

AWS CloudWatch and EC2 Role + Policy

I've followed a great tutorial by Martin Thwaites outlining the process of logging to AWS CloudWatch using Serilog and .NET Core.
I've got the logging portion working well to text and console, but I just can't figure out the best way to authenticate to AWS CloudWatch from my application. He talks about built-in AWS authentication by setting up an IAM policy, which is great, and supplies the JSON to do so, but I feel like something is missing. I've created the IAM policy as per the example, with a LogGroup matching my appsettings.json, but nothing comes through on the CloudWatch screen.
My application is hosted on an EC2 instance. Are there more straightforward ways to authenticate, and/or is there a step missing where the EC2 and CloudWatch services are "joined" together?
More Info:
Policy EC2CloudWatch attached to role EC2Role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        ALL EC2 READ ACTIONS HERE
      ],
      "Resource": "*"
    },
    {
      "Sid": "LogStreams",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:cloudwatch-analytics-staging:log-stream:*"
    },
    {
      "Sid": "LogGroups",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:cloudwatch-analytics-staging"
    }
  ]
}
In order to effectively apply the permissions, you need to assign the role to the EC2 instance.
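If the instance was launched without the role, it can be attached in place; a sketch using the AWS CLI, with a placeholder instance ID:

aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=EC2Role

Once the role is attached, the AWS SDK in the application picks up the role's credentials automatically through the instance metadata service, with no access keys needed in appsettings.json.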

AWS CloudWatch Cross-Account Logging with EC2 Instance Profile

When I originally setup CloudWatch, I created an EC2 Instance Profile to automatically grant access to write to the account's own CloudWatch service. Now, I would like to consolidate the logs from several accounts into a central account.
I'd like to implement a simplified architecture that is based on Centralized Logging on AWS. However, these logs will feed an on-premise ELK stack, so I'm only trying to implement the components outlined in red. I would like to solve this without the use of Kinesis.
Either the CloudWatch Agent (CWAgent) doesn't support assuming a role, or I can't wrap my mind around how to craft the EC2 Instance Profile to allow the CWAgent to assume a role in a different account.
Logging Target (AWS Account 111111111111)
IAM LogStreamerRole:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::999999999999:role/EC2CloudWatchLoggerRole"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}
Logging Source (AWS Account 999999999999)
IAM Instance Profile Role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::111111111111:role/LogStreamerRole"
    }
  ]
}
The CWAgent is producing the following error:
/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log:
2018-02-12T23:27:43Z E! CreateLogStream / CreateLogGroup with log group name Linux/var/log/messages stream name i-123456789abcdef has errors. Will retry the request: AccessDeniedException: User: arn:aws:sts::999999999999:assumed-role/EC2CloudWatchLoggerRole/i-123456789abcdef is not authorized to perform: logs:CreateLogStream on resource: arn:aws:logs:us-west-2:999999999999:log-group:Linux/var/log/messages:log-stream:i-123456789abcdef
status code: 400, request id: 53271811-1234-11e8-afe1-a3c56071215e
It is still trying to write to its own CloudWatch service, instead of to the central CloudWatch service.
From the logs, I see that the instance profile is used.
arn:aws:sts::999999999999:assumed-role/EC2CloudWatchLoggerRole/i-123456789abcdef
Just add the following to /etc/awslogs/awscli.conf to have the agent assume the LogStreamerRole role:
role_arn = arn:aws:iam::111111111111:role/LogStreamerRole
credential_source = Ec2InstanceMetadata
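In context, a sketch of the resulting /etc/awslogs/awscli.conf, assuming the stock awslogs layout and the us-west-2 region from the error above:

[plugins]
cwlogs = cwlogs
[default]
region = us-west-2
role_arn = arn:aws:iam::111111111111:role/LogStreamerRole
credential_source = Ec2InstanceMetadata

With role_arn set, the agent's credentials chain uses the instance profile only to assume LogStreamerRole in account 111111111111, so the log streams land in the central account instead of the source account.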