Cognito User Pool Lambda Trigger permission

I'm using Terraform to create a Cognito User Pool. I'd like to use a Lambda function to send a custom message when a user signs up. When I attempt to sign up on the client, I get an error saying "CustomMessage invocation failed due to error AccessDeniedException." I've used Lambda permissions before, but I can't find any examples of this configuration. How do I give the Lambda function permission? The following is my current configuration.
resource "aws_cognito_user_pool" "main" {
name = "${var.user_pool_name}_${var.stage}"
username_attributes = [ "email" ]
schema {
attribute_data_type = "String"
mutable = true
name = "name"
required = true
}
schema {
attribute_data_type = "String"
mutable = true
name = "email"
required = true
}
password_policy {
minimum_length = "8"
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
}
mfa_configuration = "OFF"
lambda_config {
custom_message = aws_lambda_function.custom_message.arn
post_confirmation = aws_lambda_function.post_confirmation.arn
}
}
...
resource "aws_lambda_permission" "get_blog" {
statement_id = "AllowExecutionFromCognito"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.custom_message.function_name
principal = "cognito-idp.amazonaws.com"
source_arn = "${aws_cognito_user_pool.main.arn}/*/*"
depends_on = [ aws_lambda_function.custom_message ]
}
...
resource "aws_lambda_function" "custom_message" {
filename = "${var.custom_message_path}/${var.custom_message_file_name}.zip"
function_name = var.custom_message_file_name
role = aws_iam_role.custom_message.arn
handler = "${var.custom_message_file_name}.handler"
source_code_hash = filebase64sha256("${var.custom_message_path}/${var.custom_message_file_name}.zip")
runtime = "nodejs12.x"
timeout = 10
layers = [ var.node_layer_arn ]
environment {
variables = {
TABLE_NAME = var.table_name
RESOURCENAME = "blogAuthCustomMessage"
REGION = "us-west-2"
}
}
tags = {
Name = var.developer
}
depends_on = [
data.archive_file.custom_message,
]
}

Based on OP's feedback in the comment section, changing the source_arn property in aws_lambda_permission.get_blog to aws_cognito_user_pool.main.arn works: Cognito expects the user pool ARN itself, not an API Gateway-style wildcard path.
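For reference, the working permission resource would look like this (a minimal sketch reusing the resource names from the question):
resource "aws_lambda_permission" "get_blog" {
  statement_id  = "AllowExecutionFromCognito"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.custom_message.function_name
  principal     = "cognito-idp.amazonaws.com"

  # The user pool ARN itself, without the "/*/*" wildcard suffix
  source_arn = aws_cognito_user_pool.main.arn
}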

Related

Terraform For Using AWS Lambda With Amazon Lex

I have been having some trouble trying to get the fulfillment_activity code hook to work so that I can use Lambda functions for the backend. For some reason, I am getting this error message from Terraform.
Error: error waiting for Lex Bot (helloBot) create: unexpected state 'FAILED', wanted target 'NOT_BUILT, READY, READY_BASIC_TESTING'. last error: Intent 'sample_intent' has an invalid message version defined for its fulfillment.
Here is my Terraform config:
# AWS Lex Bot
resource "aws_lex_bot" "helloBot" {
  depends_on                  = [aws_lex_intent.sample_intent]
  locale                      = "en-US"
  name                        = "helloBot"
  process_behavior            = "BUILD"
  voice_id                    = "Salli"
  create_version              = true
  idle_session_ttl_in_seconds = 300
  child_directed              = false

  abort_statement {
    message {
      content      = "Abort Abort!"
      content_type = "PlainText"
    }
  }

  clarification_prompt {
    max_attempts = 2
    message {
      content      = "No Idea What You're Saying!"
      content_type = "PlainText"
    }
  }

  intent {
    intent_name    = "sampleIntentName"
    intent_version = aws_lex_intent.sample_intent.version
  }
}
resource "aws_lambda_permission" "lex_sample_intent_lambda" {
statement_id = "AllowExecutionFromAmazonLex"
action = "lambda:InvokeFunction"
function_name = "someLambdaFunctionName"
principal = "lex.amazonaws.com"
# https://docs.aws.amazon.com/lex/latest/dg/gs-cli-update-lambda.html
source_arn = "arn:aws:lex:myRegion:accountId:intent:sampleIntentName:*"
}
# AWS Lex Intents
data "aws_lambda_function" "existing" {
function_name = "someLambdaFunctionName"
qualifier = "dev"
}
resource "aws_lex_intent" "sample_intent" {
create_version = true
name = "sampleIntentName"
fulfillment_activity {
type = "CodeHook"
code_hook {
message_version = "1.0"
uri = data.aws_lambda_function.existing.qualified_arn
}
}
sample_utterances = [
"hi",
"hello"
]
}
I looked at the CLI documentation, and it appears that we are supposed to use "1.0" for the message version.
The Terraform configuration itself is correct. The problem was the data type of the message version when it was supplied as a variable: it was incorrectly declared as a number instead of a string.
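In other words, if the message version comes in through a variable, declaring it as a string avoids the error. A minimal sketch (the variable name is hypothetical):
variable "message_version" {
  type    = string # a number here triggers the "invalid message version" error
  default = "1.0"
}

resource "aws_lex_intent" "sample_intent" {
  create_version = true
  name           = "sampleIntentName"

  fulfillment_activity {
    type = "CodeHook"
    code_hook {
      message_version = var.message_version
      uri             = data.aws_lambda_function.existing.qualified_arn
    }
  }
}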

Terraform error while creating AWS Cognito User Pool

Below is my Terraform code to create an AWS Cognito user pool:
resource "aws_cognito_user_pool" "CognitoUserPool" {
name = "cgup-aws-try-cogn-createcgup-001"
password_policy {
minimum_length = 8
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
temporary_password_validity_days = 7
}
lambda_config {
}
schema {
attribute_data_type = "String"
developer_only_attribute = false
mutable = false
name = "sub"
string_attribute_constraints {
max_length = "2048"
min_length = "1"
}
required = true
}
}
The full code consists of several schema blocks, but I think this may be enough.
It was exported from an existing Cognito user pool in AWS, but when I try a terraform plan I get the following error:
Error: "schema.1.name" cannot be longer than 20 character
with aws_cognito_user_pool.CognitoUserPool,
on main.tf line 216, in resource "aws_cognito_user_pool" "CognitoUserPool":
216: resource "aws_cognito_user_pool" "CognitoUserPool" {
No matter how much I reduce the length of the name, I get the same error.
I tried to deploy
resource "aws_cognito_user_pool" "CognitoUserPool" {
name = "cgup-aws-try"
password_policy {
minimum_length = 8
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
temporary_password_validity_days = 7
}
lambda_config {
}
schema {
attribute_data_type = "String"
developer_only_attribute = false
mutable = false
name = "sub"
string_attribute_constraints {
max_length = "2048"
min_length = "1"
}
required = true
}
}
and it was successful.
Maybe try to start fresh in a new workspace.

SageMaker workforce with Cognito

I am trying to build the Terraform for a SageMaker private workforce with a private Cognito user pool.
Following: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sagemaker_workforce
It is working fine.
main.tf
resource "aws_sagemaker_workforce" "workforce" {
workforce_name = "workforce"
cognito_config {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
}
}
resource "aws_cognito_user_pool" "user_pool" {
name = "sagemaker-cognito-userpool"
}
resource "aws_cognito_user_pool_client" "congnito_client" {
name = "congnito-client"
generate_secret = true
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_group" "user_group" {
name = "user-group"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_pool_domain" "domain" {
domain = "sagemaker-user-pool-ocr-domain"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_sagemaker_workteam" "workteam" {
workteam_name = "worker-team"
workforce_name = aws_sagemaker_workforce.workforce.id
description = "worker-team"
member_definition {
cognito_member_definition {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
user_group = aws_cognito_user_group.user_group.id
}
}
}
resource "aws_sagemaker_human_task_ui" "template" {
human_task_ui_name = "human-task-ui-template"
ui_template {
content = file("${path.module}/sagemaker-human-task-ui-template.html")
}
}
resource "aws_sagemaker_flow_definition" "definition" {
flow_definition_name = "flow-definition"
role_arn = var.aws_iam_role
human_loop_config {
human_task_ui_arn = aws_sagemaker_human_task_ui.template.arn
task_availability_lifetime_in_seconds = 1
task_count = 1
task_description = "Task description"
task_title = "Please review the Key Value Pairs in this document"
workteam_arn = aws_sagemaker_workteam.workteam.arn
}
output_config {
s3_output_path = "s3://${var.s3_output_path}"
}
}
It is creating the Cognito user pool with callback URLs. These callback URLs come from aws_sagemaker_workforce.workforce.subdomain and are set in Cognito automatically, which is what I want.
But I also want to set config in the Cognito user pool client, like:
allowed_oauth_flows  = ["code", "implicit"]
allowed_oauth_scopes = ["email", "openid", "profile"]
Now, when I add the above two lines, I also need to add a callback URL, which I don't want.
I tried:
allowed_oauth_flows  = ["code", "implicit"]
allowed_oauth_scopes = ["email", "openid", "profile"]
callback_urls        = [aws_sagemaker_workforce.workforce.subdomain]
which gives this error:
Cycle: module.sagemaker.aws_cognito_user_pool_client.congnito_client, module.sagemaker.aws_sagemaker_workforce.workforce
Both resources depend on each other. I want to pass those two lines, but then it forces me to add the callback URL as well.
Here is the final main.tf, which fails with those three lines:
resource "aws_sagemaker_workforce" "workforce" {
workforce_name = "workforce"
cognito_config {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
}
}
resource "aws_cognito_user_pool" "user_pool" {
name = "sagemaker-cognito-userpool"
}
resource "aws_cognito_user_pool_client" "congnito_client" {
name = "congnito-client"
generate_secret = true
user_pool_id = aws_cognito_user_pool.user_pool.id
explicit_auth_flows = ["ALLOW_REFRESH_TOKEN_AUTH", "ALLOW_USER_PASSWORD_AUTH", "ALLOW_CUSTOM_AUTH", "ALLOW_USER_SRP_AUTH"]
allowed_oauth_flows_user_pool_client = true
supported_identity_providers = ["COGNITO"]
allowed_oauth_flows = ["code", "implicit"]
allowed_oauth_scopes = ["email", "openid", "profile"]
callback_urls = [aws_sagemaker_workforce.workforce.subdomain]
}
resource "aws_cognito_user_group" "user_group" {
name = "user-group"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_pool_domain" "domain" {
domain = "sagemaker-user-pool-ocr-domain"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_sagemaker_workteam" "workteam" {
workteam_name = "worker-team"
workforce_name = aws_sagemaker_workforce.workforce.id
description = "worker-team"
member_definition {
cognito_member_definition {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
user_group = aws_cognito_user_group.user_group.id
}
}
}
resource "aws_sagemaker_human_task_ui" "template" {
human_task_ui_name = "human-task-ui-template"
ui_template {
content = file("${path.module}/sagemaker-human-task-ui-template.html")
}
}
resource "aws_sagemaker_flow_definition" "definition" {
flow_definition_name = "flow-definition"
role_arn = var.aws_iam_role
human_loop_config {
human_task_ui_arn = aws_sagemaker_human_task_ui.template.arn
task_availability_lifetime_in_seconds = 1
task_count = 1
task_description = "Task description"
task_title = "Please review the Key Value Pairs in this document"
workteam_arn = aws_sagemaker_workteam.workteam.arn
}
output_config {
s3_output_path = "s3://${var.s3_output_path}"
}
}
You do not need to specify the callback URL for the workforce. It is sufficient to specify the following in order to create the aws_cognito_user_pool_client resource:
callback_urls = [
  "https://${aws_cognito_user_pool_domain.<domain_name>.cloudfront_distribution_arn}",
]
Then you reference the user pool client in your workforce definition:
resource "aws_sagemaker_workforce" "..." {
workforce_name = "..."
cognito_config {
client_id = aws_cognito_user_pool_client.<client_name>.id
user_pool = aws_cognito_user_pool_domain.<domain_name>.user_pool_id
}
}
Existence of the callback URLs can be verified after applying the Terraform configuration by running aws cognito-idp describe-user-pool-client --user-pool-id <pool_id> --client-id <client_id>:
"UserPoolClient": {
...
"CallbackURLs": [
"https://____.cloudfront.net",
"https://____.labeling.eu-central-1.sagemaker.aws/oauth2/idpresponse"
],
"LogoutURLs": [
"https://____.labeling.eu-central-1.sagemaker.aws/logout"
],
It seems that Terraform itself does not do anything special on workforce creation (see https://github.com/hashicorp/terraform-provider-aws/blob/main/internal/service/sagemaker/workforce.go), so the callback URLs appear to be added by AWS SageMaker itself.
This means you have to instruct Terraform to ignore changes to those attributes in the aws_cognito_user_pool_client configuration:
lifecycle {
  ignore_changes = [
    callback_urls,
    logout_urls,
  ]
}
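Putting both pieces together, the user pool client from the question could look like this (a sketch reusing the resource names from the question, with the OAuth settings you wanted to add):
resource "aws_cognito_user_pool_client" "congnito_client" {
  name                                 = "congnito-client"
  generate_secret                      = true
  user_pool_id                         = aws_cognito_user_pool.user_pool.id
  explicit_auth_flows                  = ["ALLOW_REFRESH_TOKEN_AUTH", "ALLOW_USER_PASSWORD_AUTH", "ALLOW_CUSTOM_AUTH", "ALLOW_USER_SRP_AUTH"]
  allowed_oauth_flows_user_pool_client = true
  supported_identity_providers         = ["COGNITO"]
  allowed_oauth_flows                  = ["code", "implicit"]
  allowed_oauth_scopes                 = ["email", "openid", "profile"]

  # Reference the domain's CloudFront distribution instead of the workforce
  # subdomain, which breaks the dependency cycle
  callback_urls = [
    "https://${aws_cognito_user_pool_domain.domain.cloudfront_distribution_arn}",
  ]

  # SageMaker overwrites the callback/logout URLs after workforce creation,
  # so tell Terraform to ignore that drift
  lifecycle {
    ignore_changes = [
      callback_urls,
      logout_urls,
    ]
  }
}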

Terraform EKS: Cluster is already at the desired configuration

I have created an EKS cluster using Terraform. After creation, I am trying to update one parameter, endpoint_public_access = false, but I am getting the following error:
Error: error updating EKS Cluster (ec1-default-ics-common-alz-eks-cluster) config:
InvalidParameterException: Cluster is already at the desired configuration with
endpointPrivateAccess: false, endpointPublicAccess: true, and Public Endpoint
Restrictions: [0.0.0.0/0]
{
  ClusterName: "ec1-default-ics-common-alz-eks-cluster",
  Message_: "Cluster is already at the desired configuration with endpointPrivateAccess: false, endpointPublicAccess: true, and Public Endpoint Restrictions: [0.0.0.0/0]"
}

  on ../../terraform-hli-aws-eks/eks_cluster/main.tf line 1, in resource "aws_eks_cluster" "eks_cluster":
   1: resource "aws_eks_cluster" "eks_cluster" {
Here is the terraform plan output:
~ resource "aws_eks_cluster" "eks_cluster" {
arn = "<arn>"
certificate_authority = [
{
data = "<datat>"
},
]
created_at = "2020-03-09 08:59:28 +0000 UTC"
enabled_cluster_log_types = [
"api",
"audit",
]
endpoint = "<url>.eks.amazonaws.com"
id = "ec1-default-ics-common-alz-eks-cluster"
identity = [
{
oidc = [
{
issuer = "<url>"
},
]
},
]
name = "ec1-default-ics-common-alz-eks-cluster"
platform_version = "eks.9"
role_arn = "<url>"
status = "ACTIVE"
tags = {
"Environment" = "common"
"Project" = "ics-dlt"
"Terraform" = "true"
}
version = "1.14"
~ vpc_config {
cluster_security_group_id = "sg-05ab244e50689862a"
endpoint_private_access = false
endpoint_public_access = true
~ public_access_cidrs = [
- "0.0.0.0/0",
]
security_group_ids = [
"sg-081527f14bf1a6646",
]
subnet_ids = [
"subnet-08011850bb5b7d7ca",
"subnet-0fab8917fdc533eb3",
]
vpc_id = "vpc-07ba84e4a6f54d91f"
}
}
Terraform code
resource "aws_eks_cluster" "eks_cluster" {
name = var.name
role_arn = aws_iam_role.eks_cluster_role.arn
vpc_config {
subnet_ids = var.cluster_subnet_ids
endpoint_private_access = var.endpoint_private_access
endpoint_public_access = var.endpoint_public_access
public_access_cidrs = var.public_access_cidrs
security_group_ids = var.security_group_ids
}
enabled_cluster_log_types = var.enabled_cluster_log_types
tags = var.tags
depends_on = [
aws_iam_role_policy_attachment.eks_cluster_role-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.eks_cluster_role-AmazonEKSServicePolicy,
]
}
data "template_file" "eks_cluster_role" {
template = "${file("${path.module}/roles/cluster_role.json")}"
}
resource "aws_iam_role" "eks_cluster_role" {
name = var.cluster_role_name
assume_role_policy = data.template_file.eks_cluster_role.rendered
}
resource "aws_iam_role_policy_attachment" "eks_cluster_role-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_role.name
}
resource "aws_iam_role_policy_attachment" "eks_cluster_role-AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.eks_cluster_role.name
}
If the cluster has this setting:
endpoint_public_access = true
then you need to "disable" this setting:
public_access_cidrs = null
You could do something like this:
public_access_cidrs = var.endpoint_public_access == true ? var.public_access_cidrs : null
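Applied to the vpc_config from the question, that conditional could look like this (a sketch using the variables already defined in the module):
resource "aws_eks_cluster" "eks_cluster" {
  name     = var.name
  role_arn = aws_iam_role.eks_cluster_role.arn

  vpc_config {
    subnet_ids              = var.cluster_subnet_ids
    endpoint_private_access = var.endpoint_private_access
    endpoint_public_access  = var.endpoint_public_access

    # Only pass CIDR restrictions while the public endpoint is enabled;
    # send null otherwise, per the answer above
    public_access_cidrs = var.endpoint_public_access == true ? var.public_access_cidrs : null

    security_group_ids = var.security_group_ids
  }
}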

Error creating aws_s3_notification with Terraform

I'm currently having an issue with my aws_s3_bucket_notification resource creation. Whenever I attempt to deploy this resource, I receive this error:
Error putting S3 notification configuration: InvalidArgument: Unable to validate the following destination configurations
I've tried setting depends_on parameters and adjusting permissions. One interesting thing: in my main.tf file, I'm creating two Lambda functions that are extremely similar (they vary only by code). My "controller" configuration deploys with no issue, but my "chunker" function seems to have an issue creating the s3_notification. I have included both configs for comparison.
#S3
resource "aws_s3_bucket" "ancb" {
  for_each = toset(var.ancb_bucket)
  bucket   = format("ancb-%s-%s-%s", var.env, var.product_name, each.value)
  acl      = "private"

  versioning {
    enabled = true
  }

  tags = {
    Environment = var.env
    Terraform   = true
  }
}

#Chunker
resource "aws_lambda_function" "ancb_chunker" {
  function_name = format("ancb-chunker-%s-%s", var.env, var.product_name)
  s3_bucket     = aws_s3_bucket.ancb["config"].id
  s3_key        = var.lambda_zip_chunker
  handler       = "handler.chunk"
  runtime       = "nodejs8.10"
  role          = aws_iam_role.lambda_exec.arn

  environment {
    variables = {
      ORIGINAL_BUCKET   = aws_s3_bucket.ancb["original"].id
      TO_PROCESS_BUCKET = aws_s3_bucket.ancb["to-process"].id
      ENVIRONMENT       = var.env
      CHUNK_SIZE        = 5000
    }
  }

  tags = {
    Environment = var.env
    Terraform   = true
  }

  depends_on = [
    aws_s3_bucket_object.ancb["chunker.zip"],
    aws_s3_bucket.ancb["chunker"],
  ]
}

resource "aws_lambda_permission" "ancb_chunker_s3" {
  statement_id  = "AllowExecutionFromS3Bucket-Chunker"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.ancb_controller.arn
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.ancb["original"].arn
}

resource "aws_s3_bucket_notification" "chunker" {
  bucket = aws_s3_bucket.ancb["original"].id

  lambda_function {
    lambda_function_arn = aws_lambda_function.ancb_chunker.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [
    aws_lambda_permission.ancb_chunker_s3,
    aws_lambda_function.ancb_chunker,
    aws_s3_bucket.ancb["original"],
  ]
}
#Controller
resource "aws_lambda_function" "ancb_controller" {
  function_name = format("ancb-controller-%s-%s", var.env, var.product_name)
  s3_bucket     = aws_s3_bucket.ancb["config"].id
  s3_key        = var.lambda_zip_controller
  handler       = "handler.controller"
  runtime       = "nodejs8.10"
  role          = aws_iam_role.lambda_exec.arn

  environment {
    variables = {
      DESTINATION_BUCKET = aws_s3_bucket.ancb["destination"].id
      ENVIRONMENT        = var.env
      ERROR_BUCKET       = aws_s3_bucket.ancb["error"].id
      GEOCODIO_APIKEY    = <insert>
      GEOCODIO_ENDPOINT  = <insert>
      GEOCODIO_VERSION   = <insert>
      ORIGINAL_BUCKET    = aws_s3_bucket.ancb["original"].id
      SOURCE_BUCKET      = aws_s3_bucket.ancb["source"].id
      TO_PROCESS_BUCKET  = aws_s3_bucket.ancb["to-process"].id
      WORKING_BUCKET     = aws_s3_bucket.ancb["working"].id
    }
  }

  tags = {
    Environment = var.env
    Terraform   = true
  }

  depends_on = [
    aws_s3_bucket_object.ancb["controller.zip"],
  ]
}

resource "aws_lambda_permission" "ancb_controller_s3" {
  statement_id  = "AllowExecutionFromS3Bucket-Controller"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.ancb_controller.arn
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.ancb["source"].arn
}

resource "aws_s3_bucket_notification" "controller" {
  bucket = aws_s3_bucket.ancb["source"].id

  lambda_function {
    lambda_function_arn = aws_lambda_function.ancb_controller.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [
    aws_lambda_permission.ancb_controller_s3,
    aws_s3_bucket.ancb["source"],
  ]
}
UPDATE: If I manually create the trigger and run terraform apply again, Terraform is able to move forward with no problem.
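One detail worth checking in the configuration above (an observation from the posted code, not a confirmed fix): aws_lambda_permission.ancb_chunker_s3 sets function_name to aws_lambda_function.ancb_controller.arn rather than the chunker function, so S3 has no permission to invoke the chunker and cannot validate that notification destination. A corrected sketch:
resource "aws_lambda_permission" "ancb_chunker_s3" {
  statement_id  = "AllowExecutionFromS3Bucket-Chunker"
  action        = "lambda:InvokeFunction"

  # Grant invoke on the chunker function, not the controller
  function_name = aws_lambda_function.ancb_chunker.arn
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.ancb["original"].arn
}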