Terraform error while creating AWS Cognito User Pool

Below is my Terraform code to create an AWS Cognito user pool:
resource "aws_cognito_user_pool" "CognitoUserPool" {
name = "cgup-aws-try-cogn-createcgup-001"
password_policy {
minimum_length = 8
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
temporary_password_validity_days = 7
}
lambda_config {
}
schema {
attribute_data_type = "String"
developer_only_attribute = false
mutable = false
name = "sub"
string_attribute_constraints {
max_length = "2048"
min_length = "1"
}
required = true
}
}
The code contains several schema blocks, but I think this one is enough to illustrate the problem.
It was exported from AWS from an existing Cognito user pool, but when I run terraform plan I get the following error:
Error: "schema.1.name" cannot be longer than 20 character
with aws_cognito_user_pool.CognitoUserPool,
on main.tf line 216, in resource "aws_cognito_user_pool" "CognitoUserPool":
216: resource "aws_cognito_user_pool" "CognitoUserPool" {
No matter how much I reduce the length of the name, I get the same error.

I tried to deploy
resource "aws_cognito_user_pool" "CognitoUserPool" {
name = "cgup-aws-try"
password_policy {
minimum_length = 8
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
temporary_password_validity_days = 7
}
lambda_config {
}
schema {
attribute_data_type = "String"
developer_only_attribute = false
mutable = false
name = "sub"
string_attribute_constraints {
max_length = "2048"
min_length = "1"
}
required = true
}
}
and it was successful.
Maybe try starting fresh in a new workspace.

Related

How to enable a dynamic block conditionally for creating GCS buckets

I'm trying to add a retention policy, but I want to enable it conditionally, as you can see from the code below.
buckets.tf
locals {
  team_buckets = {
    arc = { app_id = "20390", num_buckets = 2, retention_period = null }
    ana = { app_id = "25402", num_buckets = 2, retention_period = 631139040 }
    cha = { app_id = "20391", num_buckets = 2, retention_period = 631139040 } # 20 years
  }
}

module "team_bucket" {
  source = "../../../../modules/gcs_bucket"
  for_each = {
    for bucket in flatten([
      for product_name, bucket_info in local.team_buckets : [
        for i in range(bucket_info.num_buckets) : {
          name             = format("%s-%02d", product_name, i + 1)
          team             = "ei_${product_name}"
          app_id           = bucket_info.app_id
          retention_period = bucket_info.retention_period
        }
      ]
    ]) : bucket.name => bucket
  }
  project_id       = var.project
  name             = "teambucket-${each.value.name}"
  app_id           = each.value.app_id
  team             = each.value.team
  retention_period = each.value.retention_period
}
The root module is defined as follows:
main.tf
resource "google_storage_bucket" "bucket" {
project = var.project_id
name = "${var.project_id}-${var.name}"
location = var.location
labels = {
app_id = var.app_id
ei_team = var.team
cost_center = var.cost_center
}
uniform_bucket_level_access = var.uniform_bucket_level_access
dynamic "retention_policy" {
for_each = var.retention_policy == null ? [] : [var.retention_period]
content {
retention_period = var.retention_period
}
}
}
but I can't seem to make the code pick up the value;
for example, as you can see below, the retention policy does not get applied:
~ resource "google_storage_bucket" "bucket" {
id = "teambucket-cha-02"
name = "teambucket-cha-02"
# (11 unchanged attributes hidden)
- retention_policy {
- is_locked = false -> null
- retention_period = 3155760000 -> null
}
}
The variables.tf for the retention policy is as follows:
variable "retention_policy" {
description = "Configuation of the bucket's data retention policy for how long objects in the bucket should be retained"
type = any
default = null
}
variable "retention_period" {
default = null
}
Your var.retention_policy is always null, which is its default value; you never override that default. You probably wanted:
for_each = var.retention_period == null ? [] : [var.retention_period]
instead of
for_each = var.retention_policy == null ? [] : [var.retention_period]
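For clarity, here is a minimal sketch of the corrected dynamic block inside the bucket module, trimmed to the relevant attributes (retention_policy.value is simply the single var.retention_period element being iterated over):

resource "google_storage_bucket" "bucket" {
  project  = var.project_id
  name     = "${var.project_id}-${var.name}"
  location = var.location

  # Render the retention_policy block only when a retention period is provided.
  dynamic "retention_policy" {
    for_each = var.retention_period == null ? [] : [var.retention_period]
    content {
      retention_period = retention_policy.value
    }
  }
}

With this in place, the separate var.retention_policy variable is no longer referenced and could likely be removed.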

SageMaker workforce with Cognito

I am trying to build the Terraform for a SageMaker private workforce with a private Cognito user pool, following https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sagemaker_workforce.
It is working fine.
main.tf
resource "aws_sagemaker_workforce" "workforce" {
workforce_name = "workforce"
cognito_config {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
}
}
resource "aws_cognito_user_pool" "user_pool" {
name = "sagemaker-cognito-userpool"
}
resource "aws_cognito_user_pool_client" "congnito_client" {
name = "congnito-client"
generate_secret = true
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_group" "user_group" {
name = "user-group"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_pool_domain" "domain" {
domain = "sagemaker-user-pool-ocr-domain"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_sagemaker_workteam" "workteam" {
workteam_name = "worker-team"
workforce_name = aws_sagemaker_workforce.workforce.id
description = "worker-team"
member_definition {
cognito_member_definition {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
user_group = aws_cognito_user_group.user_group.id
}
}
}
resource "aws_sagemaker_human_task_ui" "template" {
human_task_ui_name = "human-task-ui-template"
ui_template {
content = file("${path.module}/sagemaker-human-task-ui-template.html")
}
}
resource "aws_sagemaker_flow_definition" "definition" {
flow_definition_name = "flow-definition"
role_arn = var.aws_iam_role
human_loop_config {
human_task_ui_arn = aws_sagemaker_human_task_ui.template.arn
task_availability_lifetime_in_seconds = 1
task_count = 1
task_description = "Task description"
task_title = "Please review the Key Value Pairs in this document"
workteam_arn = aws_sagemaker_workteam.workteam.arn
}
output_config {
s3_output_path = "s3://${var.s3_output_path}"
}
}
It creates the Cognito user pool with callback URLs. These callback URLs come from aws_sagemaker_workforce.workforce.subdomain and are set in Cognito automatically, which is what I want.
But I also want to set some configuration on the Cognito user pool client, like:
  allowed_oauth_flows  = ["code", "implicit"]
  allowed_oauth_scopes = ["email", "openid", "profile"]
When I add those two lines, I also need to add callback_urls, which I don't want.
I tried:
  allowed_oauth_flows  = ["code", "implicit"]
  allowed_oauth_scopes = ["email", "openid", "profile"]
  callback_urls        = [aws_sagemaker_workforce.workforce.subdomain]
which gives the error:
Cycle: module.sagemaker.aws_cognito_user_pool_client.congnito_client, module.sagemaker.aws_sagemaker_workforce.workforce
Both resources depend on each other; I want to set those two lines, but doing so forces me to add the callback URL as well.
Here is the final main.tf, which fails with those three lines:
resource "aws_sagemaker_workforce" "workforce" {
workforce_name = "workforce"
cognito_config {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
}
}
resource "aws_cognito_user_pool" "user_pool" {
name = "sagemaker-cognito-userpool"
}
resource "aws_cognito_user_pool_client" "congnito_client" {
name = "congnito-client"
generate_secret = true
user_pool_id = aws_cognito_user_pool.user_pool.id
explicit_auth_flows = ["ALLOW_REFRESH_TOKEN_AUTH", "ALLOW_USER_PASSWORD_AUTH", "ALLOW_CUSTOM_AUTH", "ALLOW_USER_SRP_AUTH"]
allowed_oauth_flows_user_pool_client = true
supported_identity_providers = ["COGNITO"]
allowed_oauth_flows = ["code", "implicit"]
allowed_oauth_scopes = ["email", "openid", "profile"]
callback_urls = [aws_sagemaker_workforce.workforce.subdomain]
}
resource "aws_cognito_user_group" "user_group" {
name = "user-group"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_pool_domain" "domain" {
domain = "sagemaker-user-pool-ocr-domain"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_sagemaker_workteam" "workteam" {
workteam_name = "worker-team"
workforce_name = aws_sagemaker_workforce.workforce.id
description = "worker-team"
member_definition {
cognito_member_definition {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
user_group = aws_cognito_user_group.user_group.id
}
}
}
resource "aws_sagemaker_human_task_ui" "template" {
human_task_ui_name = "human-task-ui-template"
ui_template {
content = file("${path.module}/sagemaker-human-task-ui-template.html")
}
}
resource "aws_sagemaker_flow_definition" "definition" {
flow_definition_name = "flow-definition"
role_arn = var.aws_iam_role
human_loop_config {
human_task_ui_arn = aws_sagemaker_human_task_ui.template.arn
task_availability_lifetime_in_seconds = 1
task_count = 1
task_description = "Task description"
task_title = "Please review the Key Value Pairs in this document"
workteam_arn = aws_sagemaker_workteam.workteam.arn
}
output_config {
s3_output_path = "s3://${var.s3_output_path}"
}
}
You do not need to specify the callback URL for the workforce. It is sufficient to specify the following in order to create the aws_cognito_user_pool_client resource:
  callback_urls = [
    "https://${aws_cognito_user_pool_domain.domain.cloudfront_distribution_arn}",
  ]
Then you reference the user pool client in your workforce definition:
resource "aws_sagemaker_workforce" "..." {
workforce_name = "..."
cognito_config {
client_id = aws_cognito_user_pool_client.<client_name>.id
user_pool = aws_cognito_user_pool_domain.<domain_name>.user_pool_id
}
}
The existence of the callback URLs can be verified after applying the Terraform configuration by running aws cognito-idp describe-user-pool-client --user-pool-id <pool_id> --client-id <client_id>:
"UserPoolClient": {
...
"CallbackURLs": [
"https://____.cloudfront.net",
"https://____.labeling.eu-central-1.sagemaker.aws/oauth2/idpresponse"
],
"LogoutURLs": [
"https://____.labeling.eu-central-1.sagemaker.aws/logout"
],
It seems that Terraform itself does not do anything special on workforce creation (see https://github.com/hashicorp/terraform-provider-aws/blob/main/internal/service/sagemaker/workforce.go), so the callback URLs appear to be added by AWS SageMaker itself.
This means you have to instruct Terraform to ignore changes to those attributes in the aws_cognito_user_pool_client configuration:
  lifecycle {
    ignore_changes = [
      callback_urls, logout_urls
    ]
  }
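Putting the pieces together, here is a sketch of the aws_cognito_user_pool_client from the question with the OAuth settings, the static callback URL, and the lifecycle block combined (resource names as in the question):

resource "aws_cognito_user_pool_client" "congnito_client" {
  name                                 = "congnito-client"
  generate_secret                      = true
  user_pool_id                         = aws_cognito_user_pool.user_pool.id
  explicit_auth_flows                  = ["ALLOW_REFRESH_TOKEN_AUTH", "ALLOW_USER_PASSWORD_AUTH", "ALLOW_CUSTOM_AUTH", "ALLOW_USER_SRP_AUTH"]
  allowed_oauth_flows_user_pool_client = true
  supported_identity_providers         = ["COGNITO"]
  allowed_oauth_flows                  = ["code", "implicit"]
  allowed_oauth_scopes                 = ["email", "openid", "profile"]

  # Static callback URL derived from the user pool domain, so there is no
  # reference back to aws_sagemaker_workforce and therefore no cycle.
  callback_urls = [
    "https://${aws_cognito_user_pool_domain.domain.cloudfront_distribution_arn}",
  ]

  # SageMaker appends its own labeling-portal callback/logout URLs after the
  # workforce is created, so ignore drift on these attributes.
  lifecycle {
    ignore_changes = [
      callback_urls,
      logout_urls,
    ]
  }
}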

Loop over maps and assign value to local - Terraform

I am trying to pass the S3 bucket names and the create_user flags into a locals block in main.tf so that both are available as lists. I then pass list_of_buckets to the s3 module to create the buckets, and loop over users_to_create in the s3_user module to create a user whenever the boolean is set to true. All of these values are passed through variable.tf and then into main.tf.
dev.tfvars
wea-nonprod = {
  services = {
    s3 = [
      sthree = {
        create_user = true,
      }
      sfour = {
        create_user = true,
      }
      sfive = {
        create_user = true,
      }
    ]
  }
}
variable.tf
variable "s3_buckets" {
type = list(map)
}
main.tf
locals {
  users_to_create = ""
  list_of_buckets = ""
}

module "s3" {
  source                 = "../../s3"
  name                   = join("-", [var.name_prefix, "s3"])
  tags                   = merge(var.tags, { Name = join("-", [var.name_prefix, "s3"]) })
  buckets                = list_of_buckets
  sse_algorithm          = "AES256"
  access_log_bucket_name = var.access_log_bucket_name
}

module "s3_user" {
  for_each   = local.users_to_create
  source     = "./service-s3-bucket-user"
  name       = join("-", [var.name_prefix, each.key])
  tags       = var.tags
  bucket_arn = module.s3.bucket_arns[each.key]
  depends_on = [module.s3]
}
Just iterate over your wea-nonprod map:
locals {
  users_to_create = [for name, s3 in var.wea-nonprod.services.s3 : name if s3.create_user == true]
  list_of_buckets = [for name, s3 in var.wea-nonprod.services.s3 : name]
}
And a few changes to your module blocks:
module "s3" {
source = "../../s3"
name = "${var.name_prefix}-s3"
tags = merge(var.tags, { Name = "${var.name_prefix}-s3" })
buckets = local.list_of_buckets
sse_algorithm = "AES256"
access_log_bucket_name = var.access_log_bucket_name
}
module "s3_user" {
count = length(local.users_to_create)
source = "./service-s3-bucket-user"
name = "${var.name_prefix}${local.users_to_create[count.index]}"
tags = var.tags
bucket_arn = module.s3.bucket_arns[local.users_to_create[count.index]]
depends_on = [module.s3]
}
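For this to work, var.wea-nonprod also needs to be declared somewhere (the question's variable.tf only declares s3_buckets). A possible declaration, assuming services.s3 is meant to be a map of objects keyed by bucket name rather than a list:

variable "wea-nonprod" {
  type = object({
    services = object({
      # Each bucket name maps to its settings, e.g. sthree = { create_user = true }
      s3 = map(object({
        create_user = bool
      }))
    })
  })
}

With that shape, the dev.tfvars entry would use braces instead of brackets, e.g. s3 = { sthree = { create_user = true }, sfour = { create_user = true }, sfive = { create_user = true } }, and both locals above would evaluate to ["sthree", "sfour", "sfive"].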

Cognito User Pool Lambda Trigger permission

I'm using Terraform to create a Cognito user pool. I'd like to use a Lambda function to send a custom message when a user signs up. When I attempt to sign up on the client, I get an error saying "CustomMessage invocation failed due to error AccessDeniedException." I've used Lambda permissions before, but I can't find any examples of this configuration. How do I give the Lambda function permission? The following is my current configuration.
resource "aws_cognito_user_pool" "main" {
name = "${var.user_pool_name}_${var.stage}"
username_attributes = [ "email" ]
schema {
attribute_data_type = "String"
mutable = true
name = "name"
required = true
}
schema {
attribute_data_type = "String"
mutable = true
name = "email"
required = true
}
password_policy {
minimum_length = "8"
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
}
mfa_configuration = "OFF"
lambda_config {
custom_message = aws_lambda_function.custom_message.arn
post_confirmation = aws_lambda_function.post_confirmation.arn
}
}
...
resource "aws_lambda_permission" "get_blog" {
statement_id = "AllowExecutionFromCognito"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.custom_message.function_name
principal = "cognito-idp.amazonaws.com"
source_arn = "${aws_cognito_user_pool.main.arn}/*/*"
depends_on = [ aws_lambda_function.custom_message ]
}
...
resource "aws_lambda_function" "custom_message" {
filename = "${var.custom_message_path}/${var.custom_message_file_name}.zip"
function_name = var.custom_message_file_name
role = aws_iam_role.custom_message.arn
handler = "${var.custom_message_file_name}.handler"
source_code_hash = filebase64sha256("${var.custom_message_path}/${var.custom_message_file_name}.zip")
runtime = "nodejs12.x"
timeout = 10
layers = [ var.node_layer_arn ]
environment {
variables = {
TABLE_NAME = var.table_name
RESOURCENAME = "blogAuthCustomMessage"
REGION = "us-west-2"
}
}
tags = {
Name = var.developer
}
depends_on = [
data.archive_file.custom_message,
]
}
Based on the OP's feedback in the comment section, changing the source_arn property in aws_lambda_permission.get_blog to aws_cognito_user_pool.main.arn works.
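In other words, a minimal sketch of the adjusted permission, reusing the resource from the question with only source_arn changed:

resource "aws_lambda_permission" "get_blog" {
  statement_id  = "AllowExecutionFromCognito"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.custom_message.function_name
  principal     = "cognito-idp.amazonaws.com"

  # The user pool ARN itself, without the "/*/*" suffix.
  source_arn = aws_cognito_user_pool.main.arn

  depends_on = [aws_lambda_function.custom_message]
}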

Terraform regenerating resources when a new object is added to map

I have a Terraform (pre-0.12) module that generates an Amazon Cognito user pool plus a client and a domain.
resource "aws_cognito_user_pool" "pool" {
count = "${var.user_pool_count}"
name = "${lookup(var.user_pools[count.index], "name")}"
username_attributes = ["email"]
auto_verified_attributes = ["email"]
password_policy {
minimum_length = "${lookup(var.user_pools[count.index], "password_minimum_length")}"
require_lowercase = "${lookup(var.user_pools[count.index], "password_require_lowercase")}"
require_numbers = "${lookup(var.user_pools[count.index], "password_require_numbers")}"
require_symbols = "${lookup(var.user_pools[count.index], "password_require_symbols")}"
require_uppercase = "${lookup(var.user_pools[count.index], "password_require_uppercase")}"
}
verification_message_template = {
default_email_option = "CONFIRM_WITH_LINK"
}
lambda_config = {
pre_token_generation = "${var.lambda_pre_token_generation}"
custom_message = "${var.lambda_custom_message}"
}
email_configuration = {
reply_to_email_address = "${lookup(var.user_pools[count.index], "reply_to_email_address")}"
source_arn = "${lookup(var.user_pools[count.index], "source_arn")}"
email_sending_account = "${lookup(var.user_pools[count.index], "email_sending_account")}"
}
schema = [
< REDACTED >
]
}
resource "aws_cognito_user_pool_client" "client" {
count = "${var.user_pool_count}"
name = "${lookup(var.user_pools[count.index], "name")}"
user_pool_id = "${element(aws_cognito_user_pool.pool.*.id,count.index)}"
explicit_auth_flows = ["ADMIN_NO_SRP_AUTH", "USER_PASSWORD_AUTH"]
}
resource "aws_cognito_user_pool_domain" "main" {
count = "${var.user_pool_count}"
domain = "${lookup(var.user_pools[count.index], "domain")}"
user_pool_id = "${element(aws_cognito_user_pool.pool.*.id,count.index)}"
}
This accepts a list of maps called user_pools to define the Cognito user pools required.
Unfortunately, when I add a new map with the definition of a new pool in it, Terraform forces the recreation of aws_cognito_user_pool_client and aws_cognito_user_pool_domain for all pools. This appears to be because it sees a change in:
user_pool_id: "eu-west-1_R8SDX8Yqj" => "${element(aws_cognito_user_pool.pool.*.id,count.index)}" (forces new resource)
I am assuming this is because Terraform is seeing a change in aws_cognito_user_pool.pool.*.id and forcing the recreation. Can anyone explain how to get around this? It is suboptimal for me to have all of my domains and clients regenerated.
For anyone reading this: I found the following issue on GitHub - https://github.com/hashicorp/terraform/issues/14357
Changing my syntax to the following appeared to fix it.
user_pool_id = "${aws_cognito_user_pool.pool.*.id[count.index]}"