I have a Terraform (pre-0.12) module that generates an Amazon Cognito user pool plus a client and a domain.
resource "aws_cognito_user_pool" "pool" {
count = "${var.user_pool_count}"
name = "${lookup(var.user_pools[count.index], "name")}"
username_attributes = ["email"]
auto_verified_attributes = ["email"]
password_policy {
minimum_length = "${lookup(var.user_pools[count.index], "password_minimum_length")}"
require_lowercase = "${lookup(var.user_pools[count.index], "password_require_lowercase")}"
require_numbers = "${lookup(var.user_pools[count.index], "password_require_numbers")}"
require_symbols = "${lookup(var.user_pools[count.index], "password_require_symbols")}"
require_uppercase = "${lookup(var.user_pools[count.index], "password_require_uppercase")}"
}
verification_message_template = {
default_email_option = "CONFIRM_WITH_LINK"
}
lambda_config = {
pre_token_generation = "${var.lambda_pre_token_generation}"
custom_message = "${var.lambda_custom_message}"
}
email_configuration = {
reply_to_email_address = "${lookup(var.user_pools[count.index], "reply_to_email_address")}"
source_arn = "${lookup(var.user_pools[count.index], "source_arn")}"
email_sending_account = "${lookup(var.user_pools[count.index], "email_sending_account")}"
}
schema = [
< REDACTED >
]
}
resource "aws_cognito_user_pool_client" "client" {
count = "${var.user_pool_count}"
name = "${lookup(var.user_pools[count.index], "name")}"
user_pool_id = "${element(aws_cognito_user_pool.pool.*.id,count.index)}"
explicit_auth_flows = ["ADMIN_NO_SRP_AUTH", "USER_PASSWORD_AUTH"]
}
resource "aws_cognito_user_pool_domain" "main" {
count = "${var.user_pool_count}"
domain = "${lookup(var.user_pools[count.index], "domain")}"
user_pool_id = "${element(aws_cognito_user_pool.pool.*.id,count.index)}"
}
The module accepts a list of maps called user_pools that defines the required Cognito user pools.
Unfortunately, when I add a new map defining an additional pool, Terraform forces the recreation of aws_cognito_user_pool_client and aws_cognito_user_pool_domain for every existing pool. This appears to be because it sees a change in:
user_pool_id: "eu-west-1_R8SDX8Yqj" => "${element(aws_cognito_user_pool.pool.*.id,count.index)}" (forces new resource)
I assume this is because Terraform sees a change in aws_cognito_user_pool.pool.*.id and forces the recreation. Can anyone explain how to get around this? Having all of my domains and clients regenerated is far from ideal.
For anyone reading this: I found the following issue on GitHub - https://github.com/hashicorp/terraform/issues/14357
Changing my syntax to the following appeared to fix it.
user_pool_id = "${aws_cognito_user_pool.pool.*.id[count.index]}"
I have a requirement to create multiple VMs in GCP using the Instance Template module located here:
https://github.com/terraform-google-modules/terraform-google-vm/tree/master/modules/instance_template
My Instance Template code looks like this:
module "db_template" {
source = "terraform-google-modules/vm/google//modules/instance_template"
version = "7.8.0"
name_prefix = "${var.project_short_name}-db-template"
machine_type = var.app_machine_type
disk_size_gb = 20
source_image = "debian-10-buster-v20220719"
source_image_family = "debian-10"
source_image_project = "debian-cloud"
additional_disks = var.additional_disks
labels = {
costing = "db",
inventory = "gcp",
}
network = var.network
subnetwork = var.subnetwork
access_config = []
service_account = {
email = var.service_account_email
scopes = ["cloud-platform"]
}
tags = ["compute"]
}
In my tfvars I have this:
additional_disks = [
  {
    disk_name = "persistent-disk-1"
    device_name = "persistent-disk-1"
    auto_delete = true
    boot = false
    disk_size_gb = 50
    disk_type = "pd-standard"
    interface = "SCSI"
    disk_labels = {}
  }
]
However, when my code has multiple VMs to deploy with this template, only one VM (the first) gets deployed; the subsequent VMs error out with this message:
Error: Error creating instance: googleapi: Error 409: The resource 'projects/<PATH>/persistent-disk-1' already exists, alreadyExists
I understand what is happening, but I don't know how to fix it: the subsequent VMs cannot be created because the additional disk name has already been taken by the first VM. I thought the whole point of using an instance template was that the logic is built in so you can use the same template to create multiple VMs of that type.
But it seems I have to do some additional coding to get multiple VMs deployed with this template.
Can anyone suggest how to do this?
Ultimately I got this working with various for_each constructs:
locals {
  app_servers = ["inbox", "batch", "transfer", "tools", "elastic", "artemis"]
  db_servers = ["inboxdb", "batchdb", "transferdb", "gatewaydb", "artemisdb"]
}

resource "google_compute_disk" "db_add_disk" {
  for_each = toset(local.db_servers)
  name = "${each.value}-additional-disk"
  type = "pd-standard" // pd-ssd
  zone = var.zone
  size = 50
  // interface = "SCSI"
  labels = {
    environment = "dev"
  }
  physical_block_size_bytes = 4096
}

module "db_template" {
  source = "terraform-google-modules/vm/google//modules/instance_template"
  version = "7.8.0"
  name_prefix = "${var.project_short_name}-db-template"
  machine_type = var.app_machine_type
  disk_size_gb = 20
  source_image = "debian-10-buster-v20220719"
  source_image_family = "debian-10"
  source_image_project = "debian-cloud"
  labels = {
    costing = "db",
    inventory = "gcp",
  }
  network = var.network
  subnetwork = var.subnetwork
  access_config = []
  service_account = {
    email = var.service_account_email
    scopes = ["cloud-platform"]
  }
  tags = ["compute"]
}

resource "google_compute_instance_from_template" "db_server-1" {
  for_each = toset(local.db_servers)
  name = "${var.project_short_name}-${each.value}-1"
  zone = var.zone
  source_instance_template = module.db_template.self_link
  // Override fields from instance template
  labels = {
    costing = "db",
    inventory = "gcp",
    component = "${each.value}"
  }
  lifecycle {
    ignore_changes = [attached_disk]
  }
}

resource "google_compute_attached_disk" "db_add_disk" {
  for_each = toset(local.db_servers)
  disk = google_compute_disk.db_add_disk[each.key].id
  instance = google_compute_instance_from_template.db_server-1[each.key].id
}
Below is my Terraform code to create an AWS Cognito user pool:
resource "aws_cognito_user_pool" "CognitoUserPool" {
name = "cgup-aws-try-cogn-createcgup-001"
password_policy {
minimum_length = 8
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
temporary_password_validity_days = 7
}
lambda_config {
}
schema {
attribute_data_type = "String"
developer_only_attribute = false
mutable = false
name = "sub"
string_attribute_constraints {
max_length = "2048"
min_length = "1"
}
required = true
}
}
The full code contains several schema blocks, but I think this one is enough to show the problem.
It was exported from AWS from an existing Cognito user pool, but when I run terraform plan I get the following error:
Error: "schema.1.name" cannot be longer than 20 character
with aws_cognito_user_pool.CognitoUserPool,
on main.tf line 216, in resource "aws_cognito_user_pool" "CognitoUserPool":
216: resource "aws_cognito_user_pool" "CognitoUserPool" {
No matter how much I reduce the length of the name, I get the same error.
I tried to deploy
resource "aws_cognito_user_pool" "CognitoUserPool" {
name = "cgup-aws-try"
password_policy {
minimum_length = 8
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
temporary_password_validity_days = 7
}
lambda_config {
}
schema {
attribute_data_type = "String"
developer_only_attribute = false
mutable = false
name = "sub"
string_attribute_constraints {
max_length = "2048"
min_length = "1"
}
required = true
}
}
and it was successful.
Maybe try to start fresh in a new workspace.
I am trying to build the Terraform for a SageMaker private workforce with a private Cognito user pool.
Following: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sagemaker_workforce
It is working fine.
main.tf
resource "aws_sagemaker_workforce" "workforce" {
workforce_name = "workforce"
cognito_config {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
}
}
resource "aws_cognito_user_pool" "user_pool" {
name = "sagemaker-cognito-userpool"
}
resource "aws_cognito_user_pool_client" "congnito_client" {
name = "congnito-client"
generate_secret = true
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_group" "user_group" {
name = "user-group"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_pool_domain" "domain" {
domain = "sagemaker-user-pool-ocr-domain"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_sagemaker_workteam" "workteam" {
workteam_name = "worker-team"
workforce_name = aws_sagemaker_workforce.workforce.id
description = "worker-team"
member_definition {
cognito_member_definition {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
user_group = aws_cognito_user_group.user_group.id
}
}
}
resource "aws_sagemaker_human_task_ui" "template" {
human_task_ui_name = "human-task-ui-template"
ui_template {
content = file("${path.module}/sagemaker-human-task-ui-template.html")
}
}
resource "aws_sagemaker_flow_definition" "definition" {
flow_definition_name = "flow-definition"
role_arn = var.aws_iam_role
human_loop_config {
human_task_ui_arn = aws_sagemaker_human_task_ui.template.arn
task_availability_lifetime_in_seconds = 1
task_count = 1
task_description = "Task description"
task_title = "Please review the Key Value Pairs in this document"
workteam_arn = aws_sagemaker_workteam.workteam.arn
}
output_config {
s3_output_path = "s3://${var.s3_output_path}"
}
}
It creates the Cognito user pool with callback URLs. These callback URLs come from aws_sagemaker_workforce.workforce.subdomain and are set in Cognito automatically, which is what I want.
But I also want to set configuration on the Cognito user pool client like
allowed_oauth_flows = ["code", "implicit"]
allowed_oauth_scopes = ["email", "openid", "profile"]
Now, when I add the above two lines, I also have to add callback_urls, which I don't want to do.
I tried
allowed_oauth_flows = ["code", "implicit"]
allowed_oauth_scopes = ["email", "openid", "profile"]
callback_urls = [aws_sagemaker_workforce.workforce.subdomain]
which gives the error:
Cycle: module.sagemaker.aws_cognito_user_pool_client.congnito_client, module.sagemaker.aws_sagemaker_workforce.workforce
Both resources depend on each other, creating a cycle. I want to pass those two lines, but doing so forces me to add the callback URL as well.
Here is the final main.tf, which fails with those three lines:
resource "aws_sagemaker_workforce" "workforce" {
workforce_name = "workforce"
cognito_config {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
}
}
resource "aws_cognito_user_pool" "user_pool" {
name = "sagemaker-cognito-userpool"
}
resource "aws_cognito_user_pool_client" "congnito_client" {
name = "congnito-client"
generate_secret = true
user_pool_id = aws_cognito_user_pool.user_pool.id
explicit_auth_flows = ["ALLOW_REFRESH_TOKEN_AUTH", "ALLOW_USER_PASSWORD_AUTH", "ALLOW_CUSTOM_AUTH", "ALLOW_USER_SRP_AUTH"]
allowed_oauth_flows_user_pool_client = true
supported_identity_providers = ["COGNITO"]
allowed_oauth_flows = ["code", "implicit"]
allowed_oauth_scopes = ["email", "openid", "profile"]
callback_urls = [aws_sagemaker_workforce.workforce.subdomain]
}
resource "aws_cognito_user_group" "user_group" {
name = "user-group"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_cognito_user_pool_domain" "domain" {
domain = "sagemaker-user-pool-ocr-domain"
user_pool_id = aws_cognito_user_pool.user_pool.id
}
resource "aws_sagemaker_workteam" "workteam" {
workteam_name = "worker-team"
workforce_name = aws_sagemaker_workforce.workforce.id
description = "worker-team"
member_definition {
cognito_member_definition {
client_id = aws_cognito_user_pool_client.congnito_client.id
user_pool = aws_cognito_user_pool_domain.domain.user_pool_id
user_group = aws_cognito_user_group.user_group.id
}
}
}
resource "aws_sagemaker_human_task_ui" "template" {
human_task_ui_name = "human-task-ui-template"
ui_template {
content = file("${path.module}/sagemaker-human-task-ui-template.html")
}
}
resource "aws_sagemaker_flow_definition" "definition" {
flow_definition_name = "flow-definition"
role_arn = var.aws_iam_role
human_loop_config {
human_task_ui_arn = aws_sagemaker_human_task_ui.template.arn
task_availability_lifetime_in_seconds = 1
task_count = 1
task_description = "Task description"
task_title = "Please review the Key Value Pairs in this document"
workteam_arn = aws_sagemaker_workteam.workteam.arn
}
output_config {
s3_output_path = "s3://${var.s3_output_path}"
}
}
You do not need to specify the callback URL for the workforce. It is sufficient to specify the following in order to create the aws_cognito_user_pool_client resource:
callback_urls = [
  "https://${aws_cognito_user_pool_domain.<domain_name>.cloudfront_distribution_arn}",
]
Then you reference the user pool client in your workforce definition:
resource "aws_sagemaker_workforce" "..." {
workforce_name = "..."
cognito_config {
client_id = aws_cognito_user_pool_client.<client_name>.id
user_pool = aws_cognito_user_pool_domain.<domain_name>.user_pool_id
}
}
The existence of the callback URLs can be verified after applying the Terraform configuration by running aws cognito-idp describe-user-pool-client --user-pool-id <pool_id> --client-id <client_id>:
"UserPoolClient": {
...
"CallbackURLs": [
"https://____.cloudfront.net",
"https://____.labeling.eu-central-1.sagemaker.aws/oauth2/idpresponse"
],
"LogoutURLs": [
"https://____.labeling.eu-central-1.sagemaker.aws/logout"
],
It seems that Terraform itself does not do anything special on workforce creation (see https://github.com/hashicorp/terraform-provider-aws/blob/main/internal/service/sagemaker/workforce.go), so the callback URLs appear to be added by AWS SageMaker itself.
This means you have to instruct Terraform to ignore changes to those attributes in the aws_cognito_user_pool_client configuration:
lifecycle {
  ignore_changes = [
    callback_urls, logout_urls
  ]
}
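Putting the pieces of this answer together, the user pool client could look roughly as follows. This is a sketch that reuses the resource names from the question, not a verified configuration:

resource "aws_cognito_user_pool_client" "congnito_client" {
  name = "congnito-client"
  generate_secret = true
  user_pool_id = aws_cognito_user_pool.user_pool.id

  explicit_auth_flows = ["ALLOW_REFRESH_TOKEN_AUTH", "ALLOW_USER_PASSWORD_AUTH", "ALLOW_CUSTOM_AUTH", "ALLOW_USER_SRP_AUTH"]
  allowed_oauth_flows_user_pool_client = true
  supported_identity_providers = ["COGNITO"]
  allowed_oauth_flows = ["code", "implicit"]
  allowed_oauth_scopes = ["email", "openid", "profile"]

  # Callback URL built from the user pool domain instead of the workforce
  # subdomain, so there is no cycle with aws_sagemaker_workforce.
  callback_urls = [
    "https://${aws_cognito_user_pool_domain.domain.cloudfront_distribution_arn}",
  ]

  # SageMaker appends its own callback/logout URLs once the workforce exists,
  # so ignore drift on these attributes afterwards.
  lifecycle {
    ignore_changes = [callback_urls, logout_urls]
  }
}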
I'm using Terraform to create a Cognito user pool. I'd like to use a Lambda function for sending a custom message when a user signs up. When I attempt to sign up on the client, I get an error saying "CustomMessage invocation failed due to error AccessDeniedException." I've used Lambda permissions before, but I can't find any examples of this particular configuration. How do I give the Lambda function permission? The following is my current configuration.
resource "aws_cognito_user_pool" "main" {
name = "${var.user_pool_name}_${var.stage}"
username_attributes = [ "email" ]
schema {
attribute_data_type = "String"
mutable = true
name = "name"
required = true
}
schema {
attribute_data_type = "String"
mutable = true
name = "email"
required = true
}
password_policy {
minimum_length = "8"
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
}
mfa_configuration = "OFF"
lambda_config {
custom_message = aws_lambda_function.custom_message.arn
post_confirmation = aws_lambda_function.post_confirmation.arn
}
}
...
resource "aws_lambda_permission" "get_blog" {
statement_id = "AllowExecutionFromCognito"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.custom_message.function_name
principal = "cognito-idp.amazonaws.com"
source_arn = "${aws_cognito_user_pool.main.arn}/*/*"
depends_on = [ aws_lambda_function.custom_message ]
}
...
resource "aws_lambda_function" "custom_message" {
filename = "${var.custom_message_path}/${var.custom_message_file_name}.zip"
function_name = var.custom_message_file_name
role = aws_iam_role.custom_message.arn
handler = "${var.custom_message_file_name}.handler"
source_code_hash = filebase64sha256("${var.custom_message_path}/${var.custom_message_file_name}.zip")
runtime = "nodejs12.x"
timeout = 10
layers = [ var.node_layer_arn ]
environment {
variables = {
TABLE_NAME = var.table_name
RESOURCENAME = "blogAuthCustomMessage"
REGION = "us-west-2"
}
}
tags = {
Name = var.developer
}
depends_on = [
data.archive_file.custom_message,
]
}
Based on the OP's feedback in the comment section, changing the source_arn property in aws_lambda_permission.get_blog to aws_cognito_user_pool.main.arn works.
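For reference, the corrected permission resource would then look like this (the resource from the question with only source_arn changed):

resource "aws_lambda_permission" "get_blog" {
  statement_id = "AllowExecutionFromCognito"
  action = "lambda:InvokeFunction"
  function_name = aws_lambda_function.custom_message.function_name
  principal = "cognito-idp.amazonaws.com"
  # Scope the permission to the user pool ARN itself rather than "<arn>/*/*".
  source_arn = aws_cognito_user_pool.main.arn
  depends_on = [ aws_lambda_function.custom_message ]
}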
The Terraform resource aws_db_proxy takes a list of auth blocks as an argument; below is an example from the Terraform documentation.
Each auth block represents a user, and each user needs a secret in Secrets Manager. Our platform has four environments (dev, qa, cert, prod), and we do not use secrets in our lower environments to save on costs. Ideally, I would create two lists of auth blocks, one for the lower environments and one for the upper environments, and then pick the appropriate one in the resource based on the environment.
Is there a way to pass a list of auth blocks to the aws_db_proxy resource?
The other solution I was thinking of was to use two separate aws_db_proxy configurations and use the appropriate one for each environment using the count meta-argument. However, I think this could get a little messy.
resource "aws_db_proxy" "example" {
name = "example"
debug_logging = false
engine_family = "MYSQL"
idle_client_timeout = 1800
require_tls = true
role_arn = aws_iam_role.example.arn
vpc_security_group_ids = [aws_security_group.example.id]
vpc_subnet_ids = [aws_subnet.example.id]
auth {
auth_scheme = "SECRETS"
description = "user1"
iam_auth = "DISABLED"
secret_arn = aws_secretsmanager_secret.example1.arn
}
auth {
auth_scheme = "SECRETS"
description = "example2"
iam_auth = "DISABLED"
secret_arn = aws_secretsmanager_secret.example2.arn
}
auth {
auth_scheme = "SECRETS"
description = "example3"
iam_auth = "DISABLED"
secret_arn = aws_secretsmanager_secret.example3.arn
}
tags = {
Name = "example"
Key = "value"
}
}
You could use dynamic blocks to create auth blocks dynamically.
An example usage will depend on exactly how you are defining your aws_secretsmanager_secret for each user, but you could make that dynamic as well.
Below is sample code. I haven't run it, as its aim is to demonstrate the concept of dynamic blocks and how you could make your aws_secretsmanager_secret resources dynamic:
# list of users
variable "proxy_users" {
  default = ["user1", "example2", "example3"]
}

# secret for each user
resource "aws_secretsmanager_secret" "mysecret" {
  for_each = toset(var.proxy_users)
  name = "example${each.key}"
  # rest of attributes
}

resource "aws_db_proxy" "example" {
  name = "example"
  debug_logging = false
  engine_family = "MYSQL"
  idle_client_timeout = 1800
  require_tls = true
  role_arn = aws_iam_role.example.arn
  vpc_security_group_ids = [aws_security_group.example.id]
  vpc_subnet_ids = [aws_subnet.example.id]

  # create an auth block for each user
  dynamic "auth" {
    for_each = var.proxy_users
    content {
      auth_scheme = "SECRETS"
      description = auth.value
      iam_auth = "DISABLED"
      secret_arn = aws_secretsmanager_secret.mysecret[auth.value].arn
    }
  }

  tags = {
    Name = "example"
    Key = "value"
  }
}
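If, as in the original question, the set of proxy users should differ between the lower and upper environments, the list that drives for_each can itself be chosen per environment. A minimal sketch, assuming a hypothetical var.environment variable and illustrative user lists:

variable "environment" {
  default = "dev" # one of dev, qa, cert, prod (assumed)
}

locals {
  # hypothetical per-environment user lists
  upper_env_proxy_users = ["user1", "example2", "example3"]
  lower_env_proxy_users = ["user1"]

  # pick the list for the current environment
  proxy_users = contains(["cert", "prod"], var.environment) ? local.upper_env_proxy_users : local.lower_env_proxy_users
}

The aws_secretsmanager_secret for_each and the dynamic "auth" block above would then iterate over local.proxy_users instead of var.proxy_users.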
Thank you #Marcin, you really helped. I had the same issue, but I needed to insert existing secret ARNs. I did the following, in case anybody needs it:
locals {
  secrets_list = [
    "db-credentials/${var.env-name}/user1",
    "db-credentials/${var.env-name}/user2",
    "db-credentials/${var.env-name}/user3"
  ]
}

data "aws_secretsmanager_secret" "rds_secrets" {
  for_each = toset(local.secrets_list)
  name = each.key
}

resource "aws_db_proxy" "rds_db_proxy" {
  name = "${var.env-name}-rds-proxy"
  engine_family = "MYSQL"
  idle_client_timeout = 900
  require_tls = true
  .
  .
  .
  .
  dynamic "auth" {
    for_each = local.secrets_list
    content {
      secret_arn = data.aws_secretsmanager_secret.rds_secrets[auth.value].arn
      auth_scheme = "SECRETS"
      iam_auth = "REQUIRED"
    }
  }
}