Currently stuck in the mud trying to set up an 'app client' for an AWS Cognito User Pool through Terraform. Here is my resource as it stands:
resource "aws_cognito_user_pool" "notes-pool" {
  name                = "notes-pool"
  username_attributes = ["email"]

  verification_message_template {
    default_email_option = "CONFIRM_WITH_CODE"
  }

  password_policy {
    minimum_length    = 10
    require_lowercase = false
    require_numbers   = true
    require_symbols   = false
    require_uppercase = true
  }

  tags {
    "Name"        = "notes-pool"
    "Environment" = "production"
  }
}
The above works just fine, and my user pool is created. If anybody has any ideas on how to create an app client in the same resource, I'm all ears. I'm beginning to suspect that this functionality doesn't exist!
I believe this was just added in the most recent version of Terraform. You could do something like the following to add a client to your user pool:
resource "aws_cognito_user_pool_client" "client" {
  name                = "client"
  user_pool_id        = "${aws_cognito_user_pool.pool.id}"
  generate_secret     = true
  explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"]
}
See the docs here: Terraform entry on aws_cognito_user_pool_client
UPDATE - this is now supported by Terraform. See @cyram's answer.
This feature is not currently supported by Terraform.
There is an open issue on GitHub where this has been requested (give it a thumbs up if you would benefit from this feature).
Until support is added, the best option is to use the local-exec provisioner to create the user pool client via the CLI once the user pool resource is created:
resource "aws_cognito_user_pool" "notes-pool" {
  name                = "notes-pool"
  username_attributes = ["email"]
  ...

  provisioner "local-exec" {
    command = <<EOF
aws cognito-idp create-user-pool-client \
  --user-pool-id ${aws_cognito_user_pool.notes-pool.id} \
  --client-name client-name \
  --no-generate-secret \
  --explicit-auth-flows ADMIN_NO_SRP_AUTH
EOF
  }
}
Please note that in order to use this you must have the AWS CLI installed and authenticated (I use environment variables to authenticate with both Terraform and the AWS CLI).
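For reference, Terraform's AWS provider and the AWS CLI both honor the same standard AWS environment variables, so exporting them once in the shell covers both tools. A minimal sketch with placeholder values (substitute your own credentials):

```shell
# Placeholder credentials - replace with your own values.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
export AWS_DEFAULT_REGION="us-east-1"
```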
Once the user pool is created, you can use the create-user-pool-client API to create an app client within the user pool. Please refer to the API documentation: https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPoolClient.html
I am trying to deploy Redshift with a password generated in AWS Secrets Manager. The secret works only when I try to connect with a SQL client. I wrote this Python script:
import awswrangler as wr

# Create a Redshift table
print("Connecting to Redshift...")
con = wr.redshift.connect(secret_id=redshift_credential_secret, timeout=10)
print("Successfully connected to Redshift.")
The script tries to fetch the secret from Secrets Manager, connect to Redshift, and do some operations, but it gives this error:
redshift_connector.error.InterfaceError: ('communication error', gaierror(-2, 'Name or service not known'))
So for testing I created a secret manually in Secrets Manager, choosing the secret type "Redshift credentials", referenced it in my Python script, and it worked. But the secret I created with Terraform does not work.
It seems an ordinary secret does not work with a Redshift cluster when you try to fetch it from code; it requires changing the type of the secret in Secrets Manager.
But there is no option in Terraform to choose the secret type.
Is there any other way to deploy this solution?
Here is my code below:
# Firstly create a randomly generated password to use in secrets.
resource "random_password" "password" {
  length           = 16
  special          = true
  override_special = "!#$%&=+?"
}

# Creating an AWS secret for Redshift
resource "aws_secretsmanager_secret" "redshiftcred" {
  name                    = "redshift"
  recovery_window_in_days = 0
}

# Creating an AWS secret version for Redshift
resource "aws_secretsmanager_secret_version" "redshiftcred" {
  secret_id = aws_secretsmanager_secret.redshiftcred.id
  secret_string = jsonencode({
    engine              = "redshift"
    host                = aws_redshift_cluster.redshift_cluster.endpoint
    username            = aws_redshift_cluster.redshift_cluster.master_username
    password            = aws_redshift_cluster.redshift_cluster.master_password
    port                = "5439"
    dbClusterIdentifier = aws_redshift_cluster.redshift_cluster.cluster_identifier
  })

  depends_on = [
    aws_secretsmanager_secret.redshiftcred
  ]
}
resource "aws_redshift_cluster" "redshift_cluster" {
  cluster_identifier        = "tf-redshift-cluster"
  database_name             = lookup(var.redshift_details, "redshift_database_name")
  master_username           = "admin"
  master_password           = random_password.password.result
  node_type                 = lookup(var.redshift_details, "redshift_node_type")
  cluster_type              = lookup(var.redshift_details, "redshift_cluster_type")
  number_of_nodes           = lookup(var.redshift_details, "number_of_redshift_nodes")
  iam_roles                 = [aws_iam_role.redshift_role.arn]
  skip_final_snapshot       = true
  publicly_accessible       = true
  cluster_subnet_group_name = aws_redshift_subnet_group.redshift_subnet_group.id
  vpc_security_group_ids    = [aws_security_group.redshift.id]

  depends_on = [
    aws_iam_role.redshift_role
  ]
}
Unfortunately, Terraform does not yet support AWS::SecretsManager::SecretTargetAttachment, which CloudFormation does, and which supports the target type AWS::Redshift::Cluster.
For more information, you can check the following issue, open since 2019.
You can work around this by using Terraform to create a CloudFormation stack resource.
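A minimal sketch of that workaround, embedding the CloudFormation-only resource in an aws_cloudformation_stack (the stack name is arbitrary, and the secret and cluster references assume the resource names from the question's config):

```hcl
# Sketch: wrap the CloudFormation-only SecretTargetAttachment in a stack
# managed by Terraform. Resource references below are assumptions based on
# the question's configuration.
resource "aws_cloudformation_stack" "redshift_secret_attachment" {
  name = "redshift-secret-attachment"

  template_body = jsonencode({
    Resources = {
      SecretTargetAttachment = {
        Type = "AWS::SecretsManager::SecretTargetAttachment"
        Properties = {
          SecretId   = aws_secretsmanager_secret.redshiftcred.arn
          TargetId   = aws_redshift_cluster.redshift_cluster.id
          TargetType = "AWS::Redshift::Cluster"
        }
      }
    }
  })
}
```

Once the stack is applied, Secrets Manager treats the secret as Redshift credentials, which is what the typed secret created in the console gives you.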
How do we create a DMS endpoint for RDS using Terraform by providing the Secret Manager ARN to fetch the credentials? I looked at the documentation but I couldn't find anything.
There's currently an open feature request for DMS to natively use secrets manager to connect to your RDS instance. This has a linked pull request that initially adds support for PostgreSQL and Oracle RDS instances for now but is currently unreviewed so it's hard to know when that functionality may be released.
If you aren't using automatic secret rotation (or can rerun Terraform after the rotation) and don't mind the password being stored in the state file but still want to use the secrets stored in AWS Secrets Manager then you could have Terraform retrieve the secret from Secrets Manager at apply time and use that to configure the DMS endpoint using the username and password combination instead.
A basic example would look something like this:
data "aws_secretsmanager_secret" "example" {
  name = "example"
}

data "aws_secretsmanager_secret_version" "example" {
  secret_id = data.aws_secretsmanager_secret.example.id
}

resource "aws_dms_endpoint" "example" {
  certificate_arn             = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
  database_name               = "test"
  endpoint_id                 = "test-dms-endpoint-tf"
  endpoint_type               = "source"
  engine_name                 = "aurora"
  extra_connection_attributes = ""
  kms_key_arn                 = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
  password                    = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["password"]
  port                        = 3306
  server_name                 = "test"
  ssl_mode                    = "none"

  tags = {
    Name = "test"
  }

  username = "test"
}
Trying to assign a created role to a GCP service account, which is then used as a workload identity for a k8s deployment.
Terraform:
resource "google_project_iam_custom_role" "sign_blob_role" {
  permissions = ["iam.serviceAccounts.signBlob"]
  role_id     = "signBlob"
  title       = "Sign Blob"
}

resource "google_service_account_iam_member" "document_signer_workload" {
  service_account_id = module.document_signer_service_accounts.service_accounts_map.doc-sign.name
  role               = "roles/iam.workloadIdentityUser"
  member             = local.document_sign_sa
}

module "document_signer_service_accounts" {
  source  = "terraform-google-modules/service-accounts/google"
  version = "~> 3.0"

  project_id = var.gcp_project_name
  prefix     = "doc-sign-sa"
  names      = ["doc-sign"]
  project_roles = [
    "${var.gcp_project_name}=>roles/viewer",
    "${var.gcp_project_name}=>roles/storage.objectViewer",
    "${var.gcp_project_name}=>roles/iam.workloadIdentityUser",
    "${var.gcp_project_name}=>${google_project_iam_custom_role.sign_blob_role.name}"
  ]
  display_name = substr("GCP SA bound to K8S SA ${local.document_sign_sa}. Used to sign document.", 0, 100)
}
Error:
Error: Request "Create IAM Members roles/signBlob serviceAccount:staging-doc-sign#********************.iam.gserviceaccount.com for \"project \\\"********************\\\"\"" returned error: Error applying IAM policy for project "********************": Error setting IAM policy for project "********************": googleapi: Error 400: Role roles/signBlob is not supported for this resource., badRequest
on .terraform/modules/document_signer_service_accounts/main.tf line 46, in resource "google_project_iam_member" "project-roles":
46: resource "google_project_iam_member" "project-roles" {
When I do the same action on the UI though, it allows me to assign the role.
What am I doing wrong here?
It seems that it could be a problem in the way you are calling the custom role.
"${var.gcp_project_name}=>${google_project_iam_custom_role.sign_blob_role.name}"
The custom role already belongs to the project, so it is not necessary to specify ${var.gcp_project_name}
So, the code should be something like:
project_roles = [
  "${var.gcp_project_name}=>roles/viewer",
  "${var.gcp_project_name}=>roles/storage.objectViewer",
  "${var.gcp_project_name}=>roles/iam.workloadIdentityUser",
  "${google_project_iam_custom_role.sign_blob_role.name}"
]
Edit 1
According to this documentation
This is the basic usage of the module service-accounts
module "service_accounts" {
  source     = "terraform-google-modules/service-accounts/google"
  version    = "~> 2.0"
  project_id = "<PROJECT ID>"
  prefix     = "test-sa"
  names      = ["first", "second"]
  project_roles = [
    "project-foo=>roles/viewer",
    "project-spam=>roles/storage.objectViewer",
  ]
}
I think something is wrong with the reference to the attribute from your resource.
Nevertheless, I have found a GitHub repository that contains some good examples of how to add a custom role to a service account:
# https://www.terraform.io/docs/providers/google/r/google_project_iam.html#google_project_iam_binding
resource "google_project_iam_binding" "new-roles" {
  role    = "projects/${var.project_id}/roles/${google_project_iam_custom_role.new-custom-role.role_id}"
  members = ["serviceAccount:${google_service_account.new.email}"]
}
I think you might find it useful to complete this task.
I will start by saying I am very new to both GCP and Terraform, so I hope there is a simple answer that I have just overlooked.
I am trying to create a GCP cloud function and then make it public using Terraform. I am able to create the function but not make it public, despite closely following the documentation's example: https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
I receive the error "googleapi: Error 403: Permission 'cloudfunctions.functions.setIamPolicy' denied on resource ... (or resource may not exist)" when the google_cloudfunctions_function_iam_member resource is reached.
How can I make this function public? Does it have something to do with the account/api key I am using for credentials to create all these resources?
Thanks in advance.
my main.tf file:
provider "google" {
  project     = "my-project"
  credentials = "key.json" # compute engine default service account api key
  region      = "us-central1"
}

terraform {
  backend "gcs" {
    bucket      = "manually-created-bucket"
    prefix      = "terraform/state"
    credentials = "key.json"
  }
}

# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
  name          = "test-bucket-lh05111992"
  location      = "us-central1"
  force_destroy = true
}

# zip up function source code
data "archive_file" "my_function_script_zip" {
  type        = "zip"
  source_dir  = "../source/scripts/my-function-script"
  output_path = "../source/scripts/my-function-script.zip"
}

# add function source code to storage
resource "google_storage_bucket_object" "my_function_script_zip" {
  name   = "index.zip"
  bucket = google_storage_bucket.source_code.name
  source = "../source/scripts/my-function-script.zip"
}

# create the cloud function
resource "google_cloudfunctions_function" "function" {
  name                  = "send_my_function_script"
  description           = "This function is called in GTM. It sends a user's Google Analytics id to BigQuery."
  runtime               = "nodejs10"
  available_memory_mb   = 128
  source_archive_bucket = google_storage_bucket.source_code.name
  source_archive_object = google_storage_bucket_object.my_function_script_zip.name
  trigger_http          = true
  entry_point           = "handleRequest"
}

# IAM entry for all users to invoke the function
resource "google_cloudfunctions_function_iam_member" "invoker" {
  project        = google_cloudfunctions_function.function.project
  region         = "us-central1"
  cloud_function = google_cloudfunctions_function.function.name
  role           = "roles/cloudfunctions.invoker"
  member         = "allUsers"
}
It seems the only problem with that example from the Terraform site is the "Cloud Functions IAM resources", which were modified in Nov 2019. Now you have to specify these resources as explained here. For your use case (a public cloud function) I'd recommend following this configuration and just changing the "members" attribute to "allUsers", so it would be something like this:
resource "google_cloudfunctions_function_iam_binding" "binding" {
  project        = google_cloudfunctions_function.function.project
  region         = google_cloudfunctions_function.function.region
  cloud_function = google_cloudfunctions_function.function.name
  role           = "roles/cloudfunctions.invoker"
  members = [
    "allUsers",
  ]
}
Finally, you can give it a test and modify the functions you've already created here, using the "Try this API" panel on the right. Enter the proper resource and request body like this (make sure to enter the "resource" parameter correctly):
{
  "policy": {
    "bindings": [
      {
        "members": [
          "allUsers"
        ],
        "role": "roles/cloudfunctions.invoker"
      }
    ]
  }
}
In addition to adjusting the IAM roles as @chinoche suggested, I also discovered that I needed to modify the service account I was using to give it project owner permissions (I guess the default one I was using didn't have them). I updated my key.json and it finally worked.
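Rather than granting the broad owner role, it may be enough to grant the deploying service account a role that includes the missing cloudfunctions.functions.setIamPolicy permission, such as Cloud Functions Admin. A sketch (the project ID and service account email are hypothetical):

```hcl
# Sketch: grant the deploying service account the Cloud Functions Admin role,
# which includes cloudfunctions.functions.setIamPolicy.
# "my-project" and the service account email are placeholders.
resource "google_project_iam_member" "deployer_functions_admin" {
  project = "my-project"
  role    = "roles/cloudfunctions.admin"
  member  = "serviceAccount:deployer@my-project.iam.gserviceaccount.com"
}
```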
Terraform allows you to define the Postgres master user and password with the username and password options. But there is no option to set up an application Postgres user; how would you do that?
The AWS RDS resource is only used for creating/updating/deleting the RDS resource itself using the AWS APIs.
To create users or databases on the RDS instance itself you'd either want to use another tool (such as psql - the official command line tool or a configuration management tool such as Ansible) or use Terraform's Postgresql provider.
Assuming you've already created your RDS instance you would then connect to the instance as the master user and then create the application user with something like this:
provider "postgresql" {
  host     = "postgres_server_ip1"
  username = "postgres_user"
  password = "postgres_password"
}

resource "postgresql_role" "application_role" {
  name      = "application"
  login     = true
  password  = "application-password"
  encrypted = true
}
In addition to @ydaetskcoR's answer, here is a full example for RDS PostgreSQL:
provider "postgresql" {
  scheme    = "awspostgres"
  host      = "db.domain.name"
  port      = "5432"
  username  = "db_username"
  password  = "db_password"
  superuser = false
}

resource "postgresql_role" "new_db_role" {
  name               = "new_db_role"
  login              = true
  password           = "db_password"
  encrypted_password = true
}

resource "postgresql_database" "new_db" {
  name              = "new_db"
  owner             = postgresql_role.new_db_role.name
  template          = "template0"
  lc_collate        = "C"
  connection_limit  = -1
  allow_connections = true
}
The above two answers require that the host running Terraform has direct access to the RDS database, which you usually do not have. I propose coding what you need to do in a Lambda function (optionally with Secrets Manager for retrieving the master password):
resource "aws_lambda_function" "terraform_lambda_func" {
  filename = "${path.module}/lambda_function/lambda_function.zip"
  ...
}
and then use the following data source (example) to call the lambda function.
data "aws_lambda_invocation" "create_app_user" {
  function_name = aws_lambda_function.terraform_lambda_func.function_name

  input = <<-JSON
    {
      "step": "create_app_user"
    }
  JSON

  depends_on = [aws_lambda_function.terraform_lambda_func]
  provider   = aws.primary
}
This solution is generic: it can do anything a Lambda function calling the AWS API can do, which is basically limitless.
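For completeness, a minimal sketch of what the Lambda handler itself might look like, assuming the master credentials live in a Secrets Manager secret named "rds-master-credentials" and that a PostgreSQL driver such as pg8000 is bundled with the deployment package (the secret name, username, and password here are all hypothetical):

```python
import json


def build_create_user_sql(username, password_literal):
    # password_literal must already be safely quoted by the DB driver
    # (e.g. pg8000.native.literal); the username must be a plain identifier,
    # since PostgreSQL DDL cannot take bind parameters.
    if not username.isidentifier():
        raise ValueError("unsafe username: %r" % username)
    return "CREATE USER {} WITH LOGIN PASSWORD {}".format(
        username, password_literal
    )


def handler(event, context):
    # boto3/pg8000 are imported lazily so the pure SQL-building logic above
    # can be unit-tested without them installed.
    import boto3
    import pg8000.native

    sm = boto3.client("secretsmanager")
    secret = json.loads(
        sm.get_secret_value(SecretId="rds-master-credentials")["SecretString"]
    )
    if event.get("step") == "create_app_user":
        conn = pg8000.native.Connection(
            user=secret["username"],
            password=secret["password"],
            host=secret["host"],
            port=int(secret.get("port", 5432)),
            database="postgres",
        )
        conn.run(
            build_create_user_sql(
                "app_user", pg8000.native.literal("app-password")
            )
        )
        conn.close()
    return {"status": "ok"}
```

The "step" key matches the input passed by the aws_lambda_invocation data source above, so the same function can be extended with more steps (create databases, grant privileges) later.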