Terraform AWS Redshift and Secret Manager - amazon-web-services

I am trying to deploy Redshift with a password generated in AWS Secrets Manager. The secret works only when I connect with a SQL client.
I wrote a Python script:
import awswrangler as wr
# Create a Redshift table
print("Connecting to Redshift...")
con = wr.redshift.connect(secret_id=redshift_credential_secret, timeout=10)
print("Successfully connected to Redshift.")
It tries to fetch the secret from Secrets Manager, connect to Redshift, and do some operations, but it gives an error:
redshift_connector.error.InterfaceError: ('communication error', gaierror(-2, 'Name or service not known'))
So for testing I created a secret manually in Secrets Manager, choosing the "Redshift credentials" secret type, defined it in my Python script, and it worked. But the secret I created with Terraform does not work.
It seems an ordinary secret does not work with a Redshift cluster when you try to fetch it from a programming language; it requires changing the type of the secret in Secrets Manager.
But there is no option in Terraform to choose the secret type.
Is there any other way to deploy this solution?
Here is my code below:
# First, create a randomly generated password to use in secrets.
resource "random_password" "password" {
  length           = 16
  special          = true
  override_special = "!#$%&=+?"
}
# Creating an AWS secret for Redshift
resource "aws_secretsmanager_secret" "redshiftcred" {
  name                    = "redshift"
  recovery_window_in_days = 0
}
# Creating an AWS secret version for Redshift
resource "aws_secretsmanager_secret_version" "redshiftcred" {
  secret_id = aws_secretsmanager_secret.redshiftcred.id
  secret_string = jsonencode({
    engine              = "redshift"
    host                = aws_redshift_cluster.redshift_cluster.endpoint
    username            = aws_redshift_cluster.redshift_cluster.master_username
    password            = aws_redshift_cluster.redshift_cluster.master_password
    port                = "5439"
    dbClusterIdentifier = aws_redshift_cluster.redshift_cluster.cluster_identifier
  })
  depends_on = [
    aws_secretsmanager_secret.redshiftcred
  ]
}
resource "aws_redshift_cluster" "redshift_cluster" {
  cluster_identifier        = "tf-redshift-cluster"
  database_name             = lookup(var.redshift_details, "redshift_database_name")
  master_username           = "admin"
  master_password           = random_password.password.result
  node_type                 = lookup(var.redshift_details, "redshift_node_type")
  cluster_type              = lookup(var.redshift_details, "redshift_cluster_type")
  number_of_nodes           = lookup(var.redshift_details, "number_of_redshift_nodes")
  iam_roles                 = [aws_iam_role.redshift_role.arn]
  skip_final_snapshot       = true
  publicly_accessible       = true
  cluster_subnet_group_name = aws_redshift_subnet_group.redshift_subnet_group.id
  vpc_security_group_ids    = [aws_security_group.redshift.id]
  depends_on = [
    aws_iam_role.redshift_role
  ]
}

Unfortunately, Terraform does not yet support AWS::SecretsManager::SecretTargetAttachment, which CloudFormation does, including AWS::Redshift::Cluster as a supported target type.
For more information, you can check the related open issue, which dates back to 2019.
As a workaround, you can use Terraform to create a CloudFormation resource.
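A minimal sketch of that workaround, wrapping the unsupported resource in a CloudFormation stack managed by Terraform (the stack name is an illustrative choice; the referenced secret and cluster are the ones from the question's code):

```hcl
# Sketch: embed the CloudFormation-only resource in a stack managed by Terraform.
resource "aws_cloudformation_stack" "redshift_secret_attachment" {
  name = "redshift-secret-attachment"

  template_body = jsonencode({
    Resources = {
      SecretTargetAttachment = {
        Type = "AWS::SecretsManager::SecretTargetAttachment"
        Properties = {
          SecretId   = aws_secretsmanager_secret.redshiftcred.arn
          TargetId   = aws_redshift_cluster.redshift_cluster.id
          TargetType = "AWS::Redshift::Cluster"
        }
      }
    }
  })
}
```

This makes Secrets Manager rewrite the secret into the "Redshift credentials" shape, the same thing the console does when you pick that secret type manually.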

Related

Terraform glue connection that avoids overwriting connection_properties upon apply

I have a Terraform resource for an AWS Glue Connection, like this:
resource "aws_glue_connection" "some-connection-name" {
  name = "some-connection-name"

  physical_connection_requirements {
    availability_zone      = var.availability_zone
    security_group_id_list = var.security_group_id_list
    subnet_id              = var.subnet_id
  }

  connection_properties = {
    JDBC_CONNECTION_URL = "jdbc:postgresql://change_host_name:5432/db_name"
    JDBC_ENFORCE_SSL    = "false"
    PASSWORD            = "change_password"
    USERNAME            = "change_username"
  }
}
For context, this resource was imported, not created originally with Terraform. I have been retrofitting Terraform to an existing project by iteratively importing, planning, and applying.
Of course I do not want to save the credentials in the Terraform file. So I used placeholder values, as you can see above. After deployment, I assumed, I would be able to change the username, password, and connection URL by hand.
When I run terraform plan I get this indication that Terraform is preparing to change the Glue Connection:
~ connection_properties = (sensitive value)
Terraform plans to modify the connection_properties because they differ (intentionally) from the live configuration. But I don't want it to: I want to terraform apply my script without overwriting the credentials. Periodically applying is part of my development workflow; as things stand, I will have to manually restore the credentials after every apply.
I want to indicate to Terraform not to overwrite the remote credentials with my placeholder credentials. I tried simply omitting the connection_properties argument, but the problem remains. Is there another way to coax Terraform not to overwrite the host, username, and password upon apply?
Based on the comments.
You could use ignore_changes. Thus, the code could be:
resource "aws_glue_connection" "some-connection-name" {
  name = "some-connection-name"

  physical_connection_requirements {
    availability_zone      = var.availability_zone
    security_group_id_list = var.security_group_id_list
    subnet_id              = var.subnet_id
  }

  connection_properties = {
    JDBC_CONNECTION_URL = "jdbc:postgresql://change_host_name:5432/db_name"
    JDBC_ENFORCE_SSL    = "false"
    PASSWORD            = "change_password"
    USERNAME            = "change_username"
  }

  lifecycle {
    ignore_changes = [
      connection_properties,
    ]
  }
}

How to use secret manager in creating DMS target endpoint for RDS

How do we create a DMS endpoint for RDS using Terraform by providing the Secret Manager ARN to fetch the credentials? I looked at the documentation but I couldn't find anything.
There's currently an open feature request for DMS to natively use Secrets Manager to connect to your RDS instance. It has a linked pull request that initially adds support for PostgreSQL and Oracle RDS instances, but the PR is currently unreviewed, so it's hard to know when that functionality may be released.
If you aren't using automatic secret rotation (or can rerun Terraform after the rotation) and don't mind the password being stored in the state file but still want to use the secrets stored in AWS Secrets Manager then you could have Terraform retrieve the secret from Secrets Manager at apply time and use that to configure the DMS endpoint using the username and password combination instead.
A basic example would look something like this:
data "aws_secretsmanager_secret" "example" {
  name = "example"
}

data "aws_secretsmanager_secret_version" "example" {
  secret_id = data.aws_secretsmanager_secret.example.id
}

resource "aws_dms_endpoint" "example" {
  certificate_arn             = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
  database_name               = "test"
  endpoint_id                 = "test-dms-endpoint-tf"
  endpoint_type               = "source"
  engine_name                 = "aurora"
  extra_connection_attributes = ""
  kms_key_arn                 = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
  password                    = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["password"]
  port                        = 3306
  server_name                 = "test"
  ssl_mode                    = "none"

  tags = {
    Name = "test"
  }

  username = "test"
}

aws secret manager access index

I am trying to retrieve key-value pairs from AWS Secrets Manager and pass them to my Azure SQL Server. For Secrets Manager I am using this module:
module "secrets-manager" {
  source  = "lgallard/secrets-manager/aws"
  version = "0.4.1"

  secrets = [
    {
      name        = "secretKeyValue"
      description = "Secret Key value pair"
      secret_key_value = {
        username = "username"
        password = "password"
      }
    }
  ]
}
Then I created an azurerm SQL Server and would like to pass it the username and password. Here is what I tried:
resource "azurerm_sql_server" "sql-server-testing" {
  administrator_login          = module.secrets-manager.secret_ids[0]
  administrator_login_password = module.secrets-manager.secret_ids[0]
  location                     = "westeurope"
  name                         = "sql-name"
  resource_group_name          = azurerm_resource_group.azure-secrets.name
  version                      = "12.0"
}
I am able to access Secrets Manager, but I only get the Amazon ARN of the secret, and I can't find a way to pass the secret's username and password to my SQL Server.
Thank you very much for any help you can provide.
1- Retrieve metadata about the Secrets Manager secret via the aws_secretsmanager_secret data source:
data "aws_secretsmanager_secret" "secrets" {
  arn = module.secrets-manager.secret_ids[0]
}

data "aws_secretsmanager_secret_version" "current" {
  secret_id = data.aws_secretsmanager_secret.secrets.id
}
2- Retrieve the specific values inside that secret (in the SQL server resource):
administrator_login          = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["username"]
administrator_login_password = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["password"]

Terraform AWS Cognito App Client

Currently stuck in the mud trying to set up an 'app client' for an AWS Cognito User Pool through Terraform. Here is my resource as it stands:
resource "aws_cognito_user_pool" "notes-pool" {
  name                = "notes-pool"
  username_attributes = ["email"]

  verification_message_template {
    default_email_option = "CONFIRM_WITH_CODE"
  }

  password_policy {
    minimum_length    = 10
    require_lowercase = false
    require_numbers   = true
    require_symbols   = false
    require_uppercase = true
  }

  tags {
    "Name"        = "notes-pool"
    "Environment" = "production"
  }
}
The above works just fine, and my user pool is created. If anybody has any ideas on how to create an app client in the same resource, I'm all ears. I'm beginning to suspect that this functionality doesn't exist!
I believe this was just added in the most recent version of Terraform. You could do something like the following to add a client to your user pool:
resource "aws_cognito_user_pool_client" "client" {
  name                = "client"
  user_pool_id        = aws_cognito_user_pool.pool.id
  generate_secret     = true
  explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"]
}
See the docs: Terraform entry on aws_cognito_user_pool_client.
UPDATE - this is now supported by terraform. See #cyram's answer.
This feature is not currently supported by Terraform.
There is an open issue on GitHub where this has been requested (give it a thumbs up if you would benefit from this feature).
Until support is added, the best option is to use the local-exec provisioner to create the user pool via the CLI once the resource is created:
resource "aws_cognito_user_pool" "notes-pool" {
  name                = "notes-pool"
  username_attributes = ["email"]
  ...

  provisioner "local-exec" {
    command = <<EOF
aws cognito-idp create-user-pool-client \
  --user-pool-id ${self.id} \
  --client-name client-name \
  --no-generate-secret \
  --explicit-auth-flows ADMIN_NO_SRP_AUTH
EOF
  }
}
Please note that in order to use this you must have the AWS CLI installed and authenticated (I use environment variables to authenticate with both Terraform and the AWS CLI).
Once the user pool is created, you can use the CreateUserPoolClient API to create an app client within the user pool. Please refer to the API documentation: https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPoolClient.html

Configure Postgres application users with Terraform for RDS

Terraform allows you to define the Postgres master user and password with the username and password options. But there is no option to set up an application Postgres user; how would you do that?
The AWS RDS resource is only used for creating/updating/deleting the RDS resource itself using the AWS APIs.
To create users or databases on the RDS instance itself you'd either want to use another tool (such as psql - the official command line tool or a configuration management tool such as Ansible) or use Terraform's Postgresql provider.
Assuming you've already created your RDS instance you would then connect to the instance as the master user and then create the application user with something like this:
provider "postgresql" {
  host     = "postgres_server_ip1"
  username = "postgres_user"
  password = "postgres_password"
}

resource "postgresql_role" "application_role" {
  name      = "application"
  login     = true
  password  = "application-password"
  encrypted = true
}
In addition to ydaetskcoR's answer, here is a full example for RDS PostgreSQL:
provider "postgresql" {
  scheme    = "awspostgres"
  host      = "db.domain.name"
  port      = "5432"
  username  = "db_username"
  password  = "db_password"
  superuser = false
}

resource "postgresql_role" "new_db_role" {
  name               = "new_db_role"
  login              = true
  password           = "db_password"
  encrypted_password = true
}

resource "postgresql_database" "new_db" {
  name              = "new_db"
  owner             = postgresql_role.new_db_role.name
  template          = "template0"
  lc_collate        = "C"
  connection_limit  = -1
  allow_connections = true
}
The above two answers require that the host running Terraform has direct access to the RDS database, and usually it does not. I propose coding what you need in a Lambda function (optionally with Secrets Manager for retrieving the master password):
resource "aws_lambda_function" "terraform_lambda_func" {
  filename = "${path.module}/lambda_function/lambda_function.zip"
  ...
}
and then use the following data source (example) to call the lambda function.
data "aws_lambda_invocation" "create_app_user" {
  function_name = aws_lambda_function.terraform_lambda_func.function_name

  input = <<-JSON
  {
    "step": "create_app_user"
  }
  JSON

  depends_on = [aws_lambda_function.terraform_lambda_func]
  provider   = aws.primary
}
This solution is generic: the Lambda function can do anything the AWS API can do, which is basically limitless.
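The answer does not show the Lambda body itself; a minimal Python sketch of what such a handler might look like follows. Everything here (the user/database names, the "step" dispatch, the pg8000 driver mentioned in comments) is an illustrative assumption, not part of the original answer; the actual Secrets Manager and database calls are left as comments.

```python
import json


def build_create_user_sql(username, password):
    # Statements that would create an application user; the database
    # name "app_db" is an illustrative assumption.
    return [
        f"CREATE USER {username} WITH PASSWORD '{password}'",
        f"GRANT CONNECT ON DATABASE app_db TO {username}",
    ]


def lambda_handler(event, context):
    # Dispatch on the "step" key sent in the aws_lambda_invocation input.
    if event.get("step") == "create_app_user":
        # In a real function you would fetch the master credentials with
        # boto3 and open a connection with a driver such as pg8000, e.g.:
        #   secret = boto3.client("secretsmanager").get_secret_value(SecretId=...)
        #   conn = pg8000.connect(**json.loads(secret["SecretString"]))
        for stmt in build_create_user_sql("app_user", "app_password"):
            pass  # conn.run(stmt)
        return {"statusCode": 200, "body": json.dumps({"created": "app_user"})}
    return {"statusCode": 400, "body": "unknown step"}
```

The data source's result attribute then exposes whatever JSON the handler returns, so you can surface success or failure back into the Terraform plan.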