Configure Postgres application users with Terraform for RDS - amazon-web-services

Terraform lets you define the Postgres master user and password with the username and password options. But there is no option to set up an application Postgres user; how would you do that?

The AWS RDS resource is only used for creating/updating/deleting the RDS resource itself using the AWS APIs.
To create users or databases on the RDS instance itself, you'd either want to use another tool (such as psql, the official command-line client, or a configuration management tool such as Ansible) or use Terraform's PostgreSQL provider.
Assuming you've already created your RDS instance, you would then connect to the instance as the master user and create the application user with something like this:
provider "postgresql" {
host = "postgres_server_ip1"
username = "postgres_user"
password = "postgres_password"
}
resource "postgresql_role" "application_role" {
name = "application"
login = true
password = "application-password"
encrypted = true
}

In addition to ydaetskcoR's answer, here is a full example for RDS PostgreSQL:
provider "postgresql" {
scheme = "awspostgres"
host = "db.domain.name"
port = "5432"
username = "db_username"
password = "db_password"
superuser = false
}
resource "postgresql_role" "new_db_role" {
name = "new_db_role"
login = true
password = "db_password"
encrypted_password = true
}
resource "postgresql_database" "new_db" {
name = "new_db"
owner = postgresql_role.new_db_role.name
template = "template0"
lc_collate = "C"
connection_limit = -1
allow_connections = true
}

The above two answers require that the host running Terraform has direct access to the RDS database, and usually you do not. I propose coding what you need to do in a Lambda function (optionally with Secrets Manager for retrieving the master password):
resource "aws_lambda_function" "terraform_lambda_func" {
filename = "${path.module}/lambda_function/lambda_function.zip"
...
}
and then use the following data source (example) to call the Lambda function:
data "aws_lambda_invocation" "create_app_user" {
function_name = aws_lambda_function.terraform_lambda_func.function_name
input = <<-JSON
{
"step": "create_app_user"
}
JSON
depends_on = [aws_lambda_function.terraform_lambda_func]
provider = aws.primary
}
This solution is generic: it can do anything a Lambda function calling the AWS API can do, which is basically limitless.
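If the Lambda function returns a JSON payload, you can also read its response back in Terraform via the invocation's result attribute. A small sketch, assuming the function returns a "status" key (that key is hypothetical and depends on what your function actually returns):

output "create_app_user_status" {
  # "status" is a hypothetical key in the Lambda function's JSON response
  value = jsondecode(data.aws_lambda_invocation.create_app_user.result)["status"]
}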

Related

Terraform AWS Redshift and Secret Manager

I am trying to deploy Redshift with a password generated in AWS Secrets Manager.
The secret works only when I connect with a SQL client.
I wrote a Python script:
import awswrangler as wr

# Create a Redshift table
print("Connecting to Redshift...")
con = wr.redshift.connect(secret_id=redshift_credential_secret, timeout=10)
print("Successfully connected to Redshift.")
It tries to fetch the secret from Secrets Manager, connect to Redshift, and do some operations, but it gives an error:
redshift_connector.error.InterfaceError: ('communication error', gaierror(-2, 'Name or service not known'))
So for testing I created a secret manually in Secrets Manager, choosing the secret type "REDSHIFT CREDENTIALS", referenced it in my Python script, and it worked. But the secret I created with Terraform does not work.
It seems an ordinary secret does not work with a Redshift cluster when you try to fetch it programmatically; it requires changing the type of the secret in Secrets Manager.
But there is no option in Terraform to choose the secret type.
Is there any other way to deploy this solution?
Here is my code below:
# Firstly create a randomly generated password to use in secrets.
resource "random_password" "password" {
  length           = 16
  special          = true
  override_special = "!#$%&=+?"
}

# Creating an AWS secret for Redshift
resource "aws_secretsmanager_secret" "redshiftcred" {
  name                    = "redshift"
  recovery_window_in_days = 0
}

# Creating an AWS secret version for Redshift
resource "aws_secretsmanager_secret_version" "redshiftcred" {
  secret_id = aws_secretsmanager_secret.redshiftcred.id
  secret_string = jsonencode({
    engine              = "redshift"
    host                = aws_redshift_cluster.redshift_cluster.endpoint
    username            = aws_redshift_cluster.redshift_cluster.master_username
    password            = aws_redshift_cluster.redshift_cluster.master_password
    port                = "5439"
    dbClusterIdentifier = aws_redshift_cluster.redshift_cluster.cluster_identifier
  })

  depends_on = [
    aws_secretsmanager_secret.redshiftcred
  ]
}
resource "aws_redshift_cluster" "redshift_cluster" {
cluster_identifier = "tf-redshift-cluster"
database_name = lookup(var.redshift_details, "redshift_database_name")
master_username = "admin"
master_password = random_password.password.result
node_type = lookup(var.redshift_details, "redshift_node_type")
cluster_type = lookup(var.redshift_details, "redshift_cluster_type")
number_of_nodes = lookup(var.redshift_details, "number_of_redshift_nodes")
iam_roles = ["${aws_iam_role.redshift_role.arn}"]
skip_final_snapshot = true
publicly_accessible = true
cluster_subnet_group_name = aws_redshift_subnet_group.redshift_subnet_group.id
vpc_security_group_ids = [aws_security_group.redshift.id]
depends_on = [
aws_iam_role.redshift_role
]
}
Unfortunately, Terraform does not yet support AWS::SecretsManager::SecretTargetAttachment, which CloudFormation does, with AWS::Redshift::Cluster as a supported target type.
For more information, you can check the corresponding open issue, which dates back to 2019.
You can work around this by using Terraform to create a CloudFormation stack resource.
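A minimal sketch of that workaround, reusing the secret and cluster resources from the question above (untested; the stack and logical resource names are illustrative):

resource "aws_cloudformation_stack" "redshift_secret_attachment" {
  name = "redshift-secret-attachment"

  template_body = jsonencode({
    Resources = {
      SecretTargetAttachment = {
        Type = "AWS::SecretsManager::SecretTargetAttachment"
        Properties = {
          # Attaches the existing secret to the Redshift cluster, setting the secret's target type
          SecretId   = aws_secretsmanager_secret.redshiftcred.arn
          TargetId   = aws_redshift_cluster.redshift_cluster.id
          TargetType = "AWS::Redshift::Cluster"
        }
      }
    }
  })
}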

Django migrate won't work due to Postgres InsufficientPrivilege error provisioned by terraform and helm

We're hitting an issue with our Django app and are unable to find the underlying problem.
Our Django app runs on Kubernetes and is managed by Helm. When we upgrade the app, a Helm upgrade job is triggered that makes sure manage.py migrate is run. The migrate job runs with database admin privileges, not the customer's Postgres role.
The error we're getting is:
InsufficientPrivilege: permission denied for table RouteInstance
This error must have something to do with creating a reference from a new table to an existing table, but we can't find it. Maybe it's in the Terraform resource config, or maybe the grants are not sufficient. We don't use any extra grants, since the admin account should be a real admin (DigitalOcean doadmin).
Any help would be awesome; we're stuck at the moment.
Some context on our app deployment:
The Helm chart is deployed by Terraform; the application deployment consists of:
Create a subdomain record
Create a namespace
Create S3 buckets and an account
Deploy the Helm chart (helm_release); here we use a values list with a templatefile. Among these values are the variables:
postgres_user_username (used to run the app)
postgres_user_password
postgres_admin_username (used to perform the helm upgrade job containing the migrate)
postgres_admin_password
Create a postgresql_role for the customer (their app runs under the customer's Postgres role)
Create a postgresql_database for the app
Set postgresql_default_privileges (table, sequence), see config below
Terraform resource configs:
resource "postgresql_role" "postgres_role" {
name = "customer_user"
login = true
password = "redacted..."
}
resource "postgresql_database" "app_database" {
name = "app_database"
owner = postgresql_role.postgres_role.name
depends_on = [ postgresql_role.postgres_role ]
}
resource "postgresql_default_privileges" "postgres_privileges_database" {
role = postgresql_role.postgres_role.name
database = postgresql_database.app_database.name
schema = "public"
owner = postgresql_role.postgres_role.name
object_type = "table"
privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
resource "postgresql_default_privileges" "postgres_privileges_sequence" {
role = postgresql_role.postgres_role.name
database = postgresql_database.app_database.name
schema = "public"
owner = postgresql_role.postgres_role.name
object_type = "sequence"
privileges = ["USAGE", "SELECT"]
}
resource "postgresql_grant" "postgres_grant_database" {
database = postgresql_database.app_database.name
role = postgresql_role.postgres_role.name
object_type = "database"
privileges = ["CONNECT"]
}
resource "postgresql_grant" "postgres_grant_tables" {
database = postgresql_database.app_database.name
role = postgresql_role.postgres_role.name
schema = "public"
object_type = "table"
privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
resource "postgresql_grant" "postgres_grant_sequences" {
database = postgresql_database.app_database.name
role = postgresql_role.postgres_role.name
schema = "public"
object_type = "sequence"
privileges = ["USAGE", "SELECT", "UPDATE"]
}

How to use secret manager in creating DMS target endpoint for RDS

How do we create a DMS endpoint for RDS using Terraform by providing the Secret Manager ARN to fetch the credentials? I looked at the documentation but I couldn't find anything.
There's currently an open feature request for DMS to natively use Secrets Manager to connect to your RDS instance. It has a linked pull request that initially adds support for PostgreSQL and Oracle RDS instances, but that PR is currently unreviewed, so it's hard to know when the functionality may be released.
If you aren't using automatic secret rotation (or can rerun Terraform after a rotation), and you don't mind the password being stored in the state file but still want to use the secrets stored in AWS Secrets Manager, then you can have Terraform retrieve the secret from Secrets Manager at apply time and use it to configure the DMS endpoint with a username and password combination instead.
A basic example would look something like this:
data "aws_secretsmanager_secret" "example" {
name = "example"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
}
resource "aws_dms_endpoint" "example" {
certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
database_name = "test"
endpoint_id = "test-dms-endpoint-tf"
endpoint_type = "source"
engine_name = "aurora"
extra_connection_attributes = ""
kms_key_arn = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
password = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["password"]
port = 3306
server_name = "test"
ssl_mode = "none"
tags = {
Name = "test"
}
username = "test"
}
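If the secret's JSON also contains a username key, you could read the username the same way instead of hardcoding it, for example inside the same aws_dms_endpoint resource:

username = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["username"]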

aws secret manager access index

I am trying to retrieve key/value pairs from AWS Secrets Manager and pass them to my Azure SQL Server. For Secrets Manager I am using this module:
module "secrets-manager" {
source = "lgallard/secrets-manager/aws"
version = "0.4.1"
secrets = [
{
name = "secretKeyValue"
description = "Secret Key value pair"
secret_key_value = {
username = "username"
password = "password"
}
}
]
}
Then I created an azurerm SQL Server and would like to pass it the username and password. What I tried is the following code:
resource "azurerm_sql_server" "sql-server-testing" {
administrator_login = module.secrets-manager.secret_ids[0]
administrator_login_password = module.secrets-manager.secret_ids[0]
location = "westeurope"
name = "sql-name"
resource_group_name = azurerm_resource_group.azure-secrets.name
version = "12.0"
}
I am able to access Secrets Manager, but this only gives me the Amazon ARN of the resource, and I can't find a way to pass the secret's username and password to my SQL Server.
Thank you very much for any help you can provide.
1. Retrieve metadata about the Secrets Manager secret via the aws_secretsmanager_secret data source:
data "aws_secretsmanager_secret" "secrets" {
arn = module.secrets-manager.secret_ids[0]
}
data "aws_secretsmanager_secret_version" "current" {
secret_id = data.aws_secretsmanager_secret.secrets.id
}
2. Retrieve the specific values inside that secret (in the SQL Server resource):
administrator_login = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["username"]
administrator_login_password = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["password"]
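Putting it together, the SQL Server resource from the question could then look something like this (a sketch reusing the resource names from above):

resource "azurerm_sql_server" "sql-server-testing" {
  name                         = "sql-name"
  resource_group_name          = azurerm_resource_group.azure-secrets.name
  location                     = "westeurope"
  version                      = "12.0"

  # Both values come from the key/value secret created by the secrets-manager module
  administrator_login          = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["username"]
  administrator_login_password = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["password"]
}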

Terraform: How to pass output from one resource to another?

I'm using Aurora Serverless MySQL and ECS. I'm trying to use secrets generated with AWS Secrets Manager in a file named rds.tf and want to use them in another resource in a file called ecs.tf.
resource "random_password" "db_instance_aurora_password" {
length = 40
special = false
keepers = {
database_id = aws_secretsmanager_secret.db_instance_aurora_master_password.id
}
Above is rds.tf, which works and generates a random password. In my second file, ecs.tf, I want to use that password:
resource "aws_ecs_task_definition" "task" {
family = var.service_name
container_definitions = templatefile("${path.module}/templates/task_definition.tpl", {
DB_USERNAME = var.db_username
DB_PASSWORD = random_password.db_instance_aurora_password.result
})
}
How do I export the output of the db password and use it in another resource (ecs.tf)?
output "aurora_rds_cluster.master_password" {
description = "The master password"
value = random_password.db_instance_aurora_password.result }
If all Terraform files are in one directory, you can just reference the random_password resource as you do for the database, and then you might not need to output it at all.
If they are separated, you can use Terraform modules to achieve what you need. In the ECS Terraform you can reference the RDS module and you will have access to its outputs:
module "rds" {
source = "path/to/folder/with/rds/terraform"
}
resource "aws_ecs_task_definition" "task" {
family = var.service_name
container_definitions = templatefile("${path.module}/templates/task_definition.tpl", {
DB_USERNAME = var.db_username
DB_PASSWORD = module.rds.aurora_rds_cluster.master_password
})
}
Storing the password in a Terraform output stores it as plain text. Even if you use an encrypted S3 bucket for state, the password can still be accessed at least by Terraform. Another option for sharing the password is, for example, AWS Parameter Store: the module that creates the password stores it in Parameter Store, and another module that needs the password reads it from there.
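A minimal sketch of that Parameter Store approach (the parameter name is illustrative):

# In the module that creates the password
resource "aws_ssm_parameter" "db_password" {
  name  = "/rds/aurora/master-password"  # illustrative parameter name
  type  = "SecureString"
  value = random_password.db_instance_aurora_password.result
}

# In the module that needs the password
data "aws_ssm_parameter" "db_password" {
  name = "/rds/aurora/master-password"
}

# ...then reference it as data.aws_ssm_parameter.db_password.value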
P.S. You might want to add sensitive = true to the password output in order to keep the password value out of plan and apply logs.
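For example, applied to the output shown above:

output "aurora_rds_cluster_master_password" {
  description = "The master password"
  value       = random_password.db_instance_aurora_password.result
  sensitive   = true
}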