AWS Secrets Manager access index - amazon-web-services

I am trying to retrieve key-value pairs from AWS Secrets Manager and pass them to my Azure SQL Server. For Secrets Manager I am using this module.
module "secrets-manager" {
  source  = "lgallard/secrets-manager/aws"
  version = "0.4.1"

  secrets = [
    {
      name        = "secretKeyValue"
      description = "Secret Key value pair"
      secret_key_value = {
        username = "username"
        password = "password"
      }
    }
  ]
}
Then I created an azurerm SQL Server and would like to pass it the username and password. Here is what I tried:
resource "azurerm_sql_server" "sql-server-testing" {
  name                         = "sql-name"
  resource_group_name          = azurerm_resource_group.azure-secrets.name
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = module.secrets-manager.secret_ids[0]
  administrator_login_password = module.secrets-manager.secret_ids[0]
}
I am able to access Secrets Manager, but these references only resolve to the secret's ARN, and I can't find a way to pass the secret's username and password to my SQL Server.
Thank you very much for any help you can provide

1- Retrieve metadata about the Secrets Manager secret, via the AWS Secrets Manager data sources:
data "aws_secretsmanager_secret" "secrets" {
  arn = module.secrets-manager.secret_ids[0]
}

data "aws_secretsmanager_secret_version" "current" {
  secret_id = data.aws_secretsmanager_secret.secrets.id
}
2- Retrieve a specific value inside that secret (in the SQL Server resource):
administrator_login          = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["username"]
administrator_login_password = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["password"]
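Putting the two steps together, the SQL Server resource from the question might look like this (a sketch reusing the question's resource names and the data sources above):

```hcl
resource "azurerm_sql_server" "sql-server-testing" {
  name                = "sql-name"
  resource_group_name = azurerm_resource_group.azure-secrets.name
  location            = "westeurope"
  version             = "12.0"

  # Decode the secret's JSON payload and pick out the individual keys
  administrator_login          = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["username"]
  administrator_login_password = jsondecode(data.aws_secretsmanager_secret_version.current.secret_string)["password"]
}
```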

Related

Terraform AWS Redshift and Secret Manager

I am trying to deploy Redshift, generating its password in AWS Secrets Manager.
The secret works only when I try to connect with a SQL client.
I wrote a Python script:
import awswrangler as wr

# Create a Redshift table
print("Connecting to Redshift...")
con = wr.redshift.connect(secret_id=redshift_credential_secret, timeout=10)
print("Successfully connected to Redshift.")
It tries to fetch the secret from Secrets Manager, connect to Redshift, and do some operations, but it gives an error:
redshift_connector.error.InterfaceError: ('communication error', gaierror(-2, 'Name or service not known'))
So for testing I created a secret manually in Secrets Manager, choosing the secret type "REDSHIFT CREDENTIALS", referenced it in my Python script, and it worked. But the secret I created with Terraform does not work.
It seems an ordinary secret does not work with a Redshift cluster when you try to fetch it from a programming language; it requires changing the type of the secret in Secrets Manager.
But there is no option in Terraform to choose the secret type.
Is there any other way to deploy this solution?
Here is my code below:
# First create a randomly generated password to use in the secret.
resource "random_password" "password" {
  length           = 16
  special          = true
  override_special = "!#$%&=+?"
}

# Create an AWS secret for Redshift
resource "aws_secretsmanager_secret" "redshiftcred" {
  name                    = "redshift"
  recovery_window_in_days = 0
}

# Create an AWS secret version for Redshift
resource "aws_secretsmanager_secret_version" "redshiftcred" {
  secret_id = aws_secretsmanager_secret.redshiftcred.id
  secret_string = jsonencode({
    engine              = "redshift"
    host                = aws_redshift_cluster.redshift_cluster.endpoint
    username            = aws_redshift_cluster.redshift_cluster.master_username
    password            = aws_redshift_cluster.redshift_cluster.master_password
    port                = "5439"
    dbClusterIdentifier = aws_redshift_cluster.redshift_cluster.cluster_identifier
  })

  depends_on = [
    aws_secretsmanager_secret.redshiftcred
  ]
}
resource "aws_redshift_cluster" "redshift_cluster" {
  cluster_identifier        = "tf-redshift-cluster"
  database_name             = lookup(var.redshift_details, "redshift_database_name")
  master_username           = "admin"
  master_password           = random_password.password.result
  node_type                 = lookup(var.redshift_details, "redshift_node_type")
  cluster_type              = lookup(var.redshift_details, "redshift_cluster_type")
  number_of_nodes           = lookup(var.redshift_details, "number_of_redshift_nodes")
  iam_roles                 = [aws_iam_role.redshift_role.arn]
  skip_final_snapshot       = true
  publicly_accessible       = true
  cluster_subnet_group_name = aws_redshift_subnet_group.redshift_subnet_group.id
  vpc_security_group_ids    = [aws_security_group.redshift.id]

  depends_on = [
    aws_iam_role.redshift_role
  ]
}
Unfortunately, Terraform does not yet support AWS::SecretsManager::SecretTargetAttachment, which CloudFormation does, and which supports AWS::Redshift::Cluster as a target type.
For more information, see the corresponding GitHub issue, open since 2019.
As a workaround, you can use Terraform to create a CloudFormation stack containing that resource.
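A minimal sketch of that workaround, assuming the aws_cloudformation_stack resource and the AWS::SecretsManager::SecretTargetAttachment property names (SecretId, TargetId, TargetType); the stack name is illustrative:

```hcl
# Wrap the resource Terraform lacks in a CloudFormation stack that
# Terraform itself manages.
resource "aws_cloudformation_stack" "redshift_secret_attachment" {
  name = "redshift-secret-attachment"

  template_body = jsonencode({
    Resources = {
      SecretTargetAttachment = {
        Type = "AWS::SecretsManager::SecretTargetAttachment"
        Properties = {
          SecretId   = aws_secretsmanager_secret.redshiftcred.arn
          TargetId   = aws_redshift_cluster.redshift_cluster.cluster_identifier
          TargetType = "AWS::Redshift::Cluster"
        }
      }
    }
  })
}
```

The attachment converts the plain secret into a Redshift-typed one, so client libraries that expect the "Redshift credentials" secret type can use it.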

Google Cloud - How do I import GCP cloud SQL certificates into Secret Manager using Terraform?

My GCP cloud SQL has SSL enabled. With that, my client will require the server CA cert, client cert and key to connect to the database. The client is configured to retrieve the certs and key from Secret Manager.
I am deploying my setup using Terraform. Once the SQL instance is created, it needs to output the certs and key so that I can create them in Secret Manager. However, Secret Manager only accepts strings, while the certs and keys are output as lists.
I am quite new to Terraform, what can I do to import the SQL certs and key into Secret Manager?
The following are my Terraform code snippets:
Cloud SQL
output "server_ca_cert" {
  description = "Server ca certificate for the SQL DB"
  value       = google_sql_database_instance.instance.server_ca_cert
}

output "client_key" {
  description = "Client private key for the SQL DB"
  value       = google_sql_ssl_cert.client_cert.private_key
}

output "client_cert" {
  description = "Client cert for the SQL DB"
  value       = google_sql_ssl_cert.client_cert.cert
}
Secret Manager
module "server_ca" {
  source     = "../siac-modules/modules/secretManager"
  project_id = var.project_id
  region_id  = local.default_region
  secret_ids = local.server_ca_key
  # secret_datas = file("${path.module}/certs/server-ca.pem")
  secret_datas = module.sql_db_timeslot_manager.server_ca_cert
}
Terraform plan error
Error: Invalid value for input variable
│
│ on ..\siac-modules\modules\secretManager\variables.tf line 21:
│ 21: variable "secret_datas" {
│
│ The given value is not suitable for module.server_ca.var.secret_datas, which is sensitive: string required. Invalid value defined at 30-secret_manager.tf:71,18-63.
e.g.:
resource "google_secret_manager_secret" "my_secret" {
  secret_id = "my-secret"
  project   = "my-project"

  labels = {
    "env" = "prod"
  }

  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_version" "secret_version" {
  secret      = google_secret_manager_secret.my_secret.id
  secret_data = "c2VjcmV0"
}
I managed to solve this issue by:
1- Exporting the SQL outputs to a file.
2- Using data to read the file contents, with a dependency on the output. This dependency is key to the setup: the deployment will wait for the SQL instance to be created and the certificate exported before creating the secrets.
3- At Secret Manager, using the data as the content for secret_data.
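Those steps could be sketched like this, assuming the hashicorp/local provider (local_file resource and data source); file paths and names are illustrative:

```hcl
# 1- Export the SQL output to a file
resource "local_file" "server_ca" {
  content  = google_sql_database_instance.instance.server_ca_cert.0.cert
  filename = "${path.module}/certs/server-ca.pem"
}

# 2- Read the file back; the dependency makes Terraform wait for the export
data "local_file" "server_ca" {
  filename   = local_file.server_ca.filename
  depends_on = [local_file.server_ca]
}

# 3- In the Secret Manager module, use the data as the secret content:
# secret_datas = data.local_file.server_ca.content
```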
If I understand your question correctly, the value of your output is wrong. You know it's a list, so it should be retrieved like this:
output "server_ca_cert" {
  description = "Server ca certificate for the SQL DB"
  value       = google_sql_database_instance.instance.server_ca_cert.0.cert
}
You can find out more from the doc:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/sql_database_instance#server_ca_cert.0.cert
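Applied to the module call from the question, the corrected reference would be passed like this (a sketch; it assumes the sql_db_timeslot_manager module forwards the instance's server_ca_cert list unchanged):

```hcl
module "server_ca" {
  source     = "../siac-modules/modules/secretManager"
  project_id = var.project_id
  region_id  = local.default_region
  secret_ids = local.server_ca_key
  # Index into the list and take the cert string, instead of passing the whole list
  secret_datas = module.sql_db_timeslot_manager.server_ca_cert.0.cert
}
```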

How to add api-key to google secret-manager

With Terraform GCP provider 4.30.0, I can now create a Google Maps API key and restrict it.
resource "google_apikeys_key" "maps-api-key" {
  provider     = google-beta
  name         = "maps-api-key"
  display_name = "google-maps-api-key"
  project      = local.project_id

  restrictions {
    api_targets {
      service = "static-maps-backend.googleapis.com"
    }
    api_targets {
      service = "maps-backend.googleapis.com"
    }
    api_targets {
      service = "places-backend.googleapis.com"
    }
    browser_key_restrictions {
      allowed_referrers = [
        "https://${local.project_id}.ey.r.appspot.com/*", # raw url to the app engine service
        "*.example.com/*"                                 # Custom DNS name to access the app
      ]
    }
  }
}
The key is created and appears in the console as expected and I can see the API_KEY value.
When I deploy my app, I want it to read the API_KEY string.
My node.js app already reads secrets from secret manager, so I want to add it as a secret.
Another approach could be for the node client library to read the API credential directly, instead of using secret-manager, but I haven't found a way to do that.
I can't work out how to read the key string and store it in the secret.
The terraform resource describes the output
key_string - Output only. An encrypted and signed value held by this
key. This field can be accessed only through the GetKeyString method.
I don't know how to call this method in Terraform, to pass the value to a secret version.
This doesn't work.
v1 = { enabled = true, data = resource.google_apikeys_key.maps-api-key.GetKeyString }
Referencing attributes and arguments does not work the way you tried it. You did quote the correct attribute, but failed to reference it:
v1 = {
  enabled = true,
  data    = resource.google_apikeys_key.maps-api-key.key_string
}
Make sure to understand how referencing attributes in Terraform works [1].
[1] https://www.terraform.io/language/expressions/references#references-to-resource-attributes
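To make the key available to the app alongside the other secrets, the key string can then be written into Secret Manager (a sketch, assuming the google_secret_manager_secret and google_secret_manager_secret_version resources from the hashicorp/google provider; the secret name is illustrative):

```hcl
resource "google_secret_manager_secret" "maps_api_key" {
  secret_id = "maps-api-key"

  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_version" "maps_api_key" {
  secret = google_secret_manager_secret.maps_api_key.id
  # key_string is the output-only attribute quoted above
  secret_data = google_apikeys_key.maps-api-key.key_string
}
```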

How to use secret manager in creating DMS target endpoint for RDS

How do we create a DMS endpoint for RDS using Terraform by providing the Secret Manager ARN to fetch the credentials? I looked at the documentation but I couldn't find anything.
There's currently an open feature request for DMS to natively use Secrets Manager to connect to your RDS instance. It has a linked pull request that initially adds support for PostgreSQL and Oracle RDS instances, but the PR is currently unreviewed, so it's hard to know when that functionality may be released.
If you aren't using automatic secret rotation (or can rerun Terraform after the rotation) and don't mind the password being stored in the state file, you can still use the secrets stored in AWS Secrets Manager: have Terraform retrieve the secret at apply time and use it to configure the DMS endpoint with a username and password combination instead.
A basic example would look something like this:
data "aws_secretsmanager_secret" "example" {
  name = "example"
}

data "aws_secretsmanager_secret_version" "example" {
  secret_id = data.aws_secretsmanager_secret.example.id
}

resource "aws_dms_endpoint" "example" {
  certificate_arn             = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
  database_name               = "test"
  endpoint_id                 = "test-dms-endpoint-tf"
  endpoint_type               = "source"
  engine_name                 = "aurora"
  extra_connection_attributes = ""
  kms_key_arn                 = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
  password                    = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["password"]
  port                        = 3306
  server_name                 = "test"
  ssl_mode                    = "none"
  username                    = "test"

  tags = {
    Name = "test"
  }
}

Configure Postgres application users with Terraform for RDS

Terraform allows you to define the Postgres master user and password with the username and password options. But there is no option to set up an application Postgres user; how would you do that?
The AWS RDS resource is only used for creating/updating/deleting the RDS resource itself using the AWS APIs.
To create users or databases on the RDS instance itself, you'd either want to use another tool (such as psql, the official command line client, or a configuration management tool such as Ansible) or use Terraform's PostgreSQL provider.
Assuming you've already created your RDS instance you would then connect to the instance as the master user and then create the application user with something like this:
provider "postgresql" {
  host     = "postgres_server_ip1"
  username = "postgres_user"
  password = "postgres_password"
}

resource "postgresql_role" "application_role" {
  name      = "application"
  login     = true
  password  = "application-password"
  encrypted = true
}
In addition to ydaetskcoR's answer, here is a full example for RDS PostgreSQL:
provider "postgresql" {
  scheme    = "awspostgres"
  host      = "db.domain.name"
  port      = "5432"
  username  = "db_username"
  password  = "db_password"
  superuser = false
}

resource "postgresql_role" "new_db_role" {
  name               = "new_db_role"
  login              = true
  password           = "db_password"
  encrypted_password = true
}

resource "postgresql_database" "new_db" {
  name              = "new_db"
  owner             = postgresql_role.new_db_role.name
  template          = "template0"
  lc_collate        = "C"
  connection_limit  = -1
  allow_connections = true
}
The above two answers require that the host running Terraform has direct access to the RDS database, and usually it does not. I propose coding what you need to do in a Lambda function (optionally with Secrets Manager for retrieving the master password):
resource "aws_lambda_function" "terraform_lambda_func" {
  filename = "${path.module}/lambda_function/lambda_function.zip"
  ...
}
and then use the following data source (example) to call the lambda function.
data "aws_lambda_invocation" "create_app_user" {
  function_name = aws_lambda_function.terraform_lambda_func.function_name

  input = <<-JSON
    {
      "step": "create_app_user"
    }
  JSON

  depends_on = [aws_lambda_function.terraform_lambda_func]
  provider   = aws.primary
}
This solution is generic: it can do anything a Lambda function using the AWS API can do, which is basically limitless.
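If the Lambda function returns a JSON payload, its result can also be decoded back in Terraform (a sketch; the status field is an assumed part of the function's response, not part of the code above):

```hcl
output "create_app_user_status" {
  # aws_lambda_invocation exposes the function's JSON response as a string
  value = jsondecode(data.aws_lambda_invocation.create_app_user.result)["status"]
}
```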