terraform create k8s secret from gcp secret - google-cloud-platform

I have managed to achieve the flow of creating sensitive resources in Terraform without revealing the sensitive values at any point, so they are never stored in plain text in our GitHub repo. I did this by letting TF create a service account and its associated SA key, and then creating a GCP secret that references the output of the SA key, for example.
I now want to see if there's any way to do the same for some pre-defined database passwords. The flow will be slightly different:
Manually create the GCP secret (in Secret Manager) whose value is a list of plain-text database passwords that our PgBouncer instance will use (more on this later in the flow).
Import this with terraform import so the Terraform state is aware of the resource even though it was created outside of TF; the secret version in my config just has secret_data = "" (otherwise putting the plain-text passwords here would defeat the purpose!).
I now want to grab the secret_data from the google_secret_manager_secret_version to add into the kubernetes_secret so it can be used within our GKE cluster.
However, when I run terraform plan, it wants to change the value of my manually created GCP secret
# google_secret_manager_secret_version.pgbouncer-secret-uat-v1 must be replaced
-/+ resource "google_secret_manager_secret_version" "pgbouncer-secret-uat-v1" {
~ create_time = "2021-08-26T14:42:58.279432Z" -> (known after apply)
+ destroy_time = (known after apply)
~ id = "projects/********/secrets/pgbouncer-secret-uat/versions/1" -> (known after apply)
~ name = "projects/********/secrets/pgbouncer-secret-uat/versions/1" -> (known after apply)
~ secret = "projects/********/secrets/pgbouncer-secret-uat" -> "projects/*******/secrets/pgbouncer-secret-uat" # forces replacement
- secret_data = (sensitive value) # forces replacement
Any ideas how I can get around this? I want to import the Google secret version to use in Kubernetes, but not set the secret_data value in the resource, as I don't want to overwrite what I created manually. Here is the relevant Terraform config I'm talking about:
resource "google_secret_manager_secret" "pgbouncer-secret-uat" {
provider = google-beta
secret_id = "pgbouncer-secret-uat"
replication {
automatic = true
}
depends_on = [google_project_service.secretmanager]
}
resource "google_secret_manager_secret_version" "pgbouncer-secret-uat-v1" {
provider = google-beta
secret = google_secret_manager_secret.pgbouncer-secret-uat.id
secret_data = ""
}

If you just want to retrieve/read the secret without actively managing it, you can use the corresponding data source instead:
data "google_secret_manager_secret_version" "pgbouncer-secret-uat-v1" {
provider = google-beta
secret = google_secret_manager_secret.pgbouncer-secret-uat.id
}
You can then use the value in your Kubernetes cluster as a secret via the data source's exported attribute: data.google_secret_manager_secret_version.pgbouncer-secret-uat-v1.secret_data. Note that the provider marks this exported attribute as sensitive, which carries the usual consequences with it.
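For example, a minimal sketch of feeding that attribute into a kubernetes_secret might look like the following (the Kubernetes secret name and the "passwords" key are placeholders for illustration, and a configured kubernetes provider is assumed):

resource "kubernetes_secret" "pgbouncer" {
  metadata {
    name = "pgbouncer-uat" # placeholder name
  }

  data = {
    # the Google provider marks secret_data as sensitive, so plans redact it;
    # the Kubernetes provider handles the base64 encoding for you
    "passwords" = data.google_secret_manager_secret_version.pgbouncer-secret-uat-v1.secret_data
  }
}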
You can also check the full documentation for more information.
Additionally, since you imported the resource into your state to manage it within your Terraform config, you will need to remove it from your state at the same time as you remove it from your config. The easiest way is terraform state rm google_secret_manager_secret_version.pgbouncer-secret-uat-v1. After that you can safely delete the resource block from your config and keep only the data source.

Related

How to promote a Cloud SQL replica to primary using Terraform so the promoted instance stays under TF control

I am creating a GCP Cloud SQL instance using Terraform, with a cross-region Cloud SQL read replica. I am testing the DR scenario: when DR happens I promote the read replica to primary using the gcloud API (as there is no setting/resource available in Terraform to promote a replica). Because I use the gcloud command, the promoted instance and the state file are out of sync, so afterwards the promoted instance is no longer under Terraform control.
Cross-region replica setups become out of sync with the primary right after the promotion is complete. Promoting a replica is done manually and intentionally; it is not the same as high availability, where a standby instance (which is not a replica) automatically becomes the primary in case of a failure or zonal outage. You can promote the read replica manually using gcloud or the Google API, but either way the instance ends up out of sync with Terraform. So what you are looking for is simply not available when promoting a replica in Cloud SQL.
As a workaround I would suggest promoting the replica to primary outside of Terraform, and then importing the resource back into state, which brings the state file back in line with reality.
Promoting an instance to primary is not supported by Terraform's Google Cloud Provider, but there is an issue (which you should upvote if you care) to add support for this to the provider.
Here's how to work around the lack of support in the meantime. Assume you have the following minimal setup: an instance, a database, a user, and a read replica:
resource "google_sql_database_instance" "instance1" {
name = "old-primary"
region = "us-central1"
database_version = "POSTGRES_14"
}
resource "google_sql_database" "db" {
name = "test-db"
instance = google_sql_database_instance.instance1.name
}
resource "google_sql_user" "user" {
name = "test-user"
instance = google_sql_database_instance.instance1.name
password = var.db_password
}
resource "google_sql_database_instance" "instance2" {
name = "new-primary"
master_instance_name = google_sql_database_instance.instance1.name
region = "europe-west4"
database_version = "POSTGRES_14"
replica_configuration {
failover_target = false
}
}
Steps to follow:
You promote the replica out of band, either using the Console or the gcloud CLI.
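With gcloud that looks something like the following, using the replica name from the example config:
gcloud sql instances promote-replica new-primary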
Next you manually edit the state file:
# remove the old read-replica state; it's now the new primary
terraform state rm google_sql_database_instance.instance2
# import the new-primary as "instance1"
terraform state rm google_sql_database_instance.instance1
terraform import google_sql_database_instance.instance1 your-project-id/new-primary
# import the new-primary db as "db"
terraform state rm google_sql_database.db
terraform import google_sql_database.db your-project-id/new-primary/test-db
# import the new-primary user as "db"
terraform state rm google_sql_user.user
terraform import google_sql_user.user your-project-id/new-primary/test-user
Now you edit your terraform config to update the resources to match the state:
resource "google_sql_database_instance" "instance1" {
name = "new-primary" # this is the former replica's name
region = "europe-west4" # this is the former replica's region
database_version = "POSTGRES_14"
}
resource "google_sql_database" "db" {
name = "test-db"
instance = google_sql_database_instance.instance1.name
}
resource "google_sql_user" "user" {
name = "test-user"
instance = google_sql_database_instance.instance1.name
password = var.db_password
}
# this has now been promoted and is now "instance1" so the following
# block can be deleted.
# resource "google_sql_database_instance" "instance2" {
# name = "new-primary"
# master_instance_name = google_sql_database_instance.instance1.name
# region = "europe-west4"
# database_version = "POSTGRES_14"
#
# replica_configuration {
# failover_target = false
# }
# }}
}
Then you run terraform apply and see that only the user is updated in place with the existing password. (Terraform cannot read the password back from the API, and it was removed as part of the promotion, so it has to be re-applied for Terraform's sake.)
What you do with your old primary is up to you. It's no longer managed by Terraform, so either delete it manually or re-import it.
Caveats
Everyone's Terraform setup is different and so you'll probably have to iterate through the steps above until you reach the desired result.
Remember to use a testing environment first with lots of calls to terraform plan to see what's changing. Whenever a resource is marked for deletion, Terraform will report why.
Nonetheless, you can use the process above to work your way to a Terraform setup that reflects a promoted read replica. And in the meantime, upvote the issue: if it gets enough attention, the maintainers will prioritize it accordingly.

Terraform - Encrypting a db instance forces replacement

I have a postgres RDS instance in AWS that I created using terraform.
resource "aws_db_instance" "..." {
...
}
Now I'm trying to encrypt that instance by adding
resource "aws_db_instance" "..." {
...
storage_encrypted = true
}
But when I run terraform plan, it says that it's going to force replacement
# aws_db_instance.... must be replaced
...
~ storage_encrypted = false -> true # forces replacement
What can I do to prevent terraform from replacing my db instance?
Terraform is not at fault here. You simply cannot change the encryption setting on an RDS instance after it has been created. You need to create a snapshot of the current DB, copy and encrypt the snapshot, and then restore from that encrypted copy: https://aws.amazon.com/premiumsupport/knowledge-center/update-encryption-key-rds/
This will cause downtime for the DB, and Terraform does not do it for you automatically; you need to do it manually. After the DB is restored, Terraform should no longer try to replace it, since the expected config now matches the actual config.
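If you would rather drive the final cutover from Terraform after you have created the encrypted snapshot copy by hand, one option (a sketch only; the resource and snapshot names are assumptions) is to point the resource at that copy and accept the planned replacement, keeping in mind that any data written after the snapshot is lost:

resource "aws_db_instance" "example" { # hypothetical resource name
  # ... your existing settings ...

  # Restore the new instance from the manually created, encrypted snapshot copy.
  # The KMS key was chosen at the snapshot-copy step, so the restored instance
  # comes up encrypted.
  snapshot_identifier = "mydb-snapshot-encrypted" # assumed snapshot name
  storage_encrypted   = true
}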
Technically you can put ignore_changes on the storage_encrypted property, but of course that causes Terraform to simply ignore any storage-encryption changes.
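As a sketch, that lifecycle block would look like this (the resource name is a placeholder):

resource "aws_db_instance" "example" { # hypothetical resource name
  # ... existing settings, leaving storage_encrypted at its current value ...

  lifecycle {
    # Terraform will no longer plan a replacement when storage_encrypted
    # differs, but it also stops tracking that setting entirely
    ignore_changes = [storage_encrypted]
  }
}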

How to tell Terraform to skip the secret manager resource if it exists?

The idea is that I want to use the Terraform resource aws_secretsmanager_secret to create only three secrets (not workspace-specific secrets): one for the dev environment, one for preprod, and a third for production.
Something like:
resource "aws_secretsmanager_secret" "dev_secret" {
name = "example-secret-dev"
}
resource "aws_secretsmanager_secret" "preprod_secret" {
name = "example-secret-preprod"
}
resource "aws_secretsmanager_secret" "prod_secret" {
name = "example-secret-prod"
}
But after creating them, I don't want to overwrite them every time I run terraform apply. Is there a way to tell Terraform to skip creating a secret if it already exists, and not overwrite it?
I had a look at this page but it still doesn't offer a clear solution; any suggestion would be appreciated.
Terraform will not overwrite the secret's value if you set that value manually in the console or with the AWS SDK. aws_secretsmanager_secret creates only the secret itself, not its value; to set a value you have to use aws_secretsmanager_secret_version.
Anyway, this is something you can easily test yourself: run your code to create a secret, update its value in the AWS console, and re-run terraform apply. You should see no change to the secret's value.
You could have Terraform generate random secret values for you using:
data "aws_secretsmanager_random_password" "dev_password" {
password_length = 16
}
Then create the secret metadata using:
resource "aws_secretsmanager_secret" "dev_secret" {
name = "dev-secret"
recovery_window_in_days = 7
}
And then by creating the secret version:
resource "aws_secretsmanager_secret_version" "dev_sv" {
secret_id = aws_secretsmanager_secret.dev_secret.id
secret_string = data.aws_secretsmanager_random_password.dev_password.random_password
lifecycle {
ignore_changes = [secret_string, ]
}
}
Adding the ignore_changes lifecycle block to the secret version prevents Terraform from overwriting the secret once it has been created. I tested this just now to confirm that a new secret with a new random value is created, and that subsequent runs of terraform apply do not overwrite it.

Terraform Dependencies between accounts [duplicate]

I've been looking for a way to deploy to multiple AWS accounts simultaneously in Terraform and am coming up dry. AWS has the concept of doing this with Stacks, but I'm not sure if there is a way to do this in Terraform? If so, what would be some solutions?
You can read more about the Cloudformation solution here.
You can define multiple provider aliases which can be used to run actions in different regions or even different AWS accounts.
So to perform some actions in your default region (or be prompted for it if not defined in environment variables or ~/.aws/config) and also in US East 1 you'd have something like this:
provider "aws" {
# ...
}
# Cloudfront ACM certs must exist in US-East-1
provider "aws" {
alias = "cloudfront-acm-certs"
region = "us-east-1"
}
You'd then refer to them like so:
data "aws_acm_certificate" "ssl_certificate" {
provider = aws.cloudfront-acm-certs
...
}
resource "aws_cloudfront_distribution" "cloudfront" {
...
viewer_certificate {
acm_certificate_arn = data.aws_acm_certificate.ssl_certificate.arn
...
}
}
So if you want to do things across multiple accounts at the same time then you could assume a role in the other account with something like this:
provider "aws" {
# ...
}
# Assume a role in the DNS account so we can add records in the zone that lives there
provider "aws" {
alias = "dns"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
And refer to it like so:
data "aws_route53_zone" "selected" {
provider = aws.dns
name = "test.com."
}
resource "aws_route53_record" "www" {
provider = aws.dns
zone_id = data.aws_route53_zone.selected.zone_id
name = "www.${data.aws_route53_zone.selected.name"
...
}
Alternatively, you can provide credentials for different AWS accounts in a number of other ways, such as hardcoding them in the provider, using different Terraform variables, using AWS SDK-specific environment variables, or using a configured profile.
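For example, the profile-based variant might look like this minimal sketch (the profile name and region are assumptions for illustration):

provider "aws" {
  alias   = "prod"
  region  = "eu-west-1"    # assumed region
  profile = "prod-account" # assumed named profile from ~/.aws/config or ~/.aws/credentials
}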
I would recommend also combining your solution with Terraform workspaces:
Named workspaces allow conveniently switching between multiple instances of a single configuration within its single backend. They are convenient in a number of situations, but cannot solve all problems.
A common use for multiple workspaces is to create a parallel, distinct copy of a set of infrastructure in order to test a set of changes before modifying the main production infrastructure. For example, a developer working on a complex set of infrastructure changes might create a new temporary workspace in order to freely experiment with changes without affecting the default workspace.
Non-default workspaces are often related to feature branches in version control. The default workspace might correspond to the "master" or "trunk" branch, which describes the intended state of production infrastructure. When a feature branch is created to develop a change, the developer of that feature might create a corresponding workspace and deploy into it a temporary "copy" of the main infrastructure so that changes can be tested without affecting the production infrastructure. Once the change is merged and deployed to the default workspace, the test infrastructure can be destroyed and the temporary workspace deleted.
AWS S3 is in the list of the supported backends.
It is very easy to use (similar to working with git branches) and combine it with the selected AWS account.
terraform workspace list
dev
* prod
staging
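One way to combine the two is to key the provider's credentials off the current workspace, for example via a named profile per workspace. A minimal sketch (the map values and region are assumed profile names for illustration):

variable "workspace_profiles" {
  type = map(string)
  default = {
    dev     = "dev-account" # assumed profile names
    staging = "staging-account"
    prod    = "prod-account"
  }
}

provider "aws" {
  region  = "eu-west-1" # assumed region
  profile = var.workspace_profiles[terraform.workspace]
}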
A few references regarding configuring the AWS provider to work with multiple accounts:
https://terragrunt.gruntwork.io/docs/features/work-with-multiple-aws-accounts/
https://assets.ctfassets.net/hqu2g0tau160/5Od5r9RbuEYueaeeycUIcK/b5a355e684de0a842d6a3a483a7dc7d3/devopscon-V2.1.pdf
