Terraform resets credentials when importing existing RDS db resource

We have a bunch of existing resources on AWS that we want to import under Terraform management. One of these resources is an RDS db. So we wrote something like this:
resource "aws_db_instance" "db" {
engine = "postgres"
username = var.rds_username
password = var.rds_password
# other stuff...
}
variable "rds_username" {
type = string
default = "master_username"
}
variable "rds_password" {
type = string
default = "master_password"
sensitive = true
}
Note these are the existing master credentials. That's important. Then we did this:
terraform import aws_db_instance.db db-identifier
We then ran terraform plan a few times, tweaking the code to match the existing resource, until terraform plan finally reported no changes (which was the goal).
However, once we ran terraform apply, it reset the master credentials of the DB instance. Worse, other resources that had previously connected to that DB using these exact credentials suddenly can't anymore.
Why is this happening? Is terraform encrypting the password behind the scenes and not telling me? Why can't other services connect to the DB using the same credentials? Is there any way to import an RDS instance without resetting credentials?
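One idea we're considering is a lifecycle ignore_changes block so Terraform never re-applies the master password after the import; a rough sketch of that idea:
resource "aws_db_instance" "db" {
  engine   = "postgres"
  username = var.rds_username
  password = var.rds_password
  # other stuff...

  lifecycle {
    # never diff or re-apply the master password that is managed outside Terraform
    ignore_changes = [password]
  }
}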

Related

How to have multiple state files for a workspace in terraform gcs?

In AWS, when I set the backend to S3, I'm able to set multiple keys.
I can have one key for the database, another key for the cluster, etc.
key - (Required) Path to the state file inside the S3 Bucket. When using a non-default workspace, the state path will be /workspace_key_prefix/workspace_name/key (see also the workspace_key_prefix configuration).
Then in a module like my kube deployments, I can grab the remote state and use the cluster, separate from the DB, etc.
The GCS backend does not seem to support key.
https://www.terraform.io/language/settings/backends/gcs
It seems to dump everything into a single object named after your workspace, like myworkspace.tfstate.
So when I move from one module to another, say from DB to Network, and apply the Terraform config, it wants to nuke everything from DB when I apply in Network, and vice versa.
When I get to my deployments, I'm able to grab the remote state for each key by name, so I'd have
data "terraform_remote_state" "network" {
backend = "s3"
config = {
bucket = local.tf-state-bucket-name
key = "workspaces/${terraform.workspace}/terraform-network.tfstate"
region = "us-west-1"
}
}
and
data "terraform_remote_state" "rds" {
backend = "s3"
config = {
bucket = local.tf-state-bucket-name
key = "workspaces/${terraform.workspace}/terraform-rds.tfstate"
region = "us-west-1"
}
}
I could then grab the DB username like so
data.terraform_remote_state.rds.outputs.db_username
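For that lookup to work, the configuration that writes terraform-rds.tfstate exposes the value as a root-level output, roughly:
output "db_username" {
  value = aws_db_instance.db.username # illustrative resource address
}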
I keep trying to implement my GCP setup DB/VPC/Network/GKE to look just like my AWS setup, but it just doesn't seem supported.
It seems that the key will always be the name of the workspace?
So where in S3 I have a path like
workspaces/my-workspace/db.tfstate
would I end up in GCS with something like
db/my-workspace.tfstate
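In config terms, I think that maps to the GCS backend's prefix; a rough sketch of what I mean (the bucket name is a placeholder):
terraform {
  backend "gcs" {
    bucket = "my-tf-state-bucket" # placeholder
    prefix = "db"                 # state ends up at db/<workspace>.tfstate
  }
}

data "terraform_remote_state" "rds" {
  backend   = "gcs"
  workspace = terraform.workspace
  config = {
    bucket = "my-tf-state-bucket" # placeholder
    prefix = "db"
  }
}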

How to promote a Cloud SQL replica to primary using Terraform so the promoted instance stays under TF control

I am creating a GCP Cloud SQL instance with a cross-region read replica using Terraform. I am testing the DR scenario: when DR happens, I promote the read replica to a primary instance using the gcloud API (as there is no setting/resource available in Terraform to promote a replica). Because I use the gcloud command, the promoted instance and the state file are not in sync, so the promoted instance is no longer under Terraform control.
Cross-region replica setups fall out of sync with the primary as soon as the promotion completes. Promoting a replica is done manually and intentionally; it is not the same as high availability, where a standby instance (which is not a replica) automatically becomes the primary in case of a failure or zonal outage. You can promote the read replica manually using gcloud or the Google API, but doing so leaves the instance out of sync with Terraform. So what you are looking for does not seem to be available when promoting a Cloud SQL replica.
As a workaround I would suggest promoting the replica to primary outside of Terraform, and then importing the resource back into state, which brings the state file back in sync.
Promoting an instance to primary is not supported by Terraform's Google Cloud Provider, but there is an issue (which you should upvote if you care) to add support for this to the provider.
Here's how to work around the lack of support in the meantime. Assume you have the following minimal setup: an instance, a database, a user, and a read replica:
resource "google_sql_database_instance" "instance1" {
name = "old-primary"
region = "us-central1"
database_version = "POSTGRES_14"
}
resource "google_sql_database" "db" {
name = "test-db"
instance = google_sql_database_instance.instance1.name
}
resource "google_sql_user" "user" {
name = "test-user"
instance = google_sql_database_instance.instance1.name
password = var.db_password
}
resource "google_sql_database_instance" "instance2" {
name = "new-primary"
master_instance_name = google_sql_database_instance.instance1.name
region = "europe-west4"
database_version = "POSTGRES_14"
replica_configuration {
failover_target = false
}
}
Steps to follow:
You promote the replica out of band, either using the Console or the gcloud CLI.
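For step 1 with the gcloud CLI, that's roughly (using the replica name from the example above):
gcloud sql instances promote-replica new-primary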
Next you manually edit the state file:
# remove the old read-replica state; it's now the new primary
terraform state rm google_sql_database_instance.instance2
# import the new-primary as "instance1"
terraform state rm google_sql_database_instance.instance1
terraform import google_sql_database_instance.instance1 your-project-id/new-primary
# import the new-primary db as "db"
terraform state rm google_sql_database.db
terraform import google_sql_database.db your-project-id/new-primary/test-db
# import the new-primary user as "user"
terraform state rm google_sql_user.user
terraform import google_sql_user.user your-project-id/new-primary/test-user
Now you edit your terraform config to update the resources to match the state:
resource "google_sql_database_instance" "instance1" {
name = "new-primary" # this is the former replica's name
region = "europe-west4" # this is the former replica's region
database_version = "POSTGRES_14"
}
resource "google_sql_database" "db" {
name = "test-db"
instance = google_sql_database_instance.instance1.name
}
resource "google_sql_user" "user" {
name = "test-user"
instance = google_sql_database_instance.instance1.name
password = var.db_password
}
# this has now been promoted and is now "instance1" so the following
# block can be deleted.
# resource "google_sql_database_instance" "instance2" {
# name = "new-primary"
# master_instance_name = google_sql_database_instance.instance1.name
# region = "europe-west4"
# database_version = "POSTGRES_14"
#
# replica_configuration {
# failover_target = false
# }
# }}
}
Then you run terraform apply and see that only the user is updated in place with the existing password. (This happens because Terraform can't get the password from the API and it was removed as part of the promotion, so it has to be re-applied for Terraform's sake.)
What you do with your old primary is up to you. It's no longer managed by terraform. So either delete it manually, or re-import it.
Caveats
Everyone's Terraform setup is different and so you'll probably have to iterate through the steps above until you reach the desired result.
Remember to use a testing environment first with lots of calls to terraform plan to see what's changing. Whenever a resource is marked for deletion, Terraform will report why.
Nonetheless, you can use the process above to work your way to a terraform setup that reflects a promoted read replica. And in the meantime, upvote the issue because if it gets enough attention, the Terraform team will prioritize it accordingly.

How to create Aurora Serverless database cluster with secret manager in Terraform

I've been reading this page: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster
The example there is mainly for a provisioned database. I'm new to serverless databases; is there a Terraform example that creates a serverless Aurora database cluster (SQL db) using the secret stored in Secrets Manager?
Many thanks.
I'm guessing you want to randomize the master_password?
You can do something like this:
master_password = random_password.DatabaseMasterPassword.result
The SSM parameter can be created like so:
resource "aws_ssm_parameter" "SSMDatabaseMasterPassword" {
name = "database-master-password"
type = "SecureString"
value = random_password.DatabaseMasterPassword.result
}
The random password can be defined like so:
resource "random_password" "DatabaseMasterPassword" {
length = 24
special = true
override_special = "!#$%^*()-=+_?{}|"
}
The basic example of creating a serverless Aurora cluster is:
resource "aws_rds_cluster" "default" {
cluster_identifier = "aurora-cluster-demo"
engine = "aurora-mysql"
engine_mode = "serverless"
database_name = "myauroradb"
enable_http_endpoint = true
master_username = "root"
master_password = "chang333eme321"
backup_retention_period = 1
skip_final_snapshot = true
scaling_configuration {
auto_pause = true
min_capacity = 1
max_capacity = 2
seconds_until_auto_pause = 300
timeout_action = "ForceApplyCapacityChange"
}
}
I'm not sure what you want to do with Secrets Manager. It's not clear from your question, so I'm not providing an example for it.
The accepted answer will just create the Aurora RDS instance with a pre-set password -- but doesn't include Secrets Manager. It's a good idea to use Secrets Manager, so that your database and the applications (Lambdas, EC2, etc) can access the password from Secrets Manager, without having to copy/paste it to multiple locations (such as application configurations).
Additionally, if you generate the password with random_password it will be stored in plaintext in your terraform.tfstate file, which might be a concern. To address that concern you'd also need to enable Secrets Manager automatic secret rotation.
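As a rough sketch (resource names are illustrative, and this builds on the random_password resource from the accepted answer), storing that generated password in Secrets Manager could look something like:
resource "aws_secretsmanager_secret" "db_master" {
  name = "aurora-master-password" # illustrative name
}

resource "aws_secretsmanager_secret_version" "db_master" {
  secret_id = aws_secretsmanager_secret.db_master.id
  secret_string = jsonencode({
    username = "root"
    password = random_password.DatabaseMasterPassword.result
  })
}
Applications can then read the credentials from the secret instead of relying on copies of the password.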
Automatic rotation is a somewhat advanced configuration with Terraform. It involves:
Deploying a Lambda with access to the RDS instance and to the Secret
Configuring the rotation via the aws_secretsmanager_secret_rotation resource.
AWS provides ready-to-use Lambdas for many common rotation scenarios. The specific Lambda will vary depending on the database engine (MySQL vs. Postgres vs. SQL Server vs. Oracle, etc.), as well as whether you'll be connecting to the database with the same credentials that you're rotating.
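The rotation wiring itself is short; a minimal sketch, assuming you already have a rotation Lambda deployed (the Lambda resource referenced here is hypothetical):
resource "aws_secretsmanager_secret_rotation" "db_master" {
  secret_id           = aws_secretsmanager_secret.db_master.id
  rotation_lambda_arn = aws_lambda_function.rotation.arn # hypothetical rotation Lambda

  rotation_rules {
    automatically_after_days = 30
  }
}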
For example, when the secret rotates the process is something like:
Invoke the rotation lambda, Secrets Manager will pass the name of the secret as a parameter
The Lambda will use the details within the secret (DB host, port, username, password) to connect to RDS
The Lambda will generate a new password and run the "Update password" command, which can vary based on DB Engine
The Lambda will update the new credentials to Secrets Manager
For all this to work you'll also need to think about the permissions Lambda will need -- such as network connectivity to the RDS instance and IAM permissions to read/write secrets.
As mentioned, it's somewhat advanced -- but it results in Secrets Manager being the only persistent location of the password. Once set up it works quite nicely, and your apps can securely retrieve the password from Secrets Manager (one last tip -- it's OK to cache the secret in your app to reduce Secrets Manager calls, but be sure to flush that cache on connection failures so that your apps handle an automatic rotation).

Error while configuring Terraform S3 Backend

I am configuring S3 backend through terraform for AWS.
terraform {
  backend "s3" {}
}
On providing the values for the (S3 backend) bucket name, key, and region when running the "terraform init" command, I get the following error:
"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again."
I have declared the access & secret keys as variables in providers.tf. While running the "terraform init" command it didn't prompt for any access key or secret key.
How to resolve this issue?
When running terraform init you have to add -backend-config options for your credentials (AWS keys). So your command should look like:
terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
I also had the same issue. The easiest and most secure way to fix it is to configure an AWS profile. Even if you have properly set the AWS_PROFILE in your project, you have to mention it again in your backend.tf.
My problem was that I had already set up the AWS provider in the project as below, and it was working properly.
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}"
}
But at the end of the project I tried to configure the S3 backend, so I ran terraform init and got the same error message.
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Note that this is not enough for the Terraform backend configuration; you have to mention the AWS_PROFILE in the backend file as well.
Full Solution
I'm using the latest Terraform version at this moment, v0.13.5.
Please see the provider.tf:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}" # lets say profile is my-profile
}
For example, if your AWS_PROFILE is my-profile, then your backend.tf should be as below.
terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable "${var.AWS_PROFILE}"
  }
}
Then run terraform init.
I've faced a similar problem when renamed profile in AWS credentials file. Deleting .terraform folder, and running terraform init again resolved the problem.
If you have already set up a custom AWS profile, use the option below.
terraform init -backend-config="profile=your-profile-name"
If there is no custom profile, then make sure to add the access_key and secret_key to the default profile and try again.
Don't - add variables for secrets. It's a really really bad practice and unnecessary.
Terraform will pick up your default AWS profile, or use whatever AWS profile you set AWS_PROFILE to. If this is running in AWS you should be using an instance profile. Roles can be used too.
If you hardcode the profile into your tf code then you have to have the same profile names wherever you want to run this script, and change it for every different account it's run against.
Don't - do all this cmdline stuff, unless you like wrapper scripts or typing.
Do - Add yourself a remote_state.tf that looks like
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}
Now when you terraform init:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
The values in the provider aren't relevant to the perms for the remote_state and could even be different AWS accounts (or even another cloud provider).
Had the same issue and I was using export AWS_PROFILE as I always had. I checked my credentials which were correct.
Re-running aws configure fixed it for some reason.
I had the same issue, and below is my use case.
AWS account 1: Management account (IAM user created here and this user will assume role into Dev and Prod account)
AWS account 2: Dev environment account (Role is created here for the trusted account in this case Management account user)
AWS account 3: Prod environment account (Role is created here for the trusted account in this case Management account user)
So I created a dev-backend.conf and a prod-backend.conf file with the content below. The main point that fixed this issue was passing the "role_arn" value in the S3 backend configuration.
Define the below content in the dev-backend.conf and prod-backend.conf files:
bucket = "<your bucket name>"
key = "< your key path>"
region = "<region>"
dynamodb_table = "<db name>"
encrypt = true
profile = "< your profile>" # this profile has access key and secret key of the IAM user created in Management account
role_arn = "arn:aws:iam::<dev/prod account id>:role/<dev/prod role name >"
Initialise Terraform with the dev S3 bucket config (moving from local state to S3 state):
$ terraform init -reconfigure -backend-config="dev-backend.conf"
Apply with the dev environment variables file:
$ terraform apply --var-file="dev-app.tfvars"
Initialise Terraform with the prod S3 bucket config (switching from the dev S3 bucket to the prod S3 bucket state):
$ terraform init -reconfigure -backend-config="prod-backend.conf"
Apply with the prod environment variables file:
$ terraform apply --var-file="prod-app.tfvars"
I decided to put an end to this issue once and for all, since there are a bunch of different topics about this same issue. It mainly arises because of the different forms of authentication used while developing locally versus running a CI/CD pipeline. People tend to mix different authentication options together without taking the order of precedence into account.
When running locally you should definitely use the aws cli, since you don’t wanna have to set access keys every time you run a build. If you happen to work with multiple accounts locally you can tell the aws cli to switch profiles:
export AWS_PROFILE=my-profile
When you want to run (the same code) in a CI/CD pipeline (e.g. Github Actions, CircleCI), all you have to do is export the required environment variables within your build pipeline:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="eu-central-1"
This only works if you do not set any hard-coded configuration within the provider block. The AWS Terraform provider documentation explains the order of authentication: parameters in the provider configuration are evaluated first, then environment variables.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {}

terraform {
  backend "s3" {}
}
Before you plan or apply this, you'll have to initialize the backend:
terraform init \
  -backend-config="bucket=${TFSTATE_BUCKET}" \
  -backend-config="key=${TFSTATE_KEY}" \
  -backend-config="region=${TFSTATE_REGION}"
Best practices:
When running locally use the aws cli to authenticate. When running in a build pipeline, use environment variables to authenticate.
Keep your Terraform configuration as clean as possible, so try to avoid hard-coded settings and keep the provider block empty, so that you'll be able to authenticate dynamically.
Preferably also keep the s3 backend configuration empty and initialize this configuration from environment variables or a configuration file.
The Terraform documentation recommends including .terraform.lock.hcl in your version control so that you can discuss potential changes to your external dependencies via code review.
Setting AWS_PROFILE in a build pipeline is basically useless. Most of the time you do not have the aws cli installed at runtime. If you somehow need it, then you should probably think about splitting this into separate build pipelines.
Personally, I like to use Terragrunt as a wrapper around Terraform. One of the main reasons is that it enables you to dynamically set the backend configuration. This is not possible in plain Terraform.
If you are using LocalStack, the only thing that worked for me was this tip: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517
backend "s3" {
bucket = "curso-terraform"
key = "terraform.tfstate"
region = "us-east-1"
endpoint = "http://localhost:4566"
skip_credentials_validation = true
skip_metadata_api_check = true
force_path_style = true
dynamodb_table = "terraform_state"
dynamodb_endpoint = "http://localhost:4566"
encrypt = true
}
And don't forget to add the endpoint in provider:
provider "aws" {
region = "us-east-1"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
ec2 = "http://localhost:4566"
s3 = "http://localhost:4566"
dynamodb = "http://localhost:4566"
}
}
In my credentials file, there were two profile names one after the other, which caused the error for me. When I removed the second profile name the issue was resolved.
I experienced this issue when trying to apply some Terraform changes to an existing project. The terraform commands had been working fine, and I had even worked on the project a couple of hours before the issue started.
I was encountering the following errors:
❯ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::950456587296:role/MyRole) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I had my organization VPN turned on when running the Terraform commands, and this caused the commands to fail.
Here's how I fixed it
My VPN caused the issue (this may not apply to everyone), and turning it off fixed it.

Conn Configuration for AWS Lambda Python RDS Postgres IAM Authentication

Recently it became possible to access RDS instances with IAM users and roles. I am confused about how to configure a Python connection, since I would not be using the database authentication credentials with psycopg2.
Now I am using like this:
conn = psycopg2.connect("dbname='%s' user='%s' host='%s' password='%s'" % (db_name, db_user, db_host, db_pass))
I have no idea how to use IAM credentials to connect to the database from my Lambda function with IAM auth.
Please help.
First, you need to create an IAM policy and a DB user as described here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
Then you need to create IAM role for your Lambda function and attach the IAM policy created above to it. Your Lambda function will need to be executed with this role to be able to create a temporary DB password for the DB user.
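If you also manage the RDS side with Terraform (as in the other questions on this page), a rough sketch of those prerequisites could look like this; the DB user name and the resource-id variable are illustrative:
# The RDS instance needs iam_database_authentication_enabled = true.

data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

variable "db_resource_id" {
  description = "DbiResourceId of the RDS instance (e.g. db-ABC123)"
  type        = string
}

# Allows generating an auth token for the IAM-enabled DB user "my_db_user".
data "aws_iam_policy_document" "rds_connect" {
  statement {
    actions = ["rds-db:connect"]
    resources = [
      "arn:aws:rds-db:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:dbuser:${var.db_resource_id}/my_db_user",
    ]
  }
}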
Finally, you can generate a temporary password for your DB user (created above) within your Lambda using a code snippet like this:
from urllib.parse import quote_plus
import boto3

def get_password(rds_hostname, db_user, aws_region=None, url_encoded=True):
    if not aws_region:
        aws_region = boto3.session.Session().region_name
        if not aws_region:
            raise Exception("Error: no aws_region given and the default region is not set!")

    rds_port = 5432
    if ":" in rds_hostname:
        split_hostname = rds_hostname.split(":")
        rds_hostname = split_hostname[0]
        rds_port = int(split_hostname[1])

    rds_client = boto3.client("rds")
    password = rds_client.generate_db_auth_token(Region=aws_region,
                                                 DBHostname=rds_hostname,
                                                 Port=rds_port,
                                                 DBUsername=db_user)

    if url_encoded:
        return quote_plus(password)
    else:
        return password
Do not store the password in a long-lived variable. Get a new password on every run, since the password has limited time validity and your Lambda container might not be recycled before it expires...
Finally, create the DB connection string for whatever Python package you use (I would suggest a pure Python implementation, such as pg8000) from your RDS hostname, port, username, and the temporary password obtained with the function above (<user>:<password>@<hostname>:<port>/<db_name>).
Connecting to the RDS instance might be a bit tricky. If you don't know how to set up VPCs properly I would suggest you run your Lambda outside of a VPC and connect to the RDS instance over a public IP.
Additionally, you will probably need to enforce SSL connection and possibly include the RDS CA file in your Lambda deployment package. The exact way how to do this depends on what you use to connect (I could only describe how to do this with pymysql and sqlalchemy).
Each of these steps could be described in a tutorial of its own, but knowing about them should be enough to get you started.
Good luck!