I'm using Aurora Serverless MySQL and ECS. I'm generating a secret with AWS Secrets Manager in a file named rds.tf and want to use it in another resource in a file called ecs.tf.
resource "random_password" "db_instance_aurora_password" {
  length  = 40
  special = false

  keepers = {
    database_id = aws_secretsmanager_secret.db_instance_aurora_master_password.id
  }
}
Above is rds.tf, which works and generates a random password. In my second file, ecs.tf, I want to use the generated password:
resource "aws_ecs_task_definition" "task" {
family = var.service_name
container_definitions = templatefile("${path.module}/templates/task_definition.tpl", {
DB_USERNAME = var.db_username
DB_PASSWORD = random_password.db_instance_aurora_password.result
})
}
How do I export the generated password as an output and use it in another resource (ecs.tf)?
output "aurora_rds_cluster.master_password" {
description = "The master password"
value = random_password.db_instance_aurora_password.result }
If all Terraform files are in one directory, you can just reference the random_password resource directly, as you already do for the database, and you don't need an output at all.
If they are separated, you can use Terraform modules to achieve what you need. In the ECS configuration you can reference the RDS module and you will have access to its outputs:
module "rds" {
source = "path/to/folder/with/rds/terraform"
}
resource "aws_ecs_task_definition" "task" {
family = var.service_name
container_definitions = templatefile("${path.module}/templates/task_definition.tpl", {
DB_USERNAME = var.db_username
DB_PASSWORD = module.rds.aurora_rds_cluster.master_password
})
}
Storing the password in a Terraform output stores it in the state as plain text. Even if you use an encrypted S3 bucket for the state, the password can still be read by anyone who can run Terraform against it. Another option for sharing the password is AWS Systems Manager Parameter Store: the module that creates the password stores it in Parameter Store, and another module that needs the password reads it from there.
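A minimal sketch of that Parameter Store approach, assuming a parameter name of /myapp/db_password (the parameter name and module split are illustrative, not from the original answer):
# In the module that creates the password: store it as a SecureString.
resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/db_password"
  type  = "SecureString"
  value = random_password.db_instance_aurora_password.result
}

# In the module that needs the password: read it back (decrypted by default).
data "aws_ssm_parameter" "db_password" {
  name = "/myapp/db_password"
}

# Then reference data.aws_ssm_parameter.db_password.value where the password is needed.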
P.S. You might want to add sensitive = true to the password output in order to keep the password value out of plan and apply output.
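For example, applied to the output above:
output "aurora_rds_cluster_master_password" {
  description = "The master password"
  value       = random_password.db_instance_aurora_password.result
  sensitive   = true
}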
I have a Terraform resource for an AWS Glue Connection, like this:
resource "aws_glue_connection" "some-connection-name" {
name = "some-connection-name"
physical_connection_requirements {
availability_zone = var.availability_zone
security_group_id_list = var.security_group_id_list
subnet_id = var.subnet_id
}
connection_properties = {
JDBC_CONNECTION_URL = "jdbc:postgresql://change_host_name:5432/db_name"
JDBC_ENFORCE_SSL = "false"
PASSWORD = "change_password"
USERNAME = "change_username"
}
}
For context, this resource was imported, not created originally with Terraform. I have been retrofitting Terraform to an existing project by iteratively importing, planning, and applying.
Of course I do not want to save the credentials in the Terraform file. So I used placeholder values, as you can see above. After deployment, I assumed, I would be able to change the username, password, and connection URL by hand.
When I run terraform plan I get this indication that Terraform is preparing to change the Glue Connection:
~ connection_properties = (sensitive value)
Terraform plans to modify the connection_properties because they (intentionally) differ from the live configuration. But I don't want it to. I want to run terraform apply without overwriting the credentials. Periodically applying is part of my development workflow, and as things stand I will have to manually restore the credentials every time I apply.
I want to indicate to Terraform not to overwrite the remote credentials with my placeholder credentials. I tried simply omitting the connection_properties argument, but the problem remains. Is there another way to coax Terraform not to overwrite the host, username, and password upon apply?
Based on the comments:
You could use ignore_changes, so the code could be:
resource "aws_glue_connection" "some-connection-name" {
name = "some-connection-name"
physical_connection_requirements {
availability_zone = var.availability_zone
security_group_id_list = var.security_group_id_list
subnet_id = var.subnet_id
}
connection_properties = {
JDBC_CONNECTION_URL = "jdbc:postgresql://change_host_name:5432/db_name"
JDBC_ENFORCE_SSL = "false"
PASSWORD = "change_password"
USERNAME = "change_username"
}
lifecycle {
ignore_changes = [
connection_properties,
]
}
}
How do we create a DMS endpoint for RDS using Terraform by providing the Secrets Manager ARN to fetch the credentials? I looked at the documentation but couldn't find anything.
There's currently an open feature request for the DMS endpoint resource to natively use Secrets Manager to connect to your RDS instance. It has a linked pull request that initially adds support for PostgreSQL and Oracle RDS instances, but the pull request is currently unreviewed, so it's hard to know when that functionality may be released.
If you aren't using automatic secret rotation (or can rerun Terraform after a rotation), and you don't mind the password being stored in the state file, but you still want to use the secret stored in AWS Secrets Manager, then you could have Terraform retrieve the secret at apply time and use it to configure the DMS endpoint with a username and password combination instead.
A basic example would look something like this:
data "aws_secretsmanager_secret" "example" {
name = "example"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
}
resource "aws_dms_endpoint" "example" {
certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
database_name = "test"
endpoint_id = "test-dms-endpoint-tf"
endpoint_type = "source"
engine_name = "aurora"
extra_connection_attributes = ""
kms_key_arn = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
password = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["password"]
port = 3306
server_name = "test"
ssl_mode = "none"
tags = {
Name = "test"
}
username = "test"
}
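If your secret's JSON also contains keys such as username, host, and port (an assumption about your secret's layout, matching the format AWS typically generates for RDS-managed secrets), you could pull those connection details from the same secret instead of hard-coding them. A hedged sketch; the resource name and key names are illustrative:
locals {
  db_creds = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)
}

resource "aws_dms_endpoint" "example_from_secret" {
  endpoint_id   = "test-dms-endpoint-tf"
  endpoint_type = "source"
  engine_name   = "aurora"
  database_name = "test"
  username      = local.db_creds["username"]
  password      = local.db_creds["password"]
  server_name   = local.db_creds["host"]
  port          = local.db_creds["port"]
  ssl_mode      = "none"
}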
I am using Terraform with AWS and have been able to create an AWS Storage Gateway file gateway using the aws_storagegateway_gateway resource.
The gateway is created and its status is 'online'; however, there is no cache disk added yet in the console, which is normal as that has to be done after the gateway is created. The VM does have a disk, it is available to add in the console, and adding it there works perfectly.
However, I am trying to add the disk with Terraform once the gateway is created and cannot seem to get the code to work, or quite possibly don't understand how to get it to work.
I am trying to use the aws_storagegateway_cache resource, but I get an error on the disk_id and do not know how to return it from the gateway creation code.
Does someone have a working example of how to add the cache disk with Terraform once the gateway is created, or know how to get the disk_id so I can add it?
Adding code
provider "aws" {
access_key = "${var.access-key}"
secret_key = "${var.secret-key}"
token = "${var.token}"
region = "${var.region}"
}
resource "aws_storagegateway_gateway" "hmsgw" {
gateway_ip_address = "${var.gateway-ip-address}"
gateway_name = "${var.gateway-name}"
gateway_timezone = "${var.gateway-timezone}"
gateway_type = "${var.gateway-type}"
smb_active_directory_settings {
domain_name = "${var.domain-name}"
username = "${var.username}"
password = "${var.password}"
}
}
resource "aws_storagegateway_cache" "sgwdisk" {
disk_id = "SCSI"
gateway_arn = "${aws_storagegateway_gateway.hmsgw.arn}"
}
output "gatewayid" {
value = "${aws_storagegateway_gateway.hmsgw.arn}"
}
The error I get is:
aws_storagegateway_cache.sgwdisk: error adding Storage Gateway cache: InvalidGatewayRequestException: The specified disk does not exist.
status code: 400, request id: fda602fd-a47e-11e8-a1f4-b383e2e2e2f6
I have attempted to hard-code the disk_id as above and to use a variable. With the variable, I don't know whether the value is ever returned or even exists, so that could be the issue; I'm new to this.
Before creating the aws_storagegateway_cache resource, use a data source to get the disk ID. I am using the scripts below and they work fine.
variable "upload_disk_path" {
default = "/dev/sdb"
}
data "aws_storagegateway_local_disk" "upload_disk" {
disk_path = "${var.upload_disk_path}"
gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}
resource "aws_storagegateway_upload_buffer" "stg_upload_buffer" {
disk_id = "${data.aws_storagegateway_local_disk.upload_disk.disk_id}"
gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}
In case you are using two disks (one for the upload buffer and one for the cache), use the same code but set the default value of cache_disk_path = "/dev/sdc"; a sketch of that variant follows.
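A minimal sketch of the cache-disk variant, assuming the variable name cache_disk_path and the second disk at /dev/sdc (the data source and resource names are illustrative):
variable "cache_disk_path" {
  default = "/dev/sdc"
}

data "aws_storagegateway_local_disk" "cache_disk" {
  disk_path   = "${var.cache_disk_path}"
  gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}

resource "aws_storagegateway_cache" "stg_cache" {
  disk_id     = "${data.aws_storagegateway_local_disk.cache_disk.disk_id}"
  gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}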
If you use the AWS CLI to run aws storagegateway list-local-disks --gateway-arn [your gateway's arn] --region [gateway's region], you'll get data back that includes the disk ID.
Then, in your example code, you replace SCSI with "${gateway_arn}:[diskID from command above]" and your cache volume will be created.
One thing I've noticed, though: when I've done this and then tried to apply the same Terraform code again (in some cases even with a targeted apply of a specific resource), it wants to re-deploy the cache volume, because Terraform detects the disk ID changing to a value of "1". Passing in "1" as the value in the Terraform, however, does not seem to work.
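One possible way to quiet that re-deploy noise, not from the original answer and untested here, is the ignore_changes approach shown earlier for the Glue connection, applied to disk_id (the cache_disk_id variable below is hypothetical; supply the disk ID however you do today):
resource "aws_storagegateway_cache" "sgwdisk" {
  disk_id     = "${var.cache_disk_id}"
  gateway_arn = "${aws_storagegateway_gateway.hmsgw.arn}"

  lifecycle {
    # Assumption: ignoring post-creation drift on disk_id stops Terraform from
    # wanting to re-create the cache volume when the reported ID changes to "1".
    ignore_changes = [
      disk_id,
    ]
  }
}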
This would also work:
variable "disk_path" {
default = "/dev/sdb"
}
provider "aws" {
alias = "primary"
access_key = var.access-key
secret_key = var.secret-key
token = var.token
region = var.region
}
resource "aws_storagegateway_gateway" "hmsgw" {
gateway_ip_address = var.gateway-ip-address
gateway_name = var.gateway-name
gateway_timezone = var.gateway-timezone
gateway_type = var.gateway-type
smb_active_directory_settings {
domain_name = var.domain-name
username = var.username
password = var.password
}
}
data "aws_storagegateway_local_disk" "sgw_disk" {
disk_path = var.disk_path
gateway_arn = aws_storagegateway_gateway.hmsgw.arn
provider = aws.primary
}
resource "aws_storagegateway_cache" "sgw_cache" {
disk_id = data.aws_storagegateway_local_disk.sgw_disk.id
gateway_arn = aws_storagegateway_gateway.hmsgw.arn
provider = aws.primary
}
I need to spin up a bunch of EC2 boxes for different users. Each user should be sandboxed from all the others, so each EC2 box needs its own SSH key.
What's the best way to accomplish this in Terraform?
Almost all of the instructions I've found want me to manually create an SSH key and paste it into a terraform script.
(Bad) Examples:
https://github.com/hashicorp/terraform/issues/1243,
http://2ninjas1blog.com/terraform-assigning-an-aws-key-pair-to-your-ec2-instance-resource/
Terraform fails to import key pair with Amazon EC2
Since I need to programmatically generate unique keys for many users, this is impractical.
This doesn't seem like a difficult use case, but I can't find docs on it anywhere.
In a pinch, I could generate Terraform scripts and inject SSH keys on the fly using Bash. But that seems like exactly the kind of thing that Terraform is supposed to do in the first place.
Terraform can generate SSL/SSH private keys using the tls_private_key resource.
So if you wanted to generate SSH keys on the fly you could do something like this:
variable "key_name" {}
resource "tls_private_key" "example" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = var.key_name
public_key = tls_private_key.example.public_key_openssh
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
key_name = aws_key_pair.generated_key.key_name
tags {
Name = "HelloWorld"
}
}
output "private_key" {
value = tls_private_key.example.private_key_pem
sensitive = true
}
This will create an SSH key pair that lives in the Terraform state (it is not written to disk other than as part of the state file itself when not using remote state), create an AWS key pair based on the public key, and then create an Ubuntu 20.04 instance where the ubuntu user is accessible with the private key that was generated.
You would then have to extract the private key from the state file and provide that to the users. You could use an output to spit this straight out to stdout when Terraform is applied.
You can get the private key output with the command below:
terraform output -raw private_key
Security caveats
I should point out here that passing private keys around is generally a bad idea; you'd be much better off having developers create their own key pairs and provide you with the public key, which you (or they) can use to create an AWS key pair (potentially using the aws_key_pair resource as in the example above, and as sketched below) that can then be specified when creating instances.
In general I would only use something like the above way of generating SSH keys for very temporary dev environments that you control, so you don't need to pass private keys to anyone. If you do need to pass private keys to people, make sure you do so over a secure channel and that the Terraform state (which contains the private key in plain text) is also secured appropriately.
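A minimal sketch of that preferred approach, assuming the developer hands you a public key file named dev_user.pub (the variable, file, and key names are illustrative):
variable "dev_public_key_path" {
  description = "Path to a public key supplied by the developer"
  default     = "./dev_user.pub"
}

resource "aws_key_pair" "dev_user" {
  key_name   = "dev-user"
  public_key = file(var.dev_public_key_path)
}

# Reference aws_key_pair.dev_user.key_name in aws_instance.key_name as in the example above.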
Feb, 2022 Update:
The code below creates myKey in AWS and myKey.pem on your computer, and the created myKey and myKey.pem correspond to the same private key. (I used Terraform v0.15.4.)
resource "tls_private_key" "pk" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "kp" {
key_name = "myKey" # Create "myKey" to AWS!!
public_key = tls_private_key.pk.public_key_openssh
provisioner "local-exec" { # Create "myKey.pem" to your computer!!
command = "echo '${tls_private_key.pk.private_key_pem}' > ./myKey.pem"
}
}
Don't forget to make myKey.pem readable only by you by running the command below before you SSH to your EC2 instance.
chmod 400 myKey.pem
Otherwise the error below occurs.
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0664 for 'myKey.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "myKey.pem": bad permissions
ubuntu@35.72.30.251: Permission denied (publickey).
An extension to the previous answers that doesn't fit in a comment:
To write the generated key to a private file with correct permissions:
resource "local_file" "pem_file" {
filename = pathexpand("~/.ssh/${local.ssh_key_name}.pem")
file_permission = "600"
directory_permission = "700"
sensitive_content = tls_private_key.ssh.private_key_pem
}
However, one disadvantage of saving a file like this is that the path will end up in the Terraform state. That's not a big deal if it's just CI/CD and/or one person running terraform apply, but with more "appliers" the tfstate will get updated whenever someone different from the last applier runs apply. This creates some "update" noise; not a huge deal, but something to be aware of.
An alternative that avoids that is to save the PEM file in AWS Secrets Manager, or encrypted in S3, and provide a command to fetch it and create the local file; a sketch follows.
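A minimal sketch of the Secrets Manager variant, assuming a secret name of dev-ssh-key (the secret name is illustrative, not from the original answer):
resource "aws_secretsmanager_secret" "ssh_key" {
  name = "dev-ssh-key"
}

resource "aws_secretsmanager_secret_version" "ssh_key" {
  secret_id     = aws_secretsmanager_secret.ssh_key.id
  secret_string = tls_private_key.ssh.private_key_pem
}
Each user could then fetch it locally with something like: aws secretsmanager get-secret-value --secret-id dev-ssh-key --query SecretString --output text > ~/.ssh/dev-ssh-key.pem && chmod 600 ~/.ssh/dev-ssh-key.pem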
Adding to Kai's answer:
variable "generated_key_name" {
type = string
default = "terraform-key-pair"
description = "Key-pair generated by Terraform"
}
resource "tls_private_key" "dev_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = var.generated_key_name
public_key = tls_private_key.dev_key.public_key_openssh
provisioner "local-exec" { # Generate "terraform-key-pair.pem" in current directory
command = <<-EOT
echo '${tls_private_key.dev_key.private_key_pem}' > ./'${var.generated_key_name}'.pem
chmod 400 ./'${var.generated_key_name}'.pem
EOT
}
}
You must add this output along with @ydaetskcoR's answer (adjust the resource name to match your own tls_private_key resource):
output "ssh_key" {
description = "ssh key generated by terraform"
value = tls_private_key.asg_lc_key.private_key_pem
}
Terraform allows you to define the Postgres master user and password with the username and password options. But there is no option to set up an application Postgres user; how would you do that?
The AWS RDS resource is only used for creating/updating/deleting the RDS resource itself using the AWS APIs.
To create users or databases on the RDS instance itself, you'd either want to use another tool (such as psql, the official command line tool, or a configuration management tool such as Ansible) or use Terraform's PostgreSQL provider.
Assuming you've already created your RDS instance, you would connect to it as the master user and create the application user with something like this:
provider "postgresql" {
host = "postgres_server_ip1"
username = "postgres_user"
password = "postgres_password"
}
resource "postgresql_role" "application_role" {
name = "application"
login = true
password = "application-password"
encrypted = true
}
In addition to @ydaetskcoR's answer, here is a full example for RDS PostgreSQL:
provider "postgresql" {
scheme = "awspostgres"
host = "db.domain.name"
port = "5432"
username = "db_username"
password = "db_password"
superuser = false
}
resource "postgresql_role" "new_db_role" {
name = "new_db_role"
login = true
password = "db_password"
encrypted_password = true
}
resource "postgresql_database" "new_db" {
name = "new_db"
owner = postgresql_role.new_db_role.name
template = "template0"
lc_collate = "C"
connection_limit = -1
allow_connections = true
}
The above two answers require that the host running Terraform has direct access to the RDS database, and usually it does not. I propose coding what you need to do in a Lambda function (optionally with Secrets Manager for retrieving the master password):
resource "aws_lambda_function" "terraform_lambda_func" {
filename = "${path.module}/lambda_function/lambda_function.zip"
...
}
and then use the following data source (as an example) to call the Lambda function:
data "aws_lambda_invocation" "create_app_user" {
function_name = aws_lambda_function.terraform_lambda_func.function_name
input = <<-JSON
{
"step": "create_app_user"
}
JSON
depends_on = [aws_lambda_function.terraform_lambda_func]
provider = aws.primary
}
This solution is generic. It can do whatever a Lambda function using the AWS API can do, which is basically limitless.
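If the Lambda returns a JSON payload, you could also read the invocation result back in Terraform, for example to surface what it did (the output name below is illustrative):
output "create_app_user_result" {
  # The invocation result is a JSON string; decode it to inspect its fields.
  value = jsondecode(data.aws_lambda_invocation.create_app_user.result)
}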