Dumping Terraform output to a local file - google-cloud-platform

I want to create a file (credentials.json) within a directory, say content using Terraform.
The contents will be the output of a private service account key.
I am using the following code to create the service account and get its key to data:
resource "google_service_account" "my-account" {
account_id = "${var.account_id}"
project = "${var.project_id}"
}
resource "google_service_account_key" "my-account" {
service_account_id = "${google_service_account.my-account.name}"
}
data "google_service_account_key" "my-account" {
name = "${google_service_account_key.cd.name}"
public_key_type = "TYPE_X509_PEM_FILE"
}
How can I then dump it to a local file?
My use case is that I want to create the credentials.json to enable periodic backups of Jenkins to a Google Cloud Storage bucket.

You can use the local_file resource to write data to disk in a Terraform run.
So you could do something like the following:
resource "google_service_account" "my-account" {
account_id = "${var.account_id}"
project = "${var.project_id}"
}
resource "google_service_account_key" "my-account" {
service_account_id = "${google_service_account.my-account.name}"
}
resource "local_file" "key" {
filename = "/path/to/key/output"
content = "${base64decode(google_service_account_key.my-account.private_key)}"
}
Note that you should never need a data source to look at the outputs of a resource you are creating in the same Terraform run. In this case you can drop the google_service_account_key data source, because the resource itself is already available to you.
The benefit of data sources is when you need to look up a generated value of a resource that is either not managed by Terraform or lives in a different state file.
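For illustration, a data source lookup would only make sense if the account already existed outside this configuration — a minimal sketch, where some-existing-account is a placeholder account_id:

data "google_service_account" "existing" {
  # Looks up a service account that was NOT created in this configuration
  account_id = "some-existing-account"
  project    = "${var.project_id}"
}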

Your best bet would be to create an output for your service account key:
output "google_service_account_key" {
value = "${base64decode(data.google_service_account_key.my-account.private_key)}"
}
With the terraform output command you can then query specifically for the key, combining it with jq (or another JSON parser) to extract the value:
terraform output -json google_service_account_key | jq '.value[0]' > local_file.json
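On newer Terraform versions (0.14 and later), the -raw flag avoids the jq step entirely — a minimal sketch using the output name from the example above:

# Write the decoded key straight to a file
terraform output -raw google_service_account_key > credentials.json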

Related

Terraform GCP executes resources in wrong order

I have this main.tf file:
provider "google" {
project = var.projNumber
region = var.regName
zone = var.zoneName
}
resource "google_storage_bucket" "bucket_for_python_application" {
name = "python_bucket_exam"
location = var.regName
force_destroy = true
}
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = "python_bucket_exam"
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = "python_bucket_exam"
}
When executed the first time it worked fine, but after terraform destroy and then terraform plan -> terraform apply again, I noticed that Terraform tries to create the objects before actually creating the bucket.
Of course it can't create an object inside something that doesn't exist. Why is that?
You have to create a dependency between your objects and your bucket (see code below). Otherwise, Terraform won't know that it has to create the bucket first and then the objects. This is related to how Terraform stores the resources in a directed graph.
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
By doing this, you declare an implicit order: bucket first, then objects. This is equivalent to using depends_on in your google_storage_bucket_object resources (shown below for comparison), but in this particular case I recommend referencing the bucket in your objects rather than using an explicit depends_on.
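For comparison, the explicit depends_on variant mentioned above would look roughly like this — a sketch reusing the resource names from the example:

resource "google_storage_bucket_object" "file-hello-py" {
  name   = "src/hello.py"
  source = "app-files/src/hello.py"
  bucket = "python_bucket_exam"

  # Explicit ordering: create the bucket before this object
  depends_on = [google_storage_bucket.bucket_for_python_application]
}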

Terraform: How to pass output from one resource to another?

I'm using Aurora Serverless MySQL and ECS, and I'm trying to use secrets generated by AWS Secrets Manager in a file named rds.tf and use them in another resource in a file called ecs.tf.
resource "random_password" "db_instance_aurora_password" {
length = 40
special = false
keepers = {
database_id = aws_secretsmanager_secret.db_instance_aurora_master_password.id
}
Above is rds.tf, which works and generates a random password. In my second file, ecs.tf, I want to use the generated password:
resource "aws_ecs_task_definition" "task" {
family = var.service_name
container_definitions = templatefile("${path.module}/templates/task_definition.tpl", {
DB_USERNAME = var.db_username
DB_PASSWORD = random_password.db_instance_aurora_password.result
})
}
How do I export the output of the db_password and use it in another resource (ecs.tf)?
output "aurora_rds_cluster.master_password" {
description = "The master password"
value = random_password.db_instance_aurora_password.result }
If all Terraform files are in one directory, you can just reference the random_password resource directly, as you do for the database. Then you might not need to output it at all.
If they are separated, you can use Terraform modules to achieve what you need. In the ECS configuration you can reference the RDS module and you will have access to its outputs:
module "rds" {
source = "path/to/folder/with/rds/terraform"
}
resource "aws_ecs_task_definition" "task" {
family = var.service_name
container_definitions = templatefile("${path.module}/templates/task_definition.tpl", {
DB_USERNAME = var.db_username
DB_PASSWORD = module.rds.aurora_rds_cluster.master_password
})
}
Storing the password in Terraform's output will store it as plain text. Even if you use an encrypted S3 bucket for state, the password can still be accessed, at least by Terraform. Another option for sharing the password is AWS Systems Manager Parameter Store: the module that creates the password stores it in Parameter Store, and another module that needs the password reads it from there.
P.S. You might want to add sensitive = true to the password output to keep the password value out of logs, as sketched below.
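A minimal sketch of both suggestions — the Parameter Store hand-off and the sensitive output. The parameter name /myapp/db_password is illustrative:

# Producer side: store the generated password in Parameter Store
resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/db_password"
  type  = "SecureString"
  value = random_password.db_instance_aurora_password.result
}

# Consumer side: read it back where it is needed
data "aws_ssm_parameter" "db_password" {
  name = "/myapp/db_password"
}

# If you keep the output, mark it sensitive so the value is redacted in CLI output
output "aurora_rds_cluster_master_password" {
  description = "The master password"
  value       = random_password.db_instance_aurora_password.result
  sensitive   = true
}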

Terraform for Firestore creation

Is it possible to automate GCP Firestore creation using Terraform or another tool? I cannot find anything about it in the docs.
Regards
See https://cloud.google.com/firestore/docs/solutions/automate-database-create#create_a_database_with_terraform
Set the database_type to CLOUD_FIRESTORE or CLOUD_DATASTORE_COMPATIBILITY.
provider "google" {
credentials = file("credentials-file")
}
resource "google_project" "my_project" {
name = "My Project"
project_id = "project-id"
}
resource "google_app_engine_application" "app" {
project = google_project.my_project.project_id
location_id = "location"
database_type = "CLOUD_FIRESTORE"
}
Update 7/23/20: see automating database creation.
You can enable Firestore using the google_project_service resource:
resource "google_project_service" "firestore" {
project = var.project_id
service = "firestore.googleapis.com"
disable_dependent_services = true
}
Edit: I don't see any way to create the database itself; however, you can use the google_firebase_project_location resource to set the location of Firestore (this will also set the App Engine location and the location of the default bucket).
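A rough sketch of that resource — it lives in the google-beta provider and assumes the project has already been added to Firebase; the location_id value is illustrative:

resource "google_firebase_project_location" "default" {
  provider = google-beta

  # Sets the Firestore / App Engine / default bucket location for the project
  project     = var.project_id
  location_id = "europe-west"
}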

Deploy resources only if a file exists in Terraform

I have a requirement to deploy a resource only if a certain file exists at a certain location; otherwise the resource should be skipped.
For example, here is the code that deploys a certain identity provider to certain AWS accounts. Along with this identity provider (say abc), many other identity providers are also deployed from the same main.tf file, so everything has to stay in this file. The only challenge is that while deploying the IAM layer for any account, we should only deploy this particular resource if the abc-${var.aws_account}.xml file referenced in saml_metadata_document exists at that path. If it does not exist, Terraform should simply skip creating the resource and go ahead with the rest of the code.
resource "aws_iam_saml_provider" "xyz" {
name = "abc-${var.aws_account}"
saml_metadata_document = "${file("${path.module}/metadata/abc-${var.aws_account}.xml")}"
}
Folder structure:
IAM-Module
├── main.tf
├── variables.tf
└── metadata
    ├── abc-127367223.xml
    ├── abc-983297832.xml
    └── abc-342374384.xml
How can a conditional check be put in Terraform 0.11 to check whether the file exists?
count can be used to create an array of resources instead of just a single resource, so setting count = 0 will create an array of resources of length 0, effectively disabling the resource.
resource "aws_iam_saml_provider" "xyz" {
name = "abc-${var.aws_account}"
saml_metadata_document = "${file("${path.module}/metadata/abc-${var.aws_account}.xml")}"
count = fileexists("${path.module}/metadata/abc-${var.aws_account}.xml") ? 1 : 0
}
NOTE: this relies on fileexists, which is only available in Terraform 0.12 and later.
If it is an option, use the file size instead of the file's existence: if the file size is zero, do not create the resource; otherwise, create it.
data "local_file" "hoge" {
filename = "${path.module}/hoge"
}
resource "null_resource" "hoge" {
count = length(data.local_file.hoge.content) > 0 ? 1 : 0
provisioner "local-exec" {
command = <<EOF
cat "${path.module}/${data.local_file.hoge.filename}"
EOF
}
}
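Applied to the original aws_iam_saml_provider resource, the same size-check idea might look like this — a sketch that assumes the metadata file always exists (possibly empty), since the local_file data source fails on a missing file:

data "local_file" "saml_metadata" {
  filename = "${path.module}/metadata/abc-${var.aws_account}.xml"
}

resource "aws_iam_saml_provider" "xyz" {
  # Create the provider only when the metadata file is non-empty
  count = length(data.local_file.saml_metadata.content) > 0 ? 1 : 0

  name                   = "abc-${var.aws_account}"
  saml_metadata_document = data.local_file.saml_metadata.content
}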

Terraform overwriting state file on remote backend

Most probably I am doing something wrong or missing something here.
This is what my Terraform template looks like:
locals {
  credentials_file_path = "~/gcp-auth/account.json"
}

terraform {
  backend "gcs" {
    bucket      = "somebucket-tf-state"
    prefix      = "terraform/state/"
    credentials = "~/gcp-auth/account.json"
  }
}

provider "google" {
  region      = "${var.region}"
  credentials = "${file(local.credentials_file_path)}"
}

module "project" {
  source          = "../modules/gcp-project/"
  project_name    = "${var.project_name}"
  billing_account = "${var.billing_account}"
  org_id          = "${var.org_id}"
}
When I run this multiple times with different parameters, it overwrites the previous state file.
This is what I see in the bucket:
Buckets/somebucket-tf-state/terraform/state/default.tfstate
Is there a way I can create different state files per project I run the template for?
If I understand what you're trying to do correctly, then it sounds like what you need is workspaces.
Just do:
# Select the per-project workspace or create a new one
terraform workspace select $GCE_PROJECT || terraform workspace new $GCE_PROJECT
# Plan and apply as usual
terraform plan -out .terraform/.terraform.plan && terraform apply .terraform/.terraform.plan
# Revert to the default workspace
terraform workspace select default
A better option is to use GitOps: create an environment for each branch and, for each environment, inject the correct value into the bucket name.
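Note that with the gcs backend, each workspace's state is stored as a separate object under the configured prefix (terraform/state/<workspace>.tfstate), so per-project workspaces already give you separate state files. Alternatively, you can keep the default workspace and pass a distinct prefix at init time — a sketch, with an illustrative project name:

# Partial backend configuration: one state prefix per project
terraform init -backend-config="prefix=terraform/state/my-project-a"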