Most probably I am doing something wrong or missing something here.
This is what my Terraform template looks like:
locals {
  credentials_file_path = "~/gcp-auth/account.json"
}

terraform {
  backend "gcs" {
    bucket      = "somebucket-tf-state"
    prefix      = "terraform/state/"
    credentials = "~/gcp-auth/account.json"
  }
}

provider "google" {
  region      = "${var.region}"
  credentials = "${file(local.credentials_file_path)}"
}

module "project" {
  source          = "../modules/gcp-project/"
  project_name    = "${var.project_name}"
  billing_account = "${var.billing_account}"
  org_id          = "${var.org_id}"
}
When I run this multiple times with different parameters, it overwrites the previous state file.
This is what I see in the bucket:
Buckets/somebucket-tf-state/terraform/state/default.tfstate
Is there a way I can create different state files per project I run the template for?
If I understand what you're trying to do correctly, then it sounds like what you need is workspaces.
Just do:
# Select the per-project workspace, or create a new one if it doesn't exist yet
terraform workspace select $GCE_PROJECT || terraform workspace new $GCE_PROJECT
# Plan and apply as usual
terraform plan -out .terraform/.terraform.plan && terraform apply .terraform/.terraform.plan
# Revert to the default workspace
terraform workspace select default
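If I remember correctly, the GCS backend stores each workspace's state as a separate object under the configured prefix, so after running the above for a couple of projects you should see something like this in the bucket (the project names here are just examples):
Buckets/somebucket-tf-state/terraform/state/default.tfstate
Buckets/somebucket-tf-state/terraform/state/project-a.tfstate
Buckets/somebucket-tf-state/terraform/state/project-b.tfstate
You can also reference the selected workspace inside the configuration via terraform.workspace (for example project_name = "${terraform.workspace}") if you want resource names to follow the workspace.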
A better option is to use GitOps: create an environment for each branch and, for every environment, inject the correct value into the bucket name.
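Since variables can't be interpolated inside a backend block, one way to inject per-environment values (a sketch; the environment variable name is just an example) is to leave those settings out of the backend block and pass them at init time as partial configuration:
terraform init \
  -backend-config="bucket=somebucket-tf-state" \
  -backend-config="prefix=terraform/state/${PROJECT_NAME}"
Each environment or branch then initialises against its own prefix, so the state files never collide.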
I'm quite new to Terraform, so I guess I consider Terraform modules as "functions" that I can re-use, but it seems that's wrong. I had a scenario where I had to deploy a static web site to CloudFront and an S3 bucket. At first, I configured this as raw files in my project: https://github.com/tal-rofe/tf-old/tree/main/terraform/core - you can see I have s3.tf and cloudfront.tf files, raw in my project.
But then I had to deploy another static web application. Because I now need 2 applications to be deployed, I could duplicate my code and create x-cloudfront.tf, x-s3.tf and y-cloudfront.tf, y-s3.tf files with the exact same configuration, differing only in the domains. Instead of doing that, I tried to create a module which creates the S3 and CloudFront resources, so that in my project I could re-use this module to create the 2 web applications.
So I have this project:
https://github.com/tal-rofe/tf-new
And here I created a module:
https://github.com/tal-rofe/tf-new/tree/main/terraform/modules/static-app
As you can see, I have a variables.tf file:
variable "domain_name" {
description = "The domain name of the application"
type = string
}
variable "zone_id" {
description = "The zone identifier to set domain of the application in"
type = string
}
variable "acm_certificate_arn" {
description = "The certificate ARN"
type = string
}
variable "s3_bucket_name" {
description = "The bucket name of the S3 bucket for the application"
type = string
}
variable "common_tags" {
description = "The tags for all created resources"
type = map(string)
default = {}
}
variable "cloudfront_tags" {
description = "The tags for Cloudfront resource"
type = map(string)
}
variable "www_redirect_bucket_tags" {
description = "The tags for a bucket to redirect www to non-www"
type = map(string)
}
variable "s3_bucket_tags" {
description = "The tags for a bucket to redirect www to non-www"
type = map(string)
}
and an output.tf file:
output "cloudfront_distribution_id" {
  description = "The distribution ID of deployed Cloudfront"
  value       = module.cdn.cloudfront_distribution_id
}
All other files within this module are dedicated to setting up the relevant resources, using other publicly released modules. So if I do this, I can use this module twice in my project, provide different inputs in each declaration, and get the cloudfront_distribution_id output from each.
This is why I have these 2 files, using this module:
https://github.com/tal-rofe/tf-new/blob/main/terraform/core/docs-static.tf
and
https://github.com/tal-rofe/tf-new/blob/main/terraform/core/frontend-static.tf
And then I want to output the 2 created CloudFront distribution IDs from my project:
output "frontend_cloudfront_distribution_id" {
  description = "The distribution ID of deployed Cloudfront frontend"
  value       = module.frontend-static.cloudfront_distribution_id
}

output "docs_cloudfront_distribution_id" {
  description = "The distribution ID of deployed Cloudfront docs"
  value       = module.docs-static.cloudfront_distribution_id
}
But when I apply this whole project with Terraform, I don't get these 2 outputs; I only get one output, called cloudfront_distribution_id. So it seems like I get the output of the custom module I created, whereas I want to get the outputs of my main project.
So I don't understand what I did wrong in creating this custom module?
I apply my configuration using a GitHub Actions workflow with these steps:
- name: Terraform setup
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_wrapper: false
- name: Terraform core init
  env:
    TERRAFORM_BACKEND_S3_BUCKET: ${{ secrets.TERRAFORM_BACKEND_S3_BUCKET }}
    TERRAFORM_BACKEND_DYNAMODB_TABLE: ${{ secrets.TERRAFORM_BACKEND_DYNAMODB_TABLE }}
  run: |
    terraform -chdir="./terraform/core" init \
      -backend-config="bucket=$TERRAFORM_BACKEND_S3_BUCKET" \
      -backend-config="dynamodb_table=$TERRAFORM_BACKEND_DYNAMODB_TABLE" \
      -backend-config="region=$AWS_REGION"
- name: Terraform core plan
  run: terraform -chdir="./terraform/core" plan -no-color -out state.tfplan
- name: Terraform core apply
  run: terraform -chdir="./terraform/core" apply state.tfplan
You are running terraform apply from the terraform/modules/static-app folder of your project. You need to be running it from the terraform/core folder.
You should always run terraform apply from within the core/root/base folder of your Terraform code.
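For example (paths taken from the question), something like this should surface both root-level outputs:
cd terraform/core
terraform init
terraform apply
terraform output   # should list frontend_cloudfront_distribution_id and docs_cloudfront_distribution_id
Outputs declared inside terraform/modules/static-app are only visible to the configuration that calls the module; only the outputs declared in the root module you run apply from are printed.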
I have a Terraform project that allows me to create multiple Cloud Functions.
I know that if I change the name of the google_storage_bucket_object related to the function itself, Terraform will see the difference in the zip name and redeploy the Cloud Function.
My question is: is there a way to obtain the same behaviour, but only for the Cloud Functions that have actually been changed?
resource "google_storage_bucket_object" "zip_file" {
# Append file MD5 to force bucket to be recreated
name = "${local.filename}#${data.archive_file.source.output_md5}"
bucket = var.bucket.name
source = data.archive_file.source.output_path
}
# Create Java Cloud Function
resource "google_cloudfunctions_function" "java_function" {
name = var.function_name
runtime = var.runtime
available_memory_mb = var.memory
source_archive_bucket = var.bucket.name
source_archive_object = google_storage_bucket_object.zip_file.name
timeout = 120
entry_point = var.function_entry_point
event_trigger {
event_type = var.event_trigger.event_type
resource = var.event_trigger.resource
}
environment_variables = {
PROJECT_ID = var.env_project_id
SECRET_MAIL_PASSWORD = var.env_mail_password
}
timeouts {
create = "60m"
}
}
By appending the MD5, every cloud function ends up with a different zip file name, so Terraform re-deploys all of them; and I found that without the MD5, Terraform does not see any changes to deploy.
If I have changed some code inside only one function, how can I tell Terraform to re-deploy only that one (so, for example, to change only its zip file name)?
I hope my question is clear, and I want to thank everyone who tries to help me!
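Not a full answer, but a sketch of one common approach, assuming each function keeps its code in its own source directory (the paths and variable names below are hypothetical): build one archive per function, so output_md5 only changes for the function whose code actually changed, and only that object (and therefore that function) gets redeployed.
# Hypothetical: one archive per function, built from that function's own directory
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "${path.module}/functions/${var.function_name}"
  output_path = "${path.module}/build/${var.function_name}.zip"
}

resource "google_storage_bucket_object" "zip_file" {
  # The MD5 suffix is now per-function, so unchanged functions keep their object name
  name   = "${var.function_name}/${data.archive_file.source.output_md5}.zip"
  bucket = var.bucket.name
  source = data.archive_file.source.output_path
}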
I want to create a file (credentials.json) within a directory, say content, using Terraform.
The contents will be the output of a private service account key.
I am using the following code to create the service account and read its key into a data source:
resource "google_service_account" "my-account" {
account_id = "${var.account_id}"
project = "${var.project_id}"
}
resource "google_service_account_key" "my-account" {
service_account_id = "${google_service_account.my-account.name}"
}
data "google_service_account_key" "my-account" {
name = "${google_service_account_key.cd.name}"
public_key_type = "TYPE_X509_PEM_FILE"
}
How can I then dump it to a local file?
My use case is that I want to create the credentials.json to enable periodic backups of Jenkins to a Google Cloud Storage bucket.
You can use the local_file resource to write data to disk in a Terraform run.
So you could do something like the following:
resource "google_service_account" "my-account" {
account_id = "${var.account_id}"
project = "${var.project_id}"
}
resource "google_service_account_key" "my-account" {
service_account_id = "${google_service_account.my-account.name}"
}
resource "local_file" "key" {
filename = "/path/to/key/output"
content = "${base64decode(google_service_account_key.my-account.private_key)}"
}
Note that you should never need a data source to look at the outputs of a resource you are creating in that same Terraform command. In this case you can ditch the google_service_account_key data source because you have the resource available to you.
The benefit of data sources is when you need to look up some generated value of a resource either not created by Terraform or in a different state file.
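For example (a hypothetical lookup, not something you need here), a data source makes sense when the service account already exists outside this configuration:
data "google_service_account" "existing" {
  account_id = "pre-existing-account"
}
But for the key you are creating in the same run, the resource's own attributes are all you need.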
Your best bet would be to create an output for your service account key:
output "google_service_account_key" {
  value = "${base64decode(data.google_service_account_key.my-account.private_key)}"
}
With the terraform output command you can then query specifically for the key, combined with jq (or another json parser) to find the correct output:
terraform output -json google_service_account_key | jq '.value[0]' > local_file.json
I'm getting the following error when trying to initially plan or apply a resource that uses data values from the AWS environment in a count.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
Error: Invalid count argument
on main.tf line 24, in resource "aws_efs_mount_target" "target":
24: count = length(data.aws_subnet_ids.subnets.ids)
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
$ terraform --version
Terraform v0.12.9
+ provider.aws v2.30.0
I tried using the target option, but it doesn't seem to work on data sources.
$ terraform apply -target aws_subnet_ids.subnets
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
The only solution I found that works is:
remove the resource
apply the project
add the resource back
apply again
Here is a terraform config I created for testing.
provider "aws" {
version = "~> 2.0"
}
locals {
project_id = "it_broke_like_3_collar_watch"
}
terraform {
required_version = ">= 0.12"
}
resource aws_default_vpc default {
}
data aws_subnet_ids subnets {
vpc_id = aws_default_vpc.default.id
}
resource aws_efs_file_system efs {
creation_token = local.project_id
encrypted = true
}
resource aws_efs_mount_target target {
depends_on = [ aws_efs_file_system.efs ]
count = length(data.aws_subnet_ids.subnets.ids)
file_system_id = aws_efs_file_system.efs.id
subnet_id = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
Finally figured out the answer after researching the answer by Dude0001.
Short answer: use the aws_vpc data source with the default argument instead of the aws_default_vpc resource. Here is the working sample with comments on the changes.
locals {
  project_id = "it_broke_like_3_collar_watch"
}

terraform {
  required_version = ">= 0.12"
}

// Delete this --> resource aws_default_vpc default {}

// Add this
data aws_vpc default {
  default = true
}

data "aws_subnet_ids" "subnets" {
  // Update this from aws_default_vpc.default.id
  vpc_id = "${data.aws_vpc.default.id}"
}

resource aws_efs_file_system efs {
  creation_token = local.project_id
  encrypted      = true
}

resource aws_efs_mount_target target {
  depends_on     = [aws_efs_file_system.efs]
  count          = length(data.aws_subnet_ids.subnets.ids)
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
What I couldn't figure out was why my workaround of removing aws_efs_mount_target on the first apply worked. It's because after the first apply the aws_default_vpc was loaded into the state file.
So an alternate solution, without making changes to the original tf file, would be to use the target option on the first apply:
$ terraform apply --target aws_default_vpc.default
However, I don't like this, as it requires a special case on the first deployment, which is unusual for the Terraform deployments I've worked with.
The aws_default_vpc isn't a resource TF can create or destroy. It is the default VPC for your account in each region, which AWS creates automatically for you and which is protected from being destroyed. You can only (and need to) adopt it into management and your TF state. This will allow you to begin managing it and to inspect it when you run plan or apply. Otherwise, TF doesn't know what the resource is or what state it is in, and it cannot create a new one for you, as it is a special type of protected resource as described above.
With that said, go get the default VPC ID from the correct region you are deploying to in your account. Then import it into your TF state. Terraform should then be able to inspect it and count the number of subnets.
For example
terraform import aws_default_vpc.default vpc-xxxxxx
https://www.terraform.io/docs/providers/aws/r/default_vpc.html
Using the data element for this looks a little odd to me as well. Can you change your TF script to get the count directly through the aws_default_vpc resource?
I was able to create a bucket in Amazon S3 using this link.
I used the following code to create a bucket :
resource "aws_s3_bucket" "b" {
bucket = "my_tf_test_bucket"
acl = "private"
}
Now I wanted to create folders inside the bucket, say Folder1.
I found the link for creating an S3 object, but it has a mandatory parameter source. I am not sure what this value has to be, since my intent is to create a folder inside the S3 bucket.
For running terraform on Mac or Linux, the following will do what you want
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "Folder1/"
source = "/dev/null"
}
If you're on windows you can use an empty file.
While folks will be pedantic about S3 not having folders, there are a number of operations where having an object placeholder for a key prefix (otherwise called a folder) makes life easier, like s3 sync for example.
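For example (the bucket name and local path here are just placeholders), you can sync a local directory straight into that prefix and browse it as a folder in the console:
aws s3 sync ./site s3://my_tf_test_bucket/Folder1/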
Actually, there is a canonical way to create it without being OS dependent: by inspecting the network traffic of a UI upload you can see the content headers, as stated by https://stackoverflow.com/users/1554386/alastair-mccormack.
And S3 does support folders these days, as visible from the UI.
So this is how you can achieve it:
resource "aws_s3_bucket_object" "base_folder" {
bucket = "${aws_s3_bucket.default.id}"
acl = "private"
key = "${var.named_folder}/"
content_type = "application/x-directory"
kms_key_id = "key_arn_if_used"
}
Please notice the trailing slash, otherwise it creates an empty file.
The above has been used on a Windows OS to successfully create a folder using Terraform's s3_bucket_object.
The answers here are outdated; it's now definitely possible to create an empty folder in S3 via Terraform using the aws_s3_object resource, as follows:
resource "aws_s3_bucket" "this_bucket" {
bucket = "demo_bucket"
}
resource "aws_s3_object" "object" {
bucket = aws_s3_bucket.this_bucket.id
key = "demo/directory/"
}
If you don't supply a source for the object then terraform will create an empty directory.
IMPORTANT - note the trailing slash; this will ensure you get a directory and not an empty file.
S3 doesn't support folders. Objects can have prefix names with slashes that look like folders, but that's just part of the object name. So there's no way to create a folder in terraform or anything else, because there's no such thing as a folder in S3.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
http://docs.aws.amazon.com/AWSImportExport/latest/DG/ManipulatingS3KeyNames.html
If you want to pretend, you could create a zero-byte object in the bucket named "Folder1/" but that's not required. You can just create objects with key names like "Folder1/File1" and it will work.
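For instance, a minimal sketch reusing the bucket from the question (the local file path is hypothetical):
resource "aws_s3_bucket_object" "file1" {
  bucket = "${aws_s3_bucket.b.id}"
  key    = "Folder1/File1"
  source = "files/File1"
}
The console will render Folder1/ as a folder even though no separate folder object exists.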
This is an old answer, but if you specify the key with the folder (that doesn't exist yet), Terraform will create the folder automatically for you:
terraform {
  backend "s3" {
    bucket  = "mysql-staging"
    key     = "rds-mysql-state/terraform.tfstate"
    region  = "us-west-2"
    encrypt = true
  }
}
I would like to add to this discussion that you can create a set of empty folders by providing the resource a set of strings:
resource "aws_s3_object" "default_s3_content" {
for_each = var.default_s3_content
bucket = aws_s3_bucket.bucket.id
key = "${each.value}/"
}
where var.default_s3_content is a set of strings:
variable "default_s3_content" {
description = "The default content of the s3 bucket upon creation of the bucket"
type = set(string)
default = ["folder1", "folder2", "folder3", "folder4", "folder5"]
}
v0.12.8 introduces a new fileset() function, which can be used in combination with for_each to support this natively:
NEW FEATURES:
lang/funcs: New fileset function, for finding static local files that
match a glob pattern. (#22523)
A sample usage of this function is as follows (from here):
# Given the file structure from the initial issue:
# my-dir
# |- file_1
# |- dir_a
# |  |- file_a_1
# |  |- file_a_2
# |- dir_b
# |  |- file_b_1
# |- dir_c
# And given the expected behavior of the base_s3_key prefix in the initial issue
resource "aws_s3_bucket_object" "example" {
  for_each = fileset(path.module, "my-dir/**/file_*")

  bucket = aws_s3_bucket.example.id
  key    = replace(each.value, "my-dir", "base_s3_key")
  source = each.value
}
At the time of this writing, v0.12.8 is a day old (Released on 2019-09-04) so the documentation on https://www.terraform.io/docs/providers/aws/r/s3_bucket_object.html does not yet reference it. I am not certain if that's intentional.
As an aside, if you use the above, remember to update/create version.tf in your project like so:
terraform {
  required_version = ">= 0.12.8"
}