Deploy resources only if a file exists in Terraform

I have a requirement where I have to deploy a resource only if a certain file exists at a certain location; otherwise the resource should be skipped.
Here is the code that deploys a certain identity provider (say abc) in certain AWS accounts. Many other identity providers are also deployed from the same main.tf file, so everything has to live here. The only challenge is that, while deploying the IAM layer for any account, we should only create this particular resource if the abc-${var.aws_account}.xml file referenced in saml_metadata_document exists at that path. If it does not exist, Terraform should simply skip the resource creation and go ahead with the rest of the code.
resource "aws_iam_saml_provider" "xyz" {
name = "abc-${var.aws_account}"
saml_metadata_document = "${file("${path.module}/metadata/abc-${var.aws_account}.xml")}"
}
Folder structure:
IAM-Module
|-- main.tf
|-- variables.tf
|-- metadata
    |-- abc-127367223.xml
    |-- abc-983297832.xml
    |-- abc-342374384.xml
How can a conditional check be put in place in Terraform 0.11 to check whether the file exists?

count can be used to create an array of resources instead of just a single resource, so setting count = 0 will create an array of resources of length 0, effectively disabling the resource.
resource "aws_iam_saml_provider" "xyz" {
name = "abc-${var.aws_account}"
saml_metadata_document = "${file("${path.module}/metadata/abc-${var.aws_account}.xml")}"
count = fileexists("${path.module}/metadata/abc-${var.aws_account}.xml") ? 1 : 0
}
NOTE: You will need access to fileexists, which only exists in Terraform 0.12 and later.
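If you are stuck on Terraform 0.11, one possible workaround (an untested sketch, not part of the original answer) is to test for the file with an external data source and feed its result into count; the data source name metadata_check and the shell one-liner below are assumptions for illustration only:
# Sketch for Terraform 0.11: test for the file with the external data source.
# Requires a POSIX shell; "metadata_check" is a made-up name.
data "external" "metadata_check" {
  program = [
    "sh", "-c",
    "test -f '${path.module}/metadata/abc-${var.aws_account}.xml' && echo '{\"exists\":\"true\"}' || echo '{\"exists\":\"false\"}'",
  ]
}

resource "aws_iam_saml_provider" "xyz" {
  count                  = "${lookup(data.external.metadata_check.result, "exists") == "true" ? 1 : 0}"
  name                   = "abc-${var.aws_account}"
  saml_metadata_document = "${file("${path.module}/metadata/abc-${var.aws_account}.xml")}"
}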

If it is acceptable, instead of checking for the existence of the file, use the file size: if the file size is zero, do not create the resource; otherwise, create it.
data "local_file" "hoge" {
filename = "${path.module}/hoge"
}
resource "null_resource" "hoge" {
count = length(data.local_file.hoge.content) > 0 ? 1 : 0
provisioner "local-exec" {
command = <<EOF
cat "${path.module}/${data.local_file.hoge.filename}"
EOF
}
}

Terraform Provider issue: registry.terraform.io/hashicorp/s3

I currently have code that I have been using for quite some time that calls a custom S3 module. Today I tried to run the same code and started getting an error regarding the provider.
Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider hashicorp/s3:
provider registry registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/s3

All modules should specify their required_providers so that external
consumers will get the correct providers when using a module. To see which
modules are currently depending on hashicorp/s3, run the following command:
    terraform providers
Doing some digging, it seems that Terraform is looking for a provider called registry.terraform.io/hashicorp/s3, which doesn't exist.
So far, I have tried the following things:
Validated that the S3 resource code meets the standards of the 4.x upgrade HashiCorp did this year. Plus, I have been using it for a couple of months with no issues.
Deleted the .terraform directory and reran terraform init (no success, same error).
Deleted the .terraform directory and the .terraform.lock.hcl file and ran terraform init -upgrade (no success).
Tried to update my providers file to force an upgrade (no success).
Tried to change the provider version constraint to >= the current version to pull the latest version, with no success.
Reading further, this can point to a caching problem with the Terraform providers. I tried to run terraform providers lock and received this error.
Error: Could not retrieve providers for locking

Terraform failed to fetch the requested providers for darwin_amd64 in order
to calculate their checksums: some providers could not be installed:
- registry.terraform.io/hashicorp/s3: provider registry registry.terraform.io
  does not have a provider named registry.terraform.io/hashicorp/s3.
Kind of at my wits' end with what could be wrong. Below is a copy of my version.tf, which I renamed from providers.tf based on another post I was following:
version.tf
# Configure the AWS Provider
provider "aws" {
  region            = "us-east-1"
  use_fips_endpoint = true
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.2.1"
    }
  }

  required_version = ">= 1.2.0" # required Terraform version
}
S3 Module
I did not include locals, outputs, or variables unless someone thinks we need to see them. As I said before, the module was running correctly until today. Hopefully this is all you need for the provider issue; let me know if other files are needed.
resource "aws_s3_bucket" "buckets" {
count = length(var.bucket_names)
bucket = lower(replace(replace("${var.bucket_names[count.index]}-s3", " ", "-"), "_", "-"))
force_destroy = var.bucket_destroy
tags = local.all_tags
}
# Set Public Access Block for each bucket
resource "aws_s3_bucket_public_access_block" "bucket_public_access_block" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
block_public_acls = var.bucket_block_public_acls
ignore_public_acls = var.bucket_ignore_public_acls
block_public_policy = var.bucket_block_public_policy
restrict_public_buckets = var.bucket_restrict_public_buckets
}
resource "aws_s3_bucket_acl" "bucket_acl" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
acl = var.bucket_acl
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_lifecycle_configuration" "bucket_lifecycle_rule" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
id = "${var.bucket_names[count.index]}-lifecycle-${count.index}"
status = "Enabled"
expiration {
days = var.bucket_backup_expiration_days
}
transition {
days = var.bucket_backup_days
storage_class = "GLACIER"
}
}
}
# AWS KMS Key Server Encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_encryption" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.bucket_key[count.index].arn
sse_algorithm = var.bucket_sse
}
}
}
Looking for any other ideas I can use to fix this issue. Thank you!
Although you haven't included it in your question, I'm guessing that somewhere else in this Terraform module you have a block like this:
resource "s3_bucket" "example" {
}
For backward compatibility with modules written for older versions of Terraform, terraform init has some heuristics to guess what provider was intended whenever it encounters a resource that doesn't belong to one of the providers in the module's required_providers block. By default, a resource "belongs to" a provider by matching the prefix of its resource type name -- s3 in this case -- to the local names chosen in the required_providers block.
Given a resource block like the above, terraform init would notice that required_providers doesn't have an entry s3 = { ... } and so will guess that this is an older module trying to use a hypothetical legacy official provider called "s3" (which would now be called hashicorp/s3, because official providers always belong to the hashicorp/ namespace).
The correct name for this resource type is aws_s3_bucket, and so it's important to include the aws_ prefix when you declare a resource of this type:
resource "aws_s3_bucket" "example" {
}
This resource is now by default associated with the provider local name "aws", which does match one of the entries in your required_providers block and so terraform init will see that you intend to use hashicorp/aws to handle this resource.
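As the error message itself suggests, it also helps to declare required_providers inside the shared S3 module, not only in the root module, so that every aws_* resource in it is unambiguously associated with hashicorp/aws. A minimal sketch, assuming the module lives in a file such as modules/s3/versions.tf (the path and version constraint here are illustrative):
# Inside the child S3 module; the file location and version constraint are examples.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}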
My colleague and I finally found the problem. It turns out that we had a data call to the S3 bucket. Nothing was wrong with the module, but the place where I was calling the module had a local.tf file in which I was referencing S3 in a legacy format; see the change below:
WAS
data "s3_bucket" "MyResource" {}
TO
data "aws_s3_bucket" "MyResource" {}
Appreciate the responses from everyone. A resource was the root of the problem, but I forgot that a data block is also a resource type to check.

Create multiple GCP storage buckets using terraform

I have used Terraform scripts to create resources in GCP. The scripts are working fine. But my question is: how do I create multiple storage buckets using a single script?
I have two files for creating the storage bucket:
main.tf, which has the Terraform code to create the buckets.
variables.tf, which has the actual variables like storage bucket name, project_id, etc., and looks like this:
variable "storage_class" { default = "STANDARD" }
variable "name" { default = "internal-demo-bucket-1"}
variable "location" { default = "asia-southeast1" }
How can I provide more than one bucket name in the variable name? I tried to provide multiple names in an array but the build failed.
I don't know all your requirements; however, suppose you need to create a few buckets with different names, while all other bucket characteristics are constant for every bucket in the set under discussion.
I would create a variable, e.g. bucket_name_set, in a variables.tf file:
variable "bucket_name_set" {
description = "A set of GCS bucket names..."
type = list(string)
}
Then, in the terraform.tfvars file, I would provide unique names for the buckets:
bucket_name_set = [
  "some-bucket-name-001",
  "some-bucket-name-002",
  "some-bucket-name-003",
]
Now, for example, in the main.tf file I can describe the resources:
resource "google_storage_bucket" "my_bucket_set" {
project = "some project id should be here"
for_each = toset(var.bucket_name_set)
name = each.value # note: each.key and each.value are the same for a set
location = "some region should be here"
storage_class = "STANDARD"
force_destroy = true
uniform_bucket_level_access = true
}
Terraform description is here: The for_each Meta-Argument
Terraform description for the GCS bucket is here: google_storage_bucket
Terraform description for input variables is here: Input Variables
Have you considered using Terraform-provided modules? It becomes very easy if you use the GCS module for bucket creation. It has an option to specify how many buckets you need to create, and even the subfolders. I am including the module below for your reference:
https://registry.terraform.io/modules/terraform-google-modules/cloud-storage/google/latest
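For illustration, a hedged sketch of calling that module; the project id, prefix, bucket names, and folder names below are placeholders, and the names/prefix/folders inputs reflect the module's documented interface at the time of writing:
# Sketch: several buckets (plus optional subfolders) via the cloud-storage module.
# All values shown are placeholders.
module "gcs_buckets" {
  source     = "terraform-google-modules/cloud-storage/google"
  project_id = "my-project-id"
  prefix     = "internal-demo"
  names      = ["bucket-1", "bucket-2", "bucket-3"]

  folders = {
    "bucket-1" = ["logs", "exports"]
  }
}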

Dumping Terraform output to a local file

I want to create a file (credentials.json) within a directory, say content, using Terraform.
The contents will be the output of a private service account key.
I am using the following code to create the service account and read its key into a data source:
resource "google_service_account" "my-account" {
account_id = "${var.account_id}"
project = "${var.project_id}"
}
resource "google_service_account_key" "my-account" {
service_account_id = "${google_service_account.my-account.name}"
}
data "google_service_account_key" "my-account" {
name = "${google_service_account_key.cd.name}"
public_key_type = "TYPE_X509_PEM_FILE"
}
How can I then dump it to a local file?
My use case is that I want to create the credentials.json to enable periodic backups of Jenkins to a Google Cloud Storage bucket.
You can use the local_file resource to write data to disk in a Terraform run.
So you could do something like the following:
resource "google_service_account" "my-account" {
account_id = "${var.account_id}"
project = "${var.project_id}"
}
resource "google_service_account_key" "my-account" {
service_account_id = "${google_service_account.my-account.name}"
}
resource "local_file" "key" {
filename = "/path/to/key/output"
content = "${base64decode(google_service_account_key.my-account.private_key)}"
}
Note that you should never need a data source to look at the outputs of a resource you are creating in that same Terraform command. In this case you can ditch the google_service_account_key data source because you have the resource available to you.
The benefit of data sources is when you need to look up some generated value of a resource either not created by Terraform or in a different state file.
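For contrast, a hedged example of where a data source is the right tool: looking up a service account that was created outside this configuration (the account_id below is a placeholder):
# Sketch: a data source for a service account that this configuration does not manage.
# "some-preexisting-account" is a placeholder account id.
data "google_service_account" "existing" {
  account_id = "some-preexisting-account"
}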
Your best bet would be to create an output for your service account key:
output "google_service_account_key" {
value = "${base64decode(data.google_service_account_key.my-account.private_key)}"
}
With the terraform output command you can then query specifically for the key, combined with jq (or another JSON parser) to find the correct output:
terraform output -json google_service_account_key | jq '.value[0]' > local_file.json
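On newer Terraform versions (0.14 and later), a hedged alternative is to mark the output as sensitive and skip jq entirely with terraform output -raw; this is a sketch, not part of the original answer:
# Variation for Terraform 0.14+: write the decoded key straight to a file with
#   terraform output -raw google_service_account_key > credentials.json
output "google_service_account_key" {
  value     = base64decode(google_service_account_key.my-account.private_key)
  sensitive = true
}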

terraform count dependent on data from target environment

I'm getting the following error when trying to initially plan or apply a resource that uses data values from the AWS environment in a count.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
Error: Invalid count argument
on main.tf line 24, in resource "aws_efs_mount_target" "target":
24: count = length(data.aws_subnet_ids.subnets.ids)
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
$ terraform --version
Terraform v0.12.9
+ provider.aws v2.30.0
I tried using the -target option, but it doesn't seem to work on data sources.
$ terraform apply -target aws_subnet_ids.subnets
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
The only solution I found that works is:
remove the resource
apply the project
add the resource back
apply again
Here is a terraform config I created for testing.
provider "aws" {
version = "~> 2.0"
}
locals {
project_id = "it_broke_like_3_collar_watch"
}
terraform {
required_version = ">= 0.12"
}
resource aws_default_vpc default {
}
data aws_subnet_ids subnets {
vpc_id = aws_default_vpc.default.id
}
resource aws_efs_file_system efs {
creation_token = local.project_id
encrypted = true
}
resource aws_efs_mount_target target {
depends_on = [ aws_efs_file_system.efs ]
count = length(data.aws_subnet_ids.subnets.ids)
file_system_id = aws_efs_file_system.efs.id
subnet_id = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
Finally figured out the answer after researching the answer by Dude0001.
Short answer: use the aws_vpc data source with the default argument instead of the aws_default_vpc resource. Here is the working sample, with comments on the changes.
locals {
  project_id = "it_broke_like_3_collar_watch"
}

terraform {
  required_version = ">= 0.12"
}

// Delete this --> resource "aws_default_vpc" "default" {}

// Add this
data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "subnets" {
  // Update this from aws_default_vpc.default.id
  vpc_id = "${data.aws_vpc.default.id}"
}

resource "aws_efs_file_system" "efs" {
  creation_token = local.project_id
  encrypted      = true
}

resource "aws_efs_mount_target" "target" {
  depends_on = [aws_efs_file_system.efs]

  count          = length(data.aws_subnet_ids.subnets.ids)
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
What I couldn't figure out was why my workaround of removing aws_efs_mount_target on the first apply worked. It's because after the first apply the aws_default_vpc was loaded into the state file.
So an alternate solution, without making changes to the original tf file, would be to use the target option on the first apply:
$ terraform apply --target aws_default_vpc.default
However, I don't like this as it requires a special case on the first deployment, which is unusual compared to the Terraform deployments I've worked with.
The aws_default_vpc isn't a resource Terraform can create or destroy. It is the default VPC for your account in each region, which AWS creates automatically for you and which is protected from being destroyed. You can only (and need to) adopt it into management and into your Terraform state. This will allow you to begin managing it and to inspect it when you run plan or apply. Otherwise, Terraform doesn't know what the resource is or what state it is in, and it cannot create a new one for you, as it is a special type of protected resource as described above.
With that said, go get the default VPC id from the correct region you are deploying to in your account, then import it into your Terraform state. Terraform should then be able to inspect it and count the number of subnets.
For example
terraform import aws_default_vpc.default vpc-xxxxxx
https://www.terraform.io/docs/providers/aws/r/default_vpc.html
Using the data element for this looks a little odd to me as well. Can you change your TF script to get the count directly through the aws_default_vpc resource?

How to create a folder in an amazon S3 bucket using terraform

I was able to create a bucket in Amazon S3 using this link.
I used the following code to create a bucket :
resource "aws_s3_bucket" "b" {
bucket = "my_tf_test_bucket"
acl = "private"
}
Now I wanted to create folders inside the bucket, say Folder1.
I found the link for creating an S3 object, but it has a mandatory parameter source. I am not sure what this value has to be, since my intent is to create a folder inside the S3 bucket.
For running Terraform on Mac or Linux, the following will do what you want:
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "Folder1/"
source = "/dev/null"
}
If you're on windows you can use an empty file.
While folks will be pedantic about S3 not having folders, there are a number of operations where having an object placeholder for a key prefix (otherwise called a folder) makes life easier, like s3 sync for example.
Actually, there is a canonical way to create it without being OS dependent: by inspecting the network requests in the S3 console UI you can see the content headers that are set, as noted by Alastair McCormack (https://stackoverflow.com/users/1554386/alastair-mccormack).
And S3 does support folders these days as visible from the UI.
So this is how you can achieve it:
resource "aws_s3_bucket_object" "base_folder" {
bucket = "${aws_s3_bucket.default.id}"
acl = "private"
key = "${var.named_folder}/"
content_type = "application/x-directory"
kms_key_id = "key_arn_if_used"
}
Please notice the trailing slash; otherwise it creates an empty file.
The above has been used on Windows to successfully create a folder with the Terraform aws_s3_bucket_object resource.
The answers here are outdated; it's now definitely possible to create an empty folder in S3 via Terraform, using the aws_s3_object resource as follows:
resource "aws_s3_bucket" "this_bucket" {
bucket = "demo_bucket"
}
resource "aws_s3_object" "object" {
bucket = aws_s3_bucket.this_bucket.id
key = "demo/directory/"
}
If you don't supply a source for the object, then Terraform will create an empty directory.
IMPORTANT: note the trailing slash; this is what ensures you get a directory and not an empty file.
S3 doesn't support folders. Objects can have prefix names with slashes that look like folders, but that's just part of the object name. So there's no way to create a folder in terraform or anything else, because there's no such thing as a folder in S3.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
http://docs.aws.amazon.com/AWSImportExport/latest/DG/ManipulatingS3KeyNames.html
If you want to pretend, you could create a zero-byte object in the bucket named "Folder1/" but that's not required. You can just create objects with key names like "Folder1/File1" and it will work.
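To illustrate that last point, a minimal sketch (the resource name, bucket reference, and content are placeholders) creating an object whose key simply contains the prefix, with no separate folder object; with older provider versions the equivalent aws_s3_bucket_object resource works the same way:
# Sketch: "Folder1/" is just part of the key; no folder object is ever created.
resource "aws_s3_object" "file1" {
  bucket  = aws_s3_bucket.b.id
  key     = "Folder1/File1"
  content = "example content"
}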
The existing answers are old, but if you specify the key with the folder (that doesn't exist yet), Terraform will create the folder automatically for you:
terraform {
  backend "s3" {
    bucket  = "mysql-staging"
    key     = "rds-mysql-state/terraform.tfstate"
    region  = "us-west-2"
    encrypt = true
  }
}
I would like to add to this discussion that you can create a set of empty folders by providing the resource a set of strings:
resource "aws_s3_object" "default_s3_content" {
for_each = var.default_s3_content
bucket = aws_s3_bucket.bucket.id
key = "${each.value}/"
}
where var.default_s3_content is a set of strings:
variable "default_s3_content" {
description = "The default content of the s3 bucket upon creation of the bucket"
type = set(string)
default = ["folder1", "folder2", "folder3", "folder4", "folder5"]
}
v0.12.8 introduces a new fileset() function which can be used in combination with for_each to support this natively:
NEW FEATURES:
lang/funcs: New fileset function, for finding static local files that
match a glob pattern. (#22523)
A sample usage of this function is as follows (from here):
# Given the file structure from the initial issue:
# my-dir
# |- file_1
# |- dir_a
# |  |- file_a_1
# |  |- file_a_2
# |- dir_b
# |  |- file_b_1
# |- dir_c
# And given the expected behavior of the base_s3_key prefix in the initial issue
resource "aws_s3_bucket_object" "example" {
  for_each = fileset(path.module, "my-dir/**/file_*")

  bucket = aws_s3_bucket.example.id
  key    = replace(each.value, "my-dir", "base_s3_key")
  source = each.value
}
At the time of this writing, v0.12.8 is a day old (released on 2019-09-04), so the documentation at https://www.terraform.io/docs/providers/aws/r/s3_bucket_object.html does not yet reference it. I am not certain whether that's intentional.
As an aside, if you use the above, remember to update/create version.tf in your project like so:
terraform {
  required_version = ">= 0.12.8"
}